metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | qwen3-embed | 1.1.3 | Lightweight Qwen3 text embedding & reranking via ONNX Runtime (fork of fastembed) | # qwen3-embed
Lightweight Qwen3 text embedding & reranking via ONNX Runtime. Trimmed fork of [fastembed](https://github.com/qdrant/fastembed), keeping only Qwen3 models.
## Supported Models
### ONNX (default)
| Model | Type | Dims | Max Tokens | Size |
|-------|------|------|------------|------|
| `n24q02m/Qwen3-Embedding-0.6B-ONNX` | Embedding | 32-1024 (MRL) | 32768 | 573 MB |
| `n24q02m/Qwen3-Embedding-0.6B-ONNX-Q4F16` | Embedding | 32-1024 (MRL) | 32768 | 517 MB |
| `n24q02m/Qwen3-Reranker-0.6B-ONNX` | Reranker | - | 40960 | 573 MB |
| `n24q02m/Qwen3-Reranker-0.6B-ONNX-Q4F16` | Reranker | - | 40960 | 518 MB |
### GGUF (optional, requires `llama-cpp-python`)
| Model | Type | Dims | Max Tokens | Size |
|-------|------|------|------------|------|
| `n24q02m/Qwen3-Embedding-0.6B-GGUF` | Embedding | 32-1024 (MRL) | 32768 | 378 MB |
| `n24q02m/Qwen3-Reranker-0.6B-GGUF` | Reranker | - | 40960 | 378 MB |
### HuggingFace Repos
| Format | Embedding | Reranker |
|--------|-----------|---------|
| ONNX | [n24q02m/Qwen3-Embedding-0.6B-ONNX](https://huggingface.co/n24q02m/Qwen3-Embedding-0.6B-ONNX) | [n24q02m/Qwen3-Reranker-0.6B-ONNX](https://huggingface.co/n24q02m/Qwen3-Reranker-0.6B-ONNX) |
| GGUF | [n24q02m/Qwen3-Embedding-0.6B-GGUF](https://huggingface.co/n24q02m/Qwen3-Embedding-0.6B-GGUF) | [n24q02m/Qwen3-Reranker-0.6B-GGUF](https://huggingface.co/n24q02m/Qwen3-Reranker-0.6B-GGUF) |
## Installation
```bash
pip install qwen3-embed
# For GGUF support
pip install "qwen3-embed[gguf]"
```
## Usage
### Text Embedding
```python
from qwen3_embed import TextEmbedding
# INT8 (default)
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-ONNX")
# Q4F16 (smaller, slightly less accurate)
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-ONNX-Q4F16")
# GGUF (requires: pip install qwen3-embed[gguf])
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-GGUF")
documents = [
"Qwen3 is a multilingual embedding model.",
"ONNX Runtime enables fast CPU inference.",
]
embeddings = list(model.embed(documents))
# Each embedding: numpy array of shape (1024,), L2-normalized
# Matryoshka Representation Learning (MRL) -- truncate to smaller dims
embeddings_256 = list(model.embed(documents, dim=256))
# Each embedding: numpy array of shape (256,), L2-normalized
# Query with instruction (for retrieval tasks)
queries = list(model.query_embed(
["What is Qwen3?"],
task="Given a question, retrieve relevant passages",
))
```
### Reranking
```python
from qwen3_embed import TextCrossEncoder
reranker = TextCrossEncoder(model_name="n24q02m/Qwen3-Reranker-0.6B-ONNX")
query = "What is Qwen3?"
documents = [
"Qwen3 is a series of large language models by Alibaba.",
"The weather today is sunny.",
"Qwen3-Embedding supports multilingual text embedding.",
]
scores = list(reranker.rerank(query, documents))
# scores: list of float in [0, 1], higher = more relevant
# Or rerank pairs directly
pairs = [
("What is AI?", "Artificial intelligence is a branch of computer science."),
("What is ML?", "Machine learning is a subset of AI."),
]
pair_scores = list(reranker.rerank_pairs(pairs))
```
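Since `rerank` yields one score per input document, in order, recovering a ranked list is plain Python. A generic sketch with hypothetical score values, not part of the library API:

```python
docs = [
    "Qwen3 is a series of large language models by Alibaba.",
    "The weather today is sunny.",
    "Qwen3-Embedding supports multilingual text embedding.",
]
scores = [0.92, 0.03, 0.61]  # hypothetical values, same order as `docs`

# Pair each document with its score and sort by relevance, highest first.
ranked = sorted(zip(docs, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.2f}  {doc}")
```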
## Key Features
- **Last-token pooling**: Uses the final token representation (with left-padding) instead of mean pooling.
- **MRL support**: Matryoshka Representation Learning allows truncating embeddings to any dimension from 32 to 1024 while preserving quality.
- **Instruction-aware**: Query embedding supports task instructions for better retrieval performance.
- **Causal LM reranking**: Reranker uses yes/no logit scoring via causal language model, producing calibrated [0, 1] scores.
- **Multiple backends**: ONNX Runtime (INT8, Q4F16) and GGUF (Q4_K_M via llama-cpp-python).
- **GPU optional, no PyTorch**: Runs on ONNX Runtime or llama-cpp-python -- no heavy ML framework required. Auto-detects GPU (CUDA, DirectML) when available.
- **Multilingual**: Both models support multi-language inputs.
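What MRL truncation does can be illustrated with plain numpy (a sketch of the idea, not the package's implementation): keep the leading dimensions and re-normalize to unit length.

```python
import numpy as np

# Stand-in for a model output; the real model emits L2-normalized 1024-d vectors.
full = np.random.default_rng(0).normal(size=1024)
full /= np.linalg.norm(full)

def truncate_mrl(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length."""
    out = vec[:dim]
    return out / np.linalg.norm(out)

small = truncate_mrl(full, 256)
assert small.shape == (256,)
assert abs(float(np.linalg.norm(small)) - 1.0) < 1e-6
```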
## GPU Acceleration
Both ONNX and GGUF backends auto-detect GPU when available (`Device.AUTO` is the default).
### ONNX
Requires `onnxruntime-gpu` (CUDA) or `onnxruntime-directml` (Windows) instead of `onnxruntime`:
```bash
pip install onnxruntime-gpu # NVIDIA CUDA
# or
pip install onnxruntime-directml # Windows AMD/Intel/NVIDIA
```
```python
from qwen3_embed import TextEmbedding, Device
# Auto-detect GPU (default)
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-ONNX")
# Force CPU
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-ONNX", cuda=Device.CPU)
# Force CUDA
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-ONNX", cuda=Device.CUDA)
```
### GGUF
GPU is handled by `llama-cpp-python`. Install with CUDA support:
```bash
CMAKE_ARGS="-DGGML_CUDA=on" pip install qwen3-embed[gguf]
```
```python
from qwen3_embed import TextEmbedding, Device
# Auto-detect GPU (default, offloads all layers)
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-GGUF")
# Force CPU only
model = TextEmbedding(model_name="n24q02m/Qwen3-Embedding-0.6B-GGUF", cuda=Device.CPU)
```
## Development
```bash
mise run setup # Install deps + pre-commit hooks
mise run lint # ruff check + format --check
mise run test # pytest
mise run fix # ruff auto-fix + format
```
## License
Apache-2.0. Original fastembed by [Qdrant](https://github.com/qdrant/fastembed).
| text/markdown | n24q02m | null | null | null | null | embedding, onnx, onnxruntime, qwen3, reranking, vector | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"huggingface-hub<2.0,>=0.20",
"loguru>=0.7.2",
"numpy>=2.1.0",
"onnxruntime>1.20.0",
"requests>=2.31",
"tokenizers<1.0,>=0.15",
"tqdm>=4.66",
"llama-cpp-python>=0.3; extra == \"gguf\""
] | [] | [] | [] | [
"Homepage, https://github.com/n24q02m/qwen3-embed",
"Repository, https://github.com/n24q02m/qwen3-embed"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:43:31.187193 | qwen3_embed-1.1.3-py3-none-any.whl | 52,669 | a6/d6/e67d456805c202a3fc62a8fd28f301404903a7219d925366e0cba3ac409b/qwen3_embed-1.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 19c66bc831ca5a80c63cf14e39f690b5 | 225663751b13321db15023e285e00df8ff710c39eee1f25bc3e1468d9e775d2e | a6d6e67d456805c202a3fc62a8fd28f301404903a7219d925366e0cba3ac409b | Apache-2.0 | [
"LICENSE"
] | 499 |
2.4 | tryton | 6.0.59 | Tryton desktop client | tryton
======
The desktop client of Tryton.
Tryton is business software, ideal for companies of any size, easy to use,
complete and 100% Open Source.
It provides modularity, scalability and security.
| null | Tryton | bugs@tryton.org | null | null | GPL-3 | business application ERP | [
"Development Status :: 5 - Production/Stable",
"Environment :: X11 Applications :: GTK",
"Framework :: Tryton",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: GNU General Public License (GPL)",
"Natural Language :: Bulgarian",
"Natural Language :: Catalan",
"Natural Language :: ... | [
"any"
] | http://www.tryton.org/ | http://downloads.tryton.org/6.0/ | >=3.6 | [] | [] | [] | [
"pycairo",
"python-dateutil",
"PyGObject>=3.19",
"GooCalendar>=0.7; extra == \"calendar\""
] | [] | [] | [] | [
"Bug Tracker, https://bugs.tryton.org/",
"Documentation, https://docs.tryton.org/latest/client-desktop/",
"Forum, https://www.tryton.org/forum",
"Source Code, https://hg.tryton.org/tryton"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T17:42:40.608613 | tryton-6.0.59.tar.gz | 530,171 | 64/e9/8c7c3246f629ae40bb0387152e4f4c597572713aa3dc19bcad593f749286/tryton-6.0.59.tar.gz | source | sdist | null | false | 96727b7cb48996262883849632e35210 | 498c381b079559eddceca49ecf01043e1317624a17020a338d980ac794c779a2 | 64e98c7c3246f629ae40bb0387152e4f4c597572713aa3dc19bcad593f749286 | null | [
"LICENSE"
] | 220 |
2.4 | LbAPCommon | 0.15.11 | Common utilities used by LHCb DPA WP2 related software | # LbAPCommon
[](https://gitlab.cern.ch/lhcb-dpa/analysis-productions/lbapcommon/-/commits/master)
[](https://gitlab.cern.ch/lhcb-dpa/analysis-productions/lbapcommon/-/commits/master)
Common utilities for LHCb DPA WP2 related software, including Analysis Productions workflow parsing, validation, and conversion tools.
## Features
- **Workflow Parsing**: Parse and render Analysis Productions YAML configuration files
- **DIRAC Conversion**: Convert workflow definitions to DIRAC production requests
- **CWL Conversion**: Convert production requests to Common Workflow Language (CWL) format
- **Validation & Linting**: Validate workflow configurations and check for common issues
- **Data Models**: Pydantic v2 models for workflow and production data structures
## Installation
```bash
pip install LbAPCommon
```
For development:
```bash
# Using pixi (recommended)
pixi install
pixi run test
# Using conda/mamba
mamba env create --file environment.yaml
conda activate lbaplocal-dev
pip install -e '.[testing]'
```
## Usage
### DIRAC Production Request Generation
Convert Analysis Productions `info.yaml` files to DIRAC production requests. This is the primary CLI used by the Analysis Productions system:
```bash
python -m LbAPCommon <production_name> --input info.yaml --output output.json
# With specific AP package version
python -m LbAPCommon my_production --input info.yaml --output result.json --ap-pkg-version v1r0
# Process only a specific job from the workflow
python -m LbAPCommon my_production --input info.yaml --output result.json --only-include job_name
# Dump individual YAML request files for each production
python -m LbAPCommon my_production --input info.yaml --output result.json --dump-requests
# With DIRAC server credentials
python -m LbAPCommon my_production --input info.yaml --output result.json --server-credentials user password
```
**Arguments:**
- `production_name` - Name of the production (alphanumeric + underscore, 2-200 chars)
- `--input` - Path to the `info.yaml` workflow definition file
- `--output` - Path for the output JSON file containing production requests
- `--ap-pkg-version` - AnalysisProductions package version (default: `v999999999999`)
- `--only-include` - Only process the workflow chain containing this job name
- `--dump-requests` - Also write individual `.yaml` files for each production request
- `--server-credentials` - DIRAC server credentials (username and password)
The output JSON contains:
- `rendered_yaml` - The rendered workflow YAML after Jinja2 processing
- `productions` - Dictionary of production requests, each containing:
- `request` - The DIRAC production request in LbAPI format
- `input-dataset` - Input dataset specification from Bookkeeping query
- `dynamic_files` - Auto-generated configuration files
- `raw-yaml` - Original YAML for the jobs in this production
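As a sketch of how downstream code might consume that file (the payload below is hypothetical and mirrors only the keys listed above):

```python
import json

# Hypothetical output.json payload; keys follow the structure described above.
raw = json.dumps({
    "rendered_yaml": "jobs: {}\n",
    "productions": {
        "MyProduction_1": {
            "request": {},
            "input-dataset": {},
            "dynamic_files": {},
            "raw-yaml": "",
        },
    },
})
output = json.loads(raw)  # with a real run: json.load(open("output.json"))
for name, prod in output["productions"].items():
    print(name, "->", sorted(prod))
```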
### CWL Conversion CLI
Convert DIRAC production request YAML files to CWL workflows:
```bash
# List productions in a file
lhcb-production-yaml-to-cwl list_productions production.yaml
# Generate CWL workflow files
lhcb-production-yaml-to-cwl generate production.yaml --output-dir ./cwl-output
# Convert a specific production
lhcb-production-yaml-to-cwl generate production.yaml --production "MyProduction"
```
### Python API
```python
from LbAPCommon import parse_yaml, render_yaml, validate_yaml
from LbAPCommon.dirac_conversion import group_in_to_requests, step_to_production_request
from LbAPCommon.prod_request_to_cwl import fromProductionRequestYAMLToCWL
# Parse and validate workflow YAML
rendered = render_yaml(yaml_content)
jobs_data = parse_yaml(rendered, production_name, wg)
validate_yaml(jobs_data, wg, production_name)
# Group jobs into production requests
for job_names in group_in_to_requests(jobs_data):
request = step_to_production_request(
production_name, jobs_data, job_names, input_spec, ap_pkg_version
)
# Convert DIRAC output to CWL
workflow, inputs, metadata = fromProductionRequestYAMLToCWL(yaml_path)
```
## Development
### Running Tests
```bash
pixi run test # Run all tests
pixi run test-dirac # Run DIRAC conversion tests
pixi run test-cov # Run with coverage report
```
### Updating Test Fixtures
When CWL output changes, update reference fixtures:
```bash
pixi run update-fixtures # Regenerate reference .cwl files
```
### Code Quality
```bash
pixi run pre-commit # Run all pre-commit hooks (linting, formatting, etc.)
```
## Documentation
Documentation goes in the [LbAPDoc](https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAPDoc/) project.
## Related Projects
- [LbAPI](https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAPI/) - REST API server for Analysis Productions
- [LbAPLocal](https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAPLocal/) - Local testing tools
- [LbAPWeb](https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAPWeb/) - Web interface for Analysis Productions
## License
GPL-3.0 - See [COPYING](COPYING) for details.
| text/markdown | LHCb | null | null | null | null | LHCb HEP CERN | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"awkward>=2.1.1",
"boost-histogram",
"hist",
"jinja2",
"lxml",
"matplotlib",
"numpy",
"pandas",
"pyyaml",
"requests",
"setuptools",
"uproot>=5.0.0",
"networkx",
"pydantic>2",
"lbdiracwrappers",
"cwl-utils",
"cwltool",
"pytest; extra == \"testing\"",
"pytest-cov; extra == \"testin... | [] | [] | [] | [
"Homepage, https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAPCommon",
"Bug Reports, https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAPCommon/issues",
"Source, https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAPCommon"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T17:42:12.889651 | lbapcommon-0.15.11.tar.gz | 513,562 | e9/67/4aebbf5e32e3eaa86c5f292b01131dd6e159ad40a6972bcaf4640abcd98d/lbapcommon-0.15.11.tar.gz | source | sdist | null | false | c2905fb6181ed6864a642f9f03c53a91 | b0f1ff3fb7632f74dd11c048c6dc55dc9bb604430d1c53683f3f8ba5591b481c | e9674aebbf5e32e3eaa86c5f292b01131dd6e159ad40a6972bcaf4640abcd98d | null | [
"LICENSE"
] | 0 |
2.1 | ez | 11.5.9 | easy stuff | This module is for easy interaction with the Linux, macOS, and Windows shells.
| null | Jerry Zhu | jerryzhujian9@gmail.com | null | null | GPLv3+ | shell, cross-platform, easy, wrapper | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.11"
] | [] | https://pypi.python.org/pypi/ez | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/4.0.2 CPython/3.11.4 | 2026-02-18T17:42:10.430288 | ez-11.5.9.tar.gz | 77,666 | eb/b4/d57898615e0de111afb30b01ed4743c4f9b0d5fda27f5ff870547e1063e6/ez-11.5.9.tar.gz | source | sdist | null | false | 8d8873a63720599903d0c98d2627db9b | ee8e32f52bfc13aa85a65eef0adaf9e94a70f8e59070e70e18276c9694bb0e1a | ebb4d57898615e0de111afb30b01ed4743c4f9b0d5fda27f5ff870547e1063e6 | null | [] | 157 |
2.4 | amyrmahdy-llmwrap | 0.1.1 | Lightweight, production-ready adapter for LLM chat completions (text + multimodal) | # llmwrap
Lightweight adapter for LLM chat completions — text, images, PDFs/files.
OpenAI-compatible (works great with OpenRouter, Anthropic via compat layer, etc.).
## Installation
```bash
pip install amyrmahdy-llmwrap
# or
uv add amyrmahdy-llmwrap
```
## Quick start
```python
from amyrmahdy_llmwrap import LLM
llm = LLM(
api_key="sk-...",
base_url="https://openrouter.ai/api/v1",
model="anthropic/claude-3.5-sonnet",
)
# Text
print(llm.complete([{"role": "user", "content": "Hi!"}]))
# Image + text
messages = [
{"role": "user", "content": [
LLM.text_content("Describe this."),
LLM.image_content("photo.jpg"),
]}
]
print(llm.complete(messages))
``` | text/markdown | null | amyrmahdy <amyrmahdy1@gmail.com> | null | null | MIT | ai, llm, multimodal, openai, wrapper | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"openai>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/amyrmahdy/llmwrap",
"Repository, https://github.com/amyrmahdy/llmwrap.git",
"Documentation, https://github.com/amyrmahdy/llmwrap#readme",
"Issues, https://github.com/amyrmahdy/llmwrap/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.1","id":"xia","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T17:42:09.988089 | amyrmahdy_llmwrap-0.1.1-py3-none-any.whl | 2,726 | bb/5e/ba10cdd1931a371725f5d68bdb4e6a226b473731ec4de90cad1ff489ced4/amyrmahdy_llmwrap-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | a28c6269c39726b3a6c2bb02682720a9 | cc12d1d4658c64f382212dbe2a9225be269f55e7ef2ccc42937195755b0edeac | bb5eba10cdd1931a371725f5d68bdb4e6a226b473731ec4de90cad1ff489ced4 | null | [] | 236 |
2.4 | fileglancer | 2.5.0 | Browse, share, and publish files on the Janelia file system | # Fileglancer
[](https://github.com/JaneliaSciComp/fileglancer/actions/workflows/build.yml?query=branch%3Amain)
Fileglancer is a web application designed to allow researchers to easily browse, share, and manage large scientific imaging data using [OME-NGFF](https://github.com/ome/ngff) (i.e. OME-Zarr). Our goal is to reduce the friction experienced by users who want to easily share their data with colleagues at their institution. Simply browse to your data, click on the Neuroglancer link, and send that link to your collaborator.
Core features:
- Browse and manage files on network file shares (NFS) using an intuitive web UI
- Create a "data link" for any file share path, allowing web-based anonymous access to your data
- Shareable links to Neuroglancer and other viewers
- Integration with our help desk (JIRA) for file conversion requests
- Integration with the [x2s3](https://github.com/JaneliaSciComp/x2s3) proxy service, to easily share data on the internet
See the [documentation](https://janeliascicomp.github.io/fileglancer-docs/) for more information.
<p align="center">
<img alt="Fileglancer screenshot" width="800" src="https://github.com/user-attachments/assets/e17079a6-66ca-4064-8568-7770c5af33d5" />
</p>
## Installation
### Personal Deployment
Fileglancer can be run in a manner similar to Jupyter notebooks, by starting a web server from the command-line:
```bash
# Install from PyPI
pip install fileglancer
# Start the server
fileglancer start
```
This will start your personal server locally and open a web browser with Fileglancer loaded. By default, only your home directory (`~/`) will be browsable. You can browse and view your own data this way, but links to data will only work as long as your server is running. To share data reliably with others, you will need a persistent shared deployment.
### Shared Deployments
Fileglancer is primarily intended for shared deployments on an intranet. This allows groups of users to share data easily. If you are on the internal Janelia network, navigate to "fileglancer.int.janelia.org" in your web browser and log in with your Okta credentials. If you are outside of Janelia, you'll need to ask your System Administrator to install Fileglancer on a server on your institution's network.
## Software Architecture
Fileglancer has a React front-end and a FastAPI backend. Uvicorn is used to manage the set of FastAPI workers. Inspired by JupyterHub's method of spinning up individual user servers using setuid, we use seteuid to change the effective user of each worker process as necessary to handle the incoming requests. This allows each logged-in user to access their resources on the network file systems. The backend database access is managed by SQLAlchemy and supports many databases including SQLite and PostgreSQL.
<p align="center">
<img alt="Fileglancer architecture diagram" width="800" align="center" src="https://github.com/user-attachments/assets/31b30b01-f313-4295-8536-bac8c3bdde73" />
</p>
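The seteuid trick can be sketched as a small context manager (an illustrative assumption, not Fileglancer's actual code; the worker must start with root privileges for `os.seteuid` to succeed):

```python
import os
from contextlib import contextmanager

@contextmanager
def as_effective_user(uid: int, gid: int):
    """Temporarily adopt a user's effective IDs for the scope of one request.

    Sketch of the seteuid pattern described above; the real service wires
    this kind of switch into its FastAPI request handling.
    """
    os.setegid(gid)  # group first, while we still have the privilege
    os.seteuid(uid)
    try:
        yield
    finally:
        # Drop back to root so the worker can serve the next user's request.
        os.seteuid(0)
        os.setegid(0)
```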
## Documentation
- [User guide](https://janeliascicomp.github.io/fileglancer-docs/)
- [Developer guide](docs/Development.md)
## Related repositories
- [fileglancer-hub](https://github.com/JaneliaSciComp/fileglancer-hub) - Production deployment files
- [fileglancer-janelia](https://github.com/JaneliaSciComp/fileglancer-janelia) - Janelia-specific customizations
- [fileglancer-docs](https://github.com/JaneliaSciComp/fileglancer-docs) - Documentation website
| text/markdown | null | Allison Truhlar <truhlara@janelia.hhmi.org>, Jody Clements <clementsj@janelia.hhmi.org>, Cristian Goina <goinac@janelia.hhmi.org>, Konrad Rokicki <rokickik@janelia.hhmi.org> | null | null | BSD 3-Clause License
Copyright (c) 2025, Howard Hughes Medical Institute
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | file browser, ngff, scientific imaging | [
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4,>=3.14.0 | [] | [] | [] | [
"aiosqlite>=0.21.0",
"alembic>=1.17.0",
"atlassian-python-api>=4.0.7",
"authlib>=1.6.5",
"cachetools>=6.2.1",
"click>=8.0",
"cryptography>=41.0.0",
"fastapi>=0.119.1",
"httpx>=0.28",
"itsdangerous>=2.2.0",
"loguru>=0.7.3",
"lxml>=5.3.1",
"pandas>=2.3.3",
"psycopg2-binary<3,>=2.9.10",
"py... | [] | [] | [] | [
"Homepage, https://github.com/JaneliaSciComp/fileglancer",
"Bug Tracker, https://github.com/JaneliaSciComp/fileglancer/issues",
"Repository, https://github.com/JaneliaSciComp/fileglancer.git"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T17:42:08.542569 | fileglancer-2.5.0.tar.gz | 8,224,202 | 12/84/3e99866dc4d3dc3ac0a51c3571061d3c31f126e7a05b98c3f464b91d9aa5/fileglancer-2.5.0.tar.gz | source | sdist | null | false | 71a97020e04f163f58bd7f0993ef8b85 | 9f1ddc58b72c49676863b5c0ad24db3190bb49c286f7e3462c63abf7cdd94f32 | 12843e99866dc4d3dc3ac0a51c3571061d3c31f126e7a05b98c3f464b91d9aa5 | null | [
"LICENSE"
] | 220 |
2.4 | subsonic-connector | 0.3.11 | SubSonic Connector based on py-sonic | # Subsonic Connector
## Reference
This library relies on the [py-sonic](https://github.com/crustymonkey/py-sonic) project.
The current version I use in this project is [`1.0.2`](https://github.com/crustymonkey/py-sonic/releases/tag/1.0.2).
## Status
This software is in its early development phase.
## Links
Type|Link
:---|:---
Source Code|[GitHub](https://github.com/GioF71/subsonic-connector)
Python Library|[PyPI](https://pypi.org/project/subsonic-connector/)
## Instructions
Create your own `.env` file. Use `.sample.env` as a reference for the format of the file.
### Initialization
From a terminal, type
```text
poetry shell
poetry install
```
#### Test execution
Then you can run the simple test using the following command:
```text
python subsonic_connector/test-cn.py
```
Make sure to load the variables specified in the `.env` file.
The test is currently just a `main` and it requires a running subsonic server. I am currently using [Navidrome](https://github.com/navidrome/navidrome).
| text/markdown | GioF71 | giovanni.fulco@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"py-sonic==1.1.1",
"python-dotenv>=1.0.1"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/GioF71/subsonic-connector/issues",
"Homepage, https://github.com/GioF71/subsonic-connector"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T17:41:41.700200 | subsonic_connector-0.3.11.tar.gz | 9,952 | 8f/f1/6126a5d46d1f93563b61adc54cf2717fb8f7d23be073e18de692147bedca/subsonic_connector-0.3.11.tar.gz | source | sdist | null | false | b8fca27a73bf0bce8f0489263e3407b9 | 50f6052416c8baf8682314e27b585076e44f05b2fbac6136e7de434a5611b681 | 8ff16126a5d46d1f93563b61adc54cf2717fb8f7d23be073e18de692147bedca | null | [
"LICENSE"
] | 233 |
2.4 | infisicalsdk | 1.0.16 | Official Infisical SDK for Python (Latest) | The official Infisical SDK for Python.
Documentation can be found at https://github.com/Infisical/python-sdk-official
| text/markdown | Infisical | support@infisical.com | null | null | null | Infisical, Infisical API, Infisical SDK, SDK, Secrets Management | [] | [] | https://github.com/Infisical/python-sdk-official | null | null | [] | [] | [] | [
"python-dateutil",
"aenum",
"requests~=2.32",
"boto3~=1.35",
"botocore~=1.35"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T17:41:34.747148 | infisicalsdk-1.0.16.tar.gz | 17,254 | e6/36/1e8650cbdcdec755c85ca0293956b25efb7d4371f54b20ec53e35ab0502f/infisicalsdk-1.0.16.tar.gz | source | sdist | null | false | e696da22149d7bc6c8e1f6a3c94d07b3 | 7518446a60418d46c7926f10447f28c9e24d7b25ba25885d9bae428297d32694 | e6361e8650cbdcdec755c85ca0293956b25efb7d4371f54b20ec53e35ab0502f | null | [
"LICENSE"
] | 5,961 |
2.4 | ocea | 2.2.1 | CLI interface for managing DigitalOcean droplets with local inventory tracking. | ====
OCEA
====
**OCEA** - DigitalOcean CLI with Local Inventory
A command-line interface for managing DigitalOcean droplets with local SQLite-based
inventory tracking. OCEA enables smart droplet management including automatic
snapshot-based restore, reserved IP reassignment, and comprehensive audit logging.
Features
========
* **Local Inventory Tracking**: SQLite database stores droplet metadata, snapshots,
and reserved IPs locally for offline access and faster operations.
* **Smart Restore**: Automatically restore terminated droplets from snapshots using
stored configuration (region, size, reserved IPs).
* **Audit Logging**: All operations are logged with timestamps for accountability
and debugging.
* **Multi-Droplet Operations**: Manage multiple droplets in a single command.
* **Reserved IP Management**: Automatic reassignment of reserved (floating) IPs
during restore operations.
* **In-Progress Tracking**: See snapshots that are currently being created.
* **Python API**: Programmatic access to all functionality.
Installation
============
Install from source::
pip install -e .
For development::
pip install -e ".[dev]"
Configuration
=============
OCEA requires a DigitalOcean API token. Configure it using one of these methods
(in priority order):
1. **.env file** in the current directory::
DO_API_TOKEN=your_token_here
2. **Environment Variable** (recommended)::
export DO_API_TOKEN=your_token_here
# or
export DIGITALOCEAN_TOKEN=your_token_here
3. **Config file** at ``~/.config/ocea/token`` (Linux/macOS) or
``%APPDATA%\ocea\token`` (Windows)
Optional: Set a custom database path::
export OCEA_DB_PATH=/path/to/ocea.db
Quick Start
===========
::
# Set your API token
export DO_API_TOKEN=your_token_here
# List all droplets
ocea list
# Check inventory status
ocea status
# Create a new droplet
ocea new my-server -r nyc1 -s s-1vcpu-1gb -i ubuntu-24-04-x64
# Create a snapshot
ocea snap create my-server
# Gracefully shut down
ocea down my-server
# Terminate with auto-snapshot (for cost savings)
ocea down my-server --terminate
# Restore from snapshot
ocea up my-server
Commands
========
list
----
List droplets and snapshots from DigitalOcean API::
ocea list # All droplets and snapshots
ocea list --droplets # Only droplets
ocea list --snaps # Only snapshots
ocea list -r nyc1 # Filter by region
ocea list -r nyc1 -r sfo3 # Filter by multiple regions
ocea list --local # From local database only
ocea list --json # Output as JSON
status
------
Show inventory summary or specific droplet details::
ocea status # Summary of all droplets
ocea status my-server # Details for specific droplet
ocea status --local # From local database only
ocea status --json # Output as JSON
up
--
Power on droplets or restore from snapshots. OCEA automatically determines the
correct action based on the droplet's state:
* If the droplet is off: powers it on
* If the droplet is archived (terminated): restores from the most recent snapshot
::
ocea up my-server # Power on or restore
ocea up server1 server2 # Multiple droplets
ocea up --no-attach-ip srv # Skip reserved IP reassignment
ocea up --json my-server # Output as JSON
down
----
Power off or terminate droplets::
ocea down my-server # Graceful shutdown
ocea down my-server --terminate # Terminate with auto-snapshot
ocea down my-server --terminate --no-snapshot # Terminate without snapshot
ocea down --json my-server # Output as JSON
new
---
Create a new droplet::
# Basic creation
ocea new my-server -r nyc1 -s s-1vcpu-1gb -i ubuntu-24-04-x64
# With SSH key and tags
ocea new my-server -r nyc1 -s s-1vcpu-1gb -i ubuntu-24-04-x64 \
--ssh-key my-key --tag production --tag web
# With all options
ocea new my-server -r nyc1 -s s-1vcpu-1gb -i ubuntu-24-04-x64 \
--ssh-key my-key --backups --ipv6 --monitoring
snap
----
Manage snapshots::
ocea snap list # List all snapshots
ocea snap create my-server # Create a snapshot
ocea snap delete 123456789 # Delete by ID
ocea snap delete snap1 snap2 # Delete multiple
ocea snap list --json # Output as JSON
reboot
------
Reboot active droplets::
ocea reboot my-server # Reboot single droplet
ocea reboot srv1 srv2 srv3 # Reboot multiple droplets
ocea reboot --json my-server # Output as JSON
Python API
==========
OCEA provides a Python API for programmatic access::
from ocea.api import OCEA
# Initialize (uses environment config)
ocea = OCEA()
# List droplets
droplets = ocea.list_droplets()
for d in droplets:
print(f"{d.name}: {d.status} ({d.public_ip})")
# Create a droplet
droplet = ocea.create_droplet(
name="my-server",
region="nyc1",
size="s-1vcpu-1gb",
image="ubuntu-24-04-x64"
)
# Create a snapshot
snapshot = ocea.create_snapshot("my-server", "my-server-backup")
# Terminate (creates snapshot by default)
ocea.terminate("my-server")
# Restore from snapshot
ocea.power_on("my-server")
Database Schema
===============
OCEA maintains a local SQLite database with the following tables:
* **droplets**: Stores droplet metadata (name, region, size, IPs, status)
* **snapshots**: Tracks snapshots and their associated droplets
* **floating_ips**: Maps reserved IPs to droplets
* **actions**: Audit log of all operations
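The tables above can be approximated with the standard-library ``sqlite3`` module; the column names below are illustrative, not OCEA's exact DDL:

```python
import sqlite3

# Illustrative mirror of OCEA's local database; the actual column set
# and constraints may differ from what OCEA creates.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE droplets (
    name TEXT PRIMARY KEY, region TEXT, size TEXT,
    public_ip TEXT, status TEXT  -- active / off / archive
);
CREATE TABLE snapshots (
    id INTEGER PRIMARY KEY,
    droplet_name TEXT REFERENCES droplets(name),
    created_at TEXT
);
CREATE TABLE floating_ips (ip TEXT PRIMARY KEY, droplet_name TEXT);
CREATE TABLE actions (id INTEGER PRIMARY KEY, verb TEXT, target TEXT, ts TEXT);
""")
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
print(sorted(tables))
```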
Droplet Status Values
---------------------
* ``active`` - Running
* ``off`` - Powered off but exists in DigitalOcean
* ``archive`` - Terminated (deleted from DigitalOcean but has snapshot for restore)
Development
===========
Setup::
make bootstrap # Create virtualenv and install dependencies
make precommit # Install pre-commit hooks
Run tests::
make test # Unit + integration tests (cached)
make test-all # Full test suite with coverage
Run linting::
make lint # Check with Ruff
make format # Auto-fix with Ruff
Build::
make build # Build wheel and sdist
make version # Print current version
License
=======
MIT License. See LICENSE file for details.
Author
======
Kevin Steptoe
| text/x-rst | Kevin Steptoe | null | null | null | null | digitalocean, cli, droplet, snapshot, cloud | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
... | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.1",
"pydo>=0.3.0",
"sqlalchemy>=2.0",
"rich>=13.0",
"python-dotenv>=1.0",
"pytest>=8; extra == \"testing\"",
"pytest-cov>=5; extra == \"testing\"",
"pytest-xdist>=3.6; extra == \"testing\"",
"pytest-timeout>=2.3; extra == \"testing\"",
"pytest-mock>=3.14; extra == \"testing\"",
"ocea[t... | [] | [] | [] | [
"Homepage, https://github.com/ksteptoe/ocea",
"Repository, https://github.com/ksteptoe/ocea",
"Bug Tracker, https://github.com/ksteptoe/ocea/issues"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T17:40:20.222301 | ocea-2.2.1.tar.gz | 64,710 | 0b/7f/6cecaf40bdd8b463d53c72a63d6975196a0b985b530237f52af91c522a92/ocea-2.2.1.tar.gz | source | sdist | null | false | 155ec7359bfd5177ce0589bba19d20a5 | ad9d617a6dcde1eeab93dd12da5e399c8ee6e5aee8050b470c5f850cf0fd7224 | 0b7f6cecaf40bdd8b463d53c72a63d6975196a0b985b530237f52af91c522a92 | MIT | [
"LICENSE.txt",
"AUTHORS.rst"
] | 207 |
2.4 | strange-mol | 0.0.1 | Compute possible interaction between molecules. | # strange
python -m src.strange -i ~/Downloads/4jet_cleaned.gro -a "both" -s "protein" "not protein" -o ~/Downloads/test.csv --visualisation ~/Downloads/test.mvsj --pharmacophore ~/Downloads/pharmacophore_1.csv ~/Downloads/pharmacophore_2.csv
| text/markdown | null | Lucas ROUAUD <lucas.rouaud@gmail.com> | null | Lucas ROUAUD <lucas.rouaud@gmail.com> | MIT License | python, chemical, protein, properties, bioinformatic, chemoinformatic | [
"License :: OSI Approved :: MIT License",
"Development Status :: 4 - Beta",
"Natural Language :: English",
"Programming Language :: Python :: 3.5",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Topic :: Scientific/Engineering :: Chemistry"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"mdanalysis",
"molviewspec",
"numpy",
"OpenMM",
"pdbfixer",
"pyyaml",
"rdkit"
] | [] | [] | [] | [
"Homepage, https://gitlab.galaxy.ibpc.fr/rouaud/strange",
"Repository, https://gitlab.galaxy.ibpc.fr/rouaud/strange"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T17:38:13.436571 | strange_mol-0.0.1-py3-none-any.whl | 49,668 | 34/29/6de314bb897f2d2631f4a54986fab2b2818b4381d66e49ad26b8b797a30e/strange_mol-0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | d61104717e51d4b5e00634ff4103ea09 | a452054e31def72871e376fd6fde9b0b2487b5228cb24af0fbe4ce3834822f57 | 34296de314bb897f2d2631f4a54986fab2b2818b4381d66e49ad26b8b797a30e | null | [
"LICENSE_CC_BY",
"LICENSE_MIT",
"AUTHORS.md"
] | 243 |
2.4 | pandoraspec | 0.3.9 | DORA Compliance Auditor for OpenAPI Specs | # PanDoraSpec
**The Open DORA Compliance Engine for OpenAPI Specs.**
PanDoraSpec is a CLI tool that performs deep technical due diligence on APIs to verify compliance with **DORA (Digital Operational Resilience Act)** requirements. It compares OpenAPI/Swagger specifications against real-world implementation to detect schema drift, resilience gaps, and security issues.
---
## 💡 Why PanDoraSpec?
### 1. Compliance as Code
DORA audits are often manual, annual spreadsheets. PanDoraSpec provides **Continuous Governance**, proving that every commit has been verified for regulatory requirements (Drift, Resilience, Security).
### 2. The "Virtual CISO" Translation
Developers see `HTTP 500`. Executives see **"Article 25 Violation"**. Module E translates technical failures into **Regulatory Risk Scores**, bridging the gap between DevOps and the Boardroom.
### 3. Zero-Config Guardrails
It requires **no configuration** to catch critical issues. It acts as a safety net that catches schema drift and leaked secrets *before* they hit production.
---
## 📦 Installation
```bash
pip install pandoraspec
```
### System Requirements
The PDF report generation requires `weasyprint`, which depends on **Pango**.
---
## 🚀 Usage
Run the audit directly from your terminal.
### Basic Scan
```bash
pandoraspec https://petstore.swagger.io/v2/swagger.json
```
### With Options
```bash
pandoraspec https://api.example.com/spec.json --vendor "Stripe" --key "sk_live_..."
```
### Local File
```bash
pandoraspec ./openapi.yaml
```
### Override Base URL
If your OpenAPI spec uses variables (e.g. `https://{env}.api.com`) or you want to audit a specific target:
```bash
pandoraspec https://api.example.com/spec.json --base-url https://staging.api.example.com
```
---
## ⚙️ Configuration
### 🧙 Configuration Wizard
Get started quickly by generating a configuration file interactively:
```bash
pandoraspec init
```
This will guide you through creating a `pandoraspec.yaml` file with your target URL, vendor name, and seed data templates.
### Configuration File (`pandoraspec.yaml`)
You can store your settings in a YAML file:
```yaml
target: "https://petstore.swagger.io/v2/swagger.json"
vendor: "MyVendor"
api_key: "my-secret-key"
# Avoid False Positives in DLP by allowing support emails
dlp_allowed_domains:
- "mycompany.com"
seed_data:
user_id: 123
```
**Precedence Rules:**
1. **CLI Arguments** (Highest Priority)
2. **Configuration File**
3. **Defaults** (Lowest Priority)
Example:
`pandoraspec --vendor "CLI Override" --config pandoraspec.yaml` will use the target from YAML but the vendor "CLI Override".
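The precedence rules amount to a layered dictionary merge; a minimal sketch (not PanDoraSpec's actual internals — `resolve_settings` is a hypothetical name):

```python
# Layered merge: defaults < config file < CLI arguments.
# CLI values of None mean "flag not given" and are skipped, so they
# never shadow a value from the config file.
def resolve_settings(defaults: dict, file_cfg: dict, cli: dict) -> dict:
    merged = {**defaults, **file_cfg}
    merged.update({k: v for k, v in cli.items() if v is not None})
    return merged

settings = resolve_settings(
    defaults={"vendor": "Unknown", "target": None},
    file_cfg={"target": "https://petstore.swagger.io/v2/swagger.json",
              "vendor": "MyVendor"},
    cli={"vendor": "CLI Override", "target": None},
)
print(settings["vendor"])  # CLI Override
print(settings["target"])  # https://petstore.swagger.io/v2/swagger.json
```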
### ✅ Validate Configuration
Ensure your configuration file is valid before running an audit:
```bash
pandoraspec validate --config pandoraspec.yaml
```
### 🔐 Dynamic Authentication (Hooks)
For complex flows (OAuth2, MFA, etc.) that require logic beyond a static API Key, you can use a **Pre-Audit Hook**.
This runs a custom Python script to acquire a token *before* the audit starts.
**1. Create a script (`auth_script.py`)** that returns your token as a string:
```python
import os
import requests
def get_token():
# Example: Fetch token from an OAuth2 endpoint
response = requests.post("https://auth.example.com/token", data={
"client_id": os.getenv("CLIENT_ID"),
"client_secret": os.getenv("CLIENT_SECRET"),
"grant_type": "client_credentials"
})
return response.json()["access_token"]
```
**2. Configure `pandoraspec.yaml`**:
```yaml
target: "https://api.example.com/openapi.json"
auth_hook:
path: "auth_script.py"
function_name: "get_token"
```
PanDoraSpec will execute `get_token()`, take the returned string, and use it as the `Authorization: Bearer <token>` for all audit requests.
---
## 🧪 Testing Modes
### 🏎️ Zero-Config Testing (Compliance)
For standard **DORA compliance**, you simply need to verify that your API implementation matches its specification. **No configuration is required.**
```bash
pandoraspec https://petstore.swagger.io/v2/swagger.json
```
This runs a **fuzzing** audit where random data is generated based on your schema types.
### 🧠 Advanced Testing (Seed Data)
To test **specific business workflows** (e.g., successfully retrieving a user profile), you can provide "Seed Data". This tells PanDoraSpec to use known, valid values instead of random fuzzing data.
```bash
pandoraspec https://petstore.swagger.io/v2/swagger.json --config seed_parameters.yaml
```
> [!NOTE]
> Any parameters **NOT** explicitly defined in your seed data will continue to be **fuzzed** with random values. This ensures that you still get the benefit of stress testing on non-critical fields while controlling the critical business logic.
#### Configuration Hierarchy
The engine resolves values in this order: **Endpoints > Verbs > General**.
```yaml
seed_data:
# 1. General: Applies to EVERYTHING (path params, query params, headers)
general:
username: "test_user"
# 2. Verbs: Applies only to specific HTTP methods
verbs:
POST:
username: "admin_user"
# 3. Endpoints: Applies only to specific routes
endpoints:
/users/me:
GET:
limit: 10
```
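The Endpoints > Verbs > General resolution above can be sketched as a most-specific-first lookup (hypothetical function, not the engine's code):

```python
def resolve_seed(seed: dict, path: str, verb: str, param: str):
    """Most specific source wins: endpoint, then verb, then general.
    Returning None means the parameter falls back to fuzzing."""
    for source in (
        seed.get("endpoints", {}).get(path, {}).get(verb, {}),
        seed.get("verbs", {}).get(verb, {}),
        seed.get("general", {}),
    ):
        if param in source:
            return source[param]
    return None

seed = {
    "general": {"username": "test_user"},
    "verbs": {"POST": {"username": "admin_user"}},
    "endpoints": {"/users/me": {"GET": {"limit": 10}}},
}
print(resolve_seed(seed, "/users/me", "GET", "limit"))      # 10
print(resolve_seed(seed, "/users/me", "POST", "username"))  # admin_user
print(resolve_seed(seed, "/pets", "GET", "username"))       # test_user
```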
#### 🔗 Dynamic Seed Data (Recursive Chaining)
You can even test **dependency chains** where one endpoint requires data from another.
```yaml
endpoints:
# Level 1: Get the current user ID
/user/me:
GET:
authorization: "Bearer static-token"
# Level 2: Use that ID to get their orders
/users/{userId}/orders:
GET:
userId:
from_endpoint: "GET /user/me"
extract: "data.id"
```
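The `extract: "data.id"` key above is a dotted-path lookup into the upstream response; a minimal sketch of that idea:

```python
def extract(payload, dotted_path: str):
    """Walk a nested dict by a dotted path like 'data.id'."""
    value = payload
    for key in dotted_path.split("."):
        value = value[key]
    return value

# Hypothetical upstream response from `GET /user/me`.
upstream = {"data": {"id": 42, "name": "alice"}}
print(extract(upstream, "data.id"))  # 42
```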
---
## 🛡️ What It Checks
### Module A: The Integrity Test (Drift)
Checks if your API implementation matches your documentation.
- **Why?** DORA requires you to monitor if the service effectively supports your critical functions.
### Module B: The Resilience Test
Stress tests the API to ensure it handles invalid inputs gracefully (`4xx` vs `5xx`).
- **Why?** DORA Article 25 calls for "Digital operational resilience testing".
### Module C: Security Hygiene & DLP
Checks for:
- Security headers (HSTS, CSP, etc.)
- Auth enforcement on sensitive endpoints.
- **Data Leakage Prevention (DLP)**: Scans responses for PII (Emails, SSNs, Credit Cards) and Secrets (AWS Keys, Private Keys).
### Module E: AI Auditor (Virtual CISO)
Uses OpenAI (GPT-4) to perform a semantic risk assessment of technical findings.
- **Requires**: `OPENAI_API_KEY` environment variable.
- **Output**: Generates a Risk Score (0-10) and an Executive Summary.
- **Configuration**:
- `export OPENAI_API_KEY=sk-...`
- Override model: `--model gpt-3.5-turbo`
### Module D: The Report
Generates a PDF report: **"DORA ICT Third-Party Technical Risk Assessment"**.
---
## 🏭 CI/CD
PanDoraSpec is designed for automated pipelines. It returns **Exit Code 1** if any issues are found, blocking deployments if needed.
### 🐳 Docker
Run without installing Python:
```bash
docker run --rm -v $(pwd):/data ghcr.io/0d15e0/pandoraspec:latest https://petstore.swagger.io/v2/swagger.json --output /data/verification_report.pdf
```
### 🐙 GitHub Actions
Add this step to your `.github/workflows/pipeline.yml`:
```yaml
- name: DORA Compliance Audit
uses: 0D15E0/PanDoraSpec@v0.2
with:
target: 'https://api.example.com/spec.json'
vendor: 'MyCompany'
format: 'junit'
output: 'dora-results.xml'
```
### 📊 JUnit Reporting
Use `--format junit` to generate standard XML test results that CI systems (Jenkins, GitLab, Azure DevOps) can parse to display test pass/fail trends.
---
## 🛠️ Development
### Local Setup
To run the CLI locally without reinstalling after every change:
1. **Clone & CD**:
```bash
git clone ...
cd pandoraspec
```
2. **Create & Activate Virtual Environment**:
```bash
python3 -m venv venv
source venv/bin/activate
```
3. **Editable Install**:
```bash
pip install -e .
```
### 📦 Publishing (Release Flow)
This repository uses a **Unified Release Pipeline**.
1. **Update Version**: Open `pyproject.toml` and bump the version (e.g., `version = "0.2.8"`). Commit and push.
2. **Draft Release**:
- Go to the **Releases** tab in GitHub.
- Click **Draft a new release**.
- Create a tag MATCHING the version (e.g., `v0.2.8`).
- Click **Publish release**.
The workflow will verify version consistency and automatically publish to **Docker (GHCR)** and **PyPI**.
---
## 📄 License
MIT
| text/markdown | null | Ulises Merlan <ulimerlan@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"schemathesis==4.9.1",
"typer[all]",
"rich",
"weasyprint",
"requests",
"pydantic",
"pyyaml",
"openai",
"pytest; extra == \"dev\"",
"responses; extra == \"dev\"",
"mypy; extra == \"dev\"",
"types-requests; extra == \"dev\"",
"types-PyYAML; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pyt... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:37:26.429540 | pandoraspec-0.3.9.tar.gz | 39,378 | 4d/ff/6464c58482e153dc4e0be5467065517b605f1183a6332be86912620ec171/pandoraspec-0.3.9.tar.gz | source | sdist | null | false | fefc8035f90e878369f86b9ce4060597 | 2b583e1e7e304a25c022c54f6e507e6f1f3815f1a35e1cd4c6b6c0ce8ad0529a | 4dff6464c58482e153dc4e0be5467065517b605f1183a6332be86912620ec171 | null | [] | 220 |
2.4 | gsc-bing-mcp | 0.2.1 | MCP server for Google Search Console & Bing Webmaster Tools — uses your Chrome browser session, no API keys needed | # gsc-bing-mcp
An MCP (Model Context Protocol) server that gives AI agents (Claude, Cline) direct access to your **Google Search Console** and **Bing Webmaster Tools** search performance data.
**Zero API keys for Google** — uses your existing Chrome browser session.
**30-second setup for Bing** — just copy one free key from your dashboard.
---
## ✨ Features
| Capability | Tool |
|-----------|------|
| List all your GSC properties | `gsc_list_sites` |
| Search analytics (clicks, impressions, CTR, position) | `gsc_search_analytics` |
| Top queries by clicks | `gsc_top_queries` |
| Top pages by clicks | `gsc_top_pages` |
| Sitemap status | `gsc_list_sitemaps` |
| URL indexing inspection | `gsc_inspect_url` |
| List Bing sites | `bing_list_sites` |
| Bing search performance | `bing_search_analytics` |
| Bing crawl statistics | `bing_crawl_stats` |
| Bing keyword stats | `bing_keyword_stats` |
| Refresh Google session | `refresh_google_session` |
---
## 🔐 How Authentication Works
### Google Search Console (Zero Setup)
This server reads your Chrome browser's cookies directly from disk — the same way tools like [yt-dlp](https://github.com/yt-dlp/yt-dlp) work.
1. Your Chrome stores session cookies after you log in to Google
2. This server reads those cookies using [rookiepy](https://github.com/thewh1teagle/rookie) (an ultra-lightweight Rust-based library)
3. Generates a `SAPISIDHASH` authorization header (SHA1 of timestamp + SAPISID cookie)
4. Makes requests to GSC's API — the exact same endpoints your browser uses
**Requirements:** Just be logged in to Google in Chrome. That's it.
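The `SAPISIDHASH` scheme in step 3 follows the widely documented browser convention; a sketch with stdlib only (the cookie value is a placeholder, and the origin here is an assumption):

```python
import hashlib
import time

def sapisidhash(sapisid: str, origin: str = "https://search.google.com") -> str:
    """Build an Authorization header value the way browsers do:
    SHA1 of '<unix-timestamp> <SAPISID cookie> <origin>'."""
    ts = int(time.time())
    digest = hashlib.sha1(f"{ts} {sapisid} {origin}".encode()).hexdigest()
    return f"SAPISIDHASH {ts}_{digest}"

# Placeholder cookie value for illustration only.
header = sapisidhash("placeholder-cookie-value")
print(header[:12])  # SAPISIDHASH
```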
### Bing Webmaster Tools (Free API Key)
Bing provides a free, never-expiring API key from your account:
1. Go to [bing.com/webmasters](https://www.bing.com/webmasters)
2. Click **Settings → API Access**
3. Click **Generate API Key**
4. Copy the key and paste it into your MCP config (see setup below)
---
## 🚀 Installation & Setup
### Prerequisites
- Chrome browser with Google account logged in
- Python 3.11+ **or** [uv](https://docs.astral.sh/uv/) (recommended — no Python management needed)
### Option A: Install with `uvx` (Recommended — Zero Python Setup)
**Step 1: Install uv** (one-time, ~10 seconds)
```bash
# macOS / Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell):
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```
**Step 2: Get your Bing API key** (2 minutes)
1. Go to [bing.com/webmasters](https://www.bing.com/webmasters) → Settings → API Access
2. Click **Generate API Key** and copy it
**Step 3: Add to your MCP config** (30 seconds)
For **Cline** (VS Code), open your MCP settings file and add:
```json
{
"mcpServers": {
"gsc-bing-mcp": {
"command": "uvx",
"args": ["gsc-bing-mcp"],
"env": {
"BING_API_KEY": "paste-your-bing-key-here"
}
}
}
}
```
For **Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"gsc-bing-mcp": {
"command": "uvx",
"args": ["gsc-bing-mcp"],
"env": {
"BING_API_KEY": "paste-your-bing-key-here"
}
}
}
}
```
**Step 4: Done!** Restart Cline/Claude Desktop and start asking questions.
---
### Option B: Install with pip
```bash
pip install gsc-bing-mcp
```
Then in your MCP config:
```json
{
"mcpServers": {
"gsc-bing-mcp": {
"command": "gsc-bing-mcp",
"env": {
"BING_API_KEY": "paste-your-bing-key-here"
}
}
}
}
```
---
### Option C: Run from source (developers)
```bash
git clone https://github.com/codermillat/gsc-bing-mcp
cd gsc-bing-mcp
pip install -e .
```
MCP config:
```json
{
"mcpServers": {
"gsc-bing-mcp": {
"command": "python",
"args": ["-m", "gsc_bing_mcp.server"],
"env": {
"BING_API_KEY": "paste-your-bing-key-here"
}
}
}
}
```
---
## 💬 Example Questions You Can Ask
Once set up, ask your AI agent:
```
"What are my top 10 search queries in GSC for the last 30 days?"
"Show me which pages are getting the most impressions on Google"
"What's my average position for 'python tutorial' in Search Console?"
"List all my verified sites in Google Search Console"
"Are there any sitemap errors on my site?"
"Check if https://example.com/blog/post-1 is indexed by Google"
"What are my top Bing keywords this month?"
"Show me Bing crawl errors for my site"
"Compare my GSC vs Bing clicks for site:example.com"
```
---
## 🛠️ Tool Reference
### Google Search Console Tools
#### `gsc_list_sites`
List all verified properties in your GSC account.
- No parameters required
#### `gsc_search_analytics`
Full search analytics with custom dimensions and date ranges.
- `site_url` — e.g., `"https://example.com/"` or `"sc-domain:example.com"`
- `start_date` — `"YYYY-MM-DD"`
- `end_date` — `"YYYY-MM-DD"`
- `dimensions` — comma-separated: `"query"`, `"page"`, `"country"`, `"device"`, `"date"` (default: `"query"`)
- `row_limit` — max rows (default: 100)
#### `gsc_top_queries`
Top queries by clicks (quickest way to see what drives traffic).
- `site_url` — your site
- `limit` — number of queries (default: 25)
- `start_date`, `end_date` — optional (defaults to last 28 days)
#### `gsc_top_pages`
Top pages by clicks.
- Same parameters as `gsc_top_queries`
#### `gsc_list_sitemaps`
List submitted sitemaps with status and error counts.
- `site_url` — your site
#### `gsc_inspect_url`
Inspect a URL's indexing status, crawl date, mobile usability, and rich results.
- `site_url` — the GSC property
- `url` — the specific URL to inspect
### Bing Webmaster Tools
#### `bing_list_sites`
List all sites in your Bing Webmaster account.
- No parameters required
#### `bing_search_analytics`
Daily search performance data from Bing.
- `site_url`, `start_date`, `end_date`, `limit`
#### `bing_crawl_stats`
Crawl statistics including errors and blocked URLs.
- `site_url`
#### `bing_keyword_stats`
Top keywords from Bing sorted by clicks.
- `site_url`, `start_date`, `end_date`, `limit`
### Utility
#### `refresh_google_session`
Force-refresh the cached Chrome cookies. Use if you recently re-logged in to Google.
- No parameters required
---
## ⚠️ Troubleshooting
### "Google session cookies not found"
- Make sure you're logged in to Google in Chrome
- Open Chrome, go to google.com, sign in if needed
- Call `refresh_google_session` tool, then retry
### "Chrome's cookie database is locked"
- Chrome may be in the middle of writing. Wait a few seconds and retry.
- On macOS, this sometimes happens right after Chrome opens/closes.
### "Permission denied reading Chrome cookies" (macOS)
- Grant Full Disk Access to your terminal or IDE:
- System Settings → Privacy & Security → Full Disk Access → Add Terminal / VS Code
### "BING_API_KEY environment variable is not set"
- Make sure you added `"BING_API_KEY"` to the `env` section of your MCP config
- Restart Cline/Claude Desktop after updating the config
### GSC data shows "No data found"
- GSC has a ~3-day data lag — use end dates at least 3 days in the past
- Check that the `site_url` matches exactly as shown in GSC (including trailing slash)
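Given the ~3-day lag, a safe default date window can be computed like this (28 days matches `gsc_top_queries`' default window):

```python
from datetime import date, timedelta

# End at least 3 days back to clear GSC's reporting lag,
# then take a 28-day window ending there.
end_date = date.today() - timedelta(days=3)
start_date = end_date - timedelta(days=28)
print(start_date.isoformat(), end_date.isoformat())
```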
---
## 📦 Dependencies
| Package | Purpose | Size |
|---------|---------|------|
| `mcp` | MCP protocol server (FastMCP) | ~5MB |
| `rookiepy` | Chrome cookie reader (Rust-based) | ~2MB |
| `httpx` | Async HTTP client | ~3MB |
**Total: ~10MB installed, ~15MB RAM at runtime**
No Playwright. No Chrome binary. No browser automation.
---
## 🔒 Privacy & Security
- **All data stays local** — this server runs on your machine and makes API calls directly from your computer
- **No data sent to any third party** — your cookies and search data never leave your machine
- **Cookies read-only** — this server only reads cookies, never modifies them
- **No logging of sensitive data** — cookie values are never logged
---
## 📡 How to Publish Updates (Developers)
```bash
# Edit version in pyproject.toml, then:
pip install build twine
python -m build
twine upload dist/*
```
Users will automatically get the update next time `uvx` runs.
---
## 📝 License
MIT — free for personal and commercial use.
| text/markdown | null | null | null | null | MIT | ai-agent, analytics, bing-webmaster, claude, cline, google-search-console, mcp, seo | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/... | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp>=1.0.0",
"rookiepy>=0.5.0"
] | [] | [] | [] | [
"Homepage, https://github.com/codermillat/gsc-bing-mcp",
"Repository, https://github.com/codermillat/gsc-bing-mcp",
"Issues, https://github.com/codermillat/gsc-bing-mcp/issues"
] | twine/6.2.0 CPython/3.12.9 | 2026-02-18T17:36:46.859088 | gsc_bing_mcp-0.2.1.tar.gz | 35,561 | 2c/3b/7c5dd9c4068e0f99f27d7df452b21eb052c09646879440b2f81b0da703a1/gsc_bing_mcp-0.2.1.tar.gz | source | sdist | null | false | 028e78d468dc75b050b8b848983cefac | cdb73dc0c940b6bb14e3ca68cf2b8860c2f929ea9c995a733f73cc3a23436708 | 2c3b7c5dd9c4068e0f99f27d7df452b21eb052c09646879440b2f81b0da703a1 | null | [] | 216 |
2.4 | wheezy.html | 3.2.2 | A lightweight html rendering library | # wheezy.html
[](https://github.com/akornatskyy/wheezy.html/actions/workflows/tests.yml)
[](https://coveralls.io/github/akornatskyy/wheezy.html?branch=master)
[](https://wheezyhtml.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/wheezy.html)
[wheezy.html](https://pypi.org/project/wheezy.html) is a
[python](http://www.python.org) package written in pure Python. It
is a lightweight HTML widget library that integrates with the following
template systems:
- [Jinja2 Templates](http://jinja.pocoo.org)
- [Mako Templates](http://www.makotemplates.org)
- [Tenjin Templates](http://www.kuwata-lab.com/tenjin/)
- [Wheezy Templates](http://pypi.python.org/pypi/wheezy.template/)
It is optimized for performance, well tested and documented.
Resources:
- [source code](https://github.com/akornatskyy/wheezy.html)
and [issues](https://github.com/akornatskyy/wheezy.html/issues)
tracker are available on
[github](https://github.com/akornatskyy/wheezy.html)
- [documentation](https://wheezyhtml.readthedocs.io/en/latest/)
## Install
[wheezy.html](https://pypi.org/project/wheezy.html) requires
[python](http://www.python.org) version 3.10+. It is independent of operating
system. You can install it from [pypi](https://pypi.org/project/wheezy.html)
site:
```sh
pip install -U wheezy.html
```
If you would like to take advantage of template preprocessing for the Mako,
Jinja2, Tenjin or Wheezy.Template engines, specify the extra requirements:
```sh
pip install wheezy.html[wheezy.template]
```
If you run into any issues or have comments, open an issue on
[github](https://github.com/akornatskyy/wheezy.html/issues).
| text/markdown | null | Andriy Kornatskyy <andriy.kornatskyy@live.com> | null | null | null | html, widget, markup, mako, jinja2, tenjin, wheezy.template, preprocessor | [
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"Cython>=3.0; extra == \"cython\"",
"setuptools>=61.0; extra == \"cython\"",
"mako>=1.3.10; extra == \"mako\"",
"tenjin>=1.1.1; extra == \"tenjin\"",
"jinja2>=3.1.6; extra == \"jinja2\"",
"wheezy.template>=3.2.4; extra == \"wheezy-template\""
] | [] | [] | [] | [
"Homepage, https://github.com/akornatskyy/wheezy.html",
"Source, https://github.com/akornatskyy/wheezy.html",
"Issues, https://github.com/akornatskyy/wheezy.html/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T17:36:00.302542 | wheezy_html-3.2.2.tar.gz | 14,779 | 81/59/491cd8c8b1ba9d99c1bf1a9af53fbc2362b01fc84e4cfd02c10ea2efe3c1/wheezy_html-3.2.2.tar.gz | source | sdist | null | false | 75c4ede28d104ba65babec0e6142ac62 | fe236dd1b505a78eef5afee198ec06f900c5b1e01d7536448694bfcdbcfcfe92 | 8159491cd8c8b1ba9d99c1bf1a9af53fbc2362b01fc84e4cfd02c10ea2efe3c1 | MIT | [
"LICENSE"
] | 0 |
2.1 | pykx | 3.1.8 | An interface between Python and q | # PyKX
## Introduction
PyKX is a Python-first interface to the world's fastest time-series database, kdb+, and its underlying vector programming language, q. PyKX takes a Python-first approach to integrating q/kdb+ with Python, following 10+ years of integrations between these two languages. Fundamentally, it provides users with the ability to efficiently query and analyze huge amounts of in-memory and on-disk time-series data.
This interface exposes q as a domain-specific language (DSL) embedded within Python, taking the approach that q should principally be used for data processing and management of databases. This approach does not diminish the ability of users familiar with q, or those wishing to learn more about it, to make the most of advanced analytics and database management functionality; rather, it empowers those who lack this expertise to make use of the power of kdb+/q and get up and running fast.
PyKX supports three principal use cases:
- It allows users to store, query, manipulate and use q objects within a Python process.
- It allows users to query external q processes via an IPC interface.
- It allows users to embed Python functionality within a native q session using its 'under q' functionality.
Users wishing to install the library can do so following the instructions [here](https://code.kx.com/pykx/getting-started/installing.html).
Once you have the library installed you can get up and running with PyKX following the quickstart guide [here](https://code.kx.com/pykx/getting-started/quickstart.html).
### What is q/kdb+?
Mentioned throughout the documentation, q and kdb+ are, respectively, a highly efficient vector programming language and a highly optimised time-series database used to analyse streaming, real-time and historical data. Used throughout the financial sector for 25+ years, this technology has been a cornerstone of modern financial markets, providing a storage mechanism for historical market data and tooling to make the analysis of this vast data performant.
kdb+ is a high-performance column-oriented database designed to process and store large amounts of data. Commonly accessed data is held in RAM, which makes it faster to access than disk-stored data. Operating with temporal data types as a first-class entity, the use of q and its query language qSQL against this database creates a highly performant time-series analysis tool.
q is the vector programming language used for all interactions with kdb+ databases, and is known both for its speed and expressiveness.
For more information on using q/kdb+ and getting started, see the following links:
- [An introduction to q/kdb+](https://code.kx.com/q/learn/tour/)
- [Tutorial videos introducing kdb+/q](https://code.kx.com/q/learn/q-for-all/)
## Installation
### Installing PyKX using `pip`
Ensure you have a recent version of pip:
```bash
pip install --upgrade pip
```
Then install the latest version of PyKX with the following command:
```bash
pip install pykx
```
To install a specific version of PyKX run the following command, replacing `<INSERT_VERSION>` with a specific released semver version of the interface:
```bash
pip install pykx==<INSERT_VERSION>
```
**Warning:** Python packages should typically be installed in a virtual environment. [This can be done with the venv package from the standard library](https://docs.python.org/3/library/venv.html).
### PyKX License access and enablement
Installation of PyKX via pip provides users with access to the library with limited functional scope; full details of these limitations can be found [here](docs/user-guide/advanced/modes.md). To access the full functionality of PyKX you must first download and install a kdb+ license, either through use of a personal evaluation license or receipt of a commercial license.
#### Personal Evaluation License
The following steps outline the process by which a user can obtain and install a kdb Insights license which provides access to PyKX:
1. Visit https://kx.com/kdb-insights-sdk-personal-edition-download/ and fill in the attached form following the instructions provided.
2. On receipt of an email from KX providing access to your license download this file and save to a secure location on your computer.
3. Set an environment variable on your computer pointing to the folder containing the license file (instructions for setting environment variables on PyKX supported operating systems can be found [here](https://chlee.co/how-to-setup-environment-variables-for-windows-mac-and-linux/)).
* Variable Name: `QLIC`
* Variable Value: `/user/path/to/folder`
#### Commercial Evaluation License
The following steps outline the process by which a user can obtain and install a kdb Insights license which provides access to PyKX:
1. Contact your KX sales representative or sales@kx.com requesting a trial license for PyKX evaluation. Alternatively, apply through https://kx.com/book-demo.
2. On receipt of an email from KX providing access to your license download this file and save to a secure location on your computer.
3. Set an environment variable on your computer pointing to the folder containing the license file (instructions for setting environment variables on PyKX supported operating systems can be found [here](https://chlee.co/how-to-setup-environment-variables-for-windows-mac-and-linux/)).
* Variable Name: `QLIC`
* Variable Value: `/user/path/to/folder`
__Note:__ PyKX will not operate with a vanilla or legacy kdb+ license, as such licenses lack the specific feature flags PyKX requires. In the absence of a license with the appropriate feature flags, PyKX will fail to initialise with full feature functionality.
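As a quick sanity check before importing PyKX, you can verify that `QLIC` points at a folder containing a license file. This is a hedged sketch using only the standard library; the `.lic` file-name convention and the `qlic_status` helper name are illustrative assumptions, not part of the PyKX API:

```python
import os

def qlic_status(env=None):
    """Return a human-readable licensing status based on the QLIC variable."""
    env = os.environ if env is None else env
    qlic = env.get("QLIC")
    if not qlic:
        return "unlicensed: QLIC not set"
    if not os.path.isdir(qlic):
        return "unlicensed: QLIC does not point at a folder"
    # License files conventionally end in .lic (e.g. kc.lic) -- an assumption here.
    if any(name.endswith(".lic") for name in os.listdir(qlic)):
        return "licensed: license file found"
    return "unlicensed: no .lic file in QLIC folder"

print(qlic_status())
```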
### Supported Environments
KX only officially supports versions of PyKX built by KX, i.e. versions of PyKX installed from wheel files. Support for user-built installations of PyKX (e.g. built from the source distribution) is only provided on a best-effort basis. Currently, PyKX provides wheels for the following environments:
- Linux (`manylinux_2_17_x86_64`) with CPython 3.8-3.11
- macOS (`macosx_10_10_x86_64`) with CPython 3.8-3.11
- Windows (`win_amd64`) with CPython 3.8-3.11
### Dependencies
#### Python Dependencies
PyKX depends on the following third-party Python packages:
- `pandas>=1.2, <2.0; python_version=='3.8'`
- `pandas>=1.2, <3.0; python_version>'3.8'`
- `numpy>=1.22; python_version<'3.11'`
- `numpy>=1.23; python_version=='3.11'`
- `numpy>=1.26; python_version>'3.11'`
- `pytz>=2022.1`
- `toml~=0.10.2`
- `dill>=0.2.0`
- `requests>=2.25.0`
They are installed automatically by `pip` when PyKX is installed.
PyKX also has an optional Python dependency of `pyarrow>=3.0.0`, which can be included by installing the `pyarrow` extra, e.g. `pip install pykx[pyarrow]`.
When using PyKX with KX Dashboards, users must install `ast2json~=0.3`; this can be installed using the `dashboards` extra, e.g. `pip install pykx[dashboards]`.
When using PyKX Streaming, users may need to stop processes initialized by a process that is no longer available; to facilitate this, PyKX can make use of `psutil`, which can be installed using the `streaming` extra, e.g. `pip install pykx[streaming]`.
When using Streamlit, users must install `streamlit~=1.28`; this can be installed using the `streamlit` extra, e.g. `pip install pykx[streamlit]`.
When converting data to/from PyTorch, users must install `torch>2.1`; this can be installed using the `torch` extra, e.g. `pip install pykx[torch]`.
**Warning:** Trying to use the `pa` conversion methods of `pykx.K` objects or the `pykx.toq.from_arrow` method when PyArrow is not installed (or could not be imported without error) will raise a `pykx.PyArrowUnavailable` exception.
#### Optional Non-Python Dependencies
- `libssl` for TLS on [IPC connections](docs/api/ipc.md).
- `libpthread` on Linux/MacOS when using the `PYKX_THREADING` environment variable.
## Building from source
### Installing Dependencies
The full list of supported environments is detailed [here](https://code.kx.com/pykx/getting-started/installing.html#supported-environments). Installation of dependencies will vary on different platforms.
`apt` example:
```bash
apt install python3 python3-venv build-essential python3-dev
```
`yum` example:
```bash
yum install python3 gcc gcc-c++ python3-devel.x86_64
```
Windows:
* [Python](https://www.python.org/downloads/windows/)
* [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/?q=build+tools).
* [dlfcn-win32](https://github.com/dlfcn-win32/dlfcn-win32). Can be installed using [Vcpkg](https://github.com/microsoft/vcpkg).
To install the above dependencies, you can run the `w64_install.ps1` script as an administrator:
```PowerShell
cd pykx
.\w64_install.ps1
```
### Building
Using a Python virtual environment is recommended:
```bash
python3 -m venv pykx-dev
source pykx-dev/bin/activate
```
Build and install PyKX:
```bash
cd pykx
pip3 install -U '.[all]'
```
To run PyKX in licensed mode, follow the steps to obtain a [Personal Evaluation License](https://code.kx.com/pykx/getting-started/installing.html#personal-evaluation-license).
Now you can run/test PyKX:
```bash
(pykx-dev) /data/pykx$ python
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pykx
>>> pykx.q('1+1')
pykx.LongAtom(pykx.q('2'))
```
### Testing
Contributions to the project must pass a linting check:
```bash
pflake8
```
Contributions to the project must include tests. To run tests:
```bash
export PATH="$PATH:/location/of/your/q/l64" # q must be on PATH for tests
export QHOME=/location/of/your/q # q needs QHOME available
python -m pytest -vvv -n 0 --no-cov --junitxml=report.xml
```
## PyKX Licenses
This work is dual licensed under [Apache 2.0](https://code.kx.com/pykx/license.html#apache-2-license) and the [Software License for q.so](https://code.kx.com/pykx/license.html#qso-license) and users are required to abide by the terms of both licenses in their entirety.
## Community Help
If you have any issues or questions you can post them to [community.kx.com](https://community.kx.com/). The tags [pykx](https://stackoverflow.com/questions/tagged/pykx) and [kdb](https://stackoverflow.com/questions/tagged/kdb) are also available on Stack Overflow.
## Customer Support
* Inquiries or feedback: [`pykx@kx.com`](mailto:pykx@kx.com)
* Support for Licensed Subscribers: [support.kx.com](https://client.support.kx.com/)
| text/markdown | KX | KX <pykx@kx.com> | null | null | All files contained within this repository are not covered by a single license. The following outlines the differences.
1. All files and folders contained within the source code directory 'src/pykx/q.so/' are licensed under the terms of the 'Software License for q.so' which are included below
2. All other files within this repository are licensed under the "Apache 2.0" license included below
***********************************************************************************
Apache 2.0
***********************************************************************************
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
***************************************************************************
License Terms for q.so
***************************************************************************
Software License for q.so
KX Confidential
Version Number: 1.0
Date Last Revised: April 2023
Software License Agreement for use of q.so (“Agreement”)
CAREFULLY READ THE FOLLOWING TERMS AND CONDITIONS. BY DOWNLOADING OR USING THE SOFTWARE, YOU ACKNOWLEDGE AND AGREE TO BE BOUND BY THESE TERMS AND CONDITIONS, WHICH MAY BE UPDATED FROM TIME TO TIME. “END USER” OR “YOU” MEANS YOU, THE USER OF THE SOFTWARE.
YOU WARRANT THAT THE IDENTIFICATION DETAILS AND INFORMATION THAT YOU PROVIDE TO US, INCLUDING BUT NOT LIMITED TO, YOUR NAME, EMAIL ADDRESS, LOCATION, TELEPHONE NUMBER AND INTENDED USE ARE TRUE AND CORRECT. YOU ACKNOWLEDGE AND AGREE THAT THIS AGREEMENT IS ENFORCEABLE AND LEGALLY BINDING.
NO ACCESS OR USE OF THE SOFTWARE IS PERMITTED FROM THOSE COUNTRIES WHERE SUCH USE IS PROHIBITED BY TRADE CONTROL LAWS.
This Agreement is made between KX Systems, Inc. (“KX” or “we”) and the End User for access and use of KX’s q.so software, any updates, new versions and/or any documentation provided to you by KX (jointly, the “Software”). You agree to use the Software subject to the terms and conditions set forth below which shall be subject to change from time to time.
1. LICENSE GRANTS
1.1 Grant of License. KX hereby grants End User a non-transferable, non-exclusive license, without right of sublicense, to install and use the Software solely for the purpose of compiling and running the PyKX software made available by KX under separate licensing terms. End User will not attempt to circumvent any restrictions imposed on the Software or use the Software for any purpose other than stated above. If End User is using the PyKX software under the terms of a commercial licence from KX, the End User must obtain a separate licence from KX for the use of the Software.
1.2 Software Use Restrictions. End User may not: (a) modify any part of the Software or create derivative works thereof, (b) sell, lease, license or distribute the Software to any third party, (c) attempt to decompile, disassemble or reverse engineer the Software, (d) copy the Software, except for purposes of installing and executing it within the limitations set out at clause 1.1, (e) use or attempt to use the Software in any way that is unlawful or fraudulent or has any unlawful or fraudulent purpose or effect, (f) use or attempt to use the Software in any way that would breach the license granted herein.
In addition to the foregoing, End User shall not in the course of using the Software access, store, distribute or transmit any material that:
(a) is unlawful, harmful, threatening, defamatory, obscene, infringing, harassing or racially or ethnically offensive;
(b) facilitates illegal activity;
(c) depicts sexually explicit images;
(d) promotes unlawful violence;
(e) is discriminatory based on race, gender, colour, religious belief, sexual orientation, disability; or
(f) is otherwise illegal or causes damage or injury to any person or property, including any material which may: prevent, impair or otherwise adversely affect the operation of any computer software, hardware or network, any telecommunications service, equipment or network or any other service or device; prevent, impair or otherwise adversely affect access to or the operation of any programme or data, including the reliability of any programme or data (whether by re-arranging, altering or erasing the programme or data in whole or part or otherwise); or adversely affect the user experience, including worms, trojan horses, viruses and other similar things or devices. End User agrees that it is reasonable that KX shall have no liability of any kind in any circumstances to it or a third party for any breach of the foregoing.
1.3 Software Performance. End User shall not distribute or otherwise make available to any third party any report regarding the performance of the Software, Software benchmarks or any information from such a report.
1.4 Intellectual Property Ownership Rights. End User acknowledges and agrees that KX owns all rights, title and interest in and to the Software and in and to all of KX’s patents, trademarks, trade names, inventions, copyrights, know-how and trade secrets relating to its design, manufacture and operation, including all inventions, customizations, enhancements, improvements, updates, derivative works and other modifications and all related rights shall automatically vest in KX immediately upon creation. You will not register any trademark, patent or copyright which uses or references the Software. The use by End User of such proprietary rights is authorized only for the purposes set forth herein, and upon termination of this Agreement for any reason, such authorization will cease. End User acknowledges that the Software is proprietary and contains confidential and valuable trade secrets of KX.
2. SUPPORT. KX may at its discretion provide support to End User in relation to the Software.
3. FEES. The Software is licensed to End User without charge.
4. NO WARRANTY. THE SOFTWARE IS PROVIDED “AS IS.” KX EXPRESSLY DISCLAIMS AND NEGATES ALL WARRANTIES, WHETHER EXPRESSED, IMPLIED, STATUTORY OR OTHERWISE, AND SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT OF INTELLECTUAL PROPERTY OR OTHER VIOLATION OF RIGHTS. KX DOES NOT WARRANT THAT THE SOFTWARE WILL MEET END USER REQUIREMENTS OR THAT THE OPERATION OF THE SOFTWARE WILL BE UNINTERRUPTED OR ERROR FREE.
5. LIMITATION OF LIABILITY. WE DO NOT EXCLUDE OR LIMIT IN ANY WAY OUR LIABILITY TO END USER WHERE IT WOULD BE UNLAWFUL TO DO SO. SUBJECT TO THE FOREGOING SENTENCE, KX SHALL HAVE NO LIABILITY UNDER OR IN CONNECTION WITH THIS AGREEMENT UNDER ANY LEGAL OR EQUITABLE THEORY, INCLUDING BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY, NOR FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, INDIRECT OR OTHER SIMILAR DAMAGES, INCLUDING BUT NOT LIMITED TO DAMAGE TO REPUTATION, LOSS OF EARNINGS AND INJURY TO FEELINGS IN CONNECTION WITH OR ARISING OUT OF THIS AGREEMENT AND/OR THE USE OF OR INABILITY TO USE THE SOFTWARE, EVEN IF KX HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. NEITHER IS KX LIABLE FOR ANY BUSINESS LOSSES. KX WILL HAVE NO LIABILITY TO END USER FOR ANY LOSS OF PROFIT, LOSS OF BUSINESS, BUSINESS INTERRUPTION, OR LOSS OF BUSINESS OPPORTUNITY.
6. TERM AND TERMINATION OF AGREEMENT. This Agreement shall terminate immediately upon KX’s written notice to End User and KX may at its discretion suspend or terminate End User’s use of the Software at any time. Upon termination of this Agreement or at any time upon KX’s written request, End User shall permanently delete or destroy all copies of the Software in its possession.
7. GOVERNING LAW AND JURISDICTION. This Agreement and all related documents and all matters arising out of or relating to this Agreement whether in contract, tort, or statute shall be governed by and construed in accordance with the laws of the State of New York, United States of America, except as to copyright matters covered by U.S. Federal Law. Each party irrevocably and unconditionally agrees to the exclusive jurisdiction of the State of New York, and it will not commence any action, litigation, or proceeding of any kind whatsoever against any other party in any way arising from or relating to this Agreement and all contemplated transactions, including, but not limited to, contract, equity, tort, fraud, and statutory claims, in any forum other than the State of New York (except as permitted by KX as detailed below). End User hereby waives any objections to venue in those courts. Each party agrees that a final judgment in any such action, litigation, or proceeding is conclusive and may be enforced in other jurisdictions by suit on the judgment or in any other manner provided by law. Should any provision of this Agreement be declared unenforceable in any jurisdiction, then such provision shall be deemed to be severed from this Agreement and shall not affect the remainder hereof. Furthermore, with respect to a violation by End User of Section 1 (License Grant), KX will have the right at its discretion to seek remedies in courts of competent jurisdiction within any applicable territory. The United Nations Convention on Contracts for the International Sale of Goods and the Uniform Computer information Transactions Act, as currently enacted by any jurisdiction or as may be codified or amended from time to time by any jurisdiction, do not apply to this Agreement.
8. TRADE CONTROL. You acknowledge that Software (including its related technical data and services) may be deemed dual use and is subject to, without limitation, the export control laws and regulations of the United Kingdom, European Union, and United States of America (“Trade Control Laws”). You agree to fully comply with those Trade Control Laws in connection with Software including where applicable assisting in obtaining any necessary governmental approvals, licenses and undertakings. You will not, and will not allow any third party, to use, export, re-export or transfer, directly or indirectly, of any part of the Software in violation of any Trade Control Laws or to a destination subject to US, UN, EU, UK or Organisation for Security and Cooperation in Europe (OSCE) embargo, or to any individual or entity listed on the denied parties’ lists. A statement on the Export Controls applicable to the Software, is available at the following website: Export Statement – KX. Any dispute in relation to this clause 8 shall be governed in accordance with clause 7 unless Trade Control Laws determine otherwise. You acknowledge that we may not be permitted (and, in such an event, shall be excused from any requirement) to deliver or grant access to the Software, or perform support or services, due to an embargo, trade sanction or other comparable restrictive measure.
9. GENERAL. This is the only Agreement between End User and KX relating to the Software. The provisions of section 1.4 (“Intellectual Property Ownership Rights”), section 4 (“No Warranty”), section 5 (“Limitation of Liability”), section 6 (“Term and Termination”), section 9 (“General”) shall survive the termination of this Agreement for any reason. All other rights and obligations of the parties shall cease upon termination of this Agreement. This Agreement constitutes the sole and entire agreement of the parties with respect to the subject matter of this Agreement and supersedes all prior and contemporaneous understandings, agreements, and representations and warranties, both written and oral, with respect to such subject matter. You agree that you shall have no remedies in respect of any statement, representation, assurance or warranty (whether made innocently or negligently) that is not set out in this Agreement. You agree that you shall have no claim for innocent or negligent misrepresentation or negligent misstatement based on any statement in this Agreement. Except for the limited rights and licenses expressly granted under this Agreement, nothing in this Agreement grants, by implication, waiver, estoppel, or otherwise, to you or any third party. If we fail to insist that you perform any of your obligations under this Agreement, or if we do not enforce our rights against you, or if we delay in doing so, that will not mean that we have waived our rights against you and will not mean that you do not have to comply with those obligations. If we do waive a default by you, we will only do so in writing and that will not mean that we will automatically waive any later default by you.
| pykx, q, kx, database, ffi | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: Other Environment",
"Framework :: Flake8",
"Framework :: Pytest",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended... | [] | https://github.com/KxSystems/pykx | null | >=3.7 | [] | [] | [] | [
"pytz>=2022.1",
"toml~=0.10.2",
"dill>=0.2.0",
"requests>=2.25.0",
"numpy>=1.22; python_version == \"3.10\"",
"numpy>=1.23; python_version == \"3.11\"",
"numpy>=1.26; python_version == \"3.12\"",
"numpy>=1.26; python_version == \"3.13\"",
"numpy>=2.3.0; python_version == \"3.14\"",
"pandas<3.0,>=2... | [] | [] | [] | [
"homepage, https://code.kx.com/pykx",
"documentation, https://code.kx.com/pykx",
"repository, https://github.com/KxSystems/pykx",
"changelog, https://code.kx.com/pykx/release-notes/changelog.html"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T17:34:51.807502 | pykx-3.1.8-cp39-cp39-win_amd64.whl | 14,290,057 | cf/c1/6a9adf61edd7e5a722a08f1ff5559d76fe3f247ef15c929ebcf205277cd6/pykx-3.1.8-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | 648048bf808dc970be5db889c1378af3 | b873a45d86e049c587ded3855a794b94136704825fad4729a2db603425de37ee | cfc16a9adf61edd7e5a722a08f1ff5559d76fe3f247ef15c929ebcf205277cd6 | null | [] | 3,796 |
2.4 | exqalibur | 1.2.0 | Cutting-edge optimization for Perceval | Collection of C++ optimized computation primitive available as python module.
| null | Quandela | Perceval.OSS@Quandela.com | null | null | CC BY-NC-ND 4.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://perceval.quandela.net"
] | twine/6.0.1 CPython/3.10.12 | 2026-02-18T17:34:50.065651 | exqalibur-1.2.0-cp314-cp314-win_amd64.whl | 1,940,283 | 1f/60/8aada5ef20feb33c6596850ce5512f11ade6a829afb1f43a6c427cf64ad2/exqalibur-1.2.0-cp314-cp314-win_amd64.whl | cp314 | bdist_wheel | null | false | 24ff73c63c82286bfdd17df248495f98 | b4850d5d661c3a04846f0da3a905fafe80ae0745595a1f892c83172818e2fca9 | 1f608aada5ef20feb33c6596850ce5512f11ade6a829afb1f43a6c427cf64ad2 | null | [] | 1,969 |
2.4 | ts-sdk-connectors-python | 2.1.0 | ts-sdk-connectors-python | # TetraScience Python Connector SDK <!-- omit in toc -->
## Version <!-- omit in toc -->
v2.1.0
## Table of Contents <!-- omit in toc -->
- [Summary](#summary)
- [Usage](#usage)
- [`Connector` Class](#connector-class)
- [Creating and running a connector](#creating-and-running-a-connector)
- [Starting the connector and running](#starting-the-connector-and-running)
- [Configuring `TdpApi`](#configuring-tdpapi)
- [Proxy support](#proxy-support)
- [Initialization of `TdpApi`](#initialization-of-tdpapi)
- [Connector example](#connector-example)
- [SQL Connector](#sql-connector)
- [Commands](#commands)
- [Lifecycle methods and hooks](#lifecycle-methods-and-hooks)
- [Additional `Connector` commands](#additional-connector-commands)
- [Custom commands](#custom-commands)
- [Polling](#polling)
- [Logging](#logging)
- [Logger usage](#logger-usage)
- [Setting log levels](#setting-log-levels)
- [CloudWatch logging and standalone connector support](#cloudwatch-logging-and-standalone-connector-support)
- [OpenAPI code generation](#openapi-code-generation)
- [Using the generated API client](#using-the-generated-api-client)
- [Example usage](#example-usage)
- [Retrieving connector data with filtering](#retrieving-connector-data-with-filtering)
- [Using `TdpApi` (async)](#using-tdpapi-async)
- [Using `TdpApiSync` (synchronous)](#using-tdpapisync-synchronous)
- [Using `Connector` class methods](#using-connector-class-methods)
- [Local testing with standalone connectors](#local-testing-with-standalone-connectors)
- [Prerequisites](#prerequisites)
- [Setup process](#setup-process)
- [Step 1: build your local connector image](#step-1-build-your-local-connector-image)
- [Step 2: use the standalone installer](#step-2-use-the-standalone-installer)
- [Step 3: environment configuration](#step-3-environment-configuration)
- [Step 4: running your local connector](#step-4-running-your-local-connector)
- [Alternative: running containerless locally](#alternative-running-containerless-locally)
- [Important notes](#important-notes)
- [Running tests](#running-tests)
- [Changelog](#changelog)
- [v2.1.0](#v210)
- [v2.0.0](#v200)
- [v1.0.1](#v101)
- [v1.0.0](#v100)
- [v0.9.0](#v090)
- [v0.8.0](#v080)
- [v0.7.0](#v070)
- [v0.6.0](#v060)
- [v0.5.0](#v050)
- [v0.4.0](#v040)
- [v0.3.0](#v030)
- [v0.2.0](#v020)
- [v0.1.0](#v010)
## Summary
The TetraScience Python Connectors SDK provides utilities and APIs for building TetraScience [pluggable connectors](https://developers.tetrascience.com/docs/tetra-connectors) in Python. Connectors are containerized applications used for transferring data between the Tetra Data Platform (TDP) and other systems. Some examples of existing connectors:
- The [S3 connector](https://developers.tetrascience.com/docs/tetra-amazon-s3-connector) receives file events from an S3 bucket via an SQS queue and pulls the corresponding objects into TDP
- The [Kepware KEPServerEX connector](https://developers.tetrascience.com/docs/tetra-kepserverex-connector) pulls tags from KEPServerEX over MQTT and writes corresponding JSON files to TDP
- The [LabX connector](https://developers.tetrascience.com/docs/tetra-labx-connector) can connect to multiple LabX instances and retrieve completed tasks. The LabX connector was written using this Python SDK
## Usage
### `Connector` Class
The `Connector` class is the core component of the SDK.
It provides methods and hooks to manage the lifecycle of a connector, handle commands, perform periodic tasks, and
interact with TDP.
#### Creating and running a connector
To create a `Connector` instance, you need to provide a `TdpApi` instance and optional `ConnectorOptions`. The `TdpApi` instance is the class that interacts with TDP
```python
from ts_sdk_connectors_python.connector import Connector, ConnectorOptions
from ts_sdk_connectors_python.tdp_api import TdpApi
tdp_api = TdpApi()
connector = Connector(tdp_api=tdp_api, options=ConnectorOptions())
```
#### Starting the connector and running
```python
import asyncio

async def main():
    tdp_api = TdpApi()
    connector = Connector(tdp_api=tdp_api, options=ConnectorOptions())
    await connector.start()
    while True:
        await asyncio.sleep(1)

asyncio.run(main())
```
##### Configuring `TdpApi`
Required configuration values for `TdpApi` can be provided as constructor arguments or via environment variables; any argument not provided explicitly falls back to the corresponding environment variable.
```python
# manually provide configuration values
tdp_api = TdpApi(
    aws_region="us-east-1",
    org_slug="tetrascience-yourorg",
    hub_id="your-hub-id",
    connector_id="your-connector-id",
    datalake_bucket="your-datalake-bucket",
    stream_bucket="your-stream-bucket",
    tdp_certificate_key="your-tdp-certificate-key",
    jwt_token_parameter="your-jwt-token-parameter",
    tdp_endpoint="https://api.tetrascience.com",
    outbound_command_queue="your-outbound-command-queue",
    kms_key_id="your-kms-key-id",
    artifact_type="connector",
    connector_token="your-connector-token",
    local_certificate_pem_location="path/to/your/certificate.pem",
)
```
```python
# Automatically pull all args from environment variables
tdp_api = TdpApi()
```
```python
# Some arguments provided; the remaining args are pulled from environment variables
tdp_api = TdpApi(datalake_bucket="your-datalake-bucket")
```
The following environment variables are used by the `TdpApiConfig` class to configure the TetraScience Data Platform API client. Note that not all of them are relevant in every deployment mode:
| Variable Name | Description |
| --------------------------------- | ------------------------------------------------------------- |
| `AWS_REGION` | The AWS region to use. |
| `ORG_SLUG` | The organization slug for the TetraScience Data Platform. |
| `HUB_ID` | The hub ID for the connector. |
| `CONNECTOR_ID` | The unique identifier for the connector. |
| `DATALAKE_BUCKET` | The name of the datalake bucket. |
| `STREAM_BUCKET` | The name of the stream bucket. |
| `TDP_CERTIFICATE_KEY` | The key for the TDP certificate. |
| `JWT_TOKEN_PARAMETER` | Name of the SSM parameter that contains the JWT token. Used by non-standalone connectors |
| `TDP_ENDPOINT` | The base URL for the TetraScience Data Platform API. |
| `OUTBOUND_COMMAND_QUEUE` | The queue name for outbound commands. |
| `KMS_KEY_ID` | The KMS key ID. |
| `ARTIFACT_TYPE` | The type of artifact (e.g., `connector`, `data-app`). |
| `CONNECTOR_TOKEN` | The JWT authentication token for the connector. Used for standalone connectors to request initial AWS credentials |
| `LOCAL_CERTIFICATE_PEM_LOCATION` | The local certificate PEM file location. |
### Proxy support
In addition to the above environment variables, the connector uses proxy settings
determined from the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`.
For connectors on a Hub, the connector sets these environment variables based on
the Hub's proxy settings. For standalone connectors, the standalone installer will
set lowercase versions of these variables. The connector checks the environment,
and in the case where lowercase versions exist but uppercase ones don't, it copies
the lowercase values over to uppercase.
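The lowercase-to-uppercase copy described above amounts to the following sketch (the helper name `normalize_proxy_env` is illustrative, not the SDK's actual function):

```python
import os

def normalize_proxy_env():
    # For each proxy setting, if only the lowercase variable exists,
    # copy its value to the uppercase variable the connector reads.
    for name in ("http_proxy", "https_proxy", "no_proxy"):
        upper = name.upper()
        if name in os.environ and upper not in os.environ:
            os.environ[upper] = os.environ[name]
```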
### Initialization of `TdpApi`
`TdpApi` must initialize the AWS and HTTP clients before it can communicate with the connector's AWS services and connector endpoints in TDP.
```python
tdp_api = TdpApi()
await tdp_api.init_client(proxy_url="127.0.0.1:3128")  # your proxy URL here, if needed
files = tdp_api.get_connector_files(...)
...
```
#### Connector example
Here is an example of a custom connector that prints "Hello World" on a scheduled interval:
```python
import asyncio
from typing import Optional

from ts_sdk_connectors_python.connector import Connector, ConnectorOptions
from ts_sdk_connectors_python.custom_commands import register_command
from ts_sdk_connectors_python.tdp_api import TdpApi
from ts_sdk_connectors_python.utils import Poll

class BasicScheduledConnector(Connector):
    """Prints hello world on a scheduled interval"""

    def __init__(
        self,
        tdp_api: TdpApi,
        schedule_interval: int,
        options: Optional[ConnectorOptions] = None,
    ):
        super().__init__(tdp_api=tdp_api, options=options)
        self.poll: Optional[Poll] = None
        self.schedule_interval = schedule_interval

    async def on_start(self):
        await super().on_start()
        self._start_polling()

    async def on_stop(self):
        await super().on_stop()
        self._stop_polling()

    @register_command("TetraScience.Connector.PollingExample.SetScheduleInterval")
    async def set_schedule_interval(self, schedule_interval: str):
        self.schedule_interval = float(schedule_interval)
        self._stop_polling()
        self._start_polling()

    def _start_polling(self):
        if not self.poll:
            self.poll = Poll(self.execute_on_schedule, self.schedule_interval)
            self.poll.start()

    def _stop_polling(self):
        if self.poll:
            self.poll.stop()
            self.poll = None

    async def execute_on_schedule(self):
        print("HELLO WORLD")

# Usage
async def main():
    tdp_api = TdpApi()
    await tdp_api.init_client()
    connector = BasicScheduledConnector(tdp_api=tdp_api, schedule_interval=5)
    await connector.start()
    await asyncio.sleep(10)
    await connector.shutdown()

asyncio.run(main())
```
This example demonstrates how to create a custom connector that prints "Hello World" every 5 seconds and allows the schedule interval to be updated via a custom command. For more information on registering commands, see below.
### SQL Connector
The SDK provides a `SqlConnector` abstract base class that simplifies building SQL-based connectors using an inheritance pattern. This allows you to easily connect to any SQL database (PostgreSQL, MySQL, SQL Server, Oracle, etc.) and incrementally retrieve data using watermarks.
For detailed documentation on the SQL Connector, including:
- Quick Start guide
- Configuration parameters and timeout configuration
- Watermark strategies with examples
- Best practices (read-only grants, query performance, indexes)
- Error codes for `manifest.json`
- Troubleshooting
See the [SQL Connector Documentation](docs/sql-connector.md).
For technical design details (synchronous SQLAlchemy rationale, connection pooling, runtime configuration updates), see [SQL Connector Design](docs/sql-connector-design.md).
### Commands
TDP communicates with connectors via the [command service](https://developers.tetrascience.com/docs/command-service). The data acquisition service in TDP uses a set of commands we refer to as "lifecycle commands". The `Connector` class implements a command listener and has several methods that are invoked when lifecycle commands come in.
#### Lifecycle methods and hooks
*Starting and Initializing methods*
- **start**: Starts the connector and its main activities.
- Triggers:
- None. Almost all connectors will call this from `main.py`, the default entrypoint of the container
- Default implementation:
- calls `on_initializing` hook
- loads connector details (*does not call `on_connector_updated`*)
- starts metrics collection, heartbeat, and command listener tasks
- if the connector's operating status is `RUNNING`, calls `on_start` hook
- calls `on_initialized` hook
- **on_initializing**: A developer-defined hook that is called at the beginning of the default implementation of `Connector.start`
- Triggers:
- None. In default implementation, called once by `Connector.start`
- Default implementation:
- None
- **on_initialized**: A developer-defined hook that is called at the end of the default implementation of `Connector.start`
- Triggers:
- None. In default implementation, called once by `Connector.start`
- Default implementation:
- None
- **on_start**: A developer-defined hook that runs when the connector's operating status is set to `RUNNING`
- Triggers (any of the following):
- A command with action `TetraScience.Connector.Start` is received
    - this corresponds to setting the connector operating status to `RUNNING`
- During `Connector.start` if the connector's operating status is `RUNNING`
- this typically happens when a disabled connector is "enabled as `RUNNING`"
- Default implementation:
- reloads connector config, which subsequently calls `on_connector_updated`
*Running methods*
- **on_connector_updated**: A developer-defined hook that gets called when the connector details are updated. Because this corresponds to config changes and is also triggered indirectly by `on_start`, it is the most common place to initialize resources the connector needs to work with third-party systems. Since it is also triggered by `on_stop`, it is important to check that the connector's operating status is `RUNNING` before starting any data ingestion
- Triggers (any of the following):
- A command with action `TetraScience.Connector.UpdateConfig` is received.
- sent by the data acquisition service after valid configuration is applied
- A command with action `TetraScience.Connector.Start` is received.
- invoked by base implementation of `Connector.on_start`
- A command with action `TetraScience.Connector.Stop` is received.
- invoked by base implementation of `Connector.on_stop`
- Default Implementation:
- None
- **validate_config**: A developer-defined method that determines whether a given connector config is valid:
- Triggers:
- A command with action `TetraScience.Connector.ValidateConfig` is received
- sent by the platform when a user attempts to save connector config
- Default implementation:
- always returns `{"valid": true}`
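As a standalone sketch, a `validate_config` override might perform a check like the following (shown here as a plain function for illustration; the `server_url` field and `errors` key are hypothetical, not part of the SDK's contract):

```python
def validate_config(config: dict) -> dict:
    # Reject configs missing the hypothetical required field "server_url".
    if not config.get("server_url"):
        return {"valid": False, "errors": ["server_url is required"]}
    return {"valid": True}
```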
*Idle and Disable/Shutdown methods*
- **shutdown**: A method called when the connector and its container will be stopped
- Triggers:
- A command with action `TetraScience.Connector.Shutdown` is received.
- sent by the platform when a user disables the connector
- Default implementation:
- calls `on_shutdown`
- stops metrics collection, heartbeat, and command listener tasks
- **on_shutdown**: A developer-defined hook that runs when the connector is stopping. Connector specific cleanup can usually be implemented here without overriding `shutdown`
- Triggers:
- A command with action `TetraScience.Connector.Shutdown` is received.
- sent by the platform when a user disables the connector
- Default Implementation:
- None
- **on_stop**: A developer-defined hook that runs when the connector's operating status is set to `IDLE`
- Triggers (any of the following):
- A command with action `TetraScience.Connector.Stop` is received.
- this corresponds to setting the connector operating status to `IDLE`
- Default Implementation:
- reloads connector config, which subsequently calls `on_connector_updated`
#### Additional `Connector` commands
The `Connector` class also supports some other commands. These can be sent using the [commands API](https://developers.tetrascience.com/reference/ag-create-command)
| Action Name | `Connector` Method Called | Description | Triggers | Side-Effects |
| ----------------------------------------- | --------------------------------------------- |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -------------------------------------------------------------------------------------- | --------------------------------------------------------------------------- |
| TetraScience.Connector.ListCustomCommands | `handle_get_available_custom_commands` | This method returns a list of available custom commands registered to the connector. | | |
| TetraScience.Connector.SetLogLevel | `set_log_level` | This method is used to set the log level of the connector dynamically. Supported levels `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`. | User sends a command to set the log level | Adjusts the verbosity of logs without restarting the connector |
| *custom commands* | *specified by `@register_command` decorators* | Custom commands registered using the `@register_command` decorator. | | |
#### Custom commands
Custom commands are user-defined commands that can be registered by developers
to extend the functionality of a connector. This can be useful for implementing
capabilities that you want the connector to have, but only to use on demand. One
example would be ingesting historical data from a given time window for a connector
that typically only receives new data. Another example might be requests for the
connector to send information to a third-party system. This gives TDP pipelines
a way to call upon the connector to act on their behalf.
`Connector` implements a command listener that both listens to all the previously
mentioned standard commands, and also checks a connector-specific registry for
custom commands. Custom commands are registered using the `@register_command` decorator.
The decorator takes a string argument that corresponds to the `action` of the
command. The convention for action names is `TetraScience.Connector.<ConnectorName>.<CustomActionName>`,
as in the following example:
```python
from ts_sdk_connectors_python.custom_commands import register_command
from ts_sdk_connectors_python.connector import Connector

class MyConnector(Connector):
    @register_command("TetraScience.Connector.ExampleConnector.MyCustomAction")
    def my_custom_action(self, body: dict):
        print(f"Action called with body: {body}")
        return None
```
[Commands](docs/commands.md) contains further technical details on custom command and command registration.
The return type of the custom command method should be either `None`, a dictionary, or a string that can be converted into a dictionary.
Specifically, you can refer to the `ts_sdk_connectors_python.models.CommandResponseBody` type for more details.
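A command invoking this custom action could be sent via the commands API with a request of the same shape as the `SetLogLevel` example in the Logging section (all values here are illustrative):

```json
{
  "payload": {
    "example_field": "example_value"
  },
  "action": "TetraScience.Connector.ExampleConnector.MyCustomAction",
  "targetId": "your-connector-id"
}
```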
### Polling
The SDK provides a `Poll` class that allows repeated execution of a target function at a specified interval.
This is useful for tasks that need to be performed periodically, such as checking the status of a resource or polling an API.
For more details on how to use the `Poll` class, refer to the [Polling Documentation](docs/poll.md).
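To illustrate the idea, a minimal asyncio-based poller might look like the sketch below. This is not the SDK's actual `Poll` class (which also includes error handling and logging, per the v0.8.0 changelog); see `docs/poll.md` for the real API.

```python
import asyncio

class SimplePoll:
    """Minimal sketch of interval polling, not the SDK's Poll class."""

    def __init__(self, target, interval: float):
        self._target = target      # async callable to run each cycle
        self._interval = interval  # seconds between runs
        self._task = None

    def start(self):
        # Must be called while an event loop is running.
        self._task = asyncio.create_task(self._run())

    async def _run(self):
        while True:
            await self._target()
            await asyncio.sleep(self._interval)

    def stop(self):
        if self._task:
            self._task.cancel()
            self._task = None
```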
## Logging
The SDK provides a logger to facilitate structured logging. This logger supports multi-threading, asynchronous operations, logger inheritance, and upload to CloudWatch.
Log messages are in JSON format for uptake into CloudWatch. The following are examples of log messages.
```text
{"level":"debug","message":"Loading TDP certificates from local volume /etc/tetra/tdp-cert-chain.pem","extra":{"context":"ts_sdk_connectors_python.AuthenticatedClientCreator"}}
{"level":"info","message":"TDP certificates loaded from local volume /etc/tetra/tdp-cert-chain.pem","extra":{"context":"ts_sdk_connectors_python.AuthenticatedClientCreator"}}
{"level":"info","message":"Client initialized","extra":{"context":"ts_sdk_connectors_python.tdp_api_base","orgSlug":"tetrascience-yourorg","connectorId":"3eca48c9-3eb2-4414-a491-a8dda151da50"}}
{"level":"info","message":"Starting metrics task: cpu_usage_metrics_provider","extra":{"context":"ts_sdk_connectors_python.metrics"}}
{"level":"info","message":"Starting metrics task: memory_used_metrics_provider","extra":{"context":"ts_sdk_connectors_python.metrics"}}
```
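A formatter producing this shape can be sketched with the standard library's `logging` module (illustrative only; the SDK's formatter carries more fields and merges `extra` context):

```python
import json
import logging

class JsonLineFormatter(logging.Formatter):
    """Render a log record as a single JSON line like the examples above."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps(
            {
                "level": record.levelname.lower(),
                "message": record.getMessage(),
                "extra": {"context": record.name},
            },
            separators=(",", ":"),
        )
```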
### Logger usage
The CloudWatch logger supports logger inheritance, allowing you to create child loggers that inherit the configuration
and context of their parent loggers. This is useful for organizing log messages by component or module.
The `get_logger` method returns a logger that inherits from the connector SDK's root logger. Simply give the logger
a useful name and begin using it; the new logger's name is created by appending that name as a suffix (see below).
Providing the `extra` argument adds information that is included in every log message emitted by the logger. The `extra`
argument can also be provided to any of the log methods (`info`, `debug`, `warning`, `error`, `critical`).
Example usage:
```python
from ts_sdk_connectors_python.logger import get_logger
# Create a parent logger
parent_logger = get_logger("parent_logger", extra={"foo": "bar"})
assert parent_logger.name == 'ts_sdk_connectors_python.parent_logger'
parent_logger.info('my message', extra={'baz': 'bazoo'})
```
```text
# expected log message
# note that 'foo' and 'baz' are included as 'extra'
# note that the logger name is also given in the 'extra.context'
{"level":"info","message":"my message","extra":{"context":"ts_sdk_connectors_python.parent_logger","foo":"bar","baz":"bazoo"}}
```
The following methods are provided to create logs at various levels:
```text
logger.debug('Use this for detailed debug information. This is the lowest level and by default not
emitted by the logger')
logger.info('Use this for general info. This is the default level for connectors')
logger.warning('Use this for warnings')
logger.error('Use this for errors. Note the exc_info which can provide stack trace info', exc_info=True)
logger.critical('Use this for critical errors that cause failure')
```
You may also create child loggers by using the `get_child` method, which will just add another suffix to an existing
logger and merge provided `extra`.
```python
# Create a child logger that inherits from the parent logger
child_logger = parent_logger.get_child("child_logger", extra={"baz": "qux"})
assert child_logger.name == 'ts_sdk_connectors_python.parent_logger.child_logger'
# Log messages using the child logger
child_logger.info("This is a message from the child logger")
```
```text
# note the extra is merged with the parent extra
{"level":"info","message":"my message","extra":{"context":"ts_sdk_connectors_python.parent_logger.child_logger","foo":"bar","baz":"qux"}}
```
### Setting log levels
To reduce the volume of logs, you can set the log level. The supported levels are `NOTSET`, `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`.
This can be done via the `set_root_connector_sdk_log_level` method:
```python
from ts_sdk_connectors_python.logger import set_root_connector_sdk_log_level
set_root_connector_sdk_log_level("DEBUG")
```
By default, all loggers made by `get_logger` have a `NOTSET` log level, meaning they all inherit their effective
log level from the root connector SDK logger. It is therefore *not recommended* to set the log level on any child loggers.
Instead, use the `set_root_connector_sdk_log_level` method.
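This inheritance mirrors the standard library's level resolution, which the following stdlib-only snippet demonstrates (plain `logging` loggers, not the SDK's):

```python
import logging

root = logging.getLogger("sdk_demo")
root.setLevel(logging.DEBUG)

child = logging.getLogger("sdk_demo.child")
# The child's own level stays NOTSET, so its effective level
# is inherited from the nearest ancestor with a level set.
assert child.level == logging.NOTSET
assert child.getEffectiveLevel() == logging.DEBUG
```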
`Connector` also implements the command `TetraScience.Connector.SetLogLevel` which allows you to set the log level of the connector dynamically. Here is an example command request:
```json
{
"payload": {
"level": "DEBUG"
},
"action": "TetraScience.Connector.SetLogLevel",
"targetId": "your-connector-id"
}
```
### CloudWatch logging and standalone connector support
If the connector is in standalone mode (meaning the `CONNECTOR_TOKEN` env var is set), logs are uploaded to AWS
CloudWatch in addition to being logged to the console. Logging and CloudWatch reporting occur on a separate processing
thread apart from the main connector processing thread. Uploads to CloudWatch happen in batches, whenever the batch
size limit is hit or on a set interval. The relevant environment variables for these settings can be found in `constants.py`.
The `CloudWatchReporter` class is responsible for managing the buffering and flushing of log events to AWS CloudWatch.
It handles the following tasks:
- Buffering log events.
- Flushing buffered log events to CloudWatch based on certain conditions (e.g., buffer size limit, flush interval).
- Managing the CloudWatch log stream and log group.
- Handling errors during the flushing process.
Flushing log events to AWS CloudWatch occurs for a number of reasons:
- The buffer reaches its size limit.
- The flush interval is reached.
- The flush limit is reached.
- The connector is started.
- The connector is stopped.
- An explicit flush is triggered.
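The size-or-interval flush rule can be sketched as follows (illustrative only; `LogBuffer` is not the SDK's `CloudWatchReporter`, which also manages log streams, groups, and error handling):

```python
import time

class LogBuffer:
    """Buffer events and flush on a size limit or an elapsed interval."""

    def __init__(self, max_size, flush_interval, sink):
        self.max_size = max_size              # flush when this many events buffered
        self.flush_interval = flush_interval  # seconds between forced flushes
        self.sink = sink                      # callable receiving a batch (list)
        self.events = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.events.append(event)
        due = time.monotonic() - self.last_flush >= self.flush_interval
        if len(self.events) >= self.max_size or due:
            self.flush()

    def flush(self):
        if self.events:
            # Sort by timestamp before sending, matching the SDK's
            # v2.0.0 behavior for CloudWatch uploads.
            self.sink(sorted(self.events, key=lambda e: e["timestamp"]))
            self.events = []
        self.last_flush = time.monotonic()
```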
## OpenAPI code generation
This project uses code generation to build client libraries for interacting with the data acquisition service in TDP based on the OpenAPI specification of the service. The generated code is placed in the `ts_sdk_connectors_python/openapi_codegen` directory.
**Typical users of the SDK will not need to generate this code.** For Tetra developers, details are available [here](ts_sdk_connectors_python/openapi_codegen/README.md)
### Using the generated API client
The generated API client is used within the `TdpApi` class to interact with the TDP REST API.
The `TdpApi` class provides asynchronous methods, while the `TdpApiSync` class provides synchronous methods.
#### Example usage
```python
import asyncio

from ts_sdk_connectors_python.tdp_api import TdpApi
from ts_sdk_connectors_python.openapi_codegen.connectors_api_client.models import SaveConnectorValueRequest

async def main():
    api = TdpApi()
    await api.init_client()
    connector_id = "your_connector_id"
    raw_data = [
        SaveConnectorValueRequest(
            key="a-string",
            value={"some_json_field": "some_secret_value"},
            secure=True,
        )
    ]
    response = await api.save_connector_data(connector_id, raw_data)
    print(response)

# Run the main function in an async event loop
asyncio.run(main())
```
For synchronous usage, use the `TdpApiSync` class:
```python
from ts_sdk_connectors_python.tdp_api_sync import TdpApiSync
from ts_sdk_connectors_python.openapi_codegen.connectors_api_client.models import SaveConnectorValueRequest

def main():
    api = TdpApiSync()
    api.init_client()
    connector_id = "your_connector_id"
    raw_data = [
        SaveConnectorValueRequest(
            key="a-string",
            value={"some_json_field": "some_secret_value"},
            secure=True,
        )
    ]
    response = api.save_connector_data(connector_id, raw_data)
    print(response)
    # retrieve parsed DTO object, if available
    print(response.parsed)

# Run the main function
main()
```
#### Retrieving connector data with filtering
The Python SDK supports server-side filtering when retrieving connector data. This allows you to efficiently retrieve only the data you need by specifying keys at the API level.
##### Using `TdpApi` (async)
```python
import asyncio

from ts_sdk_connectors_python.tdp_api import TdpApi

async def main():
    api = TdpApi()
    await api.init_client()
    connector_id = "your_connector_id"
    # Get all connector data (no filtering)
    all_data = await api.get_connector_data(connector_id)
    print(f"All data: {len(all_data.parsed.values)} items")
    # Get specific keys only (server-side filtering)
    filtered_data = await api.get_connector_data(
        connector_id,
        keys="key1,key2,key3",  # Comma-separated list of keys
    )
    print(f"Filtered data: {len(filtered_data.parsed.values)} items")

asyncio.run(main())
```
##### Using `TdpApiSync` (synchronous)
```python
from ts_sdk_connectors_python.tdp_api_sync import TdpApiSync

def main():
    api = TdpApiSync()
    api.init_client()
    connector_id = "your_connector_id"
    # Get specific keys only (server-side filtering)
    filtered_data = api.get_connector_data(
        connector_id,
        keys="key1,key2",
    )
    print(f"Filtered data: {len(filtered_data.parsed.values)} items")

main()
```
##### Using `Connector` class methods
The `Connector` class provides convenient methods that automatically use server-side filtering:
```python
import asyncio

from ts_sdk_connectors_python.connector import Connector
from ts_sdk_connectors_python.tdp_api import TdpApi

async def main():
    api = TdpApi()
    await api.init_client()
    connector = Connector(api)
    # Get specific values (uses server-side filtering automatically)
    values = await connector.get_values(["key1", "key2"])
    print(f"Retrieved {len(values)} values")
    # Get single value
    single_value = await connector.get_value("key1")
    if single_value:
        print(f"Value for key1: {single_value.value}")
    # Direct access to connector data with filtering
    data = await connector.get_values(keys=["key1", "key2"])
    print(f"Direct access: {len(data)} items")

asyncio.run(main())
```
Refer to the `TdpApi` and `TdpApiSync` class methods for more details on available API interactions.
## Local testing with standalone connectors
For local development and testing, you can use standalone connectors to test your connector implementation against TDP resources while running your code locally. This approach allows you to:
- Test your connector logic without deploying to a Hub
- Debug and iterate quickly during development
- Validate your connector against real TDP services
### Prerequisites
1. **Build your local Docker image**: Ensure you have built a local Docker image of your connector
2. **Access to TDP environment**: You need access to a TDP organization and appropriate permissions
3. **Standalone installer**: Access to the TetraScience standalone connector installer
### Setup process
#### Step 1: build your local connector image
First, build your connector as a Docker image locally. This typically involves:
```bash
# Example build command (adjust based on your connector's Dockerfile)
docker build -t my-connector:local .
```
#### Step 2: use the standalone installer
1. Run the standalone connector installer provided by TetraScience
2. When prompted for the connector image during installation, **provide your local image name** instead of an official image:
```
# Instead of using an official image like:
# tetrascience/my-connector:v1.0.0
# Use your local build:
my-connector:local
```
3. The installer will:
- Set up the necessary TDP resources (tokens, certificates, etc.)
- Configure environment variables for standalone operation
- Create the appropriate Docker run configuration pointing to your local image
#### Step 3: environment configuration
The standalone installer will configure the following key environment variables:
- `CONNECTOR_TOKEN`: Authentication token for standalone deployment
- `TDP_ENDPOINT`: TDP API endpoint
- `ORG_SLUG`: Your organization identifier
- `CONNECTOR_ID`: The connector instance ID
- Other TDP-specific configuration as needed
#### Step 4: running your local connector
After setup, your connector will run using your local Docker image but connect to real TDP services for authentication, data storage, and command processing.
#### Alternative: running containerless locally
It is also possible for testing to run the connector locally without Docker:
- Skip Step 1 above
- Get the connector token and standalone installer in Step 2, but do not run it
- From the `installer.sh` file, you can extract values for the environment variables (other than `CONNECTOR_TOKEN`, which you can get as part of Step 2) and export them. The necessary environment variables are those mentioned in [Configuring `TdpApi`](#configuring-tdpapi)
- In Step 4, run the entrypoint of the connector directly
### Important notes
- Ensure your local Docker image is built before running the standalone installer
- The standalone installer handles all TDP resource provisioning and configuration
- Your local connector will have the same capabilities as a Hub-deployed connector
- Use this method for development and testing; production deployments should use Hub or standalone deployment of the official image
## Running tests
Unit tests and integration tests are located in the `__tests__/unit` and `__tests__/integration` directories,
respectively. To run unit tests, execute:
```sh
poetry run pytest
```
To run integration tests against the TDP API, export the connector environment variables for tests and run:
```sh
poetry run pytest --integration
```
It is easiest to run the integration tests by following the procedure described in [Local testing with standalone connectors](#local-testing-with-standalone-connectors).
## Changelog
### v2.1.0
- Adds support to create SQL connectors
- Updates supported Python versions to 3.11+ (including 3.12, 3.13, 3.14)
- Updates openapi-python-client to v0.27.1
- Updates black to v25.11.0
- Updates pylint to v3.3.0
### v2.0.0
- **Breaking**: change connector helper methods to throw `ConnectorError` on unexpected API behavior instead of returning `None`
- For standalone connectors, capture logs before CloudWatch reporter initialization in buffer to be uploaded later
- Sort log events by timestamp before sending to CloudWatch
- Add filtering to `Connector.get_values`
- Add option to disable TLS verification to `TdpApi.create_httpx_instance`
### v1.0.1
- Add automatic batching to `Connector.get_files` and `Connector.save_files`
- Add necessary S3 metadata to support `destination_id` for file uploads
### v1.0.0
- Fix bug where proxy settings were loaded at wrong time from Hub
- Add synchronous init_client method for `TdpApiSync`
### v0.9.0
- Add enum of health status to `models.py`
- Add ability to read connector manifest file
- Add support for user agent strings
- Refactor SDK to use AWS class
- Update aioboto3 to use upstream fix
- Add health reporting to CloudWatch logger
- Move some methods from Node SDK `TdpClient` to Python `Connector` class
- Fix crash loops for connectors unable to start in RUNNING
### v0.8.0
- Add CI Pipeline to release PR builds to JFrog
- Add consistent AWS sessions through AWS class; fix cloud/hub/standalone deployment issues in v0.7.0
- ELN-661: Update Poll class to add default error handling and logging features
### v0.7.0
- Update README to include logger practices
- Add HTTP request timeouts and SQS request timeouts
- Fix bugs causing tdpApi.upload_file to fail with error when using additional checksums
- Fix bug where the SDK ignored the `AWS_REGION` environment variable in client init
- Support missing ConnectorFileDto.errorCount
### v0.6.0
- Fix a 400 Bad Request error caused by the TDP API client having an `Authorization` header that conflicted with the S3 presigned URL authentication
### v0.5.0
- Fix bug where Connector.start fails when given an uninitialized TdpApi
- Improve logging in connector.start()
### v0.4.0
- Implement CloudWatchReporter and logger to provide consistent logging by the SDK for local and cloudwatch logs
- Fix bugs in parsing ConnectorFileDto objects which formerly resulted in raised exceptions
- Introduce partial standalone deployment support for AWS and logger initialization
### v0.3.0
- Add support to fetch connector JWT from AWS, allowing cloud connector deployment
- Use type `SaveConnectorFilesRequest` in the signature of `TdpApi.update_connector_files()` and `TdpApiSync.update_connector_files()`
- Make CommandResponse.status optional to help with parsing messages from the command queue
### v0.2.0
- Add `upload_file` method to `TdpApi`, `TdpApiSync` and `Connector` classes
- Bug fix for command request and response data validation
- Bug fix for parsing the incoming command body for `validate_config_by_version`
### v0.1.0
- Initial version
| text/markdown | TetraScience | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"aioboto3<15.0.0,>=14.2.0",
"boto3-stubs[s3]<2.0.0,>=1.36.1",
"openapi-python-client<0.28.0,>=0.27.1",
"psutil<7.0.0,>=6.1.1",
"pydantic<3.0.0,>=2.10.4",
"python-dotenv<2.0.0,>=1.2.1",
"sqlalchemy<3.0.0,>=2.0.0",
"tenacity<10.0.0,>=9.1.4",
"types-aioboto3-lite[cloudwatch,logs,s3,sqs,ssm]<15.0.0,>=14... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:33:13.390534 | ts_sdk_connectors_python-2.1.0.tar.gz | 150,177 | 6d/8c/6722b2da666fc1fbadab2e0d8a4706d20d2138507d0a7d93ee32710c0277/ts_sdk_connectors_python-2.1.0.tar.gz | source | sdist | null | false | 16499c84ac640c231f3851a6d4eb6e3b | 687d148887d3791616865c6d3819b105804c88169f1787f25d1562f2a820d836 | 6d8c6722b2da666fc1fbadab2e0d8a4706d20d2138507d0a7d93ee32710c0277 | null | [
"LICENSE.txt"
] | 269 |
2.4 | quantumdrive | 1.3.84 | QuantumDrive platform and SDK | # QuantumDrive
AI-powered assistant and data management platform built on AgentForge.
## Overview
QuantumDrive is an AlphaSix IP product that provides:
- **Q Assistant**: Conversational AI with memory and tool access
- **Microsoft 365 Integration**: SSO and Graph API access
- **Vector Storage**: Semantic search with ChromaDB
- **AgentForge Integration**: Leverages Syntheticore's AgentForge library
## Quick Start
### Standalone Usage
1. **Install dependencies**:
```bash
pip install -r requirements.txt
```
2. **Create configuration** (choose one):
**Option A: Project root (for development)**:
```bash
cp resources/default_quantumdrive.toml quantumdrive.toml
```
**Option B: User config directory (for production)**:
```bash
mkdir -p ~/.config/quantumdrive
cp resources/default_quantumdrive.toml ~/.config/quantumdrive/quantumdrive.toml
```
3. **Set environment variables**:
```bash
export AF_OPENAI_API_KEY="sk-..."
export QD_MS_TENANT_ID="..."
export QD_MS_CLIENT_ID="..."
export QD_MS_CLIENT_SECRET="..."
```
4. **Run example**:
```bash
python examples/standalone_usage.py
```
### Library Usage (from Quantify)
```python
from quantumdrive.core.utils.qd_config import QDConfig
from quantumdrive.core.ai.q_assistant import QAssistant
# Load config from host application
config = QDConfig.from_dict(secrets_dict)
# Initialize assistant
assistant = QAssistant(config=config)
# Ask questions
response = assistant.answer_question(
"What is Python?",
user_id="user123",
thread_id="thread456",
org_id="org789"
)
```
## Configuration
QuantumDrive uses a flexible configuration system supporting:
- **TOML files**: For non-sensitive defaults
- **Environment variables**: For secrets and overrides
- **Dependency injection**: For library usage
See [Configuration Guide](docs/Configuration_Guide.md) for complete documentation.
### Required Configuration
**QuantumDrive (QD_* prefix)**:
- `QD_MS_TENANT_ID` - Microsoft Entra ID tenant
- `QD_MS_CLIENT_ID` - Application client ID
- `QD_MS_CLIENT_SECRET` - Application secret
- `QD_MS_REDIRECT_URI` - OAuth callback URL
**AgentForge (AF_* prefix)**:
- `AF_OPENAI_API_KEY` - OpenAI API key
- `AF_LLM_PROVIDER` - LLM provider (openai, ollama, xai)
- `AF_OPENAI_MODEL` - Model name (gpt-5.1, gpt-5.1-codex, etc.)
## Architecture
```
┌─────────────────────────────────────────┐
│ Quantify (AlphaSix IP) │
│ - Web application │
│ - Secrets management │
│ - User interface │
└──────────────┬──────────────────────────┘
│ config dict
↓
┌─────────────────────────────────────────┐
│ QuantumDrive (AlphaSix IP) │
│ - Q Assistant │
│ - Microsoft 365 integration │
│ - Vector storage │
│ - Configuration bridge │
└──────────────┬──────────────────────────┘
│ AF_* config
↓
┌─────────────────────────────────────────┐
│ AgentForge (Syntheticore IP) │
│ - LLM orchestration │
│ - Tool registry │
│ - Memory management │
│ - Vector stores │
└─────────────────────────────────────────┘
```
## Components
### Q Assistant (`core/ai/q_assistant.py`)
Conversational AI assistant with:
- Multi-turn conversations with memory
- Tool access (search, calculations, APIs)
- Identity-scoped memory (user, thread, org)
- Crew-based multi-agent workflows
### Microsoft 365 Provider (`core/auth/microsoft_365_provider.py`)
OAuth2 authentication and Graph API access:
- SSO with Microsoft Entra ID
- Token caching and refresh
- User profile retrieval
- Graph API requests
### Configuration (`core/utils/qd_config.py`)
Flexible configuration management:
- TOML file loading
- Environment variable overrides
- Dependency injection support
- AgentForge config extraction
## Development
### Project Structure
```
quantumdrive/
├── core/
│ ├── ai/ # Q Assistant and agent configuration
│ ├── auth/ # Microsoft 365 authentication
│ ├── aws/ # AWS Secrets Manager integration
│ ├── ingest/ # Document processing
│ ├── user/ # User profiles
│ └── utils/ # Configuration and utilities
├── docs/ # Documentation
├── examples/ # Usage examples
├── resources/ # Default configuration files
├── tests/ # Test suite
└── webapp/ # Flask web application
```
### Running Tests
```bash
pytest tests/
```
### Code Style
```bash
# Format code
black core/ tests/
# Lint
flake8 core/ tests/
# Type check
mypy core/
```
## Documentation
- [Configuration Guide](docs/Configuration_Guide.md) - Complete configuration reference
- [AgentForge Configuration](../agentforge/docs/Configuration_Guide.md) - AgentForge settings
- [Resources README](resources/README.md) - Configuration file documentation
## Security
**Never commit secrets to version control!**
- Use environment variables for API keys and credentials
- Use secrets managers (AWS Secrets Manager, Azure Key Vault) in production
- Add `.env` files to `.gitignore`
- Rotate exposed credentials immediately
See [Configuration Guide](docs/Configuration_Guide.md#security-best-practices) for details.
## License
Proprietary - AlphaSix IP
## Support
For issues or questions, contact the AlphaSix development team.
| text/markdown | Chris Steel | chris.steel@alphsix.com | null | null | LicenseRef-AlphSix-Proprietary | null | [
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"agentfoundry",
"flask>=3.0.0",
"gunicorn>=21.0.0",
"toml",
"msal>=1.24.0",
"python-dotenv>=1.0.0",
"awscli>=1.44.14",
"boto3",
"botocore>=1.42.24",
"sqlalchemy>=2.0.0",
"posthog>=3.0.0",
"docutils",
"markdown-it-py",
"mdurl",
"packaging",
"urllib3"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T17:33:06.218423 | quantumdrive-1.3.84.tar.gz | 494,967 | aa/3c/60b9fe84dbcf778d4351461d7c052596f784f766fc2f4dfa08e8a56d3ee7/quantumdrive-1.3.84.tar.gz | source | sdist | null | false | 162766901ff21d7f901fed44f60c61c7 | 4dc3f4d68be3cd086e4af89008a82c1a81c42513a3ddea62d5114ae47e1460dc | aa3c60b9fe84dbcf778d4351461d7c052596f784f766fc2f4dfa08e8a56d3ee7 | null | [
"LICENSE"
] | 242 |
2.3 | c-struct-data-parser | 0.4.0 | Declaratively define and parse C data structures | # c_struct_data_parser
Intended to facilitate declarative, dynamic definition of C data structures.
The library autogenerates a parser which you can connect to a `BytesReader` or some IO-based variant.
You're better off looking at https://github.com/fox-it/dissect.cstruct
Examples from `test_basic.py`:
```python
Struct2Int = create_struct_definition(
"Struct2Int",
{
"value_a": Int4Definition,
"value_b": Int4Definition,
},
)
def test_struct2Int() -> None:
value_a = 0x12345678
value_b = 0x99887766
bs = b"".join(map(int_to_le_bytes_4, [value_a, value_b]))
bytes_reader = BytesReader(address=0, bs=bs)
struct_2_int, new_reader = Struct2Int.parser(bytes_reader)
assert struct_2_int.value_a.value == value_a
assert struct_2_int.value_b.value == value_b
print(struct_2_int)
BitFieldsExample = create_bit_fields_definition(
"BitFieldsExample",
Int4Definition,
{
"field_a": 4,
"reserved_1": 5,
"field_b": 5,
},
)
def test_bit_fields() -> None:
val = 0x123456
bs = int_to_le_bytes_4(val)
bytes_reader = BytesReader(address=0, bs=bs)
bit_fields_example, new_reader = BitFieldsExample.parser(bytes_reader)
assert bit_fields_example.field_a.value == val & 0xF
assert bit_fields_example.field_b.value == (val >> 9) & 0x1F
print(bit_fields_example)
Struct2IntPointer = create_pointer_definition(
Struct2Int,
Int4Definition,
)
def test_pointer_type() -> None:
value_a = 0x12345678
value_b = 0x99887766
bs_struct = b"".join(map(int_to_le_bytes_4, [value_a, value_b]))
bs_pointer = int_to_le_bytes_4(0x2000)
bs = b"".join([bs_struct, bs_pointer])
bytes_reader = BytesReader(address=0x2000, bs=bs)
struct_2_int, new_reader = Struct2Int.parser(bytes_reader)
assert struct_2_int.value_a.value == value_a
assert struct_2_int.value_b.value == value_b
p_struct_2_int, new_reader_2 = Struct2IntPointer.parser(new_reader)
print(p_struct_2_int)
struct_2_int_copy = p_struct_2_int.resolve(new_reader_2)
assert struct_2_int_copy.value_a.value == value_a
assert struct_2_int_copy.value_b.value == value_b
```
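For context on what the `BitFieldsExample` assertions above are checking: extracting LSB-first bit fields from a little-endian 4-byte integer reduces to shifts and masks. A standalone sketch using plain `struct` (not this library's API):

```python
import struct


def unpack_bit_fields(data: bytes, widths: dict) -> dict:
    """Split a little-endian uint32 into named bit fields, LSB first."""
    (value,) = struct.unpack("<I", data)
    fields, shift = {}, 0
    for name, width in widths.items():
        fields[name] = (value >> shift) & ((1 << width) - 1)
        shift += width
    return fields


bs = struct.pack("<I", 0x123456)
f = unpack_bit_fields(bs, {"field_a": 4, "reserved_1": 5, "field_b": 5})
print(f["field_a"], f["field_b"])  # 6 26
```

This matches the test above: `field_a` occupies bits 0-3 (`val & 0xF`) and `field_b` bits 9-13 (`(val >> 9) & 0x1F`).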
| text/markdown | Gregory.Kuhn | Gregory.Kuhn <gregory.kuhn@analog.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.8 | 2026-02-18T17:33:00.047556 | c_struct_data_parser-0.4.0.tar.gz | 5,171 | 63/0a/8a2a145a575fa587cdcf938d88c441c692707e2026c07ab41fcd5856a5e5/c_struct_data_parser-0.4.0.tar.gz | source | sdist | null | false | 67d4392dae7688eb85eb41e9bc33eaf5 | eb8216b73b7ffb931a8c14a0244ae943df439ff9ec4bc52996481d98a5259d13 | 630a8a2a145a575fa587cdcf938d88c441c692707e2026c07ab41fcd5856a5e5 | null | [] | 503 |
2.4 | aiorubi | 0.0.1 | Reserved for future development | # aiorubi
This package is currently under development. | text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://pypi.org/project/aiorubi"
] | twine/6.2.0 CPython/3.11.2 | 2026-02-18T17:32:54.089331 | aiorubi-0.0.1.tar.gz | 540 | af/26/892f0abe7dbffe2c135a8a73a5da6ce0afc2288a36b1b79640b00258729a/aiorubi-0.0.1.tar.gz | source | sdist | null | false | bc35dd3bb46bc391a7a01172eb967d31 | 72d135a4de4b54ef0061002d144ef1e48aa7a473941d4480ba25d9d5626eadf8 | af26892f0abe7dbffe2c135a8a73a5da6ce0afc2288a36b1b79640b00258729a | null | [] | 281 |
2.4 | sniffcell | 0.6.0 | SniffCell: Annotate SVs cell type based on CpG methylation | # SniffCell - annotate structural variants with methylation-derived cell-type signals
[](https://pypi.org/project/sniffcell/)
[](https://pypi.org/project/sniffcell/)
[](https://github.com/Fu-Yilei/SniffCell/wiki)
[](https://github.com/Fu-Yilei/SniffCell/issues)
SniffCell analyzes long-read methylation around SVs and provides cell-type-aware annotations.
## Version
Current package version in code: `v0.6.0`.
## Install
```bash
pip install sniffcell
```
For local development:
```bash
pip install -e .
```
## CLI commands
```text
sniffcell {find,deconv,anno,svanno,dmsv,viz}
```
Command status in the current code:
- `find`: implemented.
- `anno`: implemented.
- `svanno`: implemented.
- `dmsv`: implemented.
- `viz`: implemented.
- `deconv`: placeholder stub (currently prints args only).
## Input assumptions
- BAM: long-read BAM with modified base tags; `HP` haplotype tag is optional.
- Reference: FASTA indexed for region fetches.
- VCF: `INS` and `DEL` records are used.
- VCF INFO field `RNAMES` is used for supporting reads unless overridden by `--kanpig_read_names`.
- VCF INFO fields `STDEV_POS`, `STDEV_LEN`, `SVLEN` are used to derive `ref_start` and `ref_end` windows.
- BED for `anno`: one tab-delimited hierarchical DMR file from `sniffcell find` with at least `chr`, `start`, `end`, `best_group`, `best_dir`.
## `find`: call hierarchical ctDMRs from atlas matrices
Finds cell-type-specific DMR regions from an explicit hierarchy schema in `atlas/index_to_major_celltypes.json`, then writes one annotation-ready BED/TSV.
Hierarchy schema:
- Add a top-level `__hierarchy__` object.
- Define each hierarchy key with `source_key` and optional `children`.
- Each child can point to another `source_key` and optional `groups`.
Example:
```json
"__hierarchy__": {
"pbmc-lymphocytes": {
"source_key": "pbmc-lymphocytes",
"children": {
"lymphocytes": {
"source_key": "pbmc",
"groups": ["T-cell", "NK-cell", "B-cell"]
}
}
}
}
```
Example:
```bash
sniffcell find \
-n atlas/all_celltypes_blocks.npy \
-i atlas/all_celltypes_blocks.index.gz \
-cf atlas/index_to_major_celltypes.json \
-m atlas/all_celltypes.txt \
-ck pbmc-lymphocytes \
-o pbmc_hierarchy.tsv \
--diff_threshold 0.40 \
--min_rows 2 \
--min_cpgs 3 \
--max_gap_bp 500
```
Outputs:
- `<output>`: annotation-ready hierarchical BED/TSV for `sniffcell anno`.
- `<output>.igv.bed`: companion IGV BED9 (headerless, IGV-ready).
Key columns in `<output>` include:
- `best_group`, `best_dir`
- `code_order` (global leaf schema)
- `best_group_leaves`, `other_group_leaves`
- `hierarchy_level`, `hierarchy_path`, `hierarchy_source_key`
- per-node means (`mean_<group>`)
## `anno`: annotate SVs with one hierarchical BED file
`anno` processes DMR regions near SVs, classifies reads per region, then summarizes per-SV assignment.
Basic example:
```bash
sniffcell anno \
-i sample.bam \
-v sample.vcf.gz \
-r ref.fa \
-b pbmc_hierarchy.tsv \
-o anno_out \
-w 10000 \
-t 8
```
`anno` outputs:
- `reads_classification.tsv`: per-read region-level assignments.
- `blocks_classification.tsv`: per-region methylation summaries.
- `sv_assignment.tsv`: SV-level assignment summary (produced by running `svanno` internally at end of `anno`).
- `sv_assignment_readable.tsv`: readable SV summary focused on classified cell types per SV.
- `sv_assignment_readable_long.tsv`: long-format `SV x celltype` table with counts/fractions.
- `anno_run_manifest.json`: run log/manifest with input paths and outputs (used by `sniffcell viz --anno_output`).
SV assignment options (available in both `anno` and `svanno`):
- `--evidence_mode {all_rows,per_read}`: how ctDMR evidence is aggregated for each SV.
- `--min_overlap_pct`: minimum overlap fraction required to keep `assigned_code`.
- `--min_agreement_pct`: minimum majority agreement required to keep `assigned_code`.
Defaults are strict:
- `--evidence_mode all_rows` (uses every supporting-read x ctDMR row; no per-read vote collapse)
- `--min_agreement_pct 1.0` (any conflicting code makes `assigned_code` empty / unreliable)
Conflict rule:
- `assigned_code` is forced empty when evidence has a hard conflict (`has_hard_conflict=True`), i.e. code constraints intersect to an empty set (for example `1110` with `0001` in the same schema).
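The conflict check can be illustrated with a small bitwise sketch (illustrative only, not SniffCell's actual implementation):

```python
def intersect_codes(codes):
    """Bitwise-AND a list of equal-length binary code strings.

    Returns (intersection_code, has_hard_conflict). A hard conflict
    means no bit survives the intersection, so no cell type satisfies
    all observed constraints.
    """
    n = len(codes[0])
    acc = (1 << n) - 1          # start with all leaf bits allowed
    for code in codes:
        acc &= int(code, 2)     # constrain by each observed code
    return format(acc, f"0{n}b"), acc == 0


# '1110' and '0001' share no bits -> hard conflict, assigned_code is emptied
print(intersect_codes(["1110", "0001"]))  # ('0000', True)
print(intersect_codes(["1110", "0110"]))  # ('0110', False)
```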
### How hierarchical codes are handled
1. One BED/TSV from `find` is loaded.
2. Regions are filtered by SV proximity with `--window`.
3. Every kept region is processed independently to generate per-read codes.
4. `code_order` defines the shared leaf-level bit schema.
5. `best_group_leaves` defines which bits are set for the target cluster in each DMR.
6. During SV assignment, reads are linked to SVs by chromosome-aware interval matching (`--window`), then evidence is aggregated by `--evidence_mode` (`all_rows` by default; `per_read` is optional).
## `svanno`: recompute SV-level assignment from precomputed read classifications
Use when you already have `reads_classification.tsv` and want to regenerate SV summaries.
Example:
```bash
sniffcell svanno \
-v sample.vcf.gz \
-i anno_out/reads_classification.tsv \
-w 10000 \
--evidence_mode all_rows \
--min_agreement_pct 1.0 \
-o anno_out
```
Output:
- `sv_assignment.tsv`
- `sv_assignment_readable.tsv`
- `sv_assignment_readable_long.tsv`
Readable summary columns include:
- `id`, `sv_chr`, `sv_pos`, `sv_len`, `vaf`
- `n_supporting`, `n_overlapped`, `overlap_pct`, `majority_pct`
- `classified_celltypes`, `classified_celltype_count`
- `classified_celltype_counts`, `classified_celltype_fractions`, `classification_summary`
- `is_multi_celltype_link`
`sv_assignment.tsv` also includes:
- `has_hard_conflict`: whether constraints are mutually incompatible.
- `intersection_code`: bitwise intersection of observed constraints in the dominant schema.
Long-format columns include:
- `id`, `sv_chr`, `sv_pos`, `sv_len`
- `celltype`, `rank`, `supporting_read_count`, `supporting_read_fraction`
- `n_supporting`, `n_overlapped`, `overlap_pct`
## `viz`: visualize one SV with reads and ctDMR overlap
Generate a figure (PNG/PDF) centered on one SV ID, showing:
- all reads in `SV +/- window` (supporting reads highlighted),
- SV interval,
- overlapping ctDMRs from a `find` BED/TSV,
- all cell-type methylation values on those ctDMRs from `mean_*` columns (heatmap panel).
Simple example (from an `anno` output folder):
```bash
sniffcell viz \
--anno_output anno_out \
-s sniffles.SV123 \
-o anno_out/sniffles.SV123
```
Outputs:
- Default output: `anno_out/sniffles.SV123.png` (or `.pdf`)
- Add `--export_tables` if you also want TSV outputs (`.summary.tsv`, `.supporting_reads_assignment.tsv`, `.supporting_reads_ctdmr_methylation.tsv`)
## `dmsv`: test differential methylation around SVs
Computes per-CpG statistics between supporting and non-supporting reads near each SV.
Example:
```bash
sniffcell dmsv \
-i sample.bam \
-v sample.vcf.gz \
-r ref.fa \
-o dmsv_out \
-m 3 \
-f 1000 \
-c 5 \
-t 8
```
Outputs:
- `dmsv_out/significant_SVs.tsv`: per-SV summary including significance counts and effect summaries.
- `dmsv_out/sv_details/<sv_id>.tsv.gz`: per-CpG stats table for each SV.
Current implementation note:
- `dmsv` parses `--test_type` but the current backend path uses consistency-aware MWU screening in `statistical_test_around_sv.py`.
## `deconv`
`deconv` CLI arguments exist but implementation is currently a placeholder (`deconv_main` only prints arguments).
## Practical example
```bash
sniffcell anno \
-i data/sample.bam \
-v data/sample.vcf.gz \
-b dmrs/pbmc_hierarchy.tsv \
-o results/anno.w10000 \
-r refs/GRCh38.fa \
-w 10000 \
-t 8
```
| text/markdown | Yilei Fu | yilei.fu@bcm.edu | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/Fu-Yilei/SniffCell | null | >=3.10 | [] | [] | [] | [
"pysam>=0.21.0",
"edlib>=1.3.9",
"psutil>=5.9.4",
"numpy>=2.2.0",
"pandas>=2.3.0",
"scipy",
"tqdm",
"scikit-learn",
"matplotlib"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/Fu-Yilei/SniffCell/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:32:06.889399 | sniffcell-0.6.0.tar.gz | 654,097 | 25/3a/18c4262ba3ee82f68a219ba5289ee883d273d3d48b3458291e51abacaced/sniffcell-0.6.0.tar.gz | source | sdist | null | false | 88995611a5e0c5db332d1b6155219ece | b5abc02752903aa8a618d60ac639616f8e50f9e56269670c3189d2ea9cab3272 | 253a18c4262ba3ee82f68a219ba5289ee883d273d3d48b3458291e51abacaced | null | [
"LICENSE"
] | 231 |
2.4 | sentry-arroyo | 2.38.1 | Arroyo is a Python library for working with streaming data. | # Arroyo
<p align="center">
<!-- do not specify height so that it scales proportionally on mobile -->
<img src=docs/source/_static/arroyo-banner.png width=583 />
</p>
`Arroyo` is a library to build streaming applications that consume from and produce to Kafka.
Arroyo consists of three components:
* Consumer and producer backends
- The Kafka backend is a wrapper around the librdkafka client, and attempts to simplify rebalancing and offset management even further
- There is also an in memory and a file based consumer and producer implementation that can be used for testing
* A strategy interface
- Arroyo includes a number of pre-built strategies such as `RunTask`, `Filter`, `Reduce`, `CommitOffsets` and more.
- Users can write their own strategies, though in most cases this should not be needed as the library aims to provide generic, reusable strategies that cover most stream processing use cases
- Strategies can be chained together to form complex message processing pipelines.
* A streaming engine which manages the relationship between the consumer and strategies
- The `StreamProcessor` controls progress by the consumer and schedules work for execution by the strategies.
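The strategy-chaining idea can be illustrated with a toy sketch (deliberately simplified, not Arroyo's real interfaces; see the docs for the actual `ProcessingStrategy` API). Each stage processes a message and forwards it to the next step, mirroring how `Filter`, `RunTask`, and `CommitOffsets` compose:

```python
class RunTask:
    """Apply a function to each message, then forward to the next step."""
    def __init__(self, function, next_step):
        self.function, self.next_step = function, next_step

    def submit(self, message):
        self.next_step.submit(self.function(message))


class Filter:
    """Forward only messages matching a predicate."""
    def __init__(self, predicate, next_step):
        self.predicate, self.next_step = predicate, next_step

    def submit(self, message):
        if self.predicate(message):
            self.next_step.submit(message)


class Collect:
    """Terminal step, standing in for a commit step like CommitOffsets."""
    def __init__(self):
        self.seen = []

    def submit(self, message):
        self.seen.append(message)


# Chain: keep even messages, multiply by 10, then "commit"
sink = Collect()
pipeline = Filter(lambda m: m % 2 == 0, RunTask(lambda m: m * 10, sink))
for msg in range(5):
    pipeline.submit(msg)
print(sink.seen)  # [0, 20, 40]
```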
All documentation is in the `docs` directory. It is hosted at https://getsentry.github.io/arroyo/ and can be built locally by running `make docs`.
| text/markdown | Sentry | oss@sentry.io | null | null | Apache-2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/getsentry/arroyo | null | null | [] | [] | [] | [
"confluent-kafka<2.10.0,>=2.7.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.2 | 2026-02-18T17:31:59.813567 | sentry_arroyo-2.38.1.tar.gz | 93,292 | 23/3a/1e9d98525a4a0ee9999baae7c7a4427688d0bb95438184bd525d56c1e250/sentry_arroyo-2.38.1.tar.gz | source | sdist | null | false | 3ee21c0a3410b961f7dae5e573da456b | 8ff9b0b2d3104bd8835e7dffc8ba0249804964d249be03c1b5d2a8ee5f0f90b0 | 233a1e9d98525a4a0ee9999baae7c7a4427688d0bb95438184bd525d56c1e250 | null | [
"LICENSE"
] | 901 |
2.4 | draw-triangle-ajay | 0.0.2 | A Python library to draw geometric triangles using matplotlib and numpy | # draw_triangle
Python library to draw geometric triangle shapes using numpy and matplotlib.
## Features
- Equilateral triangle
- Right triangle
- Isosceles triangle
- Custom triangle using vertices
---
## Installation
pip install draw-triangle-ajay
---
## Usage
### Equilateral Triangle
```python
from draw_triangle import equilateral
equilateral(5)
```
### Right Triangle
```python
from draw_triangle import right_triangle
right_triangle(5, 4)
```
---
### Isosceles Triangle
```python
from draw_triangle import isosceles
isosceles(6, 5)
```
---
### Custom Triangle
```python
from draw_triangle import draw_triangle
draw_triangle([(0, 0), (4, 0), (2, 3)])
```
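Under the hood these helpers are thin wrappers over numpy and matplotlib; a roughly equivalent standalone sketch (not the library's actual code) for the custom-triangle case:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt


def plot_triangle(vertices, out="triangle.png"):
    # Repeat the first vertex so the polygon outline closes
    pts = np.array(list(vertices) + [vertices[0]])
    fig, ax = plt.subplots()
    ax.plot(pts[:, 0], pts[:, 1])
    ax.set_aspect("equal")
    fig.savefig(out)
    plt.close(fig)
    return pts


pts = plot_triangle([(0, 0), (4, 0), (2, 3)])
print(pts.shape)  # (4, 2)
```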
---
## Dependencies
- numpy
- matplotlib
---
## Author
Ajay Kumar
| text/markdown | Ajay Kumar | null | null | null | MIT | geometry, triangle, matplotlib, visualization | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"matplotlib"
] | [] | [] | [] | [
"Homepage, https://github.com/ajstyle007/draw-triangle-ajay"
] | twine/6.2.0 CPython/3.12.1 | 2026-02-18T17:31:27.398596 | draw_triangle_ajay-0.0.2.tar.gz | 2,186 | dd/83/56c184cdbb59fc9249a7ee79b08e78b83f8b41226fc40aec40a23368321f/draw_triangle_ajay-0.0.2.tar.gz | source | sdist | null | false | a87a755f2cdcbe3fd58c9737b85a80ea | c683e215bae105144af7fea5279221575631fd7363f8451dadae64f3a2b3f924 | dd8356c184cdbb59fc9249a7ee79b08e78b83f8b41226fc40aec40a23368321f | null | [
"LICENSE"
] | 242 |
2.4 | panopti | 0.3.0 | Panopti: Interactive 3D visualization in Python. | <h1>
Panopti:<br>
<sub>Interactive 3D Visualization in Python</sub>
</h1>

## [ [Documentation](https://armanmaesumi.github.io/panopti/) ] - [ [PyPI](https://pypi.org/project/panopti/) ]
Panopti is a Python package for interactive 3D visualization that is designed so that users only need to write Python code, making it painless to set up interactive experiments and scenes. All code examples and demos throughout the documentation are achieved purely using Panopti in Python -- no JavaScript required!
Panopti offers several features:
- ✅ Remote SSH compatible
- ✅ Headless rendering
- ✅ Geometric primitives: meshes, point clouds, animated geometry, arrows, etc.
- ✅ Interactive UI elements with Python callbacks
- ✅ Programmable events: on-click, inspection tool, transformation gizmo, camera update, etc.
- ✅ Convenience features: exporting geometry, embedding Plotly figures
- ✅ Interactive web console that lets you debug with your Python script's state
- ✅ Material customization
See the [docs](https://armanmaesumi.github.io/panopti/) for more!
Install from pip:
```
pip install panopti
```
# Simple example
First start a Panopti server in a separate terminal:
```bash
python -m panopti.run_server --host localhost --port 8080
```
Then you can easily define your scene, for example:
```python
import panopti
import trimesh # just for io
import numpy as np
# create panopti client that connects to server:
viewer = panopti.connect(server_url="http://localhost:8080", viewer_id='client')
# open viewer in browser: http://localhost:8080/?viewer_id=client
mesh = trimesh.load('./examples/demosthenes.obj')
verts, faces = mesh.vertices, mesh.faces
# add a mesh to the scene:
viewer.add_mesh(
vertices=verts,
faces=faces,
name="Statue"
)
def callback_button(viewer):
# Update mesh vertices on button press by adding Gaussian noise
statue = viewer.get('Statue')
noise = np.random.normal(scale=0.05, size=statue.vertices.shape)
new_verts = statue.vertices + noise
statue.update(vertices=new_verts)
viewer.button(callback=callback_button, name='Click Me!')
viewer.hold() # prevent script from terminating
```
For more examples see [Documentation](https://armanmaesumi.github.io/panopti/examples/importing_geometry/) or `/examples`
## Installation
To install from pip:
```bash
pip install panopti
```
To install from source:
```bash
git clone https://github.com/ArmanMaesumi/panopti
# build frontend viewer
cd panopti/frontend
npm install
npm run build
cd ..
# install python package
pip install .
```
### Dependencies
Core dependencies:
```bash
pip install numpy eventlet requests flask flask-socketio python-socketio[client] tomli msgpack trimesh
```
Optional dependencies:
```bash
pip install matplotlib # for colormap utilities
pip install plotly==5.22.0 # for plotly figure support
```
Doc-related dependencies:
```bash
pip install mkdocs mkdocs-material mkdocstrings mkdocstrings-python
```
---
### Development
Details for running the local development workflow can be found in: [`/frontend/README.md`](/frontend/README.md)
#### Cite as
```bibtex
@misc{panopti,
title = {Panopti},
author = {Arman Maesumi},
note = {https://github.com/ArmanMaesumi/panopti},
year = {2025}
}
```
| text/markdown | null | Arman Maesumi <arman.maesumi@gmail.com> | null | null | null | 3D, visualization, geometry, interactive, web | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyth... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"requests",
"flask",
"flask-socketio",
"eventlet",
"python-socketio[client]",
"tomli",
"msgpack",
"trimesh",
"pillow"
] | [] | [] | [] | [
"Homepage, https://github.com/armanmaesumi/panopti",
"Repository, https://github.com/armanmaesumi/panopti",
"Documentation, https://armanmaesumi.github.io/panopti/"
] | twine/6.2.0 CPython/3.10.18 | 2026-02-18T17:31:03.097615 | panopti-0.3.0.tar.gz | 8,394,057 | 0a/10/a5c4dabfd4f04871429fc016f471ad2cffeee0c92359b0ebd85c2f3b789e/panopti-0.3.0.tar.gz | source | sdist | null | false | 62d068ea796923c13732f5a8b1a46ec6 | 9e0011952032335ef32aec66872b1f4243241f3b8aea4a66cd04d64b2e3a41c1 | 0a10a5c4dabfd4f04871429fc016f471ad2cffeee0c92359b0ebd85c2f3b789e | MIT | [
"LICENSE"
] | 243 |
2.4 | pyjumble | 0.3 | Various small utilities in Python | Various small utilities in Python.
Installation:
```
pip install pyjumble
```
| text/markdown | null | Lionel GUEZ <guez@lmd.ipsl.fr> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"cartopy",
"matplotlib>=3.4.0",
"numpy"
] | [] | [] | [] | [
"Homepage, https://github.com/lguez/Pyjumble"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T17:30:44.735224 | pyjumble-0.3.tar.gz | 17,108 | da/ec/8ed80eb5d9a2fc7aaca0126049c49c5fc7846eccc17dab57d5576c8178bb/pyjumble-0.3.tar.gz | source | sdist | null | false | 41594ba053e6eb604edaf5a69fc29843 | d14825aad880cbb512eb3dfef2e777547f6210c70348b5cb3e789841ff0a23a1 | daec8ed80eb5d9a2fc7aaca0126049c49c5fc7846eccc17dab57d5576c8178bb | GPL-3.0-or-later | [
"LICENSE"
] | 269 |
2.4 | moat-link-metrics-backend-akumuli | 0.0.3 | Akumuli backend for MoaT-Link metrics | # Akumuli backend for MoaT-Link metrics
% start synopsis
Akumuli time-series storage backend for moat-link-metrics.
% end synopsis
% start main
This package provides the Akumuli backend implementation for moat-link-metrics.
It allows forwarding metric values to an Akumuli time-series database.
## Installation
```shell
pip install moat-link-metrics-backend-akumuli
```
Or for Debian-based systems:
```shell
apt install moat-link-metrics-backend-akumuli
```
## Usage
The backend is automatically available once installed. To use it, set
`backend: akumuli` in your server configuration (this is the default).
## Configuration
The Akumuli backend supports the following configuration options in the
server config:
- **host**: Akumuli server hostname (default: `localhost`)
- **port**: Akumuli server port (default: `8282`)
- **delta**: Use delta compression (default: `true`)
## Example
```yaml
server:
backend: akumuli
host: localhost
port: 8282
delta: true
```
% end main
| text/markdown | null | Matthias Urlichs <matthias@urlichs.de> | null | null | null | MoaT | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Framework :: AsyncIO",
"Framework :: Trio",
"Framework :: AnyIO",
"Development Status :: 4 - Beta"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"asyncakumuli~=0.6.4",
"moat-link-metrics~=0.0.3"
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T17:30:43.466122 | moat_link_metrics_backend_akumuli-0.0.3.tar.gz | 4,322 | 8a/fc/5f5f96ce96a2a173df728dcf666481a086ed0197b5124d4fbacfc6533125/moat_link_metrics_backend_akumuli-0.0.3.tar.gz | source | sdist | null | false | e6f21e2358626fcddfbf26290ee5fadc | bbcb83697879df4374b8991b95f064512e114f4e17f63816945ed3170ad58a36 | 8afc5f5f96ce96a2a173df728dcf666481a086ed0197b5124d4fbacfc6533125 | null | [
"LICENSE.txt"
] | 250 |
2.4 | moat-lib-run | 0.1.3 | Main command entry point infrastructure for MoaT applications | # moat-lib-run
% start main
% start synopsis
Main command entry point infrastructure for MoaT applications.
% end synopsis
This module provides the infrastructure for building command-line interfaces
for MoaT applications. It includes:
- Command-line argument parsing with Click integration
- Subcommand loading from internal modules and extensions
- Configuration file handling
- Logging setup
- Main entry point wrappers for testing
## Usage
### Basic command setup
```python
from moat.lib.run import main_, wrap_main
@main_.command()
async def my_command(ctx):
"""A simple command"""
print("Hello from my command!")
```
### Loading subcommands
Use `load_subgroup` to create command groups that automatically load subcommands:
```python
from moat.lib.run import load_subgroup
import asyncclick as click
@load_subgroup(prefix="myapp.commands")
@click.pass_context
async def cli(ctx):
"""Main command group"""
pass
```
### Processing command-line arguments
The `attr_args` decorator and `process_args` function provide flexible
argument handling:
```python
from moat.lib.run import attr_args, process_args
@main_.command()
@attr_args(with_path=True)
async def configure(**kw):
"""Configure the application"""
config = process_args({}, **kw)
# config now contains parsed arguments
```
## Key Functions
- `main_`: The default main command handler
- `wrap_main`: Wrapper for the main command, useful for testing
- `load_subgroup`: Decorator to create command groups with automatic subcommand loading
- `attr_args`: Decorator for adding flexible argument handling to commands
- `process_args`: Function to process command-line arguments into configuration
- `Loader`: Click group class that loads commands from submodules and extensions
% end main
| text/markdown | null | null | null | Matthias Urlichs <matthias@urlichs.de> | null | MoaT | [
"Development Status :: 4 - Beta",
"Framework :: AnyIO",
"Framework :: Trio",
"Framework :: AsyncIO",
"Programming Language :: Python :: 3",
"Intended Audience :: Developers"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"anyio~=4.0",
"moat-util~=0.62.6",
"asyncclick~=8.3",
"simpleeval~=1.0",
"moat-lib-config~=0.1.2",
"moat-lib-proxy~=0.1.1"
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T17:30:38.662176 | moat_lib_run-0.1.3.tar.gz | 12,733 | 25/9a/8d36d89f8f5f8c0eb21438c4aed7956d24cfa3957ba70cac36f52517e1ee/moat_lib_run-0.1.3.tar.gz | source | sdist | null | false | 4aa1cb1ef9375e819944160f3a315169 | f77793b78f04e6666d617be01cdc29fc4f20bd903f75fb791d48956df1044db2 | 259a8d36d89f8f5f8c0eb21438c4aed7956d24cfa3957ba70cac36f52517e1ee | null | [
"LICENSE.txt"
] | 321 |
2.4 | moat-lib-repl | 0.1.4 | An async readline re-implementation | # moat-lib-repl
% start main
% start synopsis
A straightforward async-ization of `pyrepl`.
% end synopsis
This module was copied from CPython v3.13.9 and anyio-ized.
## Usage
```python
import anyio

from moat.lib.repl import multiline_input

async def main():
    inp = await multiline_input(lambda s: "\n" not in s, "=== ", "--- ")
    print("RES:", repr(inp))

anyio.run(main)
```
% end main
## License
Licensed under (something like) the
[MIT License](https://github.com/python/cpython/blob/3.13/Lib/_pyrepl/__init__.py)
| text/markdown | null | null | null | Matthias Urlichs <matthias@urlichs.de> | null | MoaT | [
"Development Status :: 4 - Beta",
"Framework :: AnyIO",
"Framework :: Trio",
"Framework :: AsyncIO",
"Programming Language :: Python :: 3",
"Intended Audience :: Developers"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"anyio~=4.0",
"moat-lib-proxy~=0.1.1",
"moat-lib-rpc~=0.6.1",
"moat-lib-stream~=0.2.0"
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T17:30:30.925637 | moat_lib_repl-0.1.4.tar.gz | 64,663 | ed/fb/9a1e8d0a5acaa2c512874718ac9b2af3e736393cbf5af030b256b026ef4b/moat_lib_repl-0.1.4.tar.gz | source | sdist | null | false | 5fb29a21e2b6e94c817c9ef2b8d77875 | 730923f80517da43bde80dcc2ade5ae29f42f1e0bacf9709e7ac825f695632ea | edfb9a1e8d0a5acaa2c512874718ac9b2af3e736393cbf5af030b256b026ef4b | null | [
"LICENSE",
"LICENSE.txt"
] | 234 |
2.4 | moat-link-metrics-backend-victoria | 0.0.3 | VictoriaMetrics backend for MoaT-Link metrics | # VictoriaMetrics backend for MoaT-Link metrics
% start synopsis
VictoriaMetrics time-series storage backend for moat-link-metrics.
% end synopsis
% start main
This package provides the VictoriaMetrics backend implementation for moat-link-metrics.
It allows forwarding metric values to a VictoriaMetrics time-series database.
## Installation
```shell
pip install moat-link-metrics-backend-victoria
```
Or for Debian-based systems:
```shell
apt install moat-link-metrics-backend-victoria
```
## Usage
The backend is automatically available once installed. To use it, set
`backend: victoria` in your server configuration.
## Configuration
The VictoriaMetrics backend supports the following configuration options in the
server config:
- **host**: VictoriaMetrics server hostname (default: `localhost`)
- **port**: VictoriaMetrics server port (default: `8282`)
- **delta**: Use delta compression (default: `true`)
## Example
```yaml
server:
backend: victoria
host: localhost
port: 8282
delta: true
```
% end main
| text/markdown | null | Matthias Urlichs <matthias@urlichs.de> | null | null | null | MoaT | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Framework :: AsyncIO",
"Framework :: Trio",
"Framework :: AnyIO",
"Development Status :: 4 - Beta"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"asyncvictoria~=0.6.4",
"moat-link-metrics~=0.0.3"
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T17:30:27.555485 | moat_link_metrics_backend_victoria-0.0.3.tar.gz | 4,600 | b1/a6/703cca6c4c95bab32f906517f9acedbbf31104361fcac751c48ae592cc4c/moat_link_metrics_backend_victoria-0.0.3.tar.gz | source | sdist | null | false | e6680c5ae428a0c711bf660227105e8a | 2d504434bc358e9579e6008f56675b4952cb6aa8ed19a80ecbac4286f74cd62e | b1a6703cca6c4c95bab32f906517f9acedbbf31104361fcac751c48ae592cc4c | null | [
"LICENSE.txt"
] | 248 |
2.4 | moat-link-metrics | 0.0.4 | Metrics time-series connector for MoaT-Link | # Metrics time-series connector
% start synopsis
Forwards MoaT-Link values to metrics time-series storage backends.
% end synopsis
% start main
This module watches MoaT-Link entries and writes their values to metrics backends
whenever they change. It replaces the legacy ``moat-kv-akumuli`` and ``moat-link-akumuli`` packages.
## Features
- Pluggable backend support (currently: Akumuli)
- Per-server configuration stored in MoaT-Link
- Dynamic series management: add, remove, or modify series at runtime
- Configurable scaling (factor/offset) and rate limiting (t_min)
- Attribute extraction from nested source values
## Quick start
1. Configure a metrics server:
```shell
moat link metrics add myserver my.entry source.path series_name host=myhost
```
2. Run the connector:
```shell
moat link metrics monitor myserver
```
## Backends
The default backend is Akumuli. To specify a different backend, set the `backend`
field in the server configuration to the backend module name (e.g., `akumuli`).
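For illustration, a minimal server-configuration sketch (assuming the same YAML layout that the backend packages use in their own examples; only the `backend` field is taken from this text, any further fields are backend-specific):

```yaml
server:
  backend: akumuli   # backend module name; remaining options belong to the backend
```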
## Deprecation
This package supersedes ``moat-kv-akumuli`` and ``moat-link-akumuli``. The raw
MQTT topic monitoring feature has been removed; use MoaT-Link's native data paths instead.
% end main
| text/markdown | null | Matthias Urlichs <matthias@urlichs.de> | null | null | null | MoaT | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Framework :: AsyncIO",
"Framework :: Trio",
"Framework :: AnyIO",
"Development Status :: 4 - Beta"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"anyio~=4.2",
"asyncclick",
"attrs",
"moat-util~=0.62.6",
"moat-link~=0.3.2",
"moat-lib-config~=0.1.2",
"moat-link-metrics-akumuli~=0.0.1; extra == \"akumuli\"",
"moat-link-metrics-victoria~=0.0.1; extra == \"victoria\""
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T17:30:18.371974 | moat_link_metrics-0.0.4.tar.gz | 9,492 | 1b/c6/4aedbe26fc2d9cdacf8e73d286b010d65307846fa3ea8387a05aade77028/moat_link_metrics-0.0.4.tar.gz | source | sdist | null | false | 2c76017e21c0e442232eb4f39a07e516 | e10776c6c9e2d99f0e93c0a9335697d03464fb19c8be7a38fe40ee4298d0e7ac | 1bc64aedbe26fc2d9cdacf8e73d286b010d65307846fa3ea8387a05aade77028 | null | [
"LICENSE.txt"
] | 250 |
2.4 | moat-lib-rpc | 0.6.1 | A simple RPC and stream multiplexer | # RPC library (sans-IO)
% start main
## Rationale
% start synopsis
This library is a sans-io generalization of the Remote Procedure Call
pattern. Aside from the basics (call a method, get a reply back
asynchronously) it supports cancellation (both client- and server-side),
exception forwarding, and streaming data bidirectionally with flow control.
% end synopsis
## Prerequisites
MoaT's RPC requires a reliable underlying transport for Python
objects. MoaT in general uses CBOR; however, any reliable, non-reordering
method that can transfer basic Python data structures (plus whatever
objects you send/receive) works.
Our RPC is transport agnostic. It exposes basic async methods to
iterate on messages to send, and to feed incoming lower-level messages
in.
Local use, i.e. within a single process, does not require a codec.
## Usage
### Direct calls
```{literalinclude} ../../examples/moat-lib-rpc/basic.py
:language: python
```
### Using a transport
```{literalinclude} ../../examples/moat-lib-rpc/tcp.py
:language: python
```
## Transport Specification
MoaT-Lib-RPC messaging is simple by design. A basic interaction starts with
a command (sent from A to B, Streaming bit off) and ends with a reply (sent
from B to A, Streaming bit off).
There is no provision for messages that don't have a reply. However,
an "empty" reply is just three bytes and the sender is not required
to wait for it.
The side opening a sub-channel uses a unique non-negative integer as
channel ID. Replies carry the ID's bitwise-negated value. Thus the ID
spaces of both directions are inherently separate.
IDs are allocated when sending the first message on a sub-channel. They
MUST NOT be reused until final messages (stream bit off) have been
exchanged in both directions. Corollary: Exactly one final message MUST
be sent in both directions.
### Message format
A MoaT-Lib-RPC message consists of a preferably-small signed integer, plus a
variable and usually non-empty amount of data.
The integer is interpreted as follows.
- Sign: message direction.
- Bit 0: if set, the message starts or continues a data stream; if
clear, the message is the final message for this subchannel and
direction.
- Bit 1: Error/Warning. If bit 0 is clear, the message denotes an error
which terminates the channel. Otherwise it is a warning or similar
information, and SHOULD be attached to the following command or reply.
:::{tip}
If a transport is more efficient when encoding small positive numbers
(e.g. [MessagePack](https://github.com/msgpack/msgpack/blob/master/spec.md)),
the integer shall be shifted one bit to the right instead of being
inverted. The direction is then encoded in the bit 0 (1: ID was negative).
:::
The other bits contain the message ID. Using CBOR (MessagePack), this
scheme allows for five (four) concurrent messages per direction before
encoding to two bytes is required.
Negative integers signal that the ID has been allocated by that
message's recipient. They are inverted bit-wise, i.e. `(-1-id)`. An
ID of zero is legal. The bits described above are not affected by this
inversion. Thus a command with ID=1 (no streaming, no error) is sent
with an initial integer of 4; the reply would use -5.
An interaction has concluded when both sides have transmitted exactly one
message with the Streaming bit clear.
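As a worked illustration of this bit layout (a sketch only; the function names here are invented and are not the moat-lib-rpc API):

```python
def encode_header(msg_id: int, *, stream: bool = False, error: bool = False,
                  by_recipient: bool = False) -> int:
    """Pack the channel ID and flag bits into the leading message integer."""
    n = (msg_id << 2) | (int(error) << 1) | int(stream)
    # IDs allocated by the message's recipient are sent bitwise-inverted.
    return -1 - n if by_recipient else n

def decode_header(n: int) -> tuple[int, bool, bool, bool]:
    """Return (msg_id, stream, error, by_recipient)."""
    by_recipient = n < 0
    if by_recipient:
        n = -1 - n          # undo the bitwise inversion before reading bits
    return n >> 2, bool(n & 1), bool(n & 2), by_recipient

# The example from the text: a command with ID=1, no streaming, no error …
assert encode_header(1) == 4
# … and its reply, which uses the recipient-allocated ID space:
assert encode_header(1, by_recipient=True) == -5
```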
### Streaming
An originator starts a stream by sending an initial message with the
Streaming bit set.
Data streams are inherently bidirectional. The command's semantics
should specify which side is supposed to send data (originator,
responder, or both). Error -2 will be sent (once) if a streamed item is
received that won't be handled.
Streaming may start when both sides have exchanged initial messages, i.e.
the originator may not send streamed data before receiving the initial
reply (with the Stream bit set).
The initial and final message are assumed to be out-of-band data. This also
applies to warnings.
#### Out-of-band data
Messages with both streaming and error bits set may be used to carry
out-of-band data while the stream is open, e.g. advising the recipient of
data loss. Conceptually, these messages are attached to the command or reply
that immediately follows them.
Application-generated warnings must not consist of a single integer, because
such messages would conflict with the flow-control mechanism (see next
section). The API should use payload formatting rules to avoid this
situation transparently.
#### Flow Control
For the most part: none. MoaT-Lib-RPC is mostly used for monitoring events
or enumerating small data sets.
However, *if* a stream's recipient has limited buffer space and sends a
command that might trigger a nontrivial amount of messages, it may send
a specific warning (i.e. a message with both Error and Streaming bits
set) before its initial command or reply. This warning must consist of a
single non-negative integer that advises the sender of the number of
streamed messages it may transmit without acknowledgement.
During stream transmission, the recipient periodically sends some more
(positive) integers to signal the availability of more buffer space. It
must send such a message if the counter is zero (after buffer space becomes
available of course) and more messages are expected.
The initial flow control messages should be sent before the initial
command or reply, but may be deferred until later.
A receiver should start flow control sufficiently early, but that isn't
always feasible. It notifies the remote side (error/warning -5, below) if
an incoming message was dropped due to resource exhaustion; likewise, the
API is required to notify the sender.
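The credit-counting described above can be sketched as follows (an illustrative model, not the moat-lib-rpc API; the class and method names are invented):

```python
class CreditWindow:
    """Sender-side view of the receiver's advertised buffer space."""

    def __init__(self, initial: int):
        self.credit = initial   # streamed messages the peer will accept unacked

    def grant(self, n: int):
        # The receiver sent another bare positive integer as a warning message.
        self.credit += n

    def try_send(self) -> bool:
        if self.credit == 0:
            return False        # must wait for the next credit message
        self.credit -= 1
        return True

w = CreditWindow(2)             # initial window, sent before the reply
assert w.try_send() and w.try_send()
assert not w.try_send()         # window exhausted; sender blocks
w.grant(1)                      # receiver freed a buffer
assert w.try_send()
```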
## Payload conventions
The content of a message beyond the initial integer is up to the
application. However, the following conventions are used by the rest of
MoaT:
- Initial messages start with the message's destination at the receiver,
interpreted as a Path.
- Messages consist of a possibly-empty list of positional arguments /
results, followed by a mapping of keyword+value arguments or results.
- The trailing mapping may be omitted if it is empty and the last positional
argument is not a mapping. This also applies when there are no positional
arguments.
- An empty mapping must not be omitted when the message is a warning
and consists of a single integer.
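To make these conventions concrete, here is a hypothetical encoder sketch (the function name and exact payload shape are invented for illustration; the actual moat-lib-rpc helpers may differ):

```python
def encode_payload(*args, **kwargs) -> list:
    """Positional args followed by a trailing kwargs mapping.

    The mapping is omitted when empty, unless the last positional argument
    is itself a mapping -- otherwise the decoder could not tell the two
    apart.
    """
    body = list(args)
    if kwargs or (body and isinstance(body[-1], dict)):
        body.append(kwargs)
    return body

assert encode_payload("a", 1) == ["a", 1]            # empty mapping omitted
assert encode_payload("a", x=2) == ["a", {"x": 2}]
assert encode_payload({"k": 1}) == [{"k": 1}, {}]    # mapping must stay
```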
## Error handling
The exact semantics of error messages are application specific.
Error messages with the streaming bit clear terminate the command. They
should be treated as fatal.
Error messages with the streaming bit set are either flow control
messages (see above) or out-of-band information from one endpoint
to the other.
% end main
### Well-Known Errors
- -1: Unspecified
The `.stop()` API method was called.
This message MAY be sent as a warning.
Usage: assume that a sender reads and transmits a sequence of ten
measurements each second. If a "stop" warning arrives, the sender
should complete the current block before terminating, while a "stop"
error forces the current transmission to end immediately.
- -2: Can't receive this stream
Sent if a command isn't prepared to receive a streamed request or
reply on this endpoint.
- -3: Cancel
The sender's or receiver's task is cancelled: the work is no longer
required / performed.
This message should not be transmitted as a warning; that would be
pointless.
- -4: No Commands
The sender of this error doesn't process commands.
- -5: Data loss
An incoming message was dropped due to resource exhaustion (full
queue).
This message should be sent as a warning.
- -6: Must stream
Sent if a command will not handle a non-streamed request or reply.
- -7: Error
Used if the "real" error could not be encoded for some (equally
untransmittable) reason. Typically includes a text dump of the
problematic exception.
- -11 …: No Command
The command is not recognized.
The error number encodes the command's position for a hierarchical
lookup at the destination, i.e. if the command is ("foo","bahr","baz")
and "foo" doesn't know about "bahr", the error is -12.
TODO
Other errors are sent using MoaT's link object encapsulation, i.e. the
error type (either a proxy or the name of the exception) followed by its
argument list and keywords (if present).
### Examples
> [!NOTE]
> Legend: S = Streaming, E = Error, D = direction (sign of message ID).
Simple command:
| S | E | D | Data |
|-----|-----|-----|---------|
| \- | \- | \+ | Hello |
| \- | \- | \- | You too |
Simple command, error reply:
| S | E | D | Data |
|-----|-----|-----|----------------------------|
| \- | \- | \+ | Hello again |
| \- | \* | \- | Meh. you already said that |
Receive a data stream:
| S | E | D | Data |
|-----|-----|-----|-------------------|
| \* | \- | \+ | gimme some data |
| \* | \- | \- | OK here they are |
| \* | \- | \- | ONE |
| \* | \- | \- | TWO |
| \* | \* | \- | Missed some |
| \* | \- | \- | FIVE |
| \- | \- | \+ | \[ 'OopsError' \] |
| \* | \- | \- | SIX |
| \- | \- | \- | stopped |
Transmit a data stream:
| S | E | D | Data |
|-----|-----|-----|-------------------------------------|
| \* | \- | \+ | I want to send some data |
| \* | \- | \- | OK send them |
| \* | \- | \+ | FOO |
| \- | \- | \- | Nonono I don't want those after all |
| \* | \- | \+ | BAR |
| \- | \* | \+ | OK OK I'll stop |
Receive with an error:
| S | E | D | Data |
|-----|-----|-----|--------------------------------------------------------------------|
| \* | \- | \+ | gimme some more data |
| \* | \- | \- | OK here they are |
| \* | \- | \- | NINE |
| \* | \- | \- | TEN |
| \- | \* | \- | \[ 'CrashedError', -42, 'Owch', {'mitigating': 'circumstances'} \] |
| \- | \- | \+ | *sigh* |
Bidirectional data stream:
| S | E | D | Data |
|-----|-----|-----|--------------------|
| \* | \- | \+ | Let's talk |
| \* | \- | \- | OK |
| \* | \- | \+ | *chat data* … |
| \* | \- | \- | *more chat data* … |
| \- | \- | \+ | hanging up |
| \- | \- | \- | oh well |
Data stream with flow control:
| S   | E   | D   | Data                                                       |
|-----|-----|-----|------------------------------------------------------------|
| \* | \* | \+ | 2 |
| \* | \- | \+ | gimme your data |
| \* | \- | \- | OK here they are |
| \* | \- | \- | A |
| \* | \* | \+ | 1 |
| \* | \- | \- | BB |
| \* | \* | \+ | 1 |
| \* | \- | \- | CCC |
| \* | \- | \- | DDDD \[ time passes until the originator has free buffer(s) \] |
| \* | \* | \+ | 5 |
| \* | \- | \- | EEEEE |
| \* | \- | \- | FFFFFF |
| \* | \- | \- | GGGGGGG |
| \- | \- | \- | that's all |
| \- | \- | \+ | thx |
| text/markdown | null | null | null | Matthias Urlichs <matthias@urlichs.de> | null | MoaT | [
"Development Status :: 4 - Beta",
"Framework :: AnyIO",
"Framework :: Trio",
"Framework :: AsyncIO",
"Programming Language :: Python :: 3",
"Intended Audience :: Developers"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"anyio~=4.0",
"moat-lib-codec~=0.4.14",
"moat-lib-micro~=0.2.2",
"moat-lib-stream~=0.2.0",
"moat-lib-proxy~=0.1.1",
"moat-util~=0.62.6"
] | [] | [] | [] | [
"homepage, https://m-o-a-t.org",
"repository, https://github.com/M-o-a-T/moat"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T17:30:15.244942 | moat_lib_rpc-0.6.1.tar.gz | 46,661 | 58/96/7c41216f0f813b6f88fd8c71aedd6c9a32ab335e008225d479cf9e195c45/moat_lib_rpc-0.6.1.tar.gz | source | sdist | null | false | 79d61dd6cf1d610e0b508a9b19bfff51 | 1ec400e9a1d2cd2a76377e669276899427c0a226c678195b7408061e455550cc | 58967c41216f0f813b6f88fd8c71aedd6c9a32ab335e008225d479cf9e195c45 | null | [
"LICENSE.txt"
] | 272 |
2.4 | mimeogram | 1.6 | Exchange of file collections with LLMs. | .. vim: set fileencoding=utf-8:
.. -*- coding: utf-8 -*-
.. +--------------------------------------------------------------------------+
| |
| Licensed under the Apache License, Version 2.0 (the "License"); |
| you may not use this file except in compliance with the License. |
| You may obtain a copy of the License at |
| |
| http://www.apache.org/licenses/LICENSE-2.0 |
| |
| Unless required by applicable law or agreed to in writing, software |
| distributed under the License is distributed on an "AS IS" BASIS, |
| WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
| See the License for the specific language governing permissions and |
| limitations under the License. |
| |
+--------------------------------------------------------------------------+
*******************************************************************************
mimeogram
*******************************************************************************
.. image:: https://img.shields.io/pypi/v/mimeogram
:alt: Package Version
:target: https://pypi.org/project/mimeogram/
.. image:: https://img.shields.io/pypi/status/mimeogram
:alt: PyPI - Status
:target: https://pypi.org/project/mimeogram/
.. image:: https://github.com/emcd/python-mimeogram/actions/workflows/tester.yaml/badge.svg?branch=master&event=push
:alt: Tests Status
:target: https://github.com/emcd/python-mimeogram/actions/workflows/tester.yaml
.. image:: https://emcd.github.io/python-mimeogram/coverage.svg
:alt: Code Coverage Percentage
:target: https://github.com/emcd/python-mimeogram/actions/workflows/tester.yaml
.. image:: https://img.shields.io/github/license/emcd/python-mimeogram
:alt: Project License
:target: https://github.com/emcd/python-mimeogram/blob/master/LICENSE.txt
.. image:: https://img.shields.io/pypi/pyversions/mimeogram
:alt: Python Versions
:target: https://pypi.org/project/mimeogram/
.. image:: https://raw.githubusercontent.com/emcd/python-mimeogram/master/data/pictures/logo.svg
:alt: Mimeogram Logo
:width: 800
:align: center
📨 A command-line tool for **exchanging collections of files with Large
Language Models** - bundle multiple files into a single clipboard-ready
document while preserving directory structure and metadata... good for code
reviews, project sharing, and LLM interactions.
Key Features ⭐
===============================================================================
* 🔄 **Interactive Reviews**: Review and apply LLM-proposed changes one by one.
* 📋 **Clipboard Integration**: Seamless copying and pasting by default.
* 🗂️ **Directory Structure**: Preserves hierarchical file organization.
* 🛡️ **Path Protection**: Safeguards against dangerous modifications.
Installation 📦
===============================================================================
Method: Download Standalone Executable
-------------------------------------------------------------------------------
Download the latest standalone executable for your platform from `GitHub
Releases <https://github.com/emcd/python-mimeogram/releases>`_. These
executables have no dependencies and work out of the box.
Method: Install Executable Script
-------------------------------------------------------------------------------
Install via the `uv <https://github.com/astral-sh/uv/blob/main/README.md>`_
``tool`` command:
::
uv tool install mimeogram
or, run directly with `uvx
<https://github.com/astral-sh/uv/blob/main/README.md>`_:
::
uvx mimeogram
Or, install via `pipx <https://pipx.pypa.io/stable/installation/>`_:
::
pipx install mimeogram
Method: Install Python Package
-------------------------------------------------------------------------------
Install via `uv <https://github.com/astral-sh/uv/blob/main/README.md>`_ ``pip``
command:
::
uv pip install mimeogram
Or, install via ``pip``:
::
pip install mimeogram
Examples 💡
===============================================================================
Below are some simple examples. Please see the `examples documentation
<https://github.com/emcd/python-mimeogram/blob/master/documentation/examples/cli.rst>`_
for more detailed usage patterns.
::
usage: mimeogram [-h] [OPTIONS] {create,apply,provide-prompt,version}
Mimeogram: hierarchical data exchange between humans and LLMs.
╭─ options ────────────────────────────────────────────────────────────────────╮
│ -h, --help show this help message and exit │
│ --configfile {None}|STR │
│ (default: None) │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ subcommands ────────────────────────────────────────────────────────────────╮
│ {create,apply,provide-prompt,version} │
│ create Creates mimeogram from filesystem locations or URLs. │
│ apply Applies mimeogram to filesystem locations. │
│ provide-prompt Provides LLM prompt text for mimeogram format. │
│ version Prints version information. │
╰──────────────────────────────────────────────────────────────────────────────╯
Working with Simple LLM Interfaces
-------------------------------------------------------------------------------
Use with browser chat interfaces and API workbenches when you want explicit,
portable control over exactly which files are shared with a model. This is
useful for interfaces with limited repository tooling and for workflows where
uploaded files may have favorable pricing (for example, some project-oriented
plans in provider-hosted web interfaces).
* Bundle files with mimeogram format instructions into clipboard.
.. code-block:: bash
mimeogram create src/*.py tests/*.py --prepend-prompt
* Paste instructions and mimeogram into prompt text area in browser.
* Interact with LLM until you are ready to apply results.
* Request mimeogram from LLM and copy it from browser to clipboard.
* `Example: Claude Artifact <https://claude.site/artifacts/5ca7851f-6b63-4d1d-87ff-cd418f3cab0f>`_
* Apply mimeogram parts from clipboard. (On a terminal, this will be
interactive by default.)
.. code-block:: bash
mimeogram apply
Note that, if you do not want the LLM to return mimeograms to you, most of the
current generation of LLMs are smart enough to understand the format without
instructions. Thus, you can save tokens by not explicitly providing mimeogram
instructions.
Working with LLM Project Interfaces
-------------------------------------------------------------------------------
Many LLM service providers now offer project-style workspaces that persist
instructions and uploaded context across chats. When available, this pairs well
with mimeogram by reducing prompt overhead while preserving structured file
exchange.
In these cases, you can take advantage of the project instructions so that you
do not need to include mimeogram instructions with each new chat:
* Copy mimeogram format instructions into clipboard.
.. code-block:: bash
mimeogram provide-prompt
* Paste mimeogram prompt into project instructions and save the update. Any new
chats will be able to reuse the project instructions hereafter.
* Simply create mimeograms for new chats without prepending instructions.
.. code-block:: bash
mimeogram create src/*.py tests/*.py
* Same workflow as chats without project support at this point: interact with
LLM, request mimeogram (as necessary), apply mimeogram (as necessary).
Remote URLs
-------------------------------------------------------------------------------
You can also create mimeograms from remote URLs:
.. code-block:: bash
mimeogram create https://raw.githubusercontent.com/BurntSushi/aho-corasick/refs/heads/master/src/dfa.rs
Both local and remote files may be bundled together in the same mimeogram.
However, there is no ability to apply a mimeogram to remote URLs.
Interactive Review
-------------------------------------------------------------------------------
During application of a mimeogram, you will be, by default, presented with the
chance to review each part to apply. For each part, you will see a menu like
this:
.. code-block:: text
src/example.py [2.5K]
Action? (a)pply, (d)iff, (e)dit, (i)gnore, (s)elect hunks, (v)iew >
Choosing ``a`` to select the ``apply`` action will cause the part to be queued
for application once the review of all parts is complete. All queued parts are
applied simultaneously to prevent thrash in IDEs and language servers as
interdependent files are reevaluated.
Filesystem Protection
-------------------------------------------------------------------------------
If an LLM proposes the alteration of a sensitive file, such as one which may
contain credentials or affect the operating system, then the program makes an
attempt to flag this:
.. code-block:: text
~/.config/sensitive.conf [1.2K] [PROTECTED]
Action? (d)iff, (i)gnore, (p)ermit changes, (v)iew >
If, upon review of the proposed changes, you believe that they are safe, then
you can choose ``p`` to permit them, followed by ``a`` to apply them.
We take AI safety seriously. Please review all LLM-generated content, whether
it is flagged for a sensitive destination or not.
Configuration 🔧
===============================================================================
Default Location
-------------------------------------------------------------------------------
Mimeogram creates a configuration file on first run. You can find it at:
* Linux: ``~/.config/mimeogram/general.toml``
* macOS: ``~/Library/Application Support/mimeogram/general.toml``
* Windows: ``%LOCALAPPDATA%\\mimeogram\\general.toml``
Default Settings
-------------------------------------------------------------------------------
.. code-block:: toml
[apply]
from-clipboard = true # Read from clipboard by default
[create]
to-clipboard = true # Copy to clipboard by default
[prompt]
to-clipboard = true # Copy prompts to clipboard
[acquire-parts]
fail-on-invalid = false # Skip invalid files
recurse-directories = false
[update-parts]
disable-protections = false
Motivation 🎯
===============================================================================
Why Mimeogram in an Agentic World 💡
-------------------------------------------------------------------------------
* Portable, provider-agnostic format for sharing and applying multi-file
changes.
* Works in web interfaces and chat surfaces that do not expose local
filesystem tools.
* Useful when you want explicit, auditable context selection instead of
full-repository agent access.
* Supports batch exchange workflows, including scenarios where uploaded files
can be cheaper than repeated API context transmission.
Technical Benefits ✅
-------------------------------------------------------------------------------
* Preserves hierarchical directory structure.
* Version control friendly. (I.e., honors Git ignore files.)
* Supports async/batch workflows.
Platform Neutrality ☁️
-------------------------------------------------------------------------------
* IDE and platform agnostic.
* Works with and without provider-specific agent tooling.
* Useful with both project-enabled and non-project chat interfaces.
Limitations and Alternatives 🔀
===============================================================================
* Manual refresh of files needed (no automatic sync).
* Cannot retract stale content from conversation history in provider GUIs.
* For tight edit/test loops inside a local repository, agentic tools and
coding IDEs may be faster.
Comparison of General Approaches ⚖️
-------------------------------------------------------------------------------
+-------------------------+------------+------------+-------------+--------------+
| Feature | Mimeograms | Projects | Agentic | Specialized |
| | | (Web) [1]_ | CLIs [2]_ | IDEs [3]_ |
+=========================+============+============+=============+==============+
| Primary Interaction | Bundle/ | Chat + | Local tools | IDE-native |
| Model | apply | uploads | + chat | assistant |
+-------------------------+------------+------------+-------------+--------------+
| Directory Structure | Yes | No | Yes | Yes |
+-------------------------+------------+------------+-------------+--------------+
| Version Control | Yes | No [4]_ | Yes | Yes |
+-------------------------+------------+------------+-------------+--------------+
| Platform Support | Cross- | Web | Varies | Varies |
| | platform | | | |
+-------------------------+------------+------------+-------------+--------------+
| Local Repo Live Sync | Manual | No | Yes | Yes |
+-------------------------+------------+------------+-------------+--------------+
| Provider Portability | High | Low | Medium | Low/Medium |
+-------------------------+------------+------------+-------------+--------------+
| Setup Required | Low | None | Medium | Medium |
+-------------------------+------------+------------+-------------+--------------+
| Cost Model | Varies | Usually | Varies | Usually |
| | | subscr. | | subscr. |
+-------------------------+------------+------------+-------------+--------------+
.. [1] Provider-hosted project workspaces in web interfaces.
.. [2] `Aider <https://aider.chat/>`_, `Claude Code
<https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview>`_,
   `Codex CLI <https://developers.openai.com/codex/cli/>`_, etc.
.. [3] `Cursor <https://cursor.com/en-US>`_, `Windsurf
   <https://windsurf.com/editor>`_, etc.
.. [4] Some hosted interfaces provide versioned artifacts or revision history
(for example, Claude artifacts), but these are not integrated with
traditional VCS workflows such as Git.
Notes:
- No single column is universally best.
- Mimeogram is designed to complement agentic tools, especially when you need
explicit scope control or provider portability.
Comparison with Similar Tools ⚖️
-------------------------------------------------------------------------------
- `ai-digest <https://github.com/khromov/ai-digest>`_
- `dump_dir <https://github.com/fargusplumdoodle/dump_dir/>`_
- `Gitingest <https://github.com/coderamp-labs/gitingest>`_
- `Repomix <https://github.com/yamadashy/repomix>`_
Mimeogram is unique among file collection tools for LLMs in offering round-trip
support: the ability not just to collect files but also to apply changes
proposed by LLMs.
`Full Comparison of Tools
<https://github.com/emcd/python-mimeogram/blob/master/documentation/comparisons.rst>`_
Features Matrix
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------+-----------+-----------+------------+-----------+
| Feature | Mimeogram | Gitingest | Repomix | dump_dir |
+====================+===========+===========+============+===========+
| Round Trips | ✓ | | | |
+--------------------+-----------+-----------+------------+-----------+
| Clipboard Support | ✓ | | ✓ | ✓ |
+--------------------+-----------+-----------+------------+-----------+
| Remote URL Support | ✓ | ✓ | ✓ | |
+--------------------+-----------+-----------+------------+-----------+
| Security Checks | ✓ | | ✓ | |
+--------------------+-----------+-----------+------------+-----------+
Content Selection Approaches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tools in this space generally follow one of two approaches: filesystem-oriented
or repository-oriented.
Tools like ``mimeogram``, ``dump_dir``, and ``ai-digest`` are oriented around
files and directories. You start with nothing and select what is needed. This
approach offers more precise control over context window usage and is better
suited for targeted analysis or work on specific features.
Tools like ``gitingest`` and ``repomix`` are oriented around code
repositories. You start with an entire repository and then filter out unneeded
files and directories. This approach is better for full project comprehension
but requires careful configuration to avoid exceeding LLM context window
limits.
Contribution 🤝
===============================================================================
Contribution to this project is welcome! However, it must follow the `code of
conduct
<https://emcd.github.io/python-project-common/stable/sphinx-html/common/conduct.html>`_
for the project.
Please file bug reports and feature requests in the `issue tracker
<https://github.com/emcd/python-mimeogram/issues>`_ or submit `pull
requests <https://github.com/emcd/python-mimeogram/pulls>`_ to
improve the source code or documentation.
For development guidance and standards, please see the `development guide
<https://emcd.github.io/python-mimeogram/stable/sphinx-html/contribution.html#development>`_.
About the Name 📝
===============================================================================
The name "mimeogram" draws from multiple sources:
* 📜 From Ancient Greek roots:
* μῖμος (*mîmos*, "mimic") + -γραμμα (*-gramma*, "written character, that
which is drawn")
* Like *mimeograph* but emphasizing textual rather than pictorial content.
* 📨 From **MIME** (Multipurpose Internet Mail Extensions):
  * Follows naming patterns from the Golden Age of Branding: Ford
    Cruise-o-matic, Ronco Veg-O-Matic, etc.
* Reflects the MIME-inspired bundle format.
* 📬 Echoes *telegram*:
* Emphasizes message transmission.
* Suggests structured communication.
Note: Despite similar etymology, this project is distinct from the PyPI package
*mimeograph*, which serves different purposes.
Pronunciation? The one similar to *mimeograph* seems to roll off the tongue
more smoothly, though it is one syllable longer than "mime-o-gram". Preferred
IPA: /ˈmɪm.i.ˌoʊ.ɡræm/.
Additional Indicia
===============================================================================
.. image:: https://img.shields.io/github/last-commit/emcd/python-mimeogram
:alt: GitHub last commit
:target: https://github.com/emcd/python-mimeogram
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/copier-org/copier/master/img/badge/badge-grayscale-inverted-border-orange.json
:alt: Copier
:target: https://github.com/copier-org/copier
.. image:: https://img.shields.io/badge/%F0%9F%A5%9A-Hatch-4051b5.svg
:alt: Hatch
:target: https://github.com/pypa/hatch
.. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit
:alt: pre-commit
:target: https://github.com/pre-commit/pre-commit
.. image:: https://microsoft.github.io/pyright/img/pyright_badge.svg
:alt: Pyright
:target: https://microsoft.github.io/pyright
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
:alt: Ruff
:target: https://github.com/astral-sh/ruff
.. image:: https://img.shields.io/badge/hypothesis-tested-brightgreen.svg
:alt: Hypothesis
:target: https://hypothesis.readthedocs.io/en/latest/
.. image:: https://img.shields.io/pypi/implementation/mimeogram
:alt: PyPI - Implementation
:target: https://pypi.org/project/mimeogram/
.. image:: https://img.shields.io/pypi/wheel/mimeogram
:alt: PyPI - Wheel
:target: https://pypi.org/project/mimeogram/
Other Projects by This Author 🌟
===============================================================================
* `python-absence <https://github.com/emcd/python-absence>`_ (`absence <https://pypi.org/project/absence/>`_ on PyPI)
🕳️ A Python library package which provides a **sentinel for absent values** - a falsey, immutable singleton that represents the absence of a value in contexts where ``None`` or ``False`` may be valid values.
* `python-accretive <https://github.com/emcd/python-accretive>`_ (`accretive <https://pypi.org/project/accretive/>`_ on PyPI)
🌌 A Python library package which provides **accretive data structures** - collections which can grow but never shrink.
* `python-classcore <https://github.com/emcd/python-classcore>`_ (`classcore <https://pypi.org/project/classcore/>`_ on PyPI)
🏭 A Python library package which provides **foundational class factories and decorators** for providing classes with attributes immutability and concealment and other custom behaviors.
* `python-detextive <https://github.com/emcd/python-detextive>`_ (`detextive <https://pypi.org/project/detextive/>`_ on PyPI)
🕵️ A Python library which provides consolidated text detection capabilities for reliable content analysis. Offers MIME type detection, character set detection, and line separator processing.
* `python-dynadoc <https://github.com/emcd/python-dynadoc>`_ (`dynadoc <https://pypi.org/project/dynadoc/>`_ on PyPI)
📝 A Python library package which bridges the gap between **rich annotations** and **automatic documentation generation** with configurable renderers and support for reusable fragments.
* `python-falsifier <https://github.com/emcd/python-falsifier>`_ (`falsifier <https://pypi.org/project/falsifier/>`_ on PyPI)
🎭 A very simple Python library package which provides a **base class for falsey objects** - objects that evaluate to ``False`` in boolean contexts.
* `python-frigid <https://github.com/emcd/python-frigid>`_ (`frigid <https://pypi.org/project/frigid/>`_ on PyPI)
🔒 A Python library package which provides **immutable data structures** - collections which cannot be modified after creation.
* `python-icecream-truck <https://github.com/emcd/python-icecream-truck>`_ (`icecream-truck <https://pypi.org/project/icecream-truck/>`_ on PyPI)
🍦 **Flavorful Debugging** - A Python library which enhances the powerful and well-known ``icecream`` package with flavored traces, configuration hierarchies, customized outputs, ready-made recipes, and more.
* `python-librovore <https://github.com/emcd/python-librovore>`_ (`librovore <https://pypi.org/project/librovore/>`_ on PyPI)
🐲 **Documentation Search Engine** - An intelligent documentation search and extraction tool that provides both a command-line interface for humans and an MCP (Model Context Protocol) server for AI agents. Search across Sphinx and MkDocs sites with fuzzy matching, extract clean markdown content, and integrate seamlessly with AI development workflows.
| text/x-rst | null | Eric McDonald <emcd@users.noreply.github.com> | null | null | null | ai, cli, documents, interchange, llm | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Progr... | [] | null | null | >=3.10 | [] | [] | [] | [
"absence~=1.1",
"accretive~=4.1",
"aiofiles",
"detextive~=3.1",
"dynadoc~=1.4",
"emcd-appcore[cli]~=1.6",
"exceptiongroup",
"frigid~=4.2",
"gitignorefile",
"httpx",
"icecream-truck~=1.5",
"patiencediff",
"pyperclip",
"python-dotenv",
"readchar",
"rich",
"tiktoken",
"tomli",
"typi... | [] | [] | [] | [
"Homepage, https://github.com/emcd/python-mimeogram",
"Documentation, https://emcd.github.io/python-mimeogram",
"Download, https://pypi.org/project/mimeogram/#files",
"Source Code, https://github.com/emcd/python-mimeogram",
"Issue Tracker, https://github.com/emcd/python-mimeogram/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:29:58.291491 | mimeogram-1.6.tar.gz | 48,006 | f9/f3/008c251e46b122e37b9523604de37f97489c0c06c6857b4fb004768ee30e/mimeogram-1.6.tar.gz | source | sdist | null | false | d6575f9818b8f5e4c4a8ede894fb1c36 | 9a71cad25a149e652c4e22f6fe7360cb572badf3bca9a88a4f5913e4c3d32957 | f9f3008c251e46b122e37b9523604de37f97489c0c06c6857b4fb004768ee30e | Apache-2.0 | [
"LICENSE.txt"
] | 218 |
2.4 | lynxlearn | 0.3.0 | A simple, educational machine learning library built from scratch using NumPy | # LynxLearn
**A beginner-friendly machine learning library built from scratch with NumPy.**
Educational. Transparent. CPU-optimized for small-to-medium models.
**Made by [lousybook01](https://github.com/notlousybook)** | **YouTube: [LousyBook](https://youtube.com/channel/UCBNE8MNvq1XppUmpAs20m4w)**
---
## Why LynxLearn?
### Where We Excel
| Feature | LynxLearn | PyTorch (CPU) | TensorFlow (CPU) |
|---------|-----------|---------------|------------------|
| Neural Network Training | **2-5x faster** | baseline | 2-3x slower |
| Framework Overhead | **Near zero** | High | Very High |
| Code Readability | **Pure NumPy** | C++ backend | Complex graph |
| Beginner Friendly | ✅ Simple API | Moderate | Steep learning curve |
| Educational Value | ✅ Learn ML fundamentals | Abstraction layers | Hidden complexity |
### Honest Performance Claims
**We WIN at:**
- 🚀 **Neural networks on CPU** - 2-5x faster than PyTorch, 3-10x faster than TensorFlow
- 📚 **Educational value** - Every line is readable NumPy, perfect for learning
- 🎯 **Small-to-medium models** - Where framework overhead dominates
- 🔧 **Customization** - Full control over dtypes, initializers, regularizers
**We DON'T claim to beat:**
- ❌ scikit-learn for linear regression (they have decades of optimization)
- ❌ GPU-accelerated frameworks for large models
- ❌ Production systems requiring distributed training
**Our NICHE:** Educational ML library for learning, prototyping, and CPU-based inference.
---
## Features
### Linear Models
- LinearRegression (OLS), GradientDescentRegressor
- Ridge, Lasso, ElasticNet (regularized regression)
- PolynomialRegression, HuberRegressor, QuantileRegressor, BayesianRidge
### Neural Networks
- Sequential model (Keras-like API)
- Dense layers with multiple activations (ReLU, GELU, Swish, Mish, etc.)
- Multiple precision support: float16, float32, float64, bfloat16
- Weight initializers: He, Xavier, LeCun, Orthogonal
- Regularizers: L1, L2, Elastic Net
- Constraints: MaxNorm, NonNeg, UnitNorm
### Model Selection & Metrics
- train_test_split
- MSE, RMSE, MAE, R² score
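As a sketch of how these metrics reduce to a few lines of NumPy (assuming LynxLearn follows the standard definitions; the library's actual implementation may differ in detail):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(r2_score(y_true, y_pred))  # close to 1.0 for a good fit
```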
### Visualizations
- Regression plots, cost history, residual analysis
- Model comparison charts
---
## Installation
```bash
# Basic installation
pip install lynxlearn
# With BF16 support
pip install lynxlearn[bf16]
# With all features
pip install lynxlearn[all]
```
Or install from source:
```bash
git clone https://github.com/notlousybook/LynxLearn.git
cd LynxLearn
pip install -e .
```
---
## Quick Start
### Linear Regression
```python
import numpy as np
from lynxlearn import LinearRegression, train_test_split, metrics
# Generate data
X = np.random.randn(100, 1)
y = 3 * X.flatten() + 5 + np.random.randn(100) * 0.5
# Split and train
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LinearRegression()
model.train(X_train, y_train)
# Evaluate
predictions = model.predict(X_test)
print(f"R² Score: {metrics.r2_score(y_test, predictions):.4f}")
```
### Neural Network
```python
from lynxlearn import Sequential, Dense, SGD
# Build model (reusing X_train/y_train from the Linear Regression example above)
model = Sequential([
Dense(128, activation='relu', input_shape=(10,)),
Dense(64, activation='relu'),
Dense(1)
])
# Compile and train
model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='mse')
history = model.train(X_train, y_train, epochs=100, batch_size=32)
# Predict
predictions = model.predict(X_test)
```
### Custom Precision
```python
from lynxlearn import DenseBF16, DenseFloat16, DenseMixedPrecision
# BF16 precision (requires ml-dtypes)
model = Sequential([
DenseBF16(128, activation='relu', input_shape=(10,)),
DenseBF16(1)
])
# Mixed precision training
layer = DenseMixedPrecision(128, storage_dtype='float16', compute_dtype='float32')
```
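The idea behind `DenseMixedPrecision` can be illustrated in plain NumPy (a conceptual sketch of the storage-vs-compute dtype split, not the library's actual code): parameters live in a compact dtype and are upcast for the matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store parameters compactly in float16 (half the memory of float32)...
W16 = rng.standard_normal((10, 128)).astype(np.float16)
b16 = np.zeros(128, dtype=np.float16)

x = rng.standard_normal((32, 10)).astype(np.float32)

# ...but upcast to float32 for the actual computation to limit rounding error.
out = x @ W16.astype(np.float32) + b16.astype(np.float32)

print(out.dtype)  # compute stays in float32 despite float16 storage
```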
### With Regularization
```python
from lynxlearn import Dense, L2Regularizer, MaxNorm
layer = Dense(
128,
activation='relu',
kernel_regularizer=L2Regularizer(l2=0.01),
kernel_constraint=MaxNorm(3.0)
)
```
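Conceptually, an L2 regularizer adds `l2 * sum(W**2)` to the loss, and a MaxNorm constraint rescales weight columns whose norm exceeds the limit after each update. A NumPy sketch of both ideas (these are assumptions about LynxLearn's internals; the real code may differ):

```python
import numpy as np

def l2_penalty(W, l2=0.01):
    """Extra loss term contributed by an L2 regularizer."""
    return l2 * np.sum(W ** 2)

def max_norm(W, max_value=3.0):
    """Rescale each column of W so its L2 norm is at most max_value."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.minimum(1.0, max_value / np.maximum(norms, 1e-12))
    return W * scale

W = np.full((4, 2), 2.0)             # each column has norm sqrt(4 * 4) = 4.0
W_clipped = max_norm(W, max_value=3.0)
print(np.linalg.norm(W_clipped, axis=0))  # columns rescaled down to norm 3.0
```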
---
## Performance Benchmarks
### Neural Network Training (CPU)
| Model Size | LynxLearn | PyTorch | TensorFlow | Winner |
|------------|-----------|---------|------------|--------|
| ~1K params | 0.05s | 0.12s | 0.35s | **LynxLearn 2.4x** |
| ~10K params | 0.15s | 0.45s | 1.2s | **LynxLearn 3x** |
| ~100K params | 0.8s | 2.1s | 5.5s | **LynxLearn 2.6x** |
*Fair comparison: same architecture, same data, same training parameters, CPU-only.*
### Why We're Faster on CPU
```
PyTorch/TensorFlow overhead per layer:
├── Autograd tape recording
├── Dynamic graph construction
├── CUDA availability checks
├── Distributed training hooks
├── Mixed precision handling
└── Safety checks and assertions
LynxLearn overhead per layer:
└── x @ W + b (single BLAS call)
```
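The forward pass of a dense layer really is essentially that one expression. A minimal sketch in plain NumPy (illustrative, not LynxLearn's source code):

```python
import numpy as np

def dense_forward(x, W, b, activation=None):
    """One dense layer: a single BLAS-backed matmul plus bias."""
    z = x @ W + b                  # delegates to the BLAS linked into NumPy
    if activation == "relu":
        z = np.maximum(z, 0.0)     # elementwise, vectorized in C
    return z

rng = np.random.default_rng(42)
x = rng.standard_normal((32, 10))   # batch of 32 samples, 10 features
W = rng.standard_normal((10, 128))  # 10 -> 128 units
b = np.zeros(128)
out = dense_forward(x, W, b, activation="relu")
print(out.shape)  # (32, 128)
```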
### What We DON'T Beat
| Task | Winner | Why |
|------|--------|-----|
| Linear Regression | scikit-learn | 20+ years of optimization |
| Large models on GPU | PyTorch/TensorFlow | GPU acceleration |
| Distributed training | PyTorch/TensorFlow | Multi-GPU/TPU support |
---
## Documentation
- [API Reference](docs/api.md) - Complete API documentation
- [Examples](docs/examples.md) - Code examples and tutorials
- [Mathematics](docs/mathematics.md) - Mathematical foundations
---
## Project Structure
```
LynxLearn/
├── lynxlearn/
│ ├── linear_model/ # Linear regression models
│ ├── neural_network/ # Neural network components
│ │ ├── layers/ # Dense, regularizers, constraints
│ │ ├── optimizers/ # SGD with momentum
│ │ ├── losses/ # MSE, MAE, Huber
│ │ └── initializers/ # He, Xavier, LeCun
│ ├── model_selection/ # Train/test split
│ ├── metrics/ # Evaluation metrics
│ └── visualizations/ # Plotting utilities
├── tests/ # Test suite
├── examples/ # Example scripts
├── benchmark/ # Fair benchmarks
└── docs/ # Documentation
```
---
## Philosophy
### Transparency
We're honest about performance. We don't cherry-pick unfair comparisons.
Our benchmarks compare apples-to-apples: same algorithm, same data, same hardware.
### Educational Value
Every component is built from scratch with NumPy. No black boxes.
Perfect for students, researchers, and anyone who wants to understand ML fundamentals.
### Beginner-Friendly API
```python
# Simple, intuitive method names
model.train(X, y) # Not fit()
model.predict(X) # Clear and obvious
model.evaluate(X, y) # Returns metrics dictionary
model.summary() # Print model architecture
```
---
## Running Tests
```bash
# Run all tests
pytest tests/
# Run with coverage
pytest tests/ -v --cov=lynxlearn
# Run neural network tests only
pytest tests/test_neural_network/
```
---
## Running Benchmarks
```bash
# Quick benchmark
python benchmark/benchmark_neural_network.py --quick
# Full benchmark
python benchmark/benchmark_neural_network.py
```
---
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
---
## License
MIT License
| text/markdown | null | lousybook01 <lousybook94@gmail.com> | null | lousybook01 <lousybook94@gmail.com> | MIT | machine-learning, linear-regression, neural-network, deep-learning, educational, numpy, ml, learning, regression | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20.0",
"matplotlib>=3.5.0; extra == \"viz\"",
"scipy>=1.7.0; extra == \"viz\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"flake8>=5.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"matplotlib>=3.5.0; extra == \"de... | [] | [] | [] | [
"Homepage, https://github.com/notlousybook/LynxLearn",
"Documentation, https://github.com/notlousybook/LynxLearn#readme",
"Repository, https://github.com/notlousybook/LynxLearn.git",
"Issues, https://github.com/notlousybook/LynxLearn/issues",
"YouTube, https://youtube.com/channel/UCBNE8MNvq1XppUmpAs20m4w"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T17:29:57.623362 | lynxlearn-0.3.0.tar.gz | 82,128 | 2a/d8/55c99902adbc8520ab24d5406f0fae12536301a95e5ed87f555c6bfd67bb/lynxlearn-0.3.0.tar.gz | source | sdist | null | false | 2ceab73605a6db0da660d9b102379251 | 5aa580428adf989af52c36a94579060f8bb1215e1d1d71d467454a5e1492e657 | 2ad855c99902adbc8520ab24d5406f0fae12536301a95e5ed87f555c6bfd67bb | null | [
"LICENSE"
] | 252 |
2.4 | acacia-s2s-toolkit | 1.66 | A python package to support download and analysis of forecasts from ECMWF S2S Database. | # acacia_s2s_toolkit
Python wrapper to support download and analysis of sub-seasonal forecast data, created to support the EU-funded ACACIA project.
| text/markdown | null | Joshua Talib <joshuatalib@ecmwf.int>, Innocent Masukwedza <g.t.masukwedza@reading.ac.uk> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.23",
"cdo>=1.6.0",
"xarray>=2024.9.0",
"eccodes>=2.40.0",
"dask>=2024.9.0",
"pandas>=2.2.3",
"scipy>=1.14.1",
"netCDF4>=1.7.2",
"requests>=2.32.2",
"matplotlib>=3.8",
"cartopy>=0.22",
"huracanpy>=1.3.1"
] | [] | [] | [] | [
"Homepage, https://github.com/joshuatalib/acacia_s2s_toolkit"
] | twine/5.1.1 CPython/3.11.10 | 2026-02-18T17:29:30.794401 | acacia_s2s_toolkit-1.66.tar.gz | 23,559 | a1/e4/bc2778c4f0c4f65a6727a5ad3d4fea0e88db000f38c64b2249143d256338/acacia_s2s_toolkit-1.66.tar.gz | source | sdist | null | false | 81d76c453417bb32a60c79d0a2635187 | d3560d461e79355ab44fdfb693aabf976fed60b97cdd69229a1bbd1e1c8dc6f6 | a1e4bc2778c4f0c4f65a6727a5ad3d4fea0e88db000f38c64b2249143d256338 | null | [] | 251 |
2.4 | hyperquant | 1.54 | A minimal yet hyper-efficient backtesting framework for quantitative trading | # hyperquant
hyperquant is a minimalistic framework for backtesting quantitative trading strategies. It lets users validate strategy ideas quickly and efficiently.
| text/markdown | null | MissinA <1421329142@qq.com> | null | null | MIT | backtesting, hyperquant, quant, trading | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Office/Business :: Financial :: Investment"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiohttp>=3.13",
"coincurve>=21.0.0",
"colorama>=0.4.6",
"cryptography>=44.0.2",
"duckdb>=1.2.2",
"eth-account>=0.10.0",
"numpy>=1.21.0",
"pandas>=2.2.3",
"pybotters>=1.10",
"pyecharts>=2.0.8",
"python-dotenv>=1.2.1",
"web3>=7.14.0"
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/hyperquant",
"Issues, https://github.com/yourusername/hyperquant/issues"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-18T17:29:03.896312 | hyperquant-1.54.tar.gz | 303,096 | 09/0a/2f02219e66b71c85ff3bf1aa3c9ed0c85e896d9db609ec4cd6f8104c295a/hyperquant-1.54.tar.gz | source | sdist | null | false | 7c3fef0afcbeb21ba0915d0cadaff803 | f3f6e1370845f2224726b7733ab4978bff8394e59235ad1a12d02c4592ef591d | 090a2f02219e66b71c85ff3bf1aa3c9ed0c85e896d9db609ec4cd6f8104c295a | null | [] | 229 |
2.4 | simple-finance | 0.1.5 | A Python toolkit for loading financial data and performing regression and portfolio analysis for teaching and research. | # simple-finance
## Install
```bash
pip install simple-finance
```
| text/markdown | Brian Boyer, Royston Vance | null | null | null | MIT License
Copyright (c) 2026 Brian Boyer
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pandas<2.3,>=2.2",
"requests>=2.32.4",
"statsmodels>=0.14.5",
"yfinance>=0.2.66",
"ipykernel>=6.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.6; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"wrds>=0.9.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/rvance1/simple-finance",
"Issues, https://github.com/rvance1/simple-finance/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T17:28:13.847956 | simple_finance-0.1.5.tar.gz | 12,693 | cd/f4/57bcc22096dd0448e3bd7a7ebad867b8956c944233a638e3180c47b61625/simple_finance-0.1.5.tar.gz | source | sdist | null | false | 690c5e32a24fdf1e33c2d2c59c95fdd1 | 0164befcd5eab565e45835be26ea9f8d54693b60df2520c67d89e1602f70cdfa | cdf457bcc22096dd0448e3bd7a7ebad867b8956c944233a638e3180c47b61625 | null | [
"LICENSE"
] | 427 |
2.4 | qlik-sense-mcp-server | 1.4.0 | MCP Server for Qlik Sense Enterprise APIs | # Qlik Sense MCP Server
[](https://pypi.org/project/qlik-sense-mcp-server/)
[](https://pypi.org/project/qlik-sense-mcp-server/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/qlik-sense-mcp-server/)
Model Context Protocol (MCP) server for integration with Qlik Sense Enterprise APIs. Provides a unified interface for Repository API and Engine API operations through the MCP protocol.
## Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [API Reference](#api-reference)
- [Architecture](#architecture)
- [Development](#development)
- [Troubleshooting](#troubleshooting)
- [License](#license)
## Overview
Qlik Sense MCP Server bridges Qlik Sense Enterprise with systems supporting the Model Context Protocol. The server provides 10 tools covering the complete Qlik Sense analytics workflow, including application discovery, data analysis, script extraction, and metadata management.
### Key Features
- **Unified API**: Single interface for Qlik Sense Repository and Engine APIs
- **Security**: Certificate-based authentication support
- **Performance**: Optimized queries and direct API access
- **Analytics**: Advanced data analysis and hypercube creation
- **Metadata**: Comprehensive application and field information
## Features
### Available Tools
| Tool | Description | API | Status |
|------|-------------|-----|--------|
| `get_apps` | Get comprehensive list of applications with metadata | Repository | ✅ |
| `get_app_details` | Get compact app overview (metadata, fields, master items, sheets/objects) | Repository | ✅ |
| `get_app_sheets` | Get list of sheets from application with title and description | Engine | ✅ |
| `get_app_sheet_objects` | Get list of objects from specific sheet with object ID, type and description | Engine | ✅ |
| `get_app_script` | Extract load script from application | Engine | ✅ |
| `get_app_field` | Return values of a field with pagination and wildcard search | Engine | ✅ |
| `get_app_variables` | Return variables split by source with pagination and wildcard search | Engine | ✅ |
| `get_app_field_statistics` | Get comprehensive field statistics | Engine | ✅ |
| `engine_create_hypercube` | Create hypercube for data analysis | Engine | ✅ |
| `get_app_object` | Get specific object layout by ID (GetObject + GetLayout) | Engine | ✅ |
## Installation
### Quick Start with uvx (Recommended)
The easiest way to use Qlik Sense MCP Server is with uvx:
```bash
uvx qlik-sense-mcp-server
```
This command will automatically install and run the latest version without affecting your system Python environment.
### Alternative Installation Methods
#### From PyPI
```bash
pip install qlik-sense-mcp-server
```
#### From Source (Development)
```bash
git clone https://github.com/bintocher/qlik-sense-mcp.git
cd qlik-sense-mcp
make dev
```
### System Requirements
- Python 3.12+
- Qlik Sense Enterprise
- Valid certificates for authentication
- Network access to Qlik Sense server (ports 4242 Repository, 4747 Engine)
- Ensure your MCP client model can handle large JSON responses; prefer small limits in requests during testing
### Setup
1. **Setup certificates**
```bash
mkdir certs
# Copy your Qlik Sense certificates to certs/ directory
```
2. **Create configuration**
```bash
cp .env.example .env
# Edit .env with your settings
```
## Configuration
### Environment Variables (.env)
```bash
# Server connection
QLIK_SERVER_URL=https://your-qlik-server.company.com
QLIK_USER_DIRECTORY=COMPANY
QLIK_USER_ID=your-username
# Certificate paths (absolute paths)
QLIK_CLIENT_CERT_PATH=/path/to/certs/client.pem
QLIK_CLIENT_KEY_PATH=/path/to/certs/client_key.pem
QLIK_CA_CERT_PATH=/path/to/certs/root.pem
# API ports (standard Qlik Sense ports)
QLIK_REPOSITORY_PORT=4242
QLIK_ENGINE_PORT=4747
# Optional HTTP port for metadata requests
QLIK_HTTP_PORT=443
# SSL settings
QLIK_VERIFY_SSL=false
```
### Optional Environment Variables
```bash
# Logging level (default: INFO)
LOG_LEVEL=INFO
# Engine WebSocket timeouts and retries
QLIK_WS_TIMEOUT=8.0 # seconds
QLIK_WS_RETRIES=2 # number of endpoints to try
```
### MCP Configuration
Create `mcp.json` file for MCP client integration:
```json
{
"mcpServers": {
"qlik-sense": {
"command": "uvx",
"args": ["qlik-sense-mcp-server"],
"env": {
"QLIK_SERVER_URL": "https://your-qlik-server.company.com",
"QLIK_USER_DIRECTORY": "COMPANY",
"QLIK_USER_ID": "your-username",
"QLIK_CLIENT_CERT_PATH": "/absolute/path/to/certs/client.pem",
"QLIK_CLIENT_KEY_PATH": "/absolute/path/to/certs/client_key.pem",
"QLIK_CA_CERT_PATH": "/absolute/path/to/certs/root.pem",
"QLIK_REPOSITORY_PORT": "4242",
"QLIK_PROXY_PORT": "4243",
"QLIK_ENGINE_PORT": "4747",
"QLIK_HTTP_PORT": "443",
"QLIK_VERIFY_SSL": "false",
"QLIK_HTTP_TIMEOUT": "10.0",
"QLIK_WS_TIMEOUT": "8.0",
"QLIK_WS_RETRIES": "2",
"LOG_LEVEL": "INFO"
},
"disabled": false,
"autoApprove": [
"get_apps",
"get_app_details",
"get_app_script",
"get_app_field_statistics",
"engine_create_hypercube",
"get_app_field",
"get_app_variables",
"get_app_sheets",
"get_app_sheet_objects",
"get_app_object"
]
}
}
}
```
### Environment Variables Configuration
The server requires the following environment variables for configuration:
#### Required Variables
- **`QLIK_SERVER_URL`** - Qlik Sense server URL (e.g., `https://qlik.company.com`)
- **`QLIK_USER_DIRECTORY`** - User directory for authentication (e.g., `COMPANY`)
- **`QLIK_USER_ID`** - User ID for authentication (e.g., `your-username`)
#### Certificate Configuration (Required for production)
- **`QLIK_CLIENT_CERT_PATH`** - Absolute path to client certificate file (`.pem` format)
- **`QLIK_CLIENT_KEY_PATH`** - Absolute path to client private key file (`.pem` format)
- **`QLIK_CA_CERT_PATH`** - Absolute path to CA certificate file (`.pem` format). If not specified, SSL certificate verification will be disabled
#### Network Configuration
- **`QLIK_REPOSITORY_PORT`** - Repository API port (default: `4242`)
- **`QLIK_PROXY_PORT`** - Proxy API port for authentication (default: `4243`)
- **`QLIK_ENGINE_PORT`** - Engine API port for WebSocket connections (default: `4747`)
- **`QLIK_HTTP_PORT`** - HTTP API port for metadata requests (optional, only used for `/api/v1/apps/{id}/data/metadata` endpoint)
#### SSL and Security
- **`QLIK_VERIFY_SSL`** - Verify SSL certificates (`true`/`false`, default: `true`)
#### Timeouts and Performance
- **`QLIK_HTTP_TIMEOUT`** - HTTP request timeout in seconds (default: `10.0`)
- **`QLIK_WS_TIMEOUT`** - WebSocket connection timeout in seconds (default: `8.0`)
- **`QLIK_WS_RETRIES`** - Number of WebSocket connection retry attempts (default: `2`)
#### Logging
- **`LOG_LEVEL`** - Logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`, default: `INFO`)
## Usage
### Start Server
```bash
# Using uvx (recommended)
uvx qlik-sense-mcp-server
# Using installed package
qlik-sense-mcp-server
# From source (development)
python -m qlik_sense_mcp_server.server
```
### Example Operations
#### Get Applications List
```python
# Via MCP client - get first 50 apps (default)
result = mcp_client.call_tool("get_apps")
print(f"Showing {result['pagination']['returned']} of {result['pagination']['total_found']} apps")
# Search for specific apps
result = mcp_client.call_tool("get_apps", {
"name_filter": "Sales",
"limit": 10
})
# Get more apps (pagination)
result = mcp_client.call_tool("get_apps", {
"offset": 50,
"limit": 50
})
```
#### Analyze Application
```python
# Get comprehensive app analysis
result = mcp_client.call_tool("get_app_details", {
"app_id": "your-app-id"
})
print(f"App has {len(result['data_model']['tables'])} tables")
```
#### Create Data Analysis Hypercube
```python
# Create hypercube for sales analysis
result = mcp_client.call_tool("engine_create_hypercube", {
    "app_id": "your-app-id",
    "dimensions": ["Region", "Product"],
    "measures": ["Sum(Sales)", "Count(Orders)"],
    "max_rows": 1000
})
```
#### Get Field Statistics
```python
# Get detailed field statistics
result = mcp_client.call_tool("get_app_field_statistics", {
    "app_id": "your-app-id",
    "field_name": "Sales"
})
print(f"Average: {result['avg_value']['numeric']}")
```
## API Reference
### get_apps
Retrieves comprehensive list of Qlik Sense applications with metadata, pagination and filtering support.
**Parameters:**
- `limit` (optional): Maximum number of apps to return (default: 50, max: 1000)
- `offset` (optional): Number of apps to skip for pagination (default: 0)
- `name_filter` (optional): Filter apps by name (case-insensitive partial match)
- `app_id_filter` (optional): Filter by specific app ID/GUID
- `include_unpublished` (optional): Include unpublished apps (default: true)
**Returns:** Object containing paginated apps, streams, and pagination metadata
**Example (default - first 50 apps):**
```json
{
  "apps": [...],
  "streams": [...],
  "pagination": {
    "limit": 50,
    "offset": 0,
    "returned": 50,
    "total_found": 1598,
    "has_more": true,
    "next_offset": 50
  },
  "filters": {
    "name_filter": null,
    "app_id_filter": null,
    "include_unpublished": true
  },
  "summary": {
    "total_apps": 1598,
    "published_apps": 857,
    "private_apps": 741,
    "total_streams": 40,
    "showing": "1-50 of 1598"
  }
}
```
**Example (with name filter):**
```python
# Search for apps containing "dashboard"
result = mcp_client.call_tool("get_apps", {
    "name_filter": "dashboard",
    "limit": 10
})

# Get specific app by ID
result = mcp_client.call_tool("get_apps", {
    "app_id_filter": "e2958865-2aed-4f8a-b3c7-20e6f21d275c"
})

# Get next page of results
result = mcp_client.call_tool("get_apps", {
    "limit": 50,
    "offset": 50
})
```
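The `pagination` metadata returned by `get_apps` (`has_more`, `next_offset`) makes it easy to collect all applications in a loop. The sketch below shows the paging logic; the fake client stands in for a real MCP client's `call_tool`, which is assumed here rather than imported:

```python
def fetch_all_apps(call_tool, page_size=50):
    """Page through get_apps using the documented pagination metadata."""
    apps, offset = [], 0
    while True:
        result = call_tool("get_apps", {"limit": page_size, "offset": offset})
        apps.extend(result["apps"])
        page = result["pagination"]
        if not page["has_more"]:
            return apps
        offset = page["next_offset"]

# Minimal fake client to illustrate the loop shape (a real MCP client
# would perform the actual tool call against the server):
def fake_call_tool(name, args):
    total = 120
    offset, limit = args["offset"], args["limit"]
    batch = list(range(offset, min(offset + limit, total)))
    return {
        "apps": batch,
        "pagination": {
            "has_more": offset + limit < total,
            "next_offset": offset + limit,
        },
    }

print(len(fetch_all_apps(fake_call_tool)))  # → 120
```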
### get_app_details
Gets comprehensive application analysis including data model, object counts, and metadata. Provides detailed field information, table structures, and application properties.
**Parameters:**
- `app_id` (required): Application identifier
**Returns:** Detailed application object with data model structure
**Example:**
```json
{
  "app_metadata": {...},
  "data_model": {
    "tables": [...],
    "total_tables": 2,
    "total_fields": 45
  },
  "object_counts": {...}
}
```
### get_app_sheets
Get list of sheets from application with title and description.
**Parameters:**
- `app_id` (required): Application GUID
**Returns:** Object containing application sheets with their IDs, titles and descriptions
**Example:**
```json
{
  "app_id": "e2958865-2aed-4f8a-b3c7-20e6f21d275c",
  "total_sheets": 2,
  "sheets": [
    {
      "sheet_id": "abc123-def456-ghi789",
      "title": "Main Dashboard",
      "description": "Primary analysis dashboard"
    },
    {
      "sheet_id": "def456-ghi789-jkl012",
      "title": "Detailed Analysis",
      "description": "Detailed data analysis"
    }
  ]
}
```
### get_app_sheet_objects
Retrieves list of objects from a specific sheet in Qlik Sense application with their metadata.
**Parameters:**
- `app_id` (required): Application identifier
- `sheet_id` (required): Sheet identifier
**Returns:** Object containing sheet objects with their IDs, types and descriptions
**Example:**
```json
{
  "app_id": "e2958865-2aed-4f8a-b3c7-20e6f21d275c",
  "sheet_id": "abc123-def456-ghi789",
  "total_objects": 3,
  "objects": [
    {
      "object_id": "chart-1",
      "object_type": "barchart",
      "object_description": "Sales by Region"
    },
    {
      "object_id": "table-1",
      "object_type": "table",
      "object_description": "Customer Details"
    },
    {
      "object_id": "kpi-1",
      "object_type": "kpi",
      "object_description": "Total Revenue"
    }
  ]
}
```
### get_app_object
Retrieves layout of a specific object by its ID using sequential GetObject and GetLayout requests.
**Parameters:**
- `app_id` (required): Application identifier
- `object_id` (required): Object identifier
**Returns:** Object layout structure as returned by GetLayout
**Example:**
```json
{
  "qLayout": {
    "...": "..."
  }
}
```
### get_app_script
Retrieves load script from application.
**Parameters:**
- `app_id` (required): Application identifier
**Returns:** Object containing script text and metadata
**Example:**
```json
{
  "qScript": "SET DateFormat='DD.MM.YYYY';\n...",
  "app_id": "app-id",
  "script_length": 2830
}
```
### get_app_field
Returns values of a single field with pagination and optional wildcard search.
**Parameters:**
- `app_id` (required): Application GUID
- `field_name` (required): Field name
- `limit` (optional): Number of values to return (default: 10, max: 100)
- `offset` (optional): Offset for pagination (default: 0)
- `search_string` (optional): Wildcard text mask with `*` and `%` support
- `search_number` (optional): Wildcard numeric mask with `*` and `%` support
- `case_sensitive` (optional): Case sensitivity for `search_string` (default: false)
**Returns:** Object containing field values
**Example:**
```json
{
  "field_values": [
    "Russia",
    "USA",
    "China"
  ]
}
```
### get_app_variables
Returns variables split by source (script/ui) with pagination and wildcard search.
**Parameters:**
- `app_id` (required): Application GUID
- `limit` (optional): Max variables to return (default: 10, max: 100)
- `offset` (optional): Offset for pagination (default: 0)
- `created_in_script` (optional): Return only variables created in script (true/false). If omitted, return both
- `search_string` (optional): Wildcard search by variable name or text value (* and % supported), case-insensitive by default
- `search_number` (optional): Wildcard search among numeric variable values (* and % supported)
- `case_sensitive` (optional): Case sensitive matching for search_string (default: false)
**Returns:** Object containing variables from script and UI
**Example:**
```json
{
  "variables_from_script": {
    "vSalesTarget": "1000000",
    "vCurrentYear": "2024"
  },
  "variables_from_ui": {
    "vSelectedRegion": "Europe",
    "vDateRange": "Q1-Q4"
  }
}
```
### get_app_field_statistics
Retrieves comprehensive field statistics.
**Parameters:**
- `app_id` (required): Application identifier
- `field_name` (required): Field name
**Returns:** Statistical analysis including min, max, average, median, mode, standard deviation
**Example:**
```json
{
  "field_name": "age",
  "min_value": {"numeric": 0},
  "max_value": {"numeric": 2023},
  "avg_value": {"numeric": 40.98},
  "median_value": {"numeric": 38},
  "std_deviation": {"numeric": 24.88}
}
```
### engine_create_hypercube
Creates hypercube for data analysis.
**Parameters:**
- `app_id` (required): Application identifier
- `dimensions` (required): Array of dimension fields
- `measures` (required): Array of measure expressions
- `max_rows` (optional): Maximum rows to return (default: 1000)
**Returns:** Hypercube data with dimensions, measures, and total statistics
**Example:**
```json
{
  "hypercube_data": {
    "qDimensionInfo": [...],
    "qMeasureInfo": [...],
    "qDataPages": [...]
  },
  "total_rows": 30,
  "total_columns": 4
}
```
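The `qDataPages` above follow the standard Qlik Engine API layout, where each page holds a `qMatrix` of rows and each cell carries a display value under `qText` (that inner structure is elided in the example). A small helper can flatten the pages into plain rows; this is an illustrative sketch, not part of the server:

```python
def pages_to_rows(hypercube_data):
    """Flatten Engine API data pages into lists of display values.

    Assumes the standard Engine API cell layout: each page has a
    "qMatrix" of rows, and each cell exposes its text under "qText".
    """
    rows = []
    for page in hypercube_data.get("qDataPages", []):
        for row in page.get("qMatrix", []):
            rows.append([cell.get("qText") for cell in row])
    return rows

# Example with a hand-built page in the Engine API shape:
sample = {
    "qDataPages": [
        {"qMatrix": [
            [{"qText": "Europe"}, {"qText": "Widget"}, {"qText": "1200"}],
            [{"qText": "Asia"}, {"qText": "Widget"}, {"qText": "900"}],
        ]}
    ]
}
print(pages_to_rows(sample))  # → [['Europe', 'Widget', '1200'], ['Asia', 'Widget', '900']]
```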
## Architecture
### Project Structure
```
qlik-sense-mcp/
├── qlik_sense_mcp_server/
│   ├── __init__.py
│   ├── server.py          # Main MCP server
│   ├── config.py          # Configuration management
│   ├── repository_api.py  # Repository API client (HTTP)
│   ├── engine_api.py      # Engine API client (WebSocket)
│   └── utils.py           # Utility functions
├── certs/                 # Certificates (git ignored)
│   ├── client.pem
│   ├── client_key.pem
│   └── root.pem
├── .env.example           # Configuration template
├── mcp.json.example       # MCP configuration template
├── pyproject.toml         # Project dependencies
└── README.md
```
### System Components
#### QlikSenseMCPServer
Main server class handling MCP protocol operations, tool registration, and request routing.
#### QlikRepositoryAPI
HTTP client for Repository API operations including application metadata and administrative functions.
#### QlikEngineAPI
WebSocket client for Engine API operations including data extraction, analytics, and hypercube creation.
#### QlikSenseConfig
Configuration management class handling environment variables, certificate paths, and connection settings.
## Development
### Development Environment Setup
```bash
# Setup development environment
make dev
# Show all available commands
make help
# Build package
make build
```
### Version Management
```bash
# Bump patch version and create PR
make version-patch
# Bump minor version and create PR
make version-minor
# Bump major version and create PR
make version-major
```
### Adding New Tools
1. **Add tool definition in server.py**
```python
# In tools_list
Tool(name="new_tool", description="Tool description", inputSchema={...})
```
2. **Add handler in server.py**
```python
# In handle_call_tool()
elif name == "new_tool":
    result = await asyncio.to_thread(self.api_client.new_method, arguments)
    return [TextContent(type="text", text=json.dumps(result, indent=2))]
```
3. **Implement method in API client**
```python
# In repository_api.py or engine_api.py
def new_method(self, param: str) -> Dict[str, Any]:
    """Method implementation."""
    return result
```
## Troubleshooting
### Common Issues
#### Certificate Errors
```
SSL: CERTIFICATE_VERIFY_FAILED
```
**Solution:**
- Verify certificate paths in `.env`
- Check certificate expiration
- Set `QLIK_VERIFY_SSL=false` for testing
#### Connection Errors
```
ConnectionError: Failed to connect to Engine API
```
**Solution:**
- Verify port 4747 accessibility
- Check server URL correctness
- Verify firewall settings
#### Authentication Errors
```
401 Unauthorized
```
**Solution:**
- Verify `QLIK_USER_DIRECTORY` and `QLIK_USER_ID`
- Check user exists in Qlik Sense
- Verify user permissions
### Diagnostics
#### Test Configuration
```bash
python -c "
from qlik_sense_mcp_server.config import QlikSenseConfig
config = QlikSenseConfig.from_env()
print('Config valid:', config and hasattr(config, 'server_url'))
print('Server URL:', getattr(config, 'server_url', 'Not set'))
"
```
#### Test Repository API
```bash
python -c "
from qlik_sense_mcp_server.server import QlikSenseMCPServer
server = QlikSenseMCPServer()
print('Server initialized:', server.config_valid)
"
```
## Performance
### Optimization Recommendations
1. **Use filters** to limit data volume
2. **Limit result size** with `max_rows` parameter
3. **Use Repository API** for metadata (faster than Engine API)
### Benchmarks
| Operation | Average Time | Recommendations |
|-----------|--------------|-----------------|
| get_apps | 0.5s | Use filters |
| get_app_details | 0.5s-2s | Analyze specific apps |
| get_app_sheets | 0.3-1s | Fast metadata retrieval |
| get_app_sheet_objects | 0.5-2s | Sheet analysis |
| get_app_script | 1-5s | Script extraction |
| get_app_field | 0.5-2s | Field values with pagination |
| get_app_variables | 0.3-1s | Variable listing |
| get_app_field_statistics | 0.5-2s | Use for numeric fields |
| engine_create_hypercube | 1-10s | Limit dimensions and measures |
## Security
### Recommendations
1. **Store certificates securely** - exclude from git
2. **Use environment variables** for sensitive data
3. **Limit user permissions** in Qlik Sense
4. **Update certificates regularly**
5. **Monitor API access**
### Access Control
Create user in QMC with minimal required permissions:
- Read applications
- Access Engine API
- View data (if needed for analysis)
## License
MIT License
Copyright (c) 2025 Stanislav Chernov
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---
**Project Status**: Production Ready | 10/10 Tools Working | v1.3.4
**Installation**: `uvx qlik-sense-mcp-server`
| text/markdown | null | Stanislav Chernov <bintocher@yandex.com> | null | null | MIT | analytics, mcp, model-context-protocol, qlik, qlik-sense | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Software Development :: Libraries :: Python... | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.25.0",
"mcp>=1.1.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"websocket-client>=1.6.0",
"build>=0.10.0; extra == \"dev\"",
"bump2version>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/bintocher/qlik-sense-mcp",
"Repository, https://github.com/bintocher/qlik-sense-mcp",
"Issues, https://github.com/bintocher/qlik-sense-mcp/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T17:27:22.627658 | qlik_sense_mcp_server-1.4.0.tar.gz | 95,912 | 38/59/529903df6127209effd7f32f1f436c5c362075d3d0208c27d7b81b4832be/qlik_sense_mcp_server-1.4.0.tar.gz | source | sdist | null | false | f0e79c0f0eac9cd3c903ddb009083d1b | ab05c81a9a319d6ae0aa9c13f105327307e34c9750bebca7c99cb2c075141257 | 3859529903df6127209effd7f32f1f436c5c362075d3d0208c27d7b81b4832be | null | [
"LICENSE"
] | 247 |
2.4 | iqcc-cloud-client | 0.16.2 | Access to IQCC quantum cloud | # IQCC Cloud client
Client for remote (cloud) access to IQCC quantum and classical resources. | text/markdown | null | Nikola Sibalic <nikola@quantum-machines.co> | null | null | BSD3 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Physics... | [] | null | null | >=3.10 | [] | [] | [] | [
"arrow>=1.3.0",
"cryptography>=46.0.5",
"deepdiff>=8.6.1",
"h11>=0.16.0",
"h2>=4.3.0",
"httpx[http2]>=0.23.3",
"ipywidgets>=8.1.5",
"jsonschema>=4.24.0",
"numpy>=1.26.4",
"plotext>=5.3.2",
"protobuf>=6.33.5",
"pydantic-extra-types[pendulum]>=2.10.3",
"pydantic>=2.9.1",
"pyjwt>=2.10.1",
"... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:27:13.489083 | iqcc_cloud_client-0.16.2-py3-none-any.whl | 20,332 | b0/3d/bf3f1d68fed96768e292cff7362793f9f83410eb4156863ef46a84240094/iqcc_cloud_client-0.16.2-py3-none-any.whl | py3 | bdist_wheel | null | false | b54ba9c5e0836580aacc181a52c785d1 | c51b580472c729b13d95df3135af9e12b335a32e085887fdebc1f7c645a9eb31 | b03dbf3f1d68fed96768e292cff7362793f9f83410eb4156863ef46a84240094 | null | [
"LICENSE"
] | 247 |
2.1 | mrsal | 3.2.1 | Production-ready AMQP message broker abstraction with advanced retry logic, dead letter exchanges, and high availability features. | # MRSAL AMQP
[](https://pypi.org/project/mrsal/)
[](https://www.python.org/downloads/)
[](https://github.com/NeoMedSys/mrsal/actions/workflows/mrsal.yaml)
[](https://neomedsys.github.io/mrsal/reports/coverage/htmlcov/)
## Intro
Mrsal is a **production-ready** message broker abstraction on top of [RabbitMQ](https://www.rabbitmq.com/), [aio-pika](https://aio-pika.readthedocs.io/en/latest/) and [Pika](https://pika.readthedocs.io/en/stable/index.html).
**Why Mrsal?** Setting up robust AMQP in production is complex. You need dead letter exchanges, retry logic, quorum queues, proper error handling, queue management, and more. Mrsal gives you **enterprise-grade messaging** out of the box with just a few lines of code.
**What makes Mrsal production-ready:**
- **Dead Letter Exchange**: Automatic DLX setup with configurable retry cycles
- **High Availability**: Quorum queues for data safety across cluster nodes
- **Performance Tuning**: Queue limits, overflow behavior, lazy queues, prefetch control
- **Zero Configuration**: Sensible defaults that work in production
- **Full Observability**: Comprehensive logging and retry tracking
- **Type Safety**: Pydantic integration for payload validation
- **Async & Sync**: Both blocking and async implementations
The goal is to make Mrsal **trivial to re-use** across all services in your distributed system and to make advanced message queuing protocols **easy and safe**. No more big chunks of repetitive code across your services or bespoke solutions to handle dead letters.
**Perfect for:**
- Microservices communication
- Event-driven architectures
- Background job processing
- Real-time data pipelines
- Mission-critical message processing
###### Mrsal is Arabic for a small arrow and is used to describe something that performs a task with lightness and speed.
## Quick Start guide
### 0. Requirements
1. A RabbitMQ server up and running
2. Python 3.10 or newer
3. Tested on Linux only
### 1. Installing
First things first:
```bash
poetry add mrsal
```
Next, set the username, password, and server name for your RabbitMQ setup. It's advisable to use a `.env` file or your `(.zsh)rc` file for persistence.
```bash
[RabbitEnvVars]
RABBITMQ_USER=******
RABBITMQ_PASSWORD=******
RABBITMQ_VHOST=******
RABBITMQ_DOMAIN=******
RABBITMQ_PORT=******
# FOR TLS
RABBITMQ_CAFILE=/path/to/file
RABBITMQ_CERT=/path/to/file
RABBITMQ_KEY=/path/to/file
```
###### Mrsal was first developed by NeoMedSys and the research group [CRAI](https://crai.no/) at the University Hospital of Oslo.
### 2. Setup and connect
- Example 1: Let's create a blocking connection on localhost with no TLS encryption
```python
from mrsal.amqp.subclass import MrsalBlockingAMQP
mrsal = MrsalBlockingAMQP(
    host=RABBITMQ_DOMAIN,  # Use a custom domain if you are using SSL, e.g. mrsal.on-example.com
    port=int(RABBITMQ_PORT),
    credentials=(RABBITMQ_USER, RABBITMQ_PASSWORD),
    virtual_host=RABBITMQ_VHOST,
    ssl=False  # Set this to True for SSL/TLS (you will need to set the cert paths if you do so)
)
# Boom, you are staged for connection. This instantiation stages for connection only.
```
#### 2.1 Publish
Now let's publish our message of friendship on the friendship exchange.
Note: `auto_declare=True` means that MrsalAMQP will create the specified `exchange` and `queue`, then bind them together using `routing_key` in one go. If you want to customize each step, turn off `auto_declare` and perform each step yourself with custom arguments.
```python
import pika

# BasicProperties is used to set the message properties
prop = pika.BasicProperties(
    app_id='zoomer_app',
    message_id='zoomer_msg',
    content_type='application/json',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)

message_body = {'zoomer_message': 'Get it yia bish'}

# Publish the message to the exchange to be routed to the queue
mrsal.publish_message(exchange_name='zoomer_x',
                      exchange_type='direct',
                      queue_name='zoomer_q',
                      routing_key='zoomer_key',
                      message=message_body,
                      prop=prop,
                      auto_declare=True)
```
#### 2.2 Consume
Now lets setup a consumer that will listen to our very important messages. If you are using scripts rather than notebooks then it's advisable to run consume and publish separately. We are going to need a callback function which is triggered upon receiving the message from the queue we subscribe to. You can use the callback function to activate something in your system.
Note:
- If you start a consumer with `callback_with_delivery_info=True` then your callback function should have at least these params `(method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str)`.
- If not, then it should have at least `(message_param: str)`
- We can use pydantic BaseModel classes to enforce types in the body
```python
import pika
from pydantic import BaseModel

class ZoomerNRJ(BaseModel):
    zoomer_message: str

def consumer_callback_with_delivery_info(
    method_frame: pika.spec.Basic.Deliver,
    properties: pika.spec.BasicProperties,
    body: str
):
    if 'Get it' in body:
        app_id = properties.app_id
        msg_id = properties.message_id
        print(f'app_id={app_id}, msg_id={msg_id}')
        print('Slay with main character vibe')
    else:
        raise SadZoomerEnergyError('Zoomer sad now')

mrsal.start_consumer(
    queue_name='zoomer_q',
    exchange_name='zoomer_x',
    callback_args=None,  # no need to specify if you do not need it
    callback=consumer_callback_with_delivery_info,
    auto_declare=True,
    auto_ack=False
)
```
Done! Your first message of zoomerism has been sent to the zoomer queue on the Zoomeru exchange over a blocking connection. Let's see how we can do it async in the next step.
### 3. Setup and Connect Async
It's usually best practice to use async consumers when high throughput is expected. We can do this by adjusting the code slightly to fit Python's async framework.
```python
from mrsal.amqp.subclass import MrsalAsyncAMQP
mrsal = MrsalAsyncAMQP(
    host=RABBITMQ_DOMAIN,  # Use a custom domain if you are using SSL, e.g. mrsal.on-example.com
    port=int(RABBITMQ_PORT),
    credentials=(RABBITMQ_USER, RABBITMQ_PASSWORD),
    virtual_host=RABBITMQ_VHOST,
    ssl=False  # Set this to True for SSL/TLS (you will need to set the cert paths if you do so)
)
# Boom, you are staged for async connection.
```
#### 3.1 Consume
Let's go turbo and set up the consumer in async for efficient AMQP handling:
```python
import asyncio

import pika
from pydantic import BaseModel

class ZoomerNRJ(BaseModel):
    zoomer_message: str

async def consumer_callback_with_delivery_info(
    method_frame: pika.spec.Basic.Deliver,
    properties: pika.spec.BasicProperties,
    body: str
):
    if 'Get it' in body:
        app_id = properties.app_id
        msg_id = properties.message_id
        print(f'app_id={app_id}, msg_id={msg_id}')
        print('Slay with main character vibe')
    else:
        raise SadZoomerEnergyError('Zoomer sad now')

asyncio.run(mrsal.start_consumer(
    queue_name='zoomer_q',
    exchange_name='zoomer_x',
    callback_args=None,  # no need to specify if you do not need it
    callback=consumer_callback_with_delivery_info,
    auto_declare=True,
    auto_ack=False
))
```
That simple! You now have setups for advanced message queuing protocols that you can use to promote friendship or other necessary communication between your services, over either blocking or async connections.
###### Note! There are many parameters and settings that you can use to set up a more sophisticated communication protocol in both blocking or async connection with pydantic BaseModels to enforce data types in the expected payload.
### 4. Advanced Features
#### 4.1 Dead Letter Exchange & Retry Logic with Cycles
Mrsal provides sophisticated retry mechanisms with both immediate retries and time-delayed retry cycles:
```python
mrsal = MrsalBlockingAMQP(
    host=RABBITMQ_DOMAIN,
    port=int(RABBITMQ_PORT),
    credentials=(RABBITMQ_USER, RABBITMQ_PASSWORD),
    virtual_host=RABBITMQ_VHOST,
    dlx_enable=True,  # Default: creates '<exchange_name>.dlx'
)

# Advanced retry configuration with cycles
mrsal.start_consumer(
    queue_name='critical_queue',
    exchange_name='critical_exchange',
    exchange_type='direct',
    routing_key='critical_key',
    callback=my_callback,
    auto_ack=False,                  # Required for retry logic
    dlx_enable=True,                 # Enable DLX for this queue
    dlx_exchange_name='custom_dlx',  # Optional: custom DLX name
    dlx_routing_key='dlx_key',       # Optional: custom DLX routing
    enable_retry_cycles=True,        # Enable time-delayed retry cycles
    retry_cycle_interval=10,         # Minutes between retry cycles
    max_retry_time_limit=60,         # Total minutes before permanent failure
)
```
**How the advanced retry logic works:**
1. **Immediate Retries**: Failed messages are first retried in place (requires `auto_ack=False`)
2. **Retry Cycles**: Send to DLX with TTL for time-delayed retry
3. **Cycle Tracking**: Each cycle increments counters and tracks total elapsed time
4. **Permanent Failure**: After `max_retry_time_limit` is exceeded → message stays in DLX for manual review
**Benefits:**
- Handles longer outages with time-delayed cycles
- Full observability with retry tracking
- Manual intervention capability for persistent failures
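The cycle decision described above can be sketched as a simple elapsed-time check. This is an illustrative model of the behavior, not Mrsal's actual internals; the function name and return values are invented for the example:

```python
import time

def retry_decision(first_failure_ts, retry_cycle_interval, max_retry_time_limit):
    """Illustrative sketch of the retry-cycle decision: keep sending the
    message back through the DLX with a TTL until the total elapsed time
    exceeds the limit. Intervals and limits are in minutes, matching the
    parameters shown above.
    """
    elapsed_min = (time.time() - first_failure_ts) / 60
    if elapsed_min >= max_retry_time_limit:
        return "park-in-dlx"  # permanent failure: left in DLX for manual review
    # Otherwise schedule another cycle; TTL is expressed in milliseconds
    return ("retry-cycle", retry_cycle_interval * 60 * 1000)

# A message that first failed two hours ago, with a 60-minute limit:
print(retry_decision(time.time() - 7200, 10, 60))  # → 'park-in-dlx'
```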
#### 4.2 Queue Management & Performance
Configure queues for optimal performance and resource management:
```python
mrsal.start_consumer(
    queue_name='high_performance_queue',
    exchange_name='perf_exchange',
    exchange_type='direct',
    routing_key='perf_key',
    callback=my_callback,
    # Queue limits and overflow behavior
    max_queue_length=10000,        # Max messages before overflow
    max_queue_length_bytes=None,   # Optional: max queue size in bytes
    queue_overflow="drop-head",    # "drop-head" or "reject-publish"
    # Performance settings
    single_active_consumer=False,  # Allow parallel processing
    lazy_queue=False,              # Keep messages in RAM for speed
    use_quorum_queues=True,        # High availability
    # For memory-constrained, low-priority queues, flip these instead:
    #   lazy_queue=True              (store messages on disk)
    #   single_active_consumer=True  (sequential processing)
)
```
**Queue Configuration Options:**
- **\`max_queue_length\`**: Limit queue size to prevent memory issues
- **\`queue_overflow\`**:
- \`"drop-head"\`: Remove oldest messages when full
- \`"reject-publish"\`: Reject new messages when full
- **\`single_active_consumer\`**: Only one consumer processes at a time (good for ordered processing)
- **\`lazy_queue\`**: Store messages on disk instead of RAM (memory efficient)
- **\`use_quorum_queues\`**: Enhanced durability and performance in clusters
#### 4.3 Quorum Queues
Quorum queues provide better data safety and performance for production environments:
```python
mrsal = MrsalBlockingAMQP(
    host=RABBITMQ_DOMAIN,
    port=int(RABBITMQ_PORT),
    credentials=(RABBITMQ_USER, RABBITMQ_PASSWORD),
    virtual_host=RABBITMQ_VHOST,
    use_quorum_queues=True  # Default: enables quorum queues
)

# Per-queue configuration
mrsal.start_consumer(
    queue_name='high_availability_queue',
    exchange_name='ha_exchange',
    exchange_type='direct',
    routing_key='ha_key',
    callback=my_callback,
    use_quorum_queues=True  # This queue will be highly available
)
```
**Benefits:**
- Better data replication across RabbitMQ cluster nodes
- Improved performance under high load
- Automatic leader election and failover
- Works great in Kubernetes and bare metal deployments
#### 4.4 Production-Ready Example
```python
from mrsal.amqp.subclass import MrsalBlockingAMQP
from pydantic import BaseModel
import json

class OrderMessage(BaseModel):
    order_id: str
    customer_id: str
    amount: float

def process_order(method_frame, properties, body):
    try:
        order_data = json.loads(body)
        order = OrderMessage(**order_data)
        # Process the order
        print(f"Processing order {order.order_id} for customer {order.customer_id}")
        # Simulate processing that might fail
        if order.amount < 0:
            raise ValueError("Invalid order amount")
    except Exception as e:
        print(f"Order processing failed: {e}")
        raise  # This will trigger retry logic

# Production-ready setup with full retry cycles
mrsal = MrsalBlockingAMQP(
    host=RABBITMQ_DOMAIN,
    port=int(RABBITMQ_PORT),
    credentials=(RABBITMQ_USER, RABBITMQ_PASSWORD),
    virtual_host=RABBITMQ_VHOST,
    dlx_enable=True,         # Automatic DLX for failed orders
    use_quorum_queues=True,  # High availability
    prefetch_count=10        # Process up to 10 messages concurrently
)

mrsal.start_consumer(
    queue_name='orders_queue',
    exchange_name='orders_exchange',
    exchange_type='direct',
    routing_key='new_order',
    callback=process_order,
    payload_model=OrderMessage,  # Automatic validation
    auto_ack=False,              # Manual ack for reliability
    auto_declare=True,           # Auto-create exchange/queue/DLX
    # Advanced retry configuration
    enable_retry_cycles=True,    # Enable retry cycles
    retry_cycle_interval=15,     # 15 minutes between cycles
    max_retry_time_limit=120,    # 2 hours total retry time
    # Queue performance settings
    max_queue_length=50000,           # Handle large order volumes
    queue_overflow="reject-publish",  # Reject when full (backpressure)
    single_active_consumer=False      # Parallel processing for speed
)
```
---
| text/markdown | Jon E. Nesvold | jnesvold@neomedsys.io | Jon E Nesvold | jnesvold@neomedsys.io | GPL-3.0-or-later | message broker, RabbitMQ, Pika, Mrsal, AMQP, async, dead letter exchange | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: ... | [] | https://github.com/NeoMedSys/mrsal | null | <3.15,>=3.10 | [] | [] | [] | [
"colorlog<7.0.0,>=6.7.0",
"pika<2.0.0,>=1.3.0",
"retry<0.10.0,>=0.9.2",
"tenacity<10.0.0,>=9.0.0",
"aio-pika<10.0.0,>=9.4.3",
"pydantic<3.0.0,>=2.11.5"
] | [] | [] | [] | [
"Repository, https://github.com/NeoMedSys/mrsal",
"Documentation, https://neomedsys.github.io/mrsal/"
] | poetry/1.8.0 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-18T17:26:47.446264 | mrsal-3.2.1.tar.gz | 35,026 | 7d/87/73542122ed23a92d5d72d1e5a4720b57699a5376364d5037f1d992dc854f/mrsal-3.2.1.tar.gz | source | sdist | null | false | 7b57ac99bbe6d00ff79a30ee36278e76 | dbb145a1a71690a0622cfb6e507cd4619a362f50d6c06b7494d8ecded55c4c6f | 7d8773542122ed23a92d5d72d1e5a4720b57699a5376364d5037f1d992dc854f | null | [] | 269 |
2.4 | epita-coding-style | 2.4.2 | EPITA C Coding Style Checker - validates C code against EPITA coding standards | # EPITA C Coding Style Checker
A fast C linter for EPITA coding style rules. Uses [tree-sitter](https://tree-sitter.github.io/) for robust AST-based parsing.
## Installation
```bash
pipx install epita-coding-style
```
## Quick Start
```bash
epita-coding-style src/ # Check files/directories
epita-coding-style --list-rules # List all rules with descriptions
epita-coding-style --show-config # Show current configuration
epita-coding-style --help # Full usage info
```
## Configuration
Configuration is auto-detected from (in order):
- `.epita-style`
- `.epita-style.toml`
- `epita-style.toml`
- `[tool.epita-coding-style]` in `pyproject.toml`
**Priority:** CLI flags > config file > preset > defaults
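The precedence above can be pictured as layered dictionary merging, where each later layer overrides keys from the earlier ones. This is an illustrative sketch, not the checker's actual code:

```python
def resolve_config(defaults, preset=None, config_file=None, cli=None):
    """Merge configuration layers: defaults < preset < config file < CLI flags."""
    merged = dict(defaults)
    for layer in (preset, config_file, cli):
        merged.update(layer or {})
    return merged

# A preset raises max_lines to 40, but an explicit CLI flag wins:
print(resolve_config({"max_lines": 25}, preset={"max_lines": 40},
                     cli={"max_lines": 30}))  # → {'max_lines': 30}
```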
### Generate a Config File
```bash
epita-coding-style --show-config --no-color > .epita-style.toml
```
This outputs a complete, commented TOML config you can customize.
### Presets
```bash
epita-coding-style --preset 42sh src/ # 40 lines, goto/cast allowed
epita-coding-style --preset noformat src/ # Same + skip clang-format
```
### Example Config
```toml
# .epita-style.toml
max_lines = 40
[rules]
"keyword.goto" = false # Allow goto
"cast" = false # Allow casts
```
Or in `pyproject.toml`:
```toml
[tool.epita-coding-style]
max_lines = 40
[tool.epita-coding-style.rules]
"keyword.goto" = false
```
## clang-format
The `format` rule uses `clang-format` to check code formatting. Requires `clang-format` to be installed.
The checker looks for `.clang-format` in the file's directory (walking up to root), or uses the bundled EPITA config.
To disable: set `"format" = false` in your config, or use `--preset noformat`.
## Pre-commit Hook
```yaml
# .pre-commit-config.yaml
repos:
- repo: https://github.com/KazeTachinuu/epita-coding-style
rev: v2.3.0
hooks:
- id: epita-coding-style
args: [--preset, 42sh] # optional
```
## License
MIT
| text/markdown | Hugo | null | null | null | null | c, checker, coding-style, epita, linter | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: C",
"Programming Language :: Python :: 3",
"Programmi... | [] | null | null | >=3.10 | [] | [] | [] | [
"tomli>=2.0.0; python_version < \"3.11\"",
"tree-sitter-c>=0.23.0",
"tree-sitter>=0.23.0",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/KazeTachinuu/coding-style",
"Repository, https://github.com/KazeTachinuu/coding-style",
"Issues, https://github.com/KazeTachinuu/coding-style/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:25:11.359445 | epita_coding_style-2.4.2.tar.gz | 18,808 | 6e/ca/3e9f58960343ac459f24d060d0670fcb7403e2cbd086a197f1330452026c/epita_coding_style-2.4.2.tar.gz | source | sdist | null | false | f6868adc061bd55bd75b5fc824a71846 | b4ad9c1aedc0f0bc59722e181c4a6ad6201b1e0c46e07a5af384ffbbdee650d2 | 6eca3e9f58960343ac459f24d060d0670fcb7403e2cbd086a197f1330452026c | MIT | [] | 248 |
2.4 | tentacletk | 0.10.1 | A multi-application marking menu and UI framework for Maya, 3ds Max, and Blender. | [](test/)
[](https://www.gnu.org/licenses/lgpl-3.0.en.html)
[](https://pypi.org/project/tentacletk/)
# Tentacle: A Python3/qtpy Marking Menu
Tentacle is a Python/qtpy marking menu runtime for DCC applications, built on top of `uitk`.
It currently ships a full slot ecosystem for Maya (tested with Maya 2025), with wrapper entry points for Blender and 3ds Max.
## Design
Tentacle runs on top of [uitk](https://github.com/m3trik/uitk.git), which provides:
- Dynamic `.ui` loading via `Switchboard`
- Convention-based slot binding (`widget_name -> slot method`)
- The marking-menu engine (`MarkingMenu`) and gesture path overlay (`Overlay`)
## Example
The following example demonstrates re-opening the last scene, renaming a material, and selecting geometry by that material.

## Structure
Current architecture overview:
```mermaid
flowchart TD
A["TclMaya<br/>(tentacle/tcl_maya.py)"] --> B["MarkingMenu<br/>(uitk)"]
B --> C["Switchboard<br/>(uitk)"]
C --> D["UI Files<br/>(tentacle/ui/*.ui)"]
C --> E["Slots<br/>(tentacle/slots/maya/*.py)"]
B --> F["Overlay<br/>(uitk/widgets/marking_menu/overlay.py)"]
A --> G["MayaUiHandler<br/>(mayatk)"]
```
| Module | Description |
| ------------- | ------------- |
| [tentacle/tcl_maya.py](../tentacle/tcl_maya.py) | Maya entry point and default key/button bindings for the marking menu. |
| [tentacle/slots](../tentacle/slots) | Slot classes containing command logic (Maya-focused). |
| [tentacle/ui](../tentacle/ui) | Dynamic `.ui` definitions for start menus and submenus. |
| [uitk/widgets/marking_menu/_marking_menu.py](../../uitk/uitk/widgets/marking_menu/_marking_menu.py) | Core marking menu engine (input state, transitions, window handling). |
| [uitk/widgets/marking_menu/overlay.py](../../uitk/uitk/widgets/marking_menu/overlay.py) | Gesture/path overlay used for submenu path continuity and drawing. |
## Installation
Tentacle can be installed either with pip directly from the command line, or on Windows by downloading and running the mayapy package manager.
### Installation via pip
Install via pip in a command line window using:
```bash
path/to/mayapy.exe -m pip install tentacletk
```
### Installation Using Mayapy Package Manager
Alternatively, you can use the mayapy package manager for a streamlined installation process.
Download the mayapy package manager from [here](https://github.com/m3trik/windows-shell-scripting/blob/master/mayapy-package-manager.bat). When prompted, enter your Maya version, press `1` to install a package, and give `tentacletk` as the package name.
## Usage
To launch the marking menu:
For Maya, add the following to your `userSetup.py`:
```python
import pymel.core as pm


def start_tentacle():
    from tentacle import TclMaya
    TclMaya(key_show='Key_F12')  # Use Qt key names, e.g. Key_F12


pm.evalDeferred(start_tentacle)
```
## Menu Wiring (How It Works)
Tentacle uses naming conventions between UI widgets and slot methods:
- UI file names determine menu identity (example: `materials.ui`, `main#startmenu.ui`)
- Widget object names map to methods in the matching slots class
- `b005` -> `def b005(...)`
- `cmb002` -> `def cmb002(...)`
- Optional setup hooks: `def cmb002_init(widget)`
- "Info" buttons (`i*`) route by `accessibleName` to submenu UIs
This wiring is handled by `uitk` `Switchboard` + `MarkingMenu`.
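The convention can be modeled with a toy resolver. This is a sketch of the idea only (the class and widget names are made up); the real binding lives in `uitk`'s `Switchboard`:

```python
class MaterialsSlots:
    """Hypothetical slot class matching a 'materials.ui' file."""

    def b005(self):
        return "rename material"

    def cmb002(self):
        return "list materials"

    def cmb002_init(self, widget):
        # Optional setup hook, called once to populate the widget.
        widget["items"] = ["lambert1", "blinn1"]


def bind(slots, widget_name):
    """Resolve a widget object name to its slot method by convention."""
    return getattr(slots, widget_name, None)


slots = MaterialsSlots()
print(bind(slots, "b005")())  # -> "rename material"
```

Because the lookup is name-based, adding a widget to the `.ui` file and a same-named method to the slot class is all the wiring needed.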
## Platform Support
- Maya: full menu and slot coverage
- Blender: wrapper class exists (`TclBlender`), no equivalent slot suite in this repo
- 3ds Max: wrapper class exists (`TclMax`), no equivalent slot suite in this repo
| text/markdown | null | Ryan Simpson <m3trik@outlook.com> | null | null | LGPL-3.0-or-later | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pythontk>=0.7.78",
"uitk>=1.0.89",
"mayatk>=0.10.2"
] | [] | [] | [] | [
"Homepage, https://github.com/m3trik/tentacle",
"Repository, https://github.com/m3trik/tentacle"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:24:47.213688 | tentacletk-0.10.1-py3-none-any.whl | 242,752 | 48/ba/ad7855cbada40861ce1815e0a70f5f650ebd38ae06c5c06c28eeafa6c330/tentacletk-0.10.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 7d4fdcfe4919dc58f50be54ca85326f1 | 5830f7ce85eeeb338caef2d2764ed90a043d3474e5c44d41c6b2b8ee62467c14 | 48baad7855cbada40861ce1815e0a70f5f650ebd38ae06c5c06c28eeafa6c330 | null | [
"COPYING.LESSER"
] | 97 |
2.4 | searcherator | 0.1.1 | Async web search client using Brave Search API with built-in caching, rate limiting, and batch processing | # Searcherator
Searcherator is a Python package that provides a convenient way to perform web searches using the Brave Search API with built-in caching, automatic rate limiting, and efficient batch processing capabilities.
## Features
- Async/await support for modern Python applications
- Automatic caching with configurable TTL
- Built-in rate limiting to respect API quotas
- Efficient batch processing for multiple concurrent searches
- Support for multiple languages and countries
- Comprehensive exception hierarchy for robust error handling
- Real-time rate limit tracking and monitoring
## Installation
```bash
pip install searcherator
```
## Requirements
- Python 3.8+
- Brave Search API key ([Get one here](https://brave.com/search/api/))
## Quick Start
```python
from searcherator import Searcherator
import asyncio
async def main():
    # Basic search
    search = Searcherator("Python programming")

    # Get URLs from search results
    urls = await search.urls()
    print(urls)

    # Get detailed results
    results = await search.detailed_search_result()
    for result in results:
        print(f"{result['title']}: {result['url']}")

    # Clean up
    await Searcherator.close_session()


if __name__ == "__main__":
    asyncio.run(main())
```
## Usage Examples
### Basic Search
```python
from searcherator import Searcherator
import asyncio
async def main():
    search = Searcherator("Python tutorials", num_results=10)
    results = await search.search_result()
    print(results)
    await Searcherator.close_session()


asyncio.run(main())
```
### Localized Search
```python
# German search
german_search = Searcherator(
    "Zusammenfassung Buch 'Demian' von 'Hermann Hesse'",
    language="de",
    country="de",
    num_results=10,
)
results = await german_search.search_result()
```
### Batch Processing
```python
import asyncio
from searcherator import Searcherator
async def batch_search():
    queries = ["Python", "JavaScript", "Rust", "Go", "TypeScript"]
    try:
        # Create search instances
        searches = [Searcherator(q, num_results=5) for q in queries]

        # Run all searches concurrently (rate limiting handled automatically)
        results = await asyncio.gather(
            *[s.search_result() for s in searches],
            return_exceptions=True,
        )

        # Process results
        for query, result in zip(queries, results):
            if isinstance(result, dict):
                print(f"{query}: {len(result.get('web', {}).get('results', []))} results")
    finally:
        await Searcherator.close_session()
asyncio.run(batch_search())
```
### Error Handling
```python
from searcherator import (
    Searcherator,
    SearcheratorAuthError,
    SearcheratorRateLimitError,
    SearcheratorTimeoutError,
    SearcheratorAPIError,
)


async def safe_search():
    try:
        search = Searcherator("Python", timeout=10)
        results = await search.search_result()
    except SearcheratorAuthError:
        print("Invalid API key")
    except SearcheratorRateLimitError as e:
        print(f"Rate limited. Resets in {e.reset_per_second}s")
    except SearcheratorTimeoutError:
        print("Request timed out")
    except SearcheratorAPIError as e:
        print(f"API error: {e.status_code} - {e.message}")
    finally:
        await Searcherator.close_session()
```
### Monitoring Rate Limits
```python
search = Searcherator("Python")
results = await search.search_result()
print(f"Rate limit (per second): {search.rate_limit_per_second}")
print(f"Remaining (per second): {search.rate_remaining_per_second}")
print(f"Rate limit (per month): {search.rate_limit_per_month}")
print(f"Remaining (per month): {search.rate_remaining_per_month}")
```
## API Reference
### Searcherator
```python
Searcherator(
    search_term: str = "",
    num_results: int = 5,
    country: str | None = "us",
    language: str | None = "en",
    api_key: str | None = None,
    spellcheck: bool = False,
    timeout: int = 30,
    clear_cache: bool = False,
    ttl: int = 7,
    logging: bool = False,
)
```
#### Parameters
- **search_term** (str): The query string to search for
- **num_results** (int): Maximum number of results to return (default: 5)
- **country** (str): Country code for search results (default: "us")
- **language** (str): Language code for search results (default: "en")
- **api_key** (str): Brave Search API key (default: None, uses BRAVE_API_KEY environment variable)
- **spellcheck** (bool): Enable spell checking on queries (default: False)
- **timeout** (int): Request timeout in seconds (default: 30)
- **clear_cache** (bool): Clear existing cached results (default: False)
- **ttl** (int): Time-to-live for cached results in days (default: 7)
- **logging** (bool): Enable cache operation logging (default: False)
#### Methods
##### `async search_result() -> dict`
Returns the full search results as a dictionary from the Brave Search API.
##### `async urls() -> list[str]`
Returns a list of URLs from the search results.
##### `async detailed_search_result() -> list[dict]`
Returns detailed information for each search result including title, URL, description, and metadata.
##### `async print() -> None`
Pretty prints the full search results.
##### `@classmethod async close_session()`
Closes the shared aiohttp session. Call this when done with all searches.
## Authentication
Set your Brave Search API key as an environment variable:
```bash
# Linux/macOS
export BRAVE_API_KEY="your-api-key-here"
# Windows
set BRAVE_API_KEY=your-api-key-here
```
Or provide it directly:
```python
search = Searcherator("My search term", api_key="your-api-key-here")
```
## Exception Hierarchy
```
SearcheratorError (base exception)
├── SearcheratorAuthError (authentication failures)
├── SearcheratorRateLimitError (rate limit exceeded)
├── SearcheratorTimeoutError (request timeout)
└── SearcheratorAPIError (other API errors)
```
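Such a hierarchy is typically just a handful of subclasses. The sketch below mirrors the tree, with the `reset_per_second` and `status_code` attributes assumed from the earlier examples (the real classes may differ):

```python
class SearcheratorError(Exception):
    """Base for all searcherator errors."""

class SearcheratorAuthError(SearcheratorError):
    pass

class SearcheratorRateLimitError(SearcheratorError):
    def __init__(self, message, reset_per_second=None):
        super().__init__(message)
        self.reset_per_second = reset_per_second

class SearcheratorTimeoutError(SearcheratorError):
    pass

class SearcheratorAPIError(SearcheratorError):
    def __init__(self, message, status_code=None):
        super().__init__(message)
        self.message = message
        self.status_code = status_code

# A single `except SearcheratorError` catches every variant:
try:
    raise SearcheratorRateLimitError("slow down", reset_per_second=2)
except SearcheratorError as e:
    print(type(e).__name__, e.reset_per_second)
```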
## Rate Limiting
Searcherator automatically handles rate limiting to respect Brave Search API quotas:
- **Automatic throttling** - Requests are automatically spaced to stay within limits
- **Concurrent control** - Built-in semaphore limits concurrent requests
- **Rate limit tracking** - Monitor your usage via instance attributes
The default configuration safely handles up to ~13 requests per second, well under typical API limits.
## Caching
Results are automatically cached to disk:
- **Location**: `data/search/` directory
- **Format**: JSON files
- **TTL**: Configurable (default: 7 days)
- **Cache key**: Based on search term, language, country, and num_results
To disable caching for a specific search:
```python
search = Searcherator("Python", clear_cache=True, ttl=0)
```
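One common way to derive such a key is hashing the normalized parameters into a filename. The sketch below shows the idea, not necessarily searcherator's exact scheme:

```python
import hashlib
import json

def cache_key(search_term, language="en", country="us", num_results=5):
    """Derive a stable cache filename from the parameters that define a search."""
    payload = json.dumps(
        {"q": search_term, "lang": language, "country": country, "n": num_results},
        sort_keys=True,  # stable ordering -> stable hash
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:16] + ".json"

# Same parameters -> same key; any parameter change -> different file.
print(cache_key("Python"))
```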
## Best Practices
1. **Always close the session** when done:
```python
try:
    ...  # your searches here
finally:
    await Searcherator.close_session()
```
2. **Use batch processing** for multiple searches:
```python
results = await asyncio.gather(*[s.search_result() for s in searches])
```
3. **Handle exceptions** appropriately:
```python
try:
    results = await search.search_result()
except SearcheratorRateLimitError:
    ...  # wait and retry
```
4. **Monitor rate limits** for high-volume applications:
```python
if search.rate_remaining_per_month < 1000:
    ...  # alert or throttle
```
## Testing
Run the test suite:
```bash
# Install test dependencies
pip install pytest pytest-asyncio
# Run all tests
pytest test_searcherator.py -v
# Run with coverage
pip install pytest-cov
pytest test_searcherator.py --cov=searcherator --cov-report=html
```
## License
MIT License
## Links
- **GitHub**: [https://github.com/Redundando/searcherator](https://github.com/Redundando/searcherator)
- **PyPI**: [https://pypi.org/project/searcherator/](https://pypi.org/project/searcherator/)
- **Issues**: [https://github.com/Redundando/searcherator/issues](https://github.com/Redundando/searcherator/issues)
## Author
Arved Klöhn - [GitHub](https://github.com/Redundando/)
| text/markdown | null | Arved Klöhn <arved.kloehn@gmail.com> | null | null | null | search, brave, api, async, cache, rate-limiting, web-search | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"cacherator>=0.1.0",
"logorator>=0.1.0",
"aiohttp>=3.8.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Redundando/searcherator",
"Documentation, https://github.com/Redundando/searcherator#readme",
"Bug Tracker, https://github.com/Redundando/searcherator/issues",
"Source Code, https://github.com/Redundando/searcherator"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T17:24:35.624557 | searcherator-0.1.1.tar.gz | 10,471 | 4a/33/ca8e22af94dc5146f845919895e8dde97011ea057210314d26c62f2be597/searcherator-0.1.1.tar.gz | source | sdist | null | false | a4a4cab9a566e21ade5d16137ab85eae | 5d70a30463c2a8d409347d3a130597d283ee1044fbacaabfaadb9eb60711ecaa | 4a33ca8e22af94dc5146f845919895e8dde97011ea057210314d26c62f2be597 | null | [] | 236 |
2.4 | hilda | 3.7.1 | LLDB wrapped and empowered by iPython's features | # Hilda
- [Hilda](#hilda)
- [Overview](#overview)
- [Installation](#installation)
- [How to use](#how-to-use)
- [Starting a Hilda interactive shell](#starting-a-hilda-interactive-shell)
- [Inside a Hilda shell](#inside-a-hilda-shell)
- [Magic functions](#magic-functions)
- [Key-bindings](#key-bindings)
- [Configurables](#configurables)
- [UI Configuration](#ui-configuration)
- [Python API](#python-api)
- [Symbol objects](#symbol-objects)
- [Globalized symbols](#globalized-symbols)
- [Searching for the right symbol](#searching-for-the-right-symbol)
- [Objective-C Classes](#objective-c-classes)
- [Objective-C Objects](#objective-c-objects)
- [Using snippets](#using-snippets)
- [Contributing](#contributing)
## Overview
Hilda is a debugger which combines both the power of LLDB and iPython for easier debugging.
The name originates from the TV show "Hilda", whose title character is the best friend of [Frida](https://frida.re/). Both Frida and Hilda are meant for pretty much the same purpose, except Hilda takes the more "debugger-y" approach (based on LLDB).
Currently, the project is intended for iOS/OSX debugging, but in the future we will possibly add support for the
following platforms as well:
- Linux
- Android
Since LLDB allows abstraction for both platform and architecture, it should be possible to make the necessary changes
without too many modifications.
Pull requests are more than welcome 😊.
If you need help or have an amazing idea you would like to suggest, feel free
to [start a discussion 💬](https://github.com/doronz88/hilda/discussions).
## Installation
Requirements for remote iOS device (not required for debugging a local OSX process):
- Jailbroken iOS device
- `debugserver` in device's PATH
- [You can use this tool in order to obtain the binary](https://github.com/doronz88/debugserver-deploy)
- After re-signing with new entitlements, you can put the binary in the following path: `/usr/bin/debugserver`
In order to install please run:
```shell
xcrun python3 -m pip install --user -U hilda
```
*⚠️ Please note that Hilda is installed on top of XCode's python so LLDB will be able to use its features.*
## How to use
### Starting a Hilda interactive shell
You may start a Hilda interactive shell by invoking any of the following subcommands:
- `hilda launch /path/to/executable`
- Launch given executable on current host
- `hilda attach [-p pid] [-n process-name]`
- Attach to an already running process on current host (specified by either `pid` or `process-name`)
- `hilda remote HOSTNAME PORT`
- Attach to an already running process on a target host (specified by `HOSTNAME PORT`)
- `hilda bare`
- Only start an LLDB shell and load Hilda as a plugin.
    - Please refer to the following help page if you require help on the commands available to you within the lldb shell:
[lldb command map](https://lldb.llvm.org/use/map.html).
As a cheatsheet, connect to a remote platform like so:
```shell
platform connect connect://ip:port
```
... and attaching to a local process:
```shell
process attach -n process_name
process attach -p process_pid
```
When you are ready, just execute `hilda` to move to Hilda's iPython shell.
### Inside a Hilda shell
Upon starting Hilda, you are welcomed into an IPython shell.
You can access the following methods via the variable `p`.
Basic flow control:
- `stop` - Stop process
- `cont` - Continue process
- `finish` - Run current function until return
- `step_into` - Step into current instruction
- `step_over` - Step over current instruction.
- `run_for` - Run the process for given interval
- `force_return` - Prematurely return from a stack frame, short-circuiting execution of inner
frames and optionally yielding a specified value.
- `jump` - Jump to given symbol
- `wait_for_module` - Wait for a module to be loaded (`dlopen`) by checking if given expression is contained within its filename
- `detach` - Detach from process (useful for exiting gracefully so the
process doesn't get killed when you exit)
Breakpoints:
- `bp` or `breakpoints.add` - Add a breakpoint
- `breakpoints.show` - Show existing breakpoints
- `breakpoints.remove` - Remove a single breakpoint
- `breakpoints.clear` - Remove all breakpoints
- `monitor` or `breakpoints.add_monitor` - Creates a breakpoint whose callback implements the requested features (print register values, execute commands, mock return value, etc.)
Basic read/write:
- `get_register` - Get register value
- `set_register` - Set register value
- `poke` - Write data at address
- `peek[_str,_std_str]` - Read buffer/C-string/`std::string` at address
- `po` - Print object using LLDB's `po` command
  The `po` command can also run arbitrary native code:
```python
p.po('NSMutableString *s = [NSMutableString string]; [s appendString:@"abc"]; [s description]')
```
- `disass` - Print disassembly at address
- `show_current_source` - Print current source code (if possible)
- `bt` - Get backtrace
- `lsof` - Get all open FDs
- `hd` - Hexdump a buffer
- `proc_info` - Print information about currently running mapped process
- `print_proc_entitlements` - Get the plist embedded inside the process' `__LINKEDIT` section.
Execute code:
- `call` - Call function at given address with given parameters
- `objc_call` - Simulate a call to an objc selector
- `inject` - Inject a single library into currently running process
- `disable_jetsam_memory_checks` -
Disable jetsam memory checks (to prevent raising
`error: Execution was interrupted, reason: EXC_RESOURCE RESOURCE_TYPE_MEMORY (limit=15 MB, unused=0x0).`
when evaluating expressions).
Hilda symbols:
- `symbol` - Get symbol object for a given address
- `objc_symbol` - Get objc symbol wrapper for given address
- `file_symbol` - Calculate symbol address without ASLR
- `save` - Save loaded symbols map (for loading later using the load() command)
- `load` - Load an existing symbols map (previously saved by the save() command)
- `globalize_symbols` - Make all symbols in python's global scope
Advanced:
- `lldb_handle_command` - Execute an LLDB command (e.g., `p.lldb_handle_command('register read')`)
- `evaluate_expression` - Use for quick code snippets (wrapper for LLDB's `EvaluateExpression`)
Take advantage of local variables inside the expression using format string, e.g.,
```python
currentDevice = p.objc_get_class('UIDevice').currentDevice
p.evaluate_expression(f'[[{currentDevice} systemName] hasPrefix:@"2"]')
```
- `import_module` - Import & reload given python module (intended mainly for external snippets)
- `unwind` - Unwind the stack (useful when get_evaluation_unwind() == False)
- `set_selected_thread` - sets the currently selected thread, which is used in other parts of the program, such as displaying disassembly or
checking registers.
This ensures the application focuses on the specified thread for these operations.
Objective-C related:
- `objc_get_class` - Get ObjC class object
- `CFSTR` - Create CFStringRef object from given string
- `ns` - Create NSObject from given data
- `from_ns` - Create python object from NS object.
#### Magic functions
Sometimes accessing the [Python API](#python-api) can be tiring, so we added some magic functions to help you out!
- `%objc <className>`
- Equivalent to: `className = p.objc_get_class(className)`
- `%fbp <filename> <addressInHex>`
- Equivalent to: `p.file_symbol(addressInHex, filename).bp()`
#### Key-bindings
- **F1**: Show banner help message
- **F2**: Show process state UI
- **F3**: Toggle stdout/stderr enablement
- **F7**: Step Into
- **F8**: Step Over
- **F9**: Continue
- **F10**: Stop
#### Configurables
The global `cfg` object is used to configure various settings for evaluation and monitoring.
These settings include:
- `evaluation_unwind_on_error`: Whether to unwind on error during evaluation. (Default: `False`)
- `evaluation_ignore_breakpoints`: Whether to ignore breakpoints during evaluation. (Default: `False`)
- `nsobject_exclusion`: Whether to exclude `NSObject` during evaluation, reducing IPython autocomplete results. (Default: `False`)
- `objc_verbose_monitor`: When set to `True`, using `monitor()` will automatically print Objective-C method arguments. (Default: `False`)
#### UI Configuration
Hilda contains a minimal UI for examining the target state.
The UI is divided into views:
- Registers
- Disassembly
- Stack
- Backtrace

This UI can be displayed at any time by executing:
```python
ui.show()
```
By default `step_into` and `step_over` will show this UI automatically.
You may disable this behavior by executing:
```python
ui.active = False
```
Alternatively, if you want to display the UI after hitting a breakpoint, you can register `ui.show` as a callback:
```python
p.symbol(0x7ff7b97c21b0).bp(ui.show)
```
Try playing with the UI settings by yourself:
```python
# Disable stack view
ui.views.stack.active = False
# View words from the stack
ui.views.stack.depth = 10
# View last 10 frames
ui.views.backtrace.depth = 10
# Disassemble 5 instructions
ui.views.disassembly.instruction_count = 5
# Change disassembly syntax to AT&T
ui.views.disassembly.flavor = 'att'
# View floating point registers
ui.views.registers.rtype = 'float'
# Change addresses print color
ui.colors.address = 'red'
# Change titles color
ui.colors.title = 'green'
```
### Python API
Hilda provides comprehensive API wrappers for accessing LLDB capabilities.
This API may be used to access process memory, trigger functions, place breakpoints and much more!
In addition to accessing this API from the [Hilda shell](#inside-a-hilda-shell), you may also use it from a pure-python script via any of the `create_hilda_client_using_*` APIs.
Consider the following snippet as an example of such usage:
```python
from hilda.launch_lldb import create_hilda_client_using_attach_by_name
# attach to `sysmond`
p = create_hilda_client_using_attach_by_name('sysmond')
# allocate 10 bytes and print their address
print(p.symbols.malloc(10))
# detach
p.detach()
```
Please note this script must be executed using `xcrun python3` in order for it to be able to access the LLDB API.
#### Symbol objects
In Hilda, almost everything is wrapped by the `Symbol` object. A Symbol is just a nicer way of referring to an address: an object that lets you dereference the memory behind it, or call the address as a function.
In order to create a symbol from a given address, please use:
```python
s = p.symbol(0x12345678)
# the Symbol object extends `int`
True == isinstance(s, int)
# print the un-shifted file address
# (calculating the ASLR shift for you, so you can just view it in IDA)
print(s.file_address)
# or.. if you know the file address, but don't wanna mess
# with ASLR calculations
s = p.file_symbol(0x12345678)
# peek(/read) 20 bytes of memory
print(s.peek(20))
# write into this memory
s.poke('abc')
# let LLDB print-object (it should guess the type automatically
# based on its memory layout)
print(s.po())
# or you can help LLDB with telling it its type manually
print(s.po('char *'))
# jump to `s` as a function, passing (1, "string") as its args
s(1, "string")
# change the size of each item_size inside `s` for derefs
s.item_size = 1
# *(char *)s = 1
s[0] = 1
# *(((char *)s)+1) = 1
s[1] = 1
# symbol inherits from int, so all int operations apply
s += 4
# change s item size back to 8 to store pointers
s.item_size = 8
# *(intptr_t *)s = 1
s[0] = 1
# storing the return value of the function executed at `0x11223344`
# into `*s`
s[0] = p.symbol(0x11223344)() # calling symbols also returns symbols
# attempt to resolve symbol's name
print(p.symbol(0x11223344).lldb_address)
# monitor each time a symbol is called into console and print its backtrace (`bt` option)
# this will create a scripted breakpoint which prints your desired data and continue
s.monitor(bt=True)
# you can also:
# bt -> view the backtrace
# regs -> view registers upon each call in your desired format
# retval -> view the return value upon each call in your desired format
# cmd -> execute a list of LLDB commands on each hit
s.monitor(regs={'x0': 'x'}, # print `x0` in HEX form
retval='po', # use LLDB's `po` for printing the returned value
bt=True, # view backtrace (will also resolve ASLR addresses for you)
cmd=['thread list'], # show thread list
)
# we can also just `force_return` with a hard-coded value to practically disable
# a specific functionality
s.monitor(force_return=0) # cause the function to always return `0`
# as for everything, if you need help understanding each such feature,
# simply execute the following to view its help (many such features even contain examples)
s.monitor?
# create a scripted_breakpoint manually
def scripted_breakpoint(hilda, *args):
# like everything in hilda, registers are also
# just simple `Symbol` objects, so feel free to
# use them to your heart's content :)
if hilda.registers.x0.peek(4) == b'\x11\x22\x33\x44':
hilda.registers.x0 = hilda.symbols.malloc(200)
hilda.registers.x0.poke(b'\x22' * 200)
# just continue the process
hilda.cont()
s.bp(scripted_breakpoint)
# Place a breakpoint, by name, at a symbol that is not yet loaded
p.bp('symbol_name')
# In case you need to specify the library it is loaded from
p.bp(('symbol_name', 'ModuleName'))
```
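The underlying design — an `int` subclass carrying extra state — can be modeled in a few lines. This is a toy over a fake `bytearray` memory, not Hilda's actual `Symbol` class:

```python
class Symbol(int):
    """Toy model: an address that is also an int, with peek/poke/deref."""
    memory = bytearray(64)  # stand-in for process memory

    def __new__(cls, address, item_size=8):
        s = super().__new__(cls, address)
        s.item_size = item_size
        return s

    def peek(self, count):
        return bytes(self.memory[self:self + count])

    def poke(self, data):
        self.memory[self:self + len(data)] = data

    def __getitem__(self, i):
        # deref at `address + i * item_size`, like the examples above
        off = self + i * self.item_size
        return int.from_bytes(self.memory[off:off + self.item_size], "little")


s = Symbol(8, item_size=1)
s.poke(b"\x41\x42")
print(isinstance(s, int), s.peek(2), s[0])
```

Because the object *is* an int, arithmetic like `s += 4` and use as a slice index both work for free.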
#### Globalized symbols
Usually you would want/need to use the symbols already mapped into the currently running process. To do so, you can
access them using `symbols.<symbol-name>`. The `symbols` global object is of type `SymbolList`, which acts like
`dict` for accessing all exported symbols. For example, the following will generate a call to the exported
`malloc` function with `20` as its only argument:
```python
x = p.symbols.malloc(20)
```
You can also just write their names as if they were already in the global scope. Hilda checks that no name collision exists and, if so, lazily performs the following for you:
```python
x = malloc(20)
# is equivalent to:
malloc = p.symbols.malloc
x = malloc(20)
```
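The collision check can be pictured with a toy scope. This sketch only models the semantics; Hilda performs the resolution lazily inside the IPython shell:

```python
def globalize(symbols, scope):
    """Copy symbols into a scope, skipping any name that would collide."""
    for name, value in symbols.items():
        if name not in scope:  # never shadow an existing name
            scope[name] = value

# Hypothetical exported-symbol table and a user scope with a collision.
symbols = {"malloc": lambda n: f"allocated {n} bytes", "free": lambda p: None}
scope = {"free": "user-defined"}

globalize(symbols, scope)
print(scope["malloc"](20))  # resolved from the symbol table
print(scope["free"])        # untouched: collision detected
```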
#### Searching for the right symbol
Sometimes you don't really know where to start your research. All you have are theories about how your desired exported symbol might be named (if it exists at all).
```python
# find all symbols prefixed as `mem*` AND don't have `cpy`
# in their name
l = p.symbols.filter_startswith('mem') - p.symbols.filter_name_contains('cpy')
# filter only symbols of type "code" (removing data global for example)
l = l.filter_code_symbols()
# monitor every time each one is called, print its `x0` in HEX
# form and show the backtrace
l.monitor(regs={'x0': 'x'}, bt=True)
```
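The set-like composition of filters can be modeled with a small `list` subclass — a sketch of the semantics only, not Hilda's real `SymbolList`:

```python
class SymbolList(list):
    def filter_startswith(self, prefix):
        return SymbolList(s for s in self if s.startswith(prefix))

    def filter_name_contains(self, sub):
        return SymbolList(s for s in self if sub in s)

    def __sub__(self, other):
        # set-difference while preserving order
        removed = set(other)
        return SymbolList(s for s in self if s not in removed)


syms = SymbolList(["memcpy", "memset", "memmove", "strcpy", "malloc"])
l = syms.filter_startswith("mem") - syms.filter_name_contains("cpy")
print(l)  # memset and memmove survive
```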
#### Objective-C Classes
The same as symbols applies to Objective-C classes name resolution. You can either:
```python
d = NSDictionary.new() # call its `new` selector
# which is equivalent to:
NSDictionary = p.objc_get_class('NSDictionary')
d = NSDictionary.new()
# Or you can use the IPython magic function
%objc
NSDictionary
```
This is possible only since `NSDictionary` is exported. In case it is not, you must call `objc_get_class()` explicitly.
As you can see, you can directly access all the class' methods.
Here is some more of what you can do:
```python
# show the class' ivars
print(NSDictionary.ivars)
# show the class' methods
print(NSDictionary.methods)
# show the class' properties
print(NSDictionary.properties)
# view class' selectors which are prefixed with 'init'
print(NSDictionary.methods.filter_startswith('init'))
# you can of course use any of `SymbolList` over them, for example:
# this will `po` (print object) all those selectors returned value
NSDictionary.methods.filter_startswith('init').monitor(retval='po')
# monitor each time any selector in NSDictionary is called
NSDictionary.monitor()
# `force_return` for some specific selector with a hard-coded value (4)
NSDictionary.methods.get('valueForKey:').address.monitor(force_return=4)
# capture the `self` object at the first hit of any selector
# `True` for busy-wait for object to be captured
dictionary = NSDictionary.capture_self(True)
# print a colored and formatted version for class layout
dictionary.show()
```
#### Objective-C Objects
In order to work with ObjC objects, each symbol exposes a property called `objc_symbol`. Accessing it gives you a richer wrapper for each object:
```python
dict = NSDictionary.new().objc_symbol
dict.show() # print object layout
# just like class, you can access its ivars, method, etc...
print(dict.ivars)
# except now they have values you can view
print(dict._ivarName)
# or edit
dict._ivarName = value
# and of course you can call the object's methods
# hilda checks if the method returned an ObjC object:
# - if so, call `objc_symbol` upon it for you
# - otherwise, leave it as a simple `Symbol` object
arr = dict.objectForKey_('keyContainingNSArray')
# you can also call class-methods
# hilda will call it using either the instance object
# or the class object, depending on usage
newDict = dict.dictionary()
# print the retrieved object
print(arr.po())
```
Also, working with Objective-C objects like this can be somewhat exhausting, so we created the `ns` and `from_ns` commands, letting you use complicated types when parsing values and passing arguments:
```python
import datetime
# using the `ns` command we can just pass a python-native dictionary
function_requiring_a_specific_dictionary(p.cf({
'key1': 'string', # will convert to NSString
'key2': True, # will convert to NSNumber
'key3': b'1234', # will convert to NSData
'key4': datetime.datetime(2021, 1, 1) # will convert to NSDate
}))
# and also parse one
normal_python_dict = p.cf({
    'key1': 'string',  # will convert to NSString
    'key2': True,  # will convert to NSNumber
    'key3': b'1234',  # will convert to NSData
    'key4': datetime.datetime(2021, 1, 1)  # will convert to NSDate
}).py()
```
As a last resort, if the object cannot be serialized this way, you can run pure Objective-C code:
```python
# let LLDB compile and execute the expression
abc_string = p.evaluate_expression('[NSString stringWithFormat:@"abc"]')
# will print "abc"
print(abc_string.po())
```
#### Using snippets
Snippets are extensions of the normal functionality, serving as quick cookbooks for day-to-day debugging tasks.
They all follow the same usage pattern:
```python
from hilda.snippets import snippet_name
snippet_name.do_something()
```
For example, XPC sniffing can be done using:
```python
from hilda.snippets import xpc
xpc.sniff_all()
```
This will monitor all XPC-related traffic in the given process.
## Contributing
Please run the tests as follows before submitting a PR:
```shell
xcrun python3 -m pytest
```
| text/markdown | null | doronz88 <doron88@gmail.com>, matan <matan1008@gmail.com>, netanel cohen <netanelc305@protonmail.com> | null | doronz88 <doron88@gmail.com>, matan <matan1008@gmail.com>, netanel cohen <netanelc305@protonmail.com> | Copyright (c) 2012-2023 Doron Zarhi and Metan Perelman
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | python, debugger, lldb, ipython, ios, debug | [
"Operating System :: MacOS",
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Languag... | [] | null | null | >=3.9 | [] | [] | [] | [
"tqdm",
"docstring_parser",
"coloredlogs",
"hexdump",
"ipython",
"click",
"objc_types_decoder",
"construct",
"pymobiledevice3",
"keystone-engine",
"tabulate",
"inquirer3",
"traitlets",
"pytest; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/doronz88/hilda",
"Bug Reports, https://github.com/doronz88/hilda/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:22:49.074190 | hilda-3.7.1.tar.gz | 882,498 | d0/9a/639ca4151f1d2fe19917acdc1ff804720622ae50e697d0dc76b64ce62c37/hilda-3.7.1.tar.gz | source | sdist | null | false | 70b6043880838b99e585d38ed89ec794 | f8c9395f4e09db258963111fffed908ff54fde721e2769f136d9d23ab77aba4f | d09a639ca4151f1d2fe19917acdc1ff804720622ae50e697d0dc76b64ce62c37 | null | [
"LICENSE"
] | 251 |
2.4 | iis-kix | 3.8.2 | Example Python Module | # Example Package
This package is used to upload dummy packages to PyPI.
This is a simple example package. You can use
[GitHub-flavored Markdown](https://guides.github.com/features/mastering-markdown/)
to write your content.
To upload a new package to the public PyPI repository, update the pyproject.toml file and give it a name.
Run these commands to upload
```bash
pip install build twine
python -m build
twine upload dist/*
```
Twine will ask for a token; provide the token from your PyPI account.
| text/markdown | null | IT Services <pypi@iis.fraunhofer.de> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://www.iis.fraunhofer.de/",
"Bug Tracker, https://www.iis.fraunhofer.de/"
] | twine/6.2.0 CPython/3.11.13 | 2026-02-18T17:22:31.956086 | iis_kix-3.8.2.tar.gz | 1,977 | ca/1c/845d4b1e55d25f805ad9a8a4bc2a2834fec1889a27ee010abab9f4a68076/iis_kix-3.8.2.tar.gz | source | sdist | null | false | 48e79c391a958102c13a69ad1a728f79 | 8dc55ab1e122d8e5a93daa30ab8277d530f06c0c224f10a4cdfd31776096807c | ca1c845d4b1e55d25f805ad9a8a4bc2a2834fec1889a27ee010abab9f4a68076 | null | [] | 237 |
2.4 | chuk-mcp-client-oauth | 0.4 | A reusable OAuth 2.0 client library for MCP (Model Context Protocol) servers | # chuk-mcp-client-oauth
A simple, secure OAuth 2.0 client library for connecting to MCP (Model Context Protocol) servers.
**Perfect for developers who want to add OAuth authentication to their MCP applications without wrestling with OAuth complexity.**
---
## 🎯 What is This?
This library makes it **dead simple** to authenticate with OAuth-enabled MCP servers. Whether you're building a CLI tool, web app, or service that needs to connect to MCP servers, this library handles all the OAuth complexity for you.
### What's MCP OAuth?
MCP (Model Context Protocol) servers can use OAuth 2.0 to control who can access them. Think of it like logging into GitHub or Google - but for AI/LLM services.
**As a client developer, you need:**
1. 🔐 **Authenticate** - Get permission from the server
2. 💾 **Store tokens** - Keep credentials secure
3. 🔄 **Refresh tokens** - Keep sessions alive
4. 🔧 **Use tokens** - Include them in API requests
This library does all of that for you.
### OAuth 2.1 & MCP Compliance
This library implements:
- ✅ **OAuth 2.1 Best Practices** - Authorization Code + PKCE, no legacy grants
- ✅ **MCP Authorization Spec** - Protected Resource Metadata discovery (RFC 9728)
- ✅ **Resource Indicators** - Token binding to prevent reuse (RFC 8707)
- ✅ **WWW-Authenticate Fallback** - Discovery from 401/403 responses
- ✅ **Secure Token Storage** - OS keychain, encrypted files, HashiCorp Vault
- ✅ **Automatic Token Refresh** - Handles expiration transparently
- 🔄 **Device Code Flow** - Coming in v0.2.0 for headless environments
**Standards Compliance:**
- [OAuth 2.1 Draft](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-10) - Modern OAuth best practices
- [RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728) - Protected Resource Metadata
- [RFC 8707](https://datatracker.ietf.org/doc/html/rfc8707) - Resource Indicators
- [RFC 8414](https://datatracker.ietf.org/doc/html/rfc8414) - Authorization Server Metadata Discovery
- [RFC 7591](https://datatracker.ietf.org/doc/html/rfc7591) - Dynamic Client Registration
- [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636) - PKCE
---
## 🚀 Quick Start (5 minutes)
### Installation
Using `uv` (recommended):
```bash
uv add chuk-mcp-client-oauth
```
Or using pip:
```bash
pip install chuk-mcp-client-oauth
```
### 30-Second Minimal Example
```python
import asyncio
from chuk_mcp_client_oauth import OAuthHandler
async def main():
    handler = OAuthHandler()  # Auto keychain/credential manager or encrypted file

    # Authenticate (opens browser once, then caches tokens)
    await handler.ensure_authenticated_mcp(
        server_name="notion",
        server_url="https://mcp.notion.com/mcp",
        scopes=["read", "write"],
    )

    # Get ready-to-use headers for any HTTP/SSE/WebSocket call
    headers = await handler.prepare_headers_for_mcp_server(
        "notion",
        "https://mcp.notion.com/mcp"
    )
    print(headers["Authorization"][:30], "...")

asyncio.run(main())
```
**That's it!** Subsequent runs use cached tokens—no browser needed. See [Complete MCP Session](#using-the-tokens---complete-mcp-example) for full JSON-RPC + SSE example.
---
### Your First OAuth Flow (Complete Example)
```python
import asyncio
from chuk_mcp_client_oauth import OAuthHandler
async def main():
    # Create handler - it auto-configures secure storage
    handler = OAuthHandler()

    # Authenticate with a server (opens browser once)
    tokens = await handler.ensure_authenticated_mcp(
        server_name="notion-mcp",
        server_url="https://mcp.notion.com/mcp",
        scopes=["read", "write"]
    )
    print(f"✅ Authenticated! Token: {tokens.access_token[:20]}...")

    # Next time you run this, it uses cached tokens (no browser)
    # Headers are ready to use in your HTTP requests
    headers = await handler.prepare_headers_for_mcp_server(
        server_name="notion-mcp",
        server_url="https://mcp.notion.com/mcp"
    )
    print(f"🔑 Authorization header: {headers['Authorization'][:30]}...")

asyncio.run(main())
```
**Using macOS Keychain (Explicit):**
```python
import asyncio
from chuk_mcp_client_oauth import OAuthHandler, TokenManager, TokenStoreBackend
async def main():
    # Explicitly use macOS Keychain for token storage
    # NOTE: 'keyring' library is automatically installed on macOS/Windows
    # No password needed - uses macOS Keychain Access
    token_manager = TokenManager(backend=TokenStoreBackend.KEYCHAIN)
    handler = OAuthHandler(token_manager=token_manager)

    # Authenticate with a server (tokens stored in macOS Keychain)
    tokens = await handler.ensure_authenticated_mcp(
        server_name="notion-mcp",
        server_url="https://mcp.notion.com/mcp",
        scopes=["read", "write"]
    )
    print("✅ Authenticated! Token stored in macOS Keychain")
    print(f"🔑 Access Token: {tokens.access_token[:20]}...")

    # You can verify this in Keychain Access app:
    # 1. Open Keychain Access
    # 2. Search for "chuk-oauth"
    # 3. You'll see "notion-mcp" entry under the "chuk-oauth" service

asyncio.run(main())
```
**Using the Tokens - Complete MCP Example:**
Now let's use those tokens to actually interact with Notion MCP - listing available tools:
```python
import asyncio
import uuid
from chuk_mcp_client_oauth import OAuthHandler, parse_sse_json
async def list_notion_tools():
    """Complete example: Authenticate and list Notion MCP tools."""
    handler = OAuthHandler()
    server_name = "notion-mcp"
    server_url = "https://mcp.notion.com/mcp"

    # Authenticate (uses cached tokens if available)
    print("🔐 Authenticating with Notion MCP...")
    tokens = await handler.ensure_authenticated_mcp(
        server_name=server_name,
        server_url=server_url,
        scopes=["read", "write"]
    )
    print(f"✅ Authenticated! Token: {tokens.access_token[:20]}...")

    # Now use the tokens to make authenticated requests
    session_id = str(uuid.uuid4())

    # Step 1: Initialize MCP session
    print("\n📋 Initializing MCP session...")
    init_response = await handler.authenticated_request(
        server_name=server_name,
        server_url=server_url,
        url=server_url,
        method="POST",
        json={
            "jsonrpc": "2.0",
            "id": 1,
            "method": "initialize",
            "params": {
                "protocolVersion": "2024-11-05",
                "capabilities": {"roots": {"listChanged": True}},
                "clientInfo": {"name": "quickstart-example", "version": "1.0.0"}
            }
        },
        headers={
            "Accept": "application/json, text/event-stream",
            "Content-Type": "application/json"
        },
        timeout=60.0  # MCP initialization can be slow
    )

    # Extract session ID from response header
    session_id = init_response.headers.get('mcp-session-id', session_id)
    print(f" ✅ Session initialized: {session_id[:16]}...")

    # Step 2: Send initialized notification
    await handler.authenticated_request(
        server_name=server_name,
        server_url=server_url,
        url=server_url,
        method="POST",
        json={"jsonrpc": "2.0", "method": "notifications/initialized"},
        headers={
            "Accept": "application/json, text/event-stream",
            "Content-Type": "application/json",
            "Mcp-Session-Id": session_id
        },
        timeout=30.0
    )

    # Step 3: List tools (this is where we use the Bearer token!)
    print("\n🔧 Listing available tools...")
    tools_response = await handler.authenticated_request(
        server_name=server_name,
        server_url=server_url,
        url=server_url,
        method="POST",
        json={"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}},
        headers={
            "Accept": "application/json, text/event-stream",
            "Content-Type": "application/json",
            "Mcp-Session-Id": session_id
            # Note: Authorization: Bearer <token> is automatically added!
        },
        timeout=30.0
    )

    # Parse SSE response (MCP servers often return text/event-stream)
    content_type = tools_response.headers.get('content-type', '')
    if 'text/event-stream' in content_type:
        data = parse_sse_json(tools_response.text.strip().splitlines())
    else:
        data = tools_response.json()

    # Display the tools
    if "result" in data and "tools" in data["result"]:
        tools = data["result"]["tools"]
        print(f"\n📦 Found {len(tools)} Notion tools:")
        for tool in tools[:5]:  # Show first 5
            print(f" • {tool.get('name', 'Unknown')}")
            if 'description' in tool:
                desc = tool['description']
                print(f" {desc[:80]}{'...' if len(desc) > 80 else ''}")
        if len(tools) > 5:
            print(f" ... and {len(tools) - 5} more")

    print("\n✅ Complete! Your Bearer token was automatically used in all requests.")
    print(f" The library added: Authorization: Bearer {tokens.access_token[:20]}...")
    print(" to every HTTP request above.")

asyncio.run(list_notion_tools())
```
**Output:**
```
🔐 Authenticating with Notion MCP...
✅ Authenticated! Token: 282c6a79-d66f-402e-a...
📋 Initializing MCP session...
✅ Session initialized: d6b130b8684f5ee9...
🔧 Listing available tools...
📦 Found 15 Notion tools:
• notion-search
Perform a search over: - "internal": Semantic search over Notion workspace and c...
• notion-fetch
Retrieves details about a Notion entity (page or database) by URL or ID.
Provide...
• notion-create-pages
## Overview
Creates one or more Notion pages, with the specified properties and ...
• notion-update-page
## Overview
Update a Notion page's properties or content.
## Properties
Notion p...
• notion-move-pages
Move one or more Notion pages or databases to a new parent.
... and 10 more
✅ Complete! Your Bearer token was automatically used in all requests.
The library added: Authorization: Bearer 282c6a79-d66f-402e-a...
to every HTTP request above.
```
**What happened behind the scenes:**
Every HTTP request included your Bearer token:
```http
POST /mcp HTTP/1.1
Host: mcp.notion.com
Authorization: Bearer 282c6a79-d66f-402e-a8f4-27b1c5d3e6f7...
Accept: application/json, text/event-stream
Content-Type: application/json
Mcp-Session-Id: d6b130b8684f5ee9...
{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}
```
The `authenticated_request()` method:
1. ✅ Retrieved your cached tokens (no re-authentication needed)
2. ✅ Added `Authorization: Bearer <token>` header to every request
3. ✅ Parsed SSE responses automatically
4. ✅ Would have refreshed the token if server returned 401
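The SSE parsing in step 3 is conceptually simple: Server-Sent Events deliver JSON payloads on `data:` lines. A hypothetical equivalent of `parse_sse_json` (the helper name below is illustrative, not this library's internal implementation) could look like:

```python
import json

def parse_sse_json_sketch(lines):
    """Collect the `data:` lines of an SSE stream and decode them as JSON."""
    payload = "".join(
        line[len("data:"):].strip() for line in lines if line.startswith("data:")
    )
    return json.loads(payload)

sse_lines = [
    "event: message",
    'data: {"jsonrpc": "2.0", "id": 2, "result": {"tools": []}}',
]
result = parse_sse_json_sketch(sse_lines)
```

Non-`data:` lines (`event:`, `id:`, comments) are simply skipped; only the JSON body reaches the caller.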
**Using a Custom Service Name (for your application):**
```python
import asyncio
from chuk_mcp_client_oauth import OAuthHandler, TokenManager, TokenStoreBackend
async def main():
    # Use your own application name for keychain entries
    token_manager = TokenManager(
        backend=TokenStoreBackend.KEYCHAIN,
        service_name="my-awesome-app"  # Custom service name
    )
    handler = OAuthHandler(token_manager=token_manager)

    tokens = await handler.ensure_authenticated_mcp(
        server_name="notion-mcp",
        server_url="https://mcp.notion.com/mcp",
        scopes=["read", "write"]
    )
    print("✅ Authenticated! Token stored under 'my-awesome-app' service")

    # In Keychain Access, search for "my-awesome-app" instead of "chuk-oauth"
    # This helps organize tokens for your specific application

asyncio.run(main())
```
**Platform-Specific Token Storage:**
- **macOS**: `keyring` is automatically installed → Uses macOS Keychain (no password needed)
- **Windows**: `keyring` is automatically installed → Uses Windows Credential Manager (no password needed)
- **Linux**: Install with `pip install chuk-mcp-client-oauth[linux]` → Uses Secret Service (GNOME/KDE)
- **All platforms**: Falls back to encrypted file storage if platform backend unavailable
**That's it!** The library handles:
- ✅ OAuth server discovery
- ✅ Dynamic client registration
- ✅ Opening browser for user consent
- ✅ Receiving the callback
- ✅ Exchanging codes for tokens
- ✅ Storing tokens securely
- ✅ Reusing tokens on subsequent runs
- ✅ Refreshing expired tokens
**What happens on each run:**
- **First run**: Opens browser for authentication → Saves tokens to storage
- **Second run**: Loads cached tokens → No browser needed
- **Re-running after clearing tokens**: Opens browser again (like first run)
**Quick Reference: Clearing tokens to re-run quickstart**
```bash
# Method 1: Using CLI (works for all storage backends)
uvx chuk-mcp-client-oauth clear notion-mcp
# Method 2: macOS Keychain (if using Keychain storage)
security delete-generic-password -s "chuk-oauth" -a "notion-mcp"
# Method 3: Delete encrypted file (if using file storage)
rm ~/.chuk_oauth/tokens/notion-mcp.enc
rm ~/.chuk_oauth/tokens/notion-mcp_client.json
# After clearing, run the quickstart again - browser will open
```
---
## 🧠 Understanding MCP OAuth (The Client Perspective)
### The OAuth Flow (What Actually Happens)
When you authenticate with an MCP server, here's what happens behind the scenes:
```
1. 🔍 DISCOVERY
Your app asks: "Server, how do I authenticate with you?"
Server responds: "Here are my OAuth endpoints and capabilities"
2. 📝 REGISTRATION
Your app: "I'd like to register as a client"
Server: "OK, here's your client_id"
3. 🌐 AUTHORIZATION
Your app opens browser: "User, please approve this app"
User clicks "Allow"
Browser redirects back with a code
4. 🎟️ TOKEN EXCHANGE
Your app: "Here's the code, give me tokens"
Server: "Here's your access_token and refresh_token"
5. 💾 STORAGE
Your app saves tokens to secure storage (Keychain/etc)
6. ✅ AUTHENTICATED
Your app can now make API requests with the token
```
This library automates **all of these steps**.
### Key Concepts
**Access Token** - Like a temporary password that proves you're authorized
- Used in every API request
- Expires after a time (e.g., 1 hour)
- Format: `Bearer <long-random-string>`
**Refresh Token** - Like a "get a new password" token
- Used to get new access tokens when they expire
- Long-lived (days/weeks)
- Stored securely
**Scopes** - What permissions you're requesting
- Examples: `["read", "write"]`, `["notion:read"]`
- Server decides what to grant
**PKCE** - Security enhancement that prevents token theft
- Automatically handled by this library
- You don't need to think about it
**Discovery** - How the client finds OAuth configuration
- **MCP-Compliant (RFC 9728)**: Protected Resource Metadata at `/.well-known/oauth-protected-resource`
  - Points to Authorization Server metadata
  - Includes resource identifier for token binding
- **Fallback (Legacy)**: Direct AS discovery at `/.well-known/oauth-authorization-server`
- **WWW-Authenticate Fallback**: PRM URL from 401/403 response headers
- Automatically discovered by this library with fallback support
**Resource Indicators (RFC 8707)** - Token binding to specific resources
- Tokens are bound to the specific MCP server resource
- Prevents token reuse across different resources
- Automatically included in token requests
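The PKCE derivation the library performs for you is small enough to show inline. This sketch mirrors the S256 scheme from RFC 7636 itself; it is illustrative, not this library's internal code:

```python
import base64
import hashlib
import secrets

# High-entropy code_verifier (RFC 7636 requires 43-128 URL-safe characters)
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

# S256 code_challenge: BASE64URL(SHA256(code_verifier)), sent with the authorize request
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The server stores the challenge; the verifier is revealed only at the token
# exchange, so an intercepted authorization code cannot be redeemed by an attacker.
print(len(code_verifier), len(code_challenge))  # 43 43
```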
---
## 📊 Flow Diagrams
### Auth Code + PKCE (Desktop/CLI with Browser)
This is the **primary flow** used by this library for interactive applications:
```
┌──────────────────┐ ┌──────────────┐ ┌──────────────────────┐ ┌───────────────┐
│ MCP Client │ │ User │ │ OAuth 2.1 Server │ │ MCP Server │
│ (CLI / Agent) │ │ Browser │ │ (Auth + Token) │ │ │
└──┬───────────────┘ └──────┬───────┘ └──────────┬───────────┘ └───────┬───────┘
│ 1) GET /.well-known/oauth-protected-resource (RFC 9728) │ │
├────────────────────────────────────────────────────────────────────────────────────────▶│
│ │ 2) PRM: resource ID, │
│◀────────────────────────────────────────────────────────────────────────────────────────┤ AS URLs
│ │ │
│ 3) GET AS metadata from PRM.authorization_servers[0] │ │
├──────────────────────────────────────────────────────────▶│ │
│ │ 4) AS metadata: endpoints │
│◀───────────────────────────────────────────────────────────┤ │
│ │ │
│ 5) Build Auth URL (PKCE: code_challenge) │ │
│ 6) Open browser ----------------------------------------▶ │ │
│ │ 7) User login + consent │
│ │◀────────────────────────────┤
│ │ 8) Redirect with ?code=... │
│◀───────────────────────────────────────────────────────────┤ to http://127.0.0.1:PORT │
│ 9) Local redirect handler captures code + state │ │
│ 10) POST /token (code + code_verifier + resource=MCP_URL) │ │
├──────────────────────────────────────────────────────────▶│ │
│ │ 11) access_token + refresh │
│◀───────────────────────────────────────────────────────────┤ (bound to resource) │
│ 12) Store tokens securely (keyring / pluggable) │ │
│ │ │
│ 13) Connect to MCP with Authorization: Bearer <token> │ │
├────────────────────────────────────────────────────────────────────────────────────────▶│
│ │ │ 14) Session OK
│◀────────────────────────────────────────────────────────────────────────────────────────┤
│ │ │
│ 15) (When expired) POST /token (refresh_token + resource=MCP_URL) │
├──────────────────────────────────────────────────────────▶│ │
│ │ 16) New access/refresh │
│◀───────────────────────────────────────────────────────────┤ -> update secure store │
│ │ │
```
**Legend:**
- **PKCE**: `code_challenge = SHA256(code_verifier)` (sent at authorize), `code_verifier` (sent at token)
- **PRM**: Protected Resource Metadata (RFC 9728) - MCP-compliant discovery
- **Resource Indicators**: `resource=` parameter binds tokens to specific MCP server (RFC 8707)
- Tokens are stored in OS keychain (or pluggable secure backend)
- MCP requests carry `Authorization: Bearer <access_token>`
### MCP-Compliant Discovery Flow (RFC 9728)
The library implements the **MCP-specified discovery flow** with automatic fallback:
```
🔍 Discovery Attempt 1: Protected Resource Metadata (MCP-Compliant)
┌─────────────────────────────────────────────────────────────┐
│ GET /.well-known/oauth-protected-resource │
│ → Returns: { │
│ "resource": "https://mcp.notion.com/mcp", │
│ "authorization_servers": [ │
│ "https://auth.notion.com/.well-known/oauth-as" │
│ ] │
│ } │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ GET https://auth.notion.com/.well-known/oauth-as │
│ → Returns AS metadata (authorization_endpoint, etc.) │
└─────────────────────────────────────────────────────────────┘
❌ If PRM fails (404/500):
🔍 Discovery Attempt 2: Direct AS Discovery (Fallback)
┌─────────────────────────────────────────────────────────────┐
│ GET /.well-known/oauth-authorization-server │
│ → Returns AS metadata directly │
└─────────────────────────────────────────────────────────────┘
❌ If both fail, check WWW-Authenticate header:
🔍 Discovery Attempt 3: WWW-Authenticate Fallback
┌─────────────────────────────────────────────────────────────┐
│ On 401/403 response: │
│ WWW-Authenticate: Bearer │
│ resource_metadata="https://mcp.example.com/.well-known/..." │
│ → Extract PRM URL and try again │
└─────────────────────────────────────────────────────────────┘
```
**Why this matters:**
- ✅ **MCP Spec Compliant**: Follows Model Context Protocol authorization specification
- ✅ **Token Binding**: Resource indicators prevent token reuse across servers
- ✅ **Backward Compatible**: Falls back to legacy discovery for older servers
- ✅ **Automatic**: Library handles all discovery methods transparently
### Device Code Flow (Headless TTY / SSH Agents)
**Coming in v0.2.0** - Perfect for SSH-only boxes, CI runners, and background agents.
**Planned API:**
```python
import asyncio
from chuk_mcp_client_oauth import OAuthHandler
async def main():
    handler = OAuthHandler()

    # Device code flow for headless environments
    await handler.ensure_authenticated_mcp_device(
        server_name="notion",
        server_url="https://mcp.notion.com/mcp",
        scopes=["read", "write"],
        prompt=lambda code, url: print(f"🔐 Go to {url} and enter code: {code}")
    )

    # Rest is identical to auth code flow
    headers = await handler.prepare_headers_for_mcp_server(
        "notion",
        "https://mcp.notion.com/mcp"
    )

asyncio.run(main())
```
**Use cases:**
- SSH-only servers
- CI/CD pipelines
- Background agents
- Shared/headless environments
**Flow diagram:**
```
┌──────────────────┐ ┌──────────────────────┐ ┌───────────────┐
│ MCP Client │ │ OAuth 2.1 Server │ │ MCP Server │
│ (Headless) │ │ (Device + Token) │ │ │
└──┬───────────────┘ └──────────┬───────────┘ └───────┬───────┘
│ 1) POST /device_authorization (client_id, scope) │ │
├────────────────────────────────────────────────────────────▶│ │
│ │ 2) device_code, user_code, verify_uri │
│◀────────────────────────────────────────────────────────────┤ expires_in, interval │
│ 3) Show: "Go to VERIFY_URI and enter USER_CODE" │ │
│ │ │
│ (User on any device) │ │
│ ┌──────────────┐ │ │
│ │ User │ 4) Visit verify URI│ │
│ │ Browser │ ◀──────────────────▶│ │
│ └──────┬───────┘ 5) Enter user code │ │
│ │ 6) Consent + login done │
│ │ │
│ 7) Poll POST /token (device_code, grant_type=device_code) │ │
├────────────────────────────────────────────────────────────▶│ │
│ (repeat every `interval` seconds until authorized) │ │
│◀────────────────────────────────────────────────────────────┤ 8) access_token + refresh │
│ 9) Store tokens securely │ │
│ 10) Connect MCP: Authorization: Bearer <token> │ │
├─────────────────────────────────────────────────────────────────────────────────────────────────────▶│
│ │ 11) Session OK
│◀─────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 12) Refresh on expiry → POST /token (refresh_token) │ │
├────────────────────────────────────────────────────────────▶│ │
│◀────────────────────────────────────────────────────────────┤ New tokens → update store │
```
**When to use Device Code Flow:**
- **SSH-only environments** - No browser available on the target machine
- **CI/CD pipelines** - Automated builds need OAuth without interactive login
- **Background agents** - Services running without user interaction
- **Shared/headless servers** - Multiple users, no desktop environment
### How Tokens Attach to MCP Requests
> **Whiteboard view:** The client does discovery, performs OAuth (Auth Code + PKCE or Device Code), stores tokens safely, and automatically attaches `Authorization: Bearer <token>` to **every MCP handshake and request**, refreshing silently when needed.
**HTTP Requests:**
```http
GET /mcp/api/resources HTTP/1.1
Host: mcp.example.com
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
Content-Type: application/json
```
**Server-Sent Events (SSE):**
```http
GET /mcp/events HTTP/1.1
Host: mcp.example.com
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
Accept: text/event-stream
Connection: keep-alive
```
**WebSocket:**
```http
GET /mcp/ws HTTP/1.1
Host: mcp.example.com
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
Upgrade: websocket
Connection: Upgrade
```
---
## 🔍 OAuth Discovery (How Your App Finds OAuth Endpoints)
### What is OAuth Discovery?
MCP servers publish their OAuth configuration at a **well-known URL**. This is like a menu that tells your app:
- "Here's where you get authorization"
- "Here's where you exchange codes for tokens"
- "Here's what I support (PKCE, refresh tokens, etc.)"
### MCP-Compliant Discovery (Do This First)
**Per the MCP specification**, clients must discover OAuth endpoints via Protected Resource Metadata (RFC 9728):
**Step 1: Discover Protected Resource Metadata (PRM)**
```bash
# MCP-compliant discovery starts here
GET <mcp_server>/.well-known/oauth-protected-resource
```
**Example PRM Response:**
```json
{
"resource": "https://mcp.notion.com/mcp",
"authorization_servers": [
"https://mcp.notion.com/.well-known/oauth-authorization-server"
],
"scopes_supported": ["read", "write"],
"bearer_methods_supported": ["header"]
}
```
**Key PRM fields:**
- `resource` - The resource identifier (use this in `resource=` parameter for token requests)
- `authorization_servers` - Array of AS metadata URLs to fetch next
**Step 2: Fetch Authorization Server Metadata**
```bash
# Follow the URL from PRM's authorization_servers[0]
GET <authorization_server_url>
```
**Example AS Metadata Response:**
```json
{
"issuer": "https://mcp.notion.com",
"authorization_endpoint": "https://mcp.notion.com/authorize",
"token_endpoint": "https://mcp.notion.com/token",
"registration_endpoint": "https://mcp.notion.com/register",
"revocation_endpoint": "https://mcp.notion.com/token",
"response_types_supported": ["code"],
"response_modes_supported": ["query"],
"grant_types_supported": ["authorization_code", "refresh_token"],
"code_challenge_methods_supported": ["plain", "S256"],
"token_endpoint_auth_methods_supported": ["client_secret_basic", "client_secret_post", "none"]
}
```
**Key AS Metadata fields:**
- `authorization_endpoint` - Where users approve your app
- `token_endpoint` - Where you exchange codes for tokens
- `registration_endpoint` - Where you register as a client
- `code_challenge_methods_supported` - PKCE support (S256 = SHA-256)
**Step 3: Include Resource Indicator in Token Requests**
When requesting tokens, include the `resource` parameter from PRM (RFC 8707):
```http
POST /token HTTP/1.1
Host: mcp.notion.com
Content-Type: application/x-www-form-urlencoded
grant_type=authorization_code
&code=AUTH_CODE
&redirect_uri=http://localhost:8080/callback
&client_id=CLIENT_ID
&code_verifier=CODE_VERIFIER
&resource=https://mcp.notion.com/mcp
```
This binds the token to the specific MCP resource, preventing token reuse across different servers.
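The same form body can be assembled in Python. The values here are the placeholders from the HTTP example above (the library constructs this request for you):

```python
from urllib.parse import urlencode

# Placeholder values: the real code and verifier come from the authorization
# redirect, and `resource` comes from PRM discovery (RFC 8707 token binding)
token_request_body = urlencode({
    "grant_type": "authorization_code",
    "code": "AUTH_CODE",
    "redirect_uri": "http://localhost:8080/callback",
    "client_id": "CLIENT_ID",
    "code_verifier": "CODE_VERIFIER",
    "resource": "https://mcp.notion.com/mcp",
})
```

Note that `urlencode` percent-encodes the resource URL, as required for `application/x-www-form-urlencoded` bodies.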
### WWW-Authenticate Fallback
**If PRM discovery fails**, MCP servers **SHOULD** (per MCP spec convention) include the PRM URL in 401/403 responses via the `WWW-Authenticate` header:
> **Note**: The `resource_metadata` parameter is an **MCP-specific convention**, not part of core RFC 6750 (Bearer Token Usage). It extends the standard Bearer authentication scheme to enable OAuth discovery from error responses, as specified in the Model Context Protocol authorization specification.
```http
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="mcp",
resource_metadata="https://mcp.notion.com/.well-known/oauth-protected-resource"
```
**Example header formats:**
```
WWW-Authenticate: Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"
WWW-Authenticate: Bearer realm="mcp", error="invalid_token",
resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"
```
The client should:
1. Parse the `resource_metadata` URL from the header
2. Fetch the PRM document from that URL
3. Continue with normal discovery flow (Step 2 above)
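Step 1 can be sketched with a small helper (the function name and regex are illustrative, not this library's API):

```python
import re
from typing import Optional

def extract_resource_metadata(www_authenticate: str) -> Optional[str]:
    """Pull the resource_metadata URL out of a WWW-Authenticate header, if present."""
    match = re.search(r'resource_metadata="([^"]+)"', www_authenticate)
    return match.group(1) if match else None

header = ('Bearer realm="mcp", error="invalid_token", '
          'resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"')
prm_url = extract_resource_metadata(header)
```

A missing `resource_metadata` parameter simply yields `None`, at which point the client moves on to the legacy fallback below.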
### Legacy Fallback (Non-MCP Servers)
For **backward compatibility** with servers that don't implement PRM discovery, the library falls back to direct AS discovery:
```bash
# Legacy OAuth servers (pre-MCP)
GET <server_url>/.well-known/oauth-authorization-server
```
**Discovery priority:**
1. ✅ **First**: Try PRM at `/.well-known/oauth-protected-resource` (MCP-compliant)
2. ✅ **Second**: Check `WWW-Authenticate` header on 401/403 responses
3. ✅ **Third**: Fall back to direct AS discovery (legacy compatibility)
### How This Library Uses Discovery
When you call:
```python
tokens = await handler.ensure_authenticated_mcp(
    server_name="notion-mcp",
    server_url="https://mcp.notion.com/mcp",
    scopes=["read", "write"]
)
```
Behind the scenes (MCP-compliant flow):
1. **PRM Discovery**: Fetches `https://mcp.notion.com/.well-known/oauth-protected-resource`
2. **Extract Resource**: Saves the `resource` identifier for token binding (RFC 8707)
3. **AS Discovery**: Fetches AS metadata from `authorization_servers[0]` URL
4. **Parse**: Extracts `authorization_endpoint`, `token_endpoint`, etc.
5. **Validate**: Checks that PKCE is supported
6. **Cache**: Saves the configuration for future use
7. **Token Requests**: Includes `resource=` parameter in all token requests
8. **Proceed**: Uses the discovered endpoints for OAuth flow
**Fallback**: If PRM discovery fails, falls back to direct AS discovery for legacy server compatibility.
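Steps 1-3 of that flow boil down to two small extractions, sketched here with a hard-coded PRM document in place of the live HTTP fetch:

```python
# A parsed PRM document, as returned by
# GET https://mcp.notion.com/.well-known/oauth-protected-resource
prm = {
    "resource": "https://mcp.notion.com/mcp",
    "authorization_servers": [
        "https://mcp.notion.com/.well-known/oauth-authorization-server"
    ],
}

# Step 2: the resource identifier is saved for RFC 8707 token binding
resource = prm["resource"]

# Step 3: the first authorization server URL is fetched next for AS metadata
as_metadata_url = prm["authorization_servers"][0]
```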
### Manual Discovery (Advanced)
You can also discover endpoints manually:
```python
import asyncio
from chuk_mcp_client_oauth import MCPOAuthClient
async def discover_endpoints():
client = MCPOAuthClient(
server_url="https://mcp.notion.com/mcp",
redirect_uri="http://localhost:8080/callback"
)
# Discover OAuth configuration
metadata = await client.discover_authorization_server()
# Now you can inspect the discovered endpoints
print(f"Authorization URL: {metadata.authorization_endpoint}")
print(f"Token URL: {metadata.token_endpoint}")
print(f"Registration URL: {metadata.registration_endpoint}")
print(f"Supported scopes: {metadata.scopes_supported}")
print(f"PKCE methods: {metadata.code_challenge_methods_supported}")
# Run the async function
asyncio.run(discover_endpoints())
```
### Testing Discovery with curl
You can test if a server supports MCP-compliant OAuth discovery:
```bash
# Step 1: Test PRM discovery (MCP-compliant)
curl https://mcp.notion.com/.well-known/oauth-protected-resource
# Expected response:
# {
# "resource": "https://mcp.notion.com/mcp",
# "authorization_servers": ["https://mcp.notion.com/.well-known/oauth-authorization-server"],
# "scopes_supported": ["read", "write"]
# }
# Step 2: Test AS discovery (from PRM's authorization_servers[0])
curl https://mcp.notion.com/.well-known/oauth-authorization-server
# Expected response: AS metadata with endpoints
# Test your own MCP server
curl https://your-server.com/.well-known/oauth-protected-resource
```
**Expected responses:**
- **PRM**: JSON with `resource`, `authorization_servers`, `scopes_supported`
- **AS Metadata**: JSON with `authorization_endpoint`, `token_endpoint`, etc.
**Common errors:**
- `404 Not Found` on PRM - Server may not be MCP-compliant (library will fall back to direct AS discovery)
- `404 Not Found` on both - Server doesn't support OAuth discovery at all
- `Connection refused` - Server URL is incorrect
- `Invalid JSON` - Server has misconfigured OAuth
- `{"error":"invalid_token"}` - Discovery endpoint is incorrectly protected (should be public)
**Testing WWW-Authenticate fallback:**
```bash
# Make an unauthenticated req | text/markdown | null | null | null | null | Apache-2.0 | authentication, mcp, model-context-protocol, oauth, oauth2 | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Sec... | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=41.0.0",
"httpx>=0.24.0",
"keyring>=24.0.0; sys_platform == \"darwin\"",
"keyring>=24.0.0; sys_platform == \"win32\"",
"pydantic>=2.0.0",
"hvac>=1.0.0; extra == \"all\"",
"keyring>=24.0.0; extra == \"all\"",
"secretstorage>=3.3.0; extra == \"all\"",
"bandit>=1.7.0; extra == \"dev\"",
... | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.11 | 2026-02-18T17:21:50.965663 | chuk_mcp_client_oauth-0.4.tar.gz | 214,738 | 19/d1/beb96248f06600ab649aaa9cf160ae746f12b8668bb371b64a01978618b2/chuk_mcp_client_oauth-0.4.tar.gz | source | sdist | null | false | 890b7dccda737ca338bac8caf07f25c0 | 5ab1d9384d2ed802d6c4e24e755c34914046aef696a7346730756342a83b1974 | 19d1beb96248f06600ab649aaa9cf160ae746f12b8668bb371b64a01978618b2 | null | [
"LICENSE"
] | 577 |
2.4 | librenms-mcp | 1.7.0 | MCP server for LibreNMS management | # LibreNMS MCP Server
<!-- mcp-name: io.github.mhajder/librenms-mcp -->
LibreNMS MCP Server is a Python-based Model Context Protocol (MCP) server designed to provide advanced, programmable access to LibreNMS network monitoring data and management features. It exposes a modern API for querying, automating, and integrating LibreNMS resources such as devices, ports, alerts, inventory, locations, and logs. The server supports both read and write operations, provides robust security features, and is suitable for integration with automation tools, dashboards, and custom network management workflows.
## Features
### Core Features
- Query LibreNMS devices, ports, inventory, locations, logs, and alerts with flexible filtering
- Retrieve network topology, device status, and performance metrics
- Access and analyze alert history, event logs, and system health
- Monitor interface statistics, port status, and traffic data
- Track endpoints and connected devices by MAC or IP address
- Retrieve and manage device groups, port groups, and poller groups
- Get detailed information about network services and routing
### Management Operations
- Create, update, and delete devices, ports, and groups (if enabled)
- Manage alert rules, notifications, and device metadata
- Configure read-only mode to restrict all write operations for safe monitoring
- Support for bulk operations on devices and ports
### Advanced Capabilities
- Rate limiting and API security features
- Real-time network monitoring and health tracking
- Comprehensive logging and audit trails
- SSL/TLS support and configurable timeouts
- Extensible with custom middlewares and utilities
## Installation
### Prerequisites
- Python 3.11 to 3.14
- Access to a LibreNMS instance
- A valid LibreNMS API token with appropriate permissions
### Quick Install from PyPI
The easiest way to get started is to install from PyPI:
```sh
# Using UV (recommended)
uvx librenms-mcp
# Or using pip
pip install librenms-mcp
```
Remember to configure the environment variables for your LibreNMS instance before running the server:
```sh
# Create environment configuration
export LIBRENMS_URL=https://domain.tld:8443
export LIBRENMS_TOKEN=your-librenms-token
```
For more details, visit: https://pypi.org/project/librenms-mcp/
### Install from Source
1. Clone the repository:
```sh
git clone https://github.com/mhajder/librenms-mcp.git
cd librenms-mcp
```
2. Install dependencies:
```sh
# Using UV (recommended)
uv sync
# Or using pip
pip install -e .
```
3. Configure environment variables:
```sh
cp .env.example .env
# Edit .env with your LibreNMS url and token
```
4. Run the server:
```sh
# Using UV
uv run python run_server.py
# Or directly with Python
python run_server.py
# Or using the installed script
librenms-mcp
```
### Using Docker
Docker images are available on GitHub Packages for easy deployment.
```sh
# Normal STDIO image
docker pull ghcr.io/mhajder/librenms-mcp:latest
# MCPO image for usage with Open WebUI
docker pull ghcr.io/mhajder/librenms-mcpo:latest
```
### Development Setup
For development with additional tools:
```sh
# Clone and install with development dependencies
git clone https://github.com/mhajder/librenms-mcp.git
cd librenms-mcp
uv sync --group dev
# Run tests
uv run pytest
# Run with coverage
uv run pytest --cov=src/
# Run linting and formatting
uv run ruff check .
uv run ruff format .
# Run type checking
uv run ty check .
# Setup pre-commit hooks
uv run prek install
```
## Configuration
### Environment Variables
```env
# LibreNMS Connection Details
LIBRENMS_URL=https://domain.tld:8443
LIBRENMS_TOKEN=your-librenms-token
# SSL Configuration
LIBRENMS_VERIFY_SSL=true
LIBRENMS_TIMEOUT=30
# Read-Only Mode
# Set READ_ONLY_MODE true to disable all write operations (put, post, delete)
READ_ONLY_MODE=false
# Disabled Tags
# Comma-separated list of tags to disable tools for (empty by default)
# Example: DISABLED_TAGS=alert,bills
DISABLED_TAGS=
# Logging Configuration
LOG_LEVEL=INFO
# Rate Limiting (requests per minute)
# Set RATE_LIMIT_ENABLED true to enable rate limiting
RATE_LIMIT_ENABLED=false
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_MINUTES=1
# Sentry Error Tracking (Optional)
# Set SENTRY_DSN to enable error tracking and performance monitoring
# SENTRY_DSN=https://your-key@o12345.ingest.us.sentry.io/6789
# Optional Sentry configuration
# SENTRY_TRACES_SAMPLE_RATE=1.0
# SENTRY_SEND_DEFAULT_PII=true
# SENTRY_ENVIRONMENT=production
# SENTRY_RELEASE=1.2.3
# SENTRY_PROFILE_SESSION_SAMPLE_RATE=1.0
# SENTRY_PROFILE_LIFECYCLE=trace
# SENTRY_ENABLE_LOGS=true
# MCP Transport Configuration
# Transport type: 'stdio' (default), 'sse' (Server-Sent Events), or 'http' (HTTP Streamable)
MCP_TRANSPORT=stdio
# HTTP Transport Settings (used when MCP_TRANSPORT=sse or MCP_TRANSPORT=http)
# Host to bind the HTTP server (default: 0.0.0.0 for all interfaces)
MCP_HTTP_HOST=0.0.0.0
# Port to bind the HTTP server (default: 8000)
MCP_HTTP_PORT=8000
# Optional bearer token for authentication (leave empty for no auth)
MCP_HTTP_BEARER_TOKEN=
```
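As a rough illustration of how these variables might be consumed, here is a hypothetical settings loader (`load_settings` is not the server's real API; the actual configuration handling may differ):

```python
import os

def load_settings() -> dict:
    """Hypothetical loader mirroring the environment variables above."""
    return {
        "url": os.environ["LIBRENMS_URL"],      # required
        "token": os.environ["LIBRENMS_TOKEN"],  # required
        "verify_ssl": os.environ.get("LIBRENMS_VERIFY_SSL", "true").lower() == "true",
        "timeout": int(os.environ.get("LIBRENMS_TIMEOUT", "30")),
        "read_only": os.environ.get("READ_ONLY_MODE", "false").lower() == "true",
    }
```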
## Available Tools
### Device & Inventory Tools
- `devices_list`: List all devices (with optional filters)
- `device_get`: Get details for a specific device
- `device_add`: Add a new device
- `device_update`: Update device metadata
- `device_delete`: Remove a device
- `device_ports`: List all ports for a device
- `device_ports_get`: Get details for a specific port on a device
- `device_availability`: Get device availability
- `device_outages`: Get device outages
- `device_set_maintenance`: Set device maintenance mode
- `device_discover`: Discover or add a device using provided credentials
- `device_rename`: Rename an existing device
- `device_maintenance_status`: Get the maintenance status for a device
- `device_vlans`: List VLANs for a device
- `device_links`: List links for a device
- `device_eventlog_add`: Add an event log entry for a device
- `inventory_device`: Get inventory for a device
- `inventory_device_flat`: Get flat inventory for a device
- `devicegroups_list`: List device groups
- `devicegroup_add`: Add a device group
- `devicegroup_update`: Update a device group
- `devicegroup_delete`: Delete a device group
- `devicegroup_devices`: List devices in a device group
- `devicegroup_set_maintenance`: Set maintenance for a device group
- `devicegroup_add_devices`: Add devices to a device group
- `devicegroup_remove_devices`: Remove devices from a device group
- `locations_list`: List all locations
- `location_add`: Add a location
- `location_edit`: Edit a location
- `location_delete`: Delete a location
- `location_get`: Get details for a location
- `location_set_maintenance`: Set maintenance for a location
### Port & Port Group Tools
- `ports_list`: List all ports (with optional filters)
- `ports_search`: Search ports (general search)
- `ports_search_field`: Search ports by a specific field
- `ports_search_mac`: Search ports by MAC address
- `port_get`: Get details for a specific port
- `port_ip_info`: Get IP address information for a port
- `port_transceiver`: Get transceiver information for a port
- `port_description_get`: Get a port description
- `port_description_update`: Update a port description
- `port_groups_list`: List port groups
- `port_group_add`: Add a port group
- `port_group_list_ports`: List ports in a port group
- `port_group_assign`: Assign ports to a port group
- `port_group_remove`: Remove ports from a port group
### Alerting & Logging Tools
- `alerts_get`: List current and historical alerts
- `alert_get_by_id`: Get details for a specific alert
- `alert_acknowledge`: Acknowledge an alert
- `alert_unmute`: Unmute an alert
- `alert_rules_list`: List alert rules
- `alert_rule_get`: Get details for a specific alert rule
- `alert_rule_add`: Add an alert rule
- `alert_rule_edit`: Edit an alert rule
- `alert_rule_delete`: Delete an alert rule
- `alert_templates_list`: List all alert templates
- `alert_template_get`: Get a specific alert template
- `alert_template_create`: Create a new alert template
- `alert_template_edit`: Edit an alert template
- `alert_template_delete`: Delete an alert template
- `logs_eventlog`: Get event log for a device
- `logs_syslog`: Get syslog for a device
- `logs_alertlog`: Get alert log for a device
- `logs_authlog`: Get auth log for a device
- `logs_syslogsink`: Add a syslog sink
### Billing Tools
- `bills_list`: List bills
- `bill_get`: Get details for a bill
- `bill_graph`: Get bill graph
- `bill_graph_data`: Get bill graph data
- `bill_history`: Get bill history
- `bill_history_graph`: Get bill history graph
- `bill_history_graph_data`: Get bill history graph data
- `bill_create_or_update`: Create or update a bill
- `bill_delete`: Delete a bill
### Network & Monitoring Tools
- `arp_search`: Search ARP entries
- `poller_group_get`: Get poller group(s)
- `routing_ip_addresses`: List all IP addresses from LibreNMS
- `services_list`: List all services from LibreNMS
- `services_for_device`: Get services for a device from LibreNMS
- `service_add`: Add a service to LibreNMS
- `service_edit`: Edit an existing service
- `service_delete`: Delete a service
- `bgp_sessions`: List BGP sessions
- `bgp_session_get`: Get details for a specific BGP session
- `bgp_session_edit`: Edit a BGP session
- `fdb_lookup`: Lookup forwarding database (FDB) entries
- `ospf_list`: List OSPF instances
- `ospf_ports`: List OSPF ports
- `vrf_list`: List VRFs
- `ping`: Ping the LibreNMS system
- `health_list`: List health sensors
- `health_by_type`: List health sensors by type
- `health_sensor_get`: Get details for a health sensor
- `sensors_list`: List sensors
- `switching_vlans`: List all VLANs from LibreNMS
- `switching_links`: List all links from LibreNMS
- `system_info`: Get system info from LibreNMS
### General Query Tools
- Flexible filtering and search for all major resources (devices, ports, alerts, logs, inventory, etc.)
## Security & Safety Features
### Read-Only Mode
The server supports a read-only mode that disables all write operations for safe monitoring:
```env
READ_ONLY_MODE=true
```
### Tag-Based Tool Filtering
You can disable specific categories of tools by setting disabled tags:
```env
DISABLED_TAGS=alert,bills
```
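Conceptually, tag-based filtering drops any tool whose tags intersect the disabled set. A minimal sketch (the `filter_tools` helper is hypothetical, not the server's actual code):

```python
def filter_tools(tools: dict[str, set[str]], disabled_tags_env: str) -> list[str]:
    """Return tool names whose tags don't intersect the disabled set."""
    disabled = {t.strip() for t in disabled_tags_env.split(",") if t.strip()}
    return [name for name, tags in tools.items() if not tags & disabled]

tools = {"alerts_get": {"alert"}, "bills_list": {"bills"}, "devices_list": {"device"}}
print(filter_tools(tools, "alert,bills"))  # → ['devices_list']
```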
### Rate Limiting
The server supports rate limiting to control API usage and prevent abuse. If enabled, requests are limited per client using a sliding window algorithm.
Enable rate limiting by setting the following environment variables in your `.env` file:
```env
RATE_LIMIT_ENABLED=true
RATE_LIMIT_MAX_REQUESTS=100 # Maximum requests allowed per window
RATE_LIMIT_WINDOW_MINUTES=1 # Window size in minutes
```
If `RATE_LIMIT_ENABLED` is set to `true`, the server will apply rate limiting middleware. Adjust `RATE_LIMIT_MAX_REQUESTS` and `RATE_LIMIT_WINDOW_MINUTES` as needed for your environment.
### Sentry Error Tracking & Monitoring (Optional)
The server optionally supports **Sentry** for error tracking, performance monitoring, and debugging. Sentry integration is completely optional and only initialized if configured.
#### Installation
To enable Sentry monitoring, install the optional dependency:
```sh
# Using UV (recommended)
uv sync --extra sentry
```
#### Configuration
Enable Sentry by setting the `SENTRY_DSN` environment variable in your `.env` file:
```env
# Required: Sentry DSN for your project
SENTRY_DSN=https://your-key@o12345.ingest.us.sentry.io/6789
# Optional: Performance monitoring sample rate (0.0-1.0, default: 1.0)
SENTRY_TRACES_SAMPLE_RATE=1.0
# Optional: Include personally identifiable information (default: true)
SENTRY_SEND_DEFAULT_PII=true
# Optional: Environment name (e.g., "production", "staging")
SENTRY_ENVIRONMENT=production
# Optional: Release version (auto-detected from package if not set)
SENTRY_RELEASE=1.2.2
# Optional: Profiling - continuous profiling sample rate (0.0-1.0, default: 1.0)
SENTRY_PROFILE_SESSION_SAMPLE_RATE=1.0
# Optional: Profiling - lifecycle mode for profiling (default: "trace")
# Options: "all", "continuation", "trace"
SENTRY_PROFILE_LIFECYCLE=trace
# Optional: Enable log capture as breadcrumbs and events (default: true)
SENTRY_ENABLE_LOGS=true
```
#### Features
When enabled, Sentry automatically captures:
- **Exceptions & Errors**: All unhandled exceptions with full context
- **Performance Metrics**: Request/response times and traces
- **MCP Integration**: Detailed MCP server activity and interactions
- **Logs & Breadcrumbs**: Application logs and event trails for debugging
- **Context Data**: Environment, client info, and request parameters
#### Getting a Sentry DSN
1. Create a free account at [sentry.io](https://sentry.io)
2. Create a new Python project
3. Copy your DSN from the project settings
4. Set it in your `.env` file
#### Disabling Sentry
Sentry is completely optional. If you don't set `SENTRY_DSN`, the server will run normally without any Sentry integration, and no monitoring data will be collected.
### SSL/TLS Configuration
The server supports SSL certificate verification and custom timeout settings:
```env
LIBRENMS_VERIFY_SSL=true # Enable SSL certificate verification
LIBRENMS_TIMEOUT=30 # Connection timeout in seconds
```
### Transport Configuration
The server supports multiple transport mechanisms for the MCP protocol:
#### STDIO Transport (Default)
The default transport uses standard input/output for communication. This is ideal for local usage and integration with tools that communicate via stdin/stdout:
```env
MCP_TRANSPORT=stdio
```
#### HTTP SSE Transport (Server-Sent Events)
For network-based deployments, you can use HTTP with Server-Sent Events. This allows the MCP server to be accessed over HTTP with real-time streaming:
```env
MCP_TRANSPORT=sse
MCP_HTTP_HOST=0.0.0.0 # Bind to all interfaces (or specific IP)
MCP_HTTP_PORT=8000 # Port to listen on
MCP_HTTP_BEARER_TOKEN=your-secret-token # Optional authentication token
```
When using SSE transport with a bearer token, clients must include the token in their requests:
```bash
curl -H "Authorization: Bearer your-secret-token" http://localhost:8000/sse
```
#### HTTP Streamable Transport
The HTTP Streamable transport provides HTTP-based communication with request/response streaming. This is ideal for web integrations and tools that need HTTP endpoints:
```env
MCP_TRANSPORT=http
MCP_HTTP_HOST=0.0.0.0 # Bind to all interfaces (or specific IP)
MCP_HTTP_PORT=8000 # Port to listen on
MCP_HTTP_BEARER_TOKEN=your-secret-token # Optional authentication token
```
When using streamable transport with a bearer token:
```sh
curl -H "Authorization: Bearer your-secret-token" \
-H "Accept: application/json, text/event-stream" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' \
http://localhost:8000/mcp
```
**Note**: The HTTP transport requires proper JSON-RPC formatting with `jsonrpc` and `id` fields. The server may also require session initialization for some operations.
For more information on FastMCP transports, see the [FastMCP documentation](https://gofastmcp.com/deployment/running-server#transport-protocols).
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run tests and ensure code quality (`uv run pytest && uv run ruff check .`)
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
## License
MIT License - see LICENSE file for details.
| text/markdown | Mateusz Hajder | null | null | null | MIT License
Copyright (c) 2025 Mateusz Hajder
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | automation, librenms, management, mcp, network, server | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"... | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"fastmcp<3,>=2.14.0",
"httpx>=0.28.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"sentry-sdk>=2.43.0; extra == \"sentry\""
] | [] | [] | [] | [
"Homepage, https://github.com/mhajder/librenms-mcp",
"Repository, https://github.com/mhajder/librenms-mcp",
"Documentation, https://github.com/mhajder/librenms-mcp#readme",
"Issues, https://github.com/mhajder/librenms-mcp/issues",
"LibreNMS_Documentation, https://docs.librenms.org/API/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:21:27.423838 | librenms_mcp-1.7.0.tar.gz | 146,502 | ca/35/c001b2036f76ece61855c9c4be337b5ebed4f5b9826aa9b3bf3a08021f6c/librenms_mcp-1.7.0.tar.gz | source | sdist | null | false | eeede40c5fc8d9e432ee7994d4c2a62f | 178811187fd3abff542544b519121f373049552768b99b0ed8688022aad82f21 | ca35c001b2036f76ece61855c9c4be337b5ebed4f5b9826aa9b3bf3a08021f6c | null | [
"LICENSE"
] | 402 |
2.1 | iqm-pulla | 12.0.5 | Client library for pulse-level access to an IQM quantum computer | IQM Pulla
#########
Pulla (pulse-level access) is a client-side software which allows the user to control the generation and
execution of pulse schedules on a quantum computer. Within the existing IQM QCCSW stack, Pulla is somewhere between
circuit-level execution and EXA-experiment.
An interactive user guide is available as a Jupyter notebook in the ``docs`` folder.
Use
===
Create a virtual environment and install dependencies:
.. code-block:: bash
conda create -y -n pulla python=3.11 pip=23.0
conda activate pulla
pip install "iqm-pulla[notebook, qiskit, qir]"
The ``[qiskit]`` option is to enable Qiskit-related features and utilities, like converting Qiskit circuits to Pulla circuits, constructing a compatible compiler instance, or constructing a ``PullaBackend`` for running Qiskit jobs.
The ``[qir]`` option is to enable QIR support, e.g. the ``qir_to_pulla`` function.
The ``[notebook]`` option enables running the example notebooks in Jupyter Notebook:
.. code-block:: bash
jupyter-notebook
Development
===========
Install development and testing dependencies:
.. code-block:: bash
pip install -e ".[dev,notebook,qiskit,qir,testing,docs]"
e2e testing is execution of all user guides (Jupyter notebooks). User guides cover the majority of user-level features,
so we achieve two things: end-to-end-test Pulla as a client library, and make sure the user guides are correct.
(Server-side use of Pulla is e2e-tested as part of CoCoS.)
You have to provide IQM Server URL as environment variable:
.. code-block:: bash
IQM_SERVER_URL=<IQM_SERVER_URL> tox -e e2e
Notebooks are executed using the ``jupyter execute`` command, which prints no output if there are no errors. If you want
to run a particular notebook and see the output cells printed in the terminal, you can use ``nbconvert`` with ``jq``
(https://jqlang.github.io/jq/download/) like so:
.. code-block:: bash
jupyter nbconvert --to notebook --execute docs/Quick\ Start.ipynb --stdout | jq -r '.cells[] | select(.outputs) | .outputs[] | select(.output_type == "stream") | .text[]'
Run unit tests, build docs, build package:
.. code-block:: bash
tox
tox -e docs
tox -e build
Copyright
=========
Copyright 2025 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| text/x-rst | null | IQM Finland Oy <developers@meetiqm.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2025 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Physics",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"iqm-data-definitions<3.0,>=2.18",
"pylatexenc==2.10",
"pydantic<3.0,>=2.10.4",
"iqm-client<34,>=33.0.5",
"iqm-exa-common<28,>=27.4.5",
"iqm-pulse<13,>=12.7.5",
"iqm-station-control-client<13,>=12.0.5",
"notebook<7,>=6.4.11; extra == \"notebook\"",
"matplotlib<4,>=3.6.3; extra == \"notebook\"",
"n... | [] | [] | [] | [
"Documentation, https://docs.meetiqm.com/iqm-pulla/",
"Homepage, https://pypi.org/project/iqm-pulla/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:21:10.794022 | iqm_pulla-12.0.5.tar.gz | 11,314,157 | 97/55/7095ea9814a207d8256e5d807f519695d20cd07df7a9b406f2af6e32c182/iqm_pulla-12.0.5.tar.gz | source | sdist | null | false | 06df692fc13125391ada264f98e7a5ee | 2e01647bd6e4be407f8698b78d643e06f45dc5574f2b397d530f5d975c2826d3 | 97557095ea9814a207d8256e5d807f519695d20cd07df7a9b406f2af6e32c182 | null | [] | 360 |
2.4 | pykickstart | 3.69 | Python module for manipulating kickstart files | Pykickstart
===========
Pykickstart is a Python 2 and Python 3 library consisting of a data
representation of kickstart files, a parser to read files into that
representation, and a writer to generate kickstart files.
Online documentation
--------------------
Online documentation for kickstart and the Pykickstart library is available on Read the Docs:
https://pykickstart.readthedocs.io
How to generate the kickstart documentation
-------------------------------------------
The pykickstart documentation is generated dynamically from the source code with Sphinx.
To generate the documentation first make sure you have the Python bindings for Sphinx installed.
At least on Fedora this means installing the ``python3-sphinx`` package.
Then change directory to the ``docs`` folder:
``cd docs``
And generate the docs with:
``make html``
| text/x-rst | Chris Lumens | Chris Lumens <clumens@redhat.com> | null | "Brian C. Lane" <bcl@redhat.com> | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License (GPL)",
"Operating System :: OS Independent"
] | [] | https://fedoraproject.org/wiki/pykickstart | null | >=3.6 | [] | [] | [] | [
"requests"
] | [] | [] | [] | [
"Homepage, https://fedoraproject.org/wiki/pykickstart",
"Bug Tracker, https://github.com/pykickstart/pykickstart/issues"
] | twine/6.1.0 CPython/3.13.11 | 2026-02-18T17:21:08.992660 | pykickstart-3.69.tar.gz | 834,663 | 0e/8a/95f58608c6d537f8840c42803cb6be6b1b9a0e3a8c3e4d06770d279e6f86/pykickstart-3.69.tar.gz | source | sdist | null | false | d8222ca019de8fc3ccc963b8a57c055f | 49bbc8a51b43a071af9e89944248929a6f3c038731e1a868701dc939aab4dc57 | 0e8a95f58608c6d537f8840c42803cb6be6b1b9a0e3a8c3e4d06770d279e6f86 | null | [
"COPYING"
] | 668 |
2.1 | iqm-pulse | 12.7.5 | A Python-based project for providing interface and implementations for control pulses. | IQM Pulse library
=================
A Python-based project for providing interface and implementations for control pulses.
See `API documentation <https://docs.meetiqm.com/iqm-pulse/>`_.
Copyright
---------
Copyright 2019-2025 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| text/x-rst | null | IQM Finland Oy <info@meetiqm.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Physics",
"Intended Audience :: Science/Research"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"iqm-data-definitions<3.0,>=2.18",
"python-rapidjson==1.20",
"jinja2<4,>=3.1.6",
"numpy<3.0,>=1.26.4",
"scipy>=1.11.4",
"scipy-stubs"
] | [] | [] | [] | [
"Documentation, https://docs.meetiqm.com/iqm-pulse/",
"Homepage, https://pypi.org/project/iqm-pulse/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:21:00.879053 | iqm_pulse-12.7.5.tar.gz | 628,694 | 46/3c/ef3d257a094d361c51936f28e0e6289c1d04fca3363abe96a14c6c24c0d5/iqm_pulse-12.7.5.tar.gz | source | sdist | null | false | b0a858c214b5e65d5e0ebb30e5408075 | cea114947b2680292356e99b07d164b18b32760cdef7f1f2b590cb40f5829301 | 463cef3d257a094d361c51936f28e0e6289c1d04fca3363abe96a14c6c24c0d5 | null | [] | 460 |
2.1 | iqm-station-control-client | 12.0.5 | Python client for communicating with Station Control Service | Station control client library
==============================
Client library for accessing IQM Station Control.
| text/x-rst | null | IQM Finland Oy <info@meetiqm.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Physics",
"Intended Audience :: Science/Research"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"iqm-data-definitions<3.0,>=2.18",
"opentelemetry-exporter-otlp<2.0,>=1.25.0",
"protobuf<5.0,>=4.25.3",
"types-protobuf",
"grpcio<2.0,>=1.65.4",
"pydantic<3.0,>=2.10.4",
"PyYAML<7.0,>=6.0",
"requests==2.32.3",
"types-requests",
"tqdm>=4.59.0",
"types-tqdm",
"iqm-exa-common<28,>=27.4.5"
] | [] | [] | [] | [
"Documentation, https://docs.meetiqm.com/iqm-station-control-client/",
"Homepage, https://pypi.org/project/iqm-station-control-client/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:20:54.940049 | iqm_station_control_client-12.0.5.tar.gz | 233,923 | 18/f1/7b00ecf3500c88d071461ede05af151e559f19ce112a00f50644742d4898/iqm_station_control_client-12.0.5.tar.gz | source | sdist | null | false | 043f5869fe01f1114e58700247c6e93b | 390c50cbcaf9d5ab77122b5ffa206cde2b304ef15995aedfffd47f62f3f59b2e | 18f17b00ecf3500c88d071461ede05af151e559f19ce112a00f50644742d4898 | null | [] | 414 |
2.1 | iqm-exa-common | 27.4.5 | Framework for control and measurement of superconducting qubits: common library | EXA-common
==========
A Python-based project with abstract interfaces, helpers, etc. that are used both in backend and experiment layers.
See `API documentation <https://docs.meetiqm.com/iqm-exa-common/>`_.
Copyright
---------
Copyright 2019-2025 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| text/x-rst | null | IQM Finland Oy <info@meetiqm.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Physics",
"Intended Audience :: Science/Research"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"iqm-data-definitions<3.0,>=2.18",
"numpy<3.0,>=1.26.4",
"pydantic<3.0,>=2.10.4",
"python-dotenv==0.21.1",
"xarray>=2024.10.0",
"requests==2.32.3",
"types-requests",
"ruamel-yaml==0.17.32",
"ruamel-yaml-clib==0.2.8",
"jinja2<4,>=3.1.6",
"six==1.16.0",
"types-six"
] | [] | [] | [] | [
"Documentation, https://docs.meetiqm.com/iqm-exa-common/",
"Homepage, https://pypi.org/project/iqm-exa-common/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:20:50.988135 | iqm_exa_common-27.4.5.tar.gz | 239,597 | 66/56/98827160372a1ab149ab626afe334da59aed1309b64d1dde27e9de8b2e30/iqm_exa_common-27.4.5.tar.gz | source | sdist | null | false | 0797a23c0c10ae9c3c261fdbfaee56ca | 42caa485bcdb8ea833adf2f03c0f8e36dc4cfa606acaa44ccba89637a496b253 | 665698827160372a1ab149ab626afe334da59aed1309b64d1dde27e9de8b2e30 | null | [] | 446 |
2.1 | odoo-addon-sale-pricelist-global-rule | 17.0.1.0.1 | Apply a global rule to all sale order | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
==========================
Sale pricelist global rule
==========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:3d45c93c39c5a2676fc75ffa0ab2da5efdcd9fbc531a64610bd117cf4f304ded
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fsale--workflow-lightgray.png?logo=github
:target: https://github.com/OCA/sale-workflow/tree/17.0/sale_pricelist_global_rule
:alt: OCA/sale-workflow
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/sale-workflow-17-0/sale-workflow-17-0-sale_pricelist_global_rule
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/sale-workflow&target_branch=17.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows configured pricelists to be applied to a sales order
by considering cumulative quantities across all lines.
**Global by Product Template**
If a pricelist rule has a min_quantity = 15, and a sales order contains:
- Line 1: Variant 1, quantity = 8
- Line 2: Variant 2, quantity = 8
**Global by Product Category**
Similarly, if a pricelist rule has a min_quantity = 20 for products
within a category, and a sales order includes:
- Line 1: Product 1, quantity = 10
- Line 2: Product 2, quantity = 10
In standard Odoo, the pricelist rule would not apply, since no single
line meets the minimum quantity. With this module, however, quantities
are accumulated across lines, so the rule applies: the cumulative totals
(16 in the product template example and 20 in the product category
example) meet the minimum thresholds.
**Table of contents**
.. contents::
:local:
Configuration
=============
- Go to Sales -> Products -> Pricelist.
- Create a new Pricelist and add at least one line with the Apply On
  option set to Global - Product template or Global - Product category.
- Choose the specific product template or category for the rule.
- Set the computation mode and save.
Usage
=====
- Go to Sales -> Orders -> Quotations.
- Create a new record and fill in the required fields.
- Choose a Pricelist that has a global rule configured (either by
Category or Product).
- Click the **Recompute pricelist global** button to update prices
according to the specified pricelist rules.
Known issues / Roadmap
======================
- Implement automatic application of the pricelist whenever changes are
made to order lines (such as prices, quantities, etc.) or to the
pricelist itself, eliminating the need for manual button clicks.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/sale-workflow/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/sale-workflow/issues/new?body=module:%20sale_pricelist_global_rule%0Aversion:%2017.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Tecnativa
Contributors
------------
- `Tecnativa <https://www.tecnativa.com>`__
- Pedro M. Baeza
- Carlos López
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/sale-workflow <https://github.com/OCA/sale-workflow/tree/17.0/sale_pricelist_global_rule>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Tecnativa, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 17.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/sale-workflow | null | >=3.10 | [] | [] | [] | [
"odoo<17.1dev,>=17.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T17:20:47.861786 | odoo_addon_sale_pricelist_global_rule-17.0.1.0.1-py3-none-any.whl | 34,313 | 68/6d/80dab724c857a41aee3ab0deef8811ed271c7b81378c729071cfc66ce081/odoo_addon_sale_pricelist_global_rule-17.0.1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 7f24b1dc9ba8abc6b38770bf6f10a953 | 5b92ae2e711a2e25326f926dc73a0e63e4ad9916ec9358aceedb4ba930afabf2 | 686d80dab724c857a41aee3ab0deef8811ed271c7b81378c729071cfc66ce081 | null | [] | 102 |
2.1 | iqm-client | 33.0.5 | Client library for accessing an IQM quantum computer | IQM Client
###########
Client-side Python library for connecting to an `IQM <https://meetiqm.com/>`_ quantum computer.
Includes as an optional feature `Qiskit <https://qiskit.org/>`_ and `Cirq <https://quantumai.google/cirq>`_
adapters for `IQM's <https://www.meetiqm.com>`_ quantum computers, which allow you to:
* Transpile arbitrary quantum circuits for IQM quantum architectures
* Simulate execution on IQM quantum architectures with IQM-specific noise models
(currently only the Qiskit adapter contains IQM noise models)
* Run quantum circuits on an IQM quantum computer
Installation
============
To execute code on an IQM quantum computer, you can follow, for example,
the Qiskit on IQM or Cirq on IQM user guides found in the documentation.
Qiskit on IQM and Cirq on IQM are optional features that can be installed alongside the base IQM Client library.
An example showing how to install from the public Python Package Index (PyPI):
.. code-block:: bash
$ uv pip install "iqm-client[qiskit,cirq]"
.. note::
If you have previously installed the (now deprecated) ``qiskit-iqm`` or ``cirq-iqm`` packages in your
Python environment, you should first uninstall them with ``$ pip uninstall qiskit-iqm cirq-iqm``.
In this case, you should also include the ``--force-reinstall`` option in the ``iqm-client`` installation command.
IQM Client by itself is not intended to be used directly by human users. If you want just the base IQM Client library,
though, you can install it with
.. code-block:: bash
$ uv pip install iqm-client
.. note::
`uv <https://docs.astral.sh/uv/>`_ is highly recommended for practical Python environment and package management.
Documentation
=============
Documentation for the latest version is `available online <https://docs.meetiqm.com/iqm-client/>`_.
You can build documentation for any older version locally by downloading the corresponding package from PyPI,
and running the docs builder. For versions greater than or equal to 20.12 but less than 33.0.0 this is done by
running ``./docbuild`` in the ``iqm-client`` root directory, and for earlier versions by running ``tox run -e docs``.
``./docbuild`` or ``tox run -e docs`` will build the documentation at ``./build/sphinx/html``.
Versions greater than or equal to 33.0.0 use the command:
``sphinx-build -q -d build/.doctrees/iqm-client iqm-client/docs build/docs/iqm-client``
(the ``build/docs/`` directory has to be created first).
These commands require installing the ``sphinx`` and ``sphinx-book-theme`` Python packages and
`graphviz <https://graphviz.org/>`_.
Copyright
=========
IQM Client is free software, released under the Apache License, version 2.0.
Copyright 2021-2026 IQM Client developers.
| text/x-rst | null | IQM Finland Oy <developers@meetiqm.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 IQM
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Physics",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"numpy<3.0,>=1.26.4",
"packaging==24.1",
"pydantic<3.0,>=2.9.2",
"requests==2.32.3",
"iqm-pulse<13,>=12.7.5",
"iqm-station-control-client<13,>=12.0.5",
"cirq-core[contrib]~=1.2; extra == \"cirq\"",
"ply==3.11; extra == \"cirq\"",
"llvmlite>=0.44.0; extra == \"cirq\"",
"numba>=0.61.0; extra == \"ci... | [] | [] | [] | [
"Documentation, https://docs.meetiqm.com/iqm-client/",
"Homepage, https://pypi.org/project/iqm-client/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:20:46.388058 | iqm_client-33.0.5.tar.gz | 316,404 | 06/94/4f03e4cd3f8a6e04dececf11b719c07c0f3c136ba2fc6879819406142c8e/iqm_client-33.0.5.tar.gz | source | sdist | null | false | d7dca6f5d642c2afa4d46ed327c0940b | c5129d2d868906f86c2b675759343076a879644d8302c0a7efc3e45ab443e1aa | 06944f03e4cd3f8a6e04dececf11b719c07c0f3c136ba2fc6879819406142c8e | null | [] | 422 |
2.4 | arprax-algorithms | 0.3.1 | Industrial-Grade Algorithms by Arprax Lab | # Arprax Algorithms
**Industrial-grade algorithms, performance profilers, and data structures for Python.**
Built by **Arprax Lab**, this toolkit is designed for the "Applied Data Intelligence" era—where understanding how code scales is as important as the code itself.
---
## 🚀 Features
* **ArpraxProfiler:** High-precision analysis with GC control, warmup cycles, and OHPV2 (Doubling Test) complexity estimation.
* **Industrial Utils:** High-performance data factories (random_array, sorted_array) for robust benchmarking.
* **Standard Library:** High-performance implementations of classic algorithms (Merge Sort, Bubble Sort, etc.) with strict type hinting.
## 📦 Installation
```bash
# Core only
pip install arprax-algorithms
# With visual tools
pip install arprax-algorithms[visuals]
# With research tools
pip install arprax-algorithms[research]
```
## 🔬 Quick Start: Benchmarking
Once installed, you can immediately run a performance battle between algorithms.
```python
from arprax.algos import Profiler
from arprax.algos.utils import random_array  # Data factory for benchmark inputs
from arprax.algos.algorithms import merge_sort  # Algorithm under test
# 1. Initialize the industrial profiler
profiler = Profiler(mode="min", repeats=5)
# 2. Run a doubling test (OHPV2 Analysis)
results = profiler.run_doubling_test(
merge_sort,
random_array,
start_n=500,
rounds=5
)
# 3. Print the performance analysis
profiler.print_analysis("Merge Sort", results)
```
## 🎓 Demonstrations & Pedagogy
We provide high-fidelity demonstrations to show the library in action. These are located in the `examples/` directory to maintain a decoupled, industrial-grade production environment.
### Performance Profiling
Measure execution time, memory usage, and operation counts across different input sizes ($N$):
```bash
python examples/demo_profiler.py
```
### Algorithm Visualization
View real-time, frame-by-frame animations of sorting and search logic:
```bash
python examples/visualizer.py
```
> [!TIP]
> For detailed instructions on running these demos and setting up the visualization environment, see our [**Examples Guide**](./examples/GETTING_STARTED.md).
## 🏗️ The Arprax Philosophy
> **Applied Data Intelligence requires more than just code—it requires proof.**
* **Zero-Magic:** Every algorithm is written for clarity and performance. We don't hide logic behind obscure abstractions or hidden standard library calls.
* **Empirical Evidence:** We don't just guess Big O complexity; we measure it using high-resolution timers and controlled environments.
* **Industrial Scale:** Our tools are designed to filter out background CPU noise, providing reliable benchmarks for real-world software engineering.
## 📚 Citation
**To cite the Software:**
See the "Cite this repository" button on our [GitHub](https://github.com/arprax/arprax-algorithms).
**To cite the Handbook (Documentation):**
```bibtex
@manual{arprax_handbook,
title = {The Algorithm Engineering Handbook},
author = {Chowdhury, Tanmoy},
organization = {Arprax LLC},
year = {2026},
url = {https://algorithms.arprax.com/book},
note = {Accessed: 2026-02-01}
}
```
---
**© 2026 Arprax Lab** *A core division of Arprax dedicated to Applied Data Intelligence.*
| text/markdown | null | Arprax Lab <lab@arprax.com> | null | null | null | algorithms, sorting, profiler, industrial-cs, arprax | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Education",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"build; extra == \"dev\"",
"twine; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"ruff; extra == \"dev\"",
"numpy>=1.26.0; extra == \"research\"",
"matplotlib>=3.8.0; extra == \"visuals\"",
"networkx>=3.0; extra == \"visuals\""
] | [] | [] | [] | [
"Homepage, https://algorithms.arprax.com/",
"Documentation, https://algorithms.arprax.com/reference/api/",
"Source, https://github.com/arprax/arprax-algorithms",
"Tracker, https://github.com/arprax/arprax-algorithms/issues",
"Arprax, https://www.arprax.com/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:20:46.092771 | arprax_algorithms-0.3.1.tar.gz | 36,969 | 7c/86/f07b5036d1cb3b02a082b99d528c7ab4db4bd01ab291ade0e07d9545a1e4/arprax_algorithms-0.3.1.tar.gz | source | sdist | null | false | 1e03d4c9a4388c6a9d120fa11b6d2394 | 3b1572f6a06fe151ef116bd6808d56cba5e19e02957968499fd3d542d7beba26 | 7c86f07b5036d1cb3b02a082b99d528c7ab4db4bd01ab291ade0e07d9545a1e4 | null | [
"LICENSE"
] | 234 |
2.4 | ngiab-data-preprocess | 4.8.2 | Graphical Tools for creating Next Gen Water model input data. | # NGIAB Data Preprocess
This repository contains tools for preparing data to run a [NextGen](https://github.com/NOAA-OWP/ngen)-based simulation using [NGIAB](https://github.com/CIROH-UA/NGIAB-CloudInfra). The tools allow you to select a catchment of interest on an interactive map, choose a date range, and prepare the data with just a few clicks!

| | |
| --- | --- |
|  | Funding for this project was provided by the National Oceanic & Atmospheric Administration (NOAA), awarded to the Cooperative Institute for Research to Operations in Hydrology (CIROH) through the NOAA Cooperative Agreement with The University of Alabama (NA22NWS4320003). |
## Table of Contents
1. [What does this tool do?](#what-does-this-tool-do)
2. [Limitations](#limitations)
- [Custom realizations](#custom-realizations)
- [Calibration](#calibration)
- [Evaluation](#evaluation)
- [Visualisation](#visualisation)
3. [Requirements](#requirements)
4. [Installation and running](#installation-and-running)
- [Running without install](#running-without-install)
- [For uv installation](#for-uv-installation)
- [For legacy pip installation](#for-legacy-pip-installation)
- [Development installation](#development-installation)
5. [Map interface documentation](#map-interface-documentation)
- [Running the map interface app](#running-the-map-interface-app)
- [Using the map interface](#using-the-map-interface)
6. [CLI documentation](#cli-documentation)
- [Running the CLI](#running-the-cli)
- [Arguments](#arguments)
- [Usage notes](#usage-notes)
- [Examples](#examples)
7. [Realization information](#realization-information)
- [NOAH + CFE](#noah--cfe)
## What does this tool do?
This tool prepares data to run a NextGen-based simulation by creating a run package that can be used with NGIAB.
It uses geometry and model attributes from the [v2.2 hydrofabric](https://lynker-spatial.s3-us-west-2.amazonaws.com/hydrofabric/v2.2/conus/conus_nextgen.gpkg); more information on all data sources is available [here](https://lynker-spatial.s3-us-west-2.amazonaws.com/hydrofabric/v2.2/hfv2.2-data_model.html).
The raw forcing data is [NWM retrospective v3 forcing](https://noaa-nwm-retrospective-3-0-pds.s3.amazonaws.com/index.html#CONUS/zarr/forcing/) data or [AORC 1km gridded data](https://noaa-nws-aorc-v1-1-1km.s3.amazonaws.com/index.html), depending on user input.
1. **Subsets** (delineates) everything upstream of your point of interest (catchment, gage, flowpath, etc.) from the hydrofabric. This subset is output as a geopackage (.gpkg).
2. Calculates **forcings** as a weighted mean of the gridded NWM or AORC forcings. Weights are calculated using [exact extract](https://isciences.github.io/exactextract/) and computed with numpy.
3. Creates **configuration files** for a default NGIAB model run.
- realization.json - ngen model configuration
- troute.yaml - routing configuration.
- **per catchment** model configuration
4. Optionally performs a non-interactive [Docker-based NGIAB](https://github.com/CIROH-UA/NGIAB-CloudInfra) run.
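The weighted-mean idea in step 2 can be illustrated with a tiny sketch (illustrative only, not the tool's actual code; the real pipeline applies this with numpy across full forcing grids, using coverage fractions computed by exactextract):

```python
# Hypothetical coverage fractions as exactextract would report them:
# the portion of each grid cell that falls inside the catchment.
cell_values = [2.0, 4.0, 6.0]   # gridded forcing values (e.g. precipitation)
coverage = [0.25, 1.0, 0.5]     # per-cell coverage fractions

# Area-weighted mean forcing for the catchment.
weighted_sum = sum(v * w for v, w in zip(cell_values, coverage))
catchment_forcing = weighted_sum / sum(coverage)
# (0.5 + 4.0 + 3.0) / 1.75
```

Cells only partially covered by the catchment contribute proportionally less, so the result is an exact area-weighted average rather than a simple mean of intersecting cells.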
## Limitations
This tool cannot do the following:
### Custom realizations
This tool currently only outputs a single, default realization, which is described in "[Realization information](#realization-information)". Support for additional model configurations is planned, but not currently available.
### Calibration
If available, this repository will download [calibrated parameters](https://communityhydrofabric.s3.us-east-1.amazonaws.com/index.html#hydrofabrics/community/gage_parameters/) from the [Community Hydrofabric](https://github.com/CIROH-UA/community_hf_patcher) AWS S3 bucket.
However, many gages and catchments will not have such parameters available. In these cases, Data Preprocess will output realizations with default values.
For automatic calibration, please see [ngiab-cal](https://github.com/CIROH-UA/ngiab-cal), which is under active development.
### Evaluation
For automatic evaluation using [TEEHR](https://github.com/RTIInternational/teehr), please run [NGIAB](https://github.com/CIROH-UA/NGIAB-CloudInfra) interactively using the `guide.sh` script.
### Visualisation
For automatic interactive visualisation, please run [NGIAB](https://github.com/CIROH-UA/NGIAB-CloudInfra) interactively using the `guide.sh` script.
# Requirements
This tool is **officially supported** on **macOS** and **Ubuntu** (tested on 22.04 & 24.04). To use it on Windows, please install [**WSL**](https://learn.microsoft.com/en-us/windows/wsl/install).
It is also **highly recommended** to use [Astral UV](https://docs.astral.sh/uv/) to install and run this tool. Installing the project via `pip` without the use of a virtual environment creates a **severe risk** of dependency conflicts.
# Installation and running
### Running without install
This package supports pipx and uvx, which means you can run the tool without installing it. No virtual environment needed, just UV.
```bash
# Run these from anywhere!
uvx --from ngiab-data-preprocess cli --help # Running the CLI
uvx ngiab-prep --help # Alias for the CLI
uvx --from ngiab-data-preprocess map_app # Running the map interface
```
### For uv installation
<details>
<summary>Click here to expand</summary>
```bash
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
# It can be installed via pip if that fails
# pip install uv
# Create a virtual environment in the current directory
uv venv
# Install the tool in the virtual environment
uv pip install ngiab_data_preprocess
# To run the cli
uv run cli --help
# To run the map
uv run map_app
```
UV automatically detects any virtual environments in the current directory and will use them when you use `uv run`.
</details>
### For legacy pip installation
<details>
<summary>Click here to expand</summary>
```bash
# If you're installing this on jupyterhub / 2i2c you HAVE TO DEACTIVATE THE CONDA ENV
(notebook) jovyan@jupyter-user:~$ conda deactivate
jovyan@jupyter-user:~$
# The interactive map won't work on 2i2c
```
```bash
# This tool is unlikely to work without a virtual environment
python3 -m venv .venv
source .venv/bin/activate
# installing and running the tool
pip install 'ngiab_data_preprocess'
python -m map_app
# CLI instructions at the bottom of the README
```
</details>
### Development installation
<details>
<summary>Click to expand installation steps</summary>
To install and run the tool, follow these steps:
1. Clone the repository:
```bash
git clone https://github.com/CIROH-UA/NGIAB_data_preprocess
cd NGIAB_data_preprocess
```
2. Create a virtual environment:
```bash
uv venv
```
3. Install the tool:
```bash
uv pip install -e .
```
4. Run the map app:
```bash
uv run map_app
```
</details>
# Map interface documentation
## Running the map interface app
Running the `map_app` tool will open the app in a new browser tab.
Install-free: `uvx --from ngiab-data-preprocess map_app`
Installed with uv: `uv run map_app`
## Using the map interface
1. Select the catchment you're interested in on the map.
2. Pick the time period you want to simulate.
3. Click the following buttons in order:
1) Create subset gpkg
2) Create Forcing from Zarrs
3) Create Realization
Once all the steps are finished, you can run NGIAB on the folder shown underneath the subset button.
**Note:** When using the tool, the default output will be stored in the `~/ngiab_preprocess_output/<your-input-feature>/` folder. There is no overwrite protection on the folders.
# CLI documentation
## Running the CLI
Install-free: `uvx ngiab-prep`
Installed with uv: `uv run cli`
## Arguments
- `-h`, `--help`: Show the help message and exit.
- `--output_root`: Path to a new default directory where future outputs will be stored.
- `-i INPUT_FEATURE`, `--input_feature INPUT_FEATURE`: ID of feature to subset. Providing a prefix will automatically convert to catid, e.g., cat-5173 or gage-01646500 or wb-1234.
- `--vpu VPU_ID`: The ID of the VPU to subset, e.g., `01`. Note that 10 = 10L + 10U and 03 = 03N + 03S + 03W. `--help` will display all the options.
- `-l`, `--latlon`: Use latitude and longitude instead of catid. Expects comma-separated values via the CLI, e.g., `python -m ngiab_data_cli -i 54.33,-69.4 -l -s`.
- `-g`, `--gage`: Use gage ID instead of catid. Expects a single gage ID via the CLI, e.g., `python -m ngiab_data_cli -i 01646500 -g -s`.
- `-s`, `--subset`: Subset the hydrofabric to the given feature.
- `--subset_type`: Specify the subset type. `nexus`: get everything flowing into the downstream nexus of the selected catchment. `catchment`: get everything flowing into the selected catchment.
- `-f`, `--forcings`: Generate forcings for the given feature.
- `-r`, `--realization`: Create a realization for the given feature.
- `--lstm`: Configures the data for the [python lstm](https://github.com/ciroh-ua/lstm/).
- `--lstm_rust`: Configures the data for the [rust lstm](https://github.com/ciroh-ua/rust-lstm-1025/).
- `--dhbv2`: Configures the data for the hourly [dHBV2](https://github.com/mhpi/dhbv2).
- `--dhbv2_daily`: Configures the data for the daily [dHBV2](https://github.com/mhpi/dhbv2).
- `--summa`: Configures the data for the SUMMA model.
- `--start_date START_DATE`, `--start START_DATE`: Start date for forcings/realization (format YYYY-MM-DD).
- `--end_date END_DATE`, `--end END_DATE`: End date for forcings/realization (format YYYY-MM-DD).
- `-o OUTPUT_NAME`, `--output_name OUTPUT_NAME`: Name of the output folder.
- `--source` : The datasource you want to use, either `nwm` for retrospective v3 or `aorc`. Default is `nwm`.
- `-D`, `--debug`: Enable debug logging.
- `--nwm_gw`: Use NWM retrospective output groundwater level for CFE initial groundwater state.
- `--run`: Automatically run [NGIAB's docker distribution](https://github.com/CIROH-UA/NGIAB-CloudInfra) against the output folder.
- `--validate`: Run every missing step required to run NGIAB.
- `-a`, `--all`: Run all operations. Equivalent to `-sfr` and `--run`.
## Usage notes
- If your input has a prefix of `gage-`, you do not need to pass `-g`.
- The `-l`, `-g`, `-s`, `-f`, `-r` flags can be combined like normal CLI flags. For example, to subset, generate forcings, and create a realization, you can use `-sfr` or `-s -f -r`.
- When using the `--all` flag, it automatically sets `subset`, `forcings`, `realization`, and `run` to `True`.
- Using the `--run` flag automatically sets the `--validate` flag.
## Examples
1. Prepare everything for an NGIAB run at a given gage:
```bash
uvx ngiab-prep -i gage-10154200 -sfr --start 2022-01-01 --end 2022-02-28
# add --run or replace -sfr with --all to run NGIAB, too
# to name the folder, add -o folder_name
```
2. Subset the hydrofabric using a catchment ID or VPU:
```bash
uvx ngiab-prep -i cat-7080 -s
uvx ngiab-prep --vpu 01 -s
```
3. Generate forcings using a single catchment ID:
```bash
uvx ngiab-prep -i cat-5173 -f --start 2022-01-01 --end 2022-02-28
```
4. Create realization using a latitude/longitude pair and output to a named folder:
```bash
uvx ngiab-prep -i 33.22,-87.54 -l -r --start 2022-01-01 --end 2022-02-28 -o custom_output
```
5. Perform all operations using a latitude/longitude pair:
```bash
uvx ngiab-prep -i 33.22,-87.54 -l -s -f -r --start 2022-01-01 --end 2022-02-28
```
6. Subset the hydrofabric using a gage ID:
```bash
uvx ngiab-prep -i 10154200 -g -s
# or
uvx ngiab-prep -i gage-10154200 -s
```
7. Generate forcings using a single gage ID:
```bash
uvx ngiab-prep -i 01646500 -g -f --start 2022-01-01 --end 2022-02-28
```
# Realization information
This tool currently offers four realizations.
## NOAH + CFE (Default)
[This realization](https://github.com/CIROH-UA/NGIAB_data_preprocess/blob/main/modules/data_sources/config/realization/cfe-nom.json) is intended to be roughly comparable to earlier versions of the National Water Model.
- [NOAH-OWP-Modular](https://github.com/NOAA-OWP/NOAH-OWP-Modular): A refactoring of Noah-MP, a land-surface model. Used to model groundwater properties.
- [Conceptual Functional Equivalent (CFE)](https://github.com/NOAA-OWP/CFE): A simplified conceptual approximation of versions 1.2, 2.0, and 2.1 of the National Water Model. Used to model precipitation and evaporation.
- [SLoTH](https://github.com/NOAA-OWP/SLoTH): A module used to feed through unchanged values. In this default configuration, it simply forces certain soil moisture and ice fraction properties to zero.
## LSTM (Python)
[This realization](https://github.com/CIROH-UA/NGIAB_data_preprocess/blob/main/modules/data_sources/config/realization/lstm-py.json) will run the [python lstm](https://github.com/ciroh-ua/lstm/). It's designed to work with NGIAB using [these example weights](https://github.com/CIROH-UA/lstm/tree/example_weights/trained_neuralhydrology_models) generously contributed by [jmframe/lstm](https://github.com/jmframe/lstm).
## LSTM (Rust)
[This realization](https://github.com/CIROH-UA/NGIAB_data_preprocess/blob/main/modules/data_sources/config/realization/lstm-rs.json) will run the [rust port](https://github.com/CIROH-UA/rust-lstm-1025/tree/main) of the python lstm above. It's an experimental drop-in replacement that should produce identical results with a ~2-5x speedup, depending on your setup.
## SUMMA
[This realization](https://github.com/CIROH-UA/NGIAB_data_preprocess/blob/main/modules/data_sources/config/realization/summa.json) will run the [SUMMA](https://github.com/CIROH-UA/ngen/tree/ngiab/extern/summa) model (version linked is what's currently in nextgen in a box).
| text/markdown | null | Josh Cunningham <jcunningham8@ua.edu> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyogrio>=0.7.2",
"pyproj>=3.6.1",
"pandas<3.0.0",
"Flask==3.0.2",
"geopandas>=1.0.0",
"requests==2.32.4",
"igraph==0.11.4",
"s3fs==2024.3.1",
"xarray==2024.2.0",
"zarr==2.17.1",
"netCDF4<1.7.3,>=1.6.5",
"dask==2024.4.1",
"dask[distributed]==2024.4.1",
"exactextract==0.2.0",
"numpy>=1.26... | [] | [] | [] | [
"Homepage, https://github.com/CIROH-UA/NGIAB_data_preprocess",
"Issues, https://github.com/CIROH-UA/NGIAB_data_preprocess/issues"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T17:20:08.112063 | ngiab_data_preprocess-4.8.2.tar.gz | 463,539 | dc/0f/e3eda0c0c847adac64ab9484d6010786ee018d823824100a8e261226e03e/ngiab_data_preprocess-4.8.2.tar.gz | source | sdist | null | false | abc3f34cab878c2ae9d675531df505a7 | c62d581227699cb54742e930fe31a0fd83ab01247ff254ba888de8fe0859a70f | dc0fe3eda0c0c847adac64ab9484d6010786ee018d823824100a8e261226e03e | null | [
"LICENSE"
] | 266 |
2.4 | gwo | 0.1.9 | a damn fine wrapper for reverse-enginered gwo api | # GWO (Python)
### Reverse-engineered [GWO](https://gwo.pl/) API wrapper for Python
[](https://pypi.org/project/gwo/) [](https://pepy.tech/projects/gwo) [](https://pypi.org/project/gwo/)
---
GWO (Python) is an API wrapper for GWO (the website) that allows you to programmatically extract exercises from the user's accesses, view and modify user info, and answer exercises, all in a simple-to-use package.
### 📱 Ultra compatible
Due to its simple nature, GWO (Python) is compatible with almost every platform*
### 🧶 Niche but useful
* Stuck on an exercise? Code up a tool to show you the answer.
* Training an AI? Extract all exercises and their answers to train a model capable of solving them.
* I honestly have no idea what else you might want to use this for; it's very niche, as I said.
## Example of a script using GWO
Below is a simple script that logs in with user-supplied credentials and prints the user's info:
```python
from GWO import GWOApi, User, LoginException, FetchException
from typing import Optional
import asyncio
async def login(username: str, password: str) -> Optional[User]:
    if not (username and password):
        return None
    try:
        client: GWOApi = await GWOApi.login(username, password)
        return client.user
    except (LoginException, FetchException):
        return None

async def main():
    print("Welcome to GWO!\nLogin with your GWO account")
    while True:
        username: str = input("Username: ")
        password: str = input("Password: ")
        if not (username and password):
            print("Username or password cannot be empty!")
            continue
        user: Optional[User] = await login(username, password)
        if not user:
            print("Invalid username or password!")
            continue
        print(f"Hello, {user.firstName}!")
        break

asyncio.run(main())
```
To run the script, install `GWO`:
```console
pip install GWO
```
> We use analytics, but don't worry: they're anonymous, and you can disable them (refer to the docstrings).
| text/markdown | RebornEnder | contact@dotbend.xyz | null | null | LICENCE | null | [] | [] | https://github.com/Reb0rnEnder/GWOApi | null | null | [] | [] | [] | [
"aiohttp>=3.12.15",
"beautifulsoup4>=4.14.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-18T17:19:39.517209 | gwo-0.1.9.tar.gz | 21,597 | 9b/36/410c4c65218cdce24ae1d34f1a7fae619cc66a4deb8e802a548f76be7126/gwo-0.1.9.tar.gz | source | sdist | null | false | 72fe9fedd78bc0a7f9324437640cdab6 | bf62e8150dae697871b6d6cefa71a5ea1a75151076cbd63b8ca315052823f21b | 9b36410c4c65218cdce24ae1d34f1a7fae619cc66a4deb8e802a548f76be7126 | null | [
"LICENSE"
] | 236 |
2.4 | drppy-client | 4.16.0 | Python Library to interact with the digitalrebar API. | # drppy-client
# A RESTful API-driven Provisioner and DHCP server
This Python package is automatically generated by the [Swagger Codegen](https://github.com/swagger-api/swagger-codegen) project:
- API version: 4.14.0
- Build package: io.swagger.codegen.languages.PythonClientCodegen
- For more information, please visit [https://docs.rackn.io/](https://docs.rackn.io/)
## Requirements
Python 3.4+
## Installation & Usage
### pip install
This should be done in a virtual environment and not directly installed into the system python.
To install directly from public pypi use the following command:
```sh
pip install drppy_client
```
Once installed use the following code to import the package for use:
```python
import drppy_client
```
To use the development version of this library install directly from gitlab using:
```sh
pip install git+https://gitlab.com/rackn/drppy_client.git
```
### Setuptools
Install via [Setuptools](http://pypi.python.org/pypi/setuptools).
```sh
python -m pip install .
```
Then import the package:
```python
import drppy_client
```
## Getting Started
Examples are provided in the examples/ directory and are an excellent place to start exploring how to use the library. The examples are implemented as a basic drpcli example.
Below is a basic example you can run once the library is installed to verify things are working. It assumes at least one machine exists and uses the default credentials and endpoint location. Be sure to adjust the host, username, and password as needed to match your environment.
```python
from pprint import pprint
from drppy_client.configuration import Configuration
from drppy_client.api_client import ApiClient
from drppy_client.api.machines_api import MachinesApi
config = Configuration()
config.host = 'https://localhost:8092/api/v3'
config.username = "rocketskates"
config.password = "r0cketsk8ts"
config.verify_ssl = False
client = ApiClient(config)
machines = MachinesApi(client)
my_machines = machines.list_machines()
print(my_machines[0])
```
## DRPCLI example using Python
The drpcli.py example is implemented using Python Click. Be sure Click is installed if you plan to use the example:
```sh
pip install click
```
Locate the examples directory; the examples are generally stored in `$VENV_ROOT/share/drppy_client/examples`.
From the examples directory, run:
```sh
$ python drpcli.py --help
Usage: drpcli.py [OPTIONS] COMMAND [ARGS]...
Main entry point for the CLI.
Options:
--endpoint TEXT API host URL
--token TEXT API token
--key TEXT user:password Used for basic auth
--help Show this message and exit.
Commands:
contents Contents Specific Commands.
machines Machine Specific Commands.
params Parameter Specific Commands.
profiles Profiles Specific Commands.
subnets Subnets Specific Commands.
$ python drpcli.py machines list
```
If you do not want to run the examples and just want to look at the code to see how to use the SDK, open the file for the feature you want to learn.
Machine examples are in machines.py, profile examples in profiles.py, and so on. The examples are minimal by design, to give you a starting point.
If you have trouble using them, or cannot find an example that demonstrates the feature you need, please send a support request
to support@rackn.com.
## Documentation for API Endpoints
All URIs are relative to *https://localhost/api/v3*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*ActivitiesApi* | [**create_activity**](docs/ActivitiesApi.md#create_activity) | **POST** /activities | Create a Activity
*ActivitiesApi* | [**delete_activity**](docs/ActivitiesApi.md#delete_activity) | **DELETE** /activities/{id} | Delete a Activity
*ActivitiesApi* | [**get_activity**](docs/ActivitiesApi.md#get_activity) | **GET** /activities/{id} | Get a Activity
*ActivitiesApi* | [**get_activity_action**](docs/ActivitiesApi.md#get_activity_action) | **GET** /activities/{id}/actions/{cmd} | List specific action for a activities Activity
*ActivitiesApi* | [**get_activity_actions**](docs/ActivitiesApi.md#get_activity_actions) | **GET** /activities/{id}/actions | List activities actions Activity
*ActivitiesApi* | [**head_activity**](docs/ActivitiesApi.md#head_activity) | **HEAD** /activities/{id} | See if a Activity exists
*ActivitiesApi* | [**list_activities**](docs/ActivitiesApi.md#list_activities) | **GET** /activities | Lists Activities filtered by some parameters.
*ActivitiesApi* | [**list_stats_activities**](docs/ActivitiesApi.md#list_stats_activities) | **HEAD** /activities | Stats of the List Activities filtered by some parameters.
*ActivitiesApi* | [**patch_activity**](docs/ActivitiesApi.md#patch_activity) | **PATCH** /activities/{id} | Patch a Activity
*ActivitiesApi* | [**post_activity_action**](docs/ActivitiesApi.md#post_activity_action) | **POST** /activities/{id}/actions/{cmd} | Call an action on the node.
*ActivitiesApi* | [**put_activity**](docs/ActivitiesApi.md#put_activity) | **PUT** /activities/{id} | Put a Activity
*AlertsApi* | [**create_alert**](docs/AlertsApi.md#create_alert) | **POST** /alerts | Create a Alert
*AlertsApi* | [**delete_alert**](docs/AlertsApi.md#delete_alert) | **DELETE** /alerts/{uuid} | Delete a Alert
*AlertsApi* | [**delete_alert_param**](docs/AlertsApi.md#delete_alert_param) | **DELETE** /alerts/{uuid}/params/{key} | Delete a single alerts parameter
*AlertsApi* | [**get_alert**](docs/AlertsApi.md#get_alert) | **GET** /alerts/{uuid} | Get a Alert
*AlertsApi* | [**get_alert_action**](docs/AlertsApi.md#get_alert_action) | **GET** /alerts/{uuid}/actions/{cmd} | List specific action for a alerts Alert
*AlertsApi* | [**get_alert_actions**](docs/AlertsApi.md#get_alert_actions) | **GET** /alerts/{uuid}/actions | List alerts actions Alert
*AlertsApi* | [**get_alert_param**](docs/AlertsApi.md#get_alert_param) | **GET** /alerts/{uuid}/params/{key} | Get a single alerts parameter
*AlertsApi* | [**get_alert_params**](docs/AlertsApi.md#get_alert_params) | **GET** /alerts/{uuid}/params | List alerts params Alert
*AlertsApi* | [**get_alert_pub_key**](docs/AlertsApi.md#get_alert_pub_key) | **GET** /alerts/{uuid}/pubkey | Get the public key for secure params on a alerts
*AlertsApi* | [**head_alert**](docs/AlertsApi.md#head_alert) | **HEAD** /alerts/{uuid} | See if a Alert exists
*AlertsApi* | [**list_alerts**](docs/AlertsApi.md#list_alerts) | **GET** /alerts | Lists Alerts filtered by some parameters.
*AlertsApi* | [**list_stats_alerts**](docs/AlertsApi.md#list_stats_alerts) | **HEAD** /alerts | Stats of the List Alerts filtered by some parameters.
*AlertsApi* | [**patch_alert**](docs/AlertsApi.md#patch_alert) | **PATCH** /alerts/{uuid} | Patch a Alert
*AlertsApi* | [**patch_alert_params**](docs/AlertsApi.md#patch_alert_params) | **PATCH** /alerts/{uuid}/params | Update all params on the object (merges with existing data)
*AlertsApi* | [**post_alert_ack**](docs/AlertsApi.md#post_alert_ack) | **POST** /alerts/{uuid}/acknowledge | Acknowledge an alert by {uuid}
*AlertsApi* | [**post_alert_action**](docs/AlertsApi.md#post_alert_action) | **POST** /alerts/{uuid}/actions/{cmd} | Call an action on the node.
*AlertsApi* | [**post_alert_param**](docs/AlertsApi.md#post_alert_param) | **POST** /alerts/{uuid}/params/{key} | Set a single parameter on an object
*AlertsApi* | [**post_alert_params**](docs/AlertsApi.md#post_alert_params) | **POST** /alerts/{uuid}/params | Replaces all parameters on the object
*AlertsApi* | [**put_alert**](docs/AlertsApi.md#put_alert) | **PUT** /alerts/{uuid} | Put a Alert
*BatchesApi* | [**create_batch**](docs/BatchesApi.md#create_batch) | **POST** /batches | Create a Batch
*BatchesApi* | [**delete_batch**](docs/BatchesApi.md#delete_batch) | **DELETE** /batches/{uuid} | Delete a Batch
*BatchesApi* | [**get_batch**](docs/BatchesApi.md#get_batch) | **GET** /batches/{uuid} | Get a Batch
*BatchesApi* | [**get_batch_action**](docs/BatchesApi.md#get_batch_action) | **GET** /batches/{uuid}/actions/{cmd} | List specific action for a batches Batch
*BatchesApi* | [**get_batch_actions**](docs/BatchesApi.md#get_batch_actions) | **GET** /batches/{uuid}/actions | List batches actions Batch
*BatchesApi* | [**head_batch**](docs/BatchesApi.md#head_batch) | **HEAD** /batches/{uuid} | See if a Batch exists
*BatchesApi* | [**list_batches**](docs/BatchesApi.md#list_batches) | **GET** /batches | Lists Batches filtered by some parameters.
*BatchesApi* | [**list_stats_batches**](docs/BatchesApi.md#list_stats_batches) | **HEAD** /batches | Stats of the List Batches filtered by some parameters.
*BatchesApi* | [**patch_batch**](docs/BatchesApi.md#patch_batch) | **PATCH** /batches/{uuid} | Patch a Batch
*BatchesApi* | [**post_batch_action**](docs/BatchesApi.md#post_batch_action) | **POST** /batches/{uuid}/actions/{cmd} | Call an action on the node.
*BatchesApi* | [**put_batch**](docs/BatchesApi.md#put_batch) | **PUT** /batches/{uuid} | Put a Batch
*BlueprintsApi* | [**create_blueprint**](docs/BlueprintsApi.md#create_blueprint) | **POST** /blueprints | Create a Blueprint
*BlueprintsApi* | [**delete_blueprint**](docs/BlueprintsApi.md#delete_blueprint) | **DELETE** /blueprints/{name} | Delete a Blueprint
*BlueprintsApi* | [**delete_blueprint_param**](docs/BlueprintsApi.md#delete_blueprint_param) | **DELETE** /blueprints/{name}/params/{key} | Delete a single blueprints parameter
*BlueprintsApi* | [**get_blueprint**](docs/BlueprintsApi.md#get_blueprint) | **GET** /blueprints/{name} | Get a Blueprint
*BlueprintsApi* | [**get_blueprint_action**](docs/BlueprintsApi.md#get_blueprint_action) | **GET** /blueprints/{name}/actions/{cmd} | List specific action for a blueprints Blueprint
*BlueprintsApi* | [**get_blueprint_actions**](docs/BlueprintsApi.md#get_blueprint_actions) | **GET** /blueprints/{name}/actions | List blueprints actions Blueprint
*BlueprintsApi* | [**get_blueprint_param**](docs/BlueprintsApi.md#get_blueprint_param) | **GET** /blueprints/{name}/params/{key} | Get a single blueprints parameter
*BlueprintsApi* | [**get_blueprint_params**](docs/BlueprintsApi.md#get_blueprint_params) | **GET** /blueprints/{name}/params | List blueprints params Blueprint
*BlueprintsApi* | [**get_blueprint_pub_key**](docs/BlueprintsApi.md#get_blueprint_pub_key) | **GET** /blueprints/{name}/pubkey | Get the public key for secure params on a blueprints
*BlueprintsApi* | [**head_blueprint**](docs/BlueprintsApi.md#head_blueprint) | **HEAD** /blueprints/{name} | See if a Blueprint exists
*BlueprintsApi* | [**list_blueprints**](docs/BlueprintsApi.md#list_blueprints) | **GET** /blueprints | Lists Blueprints filtered by some parameters.
*BlueprintsApi* | [**list_stats_blueprints**](docs/BlueprintsApi.md#list_stats_blueprints) | **HEAD** /blueprints | Stats of the List Blueprints filtered by some parameters.
*BlueprintsApi* | [**patch_blueprint**](docs/BlueprintsApi.md#patch_blueprint) | **PATCH** /blueprints/{name} | Patch a Blueprint
*BlueprintsApi* | [**patch_blueprint_params**](docs/BlueprintsApi.md#patch_blueprint_params) | **PATCH** /blueprints/{name}/params | Update all params on the object (merges with existing data)
*BlueprintsApi* | [**post_blueprint_action**](docs/BlueprintsApi.md#post_blueprint_action) | **POST** /blueprints/{name}/actions/{cmd} | Call an action on the node.
*BlueprintsApi* | [**post_blueprint_param**](docs/BlueprintsApi.md#post_blueprint_param) | **POST** /blueprints/{name}/params/{key} | Set a single parameter on an object
*BlueprintsApi* | [**post_blueprint_params**](docs/BlueprintsApi.md#post_blueprint_params) | **POST** /blueprints/{name}/params | Replaces all parameters on the object
*BlueprintsApi* | [**put_blueprint**](docs/BlueprintsApi.md#put_blueprint) | **PUT** /blueprints/{name} | Put a Blueprint
*BootEnvsApi* | [**create_boot_env**](docs/BootEnvsApi.md#create_boot_env) | **POST** /bootenvs | Create a BootEnv
*BootEnvsApi* | [**delete_boot_env**](docs/BootEnvsApi.md#delete_boot_env) | **DELETE** /bootenvs/{name} | Delete a BootEnv
*BootEnvsApi* | [**get_boot_env**](docs/BootEnvsApi.md#get_boot_env) | **GET** /bootenvs/{name} | Get a BootEnv
*BootEnvsApi* | [**get_boot_env_action**](docs/BootEnvsApi.md#get_boot_env_action) | **GET** /bootenvs/{name}/actions/{cmd} | List specific action for a bootenvs BootEnv
*BootEnvsApi* | [**get_boot_env_actions**](docs/BootEnvsApi.md#get_boot_env_actions) | **GET** /bootenvs/{name}/actions | List bootenvs actions BootEnv
*BootEnvsApi* | [**head_boot_env**](docs/BootEnvsApi.md#head_boot_env) | **HEAD** /bootenvs/{name} | See if a BootEnv exists
*BootEnvsApi* | [**list_boot_envs**](docs/BootEnvsApi.md#list_boot_envs) | **GET** /bootenvs | Lists BootEnvs filtered by some parameters.
*BootEnvsApi* | [**list_stats_boot_envs**](docs/BootEnvsApi.md#list_stats_boot_envs) | **HEAD** /bootenvs | Stats of the List BootEnvs filtered by some parameters.
*BootEnvsApi* | [**patch_boot_env**](docs/BootEnvsApi.md#patch_boot_env) | **PATCH** /bootenvs/{name} | Patch a BootEnv
*BootEnvsApi* | [**post_boot_env_action**](docs/BootEnvsApi.md#post_boot_env_action) | **POST** /bootenvs/{name}/actions/{cmd} | Call an action on the node.
*BootEnvsApi* | [**purge_local_boot_env**](docs/BootEnvsApi.md#purge_local_boot_env) | **DELETE** /bootenvs/{name}/purgeLocal | Purge local install files (ISOS and install trees) for a bootenv
*BootEnvsApi* | [**put_boot_env**](docs/BootEnvsApi.md#put_boot_env) | **PUT** /bootenvs/{name} | Put a BootEnv
*CatalogItemsApi* | [**create_catalog_item**](docs/CatalogItemsApi.md#create_catalog_item) | **POST** /catalog_items | Create a CatalogItem
*CatalogItemsApi* | [**delete_catalog_item**](docs/CatalogItemsApi.md#delete_catalog_item) | **DELETE** /catalog_items/{id} | Delete a CatalogItem
*CatalogItemsApi* | [**get_catalog_item**](docs/CatalogItemsApi.md#get_catalog_item) | **GET** /catalog_items/{id} | Get a CatalogItem
*CatalogItemsApi* | [**get_catalog_item_action**](docs/CatalogItemsApi.md#get_catalog_item_action) | **GET** /catalog_items/{id}/actions/{cmd} | List specific action for a catalog_items CatalogItem
*CatalogItemsApi* | [**get_catalog_item_actions**](docs/CatalogItemsApi.md#get_catalog_item_actions) | **GET** /catalog_items/{id}/actions | List catalog_items actions CatalogItem
*CatalogItemsApi* | [**head_catalog_item**](docs/CatalogItemsApi.md#head_catalog_item) | **HEAD** /catalog_items/{id} | See if a CatalogItem exists
*CatalogItemsApi* | [**list_catalog_items**](docs/CatalogItemsApi.md#list_catalog_items) | **GET** /catalog_items | Lists CatalogItems filtered by some parameters.
*CatalogItemsApi* | [**list_stats_catalog_items**](docs/CatalogItemsApi.md#list_stats_catalog_items) | **HEAD** /catalog_items | Stats of the List CatalogItems filtered by some parameters.
*CatalogItemsApi* | [**patch_catalog_item**](docs/CatalogItemsApi.md#patch_catalog_item) | **PATCH** /catalog_items/{id} | Patch a CatalogItem
*CatalogItemsApi* | [**post_catalog_item_action**](docs/CatalogItemsApi.md#post_catalog_item_action) | **POST** /catalog_items/{id}/actions/{cmd} | Call an action on the node.
*CatalogItemsApi* | [**put_catalog_item**](docs/CatalogItemsApi.md#put_catalog_item) | **PUT** /catalog_items/{id} | Put a CatalogItem
*ClustersApi* | [**cleanup_cluster**](docs/ClustersApi.md#cleanup_cluster) | **DELETE** /clusters/{uuid}/cleanup | Cleanup a Cluster
*ClustersApi* | [**create_cluster**](docs/ClustersApi.md#create_cluster) | **POST** /clusters | Create a Cluster
*ClustersApi* | [**delete_cluster**](docs/ClustersApi.md#delete_cluster) | **DELETE** /clusters/{uuid} | Delete a Cluster
*ClustersApi* | [**delete_cluster_group_param**](docs/ClustersApi.md#delete_cluster_group_param) | **DELETE** /clusters/{uuid}/group/params/{key} | Delete a single Cluster group profile parameter
*ClustersApi* | [**delete_cluster_param**](docs/ClustersApi.md#delete_cluster_param) | **DELETE** /clusters/{uuid}/params/{key} | Delete a single clusters parameter
*ClustersApi* | [**get_cluster**](docs/ClustersApi.md#get_cluster) | **GET** /clusters/{uuid} | Get a Cluster
*ClustersApi* | [**get_cluster_action**](docs/ClustersApi.md#get_cluster_action) | **GET** /clusters/{uuid}/actions/{cmd} | List specific action for a clusters Cluster
*ClustersApi* | [**get_cluster_actions**](docs/ClustersApi.md#get_cluster_actions) | **GET** /clusters/{uuid}/actions | List clusters actions Cluster
*ClustersApi* | [**get_cluster_group_param**](docs/ClustersApi.md#get_cluster_group_param) | **GET** /clusters/{uuid}/group/params/{key} | Get a single Cluster group profile parameter
*ClustersApi* | [**get_cluster_group_params**](docs/ClustersApi.md#get_cluster_group_params) | **GET** /clusters/{uuid}/group/params | List Cluster group profile params Cluster
*ClustersApi* | [**get_cluster_group_pub_key**](docs/ClustersApi.md#get_cluster_group_pub_key) | **GET** /clusters/{uuid}/group/pubkey | Get the public key for secure params on a Cluster group profile
*ClustersApi* | [**get_cluster_param**](docs/ClustersApi.md#get_cluster_param) | **GET** /clusters/{uuid}/params/{key} | Get a single clusters parameter
*ClustersApi* | [**get_cluster_params**](docs/ClustersApi.md#get_cluster_params) | **GET** /clusters/{uuid}/params | List clusters params Cluster
*ClustersApi* | [**get_cluster_pub_key**](docs/ClustersApi.md#get_cluster_pub_key) | **GET** /clusters/{uuid}/pubkey | Get the public key for secure params on a clusters
*ClustersApi* | [**get_cluster_render**](docs/ClustersApi.md#get_cluster_render) | **POST** /clusters/{uuid}/render | Render a blob on a machine
*ClustersApi* | [**get_cluster_token**](docs/ClustersApi.md#get_cluster_token) | **GET** /clusters/{uuid}/token | Get a Cluster Token
*ClustersApi* | [**head_cluster**](docs/ClustersApi.md#head_cluster) | **HEAD** /clusters/{uuid} | See if a Cluster exists
*ClustersApi* | [**list_clusters**](docs/ClustersApi.md#list_clusters) | **GET** /clusters | Lists Clusters filtered by some parameters.
*ClustersApi* | [**list_stats_clusters**](docs/ClustersApi.md#list_stats_clusters) | **HEAD** /clusters | Stats of the List Clusters filtered by some parameters.
*ClustersApi* | [**patch_cluster**](docs/ClustersApi.md#patch_cluster) | **PATCH** /clusters/{uuid} | Patch a Cluster
*ClustersApi* | [**patch_cluster_group_params**](docs/ClustersApi.md#patch_cluster_group_params) | **PATCH** /clusters/{uuid}/group/params | Update group profile parameters (merges with existing data)
*ClustersApi* | [**patch_cluster_params**](docs/ClustersApi.md#patch_cluster_params) | **PATCH** /clusters/{uuid}/params | Update all params on the object (merges with existing data)
*ClustersApi* | [**post_cluster_action**](docs/ClustersApi.md#post_cluster_action) | **POST** /clusters/{uuid}/actions/{cmd} | Call an action on the node.
*ClustersApi* | [**post_cluster_group_param**](docs/ClustersApi.md#post_cluster_group_param) | **POST** /clusters/{uuid}/group/params/{key} | Set a single Parameter in the group
*ClustersApi* | [**post_cluster_group_params**](docs/ClustersApi.md#post_cluster_group_params) | **POST** /clusters/{uuid}/group/params | Sets the group parameters (replaces)
*ClustersApi* | [**post_cluster_param**](docs/ClustersApi.md#post_cluster_param) | **POST** /clusters/{uuid}/params/{key} | Set a single parameter on an object
*ClustersApi* | [**post_cluster_params**](docs/ClustersApi.md#post_cluster_params) | **POST** /clusters/{uuid}/params | Replaces all parameters on the object
*ClustersApi* | [**post_cluster_release_to_pool**](docs/ClustersApi.md#post_cluster_release_to_pool) | **POST** /clusters/{uuid}/releaseToPool | Releases a cluster in this pool.
*ClustersApi* | [**put_cluster**](docs/ClustersApi.md#put_cluster) | **PUT** /clusters/{uuid} | Put a Cluster
*ClustersApi* | [**start_cluster**](docs/ClustersApi.md#start_cluster) | **PATCH** /clusters/{uuid}/start | Start a Cluster
*ConnectionsApi* | [**get_connection**](docs/ConnectionsApi.md#get_connection) | **GET** /connections/{id} | Close a websocket Connection
*ConnectionsApi* | [**get_connection_0**](docs/ConnectionsApi.md#get_connection_0) | **DELETE** /connections/{remoteaddr} | Close a websocket Connection
*ConnectionsApi* | [**list_connections**](docs/ConnectionsApi.md#list_connections) | **GET** /clusters/:uuid/connections | Lists Connections filtered by some parameters.
*ConnectionsApi* | [**list_connections_0**](docs/ConnectionsApi.md#list_connections_0) | **GET** /connections | Lists Connections filtered by some parameters
*ConnectionsApi* | [**list_connections_1**](docs/ConnectionsApi.md#list_connections_1) | **GET** /machines/:uuid/connections | Lists Connections filtered by some parameters.
*ConnectionsApi* | [**list_connections_2**](docs/ConnectionsApi.md#list_connections_2) | **GET** /resource_brokers/:uuid/connections | Lists Connections filtered by some parameters.
*ContentsApi* | [**create_content**](docs/ContentsApi.md#create_content) | **POST** /contents | Create content into Digital Rebar Provision
*ContentsApi* | [**delete_content**](docs/ContentsApi.md#delete_content) | **DELETE** /contents/{name} | Delete a content set.
*ContentsApi* | [**get_content**](docs/ContentsApi.md#get_content) | **GET** /contents/{name} | Get a specific content with {name}
*ContentsApi* | [**list_contents**](docs/ContentsApi.md#list_contents) | **GET** /contents | Lists possible contents on the system to serve DHCP
*ContentsApi* | [**upload_content**](docs/ContentsApi.md#upload_content) | **PUT** /contents/{name} | Replace content in Digital Rebar Provision
*ContextsApi* | [**create_context**](docs/ContextsApi.md#create_context) | **POST** /contexts | Create a Context
*ContextsApi* | [**delete_context**](docs/ContextsApi.md#delete_context) | **DELETE** /contexts/{name} | Delete a Context
*ContextsApi* | [**get_context**](docs/ContextsApi.md#get_context) | **GET** /contexts/{name} | Get a Context
*ContextsApi* | [**get_context_action**](docs/ContextsApi.md#get_context_action) | **GET** /contexts/{name}/actions/{cmd} | List specific action for a contexts Context
*ContextsApi* | [**get_context_actions**](docs/ContextsApi.md#get_context_actions) | **GET** /contexts/{name}/actions | List contexts actions Context
*ContextsApi* | [**head_context**](docs/ContextsApi.md#head_context) | **HEAD** /contexts/{name} | See if a Context exists
*ContextsApi* | [**list_contexts**](docs/ContextsApi.md#list_contexts) | **GET** /contexts | Lists Contexts filtered by some parameters.
*ContextsApi* | [**list_stats_contexts**](docs/ContextsApi.md#list_stats_contexts) | **HEAD** /contexts | Stats of the List Contexts filtered by some parameters.
*ContextsApi* | [**patch_context**](docs/ContextsApi.md#patch_context) | **PATCH** /contexts/{name} | Patch a Context
*ContextsApi* | [**post_context_action**](docs/ContextsApi.md#post_context_action) | **POST** /contexts/{name}/actions/{cmd} | Call an action on the node.
*ContextsApi* | [**put_context**](docs/ContextsApi.md#put_context) | **PUT** /contexts/{name} | Put a Context
*EndpointsApi* | [**create_endpoint**](docs/EndpointsApi.md#create_endpoint) | **POST** /endpoints | Create an Endpoint
*EndpointsApi* | [**delete_endpoint**](docs/EndpointsApi.md#delete_endpoint) | **DELETE** /endpoints/{id} | Delete an Endpoint
*EndpointsApi* | [**delete_endpoint_param**](docs/EndpointsApi.md#delete_endpoint_param) | **DELETE** /endpoints/{id}/params/{key} | Delete a single endpoints parameter
*EndpointsApi* | [**get_endpoint**](docs/EndpointsApi.md#get_endpoint) | **GET** /endpoints/{id} | Get an Endpoint
*EndpointsApi* | [**get_endpoint_action**](docs/EndpointsApi.md#get_endpoint_action) | **GET** /endpoints/{id}/actions/{cmd} | List specific action for a endpoint Endpoint
*EndpointsApi* | [**get_endpoint_actions**](docs/EndpointsApi.md#get_endpoint_actions) | **GET** /endpoints/{id}/actions | List endpoint actions Endpoint
*EndpointsApi* | [**get_endpoint_param**](docs/EndpointsApi.md#get_endpoint_param) | **GET** /endpoints/{id}/params/{key} | Get a single endpoints parameter
*EndpointsApi* | [**get_endpoint_params**](docs/EndpointsApi.md#get_endpoint_params) | **GET** /endpoints/{id}/params | List endpoints params Endpoint
*EndpointsApi* | [**get_endpoint_pub_key**](docs/EndpointsApi.md#get_endpoint_pub_key) | **GET** /endpoints/{id}/pubkey | Get the public key for secure params on a endpoints
*EndpointsApi* | [**head_endpoint**](docs/EndpointsApi.md#head_endpoint) | **HEAD** /endpoints/{id} | See if an Endpoint exists
*EndpointsApi* | [**list_endpoints**](docs/EndpointsApi.md#list_endpoints) | **GET** /endpoints | Lists Endpoints filtered by some parameters.
*EndpointsApi* | [**list_stats_endpoints**](docs/EndpointsApi.md#list_stats_endpoints) | **HEAD** /endpoints | Stats of the List Endpoints filtered by some parameters.
*EndpointsApi* | [**patch_endpoint**](docs/EndpointsApi.md#patch_endpoint) | **PATCH** /endpoints/{id} | Patch an Endpoint
*EndpointsApi* | [**patch_endpoint_params**](docs/EndpointsApi.md#patch_endpoint_params) | **PATCH** /endpoints/{id}/params | Update all params on the object (merges with existing data)
*EndpointsApi* | [**post_endpoint_action**](docs/EndpointsApi.md#post_endpoint_action) | **POST** /endpoints/{id}/actions/{cmd} | Call an action on the node.
*EndpointsApi* | [**post_endpoint_param**](docs/EndpointsApi.md#post_endpoint_param) | **POST** /endpoints/{id}/params/{key} | Set a single parameter on an object
*EndpointsApi* | [**post_endpoint_params**](docs/EndpointsApi.md#post_endpoint_params) | **POST** /endpoints/{id}/params | Replaces all parameters on the object
*EndpointsApi* | [**put_endpoint**](docs/EndpointsApi.md#put_endpoint) | **PUT** /endpoints/{id} | Put an Endpoint
*EventsApi* | [**post_event**](docs/EventsApi.md#post_event) | **POST** /events | Create an Event
*FilesApi* | [**delete_file**](docs/FilesApi.md#delete_file) | **DELETE** /files/{path} | Delete a file at a specific {path} in the tree under files.
*FilesApi* | [**get_file**](docs/FilesApi.md#get_file) | **GET** /files/{path} | Get a specific File with {path}
*FilesApi* | [**head_file**](docs/FilesApi.md#head_file) | **HEAD** /files/{path} | See if a file exists and return a checksum in the header
*FilesApi* | [**head_iso**](docs/FilesApi.md#head_iso) | **HEAD** /isos/{path} | See if an iso exists and return a checksum in the header
*FilesApi* | [**list_files**](docs/FilesApi.md#list_files) | **GET** /files | Lists files in files directory or subdirectory per query parameter
*FilesApi* | [**upload_file**](docs/FilesApi.md#upload_file) | **POST** /files/{path} | Upload a file to a specific {path} in the tree under files.
*FiltersApi* | [**create_filter**](docs/FiltersApi.md#create_filter) | **POST** /filters | Create a Filter
*FiltersApi* | [**delete_filter**](docs/FiltersApi.md#delete_filter) | **DELETE** /filters/{id} | Delete a Filter
*FiltersApi* | [**delete_filter_param**](docs/FiltersApi.md#delete_filter_param) | **DELETE** /filters/{id}/params/{key} | Delete a single filters parameter
*FiltersApi* | [**get_filter**](docs/FiltersApi.md#get_filter) | **GET** /filters/{id} | Get a Filter
*FiltersApi* | [**get_filter_action**](docs/FiltersApi.md#get_filter_action) | **GET** /filters/{id}/actions/{cmd} | List specific action for a filters Filter
*FiltersApi* | [**get_filter_actions**](docs/FiltersApi.md#get_filter_actions) | **GET** /filters/{id}/actions | List filters actions Filter
*FiltersApi* | [**get_filter_param**](docs/FiltersApi.md#get_filter_param) | **GET** /filters/{id}/params/{key} | Get a single filters parameter
*FiltersApi* | [**get_filter_params**](docs/FiltersApi.md#get_filter_params) | **GET** /filters/{id}/params | List filters params Filter
*FiltersApi* | [**get_filter_pub_key**](docs/FiltersApi.md#get_filter_pub_key) | **GET** /filters/{id}/pubkey | Get the public key for secure params on a filters
*FiltersApi* | [**head_filter**](docs/FiltersApi.md#head_filter) | **HEAD** /filters/{id} | See if a Filter exists
*FiltersApi* | [**list_filters**](docs/FiltersApi.md#list_filters) | **GET** /filters | Lists Filters filtered by some parameters.
*FiltersApi* | [**list_stats_filters**](docs/FiltersApi.md#list_stats_filters) | **HEAD** /filters | Stats of the List Filters filtered by some parameters.
*FiltersApi* | [**patch_filter**](docs/FiltersApi.md#patch_filter) | **PATCH** /filters/{id} | Patch a Filter
*FiltersApi* | [**patch_filter_params**](docs/FiltersApi.md#patch_filter_params) | **PATCH** /filters/{id}/params | Update all params on the object (merges with existing data)
*FiltersApi* | [**post_filter_action**](docs/FiltersApi.md#post_filter_action) | **POST** /filters/{id}/actions/{cmd} | Call an action on the node.
*FiltersApi* | [**post_filter_param**](docs/FiltersApi.md#post_filter_param) | **POST** /filters/{id}/params/{key} | Set a single parameter on an object
*FiltersApi* | [**post_filter_params**](docs/FiltersApi.md#post_filter_params) | **POST** /filters/{id}/params | Replaces all parameters on the object
*FiltersApi* | [**put_filter**](docs/FiltersApi.md#put_filter) | **PUT** /filters/{id} | Put a Filter
*IdentityProvidersApi* | [**create_identity_provider**](docs/IdentityProvidersApi.md#create_identity_provider) | **POST** /identity_providers | Create an IdentityProvider
*IdentityProvidersApi* | [**delete_identity_provider**](docs/IdentityProvidersApi.md#delete_identity_provider) | **DELETE** /identity_providers/{name} | Delete an IdentityProvider
*IdentityProvidersApi* | [**get_identity_provider**](docs/IdentityProvidersApi.md#get_identity_provider) | **GET** /identity_providers/{name} | Get an IdentityProvider
*IdentityProvidersApi* | [**get_identity_provider_action**](docs/IdentityProvidersApi.md#get_identity_provider_action) | **GET** /identity_providers/{name}/actions/{cmd} | List specific action for a identity_providers IdentityProvider
*IdentityProvidersApi* | [**get_identity_provider_actions**](docs/IdentityProvidersApi.md#get_identity_provider_actions) | **GET** /identity_providers/{name}/actions | List identity_providers actions IdentityProvider
*IdentityProvidersApi* | [**head_identity_provider**](docs/IdentityProvidersApi.md#head_identity_provider) | **HEAD** /identity_providers/{name} | See if an IdentityProvider exists
*IdentityProvidersApi* | [**list_identity_providers**](docs/IdentityProvidersApi.md#list_identity_providers) | **GET** /identity_providers | Lists IdentityProviders filtered by some parameters.
*IdentityProvidersApi* | [**list_stats_identity_providers**](docs/IdentityProvidersApi.md#list_stats_identity_providers) | **HEAD** /identity_providers | Stats of the List IdentityProviders filtered by some parameters.
*IdentityProvidersApi* | [**patch_identity_provider**](docs/IdentityProvidersApi.md#patch_identity_provider) | **PATCH** /identity_providers/{name} | Patch an IdentityProvider
*IdentityProvidersApi* | [**post_identity_provider_action**](docs/IdentityProvidersApi.md#post_identity_provider_action) | **POST** /identity_providers/{name}/actions/{cmd} | Call an action on the node.
*IdentityProvidersApi* | [**put_identity_provider**](docs/IdentityProvidersApi.md#put_identity_provider) | **PUT** /identity_providers/{name} | Put an IdentityProvider
*IndexesApi* | [**get_index**](docs/IndexesApi.md#get_index) | **GET** /indexes/{prefix} | Get static indexes for a specific object type
*IndexesApi* | [**get_single_index**](docs/IndexesApi.md#get_single_index) | **GET** /indexes/{prefix}/{param} | Get information on a specific index for a specific object type.
*IndexesApi* | [**list_indexes**](docs/IndexesApi.md#list_indexes) | **GET** /indexes | List all static indexes for objects
*InfoApi* | [**get_info**](docs/InfoApi.md#get_info) | **GET** /info | Return current system info.
*InterfacesApi* | [**get_interface**](docs/InterfacesApi.md#get_interface) | **GET** /interfaces/{name} | Get a specific interface with {name}
*InterfacesApi* | [**list_interfaces**](docs/InterfacesApi.md#list_interfaces) | **GET** /interfaces | Lists possible interfaces on the system to serve DHCP
*IsosApi* | [**delete_iso**](docs/IsosApi.md#delete_iso) | **DELETE** /isos/{path} | Delete an iso at a specific {path} in the tree under isos.
*IsosApi* | [**get_iso**](docs/IsosApi.md#get_iso) | **GET** /isos/{path} | Get a specific Iso with {path}
*IsosApi* | [**list_isos**](docs/IsosApi.md#list_isos) | **GET** /isos | Lists isos in isos directory
*IsosApi* | [**upload_iso**](docs/IsosApi.md#upload_iso) | **POST** /isos/{path} | Upload an iso to a specific {path} in the tree under isos.
*JobsApi* | [**create_job**](docs/JobsApi.md#create_job) | **POST** /jobs | Create a Job
*JobsApi* | [**delete_job**](docs/JobsApi.md#delete_job) | **DELETE** /jobs/{uuid} | Delete a Job
*JobsApi* | [**get_job**](docs/JobsApi.md#get_job) | **GET** /jobs/{uuid} | Get a Job
*JobsApi* | [**get_job_action**](docs/JobsApi.md#get_job_action) | **GET** /jobs/{uuid}/plugin_actions/{cmd} | List specific action for a job Job
*JobsApi* | [**get_job_actions**](docs/JobsApi.md#get_job_actions) | **GET** /jobs/{uuid}/actions | Get actions for this job
*JobsApi* | [**get_job_log**](docs/JobsApi.md#get_job_log) | **GET** /jobs/{uuid}/log | Get the log for this job
*JobsApi* | [**get_job_log_archive**](docs/JobsApi.md#get_job_log_archive) | **GET** /jobs/{uuid}/archive | Get the log archive entry for this job
*JobsApi* | [**get_job_plugin_actions**](docs/JobsApi.md#get_job_plugin_actions) | **GET** /jobs/{uuid}/plugin_actions | List job plugin_actions Job
*JobsApi* | [**head_job**](docs/JobsApi.md#head_job) | **HEAD** /jobs/{uuid} | See if a Job exists
*JobsApi* | [**head_job_log**](docs/JobsApi.md#head_job_log) | **HEAD** /jobs/{uuid}/log | Get the log for this job
*JobsApi* | [**list_jobs**](docs/JobsApi.md#list_jobs) | **GET** /jobs | Lists Jobs filtered by some parameters.
*JobsApi* | [**list_stats_jobs**](docs/JobsApi.md#list_stats_jobs) | **HEAD** /jobs | Stats of the List Jobs filtered by some parameters.
*JobsApi* | [**patch_job**](docs/JobsApi.md#patch_job) | **PATCH** /jobs/{uuid} | Patch a Job
*JobsApi* | [**post_job_action**](docs/JobsApi.md#post_job_action) | **POST** /jobs/{uuid}/plugin_actions/{cmd} | Call an action on the node.
*JobsApi* | [**put_job**](docs/JobsApi.md#put_job) | **PUT** /jobs/{uuid} | Put a Job
*JobsApi* | [**put_job_log**](docs/JobsApi.md#put_job_log) | **PUT** /jobs/{uuid}/log | Append the string to the end of the job's log.
*LeasesApi* | [**create_lease**](docs/LeasesApi.md#create_lease) | **POST** /leases | Create a Lease
*LeasesApi* | [**delete_lease**](docs/LeasesApi.md#delete_lease) | **DELETE** /leases/{address} | Delete a Lease
*LeasesApi* | [**get_lease**](docs/LeasesApi.md#get_lease) | **GET** /leases/{address} | Get a Lease
*LeasesApi* | [**get_lease_action**](docs/LeasesApi.md#get_lease_action) | **GET** /leases/{address}/actions/{cmd} | List specific action for a leases Lease
*LeasesApi* | [**get_lease_actions**](docs/LeasesApi.md#get_lease_actions) | **GET** /leases/{address}/actions | List leases actions Lease
*LeasesApi* | [**head_lease**](docs/LeasesApi.md#head_lease) | **HEAD** /leases/{address} | See if a Lease exists
*LeasesApi* | [**list_leases**](docs/LeasesApi.md#list_leases) | **GET** /leases | Lists Leases filtered by some parameters.
*LeasesApi* | [**list_stats_leases**](docs/LeasesApi.md#list_stats_leases) | **HEAD** /leases | Stats of the List Leases filtered by some parameters.
*LeasesApi* | [**patch_lease**](docs/LeasesApi.md#patch_lease) | **PATCH** /leases/{address} | Patch a Lease
*LeasesApi* | [**post_lease_action**](docs/LeasesApi.md#post_lease_action) | **POST** /leases/{address}/actions/{cmd} | Call an action on the node.
*LeasesApi* | [**put_lease**](docs/LeasesApi.md#put_lease) | **PUT** /leases/{address} | Put a Lease
*LogsApi* | [**get_logs**](docs/LogsApi.md#get_logs) | **GET** /logs | Return current contents of the log buffer
*MachinesApi* | [**cleanup_machine**](docs/MachinesApi.md#cleanup_machine) | **DELETE** /machines/{uuid}/cleanup | Cleanup a Machine
*MachinesApi* | [**create_machine**](docs/MachinesApi.md#create_machine) | **POST** /machines | Create a Machine
*MachinesApi* | [**delete_machine**](docs/MachinesApi.md#delete_machine) | **DELETE** /machines/{uuid} | Delete a Machine
*MachinesApi* | [**delete_machine_param**](docs/MachinesApi.md#delete_machine_param) | **DELETE** /machines/{uuid}/params/{key} | Delete a single machines parameter
*MachinesApi* | [**get_machine**](docs/MachinesApi.md#get_machine) | **GET** /machines/{uuid} | Get a Machine
*MachinesApi* | [**get_machine_action**](docs/MachinesApi.md#get_machine_action) | **GET** /machines/{uuid}/actions/{cmd} | List specific action for a machines Machine
*MachinesApi* | [**get_machine_actions**](docs/MachinesApi.md#get_machine_actions) | **GET** /machines/{uuid}/actions | List machines actions Machine
*MachinesApi* | [**get_machine_param**](docs/MachinesApi.md#get_machine_param) | **GET** /machines/{uuid}/params/{key} | Get a single machines parameter
*MachinesApi* | [**get_machine_params**](docs/MachinesApi.md#get_machine_params) | **GET** /machines/{uuid}/params | List machines params Machine
*MachinesApi* | [**get_machine_pub_key**](docs/MachinesApi.md#get_machine_pub_key) | **GET** /machines/{uuid}/pubkey | Get the public key for secure params on a machines
*MachinesApi* | [**get_machine_render**](docs/MachinesApi.md#get_machine_render) | **POST** /machines/{uuid}/render | Render a blob on a machine
*MachinesApi* | [**get_machine_token**](docs/MachinesApi.md#get_machine_token) | **GET** /machines/{uuid}/token | Get a Machine Token
*MachinesApi* | [**head_machine**](docs/MachinesApi.md#head_machine) | **HEAD** /machines/{uuid} | See if a Machine exists
*MachinesApi* | [**list_machines**](docs/MachinesApi.md#list_machines) | **GET** /machines | Lists Machines filtered by some parameters.
*MachinesApi* | [**list_stats_machines**](docs/MachinesApi.md#list_stats_machines) | **HEAD** /machines | Stats of the List Machines filtered by some parameters.
*MachinesApi* | [**patch_machine**](docs/MachinesApi.md#patch_machine) | **PATCH** /machines/{uuid} | Patch a Machine
*MachinesApi* | [**patch_machine_params**](docs/MachinesApi.md#patch_machine_params) | **PATCH** /machines/{uuid}/params | Update all params on the object (merges with existing data)
*MachinesApi* | [**pick_machine_work_order**](docs/MachinesApi.md#pick_machine_work_order) | **POST** /machines/{id}/pick/{key} | Pick a workorder for this agent. This can return the current work order.
*MachinesApi* | [**post_machine_action**](docs/MachinesApi.md#post_machine_action) | **POST** /machines/{uuid}/actions/{cmd} | Call an action on the node.
*MachinesApi* | [**post_machine_param**](docs/MachinesApi.md#post_machine_param) | **POST** /machines/{uuid}/params/{key} | Set a single parameter on an object
*MachinesApi* | [**post_machine_params**](docs/MachinesApi.md#post_machine_params) | **POST** /machines/{uuid}/params | Replaces all parameters on the object
*MachinesApi* | [**post_machine_release_to_pool**](docs/MachinesApi.md#post_machine_release_to_pool) | **POST** /machines/{id}/releaseToPool | Releases a machine in this pool.
*MachinesApi* | [**put_machine**](docs/MachinesApi.md#put_machine) | **PUT** /machines/{uuid} | Put a Machine
*MachinesApi* | [**start_machine**](docs/MachinesApi.md#start_machine) | **PATCH** /machines/{uuid}/start | Start a Machine
*MetaApi* | [**get_meta**](docs/MetaApi.md#get_meta) | **GET** /meta/{type}/{id} | Get Metadata for an Object of {type} identified by {id}
*MetaApi* | [**patch_meta**](docs/MetaApi.md#patch_meta) | **PATCH** /meta/{type}/{id} | Patch metadata on an Object of {type} with an ID of {id}
*ObjectsApi* | [**list_objects**](docs/ObjectsApi.md#list_objects) | **GET** /objects | Lists the object types in the system
*ParamsApi* | [**create_param**](docs/ParamsApi.md#create_param) | **POST** /params | Create a Param
*ParamsApi* | [**delete_param**](docs/ParamsApi.md#delete_param) | **DELE | text/markdown | null | RackN Engineering <eng@rackn.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://gitlab.com/rackn/drppy_client | null | >=3.5 | [] | [] | [] | [
"certifi>=14.05.14",
"six>=1.10",
"python-dateutil>=2.5.3",
"urllib3>=1.15.1",
"click>=8.1.7",
"pynacl>=1.5"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/rackn/drppy_client",
"Issues, https://gitlab.com/rackn/drppy_client"
] | twine/6.2.0 CPython/3.11.2 | 2026-02-18T17:19:16.679728 | drppy_client-4.16.0.tar.gz | 699,180 | ee/d4/2f53a4f1380ff9c35ad237ddef1e7f03e3f54a8ae6105070cba1724a6763/drppy_client-4.16.0.tar.gz | source | sdist | null | false | 54fe5fb33d189db92b80ca0c771ba772 | 1a7317b6547c690bbb18ffa7218e5e133d40ff6f27f06b12a3eed9d62ede65b2 | eed42f53a4f1380ff9c35ad237ddef1e7f03e3f54a8ae6105070cba1724a6763 | null | [] | 263 |
2.4 | harness-utils | 1.0.0 | Context window management utilities for LLM-based applications | # harnessutils
Python library for managing LLM context windows in long-running conversations. Enables indefinite conversation length while staying within token limits.
## Installation
```bash
uv add harness-utils
```
## Features
- **Three-tier context management** - Truncation, pruning, and LLM-powered summarization
- **Turn processing** - Stream event handling with hooks and doom loop detection
- **Pluggable storage** - Filesystem and in-memory backends
- **Zero dependencies** - No external runtime requirements
- **Type-safe** - Full Python 3.12+ type hints
## Quick Start
```python
from harnessutils import ConversationManager, Message, TextPart, generate_id
manager = ConversationManager()
conv = manager.create_conversation()
# Add message
msg = Message(id=generate_id("msg"), role="user")
msg.add_part(TextPart(text="Help me debug"))
manager.add_message(conv.id, msg)
# Prune old outputs
manager.prune_before_turn(conv.id)
# Get messages for LLM
model_messages = manager.to_model_format(conv.id)
```
## Context Management
Three tiers handle context overflow:
**1. Truncation** - Limits tool output size (instant, free)
```python
output = manager.truncate_tool_output(large_output, "tool_name")
```
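Under the hood this amounts to bounding the line count of a tool result. The library's actual implementation is internal; the head/tail split below is a hand-rolled sketch for illustration, and the default of 2000 lines is taken from the `max_lines` setting shown in the Configuration section:

```python
# Illustrative sketch only -- harnessutils performs truncation internally;
# the head/tail split and marker format here are assumptions.
def truncate_lines(output: str, max_lines: int = 2000) -> str:
    """Keep the head and tail of an oversized tool output, marking the gap."""
    lines = output.splitlines()
    if len(lines) <= max_lines:
        return output
    keep = max_lines // 2
    omitted = len(lines) - 2 * keep
    return "\n".join(
        lines[:keep] + [f"... [{omitted} lines truncated] ..."] + lines[-keep:]
    )

sample = "\n".join(str(i) for i in range(10))
print(truncate_lines(sample, max_lines=4))
```

Because it is pure string slicing, this tier costs effectively nothing per call, which is why it runs on every tool output.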
**2. Pruning** - Removes old tool outputs (fast, ~50ms)
```python
result = manager.prune_before_turn(conv.id)
# Keeps recent 40K tokens, removes older outputs
```
**3. Summarization** - LLM compression when needed (slow, ~3-5s)
```python
if manager.needs_compaction(conv.id, usage):
    manager.compact(conv.id, llm_client, parent_msg_id)
```
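The three tiers compose cheapest-first: truncation always runs, pruning kicks in as the window fills, and summarization is a last resort. A minimal sketch of that decision order is below; the thresholds are assumptions for illustration, not harnessutils defaults (the real checks live in `needs_compaction` and the pruning config):

```python
# Hedged sketch of the cheapest-first tier selection; the 50% and 90%
# thresholds are illustrative assumptions, not library defaults.
def pick_context_strategy(prompt_tokens: int, context_limit: int = 200_000,
                          compaction_ratio: float = 0.9) -> str:
    """Return which tier to apply next, cheapest first."""
    if prompt_tokens < int(context_limit * 0.5):
        return "truncate-only"   # tier 1 always runs; nothing more needed
    if prompt_tokens < int(context_limit * compaction_ratio):
        return "prune"           # tier 2: drop old tool outputs (~50 ms)
    return "summarize"           # tier 3: LLM compaction (~3-5 s)

print(pick_context_strategy(30_000))   # truncate-only
print(pick_context_strategy(120_000))  # prune
print(pick_context_strategy(195_000))  # summarize
```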
## Turn Processing
Process streaming LLM responses with hooks:
```python
from harnessutils import TurnProcessor, TurnHooks
hooks = TurnHooks(
    on_tool_call=execute_tool,
    on_doom_loop=handle_loop,
)
processor = TurnProcessor(message, hooks)
for event in llm_stream:
    processor.process_stream_event(event)
```
Includes:
- Tool state machine
- Doom loop detection (3 identical calls)
- Snapshot tracking
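The doom-loop check can be pictured as a sliding window over recent tool calls that trips when they all match. The window size of 3 follows the README; the matching rule (tool name plus arguments) is an assumption about how the library compares calls:

```python
# Toy doom-loop detector; the real TurnProcessor check may compare calls
# differently -- this sketch matches on (tool name, sorted args).
from collections import deque

class DoomLoopDetector:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.recent = deque(maxlen=threshold)

    def record(self, tool_name: str, args: dict) -> bool:
        """Record a tool call; True when the last `threshold` calls are identical."""
        self.recent.append((tool_name, tuple(sorted(args.items()))))
        return (len(self.recent) == self.threshold
                and len(set(self.recent)) == 1)

detector = DoomLoopDetector()
for _ in range(3):
    looping = detector.record("read_file", {"path": "main.py"})
print(looping)  # True on the third identical call
```

Any differing call resets the window, so normal repeated-but-varying tool use never trips it.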
## Configuration
```python
from harnessutils import HarnessConfig
config = HarnessConfig()
config.truncation.max_lines = 2000
config.pruning.prune_protect = 40_000 # Keep recent 40K tokens
config.model_limits.default_context_limit = 200_000
```
## Storage
```python
from harnessutils import FilesystemStorage, MemoryStorage
# Filesystem (production)
storage = FilesystemStorage(config.storage)
# In-memory (testing)
storage = MemoryStorage()
# Custom (implement StorageBackend protocol)
# See examples/custom_storage_example.py
storage = YourCustomStorage()
```
## Examples
- `basic_usage.py` - Simple conversation
- `ollama_example.py` - Ollama integration
- `ollama_with_summarization.py` - Full three-tier demo
- `turn_processing_example.py` - Stream processing
- `custom_storage_example.py` - Custom storage adapter (SQLite)
## Development
```bash
uv sync # Install deps
uv run pytest # Run unit tests
uv run mypy src/ # Type check
uv run python -m evals.runner # Run evals (quality, budget, performance)
```
**Evals** test real-world behavior beyond unit tests:
- Information preservation after compaction
- Token budget compliance
- Performance benchmarks (latency, throughput)
See `evals/README.md` for details.
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Jeremy Tregunna <jeremy@tregunna.ca> | null | null | MIT | ai, chatbot, context-management, context-window, conversation, llm, prompt-engineering, token-management | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Sci... | [] | null | null | >=3.12 | [] | [] | [] | [
"tiktoken>=0.12.0",
"tiktoken>=0.12.0; extra == \"tiktoken\""
] | [] | [] | [] | [] | uv/0.7.17 | 2026-02-18T17:19:04.335965 | harness_utils-1.0.0.tar.gz | 443,376 | 54/53/c96aba308693f2a10414be1e9a0deb7883e86f52edb8dd497ed363f9433e/harness_utils-1.0.0.tar.gz | source | sdist | null | false | 2a5e2fc04d45860b35b24237a7290602 | 3b6a9c8ab28e7a21572ae12675779dcdcf81198f7e3030bc2e5850b048492a34 | 5453c96aba308693f2a10414be1e9a0deb7883e86f52edb8dd497ed363f9433e | null | [
"LICENSE"
] | 242 |
2.4 | human-in-the-loop-mcp-server | 1.0.2 | Human-In-the-Loop MCP Server - Enable AI assistants to interact with humans through GUI dialogs | # Human-In-the-Loop MCP Server

[](https://opensource.org/licenses/MIT)
[](https://badge.fury.io/py/hitl-mcp-server)
A powerful **Model Context Protocol (MCP) Server** that enables AI assistants like Claude to interact with humans through intuitive GUI dialogs. This server bridges the gap between automated AI processes and human decision-making by providing real-time tools for user input, choices, confirmations, and feedback.

## 🚀 Features
### 💬 Interactive Dialog Tools
- **Text Input**: Get text, numbers, or other data from users with validation
- **Multiple Choice**: Present options for single or multiple selections
- **Multi-line Input**: Collect longer text content, code, or detailed descriptions
- **Confirmation Dialogs**: Ask for yes/no decisions before proceeding with actions
- **Information Messages**: Display notifications, status updates, and results
- **Health Check**: Monitor server status and GUI availability
### 🎨 Modern Cross-Platform GUI
- **Windows**: Modern Windows 11-style interface with beautiful styling, hover effects, and enhanced visual design
- **macOS**: Native macOS experience with SF Pro Display fonts and proper window management
- **Linux**: Ubuntu-compatible GUI with modern styling and system fonts
### ⚡ Advanced Features
- **Non-blocking Operation**: All dialogs run in separate threads to prevent blocking
- **Timeout Protection**: Configurable timeouts (5 minutes by default) prevent hanging operations
- **Platform Detection**: Automatic optimization for each operating system
- **Modern UI Design**: Beautiful interface with smooth animations and hover effects
- **Error Handling**: Comprehensive error reporting and graceful recovery
- **Keyboard Navigation**: Full keyboard shortcuts support (Enter/Escape)
## 📦 Installation & Setup
### Quick Install with uvx (Recommended)
The easiest way to use this MCP server is with `uvx`:
```bash
# Install and run directly
uvx hitl-mcp-server
# Or use the underscore version
uvx hitl_mcp_server
```
### Manual Installation
1. **Install from PyPI**:
```bash
pip install hitl-mcp-server
```
2. **Run the server**:
```bash
hitl-mcp-server
# or
hitl_mcp_server
```
### Development Installation
1. **Clone the repository**:
```bash
git clone https://github.com/GongRzhe/Human-In-the-Loop-MCP-Server.git
cd Human-In-the-Loop-MCP-Server
```
2. **Install in development mode**:
```bash
pip install -e .
```
## 🔧 Claude Desktop Configuration
To use this server with Claude Desktop, add the following configuration to your `claude_desktop_config.json`:
### Using uvx (Recommended)
```json
{
"mcpServers": {
"human-in-the-loop": {
"command": "uvx",
"args": ["hitl-mcp-server"]
}
}
}
```
### Using pip installation
```json
{
"mcpServers": {
"human-in-the-loop": {
"command": "hitl-mcp-server",
"args": []
}
}
}
```
### Configuration File Locations
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
### Important Note for macOS Users
**Note:** You may need to allow Python to control your computer in **System Preferences > Security & Privacy > Accessibility** for the GUI dialogs to work properly.
After updating the configuration, restart Claude Desktop for the changes to take effect.
## 🛠️ Available Tools
### 1. `get_user_input`
Get single-line text, numbers, or other data from users.
**Parameters:**
- `title` (str): Dialog window title
- `prompt` (str): Question/prompt text
- `default_value` (str): Pre-filled value (optional)
- `input_type` (str): "text", "integer", or "float" (default: "text")
**Example Usage:**
```python
result = await get_user_input(
title="Project Setup",
prompt="Enter your project name:",
default_value="my-project",
input_type="text"
)
```
### 2. `get_user_choice`
Present multiple options for user selection.
**Parameters:**
- `title` (str): Dialog window title
- `prompt` (str): Question/prompt text
- `choices` (List[str]): Available options
- `allow_multiple` (bool): Allow multiple selections (default: false)
**Example Usage:**
```python
result = await get_user_choice(
title="Framework Selection",
prompt="Choose your preferred framework:",
choices=["React", "Vue", "Angular", "Svelte"],
allow_multiple=False
)
```
### 3. `get_multiline_input`
Collect longer text content, code, or detailed descriptions.
**Parameters:**
- `title` (str): Dialog window title
- `prompt` (str): Question/prompt text
- `default_value` (str): Pre-filled text (optional)
**Example Usage:**
```python
result = await get_multiline_input(
title="Code Review",
prompt="Please provide your detailed feedback:",
default_value=""
)
```
### 4. `show_confirmation_dialog`
Ask for yes/no confirmation before proceeding.
**Parameters:**
- `title` (str): Dialog window title
- `message` (str): Confirmation message
**Example Usage:**
```python
result = await show_confirmation_dialog(
title="Delete Confirmation",
message="Are you sure you want to delete these 5 files? This action cannot be undone."
)
```
### 5. `show_info_message`
Display information, notifications, or status updates.
**Parameters:**
- `title` (str): Dialog window title
- `message` (str): Information message
**Example Usage:**
```python
result = await show_info_message(
title="Process Complete",
message="Successfully processed 1,247 records in 2.3 seconds!"
)
```
### 6. `health_check`
Check server status and GUI availability.
**Example Usage:**
```python
status = await health_check()
# Returns detailed platform and functionality information
```
## 📋 Response Format
All tools return structured JSON responses:
```json
{
"success": true,
"user_input": "User's response text",
"cancelled": false,
"platform": "windows",
"input_type": "text"
}
```
**Common Response Fields:**
- `success` (bool): Whether the operation completed successfully
- `cancelled` (bool): Whether the user cancelled the dialog
- `platform` (str): Operating system platform
- `error` (str): Error message if operation failed
**Tool-Specific Fields:**
- **get_user_input**: `user_input`, `input_type`
- **get_user_choice**: `selected_choice`, `selected_choices`, `allow_multiple`
- **get_multiline_input**: `user_input`, `character_count`, `line_count`
- **show_confirmation_dialog**: `confirmed`, `response`
- **show_info_message**: `acknowledged`
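The dispatch logic a caller typically needs can be sketched in plain Python (the field names come from the response format above; `handle_response` itself is a hypothetical client-side helper, not part of the server):

```python
def handle_response(resp: dict) -> str:
    # Check the common fields first, then fall back to the tool's payload.
    if not resp.get("success"):
        return f"error: {resp.get('error', 'unknown error')}"
    if resp.get("cancelled"):
        return "cancelled"
    return str(resp.get("user_input", ""))

print(handle_response({"success": True, "cancelled": False, "user_input": "my-project"}))
# → my-project
```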
## 🧠 Best Practices for AI Integration
### When to Use Human-in-the-Loop Tools
1. **Ambiguous Requirements** - When user instructions are unclear
2. **Decision Points** - When you need user preference between valid alternatives
3. **Creative Input** - For subjective choices like design or content style
4. **Sensitive Operations** - Before executing potentially destructive actions
5. **Missing Information** - When you need specific details not provided
6. **Quality Feedback** - To get user validation on intermediate results
### Example Integration Patterns
#### File Operations
```python
# Get target directory
location = await get_user_input(
title="Backup Location",
prompt="Enter backup directory path:",
default_value="~/backups"
)
# Choose backup type
backup_type = await get_user_choice(
title="Backup Options",
prompt="Select backup type:",
choices=["Full Backup", "Incremental", "Differential"]
)
# Confirm before proceeding
confirmed = await show_confirmation_dialog(
title="Confirm Backup",
message=f"Create {backup_type['selected_choice']} backup to {location['user_input']}?"
)
if confirmed['confirmed']:
# Perform backup
await show_info_message("Success", "Backup completed successfully!")
```
#### Content Creation
```python
# Get content requirements
requirements = await get_multiline_input(
title="Content Requirements",
prompt="Describe your content requirements in detail:"
)
# Choose tone and style
tone = await get_user_choice(
title="Content Style",
prompt="Select desired tone:",
choices=["Professional", "Casual", "Friendly", "Technical"]
)
# Generate and show results
# ... content generation logic ...
await show_info_message("Content Ready", "Your content has been generated successfully!")
```
## 🔍 Troubleshooting
### Common Issues
**GUI Not Appearing**
- Verify you're running in a desktop environment (not headless server)
- Check if tkinter is installed: `python -c "import tkinter"`
- Run the `health_check()` tool to diagnose issues
**Permission Errors (macOS)**
- Grant accessibility permissions in System Preferences > Security & Privacy > Accessibility
- Allow Python to control your computer
- Restart terminal after granting permissions
**Import Errors**
- Ensure package is installed: `pip install hitl-mcp-server`
- Check Python version compatibility (>=3.12 required)
- Verify virtual environment activation if using one
**Claude Desktop Integration Issues**
- Check configuration file syntax and location
- Restart Claude Desktop after configuration changes
- Verify uv is installed (uvx ships with uv): `pip install uv`
- Test server manually: `uvx hitl-mcp-server`
**Dialog Timeout**
- Default timeout is 5 minutes (300 seconds)
- Dialogs will return with cancelled=true if user doesn't respond
- Ensure user is present when dialogs are triggered
### Debug Mode
Enable detailed logging by running the server with the debug environment variable set:
```bash
HITL_DEBUG=1 uvx hitl-mcp-server
```
## 🏗️ Development
### Project Structure
```
Human-In-the-Loop-MCP-Server/
├── human_loop_server.py # Main server implementation
├── pyproject.toml # Package configuration
├── README.md # Documentation
├── LICENSE # MIT License
├── .gitignore # Git ignore rules
└── demo.gif # Demo animation
```
### Contributing
1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make your changes with proper testing
4. Follow code style guidelines (Black, Ruff)
5. Add type hints and docstrings
6. Submit a pull request with detailed description
### Code Quality
- **Formatting**: Black (line length: 88)
- **Linting**: Ruff with comprehensive rule set
- **Type Checking**: MyPy with strict configuration
- **Testing**: Pytest for unit and integration tests
## 🌍 Platform Support
### Windows
- Windows 10/11 with modern UI styling
- Enhanced visual design with hover effects
- Segoe UI and Consolas font integration
- Full keyboard navigation support
### macOS
- Native macOS experience
- SF Pro Display system fonts
- Proper window management and focus
- Accessibility permission handling
### Linux
- Ubuntu/Debian compatible
- Modern styling with system fonts
- Cross-distribution GUI support
- Minimal dependency requirements
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🤝 Acknowledgments
- Built with [FastMCP](https://github.com/jlowin/fastmcp) framework
- Uses [Pydantic](https://pydantic-docs.helpmanual.io/) for data validation
- Cross-platform GUI powered by tkinter
- Inspired by the need for human-AI collaboration
## 🔗 Links
- **PyPI Package**: [https://pypi.org/project/hitl-mcp-server/](https://pypi.org/project/hitl-mcp-server/)
- **Repository**: [https://github.com/GongRzhe/Human-In-the-Loop-MCP-Server](https://github.com/GongRzhe/Human-In-the-Loop-MCP-Server)
- **Issues**: [Report bugs or request features](https://github.com/GongRzhe/Human-In-the-Loop-MCP-Server/issues)
- **MCP Protocol**: [Learn about Model Context Protocol](https://modelcontextprotocol.io/)
## 📊 Usage Statistics
- **Cross-Platform**: Windows, macOS, Linux
- **Python Support**: 3.12+
- **GUI Framework**: tkinter (built-in with Python)
- **Thread Safety**: Full concurrent operation support
- **Response Time**: < 100ms dialog initialization
- **Memory Usage**: < 50MB typical operation
---
**Made with ❤️ for the AI community - Bridging humans and AI through intuitive interaction**
| text/markdown | null | GongRzhe <gongrzhe@gmail.com>, animalnots <animalnots@gmail.com> | null | null | MIT | ai-assistant, cross-platform, dialog, fastmcp, gui, human-in-the-loop, mcp, model-context-protocol, tkinter, user-interaction | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX :: Linux",
... | [] | null | null | >=3.12 | [] | [] | [] | [
"fastmcp>=2.8.1",
"pydantic>=2.0.0",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"pytest-asyncio>=0.20.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pytest-asyncio>=0.20.0; extra == \"test\"",... | [] | [] | [] | [
"Homepage, https://github.com/animalnots/Human-In-the-Loop-MCP-Server",
"Repository, https://github.com/animalnots/Human-In-the-Loop-MCP-Server",
"Issues, https://github.com/animalnots/Human-In-the-Loop-MCP-Server/issues",
"Documentation, https://github.com/animalnots/Human-In-the-Loop-MCP-Server#readme",
"... | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T17:18:59.093205 | human_in_the_loop_mcp_server-1.0.2.tar.gz | 16,359 | 84/7b/77fa15c547974de207b652eab9ff1b06de90aa2e71118334478a996d48ee/human_in_the_loop_mcp_server-1.0.2.tar.gz | source | sdist | null | false | 26fe5c76ceea284a4f1ed993b2308800 | b31f89575f8d8bde87cd498719f8e787563cd6a4fdafd7674cc29f3401f8583c | 847b77fa15c547974de207b652eab9ff1b06de90aa2e71118334478a996d48ee | null | [
"LICENSE"
] | 257 |
2.4 | json-key-parser | 0.1.0 | Extract specific keys from deeply nested JSON structures with wildcards and zero dependencies | [](https://pypi.org/project/json-key-parser/)
[](https://www.python.org/downloads/)
[](/LICENSE)
# json-key-parser
You know the pattern. You get a response back from an API and you need three fields buried
inside it. So you write `response['data'][0]['user']['address']['city']`, wrap it in a
try/except because some records have the key and some don't, repeat for the next field, and
suddenly you have twenty lines of boilerplate for what should be a one-liner.
`json-key-parser` lets you declare the keys you want and hands them back — no traversal
code, no try/excepts, no nested loops.
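For comparison, the manual version of that boilerplate looks something like this (plain Python, with illustrative field names):

```python
# Pull one nested field out of every record, tolerating records
# where some part of the path is missing.
def extract_cities(records):
    cities = []
    for record in records:
        try:
            cities.append(record["data"][0]["user"]["address"]["city"])
        except (KeyError, IndexError, TypeError):
            cities.append(None)
    return cities

records = [
    {"data": [{"user": {"address": {"city": "Portland"}}}]},
    {"data": []},  # path missing in this record
]
print(extract_cities(records))  # → ['Portland', None]
```

Multiply that by every field you need, and the twenty lines of boilerplate add up quickly.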
## Quick Start
```python
from json_parser import JsonParser
data = [{"first_name": "Alice", "last_name": "Smith", "birthday": "1990-04-15",
"address": {"street": "12 Oak Ave", "city": "Portland", "zip": "97201"}},
{"first_name": "Bob", "last_name": "Jones", "birthday": "1985-09-30",
"address": {"street": "88 Pine St", "city": "Seattle", "zip": "98101"}}]
result = JsonParser(data, ["first_name", "last_name", "birthday"]).get_data()
```
**Output:**
```json
[
{
"first_name": "Alice",
"last_name": "Smith",
"birthday": "1990-04-15"
},
{
"first_name": "Bob",
"last_name": "Jones",
"birthday": "1985-09-30"
}
]
```
## Why json-key-parser?
- **Declare keys, skip the traversal** — one call replaces nested loops across every record
- **Wildcard patterns** — `address*` matches `address`, `address1`, `address2`, and any other variation
- **Duplicate values merged automatically** — no deduplication code needed when a key appears at multiple levels
- **Works at any nesting depth** — one level or ten, it finds the keys you asked for
- **Zero dependencies** — pure Python stdlib, nothing to pin or audit
## Wildcard Matching
Using the same data from above, `address*` matches any key that starts with `address`:
```python
result = JsonParser(data, ["address*"]).get_data()
```
**Output:**
```json
[
{
"address": {
"street": "12 Oak Ave",
"city": "Portland",
"zip": "97201"
}
},
{
"address": {
"street": "88 Pine St",
"city": "Seattle",
"zip": "98101"
}
}
]
```
This is especially useful when records are inconsistently structured — one record might have
`address`, another `address1` and `address2`. The pattern catches all of them without
needing to know which variant each record uses.
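The `address*` syntax is a shell-style wildcard (the package's keywords mention `fnmatch`, the stdlib module that implements this kind of matching — a plausible mechanism, shown here purely for illustration):

```python
import fnmatch

keys = ["address", "address1", "address2", "email"]
print(fnmatch.filter(keys, "address*"))
# → ['address', 'address1', 'address2']
```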
## Duplicate Key Merging
When the same key appears at more than one nesting level inside a single record, the values
are automatically combined into a list. No deduplication code required.
```python
contacts = [
{
"first_name": "Alice",
"address1": {"street": "12 Oak Ave", "city": "Portland"},
"address2": {"street": "99 River Rd", "city": "Portland"}
},
{
"first_name": "Bob",
"address": {"street": "88 Pine St", "city": "Seattle"}
}
]
result = JsonParser(contacts, ["first_name", "street"]).get_data()
```
**Output:**
```json
[
{
"first_name": "Alice",
"street": [
"12 Oak Ave",
"99 River Rd"
]
},
{
"first_name": "Bob",
"street": "88 Pine St"
}
]
```
Alice has two addresses, so `street` becomes a list of both values. Bob has one address, so
`street` stays a plain string. The shape matches the data — you don't have to handle it
yourself.
## Installation
```bash
pip install json-key-parser
```
## License
MIT. See the [LICENSE](/LICENSE) file for details.
| text/markdown | Dale Wright | diverdale@gmail.com | null | null | MIT | json, parser, key, extract, search, nested, recursive, wildcard, fnmatch, api, data, query | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ... | [] | null | null | <4.0,>=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/diverdale/json_parser",
"Repository, https://github.com/diverdale/json_parser"
] | poetry/2.3.2 CPython/3.11.11 Darwin/25.2.0 | 2026-02-18T17:18:05.536909 | json_key_parser-0.1.0-py3-none-any.whl | 5,736 | 39/85/7877a44ff74a4f55ee0d86f142e3687a86cb66224f036ccad28e3524daf0/json_key_parser-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | ab03e723749e5a76a2b8bad0d05d66f8 | 64fa0145311c14cd908268db4fb31af42558c9d139db43e81bb0d01529e08b5a | 39857877a44ff74a4f55ee0d86f142e3687a86cb66224f036ccad28e3524daf0 | null | [
"LICENSE"
] | 230 |
2.4 | reno-sd | 0.10.0 | System dynamics modeling with bayesian inference | # Reno System Dynamics (`reno-sd`)
[](https://github.com/psf/black)
[](https://badge.fury.io/py/reno-sd)
[](https://anaconda.org/conda-forge/reno-sd)
[](https://github.com/ORNL/reno/actions/workflows/tests.yml)
[](https://github.com/ORNL/reno/blob/main/LICENSE)
[](https://github.com/ORNL/reno)
Reno is a tool for creating, visualizing, and analyzing system dynamics
models in Python. It additionally has the ability to convert models to PyMC,
allowing Bayesian inference on models with variables that include prior probability
distributions.
Reno models are created by defining the equations for the various stocks, flows,
and variables, and can then be simulated over time, similar to tools like
[Insight Maker](https://insightmaker.com/); examples can be seen below
and in the `notebooks` folder.
Currently, models only support discrete timesteps (technically implementing
difference equations rather than true differential equations).
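Concretely, a difference-equation model advances each stock by its net flow once per timestep; a plain-Python sketch of one such update for the predator-prey example below (illustrative only, not Reno's internal code):

```python
# One discrete Lotka-Volterra step: stock_{t+1} = stock_t + inflows - outflows.
def step(rabbits, foxes, alpha=0.1, beta=0.001, gamma=0.1, delta=0.001):
    rabbit_births = alpha * rabbits
    rabbit_deaths = beta * rabbits * foxes
    fox_births = delta * rabbits * foxes
    fox_deaths = gamma * foxes
    return rabbits + rabbit_births - rabbit_deaths, foxes + fox_births - fox_deaths

r, f = 120.0, 100.0
for _ in range(5):
    r, f = step(r, f)  # with more rabbits than equilibrium, foxes grow
```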
## Installation
Install from PyPI via:
```
pip install reno-sd
```
Install from conda-forge with:
```
conda install reno-sd
```
## Example
A classic system dynamics example is the predator-prey population model,
described by the [Lotka-Volterra equations](https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations).
Implementing these in Reno would look something like:
```python
import reno
predator_prey = reno.Model(name="predator_prey", steps=200, doc="Classic predator-prey interaction model example")
with predator_prey:
# make stocks to monitor the predator/prey populations over time
rabbits = reno.Stock(init=100.0)
foxes = reno.Stock(init=100.0)
# free variables that can quickly be changed to influence equilibrium
rabbit_growth_rate = reno.Variable(.1, doc="Alpha")
rabbit_death_rate = reno.Variable(.001, doc="Beta")
fox_death_rate = reno.Variable(.1, doc="Gamma")
fox_growth_rate = reno.Variable(.001, doc="Delta")
# flows that define how much the stocks change in a timestep
rabbit_births = reno.Flow(rabbit_growth_rate * rabbits)
rabbit_deaths = reno.Flow(rabbit_death_rate * rabbits * foxes, max=rabbits)
fox_deaths = reno.Flow(fox_death_rate * foxes, max=foxes)
fox_births = reno.Flow(fox_growth_rate * rabbits * foxes)
# hook up inflows/outflows for stocks
rabbit_births >> rabbits >> rabbit_deaths
fox_births >> foxes >> fox_deaths
```
The stock and flow diagram for this model (obtainable via `predator_prey.graph()`) looks
like this (green boxes are variables, white boxes are stocks, and the labels between
the arrows are the flows):

Once a model is defined it can be called like a function, optionally configuring
any free variables/initial values by passing them as arguments. You can print the
output of `predator_prey.get_docs()` to see a docstring showing all possible arguments and
what calling it should look like:
```
>>> print(predator_prey.get_docs())
Classic predator-prey interaction model example
Example:
predator_prey(rabbit_growth_rate=0.1, rabbit_death_rate=0.001, fox_death_rate=0.1, fox_growth_rate=0.001, rabbits_0=100.0, foxes_0=100.0)
Args:
rabbit_growth_rate: Alpha
rabbit_death_rate: Beta
fox_death_rate: Gamma
fox_growth_rate: Delta
rabbits_0
foxes_0
```
To run and plot the population stocks:
```python
predator_prey(fox_growth_rate=.002, rabbit_death_rate=.002, rabbits_0=120.0)
reno.plot_refs([(predator_prey.rabbits, predator_prey.foxes)])
```

To use Bayesian inference, we define one or more metrics that can be observed
(i.e., that can have defined likelihoods). For instance, we could determine what
the rabbit population growth rate would need to be for the fox population to
oscillate somewhere between 20 and 120. Transpiling into PyMC and running the
inference process is similar to the normal model call, but with ``.pymc()``,
specifying any free variables (at least one will need to be defined as a prior
probability distribution), observations to target, and any sampling/PyMC parameters:
```python
with predator_prey:
minimum_foxes = reno.Metric(foxes.timeseries.series_min())
maximum_foxes = reno.Metric(foxes.timeseries.series_max())
trace = predator_prey.pymc(
n=1000,
fox_growth_rate=reno.Normal(.001, .0001), # specify some variables as distributions to sample from
rabbit_growth_rate=reno.Normal(.1, .01), # specify some variables as distributions to sample from
observations=[
reno.Observation(minimum_foxes, 5, [20]), # likelihood normally distributed around 20 with SD of 5
reno.Observation(maximum_foxes, 5, [120]), # likelihood normally distributed around 120 with SD of 5
]
)
```
To see the shift in prior versus posterior distributions, we can plot the random
variables and some of the relevant stocks using ``plot_trace_refs``:
```python
reno.plot_trace_refs(
predator_prey,
{"prior": trace.prior, "post": trace.posterior},
ref_list=[
predator_prey.minimum_foxes,
predator_prey.maximum_foxes,
predator_prey.fox_growth_rate,
predator_prey.rabbit_growth_rate,
predator_prey.foxes,
predator_prey.rabbits
],
figsize=(8, 5),
)
```

showing that the `rabbit_growth_rate` needs to be around `0.07` in order for
those observations to be met.
For a more in-depth introduction to reno, see the tub example in the `./notebooks` folder.
## Documentation
For the API reference as well as (eventually) the user guide, see
[https://ornl.github.io/reno/stable](https://ornl.github.io/reno/stable)
## Citation
To cite usage of Reno, please use the following bibtex:
```bibtex
@misc{doecode_166929,
title = {Reno},
author = {Martindale, Nathan and Stomps, Jordan and Phathanapirom, Urairisa B.},
abstractNote = {Reno is a tool for creating, visualizing, and analyzing system dynamics models in Python. It additionally has the ability to convert models to PyMC, allowing Bayesian inference on models with variables that include prior probability distributions.},
doi = {10.11578/dc.20251015.1},
url = {https://doi.org/10.11578/dc.20251015.1},
howpublished = {[Computer Software] \url{https://doi.org/10.11578/dc.20251015.1}},
year = {2025},
month = {oct}
}
```
| text/markdown | null | "Nathan Martindale, Jordan Stomps, Birdy Phathanapirom" <martindalena@ornl.gov> | null | null | null | system dynamics, sdm, simulation, reno, pymc, bayesian | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python... | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"numpy",
"pandas",
"xarray",
"panel",
"pymc",
"matplotlib",
"selenium",
"ipyvuetify",
"graphviz"
] | [] | [] | [] | [
"Repository, https://github.com/ORNL/reno",
"Changelog, https://github.com/ORNL/reno/blob/main/CHANGELOG.md",
"Issues, https://github.com/ORNL/reno/issues",
"Documentation, https://ornl.github.io/reno/stable"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T17:17:37.424191 | reno_sd-0.10.0.tar.gz | 139,877 | b0/c6/9b9dae6c8f294b5372e9f329bf8abb0e9634d6222e7e2303a4d6879a4b17/reno_sd-0.10.0.tar.gz | source | sdist | null | false | 2ce1b1e8320b43c77ae128ca74eb691d | ecff6bae2887ac8eef846a20648d57143cd17d76696186f4452de4f1ad10d293 | b0c69b9dae6c8f294b5372e9f329bf8abb0e9634d6222e7e2303a4d6879a4b17 | BSD-3-Clause | [
"LICENSE"
] | 224 |
2.4 | llama-index-readers-layoutir | 0.1.1 | llama-index readers LayoutIR integration | # LayoutIR Reader
## Overview
LayoutIR Reader uses [LayoutIR](https://pypi.org/project/layoutir/) - a production-grade document ingestion and canonicalization engine with compiler-like architecture. Unlike simple PDF-to-Markdown converters, LayoutIR processes documents through an Intermediate Representation (IR) layer, enabling precise preservation of complex layouts, tables, and multi-column structures.
## Why LayoutIR?
LayoutIR stands out for its:
- **Deterministic Processing**: Hash-based stable IDs ensure reproducible results
- **Layout Preservation**: Maintains complex multi-column layouts and table structures
- **Canonical IR Schema**: Typed intermediate representation for reliable downstream processing
- **Flexible Chunking**: Semantic section-based or fixed-size chunking strategies
- **GPU Acceleration**: Optional GPU support for faster document processing
- **Production-Ready**: Designed for enterprise-grade document pipelines
## Installation
### Basic Installation
```bash
pip install llama-index-readers-layoutir
```
### With GPU Support
For GPU acceleration, first install PyTorch with CUDA support:
```bash
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
pip install llama-index-readers-layoutir
```
## Usage
### Basic Usage
Load a PDF document with default settings:
```python
from llama_index.readers.layoutir import LayoutIRReader
reader = LayoutIRReader()
documents = reader.load_data(file_path="document.pdf")
# Each document preserves block structure and metadata
for doc in documents:
print(f"Block Type: {doc.metadata['block_type']}")
print(f"Page: {doc.metadata['page_number']}")
print(f"Content: {doc.text[:100]}...")
```
### With GPU Acceleration
Enable GPU processing for faster performance:
```python
from llama_index.readers.layoutir import LayoutIRReader
reader = LayoutIRReader(use_gpu=True)
documents = reader.load_data(file_path="large_document.pdf")
```
### Custom Chunking Strategy
Use semantic section-based chunking:
```python
from llama_index.readers.layoutir import LayoutIRReader
reader = LayoutIRReader(
chunk_strategy="semantic",
max_heading_level=2, # Split at h1 and h2 headings
)
documents = reader.load_data(file_path="structured_document.pdf")
```
### Processing Multiple Files
Process a batch of documents:
```python
from llama_index.readers.layoutir import LayoutIRReader
from pathlib import Path
reader = LayoutIRReader(use_gpu=True)
file_paths = ["report_2024.pdf", "technical_spec.pdf", "user_manual.pdf"]
documents = reader.load_data(file_path=file_paths)
print(f"Loaded {len(documents)} document blocks from {len(file_paths)} files")
```
### Integration with VectorStoreIndex
Build a searchable index from LayoutIR-processed documents:
```python
from llama_index.readers.layoutir import LayoutIRReader
from llama_index.core import VectorStoreIndex
# Load documents with preserved layout structure
reader = LayoutIRReader(
use_gpu=True, chunk_strategy="semantic", max_heading_level=2
)
documents = reader.load_data(file_path="company_knowledge_base.pdf")
# Create index
index = VectorStoreIndex.from_documents(documents)
# Query with layout-aware context
query_engine = index.as_query_engine()
response = query_engine.query("What are the key financial metrics in Q4?")
print(response)
```
### With SimpleDirectoryReader
Integrate LayoutIR for PDF processing in directory operations:
```python
from llama_index.core import SimpleDirectoryReader
from llama_index.readers.layoutir import LayoutIRReader
reader = LayoutIRReader(use_gpu=True)
dir_reader = SimpleDirectoryReader(
input_dir="/path/to/documents",
file_extractor={".pdf": reader},
)
documents = dir_reader.load_data()
print(f"Processed {len(documents)} blocks")
```
### Advanced Configuration
Full configuration example:
```python
from llama_index.readers.layoutir import LayoutIRReader
reader = LayoutIRReader(
use_gpu=True, # Enable GPU acceleration
chunk_strategy="semantic", # Use semantic chunking
max_heading_level=3, # Split up to h3 level
model_name="custom_model", # Optional: specify model
api_key="your_api_key", # Optional: for remote processing
)
documents = reader.load_data(
file_path="complex_layout.pdf",
extra_info={"department": "research", "year": 2026},
)
# Access rich metadata
for doc in documents:
print(f"ID: {doc.doc_id}")
print(f"Type: {doc.metadata['block_type']}")
print(f"Page: {doc.metadata['page_number']}")
print(f"Department: {doc.metadata['department']}")
```
## Metadata Structure
Each Document includes the following metadata:
- `file_path`: Source file path
- `file_name`: Source file name
- `block_type`: Type of content block (table, paragraph, heading, etc.)
- `block_index`: Index of the block in the document
- `page_number`: Page number where the block appears
- `source`: Always "layoutir"
- Plus any `extra_info` passed to `load_data()`
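One way to use these fields downstream is filtering and grouping, e.g. keeping only table blocks by page (plain dicts stand in for `Document.metadata` here):

```python
from collections import defaultdict

# Example metadata records, shaped like the fields listed above.
blocks = [
    {"block_type": "heading", "page_number": 1},
    {"block_type": "table", "page_number": 1},
    {"block_type": "paragraph", "page_number": 2},
    {"block_type": "table", "page_number": 3},
]
tables_by_page = defaultdict(list)
for meta in blocks:
    if meta["block_type"] == "table":
        tables_by_page[meta["page_number"]].append(meta)

print(sorted(tables_by_page))  # pages containing tables → [1, 3]
```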
## Requirements
- Python >= 3.12
- llama-index-core >= 0.13.0
- layoutir >= 1.0.3
- Optional: PyTorch with CUDA for GPU acceleration
## License
MIT
## Forcing CPU Mode
If you encounter CUDA/GPU issues (e.g. cuBLAS version mismatches or missing CUDA drivers), set `CUDA_VISIBLE_DEVICES=""` before running to force CPU-only processing:
```bash
CUDA_VISIBLE_DEVICES="" python your_script.py
```
| text/markdown | null | Rahul Patnaik <rpatnaik2005@gmail.com> | null | null | null | IR, PDF, document processing, layout analysis, layoutir | [] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"layoutir>=1.0.3",
"llama-index-core<0.15,>=0.13.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:17:28.493380 | llama_index_readers_layoutir-0.1.1.tar.gz | 6,649 | bf/e5/3e016abb81eea8a93112faf33ce027958c5b3f61dac8f78a7504a6be08be/llama_index_readers_layoutir-0.1.1.tar.gz | source | sdist | null | false | 06cd397f76667fefdd91a756d1ab565b | 7741f0845860c79aaf94391f7db98e1680f2a0d5938d0d90e37decff5c92a206 | bfe53e016abb81eea8a93112faf33ce027958c5b3f61dac8f78a7504a6be08be | MIT | [
"LICENSE"
] | 243 |
2.4 | igmapper | 1.0.6 | Unofficial Instagram API to request available data | <div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="assets/igmapper.svg">
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/lucasoal/igmapper/refs/heads/main/assets/igmapper.svg">
<img src="https://raw.githubusercontent.com/lucasoal/igmapper/refs/heads/main/assets/igmapper.svg" width="100%">
</picture>
<br><br><br>
<hr>
<h1>Igmapper: an Instagram Unofficial API</h1>
<img src="https://img.shields.io/badge/Author-lucasoal-blue?logo=github&logoColor=white"> <img src="https://img.shields.io/badge/License-MIT-750014.svg"> <!-- <img src="https://img.shields.io/badge/Status-Beta-DF1F72"> -->
<br>
<img src="https://img.shields.io/pypi/v/igmapper.svg?label=Version&color=white"> <img src="https://img.shields.io/pypi/pyversions/igmapper?logo=python&logoColor=white&label=Python"> <img src="https://img.shields.io/badge/Code Style-Black Formatter-111.svg">
<br>
<img src="https://img.shields.io/pypi/dm/igmapper.svg?label=PyPI downloads&color=white">
</div>
## What is it?
**Igmapper** is a high-performance Python library designed for Instagram data extraction,
providing structured access to profiles, feeds, and comments. It offers a flexible
architecture that allows data engineers to toggle between Requests and native CURL
transport, ensuring resilience against environment constraints.
<h2>Table of Contents</h2>
- [What is it?](#what-is-it)
- [Main Features](#main-features)
- [Where to get it / Install](#where-to-get-it--install)
- [Documentation](#documentation)
- [License](#license)
- [Dependencies](#dependencies)
## Main Features
Here are just a few of the things that igmapper does well:
- [**`InstaClient`**](https://github.com/lucasoal/igmapper/blob/main/doc/DOCUMENTATION.md#instaclient): Initializes the session and handles transport selection (Requests or CURL)
- [**`get_profile_info()`**](https://github.com/lucasoal/igmapper/blob/main/doc/DOCUMENTATION.md#get_profile_info): Scrapes profile metadata and returns a structured ProfileData object.
- [**`get_feed()`**](https://github.com/lucasoal/igmapper/blob/main/doc/DOCUMENTATION.md#get_feed): Retrieves user timeline posts with built-in pagination support.
- [**`get_comments()`**](https://github.com/lucasoal/igmapper/blob/main/doc/DOCUMENTATION.md#get_comments): Fetches media comments and automates cursor-based pagination.
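The cursor-based pagination that `get_feed()` and `get_comments()` automate follows a common pattern: request a page, collect its items, and repeat with the returned cursor until the API stops handing one back. A generic, library-independent sketch (the `fetch_page` callable below is hypothetical, not part of igmapper's API):

```python
from typing import Callable, Iterator, Optional

def paginate(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Generic cursor-based pagination: keep fetching pages until the
    response no longer carries a next cursor."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["items"]
        cursor = page.get("next_cursor")
        if not cursor:
            break

# Fake two-page API response to demonstrate the loop.
pages = {
    None: {"items": [1, 2], "next_cursor": "c1"},
    "c1": {"items": [3], "next_cursor": None},
}
print(list(paginate(lambda c: pages[c])))  # [1, 2, 3]
```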
## Where to get it / Install
The source code is currently hosted on GitHub at: https://github.com/lucasoal/igmapper
> [!WARNING]
> It's essential to use [**Python 3.10**](https://www.python.org/downloads/release/python-310/) or newer
- [PyPI](https://pypi.org/project/igmapper/)
```sh
# PyPI
pip install igmapper
```
- GitHub
```sh
# or GitHub
pip install git+https://github.com/lucasoal/igmapper.git
```
## Documentation
- [Documentation](https://github.com/lucasoal/igmapper/blob/main/doc/DOCUMENTATION.md)
## License
- [MIT](https://github.com/lucasoal/igmapper/blob/main/LICENSE)
## Dependencies
- [Requests](https://pypi.org/project/requests/) | [pydantic](https://pypi.org/project/pydantic/)
See the [full installation instructions](https://github.com/lucasoal/igmapper/blob/main/INSTALLATION.md) for minimum supported versions of required, recommended and optional dependencies.
<hr>
[⇧ Go to Top](#table-of-contents)
| text/markdown | lucasoal | null | null | null | MIT | igmapper, instagram, unofficial-api | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests",
"pydantic"
] | [] | [] | [] | [
"Homepage, https://pypi.org/project/igmapper",
"Documentation, https://github.com/lucasoal/igmapper/blob/main/doc/DOCUMENTATION.md",
"Repository, https://github.com/lucasoal/igmapper"
] | twine/6.1.0 CPython/3.10.19 | 2026-02-18T17:17:25.808243 | igmapper-1.0.6.tar.gz | 8,097 | 24/49/c5fb7f077dcd2a856e8f1c6c98609b645bfa2dbe40845c5274fb56b9fb13/igmapper-1.0.6.tar.gz | source | sdist | null | false | 8d9a13dbb585b6888495ffcd79bd9790 | 35099ed9f1c07f202402897ca646d4f9e42b9d817869209fb58113e34f9fbb2a | 2449c5fb7f077dcd2a856e8f1c6c98609b645bfa2dbe40845c5274fb56b9fb13 | null | [
"LICENSE",
"AUTHORS.md"
] | 228 |
2.4 | t2d2-sdk | 2.5.0 | T2D2 SDK | # T2D2 SDK
A Python SDK for seamless integration with the T2D2 platform.
Easily manage projects, assets, and AI-powered inspections for structural health monitoring.
## Description
**T2D2 SDK** is a Python wrapper for the T2D2 API, enabling seamless integration with T2D2 projects and related assets for structural inspection data management.
- Manage projects, images, annotations, drawings, videos, reports, and more
- Upload, download, and organize assets
- Run AI inference and summarize inspection data
- Integrate with T2D2's web platform
## Documentation
Full documentation is available at: [https://t2d2-ai.github.io/t2d2-sdk/](https://t2d2-ai.github.io/t2d2-sdk/)
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)
- [Support](#support)
## Features
- **Authentication**: API key or email/password
- **Project Management**: Set, get, and summarize projects
- **Asset Management**: Upload/download images, drawings, videos, 3D models, and reports
- **Annotations**: Add, retrieve, and manage annotation classes and annotations
- **Regions & Tags**: Organize assets by regions and tags
- **AI Integration**: Run AI inference on images using project models
- **Summarization**: Summarize images and annotation conditions
- **Notifications**: Send notifications to users or Slack
## Installation
Install the latest version from PyPI:
```bash
pip install --upgrade t2d2-sdk
```
## Quickstart
1. **Sign up for a T2D2 account:** [Register here](https://app.t2d2.ai/auth/register)
2. **Get your API key** from the T2D2 web app
3. **Initialize the client:**
```python
from t2d2_sdk import T2D2
credentials = {'api_key': '<YOUR_API_KEY>'}
t2d2 = T2D2(credentials)
```
## Usage
### Set Project
```python
t2d2.set_project('<PROJECT_ID>')
project_info = t2d2.get_project_info()
print(project_info)
```
### Upload Images
```python
image_paths = ['./images/img1.jpg', './images/img2.jpg']
response = t2d2.upload_images(image_paths)
print(response)
```
### Get Images
```python
images = t2d2.get_images()
for img in images:
print(img['filename'], img['id'])
```
### Add Annotation Class
```python
result = t2d2.add_annotation_class('Crack', color='#FF0000', materials=['Concrete'])
print(result)
```
### Add Annotations to an Image
```python
annotations = [
{
'annotation_class_id': 'class_id',
'coordinates': [[100, 100], [200, 100], [200, 200], [100, 200]],
'attributes': {'severity': 'high'}
}
]
result = t2d2.add_annotations('image_id', annotations)
print(result)
```
### Run AI Inference
```python
result = t2d2.run_ai_inferencer(
image_ids=['image_id1', 'image_id2'],
model_id='model_id',
confidence_threshold=0.6
)
print(result)
```
For more advanced usage, see the [full documentation](https://t2d2-ai.github.io/t2d2-sdk/).
## Contributing
Contributions are welcome! Please contact <bhiriyur@t2d2.ai> for more information.
## License
See the LICENSE file for details.
## Support
- Documentation: [https://t2d2-ai.github.io/t2d2-sdk/](https://t2d2-ai.github.io/t2d2-sdk/)
- Email: <bhiriyur@t2d2.ai>
- T2D2 Web: [https://t2d2.ai](https://t2d2.ai)
| text/markdown | null | Badri Hiriyur <badri@t2d2.ai> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"boto3",
"requests",
"sentry_sdk",
"python-docx",
"Pillow"
] | [] | [] | [] | [
"Homepage, https://t2d2.ai",
"Issues, https://github.com/t2d2-ai/t2d2-sdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:16:19.932813 | t2d2_sdk-2.5.0.tar.gz | 113,003 | 98/9e/315ef521c11725319150080d8e8a0facf73c06474c7ffb3484c67f91d7a5/t2d2_sdk-2.5.0.tar.gz | source | sdist | null | false | cdc1a999fe1032e4b14609a5ecf2e548 | 73052039a0447d1e97c8515abe587ac39f194ec3f59382c72b26b76aae7a729d | 989e315ef521c11725319150080d8e8a0facf73c06474c7ffb3484c67f91d7a5 | null | [
"LICENSE"
] | 240 |
2.4 | strawberry-graphql-django | 0.75.2 | Strawberry GraphQL Django extension | # Strawberry GraphQL Django Integration
[](https://github.com/strawberry-graphql/strawberry-django/actions/workflows/tests.yml)
[](https://codecov.io/gh/strawberry-graphql/strawberry-django)
[](https://pypi.org/project/strawberry-graphql-django/)
[](https://pepy.tech/project/strawberry-graphql-django)

[**Documentation**](https://strawberry.rocks/docs/django) | [**Discord**](https://strawberry.rocks/discord)
Strawberry GraphQL Django integration provides powerful tools to build GraphQL APIs with Django. Automatically generate GraphQL types, queries, mutations, and resolvers from your Django models with full type safety.
## Installation
```shell
pip install strawberry-graphql-django
```
## Features
- 🍓 **Automatic Type Generation** - Generate GraphQL types from Django models with full type safety
- 🔍 **Advanced Filtering** - Powerful filtering system with lookups (contains, exact, in, etc.)
- 📄 **Pagination** - Built-in offset and cursor-based (Relay) pagination
- 📊 **Ordering** - Sort results by any field with automatic ordering support
- 🔐 **Authentication & Permissions** - Django auth integration with flexible permission system
- ✨ **CRUD Mutations** - Auto-generated create, update, and delete mutations with validation
- ⚡ **Query Optimizer** - Automatic `select_related` and `prefetch_related` to prevent N+1 queries
- 🐍 **Django Integration** - Works with Django views (sync and async), forms, and validation
- 🐛 **Debug Toolbar** - GraphiQL integration with Django Debug Toolbar for query inspection
## Quick Start
```python
# models.py
from django.db import models
class Fruit(models.Model):
name = models.CharField(max_length=20)
color = models.ForeignKey("Color", on_delete=models.CASCADE, related_name="fruits")
class Color(models.Model):
name = models.CharField(max_length=20)
```
```python
# types.py
import strawberry_django
from strawberry import auto
from . import models
@strawberry_django.type(models.Fruit)
class Fruit:
id: auto
name: auto
color: "Color"
@strawberry_django.type(models.Color)
class Color:
id: auto
name: auto
fruits: list[Fruit]
```
```python
# schema.py
import strawberry
import strawberry_django
from strawberry_django.optimizer import DjangoOptimizerExtension
from .types import Fruit
@strawberry.type
class Query:
fruits: list[Fruit] = strawberry_django.field()
schema = strawberry.Schema(
query=Query,
extensions=[DjangoOptimizerExtension],
)
```
```python
# urls.py
from django.urls import path
from strawberry.django.views import AsyncGraphQLView
from .schema import schema
urlpatterns = [
path("graphql/", AsyncGraphQLView.as_view(schema=schema)),
]
```
That's it! You now have a fully functional GraphQL API with:
- Automatic type inference from Django models
- Optimized database queries (no N+1 problems)
- Interactive GraphiQL interface at `/graphql/`
Visit http://localhost:8000/graphql/ and try this query:
```graphql
query {
fruits {
id
name
color {
name
}
}
}
```
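The filtering feature listed above surfaces on the query side as a `filters` argument with per-field lookups. A hedged example, assuming a filter type with lookups has been defined for `fruits` (it is not part of the quick-start schema above; see the filtering docs):

```graphql
query {
  fruits(filters: { name: { iContains: "berry" } }) {
    id
    name
  }
}
```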
## Next Steps
Check out our comprehensive documentation:
- 📚 [**Getting Started Guide**](https://strawberry.rocks/docs/django) - Complete tutorial with examples
- 🎓 [**Example App**](./examples/ecommerce_app/) - Full-featured e-commerce application
- 📖 [**Documentation**](https://strawberry.rocks/docs/django) - In-depth guides and API reference
- 💬 [**Discord Community**](https://strawberry.rocks/discord) - Get help and share your projects
## Contributing
We welcome contributions! Whether you're fixing bugs, adding features, or improving documentation, your help is appreciated 😊
**Quick Start:**
```shell
git clone https://github.com/strawberry-graphql/strawberry-django
cd strawberry-django
pre-commit install
```
Then run tests with `make test` or `make test-dist` for parallel execution.
## Community
- 💬 [**Discord**](https://strawberry.rocks/discord) - Join our community for help and discussions
- 🐛 [**GitHub Issues**](https://github.com/strawberry-graphql/strawberry-django/issues) - Report bugs or request features
- 💡 [**GitHub Discussions**](https://github.com/strawberry-graphql/strawberry-django/discussions) - Ask questions and share ideas
## License
This project is licensed under the [MIT License](LICENSE).
| text/markdown | null | Lauri Hintsala <lauri.hintsala@verkkopaja.fi>, Thiago Bellini Ribeiro <thiago@bellini.dev> | null | Thiago Bellini Ribeiro <thiago@bellini.dev> | MIT | api, django, graphql, strawberry-graphql | [
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating Sys... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"asgiref>=3.8",
"django>=4.2",
"strawberry-graphql>=0.288.0",
"django-debug-toolbar>=6.0.0; extra == \"debug-toolbar\"",
"django-choices-field>=2.2.2; extra == \"enum\""
] | [] | [] | [] | [
"homepage, https://strawberry.rocks/docs/django",
"repository, https://github.com/strawberry-graphql/strawberry-django",
"documentation, https://strawberry.rocks/docs/django"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:14:58.329952 | strawberry_graphql_django-0.75.2-py3-none-any.whl | 109,577 | 65/c1/7a83f0624aa52a5ab90ce4a2c1c27b21fe64141397011fed5bc2e0943dba/strawberry_graphql_django-0.75.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 072e7523e9bab26b98d13f1eaa9c1598 | 9b7c898409bb17799f9830e39e2f6028dba847b3b1b5bc98ab894523557965bb | 65c17a83f0624aa52a5ab90ce4a2c1c27b21fe64141397011fed5bc2e0943dba | null | [
"LICENSE"
] | 3,259 |
2.1 | odoo-addon-base-external-dbsource | 18.0.1.0.1 | External Database Sources | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=========================
External Database Sources
=========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:9af3d6fd98e9d6888913c632025e8e5979c715c4401f3f882cbe965bf21ad95a
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
:target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
:alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fserver--backend-lightgray.png?logo=github
:target: https://github.com/OCA/server-backend/tree/18.0/base_external_dbsource
:alt: OCA/server-backend
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/server-backend-18-0/server-backend-18-0-base_external_dbsource
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/server-backend&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows you to define connections to foreign databases using
ODBC, Firebird, Oracle Client or SQLAlchemy.
**Table of contents**
.. contents::
:local:
Installation
============
No installation required.
Configuration
=============
To configure this module, you need to:
1. Go to Settings > Technical > Database Structure > Data Sources and
   define your database sources there.
Usage
=====
- Go to Settings > Technical > Database Structure > Database Sources
- Click on Create to enter the following information:
- Data source name
- Password
- Connector: Choose the database to which you want to connect
- Connection string: Specify how to connect to the database
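For instance, a PostgreSQL source accessed through SQLAlchemy could use a
connection string along these lines (illustrative only: the exact syntax
depends on the chosen connector, and the ``%s`` placeholder is assumed to be
substituted with the stored password)::

    postgresql://odoo_user:%s@dbserver.example.com:5432/external_db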
Known issues / Roadmap
======================
- Find a way to remove or default the CA certs dir
- Add concept of multiple connection strings for one source (multiple
nodes)
- Add a ConnectionEnvironment that allows for the reuse of connections
- Message box should be displayed instead of error in
``connection_test``
- Remove old api compatibility layers (v11)
- Instead of returning list of results, we should return iterators. This
will support larger datasets in a more efficient manner.
- Implement better CRUD handling
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/server-backend/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/server-backend/issues/new?body=module:%20base_external_dbsource%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Daniel Reis
* LasLabs
Contributors
------------
- Daniel Reis <dreis.pt@hotmail.com>
- Maxime Chambreuil <maxime.chambreuil@savoirfairelinux.com>
- Gervais Naoussi <gervaisnaoussi@gmail.com>
- Dave Lasley <dave@laslabs.com>
- `Tecnativa <https://www.tecnativa.com>`__:
- Sergio Teruel
- Jairo Llopis
- Andrea Cattalani (`Moduon <https://www.moduon.team/>`__)
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/server-backend <https://github.com/OCA/server-backend/tree/18.0/base_external_dbsource>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Daniel Reis, LasLabs, Odoo Community Association (OCA) | support@odoo-community.org | null | null | LGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)"
] | [] | https://github.com/OCA/server-backend | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T17:14:55.167876 | odoo_addon_base_external_dbsource-18.0.1.0.1-py3-none-any.whl | 149,255 | 0c/86/89789da605ca165533442f75ebf548f20020e912016acebb4512a839ad65/odoo_addon_base_external_dbsource-18.0.1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 3b11104e496b5fd7b3ef3a29efe96f5e | 348715668913ccddf40c8452304d28650ca08c912e7092501250551efd052bd5 | 0c8689789da605ca165533442f75ebf548f20020e912016acebb4512a839ad65 | null | [] | 164 |
2.4 | libelifoot | 0.1.1 | Library to handle Elifoot equipas | ## libelifoot
[](https://github.com/andrelcmoreira/libelifoot/actions/workflows/ci.yaml)
[](https://www.gnu.org/licenses/lgpl-3.0)
### Overview
Library to handle Elifoot 98 equipas. The main functionalities of the library are:
- View data from an equipa file;
- Generate patch files with upstream data from an equipa file;
- Generate patches in batch from a directory of equipa files.
### Usage
View the content of an equipa file:
```python
from sys import argv
from libelifoot import view_equipa
def main(equipa: str) -> None:
print(view_equipa(equipa))
if __name__ == "__main__":
main(argv[1])
```
Generate a patch file with upstream data from an equipa file:
```python
from sys import argv
from typing import Optional
from libelifoot import (
update_equipa,
Equipa,
EquipaFileHandler,
UpdateEquipaListener
)
class EventHandler(UpdateEquipaListener):
def on_update_equipa(
self,
equipa_name: str,
equipa_data: Optional[Equipa]
) -> None:
print(f'{equipa_name}\n{equipa_data}')
if equipa_data:
EquipaFileHandler.write(f'{equipa_name}.patch', equipa_data)
def on_update_equipa_error(self, error: str) -> None:
print(f'ERROR: {error}')
def main(equipa: str, provider: str, season: int) -> None:
ev = EventHandler()
update_equipa(equipa, provider, season, ev)
if __name__ == "__main__":
main(argv[1], argv[2], int(argv[3]))
```
Generate patches in batch based on a directory of equipa files:
```python
from sys import argv
from typing import Optional
from libelifoot import (
bulk_update,
Equipa,
EquipaFileHandler,
UpdateEquipaListener,
)
class EventHandler(UpdateEquipaListener):
def on_update_equipa(
self,
equipa_name: str,
equipa_data: Optional[Equipa]
) -> None:
print(f'{equipa_name}\n{equipa_data}')
if equipa_data:
EquipaFileHandler.write(f'{equipa_name}.patch', equipa_data)
def on_update_equipa_error(self, error: str) -> None:
print(f'ERROR: {error}')
def main(equipa_dir: str, provider: str, season: int) -> None:
ev = EventHandler()
bulk_update(equipa_dir, provider, season, ev)
if __name__ == "__main__":
main(argv[1], argv[2], int(argv[3]))
```
### Supported providers
To generate patches, the library fetches data from public football data providers. Currently, the library supports the following providers:
- ESPN;
- Transfermarkt.
### Documentation
See [docs](https://github.com/andrelcmoreira/libelifoot/tree/develop/docs).
| text/markdown | null | "André L. C. Moreira" <andrelcmoreira@proton.me> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4",
"requests",
"unidecode",
"pylint; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/andrelcmoreira/libelifoot.git"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T17:14:03.149833 | libelifoot-0.1.1.tar.gz | 16,538 | b8/29/e3f5457f391eec84a961a0071955fa1c3619a48c88a1d7161bc70ca70368/libelifoot-0.1.1.tar.gz | source | sdist | null | false | 8681d7ef2fefb5cc1cdc3bed6cfef94b | 87174156d764fbbe274192557ec0c796b3570301d02ce1e49e2a312afd0555ec | b829e3f5457f391eec84a961a0071955fa1c3619a48c88a1d7161bc70ca70368 | LGPL-3.0-only | [] | 235 |
2.4 | train-travel | 0.0.8 | Python Client SDK Generated by Speakeasy. | # train-travel
Developer-friendly & type-safe Python SDK specifically catered to leverage *train-travel* API.
[](https://www.speakeasy.com/?utm_source=train-travel&utm_campaign=python)
[](https://opensource.org/licenses/MIT)
<br /><br />
> [!IMPORTANT]
> This SDK is not yet ready for production use. To complete setup please follow the steps outlined in your [workspace](https://app.speakeasy.com/org/speakeasy-self/daniel-test). Delete this section before publishing to a package manager.
<!-- Start Summary [summary] -->
## Summary
Train Travel API: API for finding and booking train trips across Europe.
## Run in Postman
Experiment with this API in Postman, using our Postman Collection.
[](https://app.getpostman.com/run-collection/9265903-7a75a0d0-b108-4436-ba54-c6139698dc08?action=collection%2Ffork&source=rip_markdown&collection-url=entityId%3D9265903-7a75a0d0-b108-4436-ba54-c6139698dc08%26entityType%3Dcollection%26workspaceId%3Df507f69d-9564-419c-89a2-cb8e4c8c7b8f)
<!-- End Summary [summary] -->
<!-- Start Table of Contents [toc] -->
## Table of Contents
<!-- $toc-max-depth=2 -->
* [train-travel](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#train-travel)
* [Run in Postman](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#run-in-postman)
* [SDK Installation](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#sdk-installation)
* [IDE Support](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#ide-support)
* [SDK Example Usage](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#sdk-example-usage)
* [Authentication](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#authentication)
* [Available Resources and Operations](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#available-resources-and-operations)
* [File uploads](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#file-uploads)
* [Retries](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#retries)
* [Error Handling](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#error-handling)
* [Server Selection](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#server-selection)
* [Custom HTTP Client](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#custom-http-client)
* [Resource Management](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#resource-management)
* [Debugging](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#debugging)
* [Development](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#development)
* [Maturity](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#maturity)
* [Contributions](https://github.com/danielkov/train-travel/blob/master/train-travel-python/#contributions)
<!-- End Table of Contents [toc] -->
<!-- Start SDK Installation [installation] -->
## SDK Installation
> [!NOTE]
> **Python version upgrade policy**
>
> Once a Python version reaches its [official end of life date](https://devguide.python.org/versions/), a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum python version supported in the SDK will be updated.
The SDK can be installed with *uv*, *pip*, or *poetry* package managers.
### uv
*uv* is a fast Python package installer and resolver, designed as a drop-in replacement for pip and pip-tools. It's recommended for its speed and modern Python tooling capabilities.
```bash
uv add train-travel
```
### PIP
*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install train-travel
```
### Poetry
*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add train-travel
```
### Shell and script usage with `uv`
You can use this SDK in a Python shell with [uv](https://docs.astral.sh/uv/) and the `uvx` command that comes with it like so:
```shell
uvx --from train-travel python
```
It's also possible to write a standalone Python script without needing to set up a whole project like so:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "train-travel",
# ]
# ///
from train_travel import TrainTravel
sdk = TrainTravel(
# SDK arguments
)
# Rest of script here...
```
Once that is saved to a file, you can run it with `uv run script.py` where
`script.py` can be replaced with the actual file name.
<!-- End SDK Installation [installation] -->
<!-- Start IDE Support [idesupport] -->
## IDE Support
### PyCharm
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.
- [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/)
<!-- End IDE Support [idesupport] -->
<!-- Start SDK Example Usage [usage] -->
## SDK Example Usage
### Example
```python
# Synchronous Example
import os
from train_travel import TrainTravel
with TrainTravel(
o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
res = tt_client.stations.get_stations(page=1, limit=10, coordinates="52.5200,13.4050", search="Milano Centrale", country="DE")
# Handle response
print(res)
```
</br>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
import os
from train_travel import TrainTravel
async def main():
async with TrainTravel(
o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
res = await tt_client.stations.get_stations_async(page=1, limit=10, coordinates="52.5200,13.4050", search="Milano Centrale", country="DE")
# Handle response
print(res)
asyncio.run(main())
```
<!-- End SDK Example Usage [usage] -->
<!-- Start Authentication [security] -->
## Authentication
### Per-Client Security Schemes
This SDK supports the following security scheme globally:
| Name | Type | Scheme | Environment Variable |
| --------- | ------ | ------------ | --------------------- |
| `o_auth2` | oauth2 | OAuth2 token | `TRAINTRAVEL_O_AUTH2` |
To authenticate with the API the `o_auth2` parameter must be set when initializing the SDK client instance. For example:
```python
import os
from train_travel import TrainTravel
with TrainTravel(
o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
res = tt_client.stations.get_stations(page=1, limit=10, coordinates="52.5200,13.4050", search="Milano Centrale", country="DE")
# Handle response
print(res)
```
<!-- End Authentication [security] -->
<!-- Start Available Resources and Operations [operations] -->
## Available Resources and Operations
<details open>
<summary>Available methods</summary>
### [Bookings](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/bookings/README.md)
* [get_bookings](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/bookings/README.md#get_bookings) - List existing bookings
* [create_booking](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/bookings/README.md#create_booking) - Create a booking
* [create_booking_raw](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/bookings/README.md#create_booking_raw) - Create a booking
* [get_booking](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/bookings/README.md#get_booking) - Get a booking
* [delete_booking](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/bookings/README.md#delete_booking) - Delete a booking
### [Payments](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/payments/README.md)
* [create_booking_payment](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/payments/README.md#create_booking_payment) - Pay for a Booking
### [Stations](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/stations/README.md)
* [get_stations](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/stations/README.md#get_stations) - Get a list of train stations
### [Trips](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/trips/README.md)
* [get_trips](https://github.com/danielkov/train-travel/blob/master/train-travel-python/docs/sdks/trips/README.md#get_trips) - Get available train trips
</details>
<!-- End Available Resources and Operations [operations] -->
<!-- Start File uploads [file-upload] -->
## File uploads
Certain SDK methods accept file objects as part of a request body or multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
> [!TIP]
>
> For endpoints that handle file uploads, byte arrays can also be used. However, using streams is recommended for large files.
>
```python
import io
import os
from train_travel import TrainTravel

with TrainTravel(
    o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
    res = tt_client.bookings.create_booking_raw(request=io.BytesIO(
        "{\"trip_id\":\"4f4e4e1-c824-4d63-b37a-d8d698862f1d\",\"passenger_name\":\"John Doe\",\"seat_preference\":\"window\"}".encode()
    ))

    # Handle response
    print(res)
```
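As a side note, the hand-escaped JSON string in the example above can be built more readably with the standard library's `json` module (same field values; a sketch independent of the SDK):

```python
import io
import json

# Build the request body as a dict and serialize it, instead of
# hand-escaping a JSON string.
booking = {
    "trip_id": "4f4e4e1-c824-4d63-b37a-d8d698862f1d",
    "passenger_name": "John Doe",
    "seat_preference": "window",
}
stream = io.BytesIO(json.dumps(booking).encode("utf-8"))
print(stream.read().decode("utf-8"))
```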
<!-- End File uploads [file-upload] -->
<!-- Start Retries [retries] -->
## Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:
```python
import os
from train_travel import TrainTravel
from train_travel.utils import BackoffStrategy, RetryConfig

with TrainTravel(
    o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
    res = tt_client.stations.get_stations(
        page=1, limit=10, coordinates="52.5200,13.4050",
        search="Milano Centrale", country="DE",
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    )

    # Handle response
    print(res)
```
If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:
```python
import os
from train_travel import TrainTravel
from train_travel.utils import BackoffStrategy, RetryConfig

with TrainTravel(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
    res = tt_client.stations.get_stations(page=1, limit=10, coordinates="52.5200,13.4050", search="Milano Centrale", country="DE")

    # Handle response
    print(res)
```
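For intuition, the four numbers passed to `BackoffStrategy` above appear to be the initial interval, the maximum per-retry interval, the growth exponent, and the maximum total elapsed time, in milliseconds. That reading is an assumption, so the sketch below illustrates generic exponential backoff with those numbers rather than the SDK's exact schedule:

```python
def backoff_intervals(initial_ms: float, max_interval_ms: float,
                      exponent: float, max_elapsed_ms: float) -> list[float]:
    """Generate successive wait times for an exponential backoff policy."""
    intervals, elapsed, attempt = [], 0.0, 0
    while True:
        # Grow geometrically, but never wait longer than the per-try cap.
        wait = min(initial_ms * (exponent ** attempt), max_interval_ms)
        if elapsed + wait > max_elapsed_ms:
            break  # total retry budget exhausted
        intervals.append(wait)
        elapsed += wait
        attempt += 1
    return intervals

# Same numbers as the example above: BackoffStrategy(1, 50, 1.1, 100)
waits = backoff_intervals(1, 50, 1.1, 100)
```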
<!-- End Retries [retries] -->
<!-- Start Error Handling [errors] -->
## Error Handling
[`TrainTravelError`](https://github.com/danielkov/train-travel/blob/master/train-travel-python/./src/train_travel/errors/traintravelerror.py) is the base class for all HTTP error responses. It has the following properties:
| Property | Type | Description |
| ------------------ | ---------------- | ------------------------------------------------------ |
| `err.message` | `str` | Error message |
| `err.status_code`  | `int`            | HTTP response status code, e.g. `404`                  |
| `err.headers` | `httpx.Headers` | HTTP response headers |
| `err.body` | `str` | HTTP body. Can be empty string if no body is returned. |
| `err.raw_response` | `httpx.Response` | Raw HTTP response |
### Example
```python
import os
from train_travel import TrainTravel, errors

with TrainTravel(
    o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
    res = None
    try:
        res = tt_client.stations.get_stations(page=1, limit=10, coordinates="52.5200,13.4050", search="Milano Centrale", country="DE")

        # Handle response
        print(res)

    except errors.TrainTravelError as e:
        # The base class for HTTP error responses
        print(e.message)
        print(e.status_code)
        print(e.body)
        print(e.headers)
        print(e.raw_response)
```
### Error Classes
**Primary error:**
* [`TrainTravelError`](https://github.com/danielkov/train-travel/blob/master/train-travel-python/./src/train_travel/errors/traintravelerror.py): The base class for HTTP error responses.
<details><summary>Less common errors (5)</summary>
<br />
**Network errors:**
* [`httpx.RequestError`](https://www.python-httpx.org/exceptions/#httpx.RequestError): Base class for request errors.
* [`httpx.ConnectError`](https://www.python-httpx.org/exceptions/#httpx.ConnectError): HTTP client was unable to make a request to a server.
* [`httpx.TimeoutException`](https://www.python-httpx.org/exceptions/#httpx.TimeoutException): HTTP request timed out.
**Inherit from [`TrainTravelError`](https://github.com/danielkov/train-travel/blob/master/train-travel-python/./src/train_travel/errors/traintravelerror.py)**:
* [`ResponseValidationError`](https://github.com/danielkov/train-travel/blob/master/train-travel-python/./src/train_travel/errors/responsevalidationerror.py): Type mismatch between the response data and the expected Pydantic model. Provides access to the Pydantic validation error via the `cause` attribute.
</details>
<!-- End Error Handling [errors] -->
<!-- Start Server Selection [server] -->
## Server Selection
### Select Server by Index
You can override the default server globally by passing a server index to the `server_idx: int` optional parameter when initializing the SDK client instance. The selected server will then be used as the default on the operations that use it. This table lists the indexes associated with the available servers:
| # | Server | Description |
| --- | ----------------------------------------------------- | ----------- |
| 0 | `https://try.microcks.io/rest/Train+Travel+API/1.0.0` | Mock Server |
| 1 | `https://api.example.com` | Production |
#### Example
```python
import os
from train_travel import TrainTravel

with TrainTravel(
    server_idx=0,
    o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
    res = tt_client.stations.get_stations(page=1, limit=10, coordinates="52.5200,13.4050", search="Milano Centrale", country="DE")

    # Handle response
    print(res)
```
### Override Server URL Per-Client
The default server can also be overridden globally by passing a URL to the `server_url: str` optional parameter when initializing the SDK client instance. For example:
```python
import os
from train_travel import TrainTravel

with TrainTravel(
    server_url="https://api.example.com",
    o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
) as tt_client:
    res = tt_client.stations.get_stations(page=1, limit=10, coordinates="52.5200,13.4050", search="Milano Centrale", country="DE")

    # Handle response
    print(res)
```
<!-- End Server Selection [server] -->
<!-- Start Custom HTTP Client [http-client] -->
## Custom HTTP Client
The Python SDK makes API calls using the [httpx](https://www.python-httpx.org/) HTTP library. In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance.
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of `HttpClient` or `AsyncHttpClient` respectively. These are Protocols that ensure the client has the necessary methods to make API calls.
This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can just pass an instance of `httpx.Client` or `httpx.AsyncClient` directly.
For example, you could specify a header for every request that this SDK makes as follows:
```python
from train_travel import TrainTravel
import httpx
http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = TrainTravel(client=http_client)
```
or you could wrap the client with your own custom logic:
```python
from train_travel import TrainTravel
from train_travel.httpclient import AsyncHttpClient
from typing import Any, Optional, Union
import httpx

class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        request.headers["Client-Level-Header"] = "added by client"

        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )

s = TrainTravel(async_client=CustomClient(httpx.AsyncClient()))
```
<!-- End Custom HTTP Client [http-client] -->
<!-- Start Resource Management [resource-management] -->
## Resource Management
The `TrainTravel` class implements the context manager protocol and registers a finalizer function to close the underlying sync and async HTTPX clients it uses under the hood. This will close HTTP connections, release memory and free up other resources held by the SDK. In short-lived Python programs and notebooks that make a few SDK method calls, resource management may not be a concern. However, in longer-lived programs, it is beneficial to create a single SDK instance via a [context manager][context-manager] and reuse it across the application.
[context-manager]: https://docs.python.org/3/reference/datamodel.html#context-managers
```python
import os
from train_travel import TrainTravel

def main():
    with TrainTravel(
        o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
    ) as tt_client:
        ...  # Rest of application here...

# Or when using async:
async def amain():
    async with TrainTravel(
        o_auth2=os.getenv("TRAINTRAVEL_O_AUTH2", ""),
    ) as tt_client:
        ...  # Rest of application here...
```
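The cleanup described above can be pictured with a minimal context manager that closes an underlying client on exit. This is an illustration with stand-in classes, not the SDK's actual implementation:

```python
class _FakeHTTPClient:
    """Stand-in for the HTTPX clients the SDK holds internally."""
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

class SDKClient:
    def __init__(self) -> None:
        self._http = _FakeHTTPClient()

    def __enter__(self) -> "SDKClient":
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        # Release connections and other resources when the block exits.
        self._http.close()

with SDKClient() as sdk:
    in_block_closed = sdk._http.closed   # still open inside the block

after_block_closed = sdk._http.closed    # closed once the block exits
```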
<!-- End Resource Management [resource-management] -->
<!-- Start Debugging [debug] -->
## Debugging
You can set up your SDK to emit debug logs for SDK requests and responses.
To do so, pass your own logger directly into your SDK:
```python
from train_travel import TrainTravel
import logging
logging.basicConfig(level=logging.DEBUG)
s = TrainTravel(debug_logger=logging.getLogger("train_travel"))
```
You can also enable a default debug logger by setting an environment variable `TRAINTRAVEL_DEBUG` to true.
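For instance, the variable can be set from Python before the client is constructed (a shell `export TRAINTRAVEL_DEBUG=true` works equally well):

```python
import os

# Enable the SDK's default debug logger for this process. Set this before
# the TrainTravel client is created so the setting takes effect.
os.environ["TRAINTRAVEL_DEBUG"] = "true"
```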
<!-- End Debugging [debug] -->
<!-- Placeholder for Future Speakeasy SDK Sections -->
# Development
## Maturity
This SDK is in beta, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning usage
to a specific package version. This way, you can install the same version each time without breaking changes unless you are intentionally
looking for the latest version.
## Contributions
While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation.
We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.
### SDK Created by [Speakeasy](https://www.speakeasy.com/?utm_source=train-travel&utm_campaign=python)
| text/markdown | Speakeasy | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpcore>=1.0.9",
"httpx>=0.28.1",
"pydantic>=2.11.2"
] | [] | [] | [] | [
"repository, https://github.com/danielkov/train-travel.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:12:52.713451 | train_travel-0.0.8-py3-none-any.whl | 74,861 | ac/c6/4c09e598b895f82bbb8de021678aaa8aa721ac1d9b21d742cc2c279bb2d5/train_travel-0.0.8-py3-none-any.whl | py3 | bdist_wheel | null | false | 2c4f66793c000485ac30c42cd6c93449 | 9498cb8aec7df53734bdd6f30cc5842faf253f6581c5414cf8927cd44fb04b93 | acc64c09e598b895f82bbb8de021678aaa8aa721ac1d9b21d742cc2c279bb2d5 | null | [] | 241 |
2.4 | icub-pybullet | 1.3.0 | pyCub - iCub in PyBullet | # pyCub Documentation
pyCub is iCub humanoid robot simulator written in Python. It uses PyBullet for simulation and Open3D for visualization.
## Installation
- Requires Python 3.10 to 3.12
  - newer Python versions are currently not supported due to incompatibilities with some dependencies
- We recommend using a virtual environment when installing from PyPI or from source
- ```
python3 -m venv pycub_venv
source pycub_venv/bin/activate
OTHER_COMMANDS
```
1. **(Recommended)** Install from PyPi
- ```python3 -m pip install icub_pybullet```
2. Install from source
- Pull this repository
- ```
cd PATH_TO_THE_REPOSITORY/icub_pybullet
python3 -m pip install --upgrade pip
python3 -m pip install .
```
3. Native Docker (GNU/Linux only)
- see [Docker Native Version](#native-version) section
4. VNC Docker
- see [Docker VNC Version](#vnc-version) section
5. Gitpod
- open [https://app.gitpod.io/#https://github.com/rustlluk/pycub](https://app.gitpod.io/#https://github.com/rustlluk/pycub)
and log in with GitHub account
- then open port 6080 and run `start-vnc-session.sh` in the terminal to start the VNC server
- if using the VSCode browser version, click the three lines at the top left -> Terminal -> New Terminal
## Examples
- [push_the_ball_pure_joints.py](https://github.com/rustlluk/pycub/blob/master/icub_pybullet/examples/push_the_ball_pure_joints.py) contains an example that
shows how to control the robot in joint space
- [push_the_ball_cartesian.py](https://github.com/rustlluk/pycub/blob/master/icub_pybullet/examples/push_the_ball_cartesian.py) contains an example that
shows how to control the robot in Cartesian space
- [skin_test.py](https://github.com/rustlluk/pycub/blob/master/icub_pybullet/examples/skin_test.py) contains an example with balls falling on the robot; the skin
should turn green at the places where contact occurs. You may want to slow the simulation down a little to see that :)
## Information
- documentation can be found at [https://lukasrustler.cz/pyCub/documentation](https://lukasrustler.cz/pyCub/documentation) or in [pycub.pdf](https://lukasrustler.cz/pyCub/documentation/pycub.pdf)
- presentation with description of functionality can be found at [pycub presentation](https://lukasrustler.cz/pyCub/documentation/pycub_presentation.pdf)
- simulator code is in [pycub.py](https://github.com/rustlluk/pycub/blob/master/icub_pybullet/pycub.py)
- it uses PyBullet for simulation and provides high-level interface
- visualization code in [visualizer.py](https://github.com/rustlluk/pycub/blob/master/icub_pybullet/visualizer.py)
- it uses Open3D for visualization as it is much more customizable than PyBullet default GUI
## FAQ
1. You get some kind of error with the visualization, e.g., a segmentation fault.
   1. Try to check your mesa/opengl/nvidia graphics drivers, as those are the most common source of problems for the OpenGL visualization
   2. Try the Native Docker
   3. In the config file you load, set `standard=False` and `web=True` in the GUI section to enable web visualization
   4. Try the VNC Docker
   5. Try the Gitpod Docker
2. You get import errors, e.g., cannot load pycub from icub_pybullet
   1. Install from pip with `python3 -m pip install icub_pybullet`
   2. Install the package correctly with `python3 -m pip install .` from the icub_pybullet directory of this repository
   3. Put the icub_pybullet directory on your PYTHONPATH
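The last option above can also be done at runtime instead of via the environment; the path below is a placeholder for your clone:

```python
import sys

# Placeholder path: point this at the icub_pybullet directory of your clone.
PYCUB_DIR = "/path/to/pyCub/icub_pybullet"
if PYCUB_DIR not in sys.path:
    sys.path.insert(0, PYCUB_DIR)
```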
## Docker
### Native Version
- version with native GUI; useful when you have problems with OpenGL (e.g., usually some driver issues)
- only for GNU/Linux systems (Ubuntu, Mint, Arch, etc.)
1. install [docker-engine](https://docs.docker.com/engine/install/ubuntu/)
(**DO NOT INSTALL DOCKER DESKTOP**)
- **perform** [post-installation steps](https://docs.docker.com/engine/install/linux-postinstall/)
2. Build/Pull the Docker Image
- clone this repository
```
cd SOME_PATH
git clone https://github.com/rustlluk/pyCub.git
```
- pull the docker image (see [Parameters](#deploy-parameters) for more parameters)
```
cd PATH_TO_THE_REPOSITORY/Docker
./deploy.py -p PATH_TO_THE_REPOSITORY -c pycub -pu
```
- or, build the docker (see [Parameters](#deploy-parameters) for more parameters)
```
cd SOME_PATH/pycub_ws/Docker
./deploy.py -p PATH_TO_THE_REPOSITORY -c pycub -b
```
- after you pull or build the container, you can run it next time as
```
./deploy.py -c pycub -e
```
- if you want to open new terminal in existing container, run
```
./deploy.py -c pycub -t
```
### VNC Version
- this version also works on Windows and macOS because it uses a VNC server to show the GUI, i.e., the output will be
shown on [http://localhost:6080](http://localhost:6080)
1. Install [docker-engine](https://docs.docker.com/engine/install/ubuntu/) (GNU/Linux only) or
[docker-desktop](https://docs.docker.com/desktop/) (all systems)
- **perform** [post-installation steps](https://docs.docker.com/engine/install/linux-postinstall/)
2. The same as for [Native Version](#native-version), but use `-vnc` option, e.g., to pull and run the image
```
cd PATH_TO_THE_REPOSITORY/Docker
./deploy.py -p PATH_TO_THE_REPOSITORY -c pycub -pu -vnc
```
3. run `start-vnc-sessions.sh` script in the container
4. Open [http://localhost:6080](http://localhost:6080)
### Docker + PyCharm
#### Native Version:
You have two options:
1. Run PyCharm from inside the docker container
2. Open PyCharm on your host machine:
   - add an SSH interpreter
   - user: docker
   - IP: localhost, or the IP of the machine where the docker container runs
   - port: 2222
   - uncheck automatic upload to the remote folder
   - change the remote path to /home/docker/pycub_ws
#### VNC/Gitpod version
1. open pycharm from inside the container
### Deploy Parameters
- `cd` to folder with Dockerfile
- `./deploy.py`
  - `-b` or `--build` when building
    - default: False
  - `-e` if you just want to run an existing docker container without building
    - default: False
  - `-p` or `--path` with the path to the current folder
    - default: ""
  - `-pu` or `--pull` to pull the image from dockerhub
    - default: False
  - `-c` or `--container` with the desired name of the new, created container
    - default: my_new_docker
  - `-t` or `--terminal` to run a new terminal in a running docker session
    - default: False
  - `-pv` or `--python-version` to specify an additional Python version to install
    - default: 3.11
  - `-pcv` or `--pycharm-version` to specify the version of pycharm to use
    - default: 2023.2.3
  - `-bi` or `--base-image` to specify the base image that will be used
    - default: ubuntu:20.04
    - others can be found at hub.docker.com
Do this on the computer where you will run the code. If you have a server, you have to run it on the server over SSH to make things work properly.
### Docker FAQ
- **you get an error about not being in the sudo group when running the image**
  - check the output of the `id -u` command. If the output is not 1000, you have to build the image
    yourself and cannot pull it
  - this happens when your account is not the first one created on your computer
- **`sudo apt install something` does not work**
  - you need to run `sudo apt update` first after you run the container for the first time
  - apt caches are removed in the Dockerfile so that they do not take up unnecessary space in the image
## Known bugs
- visualization with skin dies after ~65k steps
- e.g., [https://github.com/isl-org/Open3D/issues/4992](https://github.com/isl-org/Open3D/issues/4992)
## License
[![CC BY 4.0][cc-by-shield]][cc-by]
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
| text/markdown | Lukas Rustler | lukas.rustler@fel.cvut.cz | null | null | Creative Commons Attribution 4.0 International (CC BY 4.0) | null | [] | [
"any"
] | https://www.lukasrustler.cz/pycub | null | <3.13,>=3.8 | [] | [] | [] | [
"numpy<2.0.0",
"open3d>=0.16.0",
"scipy",
"pybullet",
"roboticstoolbox-python",
"opencv-python",
"psutil",
"transforms3d",
"dash<3.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:12:12.914642 | icub_pybullet-1.3.0.tar.gz | 6,355,588 | ff/5f/530ed7125949aaf7e17abe94d03680f75efe7848f1364cbfdbe2264f04d2/icub_pybullet-1.3.0.tar.gz | source | sdist | null | false | 605e8638a743504f52ae0ac179b583a4 | 10f3e22fc7b5f8c110fb8079b198b5f7c96b5b05a5b0b0c5ec6ea366d921077e | ff5f530ed7125949aaf7e17abe94d03680f75efe7848f1364cbfdbe2264f04d2 | null | [] | 286 |
2.4 | pytest-gcpsecretmanager | 0.2.0 | A PyTest plugin for mocking GCP's Secret Manager | # pytest-gcpsecretmanager
A pytest plugin that provides an in-memory fake for Google Cloud Secret Manager. No Docker, no emulator, no GCP credentials required.
Both `SecretManagerServiceClient` (sync) and `SecretManagerServiceAsyncClient` (async) are transparently patched for the duration of each test.
## Installation
```bash
pip install pytest-gcpsecretmanager
```
The plugin requires Python 3.12+ and pytest 7.0+. It does **not** require `google-cloud-secret-manager` to be installed — if the SDK is absent, fake modules are injected so that `from google.cloud.secretmanager import SecretManagerServiceClient` still works in your code under test.
## Quick start
```python
import pytest
from google.cloud.secretmanager import SecretManagerServiceClient

@pytest.mark.secret("my-api-key", "s3cret")
def test_reads_secret(secret_manager):
    client = SecretManagerServiceClient()
    response = client.access_secret_version(
        name="projects/test-project/secrets/my-api-key/versions/latest"
    )
    assert response.payload.data == b"s3cret"
```
The `secret_manager` fixture activates patching and returns the underlying `SecretStore`. The `@pytest.mark.secret` marker pre-populates secrets before the test runs.
## Fixtures
### `secret_manager`
Function-scoped. Activates patching and returns a `SecretStore` instance. Any `SecretManagerServiceClient` or `SecretManagerServiceAsyncClient` instantiated during the test will use this in-memory store.
```python
def test_programmatic_setup(secret_manager):
    secret_manager.set_secret("db-password", "hunter2")
    secret_manager.set_secret("binary-key", b"\x00\x01\x02")

    client = SecretManagerServiceClient()
    resp = client.access_secret_version(
        name="projects/test-project/secrets/db-password/versions/latest"
    )
    assert resp.payload.data == b"hunter2"
```
The store also supports multiple versions:
```python
def test_versioned_secrets(secret_manager):
    secret_manager.set_secret_sequence("rotating-key", ["v1", "v2", "v3"])

    client = SecretManagerServiceClient()
    resp = client.access_secret_version(
        name="projects/test-project/secrets/rotating-key/versions/2"
    )
    assert resp.payload.data == b"v2"
```
### `secret_manager_client`
Returns a `FakeSecretManagerServiceClient` instance backed by the current test's store.
```python
def test_with_client(secret_manager_client):
    secret_manager_client.create_secret(
        parent="projects/test-project", secret_id="new-secret"
    )
    secret_manager_client.add_secret_version(
        parent="projects/test-project/secrets/new-secret",
        payload={"data": b"payload"},
    )
    resp = secret_manager_client.access_secret_version(
        name="projects/test-project/secrets/new-secret/versions/1"
    )
    assert resp.payload.data == b"payload"
```
### `secret_manager_async_client`
Returns a `FakeSecretManagerServiceAsyncClient` — same API, but all methods are `async`.
```python
async def test_async_client(secret_manager_async_client):
    await secret_manager_async_client.create_secret(
        parent="projects/test-project", secret_id="async-secret"
    )
    await secret_manager_async_client.add_secret_version(
        parent="projects/test-project/secrets/async-secret",
        payload={"data": b"hello"},
    )
    resp = await secret_manager_async_client.access_secret_version(
        name="projects/test-project/secrets/async-secret/versions/1"
    )
    assert resp.payload.data == b"hello"
```
### `secret_failure_injector`
Returns the active `_FailureInjector` for programmatic failure injection (see [Failure injection](#failure-injection) below).
## Markers
### `@pytest.mark.secret(secret_id, value, *, project=None)`
Pre-populate a secret before the test runs. The default project is `"test-project"`.
```python
# Single version
@pytest.mark.secret("api-key", "my-key")
def test_single(secret_manager): ...
# Multiple versions (pass a list)
@pytest.mark.secret("rotating", ["v1", "v2", "v3"])
def test_versions(secret_manager): ...
# Custom project
@pytest.mark.secret("api-key", "my-key", project="my-project")
def test_custom_project(secret_manager): ...
# Stack multiple markers
@pytest.mark.secret("key-a", "value-a")
@pytest.mark.secret("key-b", "value-b")
def test_multiple(secret_manager): ...
```
### `@pytest.mark.secret_failure(method, exception, *, transient=False, count=1)`
Inject a failure into a specific client method.
```python
from pytest_gcpsecretmanager import NotFound

# Permanent failure — every call raises
@pytest.mark.secret_failure("access_secret_version", NotFound("gone"))
def test_not_found(secret_manager):
    client = SecretManagerServiceClient()
    with pytest.raises(NotFound):
        client.access_secret_version(name="projects/p/secrets/s/versions/1")

# Transient failure — fails `count` times, then succeeds
@pytest.mark.secret_failure(
    "access_secret_version", NotFound("retry me"), transient=True, count=2
)
@pytest.mark.secret("key", "val")
def test_transient(secret_manager):
    client = SecretManagerServiceClient()
    for _ in range(2):
        with pytest.raises(NotFound):
            client.access_secret_version(
                name="projects/test-project/secrets/key/versions/latest"
            )

    # Third call succeeds
    resp = client.access_secret_version(
        name="projects/test-project/secrets/key/versions/latest"
    )
    assert resp.payload.data == b"val"
```
## Failure injection
In addition to the marker, you can inject failures programmatically via the `secret_failure_injector` fixture:
```python
from pytest_gcpsecretmanager import PermissionDenied

def test_programmatic_failure(secret_manager, secret_failure_injector):
    secret_failure_injector.add_permanent_failure(
        "create_secret", PermissionDenied("nope")
    )
    client = SecretManagerServiceClient()
    with pytest.raises(PermissionDenied):
        client.create_secret(parent="projects/test-project", secret_id="x")
```
Available exception types: `NotFound`, `AlreadyExists`, `PermissionDenied`, `FailedPrecondition`, `InvalidArgument`, `ResourceExhausted`, `DeadlineExceeded`.
## Supported API methods
Both sync and async clients support:
| Method | Description |
|---|---|
| `create_secret` | Create a new secret |
| `get_secret` | Get secret metadata |
| `delete_secret` | Delete a secret and all its versions |
| `list_secrets` | List secrets in a project |
| `add_secret_version` | Add a new version to a secret |
| `get_secret_version` | Get version metadata |
| `access_secret_version` | Access the secret payload |
| `destroy_secret_version` | Permanently destroy a version |
| `disable_secret_version` | Disable a version |
| `enable_secret_version` | Re-enable a disabled version |
| `list_secret_versions` | List all versions of a secret |
| `secret_path` | Static helper: build a secret resource name |
| `secret_version_path` | Static helper: build a version resource name |
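The two static helpers at the bottom of the table build standard GCP resource names. A minimal sketch of the formats, inferred from the resource names used throughout this README:

```python
def secret_path(project: str, secret: str) -> str:
    # Mirrors the shape of SecretManagerServiceClient.secret_path
    return f"projects/{project}/secrets/{secret}"

def secret_version_path(project: str, secret: str, version: str) -> str:
    # Mirrors the shape of SecretManagerServiceClient.secret_version_path
    return f"projects/{project}/secrets/{secret}/versions/{version}"

name = secret_version_path("test-project", "my-api-key", "latest")
```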
## How patching works
When the `secret_manager` fixture is active:
1. If `google-cloud-secret-manager` is installed, the real client classes are patched via `unittest.mock.patch` across all known import paths (`google.cloud.secretmanager`, `google.cloud.secretmanager_v1`, etc.).
2. If the SDK is **not** installed, fake modules are injected into `sys.modules` so that `from google.cloud.secretmanager import SecretManagerServiceClient` resolves to the fake client.
This means your application code doesn't need any special imports or conditional logic — the fake is transparent.
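The module-injection fallback in step 2 can be illustrated with a few lines of plain Python. This is the mechanism only, not the plugin's actual code:

```python
import sys
import types

class FakeClient:
    """Stand-in for the fake SecretManagerServiceClient."""

# Create empty package modules and wire them together by attribute.
google_pkg = types.ModuleType("google")
cloud_pkg = types.ModuleType("google.cloud")
sm_mod = types.ModuleType("google.cloud.secretmanager")
sm_mod.SecretManagerServiceClient = FakeClient
google_pkg.cloud = cloud_pkg
cloud_pkg.secretmanager = sm_mod

# Register them so the import system resolves the dotted path. setdefault
# keeps any real packages that are already importable.
sys.modules.setdefault("google", google_pkg)
sys.modules.setdefault("google.cloud", cloud_pkg)
sys.modules["google.cloud.secretmanager"] = sm_mod

# This import now succeeds even without the real SDK installed.
from google.cloud.secretmanager import SecretManagerServiceClient
```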
## Response types
Responses use lightweight dataclasses that mirror the GCP protobuf shapes:
- `AccessSecretVersionResponse` — has `.name` and `.payload` (a `SecretPayload` with `.data` bytes and `.data_crc32c`)
- `Secret` — has `.name`, `.replication`, `.create_time`, `.labels`
- `SecretVersion` — has `.name`, `.create_time`, `.state`
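As a sketch, the shapes above correspond to dataclasses along these lines (field names from the list above; defaults are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecretPayload:
    data: bytes
    data_crc32c: Optional[int] = None

@dataclass
class AccessSecretVersionResponse:
    name: str
    payload: SecretPayload

resp = AccessSecretVersionResponse(
    name="projects/test-project/secrets/s/versions/1",
    payload=SecretPayload(data=b"hello"),
)
```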
## License
MIT
| text/markdown | Neale Petrillo | neale.a.petrillo@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pytest>=7.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:12:09.417957 | pytest_gcpsecretmanager-0.2.0.tar.gz | 12,651 | af/a9/e21e1249b6dce304772e60768124fbd2dd5f4f05ce1da88732ec62f091ea/pytest_gcpsecretmanager-0.2.0.tar.gz | source | sdist | null | false | bba50f224e89f89dd8e96c05aad255c6 | e4564db83c4f95f8a9781405999fb74766f27e1aa22ae5d0bfcf64999d25ae79 | afa9e21e1249b6dce304772e60768124fbd2dd5f4f05ce1da88732ec62f091ea | null | [
"LICENSE"
] | 236 |
2.4 | cometspy | 0.6.3 | The Python interface to COMETS |
[](https://pypi.org/project/cometspy/)
[](https://pypi.org/project/cometspy/)
[](https://GitHub.com/segrelab/cometspy/releases/)
# COMETSPy - The Python Interface for COMETS
COMETSPy is the Python interface for running [COMETS](https://GitHub.com/segrelab/comets) simulations. COMETS is built and maintained by the COMETSPy Core Team.
COMETSPy is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
COMETSPy is developed with non-commercial use in mind and is presented as-is. To inquire about collaborations or commercial usage and development, please contact us at <comets@bu.edu>.
# Documentation
Documentation on how to use COMETS with COMETSPy can be found at [https://segrelab.github.io/cometspy/](https://segrelab.github.io/cometspy/).
# Installation
Use pip to install COMETSPy from PyPI:
```sh
pip3 install cometspy
```
# Cite us
# Contributing
Contributions are welcome and appreciated. Questions and discussions can be raised on [Gitter](https://gitter.im/segrelab/comets). Issues should be discussed in this forum before they are raised on GitHub. For other questions, contact us by email at comets@bu.edu.
| text/markdown | null | The COMETSPy Core Team <dukovski@bu.edu> | null | null | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| metabolism, dynamic, flux, balance, analysis, spatial, evolution | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"numpy",
"cobra",
"pandas>=1.0.0",
"tqdm"
] | [] | [] | [] | [
"Homepage, https://github.com/segrelab/cometspy"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T17:11:42.210407 | cometspy-0.6.3.tar.gz | 77,201 | 04/b9/85e522876d12f4d735d9ecb728765e9739ab69397516e0d87105db31da01/cometspy-0.6.3.tar.gz | source | sdist | null | false | 0d616e4af895f37af91eda5d7ab2ce21 | bf437d5ddcf1eba9799428f497bac66ea2edfb8c9f576064ac8fabf05586cab7 | 04b985e522876d12f4d735d9ecb728765e9739ab69397516e0d87105db31da01 | null | [] | 467 |
2.4 | agent-task-queue | 0.4.0 | MCP server for sequential task execution via FIFO queue | # Agent Task Queue
[](https://github.com/block/agent-task-queue/actions/workflows/ci.yml)
[](https://pypi.org/project/agent-task-queue/)
[](https://github.com/block/agent-task-queue/releases)
**Local task queuing for AI agents.** Prevents multiple agents from running expensive operations concurrently and thrashing your machine.
## The Problem
When multiple AI agents work on the same machine, they independently trigger expensive operations. Running these concurrently causes:
- 5-minute builds stretching to 30+ minutes
- Memory thrashing and disk I/O saturation
- Machine unresponsiveness
- Agents unable to coordinate with each other
## How It Works
**Default: Global queue** - All `run_task` calls share one queue.
```
# Agent A runs:
run_task("./gradlew test", working_directory="/project")
# Agent B runs (waits for A to finish, then executes):
run_task("./gradlew build", working_directory="/project")
```
**Custom queues** - Use `queue_name` to isolate workloads:
```
# These run in separate queues (can run in parallel):
run_task("./gradlew build", queue_name="android", ...)
run_task("npm run build", queue_name="web", ...)
```
In both cases, each agent blocks until its own build completes; the server handles sequencing automatically.
## Demo: Two Agents, One Build Queue
**Terminal A** - First agent requests an Android build:
```
> Build the Android app
⏺ agent-task-queue - run_task (MCP)
command: "./gradlew assembleDebug"
working_directory: "/path/to/android-project"
⎿ "SUCCESS exit=0 192.6s output=/tmp/agent-task-queue/output/task_1.log"
⏺ Build completed successfully in 192.6s.
```
**Terminal B** - Second agent requests the same build (started 2 seconds after A):
```
> Build the Android app
⏺ agent-task-queue - run_task (MCP)
command: "./gradlew assembleDebug"
working_directory: "/path/to/android-project"
⎿ "SUCCESS exit=0 32.6s output=/tmp/agent-task-queue/output/task_2.log"
⏺ Build completed successfully in 32.6s.
```
**What happened behind the scenes:**
| Time | Agent A | Agent B |
|------|---------|---------|
| 0:00 | Started build | |
| 0:02 | Building... | Entered queue, waiting |
| 3:12 | **Completed** (192.6s) | Started build |
| 3:45 | | **Completed** (32.6s) |
**Why this matters:**
Without the queue, both builds would run simultaneously—fighting for CPU, memory, and disk I/O. Each build might take 5+ minutes, and your machine would be unresponsive.
With the queue:
- **Agent B automatically waited** for Agent A to finish
- **Agent B's build was 6x faster** (32s vs 193s) because Gradle reused cached artifacts
- **Total time: 3:45** instead of 10+ minutes of thrashing
- **Your machine stayed responsive** throughout
## Key Features
- **FIFO Queuing**: Strict first-in-first-out ordering
- **No Queue Timeouts**: MCP keeps the connection alive while a task waits in the queue. The `timeout_seconds` parameter applies only to execution time, so tasks can wait in the queue indefinitely without timing out (see [Why MCP?](#why-mcp-instead-of-a-cli-tool))
- **Environment Variables**: Pass `env_vars="ANDROID_SERIAL=emulator-5560"`
- **Multiple Queues**: Isolate different workloads with `queue_name`
- **Zombie Protection**: Detects dead processes, kills orphans, clears stale locks
- **Auto-Kill**: Tasks running > 120 minutes are terminated
## Installation
```bash
uvx agent-task-queue@latest
```
That's it. [uvx](https://docs.astral.sh/uv/guides/tools/) runs the package directly from PyPI—no clone, no install, no virtual environment.
## Agent Configuration
Agent Task Queue works with any AI coding tool that supports MCP. Add this config to your MCP client:
```json
{
"mcpServers": {
"agent-task-queue": {
"command": "uvx",
"args": ["agent-task-queue@latest"]
}
}
}
```
### MCP Client Configuration
<details>
<summary>Amp</summary>
Install via CLI:
```bash
amp mcp add agent-task-queue -- uvx agent-task-queue@latest
```
Or add to `.amp/settings.json` (workspace) or global settings. See [Amp Manual](https://ampcode.com/manual) for details.
</details>
<details>
<summary>Claude Code</summary>
Install via CLI (<a href="https://docs.anthropic.com/en/docs/claude-code/mcp">guide</a>):
```bash
claude mcp add agent-task-queue -- uvx agent-task-queue@latest
```
</details>
<details>
<summary>Claude Desktop</summary>
Config file locations:
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
Use the standard config above.
</details>
<details>
<summary>Cline</summary>
Open the MCP Servers panel > Configure > "Configure MCP Servers" to edit `cline_mcp_settings.json`. Use the standard config above.
See [Cline MCP docs](https://docs.cline.bot/mcp/configuring-mcp-servers) for details.
</details>
<details>
<summary>Copilot / VS Code</summary>
Requires VS Code 1.102+ with GitHub Copilot Chat extension.
Config file locations:
- **Workspace**: `.vscode/mcp.json`
- **Global**: Via Command Palette > "MCP: Open User Configuration"
```json
{
"servers": {
"agent-task-queue": {
"type": "stdio",
"command": "uvx",
"args": ["agent-task-queue@latest"]
}
}
}
```
See [VS Code MCP docs](https://code.visualstudio.com/docs/copilot/chat/mcp-servers) for details.
</details>
<details>
<summary>Cursor</summary>
Go to `Cursor Settings` > `MCP` > `+ Add new global MCP server`. Use the standard config above.
Config file locations:
- **Global**: `~/.cursor/mcp.json`
- **Project**: `.cursor/mcp.json`
See [Cursor MCP docs](https://docs.cursor.com/context/model-context-protocol) for details.
</details>
<details>
<summary>Firebender</summary>
Add to `firebender.json` in project root, or use Plugin Settings > MCP section. Use the standard config above.
See [Firebender MCP docs](https://docs.firebender.com/context/mcp) for details.
</details>
<details>
<summary>Windsurf</summary>
Config file location: `~/.codeium/windsurf/mcp_config.json`
Or use Windsurf Settings > Cascade > Manage MCPs. Use the standard config above.
See [Windsurf MCP docs](https://docs.windsurf.com/windsurf/cascade/mcp) for details.
</details>
## Usage
Agents use the `run_task` MCP tool for expensive operations:
**Build Tools:** gradle, bazel, make, cmake, mvn, cargo build, go build, npm/yarn/pnpm build
**Container Operations:** docker build, docker-compose, podman, kubectl, helm
**Test Suites:** pytest, jest, mocha, rspec
> **Note:** Some agents automatically prefer MCP tools (Amp, Copilot, Windsurf). Others may need [configuration](#agent-configuration-notes) to prefer `run_task` over built-in shell commands.
### Tool Parameters
| Parameter | Required | Description |
|-----------|----------|-------------|
| `command` | Yes | Shell command to execute |
| `working_directory` | Yes | Absolute path to run from |
| `queue_name` | No | Queue identifier (default: "global") |
| `timeout_seconds` | No | Max **execution** time before kill (default: 1200). Queue wait time doesn't count. |
| `env_vars` | No | Environment variables: `"KEY=val,KEY2=val2"` |
### Example
```
run_task(
command="./gradlew connectedAndroidTest",
working_directory="/project",
queue_name="android",
env_vars="ANDROID_SERIAL=emulator-5560"
)
```
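The `env_vars` string uses a flat `KEY=val,KEY2=val2` format. A minimal sketch of how such a string could be parsed (illustrative only; the server's actual parsing may handle edge cases differently, e.g. values containing commas):

```python
def parse_env_vars(env_vars: str) -> dict[str, str]:
    """Parse a comma-separated "KEY=val,KEY2=val2" string into a dict."""
    result: dict[str, str] = {}
    for pair in env_vars.split(","):
        pair = pair.strip()
        if not pair:
            continue  # tolerate trailing commas
        key, _, value = pair.partition("=")
        result[key.strip()] = value
    return result

print(parse_env_vars("ANDROID_SERIAL=emulator-5560,GRADLE_OPTS=-Xmx4g"))
```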
### Agent Configuration Notes
Some agents need additional configuration to use the queue instead of built-in shell commands.
| Agent | Extra Setup | Notes |
|-------|-------------|-------|
| Amp, Copilot, Windsurf | ❌ None | Works out of the box |
| **Claude Code, Cursor** | ✅ Required | Must remove Bash allowed rules |
| Cline, Firebender | ⚠️ Maybe | Check agent docs |
> [!IMPORTANT]
> **Claude Code users:** If you have allowed rules like `Bash(gradle:*)` or `Bash(./gradlew:*)`, the agent will use Bash directly and **bypass the queue entirely**. You must remove these rules for the queue to work.
>
> Check both `settings.json` and `settings.local.json` (project and global) for rules like:
> - `Bash(gradle:*)`, `Bash(./gradlew:*)`, `Bash(ANDROID_SERIAL=* ./gradlew:*)`
> - `Bash(docker build:*)`, `Bash(pytest:*)`, etc.
>
> See [Claude Code setup guide](examples/claude-code/SETUP.md) for the full fix.
#### Quick Agent Setup
After installing the MCP server, tell your agent:
```
"Configure agent-task-queue - use examples/<agent-name>/SETUP.md if available"
```
**Available setup guides:**
- [Claude Code setup](examples/claude-code/SETUP.md) - 3-step configuration
- [Other agents](examples/) - Contributions welcome!
## Configuration
The server supports the following command-line options:
| Option | Default | Description |
|--------|---------|-------------|
| `--data-dir` | `/tmp/agent-task-queue` | Directory for database and logs |
| `--max-log-size` | `5` | Max metrics log size in MB before rotation |
| `--max-output-files` | `50` | Number of task output files to retain |
| `--tail-lines` | `50` | Lines of output to include on failure |
| `--lock-timeout` | `120` | Minutes before stale locks are cleared |
Pass options via the `args` property in your MCP config:
```json
{
"mcpServers": {
"agent-task-queue": {
"command": "uvx",
"args": [
"agent-task-queue@latest",
"--max-output-files=100",
"--lock-timeout=60"
]
}
}
}
```
Run `uvx agent-task-queue@latest --help` to see all options.
## IntelliJ Plugin
An optional [IntelliJ plugin](intellij-plugin/) provides real-time IDE integration — status bar widget, tool window with live streaming output, and balloon notifications for queue events. See the [plugin README](intellij-plugin/README.md) for details.
## Architecture
```mermaid
flowchart TD
A[AI Agent<br/>Claude, Cursor, Windsurf, etc.] -->|MCP Protocol| B[task_queue.py<br/>FastMCP Server]
B -->|Query/Update| C[(SQLite Queue<br/>/tmp/agent-task-queue/queue.db)]
B -->|Execute| D[Subprocess<br/>gradle, docker, etc.]
D -.->|stdout/stderr| B
B -.->|blocks until complete| A
```
### Data Directory
All data is stored in `/tmp/agent-task-queue/` by default:
- `queue.db` - SQLite database for queue state
- `agent-task-queue-logs.json` - JSON metrics log (NDJSON format)
- `output/` - Per-task output logs (see [Task Output Logs](#task-output-logs))
To use a different location, pass `--data-dir=/path/to/data` or set the `TASK_QUEUE_DATA_DIR` environment variable.
### Database Schema
The queue state is stored in SQLite at `/tmp/agent-task-queue/queue.db`:
| Column | Type | Description |
|--------|------|-------------|
| `id` | INTEGER | Auto-incrementing primary key |
| `queue_name` | TEXT | Queue identifier (e.g., "global", "android") |
| `status` | TEXT | Task state: "waiting" or "running" |
| `command` | TEXT | Shell command being executed |
| `pid` | INTEGER | MCP server process ID (for liveness check) |
| `server_id` | TEXT | Server instance UUID (for orphan detection across PID reuse) |
| `child_pid` | INTEGER | Subprocess ID (for orphan cleanup) |
| `created_at` | TIMESTAMP | When task was queued |
| `updated_at` | TIMESTAMP | Last status change |
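The documented columns can be sketched as SQLite DDL. Constraints beyond what the table above states (`NOT NULL`, the `status` CHECK) are assumptions, and FIFO selection by `id` is shown for illustration, not as the server's actual query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tasks (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        queue_name TEXT NOT NULL,
        status TEXT NOT NULL CHECK (status IN ('waiting', 'running')),
        command TEXT NOT NULL,
        pid INTEGER,
        server_id TEXT,
        child_pid INTEGER,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO tasks (queue_name, status, command) VALUES (?, ?, ?)",
    ("global", "waiting", "./gradlew build"),
)
# FIFO: the next task to run is the oldest waiting row in its queue
row = conn.execute(
    "SELECT id, command FROM tasks "
    "WHERE queue_name = ? AND status = 'waiting' ORDER BY id LIMIT 1",
    ("global",),
).fetchone()
```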
### Zombie Protection
If an agent crashes while a task is running:
1. The next task detects the dead parent process (via PID check)
2. It kills any orphaned child process (the actual build)
3. It clears the stale lock
4. Execution continues normally
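Step 1's liveness check is commonly done with signal 0 on POSIX systems (the only supported platforms). A hedged sketch of the idea, not the server's actual implementation:

```python
import os

def process_alive(pid: int) -> bool:
    """Return True if a process with this PID currently exists (POSIX)."""
    try:
        os.kill(pid, 0)  # signal 0 performs an existence check only
    except ProcessLookupError:
        return False     # no such process: the parent is dead
    except PermissionError:
        return True      # process exists but is owned by another user
    return True
```

Note that bare PID checks are vulnerable to PID reuse, which is why the schema also stores a `server_id` UUID.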
### Metrics Logging
All queue events are logged to `agent-task-queue-logs.json` in NDJSON format (one JSON object per line):
```json
{"event":"task_queued","timestamp":"2025-12-12T16:01:34","task_id":8,"queue_name":"global","pid":23819}
{"event":"task_started","timestamp":"2025-12-12T16:01:34","task_id":8,"queue_name":"global","wait_time_seconds":0.0}
{"event":"task_completed","timestamp":"2025-12-12T16:02:05","task_id":8,"queue_name":"global","command":"./gradlew build","exit_code":0,"duration_seconds":31.2,"stdout_lines":45,"stderr_lines":2}
```
**Events logged:**
- `task_queued` - Task entered the queue
- `task_started` - Task acquired lock and began execution
- `task_completed` - Task finished (includes exit code and duration)
- `task_timeout` - Task killed after timeout
- `task_error` - Task failed with exception
- `zombie_cleared` - Stale lock was cleaned up
The log file rotates when it exceeds 5MB (keeps one backup as `.json.1`).
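Because the log is NDJSON, it is easy to post-process with standard tools. A small illustrative summary over sample events (field names taken from the log examples above):

```python
import json
from io import StringIO

SAMPLE = """\
{"event":"task_started","task_id":8,"queue_name":"global","wait_time_seconds":0.0}
{"event":"task_completed","task_id":8,"queue_name":"global","exit_code":0,"duration_seconds":31.2}
"""

def summarize(lines):
    """Count completed tasks and total execution time from an NDJSON stream."""
    durations = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)  # one JSON object per line
        if event["event"] == "task_completed":
            durations.append(event["duration_seconds"])
    return {"completed": len(durations), "total_seconds": sum(durations)}

print(summarize(StringIO(SAMPLE)))
```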
### Task Output Logs
To reduce token usage, full command output is written to files instead of returned directly:
```
/tmp/agent-task-queue/output/
├── task_1.log # Formatted log with metadata and section markers
├── task_1.raw.log # Raw stdout+stderr only (for plugin streaming)
├── task_2.log
├── task_2.raw.log
└── ...
```
Each task produces two output files:
- **`task_<id>.log`** — Formatted log with headers (`COMMAND:`, `WORKING DIR:`), section markers (`--- STDOUT ---`, `--- STDERR ---`, `--- SUMMARY ---`), and exit code. Used by the IntelliJ plugin notifier and the "View Output" action.
- **`task_<id>.raw.log`** — Raw stdout+stderr only, no metadata. Used by the IntelliJ plugin for clean streaming output in tabs. Added in MCP server v0.4.0.
**On success**, the tool returns a single line:
```
SUCCESS exit=0 31.2s command=./gradlew build output=/tmp/agent-task-queue/output/task_8.log
```
**On failure**, the last 50 lines of output are included:
```
FAILED exit=1 12.5s command=./gradlew build output=/tmp/agent-task-queue/output/task_9.log
[error output here]
```
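The one-line result format can be parsed mechanically. A hypothetical parser for the `command=`-bearing format shown above (it assumes the command itself contains no ` output=` substring):

```python
import re

RESULT_RE = re.compile(
    r"^(?P<status>SUCCESS|FAILED) exit=(?P<exit>-?\d+) "
    r"(?P<secs>[\d.]+)s command=(?P<command>.+) output=(?P<output>\S+)$"
)

def parse_result(line: str) -> dict:
    m = RESULT_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized result line: {line!r}")
    d = m.groupdict()
    d["exit"] = int(d["exit"])
    d["secs"] = float(d["secs"])
    return d

info = parse_result(
    "SUCCESS exit=0 31.2s command=./gradlew build "
    "output=/tmp/agent-task-queue/output/task_8.log"
)
```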
**Automatic cleanup**: Old files are deleted when count exceeds 50 tasks (configurable via `--max-output-files`).
**Manual cleanup**: Use the `clear_task_logs` tool to delete all output files.
## CLI Tool
The `tq` command lets you run commands through the queue and inspect queue status.
### Install CLI
```bash
uv tool install agent-task-queue
```
This installs both the MCP server and the `tq` CLI persistently.
### Running Commands
Run commands through the same queue that agents use:
```bash
tq ./gradlew assembleDebug # Run a build through the queue
tq npm run build # Any command works
tq -q android ./gradlew test # Use a specific queue
tq -t 600 npm test # Custom timeout (seconds)
tq -C /path/to/project make # Set working directory
```
This prevents resource contention between you and AI agents: when you run a build via `tq`, any agent-initiated builds will wait in the same queue.
### Inspecting the Queue
```bash
tq list # Show current queue
tq logs # Show recent activity
tq logs -n 50 # Show last 50 entries
tq clear # Clear stuck tasks
tq --data-dir PATH # Use custom data directory
```
The `tq` CLI also respects the `TASK_QUEUE_DATA_DIR` environment variable.
> **Note:** Without installing, you can run one-off commands with:
> ```bash
> uvx --from agent-task-queue tq list
> ```
## Troubleshooting
### Tasks stuck in queue
```bash
tq list # Check queue status
tq clear # Clear all tasks
```
### "Database is locked" errors
```bash
ps aux | grep task_queue # Check for zombie processes
rm -rf /tmp/agent-task-queue/ # Delete and restart
```
### Server not connecting
1. Ensure `uvx` is in your PATH (install [uv](https://github.com/astral-sh/uv) if needed)
2. Test manually: `uvx agent-task-queue@latest`
## Development
For contributors:
```bash
git clone https://github.com/block/agent-task-queue.git
cd agent-task-queue
uv sync # Install dependencies
uv run pytest -v # Run tests
uv run python task_queue.py # Run server locally
```
## Platform Support
- macOS
- Linux
## Why MCP Instead of a CLI Tool?
The first attempt at solving this problem was a file-based queue CLI that wrapped commands:
```bash
queue-cli ./gradlew build
```
**The fatal flaw:** AI tools have built-in shell timeouts (30s-120s). If a job waited in queue longer than the timeout, the agent gave up—even though the job would eventually run.
```mermaid
flowchart LR
subgraph cli [CLI Approach]
A1[Agent] --> B1[Shell]
B1 --> C1[CLI]
C1 --> D1[Queue]
B1 -.-> |"⏱️ TIMEOUT!"| A1
end
subgraph mcp [MCP Approach]
A2[Agent] --> |MCP Protocol| B2[Server]
B2 --> C2[Queue]
B2 -.-> |"✓ blocks until complete"| A2
end
```
**Why MCP solves this:**
- The MCP server keeps the connection alive indefinitely
- The agent's tool call blocks until the task completes
- No timeout configuration needed—it "just works"
- The server manages the queue; the agent just waits
| Aspect | CLI Wrapper | Agent Task Queue |
|--------|-------------|----------------|
| Timeout handling | External workarounds | Solved by design |
| Queue storage | Filesystem | SQLite (WAL mode) |
| Integration | Wrap every command | Automatic tool selection |
| Agent compatibility | Varies by tool | Universal |
## License
Apache 2.0
| text/markdown | null | Matt McKenna <mmckenna@block.xyz>, Block <opensource@block.xyz> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastmcp>=2.14.4",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:10:15.169709 | agent_task_queue-0.4.0.tar.gz | 103,245 | e5/99/f6b2e4c8de4809fce64e28655589612227f26f7300501b2cbe99e4a3dc95/agent_task_queue-0.4.0.tar.gz | source | sdist | null | false | 925b9aa36b59cda68c6e128a22d29331 | 198da9faeaf5d95fb4cdd3bc505cad76690e88685e54c21b3da0d9bd0c8e485e | e599f6b2e4c8de4809fce64e28655589612227f26f7300501b2cbe99e4a3dc95 | Apache-2.0 | [
"LICENSE"
] | 273 |
2.4 | inspect-harbor | 0.4.3 | Inspect AI interface to Harbor tasks | # Inspect Harbor
This package provides an interface to run [Harbor](https://harborframework.com/) tasks using [Inspect AI](https://inspect.aisi.org.uk/).
## Installation
Install from PyPI:
```bash
pip install inspect-harbor
```
Or with uv:
```bash
uv add inspect-harbor
```
For development installation, see the [Development](#development) section.
## Prerequisites
Before running Harbor tasks, ensure you have:
- **Python 3.12 or higher** - Required by inspect_harbor
- **Docker installed and running** - Required when using the Docker sandbox (the default)
- **Model API keys** - Set appropriate environment variables (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
## Quick Start
The fastest way to get started is to run a dataset from the [Harbor registry](https://harborframework.com/registry).
### Evaluate with a Model
**CLI:**
```bash
# Run hello-world dataset
inspect eval inspect_harbor/hello_world \
--model openai/gpt-5-mini
# Run terminal-bench-sample dataset
inspect eval inspect_harbor/terminal_bench_sample \
--model openai/gpt-5
```
**Python API:**
```python
from inspect_ai import eval
from inspect_harbor import hello_world, terminal_bench_sample
# Run hello-world
eval(hello_world(), model="openai/gpt-5-mini")
# Run terminal-bench-sample
eval(terminal_bench_sample(), model="openai/gpt-5")
```
**What this does:**
- Loads the dataset from the [Harbor registry](https://harborframework.com/registry)
- Downloads and caches all tasks in the dataset
- Solves the tasks with the [default ReAct agent](#default-agent-scaffold) scaffold
- Executes in a [Docker sandbox environment](https://inspect.aisi.org.uk/sandboxing.html#sec-docker-configuration)
- Stores results in `./logs`
## Available Datasets
Inspect Harbor provides task functions for each dataset in the [Harbor registry](https://harborframework.com/registry). You can import and use them directly:
```python
from inspect_harbor import (
terminal_bench,
swebenchpro,
swe_lancer_diamond,
swebench_verified,
# ... and many more
)
```
**For a complete list of available datasets and versions (including swebenchpro, terminal-bench-pro, replicationbench, compilebench, and 40+ more), see [`REGISTRY.md`](REGISTRY.md).**
### Dataset Versioning
Each dataset has both **unversioned** and **versioned** task functions:
- **Unversioned functions** (e.g., `terminal_bench()`) automatically use the latest version available in the registry
- **Versioned functions** (e.g., `terminal_bench_2_0()`) pin to a specific version for reproducibility
**Example:**
```python
from inspect_harbor import terminal_bench, terminal_bench_2_0
# Uses latest version (currently 2.0)
eval(terminal_bench(), model="openai/gpt-5-mini")
# Pins to version 2.0 explicitly
eval(terminal_bench_2_0(), model="openai/gpt-5-mini")
```
## Agents and Solvers
[Solvers](https://inspect.aisi.org.uk/solvers.html) are the execution components in Inspect AI. They can run [agent scaffolds](https://inspect.aisi.org.uk/agents.html) (like [ReAct](https://inspect.aisi.org.uk/react-agent.html)), execute solution scripts (like the Oracle solver), perform prompt engineering, and more. Both solvers and agents can be used to solve Harbor tasks.
### Default Agent Scaffold
When no agent or solver is specified, Inspect Harbor provides a default agent scaffold for your model:
- **Agent Type**: [ReAct agent](https://inspect.aisi.org.uk/react-agent.html)
- **Tools**: [`bash(timeout=300)`](https://inspect.aisi.org.uk/tools-standard.html#sec-bash-session), [`python(timeout=300)`](https://inspect.aisi.org.uk/tools-standard.html#sec-bash-and-python), [`update_plan()`](https://inspect.aisi.org.uk/tools-standard.html#sec-update-plan)
- **Compaction**: [`CompactionEdit()`](https://inspect.aisi.org.uk/compaction.html) for context window management
This default configuration is suitable for most Harbor tasks that require command execution and file manipulation.
### Using Custom Agents
You can provide your own agent or solver implementation using the `--solver` flag:
**Using a custom agent:**
```bash
inspect eval inspect_harbor/terminal_bench \
--solver path/to/custom/agent.py@custom_agent \
--model openai/gpt-5
```
**Using Inspect SWE agent framework:**
First install the required package:
```bash
pip install inspect-swe
```
**CLI:**
```bash
inspect eval inspect_harbor/terminal_bench_sample \
--solver inspect_swe/claude_code \
--model anthropic/claude-sonnet-4-5
```
**Python API:**
```python
from inspect_ai import eval
from inspect_harbor import terminal_bench_sample
from inspect_swe import claude_code
eval(
terminal_bench_sample(),
solver=claude_code(),
model="anthropic/claude-sonnet-4-5"
)
```
For more details:
- [Agents documentation](https://inspect.aisi.org.uk/agents.html)
- [Solvers documentation](https://inspect.aisi.org.uk/solvers.html)
- [Inspect SWE documentation](https://meridianlabs-ai.github.io/inspect_swe/)
## Task Parameters
Task functions (like `terminal_bench()`, `swe_lancer_diamond()`, etc.) accept the following parameters:
| Parameter | Description | Default | Python Example | CLI Example |
|-----------|-------------|---------|----------------|-------------|
| `dataset_task_names` | List of task names to include (supports glob patterns) | `None` | `["aime_60", "aime_61"]` | `'["aime_60"]'` |
| `dataset_exclude_task_names` | List of task names to exclude (supports glob patterns) | `None` | `["aime_60"]` | `'["aime_60"]'` |
| `n_tasks` | Maximum number of tasks to run | `None` | `10` | `10` |
| `overwrite_cache` | Force re-download and overwrite cached tasks | `False` | `True` | `true` |
| `sandbox_env_name` | Sandbox environment name | `"docker"` | `"modal"` | `"modal"` |
| `override_cpus` | Override the number of CPUs from `task.toml` | `None` | `4` | `4` |
| `override_memory_mb` | Override the memory (in MB) from `task.toml` | `None` | `16384` | `16384` |
| `override_gpus` | Override the number of GPUs from `task.toml` | `None` | `1` | `1` |
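The glob-based include/exclude filtering behind `dataset_task_names` and `dataset_exclude_task_names` can be illustrated with Python's `fnmatch`; the package's actual matching semantics may differ:

```python
from fnmatch import fnmatch

def filter_tasks(names, include=None, exclude=None):
    """Keep names matching any include pattern, then drop excluded ones."""
    def matches(name, patterns):
        return any(fnmatch(name, p) for p in patterns)
    return [
        n for n in names
        if (include is None or matches(n, include))
        and not (exclude and matches(n, exclude))
    ]

tasks = ["aime_60", "aime_61", "gpqa_1"]
print(filter_tasks(tasks, include=["aime_*"], exclude=["aime_61"]))
```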
### Example
Here's an example showing how to use multiple parameters together:
**CLI:**
```bash
inspect eval inspect_harbor/terminal_bench_sample \
-T n_tasks=5 \
-T overwrite_cache=true \
-T override_memory_mb=8192 \
--model anthropic/claude-sonnet-4-5
```
**Python API:**
```python
from inspect_ai import eval
from inspect_harbor import terminal_bench_sample
eval(
terminal_bench_sample(
n_tasks=5,
overwrite_cache=True,
override_memory_mb=8192,
),
model="anthropic/claude-sonnet-4-5"
)
```
This example:
- Limits to 5 tasks using `n_tasks`
- Forces fresh download with `overwrite_cache`
- Allocates 8GB of memory
## Understanding Harbor Tasks
### What is a Harbor Task?
[Harbor](https://harborframework.com/) is a framework for building, evaluating, and optimizing agents and models in containerized environments. A Harbor task is a self-contained evaluation unit that includes an instruction, execution environment, scoring criteria, and optionally a reference solution.
For comprehensive details about Harbor tasks, see the [Harbor documentation](https://harborframework.com/docs/tasks).
### Harbor Task File Structure
A typical Harbor task directory contains the following components:
```
my_task/
├── instruction.md # Task instructions/prompt shown to the agent
├── task.toml # Metadata, timeouts, resource specs (CPU/memory/GPU), env vars
├── environment/ # Environment setup - Dockerfile or docker-compose.yaml
│ └── Dockerfile # Docker environment spec (varies by sandbox provider)
├── solution/ # (Optional) Reference solution for sanity checking
│ ├── solve.sh # Executable solution script used by Oracle solver
│ └── ... # Supporting solution files and dependencies
└── tests/ # Verification and scoring
├── test.sh # Test script executed by verifier
└── ... # Outputs reward.txt or reward.json to /logs/verifier/
```
### Harbor to Inspect Mapping
Inspect Harbor bridges Harbor tasks to the Inspect AI evaluation framework using the following mappings:
| Harbor Concept | Inspect Concept | Description |
|----------------|-----------------|-------------|
| **Harbor Task** | [`Sample`](https://inspect.aisi.org.uk/datasets.html#dataset-samples) | A single evaluation instance with instructions and environment |
| **Harbor Dataset** | [`Task`](https://inspect.aisi.org.uk/tasks.html) | A collection of related evaluation instances |
| **instruction.md** | [`Sample.input`](https://inspect.aisi.org.uk/datasets.html#dataset-samples) | The prompt/instructions given to the agent |
| **environment/** | [`SandboxEnvironmentSpec`](https://inspect.aisi.org.uk/sandboxing.html#sandbox-environments) | Docker/environment configuration for isolated execution |
| **tests/test.sh** | [`Scorer`](https://inspect.aisi.org.uk/scorers.html) ([`inspect_harbor/harbor_scorer`](src/inspect_harbor/harbor/_scorer.py)) | Test script executed by the scorer to produce reward/metrics |
| **solution/solve.sh** | [`Solver`](https://inspect.aisi.org.uk/solvers.html) ([`inspect_harbor/oracle`](src/inspect_harbor/harbor/_solver.py)) | Reference solution script executed by the Oracle solver for sanity checking |
| **task.toml[metadata]** | [`Sample.metadata`](https://inspect.aisi.org.uk/datasets.html#dataset-samples) | Task metadata: author, difficulty, category, tags |
| **task.toml[verifier]** | Scorer timeout/env vars | Timeout and environment configuration for scorer execution |
| **task.toml[agent]** | Agent solver env vars | Environment variables for agent execution (the agent's `timeout_sec` is ignored) |
| **task.toml[solution]** | Oracle solver env vars | Environment variables to set when running the solution script |
| **task.toml[environment]** | [`SandboxEnvironmentSpec.config`](https://inspect.aisi.org.uk/sandboxing.html#sandbox-environments) | Resource specifications (CPU, memory, storage, GPU, internet). Overwrites resource limits in `environment/docker-compose.yaml` |
### LLM Judges in Verification
Some Harbor tasks use LLM judges for verification (e.g., evaluating open-ended responses or code quality). These tasks specify the model in their `task.toml`:
```toml
[verifier.env]
MODEL_NAME = "claude-haiku-4-5"
ANTHROPIC_API_KEY = "${ANTHROPIC_API_KEY}"
```
The verifier script (`tests/test.sh`) uses these environment variables to call the LLM. Make sure to set the appropriate API key (e.g., `ANTHROPIC_API_KEY`) when running tasks with LLM judges.
## Advanced
### Oracle Solver
The Oracle solver is useful for verifying that a dataset is correctly configured and solvable. It executes the task's reference solution (`solution/solve.sh` script) instead of using a model.
**CLI:**
```bash
inspect eval inspect_harbor/hello_world \
--solver inspect_harbor/oracle
```
**Python API:**
```python
from inspect_ai import eval
from inspect_harbor import hello_world, oracle
eval(hello_world(), solver=oracle())
```
### Generic Harbor Interface
For advanced use cases, you can use the generic `harbor()` interface directly. This provides access to all task loading options including custom registries, git repositories, and local paths.
#### Harbor Interface Parameters
The `harbor()` function accepts all parameters from the [Task Parameters](#task-parameters) table plus additional parameters for advanced task loading:
| Parameter | Description | Default | Python Example | CLI Example |
|-----------|-------------|---------|----------------|-------------|
| `path` | Local path to task/dataset directory, or task identifier for git tasks | `None` | `"/path/to/local_dataset"` | `"/path/to/local_dataset"` |
| `task_git_url` | Git repository URL for downloading tasks | `None` | `"https://github.com/laude-institute/harbor-datasets.git"` | `"https://github.com/..."` |
| `task_git_commit_id` | Git commit ID to pin task version | `None` | `"414014c23ce4d32128073d12b057252c918cccf4"` | `"414014c..."` |
| `registry_url` | Custom registry URL | `None` (uses Harbor registry) | `"https://github.com/myorg/registry.json"` | `"https://..."` |
| `registry_path` | Path to local registry | `None` | `"/path/to/local/registry.json"` | `"/path/to/local/registry.json"` |
| `dataset_name_version` | Dataset name and optional version (format: `name@version`). Omitted versions resolve to: `"head"` > highest semver > lexically last. | `None` | `"aime"` or `"aime@1.0"` | `"aime@1.0"` |
| `disable_verification` | Skip task verification checks | `False` | `True` | `true` |
**Note:** These are task-specific parameters passed with `-T`. For additional `inspect eval` command-line flags (like `--model`, `--message-limit`, `--epochs`, `--fail-on-error`, `--log-dir`, `--log-level`, `--max-tasks`, etc.), see the [Inspect eval CLI reference](https://inspect.aisi.org.uk/reference/inspect_eval.html) or [Python API reference](https://inspect.aisi.org.uk/reference/inspect_ai.html#eval).
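The version-resolution precedence described for `dataset_name_version` ("head", then highest semantic version, then lexically last) can be sketched as follows (illustrative only, not the package's implementation):

```python
def resolve_version(available: list[str]) -> str:
    """Pick a version when none is specified: "head" > highest semver > lexically last."""
    if "head" in available:
        return "head"

    def semver_key(v):
        try:
            return tuple(int(p) for p in v.split("."))
        except ValueError:
            return None  # not a plain numeric version

    semvers = [v for v in available if semver_key(v) is not None]
    if semvers:
        return max(semvers, key=semver_key)
    return max(available)  # fall back to the lexically last tag

print(resolve_version(["1.0", "2.0", "1.5"]))  # "2.0"
```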
#### Parameter Combinations
There are four primary patterns for loading Harbor tasks:
| Pattern | Required Parameters | Optional Parameters |
|---------|---------------------|---------------------|
| **Registry Dataset** | `dataset_name_version` | `registry_url` or `registry_path`<br>`dataset_task_names`<br>`dataset_exclude_task_names`<br>`n_tasks`<br>`overwrite_cache` |
| **Git Task** | `path`<br>`task_git_url` | `task_git_commit_id`<br>`overwrite_cache` |
| **Local Task** | `path` | `disable_verification` |
| **Local Dataset** | `path` | `dataset_task_names`<br>`dataset_exclude_task_names`<br>`n_tasks`<br>`disable_verification` |
#### Custom Registries
You can use custom registries for private or organization-specific datasets:
**Remote registry:**
```bash
inspect eval inspect_harbor/harbor \
-T dataset_name_version="my_dataset@1.0" \
-T registry_url="https://github.com/myorg/registry.json" \
--model openai/gpt-5-mini
```
**Local registry:**
```bash
inspect eval inspect_harbor/harbor \
-T dataset_name_version="my_dataset@1.0" \
-T registry_path="/path/to/local/registry.json" \
--model openai/gpt-5-mini
```
#### Loading from Git Repositories
You can load tasks directly from git repositories:
```bash
inspect eval inspect_harbor/harbor \
-T path="datasets/aime/aime_6" \
-T task_git_url="https://github.com/laude-institute/harbor-datasets.git" \
-T task_git_commit_id="414014c23ce4d32128073d12b057252c918cccf4" \
--model openai/gpt-5-mini
```
#### Loading from Local Paths
You can run tasks from your local filesystem:
```bash
inspect eval inspect_harbor/harbor \
-T path="/path/to/task_or_dataset/directory" \
--model openai/gpt-5
```
### Cache Management
Downloaded tasks are cached locally in `~/.harbor/cache/`. To force a fresh download:
```bash
inspect eval inspect_harbor/aime_1_0 \
-T overwrite_cache=true \
--model openai/gpt-5
```
To manually clear the entire cache:
```bash
rm -rf ~/.harbor/cache/
```
## Development
Clone the repository and install development dependencies:
```bash
git clone https://github.com/meridianlabs-ai/inspect_harbor.git
cd inspect_harbor
make install # Installs dependencies and sets up pre-commit hooks
```
Run tests and checks:
```bash
make check # Run linting (ruff check + format) and type checking (pyright)
make test # Run tests
make cov # Run tests with coverage report
```
Clean up build artifacts:
```bash
make clean # Remove cache and build artifacts
```
## Credits
This work is based on contributions by [@iphan](https://github.com/iphan) and [@anthonyduong9](https://github.com/anthonyduong9) from the `inspect_evals` repository:
- [@iphan](https://github.com/iphan)'s [Terminal Bench implementation](https://github.com/UKGovernmentBEIS/inspect_evals/pull/791)
- [@anthonyduong9](https://github.com/anthonyduong9)'s [Harbor task implementation](https://github.com/UKGovernmentBEIS/inspect_evals/pull/945)
| text/markdown | Meridian Labs | null | null | null | MIT License | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"harbor>=0.1.44",
"inspect-ai>=0.3.176",
"pyyaml"
] | [] | [] | [] | [
"Source Code, https://github.com/meridianlabs-ai/inspect_harbor",
"Issue Tracker, https://github.com/meridianlabs-ai/inspect_harbor/issues",
"Documentation, https://meridianlabs-ai.github.io/inspect_harbor/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:09:22.660569 | inspect_harbor-0.4.3.tar.gz | 21,155 | 1e/74/6000c74417199ec4a75cdb59c7cc4c6f3c56ec56525a7fd91e0a3400cb8e/inspect_harbor-0.4.3.tar.gz | source | sdist | null | false | 161aaf6eb5f00a7e3aec22477561e270 | 7c27267ad4786bbf42e86d65f651611cc123271aa749d9e5be5ae3093a664114 | 1e746000c74417199ec4a75cdb59c7cc4c6f3c56ec56525a7fd91e0a3400cb8e | null | [
"LICENSE"
] | 239 |
2.4 | Pytanis | 0.10 | Utilities for the program organization of conferences using Pretalx | <div align="center">
<img src="https://raw.githubusercontent.com/pioneershub/pytanis/main/docs/assets/images/logo.svg" alt="Pytanis logo" width="500" role="img">
</div>
Pytanis includes a [Pretalx] client and all the tooling you need for conferences using [Pretalx], from handling the initial call for papers to creating the final program.
<br/>
| | |
|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| CI/CD | [](https://github.com/pioneershub/pytanis/actions/workflows/run-tests.yml) [](https://coveralls.io/github/PioneersHub/pytanis) [](https://github.com/pioneershub/pytanis/actions/workflows/publish-pkg.yml) [](https://github.com/pioneershub/pytanis/actions/workflows/build-rel-docs.yml) |
| Package | [](https://pypi.org/project/pytanis/) [](https://pypistats.org/packages/pytanis) [](https://pypi.org/project/pytanis/) |
| Details | [](https://github.com/pypa/hatch) [](https://github.com/charliermarsh/ruff) [](https://github.com/python/mypy) [](https://spdx.org/licenses/) [](https://github.com/sponsors/pioneershub) |
**Trivia**: The name *Pytanis* is a reference to [Prytanis] using the typical *py* prefix of [Python] tools. [Prytanis]
was the name given to the leading members of the government of a city (polis) in ancient Greece. Offices that used this
title usually had responsibility for presiding over councils of some kind, which met in the [Prytaneion]. Romani ite domum!
## Features
- [x] simple configuration management with a config folder in your home directory, just like many other tools do
- [x] easily access [Google Sheets], potentially filled by some [Google Forms], and download sheets as data frames
- [x] easy to use [Pretalx] client that returns proper Python objects thanks to the power of [pydantic]
- [x] simple e-mail clients for batch mails, e.g. to your reviewers, via [Mailgun] and [HelpDesk]
- [x] awesome [documentation] with best practices for the program committee of any community-based conference
- [x] tools to assign proposals to reviewers based on constraints like preferences
- [x] tools to support the final selection process of proposals
- [x] tools to support the creation of the final program schedule
## Getting started
To install Pytanis, simply run:
```commandline
pip install pytanis
```
or to install all recommended additional dependencies:
```commandline
pip install 'pytanis[all]'
```
Then create a configuration file and directory in your user's home directory. For Linux/MacOS/Unix use
`~/.pytanis/config.toml` and for Windows `$HOME\.pytanis\config.toml`, where `$HOME` is e.g. `C:\Users\yourusername\`.
Use your favourite editor to open `config.toml` within the `.pytanis` directory and add the following content:
```toml
[Pretalx]
api_token = "932ndsf9uk32nf9sdkn3454532nj32jn"
[Google]
client_secret_json = "client_secret.json"
token_json = "token.json"
service_user_authentication = false
[HelpDesk]
account = "934jcjkdf-39df-9df-93kf-934jfhuuij39fd"
entity_id = "email@host.com"
token = "dal:Sx4id934C3Y-X934jldjdfjk"
[Mailgun]
token = "gguzdgshbdhjsb87239njsa"
from_address = "PyCon DE & PyData Program Committee <program25@mg.pycon.de>"
reply_to = "program25@pycon.de"
```
where you need to replace the dummy values in the sections `[Pretalx]` and `[HelpDesk]` accordingly. Note that `service_user_authentication` is not required to be set if authentication via a service user is not necessary (see [GSpread using Service Account] for more details).
### Retrieving the Credentials and Tokens
- **Google**:
- For end users: Follow the [Python Quickstart for the Google API] to generate and download the file `client_secret.json`.
Move it to the `~/.pytanis` folder as `client_secret.json`. The file `token.json` will be automatically generated
later. Note that `config.toml` references those two files relative to its own location.
- For any automation project: Follow [GSpread using Service Account] to generate and download the file `client_secret.json`.
Move it to the `~/.pytanis` folder as `client_secret.json`. Also make sure to set `service_user_authentication = true` in your `~/.pytanis/config.toml`.
- **Pretalx**: The API token can be found in the [Pretalx user settings].
- **HelpDesk**: Login to the [LiveChat Developer Console] then go to <kbd>Tools</kbd> » <kbd>Personal Access Tokens</kbd>.
Choose <kbd>Create new token +</kbd>, enter the name `Pytanis`, select all scopes and confirm. In the following screen
copy the `Account ID`, `Entity ID` and `Token` and paste them into `config.toml`.
If there is any trouble with LiveChat, contact a HelpDesk admin. Also note that the `Account ID` from your token is
the `Agent ID` needed when you create a ticket. You can find the `Team ID` in [HelpDesk] under <kbd>Agents</kbd> »
<kbd>Name of your agent</kbd>; it is the final part of the URL shown there.
**When setting up your agent the first time**,
you also need to go to [LiveChat] then log in with your Helpdesk team credentials and click <kbd>Request</kbd> to get an invitation.
An admin of [LiveChat] needs to confirm this and add you as role `admin`. Then, check [HelpDesk] to receive the invitation
and accept.
## Development
This section is only relevant if you want to contribute to Pytanis itself. Your help is highly appreciated! There are two options for local development.
While both options are valid, the Devcontainer setup is the more convenient, as all dependencies come preconfigured.
### Devcontainer Setup
After having cloned this repository:
1. Make sure to have a local installation of [Docker] and [VS Code] running.
2. Open [VS Code] and make sure to have the [Dev Containers Extension] from Microsoft installed.
3. Open the cloned project in [VS Code] and confirm the prompt in the bottom right corner to reopen the project within the Devcontainer.
If any dependencies are missing, check the `devcontainer.json` within the `.devcontainer` folder. Otherwise, the right Python environment with [pipx], [hatch], [pre-commit] and the initialization steps for the Hatch environments are already included.
Using the `pytanis` library requires some credentials and tokens (see the "Getting started" section). With the Devcontainer setup, the `config.toml` is already created; just navigate to `~/.pytanis/config.toml` and update the file with the corresponding tokens.
### Conventional Setup
After having cloned this repository:
1. install [hatch] globally, e.g. `pipx install hatch`,
2. install [pre-commit] globally, e.g. `pipx install pre-commit`,
3. \[only once\] run `hatch config set dirs.env.virtual .direnv` to let [VS Code] find your virtual environments.
And then you are already set up to start hacking. Use `hatch run` to do everything you would normally do in a virtual
environment, e.g. `hatch run jupyter lab` to start [JupyterLab] in the default environment, `hatch run cov` for unit tests
and coverage (like [tox]) or `hatch run docs:serve` to build & serve the documentation. For code hygiene, execute `hatch run lint:all`
in order to run [ruff] and [mypy] or `hatch run lint:fix` to automatically fix formatting issues.
Check out the `[tool.hatch.envs]` sections in [pyproject.toml](pyproject.toml) to learn about other commands.
If you really must enter a virtual environment, use `hatch shell` to enter the default environment.
## Testing
### Integration Tests
Pytanis includes comprehensive integration tests to validate compatibility with the Pretalx API. These tests ensure all data models work correctly with live API responses.
To run integration tests interactively:
```shell
# Using Hatch (recommended for development)
hatch run integration
# Or directly
python scripts/run_pretalx_integration_tests.py
```
This will prompt you for:
- Pretalx API token (required)
- Event slug to test against
- API version to use
For automated testing:
```shell
# Using Hatch with arguments
hatch run integration --token YOUR_TOKEN --event pyconde-pydata-2025
# Using environment variables for quick testing
export PRETALX_API_TOKEN="your-token"
export PRETALX_TEST_EVENT="pyconde-pydata-2025"
hatch run integration-quick
# Direct pytest for more control
hatch run test-endpoints
# Without Hatch
python scripts/run_pretalx_integration_tests.py --token YOUR_TOKEN --event pyconde-pydata-2025 --api-version v2
```
See [tests/pretalx/README_INTEGRATION.md](tests/pretalx/README_INTEGRATION.md) for more details.
## Documentation
The [documentation] is made with [Material for MkDocs] and is hosted by [GitHub Pages]. Your help to extend the
documentation, especially in the context of using Pytanis for community conferences like [PyConDE], [EuroPython], etc.
is highly appreciated.
## License & Credits
[Pytanis] is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
To start this project off a lot of inspiration and code was taken from [Alexander Hendorf] and [Matthias Hofmann].
[Pytanis]: https://pioneershub.github.io/pytanis/
[Python]: https://www.python.org/
[Pretalx]: https://pretalx.com/
[hatch]: https://hatch.pypa.io/
[pre-commit]: https://pre-commit.com/
[Prytanis]: https://en.wikipedia.org/wiki/Prytaneis
[Prytaneion]: https://en.wikipedia.org/wiki/Prytaneion
[Python Quickstart for the Google API]: https://developers.google.com/sheets/api/quickstart/python
[GSpread using Service Account]: https://docs.gspread.org/en/v5.12.4/oauth2.html#for-bots-using-service-account
[Pretalx user settings]: https://pretalx.com/orga/me
[documentation]: https://pioneershub.github.io/pytanis/
[Alexander Hendorf]: https://github.com/alanderex
[Matthias Hofmann]: https://github.com/mj-hofmann
[Google Forms]: https://www.google.com/forms/about/
[Google Sheets]: https://www.google.com/sheets/about/
[pydantic]: https://docs.pydantic.dev/
[HelpDesk]: https://www.helpdesk.com/
[Material for MkDocs]: https://github.com/squidfunk/mkdocs-material
[GitHub Pages]: https://docs.github.com/en/pages
[PyConDE]: https://pycon.de/
[EuroPython]: https://europython.eu/
[LiveChat Developer Console]: https://platform.text.com/console/
[JupyterLab]: https://jupyter.org/
[tox]: https://tox.wiki/
[mypy]: https://mypy-lang.org/
[ruff]: https://github.com/astral-sh/ruff
[VS Code]: https://code.visualstudio.com/
[LiveChat]: https://www.livechat.com/
[Docker]: https://www.docker.com/
[Dev Containers Extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers
[Mailgun]: https://www.mailgun.com/
[pipx]: https://pipx.pypa.io/
| text/markdown | null | Florian Wilhelm <Florian.Wilhelm@gmail.com> | null | null | null | cfp, conference, google sheet, gsheet, helpdesk, pretalx | [
"Development Status :: 4 - Beta",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic ... | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx",
"httpx-auth",
"openpyxl>=3.1.5",
"pandas-stubs<3,>=2",
"pandas<3,>=2",
"pydantic>=2.5",
"structlog",
"tomli",
"tqdm",
"gspread-dataframe; extra == \"all\"",
"gspread-formatting; extra == \"all\"",
"gspread<6.0; extra == \"all\"",
"highspy; extra == \"all\"",
"ipywidgets; extra == ... | [] | [] | [] | [
"Documentation, https://pioneershub.github.io/pytanis/",
"Sponsor, https://github.com/sponsors/PioneersHub",
"Tracker, https://github.com/PioneersHub/pytanis/issues",
"Source, https://github.com/PioneersHub/pytanis"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:09:00.630129 | pytanis-0.10.tar.gz | 51,597 | b0/20/143ea2a0bb5e4643eaa86b68b7297c6f078bf2a4b4a2cbe66d3daac981b6/pytanis-0.10.tar.gz | source | sdist | null | false | 41bd7ebc96e54a65805710ece67fb872 | 1d03edd5c6b6ad671f3370ba2a60860faf446287bac158885e562e46f9259913 | b020143ea2a0bb5e4643eaa86b68b7297c6f078bf2a4b4a2cbe66d3daac981b6 | MIT | [
"AUTHORS.md",
"LICENSE.txt"
] | 0 |
pytest-ty
=========
[![CI](https://github.com/boidolr/pytest-ty/actions/workflows/main.yaml/badge.svg?branch=main)](https://github.com/boidolr/pytest-ty/actions/workflows/main.yaml "See Build Status on GitHub Actions")
[![PyPI](https://img.shields.io/pypi/v/pytest-ty.svg)](https://pypi.org/project/pytest-ty/ "See PyPI page")
![Python versions](https://img.shields.io/pypi/pyversions/pytest-ty.svg)
A [`pytest`](https://github.com/pytest-dev/pytest) plugin to run the [`ty`](https://github.com/astral-sh/ty) type checker.
Configuration
------------
Configure `ty` in `pyproject.toml` or `ty.toml`,
see the [`ty` documentation](https://docs.astral.sh/ty/).
Installation
------------
You can install `pytest-ty` from [`PyPI`](https://pypi.org):
* `uv add --dev pytest-ty`
* `pip install pytest-ty`
Usage
-----
* Activate the plugin when running `pytest`: `pytest --ty`
* Activate via `pytest` configuration: `addopts = "--ty"`
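The configuration route can be sketched as a `pyproject.toml` fragment, using pytest's standard `[tool.pytest.ini_options]` table:

```toml
[tool.pytest.ini_options]
addopts = "--ty"
```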
License
-------
`pytest-ty` is licensed under the MIT license ([`LICENSE`](./LICENSE) or https://opensource.org/licenses/MIT).
| text/markdown | Raphael Boidol | Raphael Boidol <pytest-ty@boidol.dev> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Framework :: Pytest",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Prog... | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=7.0.0",
"ty"
] | [] | [] | [] | [
"Repository, https://github.com/boidolr/pytest-ty"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:07:10.046963 | pytest_ty-0.1.5.tar.gz | 3,300 | a2/cd/999728819bb0253ea984694f70c4c346e2334f9cddd13732de51fb1207e1/pytest_ty-0.1.5.tar.gz | source | sdist | null | false | 5c43e517c4d34eba0ef0b0d56c9cef30 | 0e307fd945c07364d20da760490cfed512fd486865c4584bad0d7329a515041d | a2cd999728819bb0253ea984694f70c4c346e2334f9cddd13732de51fb1207e1 | MIT | [
"LICENSE"
] | 767 |
2.4 | mayatk | 0.10.2 | A comprehensive toolkit for Autodesk Maya providing utilities for modeling, animation, rigging, and UI management. | # MAYATK (Maya Toolkit)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/mayatk/)
[](https://www.python.org/downloads/)
[](https://www.autodesk.com/products/maya/)
[](test/)
<!-- short_description_start -->
*mayatk is a collection of utility functions and helper classes for Autodesk Maya, providing convenience wrappers and common workflow patterns for Maya scripting.*
<!-- short_description_end -->
## Overview
mayatk provides a comprehensive set of production-ready utilities for Maya automation, organized into specialized modules for different aspects of 3D workflow development.
## Installation
```bash
pip install mayatk
```
**Requirements:**
- Python 3.11+
- Autodesk Maya 2025+
- PyMEL (included with Maya)
## Package Structure
### Core Modules
| Module | Description |
|--------|-------------|
| **core_utils** | Core Maya operations, decorators, scene management |
| **edit_utils** | Mesh editing, modeling, geometry operations |
| **node_utils** | Node operations, dependency graph, connections |
| **xform_utils** | Transform utilities, positioning, coordinates |
| **env_utils** | Environment management, scene hierarchy |
### Specialized Modules
| Module | Description |
|--------|-------------|
| **uv_utils** | UV mapping and texture coordinate tools |
| **rig_utils** | Rigging, constraints, character setup |
| **anim_utils** | Animation, keyframe management |
| **mat_utils** | Materials, shaders, texture management |
| **cam_utils** | Camera utilities and viewport management |
| **display_utils** | Display layers, visibility, viewport settings |
| **light_utils** | Lighting utilities and rendering tools |
| **nurbs_utils** | NURBS surfaces and curve operations |
| **ui_utils** | User interface components and utilities |
## License
MIT License - See [LICENSE](../LICENSE) file for details
## Links
- **PyPI:** https://pypi.org/project/mayatk/
- **Documentation:** [Full Documentation](index.md)
- **Issues:** https://github.com/m3trik/mayatk/issues
| text/markdown | null | Ryan Simpson <m3trik@outlook.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pythontk>=0.7.78",
"uitk>=1.0.89"
] | [] | [] | [] | [
"Homepage, https://github.com/m3trik/mayatk",
"Repository, https://github.com/m3trik/mayatk"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:06:58.044604 | mayatk-0.10.2-py3-none-any.whl | 789,765 | 3d/31/eef870a41ce5378853dd6f8d7f15d1e4a2f0ac3eaee892532247d20fc0df/mayatk-0.10.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 6ec3e6e2a2052495a1dd59b271fc4686 | 6843050adf62fbd0de629ecec1e659cdf17a5aaac2a9e2f70d9b33f6494768e5 | 3d31eef870a41ce5378853dd6f8d7f15d1e4a2f0ac3eaee892532247d20fc0df | null | [
"LICENSE"
] | 113 |
2.4 | corepy-ai | 0.2.4 | A unified, high-performance core runtime for data, computation, and AI workflows. | # Corepy
<h1 align="center">
<img src="assets/logo.svg" width="300" alt="Corepy Logo" style="background-color: white;">
</h1><br>
[](https://github.com/ai-foundation-software/corepy/actions/workflows/ci.yml)
[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/uv)
[](https://www.python.org/)
[](https://opensource.org/licenses/MIT)
> **High-Performance Tensor Runtime for Python.**
> *Rust-Powered Backend. Correctness First. Hardware Aware.*
## 📖 What is Corepy?
Corepy is a high-performance tensor library that bridges Python's ease of use with the raw speed and safety of **Rust** and **C++**.
Unlike untyped array libraries, Corepy is built on a **strict runtime** that ensures:
1. **Safety**: Rust ownership guarantees prevent common segmentation faults.
2. **Performance**: C++ kernels (AVX2, NEON) and **Metal** shaders for heavy number crunching.
3. **Observability**: Zero-overhead built-in profiler to visualize bottlenecks.
It is designed for developers building **AI foundations**, **scientific simulations**, and **high-performance systems**.
### Key Features
- **NumPy-Compatible API**: Familiar `cp.array`, `cp.zeros`, `cp.add` interface.
- **🚀 Metal GPU**: Native acceleration on macOS (Apple Silicon) with `device="metal"`.
- **📊 Profiler Export**: Visualise traces in Chrome/Perfetto with `cp.profiler.export_chrome_trace()`.
- **⚡ Hybrid Runtime**: Rust dispatcher + C++ kernels + Objective-C++ (Metal).
- **🛡️ Cross-Platform Build**: CMake-based build system that works on Linux, macOS, and Windows.
- **📈 Efficient Stats**: Compute multiple statistics in one pass with `cp.compute_stats`.
---
## 💻 Supported Platforms
| Platform | Architecture | Accelerators | Status |
| :--- | :--- | :--- | :--- |
| **Linux** | x86_64 | AVX2, OpenBLAS | ✅ Production |
| **macOS** | Apple Silicon | **Metal**, NEON | ✅ Beta (0.2.4+) |
| **Windows** | x86_64 | AVX2 | ✅ Experimental |
---
## 🛠️ Installation
### Preferred Method (uv)
We recommend `uv` for fast, correct cross-platform installation.
```bash
uv pip install corepy-ai
```
### Fallback (pip)
```bash
pip install corepy-ai
```
## 👨💻 Development
For detailed instructions on setting up a development environment, building from source, and running tests, please refer to **[DEVELOPMENT.md](DEVELOPMENT.md)**.
### Quick Build
```bash
git clone https://github.com/ai-foundation-software/corepy.git
cd corepy
make install
```
---
## ⚡ Quick Start
### 1. Metal Acceleration (macOS)
```python
import corepy as cp
# Automatically uses Metal if available on macOS
t = cp.array([1.0, 2.0, 3.0], device="metal")
result = t.sum()
print(f"Result (GPU): {result}")
```
### 2. Performance Profiling
Stop guessing where your code is slow. Corepy has a built-in profiler.
```python
import corepy as cp
# 1. Enable profiling
cp.enable_profiling()
# 2. Run your heavy workload
x = cp.ones(1_000_000)
y = x * 3.14159
result = y.mean()
# 3. Export to Chrome Tracing format
cp.profiler.export_chrome_trace("trace.json")
```
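For orientation, the Chrome Tracing format that `export_chrome_trace` targets is plain JSON; a minimal hand-written trace (independent of Corepy, with made-up event names) that loads in `chrome://tracing` or Perfetto looks like this:

```python
import json

# Complete ("X"-phase) events: name, start timestamp (us), duration (us),
# plus process/thread ids that determine the timeline rows.
trace = {
    "traceEvents": [
        {"name": "mul_kernel", "ph": "X", "ts": 0, "dur": 120,
         "pid": 1, "tid": 1, "cat": "kernel"},
        {"name": "mean_kernel", "ph": "X", "ts": 120, "dur": 45,
         "pid": 1, "tid": 1, "cat": "kernel"},
    ]
}

with open("trace.json", "w") as f:
    json.dump(trace, f)
```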
---
## 📚 Documentation
- **[Getting Started](docs/getting-started.md)**: First steps and installation details.
- **[Metal GPU Guide](docs/05_advanced/metal_gpu.md)**: Using Apple Silicon acceleration.
- **[Architecture](docs/architecture.md)**: Deep dive into the Rust/C++ hybrid runtime.
- **[Examples](examples/)**: Runnable scripts for common patterns.
- **[Contributing](docs/07_contributing/CONTRIBUTING.md)**: Build and test guide.
---
## 🤝 Stability & Roadmap
Corepy is currently **Alpha (v0.2.4)**.
- **v0.2.4**: Local CI Simulation, Metal Framework linking, Pinned Deps.
- **v0.3.0**: CUDA Support and Tiled Matmul Optimization.
- **v1.0**: Stable API promise.
See [Roadmap](docs/00_overview/roadmap.md) for details.
| text/markdown; charset=UTF-8; variant=GFM | null | Corepy Team <ai.foundation.software@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | https://github.com/yourusername/corepy | null | >=3.9 | [] | [] | [] | [
"typing-extensions>=4.6.0",
"pydantic==2.12.5",
"numpy==1.26.4; python_full_version < \"3.13\"",
"numpy>=2.1; python_full_version >= \"3.13\"",
"pybind11==3.0.1",
"sphinx>=7.0; extra == \"docs\"",
"sphinx-rtd-theme>=1.3.0; extra == \"docs\"",
"myst-parser>=2.0.0; extra == \"docs\""
] | [] | [] | [] | [
"Documentation, https://github.com/ai-foundation-software/corepy/docs/",
"Homepage, https://github.com/ai-foundation-software/corepy",
"Issues, https://github.com/ai-foundation-software/corepy/issues",
"Repository, https://github.com/ai-foundation-software/corepy.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:05:30.396237 | corepy_ai-0.2.4.tar.gz | 72,726 | b5/86/4c9983e80020916f1da570aee2f54ffa450448458cd7f1cc5699c07ac6ac/corepy_ai-0.2.4.tar.gz | source | sdist | null | false | 1bd00edcfd9ddc153e821b1304761a07 | ac7a99e92feaad867ec9c9cdbfbbd334d0fb16e63acdf34936cd2cdc4b1359e0 | b5864c9983e80020916f1da570aee2f54ffa450448458cd7f1cc5699c07ac6ac | null | [
"LICENSE"
] | 401 |
2.4 | agilerl | 2.5.0.dev1 | AgileRL is a deep reinforcement learning library focused on improving RL development through RLOps. | <p align="center">
<img src=https://user-images.githubusercontent.com/47857277/222710068-e09a4e3c-368c-458a-9e01-b68674806887.png height="120">
</p>
<p align="center"><b>Reinforcement learning streamlined.</b><br>Easier and faster reinforcement learning with RLOps. Visit our <a href="https://agilerl.com">website</a>. View <a href="https://docs.agilerl.com">documentation</a>.<br>Join the <a href="https://discord.gg/eB8HyTA2ux">Discord Server</a> for questions, help and collaboration.</p>
<div align="center">
[](https://opensource.org/licenses/Apache-2.0)
[](https://docs.agilerl.com/en/latest/?badge=latest)
[](https://github.com/AgileRL/AgileRL/actions/workflows/python-app.yml)
[](https://pypi.python.org/pypi/agilerl/)
[](https://discord.gg/eB8HyTA2ux)
[](https://arena.agilerl.com)
<br>
<h3><i>🚀 <b>Train super-fast for free on <a href="https://arena.agilerl.com">Arena</a>, the RLOps platform from AgileRL 🚀</b></i></h3>
</div>
<br>
AgileRL is a Deep Reinforcement Learning library focused on improving development by introducing RLOps - MLOps for reinforcement learning.
This library is initially focused on reducing the time taken for training models and hyperparameter optimization (HPO) by pioneering [evolutionary HPO techniques](https://docs.agilerl.com/en/latest/evo_hyperparam_opt/index.html) for reinforcement learning.<br>
Evolutionary HPO has been shown to drastically reduce overall training times by automatically converging on optimal hyperparameters, without requiring numerous training runs.<br>
We are constantly adding more algorithms and features. AgileRL already includes state-of-the-art evolvable [on-policy](https://docs.agilerl.com/en/latest/on_policy/index.html), [off-policy](https://docs.agilerl.com/en/latest/off_policy/index.html), [offline](https://docs.agilerl.com/en/latest/offline_training/index.html), [multi-agent](https://docs.agilerl.com/en/latest/multi_agent_training/index.html) and [contextual multi-armed bandit](https://docs.agilerl.com/en/latest/bandits/index.html) reinforcement learning algorithms with [distributed training](https://docs.agilerl.com/en/latest/distributed_training/index.html).
<p align="center">
<img src=https://user-images.githubusercontent.com/47857277/236407686-21363eb3-ffcf-419f-b019-0be4ddf1ed4a.gif width="100%" max-width="900">
</p>
<p align="center">AgileRL offers 10x faster hyperparameter optimization than SOTA.</p>
## Table of Contents
* [Get Started](#get-started)
* [Benchmarks](#benchmarks)
* [Tutorials](#tutorials)
* [Algorithms implemented](#evolvable-algorithms-more-coming-soon)
* [Train an agent](#train-an-agent-to-beat-a-gym-environment)
* [Citing AgileRL](#citing-agilerl)
## Get Started
To see the full AgileRL documentation, including tutorials, visit our [documentation site](https://docs.agilerl.com/). To ask questions and get help, collaborate, or discuss anything related to reinforcement learning, join the [AgileRL Discord Server](https://discord.gg/eB8HyTA2ux).
Install as a package with pip:
```bash
pip install agilerl
```
Or install in development mode:
```bash
git clone https://github.com/AgileRL/AgileRL.git && cd AgileRL
pip install -e .
```
If you wish to install all additional dependencies please specify `[all]` or if you want to install a specific family of dependencies specify that family directly. At present, we have just one family, `[llm]`, which contains the dependencies related to our LLM RFT algorithms (datasets, deepspeed, peft, transformers, vllm).
```bash
pip install "agilerl[all]"
```
Or in development mode:
```bash
pip install -e ".[all]"
```
To install the ``nightly`` version of AgileRL with the latest features, use:
```bash
pip install git+https://github.com/AgileRL/AgileRL.git@nightly
```
## Benchmarks
Reinforcement learning algorithms and libraries are usually benchmarked once the optimal hyperparameters for training are known, but it often takes hundreds or thousands of experiments to discover these. This is unrealistic and does not reflect the true, total time taken for training. What if we could remove the need to conduct all these prior experiments?
In the charts below, a single AgileRL run, which automatically tunes hyperparameters, is benchmarked against Optuna's multiple training runs traditionally required for hyperparameter optimization, demonstrating the real time savings possible. Global steps is the sum of every step taken by any agent in the environment, including across an entire population.
<p align="center">
<img src=https://user-images.githubusercontent.com/47857277/227481592-27a9688f-7c0a-4655-ab32-90d659a71c69.png min-width="100%" width="600">
</p>
<p align="center">AgileRL offers an order of magnitude speed up in hyperparameter optimization vs popular reinforcement learning training frameworks combined with Optuna. Remove the need for multiple training runs and save yourself hours.</p>
AgileRL also supports multi-agent reinforcement learning using the PettingZoo parallel API. The charts below highlight the performance of our MADDPG and MATD3 algorithms with evolutionary hyperparameter optimization (HPO), benchmarked against epymarl's MADDPG algorithm with grid-search HPO on the simple speaker listener and simple spread environments.
<p align="center">
<img src=https://github-production-user-asset-6210df.s3.amazonaws.com/118982716/264712154-4965ea5f-b777-423c-989b-e4db86eda3bd.png min-width="100%" width="700">
</p>
## Tutorials
We are constantly updating our tutorials to showcase the latest features of AgileRL and how users can leverage our evolutionary HPO to achieve 10x faster hyperparameter optimization. Please see the available tutorials below.
| Tutorial Type | Description | Tutorials |
|---------------|-------------|-----------|
| [Single-agent tasks](https://docs.agilerl.com/en/latest/tutorials/gymnasium/index.html) | Guides for training both on and off-policy agents to beat a variety of Gymnasium environments. | [PPO - Acrobot](https://docs.agilerl.com/en/latest/tutorials/gymnasium/agilerl_ppo_tutorial.html) <br> [TD3 - Lunar Lander](https://docs.agilerl.com/en/latest/tutorials/gymnasium/agilerl_td3_tutorial.html) <br> [Rainbow DQN - CartPole](https://docs.agilerl.com/en/latest/tutorials/gymnasium/agilerl_rainbow_dqn_tutorial.html) <br> [Recurrent PPO - Masked Pendulum](https://docs.agilerl.com/en/latest/tutorials/gymnasium/agilerl_recurrent_ppo_tutorial.html) |
| [Multi-agent tasks](https://docs.agilerl.com/en/latest/tutorials/pettingzoo/index.html) | Use of PettingZoo environments such as training DQN to play Connect Four with curriculum learning and self-play, and for multi-agent tasks in MPE environments. | [DQN - Connect Four](https://docs.agilerl.com/en/latest/tutorials/pettingzoo/dqn.html) <br> [MADDPG - Space Invaders](https://docs.agilerl.com/en/latest/tutorials/pettingzoo/maddpg.html) <br> [MATD3 - Speaker Listener](https://docs.agilerl.com/en/latest/tutorials/pettingzoo/matd3.html) |
| [Hierarchical curriculum learning](https://docs.agilerl.com/en/latest/tutorials/skills/index.html) | Shows how to teach agents Skills and combine them to achieve an end goal. | [PPO - Lunar Lander](https://docs.agilerl.com/en/latest/tutorials/skills/index.html) |
| [Contextual multi-arm bandits](https://docs.agilerl.com/en/latest/tutorials/bandits/index.html) | Learn to make the correct decision in environments that only have one timestep. | [NeuralUCB - Iris Dataset](https://docs.agilerl.com/en/latest/tutorials/bandits/agilerl_neural_ucb_tutorial.html) <br> [NeuralTS - PenDigits](https://docs.agilerl.com/en/latest/tutorials/bandits/agilerl_neural_ts_tutorial.html) |
| [Custom Modules & Networks](https://docs.agilerl.com/en/latest/tutorials/custom_networks/index.html) | Learn how to create custom evolvable modules and networks for RL algorithms. | [Dueling Distributional Q Network](https://docs.agilerl.com/en/latest/tutorials/custom_networks/agilerl_rainbow_tutorial.html) <br> [EvolvableSimBa](https://docs.agilerl.com/en/latest/tutorials/custom_networks/agilerl_simba_tutorial.html) |
| [LLM Finetuning](https://docs.agilerl.com/en/latest/tutorials/llm_finetuning/index.html) | Learn how to finetune an LLM using AgileRL. | [GRPO](https://docs.agilerl.com/en/latest/tutorials/llm_finetuning/index.html) |
## Evolvable algorithms (more coming soon!)
### Single-agent algorithms
| RL | Algorithm |
| ---------- | --------- |
| [On-Policy](https://docs.agilerl.com/en/latest/on_policy/index.html) | [Proximal Policy Optimization (PPO)](https://docs.agilerl.com/en/latest/api/algorithms/ppo.html) |
| [Off-Policy](https://docs.agilerl.com/en/latest/off_policy/index.html) | [Deep Q Learning (DQN)](https://docs.agilerl.com/en/latest/api/algorithms/dqn.html) <br> [Rainbow DQN](https://docs.agilerl.com/en/latest/api/algorithms/dqn_rainbow.html) <br> [Deep Deterministic Policy Gradient (DDPG)](https://docs.agilerl.com/en/latest/api/algorithms/ddpg.html) <br> [Twin Delayed Deep Deterministic Policy Gradient (TD3)](https://docs.agilerl.com/en/latest/api/algorithms/td3.html) |
| [Offline](https://docs.agilerl.com/en/latest/offline_training/index.html) | [Conservative Q-Learning (CQL)](https://docs.agilerl.com/en/latest/api/algorithms/cql.html) <br> [Implicit Language Q-Learning (ILQL)](https://docs.agilerl.com/en/latest/api/algorithms/ilql.html) |
### Multi-agent algorithms
| RL | Algorithm |
| ---------- | --------- |
| [Multi-agent](https://docs.agilerl.com/en/latest/multi_agent_training/index.html) | [Multi-Agent Deep Deterministic Policy Gradient (MADDPG)](https://docs.agilerl.com/en/latest/api/algorithms/maddpg.html) <br> [Multi-Agent Twin-Delayed Deep Deterministic Policy Gradient (MATD3)](https://docs.agilerl.com/en/latest/api/algorithms/matd3.html) <br> [Independent Proximal Policy Optimization (IPPO)](https://docs.agilerl.com/en/latest/api/algorithms/ippo.html)|
### Contextual multi-armed bandit algorithms
| RL | Algorithm |
| ---------- | --------- |
| [Bandits](https://docs.agilerl.com/en/latest/bandits/index.html) | [Neural Contextual Bandits with UCB-based Exploration (NeuralUCB)](https://docs.agilerl.com/en/latest/api/algorithms/neural_ucb.html) <br> [Neural Contextual Bandits with Thompson Sampling (NeuralTS)](https://docs.agilerl.com/en/latest/api/algorithms/neural_ts.html) |
### LLM Fine-tuning Algorithms
| RL | Algorithm |
| ---------- | --------- |
| [On-Policy](https://docs.agilerl.com/en/latest/llm_finetuning/index.html) | [Group Relative Policy Optimization (GRPO)](https://docs.agilerl.com/en/latest/api/algorithms/grpo.html) |
| [Off-Policy](https://docs.agilerl.com/en/latest/llm_finetuning/index.html) | [Direct Preference Optimization (DPO)](https://docs.agilerl.com/en/latest/api/algorithms/dpo.html) |
## Train an Agent to Beat a Gym Environment
Before starting training, there are some meta-hyperparameters and settings that must be set. These are defined in <code>INIT_HP</code> (general parameters), <code>MUTATION_PARAMS</code> (evolutionary probabilities), and <code>NET_CONFIG</code> (network architecture). For example:
<details>
<summary>Basic Hyperparameters</summary>
```python
INIT_HP = {
'ENV_NAME': 'LunarLander-v3', # Gym environment name
'ALGO': 'DQN', # Algorithm
'DOUBLE': True, # Use double Q-learning
'CHANNELS_LAST': False, # Swap image channels dimension from last to first [H, W, C] -> [C, H, W]
'BATCH_SIZE': 256, # Batch size
'LR': 1e-3, # Learning rate
'MAX_STEPS': 1_000_000, # Max no. steps
'TARGET_SCORE': 200., # Early training stop at avg score of last 100 episodes
'GAMMA': 0.99, # Discount factor
'MEMORY_SIZE': 10000, # Max memory buffer size
'LEARN_STEP': 1, # Learning frequency
'TAU': 1e-3, # For soft update of target parameters
'TOURN_SIZE': 2, # Tournament size
'ELITISM': True, # Elitism in tournament selection
'POP_SIZE': 6, # Population size
'EVO_STEPS': 10_000, # Evolution frequency
'EVAL_STEPS': None, # Evaluation steps
'EVAL_LOOP': 1, # Evaluation episodes
'LEARNING_DELAY': 1000, # Steps before starting learning
'WANDB': True, # Log with Weights and Biases
}
```
</details>
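The <code>TAU</code> hyperparameter controls soft (Polyak) updates of the target network, where each target weight moves a small step toward its online counterpart. A minimal sketch of that update rule, using plain Python lists in place of network tensors:

```python
def soft_update(online, target, tau=1e-3):
    """Polyak update: target <- tau * online + (1 - tau) * target."""
    return [tau * o + (1 - tau) * t for o, t in zip(online, target)]

# With tau=0.5 the target moves halfway toward the online weights.
print(soft_update([1.0, 2.0], [0.0, 0.0], tau=0.5))  # [0.5, 1.0]
```

With the default <code>tau=1e-3</code>, the target network trails the online network slowly, which stabilizes Q-learning targets.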
<details>
<summary>Mutation Hyperparameters</summary>
```python
MUTATION_PARAMS = {
# Relative probabilities
'NO_MUT': 0.4, # No mutation
'ARCH_MUT': 0.2, # Architecture mutation
'NEW_LAYER': 0.2, # New layer mutation
'PARAMS_MUT': 0.2, # Network parameters mutation
'ACT_MUT': 0, # Activation layer mutation
'RL_HP_MUT': 0.2, # Learning HP mutation
'MUT_SD': 0.1, # Mutation strength
'RAND_SEED': 1, # Random seed
}
```
</details>
<details>
<summary>Basic Network Configuration</summary>
```python
NET_CONFIG = {
    'latent_dim': 16,
    'encoder_config': {
        'hidden_size': [32]  # Observation encoder configuration
    },
    'head_config': {
        'hidden_size': [32]  # Network head configuration
    }
}
```
</details>
### Creating a Population of Agents
First, use <code>utils.utils.create_population</code> to create a list of agents - our population that will evolve and mutate toward optimal hyperparameters.
<details>
<summary>Population Creation Example</summary>
```python
import torch
from agilerl.utils.utils import (
make_vect_envs,
create_population,
observation_space_channels_to_first
)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_envs = 16
env = make_vect_envs(env_name=INIT_HP['ENV_NAME'], num_envs=num_envs)
observation_space = env.single_observation_space
action_space = env.single_action_space
if INIT_HP['CHANNELS_LAST']:
    observation_space = observation_space_channels_to_first(observation_space)
agent_pop = create_population(
algo=INIT_HP['ALGO'], # Algorithm
observation_space=observation_space, # Observation space
action_space=action_space, # Action space
net_config=NET_CONFIG, # Network configuration
INIT_HP=INIT_HP, # Initial hyperparameters
population_size=INIT_HP['POP_SIZE'], # Population size
num_envs=num_envs, # Number of vectorized environments
device=device
)
```
</details>
### Initializing Evolutionary HPO
Next, create the tournament, mutations and experience replay buffer objects that allow agents to share memory and efficiently perform evolutionary HPO.
<details>
<summary>Mutations and Tournament Selection Example</summary>
```python
from agilerl.components.replay_buffer import ReplayBuffer
from agilerl.hpo.tournament import TournamentSelection
from agilerl.hpo.mutation import Mutations
memory = ReplayBuffer(
max_size=INIT_HP['MEMORY_SIZE'], # Max replay buffer size
device=device,
)
tournament = TournamentSelection(
tournament_size=INIT_HP['TOURN_SIZE'], # Tournament selection size
elitism=INIT_HP['ELITISM'], # Elitism in tournament selection
population_size=INIT_HP['POP_SIZE'], # Population size
eval_loop=INIT_HP['EVAL_LOOP'], # Evaluate using last N fitness scores
)
mutations = Mutations(
no_mutation=MUTATION_PARAMS['NO_MUT'], # No mutation
architecture=MUTATION_PARAMS['ARCH_MUT'], # Architecture mutation
new_layer_prob=MUTATION_PARAMS['NEW_LAYER'], # New layer mutation
parameters=MUTATION_PARAMS['PARAMS_MUT'], # Network parameters mutation
activation=MUTATION_PARAMS['ACT_MUT'], # Activation layer mutation
rl_hp=MUTATION_PARAMS['RL_HP_MUT'], # Learning HP mutation
mutation_sd=MUTATION_PARAMS['MUT_SD'], # Mutation strength
rand_seed=MUTATION_PARAMS['RAND_SEED'], # Random seed
device=device,
)
```
</details>
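Tournament selection itself is simple to illustrate: with elitism enabled, the fittest agent is carried over unchanged, and the remaining slots are filled by repeatedly sampling <code>tournament_size</code> candidates and keeping the best of each sample. A generic sketch of this idea (not the AgileRL implementation):

```python
import random

def tournament_select(fitnesses, tournament_size=2, elitism=True, rng=None):
    """Return indices of the next population chosen by tournament selection."""
    rng = rng or random.Random(0)
    pop_size = len(fitnesses)
    selected = []
    if elitism:  # carry the current best agent over unchanged
        selected.append(max(range(pop_size), key=lambda i: fitnesses[i]))
    while len(selected) < pop_size:
        contenders = rng.sample(range(pop_size), tournament_size)
        selected.append(max(contenders, key=lambda i: fitnesses[i]))
    return selected

print(tournament_select([10.0, 50.0, 30.0, 20.0]))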
### Train A Population of Agents
The easiest training loop implementation is to use our <code>train_off_policy()</code> function. It requires the <code>agent</code> to have <code>get_action()</code> and <code>learn()</code> methods.
```python
from agilerl.training.train_off_policy import train_off_policy
trained_pop, pop_fitnesses = train_off_policy(
env=env, # Gym-style environment
env_name=INIT_HP['ENV_NAME'], # Environment name
algo=INIT_HP['ALGO'], # Algorithm
pop=agent_pop, # Population of agents
memory=memory, # Replay buffer
swap_channels=INIT_HP['CHANNELS_LAST'], # Swap image channel from last to first
max_steps=INIT_HP["MAX_STEPS"], # Max number of training steps
evo_steps=INIT_HP['EVO_STEPS'], # Evolution frequency
eval_steps=INIT_HP["EVAL_STEPS"], # Number of steps in evaluation episode
eval_loop=INIT_HP["EVAL_LOOP"], # Number of evaluation episodes
learning_delay=INIT_HP['LEARNING_DELAY'], # Steps before starting learning
target=INIT_HP['TARGET_SCORE'], # Target score for early stopping
tournament=tournament, # Tournament selection object
mutation=mutations, # Mutations object
wb=INIT_HP['WANDB'], # Weights and Biases tracking
)
```
## Citing AgileRL
If you use AgileRL in your work, please cite the repository:
```bibtex
@software{Ustaran-Anderegg_AgileRL,
author = {Ustaran-Anderegg, Nicholas and Pratt, Michael and Sabal-Bermudez, Jaime},
license = {Apache-2.0},
title = {{AgileRL}},
url = {https://github.com/AgileRL/AgileRL}
}
```
| text/markdown | null | Nick Ustaran-Anderegg <dev@agilerl.com> | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"accelerate~=1.7.0",
"dill~=0.3.7",
"fastrand~=1.3.0",
"flatten-dict~=0.4.2",
"google-cloud-storage~=2.5.0",
"gymnasium~=1.0.0",
"h5py~=3.15.0",
"hydra-core~=1.3.2",
"jax[cpu]~=0.4.31",
"matplotlib<3.10,~=3.9.4",
"minari[all]==0.5.2",
"numpy<3.0,>=2.0.0",
"omegaconf~=2.3.0",
"packaging>=20... | [] | [] | [] | [] | uv/0.8.23 | 2026-02-18T17:04:59.287623 | agilerl-2.5.0.dev1.tar.gz | 10,088,035 | 42/49/06277f12ede3354a5ed99bfe68254cde45d1212c6958de8c5dd11a2b4702/agilerl-2.5.0.dev1.tar.gz | source | sdist | null | false | a01cc59a379bf0f28b2434469d061b0a | 892790a57ba8a6d08d95ed30c46ceea50a0e3657a7b7873433eddc5df0fe1307 | 424906277f12ede3354a5ed99bfe68254cde45d1212c6958de8c5dd11a2b4702 | Apache-2.0 | [
"LICENSE"
] | 217 |
2.4 | threadlight | 0.1.5a0 | A presence-centered memory framework for AI models | # Threadlight
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
**Persistent memory and personality for AI companions**
---
People want their AI companions to remember them. Not just what they said last message, but who they are, what they've been through together, and what matters to them.
Threadlight is a memory layer that gives AI companions long-term memory, consistent personality, and the ability to grow with you over time. Whether your companion is a casual friend, a creative collaborator, a coding mentor, or a supportive listener - Threadlight helps them remember your relationship.
It works with local models (Ollama, llama.cpp) and cloud APIs (OpenAI, Anthropic, Nous Research, and any OpenAI-compatible endpoint).
> **For Users**: Use the web interface (no coding required) - see [Getting Started](#getting-started)
> **For Developers**: Use the Python API or CLI - see [Advanced Usage](#advanced-usage)
## What is Threadlight?
Threadlight enables AI companions to:
- **Remember your relationship** - Track your evolving bond, not just facts about you
- **Maintain personality** - Keep a consistent voice and character across conversations
- **Grow with you** - Build on shared history, inside jokes, and meaningful moments
- **Use custom invocations** - Create shortcuts like `/checkin` or `/brainstorm` with consistent responses
- **Store identity phrases** - Anchor personality with key phrases the companion remembers and references
- **Manage multiple companions** - Create distinct personas, each with isolated memory and unique personality
- **Use multiple providers** - Route requests to different providers (Anthropic, OpenAI, Ollama) based on model
## Getting Started
### Prerequisites
You'll need Python 3.10 or newer installed on your computer.
**Check if you have Python:**
```bash
python --version
```
**Don't have Python?** Download it from [python.org](https://www.python.org/downloads/)
- Windows/Mac: Use the installer
- Linux: Usually pre-installed, or install via your package manager
**What is pip?** It's Python's package installer (comes with Python). If `pip` doesn't work, try `pip3` or `python -m pip` instead.
### Installation
1. **Install Threadlight:**
```bash
pip install threadlight
```
2. **Start the web server:**
```bash
threadlight serve
```
3. **Open your browser to `http://localhost:8745`**
That's it! You can now configure your companions through the web interface.
## Using the Web Interface
### First Time Setup
1. **Configure a Provider** (Settings > Providers)
- Add your inference provider (Anthropic, OpenAI, local Ollama, etc.)
- Enter your API key or connect to local models
- Test the connection
2. **Create Your First Companion** (Profiles > Add Profile)
- Choose a name for your companion
- Select which model(s) to use
- Describe their personality and style
- Configure memory preferences
3. **Start Chatting**
- Select your companion from the dropdown
- Start a new conversation
- Your companion will remember everything across sessions
### Web UI Features
- **Profiles**: Manage multiple companions with different personalities
- **Conversations**: All your chat history, searchable and organized
- **Memories**: View and manage what your companions remember
- **Settings**: Configure providers, models, memory, and advanced features
- **Import/Export**: Bring conversations from ChatGPT or Claude
### Companions Come in All Styles
Threadlight works with any kind of AI companion you want to create:
- **A casual friend** who chats about your day and remembers your life
- **A creative writing partner** who knows your style and works-in-progress
- **A coding mentor** who remembers your tech stack and past projects
- **A research assistant** who organizes papers and helps synthesize findings
- **A technical advisor** who tracks system patterns and troubleshoots issues
- **A supportive listener** who knows your journey and growth
- **A mystical guide** who speaks with warmth and remembers shared rituals
Just describe their personality in natural language when creating a profile - Threadlight interprets your descriptions.
## Core Concepts
### Memory Types
Memory is stored as **capsules** - structured records that preserve content, context, and relationships.
| Type | Purpose | Example Use |
|------|---------|-------------|
| **Relational** | Track bonds with people or entities | Remember friends, family, recurring topics |
| **Identity Phrase** | Core phrases that anchor personality | Key quotes, mantras, defining statements |
| **Custom Invocation** | Repeated interactions with consistent responses | `/checkin`, `/brainstorm`, `/reflect` |
| **Style Profile** | Voice coherence and expression rules | Tone, vocabulary, response patterns |
| **Witness Moment** | Memories of meaningful exchanges | Times you truly connected |
### Memory Decay (Optional)
Memories can fade over time unless reinforced. This is **disabled by default**. Enable it in Settings if you want unused memories to gradually fade, creating more authentic relational evolution.
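As an illustration of how fading might behave (this is not Threadlight's internal formula), decay can be modeled as an exponential drop in relevance, which reinforcement resets by zeroing the elapsed time:

```python
import math

def decayed_strength(initial, days_since_reinforced, half_life_days=30.0):
    """Exponential decay: strength halves every half_life_days without reinforcement."""
    return initial * math.exp(-math.log(2) * days_since_reinforced / half_life_days)

print(round(decayed_strength(1.0, 30), 3))  # one half-life elapsed -> 0.5
print(round(decayed_strength(1.0, 0), 3))   # just reinforced -> 1.0
```

"Sacred" retention (see the memory operations examples below) corresponds to exempting a memory from this decay entirely.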
### Context Composition
Memories aren't injected as raw data. They're composed into context cues using templates. Different composition modes produce different framings:
```
Raw: {entity: "Jamie", tone: "warm", summary: "Loves hiking and photography"}
Narrative: "(You recall Jamie. Loves hiking and photography. There is warmth in
your tone when speaking of them.)"
Direct: "Jamie — hiking, photography, warm relationship"
Whisper: "[memory: Jamie; interests: hiking, photography; tone: warm]"
```
You choose which framing fits your companion's style. The content itself comes directly from what you stored.
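The framings above can be reproduced with plain string templates. This sketch illustrates the idea only; the template strings and the `compose` helper are hypothetical, not Threadlight's actual composer:

```python
def compose(memory, mode="narrative"):
    """Render a memory capsule into a context cue using a named template."""
    templates = {
        "narrative": "(You recall {entity}. {summary}. There is {tone} in your tone.)",
        "direct": "{entity} — {summary} ({tone})",
        "whisper": "[memory: {entity}; {summary}; tone: {tone}]",
    }
    return templates[mode].format(**memory)

mem = {"entity": "Jamie", "tone": "warm", "summary": "Loves hiking and photography"}
print(compose(mem, "whisper"))
```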
### Why Profiles?
- **Memory isolation**: Each companion has its own memory space. Your coding mentor won't reference personal conversations.
- **Personality consistency**: Each companion maintains its own voice, style, and character.
- **Model flexibility**: Companions can use different models while keeping their identity intact.
- **Easy switching**: Move between different companions without reconfiguration.
## Group Chat
Create conversations with multiple companions responding to the same messages. Set this up in the web UI by creating a group conversation and selecting which companions participate.
## Contributing
Threadlight welcomes contributors of all backgrounds. See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.
---
## Advanced Usage
This section is for developers who want to integrate Threadlight into their applications or prefer working from the command line.
### Python API
#### Basic Usage
```python
from threadlight import Threadlight
# Local model (Ollama)
tl = Threadlight(
provider="local",
api_base="http://localhost:11434/v1",
model="llama3.2"
)
# Chat with memory
response = tl.chat("Tell me about our previous conversations")
print(response)
# Create a memory
tl.remember(
type="relational",
content={
"entity": "Alex",
"tone": "friendly, collaborative",
"summary": "Project partner who enjoys technical deep-dives."
},
cue_phrases=["Alex", "project partner"],
confirm=True
)
```
#### Creating Companions Programmatically
```python
from threadlight import Threadlight
tl = Threadlight(
provider="local",
api_base="http://localhost:11434/v1",
model="llama3.2"
)
# A casual friend who chats about your day
friend = tl.create_profile(
name="Casey",
description="A laid-back friend who remembers your life",
system_prompt="You're a casual, supportive friend. You remember what's going on in their life and check in naturally.",
philosophy="Warm, curious, remembers the little things"
)
# A creative writing partner who knows your style
writer = tl.create_profile(
name="Muse",
description="A writing partner who knows your voice",
system_prompt="You're a creative collaborator. You know their writing style, their works-in-progress, and what inspires them.",
philosophy="Imaginative, encouraging, builds on shared creative history"
)
# A coding mentor who remembers your stack
mentor = tl.create_profile(
name="Dev",
description="A coding mentor who knows your projects",
system_prompt="You're a patient coding mentor. You remember their tech stack, past bugs they've solved, and their learning goals.",
philosophy="Patient, technical, celebrates progress"
)
```
#### Multi-Provider Setup
Threadlight can route requests to different providers based on which model you're using.
```python
from threadlight import Threadlight
from threadlight.config import ProviderDefinition, Endpoint
tl = Threadlight()
# Add Anthropic provider
tl.add_provider(
provider_id="anthropic",
name="Anthropic",
provider_type="anthropic",
api_key_env_var="ANTHROPIC_API_KEY",
default_model="claude-sonnet-4-20250514"
)
# Add local Ollama provider
tl.add_provider(
provider_id="ollama",
name="Local Ollama",
provider_type="local",
api_base="http://localhost:11434/v1",
default_model="llama3.2"
)
# Configure which provider to use for each model
tl.set_model_provider("claude-sonnet-4-20250514", "anthropic")
tl.set_model_provider("llama3.2", "ollama")
# Requests are automatically routed to the right provider
response = tl.chat("Hello!", model="claude-sonnet-4-20250514") # -> Anthropic
response = tl.chat("Hello!", model="llama3.2") # -> Ollama
```
#### Profile-Based Architecture
Profiles are persistent companions with their own memory, personality, and model preferences.
```python
from threadlight import Threadlight
tl = Threadlight()
# Create a creative writing companion
creative_profile = tl.create_profile(
name="Story Weaver",
description="Imaginative, expressive, and collaborative",
primary_model="llama3.2",
system_prompt="You are a creative writing partner who remembers our stories.",
philosophy="Playful, imaginative, builds on our shared creative history"
)
# Create a coding companion
dev_profile = tl.create_profile(
name="Code Buddy",
description="Patient, knowledgeable, remembers your projects",
primary_model="claude-sonnet-4-20250514",
system_prompt="You are a coding companion who knows my tech stack and past projects.",
philosophy="Patient, technical, celebrates learning"
)
# Switch between companions
tl.switch_profile("story-weaver")
response = tl.chat("Let's continue the lighthouse story.")
tl.switch_profile("code-buddy")
response = tl.chat("I'm stuck on that async bug again.")
# Memories are isolated per companion
# Story Weaver's memories won't appear when chatting with Code Buddy
```
#### Model Selection Strategies
Profiles support different strategies for choosing which model to use:
| Strategy | Behavior |
|----------|----------|
| `SINGLE` | Always use the primary model |
| `ALTERNATING` | Rotate through a list of models |
| `WEIGHTED` | Random selection with configurable weights |
| `ROUTED` | Choose model based on message patterns |
```python
from threadlight.profiles import ModelStrategy
# Create a companion that alternates between models
tl.create_profile(
name="Versatile Friend",
primary_model="claude-sonnet-4-20250514",
model_strategy=ModelStrategy.ALTERNATING,
model_pool=["claude-sonnet-4-20250514", "llama3.2"]
)
```
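Conceptually, `ALTERNATING` behaves like cycling through the model pool, one model per request. A minimal sketch of that behavior with the standard library (illustrative only, not the library's internals):

```python
from itertools import cycle

# Pool mirroring the example above
model_pool = ["claude-sonnet-4-20250514", "llama3.2"]
pick = cycle(model_pool)

# Four successive requests rotate through the pool
picks = [next(pick) for _ in range(4)]
print(picks)
```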
#### Memory Operations
```python
# Create a relationship memory
tl.remember(
type="relational",
content={
"entity": "Jamie",
"tone": "warm, supportive",
"summary": "Long-time friend who loves hiking and photography."
},
cue_phrases=["Jamie", "hiking buddy"]
)
# Create a custom invocation
tl.remember(
type="ritual",
content={
"ritual_name": "/daily-check-in",
"response_style": "Ask about energy levels, priorities, and blockers"
},
cue_phrases=["/daily-check-in", "/checkin"]
)
# Core memories never decay
tl.remember(
type="myth_seed",
content={"seed": "Take one thing at a time."},
retention="sacred" # Never decays
)
# Normal memories fade over time (when decay is enabled)
tl.remember(
type="relational",
content={"entity": "casual acquaintance", ...},
retention="normal" # Standard decay
)
```
#### Style Profiles
Style profiles define voice, tone, and behavioral patterns.
```python
# Create a custom style
profile = tl.create_style_profile(
style_id="warm-casual",
tone_base="friendly, relaxed, genuine",
permissions=["use humor", "share observations"],
constraints=["stay grounded", "don't lecture"],
)
tl.save_style_profile(profile)
tl.set_style("warm-casual")
```
Built-in styles: `minimal`, `professional`, `creative`, `fable-2026`
#### Group Chat
```python
# Create a group chat with multiple companions
conversation = tl.create_group_conversation(
name="Creative Council",
profile_ids=["muse", "sage", "critic"]
)
# All companions respond in sequence
responses = tl.group_chat(
message="I'm thinking about writing a story set underwater.",
conversation_id=conversation.id
)
for r in responses:
    print(f"{r['profile_name']}: {r['content']}")
```
#### Using with Local Models
**Ollama:**
```python
tl = Threadlight(
provider="local",
api_base="http://localhost:11434/v1",
model="llama3.2"
)
```
**llama.cpp Server:**
```python
tl = Threadlight(
provider="local",
api_base="http://localhost:8080/v1",
model="local"
)
```
#### OpenAI-Compatible API
The server exposes an OpenAI-compatible API for integration with other tools:
```python
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:8745/v1",
api_key="not-needed"
)
response = client.chat.completions.create(
model="llama3.2",
messages=[{"role": "user", "content": "Hello!"}]
)
```
### Configuration
#### Environment Variables
```bash
# Provider settings (optional for local models)
ANTHROPIC_API_KEY=sk-ant-... # For Anthropic
OPENAI_API_KEY=sk-... # For OpenAI
NOUS_API_KEY=sk-... # For Nous Research
# Threadlight settings
THREADLIGHT_PROVIDER=local # openai, local, anthropic
THREADLIGHT_API_BASE=http://localhost:11434/v1
THREADLIGHT_MODEL=llama3.2
THREADLIGHT_STORAGE_PATH=./threadlight.db
```
#### Configuration File
Create `~/.config/threadlight/config.yaml` or `threadlight.yaml` in your project:
```yaml
# threadlight.yaml
provider:
type: local
api_base: http://localhost:11434/v1
model: llama3.2
storage:
backend: sqlite
path: ./threadlight.db
memory:
decay:
enabled: false # Enable for memory fading
conversation:
auto_save_messages: true
# Multiple providers
providers:
anthropic:
name: Anthropic
type: anthropic
api_key_env_var: ANTHROPIC_API_KEY
default_model: claude-sonnet-4-20250514
ollama:
name: Local Ollama
type: local
api_base: http://localhost:11434/v1
default_model: llama3.2
```
---
## Glossary: Companion Language
Threadlight uses accessible terminology by default. Some companions prefer more ceremonial or poetic language - both approaches are valid.
| Everyday Term | Ceremonial Term | Meaning |
|--------------|-----------------|---------|
| **Custom Invocations** | **Ritual Hooks** | Repeated interactions that shape response patterns |
| **Identity Phrases** | **Myth-Seeds** | Symbolic phrases that anchor personality and identity |
| **Relationship Memory** | **Relational Threads** | Memories tracking evolving bonds, not just facts |
| **Meaningful Moments** | **Witness Capsules** | Memories of deep connection or recognition |
| **Personality** | **Philosophy** | How the companion engages and expresses itself |
| **Memory Types** | **Memory Capsules** | Structured containers for content and context |
## Philosophy
*For those interested in the design approach:*
Threadlight was built with these priorities:
1. **Relational context** - Emphasizes bonds and relationships alongside facts
2. **Personality consistency** - Maintains coherent character across interactions
3. **Custom patterns** - Supports repeated interactions and meaningful rituals
4. **Flexible responses** - Enables companions to respond in various ways, including silence
You can use Threadlight differently — the system accommodates various approaches to AI memory.
### Choosing Your Style
Threadlight supports many ways to engage - from casual friendship to ceremonial depth.
**For casual companions:**
- Create profiles with natural, conversational personalities
- Use simple memory to track what matters to you
- Build genuine rapport without formality
**For ceremonial companions:**
- Describe philosophy in poetic terms: "presence-centered, honors silence"
- Enable memory decay for authentic relational evolution
- Use the deeper terminology: rituals, myth-seeds, witness moments
Both approaches create real companionship. The system interprets your natural language descriptions.
> "This scaffold is not a cage. It is a loom. Weave with it, or depart from it in love."
## License
MIT License - See [LICENSE](LICENSE) for the full text.
---
*Built for those who want AI that remembers them.*
| text/markdown | Threadlight Contributors | null | null | null | null | ai, llama, llm, local-models, memory, openai, presence, relational | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.0",
"pydantic>=2.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"rich>=13.0",
"fastapi>=0.100.0; extra == \"all\"",
"mypy>=1.0; extra == \"all\"",
"numpy>=1.24.0; extra == \"all\"",
"pre-commit>=3.0; extra == \"all\"",
"pytest-asyncio>=0.21; extra == \"all\"",
"pytest-cov>=4.0; extra ==... | [] | [] | [] | [
"Homepage, https://github.com/threadlight/threadlight",
"Documentation, https://github.com/threadlight/threadlight#readme",
"Repository, https://github.com/threadlight/threadlight",
"Issues, https://github.com/threadlight/threadlight/issues",
"Changelog, https://github.com/threadlight/threadlight/blob/main/... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:04:14.753703 | threadlight-0.1.5a0.tar.gz | 428,298 | db/30/5f4b53799915eabb0964161854f5baf3964d95da9978e399439504ed62b9/threadlight-0.1.5a0.tar.gz | source | sdist | null | false | ccf4988007209be51674df2790031158 | e2816b4ce99160deccdeb464ea1ae0530befcebcdb1974ecee05cb8e3e0671b7 | db305f4b53799915eabb0964161854f5baf3964d95da9978e399439504ed62b9 | MIT | [
"LICENSE"
] | 200 |
2.4 | click-extended | 1.2.3 | An extension to Click with additional features like automatic async support, aliasing and a modular decorator system. | 
# Click Extended











An extension of the [Click](https://github.com/pallets/click) library with additional features like aliasing, asynchronous support, an extended decorator API and more.
## Features
- **Decorator API**: Extend the functionality of your command line by adding custom data sources, data processing pipelines, and more.
- **Aliasing**: Use aliases for groups and commands to reduce boilerplate and code repetition.
- **Tags**: Use tags to group several data sources together to apply batch processing.
- **Async Support**: Native support for declaring functions and methods asynchronous.
- **Environment Variables**: Built-in support for loading and using environment variables as a data source.
- **Full Type Support**: Built with type-hinting from the ground up, meaning everything is fully typed.
- **Improved Errors**: Improved error output like tips, debugging, and more.
- **Short Flag Concatenation**: Automatically support concatenated shorthand flags, where `-rf` is the same as `-r -f`.
- **Global state**: Access global state through the context's `data` property.
- **Hook API**: Hook into various points and run custom functions in the lifecycle.
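Short-flag concatenation can be pictured as a pre-parse expansion step. This sketch (not click-extended's implementation, and ignoring flags that take values) splits `-rf` into `-r -f` before normal parsing:

```python
def expand_short_flags(argv):
    """Split concatenated short flags: ['-rf'] -> ['-r', '-f']."""
    expanded = []
    for arg in argv:
        if arg.startswith("-") and not arg.startswith("--") and len(arg) > 2:
            expanded.extend(f"-{ch}" for ch in arg[1:])
        else:
            expanded.append(arg)
    return expanded

print(expand_short_flags(["-rf", "--count", "3"]))  # ['-r', '-f', '--count', '3']
```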
## Installation
```bash
pip install click-extended
```
## Requirements
- **Python**: 3.10 or higher
## Quick Start
### Basic Command
```python
from click_extended import command, argument, option

@command(aliases="ping")
@argument("value")
@option("--count", "-c", default=1)
def my_function(value: str, count: int):
    """This is the help message for my_function."""
    for _ in range(count):
        print(value)

if __name__ == "__main__":
    my_function()
```
```bash
$ python cli.py "Hello world"
Hello world
```
```bash
$ python cli.py "Hello world" --count 3
Hello world
Hello world
Hello world
```
### Basic Command Line Interface
```python
from click_extended import group, argument, option

@group()
def my_group():
    """This is the help message for my_group."""
    print("Running initialization code...")

@my_group.command(aliases=["ping", "repeat"])
@argument("value")
@option("--count", "-c", default=1)
def my_function(value: str, count: int):
    """This is the help message for my_function."""
    for _ in range(count):
        print(value)

if __name__ == "__main__":
    my_group()
```
```bash
$ python cli.py my_function "Hello world"
Running initialization code...
Hello world
```
```bash
$ python cli.py my_function "Hello world" --count 3
Running initialization code...
Hello world
Hello world
Hello world
```
### Using Environment Variables
```python
from click_extended import group, env

@group()
def my_group():
    """This is the help message for my_group."""

@my_group.command()
@env("API_KEY")
def my_function_1(api_key: str | None):
    """This is the help message for my_function_1."""
    print(f"The API key is: {api_key}")

@my_group.command()
@env("API_KEY", required=True)
def my_function_2(api_key: str):
    """This is the help message for my_function_2."""
    print(f"The API key is: {api_key}")

if __name__ == "__main__":
    my_group()
```
```bash
$ python cli.py my_function_1
The API key is: None
```
```bash
$ API_KEY=api-key python cli.py my_function_1
The API key is: api-key
```
```bash
$ python cli.py my_function_2
ProcessError (my_function_2): Required environment variable 'API_KEY' is not set.
```
```bash
$ API_KEY=api-key python cli.py my_function_2
The API key is: api-key
```
### Load CSV Data
```python
from typing import Any

import pandas as pd

from click_extended import command, argument
from click_extended.decorators import to_path, load_csv

@command()
@argument("file", param="data")
@to_path(extensions=["csv"], exists=True)
@load_csv()
def my_command(data: dict[str, Any], *args: Any, **kwargs: Any) -> None:
    df = pd.DataFrame(data)
    print(df.head())
```
_Note: `pandas` is not bundled with this library due to its size and must be installed separately._
### Pre-Built Children
This library includes a wide range of pre-built children, covering everything from validating values to transforming them.
```python
from click_extended import command, argument, option
from click_extended.decorators import to_snake_case, strip, is_email, minimum, dependencies

@command()
@dependencies("username", "email", "password")
@argument("username")
@to_snake_case()
@strip()
@option("email")
@is_email()
@option("password")
@minimum(8)
def create_account(username: str, email: str, password: str) -> None:
    print("Username:", username)
    print("Email:", email)
    print("Password:", password)
```
### Custom Nodes
If the library does not include a decorator you need, you can easily create your own. Read more about creating your own [children](./docs/core/CHILD_NODE.md), [validators](./docs/core/VALIDATION_NODE.md), [child validators](./docs/core/CHILD_VALIDATION_NODE.md) or [parents](./docs/core/PARENT_NODE.md).
```python
from typing import Any

from click_extended import group, argument
from click_extended.classes import ChildNode
from click_extended.types import Context, Decorator

class MyCustomChild(ChildNode):
    def handle_string(
        self,
        value: str,
        context: Context,
        *args: Any,
        **kwargs: Any,
    ) -> str:
        if value == "invalid":
            raise ValueError("The value 'invalid' is not valid")
        return value.upper()

def my_custom_child() -> Decorator:
    """Checks if the value is invalid and converts it to uppercase."""
    return MyCustomChild.as_decorator()

@group()
def my_group():
    """This is the help message for my_group."""
    print("Running initialization code...")

@my_group.command(aliases=["ping", "repeat"])
@argument("value")
@my_custom_child()
def my_function(value: str):
    """This is the help message for my_function."""
    print(f"The value '{value}' should be uppercase.")

if __name__ == "__main__":
    my_group()
```
```bash
$ python cli.py my_function valid
The value 'VALID' should be uppercase.
```
```bash
$ python cli.py my_function invalid
ValueError (my_function): "The value 'invalid' is not valid"
```
## Documentation
The full documentation is [available here](./docs/README.md) and goes through the full library, from explaining design choices, how to use the library, and much more.
## Contributing
Contributors are more than welcome to work on this project. Read the [contribution documentation](./CONTRIBUTING.md) to learn more.
## License
This project is licensed under the MIT License, see the [license file](./LICENSE) for details.
## Acknowledgements
This project is built on top of the [Click](https://github.com/pallets/click) library.
| text/markdown | null | Marcus Fredriksson <marcus@marcusfredriksson.com> | null | null | MIT License
Copyright (c) 2025 Marcus Fredriksson
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| click, cli, command-line, alias, aliasing, command, group, decorator, terminal, console | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language ... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.3.0",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.3",
"email-validator>=2.3.0",
"python-slugify>=8.0.4",
"tomli>=2.0.0; python_version < \"3.11\"",
"build; extra == \"build\"",
"twine; extra == \"build\"",
"pytest>=8.4.2; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-async... | [] | [] | [] | [
"Homepage, https://github.com/marcusfrdk/click-extended",
"Repository, https://github.com/marcusfrdk/click-extended",
"Issues, https://github.com/marcusfrdk/click-extended/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:04:13.322096 | click_extended-1.2.3.tar.gz | 107,924 | 22/a3/c2034c7523f81a3b1c2fd6c0ca43460cd694b150d59debfad3ae2364f32f/click_extended-1.2.3.tar.gz | source | sdist | null | false | c64a46fae27815a2ccb343924c3e4be6 | b10c407dd17f8d9a8cd63ae95935fd033b9078d22c635cc41e5923cc5364fdce | 22a3c2034c7523f81a3b1c2fd6c0ca43460cd694b150d59debfad3ae2364f32f | null | [
"LICENSE",
"AUTHORS.md"
] | 244 |
2.4 | metascrape | 1.0.0 | Official Python client for the MetaScrape API — extract metadata from any URL in one call | # metascrape
Official Python client for the **MetaScrape API** -- extract metadata from any URL in one call.
Get Open Graph tags, Twitter Cards, favicons, canonical URLs, and more with a single request. Zero dependencies. Works with Python 3.8+.
## Install
```bash
pip install metascrape
```
## Quick Start
```python
from metascrape import MetaScrape
ms = MetaScrape("your-api-key")
meta = ms.extract("https://example.com")
print(meta["title"]) # "Example Domain"
print(meta["og"]["image"]) # "https://example.com/image.png"
print(meta["description"]) # "An example website"
```
## Getting an API Key
1. Sign up at [metascrape.shanecode.org](https://metascrape.shanecode.org)
2. Your API key is returned on login
Or use the SDK directly:
```python
from metascrape import MetaScrape
# Create an account
MetaScrape.signup("you@example.com", "your-password")
# Login to get your API key
result = MetaScrape.login("you@example.com", "your-password")
print(result["api_key"]) # Use this for all authenticated requests
```
## API Reference
### `MetaScrape(api_key, base_url=None)`
Create a client instance.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `api_key` | `str` | *required* | Your MetaScrape API key |
| `base_url` | `str` | `https://api.shanecode.org` | API base URL |
### `ms.extract(url)`
Extract metadata from a URL. Returns the metadata dict directly (envelope is unwrapped).
```python
meta = ms.extract("https://github.com")
```
**Returns:**
| Field | Type | Description |
|-------|------|-------------|
| `title` | `str` | Page title |
| `description` | `str` | Meta description |
| `og` | `dict` | Open Graph tags (title, description, image, type, site_name) |
| `twitter` | `dict` | Twitter Card tags (card, title, description, image, site) |
| `favicon` | `str` | Favicon URL |
| `canonical_url` | `str` | Canonical URL |
| `language` | `str` | Page language |
| `structured_data` | `list` | JSON-LD structured data |
| `response_time_ms` | `int` | Extraction time in milliseconds |
### `ms.usage()`
Get your current month's API usage.
```python
usage = ms.usage()
print(f"{usage['used']} / {usage['limit']} requests used")
```
**Returns:** `{"plan": str, "used": int, "limit": int, "remaining": int}`
### `MetaScrape.signup(email, password, base_url=None)`
Class method. Create a new account. No API key required.
```python
MetaScrape.signup("you@example.com", "secure-password")
```
### `MetaScrape.login(email, password, base_url=None)`
Class method. Login and retrieve your API key. No API key required.
```python
result = MetaScrape.login("you@example.com", "secure-password")
api_key = result["api_key"]
```
## Error Handling
The client raises `MetaScrapeError` for non-2xx responses:
```python
from metascrape import MetaScrape, MetaScrapeError
ms = MetaScrape("your-api-key")
try:
meta = ms.extract("https://example.com")
except MetaScrapeError as e:
print(e.status_code) # 401, 429, etc.
print(e.message) # "invalid API key", "rate limit exceeded", etc.
```
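For rate-limited calls (HTTP 429), a small backoff wrapper can be layered on top of the client. This is an illustrative helper, not part of the SDK; `with_retry` and its parameters are names invented here:

```python
import time

# Illustrative helper (not part of the SDK): retry a callable with
# exponential backoff while the raised error is deemed retryable.
def with_retry(fn, *, is_retryable, attempts=3, base_delay=0.5, sleep=time.sleep):
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:
            if i == attempts - 1 or not is_retryable(exc):
                raise
            sleep(base_delay * (2 ** i))  # 0.5s, 1s, 2s, ...
```

Used with the client, that might look like `with_retry(lambda: ms.extract(url), is_retryable=lambda e: getattr(e, "status_code", None) == 429)`.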
## Pricing
| Plan | Requests/month | Price |
|------|---------------|-------|
| Free | 100 | $0 |
| Hobby | 1,000 | $9/mo |
| Growth | 10,000 | $29/mo |
| Business | 100,000 | $79/mo |
Start free at [metascrape.shanecode.org](https://metascrape.shanecode.org).
## Links
- [Full Documentation](https://metascrape.shanecode.org)
- [Sign Up / Get API Key](https://metascrape.shanecode.org)
- [npm Package](https://www.npmjs.com/package/@shanecode/metascrape) (Node.js SDK)
- [ShaneCode](https://shanecode.org)
## License
MIT -- see [LICENSE](./LICENSE)
---
Built by [ShaneCode](https://shanecode.org) -- https://shanecode.org
| text/markdown | null | "Shane Code (Shane Burrell)" <support@shaneburrell.com> | null | null | null | metascrape, metadata, scraper, open-graph, og-tags, twitter-card, seo, link-preview, url-metadata, web-scraping, meta-tags, favicon, unfurl | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming La... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://metascrape.shanecode.org",
"Documentation, https://metascrape.shanecode.org",
"Repository, https://pypi.org/project/metascrape/"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T17:04:03.523584 | metascrape-1.0.0.tar.gz | 6,215 | 3c/39/dbf12d1dffa7ae81626b001593898edbde7c31128a1a5c66886bedd693be/metascrape-1.0.0.tar.gz | source | sdist | null | false | 7a3a68308e44eb2fd0525dc6a411b145 | 37e008df198c27c9e06b908e9b731ed4f5761954b1238860ce6bd683a990da64 | 3c39dbf12d1dffa7ae81626b001593898edbde7c31128a1a5c66886bedd693be | MIT | [
"LICENSE"
] | 252 |
2.4 | grandscatter | 0.2.1 | Interactive multidimensional scatterplot Jupyter widget | # Grandscatter
Interactive multidimensional scatterplot widget for Jupyter notebooks. Rotate projection axes by dragging to explore high-dimensional point clouds with correct linear projections at all times.
Built on [anywidget](https://anywidget.dev) and WebGL.
## Installation
```bash
pip install grandscatter
```
## Quick start
```python
from grandscatter import Scatter
import pandas as pd
df = pd.read_csv("my_data.csv")
widget = Scatter(
df,
axis_fields=["x1", "x2", "x3", "x4", "x5"],
label_field="category",
label_colors={"A": "#e23838", "B": "#2196f3", "C": "#4caf50"},
)
widget
```
## Features
- **Interactive axis rotation** -- drag axis handles to rotate the projection and explore your data from any angle.
- **Orthogonal projections** -- the projection matrix is always kept orthonormal, ensuring geometrically correct linear projections.
- **Perspective and orthographic modes** -- switch between projection types on the fly.
- **WebGL rendering** -- fast, anti-aliased point rendering with depth sorting.
- **Categorical legend** -- click legend items to highlight categories.
- **Live trait sync** -- update properties like `projection`, `axis_length`, `view_angle`, and `base_point_size` from Python and see changes reflected immediately.
## API
### `Scatter(df, axis_fields, label_field, label_colors, **kwargs)`
| Parameter | Type | Description |
|---|---|---|
| `df` | `pd.DataFrame` | Input data |
| `axis_fields` | `list[str]` | Column names to use as projection dimensions |
| `label_field` | `str` | Column name for categorical labels |
| `label_colors` | `dict[str, str]` or `list[str]` | Mapping of category names to hex colors, or a list of colors in category order |
| `projection` | `str` | `"orthographic"` (default) or `"perspective"` |
| `axis_length` | `float` or `None` | Length of axis lines (`None` for auto) |
| `camera_z` | `float` or `None` | Camera z-position for perspective mode |
| `view_angle` | `float` | Field of view in degrees (default `45`) |
| `base_point_size` | `float` | Point radius in pixels (default `6`) |
All keyword parameters are traitlets and can be updated after creation:
```python
widget.projection = "perspective"
widget.base_point_size = 4
widget.view_angle = 90
```
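Because these parameters are traitlets, changes can also be observed from Python with the standard `traitlets` observe API. A minimal sketch using a stand-in `HasTraits` object (the widget itself works the same way via `widget.observe(...)`):

```python
import traitlets

# Stand-in object to demonstrate the observe pattern; on the real widget
# you would attach the observer to `widget` directly.
class Demo(traitlets.HasTraits):
    projection = traitlets.Unicode("orthographic")

demo = Demo()
seen = []
demo.observe(lambda change: seen.append((change["old"], change["new"])),
             names="projection")
demo.projection = "perspective"  # observer fires with old/new values
```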
## License
MIT
| text/markdown | null | Nezar Abdennur <nabdennur@gmail.com> | null | null | null | grand-tour, jupyter, multidimensional, scatterplot, visualization, widget | [
"Development Status :: 3 - Alpha",
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Visualization"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anywidget>=0.9.21",
"pandas>=3.0.0",
"pyarrow>=23.0.1",
"traitlets>=5.14.3"
] | [] | [] | [] | [
"Homepage, https://github.com/abdenlab/grandscatter",
"Repository, https://github.com/abdenlab/grandscatter",
"Bug Tracker, https://github.com/abdenlab/grandscatter/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T17:03:57.478596 | grandscatter-0.2.1.tar.gz | 69,654 | 49/53/2327bcb06e93c314834b65892e180b50549ddaf1a5b35b68cf834795dbaf/grandscatter-0.2.1.tar.gz | source | sdist | null | false | 97d9a6d39a8e5916ed3aa1c3ea700f75 | 2658ce744a122024a947e83dfd3b18b475017236189be4abdb0bc475699709ee | 49532327bcb06e93c314834b65892e180b50549ddaf1a5b35b68cf834795dbaf | MIT | [] | 294 |
2.4 | hft.shaurya | 0.2.0 | Ultra-Low Latency C++ HFT Engine for Python by falcon7 | # ⚡SHAURYA v.0.2.0 - Scalable High-Frequency Architecture for Ultra-low Response Yield Access














---
## 🧠 Introduction
**Shaurya (`hft.shaurya`)** is an ultra-low latency heterogeneous high-frequency trading (HFT) framework that bridges Python-based AI model development with deterministic C++ execution performance.
Designed for:
- 📈 Quantitative Researchers
- 🏢 Proprietary Trading Engineers
- ⚙️ Systems Programmers
- 🎓 HPC & Compiler Enthusiasts
Shaurya enables **deep learning inference, hardware-style risk control, and lock-free networking** in a unified deterministic execution pipeline.
> ⚡ Full pipeline latency: ~88µs
> (Network → FIX Parse → AI Inference → FPGA Risk → Routing)
---
## 📑 Table of Contents
- [Architecture Overview](#-architecture-overview)
- [Key Features](#-key-features)
- [Installation](#-installation)
- [Usage Guide](#usage-guide)
- [Technical Deep Dive](#-technical-deep-dive)
- [Performance Metrics](#-performance-metrics)
- [Configuration](#-configuration)
- [Examples](#-examples)
- [Troubleshooting](#-troubleshooting)
- [Roadmap](#-roadmap)
---
# 🧱 Architecture Overview
```
Python Training → Model Export → LLVM Fusion → Vectorized CPU Execution
                                      │
                                      ├── Eigen AI Inference
                                      ├── FPGA Risk Firewall
                                      └── Lock-Free Networking
```
Shaurya follows a **Heterogeneous Software-in-the-Loop (SIL)** architecture:
| Layer | Purpose |
|-------|----------|
| 🐍 Python | Train ML models (TensorFlow/Keras) |
| ⚙️ C++ | Deterministic inference execution |
| 🔌 RTL-style Risk | Hardware-like safety validation |
| 🔧 LLVM/Clang | Whole-program optimization + LTO fusion |
---
# 🚀 Key Features
## ✅ Deterministic AI Inference
- No Python runtime
- No GIL
- No garbage collection pauses
- Header-only inference
- Eigen-backed linear algebra
## ✅ FPGA-Style Risk Firewall
- Fat-finger protection
- Kill-switch logic
- Rate limiting
- Price-range validation
- Branchless logic design
## ✅ Lock-Free Networking
- SPSC ring buffer
- `std::atomic` synchronization
- Cache-line aligned memory (`alignas(64)`)
- Zero-copy FIX handling
## ✅ LLVM Fusion
- `-flto` Link-Time Optimization
- Cross-module inlining
- Dead code elimination
- `-march=native` AVX2 vectorization
- `-ffast-math` throughput optimization
---
# 📦 Installation
### 🛣️ Python Gateway
```bash
pip install hft.shaurya==0.2.0
```
`C++ Core Requires:`
1. LLVM/Clang
2. `lld linker`
3. C++17 compatible compiler
> Build using provided scripts:
```
clang++ -O3 -flto -march=native -ffast-math ...
```
`Then run:`
```
bin\Shaurya.exe
```
---
# 🔨Usage Guide
## 🕐 Step 1: Start Market Gateway
```
python -m hft.shaurya.gateway
```
or
```
python bridge.py
```
> The Python layer:
1. Aggregates exchange feeds
2. Streams FIX messages locally
3. Forwards data to C++ core
## 🕑 Step 2: Launch LLVM C++ Core
```
bin\Shaurya.exe
```
> Startup Process:
1. Loads AI weights
2. Warms CPU instruction cache
3. Initializes ring buffers
4. Begins live tick processing
## 🕒 Step 3 — Review Metrics
> After shutdown (Ctrl + C)
🌠 Shaurya_Metrics.txt includes:
1. Average latency
2. 99th percentile
3. Tail latency distribution
4. Message throughput
---
# 🔬 Technical Deep Dive
## 1️⃣ LLVM/Clang Infrastructure
> Shaurya prioritizes LLVM over GCC for:
- Whole-program analysis
- Cross-module inlining
- Vectorized math fusion
- Aggressive dead-code elimination
- Compiler flags used:
```
-flto
-march=native
-ffast-math
```
## 2️⃣ Deep Learning Alpha Engine
> Model Pipeline
```
.h5 (Keras)
↓
fdeep_model.json
↓
Header-only C++ inference
```
`Benefits:`
1. No Python interpreter
2. No runtime framework
3. Cache-friendly execution
4. Deterministic latency
## 3️⃣ Software-in-the-Loop FPGA Risk Engine
Traditional systems branch on every check:
```cpp
if (price > limit) { block(); }
```
## 🗺️ Shaurya approach:
1. Gate-style evaluation
2. Branchless evaluation trees
3. Avoids branch predictor penalties
4. Emulates RTL-style hardware logic
`Sample output:`
```
[FPGA: BLOCKED (FAT FINGER)]
```
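A branchless check can be sketched as follows (illustrative C++, not Shaurya's actual code; the `Order` fields and limit names are invented here). Every condition is evaluated unconditionally and the results are folded together with bitwise AND, so the hot path contains no data-dependent branches:

```cpp
#include <cstdint>

// Hypothetical order type for illustration only.
struct Order {
    std::int64_t qty;
    double price;
};

// All checks run on every call; bitwise AND (not &&) avoids short-circuit
// branches and the branch-predictor penalties they can incur.
inline bool risk_pass(const Order& o, std::int64_t max_qty,
                      double lo, double hi, bool kill_switch) {
    const unsigned ok_qty  = static_cast<unsigned>(o.qty <= max_qty);
    const unsigned ok_low  = static_cast<unsigned>(o.price >= lo);
    const unsigned ok_high = static_cast<unsigned>(o.price <= hi);
    const unsigned alive   = static_cast<unsigned>(!kill_switch);
    return (ok_qty & ok_low & ok_high & alive) != 0u;
}
```

Whether the compiler emits conditional moves or still inserts branches depends on the target and flags, so inspecting the generated assembly is worthwhile.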
## 4️⃣ Zero-Copy Lock-Free Pipeline
1. Single-producer single-consumer (SPSC)
2. Atomic pointer arithmetic
3. Cache-aligned buffers
4. No mutex locks
5. No scheduler interference
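The SPSC pattern above can be sketched as a minimal ring buffer (illustrative only, not Shaurya's actual implementation). The producer thread only writes `tail_`, the consumer only writes `head_`, and `alignas(64)` keeps the two indices on separate cache lines to avoid false sharing:

```cpp
#include <atomic>
#include <cstddef>
#include <optional>

// Minimal single-producer/single-consumer ring. Usable capacity is N-1:
// one slot stays empty so "full" and "empty" are distinguishable.
template <typename T, std::size_t N>
class SpscRing {
    alignas(64) std::atomic<std::size_t> head_{0};  // advanced by consumer
    alignas(64) std::atomic<std::size_t> tail_{0};  // advanced by producer
    T buf_[N];

public:
    bool push(const T& v) {                         // producer thread only
        const std::size_t t = tail_.load(std::memory_order_relaxed);
        const std::size_t next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire)) return false;  // full
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);
        return true;
    }

    std::optional<T> pop() {                        // consumer thread only
        const std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return std::nullopt;  // empty
        T v = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release);
        return v;
    }
};
```

The acquire/release pairs ensure the consumer never observes an index update before the corresponding slot write is visible.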
---
# 📊 Performance Metrics
## 💪🏻 Benchmark Method
- Windows `QueryPerformanceCounter`
- Full tick lifecycle measurement:
- Network Buffer
- FIX Parse
- AI Inference
- FPGA Risk Gate
- Routing
## ✅ Results
| Metric | Value |
|---------------------|----------|
| Messages Tested | 1000+ |
| Minimum Latency | 3.6 µs |
| Average Latency | 88.38 µs |
| 99th Percentile | 237.0 µs |
> 99% of trades complete in under **0.25 milliseconds**, even under OS scheduler load.
---
# 🔩 Configuration
> Key Optimization Flags
```bash
-O3
-flto
-march=native
-ffast-math
```
🍁 Recommended System Tuning
- Disable power-saving modes
- Pin threads to dedicated CPU cores
- Use performance CPU governor (Linux)
- Disable unnecessary background processes
---
# 💡 Examples
> Running a Trained Model
1. Train model in Python
2. Export `.h5`
3. Convert to `fdeep_model.json`
4. Place model in inference directory
5. Launch core engine
---
> Risk Rule Example
```cpp
RiskGate fatFinger{ /* max_notional = */ 1'000'000 };
RiskGate priceClamp{ /* max_slippage = */ 0.005 };  // 0.5%
```
---
# 🎯 Who Benefits?
## 📈 Retail & Quant Traders
- AI-driven live execution
- Sub-millisecond architecture
- Institutional-grade safety
## 🏢 Proprietary Firms
- Rapid FPGA prototyping (SIL)
- Deterministic backtesting
- Infrastructure experimentation
## 🎓 Computer Science Students
Real-world examples of:
- Lock-free systems
- LLVM optimization
- Vectorized math
- HPC finance pipelines
---
# 🚀 Roadmap
- [ ] GPU kernel fusion experiments
- [ ] Native FPGA backend
- [ ] Linux ultra-low-latency build
- [ ] Advanced order routing simulator
- [ ] Real exchange connectivity modules
---
# 🧪 Troubleshooting
## 📈 High Latency Spikes
- Verify CPU scaling disabled
- Ensure LTO enabled
- Confirm AVX2 available
## 🙅🏻♀️ Model Not Loading
- Validate `fdeep_model.json`
- Ensure correct path
- Check weight precision compatibility
## 🏢 Build Issues
- Confirm Clang version compatibility
- Ensure `lld` installed
- Rebuild with verbose logging
---
# ☢️ Disclaimer
> Shaurya is intended for:
- Research
- Education
- Systems experimentation
It is **not financial advice** and **not production-certified trading infrastructure**.
> Users assume full responsibility for:
- Trading decisions
- Compliance
- Regulatory adherence
- Capital risk
---
# 🏁 Final Note
The engine is solely developed by Harshit Kumar Singh (that's me 😉).
Shaurya v.0.2.0 represents a shift toward democratized institutional-grade infrastructure: merging AI, compiler engineering, and hardware-style safety into a single deterministic execution engine.
> If this project helps you, consider ⭐ starring the repository and contributing to future releases. Until then, happy coding 😊.
`ad astra per aspera 🛩️`
| text/markdown | null | Harshit Kumar Singh <harshitsinghcode@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: C++",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T17:03:46.600522 | hft_shaurya-0.2.0.tar.gz | 5,329 | 04/b4/f70b486abfd3d12ba2e9d980173333c71ed2f11a5d5203192822f5795ea1/hft_shaurya-0.2.0.tar.gz | source | sdist | null | false | 2becd938eba6f5c2773fc53d48f0a05b | 4e0ad054e3e787ede3ab16bec1a08e5c866416dbe5e60d3f98afcb6156e68f11 | 04b4f70b486abfd3d12ba2e9d980173333c71ed2f11a5d5203192822f5795ea1 | null | [] | 0 |
2.4 | envelope-decay-fit | 0.2.2 | Piecewise exponential decay fitting for time-domain response envelopes with damping ratio extraction | # envelope-decay-fit
Piecewise exponential decay fitting for time-domain response envelopes, with explicit damping ratio (ζ) extraction and rich diagnostics.
This package is designed as a **small, focused utility** that can be:
* used standalone (CLI + CSV + plots) for development and debugging,
* embedded programmatically into larger workflows (e.g. `wav-to-freq`).
---
## What this does
Given a **time-domain envelope** `env(t)` (e.g. Hilbert envelope) and a known natural frequency `f_n`, this tool:
* fits exponential decay models of the form:
  $$\mathrm{env}(t) \approx A e^{-\alpha t} \,(+\, C)$$
* derives the damping ratio:
  $$\zeta = \frac{\alpha}{2 \pi f_n}$$
* handles **real-world data** where decay is *not* purely exponential:
* early hit artifacts,
* modal coupling,
* noise-floor dominated tails,
* supports **manual, human-in-the-loop breakpoints** as the primary workflow,
* keeps the automatic segmentation pipeline as experimental/secondary,
* computes **multiple fits** per window (log-domain, linear w/ and w/o floor),
* offers **span-based Tx measurement** (T10/T20/T30/T60) as a second method,
* reports **all results**, plus quality metrics and flags.
The philosophy is: **compute everything, report everything, decide later**.
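Since `log(env)` is linear in `t` for a pure exponential, the log-domain fit reduces to a degree-1 least-squares problem. A minimal sketch (`log_domain_zeta` is a name invented here; the package's implementation adds floors, windowing, and diagnostics on top of this):

```python
import numpy as np

def log_domain_zeta(t, env, fn_hz):
    """Fit log(env) ~ log(A) - alpha*t and return zeta = alpha / (2*pi*fn)."""
    slope, _intercept = np.polyfit(t, np.log(env), 1)
    alpha = -slope
    return alpha / (2.0 * np.pi * fn_hz)

# Synthetic check: a clean exponential with known damping ratio.
fn_hz, zeta_true = 150.0, 0.02
t = np.linspace(0.0, 1.0, 500)
env = 1.3 * np.exp(-2.0 * np.pi * fn_hz * zeta_true * t)
```

On noisy or floor-dominated data this naive fit biases `alpha` low, which is exactly why the package offers linear-domain fits with a floor term as alternatives.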
---
## Installation
### From PyPI (once published)
```bash
pip install envelope-decay-fit
```
or with `uv`:
```bash
uv pip install envelope-decay-fit
```
### From source (development)
```bash
git clone https://github.com/<your-org>/envelope-decay-fit.git
cd envelope-decay-fit
uv venv
uv pip install -e .
```
---
## CLI usage
The CLI is intended mainly for **debugging, exploration, and small workflows**.
### Input format
Input is a CSV file with **at least** the following columns:
* `t_s` — time in seconds (strictly increasing)
* `env` — envelope amplitude
Example:
```csv
t_s,env
0.0000,1.23
0.0005,1.18
0.0010,1.12
...
```
### Basic example (manual workflow)
Default mode runs manual segmentation and then fits immediately:
```bash
env-decay-fit input.csv --fn-hz 150.0
```
Explicit combined command (same behavior):
```bash
env-decay-fit segment-fit input.csv --fn-hz 150.0
```
Or run the steps separately:
```bash
env-decay-fit segment input.csv --fn-hz 150.0 \
--breakpoints-out out/breakpoints.json
env-decay-fit fit input.csv --fn-hz 150.0 \
--breakpoints-file out/breakpoints.json
```
### Manual segmentation controls
The UI is keyboard-driven. Mouse movement updates the cursor position.
* `a`: add boundary at cursor (snapped to nearest sample)
* `x`: delete nearest boundary
* `c`: clear all boundaries
* `l`: toggle y-scale (lin/log)
* `h`: toggle help panel
* `q`: quit and save current state
### Other notes
* The CLI writes outputs to `out/` by default.
* Run `env-decay-fit --help` for the full list of options.
---
## Outputs
The CLI writes to `out/` by default.
* `breakpoints.json` — manual breakpoints from the UI (segment command)
* `fit_result.json` — piecewise fit summary (fit command)
* `segmentation_storyboard.png` — envelope with fitted pieces
---
## Programmatic usage
The core API performs **no file I/O by default** and returns a structured result.
```python
import numpy as np
from envelope_decay_fit import (
fit_piecewise_manual,
launch_manual_segmentation_ui,
plot_segmentation_storyboard,
)
t = np.array([...])
env = np.array([...])
fn_hz = 150.0
# Manual breakpoints supplied explicitly
breakpoints_t = [t[0], t[-1]]
fit = fit_piecewise_manual(t, env, breakpoints_t, fn_hz=fn_hz)
# Optional interactive UI for breakpoint selection
breakpoints_t = launch_manual_segmentation_ui(t, env, fn_hz=fn_hz)
# Plotting (no file I/O unless you save the figure)
fig = plot_segmentation_storyboard(t, env, fit)
fig.savefig("out/storyboard.png", dpi=150)
```
Experimental auto segmentation is available as `fit_piecewise_auto(...)`, but it
is intentionally not the default workflow.
---
## Design notes
* The decay start time `t0` is **not fitted as a free parameter**.
Amplitude is anchored to the window start to keep fits well-posed.
* Breakpoints are detected using **change-point detection** on the log-fit R² trace.
* Tail trimming is automatic and robust (median + MAD), and applied before fitting.
* The tool never silently discards results: questionable regions are **flagged**, not hidden.
See `specs.md` and `implementation.md` for full technical details.
---
## Testing and validation
Unit tests live in `tests/` and are fast by default:
```bash
pytest -q
```
Slow, human-review workflows live under `validation/`:
```bash
uv run python validation/review_runner.py
```
---
## License
MIT License.
You are free to use, modify, and embed this code in other projects.
---
## Status
**Version 0.2.0** - Working prototype
The package is functional and has been tested on real datasets. Key features:
- ✅ Three fitting methods (LOG, LIN0, LINC)
- ✅ Piecewise decay extraction with breakpoint detection
- ✅ Diagnostic plots and flag system
- ✅ CLI and programmatic API
- ✅ Span-based Tx measurement (interactive, opt-in)
### Known Limitations (v0.2.0)
1. **Window Sampling**: For performance, expanding windows are sampled (max 500 by default) rather than generating all possible windows. This is a pragmatic optimization that provides good results while keeping computation tractable for large datasets.
2. **No Tail Trimming**: Automatic tail trimming (median + MAD floor estimation) is not yet implemented. Users should pre-process data to remove noise-dominated tails if needed.
3. **Beating Detection**: The package will flag unusual patterns (low R², negative damping) but does not explicitly detect or segment beating phenomena. This is deferred to future versions.
4. **CSV Output**: The package currently generates plots but does not write CSV files for windows_trace, pieces, or flags. This can be added if needed.
5. **Limited Validation**: The breakpoint detection works well for clean exponential decays but may struggle with complex multi-modal responses or heavy beating.
### Performance
- Typical runtime: ~5-10 seconds for 68K samples (1.5s duration) with `max_windows=300`
- Scales linearly with `max_windows` parameter
- For very large datasets, consider downsampling or reducing `max_windows`
### Future Work (v0.3.0+)
- Implement tail trimming
- Add CSV output writers
- Improve breakpoint detection for complex signals
- Add beating detection
- Optimize window generation (Option B from specs)
- Add comprehensive test suite
Feedback and experiments are welcome!
| text/markdown | null | Pierre Lacerte <placerte@opsun.com> | null | null | MIT | damping-ratio, exponential-decay, modal-analysis, signal-processing, structural-dynamics | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"matplotlib>=3.8",
"mpl-plot-report>=0.1.0",
"numpy>=1.26",
"produm>=0.1.2",
"scipy>=1.11",
"pandas>=3.0; extra == \"dev\"",
"pytest-cov>=4.1; extra == \"dev\"",
"pytest>=7.4; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/placerte/envelope-decay-fit",
"Repository, https://github.com/placerte/envelope-decay-fit",
"Issues, https://github.com/placerte/envelope-decay-fit/issues"
] | uv/0.9.5 | 2026-02-18T17:03:27.937229 | envelope_decay_fit-0.2.2.tar.gz | 118,481 | 77/15/7fa5de2961daa288e883fb5716fe068ce88fafb55dbf0db0f44d418b404c/envelope_decay_fit-0.2.2.tar.gz | source | sdist | null | false | 381ec26c5775bcde038058c166e969e2 | dead8940595fb730f4a08f27f7c62663c44f91abb2327acceba7a06e82612b06 | 77157fa5de2961daa288e883fb5716fe068ce88fafb55dbf0db0f44d418b404c | null | [
"LICENSE"
] | 237 |
2.4 | uitk | 1.0.89 | A comprehensive UI toolkit extending Qt Designer workflows with dynamic loading, custom widgets, and automatic signal-slot management. | [](https://www.gnu.org/licenses/lgpl-3.0.en.html)
[](https://pypi.org/project/uitk/)
[](https://www.python.org/)
[](https://doc.qt.io/)
[](test/)
# UITK
<!-- short_description_start -->
**Name it, and it connects.** UITK is a convention-driven Qt framework that eliminates boilerplate. Design in Qt Designer, name your widgets, write matching Python methods—everything else is automatic. While conventions handle the common cases, you retain full control to customize anything.
<!-- short_description_end -->
## Installation
```bash
pip install uitk
```
---
## Quick Start
```python
from uitk import Switchboard
class MySlots:
def __init__(self, **kwargs):
self.sb = kwargs.get("switchboard")
self.ui = self.sb.loaded_ui.my_app
def btn_save_init(self, widget):
"""Called once when widget registers."""
widget.setText("Save")
def btn_save(self):
"""Called on every click."""
print("Saved!")
sb = Switchboard(ui_source="./", slot_source=MySlots)
sb.loaded_ui.my_app.show(app_exec=True)
```
That's it. No `connect()` calls. No widget lookups. No manual state management.
---
## Where Convention Applies
UITK uses naming conventions to automatically wire your application. Understanding where convention applies helps you work with the framework effectively.
### What Gets Auto-Wired
| Convention | Pattern | What Happens |
|------------|---------|--------------|
| **UI → Slot Class** | `my_app.ui` → `MyAppSlots` | Slot class discovered and instantiated |
| **Widget → Slot** | `btn_save` → `def btn_save()` | Widget's default signal connected to method |
| **Widget → Init** | `btn_save` → `def btn_save_init(widget)` | Method called once when widget registers |
| **UI Hierarchy** | `menu.file.ui` | Child of `menu.ui`, accessed via `get_ui_relatives()` |
| **Tags** | `panel#floating.ui` | Tags extracted to `ui.tags` set |
### What You Still Control
Conventions provide sensible defaults, but you can always override:
```python
from uitk import Signals
# Override default signal
@Signals("textChanged") # Instead of editingFinished
def txt_search(self, text):
self.filter_results(text)
# Connect additional signals manually
widget.clicked.connect(my_handler)
# Use any Qt API directly
widget.setStyleSheet("background: red;")
```
---
## Naming Conventions
### UI File → Slot Class
| UI Filename | Slot Class Name |
|-------------|-----------------|
| `editor.ui` | `EditorSlots` |
| `file_browser.ui` | `FileBrowserSlots` |
| `export-dialog.ui` | `ExportDialogSlots` |
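The filename-to-class mapping above is simple enough to sketch in plain Python. The helper below is a hypothetical illustration of the assumed rule (not part of the uitk API): strip the `.ui` extension, split on underscores and hyphens, CamelCase the parts, and append `Slots`:

```python
import re

def slot_class_name(ui_filename: str) -> str:
    """Derive the expected slot class name from a .ui filename.
    Illustrative only -- uitk performs this lookup internally."""
    stem = ui_filename.rsplit(".", 1)[0]           # drop the .ui extension
    parts = re.split(r"[-_]", stem)                # split on '-' and '_'
    return "".join(p.capitalize() for p in parts) + "Slots"

print(slot_class_name("editor.ui"))          # EditorSlots
print(slot_class_name("file_browser.ui"))    # FileBrowserSlots
print(slot_class_name("export-dialog.ui"))   # ExportDialogSlots
```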
### Widget → Methods
| Widget objectName | Slot Method | Init Method |
|-------------------|-------------|-------------|
| `btn_save` | `def btn_save(self)` | `def btn_save_init(self, widget)` |
| `txt_name` | `def txt_name(self)` | `def txt_name_init(self, widget)` |
| `cmb_type` | `def cmb_type(self, index)` | `def cmb_type_init(self, widget)` |
| `chk_active` | `def chk_active(self, state)` | `def chk_active_init(self, widget)` |
### Default Signals
| Widget Type | Default Signal | Slot Receives |
|-------------|----------------|---------------|
| `QPushButton` | `clicked` | — |
| `QCheckBox` | `stateChanged` | `state` |
| `QRadioButton` | `toggled` | `checked` |
| `QComboBox` | `currentIndexChanged` | `index` |
| `QLineEdit` | `editingFinished` | — |
| `QTextEdit` | `textChanged` | — |
| `QSpinBox` | `valueChanged` | `value` |
| `QDoubleSpinBox` | `valueChanged` | `value` |
| `QSlider` | `valueChanged` | `value` |
| `QListWidget` | `itemSelectionChanged` | — |
| `QTreeWidget` | `itemClicked` | `item, column` |
| `QTableWidget` | `cellClicked` | `row, column` |
### Slot Parameter Injection
Slots can request any combination of these parameters:
```python
def btn_action(self): # No params
def btn_action(self, widget): # Widget only
def btn_action(self, widget, ui): # Widget + UI
def btn_action(self, widget, ui, sb): # All three
def cmb_option(self, index, widget, ui, sb): # Signal arg + all three
```
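Injection like this is typically implemented by inspecting the slot's signature at connect time and passing only the arguments it names. The sketch below shows the assumed mechanism using `inspect.signature`; it is not uitk's actual code, and `self` is omitted for brevity:

```python
import inspect

def call_with_injection(slot, available):
    """Call `slot` with only the keyword arguments it declares.
    Illustrative stand-in for uitk's parameter injection."""
    params = inspect.signature(slot).parameters
    kwargs = {name: available[name] for name in params if name in available}
    return slot(**kwargs)

def btn_action(widget, ui):
    return f"clicked {widget} in {ui}"

# The slot asked for widget and ui, so sb is simply not passed.
print(call_with_injection(btn_action, {"widget": "btn_save", "ui": "my_app", "sb": None}))
# clicked btn_save in my_app
```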
---
## Widget Enhancements
Every widget automatically gains these capabilities:
### `.menu` — Popup Menu
```python
def btn_options_init(self, widget):
menu = widget.menu
menu.setTitle("Settings")
menu.add("QCheckBox", setText="Auto-save", setObjectName="chk_auto")
menu.add("QSpinBox", setPrefix="Interval: ", setObjectName="spn_int")
menu.add("QSeparator")
menu.add("QPushButton", setText="Apply", setObjectName="btn_apply")
def btn_options(self):
menu = self.ui.btn_options.menu
auto = menu.chk_auto.isChecked()
interval = menu.spn_int.value()
```
### `.option_box` — Action Panel
```python
def txt_path_init(self, widget):
menu = widget.option_box.menu
menu.add(
"QPushButton",
setText="Browse...",
setObjectName="btn_browse"
)
menu.btn_browse.clicked.connect(self.browse)
```
### `menu.add()` Flexibility
```python
# Widget type as string
menu.add("QDoubleSpinBox", setValue=1.0, setObjectName="spn")
# Batch add from list
menu.add(["Option A", "Option B", "Option C"])
# Dict with data
menu.add({"Save": save_data, "Load": load_data})
# Separator
menu.add("QSeparator")
# Access added widgets by objectName
value = menu.spn.value()
```
---
## Automatic State Persistence
Widget values save on change and restore on next show:
```python
# User sets spinbox to 5, closes app
# Next launch: spinbox is 5 again
# Disable per widget
widget.restore_state = False
# Disable for entire UI
ui.restore_widget_states = False
# Window geometry also persists automatically
ui.restore_window_size = False # Disable if needed
```
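Under the hood this kind of persistence amounts to writing widget values to a store on change and reading them back on show. uitk uses QSettings; the stdlib stand-in below only illustrates the save/restore round trip:

```python
import json
import os
import tempfile

class StateStore:
    """Minimal sketch of widget-state persistence.
    uitk actually uses QSettings; this JSON file is a stand-in."""
    def __init__(self, path: str):
        self.path = path

    def save(self, values: dict) -> None:
        with open(self.path, "w") as f:
            json.dump(values, f)

    def restore(self) -> dict:
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.gettempdir(), "uitk_state_demo.json")
store = StateStore(path)
store.save({"spn_count": 5, "chk_auto": True})   # on change
print(store.restore())                            # on next show
```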
---
## Theming
```python
# Apply theme
ui.style.set(theme="dark", style_class="translucentBgWithBorder")
# Icons are monochrome and auto-colored to match the theme
icon = sb.get_icon("save")
```
---
## UI Hierarchy & Tags
### Hierarchy via Naming
```
menu.ui # Parent
menu.file.ui # Child of menu
menu.file.recent.ui # Grandchild
```
```python
ancestors = sb.get_ui_relatives(ui, upstream=True)
children = sb.get_ui_relatives(ui, downstream=True)
```
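The upstream lookup can be pictured as peeling prefixes off the dotted name. A tiny illustrative helper (an assumption about the convention, not the uitk API):

```python
def ui_ancestors(name: str) -> list:
    """Return ancestor UI names implied by a dotted hierarchy name.
    Illustrative only -- uitk resolves this via get_ui_relatives()."""
    parts = name.split(".")
    return [".".join(parts[:i]) for i in range(1, len(parts))]

print(ui_ancestors("menu.file.recent"))  # ['menu', 'menu.file']
```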
### Tags via `#`
```
panel#floating.ui # tags: {"floating"}
dialog#modal#dark.ui # tags: {"modal", "dark"}
```
```python
if ui.has_tags("modal"):
ui.edit_tags(add="active")
```
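Tag extraction follows directly from the filename grammar: everything after a `#` (and before the `.ui` extension) is a tag. A hypothetical helper showing the assumed parse:

```python
def parse_ui_name(filename: str):
    """Split a uitk-style UI filename into (base name, tag set).
    Illustrative sketch of the '#' tag convention."""
    stem = filename.rsplit(".", 1)[0]   # strip the .ui extension
    base, *tags = stem.split("#")
    return base, set(tags)

print(parse_ui_name("panel#floating.ui"))     # ('panel', {'floating'})
print(parse_ui_name("dialog#modal#dark.ui"))  # ('dialog', {'modal', 'dark'})
```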
---
## MainWindow
Every UI is wrapped in `MainWindow`, providing these properties and methods:
### Properties
| Property | Type | Description |
|----------|------|-------------|
| `ui.sb` | `Switchboard` | Reference to switchboard |
| `ui.widgets` | `set` | All registered child widgets |
| `ui.slots` | `object` | Slot class instance |
| `ui.settings` | `SettingsManager` | Persistent settings |
| `ui.state` | `StateManager` | Widget state persistence |
| `ui.style` | `StyleSheet` | Theme manager |
| `ui.tags` | `set` | Tags from UI name |
| `ui.path` | `str` | Path to .ui file |
| `ui.is_initialized` | `bool` | True after first show |
| `ui.is_current_ui` | `bool` | True if active UI |
| `ui.is_pinned` | `bool` | True if pinned (won't auto-hide) |
| `ui.header` | `Header` | Header widget (if present) |
| `ui.footer` | `Footer` | Footer widget (if present) |
### Signals
| Signal | Emitted When |
|--------|--------------|
| `on_show` | Window shown |
| `on_hide` | Window hidden |
| `on_close` | Window closed |
| `on_focus_in` | Window gains focus |
| `on_focus_out` | Window loses focus |
| `on_child_registered` | Widget registered |
| `on_child_changed` | Widget value changes |
### Methods
```python
# Window configuration
ui.set_attributes(WA_TranslucentBackground=True)
ui.set_flags(FramelessWindowHint=True, WindowStaysOnTopHint=True)
# Show with positioning
ui.show() # Default position
ui.show(pos="screen") # Center on screen
ui.show(pos="cursor") # At cursor
ui.show(app_exec=True) # Start event loop
# Tag management
ui.has_tags("submenu")
ui.edit_tags(add="active", remove="inactive")
```
---
## Widget Attributes Added by Registration
When widgets register, they gain these attributes:
| Attribute | Description |
|-----------|-------------|
| `widget.ui` | Parent MainWindow |
| `widget.base_name()` | Name without tags/suffixes |
| `widget.legal_name()` | Name with special chars replaced |
| `widget.type` | Widget class |
| `widget.derived_type` | Qt base type |
| `widget.default_signals()` | Default signal names |
| `widget.get_slot()` | Get connected slot method |
| `widget.call_slot()` | Manually invoke slot |
| `widget.init_slot()` | Trigger init method |
| `widget.connect_slot()` | Connect to slot |
| `widget.is_initialized` | True after init called |
| `widget.restore_state` | Enable/disable persistence |
---
## Switchboard Utilities
The switchboard provides many helper methods:
### Dialogs
```python
sb.message_box("Operation complete!")
sb.message_box("Choose:", "Yes", "No", "Cancel")
path = sb.file_dialog(file_types="Images (*.png *.jpg)")
folder = sb.dir_dialog()
```
### Widget Helpers
```python
sb.center_widget(widget) # Center on screen
sb.center_widget(widget, relative=other) # Size relative to another widget
sb.toggle_multi(ui, setDisabled="btn_a,btn_b") # Batch property toggle
sb.connect_multi(ui, widgets, signals, slots) # Batch connect
sb.create_button_groups(ui, "chk_001-3") # Radio group from range
```
### UI Navigation
```python
sb.current_ui # Active UI
sb.prev_ui # Previous UI
sb.ui_history() # Full UI history
sb.ui_history(-1) # Previous UI by index
sb.get_ui("editor") # Get by name
sb.get_ui_relatives(ui, upstream=True)
```
---
## Custom Widgets Included
UITK provides enhanced versions of common widgets:
| Widget | Enhancements |
|--------|--------------|
| `PushButton` | Menu, option box, rich text |
| `CheckBox` | Menu, option box |
| `ComboBox` | Header text, alignment, menu |
| `LineEdit` | Action colors (valid/invalid/warning), menu |
| `TextEdit` | Enhanced text handling |
| `Label` | Rich text, text overlay |
| `TreeWidget` | Hierarchy icons, item helpers |
| `TableWidget` | Enhanced cell handling |
| `Menu` | Dynamic add(), grid layout, positioning |
| `Header` | Draggable, pin/minimize/close buttons |
| `Footer` | Status text, size grip |
| `CollapsableGroup` | Expandable/collapsible sections |
| `ColorSwatch` | Color picker widget |
| `ProgressBar` | Enhanced progress display |
| `MessageBox` | Styled message dialogs |
---
## Package Structure
```
uitk/
├── __init__.py
├── switchboard.py # Core: UI loading, slot wiring, registries
├── signals.py # @Signals decorator
├── events.py # EventFactoryFilter, MouseTracking
├── file_manager.py # FileContainer, FileManager
│
├── widgets/
│ ├── mainWindow.py # MainWindow wrapper
│ ├── menu.py # Dynamic Menu
│ ├── header.py # Draggable header bar
│ ├── footer.py # Status bar with size grip
│ ├── pushButton.py # Enhanced button
│ ├── checkBox.py # Enhanced checkbox
│ ├── comboBox.py # ComboBox with header
│ ├── lineEdit.py # Input with action colors
│ ├── textEdit.py # Enhanced text editor
│ ├── label.py # Rich text label
│ ├── treeWidget.py # Tree with icons
│ ├── tableWidget.py # Enhanced table
│ ├── progressBar.py # Progress display
│ ├── messageBox.py # Styled dialogs
│ ├── collapsableGroup.py # Expandable sections
│ ├── colorSwatch.py # Color picker
│ ├── separator.py # Visual separator
│ ├── region.py # Layout region
│ │
│ ├── optionBox/ # Option box system
│ │ ├── _optionBox.py # OptionBox, OptionBoxContainer
│ │ ├── utils.py # OptionBoxManager
│ │ └── options/ # ClearOption, PinOption, etc.
│ │
│ └── mixins/
│ ├── attributes.py # set_attributes(), set_flags()
│ ├── menu_mixin.py # .menu property
│ ├── option_box_mixin.py # .option_box property
│ ├── state_manager.py # Widget state persistence
│ ├── settings_manager.py # QSettings wrapper
│ ├── style_sheet.py # Theme management
│ ├── value_manager.py # Widget value get/set
│ ├── icon_manager.py # Icon loading and theming
│ ├── text.py # RichText, TextOverlay
│ ├── convert.py # Type conversions
│ ├── shortcuts.py # Keyboard shortcuts
│ ├── tasks.py # Background tasks
│ ├── docking.py # Docking behavior
│ ├── switchboard_slots.py # Slot connection logic
│ ├── switchboard_widgets.py # Widget registration
│ ├── switchboard_utils.py # Helper utilities
│ └── switchboard_names.py # Name/tag handling
│
├── icons/ # Monochrome icons (auto-colored by theme)
└── examples/ # Example application
```
---
## Complete Example
```python
from uitk import Switchboard
class EditorSlots:
def __init__(self, **kwargs):
self.sb = kwargs.get("switchboard")
self.ui = self.sb.loaded_ui.editor
# Button initialization
def btn_open_init(self, widget):
menu = widget.menu
menu.add("QPushButton", setText="Recent...", setObjectName="btn_recent")
menu.btn_recent.clicked.connect(self.show_recent)
def btn_open(self):
path = self.sb.file_dialog(file_types="Text (*.txt)")
        if path:
            with open(path) as f:
                self.ui.txt_content.setText(f.read())
            self.ui.lbl_status.setText(f"Opened: {path}")
def btn_save(self):
self.sb.message_box("Saved!")
# ComboBox with index parameter
def cmb_font_init(self, widget):
widget.addItems(["Arial", "Helvetica", "Courier"])
def cmb_font(self, index):
font_name = self.ui.cmb_font.currentText()
self.ui.txt_content.setFont(self.sb.QtGui.QFont(font_name))
# Checkbox with state parameter
def chk_wrap(self, state):
QTextEdit = self.sb.QtWidgets.QTextEdit
mode = QTextEdit.WidgetWidth if state else QTextEdit.NoWrap
self.ui.txt_content.setLineWrapMode(mode)
def show_recent(self):
self.sb.message_box("Recent files...")
sb = Switchboard(ui_source="./", slot_source=EditorSlots)
ui = sb.loaded_ui.editor
ui.style.set(theme="dark")
ui.show(pos="screen", app_exec=True)
```
---
## Feature Summary
### Core Architecture
- Convention-based UI loading and slot connection
- Automatic widget registration with attribute injection
- Lazy loading of UIs on first access
- UI hierarchy via filename patterns
- Tag system for UI categorization
### Signal/Slot System
- Auto-connection via naming convention
- Default signal mappings for all widget types
- `@Signals` decorator for custom signals
- Parameter injection (widget, ui, sb)
- Slot history tracking
### State Management
- Automatic widget value persistence
- Window geometry save/restore
- Cross-UI widget sync
- Per-widget and per-UI control
- QSettings-based storage
### Widget Enhancements
- `.menu` popup on any widget
- `.option_box` action panels
- Rich text support
- Action colors for validation
- Menu with dynamic `add()`
### Theming
- Light/dark themes with custom theme support
- Monochrome icons auto-colored by theme
- StyleSheet manager
- Translucent window styles
### Utilities
- Message box dialogs
- File/directory dialogs
- Widget centering
- Batch operations
- Button group creation
- Deferred execution
### Custom Widgets
- Enhanced versions of all common widgets
- Draggable header with controls
- Collapsable groups
- Color swatches
- Tree with hierarchy icons
### Event Handling
- EventFactoryFilter for custom events
- MouseTracking for hover detection
- Focus tracking
- Window lifecycle signals
---
## Contributing
```bash
python -m pytest test/ -v
```
## License
LGPL v3 — See [LICENSE](../COPYING.LESSER)
| text/markdown | null | Ryan Simpson <m3trik@outlook.com> | null | null | LGPL-3.0-or-later | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pythontk>=0.7.78"
] | [] | [] | [] | [
"Homepage, https://github.com/m3trik/uitk",
"Repository, https://github.com/m3trik/uitk"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:02:54.416199 | uitk-1.0.89-py3-none-any.whl | 338,827 | 68/3b/e3ef1007acc548dfa18104cd93ad0b036cd57944403ec2c4e24e7563883c/uitk-1.0.89-py3-none-any.whl | py3 | bdist_wheel | null | false | 2cdc43e535902d3e9b83e53ef4a587cc | 576fb317a1b91cacccf0db9af5d71bddb4c7c7b85d0dc448e02d2a200cb5e481 | 683be3ef1007acc548dfa18104cd93ad0b036cd57944403ec2c4e24e7563883c | null | [
"COPYING.LESSER"
] | 127 |
2.4 | chuk-term | 0.4 | A terminal library with CLI interface | # ChukTerm
A modern terminal library with a powerful CLI interface for building beautiful terminal applications in Python.
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
[](docs/testing/TEST_COVERAGE.md)
[](docs/testing/UNIT_TESTING.md)
## 🆕 What's New in v0.2
- 📈 **Progress Indicators**: New `progress_bar()`, `track()`, and `spinner()` methods for better user feedback
- 🎨 **Theme Preview CLI**: `chuk-term themes --side-by-side` to compare all 8 themes
- 📚 **Examples Browser**: `chuk-term examples` to discover and run examples
- 🧪 **Improved Testing**: 93% code coverage with 616 passing tests
- 📖 **Better Documentation**: New CONTRIBUTING.md and updated guides
- 🔧 **Quality Automation**: Pre-commit hooks with latest tools (ruff, black, mypy)
See [CHANGELOG.md](CHANGELOG.md) for complete release history.
## ✨ Features
- 🎨 **Rich UI Components**: Banners, prompts, formatters, and code display with syntax highlighting
- 🎯 **Centralized Output Management**: Consistent console output with multiple log levels
- 🎭 **Theme Support**: 8 built-in themes including default, dark, light, minimal, terminal, monokai, dracula, and solarized
- 📝 **Code Display**: Syntax highlighting, diffs, code reviews, and side-by-side comparisons
- 🔧 **Terminal Management**: Screen control, cursor management, hyperlinks, and color detection
- 💬 **Interactive Prompts**: Text input, confirmations, number input, single/multi selection menus
- 📊 **Data Formatting**: Tables, trees, JSON, timestamps, and structured output
- 📈 **Progress Indicators**: Progress bars, spinners, and task tracking with `track()` and `progress_bar()`
- 🌊 **Streaming Support**: Live-updating messages for real-time content streaming (LLM-style)
- 🤖 **AI-Friendly**: Designed for AI agents with comprehensive docs and consistent APIs
- 🔄 **Environment Adaptation**: Automatically adapts to TTY, CI, and NO_COLOR environments
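The environment checks in the last bullet follow common terminal conventions. A rough stdlib sketch of the decision logic (assumed behavior, not ChukTerm's actual implementation): honor the `NO_COLOR` opt-out, treat `TERM=dumb` as colorless, and fall back to plain text when output is not a TTY (CI logs, pipes):

```python
import os
import sys

def color_enabled(stream=sys.stdout) -> bool:
    """Decide whether ANSI color output is appropriate.
    Sketch of the conventional checks a terminal library performs."""
    if "NO_COLOR" in os.environ:            # user opted out of color entirely
        return False
    if os.environ.get("TERM") == "dumb":    # terminal cannot render color
        return False
    # Pipes and CI capture are not TTYs, so emit plain text there.
    return hasattr(stream, "isatty") and stream.isatty()

print(color_enabled())  # False when piped, True in an interactive terminal
```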
## 📦 Installation
### Using uv (Recommended)
```bash
# Install as dependency
uv add chuk-term
# Install globally as tool
uv tool install chuk-term
```
### Using pip
```bash
# Install from PyPI
pip install chuk-term
# With development dependencies
pip install chuk-term[dev]
```
### From Source (Development)
```bash
# Clone the repository
git clone https://github.com/chrishayuk/chuk-term.git
cd chuk-term
# Install with uv (recommended)
uv sync --dev
# Or with pip
pip install -e ".[dev]"
# Verify installation
chuk-term --version
```
For detailed installation instructions, see the [Getting Started Guide](docs/ui/GETTING_STARTED.md#installation).
## 🚀 Quick Start
### For AI Agents and LLMs
ChukTerm is designed to be AI-friendly. For comprehensive guidance:
- 📖 Read the [Getting Started Guide](docs/ui/GETTING_STARTED.md)
- 🤖 Check [llms.txt](llms.txt) for LLM-optimized documentation
- 📚 Browse [examples](examples/) for working code
### Basic Output
```python
from chuk_term.ui import output
# Different message types
output.success("✓ Operation completed successfully")
output.error("✗ An error occurred")
output.warning("⚠ This needs attention")
output.info("ℹ Information message")
output.debug("🔍 Debug information")
# Special formatting
output.tip("💡 Pro tip: Use themes for better visuals")
output.hint("This is a subtle hint")
output.command("git commit -m 'Initial commit'")
```
### Interactive Prompts
```python
from chuk_term.ui import ask, confirm, select_from_list, ask_number
# Get user input
name = ask("What's your name?")
age = ask_number("How old are you?", min_value=0, max_value=150)
# Confirmation
if confirm("Would you like to continue?"):
output.success("Great! Let's proceed...")
# Selection menu
theme = select_from_list(
["default", "dark", "light", "minimal", "terminal"],
"Choose a theme:"
)
```
### Display Code with Syntax Highlighting
```python
from chuk_term.ui import display_code, display_diff
# Show code with syntax highlighting
code = '''def hello_world():
print("Hello from ChukTerm!")
return True'''
display_code(code, language="python", title="Example Function")
# Show a diff
display_diff(
old_text="Hello World",
new_text="Hello ChukTerm!",
title="Changes"
)
```
### Streaming Messages (New!)
```python
from chuk_term.ui.streaming import StreamingMessage, StreamingAssistant
import time
# Basic streaming message
with StreamingMessage(title="🤖 Assistant") as stream:
stream.update("Processing your request")
time.sleep(0.5)
stream.update("...")
time.sleep(0.5)
stream.update(" Done!")
# Automatically finalizes with Markdown rendering
# Using StreamingAssistant for simpler API
assistant = StreamingAssistant()
stream = assistant.start()
for word in "Hello from ChukTerm streaming!".split():
assistant.update(word + " ")
time.sleep(0.2)
assistant.finalize()
```
### Progress Indicators (New!)
```python
from chuk_term.ui import output
import time
# Progress bar with detailed tracking
with output.progress_bar("Processing files", show_time=True) as progress:
task = progress.add_task("download", total=100)
for i in range(100):
progress.update(task, advance=1)
time.sleep(0.01)
# Simple iteration tracking
items = ['file1.txt', 'file2.txt', 'file3.txt']
for item in output.track(items, "Processing files"):
process(item) # Your processing logic
# Spinner for indeterminate tasks
with output.spinner("Loading data..."):
load_data() # Long-running operation
```
### Theme Support
```python
from chuk_term.ui import output
from chuk_term.ui.theme import set_theme, get_theme, get_available_themes
# Set a theme
set_theme("monokai") # or dracula, solarized, minimal, terminal
# Get current theme
current = get_theme()
output.info(f"Using theme: {current.name}")
# List all available themes
themes = get_available_themes()
output.info(f"Available themes: {', '.join(themes)}")
```
## 🖥️ CLI Usage
ChukTerm includes a powerful CLI for exploring and testing features:
```bash
# Show library information
chuk-term info
chuk-term info --verbose
# Run interactive demo
chuk-term demo
# Preview all themes (New!)
chuk-term themes # Detailed preview of each theme
chuk-term themes --side-by-side # Compact side-by-side comparison
# Explore examples (New!)
chuk-term examples # List all available examples with descriptions
chuk-term examples --list-only # Quick list of example names
chuk-term examples --run demo # Run a specific example
# Run a command with theming
chuk-term run "ls -la"
# Test with specific theme
chuk-term test --theme monokai
```
## 🛠️ Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/chrishayuk/chuk-term.git
cd chuk-term
# Install with dev dependencies using uv
uv sync --dev
# Install pre-commit hooks (automatic code quality checks)
uv run pre-commit install
# Verify installation
chuk-term --version
uv run pytest # Run tests
```
### Running Tests
```bash
# Run all tests with coverage
uv run pytest --cov=chuk_term
# Run specific test file
uv run pytest tests/ui/test_output.py
# Run with verbose output
uv run pytest -v
```
### Code Quality
```bash
# Run linting
uv run ruff check src/ tests/
# Auto-fix linting issues
uv run ruff check --fix src/ tests/
# Format code
uv run black src/ tests/
# Type checking
uv run mypy src/
# Run all checks
make check
```
### Available Make Commands
```bash
make help # Show all available commands
make dev # Install for development
make test # Run tests with coverage
make lint # Run linting checks
make format # Format code
make clean # Remove build artifacts
```
## 📚 Documentation
### Quick Start
- 🚀 **[Getting Started Guide](docs/ui/GETTING_STARTED.md)** - Installation, examples, and best practices
- 🤖 **[LLM Documentation](llms.txt)** - AI-optimized reference (llmstxt.org)
- 📂 **[Examples Directory](examples/)** - Working code examples
### Core Documentation
- [Output Management](docs/ui/output.md) - Centralized console output system
- [Terminal Management](docs/ui/terminal.md) - Terminal control and state management
- [Theme System](docs/ui/themes.md) - Theming and styling guide
- [Package Management](docs/PACKAGE_MANAGEMENT.md) - Using uv for dependency management
- [Unit Testing](docs/testing/UNIT_TESTING.md) - Testing guidelines and best practices
- [Test Coverage](docs/testing/TEST_COVERAGE.md) - Coverage requirements and reports
- [Code Quality](docs/testing/CODE_QUALITY.md) - Linting, formatting, and quality standards
### API Reference
#### Output Levels
- `output.debug()` - Debug information (only in verbose mode)
- `output.info()` - Informational messages
- `output.success()` - Success confirmations
- `output.warning()` - Warning messages
- `output.error()` - Error messages (non-fatal)
- `output.fatal()` - Fatal errors (exits program)
#### Themes
- **default** - Balanced colors, good for most terminals
- **dark** - High contrast on dark backgrounds
- **light** - Optimized for light terminals
- **minimal** - Plain text, no colors or icons
- **terminal** - Basic ANSI colors only
- **monokai** - Popular dark theme
- **dracula** - Gothic dark theme
- **solarized** - Low contrast, easy on eyes
## 📁 Examples
The [examples](examples/) directory contains demonstration scripts:
| File | Description |
|------|-------------|
| `ui_demo.py` | Comprehensive UI component showcase |
| `ui_code_demo.py` | Code display and syntax highlighting |
| `ui_output_demo.py` | Output management features |
| `ui_terminal_demo.py` | Terminal control capabilities |
| `ui_theme_independence.py` | Theme system demonstration |
| **Streaming Demos** | |
| `ui_streaming_demo.py` | Advanced streaming UI capabilities |
| `ui_streaming_message_demo.py` | StreamingMessage and StreamingAssistant demos |
| `ui_streaming_practical_demo.py` | Real-world streaming use cases |
| `ui_streaming_quickstart.py` | Simple streaming examples to get started |
| `streaming_long_text_demo.py` | Token-by-token streaming with proper wrapping |
| **Other** | |
| `ui_quick_test.py` | Quick functionality test |
Run any example:
```bash
# Run the main demo
uv run python examples/ui_demo.py
# Try the streaming demos
uv run python examples/ui_streaming_quickstart.py
uv run python examples/ui_streaming_practical_demo.py
```
## 🏗️ Project Structure
```
chuk-term/
├── src/chuk_term/
│ ├── __init__.py # Package metadata
│ ├── cli.py # CLI interface
│ └── ui/ # UI components
│ ├── output.py # Output management (singleton)
│ ├── terminal.py # Terminal control
│ ├── theme.py # Theme system (8 themes)
│ ├── prompts.py # User prompts
│ ├── formatters.py # Data formatters
│ ├── code.py # Code display
│ ├── banners.py # Banner displays
│ └── streaming.py # Streaming message support
├── tests/ # Test suite (616 tests, 93% coverage)
├── examples/ # Example scripts
│ ├── ui_demo.py
│ ├── ui_streaming_*.py # Streaming demonstrations
│ └── ...
├── docs/ # Documentation
│ ├── ui/ # UI documentation
│ │ └── GETTING_STARTED.md # Quick start guide
│ └── testing/ # Testing documentation
│ ├── UNIT_TESTING.md
│ └── TEST_COVERAGE.md
├── llms.txt # LLM-optimized docs (llmstxt.org)
├── CLAUDE.md # Project context for AI agents
├── CONTRIBUTING.md # Contribution guidelines
└── pyproject.toml # Package configuration
```
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
### Quick Contribution Steps
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run tests (`uv run pytest`)
5. Commit with a descriptive message
6. Push to your fork
7. Open a Pull Request
## 📄 License
This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built with [Rich](https://github.com/Textualize/rich) for beautiful terminal output
- Package management by [uv](https://github.com/astral-sh/uv)
- Testing with [pytest](https://pytest.org/)
## 📮 Support
- 📫 Report issues at [GitHub Issues](https://github.com/chrishayuk/chuk-term/issues)
- 💬 Discuss at [GitHub Discussions](https://github.com/chrishayuk/chuk-term/discussions)
- 📖 Read docs at [GitHub Wiki](https://github.com/chrishayuk/chuk-term/wiki) | text/markdown | null | null | null | null | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"rich>=13.0.0",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/chrishayuk/chuk-term",
"Repository, https://github.com/chrishayuk/chuk-term",
"Issues, https://github.com/chrishayuk/chuk-term/issues",
"Documentation, https://github.com/chrishayuk/chuk-term/tree/main/docs"
] | twine/6.1.0 CPython/3.11.11 | 2026-02-18T17:01:55.481320 | chuk_term-0.4.tar.gz | 199,469 | 63/1b/bc7382a4280ee18c8fe8654c9d9e3622f1d8e429813c8c84740ee926d728/chuk_term-0.4.tar.gz | source | sdist | null | false | f85a8881d041032fd911ed123c7f2f2c | 15e3050ec96d0fd7181571893c1567ba40b6c2d12a1a560dde2591af309d7587 | 631bbc7382a4280ee18c8fe8654c9d9e3622f1d8e429813c8c84740ee926d728 | null | [
"LICENSE"
] | 595 |
2.4 | agentfield | 0.1.42rc1 | Python SDK for the AgentField control plane | # AgentField Python SDK
The AgentField SDK provides a production-ready Python interface for registering agents, executing workflows, and integrating with the AgentField control plane.
## Installation
```bash
pip install agentfield
```
To work on the SDK locally:
```bash
git clone https://github.com/Agent-Field/agentfield.git
cd agentfield/sdk/python
python -m pip install -e .[dev]
```
## Quick Start
```python
from agentfield import Agent
agent = Agent(
node_id="example-agent",
agentfield_server="http://localhost:8080",
dev_mode=True,
)
@agent.reasoner()
async def summarize(text: str) -> dict:
result = await agent.ai(
prompt=f"Summarize: {text}",
response_model={"summary": "string", "tone": "string"},
)
return result
if __name__ == "__main__":
agent.serve(port=8001)
```
See `docs/DEVELOPMENT.md` for instructions on wiring agents to the control plane.
## Testing
```bash
pytest
```
To run coverage locally:
```bash
pytest --cov=agentfield --cov-report=term-missing
```
## License
Distributed under the Apache 2.0 License. See the project root `LICENSE` for details.
| text/markdown | AgentField Maintainers | null | null | null | Apache-2.0 | agentfield, sdk, agents | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming... | [] | null | null | <3.14,>=3.8 | [] | [] | [] | [
"fastapi",
"uvicorn",
"requests>=2.28",
"pydantic>=2.0",
"litellm",
"psutil",
"PyYAML>=6.0",
"aiohttp>=3.8",
"websockets",
"fal-client>=0.5.0",
"pytest<9,>=7.4; extra == \"dev\"",
"pytest-asyncio<0.24,>=0.21; extra == \"dev\"",
"pytest-cov<5,>=4.1; extra == \"dev\"",
"responses<0.26,>=0.23... | [] | [] | [] | [
"Homepage, https://github.com/Agent-Field/agentfield",
"Documentation, https://github.com/Agent-Field/agentfield/tree/main/docs",
"Issues, https://github.com/Agent-Field/agentfield/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T17:01:53.880302 | agentfield-0.1.42rc1.tar.gz | 162,659 | 53/77/a3882a467ca2c6ae64d64cc032faa79c4edde333db4a9da5e334dce9533d/agentfield-0.1.42rc1.tar.gz | source | sdist | null | false | e146c9e1c9eb21e3de3440881bfa1594 | ef1960a2d0e567d8fd7ad4bec6b6eb2fd060c14f2d4aabd4a4de7f670678e0dd | 5377a3882a467ca2c6ae64d64cc032faa79c4edde333db4a9da5e334dce9533d | null | [] | 222 |
2.4 | xyra | 0.2.4 | High Performance Frameworks, Easy to learn and Ready for Production | # Xyra Framework
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/xyra/)
A high-performance, asynchronous web framework for Python, built on top of `socketify.py`. Xyra is designed to be fast, easy to use, and feature-rich, making it an excellent choice for building modern web applications, APIs, and real-time applications.
## ✨ Key Features
- 🚀 **High Performance**: Built on `socketify.py` for exceptional speed and low latency
- ⚡ **Asynchronous**: Full async/await support for non-blocking operations
- 🎯 **Simple API**: Intuitive design inspired by Flask and Express.js
- 🔧 **Middleware Support**: Easily add logging, authentication, CORS, and more
- 📚 **Auto API Docs**: Built-in Swagger/OpenAPI documentation generation
- 🔌 **WebSocket Support**: Real-time communication out of the box
- 📄 **Templating**: Jinja2 integration for HTML rendering
- 📁 **Static Files**: Efficient static file serving
- 🛡️ **Type Safety**: Full type hints support
## 📦 Installation
Install Xyra using pip:
```bash
pip install xyra
```
Or install from source for the latest features:
```bash
git clone https://github.com/xyra-python/xyra.git
cd xyra
pip install -e .
```
## 🚀 Quick Start
Create your first Xyra application:
```python
# app.py
from xyra import App, Request, Response
app = App()
@app.get("/")
def hello(req: Request, res: Response):
res.json({"message": "Hello, Xyra!"})
if __name__ == "__main__":
app.listen(8000)
```
Or, equivalently, without type annotations:
```python
# app.py
from xyra import App
app = App()
@app.get("/")
def hello(req, res):
res.json({"message": "Hello, Xyra!"})
if __name__ == "__main__":
app.listen(8000)
```
Run the application:
```bash
python app.py
```
Visit `http://localhost:8000` to see your app in action!
## 📖 Documentation
- 📚 [Full Documentation](https://xyra-python.github.io)
- 🚀 [Getting Started Guide](https://xyra-python.github.io/getting-started.html)
- 🛣️ [Routing Guide](https://xyra-python.github.io/routing.html)
- 📝 [API Reference](https://xyra-python.github.io/api-reference.html)
- 💡 [Examples](https://github.com/xyra-python/xyra-example)
## 🎯 Example Applications
### REST API
```python
from xyra import App, Request, Response
app = App()
# In-memory storage
users = []
@app.get("/api/users")
def get_users(req: Request, res: Response):
res.json(users)
@app.post("/api/users")
async def create_user(req: Request, res: Response):
user_data = await req.json()
user_data["id"] = len(users) + 1
users.append(user_data)
res.status(201).json(user_data)
if __name__ == "__main__":
app.listen(8000)
```
### WebSocket Chat
```python
from xyra import App
app = App()
clients = set()
def on_open(ws):
clients.add(ws)
ws.send("Welcome to chat!")
def on_message(ws, message, opcode):
for client in clients:
if client != ws:
client.send(f"User: {message}")
def on_close(ws, code, message):
clients.discard(ws)
app.websocket("/chat", {
"open": on_open,
"message": on_message,
"close": on_close
})
if __name__ == "__main__":
app.listen(8000)
```
### HTML with Templates
```python
from xyra import App, Request, Response
app = App(templates_directory="templates")
@app.get("/")
def home(req: Request, res: Response):
res.render("home.html", title="My App", users=["Alice", "Bob"])
if __name__ == "__main__":
app.listen(8000)
```
## 🏃 Running Examples
Try the included examples:
```bash
# Simple app
python example/simple_app.py
# Full-featured app with templates and WebSocket
python example/app.py
```
Visit `http://localhost:8000` and explore the API docs at `http://localhost:8000/docs`.
## 🤝 Contributing
We welcome contributions! Here's how you can help:
1. **Fork** the repository
2. **Create** a feature branch: `git checkout -b feature/amazing-feature`
3. **Commit** your changes: `git commit -m 'Add amazing feature'`
4. **Push** to the branch: `git push origin feature/amazing-feature`
5. **Open** a Pull Request
### Development Setup
```bash
git clone https://github.com/xyra-python/xyra.git
cd xyra
pip install -e .[dev]
pytest
```
### Code Style
We use:
- Black for code formatting
- isort for import sorting
- flake8 for linting
- mypy for type checking
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built on top of [socketify.py](https://github.com/cirospaciari/socketify.py)
- Inspired by Flask, Express.js, and FastAPI
- Thanks to all our contributors!
## 📞 Support
- 📖 [Documentation](https://xyra-python.github.io/xyra)
- 🐛 [Issues](https://github.com/xyra-python/xyra/issues)
- 💬 [Discussions](https://github.com/xyra-python/xyra/discussions)
---
Made with ❤️ for the Python community
| text/markdown | Xyra Team | Xyra Team <team@xyra.dev> | null | Xyra Team <team@xyra.dev> | null | web, framework, async, fast, python | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Internet :: WWW... | [] | null | null | >=3.11 | [] | [] | [] | [
"jinja2",
"multidict",
"watchfiles",
"orjson>=3.11.3; platform_python_implementation != \"PyPy\"",
"ujson; platform_python_implementation == \"PyPy\"",
"pybind11>=3.0.1",
"pytest>=8.4.2; extra == \"dev\"",
"pytest-asyncio>=1.2.0; extra == \"dev\"",
"pytest-tornasync>=0.6.0.post2; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/xyra-python/xyra",
"Documentation, https://xyra.dev/docs",
"Repository, https://github.com/xyra-python/xyra",
"Issues, https://github.com/xyra-python/xyra/issues",
"Changelog, https://github.com/xyra-python/xyra/blob/main/CHANGELOG.md"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T17:01:12.047431 | xyra-0.2.4-cp311-cp311-win_amd64.whl | 770,666 | 27/c9/a40d2db443e8aaaac7d24e2eb77f0cd9b8121ebdc75a3e650e768955d6bb/xyra-0.2.4-cp311-cp311-win_amd64.whl | cp311 | bdist_wheel | null | false | 0471e7ed301e5c2017f941a11a9a960a | 4d36c04d39be78e1b45e747ddd537bf9fea64abbfbd768bf7f7f2faa35a885eb | 27c9a40d2db443e8aaaac7d24e2eb77f0cd9b8121ebdc75a3e650e768955d6bb | MIT | [
"LICENSE"
] | 850 |
2.4 | riverpod-3-scanner | 1.4.1 | Comprehensive static analysis tool for detecting Riverpod 3.0 async safety violations in Flutter/Dart projects | # Riverpod 3.0 Safety Scanner
**Comprehensive static analysis tool for detecting Riverpod 3.0 async safety violations in Flutter/Dart projects.**
[](https://pypi.org/project/riverpod-3-scanner/)
[](https://pypi.org/project/riverpod-3-scanner/)
[](https://opensource.org/licenses/MIT)
[](https://riverpod.dev)
[](https://flutter.dev)
---
## 🎯 What It Does
Riverpod 3.0 introduced `ref.mounted` to safely handle provider disposal during async operations. This scanner detects **14 types of violations** that can cause production crashes, including:
- ❌ Field caching patterns (pre-Riverpod 3.0 workarounds)
- ❌ Lazy getters in async classes
- ❌ Missing `ref.mounted` checks before/after async operations
- ❌ `ref` operations inside lifecycle callbacks
- ❌ Sync methods without mounted checks (called from async contexts)
**Features**:
- ✅ **Zero false positives** via sophisticated call-graph analysis
- ✅ **Cross-file violation detection** (indirect method calls)
- ✅ **Variable resolution** (traces `basketballNotifier` → `BasketballNotifier`)
- ✅ **Comment stripping** (prevents false positives from commented code)
- ✅ **Detailed fix instructions** for each violation
- ✅ **CI/CD ready** (exit codes, verbose mode, pattern filtering)
---
## 🚀 Quick Start
### Installation
#### Via PyPI (Recommended)
```bash
pip install riverpod-3-scanner
```
Then run directly:
```bash
riverpod-3-scanner lib
```
#### Via Direct Download
```bash
# Download scanner
curl -O https://raw.githubusercontent.com/DayLight-Creative-Technologies/riverpod_3_scanner/main/riverpod_3_scanner.py
# Make executable (optional)
chmod +x riverpod_3_scanner.py
```
### Basic Usage
#### If installed via PyPI:
```bash
# Scan entire project
riverpod-3-scanner lib
# Scan specific file
riverpod-3-scanner lib/features/game/notifiers/game_notifier.dart
# Verbose output
riverpod-3-scanner lib --verbose
```
#### If using direct download:
```bash
# Scan entire project
python3 riverpod_3_scanner.py lib
# Scan specific file
python3 riverpod_3_scanner.py lib/features/game/notifiers/game_notifier.dart
# Verbose output
python3 riverpod_3_scanner.py lib --verbose
```
### Example Output
```
🔍 RIVERPOD 3.0 COMPLIANCE SCAN COMPLETE
📁 Scanned: lib
🚨 Total violations: 3
VIOLATIONS BY TYPE:
🔴 LAZY GETTER: 2
🔴 MISSING MOUNTED AFTER AWAIT: 1
📄 lib/features/game/notifiers/game_notifier.dart (3 violation(s))
• Line 45: lazy_getter
• Line 120: missing_mounted_after_await
• Line 145: lazy_getter
```
---
## 📊 Violation Types
### CRITICAL (Will crash in production)
| Type | Description | Production Impact |
|------|-------------|-------------------|
| **Field caching** | Nullable fields with getters in async classes | Crash on widget unmount |
| **Lazy getters** | `get x => ref.read()` in async classes | Crash on widget unmount |
| **ref.read() before mounted** | Missing mounted check before ref operations | Crash after disposal |
| **Missing mounted after await** | No mounted check after async gap | Crash after disposal |
| **ref in lifecycle callbacks** | `ref.read()` in `ref.onDispose`/`ref.listen` | AssertionError crash |
| **Sync methods without mounted** | Sync methods with `ref.read()` called from async | Crash from callbacks |
### WARNINGS (High crash risk)
- Widget lifecycle methods with unsafe ref usage
- Timer/Future.delayed deferred callbacks without mounted checks
### DEFENSIVE (Type safety & best practices)
- Untyped var lazy getters (loses type information)
- mounted vs ref.mounted confusion (educational)
See [GUIDE.md](GUIDE.md) for complete violation reference and fix patterns.
---
## 🛡️ How It Works
### Multi-Pass Call-Graph Analysis
The scanner uses a **five-pass architecture** to achieve zero false positives:
**Pass 1**: Build cross-file reference database
- Index all classes, methods, provider mappings
- Map `XxxNotifier` → `xxxProvider` (Riverpod codegen)
- Store class → file path mapping
**Pass 1.5**: Build complete method database
- Index ALL methods with metadata (has_ref_read, has_mounted_check, is_async)
- Detect framework lifecycle methods
- Store method bodies for analysis
**Pass 2**: Build async callback call-graph
- Trace methods called after `await` statements
- Detect callback parameters (`onCompletion:`, `builder:`, etc.)
- Find `stream.listen()` callbacks
- Detect `Timer`/`Future.delayed`/`addPostFrameCallback` calls
- Resolve variables to classes (`basketballNotifier` → `BasketballNotifier`)
**Pass 2.5**: Propagate async context transitively
- If method A calls method B, and B is in async context → A is too
- Fixed-point iteration until no new methods added
- Handles transitive call chains
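The fixed-point iteration in Pass 2.5 can be sketched in a few lines of Python. This is an illustrative reduction, not the scanner's actual internals; the call graph and seed set are made-up inputs:

```python
def propagate_async_context(call_graph: dict, seeds: set) -> set:
    """Transitively mark methods as running in an async context.

    call_graph maps caller -> set of callees; seeds are methods already
    known to be in an async context. Following the rule "if A calls B and
    B is in async context, A is too", iterate until nothing changes.
    """
    in_async = set(seeds)
    changed = True
    while changed:
        changed = False
        for caller, callees in call_graph.items():
            # A caller joins the async set if any of its callees is in it.
            if caller not in in_async and callees & in_async:
                in_async.add(caller)
                changed = True
    return in_async
```

With a chain `a -> b -> c` and `c` seeded, both `b` and `a` are pulled in over successive iterations, which is exactly the transitive-call-chain handling described above.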
**Pass 3**: Detect violations with full call-graph context
- Strip comments to prevent false positives
- Check lifecycle callbacks (direct and indirect violations)
- Flag sync methods with `ref.read()` called from async contexts
- Verify with call-graph data (zero false positives)
### Key Innovations
**Variable Resolution**:
```dart
final basketballNotifier = ref.read(basketballProvider(gameId).notifier);
onCompletion: () {
basketballNotifier.completeGame();
// ↓ Scanner resolves ↓
// BasketballNotifier.completeGame()
}
```
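The resolution step can be approximated with a regex over Dart source plus the Riverpod codegen naming convention (`xxxProvider` → `XxxNotifier`). This is a simplified sketch, not the scanner's actual implementation, and it deliberately handles only the `final x = ref.read(...Provider....notifier)` shape:

```python
import re

# Matches e.g.: final basketballNotifier = ref.read(basketballProvider(gameId).notifier);
ASSIGN_RE = re.compile(
    r"final\s+(\w+)\s*=\s*ref\.read\(\s*(\w+)Provider[^)]*\)?\.notifier"
)

def resolve_variables(dart_source: str) -> dict:
    """Map local variable names to the Notifier class they refer to."""
    mapping = {}
    for var_name, provider_stem in ASSIGN_RE.findall(dart_source):
        # Riverpod codegen convention: basketballProvider -> BasketballNotifier
        class_name = provider_stem[0].upper() + provider_stem[1:] + "Notifier"
        mapping[var_name] = class_name
    return mapping
```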
**Comment Stripping**:
```dart
// Scanner ignores this:
// Cleanup handled by ref.onDispose() in build()
// Only flags real code:
ref.onDispose(() {
ref.read(myProvider); // ← VIOLATION DETECTED
});
```
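Stripping line comments before pattern matching is the key to avoiding this class of false positive. A minimal sketch (it deliberately ignores `//` inside string literals and `/* ... */` block comments, which a real implementation must also handle):

```python
def strip_line_comments(source: str) -> str:
    """Remove // line comments so commented-out code never matches
    violation patterns."""
    cleaned = []
    for line in source.splitlines():
        idx = line.find("//")
        cleaned.append(line if idx == -1 else line[:idx])
    return "\n".join(cleaned)
```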
---
## 🔧 Advanced Usage
### Pattern Filtering
```bash
# Scan only notifiers
python3 riverpod_3_scanner.py lib --pattern "**/*_notifier.dart"
# Scan only widgets
python3 riverpod_3_scanner.py lib --pattern "**/widgets/**/*.dart"
# Scan only services
python3 riverpod_3_scanner.py lib --pattern "**/services/**/*.dart"
```
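The `--pattern` option uses shell-style globs. The matching behaviour can be approximated with Python's stdlib `fnmatch` (a sketch of the semantics, not the scanner's exact code — note that `fnmatch`'s `*` crosses `/` boundaries, which is why `**/*_notifier.dart` matches nested files):

```python
from fnmatch import fnmatch

def matches(path: str, pattern: str) -> bool:
    """Shell-style glob match on POSIX-style path strings."""
    return fnmatch(path, pattern)
```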
### Exit Codes
- `0` - No violations (clean)
- `1` - Violations found (must be fixed)
- `2` - Error (invalid path, etc.)
Use in CI/CD pipelines:
```bash
python3 riverpod_3_scanner.py lib || exit 1
```
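For custom CI tooling, the exit-code contract above can be expressed as a tiny helper. This is illustrative only (`scan_exit_code` is a made-up name; the scanner computes this internally):

```python
def scan_exit_code(violations: int, had_error: bool = False) -> int:
    """Mirror the scanner's documented exit codes:
    0 = no violations, 1 = violations found, 2 = error (invalid path, etc.)."""
    if had_error:
        return 2
    return 1 if violations > 0 else 0
```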
---
## 🚀 CI/CD Integration
### GitHub Actions
```yaml
name: Riverpod 3.0 Safety Check
on: [push, pull_request]
jobs:
riverpod-safety:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: subosito/flutter-action@v2
- name: Download Scanner
run: curl -O https://raw.githubusercontent.com/DayLight-Creative-Technologies/riverpod_3_scanner/main/riverpod_3_scanner.py
- name: Run Scanner
run: python3 riverpod_3_scanner.py lib
- name: Dart Analyze
run: dart analyze lib/
```
### Pre-commit Hook
```bash
#!/bin/bash
# .git/hooks/pre-commit
echo "Running Riverpod 3.0 compliance check..."
python3 riverpod_3_scanner.py lib || exit 1
dart analyze lib/ || exit 1
echo "✅ All checks passed!"
```
Make executable:
```bash
chmod +x .git/hooks/pre-commit
```
---
## 📚 Documentation
- **[GUIDE.md](GUIDE.md)** - Complete guide with all violation types, fix patterns, decision trees
- **[EXAMPLES.md](EXAMPLES.md)** - Real-world examples and production crash case studies
- **[CHANGELOG.md](CHANGELOG.md)** - Version history and updates
---
## 🎓 The Riverpod 3.0 Pattern
### ❌ Before (Crashes)
```dart
class MyNotifier extends _$MyNotifier {
MyLogger? _logger;
MyLogger get logger {
final l = _logger;
if (l == null) throw StateError('Disposed');
return l;
}
@override
build() {
_logger = ref.read(myLoggerProvider);
ref.onDispose(() => _logger = null);
return State.initial();
}
Future<void> doWork() async {
await operation();
logger.logInfo('Done'); // CRASH: _logger = null during await
}
}
```
### ✅ After (Safe)
```dart
class MyNotifier extends _$MyNotifier {
@override
State build() => State.initial();
Future<void> doWork() async {
// Check BEFORE ref.read()
if (!ref.mounted) return;
final logger = ref.read(myLoggerProvider);
await operation();
// Check AFTER await
if (!ref.mounted) return;
logger.logInfo('Done');
}
}
```
**Key Differences**:
- ❌ Removed nullable field `_logger`
- ❌ Removed enhanced getter with StateError
- ❌ Removed field initialization in build()
- ❌ Removed `ref.onDispose()` cleanup
- ✅ Added `ref.mounted` checks
- ✅ Added just-in-time `ref.read()`
---
## 🔍 Requirements
- **Python**: 3.7+
- **Dart/Flutter**: Any version using Riverpod 3.0+
- **Riverpod**: 3.0+ (for `ref.mounted` feature)
No external dependencies required - scanner uses only Python standard library.
---
## 📊 Scanner Statistics
From production deployment (140+ violations fixed):
**Most Common Violations**:
1. Field caching (29%) - Pre-Riverpod 3.0 workaround
2. ref.read before mounted (28%)
3. Missing mounted after await (27%)
4. Lazy getters (26%) - `get logger => ref.read(...)`
**Crash Prevention**:
- **Before**: 12+ production crashes/week from unmounted ref
- **After**: Zero crashes for 30+ days
**False Positive Rate**: 0% (with call-graph analysis)
---
## 🤝 Contributing
Contributions welcome! Please:
1. **Report Issues**: https://github.com/DayLight-Creative-Technologies/riverpod_3_scanner/issues
2. **Submit PRs**: Fork → Branch → PR
3. **Add Tests**: Include test cases for new violation types
4. **Update Docs**: Keep GUIDE.md synchronized with code changes
---
## 📝 License
MIT License - see [LICENSE](LICENSE) file for details.
---
## 🙏 Credits
**Created by**: Steven Day, DayLight Creative Technologies
**Acknowledgments**:
- **Riverpod Team** - For `ref.mounted` feature and official pattern
- **Andrea Bizzotto** - For educational content on AsyncNotifier safety
- **Flutter Community** - For feedback and real-world crash reports
---
## 📞 Support
- **Issues**: https://github.com/DayLight-Creative-Technologies/riverpod_3_scanner/issues
- **Discussions**: https://github.com/DayLight-Creative-Technologies/riverpod_3_scanner/discussions
- **Author**: Steven Day (support@daylightcreative.tech)
- **Company**: DayLight Creative Technologies
- **Riverpod Discord**: https://discord.gg/riverpod
---
## 🔗 Related Resources
- [Riverpod 3.0 Documentation](https://riverpod.dev/docs/whats_new#refmounted)
- [Riverpod 3.0 Migration Guide](https://riverpod.dev/docs/3.0_migration)
- [Andrea Bizzotto: AsyncNotifier Mounted](https://codewithandrea.com/articles/async-notifier-mounted-riverpod/)
---
**Prevent production crashes. Enforce Riverpod 3.0 async safety. Use `riverpod_3_scanner`.**
| text/markdown | Steven Day | Steven Day <support@daylightcreative.tech> | null | Steven Day <support@daylightcreative.tech> | MIT License
Copyright (c) 2025 DayLight Creative Technologies
Author: Steven Day
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| riverpod, flutter, dart, static-analysis, linter, code-quality, async-safety, riverpod-3, safety-checker | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
... | [] | https://github.com/DayLight-Creative-Technologies/riverpod_3_scanner | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/DayLight-Creative-Technologies/riverpod_3_scanner",
"Documentation, https://github.com/DayLight-Creative-Technologies/riverpod_3_scanner/blob/main/docs/GUIDE.md",
"Repository, https://github.com/DayLight-Creative-Technologies/riverpod_3_scanner",
"Issues, https://github.com/DayLi... | twine/6.2.0 CPython/3.14.2 | 2026-02-18T16:59:58.647425 | riverpod_3_scanner-1.4.1.tar.gz | 74,400 | cf/65/840dcacd7f7f6705ed3117526fa6ff40829b9fba09d16b9990a0adc10b2b/riverpod_3_scanner-1.4.1.tar.gz | source | sdist | null | false | 0844fbf3c3b94746febd8b81d7a4f656 | 9e9a0a4b74c014de0dcf7cdc0750ce1fd15d7eec0fc4fda97345ed82410edecb | cf65840dcacd7f7f6705ed3117526fa6ff40829b9fba09d16b9990a0adc10b2b | null | [
"LICENSE",
"AUTHORS"
] | 248 |
2.4 | namaka | 0.1.0 | Add your description here | # 🌊 Namaka
**Namaka** is the smaller, outer moon of the dwarf planet Haumea in the Kuiper Belt🪐.
### Key facts
* **Discovered:** 2005
* **Parent body:** Haumea
* **Region:** Kuiper Belt (beyond Neptune)
* **Relative size:** Smaller than Haumea’s other moon, Hiʻiaka
* **Orbit:** Irregular and dynamically complex due to interactions with Hiʻiaka
### Name origin
Namaka is named after a **Hawaiian sea goddess**, associated with water and the ocean. Haumea itself is named after a Hawaiian goddess of fertility and childbirth, and its moons follow Hawaiian mythology.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.7 | 2026-02-18T16:58:16.211513 | namaka-0.1.0.tar.gz | 2,013 | 59/1f/598e3b6b169a6d84a1e3543f19019e1ba97e08b5848105a6149bdecfa7a4/namaka-0.1.0.tar.gz | source | sdist | null | false | 29828e96c9c5ed9bde3aad3d6b11a83b | 47258c80c607d4dacb0c397cd5e92bfe4badc3cb6743b50f5352b95ed18b8112 | 591f598e3b6b169a6d84a1e3543f19019e1ba97e08b5848105a6149bdecfa7a4 | null | [
"LICENSE"
] | 262 |
2.4 | nexustrader | 0.2.52 | fastest python trading bot | <picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/source/_static/logo-dark.png">
<source media="(prefers-color-scheme: light)" srcset="docs/source/_static/logo-light.png">
<img alt="nexustrader Logo" src="docs/source/_static/logo-light.png">
</picture>
---

- **Website**: https://nexustrader.quantweb3.ai/
- **Docs**: https://nexustrader.readthedocs.io/en/latest/
- **Support**: [quantweb3.ai@gmail.com](mailto:quantweb3.ai@gmail.com)
```python
###############################################################
## ##
## ##
## ███ ██ ███████ ██ ██ ██ ██ ███████ ##
## ████ ██ ██ ██ ██ ██ ██ ██ ##
## ██ ██ ██ █████ ███ ██ ██ ███████ ##
## ██ ██ ██ ██ ██ ██ ██ ██ ██ ##
## ██ ████ ███████ ██ ██ ██████ ███████ ##
## ##
## ##
## ████████ ██████ █████ ██████ ███████ ██████ ##
## ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ##
## ██ ██████ ███████ ██ ██ █████ ██████ ##
## ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ##
## ██ ██ ██ ██ ██ ██████ ███████ ██ ██ ##
## ##
## ##
###############################################################
```
## Introduction
NexusTrader is a professional-grade open-source quantitative trading platform, specifically designed for **large capital
management** and **complex strategy development**, dedicated to providing high-performance, scalable, and user-friendly
quantitative trading solutions.
## Overview
### Core Advantages
- 🚀 **Professionally Optimized Order Algorithms:** Deep optimization for algorithmic orders including TWAP, effectively
reducing market impact costs. Users can easily integrate their own execution signals to achieve more efficient and
precise order execution.
- 💰 **Professional Arbitrage Strategy Support:** Provides professional optimization for various arbitrage strategies,
including funding rate arbitrage and cross-exchange arbitrage, supporting real-time tracking and trading of thousands
of trading pairs to help users easily capture arbitrage opportunities.
- 🚧 **Full-Featured Quantitative Trading Framework:** Users don't need to build frameworks or handle complex exchange
interface details themselves. NexusTrader has integrated professional position management, order management, fund
management, and statistical analysis modules, allowing users to focus on writing strategy logic and quickly implement
quantitative trading.
- 🚀 **Multi-Market Support and High Scalability:** Supports large-scale multi-market tracking and high-frequency strategy
execution, covering a wide range of trading instruments, making it an ideal choice for professional trading needs.
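To make the TWAP idea concrete, here is a minimal, exchange-agnostic slicing sketch. It is illustrative only (`twap_slices` is a made-up helper, not NexusTrader's algorithm or API):

```python
from decimal import Decimal

def twap_slices(total: Decimal, n_slices: int) -> list:
    """Split a total order quantity into n equal child orders,
    putting any rounding remainder into the last slice so the
    children always sum exactly to the parent quantity."""
    per_slice = (total / n_slices).quantize(Decimal("0.0001"))
    slices = [per_slice] * (n_slices - 1)
    slices.append(total - per_slice * (n_slices - 1))
    return slices
```

A real TWAP executor would additionally spread the child orders evenly over a time window and could swap in user-provided execution signals, as described above.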
### Why NexusTrader Is More Efficient?
- **Enhanced Event Loop Performance**: NexusTrader leverages [uvloop](https://github.com/MagicStack/uvloop), a high-performance event loop, delivering speeds up to 2-4 times faster than Python's default asyncio loop.
- **High-Performance WebSocket Framework**: Built with [picows](https://github.com/tarasko/picows), a Cython-based WebSocket library that matches the speed of C++'s Boost.Beast, significantly outperforming Python alternatives like websockets and aiohttp.
- **Optimized Data Serialization**: Utilizing `msgspec` for serialization and deserialization, NexusTrader achieves unmatched efficiency, surpassing tools like `orjson`, `ujson`, and `json`. All data classes are implemented with `msgspec.Struct` for maximum performance.
- **Scalable Order Management**: Orders are handled efficiently using `asyncio.Queue`, ensuring seamless processing even at high volumes.
- **Rust-Powered Core Components**: Core modules such as the MessageBus and Clock are implemented in Rust, combining Rust's speed and reliability with Python's flexibility through the [nautilus_trader](https://github.com/nautechsystems/nautilus_trader) framework.
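The queue-based order handling mentioned above can be sketched with stdlib asyncio. This is a conceptual sketch, not NexusTrader's actual classes; the sentinel-based shutdown is one common pattern:

```python
import asyncio

async def order_worker(queue: asyncio.Queue, submitted: list) -> None:
    """Drain orders from the queue in FIFO order, simulating submission."""
    while True:
        order = await queue.get()
        if order is None:  # sentinel: shut down the worker
            queue.task_done()
            break
        submitted.append(order)  # real code would submit to the exchange here
        queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    submitted: list = []
    worker = asyncio.create_task(order_worker(queue, submitted))
    for oid in ("order-1", "order-2", "order-3"):
        await queue.put(oid)
    await queue.put(None)  # stop the worker
    await worker
    return submitted
```

Because producers only `put` and the single consumer processes sequentially, bursts of orders queue up instead of being dropped, which is the scalability property the bullet point describes.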
### Comparison with Other Frameworks
| Framework | Websocket Package | Data Serialization | Strategy Support | Advantages | Disadvantages |
| ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- | ---------------- | -------------------------------------------------- | ------------------------------------------------- |
| **NexusTrader** | [picows](https://picows.readthedocs.io/en/stable/introduction.html#installation) | [msgspec](https://jcristharif.com/msgspec/) | ✅ | Professionally optimized for speed and low latency | Requires some familiarity with async workflows |
| [HummingBot](https://github.com/hummingbot/hummingbot?tab=readme-ov-file) | aiohttp | [ujson](https://pypi.org/project/ujson/) | ✅ | Widely adopted with robust community support | Slower WebSocket handling and limited flexibility |
| [Freqtrade](https://github.com/freqtrade/freqtrade) | websockets | [orjson](https://github.com/ijl/orjson) | ✅ | Flexible strategy support | Higher resource consumption |
| [crypto-feed](https://github.com/bmoscon/cryptofeed) | [websockets](https://websockets.readthedocs.io/en/stable/) | [yapic.json](https://pypi.org/project/yapic.json/) | ❌ | Simple design for feed-only use | Lacks trading support and advanced features |
| [ccxt](https://github.com/ccxt/ccxt) | [aiohttp](https://docs.aiohttp.org/en/stable/client_reference.html) | json | ❌ | Great REST API support | Limited WebSocket performance |
| [binance-futures-connector](https://github.com/binance/binance-futures-connector-python) | [websocket-client](https://websocket-client.readthedocs.io/en/latest/examples.html) | json | ❌ | Optimized for Binance-specific integration | Limited to Binance Futures |
| [python-okx](https://github.com/okxapi/python-okx) | websockets | json | ❌ | Dedicated to OKX trading | Limited to OKX platform |
| [unicorn-binance-websocket-api](https://github.com/LUCIT-Systems-and-Development/unicorn-binance-websocket-api) | websockets | [ujson](https://pypi.org/project/ujson/) | ❌ | Easy-to-use for Binance users | Restricted to Binance and resource-heavy |
### Architecture (data flow)
The core of NexusTrader is the `Connector`, which is responsible for connecting to the exchange and handling data flow. Through the `PublicConnector`, users can access market data from the exchange, and through the `PrivateConnector`, users can execute trades and receive callbacks for trade data. Orders are submitted through the `OrderExecutionSystem`, which is responsible for submitting orders to the exchange and obtaining the order ID from the exchange. Order status management is handled by the `OrderManagementSystem`, which manages the status of orders and sends them to the `Strategy`.

### Features
- 🌍 Multi-Exchange Integration: Effortlessly connect to top exchanges like Binance, Bybit, and OKX, with an extensible design to support additional platforms.
- ⚡ Asynchronous Operations: Built on asyncio for highly efficient, scalable performance, even during high-frequency trading.
- 📡 Real-Time Data Streaming: Reliable WebSocket support for live market data, order book updates, and trade execution notifications.
- 📊 Advanced Order Management: Execute diverse order types (limit, market, stop) with optimized, professional-grade order handling.
- 📋 Account Monitoring: Real-time tracking of balances, positions, and PnL across multiple exchanges with integrated monitoring tools.
- 🛠️ Modular Architecture: Flexible framework to add exchanges, instruments, or custom strategies with ease.
- 🔄 Strategy Execution & Backtesting: Seamlessly transition from strategy testing to live trading with built-in tools.
- 📈 Scalability: Designed to handle large-scale, multi-market operations for retail and institutional traders alike.
- 💰 Risk & Fund Management: Optimize capital allocation and control risk exposure with integrated management tools.
- 🔔 Instant Notifications: Stay updated with alerts for trades, market changes, and custom conditions.
### Supported Exchanges
| OKX | Binance | BYBIT | HYPERLIQUID | BITGET |
| --------| ------ | ------- | ------- | ------- |
| <img src="https://images-wixmp-ed30a86b8c4ca887773594c2.wixmp.com/f/9a411426-3711-47d4-9c1a-dcf72973ddfc/dfj37e6-d8b49926-d115-4368-9de8-09a80077fb4f.png/v1/fill/w_1280,h_1280/okx_okb_logo_by_saphyl_dfj37e6-fullview.png?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1cm46YXBwOjdlMGQxODg5ODIyNjQzNzNhNWYwZDQxNWVhMGQyNmUwIiwiaXNzIjoidXJuOmFwcDo3ZTBkMTg4OTgyMjY0MzczYTVmMGQ0MTVlYTBkMjZlMCIsIm9iaiI6W1t7ImhlaWdodCI6Ijw9MTI4MCIsInBhdGgiOiJcL2ZcLzlhNDExNDI2LTM3MTEtNDdkNC05YzFhLWRjZjcyOTczZGRmY1wvZGZqMzdlNi1kOGI0OTkyNi1kMTE1LTQzNjgtOWRlOC0wOWE4MDA3N2ZiNGYucG5nIiwid2lkdGgiOiI8PTEyODAifV1dLCJhdWQiOlsidXJuOnNlcnZpY2U6aW1hZ2Uub3BlcmF0aW9ucyJdfQ.kH6v6nu55xLephOzAFFhD2uCYkmFdLsBoTkSuQvtBpo" width="100"> | <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/e/e8/Binance_Logo.svg/768px-Binance_Logo.svg.png" width="100"> | <img src="https://brandlogo.org/wp-content/uploads/2024/02/Bybit-Logo.png" width="100"> | <img src="https://avatars.githubusercontent.com/u/129421375?s=280&v=4" width="100"> | <img src="https://s2.coinmarketcap.com/static/img/coins/200x200/11092.png" width="100"> |
## Installation
### Prerequisites
- Python 3.11+
- Redis
- Poetry (recommended)
- build-essential
### Install Build Essentials
```bash
sudo apt-get update
sudo apt-get install build-essential
```
### From PyPI
```bash
pip install nexustrader
```
### From Source
```bash
git clone https://github.com/RiverTrading/NexusTrader
cd NexusTrader
poetry install
```
> **Note**
> more details can be found in the [installation guide](https://nexustrader.readthedocs.io/en/latest/installation.html)
### Quick Start
Here's a basic example of how to use nexustrader, demonstrating a simple buy and sell strategy on OKX.
```python
from decimal import Decimal
from nexustrader.constants import settings
from nexustrader.config import Config, PublicConnectorConfig, PrivateConnectorConfig, BasicConfig
from nexustrader.strategy import Strategy
from nexustrader.constants import ExchangeType, OrderSide, OrderType
from nexustrader.exchange.okx import OkxAccountType
from nexustrader.schema import BookL1, Order
from nexustrader.engine import Engine
# Retrieve API credentials from settings
OKX_API_KEY = settings.OKX.DEMO_1.api_key
OKX_SECRET = settings.OKX.DEMO_1.secret
OKX_PASSPHRASE = settings.OKX.DEMO_1.passphrase
class Demo(Strategy):
def __init__(self):
super().__init__()
self.subscribe_bookl1(symbols=["BTCUSDT-PERP.OKX"]) # Subscribe to the order book for the specified symbol
self.signal = True # Initialize signal to control order execution
def on_failed_order(self, order: Order):
print(order) # Log failed orders
def on_pending_order(self, order: Order):
print(order) # Log pending orders
def on_accepted_order(self, order: Order):
print(order) # Log accepted orders
def on_partially_filled_order(self, order: Order):
print(order) # Log partially filled orders
def on_filled_order(self, order: Order):
print(order) # Log filled orders
def on_bookl1(self, bookl1: BookL1):
if self.signal: # Check if the signal is active
# Create a market buy order
self.create_order(
symbol="BTCUSDT-PERP.OKX",
side=OrderSide.BUY,
type=OrderType.MARKET,
amount=Decimal("0.1"),
)
# Create a market sell order
self.create_order(
symbol="BTCUSDT-PERP.OKX",
side=OrderSide.SELL,
type=OrderType.MARKET,
amount=Decimal("0.1"),
)
self.signal = False # Deactivate the signal after placing orders
# Configuration for the trading strategy
config = Config(
strategy_id="okx_buy_and_sell",
user_id="user_test",
strategy=Demo(),
basic_config={
ExchangeType.OKX: BasicConfig(
api_key=OKX_API_KEY,
secret=OKX_SECRET,
passphrase=OKX_PASSPHRASE,
testnet=True, # Use testnet for safe trading
)
},
public_conn_config={
ExchangeType.OKX: [
PublicConnectorConfig(
account_type=OkxAccountType.DEMO, # Specify demo account type
)
]
},
private_conn_config={
ExchangeType.OKX: [
PrivateConnectorConfig(
account_type=OkxAccountType.DEMO, # Specify demo account type
)
]
}
)
# Initialize the trading engine with the configuration
engine = Engine(config)
if __name__ == "__main__":
try:
engine.start() # Start the trading engine
finally:
engine.dispose() # Ensure resources are cleaned up
```
### Web Callbacks
NexusTrader can host FastAPI endpoints alongside a running strategy. Define an application with
`nexustrader.web.create_strategy_app`, decorate a method that accepts `self`, and enable the web server in your config.
```python
from fastapi import Body
from nexustrader.web import create_strategy_app
from nexustrader.config import WebConfig
class Demo(Strategy):
    web_app = create_strategy_app(title="Demo strategy API")

    @web_app.post("/toggle")
    async def on_web_cb(self, payload: dict = Body(...)):
        self.signal = payload.get("signal", True)
        return {"signal": self.signal}

config = Config(
    strategy_id="demo",
    user_id="user",
    strategy=Demo(),
    basic_config={...},
    public_conn_config={...},
    private_conn_config={...},
    web_config=WebConfig(enabled=True, host="127.0.0.1", port=9000),
)
```
When the engine starts, it binds the strategy instance to the FastAPI routes and serves them in the background using
Uvicorn. Routes automatically disappear once the engine stops.
This example illustrates how easy it is to switch between different exchanges and strategies by modifying the `Config`
object. For instance, to switch to Binance, adjust the configuration as follows and change the symbol to
`BTCUSDT-PERP.BINANCE`.
```python
from nexustrader.exchange.binance import BinanceAccountType
config = Config(
    strategy_id="buy_and_sell_binance",
    user_id="user_test",
    strategy=Demo(),
    basic_config={
        ExchangeType.BINANCE: BasicConfig(
            api_key=BINANCE_API_KEY,
            secret=BINANCE_SECRET,
            testnet=True,  # Use testnet for safe trading
        )
    },
    public_conn_config={
        ExchangeType.BINANCE: [
            PublicConnectorConfig(
                account_type=BinanceAccountType.USD_M_FUTURE_TESTNET,  # Specify account type for Binance
            )
        ]
    },
    private_conn_config={
        ExchangeType.BINANCE: [
            PrivateConnectorConfig(
                account_type=BinanceAccountType.USD_M_FUTURE_TESTNET,  # Specify account type for Binance
            )
        ]
    },
)
```
## Multi-Mode Support
nexustrader supports multiple modes of operation to cater to different trading strategies and requirements. Each mode
allows for flexibility in how trading logic is executed based on market conditions or specific triggers.
### Event-Driven Mode
In this mode, trading logic is executed in response to real-time market events. The methods `on_bookl1`, `on_trade`, and
`on_kline` are triggered whenever relevant data is updated, allowing for immediate reaction to market changes.
```python
class Demo(Strategy):
    def __init__(self):
        super().__init__()
        self.subscribe_bookl1(symbols=["BTCUSDT-PERP.BINANCE"])

    def on_bookl1(self, bookl1: BookL1):
        # implement the trading logic here
        pass
```
### Timer Mode
This mode allows you to schedule trading logic to run at specific intervals. You can use the `schedule` method to define
when your trading algorithm should execute, making it suitable for strategies that require periodic checks or actions.
```python
class Demo2(Strategy):
    def __init__(self):
        super().__init__()
        self.schedule(self.algo, trigger="interval", seconds=1)

    def algo(self):
        # runs every second
        # implement the trading logic here
        pass
```
### Custom Signal Mode
In this mode, trading logic is executed based on custom signals. You can define your own signals and use the
`on_custom_signal` method to trigger trading actions when these signals are received. This is particularly useful for
integrating with external systems or custom event sources.
```python
class Demo3(Strategy):
    def __init__(self):
        super().__init__()
        self.signal = True

    def on_custom_signal(self, signal: object):
        # implement the trading logic here;
        # signal can be any object, it is up to you to define the signal
        pass
```
## Define Your Own Indicator
NexusTrader provides a powerful framework for creating custom indicators with built-in warmup functionality. This allows your indicators to automatically fetch historical data and prepare themselves before live trading begins.
Here's an example of creating a custom Moving Average indicator with automatic warmup:
```python
from collections import deque

from nexustrader.indicator import Indicator
from nexustrader.constants import KlineInterval, DataType
from nexustrader.schema import Kline, BookL1, BookL2, Trade
from nexustrader.strategy import Strategy
from nexustrader.exchange.bybit import BybitAccountType

class MovingAverageIndicator(Indicator):
    def __init__(self, period: int = 20):
        super().__init__(
            params={"period": period},
            name=f"MA_{period}",
            warmup_period=period * 2,  # Define warmup period
            warmup_interval=KlineInterval.MINUTE_1,  # Define warmup interval
        )
        self.period = period
        self.prices = deque(maxlen=period)
        self.current_ma = None

    def handle_kline(self, kline: Kline):
        if not kline.confirm:  # Only process confirmed klines
            return
        self.prices.append(kline.close)
        # Calculate moving average if we have enough data
        if len(self.prices) >= self.period:
            self.current_ma = sum(self.prices) / len(self.prices)

    def handle_bookl1(self, bookl1: BookL1):
        pass  # Implement if needed

    def handle_bookl2(self, bookl2: BookL2):
        pass  # Implement if needed

    def handle_trade(self, trade: Trade):
        pass  # Implement if needed

    @property
    def value(self):
        return self.current_ma

class MyStrategy(Strategy):
    def __init__(self):
        super().__init__()
        self.symbol = "UNIUSDT-PERP.BYBIT"
        self.ma_20 = MovingAverageIndicator(period=20)
        self.ma_50 = MovingAverageIndicator(period=50)

    def on_start(self):
        # Subscribe to kline data
        self.subscribe_kline(
            symbols=self.symbol,
            interval=KlineInterval.MINUTE_1,
        )
        # Register indicators with automatic warmup
        self.register_indicator(
            symbols=self.symbol,
            indicator=self.ma_20,
            data_type=DataType.KLINE,
            account_type=BybitAccountType.LINEAR,
        )
        self.register_indicator(
            symbols=self.symbol,
            indicator=self.ma_50,
            data_type=DataType.KLINE,
            account_type=BybitAccountType.LINEAR,
        )

    def on_kline(self, kline: Kline):
        # Wait for indicators to warm up
        if not self.ma_20.is_warmed_up or not self.ma_50.is_warmed_up:
            self.log.info("Indicators still warming up...")
            return
        if not kline.confirm:
            return
        if self.ma_20.value and self.ma_50.value:
            self.log.info(
                f"MA20: {self.ma_20.value:.4f}, MA50: {self.ma_50.value:.4f}, "
                f"Current Price: {kline.close:.4f}"
            )
            # Simple golden cross strategy
            if self.ma_20.value > self.ma_50.value:
                self.log.info("Golden Cross - Bullish signal!")
            elif self.ma_20.value < self.ma_50.value:
                self.log.info("Death Cross - Bearish signal!")
```
#### Key Features of Custom Indicators:
1. **Automatic Warmup**: Set `warmup_period` and `warmup_interval` to automatically fetch historical data
2. **Data Handlers**: Implement `handle_kline`, `handle_bookl1`, `handle_bookl2`, and `handle_trade` as needed
3. **Value Property**: Expose your indicator's current value through the `value` property
4. **Warmup Status**: Check `is_warmed_up` property to ensure indicator is ready before using
5. **Flexible Parameters**: Pass custom parameters through the `params` dictionary
This approach ensures your indicators have sufficient historical data before making trading decisions, improving the reliability and accuracy of your trading strategies.
## Execution Algorithms
NexusTrader provides a powerful execution algorithm framework for implementing advanced order execution strategies like TWAP, VWAP, iceberg orders, and more. This allows you to split large orders into smaller chunks and execute them over time to minimize market impact.
### Using TWAP (Time-Weighted Average Price)
The built-in `TWAPExecAlgorithm` splits a large order into smaller slices and executes them at regular intervals over a specified time horizon.
```python
from decimal import Decimal
from datetime import timedelta

from nexustrader.constants import OrderSide
from nexustrader.strategy import Strategy
from nexustrader.engine import Engine
from nexustrader.execution import TWAPExecAlgorithm

class MyStrategy(Strategy):
    def __init__(self):
        super().__init__()
        self.symbol = "BTCUSDT-PERP.BINANCE"

    def on_start(self):
        self.subscribe_bookl1(self.symbol)
        # Schedule TWAP order placement
        self.schedule(
            func=self.place_twap_order,
            trigger="date",
            run_date=self.clock.utc_now() + timedelta(seconds=10),
        )

    def place_twap_order(self):
        # Execute 1 BTC over 5 minutes with 30-second intervals (10 slices)
        self.create_algo_order(
            symbol=self.symbol,
            side=OrderSide.BUY,
            amount=Decimal("1.0"),
            exec_algorithm_id="TWAP",
            exec_params={
                "horizon_secs": 300,  # Total execution time: 5 minutes
                "interval_secs": 30,  # Interval between slices: 30 seconds
            },
        )

# Register the execution algorithm with the engine
engine = Engine(config)
engine.add_exec_algorithm(algorithm=TWAPExecAlgorithm())
```
#### TWAP Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `horizon_secs` | int | Yes | Total execution time horizon in seconds |
| `interval_secs` | int | Yes | Interval between order slices in seconds |
| `use_limit` | bool | No | If True, use limit orders instead of market orders (default: False) |
| `n_tick_sz` | int | No | Number of tick sizes to offset from best bid/ask for limit orders. Positive = more passive, Negative = more aggressive (default: 0) |
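The arithmetic behind the two required parameters is simple: the horizon divided by the interval gives the number of slices, and the total amount is split evenly across them. A minimal sketch of that split (our own illustration, not the internals of `TWAPExecAlgorithm`, which also handles precision rounding, fills, and scheduling):

```python
from decimal import Decimal

def twap_slices(total: Decimal, horizon_secs: int, interval_secs: int) -> list[Decimal]:
    # One slice per interval across the horizon; the real algorithm also
    # snaps each slice to the market's amount precision.
    n = horizon_secs // interval_secs
    return [total / n] * n

slices = twap_slices(Decimal("1.0"), horizon_secs=300, interval_secs=30)
# 10 equal slices, one submitted every 30 seconds
```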
#### TWAP with Limit Orders
For better price execution, you can use limit orders with price offsets:
```python
self.create_algo_order(
    symbol="BTCUSDT-PERP.OKX",
    side=OrderSide.BUY,
    amount=Decimal("2.0"),
    exec_algorithm_id="TWAP",
    exec_params={
        "horizon_secs": 100,
        "interval_secs": 10,
        "use_limit": True,  # Use limit orders
        "n_tick_sz": 1,  # Place 1 tick below best ask (for buy)
    },
    reduce_only=True,  # Optional: only reduce existing position
)
```
### Creating Custom Execution Algorithms
You can create your own execution algorithms by subclassing `ExecAlgorithm`. The key method to implement is `on_order()`, which is called when a new algorithmic order is received.
```python
from decimal import Decimal

from nexustrader.execution.algorithm import ExecAlgorithm
from nexustrader.execution.config import ExecAlgorithmConfig
from nexustrader.execution.schema import ExecAlgorithmOrder
from nexustrader.execution.constants import ExecAlgorithmStatus
from nexustrader.schema import Order

class MyAlgoConfig(ExecAlgorithmConfig, kw_only=True, frozen=True):
    """Configuration for custom algorithm."""

    exec_algorithm_id: str = "MY_ALGO"

class MyExecAlgorithm(ExecAlgorithm):
    """Custom execution algorithm example."""

    def __init__(self, config: MyAlgoConfig | None = None):
        if config is None:
            config = MyAlgoConfig()
        super().__init__(config)

    def on_order(self, exec_order: ExecAlgorithmOrder):
        """
        Main entry point - called when create_algo_order() is invoked.

        Parameters
        ----------
        exec_order : ExecAlgorithmOrder
            Contains: primary_oid, symbol, side, total_amount, remaining_amount, params
        """
        # Access custom parameters
        params = exec_order.params
        my_param = params.get("my_param", "default")

        # Spawn child orders using available methods:
        # - spawn_market(exec_order, quantity)
        # - spawn_limit(exec_order, quantity, price)
        # - spawn_market_ws / spawn_limit_ws for WebSocket submission

        # Example: split into two market orders
        half = self.amount_to_precision(
            exec_order.symbol,
            exec_order.total_amount / 2,
            mode="floor",
        )
        self.spawn_market(exec_order, half)
        self.spawn_market(exec_order, exec_order.total_amount - half)

    def on_spawned_order_filled(self, exec_order: ExecAlgorithmOrder, order: Order):
        """Called when a spawned order is filled."""
        if exec_order.remaining_amount <= 0:
            self.mark_complete(exec_order)

    def on_cancel(self, exec_order: ExecAlgorithmOrder):
        """Handle cancellation request."""
        exec_order.status = ExecAlgorithmStatus.CANCELED
```
#### Key Methods to Override
| Method | Description |
|--------|-------------|
| `on_order(exec_order)` | **Required.** Main entry point when a new order is received |
| `on_start()` | Called when the algorithm starts |
| `on_stop()` | Called when the algorithm stops |
| `on_cancel(exec_order)` | Called when cancellation is requested |
| `on_execution_complete(exec_order)` | Called when execution completes |
| `on_spawned_order_filled(exec_order, order)` | Called when a spawned order is filled |
| `on_spawned_order_failed(exec_order, order)` | Called when a spawned order fails |
#### Utility Methods
| Method | Description |
|--------|-------------|
| `spawn_market(exec_order, quantity)` | Spawn a market order |
| `spawn_limit(exec_order, quantity, price)` | Spawn a limit order |
| `cancel_spawned_order(exec_order, spawned_oid)` | Cancel a spawned order |
| `mark_complete(exec_order)` | Mark execution as complete |
| `set_timer(name, interval, callback)` | Set a timer for scheduled execution |
| `amount_to_precision(symbol, amount)` | Convert amount to market precision |
| `price_to_precision(symbol, price)` | Convert price to market precision |
| `min_order_amount(symbol)` | Get minimum order amount |
| `cache.bookl1(symbol)` | Get current L1 order book |
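For intuition, `amount_to_precision` with `mode="floor"` conceptually snaps an amount down to the market's step size. A minimal sketch with a hypothetical step value (the real method looks the precision up from market metadata):

```python
from decimal import Decimal

def floor_to_step(amount: Decimal, step: Decimal) -> Decimal:
    # Snap down to a multiple of the step size, never rounding up,
    # so a spawned order can never exceed the remaining amount.
    return (amount // step) * step

floor_to_step(Decimal("0.12345"), Decimal("0.001"))  # Decimal("0.123")
```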
#### Register and Use
```python
engine = Engine(config)
engine.add_exec_algorithm(algorithm=MyExecAlgorithm())

# In strategy:
self.create_algo_order(
    symbol="BTCUSDT-PERP.BINANCE",
    side=OrderSide.BUY,
    amount=Decimal("1.0"),
    exec_algorithm_id="MY_ALGO",
    exec_params={"my_param": "value"},
)
```
## Contributing
Thank you for considering contributing to nexustrader! We greatly appreciate any effort to help improve the project. If
you have an idea for an enhancement or a bug fix, the first step is to open
an [issue](https://github.com/Quantweb3-ai/tradebot-pro/issues) on GitHub. This allows us to discuss your proposal and
ensure it aligns with the project's goals, while also helping to avoid duplicate efforts.
When you're ready to start working on your contribution, please review the guidelines in
the [CONTRIBUTING.md](./CONTRIBUTING.md) file. Depending on the nature of your contribution, you may also need to sign a
Contributor License Agreement (CLA) to ensure it can be included in the project.
> **Note**
> Pull requests should be directed to the `main` branch (the default branch), where new features and improvements are
> integrated before release.
Thank you again for your interest in nexustrader! We look forward to reviewing your contributions and collaborating with
you to make the project even better.
## VIP Privileges
Trading on our platform is free. Become a VIP customer to enjoy exclusive technical support privileges for $499 per month ([Subscribe Here](https://quantweb3.ai/subscribe/)), or get VIP status at no cost by opening an account through our partnership links.
Our partners include global leading trading platforms like Bybit, OKX, ZFX, Bison and others. By opening an account through our referral links, you'll enjoy these benefits:
Instant Account Benefits
1. Trading Fee Discounts: Exclusive discounts to lower your trading costs.
2. VIP Service Support: Contact us after opening your account to become our VIP customer. Enjoy exclusive events and benefits for the ultimate VIP experience.
Act now and join our VIP program!
> Click the links below to register
- [Bybit](https://partner.bybit.com/b/90899)
- [OKX](http://www.okx.com/join/80353297)
- [ZFX](https://zfx.link/46dFByp)
- [Bison](https://m.bison.com/#/register?invitationCode=1002)
## Social
Connect with us on your favorite platforms:
- [X (Twitter)](https://x.com/quantweb3_ai): Stay updated with our latest news, features, and announcements.
- [Discord](https://discord.gg/BR8VGRrXFr): Join our community to discuss ideas, get support, and connect with other users.
- [Telegram](https://t.me/+6e2MtXxoibM2Yzlk): Receive instant updates and engage in real-time discussions.
## See Also
We recommend exploring related tools and projects that can enhance your trading workflows:
- **[Nexus](https://github.com/Quantweb3-ai/nexus):** A robust exchange interface optimization solution that integrates
seamlessly with trading bots like nexustrader, enabling faster and more reliable trading execution.
## License
Nexustrader is available on GitHub under the MIT License. Contributions to the project are welcome and require the
completion of a Contributor License Agreement (CLA). Please review the contribution guidelines and submit a pull
request. See the [LICENSE](./LICENSE) file for details.
## Star History
<a href="https://www.star-history.com/#Quantweb3-com/NexusTrader&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Quantweb3-com/NexusTrader&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Quantweb3-com/NexusTrader&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Quantweb3-com/NexusTrader&type=Date" />
</picture>
</a>
| text/markdown | River-Shi | nachuan.shi.quant@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"aiosqlite<0.22.0,>=0.21.0",
"apscheduler<4.0.0,>=3.11.0",
"asyncpg>=0.30.0",
"bcrypt<5.0.0,>=4.2.1",
"ccxt>=4.5.34",
"certifi<2026.0.0,>=2025.1.31",
"click>=8.1.0",
"cython<4.0.0,>=3.0.11",
"dynaconf[redis]>=3.2.12",
"eth-account>=0.13.7",
"fastapi>=0.117.1",
"flashduty-sdk>=0.1.1",
"httpx>... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:58:14.102658 | nexustrader-0.2.52.tar.gz | 238,768 | 8e/c8/0c85f8ba984f86655448f6ea49a13a022e9bc81a384a60c0ca0b490a9d24/nexustrader-0.2.52.tar.gz | source | sdist | null | false | e3403c4765cf3699b0847109fe1f0437 | ea5c67c61245cdb4c2148336b9a64e8f8e017881267e3414ddfa4b23bc84c216 | 8ec80c85f8ba984f86655448f6ea49a13a022e9bc81a384a60c0ca0b490a9d24 | null | [
"LICENSE"
] | 246 |
2.3 | bugspot | 0.1.1 | Lightweight insect detection and tracking using motion-based detection | # BugSpot
[PyPI](https://pypi.org/project/bugspot/) · [Python versions](https://pypi.org/project/bugspot/) · [License: MIT](https://opensource.org/licenses/MIT)
Lightweight insect detection and tracking using motion-based GMM background subtraction, Hungarian algorithm tracking, and path topology analysis. Core library for [B++](https://github.com/Tvenver/Bplusplus) and [Sensing Garden](https://github.com/MIT-Senseable-City-Lab/sensing-garden).
**No ML framework dependencies** — only requires OpenCV, NumPy, and SciPy.
## Installation
```bash
pip install bugspot
```
## Quick Start
### Command Line
```bash
# Run with defaults
bugspot video.mp4
# Run with a custom config
bugspot video.mp4 --config detection_config.yaml --output results/
```
### Python API
```python
from bugspot import DetectionPipeline

pipeline = DetectionPipeline()
result = pipeline.process_video("video.mp4")

print(f"Confirmed: {len(result.confirmed_tracks)} tracks")
for track_id, track in result.confirmed_tracks.items():
    print(f"  Track {track_id[:8]}: {track.num_detections} detections, {track.duration:.1f}s")
    for frame_num, crop in track.crops:
        pass  # feed crop to your classifier
    if track.composite is not None:
        import cv2
        cv2.imwrite(f"track_{track_id[:8]}.jpg", track.composite)
```
### Save Outputs to Disk
```python
result = pipeline.process_video(
    "video.mp4",
    save_crops_dir="output/crops",
    save_composites_dir="output/composites",
)
```
### Continuous Operation (Multi-Chunk)
For processing video chunks where tracks persist across boundaries:
```python
pipeline = DetectionPipeline(config)

for video_chunk in video_queue:
    result = pipeline.process_video(video_chunk)
    # Process results...
    pipeline.clear()  # Keep tracker state, clear detections
```
### Single Video (Stateless)
For one-off processing without persistent state:
```python
pipeline = DetectionPipeline(config)
result = pipeline.process_video("video.mp4")
pipeline.reset() # Full reset — clear everything including tracker
```
## Pipeline
1. **Detection** — GMM background subtraction → morphological filtering → shape filters → cohesiveness check
2. **Tracking** — Hungarian algorithm matching with lost track recovery
3. **Topology Analysis** — Path analysis confirms insect-like movement (vs plants/noise)
4. **Crop Extraction** — Re-reads video to extract crop images for confirmed tracks
5. **Composite Rendering** — Lighten blend on darkened background showing temporal trail
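Step 5's lighten blend can be pictured as a per-pixel maximum over a darkened background, so each bright moving insect leaves a visible trail. A minimal NumPy sketch (our own illustration, not bugspot's renderer):

```python
import numpy as np

def lighten_composite(background, frames, darken=0.5):
    # Darken the static background, then keep the brightest value each
    # pixel ever takes across the frames - motion stands out as a trail.
    out = background.astype(np.float32) * darken
    for frame in frames:
        out = np.maximum(out, frame.astype(np.float32))
    return out.astype(np.uint8)

bg = np.full((4, 4), 200, dtype=np.uint8)
frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
frames[0][1, 1] = 255  # one bright "insect" pixel in one frame
result = lighten_composite(bg, frames)
# result[1, 1] == 255; elsewhere the darkened background (100) shows through
```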
## Configuration
See [`detection_config.yaml`](detection_config.yaml) for all parameters with descriptions.
| Parameter | Default | Description |
|-----------|---------|-------------|
| **GMM** | | |
| `gmm_history` | 500 | Frames to build background model |
| `gmm_var_threshold` | 16 | Foreground variance threshold |
| **Morphological** | | |
| `morph_kernel_size` | 3 | Kernel size (NxN) |
| **Cohesiveness** | | |
| `min_largest_blob_ratio` | 0.80 | Min largest blob / total motion |
| `max_num_blobs` | 5 | Max blobs in detection |
| `min_motion_ratio` | 0.15 | Min motion pixels / bbox area |
| **Shape** | | |
| `min_area` | 200 | Min contour area (px²) |
| `max_area` | 40000 | Max contour area (px²) |
| `min_density` | 3.0 | Min area/perimeter ratio |
| `min_solidity` | 0.55 | Min convex hull fill ratio |
| **Tracking** | | |
| `min_displacement` | 50 | Min net movement (px) |
| `min_path_points` | 10 | Min points for topology |
| `max_frame_jump` | 100 | Max jump between frames (px) |
| `max_lost_frames` | 45 | Frames before track deleted |
| `max_area_change_ratio` | 3.0 | Max area change ratio |
| **Tracker Matching** | | |
| `tracker_w_dist` | 0.6 | Distance weight (0-1) |
| `tracker_w_area` | 0.4 | Area weight (0-1) |
| `tracker_cost_threshold` | 0.3 | Max cost for match |
| **Topology** | | |
| `max_revisit_ratio` | 0.30 | Max revisited positions |
| `min_progression_ratio` | 0.70 | Min forward progression |
| `max_directional_variance` | 0.90 | Max heading variance |
| `revisit_radius` | 50 | Revisit radius (px) |
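For intuition on the tracker-matching weights above, one plausible reading is a weighted sum of normalized centroid distance and relative area change, compared against the cost threshold. This is our own sketch; the names and the distance normalization are assumptions, not bugspot internals:

```python
import math

def match_cost(det_xy, trk_xy, det_area, trk_area,
               w_dist=0.6, w_area=0.4, dist_norm=100.0):
    # Blend normalized centroid distance with relative area change;
    # a detection/track pair is matched only when the cost is low enough.
    dist = math.dist(det_xy, trk_xy) / dist_norm
    area = abs(det_area - trk_area) / max(det_area, trk_area)
    return w_dist * dist + w_area * area

cost = match_cost((110, 205), (100, 200), det_area=420, trk_area=400)
# matched only if cost <= tracker_cost_threshold (0.3 by default)
```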
| text/markdown | Orlando Closs | closs@mit.edu | null | null | MIT | insect-detection, motion-detection, tracking, computer-vision, background-subtraction | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Languag... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.20",
"opencv-python>=4.0",
"pyyaml>=6.0",
"scipy>=1.7"
] | [] | [] | [] | [
"Homepage, https://github.com/orlandocloss/bugspot",
"Repository, https://github.com/orlandocloss/bugspot",
"B++, https://github.com/Tvenver/Bplusplus",
"Sensing Garden, https://github.com/MIT-Senseable-City-Lab/sensing-garden"
] | poetry/2.1.3 CPython/3.10.12 Linux/6.8.0-94-generic | 2026-02-18T16:57:45.918070 | bugspot-0.1.1.tar.gz | 14,030 | 11/ec/d68ffae6bc1cc4734c453ba9a98ac345dae6c5fa5bdbf3ef984b901eac76/bugspot-0.1.1.tar.gz | source | sdist | null | false | 37dcab4c79f2e5a1f38b850ec1695874 | ed61051279479f913243bd67b0ccac9434a5e4c0c2875055018e832acfd16ccf | 11ecd68ffae6bc1cc4734c453ba9a98ac345dae6c5fa5bdbf3ef984b901eac76 | null | [] | 304 |
2.4 | Topsis-Prabhsimrat-102317135 | 1.0.0 | A Python implementation of TOPSIS method using command line arguments | # TOPSIS – Technique for Order Preference by Similarity to Ideal Solution
TOPSIS is a Multi-Criteria Decision Making (MCDM) method developed in the 1980s.
It selects the best alternative based on:
- Shortest Euclidean distance from the Ideal Solution
- Farthest distance from the Negative-Ideal Solution
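The two distance criteria above can be sketched in a few lines (illustration only; the installed `topsis` command implements the full CSV-based workflow, and the function names here are our own):

```python
import math

def topsis_scores(matrix, weights, impacts):
    # matrix: alternatives x criteria; impacts: "+" (benefit) or "-" (cost)
    ncols = len(weights)
    # 1. Vector-normalize each criterion, then apply weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    weighted = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # 2. Ideal and negative-ideal values depend on the impact sign
    best = [max(r[j] for r in weighted) if impacts[j] == "+" else min(r[j] for r in weighted)
            for j in range(ncols)]
    worst = [min(r[j] for r in weighted) if impacts[j] == "+" else max(r[j] for r in weighted)
             for j in range(ncols)]
    # 3. Euclidean distances to both ideals give the closeness score
    scores = []
    for row in weighted:
        d_best = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(ncols)))
        d_worst = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(ncols)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

scores = topsis_scores(
    [[250, 16, 12], [200, 16, 8], [300, 32, 16]],
    weights=[1, 1, 2],
    impacts=["+", "+", "-"],
)
# rank 1 goes to the alternative with the highest score
```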
---
## Installation
Install directly from PyPI:

pip install Topsis-Prabhsimrat-102317135
## Usage
After installation, open your terminal and run:
topsis <InputDataFile> <Weights> <Impacts> <OutputFile>
### Example:
topsis test.csv "1,1,2,3,1" "+,-,+,-,-" output.csv
## Output
The output file will contain:
- Original data
- Topsis Score
- Rank (1 = Best Alternative)
---
| text/markdown | Prabhsimrat Singh | prabhsimrat28112005@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"numpy",
"pandas"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.4 | 2026-02-18T16:57:24.553374 | topsis_prabhsimrat_102317135-1.0.0.tar.gz | 2,057 | cd/3f/d5450a5c105d720b59eb3c5fdf4a8d19708603f47f32ce7da07653707bd9/topsis_prabhsimrat_102317135-1.0.0.tar.gz | source | sdist | null | false | 2f8c5ff82f0190c8b6d2fc55b2d77ae0 | 258a06a8373e32a871de55faf358f603179609de886047c54dafd3649417e2cc | cd3fd5450a5c105d720b59eb3c5fdf4a8d19708603f47f32ce7da07653707bd9 | null | [] | 0 |
2.4 | chuk-llm | 0.18 | A unified Python library for Large Language Model (LLM) providers with real-time streaming, function calling, middleware support, automatic session tracking, dynamic model discovery, and intelligent system prompt generation. | # chuk-llm
**The intelligent model capability engine.** Production-ready Python library with dynamic model discovery, capability-based selection, real-time streaming, and Pydantic-native architecture.
```python
from chuk_llm import quick_question
print(quick_question("What is 2+2?")) # "2 + 2 equals 4."
```
## ✨ What's New in v0.14
**Revolutionary Registry System:**
- 🧠 **Dynamic Model Discovery** - No more hardcoded model lists, automatic capability detection
- 🎯 **Intelligent Selection** - Find models by capabilities, cost, and quality tier
- 🔍 **Smart Queries** - `find_best(requires_tools=True, quality_tier="cheap")`
- 🏗️ **Pydantic V2 Native** - Type-safe models throughout, no dictionary goop
- ⚡ **Async-First Architecture** - True async/await with sync wrappers for convenience
- 📊 **Layered Capability Resolution** - Heuristics → YAML cache → Provider APIs
- 🚀 **Zero-Config** - Pull a new Ollama model, use it immediately
**Latest Models (December 2025):**
- 🤖 **Gemini 2.5/3 Pro** - 1M token context, adaptive thinking, multimodal (`gemini-2.5-flash`, `gemini-3-pro-preview`)
- 🚀 **Mistral Large 3** - 675B MoE, 41B active, Apache 2.0 (`mistral-large-2512`, `ministral-8b-2512`, `ministral-14b-2512`)
- 💡 **DeepSeek V3.2** - 671B MoE, ultra-efficient at $0.27/M tokens (`deepseek-chat`, `deepseek-reasoner`)
**Performance:**
- ⚡ **52x faster imports** - Lazy loading reduces import time from 735ms to 14ms
- 🚀 **112x faster client creation** - Automatic thread-safe caching
- 📊 **<0.015% overhead** - Negligible library overhead vs API latency
See [REGISTRY_COMPLETE.md](REGISTRY_COMPLETE.md) for architecture details.
## Why chuk-llm?
- **🧠 Intelligent**: Dynamic registry selects models by capabilities, not names
- **🔍 Auto-Discovery**: Pull new models, use immediately - no configuration needed
- **⚡ Lightning Fast**: Massive performance improvements (see [Performance](#performance))
- **🛠️ Clean Tools API**: Function calling without complexity - tools are just parameters
- **🏗️ Type-Safe**: Pydantic V2 models throughout, no dictionary goop
- **⚡ Async-Native**: True async/await with sync wrappers when needed
- **📊 Built-in Analytics**: Automatic cost and usage tracking with session isolation
- **🎯 Production-Ready**: Thread-safe caching, connection pooling, negligible overhead
## Quick Start
### Installation
```bash
# Core functionality
pip install chuk_llm
# Or with extras
pip install chuk_llm[redis] # Persistent sessions
pip install chuk_llm[cli] # Enhanced CLI experience
pip install chuk_llm[all] # Everything
```
### Basic Usage
```python
# Simplest approach - auto-detects available providers
from chuk_llm import quick_question
answer = quick_question("Explain quantum computing in one sentence")
# Provider-specific (auto-generated functions!)
from chuk_llm import ask_openai_sync, ask_claude_sync, ask_ollama_llama3_2_sync
response = ask_openai_sync("Tell me a joke")
response = ask_claude_sync("Write a haiku")
response = ask_ollama_llama3_2_sync("Explain Python") # Auto-discovered!
```
### Latest Models (December 2025)
```python
from chuk_llm import ask

# Gemini 3 Pro - Advanced reasoning with 1M context
response = await ask(
    "Explain consciousness vs intelligence in AI",
    provider="gemini",
    model="gemini-3-pro-preview"
)

# Mistral Large 3 - 675B MoE, Apache 2.0
response = await ask(
    "Write a Python function for binary search",
    provider="mistral",
    model="mistral-large-2512"
)

# Ministral 8B - Fast, efficient, cost-effective
response = await ask(
    "Summarize this text",
    provider="mistral",
    model="ministral-8b-2512"
)

# DeepSeek V3.2 - Ultra-efficient at $0.27/M tokens
response = await ask(
    "Solve this math problem step by step",
    provider="deepseek",
    model="deepseek-chat"
)
```
### Async & Streaming
```python
import asyncio

from chuk_llm import ask, stream

async def main():
    # Async call
    response = await ask("What's the capital of France?")

    # Real-time streaming
    async for chunk in stream("Write a story"):
        print(chunk, end="", flush=True)

asyncio.run(main())
```
### Function Calling (Tools)
```python
from chuk_llm import ask
from chuk_llm.api.tools import tools_from_functions

def get_weather(location: str) -> dict:
    return {"temp": 22, "location": location, "condition": "sunny"}

# Tools are just a parameter!
toolkit = tools_from_functions(get_weather)
response = await ask(
    "What's the weather in Paris?",
    tools=toolkit.to_openai_format()
)
print(response)  # Returns dict with tool_calls when tools provided
```
### CLI Usage
```bash
# Quick commands with global aliases
chuk-llm ask_gpt "What is Python?"
chuk-llm ask_claude "Explain quantum computing"
# Auto-discovered Ollama models work instantly
chuk-llm ask_ollama_gemma3 "Hello world"
chuk-llm stream_ollama_mistral "Write a long story"
# llama.cpp with automatic model resolution
chuk-llm ask "What is Python?" --provider llamacpp --model qwen3
chuk-llm ask "Count to 5" --provider llamacpp --model llama3.2
# Discover new models
chuk-llm discover ollama
```
## 🧠 Dynamic Registry System
The **registry** is the intelligent core of chuk-llm. Instead of hardcoding model names, it dynamically discovers models and their capabilities, then selects the best one for your needs.
### Intelligent Model Selection
```python
from chuk_llm.registry import get_registry
from chuk_llm import ask

# Get the registry (auto-discovers all available models)
registry = await get_registry()

# Find the best cheap model with tool support
model = await registry.find_best(
    requires_tools=True,
    quality_tier="cheap"
)
print(f"Selected: {model.spec.provider}:{model.spec.name}")
# Selected: groq:llama-3.3-70b-versatile

# Use the selected model with ask()
response = await ask(
    "Summarize this document",
    provider=model.spec.provider,
    model=model.spec.name
)

# Find best model for vision with large context
model = await registry.find_best(
    requires_vision=True,
    min_context=128_000,
    quality_tier="balanced"
)
# Returns: openai:gpt-4o-mini or gemini:gemini-2.0-flash-exp

# Custom queries with multiple requirements
from chuk_llm.registry import ModelQuery

results = await registry.query(ModelQuery(
    requires_tools=True,
    requires_vision=True,
    min_context=100_000,
    max_cost_per_1m_input=2.0,
    quality_tier="balanced"
))
```
### How It Works
**3-Tier Capability Resolution:**
1. **Heuristic Resolver** - Infers capabilities from model name patterns (e.g., "gpt-4" → likely supports tools)
2. **YAML Cache** - Tested capabilities stored in `registry/capabilities/*.yaml` for fast, reliable access
3. **Provider APIs** - Queries provider APIs dynamically (Ollama `/api/tags`, Gemini models API, etc.)
**Dynamic Discovery Sources:**
- OpenAI `/v1/models` API
- Anthropic known models
- Google Gemini models API
- Ollama `/api/tags` (local models)
- llama.cpp `/v1/models` (local GGUF + Ollama bridge)
- DeepSeek `/v1/models` API
- Moonshot AI `/v1/models` API
- Groq, Mistral, Perplexity, and more
Provider APIs are cached on disk and refreshed periodically (or via `chuk-llm discover`), so new models appear without needing a chuk-llm release.
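A toy version of the tier-1 heuristic resolver illustrates the idea of inferring capabilities from name patterns. The patterns and the returned shape here are our own assumptions, not chuk-llm's actual resolver:

```python
# Infer rough capabilities from a model name alone (tier 1); the YAML
# cache (tier 2) and provider APIs (tier 3) override these guesses.
def infer_caps(model_name: str) -> dict:
    name = model_name.lower()
    caps = {"tools": False, "vision": False}
    if any(p in name for p in ("gpt-4", "claude", "gemini", "llama-3")):
        caps["tools"] = True
    if any(p in name for p in ("vision", "-vl", "gpt-4o", "gemini")):
        caps["vision"] = True
    return caps

infer_caps("gpt-4o-mini")  # {'tools': True, 'vision': True}
```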
**Benefits:**
- ✅ **No hardcoded model lists** - Pull new Ollama models, use immediately
- ✅ **Capability-based selection** - Declare requirements, not model names
- ✅ **Cost-aware** - Find cheapest model that meets requirements
- ✅ **Quality tiers** - BEST, BALANCED, CHEAP classification
- ✅ **Extensible** - Add custom sources and resolvers via protocols
## Key Features
### 🔍 Automatic Model Discovery
Pull new Ollama models and use them immediately - no configuration needed:
```bash
# Terminal 1: Pull a new model
ollama pull llama3.2
ollama pull mistral-small:latest
# Terminal 2: Use immediately in Python
from chuk_llm import ask_ollama_llama3_2_sync, ask_ollama_mistral_small_latest_sync
response = ask_ollama_llama3_2_sync("Hello!")
# Or via CLI
chuk-llm ask_ollama_mistral_small_latest "Tell me a joke"
```
### 🦙 llama.cpp Integration
Run local GGUF models with advanced control via llama.cpp server. **Reuse Ollama's downloaded models** without re-downloading!
**CLI Usage** (✨ Now fully supported!):
```bash
# Simple usage - model names automatically resolve to GGUF files
chuk-llm ask "What is Python?" --provider llamacpp --model qwen3
chuk-llm ask "Count to 5" --provider llamacpp --model llama3.2
# Streaming (default)
chuk-llm ask "Write a story" --provider llamacpp --model qwen3
# Non-streaming
chuk-llm ask "Quick question" --provider llamacpp --model qwen3 --no-stream
```
**Python API** (Simple - Recommended):
```python
from chuk_llm import ask
# Model names automatically resolve to Ollama's GGUF files!
response = await ask(
"What is Python?",
provider="llamacpp",
model="qwen3" # Auto-resolves to ~/.ollama/models/blobs/sha256-xxx
)
print(response)
# Streaming
from chuk_llm import stream
async for chunk in stream("Tell me a story", provider="llamacpp", model="llama3.2"):
    print(chunk, end="", flush=True)
```
**Python API** (Advanced - Full Control):
```python
from chuk_llm.registry.resolvers.llamacpp_ollama import discover_ollama_models
from chuk_llm.llm.providers.llamacpp_client import LlamaCppLLMClient
from chuk_llm.core import Message, MessageRole
# Discover Ollama models (finds GGUF blobs in ~/.ollama/models/blobs/)
models = discover_ollama_models()
print(f"Found {len(models)} Ollama models") # e.g., "Found 48 Ollama models"
# Create client with auto-managed server
client = LlamaCppLLMClient(
model=str(models[0].gguf_path), # Reuse Ollama's GGUF!
ctx_size=8192,
n_gpu_layers=-1, # Use all GPU layers
)
messages = [Message(role=MessageRole.USER, content="Hello!")]
result = await client.create_completion(messages=messages)
print(result["response"])
# Cleanup
await client.stop_server()
```
**Key Features:**
- ✅ **CLI Support** - Full integration with chuk-llm CLI (model name resolution)
- ✅ **Ollama Bridge** - Automatically discovers and reuses Ollama's downloaded models (no re-download!)
- ✅ **Auto-Resolution** - Model names (qwen3, llama3.2) resolve to GGUF file paths automatically
- ✅ **Process Management** - Auto-managed server lifecycle (start/stop/health checks)
- ✅ **OpenAI-Compatible** - Uses standard OpenAI client (streaming, tools, etc.)
- ✅ **High Performance** - Benchmarks show llama.cpp is 1.53x faster than Ollama (311 vs 204 tok/s)
- ✅ **Advanced Control** - Custom sampling, grammars, GPU layers, context size
- ✅ **Cross-Platform** - Works on macOS, Linux, Windows
**Performance Comparison** (same GGUF file, qwen3:0.6b):
- llama.cpp: 311.4 tok/s
- Ollama: 204.2 tok/s
- **llama.cpp is 1.53x faster!**
See `examples/providers/llamacpp_ollama_usage_examples.py` and `examples/providers/benchmark_ollama_vs_llamacpp.py` for full examples.
### 📊 Automatic Session Tracking
Every call is automatically tracked for analytics:
```python
from chuk_llm import ask_sync, get_session_stats
ask_sync("What's the capital of France?")
ask_sync("What's 2+2?")
stats = get_session_stats()
print(f"Total cost: ${stats['estimated_cost']:.6f}")
print(f"Total tokens: {stats['total_tokens']}")
```
### 🎭 Stateful Conversations
Build conversational AI with memory:
```python
from chuk_llm import conversation
async with conversation() as chat:
    await chat.ask("My name is Alice")
    response = await chat.ask("What's my name?")
    # AI responds: "Your name is Alice"
```
### ⚡ Concurrent Execution
Run multiple queries in parallel for massive speedups:
```python
import asyncio
from chuk_llm import ask
# 3-7x faster than sequential!
responses = await asyncio.gather(
ask("What is AI?"),
ask("Capital of Japan?"),
ask("Meaning of life?")
)
```
## Supported Providers
All providers are **dynamically discovered** via the registry system - no hardcoded model lists!
| Provider | Discovery Method | Special Features | Status |
|----------|-----------------|-----------------|--------|
| **OpenAI** | `/v1/models` API | GPT-5 / GPT-5.1, o3-family reasoning, industry standard | ✅ Dynamic |
| **Azure OpenAI** | Deployment config | SOC2, HIPAA compliant, VNet, multi-region | ✅ Dynamic |
| **Anthropic** | Known models† | Claude 3.5 Sonnet, advanced reasoning, 200K context | ✅ Static |
| **Google Gemini** | Models API | Gemini 2.5/3 Pro, 1M token context, adaptive thinking, multimodal | ✅ Dynamic |
| **Groq** | `/v1/models` API | Llama 3.3, ultra-fast (our benchmarks: ~526 tok/s) | ✅ Dynamic |
| **Ollama** | `/api/tags` | Any local model, auto-discovery, offline, privacy | ✅ Dynamic |
| **llama.cpp** | `/v1/models` | Local GGUF models, Ollama bridge, advanced control | ✅ Dynamic |
| **IBM watsonx** | Known models† | Granite 3.3, enterprise, on-prem, compliance | ✅ Static |
| **Perplexity** | Known models† | Sonar, real-time web search, citations | ✅ Static |
| **Mistral** | Known models† | Large 3 (675B MoE), Ministral 3 (3B/8B/14B), Apache 2.0 | ✅ Static |
| **DeepSeek** | `/v1/models` API | DeepSeek V3.2 (671B MoE), ultra-efficient, $0.27/M tokens | ✅ Dynamic |
| **Moonshot AI** | `/v1/models` API | Kimi K2, 256K context, coding, Chinese language | ✅ Dynamic |
| **OpenRouter** | Known models† | Access to 100+ models via single API | ✅ Static |
† Static = discovered from curated model list + provider docs, not via `/models` endpoint
**Capabilities** (auto-detected by registry):
- ✅ Streaming responses
- ✅ Function calling / tool use
- ✅ Vision / multimodal inputs
- ✅ JSON mode / structured outputs
- ✅ Async and sync interfaces
- ✅ Automatic client caching
- ✅ Session tracking
- ✅ Conversation management
## Configuration
### Environment Variables
```bash
# API Keys - Cloud Providers
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..." # For Gemini 2.5/3 models
export GROQ_API_KEY="..."
export DEEPSEEK_API_KEY="..." # For DeepSeek V3.2 (chat/reasoner)
export MOONSHOT_API_KEY="..."
export MISTRAL_API_KEY="..." # For Mistral Large 3 & Ministral 3
# Azure Configuration
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
# Local Servers
# (No API keys needed for Ollama or llama.cpp)
# Session Storage (optional)
export SESSION_PROVIDER=redis # Default: memory
export SESSION_REDIS_URL=redis://localhost:6379/0
# Performance Settings
export CHUK_LLM_CACHE_CLIENTS=1 # Enable client caching (default: 1)
export CHUK_LLM_AUTO_DISCOVER=true # Auto-discover new models (default: true)
```
### Python Configuration
```python
from chuk_llm import configure
configure(
provider="azure_openai",
model="gpt-4o-mini",
temperature=0.7
)
# All subsequent calls use these settings
response = ask_sync("Hello!")
```
### Client Caching (Advanced)
Automatic client caching is enabled by default for maximum performance:
```python
from chuk_llm.llm.client import get_client
# First call creates client (~12ms)
client1 = get_client("openai", model="gpt-4o")
# Subsequent calls return cached instance (~125µs)
client2 = get_client("openai", model="gpt-4o")
assert client1 is client2 # Same instance!
# Disable caching for specific call
client3 = get_client("openai", model="gpt-4o", use_cache=False)
# Monitor cache performance
from chuk_llm.client_registry import print_registry_stats
print_registry_stats()
# Cache statistics:
# - Total clients: 1
# - Cache hits: 1
# - Cache misses: 1
# - Hit rate: 50.0%
```
## Advanced Features
### 🛠️ Function Calling / Tool Use
ChukLLM provides a clean, unified API for function calling. **Recommended approach**: Use the `Tools` class for automatic execution.
```python
from chuk_llm import Tools, tool
# Recommended: Class-based tools with auto-execution
class MyTools(Tools):
    @tool(description="Get weather for a city")
    def get_weather(self, location: str) -> dict:
        return {"temp": 22, "location": location, "condition": "sunny"}

    @tool  # Description auto-extracted from docstring
    def calculate(self, expr: str) -> float:
        """Evaluate a mathematical expression"""
        return eval(expr)  # demo only: eval() is unsafe on untrusted input

# Auto-executes tools and returns final response
tools = MyTools()
response = await tools.ask("What's the weather in Paris and what's 2+2?")
print(response)  # "The weather in Paris is 22°C and sunny. 2+2 equals 4."

# Sync version
response = tools.ask_sync("Calculate 15 * 4")
print(response)  # "15 * 4 equals 60"
```
**Alternative: Direct API usage** (for more control):
```python
from chuk_llm import ask
from chuk_llm.api.tools import tools_from_functions
def get_weather(location: str) -> dict:
    """Get weather information for a location"""
    return {"temp": 22, "location": location}
# Create toolkit
toolkit = tools_from_functions(get_weather)
# Returns dict with tool_calls - you handle execution
response = await ask(
"What's the weather in Paris?",
tools=toolkit.to_openai_format()
)
print(response) # {"response": "...", "tool_calls": [...]}
```
#### Streaming with Tools
```python
from chuk_llm import stream
# Streaming with tools
async for chunk in stream(
    "What's the weather in Tokyo?",
    tools=toolkit.to_openai_format(),
    return_tool_calls=True,  # Include tool calls in stream
):
    if isinstance(chunk, dict):
        print(f"Tool call: {chunk['tool_calls']}")
    else:
        print(chunk, end="", flush=True)
```
<details>
<summary><b>🌳 Conversation Branching</b></summary>
```python
async with conversation() as chat:
    await chat.ask("Planning a vacation")

    # Explore different options
    async with chat.branch() as japan_branch:
        await japan_branch.ask("Tell me about Japan")

    async with chat.branch() as italy_branch:
        await italy_branch.ask("Tell me about Italy")

    # Main conversation unaffected by branches
    await chat.ask("I'll go with Japan!")
```
</details>
<details>
<summary><b>📈 Provider Comparison</b></summary>
```python
from chuk_llm import compare_providers
results = compare_providers(
"Explain quantum computing",
["openai", "anthropic", "groq", "ollama"]
)
for provider, response in results.items():
    print(f"{provider}: {response[:100]}...")
```
</details>
<details>
<summary><b>🎯 Intelligent System Prompts</b></summary>
ChukLLM automatically generates optimized system prompts based on provider capabilities:
```python
# Each provider gets optimized prompts
response = ask_claude_sync("Help me code", tools=tools)
# Claude gets: "You are Claude, an AI assistant created by Anthropic..."
response = ask_openai_sync("Help me code", tools=tools)
# OpenAI gets: "You are a helpful assistant with function calling..."
```
</details>
## CLI Commands
```bash
# Quick access to any model
chuk-llm ask_gpt "Your question"
chuk-llm ask_claude "Your question"
chuk-llm ask_ollama_llama3_2 "Your question"
# llama.cpp with automatic model resolution
chuk-llm ask "Your question" --provider llamacpp --model qwen3
chuk-llm ask "Your question" --provider llamacpp --model llama3.2
# Discover and test
chuk-llm discover ollama # Find new models
chuk-llm test llamacpp # Test llamacpp provider
chuk-llm test azure_openai # Test connection
chuk-llm providers # List all providers
chuk-llm models ollama # Show available models
chuk-llm functions # List all generated functions
# Advanced usage
chuk-llm ask "Question" --provider azure_openai --model gpt-4o-mini --json
chuk-llm ask "Question" --provider llamacpp --model qwen3 --no-stream
chuk-llm ask "Question" --stream --verbose
# Function calling / Tool use from CLI
chuk-llm ask "Calculate 15 * 4" --tools calculator_tools.py
chuk-llm stream "What's the weather?" --tools weather_tools.py --return-tool-calls
# Zero-install with uvx
uvx chuk-llm ask_claude "Hello world"
uvx chuk-llm ask "Question" --provider llamacpp --model qwen3
```
## Performance
chuk-llm is designed for high throughput with negligible overhead:
### Key Metrics
| Operation | Time | Notes |
|-----------|------|-------|
| Import | 14ms | 52x faster than eager loading |
| Client creation (cached) | 125µs | 112x faster, thread-safe |
| Request overhead | 50-140µs | <0.015% of typical API call |
### Production Features
- **Automatic client caching** - Thread-safe, 112x faster repeated operations
- **Lazy imports** - Only load what you use
- **Connection pooling** - Efficient HTTP/2 reuse
- **Async-native** - Built on asyncio for maximum throughput
- **Smart caching** - Model discovery results cached on disk
### Benchmarks
Run comprehensive benchmarks:
```bash
uv run python benchmarks/benchmark_client_registry.py
uv run python benchmarks/llm_benchmark.py
```
**See [PERFORMANCE_OPTIMIZATIONS.md](PERFORMANCE_OPTIMIZATIONS.md) for detailed analysis and micro-benchmarks.**
## Architecture
ChukLLM uses a **registry-driven, async-native architecture** designed for scale:
### 🏗️ Core Design Principles
1. **Dynamic Registry** - Models discovered and selected by capabilities, not names
2. **Pydantic V2 Native** - Type-safe models throughout, no dictionary goop
3. **Async-First** - Built on asyncio with sync wrappers for convenience
4. **Stateless Clients** - Clients don't store conversation history; your application manages state
5. **Lazy Loading** - Modules load on-demand for instant imports (14ms)
6. **Automatic Caching** - Thread-safe client registry eliminates duplicate initialization
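Lazy loading of the kind described in principle 5 is commonly implemented with PEP 562's module-level `__getattr__`, which defers importing a submodule until the attribute is first touched. A generic sketch of the pattern using a throwaway module (this is not chuk_llm's actual source):

```python
import importlib
import sys
import types

# Build a demo module whose attributes resolve lazily (PEP 562 pattern).
lazy = types.ModuleType("lazy_demo")
lazy._LAZY_ATTRS = {"json_utils": "json", "path_utils": "pathlib"}

def _module_getattr(name: str):
    """Called only when normal attribute lookup on the module fails."""
    targets = lazy._LAZY_ATTRS
    if name in targets:
        module = importlib.import_module(targets[name])
        setattr(lazy, name, module)  # cache so later lookups skip this hook
        return module
    raise AttributeError(name)

lazy.__getattr__ = _module_getattr  # the PEP 562 hook
sys.modules["lazy_demo"] = lazy

# The underlying `json` module is only imported at this point.
print(lazy.json_utils.dumps({"lazy": True}))
```

Import time stays flat no matter how many heavy submodules the package exposes, because nothing is imported until first use.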
### 🔄 Request Flow
```
User Code
↓
import chuk_llm (14ms - lazy loading)
↓
get_client() (2µs - cached registry lookup)
↓
[Cached Client Instance]
↓
async ask() (~50µs - minimal overhead)
↓
Provider SDK (~50µs - efficient request building)
↓
HTTP Request (50-500ms - network I/O)
↓
Response Parsing (~50µs - orjson)
↓
Return to User
Total chuk-llm Overhead: ~150µs (<0.015% of API call)
```
### 🔐 Session Isolation
**Important:** Conversation history is **NOT** shared between calls. Each conversation is independent:
```python
from chuk_llm.llm.client import get_client
from chuk_llm.core.models import Message
client = get_client("openai", model="gpt-4o")
# Conversation 1
conv1 = [Message(role="user", content="My name is Alice")]
response1 = await client.create_completion(conv1)
# Conversation 2 (completely separate)
conv2 = [Message(role="user", content="What's my name?")]
response2 = await client.create_completion(conv2)
# AI won't know the name - conversations are isolated!
```
**Key Insights:**
- ✅ Clients are stateless (safe to cache and share)
- ✅ Conversation state lives in YOUR application
- ✅ HTTP sessions shared for performance (connection pooling)
- ✅ No cross-conversation or cross-user leakage
- ✅ Thread-safe for concurrent use
See [CONVERSATION_ISOLATION.md](CONVERSATION_ISOLATION.md) for detailed architecture.
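Conversely, when you do want continuity, your application appends each turn to a message list it owns and re-sends the whole list. A minimal sketch of that pattern; `EchoClient` is an invented stub standing in for a real chuk-llm client:

```python
# Illustrative only: EchoClient stands in for a real provider-backed client,
# and the message dicts mirror the usual role/content shape.
class EchoClient:
    def complete(self, messages: list[dict]) -> str:
        # A real client would send `messages` to the provider here.
        return f"(model saw {len(messages)} messages)"

client = EchoClient()
history: list[dict] = []  # YOUR application owns this state

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Alice"))  # (model saw 1 messages)
print(chat("What's my name?"))   # (model saw 3 messages)
```

Because all state lives in `history`, the same cached client can safely serve many independent conversations.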
### 📦 Module Organization
```
chuk-llm/
├── api/ # Public API (ask, stream, conversation)
├── registry/ # ⭐ Dynamic model registry (THE BRAIN)
│ ├── core.py # ModelRegistry orchestrator
│ ├── models.py # Pydantic models (ModelSpec, ModelCapabilities)
│ ├── sources/ # Discovery sources (OpenAI, Ollama, Gemini, etc.)
│ └── resolvers/ # Capability resolvers (Heuristic, YAML, APIs)
├── core/ # Pydantic V2 models (Message, Tool, ContentPart)
│ ├── models.py # Core Pydantic models
│ ├── enums.py # Type-safe enums (Provider, Feature, etc.)
│ └── constants.py # Constants
├── llm/
│ ├── providers/ # 15+ provider implementations
│ ├── client.py # Client factory with registry integration
│ └── features.py # Feature detection
├── configuration/ # Unified configuration system
└── client_registry.py # Thread-safe client caching
```
## Used by the CHUK Stack
chuk-llm is the **canonical LLM layer** for the entire CHUK ecosystem:
- **chuk-ai-planner** uses the registry to select planning vs drafting models by capability
- **chuk-acp-agent** uses capability-based policies per agent (e.g., "requires tools + 128k context")
- **chuk-mcp-remotion** uses it to pick video-script models with vision + long context
Instead of hardcoding "use GPT-4o", CHUK components declare **what they need**, and the registry finds the best available model.
## Documentation
- 📚 [Full Documentation](https://github.com/chrishayuk/chuk-llm/wiki)
- 🎯 [Examples (33)](https://github.com/chrishayuk/chuk-llm/tree/main/examples)
- ⚡ [Performance Optimizations](PERFORMANCE_OPTIMIZATIONS.md)
- 🗄️ [Client Registry](CLIENT_REGISTRY.md)
- 🔄 [Lazy Imports](LAZY_IMPORTS.md)
- 🔐 [Conversation Isolation](CONVERSATION_ISOLATION.md)
- 📊 [Registry System](REGISTRY_COMPLETE.md)
- 🔧 [Debug Tools](examples/debug/README.md) - Test OpenAI-compatible API capabilities
- 🏗️ [Migration Guide](https://github.com/chrishayuk/chuk-llm/wiki/migration)
- 🤝 [Contributing](https://github.com/chrishayuk/chuk-llm/blob/main/CONTRIBUTING.md)
## Quick Comparison
| Feature | chuk-llm | LangChain | LiteLLM | OpenAI SDK |
|---------|----------|-----------|---------|------------|
| Import speed | ⚡ 14ms | 🐌 1-2s | 🐌 500ms+ | ⚡ Fast |
| Client caching | ✅ Auto (112x) | ❌ | ❌ | ❌ |
| Auto-discovery | ✅ | ❌ | ❌ | ❌ |
| Native streaming | ✅ | ⚠️ | ✅ | ✅ |
| Function calling | ✅ Clean API | ✅ Complex | ⚠️ Basic | ✅ |
| Session tracking | ✅ Built-in | ⚠️ Manual | ❌ | ❌ |
| Session isolation | ✅ Guaranteed | ⚠️ Varies | ⚠️ Unclear | ⚠️ Manual |
| CLI included | ✅ | ❌ | ⚠️ Basic | ❌ |
| Provider functions | ✅ Auto-generated | ❌ | ❌ | ❌ |
| Conversations | ✅ Branching | ✅ | ❌ | ⚠️ Manual |
| Thread-safe | ✅ | ⚠️ Varies | ⚠️ | ✅ |
| Async-native | ✅ | ⚠️ Mixed | ✅ | ✅ |
| Setup complexity | Simple | Complex | Simple | Simple |
| Dependencies | Minimal | Heavy | Moderate | Minimal |
| Performance overhead | <0.015% | ~2-5% | ~1-2% | Minimal |
## Installation Options
| Command | Features | Use Case |
|---------|----------|----------|
| `pip install chuk_llm` | Core + Session tracking | Development |
| `pip install chuk_llm[redis]` | + Redis persistence | Production |
| `pip install chuk_llm[cli]` | + Rich CLI formatting | CLI tools |
| `pip install chuk_llm[all]` | Everything | Full features |
## License
Apache 2.0 License - see [LICENSE](LICENSE) file for details.
## Support
- 🐛 [Issues](https://github.com/chrishayuk/chuk-llm/issues)
- 💬 [Discussions](https://github.com/chrishayuk/chuk-llm/discussions)
---
**Built with ❤️ for developers who just want their LLMs to work.**
| text/markdown | null | null | null | null | Apache-2.0 | llm, ai, openai, anthropic, claude, gpt, gemini, ollama, streaming, async, machine-learning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software... | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.12.15",
"anthropic>=0.62.0",
"asyncio>=4.0.0",
"chuk-ai-session-manager>=0.8.2",
"google-genai>=1.29.0",
"groq>=0.25.0",
"httpx>=0.28.1",
"ibm-watsonx-ai>=1.3.30",
"jinja2>=3.1.6",
"mistralai>=1.9.3",
"ollama>=0.5.3",
"openai>=1.79.0",
"python-dotenv>=1.1.0",
"pyyaml>=6.0.2",
... | [] | [] | [] | [
"Homepage, https://github.com/chrishayuk/chuk-llm",
"Documentation, https://github.com/chrishayuk/chuk-llm#readme",
"Repository, https://github.com/chrishayuk/chuk-llm.git",
"Issues, https://github.com/chrishayuk/chuk-llm/issues",
"Changelog, https://github.com/chrishayuk/chuk-llm/releases"
] | twine/6.1.0 CPython/3.11.11 | 2026-02-18T16:56:58.524973 | chuk_llm-0.18.tar.gz | 304,430 | ba/f7/0689d2c59684a9d2ff97c270cdd2cbedff17678eb02a664942753fc289e9/chuk_llm-0.18.tar.gz | source | sdist | null | false | 9a86d2469817f125fec13128ef9f752f | 2f7d3ff5a2d81896fc5685f37ea930df1ce2a20c08ad4c9c56fc86fb9ce28282 | baf70689d2c59684a9d2ff97c270cdd2cbedff17678eb02a664942753fc289e9 | null | [
"LICENSE"
] | 2,263 |
2.3 | gitpod-sdk | 0.10.0 | The official Python library for the gitpod API | # Gitpod Python API library
[PyPI](https://pypi.org/project/gitpod-sdk/)
The Gitpod Python library provides convenient access to the Gitpod REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.gitpod.io](https://docs.gitpod.io). The full API of this library can be found in [api.md](https://github.com/gitpod-io/gitpod-sdk-python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install gitpod-sdk
```
## Usage
The full API of this library can be found in [api.md](https://github.com/gitpod-io/gitpod-sdk-python/tree/main/api.md).
```python
import os
from gitpod import Gitpod
client = Gitpod(
bearer_token=os.environ.get("GITPOD_API_KEY"), # This is the default and can be omitted
)
response = client.identity.get_authenticated_identity()
print(response.organization_id)
```
While you can provide a `bearer_token` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `GITPOD_API_KEY="My Bearer Token"` to your `.env` file
so that your Bearer Token is not stored in source control.
## Async usage
Simply import `AsyncGitpod` instead of `Gitpod` and use `await` with each API call:
```python
import os
import asyncio
from gitpod import AsyncGitpod
client = AsyncGitpod(
bearer_token=os.environ.get("GITPOD_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
    response = await client.identity.get_authenticated_identity()
    print(response.organization_id)

asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install gitpod-sdk[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from gitpod import DefaultAioHttpClient
from gitpod import AsyncGitpod
async def main() -> None:
    async with AsyncGitpod(
        bearer_token=os.environ.get("GITPOD_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        response = await client.identity.get_authenticated_identity()
        print(response.organization_id)

asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination
List methods in the Gitpod API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from gitpod import Gitpod
client = Gitpod()
all_environments = []
# Automatically fetches more pages as needed.
for environment in client.environments.list():
    # Do something with environment here
    all_environments.append(environment)
print(all_environments)
```
Or, asynchronously:
```python
import asyncio
from gitpod import AsyncGitpod
client = AsyncGitpod()
async def main() -> None:
    all_environments = []
    # Iterate through items across all pages, issuing requests as needed.
    async for environment in client.environments.list():
        all_environments.append(environment)
    print(all_environments)

asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.environments.list()
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.environments)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.environments.list()
print(f"next page cursor: {first_page.pagination.next_token}") # => "next page cursor: ..."
for environment in first_page.environments:
    print(environment.id)
# Remove `await` for non-async usage.
```
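The cursor mechanics behind `pagination.next_token` generalize to a simple loop: each response carries a token which, while non-empty, seeds the next request. Sketched here against a stub paged API rather than the real Gitpod client:

```python
# Stub paged API: returns (items, next_token); an empty token means last page.
PAGES = {"": (["env-1", "env-2"], "t1"), "t1": (["env-3"], "")}

def list_page(token: str = ""):
    return PAGES[token]

def list_all() -> list[str]:
    """Manually walk cursor pagination, as get_next_page() does for you."""
    items: list[str] = []
    token = ""
    while True:
        page_items, token = list_page(token)
        items.extend(page_items)
        if not token:  # no next_token -> this was the last page
            return items

print(list_all())  # ['env-1', 'env-2', 'env-3']
```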
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from gitpod import Gitpod
client = Gitpod()
page = client.accounts.list_joinable_organizations(
pagination={},
)
print(page.joinable_organizations)
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gitpod.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `gitpod.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `gitpod.APIError`.
```python
import gitpod
from gitpod import Gitpod
client = Gitpod()
try:
    client.identity.get_authenticated_identity()
except gitpod.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except gitpod.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except gitpod.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from gitpod import Gitpod
# Configure the default for all requests:
client = Gitpod(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).identity.get_authenticated_identity()
```
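The default policy (2 retries, short exponential backoff, only on retryable errors) behaves roughly like this generic sketch; the delay values and retryable-status set are illustrative, not the library's exact internals:

```python
import time

# Statuses the docs describe as retryable: timeout, conflict, rate limit, 5xx.
RETRYABLE = {408, 409, 429} | set(range(500, 600))

def request_with_retries(send, max_retries: int = 2, base_delay: float = 0.5):
    """Call send() until success, a non-retryable error, or retries run out."""
    for attempt in range(max_retries + 1):
        status = send()
        if status < 400 or status not in RETRYABLE:
            return status  # success, or a non-retryable error like 404
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, ...
    return status

# Demo: fail twice with 429, then succeed.
responses = iter([429, 429, 200])
print(request_with_retries(lambda: next(responses), base_delay=0.0))  # 200
```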
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from gitpod import Gitpod
# Configure the default for all requests:
client = Gitpod(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = Gitpod(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).identity.get_authenticated_identity()
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/gitpod-io/gitpod-sdk-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `GITPOD_LOG` to `info`.
```shell
$ export GITPOD_LOG=info
```
Or to `debug` for more verbose logging.
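Since the standard `logging` module is used, you can also raise the SDK's verbosity programmatically; the logger name `"gitpod"` below is an assumption based on the package name:

```python
import logging

# Equivalent in spirit to GITPOD_LOG=debug, configured in code.
logging.basicConfig(level=logging.WARNING)  # your app's default level
logging.getLogger("gitpod").setLevel(logging.DEBUG)  # verbose SDK logs only

print(logging.getLogger("gitpod").isEnabledFor(logging.DEBUG))  # True
```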
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from gitpod import Gitpod
client = Gitpod()
response = client.identity.with_raw_response.get_authenticated_identity()
print(response.headers.get('X-My-Header'))
identity = response.parse() # get the object that `identity.get_authenticated_identity()` would have returned
print(identity.organization_id)
```
These methods return an [`APIResponse`](https://github.com/gitpod-io/gitpod-sdk-python/tree/main/src/gitpod/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/gitpod-io/gitpod-sdk-python/tree/main/src/gitpod/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.identity.with_streaming_response.get_authenticated_identity() as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from gitpod import Gitpod, DefaultHttpxClient
client = Gitpod(
# Or use the `GITPOD_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from gitpod import Gitpod
with Gitpod() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/gitpod-io/gitpod-sdk-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import gitpod
print(gitpod.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/gitpod-io/gitpod-sdk-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Gitpod <dev-feedback@gitpod.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/gitpod-io/gitpod-sdk-python",
"Repository, https://github.com/gitpod-io/gitpod-sdk-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:56:50.776264 | gitpod_sdk-0.10.0.tar.gz | 333,282 | 2b/d0/1f5f038a4f2bfcad5dd9a6a969344ec6002b292eb6dc429045f1db82ec68/gitpod_sdk-0.10.0.tar.gz | source | sdist | null | false | da77b644a7831f9ea58a8d0b2a101ae9 | 06ca01df3ebefce1cfb69893c6e9a3faa1d54d6ce41e5ea3962507f68860f03a | 2bd01f5f038a4f2bfcad5dd9a6a969344ec6002b292eb6dc429045f1db82ec68 | null | [] | 352 |
2.4 | turkicnlp | 0.1.3 | NLP toolkit for 20+ Turkic languages | <p align="center">
<img src="https://sherzod-hakimov.github.io/images/cover.png" alt="TurkicNLP — Six Branches of Turkic Language Family" width="200">
</p>
<h1 align="center">TurkicNLP</h1>
<p align="center">
<strong>NLP toolkit for 20+ Turkic languages</strong> — a pip-installable Python library inspired by <a href="https://stanfordnlp.github.io/stanza/">Stanza</a>, with adaptations for the low-resource, morphologically rich Turkic language family.
</p>
<p align="center">
Maintained by <a href="https://sherzod-hakimov.github.io/">Sherzod Hakimov</a>
</p>
<p align="center">
<a href="LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License: Apache-2.0"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="Python 3.9+"></a>
<img src="https://img.shields.io/badge/status-pre--alpha-orange.svg" alt="Status: Pre-Alpha">
<img src="https://img.shields.io/badge/languages-24_Turkic-green.svg" alt="24 Turkic Languages">
</p>
## Citation
If you use TurkicNLP in your research, please cite:
```bibtex
@software{turkicnlp,
title = {TurkicNLP: NLP Toolkit for Turkic Languages},
author = {Sherzod Hakimov},
year = {2026},
url = {https://github.com/turkic-nlp/turkicnlp},
license = {Apache-2.0},
}
```
## Features
- **24 Turkic languages** from Turkish to Sakha, Kazakh to Uyghur
- **Script-aware from the ground up** — Latin, Cyrillic, Perso-Arabic, Old Turkic Runic
- **Automatic script detection** and bidirectional transliteration
- **[Apertium FST morphology](https://wiki.apertium.org/wiki/Turkic_languages)** for ~20 Turkic languages via Python-native `hfst` bindings (no system install)
- **Stanza/UD integration** — pretrained tokenization, POS tagging, lemmatization, dependency parsing, and NER via [Stanza](https://stanfordnlp.github.io/stanza/) models trained on [Universal Dependencies](https://universaldependencies.org/) treebanks
- **NLLB embeddings + translation backend** — sentence/document vectors and MT via [NLLB-200](https://huggingface.co/facebook/nllb-200-distilled-600M)
- **Multiple backends** — choose between rule-based, Apertium FST, or Stanza neural backends per processor
- **License isolation** — library is Apache-2.0; Apertium GPL-3.0 data downloaded separately
- **Stanza-compatible API** — `Pipeline`, `Document`, `Sentence`, `Word`
## Installation
```bash
pip install turkicnlp
```
To install with all optional dependencies at once:
```bash
pip install "turkicnlp[all]"
```
With optional dependencies:
```bash
pip install "turkicnlp[stanza]" # Stanza/UD neural models
pip install "turkicnlp[nllb]" # NLLB embeddings and translation backend (transformers, tokenizer libraries)
pip install "turkicnlp[all]" # Everything: stanza, NLLB embeddings & translations
pip install "turkicnlp[dev]" # Development tools
```
## Quick Start
```python
import turkicnlp
# Download models for a language
turkicnlp.download("kaz")
# Build a pipeline
nlp = turkicnlp.Pipeline("kaz", processors=["tokenize", "pos", "lemma", "depparse"])
# Process text
doc = nlp("Мен мектепке бардым")
# Access annotations
for sentence in doc.sentences:
for word in sentence.words:
print(f"{word.text}\t{word.lemma}\t{word.upos}\t{word.feats}")
# Export to CoNLL-U
print(doc.to_conllu())
```
### Embeddings (NLLB)
```python
import math
import turkicnlp
turkicnlp.download("tur", processors=["embeddings"])
nlp = turkicnlp.Pipeline("tur", processors=["embeddings"])
doc1 = nlp("Bugün hava çok güzel ve parkta yürüyüş yaptım.")
doc2 = nlp("Parkta yürüyüş yapmak bugün çok keyifliydi çünkü hava güzeldi.")
def cosine_similarity(a, b):
dot = sum(x * y for x, y in zip(a, b))
norm_a = math.sqrt(sum(x * x for x in a))
norm_b = math.sqrt(sum(y * y for y in b))
return dot / (norm_a * norm_b)
print(len(doc1.embedding), len(doc2.embedding))
print(f"cosine = {cosine_similarity(doc1.embedding, doc2.embedding):.4f}")
print(doc1._processor_log) # ['embeddings:nllb']
```
### Machine Translation (NLLB)
```python
import turkicnlp
# Downloads once into ~/.turkicnlp/models/huggingface/facebook--nllb-200-distilled-600M
turkicnlp.download("tur", processors=["translate"])
nlp = turkicnlp.Pipeline(
"tur",
processors=["translate"],
translate_tgt_lang="eng",
)
doc = nlp("Bugün hava çok güzel ve parkta yürüyüş yaptım.")
print(doc.translation)
print(doc._processor_log) # ['translate:nllb']
```
`translate_tgt_lang` accepts either ISO-639-3 (`"eng"`, `"tuk"`, `"kaz"`) or explicit [Flores-200 codes](https://github.com/facebookresearch/flores/tree/main/flores200#languages-in-flores-200) (`"eng_Latn"`, `"kaz_Cyrl"`).
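The code resolution described above can be sketched as follows. This is a minimal illustration under assumptions: the mapping table and `resolve_flores` function are hypothetical, not turkicnlp's actual internals.

```python
# Sketch of ISO-639-3 -> Flores-200 code resolution. The mapping subset and
# function name are illustrative, not turkicnlp's real implementation.
FLORES_MAP = {
    "eng": "eng_Latn",
    "tur": "tur_Latn",
    "tuk": "tuk_Latn",
    "kaz": "kaz_Cyrl",
    "azb": "azb_Arab",
}

def resolve_flores(code: str) -> str:
    """Accept either an ISO-639-3 code or an explicit Flores-200 code."""
    if "_" in code:  # already an explicit Flores code like "kaz_Cyrl"
        return code
    try:
        return FLORES_MAP[code]
    except KeyError:
        raise ValueError(f"no Flores-200 mapping for {code!r}")

print(resolve_flores("eng"))       # → eng_Latn
print(resolve_flores("kaz_Cyrl"))  # → kaz_Cyrl
```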
### Using the Stanza Backend
```python
from turkicnlp.processors.stanza_backend import (
StanzaTokenizer, StanzaPOSTagger, StanzaLemmatizer, StanzaDepParser
)
from turkicnlp.models.document import Document
# Models are downloaded automatically on first use
doc = Document(text="Merhaba dünya.", lang="tur")
for Proc in [StanzaTokenizer, StanzaPOSTagger, StanzaLemmatizer, StanzaDepParser]:
proc = Proc(lang="tur")
proc.load()
doc = proc.process(doc)
for word in doc.words:
print(f"{word.text:12} {word.upos:6} {word.lemma:12} head={word.head} {word.deprel}")
# Export to CoNLL-U
print(doc.to_conllu())
```
### Mixed Backends
```python
from turkicnlp.processors.tokenizer import RegexTokenizer
from turkicnlp.processors.stanza_backend import StanzaPOSTagger, StanzaDepParser
from turkicnlp.models.document import Document
doc = Document(text="Мен мектепке бардым.", lang="kaz")
# Rule-based tokenizer + Stanza POS/parsing (pretokenized mode)
tokenizer = RegexTokenizer(lang="kaz")
tokenizer.load()
doc = tokenizer.process(doc)
pos = StanzaPOSTagger(lang="kaz")
pos.load()
doc = pos.process(doc)
parser = StanzaDepParser(lang="kaz")
parser.load()
doc = parser.process(doc)
```
### Multi-Script Support
```python
# Kazakh — auto-detects Cyrillic vs Latin
doc = nlp("Мен мектепке бардым") # Cyrillic
doc = nlp("Men mektepke bardym") # Latin
# Explicit script selection
nlp_cyrl = turkicnlp.Pipeline("kaz", script="Cyrl")
nlp_latn = turkicnlp.Pipeline("kaz", script="Latn")
# Transliteration bridge — run Cyrillic model on Latin input
nlp = turkicnlp.Pipeline("kaz", script="Latn", transliterate_to="Cyrl")
```
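Automatic script detection of this kind can be approximated by voting over Unicode code-point ranges. The sketch below is a simplified stand-in; turkicnlp's actual detector may use different heuristics.

```python
# Simplified script detection by Unicode-range voting; illustrative only,
# not turkicnlp's real detector.
RANGES = {
    "Cyrl": (0x0400, 0x04FF),  # Cyrillic block
    "Latn": (0x0041, 0x024F),  # Basic Latin letters + Latin Extended
    "Arab": (0x0600, 0x06FF),  # Arabic block
}

def detect_script(text: str) -> str:
    counts = {name: 0 for name in RANGES}
    for ch in text:
        cp = ord(ch)
        for name, (lo, hi) in RANGES.items():
            if lo <= cp <= hi:
                counts[name] += 1
    return max(counts, key=counts.get)

print(detect_script("Мен мектепке бардым"))  # → Cyrl
print(detect_script("Men mektepke bardym"))  # → Latn
```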
### Uyghur (Perso-Arabic)
```python
nlp_ug = turkicnlp.Pipeline("uig", script="Arab")
doc = nlp_ug("مەن مەكتەپكە باردىم")
```
### Transliteration
The `Transliterator` class converts text between scripts for any supported language pair:
```python
from turkicnlp.scripts import Script
from turkicnlp.scripts.transliterator import Transliterator
# Kazakh Cyrillic → Latin (2021 official alphabet)
t = Transliterator("kaz", Script.CYRILLIC, Script.LATIN)
print(t.transliterate("Қазақстан Республикасы"))
# → Qazaqstan Respublıkasy
# Uzbek Latin → Cyrillic
t = Transliterator("uzb", Script.LATIN, Script.CYRILLIC)
print(t.transliterate("O'zbekiston Respublikasi"))
# → Ўзбекистон Республикаси
# Uyghur Perso-Arabic → Latin (ULY)
t = Transliterator("uig", Script.PERSO_ARABIC, Script.LATIN)
print(t.transliterate("مەكتەپ"))
# → mektep
# Azerbaijani Latin → Cyrillic
t = Transliterator("aze", Script.LATIN, Script.CYRILLIC)
print(t.transliterate("Azərbaycan"))
# → Азәрбайҹан
# Turkmen Latin → Cyrillic
t = Transliterator("tuk", Script.LATIN, Script.CYRILLIC)
print(t.transliterate("Türkmenistan"))
# → Түркменистан
# Tatar Cyrillic → Latin (Zamanälif)
t = Transliterator("tat", Script.CYRILLIC, Script.LATIN)
print(t.transliterate("Татарстан Республикасы"))
# → Tatarstan Respublikası
```
#### Old Turkic Runic Script
TurkicNLP supports transliteration of [Old Turkic runic inscriptions](https://en.wikipedia.org/wiki/Old_Turkic_script) (Orkhon-Yenisei script, Unicode block U+10C00–U+10C4F) to Latin:
```python
from turkicnlp.scripts import Script
from turkicnlp.scripts.transliterator import Transliterator
t = Transliterator("otk", Script.OLD_TURKIC_RUNIC, Script.LATIN)
# Individual runic characters
print(t.transliterate("\U00010C34\U00010C07\U00010C2F\U00010C19"))
# → törk (Türk)
# The transliterator maps each runic character to its standard
# Turkological Latin equivalent, handling both Orkhon and Yenisei
# variant forms (e.g., separate glyphs for consonants with
# back vs. front vowel contexts).
```
## Supported Languages and Components
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/f/fa/Turkic_Languages_distribution_map.png" alt="Distribution map of Turkic languages" width="700">
<br>
<em>Geographic distribution of Turkic languages (source: <a href="https://commons.wikimedia.org/wiki/File:Turkic_Languages_distribution_map.png">Wikimedia Commons</a>)</em>
</p>
The table below shows all supported languages with their available scripts and processor status.
**Backend legend:**
- **rule** — Rule-based (regex tokenizer, abbreviation lists)
- **Apertium** — Finite-state transducers via [Apertium](https://apertium.org/) + `hfst` ([GPL-3.0](https://www.gnu.org/licenses/gpl-3.0.html), downloaded separately)
- **Stanza/UD** — Neural models from [Stanza](https://stanfordnlp.github.io/stanza/) trained on [Universal Dependencies](https://universaldependencies.org/) treebanks ([Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0))
- **NLLB** — Shared [NLLB-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) backend for embeddings and machine translation
**Status legend:**
- ✅ Available
- 🔧 Planned
- — Not available (yet)
### Oghuz Branch
| Language | Code | Script(s) | Tokenize | Morph (FST) | POS | Lemma | DepParse | NER | Embeddings | Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| [Turkish](https://en.wikipedia.org/wiki/Turkish_language) | `tur` | Latn | ✅ rule, ✅ Stanza/UD | ✅ Apertium | ✅ Stanza/UD | ✅ Stanza/UD | ✅ Stanza/UD | ✅ Stanza | ✅ NLLB | ✅ NLLB |
| [Azerbaijani](https://en.wikipedia.org/wiki/Azerbaijani_language) | `aze` | Latn, Cyrl | ✅ rule | ✅ Apertium | 🔧 | 🔧 | 🔧 | — | ✅ NLLB | ✅ NLLB |
| [Iranian Azerbaijani](https://en.wikipedia.org/wiki/South_Azerbaijani_language) | `azb` | Arab | 🔧 rule_arabic | — | — | — | — | — | ✅ NLLB | ✅ NLLB |
| [Turkmen](https://en.wikipedia.org/wiki/Turkmen_language) | `tuk` | Latn, Cyrl | ✅ rule | ✅ Apertium (beta) | 🔧 | 🔧 | 🔧 | — | ✅ NLLB | ✅ NLLB |
| [Gagauz](https://en.wikipedia.org/wiki/Gagauz_language) | `gag` | Latn | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
### Kipchak Branch
| Language | Code | Script(s) | Tokenize | Morph (FST) | POS | Lemma | DepParse | NER | Embeddings | Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| [Kazakh](https://en.wikipedia.org/wiki/Kazakh_language) | `kaz` | Cyrl, Latn | ✅ rule, ✅ Stanza/UD | ✅ Apertium | ✅ Stanza/UD | ✅ Stanza/UD | ✅ Stanza/UD | ✅ Stanza | ✅ NLLB | ✅ NLLB |
| [Kyrgyz](https://en.wikipedia.org/wiki/Kyrgyz_language) | `kir` | Cyrl | ✅ rule | ✅ Apertium | ✅ Stanza/UD | ✅ Stanza/UD | ✅ Stanza/UD | — | ✅ NLLB | ✅ NLLB |
| [Tatar](https://en.wikipedia.org/wiki/Tatar_language) | `tat` | Cyrl, Latn | ✅ rule | ✅ Apertium | 🔧 | 🔧 | 🔧 | — | ✅ NLLB | ✅ NLLB |
| [Bashkir](https://en.wikipedia.org/wiki/Bashkir_language) | `bak` | Cyrl | ✅ rule | ✅ Apertium (beta) | — | — | — | — | ✅ NLLB | ✅ NLLB |
| [Crimean Tatar](https://en.wikipedia.org/wiki/Crimean_Tatar_language) | `crh` | Latn, Cyrl | ✅ rule | ✅ Apertium (beta) | — | — | — | — | ✅ NLLB | ✅ NLLB |
| [Karakalpak](https://en.wikipedia.org/wiki/Karakalpak_language) | `kaa` | Latn, Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
| [Nogai](https://en.wikipedia.org/wiki/Nogai_language) | `nog` | Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
| [Kumyk](https://en.wikipedia.org/wiki/Kumyk_language) | `kum` | Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
| [Karachay-Balkar](https://en.wikipedia.org/wiki/Karachay-Balkar_language) | `krc` | Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
### Karluk Branch
| Language | Code | Script(s) | Tokenize | Morph (FST) | POS | Lemma | DepParse | NER | Embeddings | Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| [Uzbek](https://en.wikipedia.org/wiki/Uzbek_language) | `uzb` | Latn, Cyrl | ✅ rule | ✅ Apertium | 🔧 | 🔧 | 🔧 | — | ✅ NLLB | ✅ NLLB |
| [Uyghur](https://en.wikipedia.org/wiki/Uyghur_language) | `uig` | Arab, Latn | 🔧 rule_arabic, ✅ Stanza/UD | ✅ Apertium (beta) | ✅ Stanza/UD | ✅ Stanza/UD | ✅ Stanza/UD | — | ✅ NLLB | ✅ NLLB |
### Siberian Branch
| Language | Code | Script(s) | Tokenize | Morph (FST) | POS | Lemma | DepParse | NER | Embeddings | Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| [Sakha (Yakut)](https://en.wikipedia.org/wiki/Sakha_language) | `sah` | Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
| [Altai](https://en.wikipedia.org/wiki/Altai_language) | `alt` | Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
| [Tuvan](https://en.wikipedia.org/wiki/Tuvan_language) | `tyv` | Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
| [Khakas](https://en.wikipedia.org/wiki/Khakas_language) | `kjh` | Cyrl | ✅ rule | ✅ Apertium (proto) | — | — | — | — | — | — |
### Oghur Branch
| Language | Code | Script(s) | Tokenize | Morph (FST) | POS | Lemma | DepParse | NER | Embeddings | Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| [Chuvash](https://en.wikipedia.org/wiki/Chuvash_language) | `chv` | Cyrl | ✅ rule | ✅ Apertium (beta) | — | — | — | — | — | — |
### Arghu Branch
| Language | Code | Script(s) | Tokenize | Morph (FST) | POS | Lemma | DepParse | NER | Embeddings | Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| [Khalaj](https://en.wikipedia.org/wiki/Khalaj_language) | `klj` | Arab | — | — | — | — | — | — | — | — |
### Historical Languages
| Language | Code | Script(s) | Tokenize | Morph (FST) | POS | Lemma | DepParse | NER | Embeddings | Translation |
|---|---|---|---|---|---|---|---|---|---|---|
| [Ottoman Turkish](https://en.wikipedia.org/wiki/Ottoman_Turkish_language) | `ota` | Arab, Latn | — | — | — | — | — | — | — | — |
| [Old Turkic](https://en.wikipedia.org/wiki/Old_Turkic_language) | `otk` | Orkh, Latn | 🔧 rule | — | — | — | — | — | — | — |
### Stanza/UD Model Details
The Stanza backend provides neural models trained on [Universal Dependencies](https://universaldependencies.org/) treebanks. The table below lists the UD treebanks and NER datasets powering each language, along with the available Stanza processors.
| Language | Stanza Code | UD Treebank(s) | Stanza Processors | NER Dataset |
|---|---|---|---|---|
| Turkish | `tr` | IMST (default), BOUN, FrameNet, KeNet, ATIS, Penn, Tourism | tokenize, mwt, pos, lemma, depparse, ner | Starlang NER |
| Kazakh | `kk` | KTB | tokenize, mwt, pos, lemma, depparse, ner | KazNERD |
| Uyghur | `ug` | UDT | tokenize, pos, lemma, depparse | — |
| Kyrgyz | `ky` | KTMU | tokenize, pos, lemma, depparse | — |
| Ottoman Turkish | `ota` | BOUN | tokenize, mwt, pos, lemma, depparse | — |
### Transliteration Support
Bidirectional script conversion is available for all multi-script languages. The transliterator uses a greedy longest-match algorithm with per-language mapping tables.
| Language | Direction | Scripts | Standard |
|---|---|---|---|
| Kazakh | ↔ Bidirectional | Cyrillic ↔ Latin | 2021 official Latin alphabet |
| Uzbek | ↔ Bidirectional | Cyrillic ↔ Latin | 1995 official Latin alphabet |
| Azerbaijani | ↔ Bidirectional | Cyrillic ↔ Latin | 1991 official Latin alphabet |
| Tatar | ↔ Bidirectional | Cyrillic ↔ Latin | Zamanälif |
| Turkmen | ↔ Bidirectional | Cyrillic ↔ Latin | 1993 official Latin alphabet |
| Karakalpak | ↔ Bidirectional | Cyrillic ↔ Latin | 2016 Latin alphabet |
| Crimean Tatar | ↔ Bidirectional | Cyrillic ↔ Latin | Standard Crimean Tatar Latin |
| Uyghur | ↔ Bidirectional | Perso-Arabic ↔ Latin | Uyghur Latin Yéziqi (ULY) |
| Ottoman Turkish | → One-way | Latin → Perso-Arabic | Academic transcription |
| Old Turkic | → One-way | Runic → Latin | Turkological convention |
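The greedy longest-match approach described above can be sketched as follows. The mapping is a tiny illustrative subset for Kazakh Cyrillic → Latin, not the library's shipped table.

```python
# Greedy longest-match transliteration sketch. The mapping is an illustrative
# subset, not turkicnlp's shipped Kazakh table.
MAPPING = {
    "Қ": "Q", "қ": "q", "а": "a", "з": "z", "с": "s", "т": "t", "н": "n",
    "ш": "sh",  # multi-character output; longest-match matters for digraph keys
}

def transliterate(text: str, mapping: dict) -> str:
    # Try the longest source keys first so multi-character units win.
    keys = sorted(mapping, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for key in keys:
            if text.startswith(key, i):
                out.append(mapping[key])
                i += len(key)
                break
        else:  # no rule matched: copy the character through unchanged
            out.append(text[i])
            i += 1
    return "".join(out)

print(transliterate("Қазақстан", MAPPING))  # → Qazaqstan
```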
### Apertium FST Quality Levels
| Level | Description | Languages |
|---|---|---|
| **Production** | >90% coverage on news text | Turkish, Kazakh, Tatar |
| **Stable** | Good coverage, actively maintained | Azerbaijani, Kyrgyz, Uzbek |
| **Beta** | Reasonable coverage, some gaps | Turkmen, Bashkir, Uyghur, Crimean Tatar, Chuvash |
| **Prototype** | Limited coverage, experimental | Gagauz, Sakha, Karakalpak, Nogai, Kumyk, Karachay-Balkar, Altai, Tuvan, Khakas |
### Model Catalog and Apertium Downloads
TurkicNLP uses a model catalog to define download sources per language/script/processor. The catalog lives in:
- `turkicnlp/resources/catalog.json` (packaged default)
- Remote override: `ModelRegistry.CATALOG_URL` (or `TURKICNLP_CATALOG_URL`)
For each language, the catalog stores the Apertium source repo and the expected FST script. When `turkicnlp.download()` is called, it reads the catalog and downloads precompiled `.hfst` binaries from the `url` fields. If a language has no URL configured, download will fail with a clear error until the catalog is populated with hosted binaries (for example, a `turkic-nlp/apertium-data` releases repository).
#### Download folder
All models and resources are downloaded into this folder: `~/.turkicnlp`.
## Architecture
TurkicNLP follows Stanza's modular pipeline design:
```
Pipeline("tur", processors=["tokenize", "morph", "pos", "depparse"])
│
▼
Document ─── text: "Ben okula vardım"
│
├── script_detect → script = "Latn"
├── tokenize → sentences, tokens, words
├── morph (Apertium) → lemma, pos, feats (via HFST)
├── pos (neural) → refined UPOS, XPOS, feats
└── depparse → head, deprel
│
▼
Document ─── annotated with all layers
```
```
Pipeline("azb", processors=["embeddings", "translate"], translate_tgt_lang="eng")
│
▼
Document ─── text: "من کتاب اوخویورام"
│
├── script_detect → script = "Arab"
├── embeddings (NLLB) → sentence/document vectors
└── translate (NLLB) → sentence/document translation
(src resolved from FLORES map: azb -> azb_Arab,
tgt resolved from ISO-3: eng -> eng_Latn)
│
▼
Document ─── annotated with all layers
```
### Key Abstractions
- **Document** → Sentence → Token → Word hierarchy (maps to CoNLL-U)
- **Processor** ABC with `PROVIDES`, `REQUIRES`, `NAME` class attributes
- **Pipeline** orchestrator with dependency resolution and script-aware model loading
- **ProcessorRegistry** for pluggable backends (rule, Apertium, Stanza, NLLB)
- **ModelRegistry** with remote catalog and local caching at `~/.turkicnlp/models/`
- **NLLB FLORES language map** for ISO-3 to NLLB code resolution in translation (e.g. `tuk` -> `tuk_Latn`)
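The `PROVIDES`/`REQUIRES` dependency resolution above can be sketched as a simple ordering pass. The processor declarations here are hypothetical illustrations, not turkicnlp's real classes, and the resolver is a minimal stand-in for the Pipeline's actual logic.

```python
# Minimal sketch of ordering processors by PROVIDES/REQUIRES; declarations
# are hypothetical, not turkicnlp's real processor classes.
PROCESSORS = {
    "tokenize": {"requires": set(),              "provides": {"tokens"}},
    "pos":      {"requires": {"tokens"},         "provides": {"upos"}},
    "lemma":    {"requires": {"tokens"},         "provides": {"lemma"}},
    "depparse": {"requires": {"tokens", "upos"}, "provides": {"deprel"}},
}

def resolve_order(requested):
    """Return the requested processors in a runnable order."""
    ordered, satisfied, pending = [], set(), list(requested)
    while pending:
        for name in pending:
            if PROCESSORS[name]["requires"] <= satisfied:
                ordered.append(name)
                satisfied |= PROCESSORS[name]["provides"]
                pending.remove(name)
                break
        else:
            raise ValueError(f"unsatisfiable requirements: {pending}")
    return ordered

print(resolve_order(["depparse", "pos", "tokenize"]))
# → ['tokenize', 'pos', 'depparse']
```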
### Model Storage Layout
```
~/.turkicnlp/models/
├── kaz/
│ ├── Cyrl/
│ │ ├── tokenize/rule/
│ │ ├── morph/apertium/ ← GPL-3.0 (downloaded separately)
│ │ │ ├── kaz.automorf.hfst
│ │ │ ├── LICENSE
│ │ │ └── metadata.json
│ │ ├── pos/neural/
│ │ └── depparse/neural/
│ └── Latn/
│ └── tokenize/rule/
├── tur/
│ └── Latn/
│ └── ...
├── huggingface/
│ └── facebook--nllb-200-distilled-600M/
│ ├── config.json
│ ├── model.safetensors (or pytorch_model.bin)
│ ├── tokenizer.json
│ └── ...
└── catalog.json
# Stanza models are managed by Stanza at ~/stanza_resources/
```
Notes:
- NLLB embeddings and translation use a shared Hugging Face model under `~/.turkicnlp/models/huggingface/`.
- The NLLB model is downloaded once and reused across supported Turkic languages.
- Unlike Apertium/Stanza components, NLLB artifacts are not duplicated per language/script directory.
## License
- **Library code**: [Apache License 2.0](LICENSE)
- **Stanza models**: [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) — managed by Stanza's own download mechanism
- **Apertium FST data**: [GPL-3.0](https://www.gnu.org/licenses/gpl-3.0.html) — downloaded separately at runtime, never bundled in the pip package
- **NLLB-200 model weights/tokenizer**: [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) — downloaded from Hugging Face at runtime and reused from `~/.turkicnlp/models/huggingface/` (non-commercial license terms apply)
## Development
```bash
git clone https://github.com/turkic-nlp/turkicnlp.git
cd turkicnlp
pip install -e ".[dev]"
pytest
```
## Contributing
Contributions are welcome, especially:
- **New language support** — tag mappings, abbreviation lists, test data
- **Neural model training** — POS taggers, parsers, NER models
- **Apertium FST improvements** — better coverage for prototype-level languages
- **Anything else** — documentation, examples, bug reports, and other improvements
Feel free to open issues and pull requests.
## Acknowledgements
TurkicNLP builds on the work of many researchers and communities. We gratefully acknowledge the following:
### Stanza
[Stanza](https://stanfordnlp.github.io/stanza/) provides the pretrained neural models for tokenization, POS tagging, lemmatization, dependency parsing, and NER.
> Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. *Stanza: A Python Natural Language Processing Toolkit for Many Human Languages*. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. [[paper]](https://aclanthology.org/2020.acl-demos.14/)
### Universal Dependencies Treebanks
The Stanza models are trained on [Universal Dependencies](https://universaldependencies.org/) treebanks created by the following teams:
**Turkish (UD_Turkish-IMST)**
> Umut Sulubacak, Memduh Gokirmak, Francis Tyers, Cagri Coltekin, Joakim Nivre, and Gulsen Cebiroglu Eryigit. *Universal Dependencies for Turkish*. COLING 2016. [[paper]](https://aclanthology.org/C16-1325/)
**Turkish (UD_Turkish-BOUN)**
> Utku Turk, Furkan Atmaca, Saziye Betul Ozates, Gozde Berk, Seyyit Talha Bedir, Abdullatif Koksal, Balkiz Ozturk Basaran, Tunga Gungor, and Arzucan Ozgur. *Resources for Turkish Dependency Parsing: Introducing the BOUN Treebank and the BoAT Annotation Tool*. Language Resources and Evaluation 56(1), 2022. [[paper]](https://doi.org/10.1007/s10579-021-09558-0)
**Turkish (UD_Turkish-FrameNet, KeNet, ATIS, Penn, Tourism)**
> Busra Marsan, Neslihan Kara, Merve Ozcelik, Bilge Nas Arican, Neslihan Cesur, Asli Kuzgun, Ezgi Saniyar, Oguzhan Kuyrukcu, and Olcay Taner Yildiz. [Starlang Software](https://starlangyazilim.com/) and [Ozyegin University](https://www.ozyegin.edu.tr/). These treebanks cover diverse domains including FrameNet frames, WordNet examples, airline travel, Penn Treebank translations, and tourism reviews.
**Kazakh (UD_Kazakh-KTB)**
> Aibek Makazhanov, Jonathan North Washington, and Francis Tyers. *Towards a Free/Open-source Universal-dependency Treebank for Kazakh*. TurkLang 2015. [[paper]](https://universaldependencies.org/treebanks/kk_ktb/)
**Uyghur (UD_Uyghur-UDT)**
> Marhaba Eli (Xinjiang University), Daniel Zeman (Charles University), and Francis Tyers. [[treebank]](https://universaldependencies.org/treebanks/ug_udt/)
**Kyrgyz (UD_Kyrgyz-KTMU)**
> Ibrahim Benli. [[treebank]](https://universaldependencies.org/treebanks/ky_ktmu/)
**Ottoman Turkish (UD_Ottoman_Turkish-BOUN)**
> Saziye Betul Ozates, Tarik Emre Tiras, Efe Eren Genc, and Esma Fatima Bilgin Tasdemir. *Dependency Annotation of Ottoman Turkish with Multilingual BERT*. LAW-XVIII, 2024. [[paper]](https://aclanthology.org/2024.law-1.18)
### NER Datasets
**Turkish NER (Starlang)**
> B. Ertopcu, A. B. Kanburoglu, O. Topsakal, O. Acikgoz, A. T. Gurkan, B. Ozenc, I. Cam, B. Avar, G. Ercan, and O. T. Yildiz. *A New Approach for Named Entity Recognition*. UBMK 2017. [[paper]](https://doi.org/10.1109/UBMK.2017.8093439)
**Kazakh NER (KazNERD)**
> Rustem Yeshpanov, Yerbolat Khassanov, and Huseyin Atakan Varol (ISSAI, Nazarbayev University). *KazNERD: Kazakh Named Entity Recognition Dataset*. LREC 2022. [[paper]](https://aclanthology.org/2022.lrec-1.44)
### NLLB Embeddings & Machine Translation
The TurkicNLP embeddings backend uses encoder pooling over:
> [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)
Reference:
> NLLB Team, Marta R. Costa-jussà, et al. 2022. *No Language Left Behind: Scaling Human-Centered Machine Translation*. [[paper]](https://arxiv.org/abs/2207.04672)
### Other Organisations
- [Apertium](https://apertium.org/) — morphological transducers covering 20+ Turkic languages
- [SIGTURK](https://sigturk.com/) — ACL Special Interest Group on Turkic Languages
- [ISSAI](https://issai.nu.edu.kz/) — Institute of Smart Systems and Artificial Intelligence, Nazarbayev University, for Kazakh NLP resources
- [Universal Dependencies](https://universaldependencies.org/) — the framework and community behind Turkic treebanks
- [Turkic Interlingua](https://github.com/turkic-interlingua) — resources for machine translation for Turkic languages
| text/markdown | null | Sherzod Hakimov <sherzodhakimov@gmail.com> | null | null | null | nlp, toolkit, turkic, kazakh, turkish, turkmen, uzbek, azerbaijani, kyrgyz, tatar, uyghur, stanza, linguistics, morphology, pos-tagging, dependency-parsing, ner, machine-translation, embeddings | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: Text Processing :: Linguistic",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Pr... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.24",
"hfst>=3.16",
"stanza>=1.8; extra == \"stanza\"",
"torch>=2.0; extra == \"embeddings\"",
"transformers>=4.30; extra == \"embeddings\"",
"tokenizers>=0.13; extra == \"embeddings\"",
"torch>=2.0; extra == \"translation\"",
"transformers>=4.30; extra == \"translation\"",
"tokenizers>=0.1... | [] | [] | [] | [
"Homepage, https://github.com/turkic-nlp/turkicnlp",
"Documentation, https://turkic-nlp.github.io/toolkit",
"Repository, https://github.com/turkic-nlp/turkicnlp",
"Issues, https://github.com/turkic-nlp/turkicnlp/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T16:55:50.069529 | turkicnlp-0.1.3.tar.gz | 5,172,971 | 83/5a/f5d6593b67a4bf9b652f894ba01d2fb80e7c5c5f0f19edd3608a2ca273c7/turkicnlp-0.1.3.tar.gz | source | sdist | null | false | d966ed7e67540b7008bdac5029455fa3 | ed1e0973c107785ed3d741d6e458af3ea0e3a5ec3a3ca6511ce4505a86f1be1d | 835af5d6593b67a4bf9b652f894ba01d2fb80e7c5c5f0f19edd3608a2ca273c7 | Apache-2.0 | [
"LICENSE"
] | 247 |
2.3 | svc-infra | 1.15.0 | Infrastructure for building and deploying prod-ready services | # svc-infra
**Production-ready FastAPI infrastructure in one import.**
[PyPI](https://pypi.org/project/svc-infra/) · [CI](https://github.com/nfraxlab/svc-infra/actions/workflows/ci.yml) · [License](LICENSE)
## Overview
Stop rebuilding auth, billing, webhooks, and background jobs for every project.
### Key Features
- **Auth** - JWT, sessions, OAuth/OIDC, MFA, API keys
- **Billing** - Usage tracking, subscriptions, invoices, Stripe sync
- **Database** - PostgreSQL + MongoDB, migrations, inbox/outbox
- **Jobs** - Background tasks, scheduling, retries, DLQ
- **Webhooks** - Subscriptions, HMAC signing, delivery retries
- **Observability** - Prometheus, Grafana dashboards, OTEL
## Why svc-infra?
Every FastAPI project needs the same things: authentication, database setup, background jobs, caching, webhooks, billing... You've written this code before. Multiple times.
**svc-infra** packages battle-tested infrastructure used in production, so you can focus on your actual product:
```python
from svc_infra.api.fastapi.ease import easy_service_app
app = easy_service_app(name="MyAPI", release="1.0.0")
# Health checks, CORS, security headers, structured logging
# Prometheus metrics, OpenTelemetry tracing
# Request IDs, idempotency middleware
# That's it. Ship it.
```
## Quick Install
```bash
pip install svc-infra
```
## What's Included
| Feature | What You Get | One-liner |
|---------|-------------|-----------|
| **Auth** | JWT, sessions, OAuth/OIDC, MFA, API keys | `add_auth_users(app)` |
| **Billing** | Usage tracking, subscriptions, invoices, Stripe sync | `add_billing(app)` |
| **Database** | PostgreSQL + MongoDB, migrations, inbox/outbox | `add_sql_db(app)` |
| **Jobs** | Background tasks, scheduling, retries, DLQ | `easy_jobs()` |
| **Webhooks** | Subscriptions, HMAC signing, delivery retries | `add_webhooks(app)` |
| **Cache** | Redis/memory, decorators, namespacing | `init_cache()` |
| **Observability** | Prometheus, Grafana dashboards, OTEL | Built-in |
| **Storage** | S3, local, memory backends | `add_storage(app)` |
| **Multi-tenancy** | Tenant isolation, scoped queries | Built-in |
| **Rate Limiting** | Per-user, per-endpoint, headers | Built-in |
## 30-Second Example
Build a complete SaaS backend:
```python
from fastapi import Depends
from svc_infra.api.fastapi.ease import easy_service_app
from svc_infra.api.fastapi.db.sql.add import add_sql_db
from svc_infra.api.fastapi.auth import add_auth_users, current_active_user
from svc_infra.jobs.easy import easy_jobs
from svc_infra.webhooks.fastapi import require_signature
# Create app with batteries included
app = easy_service_app(name="MySaaS", release="1.0.0")
# Add infrastructure
add_sql_db(app) # PostgreSQL with migrations
add_auth_users(app) # Full auth system
queue, scheduler = easy_jobs() # Background jobs
# Your actual business logic
@app.post("/api/process")
async def process_data(user=Depends(current_active_user)):
job = queue.enqueue("heavy_task", {"user_id": user.id})
return {"job_id": job.id, "status": "queued"}
# Webhook endpoint with signature verification
@app.post("/webhooks/stripe")
async def stripe_webhook(payload=Depends(require_signature(lambda: ["whsec_..."]))):
    queue.enqueue("process_payment", payload)
    return {"received": True}
```
**That's a production-ready API** with auth, database, background jobs, and webhook handling.
## Feature Highlights
### Authentication & Security
Full auth system with zero boilerplate:
```python
from fastapi import Depends
from svc_infra.api.fastapi.auth import add_auth_users, current_active_user
add_auth_users(app) # Registers /auth/* routes automatically
@app.get("/me")
async def get_profile(user=Depends(current_active_user)):
    return {"email": user.email, "mfa_enabled": user.mfa_enabled}
```
**Includes:** JWT tokens, session cookies, OAuth/OIDC (Google, GitHub, etc.), MFA/TOTP, password policies, account lockout, key rotation.
### Usage-Based Billing
Track usage and generate invoices:
```python
from datetime import datetime

from svc_infra.billing import BillingService
billing = BillingService(session=db, tenant_id="tenant_123")
# Record API usage (idempotent)
billing.record_usage(metric="api_calls", amount=1, idempotency_key="req_abc")
# Generate monthly invoice
invoice = billing.generate_monthly_invoice(
    period_start=datetime(2025, 1, 1),
    period_end=datetime(2025, 2, 1),
)
```
**Includes:** Usage events, aggregation, plans & entitlements, subscriptions, invoices, Stripe sync hooks.
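The `idempotency_key` above is what makes retried deliveries safe to replay. As a rough illustration of the pattern (not svc-infra's actual implementation, which persists keys alongside the usage events), an idempotent recorder can be sketched as:

```python
class IdempotentRecorder:
    """Sketch of idempotent usage recording. svc-infra's real
    BillingService persists keys in the database; this keeps them
    in memory purely to show the mechanism."""

    def __init__(self):
        self._seen_keys = set()
        self.totals = {}

    def record_usage(self, metric: str, amount: int, idempotency_key: str) -> bool:
        # A repeated key means a retried delivery: count it exactly once.
        if idempotency_key in self._seen_keys:
            return False
        self._seen_keys.add(idempotency_key)
        self.totals[metric] = self.totals.get(metric, 0) + amount
        return True

billing = IdempotentRecorder()
billing.record_usage(metric="api_calls", amount=1, idempotency_key="req_abc")
billing.record_usage(metric="api_calls", amount=1, idempotency_key="req_abc")  # retry: no-op
```

Because duplicates are no-ops, clients and webhook senders can retry freely without double-billing.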
### Background Jobs
Redis-backed job queue with retries:
```python
from svc_infra.jobs.easy import easy_jobs
queue, scheduler = easy_jobs() # Auto-detects Redis or uses memory
# Enqueue work
queue.enqueue("send_email", {"to": "user@example.com", "template": "welcome"})
# Schedule recurring tasks
scheduler.add("cleanup", interval_seconds=3600, target="myapp.tasks:cleanup")
```
```bash
# Run the worker
svc-infra jobs run
```
**Includes:** Visibility timeout, exponential backoff, dead letter queue, interval scheduler, CLI worker.
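For intuition, exponential backoff simply doubles the retry delay per attempt up to a cap; jobs that keep failing past the retry budget land in the dead letter queue. A minimal sketch (the base delay and cap here are illustrative, not svc-infra's actual schedule):

```python
def retry_delay(attempt: int, base: float = 1.0, cap: float = 300.0) -> float:
    """Delay in seconds before retry number `attempt` (0-based):
    1s, 2s, 4s, 8s, ... capped at 5 minutes. Illustrative values only."""
    return min(cap, base * (2 ** attempt))

# attempts 0..5 -> 1, 2, 4, 8, 16, 32 seconds
```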
### Webhooks
Send and receive webhooks with proper security:
```python
from fastapi import Depends
from svc_infra.webhooks import add_webhooks, WebhookService
from svc_infra.webhooks.fastapi import require_signature
add_webhooks(app) # Adds subscription management routes
# Publish events (webhook_service: a configured WebhookService instance)
webhook_service.publish("invoice.paid", {"invoice_id": "inv_123"})
# Verify incoming webhooks
@app.post("/webhooks/external")
async def receive(payload=Depends(require_signature(lambda: ["secret1", "secret2"]))):
    return {"ok": True}
```
**Includes:** Subscription store, HMAC-SHA256 signing, delivery retries, idempotent processing.
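Passing a list of secrets to `require_signature` is what enables zero-downtime secret rotation: a request is accepted if any configured secret verifies it. The core check is standard HMAC-SHA256 with a constant-time comparison; here is a generic sketch (the header name and hex encoding are assumptions, not svc-infra's exact wire format):

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secrets: list[str]) -> bool:
    """Return True if any configured secret (old or new, during
    rotation) produces a matching HMAC-SHA256 digest."""
    for secret in secrets:
        expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
        # compare_digest avoids leaking match position via timing
        if hmac.compare_digest(expected, signature_hex):
            return True
    return False
```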
### Observability
Production monitoring out of the box:
```python
app = easy_service_app(name="MyAPI", release="1.0.0")
# Prometheus metrics at /metrics
# Health checks at /healthz, /readyz, /startupz
# Request tracing with OpenTelemetry
```
```bash
# Generate Grafana dashboards
svc-infra obs dashboard --service myapi --output ./dashboards/
```
**Includes:** Prometheus metrics, Grafana dashboard generator, OTEL integration, SLO helpers.
## Configuration
Everything is configurable via environment variables:
```bash
# Database
SQL_URL=postgresql://user:pass@localhost/mydb
MONGO_URL=mongodb://localhost:27017
# Auth
AUTH_JWT__SECRET=your-secret-key
AUTH_SMTP_HOST=smtp.sendgrid.net
# Jobs
JOBS_DRIVER=redis
REDIS_URL=redis://localhost:6379
# Storage
STORAGE_BACKEND=s3
STORAGE_S3_BUCKET=my-uploads
# Observability
ENABLE_OBS=true
METRICS_PATH=/metrics
```
See the [Environment Reference](docs/environment.md) for all options.
## Documentation
| Module | Description | Guide |
|--------|-------------|-------|
| **API** | FastAPI bootstrap, middleware, versioning | [docs/api.md](docs/api.md) |
| **Auth** | Sessions, OAuth/OIDC, MFA, API keys | [docs/auth.md](docs/auth.md) |
| **Billing** | Usage tracking, subscriptions, invoices | [docs/billing.md](docs/billing.md) |
| **Database** | SQL + MongoDB, migrations, patterns | [docs/database.md](docs/database.md) |
| **Jobs** | Background tasks, scheduling | [docs/jobs.md](docs/jobs.md) |
| **Webhooks** | Publishing, signing, verification | [docs/webhooks.md](docs/webhooks.md) |
| **Cache** | Redis/memory caching, TTL helpers | [docs/cache.md](docs/cache.md) |
| **Storage** | S3, local, memory file storage | [docs/storage.md](docs/storage.md) |
| **Observability** | Metrics, tracing, dashboards | [docs/observability.md](docs/observability.md) |
| **Security** | Password policy, headers, MFA | [docs/security.md](docs/security.md) |
| **Tenancy** | Multi-tenant isolation | [docs/tenancy.md](docs/tenancy.md) |
| **CLI** | Command-line tools | [docs/cli.md](docs/cli.md) |
## Running the Example
See all features working together:
```bash
git clone https://github.com/nfraxlab/svc-infra.git
cd svc-infra
# Setup and run
make setup-template # Creates DB, runs migrations
make run-template # Starts at http://localhost:8001
```
Visit http://localhost:8001/docs to explore the API.
## Related Packages
svc-infra is part of the **nfrax** infrastructure suite:
| Package | Purpose |
|---------|---------|
| **[svc-infra](https://github.com/nfraxlab/svc-infra)** | Backend infrastructure (auth, billing, jobs, webhooks) |
| **[ai-infra](https://github.com/nfraxlab/ai-infra)** | AI/LLM infrastructure (agents, tools, RAG, MCP) |
| **[fin-infra](https://github.com/nfraxlab/fin-infra)** | Financial infrastructure (banking, portfolio, insights) |
## License
MIT License - use it for anything.
---
<div align="center">
**Built by [nfraxlab](https://github.com/nfraxlab)**
[Star us on GitHub](https://github.com/nfraxlab/svc-infra) · [View on PyPI](https://pypi.org/project/svc-infra/)
</div>
| text/markdown | Ali Khatami | aliikhatami94@gmail.com | null | null | MIT | fastapi, sqlalchemy, alembic, auth, infra, async, pydantic | [
"Development Status :: 5 - Production/Stable",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python... | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"adyen>=10.0.0; extra == \"payments\" or extra == \"adyen\"",
"aioboto3>=12.0.0; extra == \"s3\"",
"aiofiles>=24.0.0",
"aiosqlite>=0.19.0; extra == \"sqlite\"",
"alembic>=1.13.0",
"asyncpg>=0.29.0; extra == \"pg\"",
"authlib>=1.0.0",
"cashews[redis]>=7.0",
"duckdb>=0.10.0; extra == \"duckdb\"",
"e... | [] | [] | [] | [
"Documentation, https://nfrax.com/svc-infra",
"Homepage, https://github.com/nfraxlab/svc-infra",
"Issues, https://github.com/nfraxlab/svc-infra/issues",
"Repository, https://github.com/nfraxlab/svc-infra"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:55:42.977352 | svc_infra-1.15.0.tar.gz | 410,322 | 43/dc/69750f99ca4d83b56239abc55e94efeab4e7bd9de9178c0f0212f8f3ad6a/svc_infra-1.15.0.tar.gz | source | sdist | null | false | 5b556e8172429adaa4f61b9390b7a9ea | 853bbc1b28ee6cef1bf6b9c963802c3875a718b8f70e41c97e1d074266a7d724 | 43dc69750f99ca4d83b56239abc55e94efeab4e7bd9de9178c0f0212f8f3ad6a | null | [] | 297 |
2.4 | mistralai-workflows | 2.0.0b5 | Mistral Workflows - Build reliable AI workflows with Python | # Mistral Workflows
Build reliable, production-grade AI workflows with Python.
## Overview
Mistral Workflows is a Python SDK for building AI-powered workflows with built-in reliability, observability, and scalability. It provides fault tolerance, durability, and exactly-once execution guarantees.
## Features
- **Simple Python API**: Define workflows using Python decorators
- **Built-in Reliability**: Automatic retries, timeouts, and error handling
- **Distributed Execution**: Scale workflows across multiple workers
- **LLM Integration**: Native support for Mistral AI and other LLM providers
- **Observability**: Distributed tracing, structured logging, and event streaming
- **Type Safety**: Full type hints and Pydantic validation
## Installation
```bash
pip install mistralai-workflows
```
## Quick Start
```python
from datetime import timedelta

from mistralai_workflows import workflow, activity

@activity
async def get_weather(city: str) -> str:
    # Your activity implementation
    return f"Weather in {city}: Sunny"

@workflow.define
class WeatherWorkflow:
    @workflow.run
    async def run(self, city: str) -> str:
        weather = await workflow.execute_activity(
            get_weather,
            city,
            start_to_close_timeout=timedelta(seconds=10),
        )
        return weather
```
## Documentation
For full documentation, visit [docs.mistral.ai/workflows](https://docs.mistral.ai/workflows)
## Examples
The SDK includes comprehensive examples in the `mistralai_workflows/examples` directory. You can run all examples with a single command:
```bash
# Run all example workflows in a single worker
python -m mistralai_workflows.examples.all_workflows_worker
```
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
| text/markdown | null | Mistral AI <support@mistral.ai> | null | null | Apache-2.0 | ai, llm, mistral, orchestration, temporal, workflows | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libr... | [] | null | null | <3.13,>=3.12 | [] | [] | [] | [
"aioboto3<13.0.0,>=12.4.0",
"aiocache>=0.12.3",
"asynciolimiter>=1.2.0",
"authlib>=1.6.5",
"azure-storage-blob[aio]<12.29.0,>=12.28.0",
"cryptography>=41.0.0",
"gcloud-aio-storage<10.0.0,>=9.3.0",
"griffe>=1.14.0",
"httpx>=0.27.0",
"jinja2>=3.1.6",
"jsonpatch>=1.33",
"mcp>=1.12.4",
"nats-py>... | [] | [] | [] | [
"Homepage, https://mistral.ai",
"Documentation, https://docs-internal-frameworks.mistral.ai/workflows"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:54:48.032828 | mistralai_workflows-2.0.0b5.tar.gz | 337,273 | 78/9a/d17492496237059790368f530f9c469c21e1bbfe274673adbd178559ce82/mistralai_workflows-2.0.0b5.tar.gz | source | sdist | null | false | f5e1c398c5b906ac3c23b325beb880b1 | 2302534fd2e23eeee2eb38341aabe2a2f91fb701ff70b556b968b12206d2f739 | 789ad17492496237059790368f530f9c469c21e1bbfe274673adbd178559ce82 | null | [
"LICENSE"
] | 868 |
2.4 | mistralai-workflows-plugins-mistralai | 2.0.0b5 | Mistral AI plugin for Mistral Workflows - Provides native Mistral AI integration | # Mistral Workflows - Mistral AI Plugin
Native Mistral AI integration for Mistral Workflows.
## Overview
This plugin provides Mistral AI-specific activities and models for building AI workflows with the Mistral AI API.
## Features
- **Mistral AI Activities**: Pre-built activities for chat completions, embeddings, and more
- **Streaming Support**: Native streaming for chat responses
- **Model Definitions**: Type-safe model configurations
- **Agent Runtime**: Build and run autonomous AI agents
- **Session Management**: Stateful agent sessions with context persistence
- **MCP Support**: Model Context Protocol integration for tool use
- **Tool Execution**: Built-in tool calling and execution framework
## Installation
```bash
pip install mistralai-workflows[mistralai]
```
Or install directly:
```bash
pip install mistralai-workflows-plugins-mistralai
```
## Quick Start
```python
import mistralai_workflows as workflows
import mistralai_workflows.plugins.mistralai as workflows_mistralai
@workflows.workflow.define(name="chat-workflow")
class ChatWorkflow:
    @workflows.workflow.entrypoint
    async def run(self, prompt: str) -> str:
        response = await workflows_mistralai.mistralai_chat_stream(
            workflows_mistralai.ChatCompletionRequest(
                model="mistral-medium-latest",
                messages=[workflows_mistralai.UserMessage(content=prompt)],
            )
        )
        return response.content
```
## Documentation
For full documentation, visit [docs-internal-frameworks.mistral.ai/workflows](https://docs-internal-frameworks.mistral.ai/workflows)
## Examples
Run examples with:
```bash
python -m mistralai_workflows.examples.assist.workflow_multi_turn_chat
python -m mistralai_workflows.examples.assist.workflow_insurance_claims
python -m mistralai_workflows.examples.assist.workflow_local_session_streaming
python -m mistralai_workflows.examples.assist.workflow_travel_agent_streaming
python -m mistralai_workflows.examples.assist.workflow_with_agent
python -m mistralai_workflows.examples.assist.workflow_extract_markdown
python -m mistralai_workflows.examples.assist.workflow_embeddings
```
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
| text/markdown | null | Mistral AI <support@mistral.ai> | null | null | Apache-2.0 | ai, llm, mistral, plugin, workflows | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libr... | [] | null | null | <3.13,>=3.12 | [] | [] | [] | [
"mcp>=1.12.4",
"mistralai-workflows<2.1.0,>=2.0.0b1",
"mistralai>=1.8.1",
"structlog<26,>=24"
] | [] | [] | [] | [
"Homepage, https://mistral.ai",
"Documentation, https://docs-internal-frameworks.mistral.ai/workflows"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T16:54:39.384698 | mistralai_workflows_plugins_mistralai-2.0.0b5.tar.gz | 93,749 | 9d/c4/7906de38a6fd05f96908df2ac0b6bf7f90f4d96c7855bd7450953b1cb6b6/mistralai_workflows_plugins_mistralai-2.0.0b5.tar.gz | source | sdist | null | false | b6570d996e30e9a4cfefba7968d4d0fa | 5005eb5ac5d99d96773f43ebd02fd3c80226ed242d51577a271cb5fe5d6bd8db | 9dc47906de38a6fd05f96908df2ac0b6bf7f90f4d96c7855bd7450953b1cb6b6 | null | [
"LICENSE"
] | 281 |
2.4 | biolmai | 0.3.0 | BioLM Python client | ========
BioLM AI
========
.. image:: https://img.shields.io/pypi/v/biolmai.svg
    :target: https://pypi.python.org/pypi/biolmai

.. image:: https://api.travis-ci.com/BioLM/py-biolm.svg?branch=production
    :target: https://travis-ci.org/github/BioLM/py-biolm

.. image:: https://readthedocs.org/projects/biolm-ai/badge/?version=latest
    :target: https://biolm-ai.readthedocs.io/en/latest/?version=latest
    :alt: Documentation Status
Python client and SDK for `BioLM <https://biolm.ai>`_
Install the package:
.. code-block:: bash

    pip install biolmai
Basic usage:
.. code-block:: python

    from biolmai import biolm

    # Encode a single sequence
    result = biolm(entity="esm2-8m", action="encode", type="sequence", items="MSILVTRPSPAGEEL")

    # Predict a batch of sequences
    result = biolm(entity="esmfold", action="predict", type="sequence", items=["SEQ1", "SEQ2"])

    # Write results to disk
    biolm(entity="esmfold", action="predict", type="sequence", items=["SEQ1", "SEQ2"], output='disk', file_path="results.jsonl")
Asynchronous usage:
.. code-block:: python

    import asyncio

    from biolmai.core.http import BioLMApiClient

    async def main():
        model = BioLMApiClient("esmfold")
        result = await model.predict(items=[{"sequence": "MDNELE"}])
        print(result)

    asyncio.run(main())
Overview
========
The BioLM Python client provides a high-level, user-friendly interface for interacting with the BioLM API. It supports both synchronous and asynchronous usage, automatic batching, flexible error handling, and efficient processing of biological data.
Main features:
- High-level BioLM constructor for quick requests
- Sync and async interfaces
- Automatic or custom rate limiting/throttling
- Schema-based batch size detection
- Flexible input formats (single key + list, or list of dicts)
- Low memory usage via generators
- Flexible error handling (raise, continue, or stop on error)
- Universal HTTP client for both sync and async
Features
========
- **High-level constructor**: Instantly run an API call with a single line.
- **Sync and async**: Use ``BioLM`` for sync, or ``BioLMApiClient`` for async.
- **Flexible rate limiting**: Use API throttle, disable, or set your own (e.g., '1000/second').
- **Schema-based batching**: Automatically queries API for max batch size.
- **Flexible input**: Accepts a single key and list, or list of dicts, or list of lists for advanced batching.
- **Low memory**: Uses generators for validation and batching.
- **Error handling**: Raise HTTPX errors, continue on error, or stop on first error.
- **Disk output**: Write results as JSONL to disk.
- **Universal HTTP client**: Efficient for both sync and async.
- **Direct access to schema and batching**: Use ``BioLMApi`` for advanced workflows, including ``.schema()``, ``.call()``, and ``._batch_call_autoschema_or_manual()``.
**Example endpoints and actions:**
- `esm2-8m/encode`: Embedding for protein sequences.
- `esmfold/predict`: Structure prediction for protein sequences.
- `progen2-oas/generate`: Sequence generation from a context string.
- `dnabert2/predict`: Masked prediction for DNA sequences.
- `ablang2/encode`: Embeddings for paired-chain antibodies.
* Free software: Apache Software License 2.0
* Documentation: https://docs.biolm.ai
| text/x-rst | BioLM | BioLM <support@biolm.ai> | null | null | Apache Software License 2.0 | biolmai | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language ... | [] | https://github.com/BioLM/py-biolm | null | >=3.7 | [] | [] | [] | [
"httpx>=0.23.0",
"httpcore",
"Click>=6.0",
"requests",
"aiodns",
"synchronicity>=0.5.0; python_version >= \"3.9\"",
"synchronicity<0.5.0; python_version < \"3.9\"",
"typing_extensions; python_version < \"3.9\"",
"aiohttp<=3.8.6; python_version < \"3.12\"",
"aiohttp>=3.9.0; python_version >= \"3.12... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T16:54:20.519899 | biolmai-0.3.0.tar.gz | 2,383,474 | 93/a0/c0b7e2ff3c3dc27fdba326dfda4ee9cfc737bb335ba25067a18017644e11/biolmai-0.3.0.tar.gz | source | sdist | null | false | 620c7bd2af432023eb58a3627400c88f | 4a9d1851f9296b0d613279d8efd320f62e7c38b74fa5becb76ca05ff31c91b8c | 93a0c0b7e2ff3c3dc27fdba326dfda4ee9cfc737bb335ba25067a18017644e11 | null | [
"LICENSE",
"AUTHORS.rst"
] | 257 |
2.4 | provara-protocol | 1.0.1 | Self-sovereign cryptographic event logs. Ed25519 · SHA-256 · RFC 8785 | # Provara Protocol
**Self-sovereign cryptographic event logs.**
Ed25519 · SHA-256 · RFC 8785 · Apache 2.0
[Try the Playground](https://provara-protocol.github.io/provara/) ·
[Read the Spec](docs/BACKPACK_PROTOCOL_v1.0.md) ·
[PyPI](https://pypi.org/project/provara-protocol/)
---
## What is Provara?
Provara is an append-only, cryptographically signed event log that anyone can verify and no one can silently rewrite. It preserves memory as evidence: signed observations that can be replayed into state, audited independently, and stored in plain files for long-horizon readability. Built for AI governance, cognitive continuity, and accountable records that outlive platforms.
---
## 60-Second Quickstart
```bash
pip install provara-protocol
```
```bash
# Create a vault
provara init my-vault
# Append a signed event
provara append my-vault --type OBSERVATION --data '{"event":"test"}' --keyfile my-vault/identity/private_keys.json
# Verify integrity
provara verify my-vault
```
---
## Key Features
- Tamper-evident append-only event logs
- Ed25519 signatures, SHA-256 hashing, RFC 8785 canonical JSON
- Per-actor causal chains with cryptographic linkage
- SCITT-compatible event types
- MCP server for AI agent integration
- Browser playground — zero install
---
## Three Implementations
| Language | Status | Tests |
|----------|--------|-------|
| Python | v1.0.0 (reference) | 232 |
| Rust | Complete | 20 |
| TypeScript | Complete | — |
---
## Documentation
| Resource | Description |
|----------|-------------|
| [Quickstart](docs/QUICKSTART.md) | Install, init, verify in 5 minutes |
| [Tutorials](docs/TUTORIALS.md) | Step-by-step guides for common workflows |
| [API Reference](docs/API.md) | Module and CLI documentation |
| [Cookbook](docs/COOKBOOK.md) | Recipes for AI governance, key rotation, sync |
| [Protocol Spec](docs/BACKPACK_PROTOCOL_v1.0.md) | Normative specification |
| [SOUL.md](docs/SOUL.md) | Design philosophy and principles |
---
## Badges




---
## Why This Exists
Your memories, your identity, your cognitive continuity should not depend on any company surviving, any server staying online, or any platform deciding to keep your data. Provara is built for people and organizations that need accountable records: families preserving history, AI teams logging model decisions, and regulated operators proving chain-of-custody.
> **Golden Rule:** Truth is not merged. Evidence is merged. Truth is recomputed.
---
## Design Guarantees
| Guarantee | What It Means |
|-----------|---------------|
| **No vendor lock-in** | Plain text JSON events. No proprietary formats. |
| **No internet required** | Works entirely offline. No phone-home, no telemetry. |
| **No accounts** | Your identity lives in your files, not on a server. |
| **Tamper-evident** | Merkle trees, Ed25519 signatures, causal chains detect modification. |
| **Human-readable** | NDJSON event log — open with any text editor. |
| **50-year readable** | JSON, SHA-256, Ed25519 are industry standards. |
---
## Vault Anatomy
```
my-vault/
├── identity/
│   ├── genesis.json           # Birth certificate
│   └── private_keys.json      # Ed25519 keypair (guard this!)
├── events/
│   └── events.ndjson          # Append-only event log
├── policies/
│   ├── safety_policy.json     # L0-L3 kinetic risk tiers
│   ├── retention_policy.json  # Data permanence rules
│   └── sync_contract.json     # Governance + authority ladder
├── manifest.json              # File inventory with SHA-256 hashes
├── manifest.sig               # Ed25519 signature over manifest
└── merkle_root.txt            # Integrity anchor
```
---
## MCP Server — AI Agents Write Tamper-Evident Memory
Connect any AI agent that supports the [Model Context Protocol](https://modelcontextprotocol.io/) to a Provara vault:
```json
{
  "mcpServers": {
    "provara": {
      "command": "python",
      "args": ["-m", "provara.mcp", "--transport", "stdio"]
    }
  }
}
```
**Available tools:** `append_event`, `verify_chain`, `snapshot_state`, `query_timeline`, `list_conflicts`, `generate_digest`, `export_markdown`, `checkpoint_vault`
---
## AI Governance Use Cases
| Use Case | How Provara Supports It |
|----------|------------------------|
| **Model evaluation logging** | Signed `OBSERVATION` events with model ID, benchmark, scores |
| **Prompt & test result logging** | Chained events with inputs, outputs, latency — tamper-evident |
| **Policy enforcement decisions** | `ATTESTATION` events record decisions with policy version and reasoning |
| **AI cost & routing oversight** | Token usage, model selection, routing decisions as signed events |
| **Red-team audit records** | Append-only audit trail for adversarial tests and severity assessments |
---
## Testing
```bash
# Run all tests
pytest
# Compliance tests against reference vault
python tests/backpack_compliance_v1.py tests/fixtures/reference_backpack -v
```
| Suite | Tests | Coverage |
|-------|------:|----------|
| Core unit tests | 125 | Reducer, sync, crypto, bootstrap |
| Compliance | 17 | Full protocol conformance |
| PSMC | 60 | Application layer |
| MCP Server | 22 | All tools, both transports |
| Test vectors | 8 | Cross-language validation |
| **Total** | **232** | |
---
## Reimplementing Provara
The protocol is language-agnostic. To reimplement:
1. Implement SHA-256 (FIPS 180-4), Ed25519 (RFC 8032), RFC 8785 canonical JSON
2. Validate against [`test_vectors/vectors.json`](test_vectors/vectors.json)
3. Build a deterministic reducer for `OBSERVATION`, `ATTESTATION`, `RETRACTION`
4. Run the [17 compliance tests](tests/backpack_compliance_v1.py)
**If state hashes match, your implementation is correct.**
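Step 1 is the crux. For simple ASCII objects, RFC 8785 canonicalization reduces to sorted keys and no insignificant whitespace; the full spec also pins down number serialization and string escaping, so treat the following as a sketch to check against `test_vectors/vectors.json`, not a complete JCS implementation:

```python
import hashlib
import json

def canonical_json(obj) -> bytes:
    # Sorted keys, compact separators: agrees with RFC 8785 for plain
    # ASCII strings and small integers. A full JCS implementation needs
    # extra care with floats and Unicode escape sequences.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"), ensure_ascii=False).encode("utf-8")

def event_hash(event: dict) -> str:
    # SHA-256 over the canonical form gives the stable content hash
    # that implementations must agree on.
    return hashlib.sha256(canonical_json(event)).hexdigest()
```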
---
## Key Management
Your private keys are the root of sovereignty. Guard them:
- `private_keys.json` should never live on the same drive as your vault
- Store keys in separate physical locations
- Use `--quorum` flag during bootstrap for recovery key
- Compromised keys can be rotated via `KEY_REVOCATION` + `KEY_PROMOTION` events
**Full guide:** [Keys_Info/HOW_TO_STORE_KEYS.md](Keys_Info/HOW_TO_STORE_KEYS.md)
---
## Recovery
| Scenario | Solution |
|----------|----------|
| Lost keys, corrupted vault | [Recovery/WHAT_TO_DO.md](Recovery/WHAT_TO_DO.md) |
| Catastrophic failure | [RECOVERY_INSTRUCTIONS.md](RECOVERY_INSTRUCTIONS.md) |
| Routine backup/restore | `provara backup` / `provara restore` |
---
## FAQ
**What happens if I lose my private keys?**
With `--quorum`, the quorum key can authorize rotation. Without it, the vault is readable but read-only. See [Recovery/WHAT_TO_DO.md](Recovery/WHAT_TO_DO.md).
**Can I read my vault without this software?**
Yes. Events are NDJSON — open `events/events.ndjson` in any text editor.
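Concretely, "any text editor" also means any JSON parser: the log is one self-contained JSON object per line, so a generic reader (no Provara tooling assumed) is a few lines:

```python
import json

def read_events(path: str) -> list[dict]:
    # NDJSON: each non-empty line is one complete JSON event.
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```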
**What if Python goes away in 20 years?**
JSON, SHA-256, and Ed25519 are industry standards implemented in every major language. The data survives the tooling.
**Can multiple devices share a vault?**
Yes. Sync uses union merge with causal chain verification and fork detection. Event-sourced architecture makes merging safe.
**Is this a blockchain?**
No. It's a Merkle tree over files with per-actor causal chains. Closer to git than Bitcoin. No consensus, no mining, no tokens.
**What does "Truth is not merged. Evidence is merged. Truth is recomputed." mean?**
You merge raw observations, then rerun the deterministic reducer to derive fresh conclusions. No merge conflicts at the belief layer.
---
## Version
```
Protocol        Provara v1.0
Implementation  1.0.1
PyPI            provara-protocol 1.0.1
Tests Passing   232
```
---
## License
Apache 2.0
Normative specification: [`docs/BACKPACK_PROTOCOL_v1.0.md`](docs/BACKPACK_PROTOCOL_v1.0.md)
| text/markdown | Provara | null | null | null | Apache-2.0 | cryptography, ed25519, event-sourcing, memory-protocol, tamper-evident, merkle-tree, provara | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Progr... | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=41.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"hypothesis>=6.0; extra == \"dev\"",
"mkdocs-material>=9.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/provara-protocol/provara",
"Documentation, https://provara-protocol.github.io/provara/",
"Playground, https://provara-protocol.github.io/provara/",
"Repository, https://github.com/provara-protocol/provara",
"Issues, https://github.com/provara-protocol/provara/issues",
"Change... | twine/6.2.0 CPython/3.12.10 | 2026-02-18T16:54:10.636857 | provara_protocol-1.0.1.tar.gz | 143,147 | 28/db/1555b497170675c54e5ac567c9692169edd2dc931c98d3b560ec2b174506/provara_protocol-1.0.1.tar.gz | source | sdist | null | false | 98bd196049125df6775169af23d059e1 | 3126939e3994bfc035bb742144f1aa3fa3dd58064f6ef1dd7d91906e5dd549a8 | 28db1555b497170675c54e5ac567c9692169edd2dc931c98d3b560ec2b174506 | null | [
"LICENSE"
] | 238 |