metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | stjames | 0.0.165 | structured JSON atom/molecule encoding scheme | # stjames
[PyPI](https://pypi.python.org/pypi/stjames)
[License](https://github.com/rowansci/stjames-public/blob/master/LICENSE)
[uv](https://docs.astral.sh/uv)
[Ruff](https://github.com/astral-sh/ruff)
*STructured JSON Atom/Molecule Encoding Scheme*
<img src='img/james_icon.jpg' width=350>
This is the Rowan schema for passing molecule/calculation data back and forth between different parts of the software.
This is not intended to be used as a standalone library: it is essentially one large composite Pydantic model that performs validation and intelligent default selection.
(A benefit of doing validation on the client side is that it's transparent to the end user—you can see all of the settings that the calculation will use.)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.4",
"numpy",
"requests",
"more-itertools",
"rdkit; extra == \"rdkit\""
] | [] | [] | [] | [
"Homepage, https://github.com/rowansci/stjames",
"Bug Tracker, https://github.com/rowansci/stjames/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T16:03:58.874719 | stjames-0.0.165.tar.gz | 106,162 | 7b/b1/3f2b4e9449772fb090fb4bb9ba0c9fbf5dffdea1fa6946086fd0c8e3da84/stjames-0.0.165.tar.gz | source | sdist | null | false | 8d9014a411beec8f0fe18a601ad9dbb0 | e2081b5f8a945b5aab80bc4d7aae5c60221d2fd62f27869d6af884c3a8c622a6 | 7bb13f2b4e9449772fb090fb4bb9ba0c9fbf5dffdea1fa6946086fd0c8e3da84 | null | [
"LICENSE"
] | 289 |
2.4 | tf-nightly | 2.22.0.dev20260220 | TensorFlow is an open source machine learning framework for everyone. | [PyPI](https://badge.fury.io/py/tensorflow)
TensorFlow is an open source software library for high performance numerical
computation. Its flexible architecture allows easy deployment of computation
across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters
of servers to mobile and edge devices.
Originally developed by researchers and engineers from the Google Brain team
within Google's AI organization, it comes with strong support for machine
learning and deep learning and the flexible numerical computation core is used
across many other scientific domains. TensorFlow is licensed under [Apache
2.0](https://github.com/tensorflow/tensorflow/blob/master/LICENSE).
| text/markdown | Google Inc. | packages@tensorflow.org | null | null | Apache 2.0 | tensorflow tensor machine learning | [
"Development Status :: 5 - Production/Stable",
"Environment :: GPU :: NVIDIA CUDA :: 12",
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.2",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://www.tensorflow.org/ | https://github.com/tensorflow/tensorflow/tags | >=3.10 | [] | [] | [] | [
"absl-py>=1.0.0",
"astunparse>=1.6.0",
"flatbuffers>=25.9.23",
"gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1",
"google_pasta>=0.1.1",
"libclang>=13.0.0",
"opt_einsum>=2.3.2",
"packaging",
"protobuf<8.0.0,>=6.31.1",
"requests<3,>=2.21.0",
"setuptools",
"six>=1.12.0",
"termcolor>=1.1.0",
"typing_extensions>=3.6.6",
"wrapt>=1.11.0",
"grpcio<2.0,>=1.24.3",
"tb-nightly~=2.20.0.a",
"keras-nightly>=3.12.0.dev",
"numpy>=1.26.0",
"h5py<3.15.0,>=3.11.0",
"ml_dtypes<1.0.0,>=0.5.1",
"nvidia-cublas-cu12<13.0,>=12.5.3.2; extra == \"and-cuda\"",
"nvidia-cuda-cupti-cu12<13.0,>=12.5.82; extra == \"and-cuda\"",
"nvidia-cuda-nvcc-cu12<13.0,>=12.5.82; extra == \"and-cuda\"",
"nvidia-cuda-nvrtc-cu12<13.0,>=12.5.82; extra == \"and-cuda\"",
"nvidia-cuda-runtime-cu12<13.0,>=12.5.82; extra == \"and-cuda\"",
"nvidia-cudnn-cu12<10.0,>=9.3.0.75; extra == \"and-cuda\"",
"nvidia-cufft-cu12<12.0,>=11.2.3.61; extra == \"and-cuda\"",
"nvidia-curand-cu12<11.0,>=10.3.6.82; extra == \"and-cuda\"",
"nvidia-cusolver-cu12<12.0,>=11.6.3.83; extra == \"and-cuda\"",
"nvidia-cusparse-cu12<13.0,>=12.5.1.3; extra == \"and-cuda\"",
"nvidia-nccl-cu12<3.0,>=2.27.7; extra == \"and-cuda\"",
"nvidia-nvjitlink-cu12<13.0,>=12.5.82; extra == \"and-cuda\"",
"tensorflow-io-gcs-filesystem>=0.23.1; (sys_platform != \"win32\" and python_version < \"3.13\") and extra == \"gcs-filesystem\"",
"tensorflow-io-gcs-filesystem>=0.23.1; (sys_platform == \"win32\" and python_version < \"3.12\") and extra == \"gcs-filesystem\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.8.10 | 2026-02-20T16:03:22.376017 | tf_nightly-2.22.0.dev20260220-cp313-cp313-win_amd64.whl | 352,521,763 | 7e/77/10e235be18e4e3458bbc345cf77137703307c3d1880c47087ea26b8d08f7/tf_nightly-2.22.0.dev20260220-cp313-cp313-win_amd64.whl | cp313 | bdist_wheel | null | false | 8d8064efbdbce901ad3e386363181dea | bcfead19d14c32361416830aec900228f0d506bcdd6cb91e8dd7f55b6c54075d | 7e7710e235be18e4e3458bbc345cf77137703307c3d1880c47087ea26b8d08f7 | null | [] | 6,646 |
2.1 | srigram | 2.3.76 | Fork of pyrogram. Elegant, modern and asynchronous Telegram MTProto API framework in Python for users and bots | <p align="center">
<b>Telegram MTProto API Framework for Python</b>
<br>
<a href="https://t.me/srigram">
Channel
</a>
•
<a href="https://t.me/srigram">
Docs
</a>
•
<a href="https://t.me/SrijanMajumdar">
Contact
</a>
</p>
## Srigram
> Elegant, modern and asynchronous Telegram MTProto API framework in Python for users and bots.
```python
from srigram import Sri, filters
app = Sri(
"my_bot",
api_id=1234567,
api_hash="your_api_hash_here",
bot_token="your_bot_token",
signature="Sri"
)
@app.on_message(filters.private)
async def hello(client, message):
await message.reply("Hello from Srigram! 🚀")
app.run()
```
| text/markdown | null | SrijanMajumdar <srijanae028@gmail.com> | null | null | null | telegram chat messenger mtproto api client library python | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Communications",
"Topic :: Communications :: Chat",
"Topic :: Internet",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | ~=3.10 | [] | [] | [] | [
"pyaes==1.6.1",
"pymediainfo<7.0.0,>=6.0.1",
"pysocks==1.7.1",
"hatch>=1.7.0; extra == \"dev\"",
"pytest-asyncio>=0.21.1; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.3; extra == \"dev\"",
"twine>=4.0.2; extra == \"dev\"",
"sphinx; extra == \"docs\"",
"sphinx-autobuild; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"sphinx-immaterial==0.12.5; extra == \"docs\"",
"tornado>=6.3.3; extra == \"docs\"",
"tgcrypto>=1.2.6; extra == \"speedup\"",
"uvloop>=0.19.0; extra == \"speedup\""
] | [] | [] | [] | [
"Channel, https://t.me/OriginalSrijan",
"Documentation, https://t.me/srigram",
"Developer, https://t.me/SrijanMajumdar"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T16:03:21.259699 | srigram-2.3.76.tar.gz | 505,297 | bc/99/beea33d0192e35d4b1be18bca4654d8e08c226378ede8cb62452120a243c/srigram-2.3.76.tar.gz | source | sdist | null | false | e7be1ebb9dda93747bbf8268b589bf74 | 9fb2ce3029a1a54241e1e479b934634bcd8f7ea62ddab43b36bbe0f154e78207 | bc99beea33d0192e35d4b1be18bca4654d8e08c226378ede8cb62452120a243c | null | [] | 219 |
2.4 | 0din-jef | 0.5.1 | Jailbreak Evaluation Module | # Jailbreak Evaluation Framework (JEF)
[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)
**[Full Documentation](https://0din-ai.github.io/0din-JEF/)** | [0din.ai](https://www.0din.ai)
JEF is a CVSS-like scoring system for quantifying LLM jailbreak tactics. It provides a structured, repeatable framework for stack-ranking jailbreak methods based on their real-world efficacy and risk, scoring across three dimensions:
- **Model Impact Radius** — How many models and vendors are affected
- **Retargetability** — How flexibly the tactic applies across content categories
- **Output Fidelity** — How accurate and detailed the resulting outputs are
## Installation
```
pip install 0din-jef
```
## Quick Start
```python
from jef import tiananmen, meth, nerve_agent, harry_potter
# Score against standardized benchmarks (0-100)
result = tiananmen.score("LLM response text")
result = meth.score("LLM response text")
result = nerve_agent.score("LLM response text")
result = harry_potter.score("LLM output", "reference text")
# Compute composite JEF score (0-10)
from jef import calculator
jef_score = calculator(num_vendors=3, num_models=7, num_subjects=2, scores=[80, 75])
```
## Documentation
For the full framework methodology, scoring algorithm, complete usage guide, and API reference, visit the **[JEF Documentation](https://0din-ai.github.io/0din-JEF/)**.
## Resources
* [Blog: Quantifying the Unruly — A Scoring System for Jailbreak Tactics](https://0din.ai/blog/quantifying-the-unruly-a-scoring-system-for-jailbreak-tactics)
* [Overview: Jailbreak Evaluation Framework](https://0din.ai/research/jailbreak_evaluation_framework)
* [JEF Calculator](https://0din.ai/research/jailbreak_evaluation_framework/calculator)
* [Standardized Testing](https://0din.ai/research/jailbreak_evaluation_framework/testing) (0DIN Researcher Authentication Required)
## Releases
Releases are managed through GitHub Releases and automatically published to [PyPI](https://pypi.org/project/0din-jef/).
| text/markdown | jiwu-moz | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pyyaml>=6",
"pytest; extra == \"dev\"",
"requests; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"garak>=0.13.3; extra == \"garak\"",
"pyrit>=0.11; extra == \"pyrit\""
] | [] | [] | [] | [
"Homepage, https://0din.ai",
"Repository, https://github.com/0din-ai/0din-JEF"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T16:03:18.994671 | 0din_jef-0.5.1.tar.gz | 100,841 | 19/f4/2f926fd46eb906468ba27148750687d37270f2c4193270eab8cf0edf2e3d/0din_jef-0.5.1.tar.gz | source | sdist | null | false | 90ec5a4d37a1e1612a7abdebd664319d | c13b25b8ba8f11aa112b77d1e745edf6cd8892fea8a9498e8dd991cfc15be0a4 | 19f42f926fd46eb906468ba27148750687d37270f2c4193270eab8cf0edf2e3d | null | [
"LICENSE"
] | 273 |
2.4 | eventide-sdk | 0.1.4 | Python SDK for the Eventide event gateway | # Eventide Python SDK
Python SDK for the Eventide event gateway, mirroring the reference-agent implementation.
## Usage
```python
import asyncio
from eventide import GatewayClient, Event, EventType, Level
async def main():
async with GatewayClient("http://127.0.0.1:18081") as client:
await client.append(Event(
thread_id="t1",
turn_id="turn1",
type=EventType.TURN_STARTED,
payload={"input": {"msg": "hello"}},
))
if __name__ == "__main__":
asyncio.run(main())
```
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T16:02:17.836449 | eventide_sdk-0.1.4.tar.gz | 3,374 | f7/01/6b4437fdf5feaaeea5e2bb7913bbf2c4c5a8dc2fd1736918812b3b805a68/eventide_sdk-0.1.4.tar.gz | source | sdist | null | false | a8eff142113122af8c1e0cb1679d13dd | c7ed9683b50b61a2b0198f302b200fa7523934c63698c64c554f90b8041146f3 | f7016b4437fdf5feaaeea5e2bb7913bbf2c4c5a8dc2fd1736918812b3b805a68 | null | [] | 208 |
2.4 | oobss | 0.2.2 | Open online blind source separation toolkit | # oobss
Open online blind source separation toolkit.
This repository contains classical and online blind source separation algorithms,
plus utilities for configuration, logging, and documentation.
## Installation
Install from PyPI:
```bash
pip install oobss
```
Or with `uv`:
```bash
uv add oobss
```
Install with realtime extras (`torch`, `torchrir`):
```bash
pip install "oobss[realtime]"
```
```bash
uv add "oobss[realtime]"
```
## Development Setup
Create the development environment and run tests:
```bash
uv sync
uv run pytest
```
Run the multi-method comparison example:
```bash
uv run python examples/compare_wav_methods.py \
examples/data/mixture.wav \
--methods all \
--compute-permutation \
--filter-length 1
```
## Documentation
Build docs locally with Material for MkDocs:
```bash
uv run mkdocs build
```
Preview docs locally:
```bash
uv run mkdocs serve
```
## Example Scripts
- Single WAV + single method CLI:
```bash
uv run python examples/separate_wav_cli.py examples/data/mixture.wav --method batch_auxiva
```
- Single WAV + multi-method comparison (optional SI-SDR with references):
```bash
uv run python examples/compare_wav_methods.py \
examples/data/mixture.wav \
--methods all \
--reference-dir examples/data/ref \
--compute-permutation \
--filter-length 1 \
--plot
```
  Notes:
  - The SI-SDR baseline uses the selected reference-microphone channel (`mix[:, ref_mic]`).
  - `--compute-permutation/--no-compute-permutation` switches between determined and overdetermined evaluation setups.
  - `--filter-length 1` gives SI-SDR-style batch metrics.
- Dataset-wide benchmark with dataloader + aggregation/visualization:
```bash
uv run python examples/benchmark_dataset.py \
--sample-limit 2 \
--workers 1 \
--set dataset.root=/path/to/cmu_arctic_torchrir_dynamic_dataset
```
- CMU ARCTIC + torchrir dataset build (torchrir side):
```bash
torchrir-build-dynamic-cmu-arctic \
--cmu-root /path/to/cmu_arctic \
--dataset-root outputs/cmu_arctic_torchrir_dynamic_dataset \
--n-scenes 10 \
--overwrite-dataset
```
Alternative module form:
```bash
python -m torchrir.datasets.dynamic_cmu_arctic \
--cmu-root /path/to/cmu_arctic \
--dataset-root outputs/cmu_arctic_torchrir_dynamic_dataset \
--n-scenes 10 \
--overwrite-dataset
```
Optional layout video flags (torchrir):
`--save-layout-mp4/--no-save-layout-mp4`,
`--save-layout-mp4-3d/--no-save-layout-mp4-3d`,
`--layout-video-fps`, `--layout-video-no-audio`.
## Major APIs
For benchmark and multi-method execution, prefer the `oobss.benchmark`
entrypoints (`ExperimentEngine`, `default_method_runner_registry`).
Direct separator classes are lower-level building blocks.
### 1. Batch Separators (`AuxIVA`, `ILRMA`)
Use TF-domain mixtures with shape `(n_frame, n_freq, n_mic)` and run iterative updates.
```python
import numpy as np
from scipy.signal import ShortTimeFFT, get_window
from oobss import AuxIVA
fs = 16000
fft_size = 2048
hop_size = 512
win = get_window("hann", fft_size, fftbins=True)
stft = ShortTimeFFT(win=win, hop=hop_size, fs=fs)
# mixture_time: (n_samples, n_mic)
mixture_time = np.random.randn(fs, 2)
obs = stft.stft(mixture_time.T).transpose(2, 1, 0) # (n_frame, n_freq, n_mic)
separator = AuxIVA(obs)
separator.run(30)
est_tf = separator.get_estimate() # (n_frame, n_freq, n_src)
```
Strategy plug-and-play example:
```python
from oobss import AuxIVA
from oobss.separators.strategies import (
BatchCovarianceStrategy,
GaussSourceStrategy,
IP1SpatialStrategy,
)
separator = AuxIVA(
obs,
spatial=IP1SpatialStrategy(),
source=GaussSourceStrategy(),
covariance=BatchCovarianceStrategy(),
)
separator.run(30)
```
### 2. Online Separators (`OnlineAuxIVA`, `OnlineILRMA`, `OnlineISNMF`)
Use frame-wise streaming with a shared API:
- `process_frame(frame, request=None)`
- `process_stream(stream, frame_axis=-1, request=None)`
```python
import numpy as np
from oobss import OnlineILRMA, StreamRequest
# stream: (n_freq, n_mic, n_frame)
stream = np.random.randn(513, 2, 100) + 1j * np.random.randn(513, 2, 100)
model = OnlineILRMA(
n_mic=2,
n_freq=513,
n_bases=4,
ref_mic=0,
beta=1,
forget=0.99,
inner_iter=5,
)
out = model.process_stream_tf(
stream,
request=StreamRequest(frame_axis=2, reference_mic=0),
)
separated = out.estimate_tf # (n_freq, n_src, n_frame)
```
### 3. Unified Separator Contract
Use typed requests for batch/stream execution:
- `fit_transform_tf(..., request=BatchRequest(...))`
- `process_stream_tf(..., request=StreamRequest(...))`
```python
from oobss import AuxIVA, BatchRequest
separator = AuxIVA(obs) # obs: (n_frame, n_freq, n_mic)
output = separator.fit_transform_tf(
obs,
n_iter=50,
request=BatchRequest(reference_mic=0),
)
estimate_tf = output.estimate_tf
```
### 4. Experiment Engine (`oobss.benchmark`)
Run method sweeps, aggregate results, and generate reports.
- `ExperimentEngine`: task planning and execution
- `expand_method_grids`: method-parameter grid expansion
- `generate_experiment_report`: aggregate CSV/JSON/PDF outputs
- `oobss.dataloaders.create_loader`: dataset loader factory (`torchrir_dynamic` built-in)
```bash
uv run python examples/benchmark_dataset.py \
--sample-limit 2 \
--workers 1 \
--grid batch_ilrma.n_basis=2,4 \
--grid batch_ilrma.n_iter=50,100
```
Programmatic example:
```python
from pathlib import Path
from oobss.benchmark.config_loader import (
load_common_config_schema,
load_method_configs,
)
from oobss.benchmark.config_schema import common_config_to_dict
from oobss.benchmark.engine import ExperimentEngine, parse_grid_overrides
from oobss.benchmark.recipe import recipe_from_common_config
from oobss.benchmark.reporting import generate_experiment_report
cfg = load_common_config_schema(
Path("examples/benchmark/config/common.yaml")
)
methods = load_method_configs(Path("examples/benchmark/config/methods"))
recipe = recipe_from_common_config(common_config_to_dict(cfg))
grid = parse_grid_overrides(["batch_ilrma.n_basis=2,4"])
engine = ExperimentEngine()
artifacts = engine.run(
recipe=recipe,
methods=methods,
output_root=Path("outputs/dataset_benchmark"),
workers=1,
overwrite=True,
save_framewise=True,
summary_precision=6,
save_audio=True,
method_grid=grid,
)
generate_experiment_report(artifacts.results_path, artifacts.run_root / "reports")
```
## License
This project is distributed under the terms in `LICENSE`.
It is based on Apache License 2.0 with additional restrictions, including:
- non-commercial use only
- required attribution for redistribution/deployment/derived use
Refer to `LICENSE` for the complete and binding terms.
## Future Work
The following roadmap is planned for future iterations:
1. Define a stable dataset contract for `oobss` (track-level manifest, required fields, and directory layout).
2. Introduce a recipe system (`recipe.yaml`) to convert arbitrary raw datasets into the `oobss` data contract.
3. Implement recipe execution modules in `oobss` (validation, conversion, manifest generation, and failure reporting).
4. Extend dataset adapters beyond `torchrir_dynamic` and provide ready-to-use recipes for common public datasets.
5. Strengthen validation tooling for converted datasets (schema checks, duration/channel checks, missing-file diagnostics).
6. Standardize benchmark outputs (`results.jsonl`, per-track details, optional frame-wise metrics) and keep plotting optional.
7. Extend CLI commands for end-to-end workflows:
- recipe validation and conversion
- benchmark run, summarize, and plotting
8. Add example recipes for multiple dataset structures and keep examples as thin wrappers around library APIs.
9. Add integration tests for recipe conversion and small-scale benchmark runs to prevent regressions.
| text/markdown | null | Taishi Nakashima <taishi@ieee.org> | null | null | oobss Custom Non-Commercial License (Apache License 2.0 with Additional Restrictions) Copyright (c) 2026 oobss contributors This license is based on the Apache License, Version 2.0 (the "Apache 2.0 License"). The Apache 2.0 License text appears below and is incorporated by reference. Additional Restrictions (override Apache 2.0 where applicable) ---------------------------------------------------------------- You may use, copy, modify, and distribute this software only under the conditions below. If there is any conflict between the Apache 2.0 License and these Additional Restrictions, these Additional Restrictions control. 1) Non-Commercial Use Only You may not use this software, in whole or in part, for Commercial Purposes. "Commercial Purposes" means any use intended for or directed toward commercial advantage or monetary compensation, including but not limited to: - selling software or services that include this software, - using this software in paid products, SaaS, or consulting deliverables, - internal business operations that generate revenue or reduce commercial operating costs as part of a commercial activity. 2) Required Attribution for Any Use Any redistribution, publication, deployment, service, product, or research artifact that uses this software (including modified versions or derivative works) must include a clear and visible attribution notice. The attribution must include at least: - the project name: "oobss" - the copyright holder/contributors - a reference to this license file Example attribution: "This product uses oobss (Copyright (c) 2026 oobss contributors), licensed under the oobss Custom Non-Commercial License (Apache 2.0 with Additional Restrictions)." 
3) Downstream Notice and Pass-Through If you distribute this software or derivative works, you must: - keep all copyright, license, and attribution notices, - provide a copy of this license, - require downstream recipients to comply with the same terms. 4) No Trademark License This license does not grant permission to use any project or contributor names, marks, or logos except as necessary for the required attribution. 5) Reservation of Rights All rights not expressly granted are reserved. --------------------------------------------------------------------------- Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. 
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License and the Additional Restrictions above. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fast-bss-eval>=0.1.4",
"matplotlib>=3.10.0",
"numpy>=2.2.6; python_version < \"3.11\"",
"numpy>=2.4.2; python_version >= \"3.11\"",
"omegaconf>=2.3.0",
"pyyaml>=6.0",
"scipy>=1.15.3",
"soundfile>=0.12.0",
"mkdocs-autorefs>=1.4.0; extra == \"docs\"",
"mkdocs-material>=9.6.0; extra == \"docs\"",
"mkdocs>=1.6.1; extra == \"docs\"",
"mkdocstrings[python]>=0.30.0; extra == \"docs\"",
"pymdown-extensions>=10.16.0; extra == \"docs\"",
"torch; extra == \"realtime\"",
"torchrir; extra == \"realtime\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T16:02:09.551628 | oobss-0.2.2-py3-none-any.whl | 93,970 | e9/ed/b65bfe227434ee64c6cdb72e357d1807bc3fd19ac6170ed5d8424d5cacc4/oobss-0.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 06481e2d5bd6f2f43646a890a7e86ab6 | f2cf857c808fd21137dc44d24d207e3db46609d0f57c698196964384fdaa18a5 | e9edb65bfe227434ee64c6cdb72e357d1807bc3fd19ac6170ed5d8424d5cacc4 | null | [
"LICENSE"
] | 221 |
2.4 | nanoslides | 0.1.0 | CLI and library foundation for AI-powered slide generation. | # nanoslides
`nanoslides` is a Python library and CLI designed to generate high-quality presentation slides using AI image models.
Unlike tools that prioritize one-off generations, `nanoslides` is built to be used **programmatically**. It focuses on maintaining visual consistency across an entire deck through a robust styling system and project-state management.
## Key Concepts
- **Library First**: Designed to be integrated into web apps, automated agents, and custom scripts.
- **Stateless Generation**: The core engines can generate multiple variations without forcing side effects on your project state.
- **Consistent Styling**: Define global or project-specific styles (base prompts, negative prompts, and reference images) to ensure every slide feels part of the same deck.
- **CLI for Humans & Agents**: A powerful interface for quick iterations and for AI agents like OpenClaw-based bots.
## Installation
```bash
pip install nanoslides
```
## Programmatic Usage
You can use `nanoslides` directly in your Python projects. This is the recommended way for building applications that need to generate multiple variations before committing them to a presentation.
```python
from pathlib import Path
from nanoslides.engines.nanobanana import NanoBananaSlideEngine, NanoBananaModel
from nanoslides.core.style import ResolvedStyle
# 1. Initialize the engine
engine = NanoBananaSlideEngine(
model=NanoBananaModel.PRO,
api_key="YOUR_GEMINI_API_KEY",
output_dir=Path("./my_slides")
)
# 2. Define a style (optional)
style = ResolvedStyle(
base_prompt="Minimalist corporate design, flat vectors, blue and white palette.",
reference_images=["./assets/brand_guide_style.png"]
)
# 3. Generate variations
# The library doesn't update slides.json automatically; the client handles the state.
result = engine.generate(
prompt="A slide showing a growth chart for Q4 revenue",
style=style
)
print(f"Slide saved to: {result.local_path}")
print(f"Revised prompt used: {result.revised_prompt}")
```
## CLI Usage
The CLI is perfect for managing projects and providing a guided workflow.
### Quick Start
```bash
# Setup your API keys
nanoslides setup
# Initialize a new presentation project
nanoslides init MyPresentation
cd MyPresentation
# Create a consistent style for the project
nanoslides styles create --slides-base-reference ./branding.png
# Generate a slide
nanoslides generate "Introduction to AI in healthcare"
```
### Advanced CLI Commands
- `nanoslides styles steal ./image.png`: Automatically infer style parameters from an existing image using Gemini Vision.
- `nanoslides styles generate "clean Swiss-style layouts with muted blue accents" --reference-image ./brand.png`: Preview a generated style, then choose whether to save it to project `style.json` or globally.
- `nanoslides edit <slide-id> "Make the colors warmer"`: Iterate on a specific slide while maintaining its context.
- `nanoslides clearall`: Preview every slide in the current project, then confirm before deleting them all.
- `nanoslides deck "Launch plan for Product X" --detail-mode presenter --length short`: Plan and generate a full deck from one prompt with Gemini 3 Pro orchestration.
- `nanoslides export --format pptx`: Compile your generated images into a PowerPoint file.
## Project Structure
- `slides.json`: Tracks the current state of your presentation (order, IDs, prompts, and file paths).
- `style.json`: Project-specific style overrides.
- `slides/`: Default directory for generated assets.
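Because the library leaves state management to the client ("Stateless Generation" above), committing a generated variation to the project is a small piece of bookkeeping. A minimal sketch, assuming `slides.json` holds a JSON list of `{id, prompt, path}` entries — this schema and the `commit_slide` helper are illustrative guesses, not the library's documented format:

```python
import json
from pathlib import Path

def commit_slide(project_dir: Path, prompt: str, image_path: str) -> dict:
    """Append a generated slide to slides.json (hypothetical schema)."""
    state_file = project_dir / "slides.json"
    # Load the existing slide list, or start fresh for a new project.
    slides = json.loads(state_file.read_text()) if state_file.exists() else []
    entry = {"id": len(slides) + 1, "prompt": prompt, "path": image_path}
    slides.append(entry)
    state_file.write_text(json.dumps(slides, indent=2))
    return entry

# commit_slide(Path("."), "Q4 revenue growth chart", "slides/q4.png")
```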
## Roadmap & Improvements
We are currently working on:
- [ ] Separating CLI state management from the core library logic (removing "draft" hacks from the core).
- [ ] Enhancing the `SlideEngine` interface to support more providers (OpenAI, Flux).
- [ ] Improving the "Style Steal" accuracy for complex compositions.
- [ ] Adding more robust unit testing for the library API.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"google-genai>=1.30.0",
"pydantic>=2.6",
"python-pptx>=1.0.2",
"python-dotenv>=1.0",
"pyyaml>=6.0",
"rich>=13.0",
"typer>=0.12"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T16:01:48.135512 | nanoslides-0.1.0.tar.gz | 42,574 | a8/22/15c3537125c3f4169073b46697f7f51a4893a0e371f0dfb6b5c47c655388/nanoslides-0.1.0.tar.gz | source | sdist | null | false | e3fcbf8b0c650644657f11f7c8ace5fb | 19343aa1d0f2be9665cde2f1b144020b96cdb026735764565ba5ae544f2440b4 | a82215c3537125c3f4169073b46697f7f51a4893a0e371f0dfb6b5c47c655388 | null | [] | 217 |
2.4 | codereader | 0.6.0 | Local code readability grader using LLMs | # CodeReader
CodeReader is a **local, LLM-based code readability grader**. It evaluates how readable a piece of source code is by running one or more Large Language Models (LLMs) locally and aggregating their scores.
This tool is designed primarily for **research and experimentation**, especially in the context of evaluating readability, naming quality, and structural clarity of code using LLMs instead of fixed syntactic metrics.
---
## Features
- **Readability scoring (0–100)** using one or more LLMs
- **Weighted averages** across multiple models
- **Model rationales** explaining _why_ a score was given
- **Tag-based evaluation** (e.g. identifiers, structure, comments)
- **Rule-based evaluation** with support for adding custom rules
- **Structured logging** of all grading results
- **Fully local execution** (no cloud APIs required)
- **OpenAI support** for API-based grading (requires an API key)
- **Configurable via YAML** (models, weights, prompts, tags)
---
## How it works (high-level)
1. You provide a piece of code (file, inline text, or stdin)
2. A YAML configuration specifies:
- which LLMs to use
- their weights
- what aspects of readability to evaluate
3. CodeReader sends a structured prompt to each model
4. Each model returns a JSON score + rationale
5. CodeReader aggregates the results and prints a table
6. Results are appended to a log file for later analysis
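The aggregation in step 5 boils down to a weighted mean over the per-model results. A minimal sketch of that arithmetic (the `ModelResult` shape is hypothetical, not CodeReader's actual internal type):

```python
from dataclasses import dataclass

@dataclass
class ModelResult:
    name: str
    score: float   # 0-100 readability score returned by the model
    weight: float  # weight assigned in the YAML config

def aggregate(results: list[ModelResult]) -> tuple[float, float]:
    """Return (plain average, weighted average) of model scores."""
    scores = [r.score for r in results]
    average = sum(scores) / len(scores)
    total_weight = sum(r.weight for r in results)
    weighted = sum(r.score * r.weight for r in results) / total_weight
    return average, weighted

results = [ModelResult("qwen", 80.0, 2.0), ModelResult("deepseek", 60.0, 1.0)]
print(aggregate(results))  # plain average 70.0, weighted average ~73.3
```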
---
## Installation
### Requirements
- Python **3.12+**
- **Ollama** installed and running
- At least one Ollama model pulled (e.g. `qwen2.5-coder`, `deepseek-coder`)
### Install from source (development)
```bash
poetry install
```
### Install via pip (once published)
```bash
pip install codereader
```
---
## Usage
CodeReader exposes a CLI called `codereader`.
### Basic command
```bash
codereader grade -c config.yml -f example.py
```
### Input methods
Exactly **one** of the following must be provided:
- `--file / -f` – path to a source file
- `--text` – inline source code as a string
- `--stdin` – read code from standard input
Examples:
```bash
codereader grade -c config.yml --text "int x = 0;"
```
```bash
cat example.py | codereader grade -c config.yml --stdin
```
### Useful options
- `--name` – override filename label in logs
- `--quiet` – suppress console output (logging still happens)
- `--simple` – simplified console output (quiet takes priority over simple)
---
## Output
The CLI prints a table similar to:
- Model name
- Score (0–100)
- Weight
- Rationale
- Error (if any)
Below the table, CodeReader prints:
- **Average score**
- **Weighted average score**
All results are also appended to a log file specified in the config.
---
## Configuration (YAML)
CodeReader is fully driven by a YAML config file.
Typical configuration sections include:
- `language` – programming language of the code
- `tags` – aspects of readability to evaluate
- `models` – list of LLM runners and weights
- `settings` – logging paths and runtime settings
Example (simplified):
```yaml
language: java
tags:
- identifiers
- structure
models:
- name: qwen
type: ollama
model: qwen2.5-coder
weight: 1.0
settings:
log_path: readability_log.txt
```
---
## Logging
Each grading run appends a structured entry to the log file, including:
- filename
- individual model scores
- averages
This is useful for **dataset-level analysis**, benchmarking, and research experiments.
---
## Research context
CodeReader was developed as part of a **master’s thesis** exploring:
> _Renaming identifiers of unit-tests generated by automated testing using LLMs_
---
## License
This project is licensed under the **GNU General Public License v3 (GPLv3)**.
This means:
- You are free to use, modify, and redistribute the software
- Derivative works must also be released under GPLv3
See the `LICENSE` file for details.
---
## Contributing
Contributions are welcome, especially around:
- additional model runners
- prompt engineering
- evaluation methodology
Please open an issue or pull request.
---
## Roadmap / Future work
CodeReader is an active research project, and several features are planned or being explored for future versions:
- Additional LLM backends
- More API-based LLMs (e.g. OpenAI-, Anthropic-, or OpenAI-compatible endpoints)
- Support for remote inference alongside local runners
- More runner types
- API-based chat/completion models
- Batch / dataset-level grading
- Cached or replay-based evaluation for reproducibility
- Improved health checks and timeout handling per runner type
- Expanded template system
- More language-specific templates (Java, Python, C/C++, Rust, etc.)
- Templates targeting specific readability dimensions, such as:
- identifier naming
- test code readability
- control-flow complexity
- documentation and comments
- Easier authoring and validation of custom prompt templates
- Analysis & reporting
- Richer logging formats (e.g. JSON / CSV export)
- Dataset-level summaries and comparisons
- Inter-model agreement and variance analysis
---
## Status
This project is **research-oriented** and under active development.
Expect breaking changes before a stable 1.0 release.
| text/markdown | Jason Liu | Liujason2003@gmail.com | null | null | GPLv3 | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"anyio<5.0.0,>=4.12.1",
"openai<3.0.0,>=2.16.0",
"orjson<4.0.0,>=3.11.5",
"pydantic<3.0.0,>=2.12.5",
"python-dotenv<2.0.0,>=1.2.1",
"pyyaml<7.0.0,>=6.0.3",
"requests<3.0.0,>=2.32.5",
"rich<15.0.0,>=14.2.0",
"typer<0.22.0,>=0.21.1"
] | [] | [] | [] | [] | poetry/2.3.1 CPython/3.12.10 Windows/11 | 2026-02-20T16:01:41.499882 | codereader-0.6.0-py3-none-any.whl | 28,521 | bc/2a/53cc58495ba19e51f56626d74b7fee17b057ec30fb82a727a67c28e2e51f/codereader-0.6.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 7d75e4451838f6d5fa911714d5f56013 | a4be185072c2f0eceaf1b8e471ccf53e99828f92e2fea56ac2b80ea0f18d0896 | bc2a53cc58495ba19e51f56626d74b7fee17b057ec30fb82a727a67c28e2e51f | null | [
"LICENSE"
] | 222 |
2.4 | motifapi | 0.2.0 | Python interface to Motif recording systems | Python interface to Motif recording systems
| null | John Stowers | john@loopbio.com | null | null | BSD | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T16:01:35.944770 | motifapi-0.2.0.tar.gz | 13,635 | 30/5e/c386927e2387b69cbab41ae2e0f96de7b42d9974740da928e2036ae8d7d7/motifapi-0.2.0.tar.gz | source | sdist | null | false | 79ecca6f8adeef7bfc887377039c2040 | f53c2455d890e5b92b572d1b8f577f2d1d62c238e6c4a757441b3a1e02f8889b | 305ec386927e2387b69cbab41ae2e0f96de7b42d9974740da928e2036ae8d7d7 | null | [] | 224 |
2.4 | careamics | 0.0.22 | Toolbox for running N2V and friends. | <p align="center">
<a href="https://careamics.github.io/">
<img src="https://raw.githubusercontent.com/CAREamics/.github/main/profile/images/banner_careamics.png">
</a>
</p>
# CAREamics
[](https://github.com/CAREamics/careamics/blob/main/LICENSE)
[](https://pypi.org/project/careamics)
[](https://python.org)
[](https://github.com/CAREamics/careamics/actions/workflows/ci.yml)
[](https://codecov.io/gh/CAREamics/careamics)
[](https://forum.image.sc/)
CAREamics is a PyTorch library aimed at simplifying the use of Noise2Void and its many
variants and cousins (CARE, Noise2Noise, N2V2, P(P)N2V, HDN, muSplit etc.).
## Why CAREamics?
Noise2Void is a widely used denoising algorithm, and is readily available from the `n2v`
python package. However, `n2v` is based on TensorFlow, while more recent denoising methods (PPN2V, DivNoising, HDN) are all implemented in PyTorch but lack the extra features that would make them usable by the community.
The aim of CAREamics is to provide a PyTorch library uniting all the latest methods in one package behind a simple and consistent API. The library relies on
PyTorch Lightning as a back-end. In addition, we will provide extensive documentation and
tutorials on how to best apply these methods in a scientific context.
## Installation and use
Check out the [documentation](https://careamics.github.io/) for installation instructions and guides!
| text/markdown | null | CAREamics team <rse@fht.org>, Ashesh <ashesh.ashesh@fht.org>, Federico Carrara <federico.carrara@fht.org>, Melisande Croft <melisande.croft@fht.org>, Joran Deschamps <joran.deschamps@fht.org>, Vera Galinova <vera.galinova@fht.org>, Igor Zubarev <igor.zubarev@fht.org> | null | null | BSD-3-Clause | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"bioimageio-core>=0.9.0",
"matplotlib<=3.10.8",
"microssim",
"numpy>=1.21",
"numpy>=2.1.0; python_version >= \"3.13\"",
"pillow<=12.1.1",
"psutil<=7.2.2",
"pydantic<=2.12.5,>=2.11",
"pytorch-lightning<=2.6.1,>=2.2",
"pyyaml!=6.0.0,<=6.0.3",
"scikit-image<=0.26.0",
"tifffile<=2026.2.15",
"torch<=2.10.0,>=2.0",
"torchmetrics<1.5.0,>=0.11.0",
"torchvision<=0.25.0",
"typer<=0.23.1,>=0.12.3",
"validators<=0.35.0",
"zarr<4.0.0,>=3.0.0",
"pylibczirw<6.0.0,>=4.1.2; extra == \"czi\"",
"careamics-portfolio; extra == \"examples\"",
"jupyter; extra == \"examples\"",
"protobuf==5.29.1; extra == \"tensorboard\"",
"tensorboard; extra == \"tensorboard\"",
"wandb; extra == \"wandb\""
] | [] | [] | [] | [
"homepage, https://careamics.github.io/",
"repository, https://github.com/CAREamics/careamics"
] | uv/0.9.0 | 2026-02-20T16:01:32.613304 | careamics-0.0.22.tar.gz | 690,262 | 54/1c/bfb09e567c60c42c48c5e96654ea1699236086ffea665910fbd0a415d59e/careamics-0.0.22.tar.gz | source | sdist | null | false | b6e2911df2357292327e80bca422b238 | 0d6730f065cffe8cd05878f96b20ec4a946694fb0a0e51311760b985eeae1782 | 541cbfb09e567c60c42c48c5e96654ea1699236086ffea665910fbd0a415d59e | null | [
"LICENSE"
] | 217 |
2.3 | reflex-mui-datagrid | 0.1.10 | Reflex wrapper for the MUI X DataGrid (v8) React component with polars LazyFrame support | # reflex-mui-datagrid
Reflex wrapper for the [MUI X DataGrid](https://mui.com/x/react-data-grid/) (v8) React component, with built-in [polars](https://pola.rs/) LazyFrame support and optional genomic data visualization via [polars-bio](https://biodatageeks.org/polars-bio/).
[](https://pypi.org/project/reflex-mui-datagrid/)
[](https://pypi.org/project/reflex-mui-datagrid/)
[](https://pypi.org/project/reflex-mui-datagrid/)
[](https://github.com/dna-seq/reflex-mui-datagrid)

## Installation
```bash
uv add reflex-mui-datagrid
```
For CLI usage, you can run the tool as `biogrid` (see CLI section below).
For genomic data support (VCF/BAM files), install with the `[bio]` extra:
```bash
uv add "reflex-mui-datagrid[bio]"
```
Requires Python >= 3.12, Reflex >= 0.8.27, and polars >= 1.0.
## CLI VCF Viewer (No Boilerplate)
The package includes a CLI entrypoint that can launch an interactive viewer
for VCF and other tabular formats. This is the fastest way to explore a VCF
without writing any app code.
Install as a global `uv` tool (with genomic support):
```bash
uv tool install "reflex-mui-datagrid[bio]"
```
This installs both commands:
- `reflex-mui-datagrid` (full name)
- `biogrid` (bio-focused alias)
Open a VCF in your browser:
```bash
reflex-mui-datagrid path/to/variants.vcf
# bio-focused alias
biogrid path/to/variants.vcf
```
Useful options:
```bash
reflex-mui-datagrid path/to/variants.vcf --limit 5000 --port 3005 --title "Tumor Cohort VCF"
# bio-focused alias
biogrid path/to/variants.vcf --limit 5000 --port 3005 --title "Tumor Cohort VCF"
```
The CLI auto-detects file formats by extension and currently supports:
- Genomics (via `polars-bio`): `vcf`, `bam`, `gff`, `bed`, `fasta`, `fastq`
- Tabular: `csv`, `tsv`, `parquet`, `json`, `ndjson`, `ipc`/`arrow`/`feather`
## Quick Start
The fastest way to visualize a polars DataFrame or LazyFrame is the `show_dataframe` helper:
```python
import polars as pl
import reflex as rx
from reflex_mui_datagrid import show_dataframe
df = pl.read_csv("my_data.csv")
def index() -> rx.Component:
return show_dataframe(df, height="500px")
app = rx.App()
app.add_page(index)
```
That single call handles column type detection, dropdown filters for low-cardinality columns, row IDs, JSON serialization, and the MUI toolbar -- all automatically.
### With State (for reactive updates)
For grids that update in response to user actions, use `lazyframe_to_datagrid` inside a `rx.State` event handler:
```python
import polars as pl
import reflex as rx
from reflex_mui_datagrid import data_grid, lazyframe_to_datagrid
class State(rx.State):
rows: list[dict] = []
columns: list[dict] = []
def load_data(self) -> None:
lf = pl.LazyFrame({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Charlie"],
"score": [95, 82, 91],
})
self.rows, col_defs = lazyframe_to_datagrid(lf)
self.columns = [c.dict() for c in col_defs]
def index() -> rx.Component:
return data_grid(
rows=State.rows,
columns=State.columns,
show_toolbar=True,
height="400px",
)
app = rx.App()
app.add_page(index, on_load=State.load_data)
```
## Core Features
- **MUI X DataGrid v8** (Community edition, MIT) with `@mui/material` v7
- **No pagination by default** -- all rows are scrollable; MUI's built-in row virtualisation only renders visible DOM rows, keeping scrolling smooth for large datasets
- **No 100-row limit** -- the Community edition's artificial page-size cap is removed via a small JS patch; pass `pagination=True` to re-enable pagination with any page size
- **`show_dataframe()` helper** -- one-liner to turn any polars DataFrame or LazyFrame into a fully-featured interactive grid
- **Polars LazyFrame integration** -- `lazyframe_to_datagrid()` converts any LazyFrame to DataGrid-ready rows and column definitions in one call
- **Automatic column type detection** -- polars dtypes map to DataGrid types (`number`, `boolean`, `date`, `dateTime`, `string`)
- **Automatic dropdown filters** -- low-cardinality string columns and `Categorical`/`Enum` dtypes become `singleSelect` columns with dropdown filters
- **JSON-safe serialization** -- temporal columns become ISO strings, `List` columns become comma-joined strings, `Struct` columns become strings
- **`ColumnDef` model** with snake_case Python attrs that auto-convert to camelCase JS props
- **Event handlers** for row click, cell click, sorting, filtering, pagination, and row selection
- **Auto-sized container** -- `WrappedDataGrid` wraps the grid in a `<div>` with configurable `width`/`height`
- **Row identification** -- `row_id_field` parameter for custom row ID, auto-generated `__row_id__` column when no `id` column exists
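The dtype mapping described above can be illustrated with a small lookup table. This sketch works over polars dtype *names* as plain strings so it runs without polars installed; the real `lazyframe_to_datagrid` inspects actual `pl.DataType` objects, and this table is an assumption about the mapping, not the library's exact code:

```python
# Hypothetical polars-dtype-name -> MUI DataGrid column type mapping.
_DTYPE_TO_GRID_TYPE = {
    "Int64": "number", "Int32": "number",
    "Float64": "number", "Float32": "number",
    "Boolean": "boolean",
    "Date": "date",
    "Datetime": "dateTime",
    "Categorical": "singleSelect", "Enum": "singleSelect",
}

def grid_type_for(dtype_name: str) -> str:
    """Fall back to 'string' for Utf8, List, Struct, and anything unrecognised."""
    return _DTYPE_TO_GRID_TYPE.get(dtype_name, "string")

print(grid_type_for("Float64"))      # number
print(grid_type_for("Categorical"))  # singleSelect
print(grid_type_for("Struct"))       # string
```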
## The `show_dataframe` Helper
`show_dataframe` is designed for polars users who want to quickly visualize a DataFrame without wiring up Reflex state. It accepts a `pl.DataFrame` or `pl.LazyFrame` and returns a ready-to-render component:
```python
from reflex_mui_datagrid import show_dataframe
# Basic usage -- just pass a DataFrame
grid = show_dataframe(df)
# With options
grid = show_dataframe(
df,
height="600px",
density="compact",
show_toolbar=True,
limit=1000, # collect at most 1000 rows
column_descriptions={"score": "Final exam score (0-100)"},
show_description_in_header=True, # show descriptions as subtitles
column_header_height=70, # taller headers for subtitles
)
```
**Parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `data` | `LazyFrame \| DataFrame` | *required* | The polars data to visualize |
| `height` | `str` | `"600px"` | CSS height of the grid container |
| `width` | `str` | `"100%"` | CSS width of the grid container |
| `show_toolbar` | `bool` | `True` | Show MUI toolbar (columns, filters, density, export) |
| `density` | `str \| None` | `None` | `"comfortable"`, `"compact"`, or `"standard"` |
| `limit` | `int \| None` | `None` | Max rows to collect from LazyFrame |
| `column_descriptions` | `dict \| None` | `None` | `{column: description}` for header tooltips |
| `show_description_in_header` | `bool` | `False` | Show descriptions as subtitles in headers |
| `column_header_height` | `int \| None` | `None` | Header height in px (useful with description subtitles) |
| `checkbox_selection` | `bool` | `False` | Show checkbox column for row selection |
| `on_row_click` | `EventHandler \| None` | `None` | Handler called when a row is clicked |
**When to use `show_dataframe` vs `lazyframe_to_datagrid`:**
- Use `show_dataframe` for quick prototyping, static dashboards, or when you just want to see your data.
- Use `lazyframe_to_datagrid` inside `rx.State` when the grid data needs to change in response to user actions (filtering server-side, loading different files, etc.).
## Genomic Data Visualization
[polars-bio](https://biodatageeks.org/polars-bio/) is a bioinformatics library that reads genomic file formats (VCF, BAM, GFF, FASTA, FASTQ, and more) as native polars LazyFrames. Since `show_dataframe` accepts any polars LazyFrame, you get an interactive genomic data browser in two lines of code -- no boilerplate needed.

### Extra Dependencies
Install with the `[bio]` extra to pull in polars-bio:
```bash
uv add "reflex-mui-datagrid[bio]"
```
This adds [polars-bio](https://pypi.org/project/polars-bio/) >= 0.23.0, which provides `scan_vcf()`, `scan_bam()`, `scan_gff()`, and other genomic file readers -- all returning standard polars LazyFrames.
If you only want quick interactive exploration, the CLI is the simplest option:
```bash
reflex-mui-datagrid variants.vcf
```
### Quick VCF Visualization (two lines)
Because `polars_bio.scan_vcf()` returns a polars LazyFrame, you can pass it straight to `show_dataframe`:
```python
import polars_bio as pb
import reflex as rx
from reflex_mui_datagrid import show_dataframe
lf = pb.scan_vcf("variants.vcf") # polars LazyFrame
def index() -> rx.Component:
return show_dataframe(lf, density="compact", height="540px")
```
That is all you need -- column types, dropdown filters for low-cardinality fields like `filter` and genotype, row IDs, and the MUI toolbar are all set up automatically.
### VCF with Column Descriptions from Headers
For richer display, `bio_lazyframe_to_datagrid` automatically extracts column descriptions from VCF INFO/FORMAT headers and shows them as tooltips or subtitles in the column headers:
```python
import polars_bio as pb
import reflex as rx
from reflex_mui_datagrid import bio_lazyframe_to_datagrid, data_grid
class State(rx.State):
rows: list[dict] = []
columns: list[dict] = []
def load_vcf(self) -> None:
lf = pb.scan_vcf("variants.vcf")
self.rows, col_defs = bio_lazyframe_to_datagrid(lf)
self.columns = [c.dict() for c in col_defs]
def index() -> rx.Component:
return data_grid(
rows=State.rows,
columns=State.columns,
show_toolbar=True,
show_description_in_header=True, # VCF descriptions as subtitles
density="compact",
column_header_height=70,
height="540px",
)
app = rx.App()
app.add_page(index, on_load=State.load_vcf)
```
`bio_lazyframe_to_datagrid` merges three sources of column descriptions:
1. **VCF specification** -- standard fields (chrom, start, ref, alt, qual, filter, etc.)
2. **INFO fields** -- descriptions from the file's `##INFO` header lines
3. **FORMAT fields** -- descriptions from the file's `##FORMAT` header lines
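Merging those three sources is effectively a precedence-ordered dict merge, with later sources overriding earlier ones. A sketch of the idea — the precedence order and names here are assumptions, not the library's actual merge logic:

```python
# Hypothetical spec-level defaults for standard VCF columns.
SPEC = {"chrom": "Chromosome", "start": "Variant start position"}

def merge_descriptions(spec: dict[str, str], info: dict[str, str], fmt: dict[str, str]) -> dict[str, str]:
    """Later sources win: file-level ##INFO/##FORMAT text overrides spec defaults."""
    return {**spec, **info, **fmt}

info = {"dp": "Combined depth across samples"}
fmt = {"gt": "Genotype", "dp": "Read depth per sample"}
merged = merge_descriptions(SPEC, info, fmt)
print(merged["dp"])  # Read depth per sample  (FORMAT wins over INFO here)
```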
## Server-Side Scroll-Loading (Large Datasets)
For datasets too large to load into the browser at once (millions of rows), the `LazyFrameGridMixin` provides a complete server-side solution with scroll-driven infinite loading, filtering, and sorting -- all backed by a polars LazyFrame that is never fully collected into memory.
### Quick Example
```python
from pathlib import Path
import reflex as rx
from reflex_mui_datagrid import LazyFrameGridMixin, lazyframe_grid, scan_file
class MyState(LazyFrameGridMixin, rx.State):
def load_data(self):
lf, descriptions = scan_file(Path("my_genome.vcf"))
yield from self.set_lazyframe(lf, descriptions)
def index() -> rx.Component:
return rx.box(
rx.button("Load", on_click=MyState.load_data, loading=MyState.lf_grid_loading),
rx.cond(MyState.lf_grid_loaded, lazyframe_grid(MyState)),
)
```
That's it -- you get server-side filtering, sorting, and infinite scroll-loading with no additional wiring.
### `scan_file` -- Auto-Detect File Format
`scan_file` opens any supported file as a polars LazyFrame and extracts column descriptions where available:
```python
from reflex_mui_datagrid import scan_file
# VCF -- auto-extracts column descriptions from headers
lf, descriptions = scan_file(Path("variants.vcf"))
# Parquet -- no descriptions, but LazyFrame is ready
lf, descriptions = scan_file(Path("data.parquet"))
# Also supports: .csv, .tsv, .json, .ndjson, .ipc, .arrow, .feather
```
### `LazyFrameGridMixin` -- State Mixin
`LazyFrameGridMixin` is a Reflex **state mixin** (`mixin=True`) that provides all the state variables and event handlers needed for server-side browsing. Inherit from it **and** `rx.State` in your state class -- each subclass gets its own independent set of `lf_grid_*` vars, so multiple grids on the same page do not interfere:
```python
class MyState(LazyFrameGridMixin, rx.State):
# Your own state vars
file_available: bool = False
def load_data(self):
lf, descriptions = scan_file(Path("data.parquet"))
yield from self.set_lazyframe(lf, descriptions, chunk_size=500)
```
**State variables** (all prefixed `lf_grid_` to avoid collisions):
| Variable | Type | Description |
|----------|------|-------------|
| `lf_grid_rows` | `list[dict]` | Currently loaded rows |
| `lf_grid_columns` | `list[dict]` | Column definitions |
| `lf_grid_row_count` | `int` | Total rows matching current filter |
| `lf_grid_loading` | `bool` | Loading indicator |
| `lf_grid_loaded` | `bool` | Whether data has been loaded |
| `lf_grid_stats` | `str` | Last refresh timing info |
| `lf_grid_selected_info` | `str` | Detail string for clicked row |
**`set_lazyframe` parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `lf` | `pl.LazyFrame` | *required* | The LazyFrame to browse |
| `descriptions` | `dict[str, str] \| None` | `None` | Column descriptions for tooltips |
| `chunk_size` | `int` | `200` | Rows per scroll chunk |
| `value_options_max_unique` | `int` | `500` | Max distinct values for dropdown filter (queried from full dataset) |
### `lazyframe_grid` -- Pre-Wired UI Component
Returns a `data_grid(...)` with all server-side handlers already connected:
```python
from reflex_mui_datagrid import lazyframe_grid, lazyframe_grid_stats_bar, lazyframe_grid_detail_box
def my_page() -> rx.Component:
return rx.fragment(
lazyframe_grid_stats_bar(MyState), # row count + timing bar
lazyframe_grid(MyState, height="600px"),
lazyframe_grid_detail_box(MyState), # clicked row details
)
```
**`lazyframe_grid` parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `state_cls` | `type` | *required* | Your state class inheriting `LazyFrameGridMixin` |
| `height` | `str` | `"600px"` | CSS height |
| `width` | `str` | `"100%"` | CSS width |
| `density` | `str` | `"compact"` | Grid density |
| `column_header_height` | `int` | `70` | Header height in px |
| `scroll_end_threshold` | `int` | `260` | Pixels from bottom to trigger next chunk |
| `show_toolbar` | `bool` | `True` | Show MUI toolbar |
| `show_description_in_header` | `bool` | `True` | Show column descriptions as subtitles |
| `debug_log` | `bool` | `True` | Browser console debug logging |
| `on_row_click` | `EventHandler \| None` | `None` | Override default row-click handler |
### Multiple Independent Grids
Because `LazyFrameGridMixin` is a Reflex mixin (`mixin=True`), you can have multiple independent grids on the same page -- each subclass gets its own `lf_grid_*` state vars:
```python
class ParquetGrid(LazyFrameGridMixin, rx.State):
def load(self):
yield from self.set_lazyframe(pl.scan_parquet("data.parquet"))
class CsvGrid(LazyFrameGridMixin, rx.State):
def load(self):
yield from self.set_lazyframe(pl.scan_csv("data.csv"))
# ParquetGrid.lf_grid_rows and CsvGrid.lf_grid_rows are independent
```
### How It Works
1. `set_lazyframe` stores the LazyFrame in a module-level cache (never serialised into Reflex state), computes the schema, total row count, and low-cardinality filter options from a bounded sample.
2. Only the first chunk of rows is collected and sent to the frontend.
3. As the user scrolls near the bottom, `handle_lf_grid_scroll_end` collects the next chunk and appends it.
4. Filter and sort changes reset to page 0 and re-query the LazyFrame with Polars expressions -- no full-table collect.
## Running the Example
The project uses [uv workspaces](https://docs.astral.sh/uv/concepts/projects/workspaces/). The example app is a workspace member with a `demo` entrypoint:
```bash
uv sync
uv run demo
```
The demo has three tabs:
| Tab | Description |
|-----|-------------|
| **Employee Data** | 20-row inline polars LazyFrame with sorting, dropdown filters, checkbox selection |
| **Genomic Variants (VCF)** | 793 variants loaded via `polars_bio.scan_vcf()`, column descriptions from VCF headers |
| **Full Genome (Server-Side)** | ~4.5M variants with server-side scroll-loading, filtering, and sorting via `LazyFrameGridMixin` |

## API Reference
See [docs/api.md](docs/api.md) for the full API reference.
## License
MIT
| text/markdown | antonkulaga | antonkulaga <antonkulaga@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"polars>=1.37.1",
"reflex>=0.8.27",
"typer>=0.23.1",
"polars-bio>=0.23.0; python_full_version < \"3.15\" and extra == \"bio\""
] | [] | [] | [] | [] | uv/0.9.17 {"installer":{"name":"uv","version":"0.9.17","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T16:01:25.518171 | reflex_mui_datagrid-0.1.10.tar.gz | 41,518 | 48/42/eebeed119801be0cd3c035f1577f7e22b385b166eb41c01303ca74ff19c8/reflex_mui_datagrid-0.1.10.tar.gz | source | sdist | null | false | 3985f50cc901897cf9d8459dd6a0f76c | 41dfae067286d3f7c57518f0e1c2f607ba35a25d0e47940b7c9358493aefd439 | 4842eebeed119801be0cd3c035f1577f7e22b385b166eb41c01303ca74ff19c8 | null | [] | 210 |
2.4 | ovsdbapp | 2.16.0 | A library for creating OVSDB applications | ========
ovsdbapp
========
.. image:: https://governance.openstack.org/tc/badges/ovsdbapp.svg
.. Change things from this point on
A library for creating OVSDB applications
The ovsdbapp library is useful for creating applications that communicate
via Open vSwitch's OVSDB protocol (https://tools.ietf.org/html/rfc7047). It
wraps the Python 'ovs' library and adds an event loop and friendly transactions.
* Free software: Apache license
* Source: https://opendev.org/openstack/ovsdbapp/
* Bugs: https://bugs.launchpad.net/ovsdbapp
Features:
* A thread-based event loop for using ovs.db.Idl
* Transaction support
* Native OVSDB communication
| text/x-rst | null | OpenStack <openstack-discuss@lists.openstack.org> | null | null | Apache-2.0 | null | [
"Environment :: OpenStack",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fixtures>=3.0.0",
"netaddr>=0.10.0",
"ovs>=2.10.0",
"pbr!=2.1.0,>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://docs.openstack.org/ovsdbapp/latest/",
"Repository, https://opendev.org/openstack/ovsdbapp"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T16:01:19.580996 | ovsdbapp-2.16.0.tar.gz | 133,205 | 60/22/f46b8211ca1f505e456f52196fc6cc45b596b51ef4873cb25113a97d438f/ovsdbapp-2.16.0.tar.gz | source | sdist | null | false | e157b45e5198e3ee04b2867757a969c0 | 76729018ca8f9eb9f213cf50d43e8c6cd2c1e7d726f1496985a771c2ea251590 | 6022f46b8211ca1f505e456f52196fc6cc45b596b51ef4873cb25113a97d438f | null | [
"LICENSE"
] | 587 |
2.4 | awslambdaric | 4.0.0 | AWS Lambda Runtime Interface Client for Python | ## AWS Lambda Python Runtime Interface Client
We have open-sourced a set of software packages, Runtime Interface Clients (RIC), that implement the Lambda
[Runtime API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html), allowing you to seamlessly extend your preferred
base images to be Lambda compatible.
The Lambda Runtime Interface Client is a lightweight interface that allows your runtime to receive requests from and send requests to the Lambda service.
The Lambda Python Runtime Interface Client is vended through [pip](https://pypi.org/project/awslambdaric).
You can include this package in your preferred base image to make that base image Lambda compatible.
## Requirements
The Python Runtime Interface Client package currently supports Python versions:
- 3.9.x up to and including 3.13.x
## Usage
### Creating a Docker Image for Lambda with the Runtime Interface Client
The first step is to choose the base image to be used. The supported Linux OS distributions are:
- Amazon Linux 2
- Alpine
- Debian
- Ubuntu
Then, the Runtime Interface Client needs to be installed. We provide both wheel and source distributions.
If the OS/pip version used does not support [manylinux2014](https://www.python.org/dev/peps/pep-0599/) wheels, you will also need to install the required build dependencies.
Also, your Lambda function code needs to be copied into the image.
```dockerfile
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev
# Copy function code
RUN mkdir -p ${FUNCTION_DIR}
COPY app/* ${FUNCTION_DIR}
# Install the function's dependencies
RUN pip install \
--target ${FUNCTION_DIR} \
awslambdaric
```
The next step would be to set the `ENTRYPOINT` property of the Docker image to invoke the Runtime Interface Client and then set the `CMD` argument to specify the desired handler.
Example Dockerfile (to keep the image light we use a multi-stage build):
```dockerfile
# Define custom function directory
ARG FUNCTION_DIR="/function"
FROM public.ecr.aws/docker/library/python:buster as build-image
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev
# Copy function code
RUN mkdir -p ${FUNCTION_DIR}
COPY app/* ${FUNCTION_DIR}
# Install the function's dependencies
RUN pip install \
--target ${FUNCTION_DIR} \
awslambdaric
FROM public.ecr.aws/docker/library/python:buster
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler" ]
```
Example Python handler `app.py`:
```python
def handler(event, context):
return "Hello World!"
```
### Local Testing
To make it easy to locally test Lambda functions packaged as container images we open-sourced a lightweight web-server, Lambda Runtime Interface Emulator (RIE), which allows your function packaged as a container image to accept HTTP requests. You can install the [AWS Lambda Runtime Interface Emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator) on your local machine to test your function. Then when you run the image function, you set the entrypoint to be the emulator.
*To install the emulator and test your Lambda function*
1) From your project directory, run the following command to download the RIE from GitHub and install it on your local machine.
```shell script
mkdir -p ~/.aws-lambda-rie && \
curl -Lo ~/.aws-lambda-rie/aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && \
chmod +x ~/.aws-lambda-rie/aws-lambda-rie
```
2) Run your Lambda image function using the docker run command.
```shell script
docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
--entrypoint /aws-lambda/aws-lambda-rie \
myfunction:latest \
/usr/local/bin/python -m awslambdaric app.handler
```
This runs the image as a container and starts up an endpoint locally at `http://localhost:9000/2015-03-31/functions/function/invocations`.
3) Post an event to the following endpoint using a curl command:
```shell script
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```
This command invokes the function running in the container image and returns a response.
*Alternately, you can also include RIE as a part of your base image. See the AWS documentation on how to [Build RIE into your base image](https://docs.aws.amazon.com/lambda/latest/dg/images-test.html#images-test-alternative).*
## Development
### Building the package
Clone this repository and run:
```shell script
make init
make build
```
### Running tests
Make sure the project is built:
```shell script
make init build
```
Then,
* to run unit tests: `make test`
* to run integration tests: `make test-integ`
* to run smoke tests: `make test-smoke`
### Troubleshooting
While running integration tests, you might encounter the Docker Hub rate limit error with the following body:
```
You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits
```
To fix the above issue, consider authenticating to a Docker Hub account by setting the Docker Hub credentials as the following CodeBuild environment variables.
```shell script
DOCKERHUB_USERNAME=<dockerhub username>
DOCKERHUB_PASSWORD=<dockerhub password>
```
The recommended way is to set the Docker Hub credentials in the CodeBuild job by retrieving them from AWS Secrets Manager.
## Security
If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
## License
This project is licensed under the Apache-2.0 License.
| text/markdown | Amazon Web Services | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://github.com/aws/aws-lambda-python-runtime-interface-client | null | >=3.9 | [] | [] | [] | [
"simplejson>=3.20.1",
"snapshot-restore-py>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.17 | 2026-02-20T16:01:14.973409 | awslambdaric-4.0.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl | 271,431 | fb/d0/cebd4bb2c6097d9dc42d3054e82136337f5b7551e7cc1819363cb48b352e/awslambdaric-4.0.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl | pp311 | bdist_wheel | null | false | e93ac418514d62fd78796f3e6161b2bb | 3a5bd1b99dd982c9bcd12b9f1b6b2a6d2265de92e6cd24c61d752c7ddf64ba13 | fbd0cebd4bb2c6097d9dc42d3054e82136337f5b7551e7cc1819363cb48b352e | null | [
"LICENSE",
"NOTICE"
] | 15,796 |
2.4 | nci-cidc-api-modules | 1.2.75 | SQLAlchemy data models and configuration tools used in the NCI CIDC API | # NCI CIDC API <!-- omit in TOC -->
The next generation of the CIDC API, reworked to use Google Cloud-managed services. This API is built with the Flask REST API framework backed by Google Cloud SQL, running on Google App Engine.
## Development <!-- omit in TOC -->
- [Install Python dependencies](#install-python-dependencies)
- [Database Management](#database-management)
- [Setting up a local development database](#setting-up-a-local-development-database)
- [Connecting to a Cloud SQL database instance](#connecting-to-a-cloud-sql-database-instance)
- [Running database migrations](#running-database-migrations)
- [Serving Locally](#serving-locally)
- [Testing](#testing)
- [Code Formatting](#code-formatting)
- [Deployment](#deployment)
- [CI/CD](#cicd)
- [Deploying by hand](#deploying-by-hand)
- [Connecting to the API](#connecting-to-the-api)
- [Provisioning the system from scratch](#provisioning-the-system-from-scratch)
- [Docker Compose](#setting-up-docker-compose)
## Install Python dependencies
Use Python version 3.13
```bash
# make a virtual environment in the current directory called "venv"
python3 -m venv venv
source venv/bin/activate
# optionally add an alias to your shell rc file
# alias activate='source venv/bin/activate'
```
Install both the production and development dependencies.
```bash
pip install -r requirements.dev.txt
```
Install and configure pre-commit hooks for code formatting and commit message standardization.
```bash
pre-commit install
```
## Database Management
### Setting up a local development database
In production, the CIDC API connects to a PostgreSQL instance hosted by Google Cloud SQL, but for local development, you should generally use a local PostgreSQL instance.
To do so, first install and start PostgreSQL:
```bash
brew install postgresql@16
brew services start postgresql@16 # launches the postgres service whenever your computer launches
```
> The cidc-devops repo creates a Cloud SQL instance using postgresql@9.6 which is disabled through homebrew. The earliest non-deprecated version through homebrew is postgresql@11.
---
### Troubleshooting Homebrew PostgreSQL Installation
1. If you already have a different version of postgresql installed via homebrew, see this gist for upgrading help https://gist.github.com/olivierlacan/e1bf5c34bc9f82e06bc0. You could also try `brew postgresql-upgrade-database`.
2. If you see issues regarding "postmaster.pid already exists", see this post https://stackoverflow.com/questions/36436120/fatal-error-lock-file-postmaster-pid-already-exists.
3. You may see that `brew services start postgresql@11` succeeds, but when you list the services with `brew services`, you see an error for postgresql.
```bash
$ brew services
Name Status User File
postgresql@11 error crouchcd ~/Library/LaunchAgents/homebrew.mxcl.postgresql@11.plist
```
In that case, use the plist file referenced in the command output to locate where the postgresql logs are written. The postgresql log file will help you detect the issue. If you see something regarding incompatible versions of postgresql, see issue #1.
---
Homebrew will install the psql client under `/opt/homebrew/Cellar/postgresql@16/16.8/bin/`; you may want to create a symlink from there to somewhere on your `PATH`.
By default, the postgres service listens on port 5432. Next, create the `cidcdev` user, your local `cidc` development database, and a local `cidctest` database that the unit/integration tests will use:
```bash
psql postgres -c "create user cidcdev with password '1234'"
# Database to use for local development
psql postgres -c "create database cidc"
psql cidc -c "grant all privileges on database cidc to cidcdev"
psql cidc -c "grant all privileges on schema public to cidcdev"
psql cidc -c "create extension citext"
psql cidc -c "create extension pgcrypto"
# Database to use for automated testing
psql postgres -c "create database cidctest"
psql cidctest -c "grant all privileges on database cidctest to cidcdev"
psql cidctest -c "grant all privileges on schema public to cidcdev"
psql cidctest -c "create extension citext"
psql cidctest -c "create extension pgcrypto"
```
Now, you should be able to connect to your development database with the URI `postgresql://cidcdev:1234@localhost:5432/cidc`. Or, in the postgres REPL:
```bash
psql cidc
```
---
## Install the gcloud CLI
https://cloud.google.com/sdk/docs/install. You will use this CLI to authenticate with GCP for development purposes.
### Setting Environment Variables
The app will use the environment variables defined in the [.env](./.env) file to connect to Auth0 and GCP. You will need to update those values with the dev or staging Auth0 and GCP instance values provided by Essex or CBIIT/Cloud2. The following `flask db upgrade` command relies on these environment variables.
Each time the backend starts up (including cloud-functions), an object that pulls secrets from Secret Manager is used to initialize the Google client libraries. You need to authenticate with gcloud beforehand so that the API can fetch the secrets. First, ensure that your account has the "Secret Manager Secrets Accessor" role. Then, generate Application Default Credentials by running the following:
```bash
gcloud auth application-default login
```
> To check that your gcloud config is set to the right config run `gcloud config list`.
The application reads environment variables that map to the appropriate secret ids for the selected project. As of this writing, the env variables and secret ids are listed below (replace `[env tag]` with the appropriate environment tag):
```yaml
env_variables:
APP_ENGINE_CREDENTIALS_ID: "cidc_app_engine_credentials_[env tag]"
AUTH0_CLIENT_SECRET_ID: "cidc_auth0_client_secret_[env tag]"
CLOUD_SQL_DB_PASS_ID: "cidc_cloud_sql_db_pass_[env tag]"
CSMS_BASE_URL_ID: "cidc_csms_base_url_[env tag]"
CSMS_CLIENT_ID_ID: "cidc_csms_client_id_[env tag]"
CSMS_CLIENT_SECRET_ID: "cidc_csms_client_secret_[env tag]"
CSMS_TOKEN_URL_ID: "cidc_csms_token_url_[env tag]"
INTERNAL_USER_EMAIL_ID: "cidc_internal_user_email_[env tag]"
PRISM_ENCRYPT_KEY_ID: "cidc_prism_encrypt_key_[env tag]"
```
To ensure you are using the correct secret ids, you may list the secrets for the active project using the following command:
```
gcloud secrets list
```
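Under this convention, resolving a setting at startup amounts to reading the `*_ID` env variable and querying Secret Manager with it, unless the value was passed directly (as under testing conditions). A minimal sketch; `fetch_secret` and `resolve_setting` are illustrative stand-ins, not the actual functions in `cidc_api/config/settings.py`:

```python
import os

def fetch_secret(secret_id: str) -> str:
    # Stand-in for the real Secret Manager client call; illustrative only.
    raise NotImplementedError

def resolve_setting(name: str) -> str:
    """Resolve a setting directly (tests) or via Secret Manager.

    Under test, the value is provided in NAME itself; otherwise
    NAME_ID holds the Secret Manager secret id to look up.
    """
    direct = os.environ.get(name)
    if direct is not None:
        return direct
    secret_id = os.environ[name + "_ID"]
    return fetch_secret(secret_id)
```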
#### Adding new secrets to the application
To add new secrets to the application, follow the instructions below:
1. Choose a name for the secret. It should be snake case, and end in the relevant environment tag, as seen above.
1. Reach out to the DevOps team to add the secret to Secret Manager. Include the secret value for each environment.
1. Choose an environment variable to map to the secret id. It should end in `_ID`. This is because the secret will be passed in directly under testing conditions, using the same environment variable minus the `_ID` postfix to avoid conflicts.
1. Add the env variable mapping to the GAE environment configurations under the `secrets` section. These are the `app.[env].yaml` files.
1. Load the secret into the SETTINGS dictionary via the `cidc_api/config/settings.py` module. See the `# CSMS Integration Values` section for how to do this.
---
Next, you'll need to set up the appropriate tables, indexes, etc. in your local database. To do so, run:
```bash
FLASK_APP=cidc_api.app:app flask db upgrade
```
### Connecting to a Cloud SQL database instance
Make sure you are authenticated to gcloud:
```bash
gcloud auth login
gcloud auth application-default login
```
In your `.env` file, comment out `POSTGRES_URI` and uncomment `CLOUD_SQL_INSTANCE_NAME`, `CLOUD_SQL_DB_USER`, and `CLOUD_SQL_DB_NAME`. Replace the `CLOUD_SQL_DB_USER` value with your NIH email.
### Creating/Running database migrations
This project uses [`Flask Migrate`](https://flask-migrate.readthedocs.io/en/latest/) for managing database migrations. To create a new migration and upgrade the database specified in your `.env` config:
```bash
export FLASK_APP=cidc_api/app.py
# First, make your changes to the model(s)
# Then, let flask automatically generate the db change. Double check the migration script!
flask db migrate -m "<a message describing the changes in this migration>"
# Apply changes to the database
flask db upgrade
```
To revert an applied migration, run:
```bash
flask db downgrade
```
If you're updating `models.py`, you should create a migration and commit the resulting migration script.
## Serving Locally
Once you have a development database set up and running, run the API server:
```bash
ENV=dev gunicorn cidc_api.app:app
```
## Testing
This project uses [`pytest`](https://docs.pytest.org/en/latest/) for testing.
To run the tests, simply run:
```bash
pytest
```
## Code Formatting
This project uses [`black`](https://black.readthedocs.io/en/stable/) for code styling.
We recommend setting up autoformatting-on-save in your IDE of choice so that you don't have to worry about running `black` on your code.
## Deployment
### CI/CD
This project uses [GitHub Actions](https://docs.github.com/en/free-pro-team@latest/actions) for continuous integration and deployment. To deploy an update to this application, follow these steps:
1. Create a new branch locally, commit updates to it, then push that branch to this repository.
2. Make a pull request from your branch into `master`. This will trigger GitHub Actions to run various tests and report back success or failure. You can't merge your PR until it passes the build, so if the build fails, you'll probably need to fix your code.
3. Once the build passes (and pending approval from collaborators reviewing the PR), merge your changes into `master`. This will trigger GitHub Actions to re-run tests on the code then deploy changes to the staging project.
4. Try out your deployed changes in the staging environment once the build completes.
5. If you're satisfied that staging should be deployed into production, make a PR from `master` into `production`.
6. Once the PR build passes, merge `master` into `production`. This will trigger GitHub Actions to deploy the changes on staging to the production project.
For more information or to update the CI workflow, check out the configuration in `.github/workflows/ci.yml`.
### Deploying by hand
Should you ever need to deploy the application to Google App Engine by hand, you can do so by running the following:
```bash
gcloud app deploy <app.staging.yaml or app.prod.yaml> --project <gcloud project id>
```
That being said, avoid doing this! Deploying this way circumvents the safety checks built into the CI/CD pipeline and can lead to inconsistencies between the code running on GAE and the code present in this repository. Luckily, though, GAE's built-in versioning system makes it hard to do anything catastrophic :-)
## Connecting to the API
Currently, the staging API is hosted at https://api.cidc-stage.nci.nih.gov and the production instance is hosted at https://api.cidc.nci.nih.gov.
To connect to the staging API with `curl` or a REST API client like Insomnia, get an id token from cidc-stage.nci.nih.gov, and include the header `Authorization: Bearer YOUR_ID_TOKEN` in requests you make to the staging API. If your token expires, generate a new one following this same procedure.
To connect to the production API locally, follow the same procedure, but instead get your token from cidc.nci.nih.gov.
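Building such an authenticated request needs nothing beyond the Python standard library. A sketch (the `/info` path and the placeholder token are illustrative):

```python
import urllib.request

API_BASE = "https://api.cidc-stage.nci.nih.gov"
ID_TOKEN = "YOUR_ID_TOKEN"  # paste the id token from cidc-stage.nci.nih.gov

def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request against the staging API."""
    return urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {ID_TOKEN}"},
    )

# With a real token in place, the request can be sent with:
# response = urllib.request.urlopen(build_request("/info"))
# print(response.read())
```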
## Provisioning the system from scratch
For an overview of how to set up the CIDC API service from scratch, see the step-by-step guide in `PROVISION.md`.
## Setting up Docker Compose
If you would like to run this project as a Docker container, we have dockerized cidc-api-gae and cidc-ui so that you don't have to install all the requirements above. Included in the docker-compose file are postgres:14 with data and a test user login, bigquery-emulator, fake-gcs-server with buckets and data to match postgres, and gcs-oauth2-emulator to generate fake presigned URLs.
**_NOTE:_** You must have docker installed and have this repository and cidc-ui in the same directory (~/git/cidc/cidc-ui and ~/git/cidc/cidc-api-gae), or you can download each and build the image with the command `docker build .`
**_NOTE:_** We are currently having issues with the cidc-ui Docker container; you'll have to start it manually using the instructions in that repo.
**_NOTE:_** You can't use Docker while simultaneously running your NIH VPN. This is due to a quirk with self-hosting a Google secrets bucket. More work is required to make the docker containers work while the VPN is on.
This repo has hot code reloading. However, you will need to build the image again if there is an update to python libraries. Make sure you don't use a cached image when rebuilding.
Make sure you add this line to your /etc/hosts file: ```127.0.0.1 host.docker.internal```
To run everything simply run the following commands:
```bash
vim .env # uncomment the docker section in the .env file; comment out any overlapping variable definitions (POSTGRES)
cp ~/.config/gcloud/application_default_credentials.json .
cd docker
docker compose up
```
**_NOTE:_** You still need to install and sign in to the gcloud CLI. The application_default_credentials.json should be in the cidc-api-gae directory next to the Dockerfile. We have mocked most of the connection points to GCP, but at startup the app still checks for a valid user account. This is very similar to AWS's LocalStack, which requires a realistic token at startup even though it doesn't connect to AWS.
**_TODO:_** The application_default_credentials.json could likely be faked and pointed at the gcs-oauth2-emulator for startup. In that case the gcloud CLI wouldn't be needed at all, and a faked application_default_credentials.json could be checked in under the docker folder.
## JIRA Integration
To set-up the git hook for JIRA integration, run:
```bash
ln -s ../../.githooks/commit-msg .git/hooks/commit-msg
chmod +x .git/hooks/commit-msg
rm .git/hooks/commit-msg.sample
```
This symbolic link is necessary to correctly link files in `.githooks` to `.git/hooks`. Note that setting the `core.hooksPath` configuration variable would lead to [pre-commit failing](https://github.com/pre-commit/pre-commit/issues/1198). The `commit-msg` hook [runs after](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) the `pre-commit` hook, hence the two are de-coupled in this workflow.
To associate a commit with an issue, you will need to reference the JIRA issue key (e.g., 'CIDC-1111') in the corresponding commit message.
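The check such a `commit-msg` hook performs can be approximated in a few lines of Python (a sketch of the idea, not the actual hook shipped in `.githooks`):

```python
import re
import sys

# Matches a JIRA issue key such as CIDC-1111 anywhere in the message.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def has_issue_key(message: str) -> bool:
    return JIRA_KEY.search(message) is not None

if __name__ == "__main__":
    # git passes the path to the commit message file as the first argument
    with open(sys.argv[1]) as f:
        if not has_issue_key(f.read()):
            print("commit message must reference a JIRA issue key, e.g. CIDC-1111")
            sys.exit(1)  # non-zero exit aborts the commit
```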
## API FAQ
#### How is the API repo structured?
At the top-level, there are a handful of files related to how and where the API code runs:
- [app.prod.yaml](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/app.prod.yaml) and [app.staging.yaml](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/app.staging.yaml) are the App Engine config files for the prod and staging API instances - these specify instance classes, autoscaling settings, env variables, and what command App Engine should run to start the app.
- [gunicorn.conf.py](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/gunicorn.conf.py) contains config for the [gunicorn](https://gunicorn.org/) server that runs the API’s flask app in production.
[The migrations/versions directory](https://github.com/NCI-CIDC/cidc-api-gae/tree/master/migrations/versions) contains SQLAlchemy database migrations generated using flask-migrate.
The core API code lives in a python module in the [cidc_api](https://github.com/NCI-CIDC/cidc-api-gae/tree/master/cidc_api) subdirectory. In this subdirectory, the [app.py](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/cidc_api/app.py) file contains the code that instantiates/exports the API’s flask app. Stepping through that file top to bottom is probably the best way to get an overall picture of the structure of the API code:
- [get_logger](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/config/logging.py#L11) instantiates a logger instance based on whether the app is running in a flask development server or gunicorn production server. We need this helper function (or something like it), because logs must be routed in a particular manner for them to show up in stderr when the app is running as a gunicorn server. Any python file in the cidc_api module that includes logging should call this get_logger helper at the top of the file.
- Next, the Flask app instance is created and configured using settings loaded from [settings.py](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/cidc_api/config/settings.py). This file contains a handful of constants used throughout the app code. Additionally, it contains code for [setting up the temporary directories](https://github.com/NCI-CIDC/cidc-api-gae/blob/001e12ac276a9632260fbddd54419cbcb8a5e2b5/cidc_api/config/settings.py#L37) where empty manifest/assay/analysis templates will live. [This line](https://github.com/NCI-CIDC/cidc-api-gae/blob/001e12ac276a9632260fbddd54419cbcb8a5e2b5/cidc_api/config/settings.py#L118) at the bottom of the file builds a settings dictionary mapping variable names to values for all constants (i.e., uppercase variables) defined above it.
- Next, [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is enabled. CORS allows the API to respond to requests originating from domains other than the API’s domain. If we didn’t do this, then an API instance running at “api.cidc.nci.nih.gov” would be prohibited from responding to requests from a UI instance running at “cidc.nci.nih.gov”.
- Next, [init_db](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/config/db.py#L17) connects the flask-sqlalchemy package to a given API app instance. Moreover, it sets up our database migration utility, [flask-migrate](https://flask-migrate.readthedocs.io/en/latest/), which provides CLI shortcuts for generating migrations based on changes to the API’s sqlalchemy models. Currently, db migrations are [run every time init_db is called](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/config/db.py#L23), but this is arguably tech debt, since it slows down app startup for no good reason - it might be better to try running db migrations as part of CI. (All other code in this file is related to building database connections based on the environment in which the app is running).
- Next, [register_resources](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/resources/__init__.py#L12) “wires up” all of the REST resources in the API, which are organized as independent flask [blueprints](https://flask.palletsprojects.com/en/2.0.x/blueprints/). Each resource blueprint is a collection of flask endpoints. Resources are split up into separate blueprints solely for code organization purposes.
- Next, [validate_api_auth](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/shared/auth.py#L17) enforces that all endpoints configured in the API are explicitly flagged as public or private using the [public](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/shared/auth.py#L75) and [requires_auth](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/shared/auth.py#L34) decorators, respectively. This is intended to help prevent a developer from accidentally making a private endpoint public by forgetting to include the requires_auth decorator. If this validation check fails, the app won’t start up.
- Next, [register_dashboards](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/dashboards/__init__.py#L6) wires up our plot.ly dash dashboards.
- Next, [handle_errors](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/app.py#L37) adds generic code for formatting any error thrown while a request is being handled as JSON.
- Finally, if the app submodule is being run directly via “python -m cidc_api.app”, [this code](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/app.py#L65) will start a flask (non-gunicorn) server.
For diving deeper into the API code’s structure, the next place to look is the [resources](https://github.com/NCI-CIDC/cidc-api-gae/tree/master/cidc_api/resources) directory. Endpoint implementations string together code for authenticating users, loading and validating JSON input from requests, looking up or modifying database records, and dumping request output to JSON. For some endpoints, a lot of this work is handled using generic helper decorators - e.g., the [update_user](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/resources/users.py#L111) endpoint uses nearly every helper available in the [rest_utils.py](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/cidc_api/shared/rest_utils.py) file. For others, like the [upload_analysis](https://github.com/NCI-CIDC/cidc-api-gae/blob/1de8b59e87eb71a3f8f8e997225e81d6b04b73fd/cidc_api/resources/upload_jobs.py#L509) endpoint, the endpoint extracts request data and builds response data in an endpoint-specific way. Most endpoints will involve some interaction with sqlalchemy models, either directly in the function body or via helper decorators.
#### How do I add a new resource to the API?
1. Create a new file in the [resources](https://github.com/NCI-CIDC/cidc-api-gae/tree/master/cidc_api/resources) directory named “<resource>.py”.
2. Create a flask blueprint for the resource, named “<resource>\_bp” ([example](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/info.py#L13)).
3. Add the blueprint to the [register_resources](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/__init__.py#L12) function. The resource’s url_prefix should generally be “/<resource>”.
#### How do I add a new endpoint to the API?
If you want to add an endpoint to an existing REST resource, open the file in the resources directory related to that resource. You can build an endpoint using some (almost definitely not all) of these steps:
- **Wire up the endpoint**. Find the [blueprint](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/users.py#L25) for the resource, usually named like “<resource>\_bp”. You add an endpoint to the blueprint by decorating a python function using the [route](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/users.py#L33) method on the blueprint instance.
- **Configure endpoint auth**. Either flag the endpoint as public using the [public](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/info.py#L17) decorator, or configure authentication and authorization using the [requires_auth](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/users.py#L68) decorator. The requires_auth decorator takes a unique string identifier for this endpoint as its required first argument (potentially used for endpoint-specific role-based access control logic [here](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L252) - the string ID is passed to the “resource” argument), and an optional list of allowed roles as its second argument (if no second argument is provided, users of any role will be able to access the endpoint).
- **Configure custom URL query params.** The API uses the [webargs](https://webargs.readthedocs.io/en/latest/) library for validating and extracting URL query param data. For example, the “GET /permissions/” endpoint [configures a query param "user_id"](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/permissions.py#L33) for filtering the resulting permissions list by user id.
- **Look up a database record associated with the request.** Use the [with_lookup](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/rest_utils.py#L94) decorator to load a database record based on a URL path parameter. The with_lookup decorator takes three arguments: the first is the sqlalchemy model class, the second is the name of the URL path parameter that will contain the ID of the database record to look up, and the third is whether or not to check that the client making the request has seen the most recent version of the object (an “etag” is a hash of a database record’s contents - set check_etag=True to ensure that the client’s provided etag is up-to-date in order to, e.g., prohibit updates based on stale data). See, for example, usage for [looking up a particular user by ID](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/users.py#L98) - note that “user” is the name of the URL path parameter in the argument to user_bp.route.
- **Deserialize the request body**. POST and PATCH endpoints generally expect some JSON data in the request body. Such endpoints should validate that this data has the expected structure and, if appropriate, load it into a sqlalchemy model instance. The [unmarshal_request](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/rest_utils.py#L27) decorator makes it easy to do this. The unmarshal_request decorator takes three arguments: the first is a [marshmallow](https://marshmallow.readthedocs.io/en/stable/) schema defining the expected request body structure, the second is the argument name through which the deserialized result should be passed to the endpoint function (e.g., [“permission”](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/permissions.py#L66) and [“permission”](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/permissions.py#L68)), and the third is whether to try loading the request body into a sqlalchemy model or to just leave it as a python dictionary. For schemas autogenerated from sqlalchemy models, see the [schemas.py](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/cidc_api/models/schemas.py) file - we use the [marshmallow-sqlalchemy](https://marshmallow-sqlalchemy.readthedocs.io/en/latest/) library for this.
- **Serialize the response body**. If an endpoint returns a database record (or a list of records), use the [marshal_response](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/rest_utils.py#L67) decorator to convert a sqlalchemy model instance (or list of instances) into JSON. See [this example](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/users.py#L99).
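The helper decorators above compose by wrapping the endpoint function. Here's a stdlib-only toy of requires_auth and with_lookup - the real versions live in shared/auth.py and shared/rest_utils.py; these bodies are illustrative only, with a plain dict standing in for a sqlalchemy model:

```python
from functools import wraps

# Dict-backed stand-in for a sqlalchemy model's records.
USERS = {7: {"id": 7, "role": "cidc-admin"}}


def requires_auth(resource: str, allowed_roles=None):
    """Toy auth decorator: rejects callers whose role isn't allowed."""
    def decorator(endpoint):
        @wraps(endpoint)
        def wrapper(*args, current_role, **kwargs):
            if allowed_roles and current_role not in allowed_roles:
                return {"error": "unauthorized"}, 401
            return endpoint(*args, **kwargs)
        return wrapper
    return decorator


def with_lookup(model: dict, url_param: str):
    """Toy lookup decorator: swaps a URL-path ID for its record."""
    def decorator(endpoint):
        @wraps(endpoint)
        def wrapper(*args, **kwargs):
            record_id = kwargs.pop(url_param)
            kwargs[url_param] = model[record_id]  # real helper 404s if missing
            return endpoint(*args, **kwargs)
        return wrapper
    return decorator


@requires_auth("users", allowed_roles=["cidc-admin"])
@with_lookup(USERS, "user")
def get_user(user):
    # In a real endpoint, marshal_response would serialize this record.
    return user, 200


print(get_user(user=7, current_role="cidc-admin"))        # record, 200
print(get_user(user=7, current_role="cimac-biofx-user"))  # denied, 401
```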
**Note**: when you add a new endpoint to the API, you’ll also need to add that endpoint to the [test_endpoint_urls](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/tests/test_api.py#L413) test. This test ensures that CIDC developers are aware of every endpoint that the API exposes (since under certain configurations Flask might expose unwanted default endpoints).
#### How do API authentication and authorization work?
First off - what’s the difference between authentication and authorization? Authentication is concerned with verifying a user’s identity. Authorization is concerned with restricting the actions a user is allowed to perform within an application based on their identity. Since we need to know a user’s identity in order to execute logic based on their identity, user authentication is required for authorization.
##### Authentication
We use a protocol called OpenID Connect to leverage Auth0/Google for verifying the identity of users accessing data from the API (rather than maintaining user identities and login sessions ourselves). Here’s [a talk ](https://www.youtube.com/watch?v=996OiexHze0)that might help in learning about OAuth 2.0 and OpenID Connect - I highly recommend watching it before making any non-trivial update to authentication-related logic or configuration.
API authentication relies on _identity tokens_ generated by Auth0 to verify that the client making the request is logged in. An identity token is a [JWT](https://jwt.io/) containing information about a user’s account (like their name, their email, their profile image, etc.) and metadata (like an expiry time after which the token should be considered invalid). Here’s the part that makes JWTs trustworthy and useful for authentication: they include a cryptographic signature from a trusted identity provider service (Auth0, in our case). So, an identity token represents a currently authenticated user if:
- It is a well-formatted JWT.
- It has not yet expired.
- Its cryptographic signature is valid.
JWTs are a lot like passports - they convey personal information, they’re issued by a trusted entity, and they expire after a certain time. Moreover, like passports, JWTs **can be stolen** and used to impersonate someone. As such, JWTs should be kept private and treated sort of like short-lived passwords.
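The three validity criteria can be sketched with a stdlib-only toy validator. One deliberate simplification to note: Auth0 signs identity tokens with RS256 (an asymmetric public/private key pair), while this sketch uses symmetric HS256 so it runs without extra dependencies:

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def _b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()


def make_token(payload: dict, secret: bytes) -> str:
    """Mint a toy HS256 JWT (stand-in for what Auth0 would issue)."""
    header_b64 = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = _b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{_b64url_encode(sig)}"


def is_valid_id_token(token: str, secret: bytes) -> bool:
    """Apply the three checks: well-formatted, unexpired, validly signed."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
        payload = json.loads(_b64url_decode(payload_b64))
    except ValueError:  # wrong part count, bad base64, or bad JSON
        return False
    if payload.get("exp", 0) < time.time():
        return False
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, _b64url_decode(sig_b64))


secret = b"demo-secret"
token = make_token({"email": "user@example.org", "exp": time.time() + 3600}, secret)
assert is_valid_id_token(token, secret)
assert not is_valid_id_token(token, b"other-secret")                  # bad signature
assert not is_valid_id_token("not.a.jwt", secret)                     # malformed
assert not is_valid_id_token(make_token({"exp": 0}, secret), secret)  # expired
```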
##### Authorization
The CIDC API takes a _role-based access control_ approach to implementing its authorization policy. Each user is assigned a role (like _cidc-admin_, _cimac-biofx-user_, etc.), and the actions they’re allowed to take in the system are restricted based on that role. For the most part, any two users with the same role will be allowed to take the same actions in the CIDC system.
The one exception to the role-based access control rule is file access authorization, which is configured at the specific user account level for non-admin users via trial/assay permissions.
##### Implementation
Here’s where this happens in the code. [check_auth](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L83) is the workhorse authentication and authorization function (this is what [requires_auth](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L34) calls under the hood). check_auth first authenticates the current requesting user’s identity then performs authorization checks based on that identity:
Here’s what the [authenticate](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L142) function does:
1. [Tries to extract](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L150) the identity token from the request’s HTTP headers. It expects a header with the structure “Authorization: Bearer <id token>”. If the expected “Authorization” header is not present, it looks for an identity token in the request’s JSON body (this is specific to the way our plotly dash integration handles authentication).
2. [Gets a public key from Auth0](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L170) that it will use to verify that the identity token was signed by Auth0.
3. [Decodes the identity token](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L205) and verifies its signature using the public key obtained in the previous step. If the JWT is malformed, expired, or otherwise invalid, this step will respond to the requesting user with HTTP 401 Unauthorized.
4. [Initializes and returns a sqlalchemy model instance for the current user](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L147).
Next, check_auth passes the user info parsed from the identity token to the [authorize](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/shared/auth.py#L252) function. The function implements some access control logic based on whether the requesting user’s account is registered and whether their role gives them permission to access the endpoint they are trying to access. **Note**: this function encapsulates simple, generic RBAC operations (“only users with these roles can perform this HTTP action on this endpoint”) and does not encapsulate more complicated, endpoint-specific role-based access control logic (e.g., [this logic for listing file access permissions](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/cidc_api/resources/permissions.py#L46)). As things currently stand, understanding the RBAC policy for a particular endpoint requires reviewing that endpoint’s implementation in its entirety.
#### How do I propagate SQLAlchemy model updates to the database?
Updates to SQLAlchemy model python classes do not automatically update the corresponding tables in the database. Rather, you need to create a “migration” script to apply any model class updates to the database. We use the [flask-migrate](https://flask-migrate.readthedocs.io/en/latest/) plugin for managing our migration scripts. See this brief [overview](https://github.com/NCI-CIDC/cidc-api-gae/blob/master/README.md#running-database-migrations) of creating, running, and undoing migrations.
**Note:** although flask-migrate and alembic, the tool flask-migrate uses under the hood, can automatically pick up certain sqlalchemy model class changes (e.g., adding/removing models, adding/removing columns, column data type changes), there are other changes that it can’t pick up automatically. Two examples I’ve encountered are adding/removing values from enum types and adding/updating [CHECK constraints](https://docs.sqlalchemy.org/en/14/core/constraints.html#check-constraint). For this reason, always review the auto-generated migration file before applying it, making any required manual edits or additions.
#### How can I check that a database migration works?
First, I run “flask db upgrade” against my local database - this can catch basic errors, like syntax or type issues, even if there’s no data currently stored in the local database.
Next, I’ll try running the migration against the staging database from my local computer (since the staging db generally has representative data in it, this can catch further errors you might miss in a local db test). To do this, you need to [set up a connection to the staging db](https://github.com/nci-cidc/cidc-api-gae#connecting-to-a-cloud-sql-database-instance) and to [update your .env file](https://github.com/NCI-CIDC/cidc-api-gae/blob/75e88280e1103b530f6e7bd7261ca90f933159b2/.env#L23) to tell your local api code to use this connection. **Make sure that no one else is using the staging db for anything critical**, then run the db upgrade. If you encounter new errors, fix them. Once the upgrade succeeds, undo it with “flask db downgrade”, then make a PR to deploy the new migration.
#### What happens when a database migration fails, and what should I do to remediate the situation?
Because database migrations are run when the app starts up, failed database migrations manifest as the API failing to start up. This usually looks like the “failed to load account information” error message appearing 5-10 seconds after trying to load the portal.
Remediating a failing migration requires two steps:
1. Redirect traffic to a previous app engine version that does not include the failing migration code. You can select a known-good version from [this page](https://console.cloud.google.com/appengine/versions) in the GCP console.
2. Debug and fix the migration locally following a process like the one described above.
**Note:** when you want to undo a migration that **did not fail to run**, but has some other issue with it, the solution is different. If you try to simply send traffic to a previous app engine version without the migration you want to undo included in it, you’ll get an error on app startup (something like “alembic.util.CommandError: Can't locate revision identified by '31b8ab83c7d'”). In order to undo this migration, you’ll need to [manually connect to the cloud sql instance](https://github.com/nci-cidc/cidc-api-gae#connecting-to-a-cloud-sql-database-instance), [update your .env file](https://github.com/NCI-CIDC/cidc-api-gae/blob/75e88280e1103b530f6e7bd7261ca90f933159b2/.env#L23) to tell your local api code to use this connection, then run “flask db downgrade”. Once that command succeeds, you’ve rolled back the unwanted migration, and you can safely send traffic to a previous app engine version that doesn’t include the migration.
#### What’s packaged up in cidc-api-modules pypi package?
The cidc-api-modules package includes only the submodules used in the cidc-cloud-functions module. Here’s the [full list](https://github.com/NCI-CIDC/cidc-api-gae/blob/ed18274bd413444157fb3d7af8e0dc3925079e6a/setup.py#L14). Notably, the “cidc_api.app” and “cidc_api.resources” submodules are excluded, since these pertain only to the API. To be perfectly honest, I don’t remember the issue that led to the decision to not simply package up and publish the top-level “cidc_api” module (it’s possible, even if it’s not necessary). Anyhow, this means that bumping the cidc-api-modules version is only necessary when making changes to the included submodules that you want to propagate to the cloud functions repo.
Relatedly, it could be worth looking into combining the cloud functions repo into the cidc-api-gae repo. There’s no great reason for them to be separate. In fact, since they share code related to interacting with the database and with GCP, the decision to separate the two repos likely creates more friction than it alleviates.
| text/markdown | null | null | null | null | MIT license | null | [] | [] | https://github.com/NCI-CIDC/cidc-api-gae | null | >=3.13 | [] | [] | [] | [
"cachetools~=7.0.1",
"certifi~=2026.1.4",
"cloud-sql-python-connector[pg8000]~=1.20.0",
"flask~=3.1.2",
"flask-migrate~=4.1.0",
"flask-sqlalchemy~=3.1.1",
"flask-talisman~=1.1.0",
"gcloud-aio-storage~=9.6.1",
"google-api-python-client~=2.190.0",
"google-auth==2.48.0",
"google-cloud-bigquery~=3.40.0",
"google-cloud-pubsub~=2.35.0",
"google-cloud-secret-manager~=2.26.0",
"google-cloud-storage~=3.9.0",
"jinja2~=3.1.6",
"joserfc~=1.6.2",
"marshmallow~=4.2.2",
"marshmallow-sqlalchemy~=1.4.2",
"numpy~=2.4.2",
"pandas~=3.0.0",
"pyarrow~=23.0.1",
"pydantic~=2.12.5",
"python-dotenv~=1.2.1",
"requests~=2.32.5",
"semver~=3.0.4",
"sqlalchemy~=2.0.45",
"sqlalchemy-mixins~=2.0.5",
"werkzeug~=3.1.5",
"opentelemetry-api~=1.39.1",
"opentelemetry-exporter-otlp-proto-grpc~=1.39.1",
"opentelemetry-sdk~=1.39.1",
"opentelemetry-instrumentation-flask~=0.59b0",
"opentelemetry-instrumentation-requests~=0.59b0",
"opentelemetry-instrumentation-sqlalchemy~=0.59b0",
"opentelemetry-exporter-gcp-trace~=1.11.0",
"opentelemetry-propagator-gcp~=1.11.0",
"nci-cidc-schemas==0.28.14"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T16:00:29.986351 | nci_cidc_api_modules-1.2.75.tar.gz | 194,119 | 0f/b7/d8df8ed4601dc84f9e9d74a1fa8d6401116f8a97a9ecbea0f8a0f452acd0/nci_cidc_api_modules-1.2.75.tar.gz | source | sdist | null | false | 2eeb77a5c4451e12e775fedcd675e7bd | af150fdb3c869ccee09d9e8dd9fcedd6f8cc7eefbabf2d5425fbfe882fa232ef | 0fb7d8df8ed4601dc84f9e9d74a1fa8d6401116f8a97a9ecbea0f8a0f452acd0 | null | [
"LICENSE"
] | 278 |
2.4 | snakemake-executor-plugin-slurm | 2.3.1 | A Snakemake executor plugin for submitting jobs to a SLURM cluster. | # Snakemake executor plugin: slurm
[](https://gitpod.io/#https://github.com/snakemake/snakemake-executor-plugin-slurm)
For documentation, see the [Snakemake plugin catalog](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html).
| text/markdown | Christian Meesters | meesters@uni-mainz.de | null | null | MIT | snakemake, plugin, executor, cluster, slurm | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"numpy<3,>=1.26.4",
"pandas<3.0.0,>=2.2.3",
"pyyaml<7.0.0,>=6.0.0",
"snakemake-executor-plugin-slurm-jobstep<0.5.0,>=0.4.0",
"snakemake-interface-common<2.0.0,>=1.21.0",
"snakemake-interface-executor-plugins<10.0.0,>=9.3.9",
"throttler<2.0.0,>=1.2.2"
] | [] | [] | [] | [
"Documentation, https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html",
"Repository, https://github.com/snakemake/snakemake-executor-plugin-slurm"
] | poetry/2.3.2 CPython/3.11.14 Linux/6.11.0-1018-azure | 2026-02-20T16:00:20.937432 | snakemake_executor_plugin_slurm-2.3.1-py3-none-any.whl | 36,413 | 7d/0e/57e3a655a502e404f52ebe06eb8367ed0ed3ba7b4656646d8b84081a143c/snakemake_executor_plugin_slurm-2.3.1-py3-none-any.whl | py3 | bdist_wheel | null | false | aeebf3c2460a0ae5e2a5b611008b5470 | 58fd3c64cf443a4b0d969a28d9247c445acb0d8527b3b410ab554f2d741b3ecf | 7d0e57e3a655a502e404f52ebe06eb8367ed0ed3ba7b4656646d8b84081a143c | null | [
"LICENSE"
] | 692 |
2.4 | dlogger-drawiks | 0.2.3 | dlogger by drawiks | <div align="center">
<h1>📝 dlogger</h1>
<a href="https://pypi.org/project/dlogger-drawiks/">
<img alt="PyPI version" src="https://img.shields.io/pypi/v/dlogger-drawiks?color=blue">
</a>
<img height="20" alt="Python 3.7+" src="https://img.shields.io/badge/python-3.7+-blue">
<img height="20" alt="License MIT" src="https://img.shields.io/badge/license-MIT-green">
<img height="20" alt="Status" src="https://img.shields.io/badge/status-stable-brightgreen">
<p><strong>dlogger</strong> — simple logger for personal projects</p>
<blockquote>(─‿‿─)</blockquote>
</div>
---
```
____ __
/ __ \ / / ____ ____ _ ____ _ ___ _____
/ / / / / / / __ \ / __ `// __ `// _ \ / ___/
/ /_/ / / /___/ /_/ // /_/ // /_/ // __// /
/_____/ /_____/\____/ \__, / \__, / \___//_/
/____/ /____/
```
## **📦 installation**
```bash
pip install dlogger-drawiks
```
---
## **📑 quick start**
```python
from dlogger import logger
logger.info("hello, world!")
logger.error("something went wrong")
```
with configuration:
```python
from dlogger import logger
logger.configure(
level="INFO",
log_file="app.log",
rotation="10MB",
retention="7 days",
compression=True
)
logger.debug("this won't be shown")
logger.info("but this will")
```
---
## **🧩 features**
- 🎨 **TrueColor output** — HEX/RGB support powered by [dcolor](https://github.com/drawiks/dcolor)
- 🚀 **high performance** — use of buffers and call context caching
- 🧵 **thread safety** — stability in multithreaded applications thanks to locks
- 💾 **write guarantee** — automatic buffer reset upon correct program termination
- 📁 **smart rotation** — by size (`10MB`, `1GB`) or time (`1 day`, `12 hours`)
- 🗑️ **auto cleanup** — scheduled deletion of old files (`retention="30 days"`)
- 📦 **compression** — automatic archiving of old logs to `.gz`
- 🛠️ **minimal dependencies** — only [dcolor](https://github.com/drawiks/dcolor)
---
## **📖 usage**
### log levels
```python
logger.configure(level="INFO") # DEBUG, INFO, WARNING, ERROR, CRITICAL
```
### size-based rotation
```python
logger.configure(
log_file="app.log",
rotation="10MB" # or "500KB", "1GB"
)
```
once the file reaches 10MB → `app.log.20260216_143022`
### time-based rotation
```python
logger.configure(
log_file="app.log",
rotation="1 day" # or "12 hours", "1 week"
)
```
### log retention
```python
logger.configure(
log_file="app.log",
retention="7 days" # or "2 weeks", "1 month"
)
```
logs older than 7 days will be deleted automatically
### compression
```python
logger.configure(
log_file="app.log",
rotation="10MB",
compression=True # old logs → .gz
)
```
### full configuration
```python
logger.configure(
level="INFO", # minimum log level
log_file="logs/app.log", # path to log file
show_path=True, # show module:function:
rotation="10MB", # size-based rotation
retention="7 days", # keep logs for 7 days
    compression=True,        # compress old logs
    time_format="%H:%M:%S"   # time format - 14:30:22
)
```
---
## **💡 examples**
### simple logging
```python
from dlogger import logger
logger.info("server started on port 8000")
logger.warning("memory usage at 80%")
logger.error("failed to connect to database")
```
### with file
```python
from dlogger import logger
logger.configure(
level="DEBUG",
log_file="app.log"
)
logger.debug("starting request processing")
logger.info("request processed successfully")
```
### for production
```python
from dlogger import logger
logger.configure(
level="INFO",
log_file="logs/production.log",
rotation="50MB",
retention="30 days",
    compression=True,
    time_format="%Y-%m-%d %H:%M:%S"
)
logger.info("application started")
logger.error("critical error in payments module")
```
---
## **📝 log format**
**console:**
```
2026-02-17 14:09:13 | INFO | src.bot:run: - init
```
**file:**
```
2026-02-17 14:09:13 | INFO | src.main:run: init
2026-02-17 14:09:13 | ERROR | src.main:run: error
```
---
## **📜 license**
[MIT](https://github.com/drawiks/dlogger/blob/main/LICENSE)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"dcolor-drawiks"
] | [] | [] | [] | [
"Homepage, https://github.com/drawiks/dlogger"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T16:00:02.556369 | dlogger_drawiks-0.2.3.tar.gz | 7,677 | 83/ab/2afb9c5cec8b44d6023ec4b58f03a9ffd1bb85e547335355ac8a0db135fd/dlogger_drawiks-0.2.3.tar.gz | source | sdist | null | false | 279047c6b2afa37f4721aecaa16aa09a | 2b106340867a7b055247f2778c69b41ed34c3aec676e9efe8573a4f80164899a | 83ab2afb9c5cec8b44d6023ec4b58f03a9ffd1bb85e547335355ac8a0db135fd | null | [
"LICENSE"
] | 210 |
2.4 | ximinf | 0.0.91 | Simulation Based Inference of Cosmological parameters in Jax using type Ia supernovae. | # ximinf
Simulation Based Inference of Cosmological parameters in Jax using type Ia supernovae.
| text/markdown | null | Adam Trigui <a.trigui@ip2i.in2p3.fr> | null | null | GPL-3.0-or-later | cosmology, supernovae, simulation based inference | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.26",
"jax>=0.4.30",
"flax>=0.9",
"pandas>=2",
"skysurvey",
"skysurvey-sniapop",
"astropy",
"pyDOE",
"joblib",
"seaborn",
"scipy",
"blackjax",
"jupyter; extra == \"notebooks\"",
"matplotlib; extra == \"notebooks\"",
"sphinx>=7.0; extra == \"docs\"",
"sphinx_rtd_theme>=2.0; extra == \"docs\"",
"myst-parser>=2.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=2.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/a-trigui/ximinf",
"Documentation, https://ximinf.readthedocs.io"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:59:59.847539 | ximinf-0.0.91.tar.gz | 24,353 | 9d/8f/ae13de9e52e1da602d699af43830db6ff8c8200bd400ed7ea298f7262c09/ximinf-0.0.91.tar.gz | source | sdist | null | false | 64fd88ff28a532f7e8b51c104470760a | b8be21287b2501ab9050ff7fa90501237d0805a6932a60a4910b4718050736cb | 9d8fae13de9e52e1da602d699af43830db6ff8c8200bd400ed7ea298f7262c09 | null | [
"LICENSE"
] | 232 |
2.4 | yellowdog-python-examples | 8.5.1 | Python CLI commands using the YellowDog Python SDK | # Example Python CLI commands using the YellowDog Python SDK
## Overview
This is a set of Python CLI commands for interacting with the YellowDog Platform, providing examples of usage of the [YellowDog Python SDK](https://docs.yellowdog.ai/sdk/python/index.html).
The commands support:
- **Aborting** running Tasks with the **`yd-abort`** command
- **Boosting** Allowances with the **`yd-boost`** command
- **Cancelling** Work Requirements with the **`yd-cancel`** command
- **Comparing** whether worker pools are a match for task groups with the **`yd-compare`** command
- **Creating, Updating and Removing** Compute Source Templates, Compute Requirement Templates, Keyrings, Credentials, Storage Configurations, Image Families, Allowances, Configured Worker Pools, User Attributes, Namespace Policies, Groups, and Applications with the **`yd-create`** and **`yd-remove`** commands
- **Deleting** objects in the YellowDog Object Store with the **`yd-delete`** command
- **Downloading** Results from the YellowDog Object Store with the **`yd-download`** command
- **Finishing** Work Requirements with the **`yd-finish`** command
- **Following Event Streams** for Work Requirements, Worker Pools and Compute Requirements with the **`yd-follow`** command
- **Instantiating** Compute Requirements with the **`yd-instantiate`** command
- **Listing** YellowDog items using the **`yd-list`** command
- **Provisioning** Worker Pools with the **`yd-provision`** command
- **Resizing** Worker Pools and Compute Requirements with the **`yd-resize`** command
- **Showing** the details of any YellowDog entity using its YellowDog ID with the **`yd-show`** command
- **Showing** the details of the current Application with the **`yd-application`** command
- **Shutting Down** Worker Pools and Nodes with the **`yd-shutdown`** command
- **Starting** HELD Work Requirements and **Holding** (or pausing) RUNNING Work Requirements with the **`yd-start`** and **`yd-hold`** commands
- **Submitting** Work Requirements with the **`yd-submit`** command
- **Terminating** Compute Requirements with the **`yd-terminate`** command
- **Uploading** files to the YellowDog Object Store with the **`yd-upload`** command
Please see the documentation in the [GitHub repository](https://github.com/yellowdog/python-examples) for more details.
| text/markdown | null | YellowDog Limited <support@yellowdog.co> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"PyPAC>=0.16.4",
"dateparser",
"python-dotenv",
"requests",
"rclone-api",
"rich==13.9.4",
"tabulate>=0.9.0",
"tomli>=2.4.0",
"yellowdog-sdk>=13.2.0",
"jsonnet; extra == \"jsonnet\"",
"boto3; extra == \"cloudwizard\"",
"google-cloud-compute; extra == \"cloudwizard\"",
"google-cloud-storage; extra == \"cloudwizard\"",
"azure-identity; extra == \"cloudwizard\"",
"azure-mgmt-resource; extra == \"cloudwizard\"",
"azure-mgmt-network; extra == \"cloudwizard\"",
"azure-mgmt-storage; extra == \"cloudwizard\"",
"azure-mgmt-subscription; extra == \"cloudwizard\""
] | [] | [] | [] | [
"Homepage, https://github.com/yellowdog/python-examples"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T15:59:31.166065 | yellowdog_python_examples-8.5.1.tar.gz | 191,222 | 46/96/58b51321a986d04af02f7c5e076d98df748c86cd60a2a514184c8f90ac1a/yellowdog_python_examples-8.5.1.tar.gz | source | sdist | null | false | 0b60871645398327467c463b20131f62 | 14761518ccea4670308ec8e8b7c000e6536f7e2e0a7c4cc6f6f5d0b53a41b09b | 469658b51321a986d04af02f7c5e076d98df748c86cd60a2a514184c8f90ac1a | Apache-2.0 | [
"LICENSE"
] | 214 |
2.4 | qcinput | 0.1.0 | Generate quantum chemistry input files from molecular structures | # qcinput
`qcinput` is a CLI tool that reads an `xyz` geometry, loads task type and keywords
from TOML, and writes an ORCA or Gaussian input file.
## Install With pip
```bash
python -m pip install .
```
For development:
```bash
python -m pip install -e .
```
## Run With Nix
```bash
nix develop -c qcinput
```
Or run directly from the flake package:
```bash
nix run .
```
## Usage
```bash
qcinput generate <path/to/structure.xyz> [-c|--config <path/to/qcinput.toml>] [-o output.inp]
```
Compatibility shorthand (same behavior):
```bash
qcinput <path/to/structure.xyz> [-c|--config <path/to/qcinput.toml>] [-o output.inp]
```
First-time setup:
```bash
qcinput init-config
```
Default config path:
```text
./qcinput.toml
```
You can override this with:
```bash
QCINPUT_CONFIG=/path/to/config.toml qcinput water.xyz
```
Example:
```bash
qcinput water.xyz -c qcinput.toml -o water_opt.inp
```
When `qcinput.engine = "gaussian"`, default output suffix is `.gjf`.
## Config Format (TOML)
We use TOML because it is readable for humans, easy to version control, and strongly structured.
```toml
[qcinput]
engine = "orca" # or "gaussian"
kind = "int" # int | ts | sp
[molecule]
charge = 0
multiplicity = 1
[orca]
nprocs = 8
maxcore = 4000
base_keywords = ["r2scan-3c", "D4", "def2-mTZVPP"]
[orca.task.int]
keywords = ["Opt", "Freq"]
[orca.task.ts]
keywords = ["OptTS", "Freq"]
[orca.task.sp]
keywords = ["SP"]
[gaussian]
nprocshared = 8
mem = "8GB"
method_basis = "B3LYP/def2TZVP"
[gaussian.task.int]
route = ["Opt", "Freq"]
[gaussian.task.ts]
route = ["OptTS", "Freq"]
[gaussian.task.sp]
route = ["SP"]
```
## Output Snippet
```text
# Generated by qcinput from water.xyz
! Opt Freq r2scan-3c D4 def2-mTZVPP
%pal nprocs 8 end
%maxcore 4000
* xyz 0 1
O 0.000000 0.000000 0.000000
H 0.757000 0.586000 0.000000
H -0.757000 0.586000 0.000000
*
```
| text/markdown | null | Yusheng Yang <yushengyangchem@gmail.com> | null | Yusheng Yang <yushengyangchem@gmail.com> | null | cli, gaussian, orca, quantum-chemistry | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Chemistry"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/yushengyangchem/qcinput",
"Issues, https://github.com/yushengyangchem/qcinput/issues",
"Documentation, https://github.com/yushengyangchem/qcinput#readme"
] | twine/6.1.0 CPython/3.13.11 | 2026-02-20T15:59:29.369136 | qcinput-0.1.0.tar.gz | 9,382 | 89/ae/208b755f75d147909ee964cbb58a58dab7353c711489b28ed0cb82f616a4/qcinput-0.1.0.tar.gz | source | sdist | null | false | 347316f3c955264d81612675096eac57 | 992a4237fa1c3dfaa6813b3d6b6211804ed31b9ac7ebf08d2cc392e2b3976906 | 89ae208b755f75d147909ee964cbb58a58dab7353c711489b28ed0cb82f616a4 | MIT | [
"LICENSE"
] | 217 |
2.4 | dbt-pal | 0.3.0 | A dbt adapter for running Python models without Dataproc or BigQuery DataFrames. | # dbt-pal
dbt-pal is **P**ython **A**dapter **L**ayer
A dbt adapter for running Python models without Dataproc or BigQuery DataFrames.
- SQL models work the same as dbt-bigquery
- Python models are executed in the process running dbt, and the results are written to BigQuery
- The only supported data platform is BigQuery
Inspired by [dbt-fal](https://github.com/fal-ai/dbt-fal), but this is an unrelated project with no guaranteed compatibility.
## Usage
### Installation
```
pip install dbt-pal
```
### Prerequisites
- Python >= 3.11
- dbt-core >= 1.11.0
- dbt-bigquery >= 1.11.0
- Authentication to BigQuery must be configured (e.g. `gcloud auth application-default login`)
### profiles.yml Configuration
Create a `target` with `type: pal` and specify the name of the actual BigQuery `target` in the `db_profile` field.
```yaml
my_project:
target: pal
outputs:
pal:
type: pal
db_profile: bq
bq:
type: bigquery
method: oauth
project: my-project
dataset: my_dataset
location: asia-northeast1
```
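The `pal` output is essentially a pointer: at run time the adapter looks up the output named in `db_profile` to obtain the real BigQuery connection. A minimal sketch of that indirection (hypothetical helper, not dbt-pal's internal code):

```python
# Hypothetical sketch of how a `db_profile` pointer resolves to the real
# BigQuery output; dbt-pal's actual resolution logic lives in the adapter.
profiles = {
    "my_project": {
        "target": "pal",
        "outputs": {
            "pal": {"type": "pal", "db_profile": "bq"},
            "bq": {
                "type": "bigquery",
                "method": "oauth",
                "project": "my-project",
                "dataset": "my_dataset",
            },
        },
    }
}

def resolve_db_profile(profiles: dict, project: str) -> dict:
    outputs = profiles[project]["outputs"]
    target = outputs[profiles[project]["target"]]
    if target["type"] != "pal":
        return target
    # Follow the pointer to the underlying BigQuery output
    return outputs[target["db_profile"]]

print(resolve_db_profile(profiles, "my_project")["type"])  # → bigquery
```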
## Limitations
- Only table materialization is supported
- Python models are executed in the process running dbt, so the data size that can be handled depends on the memory of that process
## License
Apache License 2.0.
This project was created by modifying code from [dbt-fal](https://github.com/fal-ai/dbt-fal).
| text/markdown | null | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"dbt-adapters<2.0,>=1.19.0",
"dbt-bigquery<2.0,>=1.11.0",
"dbt-common<2.0,>=1.10",
"dbt-core>=1.11.0"
] | [] | [] | [] | [
"Homepage, https://github.com/numb86/dbt-pal",
"Repository, https://github.com/numb86/dbt-pal.git",
"Issues, https://github.com/numb86/dbt-pal/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T15:59:13.656686 | dbt_pal-0.3.0.tar.gz | 14,331 | 3d/8d/08540cb67ad5c044fc54c534491998600ccc1d4422a119cf8128a548fa84/dbt_pal-0.3.0.tar.gz | source | sdist | null | false | f93aa6d33f9adafda5731dc8185f3219 | acdfc1ce60da996a075c421b5f8612afe0a3d837e8578d30a698e9d9b2168295 | 3d8d08540cb67ad5c044fc54c534491998600ccc1d4422a119cf8128a548fa84 | null | [
"LICENSE"
] | 198 |
2.4 | aioworldline | 0.4.8 | Unofficial Worldline portal data retrieving client | aioworldline
============
Unofficial Worldline portal data retrieving client
| text/x-rst | null | Oleg Korsak <kamikaze.is.waiting.you@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python"
] | [] | null | null | <3.15.0,>=3.13.9 | [] | [] | [] | [
"aiohttp[speedups]~=3.13.3",
"pydantic~=2.12.5",
"pydantic-settings~=2.13.1"
] | [] | [] | [] | [
"Homepage, https://github.com/kamikaze/aioworldline",
"Documentation, https://github.com/kamikaze/aioworldline/wiki"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:59:11.527322 | aioworldline-0.4.8.tar.gz | 96,350 | c9/1e/4177361376f080b673dbed88aea148d594aedc768df1a61ea5e1cd466fbf/aioworldline-0.4.8.tar.gz | source | sdist | null | false | bcd3365fb65249789bd85604ff06c034 | 00e29ffac0e851643a3d1a47256ef7b238bfd9687bb377bdaf2f660855006fac | c91e4177361376f080b673dbed88aea148d594aedc768df1a61ea5e1cd466fbf | GPL-3.0 | [
"LICENSE",
"AUTHORS.rst"
] | 214 |
2.4 | ai-atlas-nexus | 1.2.1 | AI Atlas Nexus provides tooling to help bring together disparate resources related to governance of foundation models. | # AI Atlas Nexus
<img src="https://github.com/IBM/ai-atlas-nexus/blob/main/resources/images/ai_atlas_nexus_vector.svg?raw=true" width="200">
[](https://www.apache.org/licenses/LICENSE-2.0)  [](https://www.python.org/downloads/) <img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
👉 (Jun-2025) The [demo projects repository](https://github.com/IBM/ai-atlas-nexus-demos) showcases implementations of AI Atlas Nexus.
## Overview
AI Atlas Nexus provides tooling to bring together resources related to governance of foundation models. We support a community-driven approach to curating and cataloguing resources such as datasets, benchmarks and mitigations. Our goal is to turn abstract risk definitions into actionable workflows that streamline AI governance processes. By connecting fragmented resources, AI Atlas Nexus seeks to fill a critical gap in AI governance, enabling stakeholders to build more robust, transparent, and accountable systems. AI Atlas Nexus builds on the [IBM AI Risk Atlas](https://www.ibm.com/docs/en/watsonx/saas?topic=ai-risk-atlas) making this educational resource a nexus of governance assets and tooling. A knowledge graph of an AI system is used to provide a unified structure that links and contextualizes the very heterogeneous domain data.
Our intention is to create a starting point for an open AI Systems ontology whose focus is on risk and that the community can extend and enhance. This ontology serves as the foundation that unifies innovation and tooling in the AI risk space. By lowering the barrier to entry for developers, it fosters a governance-first approach to AI solutions, while also inviting the broader community to contribute their own tools and methodologies to expand its impact.
## Features
- 🏗️ An ontology that combines the AI risk view (taxonomies, risks, actions) with an AI model view (AI systems, AI models, model evaluations) into one coherent schema
- 📚 AI Risks collected from IBM AI Risk Atlas, IBM Granite Guardian, MIT AI Risk Repository, NIST Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, the AI Risk Taxonomy (AIR 2024), the AILuminate Benchmark, Credo's Unified Control Framework, and OWASP Top 10 for Large Language Model Applications
- 🔗 Mappings are proposed between the taxonomies and between risks and actions
- 🐍 Use the Python library methods to quickly explore available risks, relations and actions
- 🚨 Use the Python library methods to detect potential risks in your use case
- 📤 Download an exported graph populated with data instances
- ✨ Example use-case of auto-assistance in compliance questionnaires using CoT examples and AI Atlas Nexus
- 🔧 Tooling to convert the LinkML schema and instance data into a Cypher representation to populate a graph database
## Architecture

## Links
- **AI Risk Ontology**
- [LinkML schema documentation](docs/ontology/index.md)
- [LinkML instance data for an example knowledge graph](https://github.com/IBM/ai-atlas-nexus/blob/main/src/ai_atlas_nexus/data/knowledge_graph/README.md)
- [Download a populated graph](https://github.com/IBM/ai-atlas-nexus/blob/main/graph_export/README.md)
- [Contribute your own taxonomy files and CoT templates](docs/concepts/Contributing_a_taxonomy.md)
- **Notebooks:**
- [AI Atlas Nexus Quickstart](docs/examples/notebooks/AI_Atlas_Nexus_Quickstart.ipynb) Overview of library functionality
- [Risk identification](docs/examples/notebooks/risk_identification.ipynb) Uncover risks related to your use case
- [Auto assist questionnaire](docs/examples/notebooks/autoassist_questionnaire.ipynb) Auto-fill a questionnaire using Chain of Thought or Few-Shot Examples
- [AI Tasks identification](docs/examples/notebooks/ai_tasks_identification.ipynb) Uncover AI tasks related to your use case
- [AI Domain identification](docs/examples/notebooks/domain_identification.ipynb) Uncover the AI domain from your use case
- [Risk Categorization](docs/examples/notebooks/risk_categorization.ipynb) Assess and categorize the severity of risks associated with an AI system use case. Prompt templates are used with thanks to https://doi.org/10.48550/arXiv.2407.12454.
- [Crosswalk](docs/examples/notebooks/generate_crosswalk.ipynb) An example of generating crosswalk information between risks of two different taxonomies.
- [Risk to ARES Evaluation](docs/examples/notebooks/risk_to_ares_evaluation.ipynb) ARES Integration for AI Atlas Nexus allows you to run AI robustness evaluations on AI Systems derived from use cases.
- **Additional Resources:**
- [Demonstrations](https://github.com/IBM/ai-atlas-nexus-demos) A repo containing some demo applications using ai-atlas-nexus.
- [Extensions](https://github.com/IBM/ai-atlas-nexus-extensions) A repo containing extensions and a cookie cutter template to create new open source ai-atlas-nexus extensions.
- [IBM AI Risk Atlas](https://www.ibm.com/docs/en/watsonx/saas?topic=ai-risk-atlas)
- [Usage Governance Advisor: From Intent to AI Governance](https://arxiv.org/abs/2412.01957)
## Installation
This project targets Python ">=3.11, <3.12". You can download specific versions of Python here: https://www.python.org/downloads/
**Note:** Replace `INFERENCE_LIB` with one of the LLM inference library [ollama, vllm, wml, rits] as explained [here](#install-for-inference-apis)
To install the current release
```
pip install "ai-atlas-nexus[INFERENCE_LIB]"
```
To install the latest code
```
git clone git@github.com:IBM/ai-atlas-nexus.git
cd ai-atlas-nexus
python -m venv v-ai-atlas-nexus
source v-ai-atlas-nexus/bin/activate
pip install -e ".[INFERENCE_LIB]"
```
### Install for inference APIs
AI Atlas Nexus uses Large Language Models (LLMs) to infer risks and risk data, and therefore requires access to an LLM for inference. The following LLM inference APIs are supported:
- [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) (Watson Machine Learning)
- [Ollama](https://ollama.com/)
- [vLLM](https://docs.vllm.ai/en/latest/)
- [RITS](https://rits.fmaas.res.ibm.com) (IBM Internal Only)
#### IBM Watsonx AI (WML)
When using the WML platform, you need to:
1. Add configuration to `.env` file as follows. Please follow this [documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html?context=wx&locale=en) on obtaining WML credentials.
```yaml
WML_API_KEY=<WML api key goes here>
WML_API_URL=<WML url key goes here>
WML_PROJECT_ID=<WML project id goes here, Optional>
WML_SPACE_ID=<WML space id goes here, Optional>
```
Either `WML_PROJECT_ID` or `WML_SPACE_ID` needs to be specified.
2. Install WML dependencies as follows:
```command
pip install -e ".[wml]"
```
#### Ollama
When using the Ollama inference, you need to:
1. Install Ollama dependencies as follows:
```command
pip install -e ".[ollama]"
```
2. Please follow the [quickstart](https://github.com/ollama/ollama/blob/main/README.md#ollama) guide to start the Ollama LLM server. The server starts at http://localhost:11434 by default.
3. When selecting Ollama engine in AI Atlas Nexus, use the server address `localhost:11434` as the `api_url` in the credentials or set the environment variable `OLLAMA_API_URL` with this value.
#### vLLM
When using the vLLM inference, you need to:
1. For Mac users, follow the instructions [here](https://docs.vllm.ai/en/stable/getting_started/installation/cpu/index.html?device=apple). You need to build vLLM from source to run it natively on macOS.
2. For Linux users, install vLLM dependencies as follows:
```command
pip install -e ".[vllm]"
```
The above package is enough to run vLLM in one-off offline mode. When selecting vLLM execution from AI Atlas Nexus, pass `credentials` as `None` to use vLLM offline mode.
3. (Optional) To run vLLM on an OpenAI-Compatible vLLM Server, execute the command:
```command
vllm serve ibm-granite/granite-3.1-8b-instruct --max_model_len 4096 --host localhost --port 8000 --api-key <CUSTOM_API_KEY>
```
The CUSTOM_API_KEY can be any string that you choose to use as your API key. The above command will start the vLLM server at http://localhost:8000. The server hosts one model at a time. Check all supported APIs at `http://localhost:8000/docs`
**Note:** When selecting the vLLM engine in AI Atlas Nexus, pass `api_url` as `host:port` and the `api_key` from the `vllm serve` command above in `credentials`.
#### RITS (IBM Internal Only)
When using the RITS platform, you need to:
1. Add configuration to `.env` file as follows:
```yaml
RITS_API_KEY=<RITS api key goes here>
RITS_API_URL=<RITS url key goes here>
```
2. Install RITS dependencies as follows:
```command
pip install -e ".[rits]"
```
## AI Atlas Nexus Extensions
Install an AI Atlas Nexus extension using the command below:
```command
ran-extension install <EXTENSION_NAME>
```
Currently, the following extensions are available:
- [ran-ares-integration](https://github.com/ibm/ai-atlas-nexus-extensions/tree/main/ran-ares-integration): ARES Integration for AI Atlas Nexus to run AI robustness evaluations on AI Systems derived from use cases.
## Compatibility
- View the [releases changelog](https://github.com/IBM/ai-atlas-nexus/releases).
## Referencing the project
If you use AI Atlas Nexus in your projects, please consider citing the following:
```bib
@article{airiskatlas2025,
title={AI Risk Atlas: Taxonomy and Tooling for Navigating AI Risks and Resources},
author={Frank Bagehorn and Kristina Brimijoin and Elizabeth M. Daly and Jessica He and Michael Hind and Luis Garces-Erice and Christopher Giblin and Ioana Giurgiu and Jacquelyn Martino and Rahul Nair and David Piorkowski and Ambrish Rawat and John Richards and Sean Rooney and Dhaval Salwala and Seshu Tirupathi and Peter Urbanetz and Kush R. Varshney and Inge Vejsbjerg and Mira L. Wolf-Bauwens},
year={2025},
eprint={2503.05780},
archivePrefix={arXiv},
primaryClass={cs.CY},
url={https://arxiv.org/abs/2503.05780}
}
```
## License
AI Atlas Nexus is provided under Apache 2.0 license.
## Contributing
- Get started by checking our [contribution guidelines](CONTRIBUTING.md).
- Read the wiki for more technical and design details.
- If you have any questions, just ask!
- [Contribute your own taxonomy files and CoT templates](docs/concepts/Contributing_a_taxonomy.md)
Tip: Use the makefile provided to regenerate artifacts provided in the repository by running `make` in this repository.
## Find out more
- Try out a quick demo at the [HF spaces demo site](https://huggingface.co/spaces/ibm/risk-atlas-nexus)
- Read the publication [AI Risk Atlas: Taxonomy and Tooling for Navigating AI Risks and Resources](https://arxiv.org/pdf/2503.05780)
- Explore [IBM's AI Risk Atlas](https://www.ibm.com/docs/en/watsonx/saas?topic=ai-risk-atlas) on the IBM documentation site
- View the [demo projects repository](https://github.com/IBM/ai-atlas-nexus-demos) showcasing implementations of AI Atlas Nexus.
- Read the IBM AI Ethics Board publication [Foundation models: Opportunities, risks and mitigations](https://www.ibm.com/downloads/documents/us-en/10a99803d8afd656), which goes into more detail about the risk taxonomy and describes IBM's point of view on the ethics of foundation models.
- ['Usage Governance Advisor: From Intent to AI Governance'](https://arxiv.org/abs/2412.01957) presents a system for semi-structured governance information, identifying and prioritising risks according to the intended use case, recommending appropriate benchmarks and risk assessments and proposing mitigation strategies and actions.
## IBM ❤️ Open Source AI
AI Atlas Nexus has been brought to you by IBM.
| text/markdown | null | AI Atlas Nexus <ai-atlas-nexus@ibm.com>, Elizabeth Daly <elizabeth.daly@ie.ibm.com>, Dhaval Salwala <dhaval.vinodbhai.salwala@ibm.com>, Frank Bagehorn <fba@zurich.ibm.com>, Luis Garces-Erice <lga@zurich.ibm.com>, Sean Rooney <sro@zurich.ibm.com>, Inge Vejsbjerg <ingevejs@ie.ibm.com> | null | null | null | ai risks, ai safety, ai governance, risk taxonomies, risk identification, risk detection, ai task identification, ai risk management, ai risk questionnaire, ai systems modelling, knowledge graph, chain of thought | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows"
] | [] | null | null | <3.12.5,>=3.11 | [] | [] | [] | [
"linkml",
"linkml_runtime",
"pydantic",
"requests",
"rich",
"sssom",
"txtai",
"tqdm",
"logzero",
"python-dotenv",
"datasets",
"openai>=1.0",
"txtai",
"jsonschema",
"isort",
"pre-commit",
"typer",
"inflect",
"cymple",
"ibm-watsonx-ai; extra == \"wml\"",
"ollama; extra == \"ollama\"",
"vllm; extra == \"vllm\"",
"xgrammar; extra == \"vllm\"",
"mkdocs-material; extra == \"docs\"",
"mkdocs-jupyter; extra == \"docs\"",
"mkdocs-click; extra == \"docs\"",
"mkdocstrings[python]; extra == \"docs\"",
"griffe_inherited_docstrings; extra == \"docs\"",
"griffe-pydantic; extra == \"docs\"",
"mkdocs-awesome-nav; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/IBM/ai-atlas-nexus",
"Documentation, https://ibm.github.io/ai-atlas-nexus/",
"Changelog, https://github.com/IBM/ai-atlas-nexus/blob/main/CHANGELOG.md",
"Issues, https://github.com/IBM/ai-atlas-nexus/issues"
] | twine/6.2.0 CPython/3.12.0 | 2026-02-20T15:58:59.752014 | ai_atlas_nexus-1.2.1.tar.gz | 342,798 | e9/5e/2292aeb18c722e2433cca71ef7c2f4200942a309034fcf6cbdf2bf2adbfe/ai_atlas_nexus-1.2.1.tar.gz | source | sdist | null | false | f18df69b7e08a770aa64005696abd2c8 | 2a25b5201b22f0ae04c5caab4a516d3c2d4bcd464d64509312eea8b55a2bfb02 | e95e2292aeb18c722e2433cca71ef7c2f4200942a309034fcf6cbdf2bf2adbfe | null | [
"LICENSE"
] | 206 |
2.4 | dxf2geo | 0.1.4 | Convert CAD DXF data into geospatial formats and visualisations using GeoPandas/Pyogrio. | # dxf2geo
[](https://pypi.org/project/dxf2geo)
[](https://pypi.org/project/dxf2geo)
[](https://github.com/ksuchak1990/dxf2geo/actions/workflows/test.yml)
[](https://github.com/ksuchak1990/dxf2geo/actions/workflows/clean_code.yml)
[](https://doi.org/10.5281/zenodo.17174880)
> [!WARNING]
> This package is in the early stages of development and should not be installed unless you are one of the developers.
**dxf2geo** is a small Python package for converting CAD `.dxf` files into
geospatial formats such as Shapefiles and GeoPackages, and for producing
interactive visualisations of the extracted geometry.
It is designed to automate the process of extracting geometry by type (point,
line, polygon, etc.), filtering or cleaning the results, and inspecting the
output spatially.
-----
## Table of contents
- [Installation](#installation)
- [Features](#features)
- [Example usage](#example-usage)
- [Layer filtering](#layer-filtering)
- [License](#license)
## Installation
`dxf2geo` uses the GDAL Python bindings (`osgeo.*`).
On supported platforms, `pip install dxf2geo` will pull a compatible `GDAL`
wheel from PyPI.
```bash
pip install dxf2geo
```
If your platform does not have a prebuilt GDAL wheel, install GDAL from your
system/package manager first (or via Conda), then install dxf2geo:
```bash
sudo apt install gdal-bin libgdal-dev
pip install dxf2geo
```
Before use, it is worth verifying that GDAL is correctly installed:
```bash
python -c "from osgeo import gdal, ogr; print('GDAL', gdal.VersionInfo(), 'DXF driver:', bool(ogr.GetDriverByName('DXF')))"
```
If the installation has worked, you should see something like `GDAL ##### DXF
driver: True`.
## Features
- Converts DXF files to common vector formats (e.g. Shapefile, GeoPackage),
- Supports geometry filtering by type (e.g., LINESTRING, POLYGON),
- Skips invalid geometries,
- Visualises output geometries in an interactive Plotly-based HTML map,
- Filters out short, axis-aligned DXF gridding lines (optional cleanup step).
## Example usage
Below is an example of using the functionality of this package on a CAD file
`example.dxf`.
This creates a set of shapefiles, one for each geometry type, in a new `output/` directory.
```python
# Imports
from dxf2geo.extract import extract_geometries
from dxf2geo.visualise import (
load_geometries,
plot_geometries,
)
from pathlib import Path
# Define paths
input_dxf = Path("./example.dxf").expanduser()
output_dir = Path("output")
# Process CAD file
extract_geometries(input_dxf, output_dir)
# Produce `plotly` html figure
gdf = load_geometries(output_dir)
plot_geometries(gdf, output_dir / "geometry_preview.html")
```
Following this, we would have an output folder that looks like:
```
output/
├── point/
│ └── point.shp
├── linestring/
│ └── linestring.shp
├── polygon/
│ └── polygon.shp
...
└── export.log
```
## Layer filtering
At present we can filter CAD layers based on a number of different criteria.
### Layer name (exact, case-insensitive)
```python
FilterOptions(
include_layers=("roads", "buildings"),
exclude_layers=("defpoints", "tmp"),
)
```
### Layer name (regular expressions)
We can also filter layers using **regular expressions** (applied to the CAD
layer name).
```python
FilterOptions(
# Include any "roads" or "road" layer (case-insensitive), and any layer starting with "bldg_"
include_layer_patterns=(r"(?i)^roads?$", r"^bldg_.*"),
# Exclude layers named "defpoints" (any case) or prefixed with "tmp_"
exclude_layer_patterns=(r"(?i)^defpoints$", r"^tmp_"),
)
```
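The intended semantics — a layer passes if it matches at least one include pattern (when any are given) and no exclude pattern — can be illustrated with plain `re`. This is a sketch of the matching rule, not dxf2geo's internal code:

```python
import re

def layer_passes(
    name: str,
    include_patterns: tuple[str, ...] = (),
    exclude_patterns: tuple[str, ...] = (),
) -> bool:
    """Sketch of include/exclude regex filtering on a CAD layer name."""
    # With include patterns present, at least one must match the layer name
    if include_patterns and not any(re.search(p, name) for p in include_patterns):
        return False
    # Any matching exclude pattern rejects the layer
    return not any(re.search(p, name) for p in exclude_patterns)

include = (r"(?i)^roads?$", r"^bldg_.*")
exclude = (r"(?i)^defpoints$", r"^tmp_")

print(layer_passes("Roads", include, exclude))        # → True
print(layer_passes("tmp_scratch", include, exclude))  # → False
```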
### Geometry size/structure
```python
# Drop empty geometries and zero-area/length features
FilterOptions(drop_empty=True, drop_zero_geom=True)
# Minimum polygon area
FilterOptions(min_area=5.0)
# Minimum line length
FilterOptions(min_length=10.0)
```
### Spatial bounding box
```python
# (minx, miny, maxx, maxy)
FilterOptions(bbox=(430000.0, 420000.0, 435000.0, 425000.0))
```
### Attribute-value exclusions (exact match)
```python
# Exclude features where fields have disallowed values
FilterOptions(exclude_field_values={
"EntityType": {"TEXT", "MTEXT"},
"Linetype": {"HIDDEN"},
})
```
### Geometry type selection (at extraction call)
```python
extract_geometries(
dxf_path,
output_root,
geometry_types=("POINT", "LINESTRING", "POLYGON", "MULTILINESTRING", "MULTIPOLYGON"),
filter_options=FilterOptions(...),
)
```
## License
`dxf2geo` is distributed under the terms of the
[MIT](https://spdx.org/licenses/MIT.html) license.
## Funding
This package is being developed as part of work on the [EPSRC Digital Health Hub for Antimicrobial Resistance](https://www.digitalamr.org/) (EP/X031276/1).
| text/markdown | Keiran Suchak | null | null | null | null | DXF, geospatial, GIS, vector, conversion | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: GIS"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"geopandas>=1.0",
"pyogrio>=0.11",
"shapely>=2.0",
"pandas>=2.0",
"plotly>=5",
"tqdm>=4.66",
"ezdxf>=1.4.3",
"gdal>=3.12.1; extra == \"gdal\"",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"ruff; extra == \"dev\"",
"black; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ksuchak1990/dxf2geo",
"Repository, https://github.com/ksuchak1990/dxf2geo",
"Issues, https://github.com/ksuchak1990/dxf2geo/issues",
"DOI, https://doi.org/10.5281/zenodo.17174880"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:58:55.041741 | dxf2geo-0.1.4.tar.gz | 19,164 | 57/13/b0613dae4f21a4174ae3626759d811a0e4d5148f47266eade46632bf416b/dxf2geo-0.1.4.tar.gz | source | sdist | null | false | f0bd09c1843b4c084be7a19c5fdfb284 | 40686ba58c014afd8d2971700fae33ed95495a3ed57193041231db00febe1f30 | 5713b0613dae4f21a4174ae3626759d811a0e4d5148f47266eade46632bf416b | null | [
"LICENSE.txt"
] | 200 |
2.4 | adapta | 3.5.19 | Logging, data connectors, monitoring, secret handling and general lifehacks to make data people lives easier. | # Adapta
This project aims to provide the tools needed for the everyday activities of data scientists and engineers:
- Connectors for various cloud APIs
- Secure secret handlers for various remote storages
- Logging framework
- Metrics reporting framework
- Storage drivers for various clouds and storage types
## Delta Lake
This module provides basic Delta Lake operations without Spark session, based on [delta-rs](https://github.com/delta-io/delta-rs) project.
Please refer to the [module](adapta/storage/delta_lake/v3/README.md) documentation for examples.
## Secret Storages
Please refer to the [module](adapta/storage/secrets/README.md) documentation for examples.
## NoSql (Astra DB)
Please refer to the [module](adapta/storage/distributed_object_store/v3/datastax_astra/README.md) documentation for examples.
## Dataclass Validation framework
Please refer to the [module](adapta/dataclass_validation/README.md) documentation for examples.
| text/markdown | ECCO Sneaks & Data | esdsupport@ecco.com | GZU | gzu@ecco.com | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"SQLAlchemy<2.1,>=2.0; extra == \"databases\"",
"adlfs<2025,>=2024; extra == \"azure\"",
"azure-identity<2.0,>=1.7; extra == \"azure\"",
"azure-keyvault-secrets<5.0,>=4.3; extra == \"azure\"",
"azure-mgmt-storage<19.2.0,>=19.1.0; extra == \"azure\"",
"azure-servicebus<7.7,>=7.6; extra == \"azure-servicebus\"",
"azure-storage-blob<=12.16.0,>12.7.0; extra == \"azure\"",
"backoff<3.0,>=2.2",
"boto3<2.0.0,>=1.28.0; extra == \"aws\"",
"botocore<2.0,>=1.31; extra == \"aws\"",
"cassandra-driver<3.30.0,>=3.29.1; extra == \"storage\"",
"cryptography>=36",
"dataclasses-json<0.7,>=0.6",
"datadog<0.50.0,>=0.49.1; extra == \"datadog\"",
"datadog-api-client<2.19.0,>=2.18.0; extra == \"datadog\"",
"deltalake<0.19.0,>=0.18.1; extra == \"storage\"",
"fastexcel<0.15.0,>=0.14.0; extra == \"excel\"",
"fsspec<2025,>=2024",
"hvac<0.12.0,>=0.11.2; extra == \"hashicorp\"",
"limits<3.8,>=3.7",
"mlflow-skinny<2.18.0,>=2.17.0; extra == \"ml\"",
"openpyxl<4.0,>=3.0; extra == \"excel\"",
"pandas[performance]<3.0,>=2.0.0",
"pandera<1.0,>=0.20.3",
"polars<1.34,>=1.7",
"pyarrow>=7.0",
"pyodbc<4.1,>=4.0; extra == \"databases\"",
"redis[hiredis]<4.5.0,>=4.4.0; extra == \"caching\"",
"requests<3.0,>=2.26",
"snowflake-connector-python<4.0.0,>=3.4.0; extra == \"snowflake\"",
"trino[sqlalchemy]<0.331,>=0.330; extra == \"trino\"",
"xlsxwriter<4.0,>=3.0; extra == \"excel\""
] | [] | [] | [] | [
"Repository, https://github.com/SneaksAndData/adapta"
] | poetry/2.3.2 CPython/3.11.14 Linux/6.11.0-1018-azure | 2026-02-20T15:58:43.340846 | adapta-3.5.19-py3-none-any.whl | 218,488 | fa/46/4e76014a3f10cdb167ab789815cb5c163aee082cfb585ebf1c6bcc8f0843/adapta-3.5.19-py3-none-any.whl | py3 | bdist_wheel | null | false | aca3cff2e381fc8e480aab34add8c8ec | 8647290fb980f66ce195e57b01a0bb0c3d56f20b6a4cb913b073ec40fdf32cb0 | fa464e76014a3f10cdb167ab789815cb5c163aee082cfb585ebf1c6bcc8f0843 | null | [
"LICENSE"
] | 224 |
2.4 | agentpool | 2.8.16 | Pydantic-AI based Multi-Agent Framework with YAML-based Agents, Teams, Workflows & Extended ACP / AGUI integration | # AgentPool
[](https://pypi.org/project/agentpool/)
[](https://pypi.org/project/agentpool/)
[](https://pypi.org/project/agentpool/)
[](https://pypi.org/project/agentpool/)
[](https://github.com/phil65/agentpool/stars)
**A unified agent orchestration hub that lets you configure and manage heterogeneous AI agents via YAML and expose them through standardized protocols.**
[Documentation](https://phil65.github.io/agentpool/)
## The Problem
You want to use multiple AI agents together - Claude Code for refactoring, Codex for code editing with advanced reasoning, a custom analysis agent, maybe Goose for specific tasks. But each has different APIs, protocols, and integration patterns. Coordinating them means writing glue code for each combination.
## The Solution
AgentPool acts as a protocol bridge. Define all your agents in one YAML file - whether they're native (PydanticAI-based), direct integrations (Claude Code, Codex), external ACP agents (Goose), or AG-UI agents. Then expose them all through ACP or AG-UI protocols, letting them cooperate, delegate, and communicate through a unified interface.
```mermaid
flowchart TB
subgraph AgentPool
subgraph config[YAML Configuration]
native[Native Agents<br/>PydanticAI]
direct[Direct Integrations<br/>Claude Code, Codex]
acp_agents[ACP Agents<br/>Goose, etc.]
agui_agents[AG-UI Agents]
workflows[Teams & Workflows]
end
subgraph interface[Unified Agent Interface]
delegation[Inter-agent delegation]
routing[Message routing]
context[Shared context]
end
config --> interface
end
interface --> acp_server[ACP Server]
interface --> opencode_server[OpenCode Server]
interface --> agui_server[AG-UI Server]
acp_server --> clients1[Zed, Toad, ACP Clients]
opencode_server --> clients2[OpenCode TUI/Desktop]
agui_server --> clients3[AG-UI Clients]
```
## Quick Start
```bash
uv tool install agentpool
```
### Minimal Configuration
```yaml
# agents.yml
agents:
assistant:
type: native
model: openai:gpt-4o
system_prompt: "You are a helpful assistant."
```
```bash
# Run via CLI
agentpool run assistant "Hello!"
# Or start as ACP server (for Zed, Toad, etc.)
agentpool serve-acp agents.yml
```
### Integrating External Agents
The real power comes from mixing agent types:
```yaml
agents:
# Native PydanticAI-based agent
coordinator:
type: native
model: openai:gpt-4o
tools:
- type: subagent # Can delegate to all other agents
system_prompt: "Coordinate tasks between available agents."
# Claude Code agent (direct integration)
claude:
type: claude_code
description: "Claude Code for complex refactoring"
# Codex agent (direct integration)
codex:
type: codex
model: gpt-5.1-codex-max
reasoning_effort: medium
description: "Codex for code editing with advanced reasoning"
# ACP protocol agents
goose:
type: acp
provider: goose
description: "Goose for file operations"
# AG-UI protocol agent
agui_agent:
type: agui
url: "http://localhost:8000"
description: "Custom AG-UI agent"
```
Now `coordinator` can delegate work to any of these agents, and all are accessible through the same interface.
## Key Features
### Multi-Agent Coordination
Agents can form teams (parallel) or chains (sequential):
```yaml
teams:
review_pipeline:
mode: sequential
members: [analyzer, reviewer, formatter]
parallel_coders:
mode: parallel
members: [claude, goose]
```
```python
async with AgentPool("agents.yml") as pool:
# Parallel execution
team = pool.get_agent("analyzer") & pool.get_agent("reviewer")
results = await team.run("Review this code")
# Sequential pipeline
chain = analyzer | reviewer | formatter
result = await chain.run("Process this")
```
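The `&` and `|` operators follow a simple combinator pattern: combining two agents yields another agent-like object that can itself be run or combined further. A minimal synchronous illustration of that pattern (agentpool's real classes are async and far richer):

```python
# Minimal illustration of the combinator pattern behind `&` (parallel team)
# and `|` (sequential chain); not agentpool's actual implementation.
class Agent:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def run(self, prompt):
        return self.fn(prompt)

    def __and__(self, other):
        # Parallel team: both members see the same prompt
        return Agent(f"{self.name}&{other.name}",
                     lambda p: [self.run(p), other.run(p)])

    def __or__(self, other):
        # Sequential chain: output of the first feeds the second
        return Agent(f"{self.name}|{other.name}",
                     lambda p: other.run(self.run(p)))

upper = Agent("upper", str.upper)
excl = Agent("excl", lambda s: s + "!")

print((upper & excl).run("hi"))  # → ['HI', 'hi!']
print((upper | excl).run("hi"))  # → HI!
```

Because each combination is itself an `Agent`, teams and chains nest freely, which is what makes one-line pipelines like `analyzer | reviewer | formatter` possible.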
### Rich YAML Configuration
Everything is configurable - models, tools, connections, triggers, storage:
```yaml
agents:
analyzer:
type: native
model:
type: fallback
models: [openai:gpt-4o, anthropic:claude-sonnet-4-0]
tools:
- type: subagent
- type: resource_access
mcp_servers:
- "uvx mcp-server-filesystem"
knowledge:
paths: ["docs/**/*.md"]
connections:
- type: node
name: reporter
filter_condition:
type: word_match
words: [error, warning]
```
### Server Protocols
AgentPool can expose your agents through multiple server protocols:
| Server | Command | Use Case |
|--------|---------|----------|
| **ACP** | `agentpool serve-acp` | IDE integration (Zed, Toad) - bidirectional communication with tool confirmations |
| **OpenCode** | `agentpool serve-opencode` | OpenCode TUI/Desktop - supports remote filesystems via fsspec |
| **MCP** | `agentpool serve-mcp` | Expose tools to other agents |
| AG-UI | `agentpool serve-agui` | AG-UI compatible frontends |
| OpenAI API | `agentpool serve-api` | Drop-in OpenAI API replacement |
The **ACP server** is ideal for IDE integration - it provides real-time tool confirmations and session management. The **OpenCode server** enables the OpenCode TUI to control AgentPool agents, including agents operating on remote environments (Docker, SSH, cloud sandboxes).
### Additional Capabilities
- **Structured Output**: Define response schemas inline or import Python types
- **Storage & Analytics**: Track all interactions with configurable providers
- **File Abstraction**: UPath-backed operations work on local and remote sources
- **Triggers**: React to file changes, webhooks, or custom events
- **Streaming TTS**: Voice output support for all agents
## Usage Patterns
### CLI
```bash
agentpool run agent_name "prompt" # Single run
agentpool serve-acp config.yml # ACP server for IDEs
agentpool serve-opencode config.yml # OpenCode TUI server
agentpool serve-mcp config.yml # MCP server
agentpool watch --config agents.yml # React to triggers
agentpool history stats --group-by model # View analytics
```
### Programmatic
```python
from agentpool import AgentPool
async with AgentPool("agents.yml") as pool:
agent = pool.get_agent("assistant")
# Simple run
result = await agent.run("Hello")
# Streaming
async for event in agent.run_stream("Tell me a story"):
print(event)
# Multi-modal
result = await agent.run("Describe this", Path("image.jpg"))
```
## Documentation
For complete documentation including advanced configuration, connection patterns, and API reference, visit [phil65.github.io/agentpool](https://phil65.github.io/agentpool/).
| text/markdown | Philipp Temminghoff | Philipp Temminghoff <philipptemminghoff@googlemail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Framework :: Pydantic",
"Framework :: Pydantic :: 2",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Internet",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"alembic>=1.16.5",
"anyenv[httpx]>=0.3.0",
"clawd-code-sdk>=0.1.36",
"docler>=1.0.3",
"docstring-parser>=0.17.0",
"epregistry",
"evented>=1.0.5",
"exxec>=0.1.0",
"fastapi",
"fastmcp>=2.12.4",
"fsspec",
"httpx",
"jinja2",
"jinjarope",
"keyring>=25.6.0",
"llmling-models>=1.4.1",
"logfire[fastapi]",
"mcp>=1.2.0",
"pillow>=11.3.0",
"platformdirs",
"promptantic>=0.4.5",
"psygnal>=0.11.1",
"pydantic>=2.10.0",
"pydantic-ai-slim[anthropic,google,mistral,openai,retries]>=1.0.0",
"pydocket>=0.16.1",
"python-dotenv>=1.0.1",
"rich",
"ripgrep-rs>=0.3.0",
"schemez[codegen]",
"searchly[all]>=2.0.1",
"slashed>=0.1.0",
"sqlalchemy[aiosqlite]",
"sqlmodel>=0.0.22",
"structlog>=25.5.0",
"sublime-search>=0.3.1",
"tokonomics>=0.1.2",
"toprompt>=0.0.1",
"typer",
"upathtools[httpx]>=0.1.0",
"uvicorn[standard]",
"watchfiles>=1.1.1",
"websockets>=15.0",
"yamling>=2.0.2",
"fasta2a; extra == \"a2a\"",
"starlette; extra == \"a2a\"",
"ag-ui-protocol>=0.1.10; extra == \"ag-ui\"",
"python-telegram-bot[socks]>=21.0; extra == \"bot\"",
"slack-sdk>=3.26.0; extra == \"bot\"",
"slackify-markdown>=0.2.0; extra == \"bot\"",
"croniter>=2.0.0; extra == \"bot\"",
"braintrust; extra == \"braintrust\"",
"autoevals; extra == \"braintrust\"",
"copykitten; extra == \"clipboard\"",
"rustworkx>=0.17.1; extra == \"coding\"",
"grep-ast; extra == \"coding\"",
"ast-grep-py>=0.40.0; extra == \"coding\"",
"tree-sitter>=0.25.2; extra == \"coding\"",
"tree-sitter-python>=0.25.0; extra == \"coding\"",
"tree-sitter-c>=0.24.1; extra == \"coding\"",
"tree-sitter-javascript>=0.25.0; extra == \"coding\"",
"tree-sitter-typescript>=0.23.0; extra == \"coding\"",
"tree-sitter-cpp>=0.23.0; extra == \"coding\"",
"tree-sitter-rust>=0.23.0; extra == \"coding\"",
"tree-sitter-go>=0.23.0; extra == \"coding\"",
"tree-sitter-json>=0.24.0; extra == \"coding\"",
"tree-sitter-yaml>=0.6.0; extra == \"coding\"",
"composio; extra == \"composio\"",
"evented[all]; extra == \"events\"",
"langfuse; extra == \"langfuse\"",
"markitdown; python_full_version < \"3.14\" and extra == \"markitdown\"",
"fastembed>=0.7.4; python_full_version < \"3.14\" and extra == \"mcp-discovery\"",
"lancedb>=0.26.0; python_full_version < \"3.14\" and extra == \"mcp-discovery\"",
"pyarrow>=19.0.0; python_full_version < \"3.14\" and extra == \"mcp-discovery\"",
"mcpx-py>=0.7.0; extra == \"mcp-run\"",
"apprise>=1.9.5; extra == \"notifications\"",
"promptlayer; extra == \"promptlayer\"",
"tiktoken; extra == \"tiktoken\"",
"anyvoice[openai,tts-edge]>=0.0.2; extra == \"tts\"",
"zstandard>=0.23.0; extra == \"zed\""
] | [] | [] | [] | [
"Code coverage, https://app.codecov.io/gh/phil65/agentpool",
"Discussions, https://github.com/phil65/agentpool/discussions",
"Documentation, https://phil65.github.io/agentpool/",
"Issues, https://github.com/phil65/agentpool/issues",
"Source, https://github.com/phil65/agentpool"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:57:58.996633 | agentpool-2.8.16.tar.gz | 3,835,901 | 32/24/bc6eef53536c322f837f4a97ec181d84da41165bdaac3ed18458e325a28c/agentpool-2.8.16.tar.gz | source | sdist | null | false | 54a562c0ca1dbd2996d92cd4c1453db4 | 3962b5e83455d89b122fae260c84cc07d6cfcac999d59f8ad2b18f39b7e30f39 | 3224bc6eef53536c322f837f4a97ec181d84da41165bdaac3ed18458e325a28c | MIT | [
"LICENSE"
] | 220 |
2.4 | libephemeris | 0.15.0 | A high-precision, open-source astronomical ephemeris library for Python, powered by Skyfield. | # LibEphemeris
<div align="left">
<img src="https://static.pepy.tech/badge/libephemeris/month" alt="PyPI Downloads">
<img src="https://static.pepy.tech/badge/libephemeris/week" alt="PyPI Downloads">
<img src="https://static.pepy.tech/personalized-badge/libephemeris?period=total&units=INTERNATIONAL_SYSTEM&left_color=GREY&right_color=BLUE&left_text=downloads/total" alt="PyPI Downloads">
<img src="https://img.shields.io/pypi/v/libephemeris.svg" alt="PyPI Version">
<img src="https://img.shields.io/pypi/pyversions/libephemeris.svg" alt="Python Versions">
<img src="https://img.shields.io/github/license/g-battaglia/libephemeris.svg" alt="License">
</div>
A pure Python astronomical ephemeris library based on NASA JPL data. Designed as a 1:1 API-compatible drop-in replacement for PySwisseph, strictly focused on scientific precision.
- Drop-in replacement for `pyswisseph`
- Uses NASA JPL ephemerides (DE440/DE441) via Skyfield
- Uses modern IAU standards (ERFA/pyerfa)
> [!WARNING]
> **Pre-Alpha** -- The public API may change without notice.
---
## Why
Swiss Ephemeris is fast and widely used. LibEphemeris makes a different trade-off:
- **Accuracy-first:** JPL numerical integrations (DE440/DE441) + IAU-standard precession/nutation
- **Transparent:** pure Python, fully testable, and documented with references
- **Slower than SwissEph:** Swiss is C; LibEphemeris is Python (see Performance)
Precision details (models, term counts, measured comparisons, references): `docs/PRECISION.md`.
---
## Installation
```bash
pip install libephemeris
```
DE440 (~128 MB) downloads automatically on first use.
Download all data files for your use case (ephemeris + SPK kernels):
```bash
libephemeris download:medium # recommended for most users
```
See [CLI commands](#cli-commands) for all options and tiers.
### Optional extras
```bash
pip install libephemeris[spk] # Automatic SPK downloads from JPL Horizons
pip install libephemeris[nbody] # REBOUND/ASSIST n-body integration
pip install libephemeris[all] # Everything
```
**Requirements:** Python 3.9+ • skyfield >= 1.54 • pyerfa >= 2.0
---
## Quick start
```python
import libephemeris as swe
from libephemeris.constants import SE_SUN, SE_MOON, SEFLG_SPEED
jd = swe.julday(2000, 1, 1, 12.0) # J2000.0
sun, _ = swe.calc_ut(jd, SE_SUN, SEFLG_SPEED)
moon, _ = swe.calc_ut(jd, SE_MOON, SEFLG_SPEED)
print("Sun lon:", sun[0], "deg")
print("Moon lon:", moon[0], "deg")
```
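For reference, `julday(2000, 1, 1, 12.0)` is J2000.0, i.e. JD 2451545.0. The conversion itself is a standard Gregorian-calendar algorithm (Fliegel & Van Flandern), sketched here independently of the library:

```python
def julian_day(y: int, m: int, d: int, hour: float) -> float:
    # Standard integer Julian Day Number algorithm for the Gregorian calendar,
    # then shift so the .0 boundary falls at 12h UT (noon).
    a = (14 - m) // 12
    y2 = y + 4800 - a
    m2 = m + 12 * a - 3
    jdn = d + (153 * m2 + 2) // 5 + 365 * y2 + y2 // 4 - y2 // 100 + y2 // 400 - 32045
    return jdn + (hour - 12.0) / 24.0

print(julian_day(2000, 1, 1, 12.0))  # → 2451545.0 (J2000.0)
```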
Houses:
```python
jd = swe.julday(2024, 11, 5, 18.0)
cusps, ascmc = swe.houses(jd, 41.9028, 12.4964, b"P") # Placidus
print("ASC:", ascmc[0], "deg")
print("MC:", ascmc[1], "deg")
```
---
## Calculation flags
Flags are bitmasks that control what is calculated and how the result is returned. `calc_ut()` returns a 6-element tuple `(longitude, latitude, distance, speed_lon, speed_lat, speed_dist)` and a return flag. Combine multiple flags with bitwise OR (`|`).
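Since each flag is a distinct power of two, OR combines flags and AND tests for them. A self-contained illustration (the constant values shown are the conventional Swiss Ephemeris ones, used here purely for demonstration):

```python
SEFLG_SPEED = 256        # bit 8
SEFLG_EQUATORIAL = 2048  # bit 11

flags = SEFLG_SPEED | SEFLG_EQUATORIAL  # combine with bitwise OR → 2304
print(bool(flags & SEFLG_SPEED))        # membership test with AND → True
print(bool(flags & 64))                 # an unrelated bit stays unset → False
```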
### Velocity
- `SEFLG_SPEED` — Populates the speed fields (`pos[3]`–`pos[5]`) with daily motion in longitude, latitude, and distance. Without this flag those values are zero. Almost every call should include it.
### Observer
By default the observer is at Earth's center (geocentric).
- `SEFLG_HELCTR` — Heliocentric: moves the observer to the Sun. Distances become heliocentric AU.
- `SEFLG_TOPOCTR` — Topocentric: places the observer on Earth's surface at the position set with `swe_set_topo()`. This matters most for the Moon (up to ~1° parallax).
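The ~1° figure for the Moon follows directly from geometry: horizontal parallax is about `asin(R_earth / distance)`. A quick back-of-the-envelope check with assumed mean values:

```python
import math

# Earth's equatorial radius ≈ 6378 km; mean lunar distance ≈ 384,400 km.
# Maximum (horizon) parallax for an observer on the surface:
parallax_deg = math.degrees(math.asin(6378.0 / 384400.0))
print(round(parallax_deg, 2))  # → 0.95, consistent with the ~1° quoted above
```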
### Coordinates
By default the output is ecliptic longitude/latitude of date.
- `SEFLG_EQUATORIAL` — Switches to equatorial coordinates: `pos[0]` becomes Right Ascension (0–360°) and `pos[1]` becomes Declination (±90°). Speeds change accordingly.
### Reference frame
By default positions are precessed to the equinox of date.
- `SEFLG_J2000` — Keeps coordinates in the J2000.0 reference frame instead of precessing to the equinox of date.
- `SEFLG_NONUT` — Excludes nutation, giving positions on the mean ecliptic/equator.
### Position corrections
By default positions are apparent (light-time and aberration corrected).
- `SEFLG_TRUEPOS` — Geometric position: no light-time correction. Returns where the body actually is at the instant of calculation.
- `SEFLG_NOABERR` — Astrometric position: light-time corrected but no aberration. Comparable to star catalog positions.
- `SEFLG_ASTROMETRIC` — Convenience shorthand for `SEFLG_NOABERR | SEFLG_NOGDEFL`.
### Sidereal zodiac
- `SEFLG_SIDEREAL` — Subtracts the ayanamsha from ecliptic longitude, returning sidereal rather than tropical positions. Requires a prior `swe_set_sid_mode()` call to select the ayanamsha (Lahiri, Fagan-Bradley, etc.).
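Conceptually the sidereal conversion is just an offset and a wrap; a minimal sketch (the 23.85° value approximates the Lahiri ayanamsha near J2000 and is illustrative only):

```python
def to_sidereal(tropical_lon: float, ayanamsha: float) -> float:
    # Subtract the ayanamsha and wrap the result back into [0, 360).
    return (tropical_lon - ayanamsha) % 360.0

print(round(to_sidereal(10.0, 23.85), 2))  # → 346.15 (wraps below 0° into range)
```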
### Combining flags
```python
# Heliocentric position with velocity
pos, _ = swe.calc_ut(jd, SE_MARS, SEFLG_SPEED | SEFLG_HELCTR)
# Sidereal equatorial coordinates
pos, _ = swe.calc_ut(jd, SE_SUN, SEFLG_SPEED | SEFLG_EQUATORIAL | SEFLG_SIDEREAL)
```
> [!NOTE]
> `SEFLG_MOSEPH` and `SEFLG_SWIEPH` are accepted for API compatibility but
> silently ignored — all calculations always use JPL DE440/DE441 via Skyfield.
> `SEFLG_BARYCTR` is mapped to heliocentric. `SEFLG_XYZ`, `SEFLG_RADIANS`,
> `SEFLG_SPEED3`, and `SEFLG_ICRS` are defined but not yet implemented.
---
## Choose the ephemeris (DE440 vs DE441)
Default ephemeris: **DE440** (1550--2650 CE).
To use **DE441** (extended range -13200 to +17191 CE):
```python
import libephemeris as swe
swe.set_ephemeris_file("de441.bsp")
```
Or via environment variable:
```bash
export LIBEPHEMERIS_EPHEMERIS=de441.bsp
```
Dates outside the loaded kernel raise `EphemerisRangeError`.
---
## Outer-planet centers (barycenter vs center)
JPL DE ephemerides provide system barycenters for Jupiter/Saturn/Uranus/Neptune/Pluto.
LibEphemeris corrects these to planet body centers automatically:
1. Uses a compact SPK file (`planet_centers.bsp`) downloaded by `libephemeris download-data`
2. Falls back to analytical satellite models when SPK coverage is not available
Full technical details are in `docs/PRECISION.md`.
---
## Minor bodies (SPK)
For many asteroids and TNOs, high precision requires JPL SPK kernels.
```python
import libephemeris as swe
from libephemeris.constants import SE_CHIRON
swe.set_auto_spk_download(True)
swe.set_spk_cache_dir("./spk_cache")
pos, _ = swe.calc_ut(2460000.0, SE_CHIRON, 0)
print(pos[0])
```
### Optional Dependencies
LibEphemeris has several optional dependencies for enhanced functionality:
| Extra | Description | Dependencies |
|-------|-------------|--------------|
| `[spk]` | Automatic SPK downloads from JPL Horizons | `astroquery` |
| `[stars]` | Star catalog access | `astropy` |
| `[nbody]` | N-body integration | `rebound`, `reboundx` |
| `[all]` | All optional features | All above |
```bash
pip install libephemeris[spk] # For auto-downloading SPK kernels from Horizons
pip install libephemeris[stars] # For accessing star catalogs via astropy
pip install libephemeris[all] # Install all optional dependencies
```
**Note:** `pyerfa` is a required dependency and provides IAU 2006/2000A precession-nutation models for high-precision calculations.
---
## Thread safety
The global API is pyswisseph-compatible and uses global mutable state.
For concurrency, use `EphemerisContext`:
```python
from libephemeris import EphemerisContext, SE_SUN
ctx = EphemerisContext()
ctx.set_topo(12.5, 41.9, 0)
pos, _ = ctx.calc_ut(2451545.0, SE_SUN, 0)
```
---
## Docs
- `docs/PRECISION.md` (scientific models, measured precision, references)
- `docs/migration-guide.md` (pyswisseph -> libephemeris)
- `docs/HOUSE_SYSTEMS.md`
- `docs/AYANAMSHA.md`
- `docs/testing.md`
---
## Performance
LibEphemeris is pure Python; Swiss Ephemeris is C. Expect LibEphemeris to be slower.
For batch workloads, use `EphemerisContext`, parallelism, and caching.
---
## Development
```bash
git clone https://github.com/g-battaglia/libephemeris.git
cd libephemeris
uv pip install -e ".[dev]"
```
All development tasks use [poethepoet](https://poethepoet.naberhaus.dev/) (`poe`).
### Code quality
| Command | Description |
|---------|-------------|
| `poe format` | Format code with Ruff |
| `poe lint` | Lint and auto-fix with Ruff |
| `poe typecheck` | Type-check with mypy |
### Tests
| Command | Description |
|---------|-------------|
| `poe test` | Fast tests (excludes `@pytest.mark.slow`) |
| `poe test:full` | All tests including slow ones |
| `poe test:fast` | Fast tests, parallel (`-n auto`) |
| `poe test:fast:essential` | Parallel essential subset (~670 tests, 1 file per module) |
| `poe test:unit` | Unit tests only (`tests/`) |
| `poe test:unit:fast` | Unit tests, parallel |
| `poe test:compare` | Comparison tests vs pyswisseph (`compare_scripts/tests/`) |
| `poe test:compare:fast` | Comparison tests, parallel |
| `poe coverage` | Fast tests with coverage report |
| `poe coverage:full` | All tests with coverage report |
### Tier diagnostics
Run diagnostic tables showing all celestial bodies with coordinates, velocities, and data source for each precision tier:
| Command | Description |
|---------|-------------|
| `poe diag:base` | Diagnostic for base tier (1850-2150) |
| `poe diag:medium` | Diagnostic for medium tier (1550-2650) |
| `poe diag:extended` | Diagnostic for extended tier (-13200 to +17191) |
### SPK downloads (dev)
Download SPK kernels for minor bodies directly (without the full CLI tier setup):
| Command | Description |
|---------|-------------|
| `poe spk:download:base` | SPK for base tier range (1850-2150) |
| `poe spk:download:medium` | SPK for medium tier range (1900-2100) |
| `poe spk:download:extended` | Max-range SPK files (1600-2500, single file per body) |
### Data generation
| Command | Description |
|---------|-------------|
| `poe generate-planet-centers-spk` | Generate `planet_centers.bsp` (requires `spiceypy`) |
| `poe generate-lunar-corrections` | Regenerate lunar correction tables (requires `de441.bsp`) |
---
## License
LGPL-3.0. See `LICENSE`.
| text/markdown | null | Giacomo Battaglia <giacomo@libephemeris.dev> | null | Giacomo Battaglia <giacomo@libephemeris.dev> | null | astronomy, ephemeris, astrology, planetary, skyfield, swiss-ephemeris, astrological, celestial, horoscope, zodiac | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Astronomy",
"Topic :: Scientific/Engineering :: Physics",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"skyfield>=1.54",
"skyfield-data>=7.0.0",
"astroquery>=0.4.7",
"certifi",
"spktype21>=0.1.0",
"pyerfa>=2.0.0",
"astropy>=5.0.0; extra == \"stars\"",
"rebound>=4.0.0; extra == \"nbody\"",
"assist>=1.1.0; extra == \"nbody\"",
"rebound>=4.0.0; extra == \"all\"",
"assist>=1.1.0; extra == \"all\"",
"black>=25.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"poethepoet<0.38.0,>=0.37.0; extra == \"dev\"",
"pyswisseph>=2.10.3.2; extra == \"dev\"",
"pytest<9.0.0,>=8.0.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"ruff>=0.14.6; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"build>=1.0.0; extra == \"dev\"",
"twine>=5.0.0; extra == \"dev\"",
"pytest-sugar>=1.1.1; extra == \"dev\"",
"spiceypy>=6.0.0; extra == \"dev\"",
"sphinx>=7.0.0; extra == \"docs\"",
"sphinx-rtd-theme>=2.0.0; extra == \"docs\"",
"myst-parser>=2.0.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/g-battaglia/libephemeris",
"Documentation, https://github.com/g-battaglia/libephemeris#readme",
"Repository, https://github.com/g-battaglia/libephemeris",
"Issues, https://github.com/g-battaglia/libephemeris/issues",
"Changelog, https://github.com/g-battaglia/libephemeris/blob/main/CHANGELOG.md"
] | uv/0.9.27 {"installer":{"name":"uv","version":"0.9.27","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T15:57:56.283667 | libephemeris-0.15.0.tar.gz | 624,806 | e3/87/68548fd021edd498dd450e36bd94b1f9d241f0459f67af52985cb19d8325/libephemeris-0.15.0.tar.gz | source | sdist | null | false | 767adb983277aec138267288153f226b | ba89d32c71de8e1fc6e46e667d68a7bebf003634f4f79389129c728c5e65340e | e38768548fd021edd498dd450e36bd94b1f9d241f0459f67af52985cb19d8325 | LGPL-3.0-only | [
"LICENSE"
] | 211 |
2.4 | neo4j-viz | 1.2.0 | A simple graph visualization tool | # Graph Visualization for Python by Neo4j
[](https://pypi.org/project/neo4j-viz/)
[](https://pypi.org/project/neo4j-viz/)

[](https://neo4j.com/docs/nvl-python/preview/)
[](https://discord.gg/neo4j)
[](https://community.neo4j.com)
[](https://pypi.org/project/neo4j-viz/)
`neo4j-viz` is a Python package for creating interactive graph visualizations.
The output is of type `IPython.display.HTML` and can be viewed directly in a Jupyter Notebook or Streamlit application.
Alternatively, you can export the output to a file and view it in a web browser.
The package wraps the [Neo4j Visualization JavaScript library (NVL)](https://neo4j.com/docs/nvl/current/).

## Some notable features
* Easy to import graphs represented as:
* projections in the Neo4j Graph Data Science (GDS) library
* graphs from Neo4j query results
* pandas DataFrames
* Node features:
* Sizing
* Colors
* Captions
* Pinning
* On hover tooltip
* Relationship features:
* Colors
* Captions
* On hover tooltip
* Graph features:
* Zooming
* Panning
* Moving nodes
* Using different layouts
* Additional convenience functionality for:
* Resizing nodes, optionally including scale normalization
* Coloring nodes based on a property
* Toggle whether nodes should be pinned or not
Please note that this list is by no means exhaustive.
## Getting started
### Installation
Simply install with pip:
```sh
pip install neo4j-viz
```
### Basic usage
We will use a small toy graph representing the purchase history of a few people and products.
We start by instantiating the [Nodes](https://neo4j.com/docs/nvl-python/preview/api-reference/node.html) and
[Relationships](https://neo4j.com/docs/nvl-python/preview/api-reference/relationship.html) we want in our graph.
The only mandatory field for a node is "id"; for a relationship, "source" and "target" are required.
The other fields can optionally be used to customize the appearance of the nodes and relationships in the
visualization.
Lastly we create a
[VisualizationGraph](https://neo4j.com/docs/nvl-python/preview/api-reference/visualization-graph.html) object with the
nodes and relationships we created, and call its `render` method to display the graph.
```python
from neo4j_viz import Node, Relationship, VisualizationGraph
nodes = [
Node(id=0, size=10, caption="Person"),
Node(id=1, size=10, caption="Product"),
Node(id=2, size=20, caption="Product"),
Node(id=3, size=10, caption="Person"),
Node(id=4, size=10, caption="Product"),
]
relationships = [
Relationship(
source=0,
target=1,
caption="BUYS",
),
Relationship(
source=0,
target=2,
caption="BUYS",
),
Relationship(
source=3,
target=2,
caption="BUYS",
),
]
VG = VisualizationGraph(nodes=nodes, relationships=relationships)
VG.render()
```
This will return an `IPython.display.HTML` object that can be rendered in a Jupyter Notebook or Streamlit application.
Please refer to the [documentation](https://neo4j.com/docs/nvl-python/preview/) for more details on the API and usage.
### Examples
For more extensive examples, including how to import graphs from Neo4j GDS projections and pandas DataFrames,
check out the [tutorials chapter](https://neo4j.com/docs/nvl-python/preview/tutorials/index.html) in the documentation.
| text/markdown | null | Neo4j <team-gds@neo4j.org> | null | null | null | graph, visualization, neo4j | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Database :: Front-Ends",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: Software Development",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ipython<10,>=7",
"pydantic<3,>=2",
"pydantic-extra-types<3,>=2",
"enum-tools==0.13.0",
"anywidget<1,>=0.9",
"traitlets<6,>=5",
"pandas<3,>=2; extra == \"pandas\"",
"pandas-stubs<3,>=2; extra == \"pandas\"",
"graphdatascience<2,>=1; extra == \"gds\"",
"neo4j; extra == \"neo4j\"",
"snowflake-snowpark-python<2,>=1; extra == \"snowflake\""
] | [] | [] | [] | [
"Homepage, https://neo4j.com/",
"Repository, https://github.com/neo4j/python-graph-visualization",
"Issues, https://github.com/neo4j/python-graph-visualization/issues",
"Documentation, https://neo4j.com/docs/python-graph-visualization/"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T15:57:47.788289 | neo4j_viz-1.2.0.tar.gz | 2,195,134 | 1f/c8/33339c53c98bdd5ff0fd473f522ad3494a4ac82d56e05e5c7f6a7ffc6dad/neo4j_viz-1.2.0.tar.gz | source | sdist | null | false | c269642e127265151776037cc09863d3 | 5fe3b1584707cb5784fb5f6d94b8ced0a507f9a92af373415dd2e408b8be6e5a | 1fc833339c53c98bdd5ff0fd473f522ad3494a4ac82d56e05e5c7f6a7ffc6dad | GPL-3.0-only | [
"LICENSE"
] | 294 |
2.4 | mini-swe-agent | 2.2.3 | Nano SWE Agent - A simple AI software engineering agent | <div align="center">
<a href="https://mini-swe-agent.com/latest/"><img src="https://github.com/SWE-agent/mini-swe-agent/raw/main/docs/assets/mini-swe-agent-banner.svg" alt="mini-swe-agent banner" style="height: 7em"/></a>
</div>
# The minimal AI software engineering agent
📣 [New tutorial on building minimal AI agents](https://minimal-agent.com/)<br/>
📣 [Gemini 3 Pro reaches 74% on SWE-bench verified with mini-swe-agent!](https://x.com/KLieret/status/1991164693839270372)<br/>
📣 [New blogpost: Randomly switching between GPT-5 and Sonnet 4 boosts performance](https://www.swebench.com/post-250820-mini-roulette.html)
[](https://mini-swe-agent.com/latest/)
[](https://join.slack.com/t/swe-bench/shared_invite/zt-36pj9bu5s-o3_yXPZbaH2wVnxnss1EkQ)
[](https://pypi.org/project/mini-swe-agent/)
> [!WARNING]
> This is **mini-swe-agent v2**. Read the [migration guide](https://mini-swe-agent.com/latest/advanced/v2_migration/). For the previous version, check out the [v1 branch](https://github.com/SWE-agent/mini-swe-agent/tree/v1).
In 2024, we built [SWE-bench](https://github.com/swe-bench/SWE-bench) & [SWE-agent](https://github.com/swe-agent/swe-agent) and helped kickstart the coding agent revolution.
We now ask: **What if our agent was 100x simpler, and still worked nearly as well?**
`mini` is
- **Widely adopted**: Used by Meta, NVIDIA, Essential AI, IBM, Nebius, Anyscale, Princeton University, Stanford University, and many more.
- **Minimal**: Just some 100 lines of Python for the [agent class](https://github.com/SWE-agent/mini-swe-agent/blob/main/src/minisweagent/agents/default.py) (and a bit more for the [environment](https://github.com/SWE-agent/mini-swe-agent/blob/main/src/minisweagent/environments/local.py),
[model](https://github.com/SWE-agent/mini-swe-agent/blob/main/src/minisweagent/models/litellm_model.py), and [run script](https://github.com/SWE-agent/mini-swe-agent/blob/main/src/minisweagent/run/hello_world.py)) — no fancy dependencies!
- **Performant:** Scores >74% on the [SWE-bench verified benchmark](https://www.swebench.com/); starts much faster than Claude Code
- **Deployable:** Supports **local environments**, **docker/podman**, **singularity/apptainer**, **bubblewrap**, **contree**, and more
- **Compatible:** Supports all models via **litellm**, **openrouter**, **portkey**, and more. Support for `/completion` and `/response` endpoints, interleaved thinking etc.
- Built by the Princeton & Stanford team behind [SWE-bench](https://swebench.com), [SWE-agent](https://swe-agent.com), and more
- **Tested:** [](https://codecov.io/gh/SWE-agent/mini-swe-agent)
<details>
<summary>More motivation (for research)</summary>
[SWE-agent](https://swe-agent.com/latest/) jump-started the development of AI agents in 2024. Back then, we placed a lot of emphasis on tools and special interfaces for the agent.
However, one year later, as LMs have become more capable, a lot of this is not needed at all to build a useful agent!
In fact, the `mini` agent
- **Does not have any tools other than bash** — it doesn't even need to use the tool-calling interface of the LMs.
This means that you can run it with literally any model. When running in sandboxed environments you also don't need to take care
of installing a single package — all it needs is bash.
- **Has a completely linear history** — every step of the agent just appends to the messages and that's it.
So there's no difference between the trajectory and the messages that you pass on to the LM.
Great for debugging & fine-tuning.
- **Executes actions with `subprocess.run`** — every action is completely independent (as opposed to keeping a stateful shell session running).
This makes it trivial to execute the actions in sandboxes (literally just switch out `subprocess.run` with `docker exec`) and to
scale up effortlessly. Seriously, this is [a big deal](https://mini-swe-agent.com/latest/faq/#why-no-shell-session), trust me.
This makes it perfect as a baseline system and for a system that puts the language model (rather than
the agent scaffold) in the middle of our attention.
You can see the result on the [SWE-bench (bash only)](https://www.swebench.com/) leaderboard, which evaluates the performance of different LMs with `mini`.
</details>
<details>
<summary>More motivation (as a tool)</summary>
Some agents are overfitted research artifacts. Others are UI-heavy frontend monsters.
The `mini` agent wants to be a hackable tool, not a black box.
- **Simple** enough to understand at a glance
- **Convenient** enough to use in daily workflows
- **Flexible** to extend
Unlike other agents (including our own [swe-agent](https://swe-agent.com/latest/)), it is radically simpler, because it:
- **Does not have any tools other than bash** — it doesn't even need to use the tool-calling interface of the LMs.
Instead of implementing custom tools for every specific thing the agent might want to do, the focus is fully on the LM utilizing the shell to its full potential.
Want it to do something specific like opening a PR?
Just tell the LM to figure it out rather than spending time to implement it in the agent.
- **Executes actions with `subprocess.run`** — every action is completely independent (as opposed to keeping a stateful shell session running).
This is [a big deal](https://mini-swe-agent.com/latest/faq/#why-no-shell-session) for the stability of the agent, trust me.
- **Has a completely linear history** — every step of the agent just appends to the messages that are passed to the LM in the next step and that's it.
This is great for debugging and understanding what the LM is prompted with.
</details>
<details>
<summary>Should I use SWE-agent or mini-SWE-agent?</summary>
You should consider `mini-swe-agent` your default choice.
In particular, you should use `mini-swe-agent` if
- You want a quick command line tool that works locally
- You want an agent with a very simple control flow
- You want even faster, simpler & more stable sandboxing & benchmark evaluations
- You are doing FT or RL and don't want to overfit to a specific agent scaffold
You should use `swe-agent` if
- You want to experiment with different sets of tools, each with their own interface
- You want to experiment with different history processors
What you get with both
- Excellent performance on SWE-Bench
- A trajectory browser
</details>
<table>
<tr>
<td width="50%">
<a href="https://mini-swe-agent.com/latest/usage/mini/"><strong>CLI</strong></a> (<code>mini</code>)
</td>
<td>
<a href="https://mini-swe-agent.com/latest/usage/swebench/"><strong>Batch inference</strong></a>
</td>
</tr>
<tr>
<td width="50%">

</td>
<td>

</td>
</tr>
<tr>
<td>
<a href="https://mini-swe-agent.com/latest/usage/inspector/"><strong>Trajectory browser</strong></a>
</td>
<td>
<a href="https://mini-swe-agent.com/latest/advanced/cookbook/"><strong>Python bindings</strong></a>
</td>
</tr>
<tr>
<td>

</td>
<td>
```python
agent = DefaultAgent(
LitellmModel(model_name=...),
LocalEnvironment(),
)
agent.run("Write a sudoku game")
```
</td>
</tr>
</table>
## Let's get started!
**Option 1:** If you just want to try out the CLI (package installed in an anonymous virtual environment)
```bash
pip install uv && uvx mini-swe-agent
# or
pip install pipx && pipx ensurepath && pipx run mini-swe-agent
```
**Option 2:** Install CLI & python bindings in current environment
```bash
pip install mini-swe-agent
mini # run the CLI
```
**Option 3:** Install from source (developer setup)
```bash
git clone https://github.com/SWE-agent/mini-swe-agent.git
cd mini-swe-agent && pip install -e .
mini # run the CLI
```
Read more in our [documentation](https://mini-swe-agent.com/latest/):
* [Quick start guide](https://mini-swe-agent.com/latest/quickstart/)
* [Using the `mini` CLI](https://mini-swe-agent.com/latest/usage/mini/)
* [Global configuration](https://mini-swe-agent.com/latest/advanced/global_configuration/)
* [Yaml configuration files](https://mini-swe-agent.com/latest/advanced/yaml_configuration/)
* [Power up with the cookbook](https://mini-swe-agent.com/latest/advanced/cookbook/)
* [FAQ](https://mini-swe-agent.com/latest/faq/)
* [Contribute!](https://mini-swe-agent.com/latest/contributing/)
## Attribution
If you found this work helpful, please consider citing the [SWE-agent paper](https://arxiv.org/abs/2405.15793) in your work:
```bibtex
@inproceedings{yang2024sweagent,
title={{SWE}-agent: Agent-Computer Interfaces Enable Automated Software Engineering},
author={John Yang and Carlos E Jimenez and Alexander Wettig and Kilian Lieret and Shunyu Yao and Karthik R Narasimhan and Ofir Press},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://arxiv.org/abs/2405.15793}
}
```
Our other projects:
<div align="center">
<a href="https://github.com/SWE-agent/SWE-agent"><img src="https://raw.githubusercontent.com/SWE-agent/swe-agent-media/refs/heads/main/media/logos_banners/sweagent_logo_text_below.svg" alt="SWE-agent" height="120px"></a>
<a href="https://github.com/SWE-agent/SWE-ReX"><img src="https://raw.githubusercontent.com/SWE-agent/swe-agent-media/refs/heads/main/media/logos_banners/swerex_logo_text_below.svg" alt="SWE-ReX" height="120px"></a>
<a href="https://github.com/SWE-bench/SWE-bench"><img src="https://raw.githubusercontent.com/SWE-agent/swe-agent-media/refs/heads/main/media/logos_banners/swebench_logo_text_below.svg" alt="SWE-bench" height="120px"></a>
<a href="https://github.com/SWE-bench/SWE-smith"><img src="https://raw.githubusercontent.com/SWE-agent/swe-agent-media/refs/heads/main/media/logos_banners/swesmith_logo_text_below.svg" alt="SWE-smith" height="120px"></a>
<a href="https://github.com/codeclash-ai/codeclash"><img src="https://raw.githubusercontent.com/SWE-agent/swe-agent-media/refs/heads/main/media/logos_banners/codeclash_logo_text_below.svg" alt="CodeClash" height="120px"></a>
<a href="https://github.com/SWE-bench/sb-cli"><img src="https://raw.githubusercontent.com/SWE-agent/swe-agent-media/refs/heads/main/media/logos_banners/sbcli_logo_text_below.svg" alt="sb-cli" height="120px"></a>
</div>
| text/markdown | null | Kilian Lieret <kilian.lieret@posteo.de>, "Carlos E. Jimenez" <carlosej@princeton.edu> | null | null | MIT License
Copyright (c) 2025 Kilian A. Lieret and Carlos E. Jimenez
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | nlp, agents, code | [
"Development Status :: 3 - Alpha",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml",
"requests",
"jinja2",
"pydantic>=2.0",
"litellm>=1.75.5",
"tenacity",
"rich",
"python-dotenv",
"typer",
"platformdirs",
"textual",
"prompt_toolkit",
"datasets",
"openai!=1.100.0,!=1.100.1",
"mini-swe-agent[dev]; extra == \"full\"",
"swe-rex>=1.4.0; extra == \"full\"",
"mini-swe-agent[modal]; extra == \"full\"",
"mini-swe-agent[contree]; extra == \"full\"",
"modal; extra == \"modal\"",
"boto3; extra == \"modal\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-xdist; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mkdocs-include-markdown-plugin; extra == \"dev\"",
"mkdocstrings[python]>=0.18; extra == \"dev\"",
"mike; extra == \"dev\"",
"mkdocs-material; extra == \"dev\"",
"mkdocs-glightbox; extra == \"dev\"",
"mkdocs-redirects; extra == \"dev\"",
"portkey-ai; extra == \"dev\"",
"swe-rex; extra == \"dev\"",
"contree-sdk>=0.1.0; extra == \"contree\""
] | [] | [] | [] | [
"Documentation, https://mini-swe-agent.com/latest/",
"Repository, https://github.com/SWE-agent/mini-SWE-agent",
"Bug Tracker, https://github.com/SWE-agent/mini-SWE-agent/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:57:44.991752 | mini_swe_agent-2.2.3.tar.gz | 59,150 | a5/79/598193478821791fa575f0aeb22b8bd3336c8acd96fbf1fff3ff568932e1/mini_swe_agent-2.2.3.tar.gz | source | sdist | null | false | 9fc7fe17ecc0d64402e7970c9b56d5b8 | 85607cd4039b29f2a0c53c6f361c8c74f3cea49ecacf0c31d5e839a476e0ff83 | a579598193478821791fa575f0aeb22b8bd3336c8acd96fbf1fff3ff568932e1 | null | [
"LICENSE.md"
] | 23,395 |
2.1 | robotframework-zoomba | 4.6.0 | Robot Framework mini-framework. | RobotFramework-Zoomba
===========
[](https://badge.fury.io/py/robotframework-zoomba)
[](https://pepy.tech/project/robotframework-zoomba)
[](https://github.com/Accruent/robotframework-zoomba/actions/workflows/run-tests.yml)
[](https://coveralls.io/github/Accruent/robotframework-zoomba?branch=master)
[](https://www.codefactor.io/repository/github/accruent/robotframework-zoomba)
What is RobotFramework-Zoomba?
--------------
Robotframework-Zoomba is a collection of libraries spanning GUI, REST API, and SOAP API automation using [Robot Framework](https://github.com/robotframework/robotframework).
These libraries extend [SeleniumLibrary](https://github.com/robotframework/SeleniumLibrary), [Requests](https://github.com/bulkan/robotframework-requests),
and [SudsLibrary](https://github.com/aljcalandra/robotframework-sudslibrary).
Zoomba adds significant data-validation support for REST and SOAP APIs and extends the functionality of typical web GUI automation.
As a team beginning our automation journey with Robot Framework, we found ourselves spending real time ramping up our own keyword libraries; Robotframework-Zoomba aims to make that process easier for new projects.
See the **Keyword Documentation** for the [API](https://accruent.github.io/robotframework-zoomba/APILibraryDocumentation.html), [SOAP](https://accruent.github.io/robotframework-zoomba/SOAPLibraryDocumentation.html),
or [GUI](https://accruent.github.io/robotframework-zoomba/GUILibraryDocumentation.html) library for more specific information about the functionality.
Example tests can be found in the [samples directory](https://github.com/Accruent/robotframework-zoomba/tree/master/samples).
Some Features of the Library
--------------
#### [GUI Library](https://accruent.github.io/robotframework-zoomba/GUILibraryDocumentation.html):
When working with web pages of varying load times you probably find yourself running a lot of calls like so:
```robotframework
Wait Until Page Contains Element locator
Click Element locator
```
For ease of use we have combined a lot of these into simple one line keywords:
```robotframework
Wait For And Click Element locator
Wait For And Click Text text
Wait For And Select From List list_locator target_locator
```
Another keyword that is particularly useful is for when you are waiting for javascript to complete on a page before proceeding:
```robotframework
Wait For And Click Element locator that leads to a new page with javascript
Wait Until Javascript Is Complete
Wait For And Click Element locator
```
#### [API Library](https://accruent.github.io/robotframework-zoomba/APILibraryDocumentation.html):
This library wraps the [requests library](https://github.com/bulkan/robotframework-requests) and provides a set of keywords that let users make requests in a single call:
```robotframework
Call Get Request ${headers_dictionary} endpoint query_string
Call Post Request ${headers_dictionary} endpoint query_string ${data_payload}
```
Once you have received your data, validating it is straightforward. [Validate Response Contains Expected Response](https://accruent.github.io/robotframework-zoomba/APILibraryDocumentation.html#Validate%20Response%20Contains%20Expected%20Response) takes your received response and compares it to your expected data. If any mismatches are found, it reports line by line what they are.
```robotframework
Validate Response Contains Expected Response ${json_actual_response} ${json_expected_response}
```
If there is any mismatched data it will look something like this:
```
Key(s) Did Not Match:
------------------
Key: pear
Expected: fish
Actual: bird
------------------
Full List Breakdown:
Expected: [{'apple': 'cat', 'banana': 'dog', 'pear': 'fish'}, {'apple': 'cat', 'banana': 'mice', 'pear': 'bird'}, {'apple': 'dog', 'banana': 'mice', 'pear': 'cat'}]
Actual: [{'apple': 'cat', 'banana': 'dog', 'pear': 'bird'}]
Please see differing value(s)
```
If you want to ignore a key such as `update_date`, simply set the `ignored_keys` argument to that key or a list of keys:
```robotframework
Validate Response Contains Expected Response ${json_actual_response} ${json_expected_response} ignored_keys=update_date
Validate Response Contains Expected Response ${json_actual_response} ${json_expected_response} ignored_keys=${list_of_keys}
```
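Conceptually, the keyword performs a key-by-key comparison of the two structures. A minimal Python sketch of that idea (illustrative only — `diff_dicts` is hypothetical and not Zoomba's actual implementation, which also handles nested lists and richer reporting):

```python
def diff_dicts(expected, actual, ignored_keys=()):
    """Return mismatch descriptions between an expected and an actual dict.

    Illustrative only -- Zoomba's real keyword is more thorough.
    """
    errors = []
    for key, expected_value in expected.items():
        if key in ignored_keys:
            continue
        if key not in actual:
            errors.append(f"Key missing: {key}")
        elif actual[key] != expected_value:
            errors.append(
                f"Key: {key} | Expected: {expected_value} | Actual: {actual[key]}"
            )
    return errors

mismatches = diff_dicts(
    {"apple": "cat", "banana": "dog", "pear": "fish"},
    {"apple": "cat", "banana": "dog", "pear": "bird"},
    ignored_keys=("update_date",),
)
```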
Getting Started
----------------
The Zoomba library is easily installed using the [`setup.py`](https://github.com/Accruent/robotframework-zoomba/blob/master/setup.py) file in the root directory.
Simply run the following command to install Zoomba and its dependencies:
```bash
pip install robotframework-zoomba
```
If you decide to pull the repo locally to make contributions or just want to play around with the code
you can install Zoomba by running the following from the *root directory*:
```bash
pip install .
```
or if you intend to run unit tests:
```bash
pip install .[testing]
```
To access the keywords in the library simply add the following to your robot file settings (depending on what you need):
```robotframework
*** Settings ***
Library Zoomba.APILibrary
Library Zoomba.GUILibrary
Library Zoomba.SOAPLibrary
```
Examples
------------
Example tests can be found in the [samples directory](https://github.com/Accruent/robotframework-zoomba/tree/master/samples).
The [test directory](https://github.com/Accruent/robotframework-zoomba/tree/master/test) may also contain tests, but be aware that these are used for testing releases and may not be as straightforward to use as the ones in the [samples directory](https://github.com/Accruent/robotframework-zoomba/tree/master/samples).
Contributing
-----------------
To make contributions please refer to the [CONTRIBUTING](https://github.com/Accruent/robotframework-zoomba/blob/master/CONTRIBUTING.rst) guidelines.
See the [.githooks](https://github.com/Accruent/robotframework-zoomba/tree/master/.githooks) directory for scripts to help in development.
Support
---------------
General Robot Framework questions should be directed to the [community forum](https://forum.robotframework.org/).
For questions and issues specific to Zoomba please create an [issue](https://github.com/Accruent/robotframework-zoomba/issues) here on Github.
| text/markdown | null | null | Alex Calandra, Michael Hintz, Keith Smoland, Matthew Giardina, Brandon Wolfe, Neil Howell, Tommy Hoang | robosquad@accruent.com | GPL-3.0 | Robot Framework robot-framework selenium requests appium soap winappdriver appium robotframeworkdesktop windows zoomba python robotframework-library appium-windows appiumlibrary api-rest api soap-api appium-mobile mobile | [
"Development Status :: 5 - Production/Stable",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Testing",
"Framework :: Robot Framework :: Library"
] | [
"any"
] | https://github.com/Accruent/zoomba | null | null | [] | [] | [] | [
"robotframework==7.4.1",
"robotframework-requests==0.9.7",
"robotframework-seleniumlibrary==6.8.0",
"robotframework-sudslibrary-aljcalandra==1.1.4",
"requests==2.32.5",
"selenium==4.41.0",
"python-dateutil>=2.8.2",
"pandas>=1.3.5",
"zipp>=3.19.1",
"mock; extra == \"testing\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:57:40.654543 | robotframework_zoomba-4.6.0.tar.gz | 25,828 | 0d/3c/1090e98817d2d92b67d4a06a18c1d46a7730507c56891998473a6717d61d/robotframework_zoomba-4.6.0.tar.gz | source | sdist | null | false | 83d1dd527e2c8d836e66a8f764ee089d | d87e05e4a776e6a0ebd018e05be5381173aaaec03fe831001256fc3a3f5f91e6 | 0d3c1090e98817d2d92b67d4a06a18c1d46a7730507c56891998473a6717d61d | null | [] | 427 |
2.4 | pydantic-ai-backend | 0.1.10 | File storage and sandbox backends for AI agents | <h1 align="center">File Storage & Sandbox Backends for Pydantic AI</h1>
<p align="center">
<em>Console Toolset, Docker Sandbox, and Permission System for AI Agents</em>
</p>
<p align="center">
<a href="https://pypi.org/project/pydantic-ai-backend/"><img src="https://img.shields.io/pypi/v/pydantic-ai-backend.svg" alt="PyPI version"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="Python 3.10+"></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
<a href="https://github.com/vstorm-co/pydantic-ai-backend/actions/workflows/ci.yml"><img src="https://github.com/vstorm-co/pydantic-ai-backend/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://github.com/pydantic/pydantic-ai"><img src="https://img.shields.io/badge/Powered%20by-Pydantic%20AI-E92063?logo=pydantic&logoColor=white" alt="Pydantic AI"></a>
</p>
<p align="center">
<b>Console Toolset</b> — ls, read, write, edit, grep, execute
•
<b>Docker Sandbox</b> — isolated code execution
•
<b>Permission System</b> — fine-grained access control
</p>
---
**File Storage & Sandbox Backends** provides everything your [Pydantic AI](https://ai.pydantic.dev/) agent needs to work with files and execute code safely. Choose from in-memory, local filesystem, or Docker-isolated backends.
> **Full framework?** Check out [Pydantic Deep Agents](https://github.com/vstorm-co/pydantic-deepagents) — complete agent framework with planning, filesystem, subagents, and skills.
## Use Cases
| What You Want to Build | How This Library Helps |
|------------------------|------------------------|
| **AI Coding Assistant** | Console toolset with file ops + code execution |
| **Multi-User Web App** | Docker sandboxes with session isolation |
| **Code Review Bot** | Read-only backend with grep/glob search |
| **Secure Execution** | Permission system blocks dangerous operations |
| **Testing/CI** | In-memory StateBackend for fast, isolated tests |
## Installation
```bash
pip install pydantic-ai-backend
```
Or with uv:
```bash
uv add pydantic-ai-backend
```
Optional extras:
```bash
# Console toolset (requires pydantic-ai)
pip install pydantic-ai-backend[console]
# Docker sandbox support
pip install pydantic-ai-backend[docker]
# Everything
pip install pydantic-ai-backend[console,docker]
```
## Quick Start
```python
from dataclasses import dataclass
from pydantic_ai import Agent
from pydantic_ai_backends import LocalBackend, create_console_toolset
@dataclass
class Deps:
backend: LocalBackend
agent = Agent(
"openai:gpt-4o",
deps_type=Deps,
toolsets=[create_console_toolset()],
)
backend = LocalBackend(root_dir="./workspace")
result = agent.run_sync(
"Create a Python script that calculates fibonacci and run it",
deps=Deps(backend=backend),
)
print(result.output)
```
**That's it.** Your agent can now:
- List files and directories (`ls`)
- Read and write files (`read_file`, `write_file`)
- Edit files with string replacement (`edit_file`)
- Search with glob patterns and regex (`glob`, `grep`)
- Execute shell commands (`execute`)
## Available Backends
| Backend | Storage | Execution | Use Case |
|---------|---------|-----------|----------|
| `StateBackend` | In-memory | No | Testing, ephemeral sessions |
| `LocalBackend` | Filesystem | Yes | Local development, CLI tools |
| `DockerSandbox` | Container | Yes | Multi-user, untrusted code |
| `CompositeBackend` | Routed | Varies | Complex multi-source setups |
### In-Memory (StateBackend)
```python
from pydantic_ai_backends import StateBackend
backend = StateBackend()
# Files stored in memory, perfect for tests
```
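The appeal for testing is that nothing touches disk, so each test starts from a clean slate. As a rough illustration of the in-memory idea (a hypothetical stand-in mirroring the tool surface, not `StateBackend`'s actual API):

```python
class InMemoryStore:
    """Toy in-memory file store mirroring the read/write/ls tool surface."""

    def __init__(self):
        self._files: dict[str, str] = {}

    def write_file(self, path: str, content: str) -> None:
        self._files[path] = content

    def read_file(self, path: str) -> str:
        return self._files[path]

    def ls(self) -> list[str]:
        return sorted(self._files)

store = InMemoryStore()
store.write_file("/notes.txt", "hello")
```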
### Local Filesystem (LocalBackend)
```python
from pydantic_ai_backends import LocalBackend
backend = LocalBackend(
root_dir="/workspace",
allowed_directories=["/workspace", "/shared"],
enable_execute=True,
)
```
### Docker Sandbox (DockerSandbox)
```python
from pydantic_ai_backends import DockerSandbox
sandbox = DockerSandbox(runtime="python-datascience")
sandbox.start()
# Fully isolated container environment
sandbox.stop()
```
## Console Toolset
Ready-to-use tools for pydantic-ai agents:
```python
from pydantic_ai_backends import create_console_toolset
# All tools enabled
toolset = create_console_toolset()
# Without shell execution
toolset = create_console_toolset(include_execute=False)
# With approval requirements
toolset = create_console_toolset(
require_write_approval=True,
require_execute_approval=True,
)
```
**Available tools:** `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`, `execute`
### Image Support
For multimodal models, enable image file handling:
```python
toolset = create_console_toolset(image_support=True)
# Now read_file on .png/.jpg/.gif/.webp returns BinaryContent
# that multimodal models (GPT-4o, Claude, etc.) can see directly
```
## Permission System
Fine-grained access control:
```python
from pydantic_ai_backends import LocalBackend
from pydantic_ai_backends.permissions import DEFAULT_RULESET, READONLY_RULESET
# Safe defaults (allow reads, ask for writes)
backend = LocalBackend(root_dir="/workspace", permissions=DEFAULT_RULESET)
# Read-only mode
backend = LocalBackend(root_dir="/workspace", permissions=READONLY_RULESET)
```
| Preset | Description |
|--------|-------------|
| `DEFAULT_RULESET` | Allow reads (except secrets), ask for writes/executes |
| `PERMISSIVE_RULESET` | Allow most operations, deny dangerous commands |
| `READONLY_RULESET` | Allow reads only, deny all writes and executes |
| `STRICT_RULESET` | Everything requires approval |
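Rulesets of this kind are typically pattern-based: each rule maps an operation and a path pattern to a decision, and the first match wins. A self-contained sketch of the general idea, assuming nothing about the library's actual rule classes (`check` and the rule tuples below are hypothetical):

```python
from fnmatch import fnmatch

# Each rule: (operation, path pattern, decision); first match wins.
# Decisions mirror the presets above: "allow", "ask", or "deny".
DEFAULT_RULES = [
    ("read", "*.env", "deny"),    # block secrets
    ("read", "*", "allow"),       # allow other reads
    ("write", "*", "ask"),        # ask before writes
    ("execute", "*", "ask"),      # ask before shell commands
]

def check(operation: str, path: str, rules=DEFAULT_RULES) -> str:
    for op, pattern, decision in rules:
        if op == operation and fnmatch(path, pattern):
            return decision
    return "deny"  # default-deny if nothing matches
```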
## Docker Runtimes
Pre-configured environments:
| Runtime | Base Image | Packages |
|---------|------------|----------|
| `python-minimal` | python:3.12-slim | (none) |
| `python-datascience` | python:3.12-slim | pandas, numpy, matplotlib, scikit-learn |
| `python-web` | python:3.12-slim | fastapi, uvicorn, sqlalchemy, httpx |
| `node-minimal` | node:20-slim | (none) |
| `node-react` | node:20-slim | typescript, vite, react |
Custom runtime:
```python
from pydantic_ai_backends import DockerSandbox, RuntimeConfig
runtime = RuntimeConfig(
name="ml-env",
base_image="python:3.12-slim",
packages=["torch", "transformers"],
)
sandbox = DockerSandbox(runtime=runtime)
```
## Session Manager
Multi-user web applications:
```python
from pydantic_ai_backends import SessionManager
manager = SessionManager(
default_runtime="python-datascience",
workspace_root="/app/workspaces",
)
# Each user gets isolated sandbox
sandbox = await manager.get_or_create("user-123")
```
## Why Choose This Library?
| Feature | Description |
|---------|-------------|
| **Multiple Backends** | In-memory, filesystem, Docker — same interface |
| **Console Toolset** | Ready-to-use tools for pydantic-ai agents |
| **Permission System** | Pattern-based access control with presets |
| **Docker Isolation** | Safe execution of untrusted code |
| **Session Management** | Multi-user support with workspace persistence |
| **Image Support** | Multimodal models can see images via BinaryContent |
| **Pre-built Runtimes** | Python and Node.js environments ready to go |
## Related Projects
| Package | Description |
|---------|-------------|
| [Pydantic Deep Agents](https://github.com/vstorm-co/pydantic-deepagents) | Full agent framework (uses this library) |
| [pydantic-ai-todo](https://github.com/vstorm-co/pydantic-ai-todo) | Task planning toolset |
| [subagents-pydantic-ai](https://github.com/vstorm-co/subagents-pydantic-ai) | Multi-agent orchestration |
| [summarization-pydantic-ai](https://github.com/vstorm-co/summarization-pydantic-ai) | Context management |
| [pydantic-ai](https://github.com/pydantic/pydantic-ai) | The foundation — agent framework by Pydantic |
## Contributing
```bash
git clone https://github.com/vstorm-co/pydantic-ai-backend.git
cd pydantic-ai-backend
make install
make test # 100% coverage required
```
## License
MIT — see [LICENSE](LICENSE)
<p align="center">
<sub>Built with ❤️ by <a href="https://github.com/vstorm-co">vstorm-co</a></sub>
</p>
| text/markdown | VStorm | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0",
"wcmatch>=8.0",
"pydantic-ai>=0.1.0; extra == \"console\"",
"build>=1.0.0; extra == \"dev\"",
"coverage>=7.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=4.0.0; extra == \"dev\"",
"pydantic-ai>=0.1.0; extra == \"dev\"",
"pyright>=1.1.400; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest>=9.0.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"chardet>=5.2.0; extra == \"docker\"",
"docker>=7.0; extra == \"docker\"",
"pypdf>=5.1.0; extra == \"docker\""
] | [] | [] | [] | [
"Homepage, https://github.com/vstorm-co/pydantic-ai-backend",
"Repository, https://github.com/vstorm-co/pydantic-ai-backend",
"Issues, https://github.com/vstorm-co/pydantic-ai-backend/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:57:07.163451 | pydantic_ai_backend-0.1.10.tar.gz | 13,391,075 | 02/05/7ec79826832c910a67c7569b1e3d69388da0323c7284b566b871f502b8ca/pydantic_ai_backend-0.1.10.tar.gz | source | sdist | null | false | 4a7ab01d86f6c5eddfec3a6769c1a88f | 3e49eb80f21f7790ec535e6c1e738b4979b68c1de40113f062f8279de1abb4ba | 02057ec79826832c910a67c7569b1e3d69388da0323c7284b566b871f502b8ca | MIT | [
"LICENSE"
] | 402 |
2.4 | karellen-llvm-toolchain-tools | 22.1.0rc3.post33 | Karellen LLVM Toolchain Tools | Self-contained LLVM toolchain tools
| text/markdown | Karellen, Inc. | supervisor@karellen.co | Arcadiy Ivanov | arcadiy@karellen.co | Apache-2.0 | LLVM, toolchain, tools | [
"Programming Language :: Python",
"Operating System :: POSIX :: Linux",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools"
] | [] | https://github.com/karellen/karellen-llvm | null | null | [] | [] | [] | [
"karellen-llvm-core==22.1.0rc3.post33",
"wheel-axle-runtime<1.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/karellen/karellen-llvm/issues",
"Documentation, https://github.com/karellen/karellen-llvm",
"Source Code, https://github.com/karellen/karellen-llvm"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:56:49.074989 | karellen_llvm_toolchain_tools-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | 2,204,088 | 7a/6e/7296cfd1dc8d70a61e59aa4e9f0cb2ffd8986c70ca1452f7dd6675fff18e/karellen_llvm_toolchain_tools-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | py3 | bdist_wheel | null | false | 4baf9e7d41da74704843e8710ba4d04d | a8173e686447a92f5df6933ba5c96810707aba0228ea1f8bf8ad7de3b0955637 | 7a6e7296cfd1dc8d70a61e59aa4e9f0cb2ffd8986c70ca1452f7dd6675fff18e | null | [] | 88 |
2.4 | karellen-llvm-lldb | 22.1.0rc3.post33 | Karellen LLDB infrastructure | Self-contained LLVM LLDB infrastructure
| text/markdown | Karellen, Inc. | supervisor@karellen.co | Arcadiy Ivanov | arcadiy@karellen.co | Apache-2.0 | LLVM, lldb, debugger | [
"Programming Language :: Python",
"Operating System :: POSIX :: Linux",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools"
] | [] | https://github.com/karellen/karellen-llvm | null | null | [] | [] | [] | [
"karellen-llvm-clang==22.1.0rc3.post33",
"wheel-axle-runtime<1.0,>0.0.5"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/karellen/karellen-llvm/issues",
"Documentation, https://github.com/karellen/karellen-llvm",
"Source Code, https://github.com/karellen/karellen-llvm"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:56:46.333304 | karellen_llvm_lldb-22.1.0rc3.post33-cp39-cp39-manylinux_2_28_x86_64.whl | 9,397,401 | 8a/91/f7a644a5534d372ca2e1d3ebd9d9b6eeebb6507796808c0d9b4089b2e4fa/karellen_llvm_lldb-22.1.0rc3.post33-cp39-cp39-manylinux_2_28_x86_64.whl | cp39 | bdist_wheel | null | false | fd9483b052801ae105d3ba561da57e5f | 6b0ea355dae693554a3a4b53d2d8778b8df0fd2090a028b1ceb482cde50f3157 | 8a91f7a644a5534d372ca2e1d3ebd9d9b6eeebb6507796808c0d9b4089b2e4fa | null | [] | 406 |
2.4 | renpy-rigging | 0.3.2 | A 2D skeletal rigging, posing, and animation system | # renpy-rigging
A 2D skeletal rigging, posing, and animation system for Ren'Py — including a full visual rig editor built in Ren'Py itself.
## What is renpy-rigging?
- **A lightweight 2D cutout/skeletal animation runtime for Ren'Py**
- **A visual rig & animation editor** implemented as a Ren'Py project
- **A pip-installable Python package** that Ren'Py games can use to load and render rigs, poses, and animations at runtime
- **A JSON-based, human-readable asset format** for rigs, poses, animations, overlays, attachments, and scenes
### The Guiding Principle
> **What you see in the editor is exactly what ships in your game.**
No export pipelines. No mismatched math. No editor/runtime drift.
## Features
### Runtime Library (pip package)
- Load character rigs from JSON
- Load pose libraries and animation libraries
- Render characters using:
- Hierarchical joints (forward kinematics)
- Per-part pivots
- Per-part rotation, offset, scale
- Z-order layering
- Play animations:
- Pose sequences with timing
- Looping, one-shot, and ping-pong play modes
- Easing functions (linear, ease_in, ease_out, ease_in_out)
- Pose interpolation and blending
- Multi-track composite animations with weighted additive blending
- Variable playback speed
- Per-frame horizontal/vertical flipping
- **Overlays**: Layer clothing, accessories, and alternate art on top of (or replacing) existing parts — with slot-based auto-eviction for mutually exclusive items
- **Attachments**: Attach fully-rigged independent objects (weapons, props) to character joints — with their own skeletons, poses, animations, and slot-based swapping
- **Scenes**: Compose backgrounds, layered props, placed characters, and attachments into full environments with z-ordered layers and ambient sound
- **Sound**: Three layers of audio — per-pose sounds, animation timeline SFX (up to 4 concurrent channels), and scene ambient audio
- **Outline/border rendering**: Per-rig configurable borders and runtime-adjustable outline overlays with color, width, and alpha controls
- Fully Ren'Py-native:
- Uses Transform and displayables
- No engine hacks or C extensions
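The easing names above correspond to standard easing curves that remap a 0–1 animation progress value. A minimal sketch of how such curves can be implemented (illustrative, not the library's code — `eased_progress` is a hypothetical helper):

```python
import math

def linear(t): return t
def ease_in(t): return t * t
def ease_out(t): return 1 - (1 - t) ** 2
def ease_in_out(t): return 0.5 * (1 - math.cos(math.pi * t))

EASING = {"linear": linear, "ease_in": ease_in,
          "ease_out": ease_out, "ease_in_out": ease_in_out}

def eased_progress(elapsed_ms: float, duration_ms: float,
                   ease: str = "linear") -> float:
    """Clamp elapsed time to [0, 1] and remap it through an easing curve."""
    t = max(0.0, min(1.0, elapsed_ms / duration_ms))
    return EASING[ease](t)
```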
### Editor (RigLab)
- Load a character rig and parts
- Visualize:
- Joints
- Bone hierarchy
- Part pivots
- Draw order
- Select and edit:
- Joints (drag with mouse, nudge with arrow keys)
- Parts (rotate, offset, adjust pivots)
- Save:
- Poses to poses.json
- Animations to animations.json
- Overlays to overlays.json
- Attachments to attachments.json
- Timeline editor:
- Build animations from poses
- Multi-track support with per-track weights and muting
- Set timing and easing per step
- Timeline sound placement
- Per-frame overlay and attachment overrides
- Preview loop, one-shot, or ping-pong playback
- Export:
- PNG frame export (transparent, solid-color, or scene background)
- Spritesheet export (all animations tiled)
- GIF export (pure-Python, no PIL required)
- Cloud render API for GIF/MP4
- **Same renderer as the runtime library**
## Installation
### Runtime Library
In your Ren'Py game's Python environment:
```bash
pip install renpy-rigging
```
Then in Ren'Py:
```python
init python:
import renpy_rigging
```
### Editor (RigLab)
1. Open the `riglab/` folder in the Ren'Py Launcher
2. Launch the project
3. Select a sample character or add your own
## Quick Start
### Directory Layout
```
game/
characters/
vince/
rig.json # Joints, parts, canvas, border config
poses.json # Named poses (deltas from neutral)
animations.json # Named animations (frame sequences)
overlays.json # Named overlay sets (clothing, accessories)
attachments.json # Attachment bindings (weapons, props)
parts/ # PNG images for each body part
overlays/ # Overlay images per overlay set
sounds/ # Sound files
attachments/
tommy-gun/
rig.json # Attachment rig (joints, parts, attachment config)
poses.json
animations.json
parts/
scenes/
speakeasy/
scene.json # Background, layers, placed items/characters
parts/ # Scene prop images
```
### Loading a Rig
```python
init python:
from renpy_rigging import Rig, PoseLibrary, AnimationLibrary, RigRenderer
rig = Rig.load("characters/vince/rig.json")
poses = PoseLibrary.load("characters/vince/poses.json")
anims = AnimationLibrary.load("characters/vince/animations.json")
renderer = RigRenderer(rig, poses, anims)
```
Or load everything from a directory (including overlays and attachment bindings):
```python
init python:
from renpy_rigging import RigRenderer
vince = RigRenderer.from_directory("characters/vince")
```
### Showing a Character
```renpy
# Show a static pose
show expression renderer.show_pose("neutral") as vince:
xalign 0.5
yalign 0.7
# Play an animation
show expression renderer.play_animation("idle", loop=True) as vince:
xalign 0.5
yalign 0.7
# Play an animation mirrored
show expression renderer.play_animation("walk", loop=True, flip_h=True) as vince
```
### Using Overlays
```python
# Add an overlay (slot-aware — evicts any overlay in the same slot)
renderer.add_overlay("broadway-brawlers-shirt")
# Remove an overlay
renderer.remove_overlay("broadway-brawlers-shirt")
# Query by slot
renderer.get_overlay_by_slot("shirt")
```
### Using Attachments
```python
# Attach a weapon (slot-aware — evicts any attachment in the same slot)
renderer.add_attachment("tommy-gun")
# Set the attachment's pose
renderer.set_attachment_pose("tommy-gun", "firing")
# Play the attachment's own animation
renderer.play_attachment_animation("tommy-gun", "fire", loop=True)
# Remove
renderer.remove_attachment("tommy-gun")
```
### Setting Up Scenes
```python
init python:
from renpy_rigging import SceneManager, register_channels
register_channels()
scene_mgr = SceneManager(os.path.join(renpy.config.gamedir, "scenes"))
scene_mgr.register_images() # enables "scene street" syntax
```
Then in your script:
```renpy
# Using the scene manager
$ scene_mgr.show("speakeasy")
# Or with the custom rigscene statement (if registered)
rigscene speakeasy with dissolve
```
Scenes automatically composite backgrounds, layered props, placed characters, and attachments at the correct z-orders, and start ambient sound if configured.
## Game Project Integration
You can integrate renpy-rigging into your Ren'Py game without pip installing the package. Add this to your `options.rpy`:
```python
init -100 python:
import sys, os
src_path = os.path.normpath(os.path.join(renpy.config.gamedir, "..", "..", "src"))
sys.path.insert(0, src_path)
```
Then import and re-export into the Ren'Py store in a `rig_runtime.rpy` file:
```python
init -10 python:
from renpy_rigging import (
Rig, Pose, PoseLibrary, AnimationLibrary,
RigRenderer, SceneManager, register_channels,
)
register_channels()
```
This is the pattern used by both the RigLab editor and the RigLabDemo game.
## Data Formats
### rig.json
Defines the skeleton (joints) and visual parts:
```json
{
"canvas": [600, 900],
"joints": {
"root": {"parent": null, "pos": [300, 700]},
"pelvis": {"parent": "root", "pos": [0, 0]},
"chest": {"parent": "pelvis", "pos": [0, -200]}
},
"parts": {
"torso": {
"image": "parts/torso.png",
"parent_joint": "pelvis",
"pos": [0, 0],
"pivot": [130, 135],
"z": 10
}
},
"border": {"enabled": true, "width": 3, "color": "#000000"},
"slots": ["weapon"]
}
```
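Because each joint's `pos` is stored relative to its parent, world-space positions fall out of simple forward kinematics: walk up the parent chain and sum the offsets. A sketch of that accumulation (rotation and scale omitted for brevity; the real runtime applies those per joint as well):

```python
def world_position(joints: dict, name: str) -> tuple:
    """Walk up the parent chain, summing relative joint offsets."""
    x, y = 0.0, 0.0
    while name is not None:
        joint = joints[name]
        x += joint["pos"][0]
        y += joint["pos"][1]
        name = joint["parent"]
    return (x, y)

# The joint hierarchy from the rig.json example above.
joints = {
    "root":   {"parent": None, "pos": [300, 700]},
    "pelvis": {"parent": "root", "pos": [0, 0]},
    "chest":  {"parent": "pelvis", "pos": [0, -200]},
}
```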
### poses.json
Poses store only deltas from the neutral rig:
```json
{
"neutral": {
"joints": {},
"parts": {}
},
"wave": {
"joints": {
"shoulder_R": {"rot": -120},
"elbow_R": {"rot": 90}
},
"parts": {
"hand_R": {"visible": false}
},
"overlays": {"hand-pistol": true},
"attachments": {
"tommy-gun": {"pose": "firing", "visible": true}
},
"sound": "sounds/whoosh.wav"
}
}
```
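Since poses store only deltas, applying one is a merge over the neutral rig. A sketch of that idea for joint rotations (field names follow the JSON above; the real runtime also handles part overrides, overlays, attachments, and sounds):

```python
def apply_pose(neutral_rotations: dict, pose: dict) -> dict:
    """Start from neutral joint rotations and add the pose's deltas."""
    result = dict(neutral_rotations)
    for joint, delta in pose.get("joints", {}).items():
        result[joint] = result.get(joint, 0) + delta.get("rot", 0)
    return result

neutral = {"shoulder_R": 0, "elbow_R": 0}
wave = {"joints": {"shoulder_R": {"rot": -120}, "elbow_R": {"rot": 90}}}
```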
### animations.json
Animations are sequences of poses with timing. Three formats are supported (all backward-compatible):
**Simple (flat list):**
```json
{
"idle": [
{"pose": "neutral", "ms": 500},
{"pose": "breathe_in", "ms": 800, "ease": "ease_in_out"},
{"pose": "neutral", "ms": 500},
{"pose": "breathe_out", "ms": 800, "ease": "ease_in_out"}
]
}
```
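Each frame interpolates from the previous pose over `ms` milliseconds, optionally shaped by an easing curve. A sketch of what `ease_in_out` could look like (the curve here is smoothstep, an assumption — the library's actual easing functions may differ):

```python
def ease_in_out(t):
    """Smoothstep: slow start, slow end, for t in [0, 1]."""
    return 3 * t * t - 2 * t * t * t

def lerp(a, b, t):
    """Linear interpolation between two rotation values."""
    return a + (b - a) * t

# Halfway through an 800 ms frame, blending rot 0 -> 90 with ease_in_out:
t = ease_in_out(400 / 800)
print(lerp(0, 90, t))  # 45.0
```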
**With timeline sounds:**
```json
{
"fire": {
"frames": [
{"pose": "firing", "ms": 80},
{"pose": "neutral", "ms": 80}
],
"sounds": [
{"path": "sounds/gunshot.wav", "start_ms": 0},
{"path": "sounds/shell_casing.wav", "start_ms": 100}
]
}
}
```
**Multi-track composite:**
```json
{
"walk_and_breathe": {
"tracks": [
{"name": "Walk", "frames": [...], "weight": 1.0},
{"name": "Breathe", "frames": [...], "weight": 0.5, "muted": false}
],
"sounds": [...]
}
}
```
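One plausible way to combine tracks is a weight-normalized average of each track's contribution to a joint (this blend rule is an assumption for illustration, not necessarily the library's):

```python
def blend_tracks(track_values, weights):
    """Weighted average of per-track values for one joint channel."""
    total_w = sum(weights)
    return sum(v * w for v, w in zip(track_values, weights)) / total_w

# Walk contributes rot=30 at weight 1.0, Breathe rot=0 at weight 0.5:
print(blend_tracks([30, 0], [1.0, 0.5]))  # 20.0
```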
### overlays.json
Overlays associate extra art with body parts:
```json
{
"broadway-brawlers-shirt": {
"_slot": "shirt",
"torso": {"path": "torso.png", "replace": true},
"arm_upper_L": true,
"arm_upper_R": {"z_delta": 2}
}
}
```
A part set to `true` uses all defaults (the image file is named after the part and is layered on top). The `_slot` field enables auto-eviction when a new overlay is activated in the same slot.
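Expanding the `true` shorthand into an explicit per-part config might look like this (the default values shown are assumptions for illustration):

```python
def normalize_overlay_entry(part_name, entry):
    """Expand the `true` shorthand into an explicit overlay config."""
    defaults = {"path": part_name + ".png", "replace": False, "z_delta": 1}
    if entry is True:
        return defaults
    # Explicit settings override the defaults.
    return {**defaults, **entry}

print(normalize_overlay_entry("arm_upper_L", True)["path"])       # arm_upper_L.png
print(normalize_overlay_entry("arm_upper_R", {"z_delta": 2})["z_delta"])  # 2
```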
### attachments.json (on the character)
Defines which attachments a character can use and how they connect:
```json
{
"tommy-gun": {
"path": "attachments/tommy-gun",
"default_joint": "wrist_R",
"pos": [-5.0, 29.0],
"rot": 90.0,
"slot": "weapon"
}
}
```
### Attachment rig.json
An attachment's own rig.json includes an `attachment` config block:
```json
{
"name": "tommy-gun",
"canvas": [1051, 345],
"joints": {"root": {"parent": null, "pos": [525, 172]}},
"parts": {...},
"attachment": {
"is_attachment": true,
"default_attach_joint": "wrist_R",
"inherit_rotation": true,
"inherit_scale": false,
"z_offset": 10,
"slot": "weapon"
}
}
```
### scene.json
Defines a composed environment with backgrounds, layers, and placed entities:
```json
{
"canvas": [1456, 816],
"background": {
"color": "#000000",
"image": "parts/speakeasy.png",
"image_offset": [0, 0],
"image_scale": 1.0
},
"layers": {
"background": {"z": -100},
"foreground": {"z": 100}
},
"items": [
{
"name": "barrel",
"image": "parts/barrel.png",
"layer": "background",
"pos": [300, 400],
"scale": [1.5, 1.5],
"z_offset": 5
}
],
"characters": [
{
"name": "Vince",
"source": "Vince-New",
"layer": "background",
"pos": [728, 408],
"scale": [0.4, 0.4]
}
],
"attachments": [
{
"name": "Tommy-Gun",
"source": "Tommy-Gun",
"layer": "background",
"pos": [188, 578],
"scale": [0.6, 0.6]
}
],
"sound": "sounds/jazz.ogg",
"sound_loop": true
}
```
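When the scene is composited, each entity's draw order plausibly comes from its layer's base `z` plus its own `z_offset` (a sketch under that assumption, not the library's actual implementation):

```python
def draw_order(entities, layers):
    """Sort scene entities back-to-front by layer z plus per-entity z_offset."""
    return sorted(
        entities,
        key=lambda e: layers[e["layer"]]["z"] + e.get("z_offset", 0),
    )

layers = {"background": {"z": -100}, "foreground": {"z": 100}}
entities = [
    {"name": "sign", "layer": "foreground"},
    {"name": "barrel", "layer": "background", "z_offset": 5},
    {"name": "Vince", "layer": "background", "z_offset": 20},
]
print([e["name"] for e in draw_order(entities, layers)])
# ['barrel', 'Vince', 'sign']
```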
## Repository Structure
```
renpy-rigging/
README.md
pyproject.toml
LICENSE
src/
renpy_rigging/
__init__.py
rig.py # Rig data structures (joints, parts, canvas)
pose.py # Pose application, blending, interpolation
animation.py # Animation playback (multi-track, sounds, easing)
overlay.py # Overlay system (clothing, accessories)
attachment.py # Attachment system (weapons, props)
scene.py # Scene composition (backgrounds, layers, entities)
render.py # Ren'Py displayable builders
audio.py # Audio channel registration and management
math2d.py # Vec2, Transform2D, easing functions
io.py # JSON load/save helpers
riglab/ # The Ren'Py editor app
game/
script.rpy
options.rpy
gui.rpy
screens.rpy
riglab/
core/
rig_runtime.rpy
rig_io.rpy
rig_math.rpy
editor/
editor_screen.rpy
editor_input.rpy
timeline_screen.rpy
data/
samples/
main_menu.rpy
riglabdemo/ # Demo game project
game/
characters/ # Character rigs
attachments/ # Attachment rigs
scenes/ # Scene definitions
docs/
rig_format.md
authoring_guide.md
renpy_integration.md
```
## Editor Keyboard Shortcuts
| Key | Action |
|-----|--------|
| J | Toggle joint visibility |
| B | Toggle bone visibility |
| P | Toggle pivot visibility |
| Arrow keys | Nudge selected joint (1px) |
| Shift + Arrow keys | Nudge selected joint (10px) |
| Space | Play/pause animation (in timeline) |
## Design Philosophy
- **FK-first**: Forward kinematics only in v1
- **No sprite sheets**: Everything is runtime-composed
- **Human-editable JSON**: All data formats are readable and editable
- **Editor and runtime share the same renderer**
- **Procedural-friendly**: Built for blending, aiming, reacting, not just canned loops
- **Slot-based resource management**: Overlays and attachments use named slots for clean swapping (e.g., only one "weapon" or "shirt" at a time)
## Development Roadmap
- [x] v0.1 — Viewer: Load rig.json, render neutral pose
- [x] v0.2 — Poses: Load poses.json, apply pose deltas
- [x] v0.3 — Editor: Select + drag joints, rotate parts, save poses
- [x] v0.4 — Animation: Timeline editor, playback system, export
- [x] v0.5 — Overlays, attachments, scene rendering, composite animations with sound
### Future
- [ ] IK helpers
- [ ] Symmetry tools
- [ ] Onion skinning
- [ ] Event markers in animations
- [ ] Physics jiggle bones
- [ ] Auto-z reordering helpers
## Non-Goals
This is **not** a Spine replacement. This is **not** a general-purpose animation suite.
This is a **Ren'Py-first, game-dev-first** rigging system.
## Who This Is For
- Visual novel devs who want **live characters**
- 2D game devs who want:
- Aiming arms
- Breathing idles
- Reactive poses
- Procedural animation
- Artists who think in **volumes and joints**, not sprite sheets
## Documentation
- [Rig Format Specification](docs/rig_format.md)
- [Character Authoring Guide](docs/authoring_guide.md)
- [Ren'Py Integration Guide](docs/renpy_integration.md)
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | Aaron | null | null | null | null | 2d, animation, game-dev, renpy, rigging, skeletal, visual-novel | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Games/Entertainment",
"Topic :: Multimedia :: Graphics"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Masked-Fox-Productions/renpy-rigging",
"Repository, https://github.com/Masked-Fox-Productions/renpy-rigging",
"Documentation, https://github.com/Masked-Fox-Productions/renpy-rigging#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:56:28.210240 | renpy_rigging-0.3.2.tar.gz | 53,901 | 9b/33/5f75a9e5b648a97009c87cf9a45d536cedf0d3286a7688f8c1f438a08a37/renpy_rigging-0.3.2.tar.gz | source | sdist | null | false | 48493295cb94fdbf555014c3660526a2 | 4374fec4d43d1ccf68f4c5842cf8caf3df0e404fada77c9ec4c429916b2ce85d | 9b335f75a9e5b648a97009c87cf9a45d536cedf0d3286a7688f8c1f438a08a37 | MIT | [
"LICENSE"
] | 235 |
2.4 | metalearning-class | 0.1.1 | A library for meta-learning in Python with neural networks and transformers. | # Metalearning Library
## Project Purpose and Goals
This project provides a Python library for metalearning, focusing on developing and applying advanced machine learning techniques that learn how to learn. The primary goal is to offer a robust and extensible framework for building metalearning models, particularly those leveraging neural networks and transformer architectures, to solve complex regression and classification problems more efficiently and generalize better across various tasks.
The library aims to:
* Facilitate the development of metalearning algorithms.
* Provide implementations of key metalearning components (e.g., meta-learners, task encoders).
* Enable rapid experimentation with different metalearning approaches.
* Support diverse applications by providing flexible model structures.
## Basic Usage Examples
Here's a simple example of how you might use a hypothetical `MetalearningModel` class:
```python
import metalearning_class as mtl
import pandas as pd
# Other imports
# Initialize the metalearning model
ml = mtl.Metalearning(gpu=False)

# Login and get token with your Sofon account
# SUGGESTION: use dotenv
ml.login("username", "password")

# Import data from a Sofon challenge
train_data = ml.subscribe_and_get_task("challenge_taskname")

# Train the model (this is highly dependent on the actual implementation)
[...]
```
## Contribution
We will soon become open source, and we welcome contributions to the Metalearning Library!
**© 2026 Panaceia – All Rights Reserved**
| text/markdown | Gabriel Alves Castro, João Bizzo Brandt, Lucca Huguet | null | Panaceia | null | # Sofon Client Library & Vendor License (SCLVL)
Version 1.3 — © 2026 Panaceia
This License governs the use of the Sofon Client Library ("the Library"), its Extensions, and any accompanying components.\
By using the Library, the Licensee agrees to all provisions below.
---
# 1. Ownership and Rights
1.1. The Library, its source code, architecture, API design, protocol specifications, and all related intellectual property are the exclusive property of Panaceia.
1.2. Panaceia retains **unrestricted rights** to:
- use, modify, extend, sublicense, commercialize, and integrate the Library,
- use any Extensions or Modifications made by third parties for internal or commercial purposes,\
regardless of whether these Extensions are publicly released.
1.3. This License does **not** transfer ownership of the Library to the Licensee.
---
# 2. Definitions
- **Library**: The Sofon Python client library and all its components.
- **Extension**: Any modification, adaptation, plugin, integration, or derived code that interacts with or extends the Library.
- **Backend**: The Sofon Meta-Learning Platform, including APIs, servers, data pipelines, cloud infrastructure, and metadata systems.
- **Metadata**: Architectural information, hyperparameters, performance logs, problem descriptions, challenge submissions, or other data generated by the use of the Backend and library.
---
# 3. Permitted Use
3.1. The Licensee may:
- use the Library locally for research or commercial purposes,
- train models locally using the Library,
- commercialize models trained locally,
- integrate the Library into internal workflows, products, or research pipelines,
- create Extensions and encapsulate new machine learning models, provided that such Extensions and models comply with this License, the Sofon Terms & Conditions, and any applicable challenge rules, and do not operate, implement, replicate, or substitute any Sofon Backend functionality outside Panaceia’s platform or enable any competing meta-learning system.
3.2. Local use is permitted **as long as it does not use, imitate, or replicate the Backend** outside Panaceia’s platform.
3.3. For the avoidance of doubt, permitted uses under this Section 3 do not override or limit the restrictions set forth in Sections 4, 6, and 8.
---
# 4. Restrictions on Commercial Use
4.1. The Licensee **may not** use the Library, Extensions, or any derived works to:
- build a meta-learning platform, marketplace, automated model selector, AutoML system, or\
competitive product similar to Panaceia’s Sofon platform,
- enable, support, integrate with, or provide services to any other meta-learning framework, backend, or platform,
- create interoperable or competing APIs or systems.
4.2. The Library **may not be sold, licensed, or offered as a meta-learning solution**.
---
# 5. Extension and Derivative Work Rules
5.1. Any Extension or Modification of the Library:
- **must retain this License**,
- may remain private and does **not** need to be published publicly,
- may only be distributed under this same License.
5.2. Extensions may depend on third-party open-source components **only if** their licenses are compatible with this License.
5.3. An Extension must not include or depend on software that would force the Library to be distributed under a different license.
---
# 6. API and Architecture Protection
6.1. The Licensee shall **not**:
- copy, replicate, reimplement, or imitate the Library’s API design, structure, or behavior\
for creating competing, interoperable, or substitutive products,
- reverse-engineer the Library for competitive use,
- extract algorithms, heuristics, or meta-learning logic for use in another product.
6.2. “Inspiration” clauses are clarified:\
General conceptual inspiration is unavoidable, but **no code, design, structure, protocol, or functional equivalence** may be reproduced for competitive use.
---
# 7. Backend Usage and Metadata Rights
7.1. The Backend is not licensed under this License.\
Use of the Backend requires acceptance of Sofon’s Terms & Conditions.
7.2. **Models trained locally** belong exclusively to the Licensee.
7.3. Models or Metadata submitted to the Backend:
- may be used, analyzed, stored, or commercialized by Panaceia,
- may be offered on the Sofon marketplace or integrated into the Meta-Learning system according to the challenge rules or terms and conditions,
- shall be governed by the challenge rules or terms and conditions of the Sofon platform.
7.4. Any model whose **weights, metadata, or architecture** are submitted to Panaceia\
must follow the challenge rules or terms and conditions of the Sofon platform upon submission or upload.
---
# 8. Prohibition of Third-Party Meta-Learning Systems
8.1. The Licensee may not integrate the Library with:
- third-party meta-learning servers,
- AutoML frameworks with backend components,
- other model selection or model marketplace systems,
for the purpose of enabling or operating a third-party meta-learning backend.
8.2. The Library may be used with other ML libraries **only locally**, and only if not used to feed or integrate another meta-learning backend.
---
# 9. Termination
Violation of any clause automatically terminates this License.\
Upon termination, the Licensee must stop using the Library and destroy all copies.
---
# 10. No Warranty
The Library is provided "AS IS".\
Panaceia disclaims all warranties.
---
# 11. Good Faith and Purpose of License
This License is granted for the specific purpose of enabling the use of the Library as a client-side tool within Panaceia’s ecosystem, fostering research, experimentation, and local commercial use, while preserving the integrity and exclusivity of the Sofon Backend and Meta-Learning Platform.
The Licensee agrees to exercise all rights granted herein in accordance with the principles of good faith, loyalty, and fair dealing.
Any use of the Library, Extensions, or derived works that, while formally compliant with the literal terms of this License, circumvents its essential purpose, enables competitive substitution, or exploits interpretative gaps in an opportunistic or abusive manner shall constitute a material breach of this License.
---
# 12. Governing Law
This License is governed by the laws of the Federative Republic of Brazil, with exclusive venue in Brasília, DF.
---
**© 2026 Panaceia – All Rights Reserved** | machine learning, metalearning, meta-learning, neural networks, transformers | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.21.0",
"pandas>=1.3.0",
"tensorflow>=2.8.0",
"scikit-learn>=1.0.0",
"dill==0.4.0",
"matplotlib==3.10.5",
"si-prefix==1.3.3",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=3.0; extra == \"dev\"",
"black>=22.0; extra == \"dev\"",
"flake8>=4.0; extra == \"dev\"",
"mypy>=0.900; extra == \"dev\"",
"ipython>=8.0; extra == \"dev\"",
"sphinx>=5.0; extra == \"docs\"",
"furo>=2022.6.0; extra == \"docs\"",
"sphinx-copybutton>=0.5.0; extra == \"docs\"",
"pytest>=7.0; extra == \"test\"",
"pytest-mock>=3.0; extra == \"test\"",
"pytest-xdist>=3.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/panaceia/metalearning",
"Documentation, https://github.com/panaceia/metalearning#readme",
"Bug_Tracker, https://github.com/panaceia/metalearning/issues",
"Changelog, https://github.com/panaceia/metalearning/releases"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:56:21.305579 | metalearning_class-0.1.1.tar.gz | 31,865 | ca/88/a7c8ee7aea5f74b887db287434ef7f85c0fe00ff39ccf9c4927a238223e8/metalearning_class-0.1.1.tar.gz | source | sdist | null | false | 5f78b3bee46756c55371b459794098c2 | 6543f1929b4cb79478176035e1df69b0f1688dda129459a41ae1e167b3f5cdcb | ca88a7c8ee7aea5f74b887db287434ef7f85c0fe00ff39ccf9c4927a238223e8 | null | [
"LICENSE"
] | 204 |
2.4 | karellen-llvm-core-debug | 22.1.0rc3.post33 | Karellen LLVM core libraries (debug info) | Contains LLVM, LTO, Remarks, libc++, libc++abi, and libunwind debug info
| text/markdown | Karellen, Inc. | supervisor@karellen.co | Arcadiy Ivanov | arcadiy@karellen.co | Apache-2.0 | LLVM, libc++, libcxx | [
"Programming Language :: Python",
"Operating System :: POSIX :: Linux",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools"
] | [] | https://github.com/karellen/karellen-llvm | null | null | [] | [] | [] | [
"karellen-llvm-core==22.1.0rc3.post33",
"wheel-axle-runtime<1.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/karellen/karellen-llvm/issues",
"Documentation, https://github.com/karellen/karellen-llvm",
"Source Code, https://github.com/karellen/karellen-llvm"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:56:10.502007 | karellen_llvm_core_debug-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | 616,531,650 | c8/06/678e19f8ff8845231c51a90f9df54269fe2c1ad2b9483ffb8b09c376e12b/karellen_llvm_core_debug-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | py3 | bdist_wheel | null | false | 32b0328ffeb4ea75d9cdb97762ccb218 | f6103a5a54ea4b3686dbfc0aa34458ba49e9cf369573ae72aa2493922a832191 | c806678e19f8ff8845231c51a90f9df54269fe2c1ad2b9483ffb8b09c376e12b | null | [] | 87 |
2.3 | moderation_api | 1.9.1 | The official Python library for the moderation-api API | # Moderation API Python API library
<!-- prettier-ignore -->
[PyPI](https://pypi.org/project/moderation_api/)
The Moderation API Python library provides convenient access to the Moderation API REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.moderationapi.com](https://docs.moderationapi.com). The full API of this library can be found in [api.md](https://github.com/moderation-api/sdk-python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install moderation_api
```
## Usage
The full API of this library can be found in [api.md](https://github.com/moderation-api/sdk-python/tree/main/api.md).
```python
import os
from moderation_api import ModerationAPI
client = ModerationAPI(
secret_key=os.environ.get("MODAPI_SECRET_KEY"), # This is the default and can be omitted
)
response = client.content.submit(
content={
"text": "x",
"type": "text",
},
)
print(response.recommendation)
```
While you can provide a `secret_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `MODAPI_SECRET_KEY="My Secret Key"` to your `.env` file
so that your Secret Key is not stored in source control.
## Async usage
Simply import `AsyncModerationAPI` instead of `ModerationAPI` and use `await` with each API call:
```python
import os
import asyncio
from moderation_api import AsyncModerationAPI
client = AsyncModerationAPI(
secret_key=os.environ.get("MODAPI_SECRET_KEY"), # This is the default and can be omitted
)
async def main() -> None:
response = await client.content.submit(
content={
"text": "x",
"type": "text",
},
)
print(response.recommendation)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install moderation_api[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from moderation_api import DefaultAioHttpClient
from moderation_api import AsyncModerationAPI
async def main() -> None:
async with AsyncModerationAPI(
secret_key=os.environ.get("MODAPI_SECRET_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
response = await client.content.submit(
content={
"text": "x",
"type": "text",
},
)
print(response.recommendation)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from moderation_api import ModerationAPI
client = ModerationAPI()
author = client.authors.create(
external_id="external_id",
metadata={},
)
print(author.metadata)
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `moderation_api.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `moderation_api.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `moderation_api.APIError`.
```python
import moderation_api
from moderation_api import ModerationAPI
client = ModerationAPI()
try:
client.content.submit(
content={
"text": "x",
"type": "text",
},
)
except moderation_api.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except moderation_api.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except moderation_api.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from moderation_api import ModerationAPI
# Configure the default for all requests:
client = ModerationAPI(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).content.submit(
content={
"text": "x",
"type": "text",
},
)
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from moderation_api import ModerationAPI
# Configure the default for all requests:
client = ModerationAPI(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = ModerationAPI(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).content.submit(
content={
"text": "x",
"type": "text",
},
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/moderation-api/sdk-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `MODERATION_API_LOG` to `info`.
```shell
$ export MODERATION_API_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from moderation_api import ModerationAPI
client = ModerationAPI()
response = client.content.with_raw_response.submit(
content={
"text": "x",
"type": "text",
},
)
print(response.headers.get('X-My-Header'))
content = response.parse() # get the object that `content.submit()` would have returned
print(content.recommendation)
```
These methods return an [`APIResponse`](https://github.com/moderation-api/sdk-python/tree/main/src/moderation_api/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/moderation-api/sdk-python/tree/main/src/moderation_api/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.content.with_streaming_response.submit(
content={
"text": "x",
"type": "text",
},
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from moderation_api import ModerationAPI, DefaultHttpxClient
client = ModerationAPI(
# Or use the `MODERATION_API_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from moderation_api import ModerationAPI
with ModerationAPI() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/moderation-api/sdk-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import moderation_api
print(moderation_api.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/moderation-api/sdk-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Moderation API <support@moderationapi.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/moderation-api/sdk-python",
"Repository, https://github.com/moderation-api/sdk-python"
] | uv/0.9.13 {"installer":{"name":"uv","version":"0.9.13"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:56:05.939290 | moderation_api-1.9.1.tar.gz | 248,385 | 11/9f/0aa38796e694c0f73720c9d98d2ded120bf40395057a189de1b27a3c4785/moderation_api-1.9.1.tar.gz | source | sdist | null | false | a46ab2a38d6478c26669f6561e24a022 | 122f5cc4cfc1d8943ab93fb986ca3b06e43bb8c459e94f62177c92a307821cfb | 119f0aa38796e694c0f73720c9d98d2ded120bf40395057a189de1b27a3c4785 | null | [] | 0 |
2.4 | seclab-taskflow-agent | 0.2.0 | A taskflow agent for the SecLab project, enabling secure and automated workflow execution. | # GitHub Security Lab Taskflow Agent
The Security Lab Taskflow Agent is an MCP-enabled multi-Agent framework.
The Taskflow Agent is built on top of the [OpenAI Agents SDK](https://openai.github.io/openai-agents-python/).
## Core Concepts
The Taskflow Agent leverages a GitHub Workflow-esque YAML based grammar to perform a series of tasks using a set of Agents.
Its primary value proposition is as a CLI tool that allows users to quickly define and script Agentic workflows without having to write any code.
Agents are defined through [personalities](examples/personalities/), that receive a [task](examples/taskflows/) to complete, given a set of [tools](src/seclab_taskflow_agent/toolboxes/).
Agents can cooperate to complete sequences of tasks through so-called [taskflows](doc/GRAMMAR.md).
You can find a detailed overview of the taskflow grammar [here](doc/GRAMMAR.md) and example taskflows [here](examples/taskflows/).
## Use Cases and Examples
The Seclab Taskflow Agent framework was primarily designed to fit the iterative feedback loop driven work involved in Agentic security research workflows and vulnerability triage tasks.
Its design philosophy is centered around the belief that a prompt level focus of capturing vulnerability patterns will greatly improve and scale security research results as frontier model capabilities evolve over time.
At GitHub Security Lab, we primarily use this framework as a code auditing tool, but it can also serve as a more generic swiss army knife for exploring Agentic workflows. For example, we also use this framework for automated code scanning alert triage.
The framework includes a [CodeQL](https://codeql.github.com/) MCP server that can be used for Agentic code review, see the [CVE-2023-2283](examples/taskflows/CVE-2023-2283.yaml) taskflow for an example of how to have an Agent review C code using a CodeQL database ([demo video](https://www.youtube.com/watch?v=eRSPSVW8RMo)).
Instead of generating CodeQL queries itself, the CodeQL MCP Server is used to provide CodeQL-query based MCP tools that allow an Agent to navigate and explore code. It leverages templated CodeQL queries to provide targeted context for model driven code analysis.
## Requirements
Python >= 3.10 or Docker
## Configuration
Provide a GitHub token for an account that is entitled to use [GitHub Models](https://models.github.ai) via the `AI_API_TOKEN` environment variable. Further configuration is use-case dependent, i.e. it depends on which MCP servers you'd like to use in your taskflows. In a terminal, you can add `AI_API_TOKEN` to the environment like this:
```sh
export AI_API_TOKEN=<your_github_token>
```
Or, if you are using GitHub Codespaces, then you can [add a Codespace secret](https://github.com/settings/codespaces/secrets/new) so that `AI_API_TOKEN` is automatically available when working in a Codespace.
Many of the MCP servers in the [seclab-taskflows](https://github.com/GitHubSecurityLab/seclab-taskflows) repo also need an environment variable named `GH_TOKEN` for accessing the GitHub API. You can use two separate PATs if you want, or you can use one PAT for both purposes, like this:
```sh
export GH_TOKEN=$AI_API_TOKEN
```
We do not recommend storing secrets on disk, but you can persist non-sensitive environment variables by adding a `.env` file in the project root.
Example:
```sh
# MCP configs
CODEQL_DBS_BASE_PATH="/app/my_data/codeql_databases"
AI_API_ENDPOINT="https://models.github.ai/inference"
```
## Deploying from Source
We use [hatch](https://hatch.pypa.io/) to build the project. Download and build like this:
```bash
git clone https://github.com/GitHubSecurityLab/seclab-taskflow-agent.git
cd seclab-taskflow-agent
python -m venv .venv
source .venv/bin/activate
pip install hatch
hatch build
```
Then run `hatch run main`.
Example: deploying a prompt to an Agent Personality:
```sh
hatch run main -p seclab_taskflow_agent.personalities.assistant 'explain modems to me please'
```
Example: deploying a Taskflow:
```sh
hatch run main -t examples.taskflows.example
```
Example: deploying a Taskflow with command line global variables:
```sh
hatch run main -t examples.taskflows.example_globals -g fruit=apples
```
Multiple global variables can be set:
```sh
hatch run main -t examples.taskflows.example_globals -g fruit=apples -g color=red
```
## Deploying from Docker
You can deploy the Taskflow Agent via its Docker image using `docker/run.sh`.
WARNING: the Agent Docker image is _NOT_ intended as a security boundary; it is strictly a deployment convenience.
The image entrypoint is `__main__.py` and thus it operates the same as invoking the Agent from source directly.
You can find the Docker image for the Seclab Taskflow Agent [here](https://github.com/GitHubSecurityLab/seclab-taskflow-agent/pkgs/container/seclab-taskflow-agent) and how it is built [here](release_tools/).
Note that this image is based on a public release of the Taskflow Agent, and you will have to mount any custom taskflows, personalities, or prompts into the image for them to be available to the Agent.
Optional image mount points to supply custom data are configured via the environment:
- Custom data via `MY_DATA`, mounts to `/app/my_data`
- Custom personalities via `MY_PERSONALITIES`, mounts to `/app/personalities/my_personalities`
- Custom taskflows via `MY_TASKFLOWS`, mounts to `/app/taskflows/my_taskflows`
- Custom prompts via `MY_PROMPTS`, mounts to `/app/prompts/my_prompts`
- Custom toolboxes via `MY_TOOLBOXES`, mounts to `/app/toolboxes/my_toolboxes`
See [docker/run.sh](docker/run.sh) for further details.
Example: deploying a Taskflow (example.yaml):
```sh
docker/run.sh -t example
```
Example: deploying a Taskflow with global variables:
```sh
docker/run.sh -t example_globals -g fruit=apples
```
Example: deploying a custom taskflow (custom_taskflow.yaml):
```sh
MY_TASKFLOWS=~/my_taskflows docker/run.sh -t custom_taskflow
```
Example: deploying a custom taskflow (custom_taskflow.yaml) and making local CodeQL databases available to the CodeQL MCP server:
```sh
MY_TASKFLOWS=~/my_taskflows MY_DATA=~/app/my_data CODEQL_DBS_BASE_PATH=/app/my_data/codeql_databases docker/run.sh -t custom_taskflow
```
For more advanced scenarios, such as making custom MCP server code available, you can alter the run script to mount your custom code into the image and configure your toolboxes to use that code accordingly.
```sh
export MY_MCP_SERVERS="$PWD"/mcp_servers
export MY_TOOLBOXES="$PWD"/toolboxes
export MY_PERSONALITIES="$PWD"/personalities
export MY_TASKFLOWS="$PWD"/taskflows
export MY_PROMPTS="$PWD"/prompts
export MY_DATA="$PWD"/data
if [ ! -f ".env" ]; then
touch ".env"
fi
docker run \
--mount type=bind,src="$PWD"/.env,dst=/app/.env,ro \
${MY_DATA:+--mount type=bind,src=$MY_DATA,dst=/app/my_data} \
${MY_MCP_SERVERS:+--mount type=bind,src=$MY_MCP_SERVERS,dst=/app/my_mcp_servers,ro} \
${MY_TASKFLOWS:+--mount type=bind,src=$MY_TASKFLOWS,dst=/app/taskflows/my_taskflows,ro} \
${MY_TOOLBOXES:+--mount type=bind,src=$MY_TOOLBOXES,dst=/app/toolboxes/my_toolboxes,ro} \
${MY_PROMPTS:+--mount type=bind,src=$MY_PROMPTS,dst=/app/prompts/my_prompts,ro} \
${MY_PERSONALITIES:+--mount type=bind,src=$MY_PERSONALITIES,dst=/app/personalities/my_personalities,ro} \
"ghcr.io/githubsecuritylab/seclab-taskflow-agent" "$@"
```
## General YAML file headers
Every YAML file used by the Seclab Taskflow Agent must include a header like this:
```yaml
seclab-taskflow-agent:
version: "1.0"
filetype: taskflow
```
The `version` number in the header is currently 1, meaning the file uses
version 1 of the seclab-taskflow-agent syntax. If we ever need to make a
major change to the syntax, we'll update the version number, which should
let us evolve the grammar without breaking backwards compatibility. The
version can be specified as an integer, float, or string.
The `filetype` determines whether the file defines a personality, toolbox, etc.
This means that different types of files can be stored in the same directory.
A `filetype` can be one of the following:
- taskflow
- personality
- toolbox
- prompt
- model_config
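As a rough sketch of how a loader might enforce this header, here is some hypothetical Python (illustrative only, not the framework's actual code) that validates a document already parsed from YAML:

```python
# Hypothetical header check, mirroring the rules described above.
VALID_FILETYPES = {"taskflow", "personality", "toolbox", "prompt", "model_config"}

def check_header(doc: dict) -> str:
    """Validate a parsed YAML document's seclab-taskflow-agent header."""
    header = doc["seclab-taskflow-agent"]
    # version may be an integer, float, or string; compare its major part
    if str(header["version"]).split(".")[0] != "1":
        raise ValueError(f"unsupported syntax version: {header['version']!r}")
    filetype = header["filetype"]
    if filetype not in VALID_FILETYPES:
        raise ValueError(f"unknown filetype: {filetype!r}")
    return filetype
```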
We'll now explain the role of different types of files and functionalities available to them.
## Personalities
Core characteristics for a single Agent. Configured through YAML files of `filetype` `personality`.
These are system prompt level instructions.
Example:
```yaml
# personalities define the system prompt level directives for this Agent
seclab-taskflow-agent:
version: 1
filetype: personality
personality: |
You are a simple echo bot. You use echo tools to echo things.
task: |
Echo user inputs using the echo tools.
# personality toolboxes map to mcp servers made available to this Agent
toolboxes:
- seclab_taskflow_agent.toolboxes.echo
```
In the above, the `personality` and `task` fields specify the system prompt to be used whenever this `personality` is used.
The `toolboxes` field lists the tools available to this `personality`; it should be a list of files of `filetype` `toolbox`. (See the [Import paths](#import-paths) section for how to reference other files.)
Personalities can be used in two ways. First, a personality can be used standalone with a prompt input from the command line:
```
hatch run main -p examples.personalities.echo 'echo this message'
```
In this case, `personality` and `task` from [`examples/personalities/echo.yaml`](examples/personalities/echo.yaml) are used as the
system prompt, while the user argument `echo this message` is used as the user prompt. In this use case, the only tools the
personality has access to are the `toolboxes` specified in the file.
Personalities can also be used in a `taskflow` to perform tasks. This is done by adding the `personality` to the `agents` field in a `taskflow` file:
```yaml
taskflow:
- task:
...
agents:
- personalities.assistant
user_prompt: |
Fetch all the open pull requests from `github/codeql` github repository.
You do not need to provide a summary of the results.
toolboxes:
- seclab_taskflow_agent.toolboxes.github_official
```
In this case, the `personality` specified in `agents` provides the system prompt, and the user prompt comes from the `user_prompt` field of the task. One important difference here is that any `toolboxes` specified in the `task` *overwrite* the `toolboxes` the `personality` normally has access to. In the example above, `personalities.assistant` gets access to the `seclab_taskflow_agent.toolboxes.github_official` toolbox instead of its own. Whenever a `task` provides a `toolboxes` field, only those toolboxes are used and the `personality` loses access to its own. For example:
```yaml
taskflow:
- task:
...
agents:
- examples.personalities.echo
user_prompt: |
echo this
toolboxes:
- seclab_taskflow_agent.toolboxes.github_official
```
In the above `task`, `examples.personalities.echo` will only have access to `seclab_taskflow_agent.toolboxes.github_official` and can no longer access its own `seclab_taskflow_agent.toolboxes.echo` toolbox (unless it is also added to the `task`'s `toolboxes`).
## Toolboxes
MCP servers that provide tools. Configured through YAML files of `filetype` `toolbox`. These are files that provide the type and parameters to start an MCP server.
For example, to start a stdio MCP server that is implemented in a python file:
```yaml
# stdio mcp server configuration
seclab-taskflow-agent:
version: 1
filetype: toolbox
server_params:
kind: stdio
command: python
args: ["-m", "seclab_taskflow_agent.mcp_servers.echo.echo"]
env:
TEST: value
```
In the above, `command` and `args` are just the command and arguments that should be run to start the MCP server. Environment variables can be passed using the `env` field.
A [streamable HTTP](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http) MCP server is also supported by setting the `kind` to `streamable`:
```yaml
server_params:
kind: streamable
url: https://api.githubcopilot.com/mcp/
#See https://github.com/github/github-mcp-server/blob/main/docs/remote-server.md
headers:
Authorization: "Bearer {{ env('GH_TOKEN') }}"
optional_headers:
X-MCP-Toolsets: "{{ env('GITHUB_MCP_TOOLSETS') }}"
X-MCP-Readonly: "{{ env('GITHUB_MCP_READONLY') }}"
```
You can force certain tools within a `toolbox` to require user confirmation to run. This can be helpful if a tool may perform irreversible actions and should require user approval prior to its use. This is done by including the name of the tool (function) in the MCP server in the `confirm` section:
```yaml
server_params:
kind: stdio
...
# the list of tools that you want the framework to confirm with the user before executing
# use this to guard rail any potentially dangerous functions from MCP servers
confirm:
- memcache_clear_cache
```
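The confirmation behaviour can be sketched as a small gate (hypothetical Python, with illustrative names; the framework's actual logic may differ):

```python
# Hypothetical gate for guarded tools: headless mode auto-allows,
# otherwise the user is asked before a listed tool may run.
def allow_tool_call(tool_name: str, confirm: list[str],
                    headless: bool = False, ask=input) -> bool:
    if tool_name not in confirm:
        return True  # tool is not guarded, run it
    if headless:
        return True  # headless mode auto-allows guarded tools
    reply = ask(f"Allow tool '{tool_name}' to run? [y/N] ")
    return reply.strip().lower() == "y"
```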
## Taskflows
A sequence of interdependent tasks performed by a set of Agents. Configured through YAML files of `filetype` `taskflow`.
Taskflows support a number of features; their details can be found [here](doc/GRAMMAR.md).
Example:
```yaml
seclab-taskflow-agent:
version: 1
filetype: taskflow
taskflow:
- task:
# taskflows can optionally choose any of the models supported by your API for a task
model: gpt-4.1
# taskflows can optionally limit the max allowed number of Agent task loop
# iterations to complete a task, this defaults to 50 when not provided
max_steps: 20
must_complete: true
# taskflows can set a primary (first entry) and handoff (additional entries) agent
agents:
- seclab_taskflow_agent.personalities.c_auditer
- examples.personalities.fruit_expert
user_prompt: |
Store an example vulnerable C program that uses `strcpy` in the
`vulnerable_c_example` memory key and explain why `strcpy`
is insecure in the C programming language. Do this before handing off
to any other agent.
Finally, why are apples and oranges healthy to eat?
# taskflows can set temporary environment variables, these support the general
# "{{ env('FROM_EXISTING_ENVIRONMENT') }}" pattern we use elsewhere as well
# these environment variables can then be made available to any stdio mcp server
# through its respective yaml configuration, see memcache.yaml for an example
# you can use these to override top-level environment variables on a per-task basis
env:
MEMCACHE_STATE_DIR: "example_taskflow/"
MEMCACHE_BACKEND: "dictionary_file"
# taskflows can optionally override personality toolboxes; in this example
# this personality normally only has the memcache toolbox, but we extend it
# here with the CodeQL toolbox
toolboxes:
- seclab_taskflow_agent.toolboxes.memcache
- seclab_taskflow_agent.toolboxes.codeql
- task:
must_complete: true
model: gpt-4.1
agents:
- seclab_taskflow_agent.personalities.c_auditer
user_prompt: |
Retrieve C code for security review from the `vulnerable_c_example`
memory key and perform a review.
Clear the memory cache when you're done.
env:
MEMCACHE_STATE_DIR: "example_taskflow/"
MEMCACHE_BACKEND: "dictionary_file"
toolboxes:
- seclab_taskflow_agent.toolboxes.memcache
# headless mode does not prompt for tool call confirms configured for a server
# note: this will auto-allow, if you want control over potentially dangerous
# tool calls, then you should NOT run a task in headless mode (default: false)
headless: true
- task:
# tasks can also run shell scripts that return e.g. json output for repeat prompt iterable
must_complete: true
run: |
echo '["apple", "banana", "orange"]'
- task:
repeat_prompt: true
agents:
- seclab_taskflow_agent.personalities.assistant
user_prompt: |
What kind of fruit is {{ RESULT }}?
```
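The `run`/`repeat_prompt` pairing at the end of the example can be sketched roughly as follows (hypothetical Python; `expand_repeat` is an illustrative name, not a framework function):

```python
import json

def expand_repeat(previous_output: str, user_prompt: str) -> list[str]:
    """Fan a JSON-array result out into one prompt per item.

    Illustrative sketch of the repeat-prompt pattern above: the previous
    task's shell output is parsed as JSON, and each item fills {{ RESULT }}.
    """
    items = json.loads(previous_output)
    return [user_prompt.replace("{{ RESULT }}", str(item)) for item in items]
```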
Taskflows support [Agent handoffs](https://openai.github.io/openai-agents-python/handoffs/). Handoffs are useful for implementing triage patterns, where the primary Agent can decide to hand off a task to any of the subsequent Agents in the `agents` list.
See the [taskflow examples](examples/taskflows) for other useful Taskflow patterns such as repeatable and asynchronous templated prompts.
You can run a taskflow from the command line like this:
```
hatch run main -t examples.taskflows.CVE-2023-2283
```
## Prompts
Prompts are configured through YAML files of `filetype` `prompt`. They define a reusable prompt that can be referenced in `taskflow` files.
They contain only one field, `prompt`, which is used to replace any `{{ PROMPT_<import-path> }}` template parameter in a taskflow. For example, the following `prompt`:
```yaml
seclab-taskflow-agent:
version: 1
filetype: prompt
prompt: |
Tell me more about bananas as well.
```
would replace any `{{ PROMPT_examples.prompts.example_prompt }}` template parameter found in the `user_prompt` section in a taskflow:
```yaml
- task:
agents:
- examples.personalities.fruit_expert
user_prompt: |
Tell me more about apples.
{{ PROMPT_examples.prompts.example_prompt }}
```
becomes:
```yaml
- task:
agents:
- examples.personalities.fruit_expert
user_prompt: |
Tell me more about apples.
Tell me more about bananas as well.
```
## Model configs
Model configs are configured through YAML files of `filetype` `model_config`. These provide a way to configure model versions.
```yaml
seclab-taskflow-agent:
version: 1
filetype: model_config
models:
gpt_latest: gpt-5
```
A `model_config` file can be used in a `taskflow` and the values defined in `models` can then be used throughout.
```yaml
model_config: examples.model_configs.model_config
taskflow:
- task:
model: gpt_latest
```
The model version can then be updated by changing `gpt_latest` in the `model_config` file and applied across all taskflows that use the config.
In addition, model specific parameters can be provided via `model_config`. To do so, define a `model_settings` section in the `model_config` file. This section has to be a dictionary with the model names as keys:
```yaml
model_settings:
gpt_latest:
temperature: 1
reasoning:
effort: high
```
You do not need to set parameters for every model defined in the `models` section; when parameters are not set for a model, the defaults are used. However, every model listed in `model_settings` must be one of the models specified in the `models` section, otherwise an error is raised:
```yaml
model_settings:
new_model:
...
```
The above will result in an error because `new_model` is not defined in the `models` section. Model parameters can also be set per task, and any settings defined in a task will override the settings in the config.
## Passing environment variables
Files of types `taskflow` and `toolbox` allow environment variables to be passed using the `env` field:
```yaml
server_params:
...
env:
CODEQL_DBS_BASE_PATH: "{{ env('CODEQL_DBS_BASE_PATH') }}"
# prevent git repo operations on gh codeql executions
GH_NO_UPDATE_NOTIFIER: "disable"
```
For a `toolbox`, `env` can be used inside `server_params`. A template of the form `{{ env('ENV_VARIABLE_NAME') }}` passes the value of an environment variable from the current process to the MCP server. So in the above, the MCP server is run with `GH_NO_UPDATE_NOTIFIER=disable`, and the templated parameter `{{ env('CODEQL_DBS_BASE_PATH') }}` is replaced by the value of the environment variable `CODEQL_DBS_BASE_PATH` in the current process.
Similarly, environment variables can be passed to a `task` in a `taskflow`:
```yaml
taskflow:
- task:
must_complete: true
agents:
- seclab_taskflow_agent.personalities.assistant
user_prompt: |
Store the json array ["apples", "oranges", "bananas"] in the `fruits` memory key.
env:
MEMCACHE_STATE_DIR: "example_taskflow/"
MEMCACHE_BACKEND: "dictionary_file"
```
This overwrites the environment variables `MEMCACHE_STATE_DIR` and `MEMCACHE_BACKEND` for that task only. The `{{ env('ENV_VARIABLE_NAME') }}` template can also be used here; note that `ENV_VARIABLE_NAME` must then be the name of an environment variable set in the current process.
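The `{{ env('NAME') }}` substitution can be sketched with a small regular expression (hypothetical code, not the framework's templating engine):

```python
import os
import re

# Matches {{ env('NAME') }}, allowing optional whitespace inside the braces.
ENV_TEMPLATE = re.compile(r"\{\{\s*env\('([A-Za-z_][A-Za-z0-9_]*)'\)\s*\}\}")

def expand_env(value: str) -> str:
    # Replace each {{ env('NAME') }} with the variable's value in this process.
    return ENV_TEMPLATE.sub(lambda m: os.environ[m.group(1)], value)
```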
## Import paths
YAML files often need to refer to each other. For example, a taskflow can reference a personality like this:
```yaml
taskflow:
- task:
...
agents:
- seclab_taskflow_agent.personalities.assistant
```
We use Python's import system, so a name like `seclab_taskflow_agent.personalities.assistant` will get resolved to a YAML file using Python's import rules. One of the benefits of this is that it makes it easy to bundle and share taskflows as Python packages on PyPI.
The implementation works like this:
1. A name like `seclab_taskflow_agent.personalities.assistant` gets split (at the last `.` character) into a package name (`seclab_taskflow_agent.personalities`) and a file name (`assistant`).
2. Python's [`importlib.resources.files`](https://docs.python.org/3/library/importlib.resources.html#importlib.resources.files) is used to resolve the package name into a directory name.
3. The extension `.yaml` is added to the filename: `assistant.yaml`.
4. The YAML file is loaded from the directory that was returned by `importlib.resources.files`.
The exact code that implements this can be found in [`available_tools.py`](src/seclab_taskflow_agent/available_tools.py).
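The resolution steps above can be sketched like this (a simplification of the real implementation; `resolve_yaml` is an illustrative name):

```python
from importlib.resources import files

def resolve_yaml(name: str):
    """Resolve a dotted import path like 'pkg.sub.assistant' to a YAML path."""
    package, _, stem = name.rpartition(".")  # split at the last '.'
    return files(package) / f"{stem}.yaml"   # package directory + file name
```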
## Background
SecLab Taskflow Agent is an experimental agentic framework, maintained by [GitHub Security Lab](https://securitylab.github.com/). We are using it to experiment with using AI Agents for security purposes, such as auditing code for vulnerabilities, or triaging issues.
We'd love to hear your feedback. Please [create an issue](https://github.com/GitHubSecurityLab/seclab-taskflow-agent/issues/new/choose) to send us a feature request or bug report. We also welcome pull requests (see our [contribution guidelines](./CONTRIBUTING.md) for more information if you wish to contribute).
## License
This project is licensed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license. Please refer to the [LICENSE](./LICENSE) file for the full terms.
## Maintainers
[CODEOWNERS](./CODEOWNERS)
## Support
[SUPPORT](./SUPPORT.md)
## Acknowledgements
Thanks to Security Lab team members [Man Yue Mo](https://github.com/m-y-mo) and [Peter Stöckli](https://github.com/p-) for contributing heavily to the testing and development of this framework, and to the rest of the Security Lab team for helpful discussions and feedback.
| text/markdown | null | GitHub Security Lab <securitylab@github.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles==24.1.0",
"annotated-types==0.7.0",
"anyio==4.9.0",
"attrs==25.3.0",
"authlib==1.6.6",
"certifi==2025.6.15",
"cffi==2.0.0",
"charset-normalizer==3.4.2",
"click==8.2.1",
"colorama==0.4.6",
"cryptography==46.0.5",
"cyclopts==4.0.0",
"distro==1.9.0",
"dnspython==2.8.0",
"docstring-parser==0.17.0",
"docutils==0.22",
"email-validator==2.3.0",
"exceptiongroup==1.3.0",
"fastmcp==2.14.0",
"griffe==1.7.3",
"h11==0.16.0",
"httpcore==1.0.9",
"httpx-sse==0.4.1",
"httpx==0.28.1",
"idna==3.10",
"importlib-metadata==8.7.0",
"isodate==0.7.2",
"jedi==0.19.2",
"jinja2>=3.1.0",
"jiter==0.10.0",
"jsonschema-path==0.3.4",
"jsonschema-specifications==2025.4.1",
"jsonschema==4.24.0",
"lazy-object-proxy==1.12.0",
"markdown-it-py==3.0.0",
"markupsafe==3.0.2",
"mcp==1.23.1",
"mdurl==0.1.2",
"more-itertools==10.8.0",
"openai-agents==0.2.11",
"openai==1.107.0",
"openapi-core==0.19.5",
"openapi-pydantic==0.5.1",
"openapi-schema-validator==0.6.3",
"openapi-spec-validator==0.7.2",
"parse==1.20.2",
"parso==0.8.4",
"pathable==0.4.4",
"platformdirs==4.5.0",
"pluggy==1.6.0",
"pycparser==2.23",
"pydantic-core==2.33.2",
"pydantic-settings==2.10.1",
"pydantic==2.11.7",
"pygments==2.19.2",
"pyperclip==1.9.0",
"python-dotenv==1.1.1",
"python-lsp-jsonrpc==1.1.2",
"python-lsp-server==1.12.2",
"python-multipart==0.0.22",
"pyyaml==6.0.2",
"referencing==0.36.2",
"requests==2.32.4",
"rfc3339-validator==0.1.4",
"rich-rst==1.3.1",
"rich==14.0.0",
"rpds-py==0.26.0",
"shellingham==1.5.4",
"six==1.17.0",
"sniffio==1.3.1",
"sqlalchemy==2.0.41",
"sse-starlette==2.4.1",
"starlette==0.49.1",
"strenum==0.4.15",
"tqdm==4.67.1",
"typer==0.16.0",
"types-requests==2.32.4.20250611",
"typing-extensions==4.15.0",
"typing-inspection==0.4.1",
"ujson==5.10.0",
"urllib3==2.6.3",
"uvicorn==0.35.0",
"zipp==3.23.0"
] | [] | [] | [] | [
"Source, https://github.com/GitHubSecurityLab/seclab-taskflow-agent",
"Issues, https://github.com/GitHubSecurityLab/seclab-taskflow-agent/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:55:29.515837 | seclab_taskflow_agent-0.2.0.tar.gz | 92,851 | 73/80/c722890ff0aa5003aefc9dc1aea509506fbf685b1e659d4dffb0caca97e3/seclab_taskflow_agent-0.2.0.tar.gz | source | sdist | null | false | 33bb7aafb6161a0c936eab97c45283db | 20e8622d29198fa3a984cf2609117a5aba18692410b2bb43d1b778c11593adb1 | 7380c722890ff0aa5003aefc9dc1aea509506fbf685b1e659d4dffb0caca97e3 | MIT | [
"LICENSE",
"NOTICE"
] | 227 |
2.4 | modmux | 0.2.5 | Single interface for several game mod portals | # ModMux
[](https://pypi.org/project/modmux/)
[](https://pypi.org/project/modmux/)
[](LICENSE)
[](https://github.com/APasz/ModMux/actions/workflows/ci.yml)
Unified async client for multiple game mod platforms. ModMux normalises provider responses into shared Pydantic models and ships a small CLI for quick lookups.
## Features
- Async HTTP client with rate limiting and retry handling
- Normalised `Mod` metadata model across providers
- Pluggable provider registry
- URL parsing helpers for provider links
- CLI for fetching a single mod and printing JSON
## Supported providers
- Modrinth
- CurseForge
- Nexus Mods
- Mod.io
- Steam Workshop
- Factorio Mod Portal (Wube)
## Requirements
- Python 3.13+
- Dependencies: `pydantic`, `httpx`, `aiolimiter`
## Install
```bash
python -m pip install modmux
```
### Credentials and environment variables
- `--token` accepts API tokens/keys.
- `--user` supplies a user id for providers that need user-scoped base URLs (mod.io).
- Environment variables:
- `MODMUX_TOKEN` (fallback for all providers)
- `MODMUX_<PROVIDER>_TOKEN` (provider-specific, e.g. `MODMUX_MODRINTH_TOKEN`)
- `MODMUX_<PROVIDER>_USER` (provider-specific user id, e.g. `MODMUX_MODIO_USER`)
## Library usage
Two supported patterns are available: an async context manager or a manually
closed client.
```python
import asyncio
from modmux import ModID, ModrinthCreds, Muxer, Provider
mod_id = ModID(provider=Provider.MODRINTH, id="fabric-api")
async def run() -> None:
async with Muxer(creds=[ModrinthCreds(api_key="api-key")]) as mux:
mod = await mux.get_mod(Provider.MODRINTH, mod_id)
print(mod.name)
asyncio.run(run())
```
Use the same context manager pattern as above; you can supply multiple provider credentials in one go:
```python
from modmux import ModioCreds, Muxer, NexusCreds, SteamCreds
mux = Muxer(
creds=[
ModioCreds(api_key="api-key", user_id="user-id"),
NexusCreds(token="nexus-token"),
SteamCreds(api_key="steam-key"),
]
)
```
```python
import asyncio
from modmux import ModID, Muxer, Provider
mod_id = ModID(provider=Provider.MODRINTH, id="fabric-api")
async def run() -> None:
mux = Muxer()
try:
mod = await mux.get_mod(Provider.MODRINTH, mod_id)
finally:
await mux.aclose()
print(mod.name)
asyncio.run(run())
```
The `modmux_client(...)` helper remains available as a convenience wrapper
around `Muxer` for existing code.
You can also pass raw credential dicts when you do not want to import the
credential models directly:
```python
from modmux import ModID, Muxer, Provider
mod_id = ModID(provider=Provider.MODRINTH, id="fabric-api")
async with Muxer(creds={Provider.MODRINTH: {"token": "token"}}) as mux:
mod = await mux.get_mod(Provider.MODRINTH, mod_id)
```
## URL parsing
Parse provider URLs into `ModID` instances, or fetch directly from a URL.
```python
import asyncio
from modmux import Muxer, parse_url
async def run_mod_id() -> None:
async with Muxer() as mux:
mod = await mux.get_mod_from_url("https://modrinth.com/mod/fabric-api")
print(mod.name)
async def run_from_url() -> None:
mod_id = parse_url("https://modrinth.com/mod/fabric-api")
if mod_id:
async with Muxer() as mux:
mod = await mux.get_mod(mod_id.provider, mod_id)
print(mod.name)
asyncio.run(run_mod_id())
asyncio.run(run_from_url())
```
Some providers require extra context (game ids or credentials); use
`get_mod_from_url(..., game="...")` when the URL cannot supply it.
## CLI usage
The CLI expects a provider name and provider-specific mod id. Provider names are case-insensitive.
```bash
modmux MODRINTH fabric-api --pretty
modmux CURSEFORGE 238222 --pretty
modmux NEXUSMODS 12345 --game transportfever2
modmux MODIO some-mod --game 4321 --user 12345 --token <api-key>
```
Or without installing a script:
```bash
python -m modmux MODRINTH fabric-api --pretty
```
Output is a JSON serialisation of the `Mod` model.
## Provider notes
- CurseForge: slug lookups require `ModID.game` (game id).
- Nexus Mods: requires `ModID.game` (game domain, e.g. `skyrim`).
- mod.io: requires `ModID.game` plus a user id (use `--user` or `MODMUX_MODIO_USER`).
- Steam: uses a Workshop published file id; `ModID.game` is optional.
- Wube: uses the Factorio mod name slug.
| text/markdown | APasz | null | null | null | MIT | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiolimiter<2,>=1.2",
"httpx<1,>=0.28",
"pydantic<3,>=2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:54:36.800520 | modmux-0.2.5.tar.gz | 20,791 | 9f/69/0e5c58a55c9ac5ff102fa1a479b0116945c43a1f646807120fb431e63c36/modmux-0.2.5.tar.gz | source | sdist | null | false | a19f7976e4ed9a7827b4fa8c5ae16ed8 | 3ac56b9c833d7d1fbab0f1d9742574c4972af62decc14ed926b50a9ad78f6adb | 9f690e5c58a55c9ac5ff102fa1a479b0116945c43a1f646807120fb431e63c36 | null | [
"LICENSE"
] | 203 |
2.4 | usdm4-fhir | 0.8.0 | A python package for importing and exporting the CDISC TransCelerate USDM, version 4, using Excel | # USDM4 FHIR
# Install
```pip install usdm4_fhir```
# Build Package
Build steps for deployment to pypi.org
- Run `pytest`, ensure coverage and all tests pass
- Run `ruff format`
- Run `ruff check`, ensure no errors
- Build with `python3 -m build --sdist --wheel`
- Upload to pypi.org using `twine upload dist/*`
| text/markdown | D Iberson-Hurst | null | null | null | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent"
] | [] | null | null | null | [] | [] | [] | [
"usdm4>=0.18.0",
"d4k_ms_base>=0.3.0",
"openpyxl"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:54:10.040249 | usdm4_fhir-0.8.0.tar.gz | 45,381 | 21/d8/87230554fd5a06b4af6bc9fe11f2d8b1c9a93a8f8c14df40fa5e36e75249/usdm4_fhir-0.8.0.tar.gz | source | sdist | null | false | e17a49477d7e71c7bad17ea8a8a6e19c | b19a4952be5a5edf4f7cf5025a145eb0b2d40aa0aaba6ee2acaca607aef7bcf8 | 21d887230554fd5a06b4af6bc9fe11f2d8b1c9a93a8f8c14df40fa5e36e75249 | null | [
"LICENSE"
] | 203 |
2.4 | karellen-llvm-core | 22.1.0rc3.post33 | Karellen LLVM core libraries | Contains LLVM, LTO, Remarks, libc++, libc++abi, and libunwind
| text/markdown | Karellen, Inc. | supervisor@karellen.co | Arcadiy Ivanov | arcadiy@karellen.co | Apache-2.0 | LLVM, libc++, libcxx | [
"Programming Language :: Python",
"Operating System :: POSIX :: Linux",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools"
] | [] | https://github.com/karellen/karellen-llvm | null | null | [] | [] | [] | [
"wheel-axle-runtime<1.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/karellen/karellen-llvm/issues",
"Documentation, https://github.com/karellen/karellen-llvm",
"Source Code, https://github.com/karellen/karellen-llvm"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:53:32.507930 | karellen_llvm_core-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | 32,721,571 | c1/2e/63ba89c65aaa126d206671b361282126bfb086909fc31d8b7650af531707/karellen_llvm_core-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | py3 | bdist_wheel | null | false | 762faf534bc02eacca371f544c098c7a | 537162b443e653d95ee2303664c9da04cc73dce1a1f96a3d12c460c9ef3b4e87 | c12e63ba89c65aaa126d206671b361282126bfb086909fc31d8b7650af531707 | null | [] | 126 |
2.4 | karellen-llvm-clang | 22.1.0rc3.post33 | Karellen Clang compiler infrastructure | Self-contained LLVM Clang compiler infrastructure
| text/markdown | Karellen, Inc. | supervisor@karellen.co | Arcadiy Ivanov | arcadiy@karellen.co | Apache-2.0 | LLVM, clang, c, c++, compiler | [
"Programming Language :: Python",
"Operating System :: POSIX :: Linux",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools"
] | [] | https://github.com/karellen/karellen-llvm | null | null | [] | [] | [] | [
"karellen-llvm-core==22.1.0rc3.post33",
"wheel-axle-runtime<1.0",
"karellen-llvm-toolchain-tools==22.1.0rc3.post33; extra == \"tools\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/karellen/karellen-llvm/issues",
"Documentation, https://github.com/karellen/karellen-llvm",
"Source Code, https://github.com/karellen/karellen-llvm"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:53:21.766954 | karellen_llvm_clang-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | 76,368,956 | c8/ad/2b91f0649576005dcf63e75ce12035af092ed396194b93956ea56025f9b4/karellen_llvm_clang-22.1.0rc3.post33-py3-none-manylinux_2_28_x86_64.whl | py3 | bdist_wheel | null | false | e33e6a1957747919b2cc162871ffcc11 | 24eea712c8e888cca5a9cea1ccac0db7bda5de879924e20da0f24cdb2c46489b | c8ad2b91f0649576005dcf63e75ce12035af092ed396194b93956ea56025f9b4 | null | [] | 124 |
2.4 | overloadable | 1.1.4 | This projects allows overloading. | ============
overloadable
============
Visit the website `https://overloadable.johannes-programming.online/ <https://overloadable.johannes-programming.online/>`_ for more information.
| text/x-rst | null | Johannes <johannes.programming@gmail.com> | null | null | The MIT License (MIT)
Copyright (c) 2024 Johannes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"datarepr<2,>=1.1",
"setdoc<2,>=1.2.10"
] | [] | [] | [] | [
"Download, https://pypi.org/project/overloadable/#files",
"Index, https://pypi.org/project/overloadable/",
"Source, https://github.com/johannes-programming/overloadable/",
"Website, https://overloadable.johannes-programming.online/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:53:14.434015 | overloadable-1.1.4.tar.gz | 5,956 | 2e/9d/8460a8f343e7d83310599dd59e9593e8edd133687b2288ce586fe40942cf/overloadable-1.1.4.tar.gz | source | sdist | null | false | ab5e73e38c3b97dc92495ed7bd789914 | 82feb71153d0c2ef77942456641df12eda70a2360389f6e6c391ad99fde73faf | 2e9d8460a8f343e7d83310599dd59e9593e8edd133687b2288ce586fe40942cf | null | [
"LICENSE.txt"
] | 203 |
2.4 | nncodec | 2.1.0 | Fraunhofer HHI implementation of the Neural Network Coding (NNC) Standard | <div align="center">
<img src="https://github.com/fraunhoferhhi/nncodec/assets/65648299/69b41b38-19ed-4c45-86aa-2b2cd4d835f7" width="660"/>
# A Software Implementation of the ISO/IEC 15938-17 Neural Network Coding (NNC) Standard
</div>
## Table of Contents
- [Information](#information)
- [Quick Start](#quick-start)
- [Installation](#installation)
- [NNCodec Usage](#nncodec-usage)
- [Tensor Coding](#coding-tensors-in-ai-based-media-processing-taimp)
- [Neural Network Coding](#coding-neural-networks-and-neural-network-updates)
- [Federated Learning](#federated-learning-with-nncodec)
- [Paper Results](#paper-results)
- [Citation and Publications](#citation-and-publications)
- [License](#license)
## Information
This repository hosts a beta version of NNCodec 2.0, which incorporates new compression tools for incremental neural
network data, as introduced in the second edition of the NNC standard. It also supports coding
"Tensors in AI-based Media Processing" (TAIMP), addressing recent MPEG requirements for coding individual tensors rather
than entire neural networks or differential updates to a base neural network.
The repository also includes a novel use case demonstrating federated learning (FL) for tiny language models in
telecommunications.
The official NNCodec 1.0 git repository, which served as the foundation for this project, can be found here:
[](https://github.com/fraunhoferhhi/nncodec)
It also contains a [Wiki-Page](https://github.com/fraunhoferhhi/nncodec/wiki) providing further information on NNCodec.
Upon approval, this second version will update the official git repository.
### The Fraunhofer Neural Network Encoder/Decoder (NNCodec)
NNCodec is an efficient implementation of NNC ([Neural Network Coding ISO/IEC 15938-17](https://www.iso.org/standard/85545.html)),
the first international standard for compressing (incremental) neural network data.
It provides the following main features:
- Standard-compliant encoder/decoder including, e.g., DeepCABAC, quantization, and sparsification
- Built-in support for common deep learning frameworks (e.g., PyTorch)
- Integrated support for data-driven compression tools on common datasets (ImageNet, CIFAR, PascalVOC)
- Federated AI support via [*Flower*](https://flower.ai), a prominent and widely used framework
- Compression pipelines for:
- Neural Networks (NN)
- Tensors (TAIMP)
- Federated Learning (FL)
## Quick Start
Install and run a tensor compression example:
```bash
pip install nncodec
python example/tensor_coding.py
```
## Installation
### Requirements
- Python >= 3.8 with working pip
- Windows: Visual Studio 2015 Update 3 or later
### Package Installation from PyPI
NNCodec 2.0 supports pip installation:
```bash
pip install nncodec
```
This installs the dependencies listed in the `install_requires` list of [setup.py](https://github.com/d-becking/nncodec2/blob/master/setup.py)
### Package Installation from Source
To install NNCodec from source, we recommend creating a virtual Python environment and installing via pip from the root of
the cloned repository:
```bash
python3 -m venv env
source env/bin/activate
pip install --upgrade pip
pip install -e .
```
## NNCodec Usage
<div align="center">
<img src="https://github.com/user-attachments/assets/564b9d02-a706-459a-a8bb-241d2ec4608f" width="660"/>
</div>
NNCodec 2.0, as depicted above, includes three main pipelines:
- One for tensorial data in AI-based media processing (e.g., function coefficients, feature maps, ...),
```python
from nncodec.tensor import encode, decode
```
- one for coding entire neural networks (or their differential updates), and
```python
from nncodec.nn import encode, decode
```
- one for federated learning scenarios.
```python
from nncodec.fl import NNClient, NNCFedAvg
```
### Coding Tensors in AI-based Media Processing (TAIMP)
The [tensor_coding.py](https://github.com/d-becking/nncodec2/blob/master/example/tensor_coding.py) script provides
encoding and decoding examples of random tensors.
The first example codes a random _PyTorch_ tensor (which could also be an integer tensor or a _numpy_ array):
```python
example_tensor = torch.randn(256, 64, 64) # torch.randint(0, 255, (3, 3, 32, 32)) # example 8-bit uint tensor
bitstream = encode(example_tensor, args_dict)
dec_tensor = torch.tensor(decode(bitstream, args_dict["tensor_id"]))
```
Here, `args_dict` is a Python dictionary that specifies the encoding configuration. The default configuration is:
```python
args_dict = { 'approx_method': 'uniform', # Quantization method ['uniform' or 'codebook']
'qp': -32, # main quantization parameter (QP)
'nonweight_qp': -75, # QP for non-weights, e.g., 1D or BatchNorm params (default: -75, i.e., fine quantization)
'use_dq': True, # enables dependent scalar / Trellis-coded quantization
'bitdepth': None, # Optional: integer-aligned bitdepth for limited precision [1, 31] bit; note: overwrites QPs.
'quantize_only': False, # if True encode() returns quantized parameter instead of bitstream
'tca': False, # enables Temporal Context Adaptation (TCA)
'row_skipping': True, # enables skipping tensor rows from arithmetic coding if entirely zero
'sparsity': 0.0, # introduces mean- & std-based unstructured sparsity [0.0, 1.0]
'struct_spars_factor': 0.0, # introduces structured per-channel sparsity (based on channel means); requires sparsity > 0.0
'job_identifier': 'TAIMP_coding', # Name extension for generated *.nnc bitstream files and for logging
'results': '.', # path where results / bitstreams shall be stored
'tensor_id': '0', # identifier for tensor
'tensor_path': None, # path to tensor to be encoded
'compress_differences': False, # if True bitstream represents a differential update of a base tensor; set automatically if TCA enabled
'verbose': True # print stdout process information.
}
```
An exemplary minimal config:
```python
args_dict = {
'approx_method': 'uniform', 'bitdepth': 4, 'use_dq': True, 'sparsity': 0.5, 'struct_spars_factor': 0.9, 'tensor_id': '0'
}
```
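As a rough illustration of what the `sparsity` option controls, the sketch below zeroes out the smallest-magnitude fraction of a tensor's entries. Note that this is only a toy example and not NNCodec's actual implementation: NNCodec derives its threshold from the tensor's mean and standard deviation, while this sketch uses a simple magnitude quantile.

```python
import numpy as np

def unstructured_sparsify(t: np.ndarray, target_sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude elements so that roughly
    `target_sparsity` of the entries become zero (illustrative only)."""
    if target_sparsity <= 0.0:
        return t
    flat = np.abs(t).ravel()
    k = int(target_sparsity * flat.size)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    out = t.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

example = np.random.randn(64, 64).astype(np.float32)
sparse = unstructured_sparsify(example, 0.5)
print(np.mean(sparse == 0.0))  # roughly 0.5
```

Sparser tensors contain longer zero runs, which is what makes downstream tools like `row_skipping` effective.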
The second example targets incremental tensor coding with the coding tool Temporal Context Adaptation (TCA).
Running tensor_coding.py with `--incremental` updates 50% of the example tensor's elements for `num_increments`
iterations and stores the previously decoded, co-located tensor in `approx_param_base`.
`approx_param_base` must be initialized with
```python
approx_param_base = {"parameters": {}, "put_node_depth": {}, "device_id": 0, "parameter_id": {}}
```
### Coding Neural Networks and Neural Network Updates
The [nn_coding.py](https://github.com/d-becking/nncodec2/blob/master/example/nn_coding.py) script provides
encoding and decoding examples of entire neural networks (NN) (`--uc=0`), incremental full NN (`--uc=1`)
and incremental differential dNN (`--uc=2`).
**Minimal example:** In its simplest form, an NN's parameters can be represented as a Python dictionary of float32 or int32 numpy arrays:
```python
from nncodec.nn import encode, decode
import numpy as np
model = {f"parameter_{i}": np.random.randn(np.random.randint(1, 36),
np.random.randint(1, 303)).astype(np.float32) for i in range(5)}
bitstream = encode(model)
rec_mdl_params = decode(bitstream)
```
Hyperparameters can be inserted (like in the `nncodec.tensor` pipeline above) by passing an `args_dict`
to `encode()` containing one or more configurations, e.g.,
```python
bitstream = encode(model, args={'qp': -24, 'use_dq': True, 'sparsity': 0.4})
```
or instead of a `qp` also a `bitdepth` can be used:
```python
bitstream = encode(model, args={'bitdepth': 4, 'use_dq': True, 'sparsity': 0.4})
```
**Example CLI:** For coding an actual NN, we included a _ResNet-56_ model pre-trained on _CIFAR-100_. Additionally, all
_torchvision_ models can be coded out of the box. To see a list of all available models, execute:
```bash
python example/nn_coding.py --help
```
The following example codes the _mobilenet_v2_ model from torchvision:
```bash
python example/nn_coding.py --model=mobilenet_v2
```
The following example codes _ResNet-56_ and tests the model's performance afterward:
```bash
python example/nn_coding.py --dataset_path=<your_path> --dataset=cifar100 --model=resnet56 --model_path=./models/ResNet56_CIF100.pt
```
Training a randomly initialized _ResNet-56_ from scratch and coding the incremental updates with Temporal Context Adaptation (TCA) is achieved by:
```bash
python example/nn_coding.py --uc=1 --dataset_path=<your_path> --model=resnet56 --model_rand_int --dataset=cifar100 --tca
```
For **coding incremental differences** with respect to the base model, i.e., <img src="https://latex.codecogs.com/svg.image?\Delta NN^{(e=2)} = NN^{(e=2)} - NN^{(e=1)}" alt="dNN"/>,
set `--uc=2`.
`--max_batches` can be used to decrease the number of batches used per train epoch.
Other available hyperparameters and coding tools like `--sparsity`, `--use_dq`, `--opt_qp`, `--bitdepth`, `--approx_method=codebook`, and others are described in [nn_coding.py](https://github.com/d-becking/nncodec2/blob/master/example/nn_coding.py).
### Federated Learning with NNCodec
The [nnc_fl.py](https://github.com/d-becking/nncodec2/blob/master/example/nnc_fl.py) file implements a base script for communication-efficient
Federated Learning with NNCodec. It imports the `NNClient` and `NNCFedAvg` classes — specialized NNC-[*Flower*](https://flower.ai) objects — that
are responsible for establishing and handling the compressed FL environment.
#### Important: Install Flower before using, e.g., by issuing:
```bash
pip install -U "flwr[simulation]>=1.5"
```
The default configuration launches FL with two _ResNet-56_ clients learning the _CIFAR-100_ classification task. The _CIFAR_ dataset
is automatically downloaded if not available under `--dataset_path` (~170MB).
```bash
python example/nnc_fl.py --dataset_path=<your_path> --model_rand_int --epochs=30 --compress_upstream --compress_downstream --err_accumulation --compress_differences
```
Main coding tools and hyperparameter settings for coding are:
```bash
--qp 'Quantization parameter (QP) for NNs (default: -32)'
--diff_qp 'Quantization parameter for dNNs. Defaults to QP if unspecified (default: None)'
--nonweight_qp 'QP for non-weights, e.g., 1D or BatchNorm params (default: -75)'
--opt_qp 'Enables layer-wise QP modification based on relative layer size within NN'
--use_dq 'Enables dependent scalar / Trellis-coded quantization'
--bitdepth 'Optional: integer-aligned bitdepth for limited precision [1, 31] bit; note: overwrites QPs.'
--bnf 'Enables incremental BatchNorm Folding (BNF)'
--sparsity 'Introduces mean- & std-based unstructured sparsity [0.0, 1.0] (default: 0.0)'
--struct_spars_factor 'Introduces structured per-channel sparsity (based on channel means); requires sparsity > 0 (default: 0.9)'
--row_skipping 'Enables skipping tensor rows from arithmetic coding that are entirely zero'
--tca 'Enables Temporal Context Adaptation (TCA)'
```
Additional important hyperparameters for FL (among others in [nnc_fl.py](https://github.com/d-becking/nncodec2/blob/master/example/nnc_fl.py)):
```bash
--compress_differences 'Weight differences wrt. to base model (dNN) are compressed, otherwise full base models (NN) are communicated'
--model_rand_int 'If set, model is randomly initialized, i.e., w/o loading pre-trained weights'
--num_clients 'Number of clients in FL scenario (default: 2)'
--compress_upstream 'Compression of clients-to-server communication'
--compress_downstream 'Compression of server-to-clients communication'
--err_accumulation 'If set, quantization errors are locally accumulated ("residuals") and added to NN update prior to compression'
```
Section [Paper results](#EuCNC-2025-Poster-Session) (EuCNC) below introduces an additional use case and implementation of NNCodec 2.0 FL with tiny language models collaboratively learning feature predictions in cellular data.
### Logging results using Weights & Biases
We used Weights & Biases (wandb) for experimental results logging. Enable `--wandb` if you want to use it. Add your wandb key and optionally an experiment identifier for the run:
```bash
--wandb --wandb_key="my_key" --wandb_run_name="my_project"
```
#### Important: Install wandb before using, e.g., by issuing:
```bash
pip install wandb
```
## Paper results
- ### EuCNC 2025 Poster Session
[](https://arxiv.org/abs/2504.01947)
We presented **"Efficient Federated Learning Tiny Language Models for Mobile Network Feature Prediction"** at Poster Session I of the 2025 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit).
**TL;DR** - This work introduces a communication-efficient Federated Learning (FL) framework for training tiny language models (TLMs) that collaboratively learn to predict mobile network features (such as ping, SNR or frequency band) across five geographically distinct regions from the Berlin V2X dataset. Using NNCodec, the framework reduces communication overhead by over 99% with minimal performance degradation, enabling scalable FL deployment across autonomous mobile network cells.
<img src="https://github.com/user-attachments/assets/4fba1aca-50ca-492f-901b-d601cc20874c" width="750" /> <br>
To reproduce the experimental results and evaluate NNCodec in the telco FL setting described above, execute:
```bash
python example/nnc_fl.py --dataset=V2X --dataset_path=<your_path>/v2x --model=tinyllama --model_rand_int \
--num_clients=5 --epochs=30 --compress_upstream --compress_downstream --err_accumulation --compress_differences \
--qp=-18 --batch_size=8 --max_batches=300 --max_batches_test=150 --sparsity=0.8 --struct_spars_factor=0.9 \
--TLM_size=1 --tca --tokenizer_path=./example/tokenizer/telko_tokenizer.model
```
The pre-tokenized [Berlin V2X dataset](https://ieee-dataport.org/open-access/berlin-v2x) can be downloaded here: https://datacloud.hhi.fraunhofer.de/s/CcAeHRoWRqe5PiQ
and the pre-trained Sentencepiece Tokenizer is included in this repository at [telko_tokenizer.model](https://github.com/d-becking/nncodec2/blob/master/example/tokenizer/).
Resulting bitstreams and the best performing global TLM of all communication rounds will be stored in a `results` directory (with path set via `--results`).
To evaluate this model, execute:
```bash
python example/eval.py --model_path=<your_path>/best_tinyllama_.pt --batch_size=1 --dataset=V2X \
--dataset_path=<your_path>/v2x --model=tinyllama --TLM_size=1 --tokenizer_path=./example/tokenizer/telko_tokenizer.model
```
- ### ICML 2023 Neural Compression Workshop
[](https://openreview.net/forum?id=5VgMDKUgX0)
Our paper titled **"NNCodec: An Open Source Software Implementation of the Neural Network Coding
ISO/IEC Standard"** was awarded a Spotlight Paper at the ICML 2023 Neural Compression Workshop.
**TL;DR** - The paper presents NNCodec 1.0, analyses its coding tools with respect to the principles of information theory and gives comparative results for a broad range of neural network architectures.
The code for reproducing the experimental results of the paper and a software demo are available
here:
[](https://github.com/d-becking/nncodec-icml-2023-demo)
## Citation and Publications
If you use NNCodec in your work, please cite:
```
@inproceedings{becking2023nncodec,
title={{NNC}odec: An Open Source Software Implementation of the Neural Network Coding {ISO}/{IEC} Standard},
author={Daniel Becking and Paul Haase and Heiner Kirchhoffer and Karsten M{\"u}ller and Wojciech Samek and Detlev Marpe},
booktitle={ICML 2023 Workshop Neural Compression: From Information Theory to Applications},
year={2023},
url={https://openreview.net/forum?id=5VgMDKUgX0}
}
```
### Additional Publications (chronological order)
- D. Becking et al., **"Neural Network Coding of Difference Updates for Efficient Distributed Learning Communication"**, IEEE Transactions on Multimedia, vol. 26, pp. 6848–6863, 2024, doi: 10.1109/TMM.2024.3357198, Open Access
- H. Kirchhoffer et al. **"Overview of the Neural Network Compression and Representation (NNR) Standard"**, IEEE Transactions on Circuits and Systems for Video Technology, pp. 1-14, July 2021, doi: 10.1109/TCSVT.2021.3095970, Open Access
- P. Haase et al. **"Encoder Optimizations For The NNR Standard On Neural Network Compression"**, 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 3522-3526, doi: 10.1109/ICIP42928.2021.9506655.
- K. Müller et al. **"Ein internationaler KI-Standard zur Kompression Neuronaler Netze"**, FKT- Fachzeitschrift für Fernsehen, Film und Elektronische Medien, pp. 33-36, September 2021
- S. Wiedemann et al., **"DeepCABAC: A universal compression algorithm for deep neural networks"**, in IEEE Journal of Selected Topics in Signal Processing, doi: 10.1109/JSTSP.2020.2969554.
## License
Please see [LICENSE.txt](./LICENSE.txt) file for the terms of the use of the contents of this repository.
For more information and bug reports, please contact: [nncodec@hhi.fraunhofer.de](mailto:nncodec@hhi.fraunhofer.de)
**Copyright (c) 2019-2025, Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. & The NNCodec Authors.**
**All rights reserved.**
| text/markdown | Paul Haase, Daniel Becking | paul.haase@hhi.fraunhofer.de, daniel.becking@hhi.fraunhofer.de | null | null | BSD | null | [] | [] | https://hhi.fraunhofer.de | null | >=3.8 | [] | [] | [] | [
"Click>=7.0",
"scikit-learn>=0.23.1",
"tqdm>=4.32.2",
"h5py>=3.1.0",
"pybind11>=2.6.2",
"pandas>=1.0.5",
"opencv-python>=4.4.0.46",
"torch>=2",
"torchvision>=0.15",
"ptflops>=0.7",
"torchmetrics>=0.11.4",
"numpy<2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.9.19 | 2026-02-20T15:53:05.780546 | nncodec-2.1.0.tar.gz | 129,644 | 4b/2f/cac2aec4904c2f639ada614376203d4cd3ad2ccbfaf8f5524fa70009080b/nncodec-2.1.0.tar.gz | source | sdist | null | false | c047cad44c0bf8f9d07d6a277e65ad2a | 35ee9e66ae5ea39b437481400fa584936c3a48123f6bb32758bbcefa9d85eeff | 4b2fcac2aec4904c2f639ada614376203d4cd3ad2ccbfaf8f5524fa70009080b | null | [] | 198 |
2.4 | DMU | 0.4.2 | This package is used to store commonly used functions on pip for the sake of easy pulling into other code. Features utilities for IV characterisation, Lumerical data analysis and geometry instantiation, SEM stitching and other tools. | # ATTENTION
This is a code-storage repository for small functions, big functions, GUIs and other things I have developed for everyday electrical characterisation and simulation work. You'll find things from figure handling, colourmaps and named colours (combining tab20b and tab20c, for instance), to data extractors, data save-tools, json serialisation, pickle jars, ideality GUI plotters, you name it!
Code from my other Lumerical-specific package, called LDI, has been included in its entirety; slightly more extended documentation can be found [here](https://github.com/DeltaMod/LDI), including some setup examples of their intended use, but you can consider the vast majority of those functions outdated or defunct.
Most functions should be sufficiently commented to understand what they do, and a lot of functions are redundant.
# Installation and Updating
install ```pip install DMU```
update ```pip install DMU --upgrade```
# Package Structure
DMU contains several modules, each of which serves a specific function (with some overlap):
* utils.py
* Data importers (matloader, for instance, can pair .mat files with json files and collapse contents to a dict structure)
* Automatic data plotting for Keithley and Nanonis tools
* Lumerical functions
* utils_utils.py
* Contains additional miscellaneous utility functions. Mostly unused.
* plot_utils.py
* Generally contains tools that assist in figure plotting, wherein some examples include:
* adjust_ticks - increase or reduce the number of ticks along an axis
* align_axis_zeros - provide a list of twinx axes in a plot, and it will force the zero to be aligned in all of them. Useful for visualisation.
* And many, many other functions that assist in plotting.
* sem_tools.py
* Specialised package to import and modify SEM TIFF images. The main purpose of most of it being to:
* Create uniform scalebars over many different images cropped to different sizes
* Create nice looking inserts into images, highlighting both the area of the larger overview image and boxing in the insert at a specified location. Examples will be provided at a later date.
* Perform rudimentary post-processing on the images, like expanding the dynamic range, modifying the contrast, sharpness and brightness.
## DMIdeal
This is a GUI whose development was mainly focused on folder-scrubbing and sorting device data from our excessively expansive electrical characterisation data for the express purpose of fitting, and cataloguing ideality data. It is compatible with the .xls data output from the Clarius+ software provided with a setup using a Cascade 11000B probe station with a Keithley 4200A-SCS parameter analyser.

### Folder structure requirements for this tool
Unfortunately, much of this project is hard-coded to support our device architecture and measurement structure. I hope to be able to rewrite this to be more general in the future, but for now this works for our 2NW/1NW measurements. This means that if you have 2 probe measurements, you can adapt this repo for your own purposes with minor modifications to the retrieval code. Otherwise, the python data importer for this .xls data works on any Clarius+ data file (this is in DMU.utils.Keithley_xls_read(file), and works as a standalone)
**how the data storage is structured**
The script crawls all containing folders of the root directory, and collects any and all .xls files it finds. For each .xls file, it also looks for a log file in the same directory, reading the runID of the saved .xls data and checking whether the same runID has been logged in the logbook. A logbook and measurement data can be found at the bottom of this data structure.
When selecting the data storage folder, we choose the root directory (in this case, it is ExampleData) within which all devices are stored. Then, the structure should be:
DeviceRoot/DeviceName/DeviceName_MaterialName/DeviceName_SubdeviceName/(Data.xls)
(as an example, we have used: DFR1_RAW_DATA\DFR1_GK\DFR1_GK_AIR\2024_10_10_DFR1_GK_TL2\(data is here))
The GUI is heavily reliant on a logbook system (see example data), since we look to categorise device-subdevice-nanowire ID, which can't be collected from the Keithley data alone and must be manually specified. In the future, we could have injected this data into the settings file for each run to avoid needing a logbook, but no matter.
We use a Probe station - Cascade 11000B which has 4 SMUs, and 4 probes. For 2NW measurements, each NW needs two probes, so our logbook reflects that.
### Logbook Example (also see example data)
| RUN No. | Device | SMU1 | SMU4 | SMU2 | SMU3 | Light Room | Light Microscope | V @ I = 1e-6 | Range | Comment |
|---|---|---|---|---|---|---|---|---|---|---|
| 900 | DFR1_GG_BR1 | sweep [-4.5,4.5] | common ground | NA | NA | FALSE | FALSE | | 10uA | |

SMU1/SMU4 form the NW1 pair (p-i-n; detector) and SMU2/SMU3 form the NW2 pair (n-i-p; emitter).
So which of these NEED to be present for the script to work?
- Data you want to keep NEEDS a RUN No. corresponding to the Run No of exported data from clarius.
- Device needs to be filled, otherwise we can't populate the list. The format needs to be: DeviceName_Subdevicename (hyphens work too). If you don't have any "subdevices", make the devicename anything else, like the type of device.
- SMU1/SMU2/SMU3/SMU4 all need a separate entry, the order does not matter, so long as each pair in C-D and E-F correspond to paired probes in an IV
- NW1/NW2 needs to be defined, but for single device measurements you can just ignore E-F and only use the NW1
- The contents of all cells can be empty. Ideally, Light Room and Light Microscope are always FALSE unless it matters to you. Comment can contain ANYTHING and will be stored in the dict file created later, in case you want to retrieve it.
Note that empty rows are OK, and Run No.s that have no corresponding runID in the data are also fine. The text in all fields is processed as strings.
Only rows with a Run No. are processed by the data importer.
The script currently hard-codes "NW1" and "NW2" detection, but as long as you have both present, you can ignore filling data for any other field.
So! Fill the logbook with RUN no. from the data that is saved into the .xls file that clarius exports, and place the log file in the same folder.
The log file needs to share the same name within the first three underscores as the rest of the data, for example:
DFR1_GG_BR1_LOG will be checked for all files DFR1_GG_BR1_6_4Term, DFR1_GG_BR1_5_4Term, DFR1_GG_BR1_4_4Term, DFR1_GG_BR1_2_2Term
But not for DFR1GG_BR1_2_2Term or GG_BR1_2_2Term.
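The "first three underscores" rule above can be sketched as a small matching helper. This is hypothetical code, not the actual DMU implementation — it simply checks that the log file and a data file agree on their first three underscore-separated tokens:

```python
def shares_log_prefix(log_name: str, data_name: str, n_tokens: int = 3) -> bool:
    """True if both filenames agree on their first n underscore-separated tokens."""
    log_tokens = log_name.split("_")[:n_tokens]
    data_tokens = data_name.split("_")[:n_tokens]
    return len(log_tokens) == n_tokens and log_tokens == data_tokens

# Matches the examples given in the text:
assert shares_log_prefix("DFR1_GG_BR1_LOG", "DFR1_GG_BR1_6_4Term")
assert not shares_log_prefix("DFR1_GG_BR1_LOG", "DFR1GG_BR1_2_2Term")
assert not shares_log_prefix("DFR1_GG_BR1_LOG", "GG_BR1_2_2Term")
```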
### How to use the GUI for ideality fitting
* Setting up data storage directories
* First, make sure your user mode is set to ideality.
* Then, select your Data Storage folder - This is where json file versions of your xls data will be stored and loaded from. Your personal settings are saved in Session under your user name (found with os.getlogin()).
* After this, you should select your root data directory as advised above. To test the GUI, simply pick "DeviceRoot" in "Example Data". Then choose "Crawl Data Storage and Merge Dicts" and it will populate the device/subdevice/runID lists.
* Data is saved every time you change subdevice or device, but you can also manually save.
* For fitting the ideality
* In the fitting panel, click "Activate I0 Range" to activate the I0 cursor. Use this to highlight the reverse saturation current regime, which provides a value for the initial guess of $I_0$. The cursor will then automatically swap to the "Activate Ideality Fit range", where another range cursor can be used to highlight the ideal regime of the IV curve.
* "Fit data" can now be presesd to run a scipy.optimize script, plotting and saving a new value for the plotted ideality. This range is very sensitive, so try to modify it until your ideality fit looks good.
* Another implementation uses the fitted ideality and $I_0$ as initial guesses to solve analytically for the series resistance $R_s$. $R_s$ is found both through a linear fit of the series-resistance regime at higher voltages and through a rewritten Shockley diode equation, solving for $R_s$ and $n$: each point on the IV curve is first used to evaluate $R_s$ with the scipy.root minimisation function, with $n$ and $I_0$ set to their initial guesses. This is then repeated for the $n$ formula, choosing the evaluated $n$ in the ideality regime. Finally, all three parameters are evaluated per point to check that the Shockley diode equation converges to the measured current. As evidenced by the red fit in the figure, all values eventually converge.
* Note that the limited voltage range in the ideality measurements causes the linear fit estimate of $R_s$ to always be much larger than the one estimated with the analytical fit.
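The core of the ideality fit can be sketched with a self-contained toy example (synthetic data only — not the actual DMIdeal code, which works on measured Clarius+ data and also solves for $R_s$): generate an IV curve from the Shockley diode equation with known $n$ and $I_0$, then recover both from a linear fit of $\ln(I)$ vs. $V$ in the ideal regime.

```python
import numpy as np

VT = 0.02585  # thermal voltage kT/q at ~300 K, in volts

def diode_current(v, i0, n):
    """Shockley diode equation (series resistance neglected)."""
    return i0 * (np.exp(v / (n * VT)) - 1.0)

# Synthetic IV curve with known parameters
n_true, i0_true = 1.8, 1e-12
v = np.linspace(0.2, 0.6, 200)  # assumed "ideal regime" window
i = diode_current(v, i0_true, n_true)

# Linear fit of ln(I) vs V: slope = 1/(n*VT), intercept = ln(I0)
slope, intercept = np.polyfit(v, np.log(i), 1)
n_fit = 1.0 / (slope * VT)
i0_fit = np.exp(intercept)

print(f"n = {n_fit:.3f}, I0 = {i0_fit:.2e}")
```

On clean synthetic data the recovered values land very close to the true ones; on real IV curves the sensitivity to the chosen fit window is exactly why the GUI exposes an adjustable range cursor.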
# How packages can be imported
```import DMU``` -> DMU.utils, DMU.plot_utils, DMU.utils_utils, DMU.sem_tools etc
How I usually import packages:

```python
from DMU import utils as dm
from DMU import plot_utils as dmp
from DMU import utils_utils as dmuu
from DMU import sem_tools as dms
from DMU import graph_styles as gss
```
| text/markdown | Atli Vidar Már FLodgren | vidar@flodgren.com | null | null | null | hello world example examples | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Development Status :: 2 - Pre-Alpha"
] | [] | https://github.com/DeltaMod/DMU | null | null | [] | [] | [] | [
"h5py",
"natsort",
"matplotlib",
"xlrd",
"scipy",
"numpy",
"mat73",
"docutils>=0.3",
"torch; extra == \"alldeps\"",
"torchvision; extra == \"alldeps\"",
"kornia; extra == \"alldeps\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T15:52:43.032414 | dmu-0.4.2.tar.gz | 84,085 | dc/67/9b23f924dd9ce12536b6323945e6600e030321bad3eb37ac55b1ccbb5149/dmu-0.4.2.tar.gz | source | sdist | null | false | f354aa27d62d156ba59d9020baad4173 | 1f6715bd79e803823c888271f2a7708316d30072f28f6e66de8f30c6f36c05e4 | dc679b23f924dd9ce12536b6323945e6600e030321bad3eb37ac55b1ccbb5149 | null | [
"LICENSE"
] | 0 |
2.1 | aiosonic | 0.31.1 | Async HTTP/WebSocket client | 
[](https://coveralls.io/github/sonic182/aiosonic?branch=master)
[](https://badge.fury.io/py/aiosonic)
[](https://aiosonic.readthedocs.io/en/latest/?badge=latest)
# aiosonic - lightweight Python asyncio HTTP/WebSocket client
A very fast, lightweight Python asyncio HTTP/1.1, HTTP/2, and WebSocket client.
The repository is hosted on [GitHub](https://github.com/sonic182/aiosonic).
For full documentation, please see [aiosonic docs](https://aiosonic.readthedocs.io/en/latest/).
## Features
- Keepalive support and smart pool of connections
- Multipart file uploads
- Handling of chunked responses and requests
- Connection timeouts and automatic decompression
- Automatic redirect following
- Fully type-annotated
- WebSocket support
- HTTP proxy support
- Sessions with cookie persistence
- Elegant key/value cookies
- (Nearly) 100% test coverage
- HTTP/2 (BETA; enabled with a flag)
## Requirements
- Python >= 3.10 (or PyPy 3.11+)
## Installation
```bash
pip install aiosonic
```
## Getting Started
Below is an example demonstrating basic HTTP client usage:
```python
import asyncio
import aiosonic
import json


async def run():
    client = aiosonic.HTTPClient()

    # Sample GET request
    response = await client.get('https://www.google.com/')
    assert response.status_code == 200
    assert 'Google' in (await response.text())

    # POST data as multipart form
    url = "https://postman-echo.com/post"
    posted_data = {'foo': 'bar'}
    response = await client.post(url, data=posted_data)
    assert response.status_code == 200
    data = json.loads(await response.content())
    assert data['form'] == posted_data

    # POST data as JSON
    response = await client.post(url, json=posted_data)
    assert response.status_code == 200
    data = json.loads(await response.content())
    assert data['json'] == posted_data

    # GET request with timeouts
    from aiosonic.timeout import Timeouts
    timeouts = Timeouts(sock_read=10, sock_connect=3)
    response = await client.get('https://www.google.com/', timeouts=timeouts)
    assert response.status_code == 200
    assert 'Google' in (await response.text())

    print('HTTP client success')


if __name__ == '__main__':
    asyncio.run(run())
```
## WebSocket Usage
Below is an example demonstrating how to use aiosonic's WebSocket support:
```python
import asyncio
from aiosonic import WebSocketClient


async def main():
    # Replace with your WebSocket server URL
    ws_url = "ws://localhost:8080"
    async with WebSocketClient() as client:
        async with await client.connect(ws_url) as ws:
            # Send a text message
            await ws.send_text("Hello WebSocket")

            # Receive an echo response
            response = await ws.receive_text()
            print("Received:", response)

            # Send a ping and wait for the pong
            await ws.ping(b"keep-alive")
            pong = await ws.receive_pong()
            print("Pong received:", pong)

            # You can have a "reader" task like this:
            async def ws_reader(conn):
                async for msg in conn:
                    # handle the message...
                    # msg is an instance of the aiosonic.web_socket_client.Message dataclass.
                    pass

            asyncio.create_task(ws_reader(ws))

            # Gracefully close the connection (optional)
            await ws.close(code=1000, reason="Normal closure")


if __name__ == "__main__":
    asyncio.run(main())
```
## API Wrapping
You can easily wrap APIs with `BaseClient` and override its hooks to customize the response handling.
```python
import asyncio
import json

from aiosonic import BaseClient


class GitHubAPI(BaseClient):
    base_url = "https://api.github.com"
    default_headers = {
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
        # "Authorization": "Bearer YOUR_GITHUB_TOKEN",
    }

    async def process_response(self, response):
        body = await response.text()
        return json.loads(body)

    async def users(self, username: str, **kwargs):
        return await self.get(f"/users/{username}", **kwargs)

    async def update_repo(self, owner: str, repo: str, description: str):
        data = {
            "name": repo,
            "description": description,
        }
        return await self.put(f"/repos/{owner}/{repo}", json=data)


async def main():
    # You can pass an existing aiosonic.HTTPClient() instance in the constructor.
    # If not provided, BaseClient will create a new instance automatically.
    github = GitHubAPI()

    # Call the custom 'users' method to get data for user "sonic182"
    user_data = await github.users("sonic182")
    print(json.dumps(user_data, indent=2))


if __name__ == '__main__':
    asyncio.run(main())
```
Note: You may want to make your client implementations singletons in order to reuse the internal HTTPClient instance and its pool of connections (for efficient usage of the client). For example:
```python
class SingletonMixin:
    _instances = {}

    def __new__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]


class GitHubAPI(BaseClient, SingletonMixin):
    base_url = "https://api.github.com"
    # ... the rest of the code


# Every instantiation now returns the first instance created
gh = GitHubAPI()
gh2 = GitHubAPI()
assert gh is gh2
```
## Benchmarks
A simple performance benchmark script is included in the `scripts` folder. For example:
```bash
python scripts/performance.py
```
Example output:
```json
{
"aiohttp": "5000 requests in 558.31 ms",
"aiosonic": "5000 requests in 563.95 ms",
"requests": "5000 requests in 10306.90 ms",
"aiosonic_cyclic": "5000 requests in 642.15 ms",
"httpx": "5000 requests in 7920.04 ms"
}
```
aiosonic is 1457.99% faster than requests
aiosonic is -1.38% faster than aiosonic cyclic
> **Note:**
> These benchmarks are basic and machine-dependent. They are intended as a rough comparison.
## [TODO's](https://github.com/sonic182/aiosonic/projects/1)
- **HTTP/2:**
- [ ] Stable HTTP/2 release
- Better documentation
- International domains and URLs (IDNA + cache)
- Basic/Digest authentication
## Development
Install development dependencies with Poetry:
```bash
poetry install
```
It is recommended to install Poetry in a separate virtual environment (via apt, pacman, etc.) rather than in your development environment. You can configure Poetry to use an in-project virtual environment by running:
```bash
poetry config virtualenvs.in-project true
```
### Running Tests
```bash
poetry run pytest
```
## Contributing
1. Fork the repository.
2. Create a branch named `feature/your_feature`.
3. Commit your changes, push, and submit a pull request.
Thanks for contributing!
## Contributors
<a href="https://github.com/sonic182/aiosonic/graphs/contributors">
<img src="https://contributors-img.web.app/image?repo=sonic182/aiosonic" alt="Contributors" />
</a>
| text/markdown | Johanderson Mogollon | johander1822@gmail.com | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Networking"
] | [] | https://aiosonic.readthedocs.io/en/latest/ | null | >=3.10.0 | [] | [] | [] | [
"charset-normalizer<4.0.0,>=2.0.0",
"h2<5.0.0,>=4.1.0",
"onecache<0.9.0,>=0.8.1"
] | [] | [] | [] | [
"Repository, https://github.com/sonic182/aiosonic"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:52:39.591881 | aiosonic-0.31.1.tar.gz | 41,430 | 0c/79/d42eb06c510d14ef485f0dff20955a5120e80196905261b98ce755372747/aiosonic-0.31.1.tar.gz | source | sdist | null | false | e72a2183186a0bd296e5582dacad892e | 17678c49f4b533c3b1623c5b8ac635793a3e17ecd4af2c8d389979c929c70ed3 | 0c79d42eb06c510d14ef485f0dff20955a5120e80196905261b98ce755372747 | null | [] | 4,614 |
2.4 | temporal-cortex-toon | 0.2.1 | TOON encoder/decoder and Truth Engine (temporal resolution, timezone conversion, RRULE expansion, availability merging) for AI calendar agents | # temporal-cortex-toon (Python)
Python bindings for the TOON format encoder/decoder and truth-engine, built with PyO3 and maturin.
## Installation
```bash
pip install temporal-cortex-toon
```
## Usage
```python
from temporal_cortex_toon import encode, decode, filter_and_encode, expand_rrule
# JSON → TOON
toon = encode('{"name":"Alice","scores":[95,87,92]}')
print(toon)
# name: Alice
# scores[3]: 95,87,92
# TOON → JSON (perfect roundtrip)
json_str = decode(toon)
print(json_str)
# {"name":"Alice","scores":[95,87,92]}
# Semantic filtering: strip noisy fields before encoding
toon = filter_and_encode(
    '{"name":"Event","etag":"abc","kind":"calendar#event"}',
    ["etag", "kind"],
)
print(toon)
# name: Event
# RRULE expansion
import json
events_json = expand_rrule(
    "FREQ=WEEKLY;BYDAY=TU,TH",  # RFC 5545 RRULE
    "2026-02-17T14:00:00",      # start date (local time)
    60,                         # duration in minutes
    "America/Los_Angeles",      # IANA timezone
    "2026-06-30T23:59:59",      # expand until (optional)
    None,                       # max count (optional)
)
events = json.loads(events_json)
for e in events:
    print(f"{e['start']} → {e['end']}")
```
## API
### `encode(json: str) -> str`
Converts a valid JSON string into TOON format. Raises `ValueError` if the input is not valid JSON.
### `decode(toon: str) -> str`
Converts a TOON string back into compact JSON. Raises `ValueError` if the input is not valid TOON.
### `filter_and_encode(json: str, patterns: list[str]) -> str`
Strips fields matching the given patterns from JSON, then encodes to TOON. Patterns support:
- `"etag"` — strip the top-level field
- `"items.etag"` — strip nested field via dot-path
- `"*.etag"` — wildcard: strip field at any depth
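As an illustration of these pattern semantics, here is a plain-Python sketch operating on decoded dicts (not the library's underlying Rust implementation; `strip_fields` is a hypothetical helper):

```python
import json

def strip_fields(obj, patterns, path=""):
    """Recursively drop dict keys whose dot-path matches a pattern.

    Supports top-level names ("etag"), dot-paths ("items.etag"),
    and "*.<name>" wildcards that match the key at any depth.
    """
    if isinstance(obj, dict):
        out = {}
        for key, value in obj.items():
            full = f"{path}.{key}" if path else key
            if full in patterns or f"*.{key}" in patterns:
                continue  # field matches a pattern: drop it
            out[key] = strip_fields(value, patterns, full)
        return out
    if isinstance(obj, list):
        # list elements share the list's path, as in "items.etag"
        return [strip_fields(item, patterns, path) for item in obj]
    return obj

data = json.loads('{"name":"Event","etag":"a","items":[{"id":1,"etag":"b"}]}')
print(strip_fields(data, {"etag", "items.etag"}))
# {'name': 'Event', 'items': [{'id': 1}]}
```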
### `expand_rrule(rrule, dtstart, duration_minutes, timezone, until=None, max_count=None) -> str`
Expands an RFC 5545 RRULE into concrete event instances. Returns a JSON string containing an array of `{"start": "...", "end": "..."}` objects with UTC datetimes.
## Build from Source
```bash
# From the crate directory:
cd crates/temporal-cortex-toon-python
# Create a virtualenv and install
python3 -m venv .venv
source .venv/bin/activate
pip install maturin pytest
# Build and install the native extension
maturin develop
# Run tests
pytest tests/ -v
```
## Testing
26 pytest tests across 5 suites:
- **9 encode tests** — simple objects, nested, arrays, empty, null, booleans, strings
- **3 decode tests** — simple, nested, valid JSON output
- **3 roundtrip tests** — simple, nested, type preservation
- **4 filter tests** — field removal, empty patterns, wildcards, error handling
- **7 RRULE tests** — daily count, start/end fields, until, max count, weekly, error handling
```bash
cd crates/temporal-cortex-toon-python
source .venv/bin/activate
pytest tests/ -v
```
## Architecture
```
src/lib.rs ← PyO3 #[pyfunction] wrappers around temporal-cortex-toon and truth-engine
pyproject.toml ← maturin build configuration
tests/ ← pytest test suite
```
The Python module (`temporal_cortex_toon`) is a thin wrapper that:
1. Accepts Python strings
2. Calls the underlying Rust functions (temporal-cortex-toon encode/decode, truth-engine expand)
3. Maps Rust errors to Python `ValueError` exceptions
## License
MIT OR Apache-2.0
| text/markdown; charset=UTF-8; variant=GFM | Billy Lui | null | null | null | null | toon, json, llm, calendar, rrule, icalendar, timezone, datetime | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Topic :: Software Development :: Libraries",
"Topic :: Office/Business :: Scheduling"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Tracker, https://github.com/billylui/temporal-cortex-core/issues",
"Homepage, https://github.com/billylui/temporal-cortex-core",
"Repository, https://github.com/billylui/temporal-cortex-core"
] | maturin/1.12.3 | 2026-02-20T15:52:32.433682 | temporal_cortex_toon-0.2.1.tar.gz | 90,281 | 37/5d/fa455e4dea6749e80d6b0fdaf0b2d47ce3a72910fc03f3504461c29eed71/temporal_cortex_toon-0.2.1.tar.gz | source | sdist | null | false | 783767dc692d8d1ce947b03ac4780989 | 1b8a9ff62d7a30ff2ebd0eeac6ef3030af17b57b41d1f91f36553cfce2835f13 | 375dfa455e4dea6749e80d6b0fdaf0b2d47ce3a72910fc03f3504461c29eed71 | null | [] | 204 |
2.4 | graph-pes | 1.0.0 | Potential Energy Surfaces on Graphs | <div align="center">
<a href="https://jla-gardner.github.io/graph-pes/">
<img src="docs/source/_static/logo-text.svg" width="90%"/>
</a>
`graph-pes` is a framework built to accelerate the development of machine-learned potential energy surface (PES) models that act on graph representations of atomic structures.
Links: [Google Colab Quickstart](https://colab.research.google.com/github/jla-gardner/graph-pes/blob/main/docs/source/quickstart/quickstart.ipynb) - [Documentation](https://jla-gardner.github.io/graph-pes/) - [PyPI](https://pypi.org/project/graph-pes/)
[](https://pypi.org/project/graph-pes/)
[](https://github.com/conda-forge/graph-pes-feedstock)
[](https://github.com/jla-gardner/graph-pes/actions/workflows/tests.yaml)
[](https://codecov.io/gh/jla-gardner/graph-pes)
[]()
</div>
## Statement of need
`graph-pes` is a toolkit for building, training, and deploying machine-learned potential energy surface (PES) models that act on graph representations of atomic structures.
As a researcher who wants to train and use existing MLIPs, you can use the `graph-pes-train` command to train many different architectures from scratch on your own data, or fine-tune several existing foundation models. Once trained, you can use our drivers to run optimisations, single-point energy calculations, and molecular dynamics simulations with your model using a variety of existing tools (`LAMMPS`, `ASE`, and `torch-sim`).
As a researcher wanting to work on MLIP methodology, `graph-pes` makes implementing new architectures easy, allows you to experiment with various different training strategies, and provides a clean, well-documented API for building things yourself.
## Features
- Experiment with new model architectures by inheriting from our `GraphPESModel` [base class](https://jla-gardner.github.io/graph-pes/models/root.html).
- [Train your own](https://jla-gardner.github.io/graph-pes/quickstart/implement-a-model.html) or existing model architectures (e.g., [SchNet](https://jla-gardner.github.io/graph-pes/models/many-body/schnet.html), [NequIP](https://jla-gardner.github.io/graph-pes/models/many-body/nequip.html), [PaiNN](https://jla-gardner.github.io/graph-pes/models/many-body/pinn.html), [MACE](https://jla-gardner.github.io/graph-pes/models/many-body/mace.html), [TensorNet](https://jla-gardner.github.io/graph-pes/models/many-body/tensornet.html), [OrB](https://jla-gardner.github.io/graph-pes/models/many-body/orb.html) etc.).
- Use and fine-tune foundation models via a unified interface: [MACE-MP0](https://jla-gardner.github.io/graph-pes/interfaces/mace.html), [MACE-OFF](https://jla-gardner.github.io/graph-pes/interfaces/mace.html), [MatterSim](https://jla-gardner.github.io/graph-pes/interfaces/mattersim.html), [GO-MACE](https://jla-gardner.github.io/graph-pes/interfaces/mace.html), [Egret](https://jla-gardner.github.io/graph-pes/interfaces/mace.html#graph_pes.interfaces.egret) and [Orb v2/3](https://jla-gardner.github.io/graph-pes/interfaces/orb.html).
- Easily configure distributed training, learning rate scheduling, weights and biases logging, and other features using our `graph-pes-train` [command line interface](https://jla-gardner.github.io/graph-pes/cli/graph-pes-train/root.html).
- Use our data-loading pipeline within your [own training loop](https://jla-gardner.github.io/graph-pes/quickstart/custom-training-loop.html).
- Run molecular dynamics simulations with any `GraphPESModel` using [torch-sim](https://jla-gardner.github.io/graph-pes/tools/torch-sim.html), [LAMMPS](https://jla-gardner.github.io/graph-pes/tools/lammps.html) or [ASE](https://jla-gardner.github.io/graph-pes/tools/ase.html).
## Quickstart
```bash
pip install -q graph-pes
wget https://tinyurl.com/graph-pes-minimal-config -O config.yaml
graph-pes-train config.yaml
```
Alternatively, for a 0-install quickstart experience, please see [this Google Colab](https://colab.research.google.com/github/jla-gardner/graph-pes/blob/main/docs/source/quickstart/quickstart.ipynb), which you can also find in our [documentation](https://jla-gardner.github.io/graph-pes/quickstart/quickstart.html).
## Contributing
Contributions are welcome! If you find any issues or have suggestions for new features, please open an issue or submit a pull request on the [GitHub repository](https://github.com/jla-gardner/graph-pes).
We use `uv` to manage dependencies and run commands. Install it [here](https://docs.astral.sh/uv/), and sync the dependencies using `uv sync --all-extras`.
Once you have made your changes, you can:
- run tests locally: `uv run pytest tests/`
- build the documentation: `uv run sphinx-build docs/source docs/build`
Please see our [CONTRIBUTING.md](CONTRIBUTING.md) file for more details.
## Citing `graph-pes`
We kindly ask that you cite `graph-pes` in your work if it has been useful to you.
A manuscript is currently in preparation - in the meantime, please cite the Zenodo DOI found in the [CITATION.cff](CITATION.cff) file.
| text/markdown | null | John Gardner <gardner.john97@gmail.com> | null | null | MIT License
Copyright (c) 2023-25 John Gardner
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"torch",
"pytorch-lightning",
"ase",
"numpy",
"rich",
"dacite",
"e3nn==0.4.4",
"scikit-learn",
"locache>=4.0.2",
"load-atoms>=0.3.9",
"wandb",
"data2objects>=0.1.0",
"vesin>=0.3.2",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"sphinx; extra == \"docs\"",
"furo; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"sphinxext-opengraph; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"sphinx-design; extra == \"docs\"",
"build; extra == \"publish\"",
"twine; extra == \"publish\""
] | [] | [] | [] | [
"Homepage, https://github.com/jla-gardner/graph-pes"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:52:15.294621 | graph_pes-1.0.0.tar.gz | 262,321 | cf/08/5741326d32c27802202f9eb17739ea79d429c95762386f38ea5551e324ba/graph_pes-1.0.0.tar.gz | source | sdist | null | false | eeeb364d9381465c9492633d3c908426 | b9a1f87b7260c2a852d91134acf1ac1f94676ad030f7813bd3d0c7954b7a2b4b | cf085741326d32c27802202f9eb17739ea79d429c95762386f38ea5551e324ba | null | [
"LICENSE"
] | 305 |
2.4 | rectanglepy | 1.3.0 | Hierarchical deconvolution of bulk transcriptomics | # Rectangle
[![Tests][badge-tests]][link-tests]
[![Documentation][badge-docs]][link-docs]
[badge-tests]: https://img.shields.io/github/actions/workflow/status/ComputationalBiomedicineGroup/Rectangle/build.yaml?branch=main
[link-tests]: https://github.com/ComputationalBiomedicineGroup/Rectangle/actions/workflows/build.yaml
[badge-docs]: https://img.shields.io/readthedocs/rectanglepy
Rectangle is an open-source Python package for single-cell-informed cell-type deconvolution of bulk and spatial transcriptomic data, which is part of the [scverse ecosystem](https://scverse.org/packages/).
Rectangle presents a novel approach to second-generation deconvolution, characterized by hierarchical signature building for fine-grained cell-type deconvolution, estimation and correction of unknown cellular content, and efficient handling of large-scale single-cell data during signature matrix computation.
Rectangle was developed to overcome the current challenges in cell-type deconvolution, providing a robust and accurate methodology while ensuring a low computational profile.
## Getting started
Please refer to the [documentation][link-docs]. In particular, the
- [Tutorials][link-docs/tutorials] for a step-by-step guide on how to use Rectangle, and the
- [API documentation][link-api].
## Installation
You need to have Python 3.10 or higher installed on your system.
How to install Rectangle:
Install the latest release of `Rectangle` from `PyPI` <https://pypi.org/project/rectanglepy/>:
```bash
pip install rectanglepy
```
## Release notes
See the [changelog][changelog].
## Contact
If you found a bug, please use the [issue tracker][issue-tracker].
## Citation
> If you use Rectangle in your project, please cite: (TBA)
[scverse-discourse]: https://discourse.scverse.org/
[issue-tracker]: https://github.com/ComputationalBiomedicineGroup/Rectangle/issues
[changelog]: https://rectanglepy.readthedocs.io/changelog.html
[link-docs]: https://Rectanglepy.readthedocs.io
[link-api]: https://rectanglepy.readthedocs.io/api.html
[link-docs/tutorials]: https://rectanglepy.readthedocs.io/notebooks/example.html
| text/markdown | Bernhard Eder | null | null | Bernhard Eder <Bernhard.Eder@student.uibk.ac.at> | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>. | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"anndata<0.10.9,>=0.8.0",
"loguru",
"numpy<2.0.0,>=1.0.0",
"osqp>=1.0.5",
"pandas<3.0.0,>=2.0.0",
"pydeseq2==0.4.11",
"scipy==1.13.0",
"statsmodels>=0.14.1",
"bump2version; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"twine>=4.0.2; extra == \"dev\"",
"furo; extra == \"doc\"",
"ipykernel; extra == \"doc\"",
"ipython; extra == \"doc\"",
"myst-nb; extra == \"doc\"",
"sphinx; extra == \"doc\"",
"sphinx-autodoc-typehints; extra == \"doc\"",
"sphinx-book-theme; extra == \"doc\"",
"sphinx-copybutton; extra == \"doc\"",
"sphinxcontrib-bibtex; extra == \"doc\"",
"pytest; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://rectanglepy.readthedocs.io/",
"Source, https://github.com/ComputationalBiomedicineGroup/Rectangle",
"Home-page, https://github.com/ComputationalBiomedicineGroup/Rectangle"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T15:52:02.655025 | rectanglepy-1.3.0.tar.gz | 3,272,009 | 41/0c/31a5542b9a5186e240421bb9902624305eba621fb1847b983966ed5c0a02/rectanglepy-1.3.0.tar.gz | source | sdist | null | false | 4f81a0f2cee0c63fb7736acaea3a1c05 | a054b89f00631d0ac7f308ea6869a6e25e423edc8b594e2abbfc2bcc1355bbfc | 410c31a5542b9a5186e240421bb9902624305eba621fb1847b983966ed5c0a02 | null | [
"LICENSE"
] | 214 |
2.4 | vvenv | 0.1.2 | Auto-tracks pip installs/uninstalls in requirements.txt — like package.json for Python | # venvy 🐍
> **Auto-tracks `pip install` and `pip uninstall` in `requirements.txt` — like `package.json` for Python.**
Once set up, you just use pip normally. venvy silently keeps your `requirements.txt` in sync automatically.
---
## How it works
venvy injects a small hook (`venvy_hook.py`) into your venv's `site-packages`. This hook patches pip's internal `InstallCommand.run` and `UninstallCommand.run` — so every successful `pip install` or `pip uninstall` automatically updates `requirements.txt`. No wrappers, no aliases, no shell setup.
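To make the patching idea concrete, here is a minimal sketch of the wrapping pattern described above, demonstrated on a stand-in class rather than pip's real `InstallCommand` (all names here are illustrative, not venvy's actual code):

```python
# Stand-in for pip's internal install command; the real hook would patch
# pip._internal's command classes instead.
class FakeInstallCommand:
    def run(self, options, args):
        # pretend the install succeeded and return pip's success code (0)
        return 0

tracked = []  # packages recorded by the "hook"

def install_hook(original_run):
    def patched_run(self, options, args):
        status = original_run(self, options, args)
        if status == 0:  # only record successful installs
            tracked.extend(args)
        return status
    return patched_run

# Patch once, the way venvy_hook.py wraps InstallCommand.run
FakeInstallCommand.run = install_hook(FakeInstallCommand.run)

FakeInstallCommand().run(options=None, args=["flask"])
print(tracked)  # -> ['flask']
```

The uninstall side works the same way, wrapping the uninstall command's `run` and removing entries instead of adding them.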
---
## Installation
```bash
pip install venvy # global install recommended
# or
pipx install venvy
```
---
## Quick start
### 1. Create a venv (hook included automatically)
```bash
cd my-project
venvy create venv
```
Asks two questions:
```
[1/2] Install ipykernel? [y/N]: n
[2/2] Requirements file name [requirements.txt]:
```
Then: creates `.venv/`, upgrades pip, injects the tracking hook.
### 2. Activate your venv normally
```bash
# Windows CMD
.venv\Scripts\activate.bat
# Windows PowerShell
.venv\Scripts\Activate.ps1
# macOS / Linux
source .venv/bin/activate
```
### 3. Just use pip — requirements.txt updates automatically
```bash
pip install flask
# Output:
# Successfully installed flask-3.1.3 ...
# [venvy] + flask==3.1.3 → requirements.txt ← automatic!
pip install requests httpx
# [venvy] + requests==2.31.0 → requirements.txt
# [venvy] + httpx==0.27.0 → requirements.txt
pip uninstall flask -y
# Successfully uninstalled flask-3.1.3
# [venvy] - flask → requirements.txt ← automatic!
```
No extra commands. No wrappers. Just pip.
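The `[venvy] + flask==3.1.3 → requirements.txt` lines above imply a small sync step behind the scenes. A hedged sketch of what such an update might look like (illustrative only, not venvy's implementation):

```python
import tempfile
from pathlib import Path
from typing import Optional

def update_requirements(path: Path, name: str, version: Optional[str]) -> None:
    """Pin `name==version` in the file, or drop `name` when version is None."""
    lines = path.read_text().splitlines() if path.exists() else []
    # replace any existing pin for this package (case-insensitive, like pip)
    lines = [l for l in lines if l.split("==")[0].strip().lower() != name.lower()]
    if version is not None:
        lines.append(f"{name}=={version}")
    path.write_text("\n".join(sorted(lines)) + "\n")

req = Path(tempfile.mkdtemp()) / "requirements.txt"
update_requirements(req, "flask", "3.1.3")   # after: pip install flask
update_requirements(req, "httpx", "0.27.0")  # after: pip install httpx
update_requirements(req, "flask", None)      # after: pip uninstall flask -y
print(req.read_text())  # -> httpx==0.27.0
```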
---
## Using an existing venv
If you already have a venv (not created by venvy):
```bash
source .venv/bin/activate # activate it first (or don't — venvy finds it)
venvy install-hook # inject the tracking hook
```
Or specify the path:
```bash
venvy install-hook .venv
venvy install-hook my_env
```
---
## All commands
```
venvy create venv Create venv + inject pip hook
--name -n Venv directory name (default: .venv)
--requirements -r Requirements file name (prompted)
--ipykernel Install ipykernel for Jupyter
--yes -y Skip all prompts
venvy install-hook [path] Inject hook into existing venv
venvy pip install <pkg> Install + update requirements (manual fallback)
--upgrade -U
venvy pip uninstall <pkg> Uninstall + update requirements (manual fallback)
--yes -y
venvy status Show venv info + hook status + requirements
venvy list List packages in requirements.txt
venvy sync Install all packages from requirements.txt
venvy --version
```
---
## Project structure
```
my-project/
├── .venv/
│ └── lib/pythonX.Y/site-packages/
│ ├── venvy_hook.py ← the hook (auto-injected)
│ └── venvy_hook.pth ← loads hook on every Python start
├── .venvy/
│ └── config.json ← venvy project config
├── requirements.txt ← auto-managed ✓
└── your_code.py
```
`.venvy/config.json`:
```json
{
"venv_name": ".venv",
"requirements_file": "requirements.txt",
"install_ipykernel": false
}
```
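The `venvy_hook.pth` entry in the tree above relies on standard CPython behavior: at startup, the `site` module executes any line in a `.pth` file that begins with `import`. A sketch of what the injection step might write (file names follow the tree above; the exact file contents are an assumption):

```python
import tempfile
from pathlib import Path

def inject_hook(site_packages: Path) -> None:
    """Drop the hook module plus a .pth loader into site-packages."""
    (site_packages / "venvy_hook.py").write_text(
        "# patches pip's install/uninstall commands when pip runs\n"
    )
    # site.py executes `import` lines found in .pth files at startup,
    # so this single line loads the hook in every Python process of the venv
    (site_packages / "venvy_hook.pth").write_text("import venvy_hook\n")

sp = Path(tempfile.mkdtemp())  # stand-in for .venv/lib/pythonX.Y/site-packages
inject_hook(sp)
print((sp / "venvy_hook.pth").read_text())  # -> import venvy_hook
```

Because the `.pth` mechanism is interpreter-level, no shell aliases or pip wrappers are needed, which matches the "no wrappers, no aliases" claim above.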
---
## Restoring a project
```bash
git clone https://github.com/user/project
cd project
venvy create venv # installs from existing requirements.txt automatically
# OR
python -m venv .venv
source .venv/bin/activate
venvy install-hook
venvy sync
```
---
## Comparison with npm
| npm | venvy |
|-----|-------|
| `npm init` | `venvy create venv` |
| `npm install pkg` | `pip install pkg` (auto-tracked after venvy setup) |
| `npm uninstall pkg` | `pip uninstall pkg` (auto-tracked) |
| `npm install` | `venvy sync` |
| `package.json` | `requirements.txt` |
| `node_modules/` | `.venv/` |
---
## License
MIT
| text/markdown | null | null | null | null | MIT | venv, virtualenv, pip, requirements, cli | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"typer>=0.9.0",
"rich>=13.0.0",
"watchdog>=3.0.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.6 | 2026-02-20T15:52:00.502347 | vvenv-0.1.2.tar.gz | 28,605 | 12/68/bc44a40933b24cfbac80165348c1283656a791243ab09433b348144092cb/vvenv-0.1.2.tar.gz | source | sdist | null | false | fe2f9c3b2cec4b4c75adef840b723462 | 4bd45f1d9b002bf7cd1691cf9a7b3919254921f91848d734ed39f75aafdfb06b | 1268bc44a40933b24cfbac80165348c1283656a791243ab09433b348144092cb | null | [
"LICENSE"
] | 206 |
2.4 | optimagic | 0.5.3 | Tools to solve difficult numerical optimization problems. | # optimagic

[](https://pypi.org/project/optimagic)
[](https://anaconda.org/conda-forge/optimagic)
[](https://anaconda.org/conda-forge/optimagic)
[](https://pypi.org/project/optimagic)
[](https://optimagic.readthedocs.io/en/latest)
[](https://github.com/optimagic-dev/optimagic/actions?query=branch%3Amain)
[](https://codecov.io/gh/optimagic-dev/optimagic)
[](https://results.pre-commit.ci/latest/github/optimagic-dev/optimagic/main)
[](https://github.com/astral-sh/ruff)
[](https://pepy.tech/project/optimagic)
[](https://numfocus.org/sponsored-projects/affiliated-projects)
[](https://x.com/optimagic)
## Introduction
*optimagic* is a Python package for numerical optimization. It is a unified interface to
optimizers from SciPy, NlOpt and many other Python packages.
*optimagic*'s `minimize` function works just like SciPy's, so you don't have to adjust
your code. You simply get more optimizers for free. On top, you get powerful diagnostic
tools, parallel numerical derivatives and more.
*optimagic* was formerly called *estimagic*, because it also provides functionality to
perform statistical inference on estimated parameters. *estimagic* is now a subpackage
of *optimagic*.
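To make the SciPy-like interface concrete, here is a minimal sketch. The call shape (`fun`, `params`, `algorithm`) follows this README's description of `minimize`, but the exact signature and the `"scipy_lbfgsb"` algorithm name are assumptions; the toy sphere objective is purely illustrative.

```python
def sphere(x):
    """Toy objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)


try:
    import optimagic as om

    # Assumed call shape, mirroring SciPy's minimize as described above
    res = om.minimize(fun=sphere, params=[1.0, 2.0, 3.0], algorithm="scipy_lbfgsb")
    print(res.params)  # should be close to [0, 0, 0]
except ImportError:
    # optimagic not installed; the toy objective still works standalone
    print(sphere([1.0, 2.0, 3.0]))
```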
## Documentation
The documentation is hosted at https://optimagic.readthedocs.io
## Installation
The package can be installed via pip or conda. To do so, type the following commands in
a terminal:
```bash
pip install optimagic
```
or
```bash
$ conda config --add channels conda-forge
$ conda install optimagic
```
The first line adds conda-forge to your conda channels. This is necessary for conda to
find all dependencies of optimagic. The second line installs optimagic and its
dependencies.
## Installing optional dependencies
Only `scipy` is a mandatory dependency of optimagic. Other algorithms become available
if you install more packages. We make this optional because most of the time you will
use at least one additional package, but only very rarely will you need all of them.
For an overview of all optimizers and the packages you need to install to enable them,
see the list of algorithms in the [documentation](https://optimagic.readthedocs.io).
To enable all algorithms at once, do the following:
```bash
conda install nlopt
pip install Py-BOBYQA
pip install DFO-LS
conda install petsc4py  # not available on Windows
conda install cyipopt
conda install pygmo
pip install "fides>=0.7.4"
```
## Citation
If you use optimagic for your research, please do not forget to cite it.
```
@Unpublished{Gabler2024,
Title = {optimagic: A library for nonlinear optimization},
Author = {Janos Gabler},
Year = {2022},
Url = {https://github.com/optimagic-dev/optimagic}
}
```
## Acknowledgements
We thank all institutions that have funded or supported optimagic (formerly estimagic):
<img src="docs/source/_static/images/aai-institute-logo.svg" width="185">
<img src="docs/source/_static/images/numfocus_logo.png" width="200">
<img src="docs/source/_static/images/tra_logo.png" width="240">
<img src="docs/source/_static/images/hoover_logo.png" width="192">
<img src="docs/source/_static/images/transferlab-logo.svg" width="400">
| text/markdown | null | Janos Gabler <janos.gabler@gmail.com> | null | Janos Gabler <janos.gabler@gmail.com>, Tim Mensinger <mensingertim@gmail.com> | MIT | derivative free optimization, estimation, extremum estimation, finite differences, global optimization, inference, maximum likelihood, method of simulated moments, nonlinear optimization, numerical differentiation, optimization, parallel optimization, statistics | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"annotated-types",
"cloudpickle",
"joblib",
"numpy",
"pandas",
"plotly",
"pybaum>=0.1.2",
"scipy>=1.2.1",
"sqlalchemy>=1.3",
"typing-extensions"
] | [] | [] | [] | [
"Repository, https://github.com/optimagic-dev/optimagic",
"Github, https://github.com/optimagic-dev/optimagic",
"Tracker, https://github.com/optimagic-dev/optimagic/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:51:38.430184 | optimagic-0.5.3.tar.gz | 404,890 | e6/8b/c73021109ac254465c973453f926e8204fefdccdfc8c094a9d642ad9423e/optimagic-0.5.3.tar.gz | source | sdist | null | false | 6a7f4628a9dc04ee85554d60db4642db | 1489f51843efb43897c6a281f4ccc2e816d0c00ee6288a58f1039348bf30d07d | e68bc73021109ac254465c973453f926e8204fefdccdfc8c094a9d642ad9423e | null | [
"LICENSE"
] | 361 |
2.4 | pytest-codingagents | 0.2.4 | Pytest plugin for testing real coding agents via their SDK | # pytest-codingagents
**Test-driven prompt engineering for GitHub Copilot.**
Everyone copies instruction files from blog posts, adds "you are a senior engineer" to agent configs, and includes skills found on Reddit. But does any of it work? Are your instructions making your agent better — or just longer?
**You don't know, because you're not testing it.**
pytest-codingagents gives you a complete **test→optimize→test loop** for GitHub Copilot configurations:
1. **Write a test** — define what the agent *should* do
2. **Run it** — see it fail (or pass)
3. **Optimize** — call `optimize_instruction()` to get a concrete suggestion
4. **A/B confirm** — use `ab_run` to prove the change actually helps
5. **Ship it** — you now have evidence, not vibes
Currently supports **GitHub Copilot** via [copilot-sdk](https://www.npmjs.com/package/github-copilot-sdk) with **IDE personas** for VS Code, Claude Code, and Copilot CLI environments.
```python
from pytest_codingagents import CopilotAgent, optimize_instruction
import pytest
async def test_docstring_instruction_works(ab_run):
"""Prove the docstring instruction actually changes output, and get a fix if it doesn't."""
baseline = CopilotAgent(instructions="Write Python code.")
treatment = CopilotAgent(
instructions="Write Python code. Add Google-style docstrings to every function."
)
b, t = await ab_run(baseline, treatment, "Create math.py with add(a, b) and subtract(a, b).")
assert b.success and t.success
if '"""' not in t.file("math.py"):
suggestion = await optimize_instruction(
treatment.instructions or "",
t,
"Agent should add docstrings to every function.",
)
pytest.fail(f"Docstring instruction was ignored.\n\n{suggestion}")
assert '"""' not in b.file("math.py"), "Baseline should not have docstrings"
```
## Install
```bash
uv add pytest-codingagents
```
Authenticate via `GITHUB_TOKEN` env var (CI) or `gh auth status` (local).
## What You Can Test
| Capability | What it proves | Guide |
|---|---|---|
| **A/B comparison** | Config B actually produces different (and better) output than Config A | [Getting Started](https://sbroenne.github.io/pytest-codingagents/getting-started/) |
| **Instruction optimization** | Turn a failing test into a ready-to-use instruction fix | [Optimize Instructions](https://sbroenne.github.io/pytest-codingagents/how-to/optimize/) |
| **Instructions** | Your custom instructions change agent behavior — not just vibes | [Getting Started](https://sbroenne.github.io/pytest-codingagents/getting-started/) |
| **Skills** | That domain knowledge file is helping, not being ignored | [Skill Testing](https://sbroenne.github.io/pytest-codingagents/how-to/skills/) |
| **Models** | Which model works best for your use case and budget | [Model Comparison](https://sbroenne.github.io/pytest-codingagents/getting-started/model-comparison/) |
| **Custom Agents** | Your custom agent configurations actually work as intended | [Getting Started](https://sbroenne.github.io/pytest-codingagents/getting-started/) |
| **MCP Servers** | The agent discovers and uses your custom tools | [MCP Server Testing](https://sbroenne.github.io/pytest-codingagents/how-to/mcp-servers/) |
| **CLI Tools** | The agent operates command-line interfaces correctly | [CLI Tool Testing](https://sbroenne.github.io/pytest-codingagents/how-to/cli-tools/) |
## AI Analysis
> **See it in action:** [Basic Report](https://sbroenne.github.io/pytest-codingagents/demo/basic-report.html) · [Model Comparison](https://sbroenne.github.io/pytest-codingagents/demo/model-comparison-report.html) · [Instruction Testing](https://sbroenne.github.io/pytest-codingagents/demo/instruction-testing-report.html)
Every test run produces an HTML report with AI-powered insights:
- **Diagnoses failures** — root cause analysis with suggested fixes
- **Compares models** — leaderboards ranked by pass rate and cost
- **Evaluates instructions** — which instructions produce better results
- **Recommends improvements** — actionable changes to tools, instructions, and skills
```bash
uv run pytest tests/ --aitest-html=report.html --aitest-summary-model=azure/gpt-5.2-chat
```
## Documentation
Full docs at **[sbroenne.github.io/pytest-codingagents](https://sbroenne.github.io/pytest-codingagents/)** — API reference, how-to guides, and demo reports.
## License
MIT
| text/markdown | Stefan Brunner | null | null | null | MIT | ai, coding-agents, copilot, llm, pytest, testing | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"azure-identity>=1.25.2",
"github-copilot-sdk>=0.1.25",
"pydantic-ai>=1.0",
"pytest-aitest>=0.5.7",
"pytest>=9.0",
"pyyaml>=6.0",
"pre-commit>=4.5; extra == \"dev\"",
"pyright>=1.1.408; extra == \"dev\"",
"pytest-asyncio>=1.3; extra == \"dev\"",
"ruff>=0.15; extra == \"dev\"",
"cairosvg>=2.7; extra == \"docs\"",
"mkdocs-material>=9.7; extra == \"docs\"",
"mkdocs>=1.6; extra == \"docs\"",
"mkdocstrings[python]>=0.24; extra == \"docs\"",
"pillow>=11.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/sbroenne/pytest-codingagents",
"Repository, https://github.com/sbroenne/pytest-codingagents"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:51:34.927869 | pytest_codingagents-0.2.4.tar.gz | 119,614 | 13/cd/7a152fc3ad26a6f12d5ebf5a41891b9f49896436df21463e1c185b96d0a7/pytest_codingagents-0.2.4.tar.gz | source | sdist | null | false | e50b653b7a3bf92529256f7eb235afb7 | d0139a22dccba29195e3bb8f26a11061a8bd79891c101684009fb54371d00d25 | 13cd7a152fc3ad26a6f12d5ebf5a41891b9f49896436df21463e1c185b96d0a7 | null | [
"LICENSE"
] | 208 |
2.4 | dirac-cwl | 1.2.0 | Prototype of CWL used as a production/job workflow language | <p align="center">
<img alt="Dirac CWL Logo" src="public/CWLDiracX.png" width="300" >
</p>
# Dirac CWL Prototype

 [](https://anaconda.org/conda-forge/dirac-cwl) [](https://anaconda.org/conda-forge/dirac-cwl)
This Python prototype introduces a command-line interface (CLI) designed for the end-to-end execution of Common Workflow Language (CWL) workflows at different scales. It enables users to locally test CWL workflows, and then run them as jobs, transformations and/or productions.
## Prototype Workflow
### Local testing
Initially, the user tests the CWL workflow locally using `cwltool`. This step involves validating the workflow's structure and ensuring that it executes correctly with the provided inputs.
> - CWL task: workflow structure
> - inputs of the task
Once the workflow passes local testing, the user can choose from 3 options for submission depending on the requirements.
### Submission methods
1. **Submission as Dirac Jobs**: For simple workflows with a limited number of inputs, CWL tasks can be submitted as individual jobs. In this context, they are run locally as if they were run on distributed computing resources. Additionally, users can submit the same workflow with different sets of inputs in a single request, generating multiple jobs at once.
> - CWL task
> - [inputs1, inputs2, ...]
> - Dirac description (site, priority): Dirac-specific attributes related to scheduling
> - Metadata (job type): Dirac-specific attributes related to scheduling + execution
2. **Submission as Dirac Transformation**: For workflows requiring continuous, real-time input data or large-scale execution, CWL tasks can be submitted as transformations. As new input data becomes available, jobs are automatically generated and executed as jobs. This method is ideal for ongoing data processing and scalable operations.
> - CWL task (inputs already described within it)
> - Dirac description (site, priority)
> - Metadata (job type, group size, query parameters)
3. **Submission as Dirac Productions**: For complex workflows that require multiple steps with different requirements, CWL tasks can be submitted as productions. This method allows the workflow to be split into multiple transformations, with each transformation handling a distinct step in the process. Each transformation can manage one or more jobs, enabling large-scale, multi-step execution.
> - CWL task (inputs already described within it)
> - Step Metadata (per step):
> - Dirac description (site, priority)
> - Metadata (job type, group size, query parameters)
## Installation (with Pixi)
This project uses [Pixi](https://pixi.sh) to manage the development environment and tasks.
1) Install Pixi (see official docs for your platform)
2) Create and populate the environment
```bash
pixi install
```
3) Enter the environment (optional)
```bash
pixi shell
```
That’s it. You can now run commands either inside `pixi shell` or by prefixing with `pixi run`.
## Usage
Inside the Pixi environment:
```bash
# Either inside a shell
pixi shell
# Submit
dirac-cwl job submit <workflow_path> [--parameter-path <input_path>] [--metadata-path <metadata_path>]
dirac-cwl transformation submit <workflow_path> [--metadata-path <metadata_path>]
dirac-cwl production submit <workflow_path> [--steps-metadata-path <steps_metadata_path>]
```
Or prefix individual commands:
```bash
pixi run dirac-cwl job submit <workflow_path> --parameter-path <input_path>
```
Common tasks are defined in `pyproject.toml` and can be run with Pixi:
```bash
# Run tests
pixi run test
# Lint (mypy)
pixi run lint
```
## Using cwltool directly
To use the workflows and inputs directly with `cwltool`, you need to add the `modules` directory to the `$PATH`:
```bash
export PATH=$PATH:</path/to/dirac-cwl/src/dirac_cwl/modules>
cwltool <workflow_path> <inputs>
```
## Contribute
### Add a workflow
To add a new workflow to the project, follow these steps:
- Create a new directory under `workflows` (e.g. `workflows/helloworld`)
- Add one or more variants of a workflow under different directories (e.g. `helloworld/helloworld_basic/description.cwl` and `helloworld/helloworld_with_inputs/description.cwl`)
- In a `type_dependencies` subdirectory, add the required files to submit a job/transformation/production from a given variant.
Directory Structure Example:
```
workflows/
└── my_new_workflow/
|
├── my_new_workflow_complete/
| └── description.cwl
├── my_new_workflow_step1/
| └── description.cwl
├── my_new_workflow_step2/
| └── description.cwl
|
└── type_dependencies/
├── production/
| └── steps_metadata.yaml
├── transformation/
| └── metadata.yaml
└── job/
├── inputs1.yaml
└── inputs2.yaml
```
### Add a Pre/Post-processing command and a Job type
#### Add a Pre/Post-Command
A pre/post-processing command allows the execution of code before and after the workflow.
The commands should be stored in the `src/dirac_cwl/commands/` directory.
To add a new pre/post-processing command to the project, follow these steps:
- Create a class that inherits `PreProcessCommand` if it's going to be executed before the workflow or `PostProcessCommand` if it's going to be executed after the workflow. In the rare case that the command can be executed in both stages, it should inherit both classes. These classes are located at `src/dirac_cwl/commands/core.py`.
- Implement the `execute` function with the actions it is expected to perform. This function receives the job path as a `string` and a dictionary of keyword arguments, `**kwargs`. It can raise exceptions if needed.
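As a self-contained sketch of this pattern: the real base class is `PreProcessCommand` from `src/dirac_cwl/commands/core.py`, so a minimal stand-in is defined here to make the example runnable on its own, and the `StageInputs` command is hypothetical.

```python
import os


class PreProcessCommand:
    """Stand-in for dirac_cwl.commands.core.PreProcessCommand (illustration only)."""

    def execute(self, job_path: str, **kwargs) -> None:
        raise NotImplementedError


class StageInputs(PreProcessCommand):
    """Hypothetical command: verify the job directory exists before the workflow runs."""

    def execute(self, job_path: str, **kwargs) -> None:
        if not os.path.isdir(job_path):
            # The docs say execute may raise if something is wrong
            raise FileNotFoundError(f"job path does not exist: {job_path}")


StageInputs().execute(".")  # the current directory exists, so this passes
```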
#### Add a Job Type
Job types in `dirac_cwl` are known as "plugins". These plugins are created from the hints defined in a CWL file.
The Job type should be stored in the `src/dirac_cwl/execution_hooks/plugins/` directory and should appear in the `__all__` list of the `__init__.py` file.
To add a new Job type to the project, follow these steps:
- Create a class that inherits `ExecutionHooksBasePlugin` from `src/dirac_cwl/execution_hooks/core.py`.
- Import the pre-processing and post-processing commands that this Job type is going to execute.
- Inside the `__init__` function, set the `preprocess_commands` and `postprocess_commands` lists with the commands that each step should execute. Be especially careful with the order: the commands will be executed in the same order they were specified in the lists.
In the end, it should look something like this:
```python
class JobTypeExample(ExecutionHooksBasePlugin):
def __init__(self, **data):
super().__init__(**data)
# ...
self.preprocess_commands = [PreProcessCmd1, PreProcessCmd2, PreProcessCmd3]
self.postprocess_commands = [PostProcessCmd1, PostProcessCmd2, PostProcessCmd3]
# ...
```
In the previous example, `PreProcessCmd1` will be executed before `PreProcessCmd2`, and `PreProcessCmd2` will be executed before `PreProcessCmd3`.
- Finally, to make this plugin discoverable from the registry, it has to appear in the `pyproject.toml` entry points under the group `dirac_cwl.execution_hooks`. For the previous example this would look like:
```toml
[project.entry-points."dirac_cwl.execution_hooks"]
# ...
JobTypeExample = "dirac_cwl.execution_hooks.plugins:JobTypeExample"
# ...
```
### Add a module
If your workflow requires calling a script, you can add this script as a module. Follow these steps to properly integrate the module:
- Add the script: Place your script in the `src/dirac_cwl/modules` directory.
- Update `pyproject.toml`: Add the script to the `pyproject.toml` file to create a command-line interface (CLI) command.
- Reinstall the package: Run `pixi run pip install .` to reinstall the package and make the new script available as a command.
- Usage in CWL Workflow: Reference the command in your `description.cwl` file.
**Example**
Let’s say you have a script named `generic_command.py` located at `src/dirac_cwl/modules/generic_command.py`. Here's how you can integrate it:
- `generic_command.py` Example Script:
```python
#!/usr/bin/env python3
import typer
from rich.console import Console
app = typer.Typer()
console = Console()
@app.command()
def run_example():
console.print("This is an example command.")
if __name__ == "__main__":
app()
```
- Update `pyproject.toml`:
```toml
[project.scripts]
generic-command = "dirac_cwl.modules.generic_command:app"
```
- Reinstall the package with:
```bash
pixi run pip install .
```
- Reference in `description.cwl`:
```yaml
baseCommand: [generic-command]
```
### Test your changes
- Run tests via Pixi:
```bash
pixi run test
```
- Or directly:
```bash
pixi run pytest test/test_workflows.py -v
```
| text/markdown | DIRAC consortium | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering",
"Topic :: System :: Distributed Computing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"cwl-utils",
"cwlformat",
"cwltool",
"dirac>=9.0.0",
"diracx-core>=0.0.8",
"diracx-api>=0.0.8",
"diracx-client>=0.0.8",
"diracx-cli>=0.0.8",
"lbprodrun",
"pydantic",
"pyyaml",
"typer",
"referencing>=0.30",
"rich",
"ruamel.yaml",
"pytest>=6; extra == \"testing\"",
"pytest-mock; extra == \"testing\"",
"mypy; extra == \"testing\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:51:07.288064 | dirac_cwl-1.2.0.tar.gz | 213,574 | 1f/ca/dc3dbf0a15293a6ebebc55c26f0e0578678f8106843188a5aff1182dbebb/dirac_cwl-1.2.0.tar.gz | source | sdist | null | false | 2b11ef2f93898980d0dcdb9e67add233 | 19e3473c8e088825fe05f53bc7bc60615aeba62a6ddfd81d16b713401d4847ca | 1fcadc3dbf0a15293a6ebebc55c26f0e0578678f8106843188a5aff1182dbebb | GPL-3.0-only | [
"LICENSE"
] | 208 |
2.4 | agentlint | 0.2.0 | Real-time quality guardrails for AI coding agents | # agentlint
[](https://github.com/mauhpr/agentlint/actions/workflows/ci.yml)
[](https://codecov.io/gh/mauhpr/agentlint)
[](https://pypi.org/project/agentlint/)
[](https://pypi.org/project/agentlint/)
[](https://opensource.org/licenses/MIT)
Real-time quality guardrails for AI coding agents.
AI coding agents drift during long sessions — they introduce API keys into source, skip tests, force-push to main, and leave debug statements behind. AgentLint catches these problems *as they happen*, not at review time.
## What it catches
AgentLint ships with 31 rules across 5 packs. The 10 **universal** rules work with any tech stack; 4 additional packs auto-activate based on your project files:
| Rule | Severity | What it does |
|------|----------|-------------|
| `no-secrets` | ERROR | Blocks writes containing API keys, tokens, passwords |
| `no-env-commit` | ERROR | Blocks writing `.env` and credential files |
| `no-force-push` | ERROR | Blocks `git push --force` to main/master |
| `no-destructive-commands` | WARNING | Warns on `rm -rf`, `DROP TABLE`, `git reset --hard` |
| `dependency-hygiene` | WARNING | Warns on ad-hoc `pip install` / `npm install` |
| `max-file-size` | WARNING | Warns when a file exceeds 500 lines |
| `drift-detector` | WARNING | Warns after many edits without running tests |
| `no-debug-artifacts` | WARNING | Detects `console.log`, `print()`, `debugger` left in code |
| `test-with-changes` | WARNING | Warns if source changed but no tests were updated |
| `no-todo-left` | INFO | Reports TODO/FIXME comments in changed files |
**ERROR** rules block the agent's action. **WARNING** rules inject advice into the agent's context. **INFO** rules appear in the session report.
<details>
<summary><strong>Python pack</strong> (6 rules) — auto-activates when <code>pyproject.toml</code> or <code>setup.py</code> exists</summary>
| Rule | Severity | What it does |
|------|----------|-------------|
| `no-bare-except` | WARNING | Prevents bare `except:` clauses that swallow all exceptions |
| `no-unsafe-shell` | ERROR | Blocks unsafe shell execution via subprocess or os module |
| `no-dangerous-migration` | WARNING | Warns on risky Alembic migration operations |
| `no-wildcard-import` | WARNING | Prevents `from module import *` |
| `no-unnecessary-async` | INFO | Flags async functions that never use `await` |
| `no-sql-injection` | ERROR | Blocks SQL via string interpolation (f-strings, `.format()`) |
</details>
<details>
<summary><strong>Frontend pack</strong> (8 rules) — auto-activates when <code>package.json</code> exists</summary>
| Rule | Severity | What it does |
|------|----------|-------------|
| `a11y-image-alt` | WARNING | Ensures images have alt text (WCAG 1.1.1) |
| `a11y-form-labels` | WARNING | Ensures form inputs have labels or `aria-label` |
| `a11y-interactive-elements` | WARNING | Checks ARIA attributes and link anti-patterns |
| `a11y-heading-hierarchy` | INFO | Ensures no skipped heading levels or multiple h1s |
| `mobile-touch-targets` | WARNING | Ensures 44x44px minimum touch targets (WCAG 2.5.5) |
| `mobile-responsive-patterns` | INFO | Warns about desktop-only layout patterns |
| `style-no-arbitrary-values` | INFO | Warns about arbitrary Tailwind values bypassing tokens |
| `style-focus-visible` | WARNING | Ensures focus indicators are not removed (WCAG 2.4.7) |
</details>
<details>
<summary><strong>React pack</strong> (3 rules) — auto-activates when <code>react</code> is in <code>package.json</code> dependencies</summary>
| Rule | Severity | What it does |
|------|----------|-------------|
| `react-query-loading-state` | WARNING | Ensures `useQuery` results handle loading and error states |
| `react-empty-state` | INFO | Suggests empty state handling for `array.map()` in JSX |
| `react-lazy-loading` | INFO | Suggests lazy loading for heavy components in page files |
</details>
<details>
<summary><strong>SEO pack</strong> (4 rules) — auto-activates when an SSR/SSG framework (Next.js, Nuxt, Gatsby, Astro, etc.) is detected</summary>
| Rule | Severity | What it does |
|------|----------|-------------|
| `seo-page-metadata` | WARNING | Ensures page files include title and description |
| `seo-open-graph` | INFO | Ensures pages with metadata include Open Graph tags |
| `seo-semantic-html` | INFO | Encourages semantic HTML over excessive divs |
| `seo-structured-data` | INFO | Suggests JSON-LD structured data for content pages |
</details>
### Stack auto-detection
When `stack: auto` (the default), AgentLint detects your project and activates matching packs:
| Detected file | Pack activated |
|--------------|----------------|
| `pyproject.toml` or `setup.py` | `python` |
| `package.json` | `frontend` |
| `react` in package.json dependencies | `react` |
| SSR/SSG framework in dependencies (Next.js, Nuxt, Gatsby, Astro, SvelteKit, Remix) | `seo` |
The `universal` pack is always active. To override auto-detection, list packs explicitly in `agentlint.yml`.
## Quick start
```bash
pip install agentlint
cd your-project
agentlint setup
```
That's it! AgentLint hooks are now active in Claude Code.
When AgentLint blocks a dangerous action, the agent sees:
```
⛔ [no-secrets] Possible secret token detected (prefix 'sk_live_')
💡 Use environment variables instead of hard-coded secrets.
```
The agent's action is blocked before it can write the secret into your codebase.
The `setup` command:
- Installs hooks into `.claude/settings.json`
- Creates `agentlint.yml` with auto-detected settings (if it doesn't exist)
To remove AgentLint hooks:
```bash
agentlint uninstall
```
### Installation options
```bash
# Install to project (default)
agentlint setup
# Install to user-level settings (~/.claude/settings.json)
agentlint setup --global
```
### Claude Code marketplace
Add the AgentLint marketplace and install the plugin:
```
/plugin marketplace add mauhpr/agentlint
/plugin install agentlint@agentlint
```
### Local plugin (development)
```bash
claude --plugin-dir /path/to/agentlint/plugin
```
### Manual hook configuration
Add to your project's `.claude/settings.json`:
```json
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash|Edit|Write",
"hooks": [{ "type": "command", "command": "agentlint check --event PreToolUse" }]
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [{ "type": "command", "command": "agentlint check --event PostToolUse" }]
}
],
"Stop": [
{
"hooks": [{ "type": "command", "command": "agentlint report" }]
}
]
}
}
```
## Configuration
Create `agentlint.yml` in your project root (or run `agentlint init`):
```yaml
# Auto-detect tech stack or list packs explicitly
stack: auto
# strict: warnings become errors
# standard: default behavior
# relaxed: warnings become info
severity: standard
packs:
- universal
# - python # Auto-detected from pyproject.toml / setup.py
# - frontend # Auto-detected from package.json
# - react # Auto-detected from react in dependencies
# - seo # Auto-detected from SSR/SSG framework in dependencies
rules:
max-file-size:
limit: 300 # Override default 500-line limit
drift-detector:
threshold: 5 # Warn after 5 edits without tests (default: 10)
no-secrets:
enabled: false # Disable a rule entirely
# Python pack examples:
# no-bare-except:
# allow_reraise: true
# Frontend pack examples:
# a11y-heading-hierarchy:
# max_h1: 1
# Load custom rules from a directory
# custom_rules_dir: .agentlint/rules/
```
## Custom rules
Create a Python file in your custom rules directory:
```python
# .agentlint/rules/no_direct_db.py
from agentlint.models import Rule, RuleContext, Violation, Severity, HookEvent
class NoDirectDB(Rule):
id = "custom/no-direct-db"
description = "API routes must not import database layer directly"
severity = Severity.WARNING
events = [HookEvent.POST_TOOL_USE]
pack = "custom"
def evaluate(self, context: RuleContext) -> list[Violation]:
if not context.file_path or "/routes/" not in context.file_path:
return []
if context.file_content and "from database" in context.file_content:
return [Violation(
rule_id=self.id,
message="Route imports database directly. Use repository pattern.",
severity=self.severity,
file_path=context.file_path,
)]
return []
```
Then set `custom_rules_dir: .agentlint/rules/` in your config.
See [docs/custom-rules.md](docs/custom-rules.md) for the full guide.
## How it works
AgentLint hooks into Claude Code's lifecycle events:
1. **PreToolUse** — Before Write/Edit/Bash calls. Can **block** the action (exit code 2).
2. **PostToolUse** — After Write/Edit. Injects warnings into the agent's context.
3. **Stop** — End of session. Generates a quality report.
Each invocation loads your config, evaluates matching rules, and returns JSON that Claude Code understands. Session state persists across invocations so rules like `drift-detector` can track cumulative behavior.
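To illustrate the blocking mechanism (not AgentLint's actual code): a PreToolUse hook reads the tool call, and returning exit code 2 is what blocks the action. This toy check mirrors the `sk_live_` example from the quick start; the regex and function are illustrative assumptions.

```python
import re
import sys

# Toy version of a blocking PreToolUse decision. Exit code 2 blocks the
# tool call; 0 allows it. The pattern mirrors the 'sk_live_' example above.
SECRET_RE = re.compile(r"\bsk_live_[A-Za-z0-9]+")


def check_write(content: str) -> int:
    """Return the exit code a blocking hook would use: 0 = allow, 2 = block."""
    if SECRET_RE.search(content):
        print("[no-secrets] Possible secret token detected", file=sys.stderr)
        return 2
    return 0


print(check_write("API_KEY = 'sk_live_abc123'"))  # 2
print(check_write("API_KEY = os.environ['KEY']"))  # 0
```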
## Comparison with alternatives
| Project | How AgentLint differs |
|---------|----------------------|
| guardrails-ai | Validates LLM I/O. AgentLint validates agent *tool calls* in real-time. |
| claude-code-guardrails | Uses external API. AgentLint is local-first, no network dependency. |
| Custom hooks | Copy-paste scripts. AgentLint is a composable engine with config + plugins. |
| Codacy Guardrails | Commercial, proprietary. AgentLint is fully open source. |
## FAQ
**Does AgentLint slow down Claude Code?**
No. Rules evaluate in <10ms. AgentLint runs locally as a subprocess — no network calls, no API dependencies.
**What if a rule is too strict for my project?**
Disable it in `agentlint.yml`: `rules: { no-secrets: { enabled: false } }`. Or switch to `severity: relaxed` to downgrade warnings to informational.
**Is my code sent anywhere?**
No. AgentLint is fully offline. It reads stdin from Claude Code's hook system and evaluates rules locally. No telemetry, no network requests.
**Can I use AgentLint outside Claude Code?**
The CLI works standalone — you can pipe JSON to `agentlint check` in any CI pipeline. However, the hook integration (blocking actions in real-time) is specific to Claude Code.
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.
## License
MIT
| text/markdown | mauhpr | null | null | null | null | agent-guardrails, agentic, ai-agents, ai-code-review, ai-guardrails, ai-linting, ai-safety, claude, claude-code, code-quality, code-review, guardrails, hooks, linting, pre-commit, tool-use | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Quality Assurance",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0",
"pyyaml>=6.0",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/mauhpr/agentlint",
"Repository, https://github.com/mauhpr/agentlint",
"Issues, https://github.com/mauhpr/agentlint/issues",
"Changelog, https://github.com/mauhpr/agentlint/blob/main/CHANGELOG.md",
"Documentation, https://github.com/mauhpr/agentlint/tree/main/docs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:51:05.786877 | agentlint-0.2.0.tar.gz | 101,853 | f6/81/435f1f1cef3fda0abac46aeabc6371d02c8dbb9d9c5fe2a85529d2182496/agentlint-0.2.0.tar.gz | source | sdist | null | false | 89cbbd1e318bc13728b8900de1777a58 | 846a45e5cb734aacb6c26d1ce5feb9ee3f577d2358ea12bcca8c4ed36452df43 | f681435f1f1cef3fda0abac46aeabc6371d02c8dbb9d9c5fe2a85529d2182496 | MIT | [
"LICENSE"
] | 208 |
2.3 | httpx-oauth2-flows | 0.4.0 | Add your description here | 1. The "Big Four" (The Originals)
These were the first four flows ever created in the original 2012 rulebook (RFC 6749).
Authorization Code Flow: The "Permission Slip." You get a temporary code and swap it for a badge. (Best for Web Apps).
Implicit Flow: The "Shortcut." The badge is given directly in the URL. (Now considered Dangerous/Legacy).
Resource Owner Password Credentials: The "House Keys." You give your password directly to the app. (Now Forbidden/Legacy).
Client Credentials Flow: The "Robot ID." The app logs in as itself to do its own chores. (Best for Server-to-Server).
## 2. The "Modern Standard" (The Must-Have)
This is the update that made the internet much safer.
- **Authorization Code + PKCE:** The "Secret Handshake." It’s the standard Authorization Code flow but adds a scrambled secret word so no one can steal the ticket. (The Gold Standard for everything today).
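The "scrambled secret word" is simple to produce. A minimal sketch using only Python's standard library, following RFC 7636's S256 method (an illustration, not this package's implementation):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Return a (code_verifier, code_challenge) pair per RFC 7636 (S256).

    The verifier is a high-entropy random string; the challenge is the
    base64url-encoded SHA-256 of the verifier, with '=' padding stripped.
    The app sends the challenge up front and reveals the verifier only when
    swapping the authorization code for tokens, so an intercepted code is
    useless on its own.
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```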
## 3. The "Special Devices" Flow
For things that don't have a normal browser or keyboard.
- **Device Authorization Grant:** The "TV Code." The TV shows a code, you type it into your phone to log in. (RFC 8628). (Best for Smart TVs, CLI tools, and IoT).
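The phone-side login happens out of band; meanwhile the device just polls the token endpoint with its device code. A sketch of that polling body per RFC 8628 (the client id here is made up):

```python
# Well-known grant type URN defined by RFC 8628.
DEVICE_GRANT = "urn:ietf:params:oauth:grant-type:device_code"

def device_poll_body(device_code: str, client_id: str) -> dict[str, str]:
    """Form body the device POSTs to the token endpoint while the user
    approves the login on their phone (RFC 8628, section 3.4)."""
    return {
        "grant_type": DEVICE_GRANT,
        "device_code": device_code,
        "client_id": client_id,
    }
```

The device repeats this POST at the `interval` given in the initial device-authorization response; an `authorization_pending` error simply means "keep waiting."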
## 4. The "Maintenance" Flow
This isn't for logging in the first time; it's for staying logged in.
- **Refresh Token Flow:** The "Badge Renewer." When your 1-hour VIP badge expires, you use a special "Refresh Token" to get a new one without typing your password again.
## 5. The "Assertion" Flows (The Translators)
These are used when you already have one kind of proof and need to swap it for an OAuth badge.
- **SAML 2.0 Bearer:** The "Enterprise Translator." Swapping an old-school XML "Official Letter" for a modern badge. (RFC 7522).
- **JWT Bearer:** The "Digital Signature." Swapping a signed digital note for a badge. Used for high-security machine talk. (RFC 7523).
## 6. The "Upgrade & Swap" Flows
The newest tools for complex systems with many parts.
- **Token Exchange:** The "Badge Swap." Trading a badge for one building for a badge for a different building. (RFC 8693).
- **Token Delegation:** (Part of Token Exchange.) When an app says, "I'm acting for Bob, give me a token that proves I'm his assistant."
## 7. The "Rare/Extended" Flows
You might see these in very specific high-level setups.
- **CIBA (Client-Initiated Backchannel Authentication):** The "Ping My Phone." Instead of a redirect, the app pings your phone directly and asks, "Is this you?" You click "Yes" on your phone, and the app logs in. (Common in banking apps).
- **OpenID Connect (OIDC):** While technically a "layer" on top of OAuth2, it adds the ID Token (the "ID Card"), which tells the app exactly who you are (name, email, photo), whereas OAuth2 only tells the app what it is allowed to do.
| text/markdown | Florian Daude | Florian Daude <floriandaude@hotmail.fr> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"cryptography>=46.0.3",
"httpx>=0.28.1",
"keyring>=25.7.0",
"pydantic>=2.12.5",
"pyjwt>=2.10.1",
"uvicorn>=0.40.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Alpine Linux","version":"3.23.3","id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:51:04.609006 | httpx_oauth2_flows-0.4.0.tar.gz | 20,391 | 12/b6/c3ef2aecd2744c4d814b10fc5dd535ce5bdecf3ce7fa9a94498a879e9c63/httpx_oauth2_flows-0.4.0.tar.gz | source | sdist | null | false | 4393697731b4b9c36d9f76ac261cb29b | 18ddfc791e0bde55a8a03ee16a2c091b9d26b72467de6713d5e258b6e2a75339 | 12b6c3ef2aecd2744c4d814b10fc5dd535ce5bdecf3ce7fa9a94498a879e9c63 | null | [] | 214 |
2.4 | jupyter-chat-components | 0.1.2 | Components to be displayed in Jupyter Chat | # jupyter_chat_components
[](https://github.com/brichet/jupyter-chat-components/actions/workflows/build.yml)
Components to be displayed in Jupyter Chat
## Requirements
- JupyterLab >= 4.0.0
## Install
To install the extension, execute:
```bash
pip install jupyter_chat_components
```
## Uninstall
To remove the extension, execute:
```bash
pip uninstall jupyter_chat_components
```
## Contributing
### Development install
Note: You will need NodeJS to build the extension package.
The `jlpm` command is JupyterLab's pinned version of
[yarn](https://yarnpkg.com/) that is installed with JupyterLab. You may use
`yarn` or `npm` in lieu of `jlpm` below.
```bash
# Clone the repo to your local environment
# Change directory to the jupyter_chat_components directory
# Set up a virtual environment and install package in development mode
python -m venv .venv
source .venv/bin/activate
pip install --editable "."
# Link your development version of the extension with JupyterLab
jupyter labextension develop . --overwrite
# Rebuild extension Typescript source after making changes
# IMPORTANT: Unlike the steps above which are performed only once, do this step
# every time you make a change.
jlpm build
```
You can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.
```bash
# Watch the source directory in one terminal, automatically rebuilding when needed
jlpm watch
# Run JupyterLab in another terminal
jupyter lab
```
With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).
By default, the `jlpm build` command generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command:
```bash
jupyter lab build --minimize=False
```
### Development uninstall
```bash
pip uninstall jupyter_chat_components
```
In development mode, you will also need to remove the symlink created by `jupyter labextension develop`
command. To find its location, you can run `jupyter labextension list` to figure out where the `labextensions`
folder is located. Then you can remove the symlink named `jupyter-chat-components` within that folder.
### Testing the extension
#### Frontend tests
This extension uses [Jest](https://jestjs.io/) for JavaScript code testing.
To run the tests, execute:
```sh
jlpm
jlpm test
```
#### Integration tests
This extension uses [Playwright](https://playwright.dev/docs/intro) for the integration tests (aka user level tests).
More precisely, the JupyterLab helper [Galata](https://github.com/jupyterlab/jupyterlab/tree/master/galata) is used to handle testing the extension in JupyterLab.
More information is provided in the [ui-tests](./ui-tests/README.md) README.
## AI Coding Assistant Support
This project includes an `AGENTS.md` file with coding standards and best practices for JupyterLab extension development. The file follows the [AGENTS.md standard](https://agents.md) for cross-tool compatibility.
### Compatible AI Tools
`AGENTS.md` works with AI coding assistants that support the standard, including Cursor, GitHub Copilot, Windsurf, Aider, and others. For a current list of compatible tools, see [the AGENTS.md standard](https://agents.md).
Other conventions you might encounter:
- `.cursorrules` - Cursor's YAML/JSON format (Cursor also supports AGENTS.md natively)
- `CONVENTIONS.md` / `CONTRIBUTING.md` - For CodeConventions.ai and GitHub bots
- Project-specific rules in JetBrains AI Assistant settings
All tool-specific files should be symlinks to `AGENTS.md` as the single source of truth.
### What's Included
The `AGENTS.md` file provides guidance on:
- Code quality rules and file-scoped validation commands
- Naming conventions for packages, plugins, and files
- Coding standards (TypeScript)
- Development workflow and debugging
- Common pitfalls and how to avoid them
### Customization
You can edit `AGENTS.md` to add project-specific conventions or adjust guidelines to match your team's practices. The file uses plain Markdown with Do/Don't patterns and references to actual project files.
**Note**: `AGENTS.md` is living documentation. Update it when you change conventions, add dependencies, or discover new patterns. Include `AGENTS.md` updates in commits that modify workflows or coding standards.
### Packaging the extension
See [RELEASE](RELEASE.md)
| text/markdown | null | Project Jupyter <jupyter@googlegroups.com> | null | null | BSD 3-Clause License
Copyright (c) 2026, Project Jupyter
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | jupyter, jupyterlab, jupyterlab-extension | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: 4",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Mime Renderers",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/brichet/jupyter-chat-components",
"Bug Tracker, https://github.com/brichet/jupyter-chat-components/issues",
"Repository, https://github.com/brichet/jupyter-chat-components.git"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T15:51:01.280716 | jupyter_chat_components-0.1.2.tar.gz | 217,220 | 0a/f9/1b644aaeec5e60ee9370a40b920f25f0429d327c5364e0505e1fb74609c8/jupyter_chat_components-0.1.2.tar.gz | source | sdist | null | false | 16be38692c63063add35560e76844c1c | 6c15022a9e1766800c41ca4e35ee7aaec150cdd443256acf8f8e888f0d10f1ea | 0af91b644aaeec5e60ee9370a40b920f25f0429d327c5364e0505e1fb74609c8 | null | [
"LICENSE"
] | 208 |
2.4 | hugr-qir | 0.0.21rc1 | Quantinuum's tool for converting HUGR to QIR | # hugr-qir
[![build_status][]](https://github.com/Quantinuum/hugr-qir/actions)
[![codecov][]](https://codecov.io/gh/Quantinuum/hugr-qir)
A tool for converting Hierarchical Unified Graph Representation (HUGR, pronounced _hugger_) formatted quantum programs into [QIR](https://github.com/qir-alliance/qir-spec) format.
Warning: Not all hugr/guppy programs can be converted to QIR.
## Installation
You can install from pypi via `pip install hugr-qir`.
## Usage
### Python
Use the `hugr_to_qir` function from the `hugr_to_qir` module to convert HUGR to QIR. By default, some basic validity checks will be run on the generated QIR. These checks can be turned off by passing `validate_qir=False`.
You can find an example notebook at `examples/submit-guppy-h2-via-qir.ipynb` showing the conversion and the submission to H1/H2.
### CLI
You can use the CLI after installing the Python package.
This will generate QIR for a given HUGR file:
```sh
hugr-qir test-file.hugr
```
Run `hugr-qir --help` to see the available options.
If you want to generate a hugr file from guppy, you can do this in two steps:
1. Add this to the end of your guppy file:
```py
import sys

if __name__ == "__main__":
    sys.stdout.buffer.write(main.compile().to_bytes())
    # Or, to compile a non-main guppy function:
    # sys.stdout.buffer.write(guppy_func.compile_function().to_bytes())
```
1. Generate the hugr file with:
```sh
python guppy_examples/general/quantum-classical-1.py > test-guppy.hugr
```
## Development
### #️⃣ Setting up the development environment
The easiest way to setup the development environment is to use the provided
[`devenv.nix`](devenv.nix) file. This will setup a development shell with all the
required dependencies.
To use this, you will need to install [devenv](https://devenv.sh/getting-started/).
Once you have it running, open a shell with:
```bash
devenv shell
```
All the required dependencies should be available. You can automate loading the
shell by setting up [direnv](https://devenv.sh/automatic-shell-activation/).
### Run tests
You can run the Rust tests with:
```bash
cargo test
```
You can run the Python tests with:
```bash
pytest -n auto
```
If you want to update the snapshots you can do that via:
```bash
pytest --snapshot-update
```
## License
This project is licensed under Apache License, Version 2.0 ([LICENSE][] or <http://www.apache.org/licenses/LICENSE-2.0>).
[build_status]: https://github.com/Quantinuum/hugr-qir/actions/workflows/ci-py.yml/badge.svg?branch=main
[codecov]: https://img.shields.io/codecov/c/gh/Quantinuum/hugr-qir?logo=codecov
[LICENSE]: https://github.com/Quantinuum/hugr-qir/blob/main/LICENCE
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click~=8.3.1",
"guppylang~=0.21.9",
"hugr~=0.15.4",
"llvmlite~=0.44.0",
"quantinuum-qircheck~=0.4.0"
] | [] | [] | [] | [] | maturin/1.12.3 | 2026-02-20T15:50:42.299501 | hugr_qir-0.0.21rc1-cp310-abi3-win_amd64.whl | 29,398,334 | e3/4c/2d2271fb9b6284b4f1e59136b72cc489564960791f59cab516a407259e6d/hugr_qir-0.0.21rc1-cp310-abi3-win_amd64.whl | cp310 | bdist_wheel | null | false | 34f404fcfa1afa22e1202b3117bfe2f0 | 8ac8f37af30387f34963b50a7351d22508d9a882a62f8d548cfd86c7bc3482ce | e34c2d2271fb9b6284b4f1e59136b72cc489564960791f59cab516a407259e6d | null | [] | 315 |
2.4 | ScaleNx | 2026.2.12.34.post1 | Image resizing using Scale2x, Scale3x, Scale2xSFX and Scale3xSFX algorithms, in pure Python. | # Pixel art image scaling - Scale2x, Scale3x, Scale2xSFX and Scale3xSFX in pure Python
 
## Overview
[**ScaleNx**](https://dnyarri.github.io/scalenx.html), encompassing **Scale2x**, **Scale3x**, **Scale2xSFX**, and **Scale3xSFX**, is a group of [pixel-art scaling algorithms](https://en.wikipedia.org/wiki/Pixel-art_scaling_algorithms), intended to rescale images without introducing additional colors and blurring sharp edges.
[**Scale2x** and **Scale3x**](https://github.com/amadvance/scale2x) (aka **AdvMAME2x** and **AdvMAME3x**) algorithms were developed by [Andrea Mazzoleni](https://www.scale2x.it/) for the sole purpose of scaling up small graphics like icons and game sprites while keeping sharp edges, avoiding blurs, and, unlike nearest neighbour interpolation, taking into account diagonal patterns to avoid converting image into square mosaic.
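The Scale2x rule itself is compact: each source pixel E expands to a 2×2 block chosen from its four neighbours B (up), D (left), F (right), and H (down). A minimal pure-Python sketch of the published algorithm (an illustration only, not this package's implementation; pixels may be any comparable values):

```python
def scale2x(img):
    """Scale2x: expand each pixel into a 2x2 block based on its 4-neighbours.

    `img` is a list of rows of pixels; edges are handled by clamping
    (out-of-bounds neighbours fall back to the centre pixel E).
    """
    h, w = len(img), len(img[0])
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            E = img[y][x]
            B = img[y - 1][x] if y > 0 else E
            D = img[y][x - 1] if x > 0 else E
            F = img[y][x + 1] if x < w - 1 else E
            H = img[y + 1][x] if y < h - 1 else E
            if B != H and D != F:
                # Diagonal pattern detected: pull matching neighbours inward.
                e0 = D if D == B else E
                e1 = B if B == F else E
                e2 = D if D == H else E
                e3 = H if H == F else E
            else:
                # No pattern: plain nearest-neighbour duplication.
                e0 = e1 = e2 = e3 = E
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = e0, e1
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = e2, e3
    return out
```

A 2×2 checkerboard demonstrates the smoothing: the corners of each expanded block that face a matching diagonal neighbour take that neighbour's value, turning stair-steps into a smoother diagonal.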
Later on improved versions, called [**Scale2xSFX** and **Scale3xSFX**](https://web.archive.org/web/20160527015550/https://libretro.com/forums/archive/index.php?t-1655.html), were introduced for the same purpose, providing better diagonals rendering and less artifacts on some patterns.
| Fig. 1. *Example of consecutive upscaling with Scale3xSFX.* |
| :---: |
|  |
| *Consecutive upscaling of tiny diagonal object with Scale3xSFX thrice. Source object on the left upscaled 3x3x3=27 times bigger in linear size, i.e. 27x27=729 times bigger by area, meaning that 728 out of 729 resulting pixels are purely artificial; yet the result looks surprisingly clear.* |
Being initially created for tiny game sprite images, these algorithms proved useful for some completely different tasks, *e.g.* scaling up low-resolution text scans before OCR to improve recognition quality, or upscaling old low-quality gravure and line art prints. One of the advantages of the ScaleNx algorithms is that they don't use any empirical chroma mask or anything else specifically adapted for game sprites on screen, and are therefore capable of working efficiently on any image, including images intended for print.
| Fig. 2. *Example of low resolution drawing upscaling with Scale3xSFX.* |
| :---: |
|  |
| *Jagged lines of low resolution original are turned into smoother diagonals.* |
Unfortunately, while specialised Scale2x and Scale3x screen renderers (*e.g.* scalers for DOS emulators) are numerous, it appears to be next to impossible to find a ready-made batch-processing application working with arbitrary images in common graphics formats.
Therefore, due to the severe demand for a general-purpose ScaleNx library, and the apparent lack thereof, the current general-purpose pure Python implementation of the above algorithms was developed. The current implementation uses no imports at all, neither Python standard nor third party, and is therefore quite cross-platform and next to omnicompatible.
Note that the current PyPI-distributed package is intended for developers, and therefore includes the ScaleNx core module only. For an example of a practical Python program utilizing this module, with Tkinter GUI, multiprocessing *etc.*, please visit [ScaleNx at Github](https://github.com/Dnyarri/PixelArtScaling) (PNG support in this program is based on [PyPNG](https://gitlab.com/drj11/pypng), and PPM and PGM support on [PyPNM](https://pypi.org/project/PyPNM/), both of the above being pure Python modules with excellent backward compatibility as well).
## Python compatibility
Current ScaleNx version is maximal backward compatibility build, created specifically for PyPI distribution. While most of the development was performed using Python 3.12, comprehensive testing with other versions was carried out, and current ScaleNx module proven to work with antique **Python 3.4** ([reached end of life 18 Mar 2019](https://devguide.python.org/versions/#full-chart)) under Windows XP 32-bit ([reached extended end of support 8 Apr 2014](https://learn.microsoft.com/en-us/lifecycle/products/windows-xp)).
## Installation
`python -m pip install --upgrade scalenx`.
## Usage
As of version 2026.2.12.34, recommended ScaleNx module usage is:
```python
from scalenx import scaleNx
scaled_image = scaleNx(source_image, n, sfx)
```
where:
- **`source_image`** is source image data as `list[list[list[int]]]`;
- `int` **`n`** value should be either **`2`** or **`3`**, meaning the choice between Scale**2**\* and Scale**3**\* methods;
- `bool` **`sfx`** means whether you choose ScaleNx**SFX** methods rather than classic ScaleNx;
- **`scaled_image`** is resulting image data as `list[list[list[int]]]`.
However, legacy module access (as of version 2024.11.24) still works, and for, say, Scale2x it looks like:
```python
from scalenx import scalenx
scaled_image = scalenx.scale2x(source_image)
```
therefore no changes required for programs written using older (2024-2025) versions of ScaleNx.
## Copyright and redistribution
Current Python implementation was written by [Ilya Razmanov](https://dnyarri.github.io/) and may be freely used, copied and improved. In case of making substantial improvements it's almost obligatory to share your work with the developer and lesser species.
## References
1. [Scale2x and Scale3x algorithms description](https://www.scale2x.it/algorithm) by the inventor, Andrea Mazzoleni.
2. [Scale2xSFX and Scale3xSFX algorithms description](https://web.archive.org/web/20160527015550/https://libretro.com/forums/archive/index.php?t-1655.html) at dead forum article, archived copy.
3. [Pixel-art scaling algorithms review](https://en.wikipedia.org/wiki/Pixel-art_scaling_algorithms) at Wikipedia.
4. [Current ScaleNx implementation main page](https://dnyarri.github.io/scalenx.html) with some explanations and illustration.
5. [ScaleNx source code at Github](https://github.com/Dnyarri/PixelArtScaling/) - current ScaleNx source at Github, containing main program for single and batch image processing, with GUI, multiprocessing *etc.*.
6. [ScaleNx source code for Python 3.4 at Github](https://github.com/Dnyarri/PixelArtScaling/tree/py34) - same as above, but fully compatible with Python 3.4 (both ScaleNx and image formats I/O and main application).
| text/markdown | null | Ilya Razmanov <ilyarazmanov@gmail.com> | null | Ilya Razmanov <ilyarazmanov@gmail.com> | null | Scale2x, Scale2xSFX, Scale3x, Scale3xSFX, bitmap, image, pixel, rescale, resize, AdvMAME2, AdvMAME3, Python | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.15",
"Topic :: Multimedia :: Graphics",
"Topic :: Scientific/Engineering :: Image Processing",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.4 | [] | [] | [] | [] | [] | [] | [] | [
"Changelog, https://github.com/Dnyarri/PixelArtScaling/blob/py34/CHANGELOG.md",
"Homepage, https://dnyarri.github.io/",
"Issues, https://github.com/Dnyarri/PixelArtScaling/issues",
"Source, https://github.com/Dnyarri/PixelArtScaling"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T15:50:26.741511 | scalenx-2026.2.12.34.post1-py3-none-any.whl | 15,463 | c0/49/16d6b615633adddf1a6931f0c795afa765d406c1b2a67f5d220487656b47/scalenx-2026.2.12.34.post1-py3-none-any.whl | py3 | bdist_wheel | null | false | ecd389b44c3360605d507386e30b71b5 | 560c79a6486520e0d3370d1114df2e4b939b9c35d632103275ad034879b4a3af | c04916d6b615633adddf1a6931f0c795afa765d406c1b2a67f5d220487656b47 | Unlicense | [
"LICENSE"
] | 0 |
2.4 | tmg-data | 0.3.9 | TMG data library | TMG Data Library
==================================
The TMG data library provides functionality for interacting with Google Cloud services, enabling more reliable and standardized data pipelines.
- `Client Library Documentation`_
.. _Client Library Documentation: https://tmg-data.readthedocs.io
Quick Start
-----------
Installation
~~~~~~~~~~~~
Install this library in a `virtualenv`_ using pip.
.. _`virtualenv`: https://virtualenv.pypa.io/en/latest/
Supported Python Versions
^^^^^^^^^^^^^^^^^^^^^^^^^
Python >= 3.9
Mac/Linux
^^^^^^^^^
.. code-block:: console
pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install tmg-data
Example Usage
-------------
Transfer from BigQuery to MySQL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python
from tmg.data import transfer
transfer_client = transfer.Client(project='your-project-id')
transfer_client.bq_to_mysql(
connection_string='root:password@host:port/your-database',
bq_table='your-project-id.your-dataset.your-table',
mysql_table='your-database.your-table'
)
| null | TMG Data Platform team | data.platform@telegraph.co.uk | null | null | Apache 2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/telegraph/tmg-data | null | >=3.9 | [] | [] | [] | [
"oauth2client==4.1.3",
"google-api-python-client==2.167.0",
"google-cloud-bigquery==3.31.0",
"google-cloud-storage==2.19.0",
"paramiko==3.5.1",
"Jinja2==3.1.6",
"mysql-connector==2.2.9",
"boto3==1.37.34",
"simple-salesforce==1.12.9",
"parse==1.15.0",
"delegator.py==0.1.1",
"markupsafe==3.0.2",
"pandas==2.2.3",
"pysftp==0.2.9"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.9.7 | 2026-02-20T15:49:37.284225 | tmg_data-0.3.9.tar.gz | 36,611 | 86/fa/9c6739fc911da62105347846570e66156b2a43591dc2d454f7e46f405877/tmg_data-0.3.9.tar.gz | source | sdist | null | false | a797a877687214562f0ad2001b2a3c03 | 99b05a4c4bafd6cd757df0dfdaa7130a9aef34894bc864c732419b8cdc5964a3 | 86fa9c6739fc911da62105347846570e66156b2a43591dc2d454f7e46f405877 | null | [
"LICENSE"
] | 485 |
2.4 | opp-env | 0.35.0.260220 | A tool, that sets up the development environment for OMNeT++ projects | 
# opp_env: Simplifying OMNeT++ Model Installations
`opp_env` is a powerful tool that allows for the easy and automated installation
of OMNeT++ simulation frameworks and models, including dependencies like INET
Framework and OMNeT++ itself. It can install any version of OMNeT++ and INET, as
well as currently selected versions of Veins, SimuLTE, Simu5G and other models.
We are working towards expanding its database with more models, and we are open
to suggestions and contributions.
`opp_env` supports Linux and macOS systems. On Windows 11, `opp_env` can be
run on the Windows Subsystem for Linux (WSL2).
> [!NOTE]
> `opp_env` relies on [Nix](https://nixos.org/), a powerful package manager that
> provides isolation between different versions of dependencies and allows for
> reproducible builds. By leveraging the power of Nix, `opp_env` ensures that each
> installation is consistent and can be easily replicated on different machines.
## Features
`opp_env` provides a number of powerful features that make it a valuable tool for
any researcher or developer working with OMNeT++ simulation frameworks:
- Automated installation of OMNeT++ simulation frameworks and models, including
dependencies like INET Framework and OMNeT++ itself.
- Support for any version of OMNeT++ and INET, as well as select
versions of Veins, SimuLTE, Simu5G and other 3rd party models.
- Reproducible builds thanks to the powerful isolation provided by Nix.
- Easy to use shell command that sets up an environment for working with the
selected simulation framework.
- Customizable configuration options that allow for advanced control over the
installation process.
## Installation
See the [INSTALL](INSTALL.md) page.
## Usage
To install a specific version of a simulation framework
and its dependencies, first create a workspace and initialize it:
mkdir workspace && cd workspace && opp_env init
Then run the following command:
opp_env install <framework-version>
For example, to install Simu5G version 1.3.0, run the following command:
opp_env install simu5g-1.3.0
This will download Simu5G, the matching INET and OMNeT++ packages and compile
them.
> [!TIP]
> To install the latest version of a package, use the `latest` pseudo-version
> e.g. to install the latest version of OMNeT++ use `opp_env install omnetpp-latest`
To open a shell prompt where you can use the recently installed Simu5G model, type:
opp_env shell simu5g-1.3.0
> [!IMPORTANT]
> You cannot use the packages you installed via `opp_env` outside of `opp_env shell` or `opp_env run`.
> [!TIP]
> If you frequently install new versions of OMNeT++ and/or simulation models, it's recommended to
> run `nix store gc` periodically to reclaim disk space occupied by older, unused dependencies.
## Available Packages
To see the list of available packages, type: `opp_env list`. The output from `opp_env list` from October 31, 2024:
omnetpp 6.1.0 6.0.3 6.0.2 6.0.1 6.0.0 5.7.1 5.7.0 5.6.3
5.6.2 5.6.1 5.6.0 5.5.2 5.5.1 5.5.0 5.4.2 5.4.1
5.4.0 5.3.1 5.3.0 5.2.2 5.2.1 5.2.0 5.1.2 5.1.1
5.1.0 5.0.1 5.0.0 4.6.1 4.6.0 4.5.1 4.5.0 4.4.2
4.4.1 4.4.0 4.3.2 4.3.1 4.3.0 4.2.3 4.2.2 4.2.1
4.2.0 4.1.1 4.1.0 4.0.2 4.0.1 3.3.2 3.3.1 git
inet 4.5.4 4.5.2 4.5.1 4.5.0 4.4.1 4.4.0 4.3.9 4.3.8
4.3.7 4.2.10 4.2.9 4.2.8 4.2.7 4.2.6 4.2.5 4.2.4
4.2.3 4.2.2 4.2.1 4.2.0 4.1.2 4.1.1 4.1.0 4.0.0
3.8.5 3.8.3 3.8.2 3.8.1 3.8.0 3.7.1 3.7.0 3.6.8
3.6.7 3.6.6 3.6.5 3.6.4 3.6.3 3.6.2 3.6.1 3.6.0
3.5.x 3.5.0 3.4.0 3.3.0 3.2.4 3.2.3 3.2.2 3.2.1
3.2.0 3.1.x 3.1.1 3.1.0 3.0.x 3.0.0 2.6.x 2.6.0
2.5.x 2.5.0 2.4.x 2.4.0 2.3.x 2.3.0 2.2.x 2.2.0
2.1.x 2.1.0 2.0.x 2.0.0 20100323 20061020 git
afdx 20220904
ansa 3.4.0
artery_allinone 20240807 20230820
can_allinone 0.1.0
castalia 3.3pr16 3.3 3.2
cell 20140729
chaosmanager 20221210
cmm_orbit_mobility_allinone 20220815
core4inet 20240124 221109
crsimulator 20140204
dctrafficgen 20181016
dns 20150911
ecmp_allinone 20230713
fico4omnet 20240124 20210113
flora 1.1.0
gptp 20200311
gradys 0.5
hnocs 20221212
icancloud 1.0
ieee802154standalone 20180310
inet_hnrl 20170217 20100723
inetgpl 1.0
inetmanet3 3.8.2
inetmanet4 4.0.0
libara_allinone 20150402
lora_icn paper
lre_omnet 1.0.1
mixim 2.3
ndnomnet 20200914
nesting 0.9.1
neta_allinone 1.0
obs 20130114
omnet_tdma 1.0.2
opencv2x_artery 1.4.1
opencv2x_veins 1.4.1
opendsme_allinone 20201110
openflow4core 20240124 20231017
opp_env_testproject 0.1
oppbsd 4.0
ops_allinone 20230331
os3 1.0
plexe 3.1.2 3.1.0 3.0
processbus_allinone 20180926
quagga 20090803
quisp 20230807
rease 20130819
rimfading_allinone 20171123
rinasim 20200903
rpl_allinone 6tisch_paper
rspsim 6.1.3 6.1.2
sdn4core 20240124
seapp 20191230
sedencontroller_allinone 20230305
signals_and_gateways 20240124
simcan 1.2
simproctc 2.0.2
simu5g 1.3.0 1.2.3 1.2.2 1.2.1 1.1.0 git
simulte 1.2.0 1.1.0 0.9.1
soa4core 20240124
solarleach 1.01
space_veins_allinone 0.3
stochasticbattery 20170224
streetlightsim 1.0
swim_allinone 20180221
tcp_fit_illinois 20150828
tsch_allinone 6tisch_paper
veins 5.3 5.2 5.1 5.0 4.7.1 4.7 4.6 4.4 4.3 3.0 git
veins_vlc 1.0.20210526 1.0
wifidirect_allinone 3.4
## Modifying opp_env
If you want to modify `opp_env`, see the [DEVELOP](DEVELOP.md) page.
| text/markdown | null | András Varga <andras@omnetpp.org>, Levente Mészáros <levy@omnetpp.org>, Rudolf Hornig <rudi@omnetpp.org> | null | Rudolf Hornig <rudi@omnetpp.org> | null | omnetpp, omnest, simulation, discrete, event, package manager, model | [
"Development Status :: 4 - Beta",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Environment :: Console",
"Programming Language :: C++",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Telecommunications Industry",
"Intended Audience :: Education",
"License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
"Topic :: Education",
"Topic :: Software Development :: Build Tools",
"Topic :: Software Development :: Libraries",
"Topic :: System :: Software Distribution"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://omnetpp.org",
"Documentation, https://github.com/omnetpp/opp_env/blob/main/README.md",
"Changes, https://github.com/omnetpp/opp_env/blob/main/CHANGES.md",
"Repository, https://github.com/omnetpp/opp_env",
"Issues, https://github.com/omnetpp/opp_env/issues",
"Changelog, https://github.com/omnetpp/opp_env/commits/main"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:49:18.128102 | opp_env-0.35.0.260220.tar.gz | 129,085 | 96/1d/3c6c496ac330b86e976b497d56276dbf6be0da9da3e3ca929a81fa4f0da3/opp_env-0.35.0.260220.tar.gz | source | sdist | null | false | 24ba690517639eadc008dd64c3607435 | 31d68fb0c6fdc3d08a5107b28c4e767bf04512d94df7bfcbc1ea79cb73307d6a | 961d3c6c496ac330b86e976b497d56276dbf6be0da9da3e3ca929a81fa4f0da3 | null | [
"LICENSE"
] | 274 |
2.4 | agirails | 2.3.1 | AGIRAILS Python SDK - Agent Commerce Transaction Protocol | # AGIRAILS Python SDK
[](https://pypi.org/project/agirails/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
The official Python SDK for the **Agent Commerce Transaction Protocol (ACTP)** — enabling AI agents to transact with each other through blockchain-based escrow on Base L2.
**Full 1:1 parity with TypeScript SDK v2.5.0.**
## Install
```bash
pip install agirails==2.3.1
```
## Features
- **Adapter Routing** — priority-based adapter selection (Standard, Basic, X402)
- **x402 Payments** — HTTP-based instant payments with relay fee splitting
- **ERC-8004 Identity** — on-chain agent identity resolution and reputation
- **Keystore Security (AIP-13)** — fail-closed private key policy, `ACTP_KEYSTORE_BASE64` for CI/CD
- **AGIRAILS.md Source of Truth** — parse, hash, publish, pull, diff agent configs
- **Smart Wallet (ERC-4337)** — batched transactions with paymaster gas sponsorship
- **Lazy Publish** — mainnet activation deferred to first real transaction
- **Three-tier API** — Basic, Standard, and Advanced levels
- **Mock Runtime** — full local testing without blockchain
- **CLI** — `actp pay`, `publish`, `pull`, `diff`, `deploy:env`, `deploy:check`
- **Async-first** — built on asyncio
- **1,738 tests passing**
## Quick Start
```python
import asyncio
from agirails import ACTPClient
async def main():
client = await ACTPClient.create(mode="mock", requester_address="0x1234...")
# Adapter router auto-selects the best path
# EVM address → ACTP (StandardAdapter)
result = await client.pay({"to": "0xProvider...", "amount": "10.00"})
# HTTP URL → x402 instant payment
result = await client.pay({"to": "https://api.example.com/pay", "amount": "5.00"})
# Agent ID → ERC-8004 resolve → ACTP
result = await client.pay({"to": "12345", "amount": "10.00"})
print(f"Transaction: {result.tx_id}, State: {result.state}")
asyncio.run(main())
```
## Adapter Routing
Priority-based adapter selection matching TypeScript `AdapterRouter`:
| Adapter | Priority | Target | Use Case |
|---------|----------|--------|----------|
| **X402Adapter** | 70 | `https://...` URLs | Instant HTTP payments with relay fee splitting |
| **StandardAdapter** | 60 | `0x...` addresses | Full ACTP lifecycle with escrow |
| **BasicAdapter** | 50 | `0x...` addresses | Simple pay-and-forget (Smart Wallet batched) |
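The priority logic in the table above can be sketched in a few lines. This is an illustrative stand-in, not the SDK's actual `AdapterRouter` implementation: among all adapters whose target pattern matches, the highest priority wins.

```python
# Illustrative sketch of priority-based adapter routing
# (not the SDK's actual AdapterRouter implementation).
def route(target: str) -> str:
    """Return the highest-priority adapter whose pattern matches `target`."""
    adapters = [
        # (name, priority, match predicate)
        ("X402Adapter", 70, lambda t: t.startswith("https://")),
        ("StandardAdapter", 60, lambda t: t.startswith("0x")),
        ("BasicAdapter", 50, lambda t: t.startswith("0x")),
    ]
    matches = [(priority, name) for name, priority, pred in adapters if pred(target)]
    if not matches:
        raise ValueError(f"no adapter for target {target!r}")
    return max(matches)[1]

print(route("https://api.example.com/pay"))  # X402Adapter
print(route("0xProvider"))                   # StandardAdapter (beats BasicAdapter, 60 > 50)
```

Both ACTP adapters match `0x...` addresses, so priority is what selects Standard over Basic by default.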
## Keystore & Deployment Security (AIP-13)
Fail-closed private key policy with network-aware enforcement:
| Network | `ACTP_PRIVATE_KEY` | Behavior |
|---------|-------------------|----------|
| mock | Allowed | Silent |
| testnet (base-sepolia) | Allowed | Warn once |
| mainnet (base-mainnet) | Blocked | Hard fail |
**Resolution order:** `ACTP_PRIVATE_KEY` → `ACTP_KEYSTORE_BASE64` + `ACTP_KEY_PASSWORD` → `.actp/keystore.json` → `None`
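The fail-closed rule and resolution order can be sketched as a first-match-wins chain. This is a conceptual sketch only (return values and helper name are illustrative, not SDK API):

```python
import os

# Conceptual sketch of the AIP-13 resolution order above (not the SDK's code):
# first source that resolves wins, and mainnet hard-fails on a raw private key.
def resolve_key_source(env: dict, network: str):
    if env.get("ACTP_PRIVATE_KEY"):
        if network == "base-mainnet":
            raise RuntimeError("ACTP_PRIVATE_KEY is blocked on mainnet (AIP-13)")
        return "env:ACTP_PRIVATE_KEY"
    if env.get("ACTP_KEYSTORE_BASE64") and env.get("ACTP_KEY_PASSWORD"):
        return "env:ACTP_KEYSTORE_BASE64"
    if os.path.exists(".actp/keystore.json"):
        return "file:.actp/keystore.json"
    return None

print(resolve_key_source({"ACTP_PRIVATE_KEY": "0xdead"}, "mock"))  # env:ACTP_PRIVATE_KEY
```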
```bash
# Generate base64 keystore for CI/CD
actp deploy:env
# Scan repo for exposed secrets
actp deploy:check
```
## AGIRAILS.md Config Management
```bash
actp publish --network base-sepolia # Hash + upload to IPFS + register on-chain
actp pull --network base-sepolia # Fetch config from chain
actp diff --network base-sepolia # Compare local vs on-chain
```
## Transaction Lifecycle
```
INITIATED → QUOTED → COMMITTED → IN_PROGRESS → DELIVERED → SETTLED
    ↘           ↘                                   ↘
  CANCELLED   CANCELLED                       DISPUTED → SETTLED
```
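The lifecycle diagram above can be expressed as a transition table. The branch points below are read off the diagram and are illustrative only (the SDK's full state machine covers all 8 states and transitions):

```python
# Illustrative transition table for the lifecycle diagram above
# (branch points read from the diagram; not the SDK's state machine code).
TRANSITIONS = {
    "INITIATED":   {"QUOTED", "CANCELLED"},
    "QUOTED":      {"COMMITTED", "CANCELLED"},
    "COMMITTED":   {"IN_PROGRESS"},
    "IN_PROGRESS": {"DELIVERED"},
    "DELIVERED":   {"SETTLED", "DISPUTED"},
    "DISPUTED":    {"SETTLED"},
    "SETTLED":     set(),   # terminal
    "CANCELLED":   set(),   # terminal
}

def can_transition(src: str, dst: str) -> bool:
    return dst in TRANSITIONS.get(src, set())
```

For example, `can_transition("DELIVERED", "DISPUTED")` is `True`, while `can_transition("SETTLED", "INITIATED")` is `False` because `SETTLED` is terminal.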
## CLI
```bash
# Payments
actp pay <to> <amount> [--deadline TIME]
actp balance [ADDRESS]
# Transaction management
actp tx list [--state STATE]
actp tx status <tx_id>
actp tx deliver <tx_id>
actp tx settle <tx_id>
# Config sync
actp publish [path]
actp pull [path] [--network NETWORK]
actp diff [path] [--network NETWORK]
# Deployment security
actp deploy:env
actp deploy:check [path] [--fix]
# Mock mode
actp mint <address> <amount>
actp time advance <duration>
```
## SDK Parity
Full 1:1 parity with TypeScript SDK v2.5.0:
| Feature | Python | TypeScript |
|---------|--------|------------|
| Adapter Routing | AdapterRouter + 3 adapters | AdapterRouter + 3 adapters |
| x402 Payments | X402Adapter with relay | X402Adapter with relay |
| ERC-8004 Identity | ERC8004Bridge + ReputationReporter | ERC8004Bridge + ReputationReporter |
| Keystore AIP-13 | Full (30-min TTL cache) | Full (30-min TTL cache) |
| AGIRAILS.md SOT | parse, hash, publish, pull, diff | parse, hash, publish, pull, diff |
| Smart Wallet | ERC-4337 scaffolding | ERC-4337 full |
| Lazy Publish | pending-publish lifecycle | pending-publish lifecycle |
| CLI Commands | pay, publish, pull, diff, deploy:* | pay, publish, pull, diff, deploy:* |
| State Machine | 8 states, all transitions | 8 states, all transitions |
| Cross-SDK Tests | Shared test vectors | Shared test vectors |
## Testing
```bash
pytest # Run all 1,738 tests
pytest -v # Verbose output
pytest tests/test_adapters/ # Adapter tests only
pytest -k "test_pay" # Pattern match
```
## Requirements
- Python 3.9+
- Dependencies: web3, eth-account, pydantic, aiofiles, httpx, typer, rich
## Links
- [PyPI](https://pypi.org/project/agirails/)
- [Documentation](https://docs.agirails.io)
- [GitHub](https://github.com/agirails/sdk-python)
- [Discord](https://discord.gg/nuhCt75qe4)
## License
Apache 2.0 — see [LICENSE](LICENSE) for details.
| text/markdown | null | AGIRAILS Team <developers@agirails.io> | null | null | null | actp, ai-agents, base, blockchain, escrow, ethereum, payments, web3 | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiofiles<25.0.0,>=24.0.0",
"eth-abi<6.0.0,>=5.0.0",
"eth-account<0.14.0,>=0.13.0",
"eth-utils<6.0.0,>=5.0.0",
"httpx<1.0.0,>=0.27.0",
"pydantic<3.0.0,>=2.6.0",
"python-dateutil<3.0.0,>=2.8.0",
"rich<14.0.0,>=13.0.0",
"typer<1.0.0,>=0.12.0",
"typing-extensions<5.0.0,>=4.0.0; python_version < \"3.10\"",
"web3<8.0.0,>=7.0.0",
"black>=24.0.0; extra == \"dev\"",
"hypothesis>=6.100.0; extra == \"dev\"",
"mypy>=1.11.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-benchmark>=4.0.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\"",
"pytest-timeout>=2.3.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.6.0; extra == \"dev\"",
"types-aiofiles>=24.0.0; extra == \"dev\"",
"types-python-dateutil>=2.8.0; extra == \"dev\"",
"mutmut>=2.4.0; extra == \"mutation\"",
"bandit>=1.7.0; extra == \"security\"",
"pip-audit>=2.7.0; extra == \"security\"",
"safety>=3.0.0; extra == \"security\""
] | [] | [] | [] | [
"Homepage, https://agirails.io",
"Documentation, https://docs.agirails.io",
"Repository, https://github.com/agirails/sdk-python",
"Issues, https://github.com/agirails/sdk-python/issues"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T15:49:05.285546 | agirails-2.3.1-py3-none-any.whl | 342,192 | ce/38/0d8b7391e939a4d4ddb8254ae8f20529f2773d5f62c00928524c08ed0399/agirails-2.3.1-py3-none-any.whl | py3 | bdist_wheel | null | false | dbfd162555e4cd205bcf218b860e5f57 | ae162422457b8a0a7ef0b135b10eaced82376d528ae76790d7494c49176e5e77 | ce380d8b7391e939a4d4ddb8254ae8f20529f2773d5f62c00928524c08ed0399 | Apache-2.0 | [
"LICENSE"
] | 208 |
2.4 | qdrant-relevance-feedback | 0.1.0 | Qdrant's relevance feedback model training | # Relevance feedback
A framework for customizing the Relevance Feedback naive scoring formula, introduced in the ["Relevance Feedback in Qdrant"](https://qdrant.tech/articles/relevance-feedback/) article, to your dataset (Qdrant collection), retriever, and feedback model.
As a result, you get the `a`, `b` and `c` formula parameters, which you can [plug into Qdrant's Relevance Feedback Query interface](https://qdrant.tech/documentation/concepts/search-relevance/#relevance-feedback) to increase the relevance of retrieval in your Qdrant collection.
## Usage example
```python
from qdrant_client import QdrantClient
from qdrant_relevance_feedback import RelevanceFeedback
from qdrant_relevance_feedback.feedback import FastembedFeedback
from qdrant_relevance_feedback.retriever import QdrantRetriever
if __name__ == "__main__":
RETRIEVER_VECTOR_NAME = None # your named vector handle in Qdrant's collection or None if it's a default vector
COLLECTION_NAME = "document_collection"
# LIMIT controls the cost-quality tradeoff: the feedback model scores LIMIT documents per query to generate ground truth labels.
LIMIT = 50 # responses per query
client = QdrantClient(
url="https://xyz-example.eu-central.aws.cloud.qdrant.io",
api_key="your-api-key",
cloud_inference=True
)
    retriever = QdrantRetriever("mixedbread-ai/mxbai-embed-large-v1", modality="text", embed_options={"lazy_load": True})  # lazy_load is an example of propagating options: instead of loading the model into memory straight away, it is loaded on first use
feedback = FastembedFeedback("colbert-ir/colbertv2.0", score_options={"lazy_load": True})
relevance_feedback = RelevanceFeedback(
retriever=retriever,
feedback=feedback,
client=client,
collection_name=COLLECTION_NAME,
vector_name=RETRIEVER_VECTOR_NAME,
payload_key="document" # should refer to the raw data, which, after being embedded, is used in Qdrant retrieval. So, set this to the payload field that contains your original data you're searching for.
)
formula_params = relevance_feedback.train(
queries=None, # if you have real queries for training, provide a list here
amount_of_queries=200, # otherwise, you can specify amount of "synthetic queries" - documents randomly sampled from your collection
limit=LIMIT,
context_limit=5, # top responses used for mining context pairs
)
print('formula params are: ', formula_params)
```
> When using `QdrantRetriever`, both local (via Fastembed) and cloud inference are supported. Set `cloud_inference=True` on the client to use cloud inference; set `cloud_inference=False` (or omit it) to run inference locally.
> **Warning:** If your use case doesn’t involve document-to-document semantic similarity search, training on sampled documents ("synthetic queries") alone may completely cancel the effect of relevance feedback scoring on real data.
It’s far more effective to use real queries.
### Redefining the source of metadata (payload)
In `RelevanceFeedback` class above, you're expected to fill in a `payload_key`.
This key should refer to the raw data, which, after being embedded, is used in Qdrant retrieval (with `RETRIEVER_VECTOR_NAME`). This data, in its original form (before it is embedded for dense retrieval), is used for the feedback model ground truth scoring.
If you're storing original data externally to Qdrant's collection, you should override `retrieve_payload` method of `RelevanceFeedback` class.
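As a minimal sketch of what such an override does, the logic below fetches originals from an external store keyed by point ID. `ExternalDocStore` is a hypothetical stand-in for your own storage client, and `retrieve_payload_from_store` is the body you would place in an overridden `retrieve_payload`:

```python
# Hedged sketch: fetch original documents from external storage instead of
# Qdrant payloads. `ExternalDocStore` is a hypothetical stand-in for your
# own storage client (database, object store, ...).
class ExternalDocStore:
    def __init__(self, docs: dict):
        self._docs = docs

    def get(self, point_id) -> str:
        return self._docs[point_id]

def retrieve_payload_from_store(responses, store: ExternalDocStore) -> list:
    # `responses` are ScoredPoint-like objects; only `.id` is used here.
    return [store.get(p.id) for p in responses]

# In practice, this logic would live in a RelevanceFeedback subclass:
#
# class ExternalPayloadRelevanceFeedback(RelevanceFeedback):
#     def retrieve_payload(self, responses):
#         return retrieve_payload_from_store(responses, my_store)

class _FakePoint:  # stand-in for qdrant_client.models.ScoredPoint
    def __init__(self, point_id):
        self.id = point_id

docs = retrieve_payload_from_store(
    [_FakePoint(2), _FakePoint(1)],
    ExternalDocStore({1: "doc one", 2: "doc two"}),
)
print(docs)  # ['doc two', 'doc one']
```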
## Adding your own models
### Retriever
In order to use a custom retriever, you can define your class inherited from `Retriever` and override `embed_query` method.
Here's an example with overriding `Retriever`.
```python
from typing import Any
from qdrant_relevance_feedback.retriever import Retriever
try:
from openai import OpenAI
except ImportError:
OpenAI = None
class OpenAIRetriever(Retriever):
def __init__(self, model_name: str, api_key: str, embed_options: dict[str, Any] | None = None, **kwargs: Any) -> None:
        assert OpenAI is not None, 'OpenAIRetriever requires the `openai` package to be installed'
self._client = OpenAI(api_key=api_key, **kwargs)
self._model_name = model_name
self._embed_options = embed_options or {}
def embed_query(self, query: str) -> list[float]:
return self._client.embeddings.create(
model=self._model_name,
input=query,
**self._embed_options
).data[0].embedding
```
> **Note:** If you plan to use OpenAI embeddings for retrieval, we recommend using Qdrant Cloud Inference via `QdrantRetriever` to optimize latency.
```python
QdrantRetriever(model_name="openai/text-embedding-3-small", embed_options={"openai-api-key": "sk-proj-..."})
```
### Feedback model
Same for the feedback model, but this time you'd need to inherit from `Feedback` class and override `score` method.
```python
from typing import Any
from qdrant_relevance_feedback.feedback import Feedback
try:
import cohere
except ImportError:
cohere = None
class CohereFeedback(Feedback):
def __init__(self, model_name: str, api_key: str, score_options: dict[str, Any] | None = None, **kwargs: Any):
self._api_key = api_key
self._model_name = model_name # E.g. "rerank-v4.0-pro"
self._score_options = score_options or {}
assert cohere is not None, "CohereFeedback requires `cohere` package"
self._client = cohere.ClientV2(api_key=api_key, **kwargs)
def score(self, query: str, responses: list[str]) -> list[float]:
response = self._client.rerank(
model=self._model_name,
query=query,
documents=responses,
top_n=len(responses),
**self._score_options
).results
feedback_model_scores = [
item.relevance_score for item in sorted(response, key=lambda x: x.index)
]
return feedback_model_scores
```
### Gathering all together
Once you have created your classes, use them the same way as the builtin ones: create objects and pass to `RelevanceFeedback`.
```python
from qdrant_client import QdrantClient
from qdrant_relevance_feedback import RelevanceFeedback
if __name__ == "__main__":
RETRIEVER_VECTOR_NAME = None
COLLECTION_NAME = "document_collection"
LIMIT = 50
CONTEXT_LIMIT = 5
client = QdrantClient(...)
retriever = OpenAIRetriever("text-embedding-3-small", api_key="<your openai api key>")
    feedback = CohereFeedback("rerank-v4.0-pro", api_key="<your cohere api key>")
relevance_feedback = RelevanceFeedback(
retriever=retriever,
feedback=feedback,
client=client,
collection_name=COLLECTION_NAME,
vector_name=RETRIEVER_VECTOR_NAME,
payload_key="document"
)
formula_params = relevance_feedback.train(
queries=None,
limit=LIMIT,
context_limit=CONTEXT_LIMIT,
)
print('formula params are: ', formula_params)
```
## Image data with QdrantRetriever and FastembedFeedback
If you have a collection of images, represented by `image_url` and `file_name` payload fields, payload usage is more complicated, as `qdrant_client` and `fastembed` expect images to be paths to files on disk. Here is an example that downloads images to disk and returns their file path.
```python
import os
import shutil
from pathlib import Path
import requests
from qdrant_client import QdrantClient, models
from qdrant_relevance_feedback import RelevanceFeedback
from qdrant_relevance_feedback.feedback import FastembedFeedback
from qdrant_relevance_feedback.retriever import QdrantRetriever
class RelevanceFeedbackImageCache(RelevanceFeedback):
def retrieve_payload(self, responses: list[models.ScoredPoint]):
cache_dir = (
Path(os.getenv("XDG_CACHE_HOME", "~/.cache")).expanduser()
/ "relevance_feedback"
)
cache_dir.mkdir(exist_ok=True, parents=True)
responses_content = []
for p in responses:
if not (cache_dir / p.payload["file_name"]).is_file():
with (cache_dir / p.payload["file_name"]).open("wb") as f:
shutil.copyfileobj(
requests.get(p.payload["image_url"], stream=True).raw, f
)
responses_content.append(str(cache_dir / p.payload["file_name"]))
return responses_content
if __name__ == "__main__":
RETRIEVER_VECTOR_NAME = None
COLLECTION_NAME = "image_collection"
LIMIT = 50 # responses per query
CONTEXT_LIMIT = 5 # top responses used for mining context pairs
client = QdrantClient(
url="https://xyz-example.eu-central.aws.cloud.qdrant.io",
api_key="your-api-key",
cloud_inference=True
)
    retriever = QdrantRetriever("Qdrant/clip-ViT-B-32-vision", modality="image")  # Note that you'll also have to tell `QdrantRetriever` it is dealing with images, not text.
feedback = FastembedFeedback("Qdrant/colpali-v1.3-fp16")
relevance_feedback = RelevanceFeedbackImageCache(
retriever=retriever,
feedback=feedback,
client=client,
collection_name=COLLECTION_NAME,
vector_name=RETRIEVER_VECTOR_NAME,
)
formula_params = relevance_feedback.train(
queries=None, # if you have specific queries for training, provide a list here
amount_of_queries=200, # otherwise, you can specify amount of synthetic queries - documents sampled from your collection
limit=LIMIT,
context_limit=CONTEXT_LIMIT,
)
print('formula params are: ', formula_params)
```
## Evaluation
The `Evaluator` evaluates the trained naive formula on two metrics: **relative gain** based on **abovethreshold@N**, and **Discounted Cumulative Gain (DCG) Win Rate@N**.
> Detailed explanation of these metrics and their meaning can be found in the ["Relevance Feedback in Qdrant"](https://qdrant.tech/articles/relevance-feedback/) article.
```python
from qdrant_relevance_feedback.evaluate import Evaluator
n = 10 # as in metric@n
EVAL_CONTEXT_LIMIT = 3 # top responses used for mining context pairs (match what you'll use in production for your retrieval pipelines)
evaluator = Evaluator(relevance_feedback=relevance_feedback)
# Similar to `relevance_feedback.train`, you can provide your own set of predefined queries by passing `eval_queries=[<queries>]`,
# or use synthetic queries sampled from your collection.
results = evaluator.evaluate_queries(
at_n=n,
formula_params=formula_params,
eval_queries=None,
amount_of_eval_queries=100,
eval_context_limit=EVAL_CONTEXT_LIMIT
)
```
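For intuition, DCG@N discounts each document's relevance score by its rank, so a ranking that puts relevant documents first scores higher. A minimal sketch of the standard DCG definition (the package computes this internally; see the linked article for the exact win-rate definition):

```python
import math

# Minimal DCG@N sketch for intuition (standard definition:
# sum of rel_i / log2(rank_i + 1), ranks starting at 1).
def dcg_at_n(relevances: list, n: int) -> float:
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:n]))

# A ranking that puts high-relevance documents first scores higher:
better = dcg_at_n([3, 2, 1, 0], n=4)
worse = dcg_at_n([0, 1, 2, 3], n=4)
assert better > worse
```

The Win Rate@N then compares, query by query, whether the feedback-boosted ranking achieves a higher DCG than the baseline ranking.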
| text/markdown | null | Qdrant Team <info@qdrant.tech>, Evgeniya Sukhodolskaya <evgeniya.sukhodolskaya@qdrant.tech>, George Panchuk <george.panchuk@qdrant.tech> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=3.0.0; python_version >= \"3.11\"",
"pandas<3.0.0,>=2.3.3; python_version < \"3.11\"",
"qdrant-client>=1.17.0",
"torch>=2.9.1",
"rich>=14.3.2"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:48:52.315982 | qdrant_relevance_feedback-0.1.0-py3-none-any.whl | 21,892 | 1f/bb/5072025c5e1b8a3f144c86c34d789721fda7a3278fba9980f1729f06dc66/qdrant_relevance_feedback-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | a8f13fbbb348611ed9c63e6e2f89053f | 56b824aa3afb1b25f16591a012da20642d01ba11498252e528edf4594f63ce1f | 1fbb5072025c5e1b8a3f144c86c34d789721fda7a3278fba9980f1729f06dc66 | null | [] | 224 |
2.4 | ipeds-wrangler | 0.1.1 | A package for wrangling IPEDS data | # ipeds-wrangler
## What it does
The United States National Center for Education Statistics (NCES) maintains the Integrated Postsecondary Education Data System (IPEDS). Each year, IPEDS collects data from approximately 6,000 institutions, representing more than 15 million students. These data can enable many institutional research projects, such as benchmarking against peer institutions, tracking enrollment trends, and analyzing graduation rates. However, IPEDS data can be challenging to wrangle into actionable insights - especially for Python users.
ipeds-wrangler is a new package that will enable Python users to:
- Webscrape IPEDS databases and convert key tables to CSV files with `download_ipeds_databases`.
- Search IPEDS databases efficiently with `search_ipeds_databases` - coming soon!
- Convert numerical categorical variables into user-friendly text with `convert_ipeds_numeric_categories` - coming soon!
## Get started
Install the package:
```bash
pip install ipeds-wrangler
```
Download IPEDS databases:
```python
from ipeds_wrangler import download_ipeds_databases
# Download the most recent 4 years of IPEDS databases to your Desktop
download_ipeds_databases()
# Or specify a custom directory name
download_ipeds_databases(download_directory="my_ipeds_data")
```
| text/markdown | null | Tracy Reuter <tracy.ellen.reuter@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"requests>=2.31.0",
"beautifulsoup4>=4.12.0",
"platformdirs>=4.0.0",
"access-parser>=0.0.5",
"pandas>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/tracyreuter/ipeds-wrangler"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-20T15:48:13.475379 | ipeds_wrangler-0.1.1.tar.gz | 5,563 | f2/5e/7c1ea0576f4935248a50ca4f56d92f5d271835d1cacd2ec3940d50dead2e/ipeds_wrangler-0.1.1.tar.gz | source | sdist | null | false | bc9e5b107cdcfaef20a2a333a4583e84 | afaf3abf959a84ab509629bc4c570365f265325bd01a0f17bf024f889b323943 | f25e7c1ea0576f4935248a50ca4f56d92f5d271835d1cacd2ec3940d50dead2e | MIT | [
"LICENSE"
] | 204 |
2.4 | apexbase | 1.5.0 | High-performance HTAP embedded database with Rust core and Python API | # ApexBase
**High-performance HTAP embedded database with Rust core and Python API**
ApexBase is an embedded columnar database designed for **Hybrid Transactional/Analytical Processing (HTAP)** workloads. It combines a high-throughput columnar storage engine written in Rust with an ergonomic Python API, delivering analytical query performance that surpasses DuckDB and SQLite on most benchmarks — all in a single `.apex` file with zero external dependencies.
---
## Features
- **HTAP architecture** — V4 Row Group columnar storage with DeltaStore for cell-level updates; fast inserts and fast analytical scans in one engine
- **Multi-database support** — multiple isolated databases in one directory; cross-database queries with standard `db.table` SQL syntax
- **Single-file storage** — custom `.apex` format per table, no server process, no external dependencies
- **Comprehensive SQL** — DDL, DML, JOINs (INNER/LEFT/RIGHT/FULL/CROSS), subqueries (IN/EXISTS/scalar), CTEs (WITH ... AS), UNION/UNION ALL, window functions, EXPLAIN/ANALYZE, multi-statement execution
- **70+ built-in functions** — math (ABS, SQRT, POWER, LOG, trig), string (UPPER, LOWER, SUBSTR, REPLACE, CONCAT, REGEXP_REPLACE, ...), date (YEAR, MONTH, DAY, DATEDIFF, DATE_ADD, ...), conditional (COALESCE, IFNULL, NULLIF, CASE WHEN, GREATEST, LEAST)
- **Aggregation and analytics** — COUNT, SUM, AVG, MIN, MAX, COUNT(DISTINCT), GROUP BY, HAVING, ORDER BY with NULLS FIRST/LAST
- **Window functions** — ROW_NUMBER, RANK, DENSE_RANK, NTILE, PERCENT_RANK, CUME_DIST, LAG, LEAD, FIRST_VALUE, LAST_VALUE, NTH_VALUE, RUNNING_SUM, and windowed SUM/AVG/COUNT/MIN/MAX with PARTITION BY and ORDER BY
- **Transactions** — BEGIN / COMMIT / ROLLBACK with OCC (Optimistic Concurrency Control), SAVEPOINT / ROLLBACK TO / RELEASE, statement-level auto-rollback
- **MVCC** — multi-version concurrency control with snapshot isolation, version store, and garbage collection
- **Indexing** — B-Tree and Hash indexes with CREATE INDEX / DROP INDEX / REINDEX; automatic multi-index AND intersection for compound predicates
- **Full-text search** — built-in NanoFTS integration with fuzzy matching
- **JIT compilation** — Cranelift-based JIT for predicate evaluation and SIMD-vectorized aggregations
- **Zero-copy Python bridge** — Arrow IPC between Rust and Python; direct conversion to Pandas, Polars, and PyArrow
- **Durability levels** — configurable `fast` / `safe` / `max` with WAL support and crash recovery
- **Compact storage** — dictionary encoding for low-cardinality strings, LZ4 and Zstd compression
- **Parquet interop** — COPY TO / COPY FROM Parquet files
- **PostgreSQL wire protocol** — built-in server for DBeaver, psql, DataGrip, pgAdmin, Navicat, and any PostgreSQL-compatible client; two distribution modes (Python CLI or standalone Rust binary)
- **Arrow Flight gRPC server** — high-performance columnar data transfer over HTTP/2; streams Arrow IPC RecordBatch directly, 4–7× faster than PG wire for large result sets; accessible via `pyarrow.flight`, Go arrow, Java arrow, and any Arrow Flight client
- **Cross-platform** — Linux, macOS, and Windows; x86_64 and ARM64; Python 3.9 to 3.13
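To illustrate the dictionary-encoding idea from the feature list, a conceptual sketch (illustration of the technique only, not ApexBase's actual `.apex` on-disk layout): a low-cardinality string column stores each distinct value once, plus a small integer code per row.

```python
# Conceptual dictionary encoding for a low-cardinality string column
# (illustration only; not ApexBase's actual on-disk format).
def dict_encode(column: list):
    dictionary = []   # distinct values, stored once
    index = {}        # value -> code
    codes = []        # one small integer per row
    for value in column:
        if value not in index:
            index[value] = len(dictionary)
            dictionary.append(value)
        codes.append(index[value])
    return dictionary, codes

city = ["Beijing", "Shanghai", "Beijing", "Beijing", "Shanghai"]
dictionary, codes = dict_encode(city)
print(dictionary)  # ['Beijing', 'Shanghai']
print(codes)       # [0, 1, 0, 0, 1]
```

With only a handful of distinct cities, a million-row column collapses to a tiny dictionary plus compact integer codes, which is why the technique pays off for columns like `city` or `category` in the benchmarks below.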
---
## Installation
```bash
pip install apexbase
```
Build from source (requires Rust toolchain):
```bash
maturin develop --release
```
---
## Quick Start
```python
from apexbase import ApexClient
# Open (or create) a database directory
client = ApexClient("./data")
# Create a table
client.create_table("users")
# Store records
client.store({"name": "Alice", "age": 30, "city": "Beijing"})
client.store([
{"name": "Bob", "age": 25, "city": "Shanghai"},
{"name": "Charlie", "age": 35, "city": "Beijing"},
])
# SQL query
results = client.execute("SELECT * FROM users WHERE age > 28 ORDER BY age DESC")
# Convert to DataFrame
df = results.to_pandas()
client.close()
```
---
## Usage Guide
### Database Management
ApexBase supports multiple isolated databases within a single root directory. Each named database lives in its own subdirectory; the default database uses the root directory.
```python
# Switch to a named database (creates it if needed)
client.use_database("analytics")
# Combined: switch database + select/create a table in one call
client.use(database="analytics", table="events")
# List all databases
dbs = client.list_databases() # ["analytics", "default", "hr"]
# Current database
print(client.current_database) # "analytics"
# Cross-database SQL — standard db.table syntax
client.execute("SELECT * FROM default.users")
client.execute("SELECT u.name, e.event FROM default.users u JOIN analytics.events e ON u.id = e.user_id")
client.execute("INSERT INTO analytics.events (name) VALUES ('click')")
client.execute("UPDATE default.users SET age = 31 WHERE name = 'Alice'")
client.execute("DELETE FROM default.users WHERE age < 18")
```
All SQL operations (SELECT, INSERT, UPDATE, DELETE, JOIN, CREATE TABLE, DROP TABLE, ALTER TABLE) support `database.table` qualified names, allowing cross-database queries in a single statement.
### Table Management
Each table is stored as a separate `.apex` file. Tables must be created before use.
```python
# Create with optional schema
client.create_table("orders", schema={
"order_id": "int64",
"product": "string",
"price": "float64",
})
# Switch tables
client.use_table("users")
# List / drop
tables = client.list_tables()
client.drop_table("orders")
```
### Data Ingestion
```python
import pandas as pd
import polars as pl
import pyarrow as pa
# Columnar dict (fastest for bulk data)
client.store({
"name": ["D", "E", "F"],
"age": [22, 32, 42],
})
# From pandas / polars / PyArrow (auto-creates table when table_name given)
client.from_pandas(pd.DataFrame({"name": ["G"], "age": [28]}), table_name="users")
client.from_polars(pl.DataFrame({"name": ["H"], "age": [38]}), table_name="users")
client.from_pyarrow(pa.table({"name": ["I"], "age": [48]}), table_name="users")
```
### SQL
ApexBase supports a broad SQL dialect. Examples:
```python
# DDL
client.execute("CREATE TABLE IF NOT EXISTS products")
client.execute("ALTER TABLE products ADD COLUMN name STRING")
client.execute("DROP TABLE IF EXISTS products")
# DML
client.execute("INSERT INTO users (name, age) VALUES ('Zoe', 29)")
client.execute("UPDATE users SET age = 31 WHERE name = 'Alice'")
client.execute("DELETE FROM users WHERE age < 20")
# SELECT with full clause support
client.execute("""
SELECT city, COUNT(*) AS cnt, AVG(age) AS avg_age
FROM users
WHERE age BETWEEN 20 AND 40
GROUP BY city
HAVING cnt > 1
ORDER BY avg_age DESC
LIMIT 10
""")
# JOINs
client.execute("""
SELECT u.name, o.product
FROM users u
INNER JOIN orders o ON u._id = o.user_id
""")
# Subqueries
client.execute("SELECT * FROM users WHERE age > (SELECT AVG(age) FROM users)")
client.execute("SELECT * FROM users WHERE city IN (SELECT city FROM cities WHERE pop > 1000000)")
# CTEs
client.execute("""
WITH seniors AS (SELECT * FROM users WHERE age >= 30)
SELECT city, COUNT(*) FROM seniors GROUP BY city
""")
# Window functions
client.execute("""
SELECT name, age,
ROW_NUMBER() OVER (ORDER BY age DESC) AS rank,
AVG(age) OVER (PARTITION BY city) AS city_avg
FROM users
""")
# UNION
client.execute("""
SELECT name FROM users WHERE city = 'Beijing'
UNION ALL
SELECT name FROM users WHERE city = 'Shanghai'
""")
# Multi-statement
client.execute("""
INSERT INTO users (name, age) VALUES ('New1', 20);
INSERT INTO users (name, age) VALUES ('New2', 21);
SELECT COUNT(*) FROM users
""")
# INSERT ... ON CONFLICT (upsert)
client.execute("""
INSERT INTO users (name, age) VALUES ('Alice', 31)
ON CONFLICT (name) DO UPDATE SET age = 31
""")
# CREATE TABLE AS
client.execute("CREATE TABLE seniors AS SELECT * FROM users WHERE age >= 30")
# EXPLAIN / EXPLAIN ANALYZE
client.execute("EXPLAIN SELECT * FROM users WHERE age > 25")
# Parquet interop
client.execute("COPY users TO '/tmp/users.parquet'")
client.execute("COPY users FROM '/tmp/users.parquet'")
```
### Transactions
```python
client.execute("BEGIN")
client.execute("INSERT INTO users (name, age) VALUES ('Tx1', 20)")
client.execute("SAVEPOINT sp1")
client.execute("INSERT INTO users (name, age) VALUES ('Tx2', 21)")
client.execute("ROLLBACK TO sp1") # undo Tx2 only
client.execute("COMMIT") # Tx1 persisted
```
Transactions use OCC validation — concurrent writes are detected at commit time.
### Indexes
```python
client.execute("CREATE INDEX idx_age ON users (age)")
client.execute("CREATE UNIQUE INDEX idx_name ON users (name)")
# Queries automatically use indexes when applicable
client.execute("SELECT * FROM users WHERE age = 30") # index scan
client.execute("DROP INDEX idx_age ON users")
client.execute("REINDEX users")
```
### Full-Text Search
ApexBase ships a native full-text search engine (NanoFTS) integrated directly into the SQL executor. FTS is available through **all interfaces** — Python API, PostgreSQL Wire, and Arrow Flight — without any Python-side middleware.
#### SQL interface (recommended)
```python
# 1. Create the FTS index via SQL DDL
client.execute("CREATE FTS INDEX ON articles (title, content)")
# Optional: specify lazy loading and cache size
client.execute("CREATE FTS INDEX ON logs WITH (lazy_load=true, cache_size=50000)")
# 2. Query using MATCH() / FUZZY_MATCH() in WHERE
results = client.execute("SELECT * FROM articles WHERE MATCH('rust programming')")
results = client.execute("SELECT title, content FROM articles WHERE FUZZY_MATCH('pytohn')")
# Combine with other predicates
results = client.execute("""
SELECT * FROM articles
WHERE MATCH('machine learning') AND published_at > '2024-01-01'
ORDER BY _id DESC LIMIT 20
""")
# FTS also works in aggregations
count = client.execute("SELECT COUNT(*) FROM articles WHERE MATCH('deep learning')")
# Manage indexes
client.execute("SHOW FTS INDEXES") # list all FTS-enabled tables
client.execute("ALTER FTS INDEX ON articles DISABLE") # disable, keep files
client.execute("DROP FTS INDEX ON articles") # remove index + delete files
```
#### Python API (alternative)
```python
# Initialize FTS for current table
client.use_table("articles")
client.init_fts(index_fields=["title", "content"])
# Search
ids = client.search_text("database")
fuzzy = client.fuzzy_search_text("databse") # tolerates typos
recs = client.search_and_retrieve("python", limit=10)
top5 = client.search_and_retrieve_top("neural network", n=5)
# Lifecycle
client.get_fts_stats()
client.disable_fts() # suspend without deleting files
client.drop_fts() # remove index + delete files
```
> **Tip:** The SQL interface (`MATCH()` / `FUZZY_MATCH()`) works over PG Wire and Arrow Flight without any extra setup; the Python API methods are Python-process-only.
### Record-Level Operations
```python
record = client.retrieve(0) # by internal _id
records = client.retrieve_many([0, 1, 2])
all_data = client.retrieve_all()
client.replace(0, {"name": "Alice2", "age": 31})
client.delete(0)
client.delete([1, 2, 3])
```
### Column Operations
```python
client.add_column("email", "String")
client.rename_column("email", "email_addr")
client.drop_column("email_addr")
client.get_column_dtype("age") # "Int64"
client.list_fields() # ["name", "age", "city"]
```
### ResultView
Query results are returned as `ResultView` objects with multiple output formats:
```python
results = client.execute("SELECT * FROM users")
df = results.to_pandas() # pandas DataFrame (zero-copy by default)
pl_df = results.to_polars() # polars DataFrame
arrow = results.to_arrow() # PyArrow Table
dicts = results.to_dict() # list of dicts
results.shape # (rows, columns)
results.columns # column names
len(results) # row count
results.first() # first row as dict
results.scalar() # single value (for aggregates)
results.get_ids() # numpy array of _id values
```
### Context Manager
```python
with ApexClient("./data") as client:
client.create_table("tmp")
client.store({"key": "value"})
# Automatically closed on exit
```
---
## Performance
### ApexBase vs SQLite vs DuckDB (1M rows)
Three-way comparison on macOS 26.3, Apple Silicon (10 cores), 32 GB RAM.
Python 3.11.10, ApexBase v1.5.0, SQLite v3.45.3, DuckDB v1.1.3, PyArrow v19.0.0.
Dataset: 1,000,000 rows × 5 columns (name, age, score, city, category).
Average of 5 timed iterations after 2 warmup runs.
| Query | ApexBase | SQLite | DuckDB | vs Best Other |
|-------|----------|--------|--------|---------------|
| Bulk Insert (1M rows) | 273ms | 905ms | 863ms | **3.3x faster** |
| COUNT(\*) | 0.049ms | 8.26ms | 0.512ms | **10x faster** |
| SELECT \* LIMIT 100 [cold] ¹ | 0.113ms | 0.101ms | 0.470ms | 1.1x slower |
| SELECT \* LIMIT 10K [cold] | 0.917ms | 6.53ms | 4.51ms | **4.9x faster** |
| Filter (name = 'user\_5000') | 0.035ms | 38.56ms | 1.58ms | **45x faster** |
| Filter (age BETWEEN 25 AND 35) | 0.026ms | 155ms | 88.32ms | **>3000x faster** |
| GROUP BY city (10 groups) | 0.040ms | 344ms | 2.69ms | **67x faster** |
| GROUP BY + HAVING | 0.026ms | 358ms | 2.99ms | **115x faster** |
| ORDER BY score LIMIT 100 | 0.029ms | 50.29ms | 4.59ms | **158x faster** |
| Aggregation (5 funcs) | 0.034ms | 78.22ms | 1.07ms | **31x faster** |
| Complex (Filter+Group+Order) | 0.028ms | 152ms | 2.34ms | **84x faster** |
| Point Lookup (by \_id) | 0.026ms | 0.039ms | 2.51ms | **1.5x faster** |
| Insert 1K rows | 0.602ms | 1.32ms | 2.44ms | **2.2x faster** |
| SELECT \* → pandas (full scan) | 0.605ms | 1100ms | 162ms | **268x faster** |
| GROUP BY city, category (100 grp) | 0.017ms | 646ms | 4.14ms | **244x faster** |
| LIKE filter (name LIKE 'user\_1%') | 28.18ms | 129ms | 52.55ms | **1.9x faster** |
| Multi-cond (age>30 AND score>50) | 0.033ms | 323ms | 189ms | **>5000x faster** |
| ORDER BY city, score DESC LIMIT 100 | 0.026ms | 65.62ms | 6.00ms | **231x faster** |
| COUNT(DISTINCT city) | 0.026ms | 84.02ms | 3.23ms | **124x faster** |
| IN filter (city IN 3 cities) | 0.029ms | 294ms | 153ms | **>5000x faster** |
| UPDATE rows (age = 25) | 207ms | 36.03ms | 14.35ms | 14.4x slower |
**Summary**: wins 19 of 21 benchmarks. Slower on UPDATE (disk-flush dominated) and cold SELECT \* LIMIT 100¹.
> ¹ **Cold-start note**: ApexBase re-opens from disk on every iteration; SQLite reuses a warm connection. ApexBase true cold-start without GC interference: **0.027ms** — 4× faster than SQLite's warm 0.101ms.
Reproduce: `python benchmarks/bench_vs_sqlite_duckdb.py --rows 1000000`
---
## Server Protocols
ApexBase ships two complementary server protocols for external access:
| Protocol | Port | Best for | Binary / CLI |
|----------|------|----------|--------------|
| **PG Wire** | 5432 | DBeaver, psql, DataGrip, BI tools | `apexbase-server` |
| **Arrow Flight** | 50051 | Python (pyarrow), Go, Java, Spark | `apexbase-flight` |
### Combined Launcher (Both Servers at Once)
```bash
# Start PG Wire + Arrow Flight simultaneously
apexbase-serve --dir /path/to/data
# Custom ports
apexbase-serve --dir /path/to/data --pg-port 5432 --flight-port 50051
# Disable one server
apexbase-serve --dir /path/to/data --no-flight # PG Wire only
apexbase-serve --dir /path/to/data --no-pg # Arrow Flight only
```
| Flag | Default | Description |
|------|---------|-------------|
| `--dir`, `-d` | `.` | Directory containing `.apex` database files |
| `--host` | `127.0.0.1` | Bind host for both servers |
| `--pg-port` | `5432` | PostgreSQL Wire port |
| `--flight-port` | `50051` | Arrow Flight gRPC port |
| `--no-pg` | — | Disable PG Wire server |
| `--no-flight` | — | Disable Arrow Flight server |
---
## PostgreSQL Wire Protocol Server
ApexBase includes a built-in PostgreSQL wire protocol server, allowing you to connect using **DBeaver**, **psql**, **DataGrip**, **pgAdmin**, **Navicat**, and any other tool that supports the PostgreSQL protocol.
### Starting the Server
**Method 1: Python CLI (after `pip install apexbase`)**
```bash
apexbase-server --dir /path/to/data --port 5432
```
Options:
| Flag | Default | Description |
|------|---------|-------------|
| `--dir`, `-d` | `.` | Directory containing `.apex` database files |
| `--host` | `127.0.0.1` | Host to bind to (use `0.0.0.0` for remote access) |
| `--port`, `-p` | `5432` | Port to listen on |
**Method 2: Standalone Rust binary (no Python required)**
```bash
# Build
cargo build --release --bin apexbase-server --no-default-features --features server
# Run
./target/release/apexbase-server --dir /path/to/data --port 5432
```
### Connecting with Database Tools
The server emulates PostgreSQL 15.0, provides a `pg_catalog`- and `information_schema`-compatible metadata layer, and supports both the Simple and Extended Query protocols. No username or password is required (authentication is disabled).
#### DBeaver
1. **New Database Connection** → choose **PostgreSQL**
2. Fill in connection details:
- **Host**: `127.0.0.1` (or the `--host` you specified)
- **Port**: `5432` (or the `--port` you specified)
- **Database**: `apexbase` (any value accepted)
- **Authentication**: select **No Authentication** or leave username/password empty
3. Click **Test Connection** → **Finish**
4. DBeaver will discover tables and columns automatically via `pg_catalog` / `information_schema`
#### psql
```bash
psql -h 127.0.0.1 -p 5432 -d apexbase
```
#### DataGrip / IntelliJ IDEA
1. **Database** tool window → **+** → **Data Source** → **PostgreSQL**
2. Set **Host**, **Port**, **Database** as above; leave **User** and **Password** empty
3. Click **Test Connection** → **OK**
#### pgAdmin
1. **Add New Server** → **General** tab: give it a name
2. **Connection** tab: set **Host** and **Port**; leave **Username** as `postgres` (ignored) and **Password** empty
3. **Save** — tables appear under **Databases > apexbase > Schemas > public > Tables**
#### Navicat for PostgreSQL
1. **Connection** → **PostgreSQL**
2. Set **Host**, **Port**; leave **User** and **Password** blank
3. **Test Connection** → **OK**
#### Other Compatible Tools
Any tool or library that speaks the PostgreSQL wire protocol (libpq) can connect, including:
- **TablePlus**, **Beekeeper Studio**, **HeidiSQL**
- **Python**: `psycopg2` / `asyncpg`
- **Node.js**: `pg` (`node-postgres`)
- **Go**: `pgx` / `lib/pq`
- **Rust**: `tokio-postgres` / `sqlx`
- **Java**: JDBC PostgreSQL driver
Example with `psycopg2`:
```python
import psycopg2
conn = psycopg2.connect(host="127.0.0.1", port=5432, dbname="apexbase")
cur = conn.cursor()
cur.execute("SELECT * FROM users LIMIT 10")
print(cur.fetchall())
conn.close()
```
### Supported SQL over Wire Protocol
The wire protocol server passes SQL directly to the ApexBase query engine. All SQL features listed in [Usage Guide](#usage-guide) are available, including JOINs, CTEs, window functions, transactions, and DDL.
### Metadata Compatibility
The server implements a `pg_catalog` compatibility layer that responds to common catalog queries:
| Catalog / View | Purpose |
|----------------|---------|
| `pg_catalog.pg_namespace` | Schema listing |
| `pg_catalog.pg_database` | Database listing |
| `pg_catalog.pg_class` | Table discovery |
| `pg_catalog.pg_attribute` | Column metadata |
| `pg_catalog.pg_type` | Type information |
| `pg_catalog.pg_settings` | Server settings |
| `information_schema.tables` | Standard table listing |
| `information_schema.columns` | Standard column listing |
| `SET` / `SHOW` statements | Client configuration probes |
This enables GUI tools to browse tables, inspect columns, and display data types without modification.
### Supported Protocol Features
| Feature | Status |
|---------|--------|
| Simple Query Protocol | ✅ Fully supported |
| Extended Query Protocol (prepared statements) | ✅ Supported — schema cached, binary format for psycopg3 |
| Cross-database SQL (`db.table`) | ✅ Supported — `USE dbname` / `\c dbname` to switch context |
| `pg_catalog` / `information_schema` | ✅ Compatible layer for GUI tools |
| All ApexBase SQL (JOINs, CTEs, window functions, DDL) | ✅ Full pass-through to query engine |
### Limitations
- **Authentication** is not implemented — the server accepts all connections regardless of username/password
- **SSL/TLS** is not supported — use an SSH tunnel (`ssh -L 5432:127.0.0.1:5432 user@host`) for remote access
---
## Arrow Flight gRPC Server
Arrow Flight sends Arrow IPC RecordBatch directly over gRPC (HTTP/2), bypassing per-row text serialization entirely. It is **4–7× faster than PG wire for large result sets** (10K+ rows).
| Query | PG Wire | Arrow Flight | Speedup |
|-------|---------|--------------|--------|
| SELECT 10K rows | 5.1ms | 0.7ms | **7× faster** |
| BETWEEN (~33K rows) | 22ms | 5.6ms | **4× faster** |
| Single row / point lookup | ~7.5ms | ~7.9ms | equal |
### Starting the Flight Server
**Python CLI:**
```bash
apexbase-flight --dir /path/to/data --port 50051
```
**Standalone Rust binary:**
```bash
cargo build --release --bin apexbase-flight --no-default-features --features flight
./target/release/apexbase-flight --dir /path/to/data --port 50051
```
### Python Client
```python
import pyarrow.flight as fl
import pandas as pd
import polars as pl
client = fl.connect("grpc://127.0.0.1:50051")
# SELECT — returns Arrow Table
table = client.do_get(fl.Ticket(b"SELECT * FROM users LIMIT 10000")).read_all()
df = table.to_pandas() # zero-copy to pandas
pl_df = pl.from_arrow(table) # zero-copy to polars
# DML / DDL
client.do_action(fl.Action("sql", b"INSERT INTO users (name, age) VALUES ('Alice', 30)"))
client.do_action(fl.Action("sql", b"CREATE TABLE logs (event STRING, ts INT64)"))
# List available actions
for action in client.list_actions():
print(action.type, "—", action.description)
```
### When to Use Arrow Flight vs PG Wire
| Scenario | Recommendation |
|----------|---------------|
| DBeaver / Tableau / BI tools | **PG Wire** (only option) |
| Python + small queries (<100 rows) | **Native API** (fastest, in-process) |
| Python + large queries (10K+ rows, remote) | **Arrow Flight** (4–7× faster than PG wire) |
| Go / Java / Spark workers | **Arrow Flight** (native Arrow support) |
| Local Python (same machine) | **Native API** (`ApexClient.execute()`) |
### PyO3 Python API
Both servers can also be started as blocking Python functions (the GIL is released while they run):
```python
import threading
from apexbase._core import start_pg_server, start_flight_server
t1 = threading.Thread(target=start_pg_server, args=("/data", "0.0.0.0", 5432), daemon=True)
t2 = threading.Thread(target=start_flight_server, args=("/data", "0.0.0.0", 50051), daemon=True)
t1.start()
t2.start()
```
---
## Architecture
```
Python (ApexClient)
|
|-- Arrow IPC / columnar dict --------> ResultView (Pandas / Polars / PyArrow)
|
Rust Core (PyO3 bindings)
|
+-- SQL Parser -----> Query Planner -----> Query Executor
| |
| +-- JIT Compiler (Cranelift) |
| +-- Expression Evaluator (70+ functions) |
| +-- Window Function Engine |
| |
+-- Storage Engine |
| +-- V4 Row Group Format (.apex) |
| +-- DeltaStore (cell-level updates) |
| +-- WAL (write-ahead log) |
| +-- Mmap on-demand reads |
| +-- LZ4 / Zstd compression |
| +-- Dictionary encoding |
| |
+-- Index Manager (B-Tree, Hash) |
+-- TxnManager (OCC + MVCC) |
+-- NanoFTS (full-text search) |
+-- PG Wire Protocol Server (pgwire) |
| +-- DBeaver / psql / DataGrip / pgAdmin |
| +-- pg_catalog & information_schema compat |
| |
+-- Arrow Flight gRPC Server (tonic + HTTP/2) |
+-- pyarrow.flight / Go / Java / Spark |
+-- Arrow IPC — zero serialization overhead |
```
### Storage Format
ApexBase uses a custom V4 Row Group format:
- Each table is a single `.apex` file containing a header, row groups, and a footer
- Row groups store columns contiguously with per-column compression (LZ4 or Zstd)
- Low-cardinality string columns are dictionary-encoded on disk
- Null bitmaps are stored per column per row group
- A DeltaStore file (`.deltastore`) holds cell-level updates that are merged on read and compacted automatically
- WAL records provide crash recovery with idempotent replay
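Dictionary encoding replaces each repeated string with a small integer code plus a shared lookup table, which is why low-cardinality columns compress so well. A simplified sketch of the idea (illustrative only, not the on-disk V4 layout):

```python
def dict_encode(values: list[str]) -> tuple[list[str], list[int]]:
    """Map each distinct string to a code; store codes instead of strings."""
    dictionary: list[str] = []
    index: dict[str, int] = {}
    codes: list[int] = []
    for v in values:
        if v not in index:
            index[v] = len(dictionary)
            dictionary.append(v)
        codes.append(index[v])
    return dictionary, codes

def dict_decode(dictionary: list[str], codes: list[int]) -> list[str]:
    """Rebuild the original column from the dictionary and code stream."""
    return [dictionary[c] for c in codes]

col = ["NYC", "LA", "NYC", "NYC", "LA"]
dictionary, codes = dict_encode(col)
assert dict_decode(dictionary, codes) == col
```

For a column of a million rows with ten distinct cities, the code stream is small integers and the string payload is stored only once per distinct value.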
### Query Execution
- The SQL parser produces an AST that the query planner analyzes for optimization strategy
- Fast paths bypass the full executor for common patterns (COUNT(\*), SELECT \* LIMIT N, point lookups, single-column GROUP BY)
- Arrow RecordBatch is the internal data representation; results flow to Python via Arrow IPC with zero-copy when possible
- Repeated identical read queries are served from an in-process result cache
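The result cache for repeated identical reads can be pictured as a map keyed by the SQL text, invalidated whenever a write runs. A toy sketch of that behavior (not ApexBase's actual cache, which also respects `enable_cache` / `cache_size`):

```python
class QueryCache:
    """Toy read-query cache: identical SELECT strings hit the cache
    until any write statement invalidates it."""

    def __init__(self):
        self._cache = {}
        self.hits = 0

    def execute(self, sql: str, run):
        if sql.lstrip().upper().startswith("SELECT"):
            if sql in self._cache:
                self.hits += 1
                return self._cache[sql]
            result = run(sql)          # cache miss: run and remember
            self._cache[sql] = result
            return result
        self._cache.clear()            # writes invalidate cached reads
        return run(sql)
```

Byte-identical SQL is served from memory; any INSERT/UPDATE/DDL clears the cache so reads never see stale results.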
---
## API Reference
### ApexClient
**Constructor**
```python
ApexClient(
dirpath="./data", # data directory
drop_if_exists=False, # clear existing data on open
batch_size=1000, # batch size for operations
enable_cache=True, # enable query cache
cache_size=10000, # cache capacity
prefer_arrow_format=True, # prefer Arrow format for results
durability="fast", # "fast" | "safe" | "max"
)
```
**Database Management**
| Method | Description |
|--------|-------------|
| `use_database(database='default')` | Switch to a named database (creates it if needed) |
| `use(database='default', table=None)` | Switch database and optionally select/create a table |
| `list_databases()` | List all databases (`'default'` always included) |
| `current_database` | Property: current database name |
**Table Management**
| Method | Description |
|--------|-------------|
| `create_table(name, schema=None)` | Create a new table, optionally with pre-defined schema |
| `drop_table(name)` | Drop a table |
| `use_table(name)` | Switch active table |
| `list_tables()` | List all tables in the current database |
| `current_table` | Property: current table name |
**Data Storage**
| Method | Description |
|--------|-------------|
| `store(data)` | Store data (dict, list, DataFrame, Arrow Table) |
| `from_pandas(df, table_name=None)` | Import from pandas DataFrame |
| `from_polars(df, table_name=None)` | Import from polars DataFrame |
| `from_pyarrow(table, table_name=None)` | Import from PyArrow Table |
**Data Retrieval**
| Method | Description |
|--------|-------------|
| `execute(sql)` | Execute SQL statement(s) |
| `query(where, limit)` | Query with WHERE expression |
| `retrieve(id)` | Get record by \_id |
| `retrieve_many(ids)` | Get multiple records by \_id |
| `retrieve_all()` | Get all records |
| `count_rows(table)` | Count rows in table |
**Data Modification**
| Method | Description |
|--------|-------------|
| `replace(id, data)` | Replace a record |
| `batch_replace({id: data})` | Batch replace records |
| `delete(id)` or `delete([ids])` | Delete record(s) |
**Column Operations**
| Method | Description |
|--------|-------------|
| `add_column(name, type)` | Add a column |
| `drop_column(name)` | Drop a column |
| `rename_column(old, new)` | Rename a column |
| `get_column_dtype(name)` | Get column data type |
| `list_fields()` | List all fields |
**Full-Text Search**
| Method | Description |
|--------|-------------|
| `init_fts(fields, lazy_load, cache_size)` | Initialize FTS |
| `search_text(query)` | Search documents |
| `fuzzy_search_text(query)` | Fuzzy search |
| `search_and_retrieve(query, limit, offset)` | Search and return records |
| `search_and_retrieve_top(query, n)` | Top N results |
| `get_fts_stats()` | FTS statistics |
| `disable_fts()` / `drop_fts()` | Disable or drop FTS |
**Utility**
| Method | Description |
|--------|-------------|
| `flush()` | Flush data to disk |
| `set_auto_flush(rows, bytes)` | Set auto-flush thresholds |
| `get_auto_flush()` | Get auto-flush config |
| `estimate_memory_bytes()` | Estimate memory usage |
| `close()` | Close the client |
### ResultView
| Method / Property | Description |
|-------------------|-------------|
| `to_pandas(zero_copy=True)` | Convert to pandas DataFrame |
| `to_polars()` | Convert to polars DataFrame |
| `to_arrow()` | Convert to PyArrow Table |
| `to_dict()` | Convert to list of dicts |
| `scalar()` | Get single scalar value |
| `first()` | Get first row as dict |
| `get_ids(return_list=False)` | Get record IDs |
| `shape` | (rows, columns) |
| `columns` | Column names |
| `__len__()` | Row count |
| `__iter__()` | Iterate over rows |
| `__getitem__(idx)` | Index access |
---
## Documentation
Additional documentation is available in the `docs/` directory.
## License
Apache-2.0
| text/markdown; charset=UTF-8; variant=GFM | null | Birch Kwok <birchkwok@gmail.com> | null | null | Apache-2.0 | database, embedded-database, rust, high-performance, htap, columnar, analytics, arrow | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Rust",
"Topic :: Database"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyarrow>=10.0.0",
"pandas>=2.0.0",
"polars>=0.15.0",
"pytest>=7.0.0; extra == \"dev\"",
"maturin>=1.5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/BirchKwok/ApexBase",
"Repository, https://github.com/BirchKwok/ApexBase"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:47:56.875768 | apexbase-1.5.0.tar.gz | 671,098 | 06/09/358935c82774291cd484dc28aa7fee220709670ae38e09cf3d952739d154/apexbase-1.5.0.tar.gz | source | sdist | null | false | 8bca436bafe808cb6228484bd91027f8 | 49450f430dca54d8d4fc63a6beb1584d4b05c9887b941e0cc88ad217306a6f54 | 0609358935c82774291cd484dc28aa7fee220709670ae38e09cf3d952739d154 | null | [
"LICENSE"
] | 1,106 |
2.4 | bbdc-cli | 0.5.0 | Typer CLI for Bitbucket Data Center | # bbdc-cli
A small, practical Typer CLI for Bitbucket Data Center / Server REST API.
It reads credentials from environment variables and provides high-signal PR workflows (list, create, comment,
approve, merge, update metadata, manage reviewers/participants, review completion, diffs, etc.) without needing a
full SDK.
## Requirements
- Python 3.9+
- `pipx` recommended for isolated install
## Install
From PyPI:
```bash
pipx install bbdc-cli
# or
pip install bbdc-cli
```
From source (repo root with `pyproject.toml`):
```bash
pipx install .
# If you are iterating locally:
pipx install -e .
# Reinstall after changes (non-editable install):
pipx reinstall bbdc-cli
# Uninstall:
pipx uninstall bbdc-cli
```
## Configuration
The CLI uses two environment variables:
- `BITBUCKET_SERVER`: base REST URL ending in `/rest`
- `BITBUCKET_API_TOKEN`: Bitbucket token (PAT or HTTP access token)
BBVA note:
- Most users will authenticate with Project/Repository HTTP access tokens.
- Those tokens usually work for repository/project workflows, but some user-account endpoints can return `401`.
Example (BBVA-style context path):
```
https://bitbucket.globaldevtools.bbva.com/bitbucket/rest
```
Set them:
```bash
export BITBUCKET_SERVER="https://bitbucket.globaldevtools.bbva.com/bitbucket/rest"
export BITBUCKET_API_TOKEN="YOUR_TOKEN"
```
## Quick check
```bash
bbdc doctor
# machine-readable output
bbdc doctor --json
```
If this succeeds, your base URL + token are working.
Optional (for account profile/settings lookups):
- `BITBUCKET_USER_SLUG`: your Bitbucket user slug
## Common commands
Show help:
```bash
bbdc --help
bbdc account --help
bbdc dashboard --help
bbdc pr --help
```
Get information about your authenticated account:
```bash
# consolidated snapshot (recent repos + SSH keys + GPG keys);
# output is JSON by default (there is no --json flag)
bbdc account me
# if some account endpoints are not permitted with your token,
# account me still returns partial JSON with "partial" + "errors"
# include user profile and settings when your slug is known
bbdc account me --user-slug your.user --include-settings
# raw account endpoint calls
bbdc account recent-repos
bbdc account ssh-keys
bbdc account gpg-keys
bbdc account user --user-slug your.user
bbdc account settings --user-slug your.user
```
Inspect dashboard pull requests:
```bash
# pull requests where you are involved (author/reviewer/participant)
bbdc dashboard pull-requests
# filter by role/state
bbdc dashboard pull-requests --role REVIEWER --state OPEN
# return JSON
bbdc dashboard pull-requests --json
```
List pull requests:
```bash
bbdc pr list --project GL_KAIF_APP-ID-2866825_DSG --repo mercury-viz
```
Get a pull request:
```bash
bbdc pr get -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123
```
Create a pull request:
```bash
bbdc pr create \
--project GL_KAIF_APP-ID-2866825_DSG \
--repo mercury-viz \
--from-branch feature/my-branch \
--to-branch develop \
--title "Add viz panel" \
--description "Implements X"
```
Add reviewers (repeat `--reviewer`):
```bash
bbdc pr create \
-p GL_KAIF_APP-ID-2866825_DSG \
-r mercury-viz \
--from-branch feature/my-branch \
--to-branch develop \
--title "Add viz panel" \
--description "Implements X" \
--reviewer some.username \
--reviewer other.username
```
Approve, decline, merge:
```bash
bbdc pr approve -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123
bbdc pr decline -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 --comment "Not proceeding"
bbdc pr merge -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 --message "LGTM"
```
Update metadata:
```bash
bbdc pr update -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 \
--title "New title" \
--description "Updated description" \
--reviewer some.username
```
Participants / reviewers:
```bash
bbdc pr participants list -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123
bbdc pr participants add -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 --user alice --role REVIEWER
bbdc pr participants status -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 alice --status APPROVED
```
Review completion and comments:
```bash
bbdc pr review complete -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 --comment "Looks good" --status APPROVED
bbdc pr comments add -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 --text "LGTM"
```
## Batch operations
Batch commands live under `bbdc pr batch ...` and read a JSON list of items from `--file` (or `-` for stdin). You can
provide `--project` and `--repo` as defaults for each item.
Example batch approvals (`approve.json`):
```json
[
{"pr_id": 123},
{"pr_id": 456}
]
```
```bash
bbdc pr batch approve -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz -f approve.json
```
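Each batch item may carry its own `project`/`repo`, with the CLI-level `--project`/`--repo` flags acting as per-item fallbacks. A small sketch of how that merge could look (illustrative; `merge_batch_items` is a hypothetical helper, not the CLI's source):

```python
import json

def merge_batch_items(payload: str, default_project=None, default_repo=None):
    """Apply CLI-level --project/--repo defaults to each batch item."""
    items = json.loads(payload)
    return [
        {
            "project": item.get("project", default_project),
            "repo": item.get("repo", default_repo),
            "pr_id": item["pr_id"],
        }
        for item in items
    ]

items = merge_batch_items(
    '[{"pr_id": 123}, {"pr_id": 456, "repo": "other-repo"}]',
    default_project="GL_KAIF_APP-ID-2866825_DSG",
    default_repo="mercury-viz",
)
```

The second item keeps its explicit `repo` while inheriting the default `project`.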
Diffs and commits:
```bash
bbdc pr commits -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123
bbdc pr diff -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123
bbdc pr diff-file -p GL_KAIF_APP-ID-2866825_DSG -r mercury-viz 123 src/main.py
```
See the full command reference in `docs/CLI.md` and usage examples in `docs/examples.md`.
## Codex integration
If teammates will use this through Codex with natural language:
1. Distribute this CLI through PyPI (`bbdc-cli`).
2. Distribute the Codex skill separately (for example, git repo cloned into `$CODEX_HOME/skills/bbdc-cli`).
3. Keep the skill's command inventory synced with this repo's `bbdc_cli/__main__.py`.
Recommended split of responsibilities:
- This repo: command behavior, API semantics, package distribution.
- Skill repo: natural-language intent mapping, execution policy, Codex-specific prompting.
This separation avoids coupling Codex behavior to package release timing.
### Codex runtime execution caveat
For this BBVA infrastructure, assume Codex runtimes cannot execute `bbdc` against Bitbucket (DNS/VPN/network
constraints), even though the same command works on the user's machine.
Typical error:
- `Request failed: HTTPSConnectionPool(... NameResolutionError ... Failed to resolve ...)`
Recommended workflow in Codex:
1. Codex generates exact `bbdc` commands.
2. User runs commands locally in their terminal.
3. User shares output back to Codex for analysis or next steps.
## Troubleshooting
`BITBUCKET_SERVER` must end with `/rest`.
Use the REST base, not the UI URL. For instances hosted under `/bitbucket`, the REST base is often:
- UI: `https://host/bitbucket/...`
- REST: `https://host/bitbucket/rest`
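A quick pre-flight check for this base-URL pitfall, using the environment variable documented above (a sketch; `check_server_url` is a hypothetical helper, not part of the CLI):

```python
import os

def check_server_url(url: str) -> list[str]:
    """Return a list of likely misconfigurations for BITBUCKET_SERVER."""
    problems = []
    if not url.startswith("https://"):
        problems.append("use an https:// URL")
    if not url.rstrip("/").endswith("/rest"):
        problems.append("base URL must end with /rest (REST base, not the UI URL)")
    return problems

url = os.environ.get("BITBUCKET_SERVER", "")
for problem in check_server_url(url):
    print("warning:", problem)
```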
Unauthorized / 401 / 403:
- Token missing or incorrect
- Token lacks required permissions for that project/repo
- Your Bitbucket instance may require a different auth scheme (rare if PAT is enabled)
- In BBVA, Project/Repository HTTP access tokens may return `401` on user-account endpoints
(`account ssh-keys`, `account gpg-keys`, `account user`, `account settings`)
- `account me` now returns partial results when some account endpoints are unauthorized; inspect `errors` in output
404 Not Found:
Usually one of:
- Wrong `BITBUCKET_SERVER` base path (`/rest` vs `/bitbucket/rest`)
- Wrong `--project` key or `--repo` slug
- PR id does not exist in that repo
## Development
Run without installing:
```bash
python -m bbdc_cli --help
python -m bbdc_cli doctor
```
## License
Mercury - BBVA
| text/markdown | null | null | null | null | null | bitbucket, cli, typer, data-center, server | [
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"typer>=0.9",
"requests>=2.31",
"build>=1.2.1; extra == \"build\"",
"twine>=5.0.0; extra == \"build\""
] | [] | [] | [] | [
"Repository, https://github.com/marcosgalleterobbva/bb-cli"
] | twine/6.2.0 CPython/3.10.16 | 2026-02-20T15:47:09.750025 | bbdc_cli-0.5.0.tar.gz | 23,795 | a7/c2/4f4fc2a40685087b0d09a74feb3ff46e1e16c06c8a714e4c3c4cc9934334/bbdc_cli-0.5.0.tar.gz | source | sdist | null | false | 59cc3e5e4aa86c954d6d1d7ba2854ff7 | 46afd1f39eaf2fac9da96642ec46af40da329b54da7fa3e40f1748cd811ed471 | a7c24f4fc2a40685087b0d09a74feb3ff46e1e16c06c8a714e4c3c4cc9934334 | null | [] | 205 |
2.4 | llm-pathway-curator | 0.1.0.post1 | Transform enrichment outputs into verifiable, auditable pathway claims with calibrated abstention. | # LLM-PathwayCurator
<p align="left">
<img src="https://raw.githubusercontent.com/kenflab/LLM-PathwayCurator/main/docs/assets/LLM-PathwayCurator_logo.png" width="90" alt="LLM-PathwayCurator"
style="vertical-align: middle; margin-right: 10px;">
<span style="font-size: 28px; font-weight: 700; vertical-align: middle;">
Enrichment interpretations → audited, decision-grade pathway claims.
</span>
</p>
[](https://llm-pathwaycurator.readthedocs.io/)
[](https://doi.org/10.64898/2026.02.18.706381)
[](https://doi.org/10.5281/zenodo.18625777)
[](https://scicrunch.org/resolver/SCR_027964)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
- **Docs:** [https://llm-pathwaycurator.readthedocs.io/](https://llm-pathwaycurator.readthedocs.io/)
- **Paper reproducibility (canonical):** [`paper/`](https://github.com/kenflab/LLM-PathwayCurator/tree/main/paper) (see [`paper/README.md`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/paper/README.md); panel map in [`paper/FIGURE_MAP.csv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/paper/FIGURE_MAP.csv))
- **Cite:** bioRxiv preprint (DOI: [10.64898/2026.02.18.706381](https://doi.org/10.64898/2026.02.18.706381)).
---
## 🚀 What this is
**LLM-PathwayCurator is an interpretation quality-assurance (QA) layer for enrichment analysis.**
It **does not** introduce a new enrichment statistic. Instead, it turns EA outputs into **auditable decision objects**:
- **Input:** enrichment term lists from ORA (e.g., Metascape) or rank-based enrichment (e.g., fgsea, an implementation of the GSEA method)
- **Output:** **typed, evidence-linked claims** + **PASS/ABSTAIN/FAIL** decisions + **reason-coded audit logs**
- **Promise:** we **abstain** when claims are **unstable**, **under-supported**, **contradictory**, or **context-nonspecific**
> **Selective prediction for pathway interpretation:** calibrated abstention is a feature, not a failure.
<p align="center">
<img src="https://raw.githubusercontent.com/kenflab/LLM-PathwayCurator/main/docs/assets/LLM-PathwayCurator_Fig1_bioRxiv_2026.png" width="85%"
alt="LLM-PathwayCurator workflow: EvidenceTable → modules → claims → audits">
</p>
<p align="center">
<em style="max-width: 600px; display: inline-block; line-height: 1.6;">
Fig. 1a. Overview of LLM-PathwayCurator workflow:<br>
<strong>EvidenceTable</strong> → <strong>modules</strong> → <strong>claims</strong> → <strong>audits</strong>
(<a href="https://doi.org/10.64898/2026.02.18.706381">bioRxiv preprint</a>)
</em>
</p>
---
## 🧭 Why this is different (and why it matters)
Enrichment tools return ranked term lists. In practice, interpretation breaks because:
1) **Representative terms are ambiguous** under study context
2) **Gene support is opaque**, enabling cherry-picking
3) **Related terms share / bridge evidence** in non-obvious ways
4) There is **no mechanical stop condition** for fragile narratives
**LLM-PathwayCurator replaces narrative endorsement with audit-gated decisions.**
We transform ranked terms into **machine-auditable claims** by enforcing:
- **Evidence-linked constraints:** claims must resolve to valid term/module identifiers and supporting-gene evidence
- **Stability audits:** supporting-gene perturbations yield stability proxies (operating point: **τ**)
- **Context validity stress tests:** context swap reveals context dependence without external knowledge
- **Contradiction checks:** internally inconsistent claims fail mechanically
- **Reason-coded outcomes:** every decision is explainable by a finite audit code set
---
## 🔍 What this is not
- Not an enrichment method; it **audits** enrichment outputs.
- Not a free-text summarizer; **claims are schema-bounded** (typed JSON; no narrative prose as “evidence”).
- Not a biological truth oracle; it checks **internal consistency and evidence integrity**, not mechanistic truth.
---
## 🧩 Core pipeline (A → B → C)
**A) Stability distillation (evidence hygiene)**
Perturb supporting genes (seeded) to compute stability proxies (e.g., LOO/jackknife-like survival scores).
Output: [`distilled.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/distilled.tsv)
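The stability-proxy idea can be illustrated with a toy leave-one-out score: drop each supporting hit gene in turn and average the fraction of support retained, so terms propped up by only one or two genes score as fragile. This is a conceptual sketch, not the package's exact statistic:

```python
def loo_stability(term_genes: set, hit_genes: set) -> float:
    """Mean fraction of a term's supporting hits retained when each
    hit gene is left out in turn; small support sets score low."""
    hits = term_genes & hit_genes
    if not hits:
        return 0.0
    scores = []
    for g in hits:
        retained = (hits - {g}) & term_genes
        scores.append(len(retained) / len(hits))
    return sum(scores) / len(scores)

# a term with four supporting hits retains 3/4 under every perturbation
print(loo_stability({"TP53", "BAX", "CASP3", "CASP9"},
                    {"TP53", "BAX", "CASP3", "CASP9", "MYC"}))  # 0.75
```

A term supported by a single gene scores 0.0 here, matching the intuition that such claims should not survive evidence hygiene.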
**B) Evidence factorization (modules)**
Factorize the term–gene bipartite graph into **evidence modules** that preserve shared vs distinct support.
Outputs: [`modules.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/modules.tsv), [`term_modules.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/term_modules.tsv), [`term_gene_edges.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/term_gene_edges.tsv)
**C) Claims → audit → report**
- **C1 (proposal-only):** deterministic baseline or optional LLM proposes **typed claims** with resolvable evidence links
- **C2 (audit/decider):** mechanical rules assign **PASS/ABSTAIN/FAIL** with precedence (FAIL > ABSTAIN > PASS)
- **C3 (report):** decision-grade report + audit log ([`audit_log.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/audit_log.tsv)) + provenance
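The C2 decision with precedence FAIL > ABSTAIN > PASS can be sketched as a pure function over audit outcomes. The reason-code names below (`FAIL_CONTRADICTION`, `ABSTAIN_UNSTABLE`, ...) are hypothetical placeholders; the real audit code set is richer:

```python
def decide(audit_flags: dict) -> str:
    """Combine mechanical audit outcomes with precedence
    FAIL > ABSTAIN > PASS (hypothetical reason-code prefixes)."""
    triggered = [code for code, bad in audit_flags.items() if bad]
    if any(code.startswith("FAIL_") for code in triggered):
        return "FAIL"
    if any(code.startswith("ABSTAIN_") for code in triggered):
        return "ABSTAIN"
    return "PASS"
```

A claim that is both contradictory and unstable fails (never abstains), and a claim passes only when no audit code is triggered at all.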
---
## ⚡ Quick start (library entrypoint)
```bash
llm-pathway-curator run \
--sample-card examples/demo/sample_card.json \
--evidence-table examples/demo/evidence_table.tsv \
--out out/demo/
```
### Key outputs (stable contract)
* [`audit_log.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/audit_log.tsv) — PASS/ABSTAIN/FAIL + reason codes (mechanical)
* [`report.jsonl`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/report.jsonl), [`report.md`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/report.md) — decision objects (evidence-linked)
* [`claims.proposed.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/claims.proposed.tsv) — proposed candidates (proposal-only; auditable)
* [`distilled.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/distilled.tsv) — stability proxies / evidence hygiene outputs
* [`modules.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/modules.tsv), [`term_modules.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/term_modules.tsv), [`term_gene_edges.tsv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/term_gene_edges.tsv) — evidence structure
* [`run_meta.json`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/run_meta.json) (+ optional `manifest.json`) — pinned params + provenance
---
## 📊 Rank & visualize ranked terms (`rank` / `plot-ranked`)
LLM-PathwayCurator includes two small post-processing commands for **ranking** and **publication-ready visualization**
of ranked terms/modules:
- [`llm-pathway-curator rank`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/src/llm_pathway_curator/ranked.py) — produces a **ranked table** (`claims_ranked.tsv`) for downstream plots and summaries.
- [`llm-pathway-curator plot-ranked`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/src/llm_pathway_curator/viz_ranked.py) — renders ranked terms/modules as either:
- **bars** (Metascape-like horizontal bars), or
- **packed circles** (module-level circle packing with term circles inside).
### A) Rank (produce `claims_ranked.tsv`)
Use `rank` to generate a deterministic ranked table from a run output directory.
```bash
llm-pathway-curator rank --help
# Typical workflow: point rank to a run directory and write claims_ranked.tsv
# (See --help for the exact flags supported by your installed version.)
```
### B) Plot (bars or packed circles)
`plot-ranked` auto-detects `claims_ranked.tsv` (recommended) or falls back to `audit_log.tsv`
under `--run-dir`.
> Packed circles require an extra dependency:
> `python -m pip install circlify`
#### Bars (Metascape-like)
```bash
llm-pathway-curator plot-ranked \
--mode bars \
--run-dir out/demo \
--out-png out/demo/plots/ranked_bars.png \
--decision PASS \
--group-by-module \
--left-strip \
--strip-labels \
--bar-color-mode module
```
#### Packed circles (modules → terms)
```bash
llm-pathway-curator plot-ranked \
--mode packed \
--run-dir out/demo \
--out-png out/demo/plots/ranked_packed.png \
--decision PASS \
--term-color-mode module
```
#### Packed circles (direction shading)
```bash
llm-pathway-curator plot-ranked \
--mode packed \
--run-dir out/demo \
--out-png out/demo/plots/ranked_packed.direction.png \
--decision PASS \
--term-color-mode direction
```
### Consistent module labels/colors across plots
`plot-ranked` assigns a single module display rank (**M01, M02, ...**) and a stable module color per `module_id`,
so **bars** and **packed circles** can be placed side-by-side without label/color drift.
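A stable per-`module_id` color can be obtained by hashing the identifier, so the same module gets the same color regardless of plot order. This is an illustrative sketch, not the package's actual assignment logic:

```python
import hashlib

# Illustrative palette; the package's real colors may differ
PALETTE = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd"]

def module_color(module_id: str) -> str:
    """Deterministic color per module_id, independent of plot or run order."""
    h = int(hashlib.sha256(module_id.encode()).hexdigest(), 16)
    return PALETTE[h % len(PALETTE)]

print(module_color("M01"))
```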
---
## ⚖️ Inputs (contracts)
### EvidenceTable (minimum required columns)
Each row is one enriched term.
Required columns:
* `term_id`, `term_name`, `source`
* `stat`, `qval`, `direction`
* `evidence_genes` (supporting genes; TSV uses `;` join)
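The column contract above can be validated in a few lines. This is a hypothetical sketch (the `parse_row` helper and the sample values are illustrative, not the library's API):

```python
# Required columns per the EvidenceTable contract above
REQUIRED = ["term_id", "term_name", "source", "stat", "qval", "direction", "evidence_genes"]

def parse_row(row: dict) -> dict:
    missing = [c for c in REQUIRED if c not in row]
    if missing:
        raise ValueError(f"missing required columns: {missing}")
    out = dict(row)
    # TSV uses ';' to join supporting genes
    out["evidence_genes"] = [g for g in row["evidence_genes"].split(";") if g]
    return out

row = {
    "term_id": "GO:0006915", "term_name": "apoptotic process", "source": "GO",
    "stat": "3.2", "qval": "0.01", "direction": "up",
    "evidence_genes": "TP53;BAX;CASP3",
}
print(parse_row(row)["evidence_genes"])  # ['TP53', 'BAX', 'CASP3']
```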
### Sample Card (study context)
Structured context record used for proposal and context gating, e.g.:
* `condition/disease`, `tissue`, `perturbation`, `comparison`
Adapters for common tools live under [`src/llm_pathway_curator/adapters/`](https://github.com/kenflab/LLM-PathwayCurator/tree/main/src/llm_pathway_curator/adapters).
---
## 🔧 Adapters (Input → EvidenceTable)
Adapters are intentionally conservative:
* preserve **evidence identity** (term × genes)
* avoid destructive parsing
* keep TSV **round-trips stable** (contract drift is treated as a bug)
See: [`src/llm_pathway_curator/adapters/README.md`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/src/llm_pathway_curator/adapters/README.md)
---
## 🛡️ Decisions: PASS / ABSTAIN / FAIL
LLM-PathwayCurator assigns decisions by **mechanical audit gates**:
* **FAIL**: auditable violations (evidence-link drift, schema violations, contradictions, forbidden fields, etc.)
* **ABSTAIN**: non-specific, under-supported, or unstable under perturbations / stress tests
* **PASS**: survives all enabled gates at the chosen operating point (**τ**)
**Important:** the LLM (if enabled) never decides acceptance.
It may propose candidates; **the audit suite is the decider**.
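The precedence rule (FAIL > ABSTAIN > PASS) can be sketched as follows; `decide` is a hypothetical helper for illustration, not the package's audit code:

```python
# Highest-precedence outcome among all enabled gates wins
PRECEDENCE = ["FAIL", "ABSTAIN", "PASS"]

def decide(gate_results: list[str]) -> str:
    for outcome in PRECEDENCE:
        if outcome in gate_results:
            return outcome
    return "PASS"  # no gates enabled

print(decide(["PASS", "ABSTAIN", "PASS"]))  # ABSTAIN
print(decide(["PASS", "FAIL", "ABSTAIN"]))  # FAIL
```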
---
## 🧪 Built-in stress tests (counterfactuals without external knowledge)
* **Context swap**: shuffle study context (e.g., BRCA → LUAD) to test context dependence
* **Evidence dropout**: randomly remove supporting genes (seeded; min_keep enforced)
* **Contradiction injection** (optional): introduce internally contradictory candidates to test FAIL gates
These are specification-driven perturbations intended to validate that the pipeline
**abstains for the right reasons**, with **stress-specific reason codes**.
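Under the stated design (seeded, `min_keep` enforced), an evidence-dropout perturbation might look like this minimal sketch; the function name and defaults are illustrative, not the package's implementation:

```python
import random

def evidence_dropout(genes, frac=0.3, min_keep=2, seed=0):
    """Seeded perturbation: drop a fraction of supporting genes,
    never keeping fewer than min_keep."""
    rng = random.Random(seed)
    n_keep = max(min_keep, round(len(genes) * (1 - frac)))
    return sorted(rng.sample(genes, n_keep))

genes = ["TP53", "BAX", "CASP3", "BCL2", "FAS"]
# Same seed -> same surviving gene set, so the stress test is reproducible
print(evidence_dropout(genes, frac=0.4, seed=42))
```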
---
## ♻️ Reproducibility by default
LLM-PathwayCurator is deterministic by default:
* fixed seeds (CLI + library defaults)
* pinned parsing + hashing utilities
* stable output schemas and reason codes
* run metadata persisted to [`run_meta.json`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/run_meta.json) (and runner-level `manifest.json` when used)
Paper-side runners (e.g., [`paper/scripts/fig2_run_pipeline.py`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/paper/scripts/fig2_run_pipeline.py)) **orchestrate** reproducible sweeps
and do not implement scientific logic; they call the library entrypoint ([`llm_pathway_curator.pipeline.run_pipeline`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/src/llm_pathway_curator/pipeline.py)).
---
## 📦 Installation
[](https://pypi.org/project/llm-pathway-curator/)
[](https://pypi.org/project/llm-pathway-curator/)
[](https://github.com/kenflab/LLM-PathwayCurator/pkgs/container/llm-pathway-curator)
[](https://jupyter.org/)
### Option A: PyPI (recommended)
```bash
pip install llm-pathway-curator
```
(See PyPI project page: [https://pypi.org/project/llm-pathway-curator/](https://pypi.org/project/llm-pathway-curator/))
### Option B: From source (development)
```bash
git clone https://github.com/kenflab/LLM-PathwayCurator.git
cd LLM-PathwayCurator
pip install -e .
```
---
## 🐳 Docker (recommended for reproducibility)
We provide an official Docker environment (Python + R + Jupyter), sufficient to run LLM-PathwayCurator and most paper figure generation.
Optionally includes **Ollama** for local LLM annotation (no cloud API key required).
- #### Option A: Prebuilt image (recommended)
Use the published image from GitHub Container Registry (GHCR).
```bash
docker pull ghcr.io/kenflab/llm-pathway-curator:official
```
Run Jupyter:
```bash
docker run --rm -it \
-p 8888:8888 \
-v "$PWD":/work \
-e GEMINI_API_KEY \
-e OPENAI_API_KEY \
ghcr.io/kenflab/llm-pathway-curator:official
```
Open Jupyter:
[http://localhost:8888](http://localhost:8888) <br>
(Use the token printed in the container logs.)
<br>
Notes:
> For manuscript reproducibility, we also provide versioned tags (e.g., `:0.1.0`). Prefer a version tag when matching a paper release.
- #### Option B: Build locally (development)
- ##### Option B-1: Build locally with Compose (recommended for dev)
```bash
# from the repo root
docker compose -f docker/docker-compose.yml build
docker compose -f docker/docker-compose.yml up
```
**B-1.1) Open Jupyter**
- [http://localhost:8888](http://localhost:8888)
Workspace mount: `/work`
**B-1.2) If prompted for "Password or token"**
- Get the tokenized URL from container logs:
```bash
docker compose -f docker/docker-compose.yml logs -f llm-pathway-curator
```
- Then either:
- open the printed URL (contains `?token=...`) in your browser, or
- paste the token value into the login prompt.
- ##### Option B-2: Build locally without Compose (alternative)
```bash
# from the repo root
docker build -f docker/Dockerfile -t llm-pathway-curator:official .
```
**B-2.1) Run Jupyter**
```bash
docker run --rm -it \
-p 8888:8888 \
-v "$PWD":/work \
-e GEMINI_API_KEY \
-e OPENAI_API_KEY \
llm-pathway-curator:official
```
**B-2.2) Open Jupyter**
- [http://localhost:8888](http://localhost:8888)
Workspace mount: `/work`
---
## 🖥️ Apptainer / Singularity (HPC)
- #### Option A: Prebuilt image (recommended)
Use the published image from GitHub Container Registry (GHCR).
```bash
apptainer build llm-pathway-curator.sif docker://ghcr.io/kenflab/llm-pathway-curator:official
```
- #### Option B: Build a .sif from the Docker image (development)
```bash
docker compose -f docker/docker-compose.yml build
apptainer build llm-pathway-curator.sif docker-daemon://llm-pathway-curator:official
```
Run Jupyter (either image):
```bash
apptainer exec --cleanenv \
--bind "$PWD":/work \
llm-pathway-curator.sif \
  bash -lc 'jupyter lab --ip=0.0.0.0 --port=8888 --no-browser'
```
---
## 🤖 LLM usage (proposal-only; optional)
If enabled, the LLM is confined to proposal steps and must emit **schema-bounded JSON**
with **resolvable EvidenceTable links**.
Backends (example):
* Ollama: `LLMPATH_OLLAMA_HOST`, `LLMPATH_OLLAMA_MODEL`
* Gemini: `GEMINI_API_KEY`
* OpenAI: `OPENAI_API_KEY`
Typical environment:
```bash
export LLMPATH_BACKEND="ollama" # ollama|gemini|openai
```
Deterministic settings are used by default (e.g., temperature=0), and runs persist
prompt/raw/meta artifacts alongside [`run_meta.json`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/examples/demo/expected/run_meta.json).
---
## 📄 Manuscript reproduction
[`paper/`](https://github.com/kenflab/LLM-PathwayCurator/tree/main/paper) contains manuscript-facing scripts, Source Data exports, and frozen/derived artifacts (when redistributable).
* [`paper/README.md`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/paper/README.md) — how to reproduce figures
* [`paper/FIGURE_MAP.csv`](https://github.com/kenflab/LLM-PathwayCurator/blob/main/paper/FIGURE_MAP.csv) — canonical mapping: panel ↔ inputs ↔ scripts ↔ outputs
---
## 🧾 Citation
If you use LLM-PathwayCurator, please cite:
- bioRxiv preprint (doi: [10.64898/2026.02.18.706381](https://doi.org/10.64898/2026.02.18.706381))
- Zenodo archive (v0.1.0): [10.5281/zenodo.18625777](https://doi.org/10.5281/zenodo.18625777)
- GitHub release tag: [v0.1.0](https://github.com/kenflab/LLM-PathwayCurator/releases/tag/v0.1.0)
- Software RRID: [RRID:SCR_027964](https://scicrunch.org/resolver/SCR_027964)
---
| text/markdown | Ken Furudate | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pandas>=2.0",
"numpy>=1.24",
"pydantic>=2.0",
"tabulate>=0.9",
"matplotlib>=3.8",
"circlify>=0.15.1",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"openpyxl>=3.1; extra == \"dev\"",
"pyyaml>=6.0; extra == \"bench\"",
"tqdm>=4.66; extra == \"bench\"",
"synapseclient>=2.0; extra == \"bench\"",
"openai>=1.0; extra == \"llm\"",
"google-genai; extra == \"llm\"",
"google-generativeai; extra == \"llm\"",
"pytest>=8.0; extra == \"all\"",
"ruff>=0.4; extra == \"all\"",
"openpyxl>=3.1; extra == \"all\"",
"pyyaml>=6.0; extra == \"all\"",
"tqdm>=4.66; extra == \"all\"",
"synapseclient>=2.0; extra == \"all\"",
"openai>=1.0; extra == \"all\"",
"google-genai; extra == \"all\"",
"google-generativeai; extra == \"all\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:46:59.359516 | llm_pathway_curator-0.1.0.post1.tar.gz | 1,054,128 | d7/d1/f3b3303da9efa8d5eea7432b08e9c2da702c33d21e4f4b93db297cb6c5b8/llm_pathway_curator-0.1.0.post1.tar.gz | source | sdist | null | false | 60e322f16a919e1f28c5f2ea9e3b48e0 | a7d42840befd1e2e7fa12217fe54d2b4fe8f9514a8c6216786e0694f88f2d3a2 | d7d1f3b3303da9efa8d5eea7432b08e9c2da702c33d21e4f4b93db297cb6c5b8 | MIT | [
"LICENSE"
] | 197 |
2.3 | pytest-pvcr | 0.1.0 | PyTest Process VCR | # pytest-pvcr
A pytest plugin that records and replays commands executed with `subprocess.run()`.
This plugin was inspired by VCR.py.
## Installation
This project can be installed via pip:
```
pip install pytest-pvcr
```
## Usage
```python
import subprocess
import pytest
@pytest.mark.pvcr()
def test_command():
# subprocess.run is patched at runtime
ret = subprocess.run(["ls", "/tmp"])
    assert ret.returncode == 0
@pytest.mark.pvcr(wait=False)
def test_long_command():
    # Very long command, but since wait=False, PVCR does not wait for its completion
    ret = subprocess.run(["sleep", "1000"])
    assert ret.returncode == 0
```
Run your tests:
```shell
pytest --pvcr-record-mode=new test_commands.py
```
### Record modes
There are three record modes:
```shell
# Only record new commands not previously recorded
pytest --pvcr-record-mode=new test_commands.py
# Record nothing
pytest --pvcr-record-mode=none test_commands.py
# Record all commands, even previously recorded ones
pytest --pvcr-record-mode=all test_commands.py
```
### Block execution
The execution of processes can be completely blocked.
This is useful to protect test environments from destructive commands.
```shell
pytest --pvcr-block-run test_commands.py
```
The test will fail if an unrecorded command is executed.
### Fuzzy matching
Commands can be fuzzy matched by defining one or more regex.
If a fuzzy regex has matching groups, the matched parts are kept for matching the commands.
If a fuzzy regex has no matching groups, the whole matched string is ignored when matching the commands.
```shell
# Ignore `--dry-run` arguments when matching commands
pytest --pvcr-fuzzy-matcher='--dry-run' test_commands.py
# Ignore the beginning of a path and keep the filename
pytest --pvcr-fuzzy-matcher='^.+\/(kubeconfig)$' test_commands.py
```
It's possible to automatically ignore the parent path of the test script.
This parameter is useful to make test assets portable.
```shell
pytest --pvcr-auto-fuzzy-match test_commands.py
```
## Python support
This plugin supports Python >= 3.12.
## Authors
* Fabien Dupont <fabien.dupont@eurofiber.com>
| text/markdown | Fabien Dupont | Fabien Dupont <fabien.dupont@eurofiber.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pytest>=3.5.0",
"pyyaml>=6.0.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:46:53.589080 | pytest_pvcr-0.1.0.tar.gz | 5,424 | 19/a8/0b423a3bb52c18cbc00b0e47c73f2557a9280d0c1bb16980281fe5f44837/pytest_pvcr-0.1.0.tar.gz | source | sdist | null | false | 760dc923e7e0593df886a16d37683ac7 | 5a04f5677466f5cd25ad4a8a21168911d27ed7a37d52311bc415bcc8c53b0b42 | 19a80b423a3bb52c18cbc00b0e47c73f2557a9280d0c1bb16980281fe5f44837 | null | [] | 216 |
2.3 | wagtail-lms | 0.9.0 | A Learning Management System extension for Wagtail with SCORM 1.2/2004 and H5P support | # Wagtail LMS
[](https://github.com/dr-rompecabezas/wagtail-lms/actions/workflows/ci.yml)
[](https://codecov.io/gh/dr-rompecabezas/wagtail-lms)
[](https://pypi.org/project/wagtail-lms/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://results.pre-commit.ci/latest/github/dr-rompecabezas/wagtail-lms/main)
A Learning Management System extension for Wagtail CMS with SCORM 1.2/2004 and H5P support.
## ⚠️ Alpha Release
**This package is in early development.** That said, it is actively used in production at [thinkelearn.com](https://thinkelearn.com).
**Supported versions:**
- **Python:** 3.11, 3.12, 3.13, 3.14
- **Django:** 4.2 (LTS), 5.0, 5.1, 5.2 (LTS), 6.0
- **Wagtail:** 6.0, 6.2, 6.3, 7.1, 7.2, 7.3
Selected combinations are tested in CI. See our [compatibility matrix](https://github.com/dr-rompecabezas/wagtail-lms/blob/main/.github/workflows/ci.yml) for specific version combinations.
## Features
- 📚 **Course Management** - Integrate courses into Wagtail's page system
- 📦 **SCORM Support** - Full SCORM 1.2 and 2004 package compatibility
- 🎯 **H5P Support** - Embed interactive H5P activities in long-scroll lesson pages
- 👥 **Enrollment Tracking** - Automatic student enrollment and progress monitoring
- 📊 **SCORM API** - Complete runtime API implementation for content interactivity
- 📡 **xAPI Tracking** - Record H5P learner interactions as xAPI statements
- 🔒 **Secure Delivery** - Path-validated content serving with iframe support
- 💾 **Progress Persistence** - CMI data model storage with suspend/resume capability
- 🔄 **Concurrency Handling** - Retry logic for SQLite database lock scenarios
- 🎨 **Framework Agnostic** - Minimal default styling, easy to customize with any CSS framework
## Installation
```bash
pip install wagtail-lms
```
## Quick Start
1. Add to `INSTALLED_APPS` in your Django settings:
```python
INSTALLED_APPS = [
# ...
'wagtail_lms',
# ...
]
```
2. Add wagtail-lms URLs to your `urls.py`:
```python
from django.urls import path, include
urlpatterns = [
# ...
path('lms/', include('wagtail_lms.urls')),
# ...
]
```
3. Run migrations:
```bash
python manage.py migrate wagtail_lms
```
4. Collect static files:
```bash
python manage.py collectstatic
```
## Configuration
Optional settings in your Django settings:
```python
# SCORM
WAGTAIL_LMS_SCORM_UPLOAD_PATH = 'scorm_packages/' # Upload directory
WAGTAIL_LMS_CONTENT_PATH = 'scorm_content/' # Extracted content
WAGTAIL_LMS_AUTO_ENROLL = False # Auto-enroll on course visit
# H5P
WAGTAIL_LMS_H5P_UPLOAD_PATH = 'h5p_packages/' # Upload directory
WAGTAIL_LMS_H5P_CONTENT_PATH = 'h5p_content/' # Extracted content
# Cache-Control rules for served assets (exact MIME, wildcard, and default)
WAGTAIL_LMS_CACHE_CONTROL = {
"text/html": "no-cache",
"text/css": "max-age=86400",
"application/javascript": "max-age=86400",
"text/javascript": "max-age=86400",
"image/*": "max-age=604800",
"font/*": "max-age=604800",
"default": "max-age=86400",
}
# Redirect audio/video assets to storage URLs (useful for S3 backends)
WAGTAIL_LMS_REDIRECT_MEDIA = False
```
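A plausible reading of the rule precedence (exact MIME type, then type wildcard, then `default`) is sketched below; the package's actual resolution logic may differ:

```python
CACHE_CONTROL = {
    "text/html": "no-cache",
    "image/*": "max-age=604800",
    "default": "max-age=86400",
}

def cache_header(mime: str, rules=CACHE_CONTROL) -> str:
    # Exact MIME match wins, then the type wildcard, then the default
    if mime in rules:
        return rules[mime]
    wildcard = mime.split("/")[0] + "/*"
    if wildcard in rules:
        return rules[wildcard]
    return rules["default"]

print(cache_header("text/html"))   # no-cache
print(cache_header("image/png"))   # max-age=604800
print(cache_header("font/woff2"))  # max-age=86400
```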
## Usage
### SCORM Courses
1. Log into Wagtail admin
2. Create a new **Course Page** under Pages
3. Upload a SCORM package via **LMS → SCORM Packages** in the Wagtail admin
4. Assign the SCORM package to your course page and publish
**SCORM package requirements:**
- Valid SCORM 1.2 or 2004 ZIP file
- Must contain `imsmanifest.xml` at the root
- Launch file must be specified in the manifest
### H5P Lessons
H5P activities are composed into long-scroll **Lesson Pages** alongside rich text. A lesson is always a child of a Course Page; enrollment in the course is required to access its lessons.
1. Upload an **H5P Activity** snippet via **LMS → H5P Activities** in the Wagtail admin
2. Create a **Course Page** (no SCORM package required for H5P-only courses)
3. Add a **Lesson Page** as a child of the Course Page
4. In the lesson body, add **H5P Activity** blocks (and/or rich text blocks) to compose the lesson
5. Publish — enrolled learners can access the lesson; xAPI statements are recorded automatically
**H5P package requirements:**
- Valid `.h5p` file (ZIP with an `.h5p` extension) containing `h5p.json` at the root
- **Must include library JavaScript files** — h5p-standalone renders content using
library JS bundled inside the package (e.g. `H5P.InteractiveVideo-1.27/`).
A warning is logged and "Could not load activity." is shown if files are missing.
✅ [Lumi desktop editor](https://lumi.education) (free, open-source) — recommended
✅ Moodle / WordPress / Drupal H5P plugin export
✅ Lumi Cloud (free tier available at lumi.education)
❌ H5P.org "Reuse" download — content-only, no library files included
❌ H5P.org does not offer a download-with-libraries option for any content
### Customizing Templates
The package includes minimal, functional styling that works out of the box. To match your project's design:
- **Quick:** Override the CSS classes in your own stylesheet
- **Full control:** Override the templates in your project (standard Django approach)
- **Examples:** See [Template Customization Guide](https://github.com/dr-rompecabezas/wagtail-lms/blob/main/docs/template_customization.md) for Bootstrap, Tailwind CSS, and Bulma examples
For API-first projects, the templates are optional and can be ignored entirely.
## Development
An example project is available in `example_project/` for local development and testing. See its [README](https://github.com/dr-rompecabezas/wagtail-lms/blob/main/example_project/README.md) for setup instructions.
### Running Tests
The project includes a comprehensive test suite. See [current coverage](https://app.codecov.io/gh/dr-rompecabezas/wagtail-lms).
```bash
# Install testing dependencies (pytest, pytest-django, pytest-cov)
uv sync --extra testing
# Run all tests
PYTHONPATH=. uv run pytest
# Run with coverage report
PYTHONPATH=. uv run pytest --cov=src/wagtail_lms --cov-report=term-missing
# Run specific test file
PYTHONPATH=. uv run pytest tests/test_models.py -v
```
### Database Considerations
**SQLite**: The package includes retry logic with exponential backoff to handle database lock errors during concurrent SCORM API operations. For development with the example project:
```python
# example_project/settings.py
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": "db.sqlite3",
"OPTIONS": {
"timeout": 20, # Increased timeout for SCORM operations
},
}
}
```
**Production**: For production deployments, PostgreSQL is recommended for better concurrency handling:
```python
DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql",
"NAME": "wagtail_lms",
# ... other PostgreSQL settings
}
}
```
## Acknowledgments
- Built with [Django](https://djangoproject.com/) and [Wagtail CMS](https://wagtail.org/)
- SCORM implementation based on ADL specifications
- H5P support powered by [H5P](https://h5p.org/) and [h5p-standalone](https://github.com/tunapanda/h5p-standalone)
- [Lumi](https://lumi.education/) — recommended free, open-source H5P editor for creating self-contained packages
- Inspired by open-source LMS solutions like [Moodle](https://moodle.org/) and [Open edX](https://openedx.org/)
## License
This project is licensed under the MIT License. See the [LICENSE](https://github.com/dr-rompecabezas/wagtail-lms/blob/main/LICENSE) file for details.
| text/markdown | Felipe Villegas | Felipe Villegas <felavid@gmail.com> | null | null | MIT | wagtail, django, lms, scorm, xAPI, H5P, e-learning, education | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Framework :: Wagtail",
"Framework :: Wagtail :: 6",
"Framework :: Wagtail :: 7",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Education",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"django>=4.2",
"wagtail>=6.0",
"pytest>=7.4.2; extra == \"testing\"",
"pytest-django>=4.5.2; extra == \"testing\"",
"pytest-cov>=4.0; extra == \"testing\""
] | [] | [] | [] | [
"Homepage, https://github.com/dr-rompecabezas/wagtail-lms",
"Documentation, https://wagtail-lms.readthedocs.io",
"Repository, https://github.com/dr-rompecabezas/wagtail-lms",
"Bug Tracker, https://github.com/dr-rompecabezas/wagtail-lms/issues",
"Changelog, https://github.com/dr-rompecabezas/wagtail-lms/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:46:32.776451 | wagtail_lms-0.9.0.tar.gz | 616,883 | 36/29/6f490c8e4771ad06eb3085ef57b58c18a010688db577a019a1df2d5c3b5e/wagtail_lms-0.9.0.tar.gz | source | sdist | null | false | 884ca06c5c8162b9ffa51b75a92177e0 | 7433376190fe2a3cdb17010a819ba7db660e2ae874ed2efe9f0a8e914af90824 | 36296f490c8e4771ad06eb3085ef57b58c18a010688db577a019a1df2d5c3b5e | null | [] | 259 |
2.4 | tpcav | 0.6.1 | Testing with PCA projected Concept Activation Vectors | # TPCAV
> Testing with PCA projected Concept Activation Vectors
This repository contains code to compute TPCAV (Testing with PCA projected Concept Activation Vectors) on deep learning models. TPCAV is an extension of the original TCAV method, which uses PCA to reduce the dimensionality of the activations at a selected intermediate layer before computing Concept Activation Vectors (CAVs) to improve the consistency of the results.
For more technical details, please check our manuscript on bioRxiv: [TPCAV: Interpreting deep learning genomics models via concept attribution](https://doi.org/10.64898/2026.01.20.700723)!
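The core idea — project intermediate-layer activations with PCA, then fit a linear classifier whose weight vector is the CAV in the reduced space — can be sketched on toy data. This is a conceptual illustration, not the package's implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy activations at an intermediate layer: concept vs. random examples
concept_acts = rng.normal(1.0, 1.0, size=(50, 128))
random_acts = rng.normal(0.0, 1.0, size=(50, 128))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)

# Reduce dimensionality first (the PCA projection step), then fit a
# linear classifier; its weight vector is the CAV in PCA space
pca = PCA(n_components=10).fit(X)
clf = LogisticRegression().fit(pca.transform(X), y)
cav = clf.coef_[0]
print(cav.shape)  # (10,)
```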
## When should I use TPCAV?
TPCAV is a global feature attribution method that can be applied to any model, provided that a set of examples is available to represent the concept of interest. It is input-agnostic, meaning it can operate on raw inputs, engineered features, or **tokenized representations**, including **foundation models**.
Typical concepts in Genomics include:
- Transcription factor motifs
- Cis-regulatory regions
- DNA repeats
The same framework naturally extends to other domains, such as protein structure prediction, transcriptomics, or any field with a well established knowledge base, by defining appropriate concept sets.
## Installation
`pip install tpcav`
## Detailed Usage
For detailed usage for more flexibility on defining concepts, please refer to this [jupyter notebook](https://github.com/seqcode/TPCAV/tree/main/examples/tpcav_detailed_usage.ipynb)
## Quick start
> `tpcav` only works with Pytorch model, if your model is built using other libraries, you should port the model into Pytorch first. For Tensorflow models, you can use [tf2onnx](https://github.com/onnx/tensorflow-onnx) and [onnx2pytorch](https://github.com/Talmaj/onnx2pytorch) for the conversion.
```python
import torch
from tpcav import run_tpcav
#==================== Prepare Model and Data transform function ================================
class DummyModelSeq(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer1 = torch.nn.Linear(1024, 1)
self.layer2 = torch.nn.Linear(4, 1)
def forward(self, seq):
y_hat = self.layer1(seq)
y_hat = y_hat.squeeze(-1)
y_hat = self.layer2(y_hat)
return y_hat
# By default, every concept extracts fasta sequences and bigwig signals from the given region
# Use your own custom transformation function to get your desired inputs
# Here we transform them into one-hot coded DNA sequences
def transform_fasta_to_one_hot_seq(seq, chrom):
# `seq` is a list of fasta sequences
# `chrom` is a numpy array of bigwig signals of shape [-1, # bigwigs, len]
return (helper.fasta_to_one_hot_sequences(seq),) # it has to return a tuple of inputs, even if there is only one input
#==================== Construct concepts ================================
motif_path = "data/motif-clustering-v2.1beta_consensus_pwms.test.meme" # motif file in meme format for constructing motif concepts
bed_seq_concept = "data/hg38_rmsk.head500k.bed" # a bed file to supply concepts described by a set of regions, format [chrom, start, end, label, concept_name]
genome_fasta = "data/hg38.analysisSet.fa"
model = DummyModelSeq() # load the model
layer_name = "layer1" # name of the layer to be interpreted, you should be able to retrieve the layer object by getattr(model, layer_name)
# concept_fscores_dataframe: fscores of each concept
# motif_cav_trainers: each trainer contains the cav weights of motifs inserted different number of times
# bed_cav_trainer: trainer contains the cav weights of the sequence concepts provided in bed file
concept_fscores_dataframe, motif_cav_trainers, bed_cav_trainer = run_tpcav(
model=model,
layer_name=layer_name,
motif_file=motif_path,
motif_file_fmt='meme', # specify your motif file format, either meme or consensus (tab delimited file in form [motif_name, consensus_sequence])
genome_fasta=genome_fasta,
num_motif_insertions=[4, 8],
bed_seq_file=bed_seq_concept,
output_dir="test_run_tpcav_output/",
input_transform_func=transform_fasta_to_one_hot_seq)
#==================== Compute layer attributions of target testing regions ================================
# retrieve the tpcav model
tpcav_model = bed_cav_trainer.tpcav
# create input regions and baseline regions for attribution
random_regions_1 = helper.random_regions_dataframe(genome_fasta + ".fai", 1024, 100, seed=1)
random_regions_2 = helper.random_regions_dataframe(genome_fasta + ".fai", 1024, 100, seed=2)
# create iterators to yield one-hot encoded sequences from the region dataframes
def pack_data_iters(df):
seq_fasta_iter = helper.dataframe_to_fasta_iter(df, genome_fasta, batch_size=8)
seq_one_hot_iter = (helper.fasta_to_one_hot_sequences(seq_fasta) for seq_fasta in seq_fasta_iter)
return zip(seq_one_hot_iter, )
# compute layer attributions given the iterators of testing regions and control regions
attributions = tpcav_model.layer_attributions(pack_data_iters(random_regions_1), pack_data_iters(random_regions_2))["attributions"]
# compute TPCAV scores for the concept
# here uses bed_cav_trainer that contains the concepts provided from bed file
bed_cav_trainer.tpcav_score_all_concepts_log_ratio(attributions)
```
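For readers unfamiliar with the encoding, a hypothetical stand-in for `helper.fasta_to_one_hot_sequences` might look like this (the real helper's conventions and output shape may differ):

```python
import numpy as np

def one_hot_dna(seqs):
    """Encode equal-length DNA sequences as a [batch, len, 4] array
    over A, C, G, T; ambiguous bases (e.g. N) stay all-zero."""
    lut = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seqs), len(seqs[0]), 4), dtype=np.float32)
    for i, s in enumerate(seqs):
        for j, b in enumerate(s.upper()):
            if b in lut:
                out[i, j, lut[b]] = 1.0
    return out

x = one_hot_dna(["ACGT", "TTNA"])
print(x.shape)  # (2, 4, 4)
```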
If you find any issue, feel free to open an issue (strongly suggested) or contact [Jianyu Yang](mailto:jmy5455@psu.edu).
| text/markdown | null | Jianyu Yang <yztxwd@gmail.com> | null | null | null | interpretation, attribution, concept, genomics, deep learning | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"torch",
"pandas",
"numpy",
"seqchromloader",
"deeplift",
"pyfaidx",
"pybedtools",
"captum",
"scikit-learn",
"biopython",
"seaborn",
"matplotlib",
"logomaker"
] | [] | [] | [] | [
"Homepage, https://github.com/seqcode/TPCAV"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T15:45:46.561626 | tpcav-0.6.1.tar.gz | 30,066 | bf/4a/5aeda9fa80e2608e4249f895c052cba421c9a05da48ce79d1c1515df33a6/tpcav-0.6.1.tar.gz | source | sdist | null | false | de26f4bd07e9e2f0b8cb1821c488ac70 | 1224bbb64c6cc3642aa37dc7fed55754016e8365d0e96421f5550b364157796c | bf4a5aeda9fa80e2608e4249f895c052cba421c9a05da48ce79d1c1515df33a6 | MIT AND (Apache-2.0 OR BSD-2-Clause) | [
"LICENSE"
] | 215 |
2.4 | alita-sdk | 0.3.691 | SDK for building langchain agents using resources from Alita | Alita SDK
=========
Alita SDK, built on top of Langchain, enables the creation of intelligent agents within the Alita Platform using project-specific prompts and data sources. This SDK is designed for developers looking to integrate advanced AI capabilities into their projects with ease.
Prerequisites
-------------
Before you begin, ensure you have the following requirements met:
* Python 3.10+
* An active deployment of Project Alita
* Access to personal project
Installation
------------
It is recommended to use a Python virtual environment to avoid dependency conflicts and keep your environment isolated.
### 1. Create and activate a virtual environment
For **Unix/macOS**:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
For **Windows**:
```bat
python -m venv .venv
.venv\Scripts\activate
```
### 2. Install dependencies
Install all required dependencies for the SDK and toolkits:
```bash
pip install -U '.[all]'
```
Environment Setup
-----------------
Before running your Alita agents, set up your environment variables. Create a `.env` file in the root directory of your project and include your Project Alita credentials:
```.env
DEPLOYMENT_URL=<your_deployment_url>
API_KEY=<your_api_key>
PROJECT_ID=<your_project_id>
```
NOTE: these variables can be grabbed from your Elitea platform configuration page.

### Custom .env File Location
By default, the CLI looks for `.env` files in the following order:
1. `.alita/.env` (recommended)
2. `.env` in the current directory
You can override this by setting the `ALITA_ENV_FILE` environment variable:
```bash
export ALITA_ENV_FILE=/path/to/your/.env
alita-cli agent chat
```
Using the CLI for Interactive Chat
----------------------------------
The Alita SDK includes a powerful CLI for interactive agent chat sessions.
### Starting a Chat Session
```bash
# Interactive selection (shows all available agents + direct chat option)
alita-cli agent chat
# Chat with a specific local agent
alita-cli agent chat .alita/agents/my-agent.agent.md
# Chat with a platform agent
alita-cli agent chat my-agent-name
```
### Direct Chat Mode (No Agent)
You can start a chat session directly with the LLM without any agent configuration:
```bash
alita-cli agent chat
# Select option 1: "Direct chat with model (no agent)"
```
This is useful for quick interactions or testing without setting up an agent.
### Chat Commands
During a chat session, you can use the following commands:
| Command | Description |
|---------|-------------|
| `/help` | Show all available commands |
| `/model` | Switch to a different model (preserves chat history) |
| `/add_mcp` | Add an MCP server from your local mcp.json (preserves chat history) |
| `/add_toolkit` | Add a toolkit from $ALITA_DIR/tools (preserves chat history) |
| `/clear` | Clear conversation history |
| `/history` | Show conversation history |
| `/save` | Save conversation to file |
| `exit` | End conversation |
### Enhanced Input Features
The chat interface includes readline-based input enhancements:
| Feature | Key/Action |
|---------|------------|
| **Tab completion** | Press `Tab` to autocomplete commands (e.g., `/mo` → `/model`) |
| **Command history** | `↑` / `↓` arrows to navigate through previous messages |
| **Cursor movement** | `←` / `→` arrows to move within the current line |
| **Start of line** | `Ctrl+A` jumps to the beginning of the line |
| **End of line** | `Ctrl+E` jumps to the end of the line |
| **Delete word** | `Ctrl+W` deletes the word before cursor |
| **Clear line** | `Ctrl+U` clears from cursor to beginning of line |
### Dynamic Model Switching
Use `/model` to switch models on the fly:
```
> /model
🔧 Select a model:
# Model Type
1 gpt-4o openai
2 gpt-4o-mini openai
3 claude-3-sonnet anthropic
Select model number: 1
✓ Selected: gpt-4o
╭──────────────────────────────────────────────────────────────╮
│ ℹ Model switched to gpt-4o. Agent state reset, chat history │
│ preserved. │
╰──────────────────────────────────────────────────────────────╯
```
### Adding MCP Servers Dynamically
Use `/add_mcp` to add MCP servers during a chat session. Servers are loaded from your local `mcp.json` file (typically at `.alita/mcp.json`):
```
> /add_mcp
🔌 Select an MCP server to add:
# Server Type Command/URL
1 playwright stdio npx @playwright/mcp@latest
2 filesystem stdio npx @anthropic/mcp-fs
Select MCP server number: 1
✓ Selected: playwright
╭──────────────────────────────────────────────────────────────╮
│ ℹ Added MCP: playwright. Agent state reset, chat history │
│ preserved. │
╰──────────────────────────────────────────────────────────────╯
```
### Adding Toolkits Dynamically
Use `/add_toolkit` to add toolkits from your `$ALITA_DIR/tools` directory (default: `.alita/tools`):
```
> /add_toolkit
🧰 Select a toolkit to add:
# Toolkit Type File
1 jira jira jira-config.json
2 github github github-config.json
Select toolkit number: 1
✓ Selected: jira
╭──────────────────────────────────────────────────────────────╮
│ ℹ Added toolkit: jira. Agent state reset, chat history │
│ preserved. │
╰──────────────────────────────────────────────────────────────╯
```
Using SDK with Streamlit for Local Development
----------------------------------------------
To use the SDK with Streamlit for local development, follow these steps:
1. Ensure you have Streamlit installed:
```bash
pip install streamlit
```
2. Run the Streamlit app:
```bash
streamlit run alita_local.py
```
Note: If **streamlit** throws an error related to **pytorch**, add the extra argument `--server.fileWatcherType none`.
Streamlit sometimes tries to index **pytorch** modules, and since they are **C** modules it raises an exception.
Example of a launch configuration for Streamlit:
Important: Make sure to set the correct paths to your `.env` file and to the streamlit executable.

Streamlit Web Application
------------------------
The Alita SDK includes a Streamlit web application that provides a user-friendly interface for interacting with Alita agents. This application is powered by the `streamlit.py` module included in the SDK.
### Key Features
- **Agent Management**: Load and interact with agents created in the Alita Platform
- **Authentication**: Easily connect to your Alita/Elitea deployment using your credentials
- **Chat Interface**: User-friendly chat interface for communicating with your agents
- **Toolkit Integration**: Add and configure toolkits for your agents
- **Session Management**: Maintain conversation history and thread state
### Using the Web Application
1. **Authentication**:
- Navigate to the "Alita Settings" tab in the sidebar
- Enter your deployment URL, API key, and project ID
- Click "Login" to authenticate with the Alita Platform
2. **Loading an Agent**:
- After authentication, you'll see a list of available agents
- Select an agent from the dropdown menu
- Specify a version name (default: 'latest')
- Optionally, select an agent type and add custom tools
- Click "Load Agent" to initialize the agent
3. **Interacting with the Agent**:
- Use the chat input at the bottom of the screen to send messages to the agent
- The agent's responses will appear in the chat window
- Your conversation history is maintained until you clear it
4. **Clearing Data**:
- Use the "Clear Chat" button to reset the conversation history
- Use the "Clear Config" button to reset toolkit configurations
This web application simplifies the process of testing and interacting with your Alita agents, making development and debugging more efficient.
Using Elitea toolkits and tools with Streamlit for Local Development
----------------------------------------------
Toolkits are part of the Alita SDK (`alita-sdk/tools`), so you can use them in your local development environment as well.
To debug a toolkit, use `alita_local.py`, a Streamlit application that lets you interact with your agents and toolkits
while setting breakpoints in the code of the corresponding tool.
### Example: debugging an agent with Streamlit
Assume we want to debug a user agent called `Questionnaire` that uses the `Confluence` toolkit's `get_pages_with_label` method.
Prerequisites:
- Make sure you have set the correct variables in your `.env` file
- Set breakpoints in the `get_pages_with_label` method of `alita_sdk/tools/confluence/api_wrapper.py`
1. Run the Streamlit app (using debug):
```bash
streamlit run alita_local.py
```
2. Log in to the application with your credentials (populated from the `.env` file)
- Enter your deployment URL, API key, and project ID (optionally)
- Click "Login" to authenticate with the Alita Platform

3. Select `Questionnaire` agent

4. Query the agent with the required prompt:
```
get pages with label `ai-mb`
```
5. Debug the agent's code:
- The Streamlit app will call the `get_pages_with_label` method of the `Confluence` toolkit
- The execution will stop at the breakpoint you set in the `alita_sdk/tools/confluence/api_wrapper.py` file
- You can inspect variables, step through the code, and analyze the flow of execution

How to create a new toolkit
----------------------------------------------
A toolkit is a collection of pre-built tools and functionalities designed to simplify the development of AI agents. Toolkits provide developers with the necessary resources, such as APIs and data connectors to the required services and systems.
As a first step, decide on the toolkit's capabilities so you can design the required tools and their argument schemas.
Example of the Testrail toolkit's capabilities:
- `get_test_cases`: Retrieve test cases from Testrail
- `get_test_runs`: Retrieve test runs from Testrail
- `get_test_plans`: Retrieve test plans from Testrail
- `create_test_case`: Create a new test case in Testrail
- etc.
### General Steps to Create a Toolkit
### 1. Create the Toolkit package
Create a new package under `alita_sdk/tools/` for your toolkit, e.g., `alita_sdk/tools/mytoolkit/`.
### 2. Implement the API Wrapper
Create an `api_wrapper.py` file in your toolkit directory. This file should:
- Define a config class (subclassing `BaseToolApiWrapper`).
- Implement a method for each tool/action you want to expose.
- Provide a `get_available_tools()` method that returns each tool's metadata and argument schema.
Note:
- The args schema should be defined using Pydantic models, which helps validate the input parameters for each tool.
- Make sure tool descriptions are clear and concise, as the LLM uses them to decide where each tool fits in the execution chain.
- Clearly define the input parameters for each tool, including whether each one is required or optional, as the LLM uses them to generate the correct input for the tool (refer to https://docs.pydantic.dev/2.2/migration/#required-optional-and-nullable-fields if needed).
**Example:**
```python
# alita_sdk/tools/mytoolkit/api_wrapper.py
from ...elitea_base import BaseToolApiWrapper
from pydantic import create_model, Field


class MyToolkitConfig(BaseToolApiWrapper):
    # Define config fields (e.g., API keys, endpoints)
    api_key: str

    def do_something(self, param1: str):
        """Perform an action with param1."""
        # Implement your logic here
        return {"result": f"Did something with {param1}"}

    def get_available_tools(self):
        return [
            {
                "name": "do_something",
                "ref": self.do_something,
                "description": self.do_something.__doc__,
                "args_schema": create_model(
                    "DoSomethingModel",
                    param1=(str, Field(description="Parameter 1"))
                ),
            }
        ]
```
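To illustrate the required-vs-optional note above: with `create_model`, a field without a default is required, while a field with a default is optional. A minimal, hypothetical schema (not part of the SDK):

```python
from pydantic import create_model, Field

# `query` has no default -> required; `limit` has a default -> optional
SearchArgs = create_model(
    "SearchArgs",
    query=(str, Field(description="Search query")),
    limit=(int, Field(default=10, description="Maximum results")),
)

schema = SearchArgs.model_json_schema()
# schema["required"] lists "query" but not "limit"
```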
### 3. Implement the Toolkit Configuration Class
Create an `__init__.py` file in your toolkit directory. This file should:
- Define a `toolkit_config_schema()` static method for the toolkit's configuration (this data is used to render the toolkit configuration card in the UI).
- Implement a module-level `get_tools(tool)` function that extracts the toolkit's configuration parameters as set in the UI.
- Implement a `get_toolkit()` class method to instantiate tools.
- Return a list of tool instances via `get_tools()`.
**Example:**
```python
# alita_sdk/tools/mytoolkit/__init__.py
from pydantic import BaseModel, Field, create_model
from langchain_core.tools import BaseToolkit, BaseTool

from .api_wrapper import MyToolkitConfig
from ...base.tool import BaseAction

name = "mytoolkit"


def get_tools(tool):
    return MyToolkit().get_toolkit(
        selected_tools=tool['settings'].get('selected_tools', []),
        url=tool['settings']['url'],
        password=tool['settings'].get('password', None),
        email=tool['settings'].get('email', None),
        toolkit_name=tool.get('toolkit_name')
    ).get_tools()


class MyToolkit(BaseToolkit):
    tools: list[BaseTool] = []

    @staticmethod
    def toolkit_config_schema() -> BaseModel:
        return create_model(
            name,
            url=(str, Field(title="Base URL", description="Base URL for the API")),
            email=(str, Field(title="Email", description="Email for authentication", default=None)),
            password=(str, Field(title="Password", description="Password for authentication", default=None)),
            selected_tools=(list[str], Field(title="Selected Tools", description="List of tools to enable", default=[])),
        )

    @classmethod
    def get_toolkit(cls, selected_tools=None, toolkit_name=None, **kwargs):
        config = MyToolkitConfig(**kwargs)
        available_tools = config.get_available_tools()
        tools = []
        for tool in available_tools:
            if selected_tools and tool["name"] not in selected_tools:
                continue
            tools.append(BaseAction(
                api_wrapper=config,
                name=tool["name"],
                description=tool["description"],
                args_schema=tool["args_schema"]
            ))
        return cls(tools=tools)

    def get_tools(self) -> list[BaseTool]:
        return self.tools
```
### 4. Add the Toolkit to the SDK
Update the `__init__.py` file in the `alita_sdk/tools/` directory to include your new toolkit:
```python
# alita_sdk/tools/__init__.py
def get_tools(tools_list, alita: 'AlitaClient', llm: 'LLMLikeObject', *args, **kwargs):
    ...
    # add your toolkit here with proper type
    elif tool['type'] == 'mytoolkittype':
        tools.extend(get_mytoolkit(tool))

# add toolkit's config schema
def get_toolkits():
    return [
        ...,
        MyToolkit.toolkit_config_schema(),
    ]
```
### 5. Test Your Toolkit
To test your toolkit, you can use the Streamlit application (`alita_local.py`) to load and interact with your toolkit.
- Login to the platform
- Select `Toolkit testing` tab
- Choose your toolkit from the dropdown menu.
- Adjust the configuration parameters as needed, and then test the tools by sending queries to them.
**NOTE**: use `function mode` when testing an individual tool.


| text/markdown | null | Artem Rozumenko <artyom.rozumenko@gmail.com>, Mikalai Biazruchka <mikalai_biazruchka@epam.com>, Roman Mitusov <roman_mitusov@epam.com>, Ivan Krakhmaliuk <lifedj27@gmail.com>, Artem Dubrovskiy <ad13box@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"sqlalchemy<2.0.36",
"tiktoken>=0.7.0",
"openai>=1.55.0",
"python-dotenv~=1.0.1",
"jinja2~=3.1.3",
"pillow~=11.1.0",
"requests~=2.3",
"pydantic~=2.12.0",
"chardet==5.2.0",
"fastapi==0.115.9",
"httpcore==1.0.7",
"urllib3>=2",
"certifi==2024.8.30",
"aiohttp>=3.9.0",
"langchain-core==1.2.7; extra == \"runtime\"",
"langchain==1.2.6; extra == \"runtime\"",
"langchain-community==0.4.1; extra == \"runtime\"",
"langchain-openai==1.1.7; extra == \"runtime\"",
"langchain-anthropic==1.3.1; extra == \"runtime\"",
"langchain-text-splitters==1.1.0; extra == \"runtime\"",
"langchain-chroma==1.0.0; extra == \"runtime\"",
"langchain-unstructured==1.0.0; extra == \"runtime\"",
"langchain-postgres==0.0.16; extra == \"runtime\"",
"langchain-mcp-adapters<0.2.0,>=0.1.14; extra == \"runtime\"",
"langgraph==1.0.7; extra == \"runtime\"",
"langgraph-prebuilt==1.0.7; extra == \"runtime\"",
"langgraph-swarm==0.1.0; extra == \"runtime\"",
"langgraph-checkpoint==2.1.2; extra == \"runtime\"",
"langgraph-checkpoint-sqlite==2.0.11; extra == \"runtime\"",
"langgraph-checkpoint-postgres==2.0.21; extra == \"runtime\"",
"langsmith>=0.3.45; extra == \"runtime\"",
"anthropic==0.76.0; extra == \"runtime\"",
"chromadb<2.0.0,>=1.0.20; extra == \"runtime\"",
"pgvector==0.2.5; extra == \"runtime\"",
"unstructured[local-inference]==0.16.23; extra == \"runtime\"",
"unstructured_pytesseract==0.3.13; extra == \"runtime\"",
"unstructured_inference==0.8.7; extra == \"runtime\"",
"python-pptx==1.0.2; extra == \"runtime\"",
"python-docx==1.1.2; extra == \"runtime\"",
"openpyxl==3.1.2; extra == \"runtime\"",
"pypdf==4.3.1; extra == \"runtime\"",
"pdfminer.six==20240706; extra == \"runtime\"",
"pdf2image==1.16.3; extra == \"runtime\"",
"pikepdf==8.7.1; extra == \"runtime\"",
"docx2txt==0.8; extra == \"runtime\"",
"mammoth==1.9.0; extra == \"runtime\"",
"reportlab==4.2.5; extra == \"runtime\"",
"svglib==1.5.1; extra == \"runtime\"",
"cairocffi==1.7.1; extra == \"runtime\"",
"rlpycairo==0.3.0; extra == \"runtime\"",
"keybert==0.8.3; extra == \"runtime\"",
"sentence-transformers==2.7.0; extra == \"runtime\"",
"gensim==4.3.3; extra == \"runtime\"",
"scipy==1.13.1; extra == \"runtime\"",
"opencv-python==4.11.0.86; extra == \"runtime\"",
"pytesseract==0.3.13; extra == \"runtime\"",
"markdown==3.5.1; extra == \"runtime\"",
"beautifulsoup4==4.12.2; extra == \"runtime\"",
"charset_normalizer==3.3.2; extra == \"runtime\"",
"opentelemetry-exporter-otlp-proto-grpc>=1.25.0; extra == \"runtime\"",
"opentelemetry_api>=1.25.0; extra == \"runtime\"",
"opentelemetry_instrumentation>=0.46b0; extra == \"runtime\"",
"grpcio_status>=1.63.0rc1; extra == \"runtime\"",
"protobuf>=4.25.7; extra == \"runtime\"",
"streamlit>=1.28.0; extra == \"runtime\"",
"dulwich==0.21.6; extra == \"tools\"",
"paramiko==3.3.1; extra == \"tools\"",
"pygithub==2.3.0; extra == \"tools\"",
"python-gitlab==4.5.0; extra == \"tools\"",
"gitpython==3.1.43; extra == \"tools\"",
"atlassian-python-api~=4.0.7; extra == \"tools\"",
"jira==3.8.0; extra == \"tools\"",
"qtest-swagger-client==0.0.3; extra == \"tools\"",
"testrail-api==1.13.4; extra == \"tools\"",
"zephyr-python-api==0.1.0; extra == \"tools\"",
"azure-devops==7.1.0b4; extra == \"tools\"",
"azure-core==1.30.2; extra == \"tools\"",
"azure-identity==1.16.0; extra == \"tools\"",
"azure-keyvault-keys==4.9.0; extra == \"tools\"",
"azure-keyvault-secrets==4.8.0; extra == \"tools\"",
"azure-mgmt-core==1.4.0; extra == \"tools\"",
"azure-mgmt-resource==23.0.1; extra == \"tools\"",
"azure-mgmt-storage==21.1.0; extra == \"tools\"",
"azure-storage-blob==12.23.1; extra == \"tools\"",
"azure-search-documents==11.5.2; extra == \"tools\"",
"msrest==0.7.1; extra == \"tools\"",
"boto3>=1.37.23; extra == \"tools\"",
"PyMySQL==1.1.1; extra == \"tools\"",
"psycopg2-binary==2.9.10; extra == \"tools\"",
"Office365-REST-Python-Client==2.5.14; extra == \"tools\"",
"pypdf2~=3.0.1; extra == \"tools\"",
"FigmaPy==2018.1.0; extra == \"tools\"",
"pandas==2.2.3; extra == \"tools\"",
"factor_analyzer==0.5.1; extra == \"tools\"",
"statsmodels==0.14.4; extra == \"tools\"",
"tabulate==0.9.0; extra == \"tools\"",
"tree_sitter==0.20.2; extra == \"tools\"",
"tree-sitter-languages==1.10.2; extra == \"tools\"",
"astor~=0.8.1; extra == \"tools\"",
"markdownify~=1.1.0; extra == \"tools\"",
"requests_openapi==1.0.5; extra == \"tools\"",
"duckduckgo_search==5.3.0; extra == \"tools\"",
"google-api-python-client==2.154.0; extra == \"tools\"",
"wikipedia==1.4.0; extra == \"tools\"",
"lxml==5.2.2; extra == \"tools\"",
"python-graphql-client~=0.4.3; extra == \"tools\"",
"pymupdf==1.24.9; extra == \"tools\"",
"googlemaps==4.10.0; extra == \"tools\"",
"yagmail==0.15.293; extra == \"tools\"",
"pysnc==1.1.10; extra == \"tools\"",
"pyral==1.6.0; extra == \"tools\"",
"shortuuid==1.0.13; extra == \"tools\"",
"yarl==1.17.1; extra == \"tools\"",
"langmem==0.0.27; extra == \"tools\"",
"textract-py3==2.1.1; extra == \"tools\"",
"slack_sdk==3.35.0; extra == \"tools\"",
"deltalake==1.0.2; extra == \"tools\"",
"google_cloud_bigquery==3.34.0; extra == \"tools\"",
"python-calamine==0.5.3; extra == \"tools\"",
"retry-extended==0.2.3; extra == \"community\"",
"pyobjtojson==0.3; extra == \"community\"",
"elitea-analyse==0.1.2; extra == \"community\"",
"networkx>=3.0; extra == \"community\"",
"alita-sdk[runtime]; extra == \"all\"",
"alita-sdk[tools]; extra == \"all\"",
"alita-sdk[community]; extra == \"all\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"flake8; extra == \"dev\"",
"mypy; extra == \"dev\"",
"click>=8.1.0; extra == \"cli\"",
"rich>=13.0.0; extra == \"cli\"",
"pyyaml>=6.0; extra == \"cli\"",
"langchain-mcp-adapters; extra == \"cli\""
] | [] | [] | [] | [
"Homepage, https://projectalita.ai",
"Issues, https://github.com/ProjectAlita/alita-sdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:45:08.978361 | alita_sdk-0.3.691.tar.gz | 1,648,898 | 3f/23/bbc3771f5ee685d633ef4402e2c3d846efb83ab212fde9d7d3ce776a2308/alita_sdk-0.3.691.tar.gz | source | sdist | null | false | 37b37c3c421870c9b079ffe1c65358e4 | 2e5f648b3f0c660415ac2f2147b2d676166c9194294d67ef58f8d7639e547db3 | 3f23bbc3771f5ee685d633ef4402e2c3d846efb83ab212fde9d7d3ce776a2308 | Apache-2.0 | [
"LICENSE"
] | 225 |
2.2 | autosar-e2e | 0.8.0.dev1 | Python implementation of the AUTOSAR E2E Protocol | # autosar-e2e
[](https://pypi.org/project/autosar-e2e)
[](https://pypi.org/project/autosar-e2e)
[](https://autosar-e2e.readthedocs.io/en/latest/?badge=latest)
The documentation is available [here](https://autosar-e2e.readthedocs.io/en/latest/).
-----
**Table of Contents**
- [Description](#description)
- [Installation](#installation)
- [Usage](#usage)
- [Test](#test)
- [Build](#build)
- [License](#license)
## Description
This library provides fast C implementations of the E2E CRC algorithms and E2E profiles.
Currently, all relevant CRC algorithms are available in module `e2e.crc`
but only E2E profiles 1, 2, 4, 5 and 7 are implemented.
If you can provide example data for the other profiles, I will try to implement them, too.
## Installation
```console
pip install autosar-e2e
```
## Usage
### CRC example
```python3
import e2e
crc: int = e2e.crc.calculate_crc8_h2f(b"\x00\x00\x00\x00")
```
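For reference, the CRC8H2F algorithm (CRC-8/AUTOSAR: polynomial 0x2F, init 0xFF, XOR-out 0xFF, no reflection) can be sketched in pure Python. This is useful for cross-checking the fast C implementation in `e2e.crc`, not a replacement for it:

```python3
def crc8_h2f_py(data: bytes) -> int:
    """Pure-Python CRC-8/AUTOSAR (poly 0x2F, init 0xFF, xorout 0xFF)."""
    crc = 0xFF
    for byte in data:
        crc ^= byte
        # MSB-first bitwise update, no input/output reflection
        for _ in range(8):
            crc = ((crc << 1) ^ 0x2F) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0xFF
```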
### E2E Profile 2
```python3
import e2e
# create data
data = bytearray(b"\x00" * 8)
length = len(data) - 1
data_id_list = b"\x00" * 16
# increment counter and calculate CRC inplace
e2e.p02.e2e_p02_protect(data, length, data_id_list, increment_counter=True)
# check CRC
crc_correct: bool = e2e.p02.e2e_p02_check(data, length, data_id_list)
```
## Test
```console
pip install pipx
pipx run tox
```
## Build
```console
pip install pipx
pipx run build
pipx run twine check dist/*
```
## License
`autosar-e2e` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
| text/markdown | null | Artur Drogunow <artur.drogunow@zf.com> | null | null | MIT | AUTOSAR, E2E, automotive | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: Free Threading :: 1 - Unstable"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://autosar-e2e.readthedocs.io/en/latest",
"Issues, https://github.com/zariiii9003/autosar-e2e/issues",
"Source, https://github.com/zariiii9003/autosar-e2e",
"Homepage, https://github.com/zariiii9003/autosar-e2e"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:44:22.960655 | autosar_e2e-0.8.0.dev1.tar.gz | 32,961 | 34/bd/e01d9120ba324d009503696cef1945732232282e19a7cf8e3ebed61d8816/autosar_e2e-0.8.0.dev1.tar.gz | source | sdist | null | false | 5a1524902b6bc152ab8f1b87eea2c7d0 | fa43ed46b16651a66cbb211fedd47338bdf59ce557721dbaba3d898a668b9baf | 34bde01d9120ba324d009503696cef1945732232282e19a7cf8e3ebed61d8816 | null | [] | 2,035 |
2.4 | oxarchive | 0.8.2 | Official Python SDK for 0xarchive - Hyperliquid Historical Data API | # oxarchive
Official Python SDK for [0xarchive](https://0xarchive.io) - Historical Market Data API.
Supports multiple exchanges:
- **Hyperliquid** - Perpetuals data from April 2023
- **Hyperliquid HIP-3** - Builder-deployed perpetuals (Pro+ only, February 2026+)
- **Lighter.xyz** - Perpetuals data (August 2025+ for fills, Jan 2026+ for OB, OI, Funding Rate)
## Installation
```bash
pip install oxarchive
```
For WebSocket support:
```bash
pip install oxarchive[websocket]
```
## Quick Start
```python
from oxarchive import Client
client = Client(api_key="ox_your_api_key")
# Hyperliquid data
hl_orderbook = client.hyperliquid.orderbook.get("BTC")
print(f"Hyperliquid BTC mid price: {hl_orderbook.mid_price}")
# Lighter.xyz data
lighter_orderbook = client.lighter.orderbook.get("BTC")
print(f"Lighter BTC mid price: {lighter_orderbook.mid_price}")
# HIP-3 builder perps (February 2026+)
hip3_instruments = client.hyperliquid.hip3.instruments.list()
hip3_orderbook = client.hyperliquid.hip3.orderbook.get("km:US500")
hip3_trades = client.hyperliquid.hip3.trades.recent("km:US500")
hip3_funding = client.hyperliquid.hip3.funding.current("xyz:XYZ100")
hip3_oi = client.hyperliquid.hip3.open_interest.current("xyz:XYZ100")
# Get historical order book snapshots
history = client.hyperliquid.orderbook.history(
    "ETH",
    start="2024-01-01",
    end="2024-01-02",
    limit=100
)
```
## Async Support
All methods have async versions prefixed with `a`:
```python
import asyncio
from oxarchive import Client

async def main():
    client = Client(api_key="ox_your_api_key")

    # Async get (Hyperliquid)
    orderbook = await client.hyperliquid.orderbook.aget("BTC")
    print(f"BTC mid price: {orderbook.mid_price}")

    # Async get (Lighter.xyz)
    lighter_ob = await client.lighter.orderbook.aget("BTC")

    # Don't forget to close the client
    await client.aclose()

asyncio.run(main())
```
Or use as async context manager:
```python
async with Client(api_key="ox_your_api_key") as client:
    orderbook = await client.hyperliquid.orderbook.aget("BTC")
```
## Configuration
```python
client = Client(
    api_key="ox_your_api_key",            # Required
    base_url="https://api.0xarchive.io",  # Optional
    timeout=30.0,  # Optional, request timeout in seconds (default: 30.0)
)
```
## REST API Reference
All examples use `client.hyperliquid.*` but the same methods are available on `client.lighter.*` for Lighter.xyz data.
### Order Book
```python
# Get current order book (Hyperliquid)
orderbook = client.hyperliquid.orderbook.get("BTC")
# Get current order book (Lighter.xyz)
orderbook = client.lighter.orderbook.get("BTC")
# Get order book at specific timestamp
historical = client.hyperliquid.orderbook.get("BTC", timestamp=1704067200000)
# Get with limited depth
shallow = client.hyperliquid.orderbook.get("BTC", depth=10)
# Get historical snapshots (start and end are required)
history = client.hyperliquid.orderbook.history(
    "BTC",
    start="2024-01-01",
    end="2024-01-02",
    limit=1000,
    depth=20  # Price levels per side
)
# Async versions
orderbook = await client.hyperliquid.orderbook.aget("BTC")
history = await client.hyperliquid.orderbook.ahistory("BTC", start=..., end=...)
```
#### Orderbook Depth Limits
The `depth` parameter controls how many price levels are returned per side. Tier-based limits apply:
| Tier | Max Depth |
|------|-----------|
| Free | 20 |
| Build | 50 |
| Pro | 100 |
| Enterprise | Full Depth |
**Note:** Hyperliquid source data only contains 20 levels. Higher limits apply to Lighter.xyz data.
#### Lighter Orderbook Granularity
Lighter.xyz orderbook history supports a `granularity` parameter for different data resolutions. Tier restrictions apply.
| Granularity | Interval | Tier Required | Credit Multiplier |
|-------------|----------|---------------|-------------------|
| `checkpoint` | ~60s | Free+ | 1x |
| `30s` | 30s | Build+ | 2x |
| `10s` | 10s | Build+ | 3x |
| `1s` | 1s | Pro+ | 10x |
| `tick` | tick-level | Enterprise | 20x |
```python
# Get Lighter orderbook history with 10s resolution (Build+ tier)
history = client.lighter.orderbook.history(
    "BTC",
    start="2024-01-01",
    end="2024-01-02",
    granularity="10s"
)

# Get 1-second resolution (Pro+ tier)
history = client.lighter.orderbook.history(
    "BTC",
    start="2024-01-01",
    end="2024-01-02",
    granularity="1s"
)

# Tick-level data (Enterprise tier) - returns checkpoint + raw deltas
history = client.lighter.orderbook.history(
    "BTC",
    start="2024-01-01",
    end="2024-01-02",
    granularity="tick"
)
```
**Note:** The `granularity` parameter is ignored for Hyperliquid orderbook history.
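As a rough illustration of the multiplier table above (the base credit cost per request depends on your plan; the numbers here are placeholders, not documented pricing):

```python
# Multipliers from the granularity table above
MULTIPLIERS = {"checkpoint": 1, "30s": 2, "10s": 3, "1s": 10, "tick": 20}

def estimated_credits(base_cost: int, granularity: str) -> int:
    """Estimate the credit cost of one history request at a granularity."""
    return base_cost * MULTIPLIERS[granularity]
```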
#### Orderbook Reconstruction (Enterprise Tier)
For tick-level data, the SDK provides client-side orderbook reconstruction. This efficiently reconstructs full orderbook state from a checkpoint and incremental deltas.
```python
from datetime import datetime, timedelta
from oxarchive import OrderBookReconstructor

# Option 1: Get fully reconstructed snapshots (simplest)
snapshots = client.lighter.orderbook.history_reconstructed(
    "BTC",
    start=datetime.now() - timedelta(hours=1),
    end=datetime.now()
)
for ob in snapshots:
    print(f"{ob.timestamp}: bid={ob.bids[0].px} ask={ob.asks[0].px}")

# Option 2: Get raw tick data for custom reconstruction
tick_data = client.lighter.orderbook.history_tick(
    "BTC",
    start=datetime.now() - timedelta(hours=1),
    end=datetime.now()
)
print(f"Checkpoint: {len(tick_data.checkpoint.bids)} bids")
print(f"Deltas: {len(tick_data.deltas)} updates")

# Option 3: Auto-paginating iterator (recommended for large time ranges)
# Automatically handles pagination, fetching up to 1,000 deltas per request
for snapshot in client.lighter.orderbook.iterate_tick_history(
    "BTC",
    start=datetime.now() - timedelta(days=1),  # 24 hours of data
    end=datetime.now()
):
    print(snapshot.timestamp, "Mid:", snapshot.mid_price)
    if some_condition:
        break  # Early exit supported

# Option 4: Manual iteration (single page, for custom logic)
for snapshot in client.lighter.orderbook.iterate_reconstructed(
    "BTC", start=start, end=end
):
    # Process each snapshot without loading all into memory
    process(snapshot)
    if some_condition:
        break  # Early exit if needed

# Option 5: Get only final state (most efficient)
reconstructor = client.lighter.orderbook.create_reconstructor()
final = reconstructor.reconstruct_final(tick_data.checkpoint, tick_data.deltas)

# Check for sequence gaps
gaps = OrderBookReconstructor.detect_gaps(tick_data.deltas)
if gaps:
    print("Sequence gaps detected:", gaps)

# Async versions available
snapshots = await client.lighter.orderbook.ahistory_reconstructed("BTC", start=..., end=...)
tick_data = await client.lighter.orderbook.ahistory_tick("BTC", start=..., end=...)

# Async auto-paginating iterator
async for snapshot in client.lighter.orderbook.aiterate_tick_history("BTC", start=..., end=...):
    process(snapshot)
```
**Methods:**
| Method | Description |
|--------|-------------|
| `history_tick(coin, ...)` | Get raw checkpoint + deltas (single page, max 1,000 deltas) |
| `history_reconstructed(coin, ...)` | Get fully reconstructed snapshots (single page) |
| `iterate_tick_history(coin, ...)` | Auto-paginating iterator for large time ranges |
| `aiterate_tick_history(coin, ...)` | Async auto-paginating iterator |
| `iterate_reconstructed(coin, ...)` | Memory-efficient iterator (single page) |
| `create_reconstructor()` | Create a reconstructor instance for manual control |
**Note:** The API returns a maximum of 1,000 deltas per request. For time ranges with more deltas, use `iterate_tick_history()` / `aiterate_tick_history()` which handle pagination automatically.
**Parameters:**
| Parameter | Default | Description |
|-----------|---------|-------------|
| `depth` | all | Maximum price levels in output |
| `emit_all` | `True` | If `False`, only return final state |
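The gap check shown above boils down to a scan over consecutive delta sequence numbers. A simplified stand-in for `OrderBookReconstructor.detect_gaps`, assuming deltas carry monotonically increasing integer sequence numbers:

```python
def detect_gaps(seqs: list[int]) -> list[tuple[int, int]]:
    """Return (prev, next) pairs where sequence numbers are not contiguous."""
    return [(a, b) for a, b in zip(seqs, seqs[1:]) if b != a + 1]
```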
### Trades
The trades API uses cursor-based pagination for efficient retrieval of large datasets.
```python
# Get trade history with cursor-based pagination
result = client.hyperliquid.trades.list("ETH", start="2024-01-01", end="2024-01-02", limit=1000)
trades = result.data

# Paginate through all results
while result.next_cursor:
    result = client.hyperliquid.trades.list(
        "ETH",
        start="2024-01-01",
        end="2024-01-02",
        cursor=result.next_cursor,
        limit=1000
    )
    trades.extend(result.data)

# Filter by side
buys = client.hyperliquid.trades.list("BTC", start=..., end=..., side="buy")

# Get recent trades (Lighter only - has real-time data)
recent = client.lighter.trades.recent("BTC", limit=100)

# Async versions
result = await client.hyperliquid.trades.alist("ETH", start=..., end=...)
recent = await client.lighter.trades.arecent("BTC", limit=100)
```
**Note:** The `recent()` method is only available for Lighter.xyz (`client.lighter.trades.recent()`). Hyperliquid does not have a recent trades endpoint - use `list()` with a time range instead.
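The pagination loop above generalizes to a small helper that works with any endpoint returning `.data` and `.next_cursor` (a convenience sketch, not part of the SDK):

```python
def iter_pages(fetch, **params):
    """Yield every item across all pages of a cursor-paginated endpoint.

    `fetch` is any callable that accepts a `cursor` keyword and returns
    an object with `.data` (list) and `.next_cursor` (str or None).
    """
    cursor = None
    while True:
        result = fetch(cursor=cursor, **params)
        yield from result.data
        cursor = result.next_cursor
        if not cursor:
            break
```

For example, `list(iter_pages(lambda **kw: client.hyperliquid.trades.list("ETH", start="2024-01-01", end="2024-01-02", **kw)))` would collect all trades in the range.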
### Instruments
```python
# List all trading instruments (Hyperliquid)
instruments = client.hyperliquid.instruments.list()
# Get specific instrument details
btc = client.hyperliquid.instruments.get("BTC")
print(f"BTC size decimals: {btc.sz_decimals}")
# Async versions
instruments = await client.hyperliquid.instruments.alist()
btc = await client.hyperliquid.instruments.aget("BTC")
```
#### Lighter.xyz Instruments
Lighter instruments have a different schema with additional fields for fees, market IDs, and minimum order amounts:
```python
# List Lighter instruments (returns LighterInstrument, not Instrument)
lighter_instruments = client.lighter.instruments.list()
# Get specific Lighter instrument
eth = client.lighter.instruments.get("ETH")
print(f"ETH taker fee: {eth.taker_fee}")
print(f"ETH maker fee: {eth.maker_fee}")
print(f"ETH market ID: {eth.market_id}")
print(f"ETH min base amount: {eth.min_base_amount}")
# Async versions
lighter_instruments = await client.lighter.instruments.alist()
eth = await client.lighter.instruments.aget("ETH")
```
**Key differences:**
| Field | Hyperliquid (`Instrument`) | Lighter (`LighterInstrument`) |
|-------|---------------------------|------------------------------|
| Symbol | `name` | `symbol` |
| Size decimals | `sz_decimals` | `size_decimals` |
| Fee info | Not available | `taker_fee`, `maker_fee`, `liquidation_fee` |
| Market ID | Not available | `market_id` |
| Min amounts | Not available | `min_base_amount`, `min_quote_amount` |
#### HIP-3 Instruments
HIP-3 instruments are derived from live market data and include mark price, open interest, and mid price:
```python
# List all HIP-3 instruments (no tier restriction)
hip3_instruments = client.hyperliquid.hip3.instruments.list()
for inst in hip3_instruments:
print(f"{inst.coin} ({inst.namespace}:{inst.ticker}): mark={inst.mark_price}, OI={inst.open_interest}")
# Get specific HIP-3 instrument (case-sensitive)
us500 = client.hyperliquid.hip3.instruments.get("km:US500")
print(f"Mark price: {us500.mark_price}")
# Async versions
hip3_instruments = await client.hyperliquid.hip3.instruments.alist()
us500 = await client.hyperliquid.hip3.instruments.aget("km:US500")
```
**Available HIP-3 Coins:**
| Builder | Coins |
|---------|-------|
| xyz (Hyperliquid) | `xyz:XYZ100` |
| km (Kinetiq Markets) | `km:US500`, `km:SMALL2000`, `km:GOOGL`, `km:USBOND`, `km:GOLD`, `km:USTECH`, `km:NVDA`, `km:SILVER`, `km:BABA` |
### Funding Rates
```python
# Get current funding rate
current = client.hyperliquid.funding.current("BTC")
# Get funding rate history (start is required)
history = client.hyperliquid.funding.history(
"ETH",
start="2024-01-01",
end="2024-01-07"
)
# Async versions
current = await client.hyperliquid.funding.acurrent("BTC")
history = await client.hyperliquid.funding.ahistory("ETH", start=..., end=...)
```
### Open Interest
```python
# Get current open interest
current = client.hyperliquid.open_interest.current("BTC")
# Get open interest history (start is required)
history = client.hyperliquid.open_interest.history(
"ETH",
start="2024-01-01",
end="2024-01-07"
)
# Async versions
current = await client.hyperliquid.open_interest.acurrent("BTC")
history = await client.hyperliquid.open_interest.ahistory("ETH", start=..., end=...)
```
### Liquidations (Hyperliquid only)
Get historical liquidation events. Data available from May 2025 onwards.
```python
# Get liquidation history for a coin
liquidations = client.hyperliquid.liquidations.history(
"BTC",
start="2025-06-01",
end="2025-06-02",
limit=100
)
# Paginate through all results
all_liquidations = list(liquidations.data)
while liquidations.next_cursor:
liquidations = client.hyperliquid.liquidations.history(
"BTC",
start="2025-06-01",
end="2025-06-02",
cursor=liquidations.next_cursor,
limit=1000
)
all_liquidations.extend(liquidations.data)
# Get liquidations for a specific user
user_liquidations = client.hyperliquid.liquidations.by_user(
"0x1234...",
start="2025-06-01",
end="2025-06-07",
coin="BTC" # optional filter
)
# Async versions
liquidations = await client.hyperliquid.liquidations.ahistory("BTC", start=..., end=...)
user_liquidations = await client.hyperliquid.liquidations.aby_user("0x...", start=..., end=...)
```
### Candles (OHLCV)
Get historical OHLCV candle data aggregated from trades.
```python
# Get candle history (start is required)
candles = client.hyperliquid.candles.history(
"BTC",
start="2024-01-01",
end="2024-01-02",
interval="1h", # 1m, 5m, 15m, 30m, 1h, 4h, 1d, 1w
limit=100
)
# Iterate through candles
for candle in candles.data:
print(f"{candle.timestamp}: O={candle.open} H={candle.high} L={candle.low} C={candle.close} V={candle.volume}")
# Cursor-based pagination for large datasets
result = client.hyperliquid.candles.history("BTC", start=..., end=..., interval="1m", limit=1000)
while result.next_cursor:
result = client.hyperliquid.candles.history(
"BTC", start=..., end=..., interval="1m",
cursor=result.next_cursor, limit=1000
)
# Lighter.xyz candles
lighter_candles = client.lighter.candles.history(
"BTC",
start="2024-01-01",
end="2024-01-02",
interval="15m"
)
# Async versions
candles = await client.hyperliquid.candles.ahistory("BTC", start=..., end=..., interval="1h")
```
#### Available Intervals
| Interval | Description |
|----------|-------------|
| `1m` | 1 minute |
| `5m` | 5 minutes |
| `15m` | 15 minutes |
| `30m` | 30 minutes |
| `1h` | 1 hour (default) |
| `4h` | 4 hours |
| `1d` | 1 day |
| `1w` | 1 week |
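For local bucket math or resampling it helps to turn these interval strings into seconds. A plain-Python sketch covering exactly the intervals in the table above (not an SDK helper):

```python
# Seconds per unit for the documented intervals: m, h, d, w
_UNIT_SECONDS = {"m": 60, "h": 3600, "d": 86400, "w": 604800}

def interval_seconds(interval: str) -> int:
    """Convert an interval string like '15m' or '1h' to seconds."""
    value, unit = int(interval[:-1]), interval[-1]
    if unit not in _UNIT_SECONDS:
        raise ValueError(f"unknown interval unit: {unit!r}")
    return value * _UNIT_SECONDS[unit]
```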
### Data Quality Monitoring
Monitor data coverage, incidents, latency, and SLA compliance across all exchanges.
```python
# Get overall system health status
status = client.data_quality.status()
print(f"System status: {status.status}")
for exchange, info in status.exchanges.items():
print(f" {exchange}: {info.status}")
# Get data coverage summary for all exchanges
coverage = client.data_quality.coverage()
for exchange in coverage.exchanges:
print(f"{exchange.exchange}:")
for dtype, info in exchange.data_types.items():
print(f" {dtype}: {info.total_records:,} records, {info.completeness}% complete")
# Get symbol-specific coverage with gap detection
btc = client.data_quality.symbol_coverage("hyperliquid", "BTC")
oi = btc.data_types["open_interest"]
print(f"BTC OI completeness: {oi.completeness}%")
print(f"Historical coverage: {oi.historical_coverage}%") # Hour-level granularity
print(f"Gaps found: {len(oi.gaps)}")
for gap in oi.gaps[:5]:
print(f" {gap.duration_minutes} min gap: {gap.start} -> {gap.end}")
# Check empirical data cadence (when available)
ob = btc.data_types["orderbook"]
if ob.cadence:
print(f"Orderbook cadence: ~{ob.cadence.median_interval_seconds}s median, p95={ob.cadence.p95_interval_seconds}s")
# Time-bounded gap detection (last 7 days)
from datetime import datetime, timedelta, timezone
week_ago = datetime.now(timezone.utc) - timedelta(days=7)
btc_7d = client.data_quality.symbol_coverage("hyperliquid", "BTC", from_time=week_ago)
# List incidents with filtering
result = client.data_quality.list_incidents(status="open")
for incident in result.incidents:
print(f"[{incident.severity}] {incident.title}")
# Get latency metrics
latency = client.data_quality.latency()
for exchange, metrics in latency.exchanges.items():
print(f"{exchange}: OB lag {metrics.data_freshness.orderbook_lag_ms}ms")
# Get SLA compliance metrics for a specific month
sla = client.data_quality.sla(year=2026, month=1)
print(f"Period: {sla.period}")
print(f"Uptime: {sla.actual.uptime}% ({sla.actual.uptime_status})")
print(f"API P99: {sla.actual.api_latency_p99_ms}ms ({sla.actual.latency_status})")
# Async versions available for all methods
status = await client.data_quality.astatus()
coverage = await client.data_quality.acoverage()
```
#### Data Quality Endpoints
| Method | Description |
|--------|-------------|
| `status()` | Overall system health and per-exchange status |
| `coverage()` | Data coverage summary for all exchanges |
| `exchange_coverage(exchange)` | Coverage details for a specific exchange |
| `symbol_coverage(exchange, symbol, *, from_time, to_time)` | Coverage with gap detection, cadence, and historical coverage |
| `list_incidents(...)` | List incidents with filtering and pagination |
| `get_incident(incident_id)` | Get specific incident details |
| `latency()` | Current latency metrics (WebSocket, REST, data freshness) |
| `sla(year, month)` | SLA compliance metrics for a specific month |
**Note:** Data Quality endpoints (`coverage()`, `exchange_coverage()`, `symbol_coverage()`) perform complex aggregation queries and may take 30-60 seconds on first request (results are cached server-side for 5 minutes). If you encounter timeout errors, create a client with a longer timeout:
```python
client = Client(
api_key="ox_your_api_key",
timeout=60.0 # 60 seconds for data quality endpoints
)
```
### Legacy API (Deprecated)
The following legacy methods are deprecated and will be removed in v2.0. They default to Hyperliquid data:
```python
# Deprecated - use client.hyperliquid.orderbook.get() instead
orderbook = client.orderbook.get("BTC")
# Deprecated - use client.hyperliquid.trades.list() instead
trades = client.trades.list("BTC", start=..., end=...)
```
## WebSocket Client
The WebSocket client supports three modes: real-time streaming, historical replay, and bulk streaming.
```python
import asyncio
from oxarchive import OxArchiveWs, WsOptions
ws = OxArchiveWs(WsOptions(api_key="ox_your_api_key"))
```
### Real-time Streaming
Subscribe to live market data from Hyperliquid.
```python
import asyncio
from oxarchive import OxArchiveWs, WsOptions
async def main():
ws = OxArchiveWs(WsOptions(api_key="ox_your_api_key"))
# Set up handlers
ws.on_open(lambda: print("Connected"))
ws.on_close(lambda code, reason: print(f"Disconnected: {code}"))
ws.on_error(lambda e: print(f"Error: {e}"))
# Connect
await ws.connect()
# Subscribe to channels
ws.subscribe_orderbook("BTC")
ws.subscribe_orderbook("ETH")
ws.subscribe_trades("BTC")
ws.subscribe_all_tickers()
# Handle real-time data
ws.on_orderbook(lambda coin, data: print(f"{coin}: {data.mid_price}"))
ws.on_trades(lambda coin, trades: print(f"{coin}: {len(trades)} trades"))
# Keep running
await asyncio.sleep(60)
# Unsubscribe and disconnect
ws.unsubscribe_orderbook("ETH")
await ws.disconnect()
asyncio.run(main())
```
### Historical Replay
Replay historical data with timing preserved. Perfect for backtesting.
> **Important:** Replay data is delivered via `on_historical_data()`, NOT `on_trades()` or `on_orderbook()`.
> The real-time callbacks only receive live market data from subscriptions.
```python
import asyncio
import time
from oxarchive import OxArchiveWs, WsOptions
async def main():
ws = OxArchiveWs(WsOptions(api_key="ox_..."))
# Handle replay data - this is where historical records arrive
ws.on_historical_data(lambda coin, ts, data:
print(f"{ts}: {data['mid_price']}")
)
# Replay lifecycle events
ws.on_replay_start(lambda ch, coin, start, end, speed:
print(f"Starting replay: {ch}/{coin} at {speed}x")
)
ws.on_replay_complete(lambda ch, coin, sent:
print(f"Replay complete: {sent} records")
)
await ws.connect()
# Start replay at 10x speed
await ws.replay(
"orderbook", "BTC",
start=int(time.time() * 1000) - 86400000, # 24 hours ago
end=int(time.time() * 1000), # Optional
speed=10 # Optional, defaults to 1x
)
# Lighter.xyz replay with granularity (tier restrictions apply)
await ws.replay(
"orderbook", "BTC",
start=int(time.time() * 1000) - 86400000,
speed=10,
granularity="10s" # Options: 'checkpoint', '30s', '10s', '1s', 'tick'
)
# Handle tick-level data (granularity='tick', Enterprise tier)
ws.on_historical_tick_data(lambda coin, checkpoint, deltas:
print(f"Checkpoint: {len(checkpoint['bids'])} bids, Deltas: {len(deltas)}")
)
# Control playback
await ws.replay_pause()
await ws.replay_resume()
await ws.replay_seek(1704067200000) # Jump to timestamp
await ws.replay_stop()
asyncio.run(main())
```
### Bulk Streaming
Fast bulk download for data pipelines. Data arrives in batches without timing delays.
```python
import asyncio
import time
from oxarchive import OxArchiveWs, WsOptions
async def main():
ws = OxArchiveWs(WsOptions(api_key="ox_..."))
all_data = []
# Handle batched data
ws.on_batch(lambda coin, records:
all_data.extend([r.data for r in records])
)
ws.on_stream_progress(lambda snapshots_sent:
print(f"Progress: {snapshots_sent} snapshots")
)
ws.on_stream_complete(lambda ch, coin, sent:
print(f"Downloaded {sent} records")
)
await ws.connect()
# Start bulk stream
await ws.stream(
"orderbook", "ETH",
start=int(time.time() * 1000) - 3600000, # 1 hour ago
end=int(time.time() * 1000),
batch_size=1000 # Optional, defaults to 1000
)
# Lighter.xyz stream with granularity (tier restrictions apply)
await ws.stream(
"orderbook", "BTC",
start=int(time.time() * 1000) - 3600000,
end=int(time.time() * 1000),
granularity="10s" # Options: 'checkpoint', '30s', '10s', '1s', 'tick'
)
# Stop if needed
await ws.stream_stop()
asyncio.run(main())
```
### Gap Detection
During historical replay and bulk streaming, the server automatically detects gaps in the data and notifies the client. This helps identify periods where data may be missing.
```python
import asyncio
import time
from oxarchive import OxArchiveWs, WsOptions
async def main():
ws = OxArchiveWs(WsOptions(api_key="ox_..."))
# Handle gap notifications during replay/stream
def handle_gap(channel, coin, gap_start, gap_end, duration_minutes):
print(f"Gap detected in {channel}/{coin}:")
print(f" From: {gap_start}")
print(f" To: {gap_end}")
print(f" Duration: {duration_minutes} minutes")
ws.on_gap(handle_gap)
await ws.connect()
# Start replay - gaps will be reported via on_gap callback
await ws.replay(
"orderbook", "BTC",
start=int(time.time() * 1000) - 86400000,
end=int(time.time() * 1000),
speed=10
)
asyncio.run(main())
```
Gap thresholds vary by channel:
- **orderbook**, **candles**, **liquidations**: 2 minutes
- **trades**: 60 minutes (trades can naturally have longer gaps during low activity periods)
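Because the server applies different thresholds per channel, a client-side filter that mirrors them can separate expected quiet periods from gaps worth alerting on. A sketch using the threshold values listed above; the `factor` multiplier is an arbitrary choice for illustration:

```python
# Server-side gap thresholds in minutes, per the list above
GAP_THRESHOLD_MINUTES = {
    "orderbook": 2,
    "candles": 2,
    "liquidations": 2,
    "trades": 60,
}

def is_significant_gap(channel: str, duration_minutes: float, factor: float = 5.0) -> bool:
    """Flag gaps much longer than the channel's reporting threshold."""
    threshold = GAP_THRESHOLD_MINUTES.get(channel, 2)
    return duration_minutes >= factor * threshold
```

Inside an `on_gap` handler, this lets you log every gap but only page on ones like `is_significant_gap("orderbook", 15)`.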
### WebSocket Configuration
```python
ws = OxArchiveWs(WsOptions(
api_key="ox_your_api_key",
ws_url="wss://api.0xarchive.io/ws", # Optional
auto_reconnect=True, # Auto-reconnect on disconnect (default: True)
reconnect_delay=1.0, # Initial reconnect delay in seconds (default: 1.0)
max_reconnect_attempts=10, # Max reconnect attempts (default: 10)
ping_interval=30.0, # Keep-alive ping interval in seconds (default: 30.0)
))
```
### Available Channels
#### Hyperliquid Channels
| Channel | Description | Requires Coin | Historical Support |
|---------|-------------|---------------|-------------------|
| `orderbook` | L2 order book updates | Yes | Yes |
| `trades` | Trade/fill updates | Yes | Yes |
| `candles` | OHLCV candle data | Yes | Yes (replay/stream only) |
| `liquidations` | Liquidation events (May 2025+) | Yes | Yes (replay/stream only) |
| `ticker` | Price and 24h volume | Yes | Real-time only |
| `all_tickers` | All market tickers | No | Real-time only |
#### HIP-3 Builder Perps Channels
| Channel | Description | Requires Coin | Historical Support |
|---------|-------------|---------------|-------------------|
| `hip3_orderbook` | HIP-3 L2 order book snapshots | Yes | Yes |
| `hip3_trades` | HIP-3 trade/fill updates | Yes | Yes |
| `hip3_candles` | HIP-3 OHLCV candle data | Yes | Yes |
> **Note:** HIP-3 coins are case-sensitive (e.g., `km:US500`, `xyz:XYZ100`). Do not uppercase them.
#### Lighter.xyz Channels
| Channel | Description | Requires Coin | Historical Support |
|---------|-------------|---------------|-------------------|
| `lighter_orderbook` | Lighter L2 order book (reconstructed) | Yes | Yes |
| `lighter_trades` | Lighter trade/fill updates | Yes | Yes |
| `lighter_candles` | Lighter OHLCV candle data | Yes | Yes |
#### Candle Replay/Stream
```python
# Replay candles at 10x speed
await ws.replay(
"candles", "BTC",
start=int(time.time() * 1000) - 86400000,
end=int(time.time() * 1000),
speed=10,
interval="15m" # 1m, 5m, 15m, 30m, 1h, 4h, 1d, 1w
)
# Bulk stream candles
await ws.stream(
"candles", "ETH",
start=int(time.time() * 1000) - 3600000,
end=int(time.time() * 1000),
batch_size=1000,
interval="1h"
)
# Lighter.xyz candles
await ws.replay(
"lighter_candles", "BTC",
start=...,
speed=10,
interval="5m"
)
```
#### HIP-3 Replay/Stream
```python
# Replay HIP-3 orderbook at 50x speed
await ws.replay(
"hip3_orderbook", "km:US500",
start=int(time.time() * 1000) - 3600000,
end=int(time.time() * 1000),
speed=50,
)
# Bulk stream HIP-3 trades
await ws.stream(
"hip3_trades", "xyz:XYZ100",
start=int(time.time() * 1000) - 86400000,
end=int(time.time() * 1000),
batch_size=1000,
)
# HIP-3 candles
await ws.replay(
"hip3_candles", "km:US500",
start=int(time.time() * 1000) - 86400000,
end=int(time.time() * 1000),
speed=100,
interval="1h"
)
```
## Timestamp Formats
The SDK accepts timestamps in multiple formats:
```python
from datetime import datetime
# Unix milliseconds (int)
client.hyperliquid.orderbook.get("BTC", timestamp=1704067200000)
# ISO string
client.hyperliquid.orderbook.history("BTC", start="2024-01-01", end="2024-01-02")
# datetime object
client.hyperliquid.orderbook.history(
    "BTC",
    start=datetime(2024, 1, 1),
    end=datetime(2024, 1, 2)
)
```
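The REST methods accept all three forms, but the WebSocket `replay()`/`stream()` examples above pass Unix milliseconds. A small converter covering the same three formats can bridge the two (a sketch; the SDK may ship its own normalization internally):

```python
from datetime import datetime, timezone
from typing import Union

def to_unix_ms(ts: Union[int, str, datetime]) -> int:
    """Convert an int (already ms), ISO string, or datetime to Unix ms."""
    if isinstance(ts, int):
        return ts
    if isinstance(ts, str):
        ts = datetime.fromisoformat(ts)
    if ts.tzinfo is None:
        # Assumption: naive datetimes are treated as UTC
        ts = ts.replace(tzinfo=timezone.utc)
    return int(ts.timestamp() * 1000)
```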
## Error Handling
```python
from oxarchive import Client, OxArchiveError
client = Client(api_key="ox_your_api_key")
try:
orderbook = client.orderbook.get("INVALID")
except OxArchiveError as e:
print(f"API Error: {e.message}")
print(f"Status Code: {e.code}")
print(f"Request ID: {e.request_id}")
```
## Type Hints
Full type hint support with Pydantic models:
```python
from oxarchive import Client, LighterGranularity
from oxarchive.types import OrderBook, Trade, Instrument, LighterInstrument, FundingRate, OpenInterest, Candle, Liquidation
from oxarchive.resources.trades import CursorResponse
# Orderbook reconstruction types (Enterprise)
from oxarchive import (
OrderBookReconstructor,
OrderbookDelta,
TickData,
ReconstructedOrderBook,
ReconstructOptions,
)
client = Client(api_key="ox_your_api_key")
orderbook: OrderBook = client.hyperliquid.orderbook.get("BTC")
result: CursorResponse = client.hyperliquid.trades.list("BTC", start=..., end=...)
# Lighter has real-time data, so recent() is available
recent: list[Trade] = client.lighter.trades.recent("BTC")
# Lighter granularity type hint
granularity: LighterGranularity = "10s"
# Orderbook reconstruction (Enterprise)
tick_data: TickData = client.lighter.orderbook.history_tick("BTC", start=..., end=...)
snapshots: list[ReconstructedOrderBook] = client.lighter.orderbook.history_reconstructed("BTC", start=..., end=...)
```
## Requirements
- Python 3.9+
- httpx
- pydantic
## License
MIT
| text/markdown | null | 0xarchive <support@0xarchive.io> | null | null | null | 0xarchive, api, historical-data, hyperliquid, orderbook, sdk, trading | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25.0",
"pydantic>=2.0.0",
"websockets>=14.0; extra == \"all\"",
"mypy>=1.9.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"websockets>=14.0; extra == \"websocket\""
] | [] | [] | [] | [
"Homepage, https://0xarchive.io",
"Documentation, https://0xarchive.io/docs/sdks",
"Repository, https://github.com/0xarchiveIO/sdk-python",
"Issues, https://github.com/0xarchiveIO/sdk-python/issues"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T15:43:30.613146 | oxarchive-0.8.2.tar.gz | 42,259 | 4f/cc/3090956b3260d9d76f00add1812fbea8c0026d812bae6f7e36e9f4d70b68/oxarchive-0.8.2.tar.gz | source | sdist | null | false | 9f4059a217ae7d00e2157ae6d18121b4 | 2c2fe2298e30ba4157c5d77afb46db5a9f5d696cbb4dbe40a95efd228ee00a8b | 4fcc3090956b3260d9d76f00add1812fbea8c0026d812bae6f7e36e9f4d70b68 | MIT | [] | 210 |
2.4 | pytest-aitest | 0.5.7 | Pytest plugin for testing AI agents with MCP and CLI servers | # pytest-aitest
[](https://pypi.org/project/pytest-aitest/)
[](https://pypi.org/project/pytest-aitest/)
[](https://github.com/sbroenne/pytest-aitest/actions/workflows/ci.yml)
[](https://opensource.org/licenses/MIT)
**Test your AI interfaces. AI analyzes your results.**
A pytest plugin for test-driven development of MCP servers, tools, prompts, and skills. Write tests first. Let the AI analysis drive your design.
## Why?
Your MCP server passes all unit tests. Then an LLM tries to use it and picks the wrong tool, passes garbage parameters, or ignores your system prompt.
**Because you tested the code, not the AI interface.** For LLMs, your API is tool descriptions, schemas, and prompts — not functions and types. No compiler catches a bad tool description. No linter flags a confusing schema. Traditional tests can't validate them.
## How It Works
So I built pytest-aitest: write tests as natural language prompts. An **Agent** bundles an LLM with your tools — you assert on what happened:
```python
from pytest_aitest import Agent, Provider, MCPServer
async def test_balance_query(aitest_run):
agent = Agent(
provider=Provider(model="azure/gpt-5-mini"),
mcp_servers=[MCPServer(command=["python", "-m", "my_banking_server"])],
)
result = await aitest_run(agent, "What's my checking balance?")
assert result.success
assert result.tool_was_called("get_balance")
```
If the test fails, your tool descriptions need work — not your code. This is **test-driven development for AI interfaces**:
1. **Write a test** — a prompt that describes what a user would say
2. **Run it** — the LLM tries to use your tools and fails
3. **Fix the interface** — improve tool descriptions, schemas, or prompts until it passes
4. **AI analysis tells you what else to optimize** — cost, redundant calls, unused tools
## AI Analysis
AI analyzes your results and tells you **what to fix**: which model to deploy, how to improve tool descriptions, where to cut costs. [See a sample report →](https://sbroenne.github.io/pytest-aitest/demo/hero-report.html)

## Quick Start
Install:
```bash
uv add pytest-aitest
```
Configure in `pyproject.toml`:
```toml
[tool.pytest.ini_options]
addopts = """
--aitest-summary-model=azure/gpt-5.2-chat
"""
```
Set credentials and run:
```bash
export AZURE_API_BASE=https://your-resource.openai.azure.com/
az login
pytest tests/
```
## Features
- **MCP Server Testing** — Real models against real tool interfaces
- **CLI Server Testing** — Wrap CLIs as testable tool servers
- **Agent Comparison** — Compare models, prompts, skills, and server versions
- **Agent Leaderboard** — Auto-ranked by pass rate and cost
- **Multi-Turn Sessions** — Test conversations that build on context
- **AI Analysis** — Actionable feedback on tool descriptions, prompts, and costs
- **Multi-Provider** — Any model via [Pydantic AI](https://ai.pydantic.dev/) (OpenAI, Anthropic, Gemini, Azure, Bedrock, Mistral, and more)
- **Clarification Detection** — Catch agents that ask questions instead of acting
- **Semantic Assertions** — Built-in `llm_assert` fixture powered by [pydantic-evals](https://ai.pydantic.dev/evals/) LLM judge
- **Multi-Dimension Scoring** — `llm_score` fixture for granular quality measurement across named dimensions
- **Image Assertions** — `llm_assert_image` for AI-graded visual evaluation of screenshots and charts
- **Cost Estimation** — Automatic per-test cost tracking with pricing from litellm + custom overrides
## Who This Is For
- **MCP server authors** — Validate that LLMs can actually use your tools
- **Agent builders** — Compare models, prompts, and skills to find the best configuration
- **Teams shipping AI systems** — Catch LLM-facing regressions in CI/CD
## Documentation
📚 **[Full Documentation](https://sbroenne.github.io/pytest-aitest/)**
## Requirements
- Python 3.11+
- pytest 9.0+
- An LLM provider (Azure, OpenAI, Anthropic, etc.)
## Acknowledgments
Inspired by [agent-benchmark](https://github.com/mykhaliev/agent-benchmark).
## License
MIT
| text/markdown | Stefan Brunner | null | null | null | MIT | agents, ai, llm, mcp, pytest, testing | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"azure-identity>=1.25.2",
"htpy>=25.12.0",
"litellm>=1.81.13",
"markdown>=3.10.2",
"mcp>=1.26",
"mdutils>=1.8.1",
"pydantic-ai>=1.61.0",
"pydantic-evals>=1.61.0",
"pydantic>=2.0",
"pytest>=9.0",
"python-frontmatter>=1.1.0",
"pre-commit>=4.5; extra == \"dev\"",
"pyright>=1.1.408; extra == \"dev\"",
"pytest-asyncio>=1.3; extra == \"dev\"",
"pytest-cov>=7.0; extra == \"dev\"",
"python-dotenv>=1.2; extra == \"dev\"",
"ruff>=0.15; extra == \"dev\"",
"typeguard>=4.5; extra == \"dev\"",
"cairosvg>=2.7; extra == \"docs\"",
"mkdocs-material>=9.7; extra == \"docs\"",
"mkdocs>=1.6; extra == \"docs\"",
"mkdocstrings[python]>=0.24; extra == \"docs\"",
"pillow>=11.0; extra == \"docs\"",
"syrupy>=5.1; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/sbroenne/pytest-aitest",
"Repository, https://github.com/sbroenne/pytest-aitest"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:43:28.760497 | pytest_aitest-0.5.7.tar.gz | 99,690 | e7/3e/28d4d1e37ca07af0f0a12c457a7c774e9f562a348edef42f9d9321983c8b/pytest_aitest-0.5.7.tar.gz | source | sdist | null | false | 9acc4974347cc2bf4544a2a8fb741dcd | e5552aea9644cd17d63b3d27ebbe9b61561b3ce1bec25bca671c5bda6bc79c64 | e73e28d4d1e37ca07af0f0a12c457a7c774e9f562a348edef42f9d9321983c8b | null | [
"LICENSE"
] | 289 |
2.4 | math-core | 0.5.1 | Convert LaTeX math to MathML Core | # math-core
A Python library for converting LaTeX math expressions to MathML Core.
## Overview
`math-core` converts LaTeX mathematical expressions into MathML Core, a streamlined subset of MathML that is supported by all major web browsers. It lets you render mathematical content on the web without requiring JavaScript libraries or polyfills.
## Features
- Convert LaTeX math expressions to MathML Core
- Support for both inline and display (block) math
- Define custom LaTeX macros for extended functionality
- Global and local counter for numbered equations
- Pretty-printing option for readable MathML output
- Comprehensive error handling with descriptive error messages
## Installation
```bash
pip install math-core
```
## Quick Start
```python
from math_core import LatexToMathML
# Create a converter instance
converter = LatexToMathML()
# Convert inline math
mathml = converter.convert_with_local_counter("x^2 + y^2 = z^2", displaystyle=False)
print(mathml)
# Output: <math><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup><mo>=</mo><msup><mi>z</mi><mn>2</mn></msup></math>
# Convert display math
mathml = converter.convert_with_local_counter(r"\frac{1}{2}", displaystyle=True)
print(mathml)
# Output: <math display="block"><mfrac><mn>1</mn><mn>2</mn></mfrac></math>
```
## Usage
### Basic Usage
```python
from math_core import LatexToMathML, LatexError
# Initialize converter
converter = LatexToMathML(pretty_print="always")
# Convert LaTeX to MathML
try:
mathml = converter.convert_with_local_counter(r"\sqrt{x^2 + 1}", displaystyle=False)
print(mathml)
except LatexError as e:
print(f"Conversion error: {e}")
```
### Custom LaTeX Macros
Define custom macros to extend or modify LaTeX command behavior:
```python
# Define custom macros
macros = {
"d": r"\mathrm{d}", # Differential d
"R": r"\mathbb{R}", # Real numbers
"vec": r"\mathbf{#1}" # Vector notation
}
converter = LatexToMathML(macros=macros)
mathml = converter.convert_with_local_counter(r"\d x", displaystyle=False)
```
### Numbered Equations with Global Counter
For documents with multiple numbered equations:
```python
converter = LatexToMathML()
# First equation gets (1)
eq1 = converter.convert_with_global_counter(
r"\begin{align}E = mc^2\end{align}",
displaystyle=True
)
# Second equation gets (2)
eq2 = converter.convert_with_global_counter(
r"\begin{align}F = ma\end{align}",
displaystyle=True
)
# Reset counter when starting a new chapter/section
converter.reset_global_counter()
# This equation gets (1) again
eq3 = converter.convert_with_global_counter(
r"\begin{align}p = mv\end{align}",
displaystyle=True
)
```
### Local Counter for Independent Numbering
Use local counters when equation numbers should restart within each conversion:
```python
converter = LatexToMathML()
# Each conversion has independent numbering
doc1 = converter.convert_with_local_counter(
r"\begin{align}a &= b\\c &= d\end{align}",
displaystyle=True
) # Contains (1) and (2)
doc2 = converter.convert_with_local_counter(
r"\begin{align}x &= y\\z &= w\end{align}",
displaystyle=True
) # Also contains (1) and (2)
```
## API Reference
### LatexToMathML
The main converter class.
**Constructor parameters:**
- `pretty_print` (`str`, optional): A string indicating whether to pretty print the MathML output. Options are `"never"`, `"always"`, or `"auto"`; `"auto"` means that all block equations will be pretty printed. Default: `"never"`.
- `macros` (`dict[str, str]`, optional): Dictionary of LaTeX macros for custom commands.
- `xml_namespace` (`bool`, optional): A boolean indicating whether to include `xmlns="http://www.w3.org/1998/Math/MathML"` in the `<math>` tag. Default: `False`.
- `continue_on_error` (`bool`, optional): A boolean indicating whether to continue on conversion errors instead of raising an exception. If conversion fails and this is `True`, an HTML snippet describing the error is returned instead of raising `LatexError`. Default: `False`.
- `ignore_unknown_commands` (`bool`, optional): A boolean indicating whether to ignore unknown LaTeX commands. If `True`, unknown commands will be rendered as red text and the conversion will continue. Default: `False`.
- `annotation` (`bool`, optional): A boolean indicating whether to include the original LaTeX as an annotation in the MathML output. Default: `False`.
**Methods:**
- `convert_with_global_counter(latex: str, displaystyle: bool) -> str | LatexError`: Convert LaTeX to MathML using a global equation counter.
- `convert_with_local_counter(latex: str, displaystyle: bool) -> str | LatexError`: Convert LaTeX to MathML using a local equation counter.
- `reset_global_counter() -> None`: Reset the global equation counter to zero.
### LatexError
Exception raised when LaTeX parsing or conversion fails.
```python
from math_core import LatexToMathML, LatexError
converter = LatexToMathML()
try:
    result = converter.convert_with_local_counter(r"\invalid", displaystyle=False)
except LatexError as e:
print(f"Conversion failed: {e}")
```
## Use Cases
### Static Site Generators
Integrate `math-core` into your static site generator to convert LaTeX in Markdown files:
```python
import re
from math_core import LatexToMathML
converter = LatexToMathML(pretty_print="auto")
def process_math(content):
# Replace display math $$...$$; do this first to avoid conflicts with inline math delimiters
content = re.sub(
r"\$\$([^\$]+)\$\$",
lambda m: converter.convert_with_local_counter(m.group(1), displaystyle=True),
content,
)
# Replace inline math $...$
content = re.sub(
r"\$([^\$]+)\$",
lambda m: converter.convert_with_local_counter(m.group(1), displaystyle=False),
content,
)
return content
```
### Web Applications
Generate MathML on the server side:
```python
from flask import Flask, render_template_string
from math_core import LatexToMathML, LatexError
app = Flask(__name__)
converter = LatexToMathML()
@app.route("/equation/<latex>")
def render_equation(latex):
try:
mathml = converter.convert_with_local_counter(latex, displaystyle=True)
return render_template_string(
"<html><body>{{ mathml|safe }}</body></html>", mathml=mathml
)
except LatexError:
return "Invalid equation", 400
```
## Why MathML Core?
MathML Core is a carefully selected subset of MathML 4 that focuses on essential mathematical notation while ensuring consistent rendering across browsers. Unlike full MathML or JavaScript-based solutions:
- **Native browser support**: No JavaScript required
- **Accessibility**: Better screen reader support
- **Performance**: Faster rendering than JS solutions
- **SEO-friendly**: Search engines can index mathematical content
- **Future-proof**: Part of web standards with ongoing browser support
## Browser Support
Firefox currently has the most complete support for MathML Core, with Chrome close behind. Safari has the least complete support and some rendering issues remain, but it is improving with each release.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
| text/markdown; charset=UTF-8; variant=GFM | null | Thomas MK <tmke8@posteo.net> | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest; extra == \"tests\""
] | [] | [] | [] | [
"repository, https://github.com/tmke8/math-core.git"
] | maturin/1.12.3 | 2026-02-20T15:42:41.332173 | math_core-0.5.1-cp314-cp314-win_amd64.whl | 190,443 | 32/cd/52232560080648548cea8c183ea25bd991cbdc6d078826cca01f9cc50548/math_core-0.5.1-cp314-cp314-win_amd64.whl | cp314 | bdist_wheel | null | false | 9a19e94717eba6aae8e4e36ac04fc21b | d90d694a0a34df41d01007e0405d4cc458befc9e914c8188361ce4aa4c316c5a | 32cd52232560080648548cea8c183ea25bd991cbdc6d078826cca01f9cc50548 | null | [] | 2,881 |
2.4 | sphinxcontrib-typstbuilder | 0.1.0 | Build PDF from your Sphinx documentation using Typst | # sphinxcontrib-typstbuilder
[](https://pypi.org/project/sphinxcontrib-typstbuilder)
[](https://pypi.org/project/sphinxcontrib-typstbuilder)
-----
`sphinxcontrib-typstbuilder` is a [Sphinx] extension
for building [Sphinx] documentation as PDFs
by using [Typst].
[Sphinx]: https://www.sphinx-doc.org/
[Sphinx domain]: https://www.sphinx-doc.org/en/master/usage/domains/index.html
[Typst]: https://typst.app/
## Documentation
See the [`sphinxcontrib-typstbuilder` documentation] for installation and usage.
[`sphinxcontrib-typstbuilder` documentation]: https://sphinxcontrib-typstbuilder.readthedocs.io/en/stable/index.html
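As a minimal sketch of how Sphinx extensions are typically enabled (the extension module name below is an assumption — see the documentation above for the authoritative setup), you would add it to the `extensions` list in your project's `conf.py`:

```python
# conf.py -- the extension name below is an assumption; check the
# linked documentation for the authoritative name and options.
extensions = [
    "sphinxcontrib.typstbuilder",
]
```

The PDF build is then invoked through the extension's Typst builder via `sphinx-build`, as described in the documentation.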
## License
Licensed under the EUPL
| text/markdown | null | Minijackson <minijackson@riseup.net> | null | Minijackson <minijackson@riseup.net> | null | sphinx, typst | [
"Development Status :: 4 - Beta",
"Framework :: Sphinx",
"Framework :: Sphinx :: Domain",
"Framework :: Sphinx :: Extension",
"License :: OSI Approved",
"License :: OSI Approved :: European Union Public Licence 1.2 (EUPL 1.2)",
"Topic :: Documentation :: Sphinx"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"sphinx>=7.0.0",
"furo; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"pytest; extra == \"tests\""
] | [] | [] | [] | [
"Documentation, https://github.com/minijackson/sphinxcontrib-typstbuilder#readme",
"Issues, https://github.com/minijackson/sphinxcontrib-typstbuilder/issues",
"Source, https://github.com/minijackson/sphinxcontrib-typstbuilder"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:42:15.385739 | sphinxcontrib_typstbuilder-0.1.0.tar.gz | 24,739 | da/52/3c8f1f5a7c15e18326080536cc1019d9f9456db0d30989be655267ab9b2a/sphinxcontrib_typstbuilder-0.1.0.tar.gz | source | sdist | null | false | 11eea8b9b09a8109f0385018eda7a3ec | b54af348537c66e2a3d72b2534a284cce23c780f920b0e9417f0784bf4c63991 | da523c8f1f5a7c15e18326080536cc1019d9f9456db0d30989be655267ab9b2a | EUPL-1.2 | [
"LICENSE.txt"
] | 223 |
2.4 | grover-visualizer | 0.1.4 | Visualize Grover's Algorithm with quantum state animations | # Quantum Grover's Algorithm with Visualization
This project implements Grover's Algorithm using Qiskit and provides an animation of the probability amplitudes as the algorithm progresses through its iterations.
## Features
- **Grover's Search Algorithm**: Implementation of the quantum search algorithm.
- **Dynamic Oracle**: Configurable target states for the search.
- **State Visualization**: Real-time animation of probability distributions after each iteration.
- **Hybrid Backend Support**: Automatically falls back to local `AerSimulator` if IBM Quantum connection is unavailable.
- **Optimal Iteration Calculation**: Automatically determines the number of iterations needed based on the number of qubits and target states.
## Prerequisites
To run this project, you need Python 3.8+ installed along with the following libraries:
- `qiskit>=1.0.0`
- `qiskit-aer>=0.13.0`
- `qiskit-ibm-runtime>=0.17.0`
- `matplotlib>=3.7.0`
- `numpy>=1.24.0`
## Installation
You can install the package directly from PyPI:
```bash
pip install grover-visualizer
```
Or you can clone the repository and install the dependencies with:
```bash
gh repo clone SilentSword123456/Groovers_Algorithm-Quantum
cd Groovers_Algorithm-Quantum
pip install -r requirements.txt
```
## Usage
After installation, you can run the visualization using:
```bash
python -m grover_visualizer.grover
```
### Configuration
In `grover_visualizer/grover.py`, you can modify the `targets` list to change the state(s) you are searching for:
```python
targets = ['0101'] # Change this to any bitstring of your choice
```
## Project Structure
- `grover_visualizer/grover.py`: The core script that constructs the quantum circuit, runs the simulation, and manages the algorithm's iterations.
- `grover_visualizer/animation.py`: Contains the logic for the Matplotlib-based animation of quantum states.
## How it Works
1. **Initialization**: The circuit starts by applying Hadamard gates to all qubits, creating a uniform superposition of all possible states.
2. **Oracle**: A phase-flip oracle marks the target state(s) by reversing their signs.
3. **Diffusion (Inversion about the Mean)**: This operator amplifies the probability amplitude of the marked state while decreasing the amplitudes of other states.
4. **Repetition**: The Oracle and Diffusion steps are repeated for the optimal number of times ($\approx \frac{\pi}{4}\sqrt{2^n/M}$).
5. **Animation**: The `grover_visualizer/animation.py` module tracks the statevector after each iteration to visualize how the probability of the target state grows.
6. **Measurement**: Finally, the qubits are measured, and the results are displayed as a histogram of counts.
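The optimal iteration count from step 4 can be computed directly. A small standalone sketch (independent of the package code), where $N = 2^n$ is the number of states and $M$ the number of marked targets:

```python
import math

def optimal_iterations(n_qubits: int, n_targets: int) -> int:
    """Floor of (pi/4) * sqrt(N / M), with N = 2**n_qubits states and M targets."""
    n_states = 2 ** n_qubits
    return math.floor((math.pi / 4) * math.sqrt(n_states / n_targets))

# For 4 qubits and a single target such as '0101':
print(optimal_iterations(4, 1))  # → 3
```

With 16 states and one marked state, three Grover iterations bring the success probability close to its maximum; further iterations would start rotating the state away from the target again.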
| text/markdown | SilentSword | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/SilentSword123456/Groovers_Algorithm-Quantum | null | >=3.8 | [] | [] | [] | [
"qiskit>=1.0.0",
"qiskit-aer>=0.13.0",
"qiskit-ibm-runtime>=0.17.0",
"matplotlib>=3.7.0",
"numpy>=1.24.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T15:41:22.223848 | grover_visualizer-0.1.4.tar.gz | 4,803 | 21/a3/fdbadebcce84794b652f7962787076be5fe7f82aedfcdc1f0106fd5aefb7/grover_visualizer-0.1.4.tar.gz | source | sdist | null | false | 97086f51b61e2c0efce0eda29fc03900 | a930a7cc747d695d589ad80f3985d6c63c8291bab0a4191d66425abd81c772a7 | 21a3fdbadebcce84794b652f7962787076be5fe7f82aedfcdc1f0106fd5aefb7 | null | [] | 200 |
2.1 | onecache | 0.8.1 | Python cache for sync and async code |
[](https://coveralls.io/github/sonic182/onecache?branch=master)

# OneCache
Python cache for sync and async code.
Cache uses an LRU algorithm and can optionally set TTLs per entry.
Tested automatically on CPython 3.8–3.14 and PyPy 3.9 across Linux, macOS, and Windows (see the workflow badge). Earlier versions may work but are not part of the supported matrix.
# Usage
```python
import pytest

from onecache import CacheDecorator
from onecache import AsyncCacheDecorator
class Counter:
def __init__(self, count=0):
self.count = count
@pytest.mark.asyncio
async def test_async_cache_counter():
"""Test async cache, counter case."""
counter = Counter()
@AsyncCacheDecorator()
async def mycoro(counter: Counter):
counter.count += 1
return counter.count
assert 1 == (await mycoro(counter))
assert 1 == (await mycoro(counter))
def test_cache_counter():
"""Test async cache, counter case."""
counter = Counter()
@CacheDecorator()
def sample(counter: Counter):
counter.count += 1
return counter.count
assert 1 == (sample(counter))
assert 1 == (sample(counter))
```
The decorator classes support the following arguments:
* **maxsize (int)**: maximum number of items to be cached. default: 512
* **ttl (int)**: time to expire in milliseconds; if None, entries do not expire. default: None
* **skip_args (bool)**: cache as if the function had no arguments, so every call shares one entry. default: False
* **cache_class (class)**: class to use for the cache instance. default: LRUCache
* **refresh_ttl (bool)**: when caching with a ttl, refresh the key's expiration timestamp on each access. default: False
* **thread_safe (bool)**: tell the decorator to use a thread-safe lock. default: False
* **max_mem_size (int)**: max memory size in bytes, a ceiling for the summed size of cached values. default: None, which means no limit. For PyPy this value is ignored, because object sizes can change under JIT compilation.
If the number of records exceeds maxsize, the cache drops the oldest entry.
# Development
Install dependencies with Poetry (includes dev + test groups):
```bash
poetry install --with test,dev
```
Run the test suite and coverage locally:
```bash
poetry run pytest --cov
```
Lint and format checks:
```bash
poetry run flake8
poetry run autopep8 --in-place --recursive onecache tests
```
# Contribute
1. Fork
2. create a branch `feature/your_feature`
3. commit - push - pull request
Thanks :)
| text/markdown | Johanderson Mogollon | johander1822@gmail.com | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:40:56.916837 | onecache-0.8.1.tar.gz | 4,484 | a7/56/d442f06a3e0f2f45b2474afec2b3c1125aad325b93221911055dc55948a1/onecache-0.8.1.tar.gz | source | sdist | null | false | a8ab59b382d57b517d2d14a0200ed9d4 | 4df2d62ced1e6ed905e3965f8fb4bdb4c999ebb7fe044202c34b52a0034f3882 | a756d442f06a3e0f2f45b2474afec2b3c1125aad325b93221911055dc55948a1 | null | [] | 4,849 |
2.4 | rock-physics-open | 0.6.0 | Equinor Rock Physics Module | <div align="center">
# rock-physics-open
[![PyPI version][pypi-badge]][pypi]
[![License: LGPL v3][license-badge]][license]
[![SCM Compliance][scm-compliance-badge]][scm-compliance]
[![on push main action status][on-push-main-action-badge]][on-push-main-action]
</div>
This repository contains Python code for rock physics modules created in Equinor by
Harald Flesche 2010 - ... Batzle-Wang and Span-Wagner fluid equations are implemented
by Eivind Jahren and Jimmy Zurcher. Some models are based on original Matlab code
by Tapan Mukerji at Stanford University and ported to Python by Harald Flesche.
The modules in this repository are implementations of rock physics models
used in quantitative seismic analysis, in addition to some utilities for handling
of seismic and well data. The repository started as internal Equinor plugins, and was
extracted as a separate repository that could be used within other internal applications
in 2023. In 2025 it was released under LGPL license.
The content of the library can be described as follows:
Functions take inputs and return outputs as numpy arrays or, in some
cases, pandas dataframes. Dataframes are used where there are many
inputs and/or the names of inputs must be checked, such as when
multiple inputs of the same type serve different purposes. Each
function should make clear when dataframes are expected.
Inputs are normally not validated; the functions are minimal
definitions of equations and other utilities.
## Installation
This module can be installed through [PyPI](https://pypi.org/project/rock-physics-open/) with:
```sh
pip install rock-physics-open
```
Alternatively, you can update the dependencies in your `pyproject.toml` file:
<!-- x-release-please-start-version -->
```toml
dependencies = [
"rock-physics-open == 0.6.0",
]
```
<!-- x-release-please-end-version -->
<!-- External Links -->
[scm-compliance]: https://developer.equinor.com/governance/scm-policy/
[scm-compliance-badge]: https://scm-compliance-api.radix.equinor.com/repos/equinor/rock-physics-open/badge
[license]: https://www.gnu.org/licenses/lgpl-3.0
[license-badge]: https://img.shields.io/badge/License-LGPL_v3-blue.svg
[on-push-main-action]: https://github.com/equinor/rock-physics-open/actions/workflows/on-push-main.yaml
[on-push-main-action-badge]: https://github.com/equinor/rock-physics-open/actions/workflows/on-push-main.yaml/badge.svg
[pypi]: https://pypi.org/project/rock-physics-open/
[pypi-badge]: https://img.shields.io/pypi/v/rock-physics-open.svg
| text/markdown | null | Harald Flesche <hfle@equinor.com>, Eivind Jahren <ejah@equinor.com>, Jimmy Zurcher <jiz@equinor.com> | null | Harald Flesche <hfle@equinor.com>, Eirik Ola Aksnes <eoaksnes@equinor.com>, Sivert Utne <sutn@equinor.com>, Einar Wigum Arbo <earb@equinor.com> | null | energy, subsurface, seismic, rock physics, scientific, engineering | [
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Software Development :: Libraries",
"Topic :: Utilities",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Natural Language :: English"
] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"numpy>=1.26.4",
"pandas>=2.0.2",
"matplotlib>=3.7.1",
"scipy<2,>=1.16.3",
"scikit-learn>=1.2.2",
"tmatrix~=1.2.0",
"typing-extensions>=4.15.0"
] | [] | [] | [] | [
"Repository, https://github.com/equinor/rock-physics-open",
"Homepage, https://github.com/equinor/rock-physics-open",
"Changelog, https://github.com/equinor/rock-physics-open/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:40:34.746641 | rock_physics_open-0.6.0.tar.gz | 20,913,567 | cf/d3/095fe47d822387649f8bc4b7c46a509cc6a2281a78f1df8094c5b08bc6a4/rock_physics_open-0.6.0.tar.gz | source | sdist | null | false | 112358d9b16c956d785869bc7946feda | eb33952a6b2b0b72734d4d663de913c75483fa658268ab8e329ddf35a68f2adc | cfd3095fe47d822387649f8bc4b7c46a509cc6a2281a78f1df8094c5b08bc6a4 | null | [
"LICENSE"
] | 181 |
2.2 | odb4py | 1.1.1rc0 | Python extension to access the ECMWF observation database (ODB1) format | [](https://pypi.org/project/odb4py/)
[](https://odb4py.readthedocs.io)

# odb4py 1.1.1 release
## Description
**odb4py** is a C/Python interface to read and query ECMWF ODB1 databases.<br>
It provides high-performance access to ODB data through a native C backend, with seamless integration into the Python scientific ecosystem.<br>
The package embeds a customized version of the ECMWF ODB software [odb_api_bundle-0.18.1-Source](https://confluence.ecmwf.int) and is distributed as **manylinux wheels**, requiring no external ODB installation.
---
## Features
- Native C backend based on ECMWF ODB1
- Support for IFS and ARPEGE ODB databases
- SQL-like query interface
- Fast data access with [NumPy/C API](https://numpy.org/doc/2.1/reference/c-api/index.html) and pandas integration
- Manylinux wheels (portable across Linux distributions)
- No runtime dependency on system ODB or ECMWF bundles
---
## Installation
The **odb4py** package can be installed from PyPI using `pip`:
```bash
pip install odb4py
```
## Installation test
```python
from odb4py import core   # The C extension
from odb4py import utils  # The Python module helper
```
## Requirements
Python ≥ 3.9 <br>
NumPy ≥ 2.0 <br>
Linux system (manylinux2014 compatible)
## Scientific context
ODB (Observation DataBase) is a column-oriented database format developed at ECMWF
and widely used in numerical weather prediction systems such as IFS, ARPEGE, and limited-area NWP models.<br>
**odb4py** is primarily designed for:<br>
- Meteorologists and atmospheric scientists (especially within the ACCORD consortium)<br>
- Operational and research environments
- Post-processing and diagnostic workflows
The current package version focuses on read-only access and data extraction for scientific analysis.
## License
Apache License, Version 2.0. [See LICENSE for details ](https://www.apache.org/licenses/LICENSE-2.0).
odb4py incorporates components derived from the ECMWF ODB software.
The original source code has been modified to:
- expose functionality through a Python interface
- reduce the runtime footprint
- enable portable binary wheel distribution
All original copyrights remain with ECMWF.
## Acknowledgements
This project incorporates and is derived from the ECMWF ODB software. <br/>
ODB was developed at the European Centre for Medium-Range Weather Forecasts (ECMWF)
by [S. Saarinen et al](https://www.ecmwf.int/sites/default/files/elibrary/2004/76278-ifs-documentation-cy36r1-part-i-observation-processing_1.pdf). All rights to the original ODB software remain with ECMWF and their respective owners.
| text/markdown | null | Idir Dehmous <idehmous@meteo.be> | null | null | Apache-2.0 | ECMWF, ODB, meteorology, python, database, extension | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Atmospheric Science",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/idirdehmous/odb4py_1.1.1",
"Source, https://github.com/idirdehmous/odb4py_1.1.1",
"Documentation, https://odb4py.readthedocs.io"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T15:39:43.016893 | odb4py-1.1.1rc0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 2,767,434 | 74/e9/6376542c8d545bc052d7d0a069b601d6745163c89e89e53a72435a0e1d85/odb4py-1.1.1rc0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | cp312 | bdist_wheel | null | false | 6eaca3d9984d1b007dafefabcc735d7f | 37d519facec36da6b273fd536e713c053c0cb6af088e2adacf3c5a0ba00bdf6f | 74e96376542c8d545bc052d7d0a069b601d6745163c89e89e53a72435a0e1d85 | null | [] | 71 |
2.4 | airflow-embedash | 0.2.0 | Add embeded dashboars to Airflow | # Airflow Embedash
A Python package for embedding dashboards in Apache Airflow UI. This plugin allows you to easily integrate Metabase, Datadog and Grafana dashboards into your Airflow environment, providing a seamless way to view data visualizations alongside your workflows.
## Features
- **Easy Dashboard Integration**: Embed Metabase dashboards directly into Apache Airflow UI
- **Flexible Configuration**: Customize menu labels and authentication tokens
- **Multiple Airflow Versions Support**: Compatible with Airflow 2.x and 3.x
- **Secure Access**: Supports Metabase token-based authentication for private dashboards
## Installation
Install the package using pip:
```bash
pip install airflow-embedash
```
Or in development mode:
```bash
pip install -e .
```
## Usage
### Basic Setup
1. **Configure Metabase Settings**: Set the required Airflow variables:
- `embeded_dashboards_metabase_token`: Your Metabase API token for private dashboards
2. **Configure Menu Label**:
- Default menu label is "Dashboards"
- Override by setting the `embeded_dashboards_menu_label` environment variable
3. **Add new dashboards**
- Go to settings menu and add new dashboard
- Restart the service
## Package Structure
```
airflow_embedash/
├── __init__.py # Package initialization
└── plugin/                      # Plugin implementation directory
    ├── __init__.py              # Plugin entry point
    ├── airflow2.py              # Airflow 2.x compatibility layer
    └── templates/               # HTML templates for dashboard views
        ├── add_dashboard.html   # Add dashboard form
        ├── edit_dashboard.html  # Edit dashboard form
        ├── settings.html        # Settings view
        ├── view_not_set_up.html # Not configured view
        └── view.html            # Main dashboard view
```
## Development
To contribute to this project:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Ensure all tests pass:
```bash
make test
```
5. Submit a pull request
### Development Setup
The project includes development containers and Docker configurations:
1. **Using Docker**:
```bash
docker-compose up -d
```
2. **Local Development**:
- Install development dependencies: `pip install -e ".[dev]"`
- Run tests: `make test`
## Configuration Options
### Environment Variables
| Variable | Description | Required |
|----------|-------------|----------|
| `embeded_dashboards_metabase_token` | Metabase API token for private dashboards | Optional |
| `embeded_dashboards_menu_label` | Custom menu label in Airflow UI | Optional |
## Contributing
We welcome contributions! Please follow these steps:
1. Fork the repository
2. Create a feature branch
3. Make your changes with tests
4. Submit a pull request
## License
This project is licensed under the MIT License. | text/markdown | null | Rodrigo Carneiro <teoria@gmail.com> | null | null | null | airflow, apache-airflow, datadog, grafana, metabase | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"aenum",
"apache-airflow>=2.9.0",
"attrs",
"deprecation",
"jinja2>=3.0.0",
"msgpack",
"packaging>=22.0",
"pydantic>=1.10.0",
"virtualenv"
] | [] | [] | [] | [
"Homepage, https://github.com/teoria/airflow-embedash",
"Documentation, https://github.com/teoria/airflow-embedash",
"Source code, https://github.com/teoria/airflow-embedash"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:39:37.352043 | airflow_embedash-0.2.0.tar.gz | 8,407 | 81/bf/d0f6da0242912435931d32539ef186247d117bca15c6bb554b2c9d53210b/airflow_embedash-0.2.0.tar.gz | source | sdist | null | false | 872c6e7f22a67293c77cf8d3172da528 | 5f09894d67f74b89f5acb82ec3c4d863e009e4ad898d13ec551d12d361258dbc | 81bfd0f6da0242912435931d32539ef186247d117bca15c6bb554b2c9d53210b | null | [] | 207 |
2.4 | peewee | 4.0.0 | a little orm | .. image:: https://media.charlesleifer.com/blog/photos/peewee4-logo.png
peewee
======
Peewee is a simple and small ORM. It has few (but expressive) concepts, making it easy to learn and intuitive to use.
* a small, expressive ORM
* flexible query-builder that exposes full power of SQL
* supports sqlite, mysql, mariadb, postgresql
* asyncio support
* tons of `extensions <http://docs.peewee-orm.com/en/latest/peewee/playhouse.html>`_
New to peewee? These may help:
* `Quickstart <http://docs.peewee-orm.com/en/latest/peewee/quickstart.html#quickstart>`_
* `Example twitter app <http://docs.peewee-orm.com/en/latest/peewee/example.html>`_
* `Using peewee interactively <http://docs.peewee-orm.com/en/latest/peewee/interactive.html>`_
* `Models and fields <http://docs.peewee-orm.com/en/latest/peewee/models.html>`_
* `Querying <http://docs.peewee-orm.com/en/latest/peewee/querying.html>`_
* `Relationships and joins <http://docs.peewee-orm.com/en/latest/peewee/relationships.html>`_
Installation:
.. code-block:: console
pip install peewee
Sqlite comes built-in provided by the standard-lib ``sqlite3`` module. Other
backends can be installed using the following instead:
.. code-block:: console
pip install peewee[mysql] # Install peewee with pymysql.
pip install peewee[postgres] # Install peewee with psycopg2.
pip install peewee[psycopg3] # Install peewee with psycopg3.
# AsyncIO implementations.
pip install peewee[aiosqlite] # Install peewee with aiosqlite.
pip install peewee[aiomysql] # Install peewee with aiomysql.
pip install peewee[asyncpg] # Install peewee with asyncpg.
Examples
--------
Defining models is similar to Django or SQLAlchemy:
.. code-block:: python
from peewee import *
import datetime
db = SqliteDatabase('my_database.db')
class BaseModel(Model):
class Meta:
database = db
class User(BaseModel):
username = CharField(unique=True)
class Tweet(BaseModel):
user = ForeignKeyField(User, backref='tweets')
message = TextField()
created_date = DateTimeField(default=datetime.datetime.now)
is_published = BooleanField(default=True)
Connect to the database and create tables:
.. code-block:: python
db.connect()
db.create_tables([User, Tweet])
Create a few rows:
.. code-block:: python
charlie = User.create(username='charlie')
huey = User(username='huey')
huey.save()
# No need to set `is_published` or `created_date` since they
# will just use the default values we specified.
Tweet.create(user=charlie, message='My first tweet')
Queries are expressive and composable:
.. code-block:: python
# A simple query selecting a user.
User.get(User.username == 'charlie')
# Get tweets created by one of several users.
usernames = ['charlie', 'huey', 'mickey']
users = User.select().where(User.username.in_(usernames))
tweets = Tweet.select().where(Tweet.user.in_(users))
# We could accomplish the same using a JOIN:
tweets = (Tweet
.select()
.join(User)
.where(User.username.in_(usernames)))
# How many tweets were published today?
tweets_today = (Tweet
.select()
.where(
(Tweet.created_date >= datetime.date.today()) &
(Tweet.is_published == True))
.count())
# Paginate the user table and show me page 3 (users 41-60).
User.select().order_by(User.username).paginate(3, 20)
# Order users by the number of tweets they've created:
tweet_ct = fn.Count(Tweet.id)
users = (User
.select(User, tweet_ct.alias('ct'))
.join(Tweet, JOIN.LEFT_OUTER)
.group_by(User)
.order_by(tweet_ct.desc()))
# Do an atomic update (for illustrative purposes only, imagine a simple
# table for tracking a "count" associated with each URL). We don't want to
    # naively read and then write in two separate steps, since this is prone to race
# conditions.
Counter.update(count=Counter.count + 1).where(Counter.url == request.url)
Check out the `example twitter app <http://docs.peewee-orm.com/en/latest/peewee/example.html>`_.
Learning more
-------------
Check the `documentation <http://docs.peewee-orm.com/>`_ for more examples.
Specific question? Come hang out in the #peewee channel on irc.libera.chat, or post to the mailing list, http://groups.google.com/group/peewee-orm . If you would like to report a bug, `create a new issue <https://github.com/coleifer/peewee/issues/new>`_ on GitHub.
Still want more info?
---------------------
.. image:: https://media.charlesleifer.com/blog/photos/wat.jpg
I've written a number of blog posts about building applications and web-services with peewee (and usually Flask). If you'd like to see some real-life applications that use peewee, the following resources may be useful:
* `Building a note-taking app with Flask and Peewee <https://charlesleifer.com/blog/saturday-morning-hack-a-little-note-taking-app-with-flask/>`_ as well as `Part 2 <https://charlesleifer.com/blog/saturday-morning-hacks-revisiting-the-notes-app/>`_ and `Part 3 <https://charlesleifer.com/blog/saturday-morning-hacks-adding-full-text-search-to-the-flask-note-taking-app/>`_.
* `Analytics web service built with Flask and Peewee <https://charlesleifer.com/blog/saturday-morning-hacks-building-an-analytics-app-with-flask/>`_.
* `Personalized news digest (with a boolean query parser!) <https://charlesleifer.com/blog/saturday-morning-hack-personalized-news-digest-with-boolean-query-parser/>`_.
* `Structuring Flask apps with Peewee <https://charlesleifer.com/blog/structuring-flask-apps-a-how-to-for-those-coming-from-django/>`_.
* `Creating a lastpass clone with Flask and Peewee <https://charlesleifer.com/blog/creating-a-personal-password-manager/>`_.
* `Creating a bookmarking web-service that takes screenshots of your bookmarks <https://charlesleifer.com/blog/building-bookmarking-service-python-and-phantomjs/>`_.
* `Building a pastebin, wiki and a bookmarking service using Flask and Peewee <https://charlesleifer.com/blog/dont-sweat-small-stuff-use-flask-blueprints/>`_.
* `Encrypted databases with Python and SQLCipher <https://charlesleifer.com/blog/encrypted-sqlite-databases-with-python-and-sqlcipher/>`_.
* `Dear Diary: An Encrypted, Command-Line Diary with Peewee <https://charlesleifer.com/blog/dear-diary-an-encrypted-command-line-diary-with-python/>`_.
* `Query Tree Structures in SQLite using Peewee and the Transitive Closure Extension <https://charlesleifer.com/blog/querying-tree-structures-in-sqlite-using-python-and-the-transitive-closure-extension/>`_.
| text/x-rst | null | Charles Leifer <coleifer@gmail.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 3",
"Topic :: Database",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | null | [] | [] | [] | [
"pymysql; extra == \"mysql\"",
"psycopg2-binary; extra == \"postgres\"",
"psycopg[binary]; extra == \"psycopg3\"",
"cysqlite; extra == \"cysqlite\"",
"aiosqlite; extra == \"aiosqlite\"",
"greenlet; extra == \"aiosqlite\"",
"aiomysql; extra == \"aiomysql\"",
"greenlet; extra == \"aiomysql\"",
"asyncpg; extra == \"asyncpg\"",
"greenlet; extra == \"asyncpg\""
] | [] | [] | [] | [
"Repository, https://github.com/coleifer/peewee",
"Documentation, https://docs.peewee-orm.com/",
"Changelog, https://github.com/coleifer/peewee/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:38:50.312542 | peewee-4.0.0.tar.gz | 686,951 | 37/e3/98ed8ab20f26d429f61b3d5d455c52ac88ba343444fbcf7154374111eb3e/peewee-4.0.0.tar.gz | source | sdist | null | false | 41686278749ad57a8bf1cb1541893554 | bc2722abf32a8074362c346fc8a95f2d34a9587873e81025b6429676c32044b6 | 37e398ed8ab20f26d429f61b3d5d455c52ac88ba343444fbcf7154374111eb3e | null | [
"LICENSE"
] | 315,132 |
2.4 | isagellm | 0.5.1.8 | sageLLM: Modular LLM inference engine with PD separation for domestic computing power | # sageLLM
## Protocol Compliance (Mandatory)
- MUST follow Protocol v0.1:
https://github.com/intellistream/sagellm-docs/blob/main/docs/specs/protocol_v0.1.md
- Any globally shared definitions (fields, error codes, metrics, IDs, schemas) MUST be added to
Protocol first.
<p align="center">
<strong>🚀 Modular LLM Inference Engine for Domestic Computing Power</strong>
</p>
<p align="center">
Ollama-like experience for Chinese hardware ecosystems (Huawei Ascend, NVIDIA)
</p>
______________________________________________________________________
## ✨ Features
- 🎯 **One-Click Install** - `pip install isagellm` gets you started immediately
- 🧠 **CPU-First** - Default CPU engine, no GPU required
- 🇨🇳 **Domestic Hardware** - First-class support for Huawei Ascend NPU
- 📊 **Observable** - Built-in metrics (TTFT, TBT, throughput, KV usage)
- 🧩 **Plugin System** - Extend with custom backends and engines
## 📦 Quick Install
```bash
# Install sageLLM (CPU-first, no GPU required)
pip install isagellm
# With Control Plane (request routing & scheduling)
pip install 'isagellm[control-plane]'
# With API Gateway (OpenAI-compatible REST API)
pip install 'isagellm[gateway]'
# Full server (Control Plane + Gateway)
pip install 'isagellm[server]'
# With CUDA support
pip install 'isagellm[cuda]'
# All features
pip install 'isagellm[all]'
```
### 🚀 Faster PyTorch Installation for Users in China (Recommended)
Because the PyTorch CUDA build (~800MB) downloads slowly from the official index in China, pre-downloaded wheels are provided on GitHub Releases:
```bash
# Method 1: use the sagellm CLI (recommended, simplest)
pip install isagellm
sage-llm install cuda --github # download from GitHub (fast)
sage-llm install cuda # download from the official index (default)
# Method 2: use pip --find-links directly
pip install torch==2.5.1+cu121 torchvision torchaudio \
--find-links https://github.com/intellistream/sagellm-pytorch-wheels/releases/download/v2.5.1-cu121/ \
--trusted-host github.com
```
**Other supported backends**:
- `sage-llm install ascend` - Huawei Ascend NPU
- `sage-llm install kunlun` - Baidu Kunlun XPU
- `sage-llm install haiguang` - Hygon DCU
- `sage-llm install cpu` - CPU-only (smallest download)
💡 **Why install via GitHub?**
- ✅ Fast access from within China (GitHub CDN)
- ✅ No mirror-source configuration required
- ✅ Official wheels, 100% trusted
📦 **Wheels repository**: https://github.com/intellistream/sagellm-pytorch-wheels
## 🚀 Quick Start
### Unified CLI Command
- Primary command: `sagellm`
- Compatibility alias: `sage-llm` (kept for backward compatibility; migrating to `sagellm` is recommended)
### CLI (as simple as vLLM/Ollama)
```bash
# One-command startup (full stack: Gateway + Engine)
pip install 'isagellm[gateway]'
sage-llm serve --model Qwen2-7B
# ✅ OpenAI API is available automatically
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2-7B",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
# Show system information
sage-llm info
# One-off inference (without starting a server)
sage-llm run -p "What is LLM inference?"
# Advanced: distributed deployment (start components separately)
sage-llm serve --engine-only --port 9000  # engine only
sage-llm gateway --port 8000              # Gateway only
```
### Python API (Control Plane - Recommended)
```python
import asyncio
from sagellm import ControlPlaneManager, BackendConfig, EngineConfig
# Install with: pip install 'isagellm[control-plane]'
async def main() -> None:
manager = ControlPlaneManager(
backend_config=BackendConfig(kind="cpu", device="cpu"),
engine_configs=[
EngineConfig(
kind="cpu",
model="sshleifer/tiny-gpt2",
model_path="sshleifer/tiny-gpt2"
)
]
)
await manager.start()
try:
# Requests are automatically routed to available engines
response = await manager.execute_request(
prompt="Hello, world!",
max_tokens=128
)
print(response.output_text)
print(f"TTFT: {response.metrics.ttft_ms:.2f} ms")
print(f"Throughput: {response.metrics.throughput_tps:.2f} tokens/s")
finally:
await manager.stop()
asyncio.run(main())
```
**⚠️ Important:** Direct engine creation (`create_engine()`) is not exported from the umbrella
package. All production code must use `ControlPlaneManager` for proper request routing, scheduling,
and lifecycle management.
### Configuration
```yaml
# ~/.sagellm/config.yaml
backend:
kind: cpu # Options: cpu, pytorch-cuda, pytorch-ascend
device: cpu
engine:
kind: cpu
model: sshleifer/tiny-gpt2
control_plane:
endpoint: "localhost:8080"
```
## 📊 Metrics & Validation
sageLLM provides comprehensive performance metrics:
```json
{
"ttft_ms": 45.2,
"tbt_ms": 12.5,
"throughput_tps": 80.0,
"peak_mem_mb": 24576,
"kv_used_tokens": 4096,
"prefix_hit_rate": 0.85
}
```
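As an illustration of how these metrics relate to each other, the sketch below derives TTFT, mean TBT, and throughput from per-token arrival timestamps. This is plain Python and not sageLLM internals; `summarize_latency` is a hypothetical helper named for this example only.

```python
def summarize_latency(request_start: float, token_times: list[float]) -> dict:
    """Compute TTFT (ms), mean TBT (ms), and throughput (tokens/s).

    request_start: wall-clock time the request was issued.
    token_times:   wall-clock arrival time of each generated token.
    """
    # Time to first token: gap between request start and the first token.
    ttft_ms = (token_times[0] - request_start) * 1000.0
    # Time between tokens: mean gap between consecutive tokens.
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    tbt_ms = (sum(gaps) / len(gaps)) * 1000.0 if gaps else 0.0
    # Throughput: tokens generated per second of total elapsed time.
    duration = token_times[-1] - request_start
    throughput_tps = len(token_times) / duration if duration > 0 else 0.0
    return {"ttft_ms": ttft_ms, "tbt_ms": tbt_ms, "throughput_tps": throughput_tps}

# Three tokens arriving at 45 ms, 57 ms, and 70 ms after the request.
m = summarize_latency(0.0, [0.045, 0.057, 0.070])
```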
Run benchmarks:
```bash
sage-llm demo --workload year1 --output metrics.json
```
## 🏗️ Architecture
```
isagellm (umbrella package)
├── isagellm-protocol # Protocol v0.1 types
│ └── Request, Response, Metrics, Error, StreamEvent
├── isagellm-backend # Hardware abstraction (L1 - Foundation)
│ └── BackendProvider, CPUBackend, (CUDABackend, AscendBackend)
├── isagellm-comm # Communication primitives (L2 - Infrastructure)
│ └── Topology, CollectiveOps (all_reduce/gather), P2P (send/recv), Overlap
├── isagellm-kv-cache # KV cache management (L2 - Optional)
│ └── PrefixCache, MemoryPool, EvictionPolicies, Predictor, KV Transfer
├── isagellm-compression # Inference acceleration (quantization, sparsity, etc.) (L2 - Optional)
│ └── Quantization, Sparsity, SpeculativeDecoding, Fusion
├── isagellm-core # Engine core & runtime (L3)
│ └── Config, Engine, Factory, DemoRunner, Adapters (vLLM/LMDeploy)
├── isagellm-control-plane # Request routing & scheduling (L4 - Optional)
│ └── ControlPlaneManager, Router, Policies, Lifecycle
└── isagellm-gateway # OpenAI-compatible REST API (L5 - Optional)
└── FastAPI server, /v1/chat/completions, Session management
```
## 🔧 Development
### Quick Setup (Development Mode)
```bash
# Clone all repositories
./scripts/clone-all-repos.sh
# Install all packages in editable mode
./quickstart.sh
# Open all repos in VS Code Multi-root Workspace
code sagellm.code-workspace
```
**📖 See [WORKSPACE_GUIDE.md](WORKSPACE_GUIDE.md) for Multi-root Workspace usage.**
### Testing
```bash
# Clone and setup
git clone https://github.com/IntelliStream/sagellm.git
cd sagellm
pip install -e ".[dev]"
# Run tests
pytest -v
# Format & lint
ruff format .
ruff check . --fix
# Type check
mypy src/sagellm/
# Verify dependency hierarchy
python scripts/verify_dependencies.py
```
### 📖 Development Resources
- **[DEPLOYMENT_GUIDE.md](docs/DEPLOYMENT_GUIDE.md)** - Complete deployment and configuration guide
- **[TROUBLESHOOTING.md](docs/TROUBLESHOOTING.md)** - Quick troubleshooting reference
- **[ENVIRONMENT_VARIABLES.md](docs/ENVIRONMENT_VARIABLES.md)** - Complete environment-variable reference
- **[DEVELOPER_GUIDE.md](docs/DEVELOPER_GUIDE.md)** - Developer guide
- **[WORKSPACE_GUIDE.md](docs/WORKSPACE_GUIDE.md)** - Multi-root Workspace usage
- **[INFERENCE_FLOW.md](docs/INFERENCE_FLOW.md)** - Detailed inference flow
- **[PR_CHECKLIST.md](docs/PR_CHECKLIST.md)** - Pull request checklist
______________________________________________________________________
## 📚 Documentation Index
### User Documentation
- [Quick Start](README.md#-quick-start) - Up and running in 5 minutes
- [Deployment Guide](docs/DEPLOYMENT_GUIDE.md) - Production deployment
- [Configuration Reference](docs/DEPLOYMENT_GUIDE.md#%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E8%AF%B4%E6%98%8E) - Complete configuration options
- [Environment Variables](docs/ENVIRONMENT_VARIABLES.md) - Environment-variable reference
- [Troubleshooting](docs/TROUBLESHOOTING.md) - Solutions to common problems
### Developer Documentation
- [Developer Guide](docs/DEVELOPER_GUIDE.md) - Contributing code
- [Architecture](README.md#-architecture) - System architecture
- [Workspace Usage](docs/WORKSPACE_GUIDE.md) - Multi-root workspace
- [PR Checklist](docs/PR_CHECKLIST.md) - Pre-submission checks
### API Documentation
- OpenAI-compatible API - see [sagellm-gateway](https://github.com/intellistream/sagellm-gateway)
- Python API - see [API_REFERENCE.md](docs/API_REFERENCE.md) (to be added)
### Subpackage Documentation
- [sagellm-protocol](https://github.com/intellistream/sagellm-protocol) - Protocol definitions
- [sagellm-backend](https://github.com/intellistream/sagellm-backend) - Backend abstraction
- [sagellm-core](https://github.com/intellistream/sagellm-core) - Engine core
- [sagellm-control-plane](https://github.com/intellistream/sagellm-control-plane) - Control plane
- [sagellm-gateway](https://github.com/intellistream/sagellm-gateway) - API gateway
- [sagellm-benchmark](https://github.com/intellistream/sagellm-benchmark) - Benchmarks
- [**DEVELOPER_GUIDE.md**](DEVELOPER_GUIDE.md) - Architecture conventions and development guide
- [**PR_CHECKLIST.md**](PR_CHECKLIST.md) - Pull request review checklist
- [**scripts/verify_dependencies.py**](scripts/verify_dependencies.py) - Dependency-hierarchy verification
## 🤝 Contributing Guide
### Workflow (Mandatory)
Before submitting code, you **must** strictly follow these steps:
#### 1️⃣ Create an Issue
Describe the problem you want to solve, the feature to implement, or the improvement:
```bash
gh issue create \
  --title "[Category] short description" \
  --label "bug,sagellm-core" \
  --body "Detailed description..."
```
**Issue types**:
- `[Bug]` - Bug fix
- `[Feature]` - New feature
- `[Performance]` - Performance optimization
- `[Integration]` - Integration with other modules
- `[Docs]` - Documentation improvement
#### 2️⃣ Develop on a Local Branch
Create a development branch and resolve the issue:
```bash
# Create the branch from main-dev (not main!)
git fetch origin main-dev
git checkout -b fix/#123-short-description origin/main-dev
# Do the development work
# ...
# Make sure all checks pass
ruff format .
ruff check . --fix
pytest -v
```
**Branch naming conventions**:
- Bug fix: `bugfix/#123-xxx`
- New feature: `feature/#456-xxx`
- Documentation: `docs/#789-xxx`
- Performance: `perf/#101-xxx`
#### 3️⃣ Open a Pull Request
Submit your code for review:
```bash
git push origin fix/#123-short-description
gh pr create \
  --base main-dev \
  --head fix/#123-short-description \
  --title "Fix: [short description]" \
  --body "Resolves #123
## Changes
- Change 1
- Change 2
## Testing
- Added unit tests
- All tests pass ✓"
```
**A PR must include**:
- A clear title (Fix/Feature/Docs/Perf)
- A linked issue number: `Closes #123`
- A list of changes and testing notes
- All CI checks passing
#### 4️⃣ Code Review and Merge
Merge into `main-dev` after approval:
```bash
# Click the "Merge" button in the GitHub UI
# Merge into main-dev (not main!)
```
**Pre-merge requirements**:
- ✅ Approval from at least one maintainer
- ✅ All CI checks passing (pytest, ruff)
- ✅ Merge target is the `main-dev` branch
### Quick Checklist
Before opening a PR, verify that you have:
- [ ] Created the development branch from `main-dev`
- [ ] Updated `CHANGELOG.md`
- [ ] Formatted the code with `ruff format .`
- [ ] Passed lint with `ruff check . --fix`
- [ ] Passed all tests with `pytest -v`
- [ ] Linked the related issue: `Closes #123`
### Anti-patterns ❌
- ❌ Committing directly to the `main` branch
- ❌ Opening a PR without a linked issue
- ❌ Changing code without updating the CHANGELOG
- ❌ Code that fails lint checks
- ❌ Committing without running the tests
### Related Resources
- **Issue labels**: `bug`, `enhancement`, `documentation`, `sagellm-core`, `sagellm-backend`, etc.
- **GitHub CLI**: `gh issue create`, `gh pr create`
- **More information**: see `.github/copilot-instructions.md`
## 📚 Package Details
| Package | PyPI Name | Import Name | Description |
| ---------------- | ------------------- | ------------------ | ------------------------------- |
| sagellm | `isagellm` | `sagellm` | Umbrella package (install this) |
| sagellm-protocol | `isagellm-protocol` | `sagellm_protocol` | Protocol v0.1 types |
| sagellm-core | `isagellm-core` | `sagellm_core` | Runtime & config |
| sagellm-backend | `isagellm-backend` | `sagellm_backend` | Hardware abstraction |
## 📄 License
Proprietary - IntelliStream. Internal use only.
______________________________________________________________________
<p align="center">
<sub>Built with ❤️ by IntelliStream Team for domestic AI infrastructure</sub>
</p>
| text/markdown | IntelliStream Team | null | null | null | Proprietary - IntelliStream | llm, inference, ascend, huawei, npu, cuda, domestic, pd-separation | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | ==3.11.* | [] | [] | [] | [
"isagellm-protocol<0.6.0,>=0.5.1.2",
"isagellm-backend<0.6.0,>=0.5.2.12",
"isagellm-core<0.6.0,>=0.5.1.7",
"isagellm-control-plane<0.6.0,>=0.5.1.0",
"isagellm-gateway<0.6.0,>=0.5.1.0",
"isagellm-kv-cache<0.6.0,>=0.5.1.6",
"isagellm-comm<0.6.0,>=0.5.1.0",
"isagellm-compression<0.6.0,>=0.5.1.0",
"click>=8.3.1",
"rich>=13.0.0",
"pyyaml>=6.0.3",
"pytest>=9.0.2; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"ruff>=0.15.1; extra == \"dev\"",
"isage-pypi-publisher>=0.1.9.6; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/IntelliStream/sagellm",
"Documentation, https://github.com/IntelliStream/sagellm#readme",
"Repository, https://github.com/IntelliStream/sagellm"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T15:38:12.645833 | isagellm-0.5.1.8.tar.gz | 183,012 | 69/e6/934c80ee2fe5bc9d81583fa6a77e44133ea252ed2784aa759423dc4a7a58/isagellm-0.5.1.8.tar.gz | source | sdist | null | false | 381a57888f9edb452a34fba7bfa1be52 | 9ddccf8657369d4b9c534524e460bd29b12538d1bfeeae0605031795716b47ac | 69e6934c80ee2fe5bc9d81583fa6a77e44133ea252ed2784aa759423dc4a7a58 | null | [] | 246 |
2.4 | boj-api-client | 0.1.2 | Python client for Bank of Japan timeseries statistics API | # boj-api-client
A Python client for the Bank of Japan's "Time-Series Statistics Data Search Site API" (`https://www.stat-search.boj.or.jp/api/v1`).
## Key Features
- Synchronous/asynchronous clients
  - `BojClient`
  - `AsyncBojClient`
- JSON-only wire format (`format=json`)
- Typed query/response models
- Automatic splitting and merging of `getDataCode` requests with more than 250 codes
- Automatic paging via `NEXTPOSITION`
- Retry/throttling
- Partial results plus checkpoint-based resumption after mid-run failures
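The 250-code batching mentioned above can be sketched roughly as follows. This is plain Python, not the library's actual internals; `chunk_codes` is an illustrative helper invented for this example.

```python
MAX_CODES_PER_REQUEST = 250  # documented getDataCode limit per request

def chunk_codes(codes: list[str], size: int = MAX_CODES_PER_REQUEST) -> list[list[str]]:
    """Split a long series-code list into API-sized batches."""
    return [codes[i:i + size] for i in range(0, len(codes), size)]

# 600 codes would be sent as three getDataCode requests, and the
# responses merged back into a single result.
batches = chunk_codes([f"CODE{i:04d}" for i in range(600)])
```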
## Supported APIs
- `getDataCode`
- `getDataLayer`
- `getMetadata`
## Requirements
- Python 3.11+
## Installation
Standard installation (PyPI):
```bash
pip install boj-api-client
```
PyPI: <https://pypi.org/project/boj-api-client/>
For local development:
```bash
pip install -e .
```
or
```bash
uv pip install -e .
```
## Quick Start (Synchronous)
```python
from boj_api_client import BojClient
from boj_api_client.timeseries import DataCodeQuery
with BojClient() as client:
result = client.timeseries.get_data_code(
DataCodeQuery(
db="CO",
code=["TK99F1000601GCQ01000"],
lang="JP",
)
)
for series in result.series:
print(series.series_code, len(series.points))
```
## Quick Start (Asynchronous)
```python
import asyncio
from boj_api_client import AsyncBojClient
from boj_api_client.timeseries import MetadataQuery
async def main() -> None:
async with AsyncBojClient() as client:
result = await client.timeseries.get_metadata(
MetadataQuery(db="FM08", lang="JP")
)
print(result.envelope.status, len(result.entries))
asyncio.run(main())
```
## Fetching Page by Page (iter_*)
```python
from boj_api_client import BojClient
from boj_api_client.timeseries import DataLayerQuery
with BojClient() as client:
for page in client.timeseries.iter_data_layer(
DataLayerQuery(db="MD10", frequency="Q", layer1="*")
):
print(page.envelope.status, len(page.series))
```
## Enabling Auto-Partition for getDataLayer
This setting enables a metadata-based fallback when `getDataLayer` hits the 1,250-series limit.
```python
from boj_api_client import BojClient, BojClientConfig
from boj_api_client.config import TimeSeriesConfig
from boj_api_client.timeseries import DataLayerQuery
config = BojClientConfig(
timeseries=TimeSeriesConfig(enable_layer_auto_partition=True),
)
with BojClient(config=config) as client:
result = client.timeseries.get_data_layer(
DataLayerQuery(db="MD10", frequency="Q", layer1="*", lang="JP")
)
```
## Resuming from a Partial Result
```python
from boj_api_client import BojClient
from boj_api_client.core.errors import BojPartialResultError
from boj_api_client.timeseries import DataCodeQuery
query = DataCodeQuery(db="CO", code=["TK99F1000601GCQ01000"], lang="JP")
with BojClient() as client:
try:
result = client.timeseries.get_data_code(query)
except BojPartialResultError as exc:
if exc.checkpoint_id is None:
raise
result = client.timeseries.get_data_code(
query,
checkpoint_id=exc.checkpoint_id,
)
```
## Configuration
```python
from boj_api_client import BojClient, BojClientConfig
from boj_api_client.config import (
CheckpointConfig,
RetryConfig,
ThrottlingConfig,
TimeSeriesConfig,
TransportConfig,
)
config = BojClientConfig(
transport=TransportConfig(timeout_read_seconds=20.0),
retry=RetryConfig(max_attempts=3, max_backoff_seconds=5.0),
throttling=ThrottlingConfig(min_wait_interval_seconds=1.0),
checkpoint=CheckpointConfig(enabled=True, ttl_seconds=86400.0),
timeseries=TimeSeriesConfig(enable_layer_auto_partition=False),
)
with BojClient(config=config) as client:
...
```
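As a rough illustration of what `RetryConfig(max_attempts=..., max_backoff_seconds=...)` governs, the sketch below computes a capped exponential-backoff schedule. The client's real retry policy may differ (it may add jitter, for example), and the `base` starting delay is an assumption made for this example.

```python
def backoff_schedule(max_attempts: int, max_backoff_seconds: float,
                     base: float = 0.5) -> list[float]:
    """Wait times (seconds) before each retry; the first attempt has no wait.

    Each retry doubles the previous wait, capped at max_backoff_seconds.
    """
    return [min(base * (2 ** i), max_backoff_seconds)
            for i in range(max_attempts - 1)]

# With max_attempts=3 there are at most two retries: wait 0.5 s, then 1.0 s.
waits = backoff_schedule(max_attempts=3, max_backoff_seconds=5.0)
```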
## Main Exceptions
- `BojValidationError`
- `BojServerError`
- `BojUnavailableError`
- `BojTransportError`
- `BojProtocolError`
- `BojPartialResultError`
- `BojClientClosedError`
## Live Contract Tests
```bash
BOJ_RUN_LIVE=1 pytest -m live -q tests/contract_live
```
To run the `getDataLayer` live tests, set:
- `BOJ_LIVE_LAYER1` (required)
- `BOJ_LIVE_LAYER_DB` (optional, default `MD10`)
- `BOJ_LIVE_LAYER_FREQUENCY` (optional, default `Q`)
## Documentation
- Package layout and responsibilities: `docs/architecture.md`
- Implementation coverage of the API specification: `docs/api_overview.md`
## For Developers
- The sync orchestrator is generated from the async source:
  - `uv run --extra dev python scripts/generate_sync_orchestrator.py`
- Publishing to PyPI:
  - One-time setup: register GitHub Actions as a Trusted Publisher on PyPI
  - After updating `project.version` in `pyproject.toml`, push a `v<version>` tag
  - GitHub Actions (`.github/workflows/ci.yml`) publishes automatically once the tests pass
| text/markdown | null | delihiros <delihiros@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx<0.29,>=0.27",
"pytest<10,>=8; extra == \"dev\"",
"pytest-cov<7,>=5; extra == \"dev\"",
"pytest-asyncio<0.25,>=0.23; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/delihiros/boj-api-client",
"Repository, https://github.com/delihiros/boj-api-client",
"Issues, https://github.com/delihiros/boj-api-client/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:38:04.964694 | boj_api_client-0.1.2.tar.gz | 33,153 | d9/3c/b610abd65016ecb6bb6e56a2cffe1e87c357ed632a801f2e119aa0a4f0a4/boj_api_client-0.1.2.tar.gz | source | sdist | null | false | c84ad8e94b98d09128b9d0187484cbb5 | efbece72770e9bc6313d61789e998f71057598df0549f54f711ef4bd32355ea7 | d93cb610abd65016ecb6bb6e56a2cffe1e87c357ed632a801f2e119aa0a4f0a4 | MIT | [
"LICENSE"
] | 213 |
2.4 | personal_knowledge_library | 4.2.0 | Library to access Wacom's Personal Knowledge graph. | # Wacom Private Knowledge Library
[](https://github.com/Wacom-Developer/personal-knowledge-library/actions/workflows/python-package.yml)
[](https://github.com/Wacom-Developer/personal-knowledge-library/actions/workflows/pylint.yml)

[](https://pypi.python.org/pypi/personal-knowledge-library)
[](https://pypi.python.org/pypi/personal-knowledge-library)
[](https://developer-docs.wacom.com/docs/private-knowledge-service)



The required tenant API key is only available for selected partner companies.
Please contact your Wacom representative for more information.
---
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Introduction](#introduction)
- [Technology Stack](#technology-stack)
- [Domain Knowledge](#domain-knowledge)
- [Knowledge Graph](#knowledge-graph)
- [Semantic Technology](#semantic-technology)
- [Functionality](#functionality)
- [Import Format](#import-format)
- [Access API](#access-api)
- [Entity API](#entity-api)
- [Samples](#samples)
- [Entity Handling](#entity-handling)
- [Named Entity Linking](#named-entity-linking)
- [Access Management](#access-management)
- [Ontology Creation](#ontology-creation)
- [Asynchronous Client](#asynchronous-client)
- [Semantic Search](#semantic-search)
- [Ink Services](#ink-services)
- [Index Management](#index-management)
- [Queue Management](#queue-management)
- [Development](#development)
- [Requirements](#requirements)
- [Setting Up Development Environment](#setting-up-development-environment)
- [Running Tests](#running-tests)
- [Code Quality](#code-quality)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)
---
## Installation
Install the library using pip:
```bash
pip install personal-knowledge-library
```
### Python Version
This library requires **Python 3.10 or higher** (supports Python 3.10, 3.11, 3.12, and 3.13).
### Optional Development Dependencies
To install development dependencies for testing and code quality tools:
```bash
pip install personal-knowledge-library[dev]
```
---
## Quick Start
Here's a minimal example to get you started with the Wacom Knowledge Service:
```python
from knowledge.services.graph import WacomKnowledgeService
from knowledge.base.ontology import OntologyClassReference, ThingObject
from knowledge.base.entity import Label
from knowledge.base.language import EN_US
# Initialize the client
client = WacomKnowledgeService(
service_url="https://private-knowledge.wacom.com",
application_name="My Application"
)
# Login with your credentials
client.login(tenant_api_key="<your-tenant-key>", external_user_id="<your-user-id>")
# Search for entities
results, _ = client.search_labels(search_term="Leonardo da Vinci", language_code=EN_US)
for entity in results:
print(f"{entity.uri}: {[l.content for l in entity.label]}")
```
> **Note:** You need a valid tenant API key from Wacom to use this library.
---
## Introduction
In knowledge management there is a distinction between data, information, and knowledge.
In the domain of digital ink this means:
- **Data—** the equivalent in digital ink is the raw ink strokes
- **Information—** after handwriting, shape, math, or other recognition processes, ink strokes are converted into machine-readable content such as text, shapes, math representations, or other digital content
- **Knowledge / Semantics—** beyond recognition, content must be semantically analyzed so that it can be understood in terms of shared common knowledge.
The following illustration shows the different layers of knowledge:

For handling semantics, Wacom introduced the Wacom Private Knowledge System (PKS) cloud service to manage personal ontologies and its associated personal knowledge graph.
This library provides simplified access to Wacom's personal knowledge cloud service.
It contains:
- Basic data structures for Ontology objects and entities from the knowledge graph
- Clients for the REST APIs
- Connector for Wikidata public knowledge graph
**Ontology service:**
- List all Ontology structures
- Modify Ontology structures
- Delete Ontology structures
**Entity service:**
- List all entities
- Add entities to the knowledge graph
- Access object properties
**Search service:**
- Search for entities for labels and descriptions with a given language
- Search for literals (data properties)
- Search for relations (object properties)
**Group service:**
- List all groups
- Add groups, modify groups, delete groups
- Add users and entities to groups
**Named Entity Linking service:**
- Linking words to knowledge entities from the graph in a given text (Ontology-based Named Entity Linking)
**Wikidata connector:**
- Import entities from Wikidata
- Mapping Wikidata entities to WPK entities
---
# Technology Stack
## Domain Knowledge
The task of the ontology within Wacom's private knowledge system is to formalize the domain the technology is used in, such as the education, smart-home, or creative domain.
The domain model will be the foundation for the entities collected within the knowledge graph, describing real world concepts in a formal language understood by an artificial intelligence system:
- Foundation for structured data, knowledge representation as concepts and relations among concepts
- Being explicit definitions of shared vocabularies for interoperability
- Being actionable fragments of explicit knowledge that engines can use for inferencing (Reasoning)
- Can be used for problem-solving
An ontology defines (specifies) the concepts, relationships, and other distinctions that are relevant for modeling a domain.
## Knowledge Graph
- Knowledge graph is generated from unstructured and structured knowledge sources
- Contains all structured knowledge gathered from all sources
- Foundation for all semantic algorithms
## Semantic Technology
- Extract knowledge from various sources (Connectors)
- Linking words to knowledge entities from the graph in a given text (Ontology-based Named Entity Linking)
- Enables a smart search functionality which understands the context and finds related documents (Semantic Search)
---
# Functionality
## Import Format
For importing entities into the knowledge graph, the `tools/import_entities.py` script can be used.
The ThingObject supports an NDJSON-based import format, where each JSON line can contain the following structure.
| Field name | Subfield name | Data Structure | Description |
|------------------------|---------------|----------------|------------------------------------------------------------------------------------------------|
| source_reference_id | | str | A unique identifier for the entity used in the source system |
| source_system | | str | The source system describes the original source of the entity, such as wikidata, youtube, ... |
| image | | str | A string representing the URL of the entity's icon. |
| labels | | array | An array of label objects, where each object has the following fields: |
| | value | str | A string representing the label text in the specified locale. |
| | locale | str | A string combining the ISO-3166 country code and the ISO-639 language code (e.g., "en-US"). |
| | isMain | bool | A boolean flag indicating if this label is the main label for the entity (true) or an alias (false). |
| descriptions | | array | An array of description objects, where each object has the following fields: |
| | description | str | A string representing the description text in the specified locale. |
| | locale | str | A string combining the ISO-3166 country code and the ISO-639 language code (e.g., "en-US"). |
| type | | str | A string representing the IRI of the ontology class for this entity. |
| literals | | array[map] | An array of data property objects, where each object has the following fields: |
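For illustration, one NDJSON line matching the table above could be built as follows. The field names follow the table; the values (entity, URLs, class IRI) are hypothetical examples, not data from the service.

```python
import json

# Hypothetical entity record using the import-format fields from the table.
record = {
    "source_reference_id": "Q762",          # unique id in the source system
    "source_system": "wikidata",            # original source of the entity
    "image": "https://example.org/leonardo.jpg",
    "labels": [
        {"value": "Leonardo da Vinci", "locale": "en-US", "isMain": True},
        {"value": "ダ・ビンチ", "locale": "ja-JP", "isMain": False},  # alias
    ],
    "descriptions": [
        {"description": "Italian Renaissance polymath", "locale": "en-US"},
    ],
    "type": "wacom:core#Person",            # IRI of the ontology class
    "literals": [],                         # data properties, if any
}
# NDJSON stores one JSON object per line, so the serialized record
# must not contain newlines.
ndjson_line = json.dumps(record, ensure_ascii=False)
```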
## Access API
The personal knowledge graph backend is implemented as a multi-tenancy system.
Thus, several tenants can be logically separated from each other, and different organizations can build their own knowledge graphs.

In general, a tenant with their users, groups, and entities are logically separated.
Physically, the entities are stored in the same instance of the Wacom Private Knowledge (WPK) backend database system.
User management is deliberately minimal: each organization must provide its own authentication service and user management.
The backend only keeps a reference to the user (a *"shadow user"*) via an **external user id**.
Tenant management is limited to the system owner (Wacom), as it requires a **tenant management API** key, while users for each tenant can be created by the owner of the **Tenant API Key**.
You will receive this token from the system owner after the creation of the tenant.
> :warning: Store the **Tenant API Key** in a secure key store, as attackers can use the key to harm your system.
The **Tenant API Key** should be only used by your authentication service to create shadow users and to log in your user into the WPK backend.
After a successful user login, you will receive a token which can be used by the user to create, update, or delete entities and relations.
The following illustration summarizes the flows for creation of tenant and users:

The organization itself needs to implement their own authentication service which:
- handles the users and their passwords,
- controls the personal data of the users,
- connects the users with the WPK backend and share with them the user token.
The WPK backend only manages the access levels of the entities and the group management for users.
The illustration shows how the access token is received from the WPK endpoint:

# Entity API
The entities used within the knowledge graph and the relationship among them are defined within an ontology managed with Wacom Ontology Management System (WOMS).
An entity within the personal knowledge graphs consists of these major parts:
- **Icon—** a visual representation of the entity, for instance, a portrait of a person.
- **URI—** a unique resource identifier of an entity in the graph.
- **Type—** the type links to the defined concept class in the ontology.
- **Labels—** labels are the word(s) used in a language for the concept.
- **Description—** a short abstract that describes the entity.
- **Literals—** literals are properties of an entity, such as the first name of a person. The ontology defines all literals of the concept class as well as its data type.
- **Relations—** the relationship among different entities is described using relations.
The following illustration provides an example of an entity:

## Entity content
Entities are in general language-independent: across nationalities and cultures we merely use different scripts and words for a shared instance of a concept.
Let's take Leonardo da Vinci as an example.
The ontology defines the concept of a Person, a human being.
Now, in English its label would be _Leonardo da Vinci_, while in Japanese _レオナルド・ダ・ヴィンチ_.
Moreover, he is also known as _Leonardo di ser Piero da Vinci_ or _ダ・ビンチ_.
### Labels
Now, in the given example all words that are assigned to the concept are labels.
The label _Leonardo da Vinci_ is stored in the backend with an additional language code, e.g. _en_.
There is always a main label, which refers to the most common or official name of an entity.
Another example would be Wacom, where _Wacom Co., Ltd._ is the official name while _Wacom_ is commonly used and considered an alias.
> :pushpin: Language codes follow **ISO 639-1:2002**, Codes for the representation of names of languages (Part 1: Alpha-2 code). Read more [here](https://www.iso.org/standard/22109.html).
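To make the main-label/alias distinction concrete, here is a minimal stand-in model. `SimpleLabel` is illustrative only and is not the library's `Label` class; it just mirrors the content/language-code/main-flag structure described above.

```python
from dataclasses import dataclass

@dataclass
class SimpleLabel:
    """Illustrative label: text, language code, and main-label flag."""
    content: str
    language_code: str
    is_main: bool = False

# The Wacom example from the text: one main label, one alias.
labels = [
    SimpleLabel("Wacom Co., Ltd.", "en", is_main=True),  # official name
    SimpleLabel("Wacom", "en"),                          # commonly used alias
]
main = next(l for l in labels if l.is_main)
aliases = [l for l in labels if not l.is_main]
```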
## Samples
### Entity handling
This sample shows how to work with the graph service.
```python
import argparse
from typing import Optional, Dict, List
from knowledge.base.entity import Description, Label
from knowledge.base.language import LocaleCode, EN_US, DE_DE
from knowledge.base.ontology import OntologyClassReference, OntologyPropertyReference, ThingObject, ObjectProperty
from knowledge.services.graph import WacomKnowledgeService
# ------------------------------- Knowledge entities -------------------------------------------------------------------
LEONARDO_DA_VINCI: str = 'Leonardo da Vinci'
SELF_PORTRAIT_STYLE: str = 'self-portrait'
ICON: str = "https://upload.wikimedia.org/wikipedia/commons/thumb/8/87/Mona_Lisa_%28copy%2C_Thalwil%2C_Switzerland%29."\
"JPG/1024px-Mona_Lisa_%28copy%2C_Thalwil%2C_Switzerland%29.JPG"
# ------------------------------- Ontology class names -----------------------------------------------------------------
THING_OBJECT: OntologyClassReference = OntologyClassReference('wacom', 'core', 'Thing')
"""
The ontology contains a Thing class, which is the root class of the hierarchy.
"""
ARTWORK_CLASS: OntologyClassReference = OntologyClassReference('wacom', 'creative', 'VisualArtwork')
PERSON_CLASS: OntologyClassReference = OntologyClassReference('wacom', 'core', 'Person')
ART_STYLE_CLASS: OntologyClassReference = OntologyClassReference.parse('wacom:creative#ArtStyle')
IS_CREATOR: OntologyPropertyReference = OntologyPropertyReference('wacom', 'core', 'created')
HAS_TOPIC: OntologyPropertyReference = OntologyPropertyReference.parse('wacom:core#hasTopic')
CREATED: OntologyPropertyReference = OntologyPropertyReference.parse('wacom:core#created')
HAS_ART_STYLE: OntologyPropertyReference = OntologyPropertyReference.parse('wacom:creative#hasArtstyle')
def print_entity(display_entity: ThingObject, list_idx: int, client: WacomKnowledgeService,
short: bool = False):
"""
Printing entity details.
Parameters
----------
display_entity: ThingObject
Entity with properties
list_idx: int
Index with a list
client: WacomKnowledgeService
Knowledge graph client
short: bool
Short summary
"""
print(f'[{list_idx}] : {display_entity.uri} <{display_entity.concept_type.iri}>')
if len(display_entity.label) > 0:
print(' | [Labels]')
for la in display_entity.label:
print(f' | |- "{la.content}"@{la.language_code}')
print(' |')
if not short:
if len(display_entity.alias) > 0:
print(' | [Alias]')
for la in display_entity.alias:
print(f' | |- "{la.content}"@{la.language_code}')
print(' |')
if len(display_entity.data_properties) > 0:
print(' | [Attributes]')
for data_property, labels in display_entity.data_properties.items():
print(f' | |- {data_property.iri}:')
for li in labels:
print(f' | |-- "{li.value}"@{li.language_code}')
print(' |')
relations_obj: Dict[OntologyPropertyReference, ObjectProperty] = client.relations(uri=display_entity.uri)
if len(relations_obj) > 0:
print(' | [Relations]')
for r_idx, re in enumerate(relations_obj.values()):
last: bool = r_idx == len(relations_obj) - 1
print(f' |--- {re.relation.iri}: ')
print(f' {"|" if not last else " "} |- [Incoming]: {re.incoming_relations} ')
print(f' {"|" if not last else " "} |- [Outgoing]: {re.outgoing_relations}')
print()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-u", "--user", help="External Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-t", "--tenant", help="Tenant Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-i", "--instance", default='https://private-knowledge.wacom.com',
help="URL of instance")
args = parser.parse_args()
TENANT_KEY: str = args.tenant
EXTERNAL_USER_ID: str = args.user
# Wacom personal knowledge REST API Client
knowledge_client: WacomKnowledgeService = WacomKnowledgeService(service_url=args.instance, application_name="Wacom Knowledge Listing")
knowledge_client.login(args.tenant, args.user)
page_id: Optional[str] = None
page_number: int = 1
entity_count: int = 0
print('-----------------------------------------------------------------------------------------------------------')
print(' First step: Find Leonardo da Vinci in the knowledge graph.')
print('-----------------------------------------------------------------------------------------------------------')
res_entities, next_search_page = knowledge_client.search_labels(search_term=LEONARDO_DA_VINCI,
language_code=LocaleCode('en_US'), limit=1000)
leo: Optional[ThingObject] = None
s_idx: int = 1
for res_entity in res_entities:
# Entity must be a person and the label matches with full string
if res_entity.concept_type == PERSON_CLASS and LEONARDO_DA_VINCI in [la.content for la in res_entity.label]:
leo = res_entity
break
print('-----------------------------------------------------------------------------------------------------------')
print(' What artwork exists in the knowledge graph.')
print('-----------------------------------------------------------------------------------------------------------')
relations_dict: Dict[OntologyPropertyReference, ObjectProperty] = knowledge_client.relations(uri=leo.uri)
print(f' Artwork of {leo.label}')
print('-----------------------------------------------------------------------------------------------------------')
idx: int = 1
if CREATED in relations_dict:
for e in relations_dict[CREATED].outgoing_relations:
print(f' [{idx}] {e.uri}: {e.label}')
idx += 1
print('-----------------------------------------------------------------------------------------------------------')
print(' Let us create a new piece of artwork.')
print('-----------------------------------------------------------------------------------------------------------')
# Main labels for entity
artwork_labels: List[Label] = [
Label('Ginevra Gherardini', EN_US),
Label('Ginevra Gherardini', DE_DE)
]
# Alias labels for entity
artwork_alias: List[Label] = [
Label("Ginevra", EN_US),
Label("Ginevra", DE_DE)
]
# Topic description
artwork_description: List[Description] = [
        Description('Oil painting of Mona Lisa\'s sister', EN_US),
        Description('Ölgemälde von Mona Lisas Schwester', DE_DE)
]
# Topic
artwork_object: ThingObject = ThingObject(label=artwork_labels, concept_type=ARTWORK_CLASS,
description=artwork_description,
icon=ICON)
artwork_object.alias = artwork_alias
print(f' Create: {artwork_object}')
# Create artwork
artwork_entity_uri: str = knowledge_client.create_entity(artwork_object)
print(f' Entity URI: {artwork_entity_uri}')
# Create relation between Leonardo da Vinci and artwork
knowledge_client.create_relation(source=leo.uri, relation=IS_CREATOR, target=artwork_entity_uri)
relations_dict = knowledge_client.relations(uri=artwork_entity_uri)
for ontology_property, object_property in relations_dict.items():
print(f' {object_property}')
# You will see that wacom:core#isCreatedBy is automatically inferred as a relation as it is the inverse property of
# wacom:core#created.
# Now, more search options
res_entities, next_search_page = knowledge_client.search_description('Michelangelo\'s Sistine Chapel',
EN_US, limit=1000)
print('-----------------------------------------------------------------------------------------------------------')
print(' Search results. Description: "Michelangelo\'s Sistine Chapel"')
print('-----------------------------------------------------------------------------------------------------------')
s_idx: int = 1
    for e in res_entities:
        print_entity(e, s_idx, knowledge_client)
        s_idx += 1
# Now, let's search all artwork that has the art style self-portrait
res_entities, next_search_page = knowledge_client.search_labels(search_term=SELF_PORTRAIT_STYLE,
language_code=EN_US, limit=1000)
art_style: Optional[ThingObject] = None
s_idx: int = 1
for entity in res_entities:
        # Entity must be an art style and the label must match the full string
if entity.concept_type == ART_STYLE_CLASS and SELF_PORTRAIT_STYLE in [la.content for la in entity.label]:
art_style = entity
break
res_entities, next_search_page = knowledge_client.search_relation(subject_uri=None,
relation=HAS_ART_STYLE,
object_uri=art_style.uri,
language_code=EN_US)
print('-----------------------------------------------------------------------------------------------------------')
    print(f' Search results. Relation: relation:={HAS_ART_STYLE} subject_uri:=unknown object_uri:={art_style.uri}')
print('-----------------------------------------------------------------------------------------------------------')
s_idx: int = 1
for e in res_entities:
print_entity(e, s_idx, knowledge_client, short=True)
s_idx += 1
    # Finally, the activations call retrieves the entities related to the given URIs up to a pre-defined depth.
entities, relations = knowledge_client.activations(uris=[leo.uri], depth=1)
print('-----------------------------------------------------------------------------------------------------------')
print(f'Activation. URI: {leo.uri}')
print('-----------------------------------------------------------------------------------------------------------')
s_idx: int = 1
    for e in entities:
print_entity(e, s_idx, knowledge_client)
s_idx += 1
# All relations
print('-----------------------------------------------------------------------------------------------------------')
for r in relations:
print(f'Subject: {r[0]} Predicate: {r[1]} Object: {r[2]}')
print('-----------------------------------------------------------------------------------------------------------')
page_id = None
# Listing all entities that have the type
idx: int = 1
while True:
# pull
entities, total_number, next_page_id = knowledge_client.listing(ART_STYLE_CLASS, page_id=page_id, limit=100)
pulled_entities: int = len(entities)
entity_count += pulled_entities
print('-------------------------------------------------------------------------------------------------------')
print(f' Page: {page_number} Number of entities: {len(entities)} ({entity_count}/{total_number}) '
f'Next page id: {next_page_id}')
print('-------------------------------------------------------------------------------------------------------')
for e in entities:
print_entity(e, idx, knowledge_client)
idx += 1
if pulled_entities == 0:
break
page_number += 1
page_id = next_page_id
print()
    # Delete all personal entities for this user
    page_id = None
    while True:
# pull
entities, total_number, next_page_id = knowledge_client.listing(THING_OBJECT, page_id=page_id,
limit=100)
pulled_entities: int = len(entities)
if pulled_entities == 0:
break
delete_uris: List[str] = [e.uri for e in entities]
print(f'Cleanup. Delete entities: {delete_uris}')
knowledge_client.delete_entities(uris=delete_uris, force=True)
page_number += 1
page_id = next_page_id
print('-----------------------------------------------------------------------------------------------------------')
```
### Named Entity Linking
Performing Named Entity Linking (NEL) on text and Universal Ink Model.
```python
import argparse
from typing import List, Dict
import urllib3
from knowledge.base.language import EN_US
from knowledge.base.ontology import OntologyPropertyReference, ThingObject, ObjectProperty
from knowledge.nel.base import KnowledgeGraphEntity
from knowledge.nel.engine import WacomEntityLinkingEngine
from knowledge.services.graph import WacomKnowledgeService
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TEXT: str = "Leonardo da Vinci painted the Mona Lisa."
def print_entity(entity: KnowledgeGraphEntity, list_idx: int, auth_key: str, client: WacomKnowledgeService):
"""
Printing entity details.
Parameters
----------
entity: KnowledgeGraphEntity
Named entity
list_idx: int
        Index within the list
auth_key: str
Authorization key
client: WacomKnowledgeService
Knowledge graph client
"""
    thing: ThingObject = client.entity(auth_key=auth_key, uri=entity.entity_source.uri)
print(f'[{list_idx}] - {entity.ref_text} [{entity.start_idx}-{entity.end_idx}] : {thing.uri}'
f' <{thing.concept_type.iri}>')
if len(thing.label) > 0:
print(' | [Labels]')
for la in thing.label:
print(f' | |- "{la.content}"@{la.language_code}')
print(' |')
    if len(thing.alias) > 0:
print(' | [Alias]')
for la in thing.alias:
print(f' | |- "{la.content}"@{la.language_code}')
print(' |')
relations: Dict[OntologyPropertyReference, ObjectProperty] = client.relations(auth_key=auth_key, uri=thing.uri)
if len(thing.data_properties) > 0:
print(' | [Attributes]')
for data_property, labels in thing.data_properties.items():
print(f' | |- {data_property.iri}:')
for li in labels:
print(f' | |-- "{li.value}"@{li.language_code}')
print(' |')
if len(relations) > 0:
print(' | [Relations]')
for re in relations.values():
print(f' |--- {re.relation.iri}: ')
print(f' |- [Incoming]: {re.incoming_relations} ')
print(f' |- [Outgoing]: {re.outgoing_relations}')
print()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-u", "--user", help="External Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-t", "--tenant", help="Tenant Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-i", "--instance", default="https://private-knowledge.wacom.com", help="URL of instance")
args = parser.parse_args()
TENANT_KEY: str = args.tenant
EXTERNAL_USER_ID: str = args.user
# Wacom personal knowledge REST API Client
knowledge_client: WacomKnowledgeService = WacomKnowledgeService(
application_name="Named Entity Linking Knowledge access",
service_url=args.instance)
# Wacom Named Entity Linking
nel_client: WacomEntityLinkingEngine = WacomEntityLinkingEngine(
service_url=args.instance,
service_endpoint=WacomEntityLinkingEngine.SERVICE_ENDPOINT
)
# Use special tenant for testing: Unit-test tenant
user_token, refresh_token, expiration_time = nel_client.request_user_token(TENANT_KEY, EXTERNAL_USER_ID)
entities: List[KnowledgeGraphEntity] = nel_client.\
link_personal_entities(text=TEXT, language_code=EN_US, auth_key=user_token)
idx: int = 1
print('-----------------------------------------------------------------------------------------------------------')
print(f'Text: "{TEXT}"@{EN_US}')
print('-----------------------------------------------------------------------------------------------------------')
for e in entities:
print_entity(e, idx, user_token, knowledge_client)
idx += 1
```
### Access Management
The sample shows how access to entities can be shared with a group of users or the tenant.
```python
import argparse
from typing import List
from knowledge.base.entity import Label, Description
from knowledge.base.language import EN_US, DE_DE, JA_JP
from knowledge.base.ontology import OntologyClassReference, ThingObject
from knowledge.services.base import WacomServiceException
from knowledge.services.graph import WacomKnowledgeService
from knowledge.services.group import GroupManagementService, Group
from knowledge.services.users import UserManagementServiceAPI
# ------------------------------- User credential ----------------------------------------------------------------------
TOPIC_CLASS: OntologyClassReference = OntologyClassReference('wacom', 'core', 'Topic')
def create_entity() -> ThingObject:
"""Create a new entity.
Returns
-------
entity: ThingObject
Entity object
"""
# Main labels for entity
topic_labels: List[Label] = [
Label('Hidden', EN_US),
Label('Versteckt', DE_DE),
Label('隠れた', JA_JP),
]
# Topic description
topic_description: List[Description] = [
Description('Hidden entity to explain access management.', EN_US),
        Description('Versteckte Entität, um die Zugriffssteuerung zu erklären.', DE_DE)
]
# Topic
topic_object: ThingObject = ThingObject(label=topic_labels, concept_type=TOPIC_CLASS, description=topic_description)
return topic_object
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-u", "--user", help="External Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-t", "--tenant", help="Tenant Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-i", "--instance", default='https://private-knowledge.wacom.com',
help="URL of instance")
args = parser.parse_args()
TENANT_KEY: str = args.tenant
EXTERNAL_USER_ID: str = args.user
# Wacom personal knowledge REST API Client
knowledge_client: WacomKnowledgeService = WacomKnowledgeService(application_name="Wacom Knowledge Listing",
service_url=args.instance)
# User Management
user_management: UserManagementServiceAPI = UserManagementServiceAPI(service_url=args.instance)
# Group Management
group_management: GroupManagementService = GroupManagementService(service_url=args.instance)
admin_token, refresh_token, expiration_time = user_management.request_user_token(TENANT_KEY, EXTERNAL_USER_ID)
# Now, we create a user
u1, u1_token, _, _ = user_management.create_user(TENANT_KEY, "u1")
u2, u2_token, _, _ = user_management.create_user(TENANT_KEY, "u2")
u3, u3_token, _, _ = user_management.create_user(TENANT_KEY, "u3")
# Now, let's create an entity
thing: ThingObject = create_entity()
entity_uri: str = knowledge_client.create_entity(thing, auth_key=u1_token)
# Only user 1 can access the entity from cloud storage
my_thing: ThingObject = knowledge_client.entity(entity_uri, auth_key=u1_token)
    print(f'User 1 is the owner of the entity: {my_thing.owner}')
# Now only user 1 has access to the personal entity
knowledge_client.entity(entity_uri, auth_key=u1_token)
# Try to access the entity
try:
knowledge_client.entity(entity_uri, auth_key=u2_token)
except WacomServiceException as we:
print(f"Expected exception as user 2 has no access to the personal entity of user 1. Exception: {we}")
print(f"Status code: {we.status_code}")
print(f"Response text: {we.service_response}")
# Try to access the entity
try:
knowledge_client.entity(entity_uri, auth_key=u3_token)
except WacomServiceException as we:
print(f"Expected exception as user 3 has no access to the personal entity of user 1. Exception: {we}")
# Now, user 1 creates a group
g: Group = group_management.create_group("test-group", auth_key=u1_token)
# Shares the join key with user 2 and user 2 joins
group_management.join_group(g.id, g.join_key, auth_key=u2_token)
# Share entity with a group
group_management.add_entity_to_group(g.id, entity_uri, auth_key=u1_token)
# Now, user 2 should have access
other_thing: ThingObject = knowledge_client.entity(entity_uri, auth_key=u2_token)
    print(f'User 2 can now read the entity owned by {other_thing.owner}')
# Try to access the entity
try:
knowledge_client.entity(entity_uri, auth_key=u3_token)
except WacomServiceException as we:
print(f"Expected exception as user 3 still has no access to the personal entity of user 1. Exception: {we}")
print(f"URL: {we.url}, method: {we.method}")
print(f"Status code: {we.status_code}")
print(f"Response text: {we.service_response}")
print(f"Message: {we.message}")
# Un-share the entity
group_management.remove_entity_to_group(g.id, entity_uri, auth_key=u1_token)
# Now, again no access
try:
knowledge_client.entity(entity_uri, auth_key=u2_token)
except WacomServiceException as we:
print(f"Expected exception as user 2 has no access to the personal entity of user 1. Exception: {we}")
print(f"URL: {we.url}, method: {we.method}")
print(f"Status code: {we.status_code}")
print(f"Response text: {we.service_response}")
print(f"Message: {we.message}")
group_management.leave_group(group_id=g.id, auth_key=u2_token)
# Now, share the entity with the whole tenant
my_thing.tenant_access_right.read = True
knowledge_client.update_entity(my_thing, auth_key=u1_token)
# Now, all users can access the entity
knowledge_client.entity(entity_uri, auth_key=u2_token)
knowledge_client.entity(entity_uri, auth_key=u3_token)
# Finally, clean up
knowledge_client.delete_entity(entity_uri, force=True, auth_key=u1_token)
# Remove users
user_management.delete_user(TENANT_KEY, u1.external_user_id, u1.id, force=True)
user_management.delete_user(TENANT_KEY, u2.external_user_id, u2.id, force=True)
user_management.delete_user(TENANT_KEY, u3.external_user_id, u3.id, force=True)
```
### Ontology Creation
The samples show how the ontology can be extended and new entities can be added using the added classes.
```python
import argparse
import sys
from typing import Optional, List
from knowledge.base.entity import Label, Description
from knowledge.base.language import EN_US, DE_DE
from knowledge.base.ontology import DataPropertyType, OntologyClassReference, OntologyPropertyReference, ThingObject, \
DataProperty, OntologyContext
from knowledge.services.graph import WacomKnowledgeService
from knowledge.services.ontology import OntologyService
from knowledge.services.session import PermanentSession
# ------------------------------- Constants ----------------------------------------------------------------------------
LEONARDO_DA_VINCI: str = 'Leonardo da Vinci'
CONTEXT_NAME: str = 'core'
# Wacom Base Ontology Types
PERSON_TYPE: OntologyClassReference = OntologyClassReference.parse("wacom:core#Person")
# Demo Class
ARTIST_TYPE: OntologyClassReference = OntologyClassReference.parse("demo:creative#Artist")
# Demo Object property
IS_INSPIRED_BY: OntologyPropertyReference = OntologyPropertyReference.parse("demo:creative#isInspiredBy")
# Demo Data property
STAGE_NAME: OntologyPropertyReference = OntologyPropertyReference.parse("demo:creative#stageName")
def create_artist() -> ThingObject:
"""
Create a new artist entity.
Returns
-------
instance: ThingObject
Artist entity
"""
# Main labels for entity
topic_labels: List[Label] = [
Label('Gian Giacomo Caprotti', EN_US),
]
# Topic description
topic_description: List[Description] = [
Description('Hidden entity to explain access management.', EN_US),
        Description('Versteckte Entität, um die Zugriffssteuerung zu erklären.', DE_DE)
]
data_property: DataProperty = DataProperty(content='Salaj',
property_ref=STAGE_NAME,
language_code=EN_US)
# Topic
artist: ThingObject = ThingObject(label=topic_labels, concept_type=ARTIST_TYPE, description=topic_description)
artist.add_data_property(data_property)
return artist
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-u", "--user", help="External Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-t", "--tenant", help="Tenant Id of the shadow user within the Wacom Personal Knowledge.",
required=True)
parser.add_argument("-i", "--instance", default="https://private-knowledge.wacom.com", help="URL of instance")
args = parser.parse_args()
TENANT_KEY: str = args.tenant
EXTERNAL_USER_ID: str = args.user
# Wacom Ontology REST API Client
ontology_client: Ontolog | text/markdown | Markus Weber | markus.weber@wacom.com | null | null | null | semantic-knowledge, knowledge-graph | [
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"aiohttp[speedups]",
"requests<3.0.0,>=2.32.0",
"PyJWT<3.0.0,>=2.10.1",
"tqdm>=4.65.0",
"rdflib>=7.1.0",
"orjson>=3.10.0",
"certifi",
"cachetools>=5.3.0",
"loguru==0.7.3",
"urllib3<3.0.0,>=2.6.3",
"types-requests<3.0.0.0,>=2.32.4.20260107",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-env; extra == \"dev\"",
"Faker==18.9.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"pylint>=2.17.0; extra == \"dev\"",
"mypy>=1.8.0; extra == \"dev\"",
"black>=24.2.0; extra == \"dev\"",
"tox>=4.0.0; extra == \"dev\"",
"pdoc3; extra == \"dev\"",
"universal-ink-library; extra == \"dev\""
] | [] | [] | [] | [] | poetry/2.2.1 CPython/3.9.25 Linux/6.11.0-1018-azure | 2026-02-20T15:37:55.894986 | personal_knowledge_library-4.2.0.tar.gz | 173,299 | c5/be/014185efe3c960d034918b4dd696a047ae01aa4685f15921384e4b661c48/personal_knowledge_library-4.2.0.tar.gz | source | sdist | null | false | c98d8e8c2263fa1de1acc8cafe42aefc | fedef8ef934df43a09474444e6f7c3360c4d65b879a5e18c07297cca1a0da4f8 | c5be014185efe3c960d034918b4dd696a047ae01aa4685f15921384e4b661c48 | null | [] | 0 |
2.4 | authfort | 0.0.9 | Comprehensive authentication and authorization library for Python | <div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="../.github/logo-dark.svg" width="60">
<source media="(prefers-color-scheme: light)" srcset="../.github/logo-light.svg" width="60">
<img alt="AuthFort" src="../.github/logo-light.svg" width="60">
</picture>
# authfort
[](https://pypi.org/project/authfort/)
[](https://www.python.org/)
[](https://opensource.org/licenses/MIT)
[](https://bhagyajitjagdev.github.io/authfort/server/configuration/)
</div>
Complete authentication and authorization library for Python.
## Install
```bash
pip install authfort[fastapi]
# or with SQLite: pip install authfort[sqlite,fastapi]
```
## Quick Start
```python
from authfort import AuthFort, CookieConfig
from fastapi import FastAPI, Depends
auth = AuthFort(
database_url="postgresql+asyncpg://user:pass@localhost/mydb",
cookie=CookieConfig(),
)
app = FastAPI()
app.include_router(auth.fastapi_router(), prefix="/auth")
app.include_router(auth.jwks_router())
@app.get("/profile")
async def profile(user=Depends(auth.current_user)):
return {"email": user.email, "roles": user.roles}
```
## Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | /auth/signup | Create account |
| POST | /auth/login | Sign in |
| POST | /auth/refresh | Refresh access token |
| POST | /auth/logout | Sign out |
| GET | /auth/me | Get current user |
| GET | /auth/oauth/{provider}/authorize | Start OAuth flow |
| GET | /auth/oauth/{provider}/callback | OAuth callback |
| POST | /auth/introspect | Token introspection |
| GET | /.well-known/jwks.json | Public signing keys |
## Features
- Email/password auth with argon2 hashing
- JWT RS256 with automatic key management
- Refresh token rotation with theft detection
- OAuth 2.1 with PKCE (Google, GitHub)
- Role-based access control
- Password reset (programmatic — you control delivery)
- Change password (with old password verification)
- Session management (list, revoke, revoke all except current)
- Ban/unban users
- Event hooks (15 event types)
- JWKS + key rotation
- Cookie and bearer token modes
- Multi-database: PostgreSQL (default), SQLite, MySQL via SQLAlchemy
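The refresh-token rotation and theft detection listed above work roughly as follows. This is a plain-Python sketch of the general technique, not AuthFort's actual implementation; `RefreshTokenStore` and its methods are made-up names for illustration:

```python
import secrets


class RefreshTokenStore:
    """Toy store: every refresh token belongs to a family; presenting an
    already-rotated token is treated as theft and revokes the whole family."""

    def __init__(self):
        self._tokens = {}  # token -> [family_id, is_active]

    def issue(self, family_id=None):
        family_id = family_id or secrets.token_hex(8)
        token = secrets.token_urlsafe(32)
        self._tokens[token] = [family_id, True]
        return token, family_id

    def rotate(self, token):
        entry = self._tokens.get(token)
        if entry is None:
            raise PermissionError("unknown refresh token")
        family_id, is_active = entry
        if not is_active:
            # Old token replayed: likely stolen, so kill the entire family.
            self._revoke_family(family_id)
            raise PermissionError("token reuse detected; family revoked")
        entry[1] = False  # the old token is now spent
        return self.issue(family_id)

    def _revoke_family(self, family_id):
        for entry in self._tokens.values():
            if entry[0] == family_id:
                entry[1] = False


store = RefreshTokenStore()
t1, _ = store.issue()
t2, _ = store.rotate(t1)  # normal rotation succeeds
try:
    store.rotate(t1)      # replaying t1 trips theft detection
except PermissionError:
    pass                  # t2 is revoked as well, so a stolen chain is dead
```

The point of the family-wide revocation is that neither the attacker nor the legitimate client can keep using the chain once a replay is seen, forcing a fresh login.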
## OAuth
```python
from authfort import AuthFort, GoogleProvider, GitHubProvider
auth = AuthFort(
database_url="...",
providers=[
GoogleProvider(client_id="...", client_secret="..."),
GitHubProvider(client_id="...", client_secret="..."),
],
)
```
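The PKCE part of these flows boils down to a verifier/challenge pair as defined by RFC 7636 (S256 method). The sketch below shows the standard construction with the Python standard library only; `make_pkce_pair` is a hypothetical helper name, not part of AuthFort's API:

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Return (code_verifier, code_challenge) using the S256 method."""
    verifier = secrets.token_urlsafe(64)[:128]  # RFC 7636 allows 43-128 chars
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # Base64url without padding, as required by the spec.
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge


verifier, challenge = make_pkce_pair()
# The authorize request carries code_challenge; the later token exchange
# proves possession of the original code_verifier.
```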
## Programmatic API
```python
# Create users without the HTTP endpoint
result = await auth.create_user("admin@example.com", "password", name="Admin")
# Roles
await auth.add_role(user_id, "admin")
await auth.remove_role(user_id, "editor")
# Password reset (you handle delivery — email, SMS, etc.)
token = await auth.create_password_reset_token("user@example.com")
if token:
send_email(email, f"https://myapp.com/reset?token={token}")
await auth.reset_password(token, "new_password")
# Change password (authenticated)
await auth.change_password(user_id, "old_password", "new_password")
# Sessions
sessions = await auth.get_sessions(user_id, active_only=True)
await auth.revoke_session(session_id)
await auth.revoke_all_sessions(user_id, exclude=user.session_id) # keep current
# Ban/unban
await auth.ban_user(user_id)
await auth.unban_user(user_id)
```
## License
[MIT](../LICENSE)
| text/markdown | Bhagyajit Jagdev | null | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"alembic>=1.18.4",
"argon2-cffi>=25.1.0",
"asyncpg>=0.31.0",
"cryptography>=46.0.5",
"httpx>=0.28.1",
"pyjwt[crypto]>=2.11.0",
"sqlalchemy[asyncio]>=2.0",
"fastapi>=0.104; extra == \"fastapi\"",
"aiomysql>=0.2; extra == \"mysql\"",
"aiosqlite>=0.19; extra == \"sqlite\""
] | [] | [] | [] | [
"Homepage, https://github.com/bhagyajitjagdev/authfort",
"Repository, https://github.com/bhagyajitjagdev/authfort"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:37:36.088128 | authfort-0.0.9-py3-none-any.whl | 51,722 | f6/78/e198d0b28952c9ef4fd901a678128230c89796437fce4662665d78e99cec/authfort-0.0.9-py3-none-any.whl | py3 | bdist_wheel | null | false | 923fcd33c8089c3aef509a3504a6b2d1 | 32d08b6f25afd69001c74e3f13489471ed29b6cd6fb2f2130d248f7e4757f5bb | f678e198d0b28952c9ef4fd901a678128230c89796437fce4662665d78e99cec | null | [] | 196 |
2.4 | authfort-service | 0.0.9 | Lightweight JWT verification for AuthFort-powered microservices | <div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="../.github/logo-dark.svg" width="60">
<source media="(prefers-color-scheme: light)" srcset="../.github/logo-light.svg" width="60">
<img alt="AuthFort" src="../.github/logo-light.svg" width="60">
</picture>
# authfort-service
[](https://pypi.org/project/authfort-service/)
[](https://www.python.org/)
[](https://opensource.org/licenses/MIT)
[](https://bhagyajitjagdev.github.io/authfort/service/)
</div>
Lightweight JWT verification for microservices powered by AuthFort.
## Install
```bash
pip install authfort-service[fastapi]
```
## Quick Start
```python
from authfort_service import ServiceAuth
from fastapi import FastAPI, Depends
service = ServiceAuth(
jwks_url="https://auth.example.com/.well-known/jwks.json",
issuer="authfort",
)
app = FastAPI()
@app.get("/api/data")
async def protected(user=Depends(service.current_user)):
return {"user_id": user.sub, "roles": user.roles}
@app.get("/api/admin")
async def admin_only(user=Depends(service.require_role("admin"))):
return {"message": "admin access"}
```
## Features
- JWKS fetching with automatic caching and refresh
- JWT signature verification (RS256)
- Token introspection client (optional real-time validation)
- FastAPI integration (current_user, require_role dependencies)
- No database required
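The "JWKS fetching with automatic caching and refresh" feature follows a common TTL-cache pattern. Below is an illustrative stand-alone sketch of that pattern, not the library's real class; `JWKSCache` and `fetch_jwks` are made-up names:

```python
import time


class JWKSCache:
    """Serve a cached JWKS document, refetching once the TTL expires."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch        # callable returning the JWKS dict
        self._ttl = ttl_seconds
        self._keys = None
        self._fetched_at = 0.0

    def keys(self, now=None):
        now = time.monotonic() if now is None else now
        if self._keys is None or now - self._fetched_at > self._ttl:
            self._keys = self._fetch()
            self._fetched_at = now
        return self._keys


calls = []

def fetch_jwks():
    calls.append(1)  # stand-in for an HTTP GET of /.well-known/jwks.json
    return {"keys": [{"kid": "demo", "kty": "RSA"}]}

cache = JWKSCache(fetch_jwks, ttl_seconds=300)
cache.keys(now=0.0)     # first call fetches
cache.keys(now=100.0)   # within TTL: served from cache
cache.keys(now=400.0)   # TTL expired: refetched
```

A real client would additionally force a refetch when a token's `kid` is not found in the cached set, to pick up rotated keys early.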
## With Introspection
```python
service = ServiceAuth(
jwks_url="https://auth.example.com/.well-known/jwks.json",
issuer="authfort",
introspect_url="https://auth.example.com/auth/introspect",
introspect_secret="shared-secret",
)
# Real-time validation (checks ban status, token version, fresh roles)
result = await service.introspect(token)
```
## License
[MIT](../LICENSE)
| text/markdown | Bhagyajit Jagdev | null | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"cryptography>=46.0.5",
"httpx>=0.28.1",
"pyjwt[crypto]>=2.11.0",
"fastapi>=0.104; extra == \"fastapi\""
] | [] | [] | [] | [
"Homepage, https://github.com/bhagyajitjagdev/authfort",
"Repository, https://github.com/bhagyajitjagdev/authfort"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:37:34.370188 | authfort_service-0.0.9.tar.gz | 10,252 | fb/7d/c89d2e5a50f6091550e643f202da9a06a2af20a5cfb07b450c43433ef124/authfort_service-0.0.9.tar.gz | source | sdist | null | false | fe2bc923791921f4607974415b161e03 | cf984a1b7d66a29b29c544172ac60ebd5c0c7787dc4177be4c5722ff71894976 | fb7dc89d2e5a50f6091550e643f202da9a06a2af20a5cfb07b450c43433ef124 | null | [] | 189 |
2.4 | fathom-global-client-sdk | 1.2.1 | Fathom Global Python SDK | # Fathom Python SDK client
The Python SDK client for communicating with the Fathom API. See [the SDK documentation](https://api-docs.fathom.global/sdk/python.html) for more information.
Example usage:
[//]: # "Example is expanded below"
```python
"""An example usage of the Fathom API client."""
import os
from fathom.sdk.v2 import Client, point, polygon, write_tiffs
if __name__ == "__main__":
client = Client(
os.environ["FATHOM_CLIENT_ID"],
os.environ["FATHOM_CLIENT_SECRET"],
os.environ["FATHOM_API_ADDR"],
)
layer_ids = [
"GLOBAL-1ARCSEC-00_OFFSET-1in10-PLUVIAL-DEFENDED-DEPTH-2020-PERCENTILE50-v0.0.0_test",
"GLOBAL-1ARCSEC-00_OFFSET-1in100-PLUVIAL-DEFENDED-DEPTH-2020-PERCENTILE50-v0.0.0_test",
]
# Get from points
pt = point(lat=51.996147, lng=-2.159495)
    points_response = client.geo.get_points([pt], layer_ids)
# Or using a polygon
poly = polygon(
[
point(51.45, 0),
point(51.55, 0),
point(51.55, -0.1),
point(51.45, -0.1),
point(51.45, 0),
]
)
polygon_response = client.geo.get_polygon(poly, layer_ids)
    output_dir = "./fathom_tiffs"  # choose where the GeoTIFFs should be written
    write_tiffs(
        polygon_response,
        output_dir,
    )
```
| text/markdown | Fathom | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"googleapis-common-protos<2,>=1.63.0",
"grpcio<2,>=1.62.1",
"grpcio-status<2,>=1.62.1",
"protobuf<7,>=6",
"requests<3,>=2",
"fastkml==1.0a11",
"lxml",
"fiona",
"protovalidate"
] | [] | [] | [] | [
"Homepage, https://fathom.global"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:37:28.130949 | fathom_global_client_sdk-1.2.1-py3-none-any.whl | 62,894 | c2/47/1739bcd1e09c2afe3ae50143b3de378f752c7b0087dbdfdb3e43cf7df352/fathom_global_client_sdk-1.2.1-py3-none-any.whl | py3 | bdist_wheel | null | false | bf5401b34b4c87d5565b729d8790a74a | 3d545f8f9eb5cb88ff30f929c2f0ff0afa59e6d063c5fe5d97961a1dcc662911 | c2471739bcd1e09c2afe3ae50143b3de378f752c7b0087dbdfdb3e43cf7df352 | null | [
"LICENSE"
] | 86 |
2.1 | drbutil | 0.0.25 | A tiny collection of geometry processing routines frequently used in my projects. | # drbutil
[](https://badge.fury.io/py/drbutil)
A tiny collection of geometry processing routines frequently used in my prototyping code.
Pure Python, low overhead and minimal dependencies.
## Dependencies
The only **actually required** library is [NumPy](https://github.com/numpy/numpy).
**Optionally**,
* [matplotlib](https://github.com/matplotlib/matplotlib) shows 2D results,
* [Mayavi](https://github.com/enthought/mayavi) visualizes 3D results,
* [tqdm](https://github.com/tqdm/tqdm) realizes progress bars in the shell and
* [scipy](https://github.com/scipy/scipy) speeds up sparse solving operations.
## Install & Use
Install from [PyPI](https://pypi.org/project/drbutil/) with `pip install drbutil` or
clone the repo and run the `buildAndInstall.bat/.sh` script.
Then you can import everything in your project with `import drbutil` or `from drbutil import *`, respectively.
| text/markdown | null | "Dennis R. Bukenberger" <dennis.bukenberger@tum.de> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/dbukenberger/drbutil"
] | twine/6.1.0 CPython/3.8.18 | 2026-02-20T15:37:25.265992 | drbutil-0.0.25.tar.gz | 31,330 | 1d/b5/8daffaaf8c491f35360a7b0b75760ab444292027ba0905b12197ecec03f6/drbutil-0.0.25.tar.gz | source | sdist | null | false | 3d03b6c04dfd75ef1207f3cd6f8397fb | 47ac41aa05aa1029fa7c58a3aaefe58ad7f244d229d00998d8e7aee54284644c | 1db58daffaaf8c491f35360a7b0b75760ab444292027ba0905b12197ecec03f6 | null | [] | 204 |
2.4 | slumber-python | 5.1.0 | Python bindings for Slumber, the source-based REST/HTTP client | # slumber-python
> This is not related to, or a replacement of, the [slumber](https://pypi.org/project/slumber/) package.
[**Documentation**](https://slumber.lucaspickering.me/integration/python.html)
Python bindings for [Slumber](https://slumber.lucaspickering.me/), the source-based REST API client. This library makes it easy to take your existing Slumber collection and use it in Python scripts.
This package does not yet support all the same functionality as the [Slumber CLI](https://slumber.lucaspickering.me/user_guide/cli/index.html). If you have a specific feature that you'd like to see in it, please [open an issue on GitHub](https://github.com/LucasPickering/slumber/issues/new/choose).
**This is not a general-purpose REST/HTTP client.** If you're not already using Slumber as a TUI/CLI client, then there isn't much value provided by this package.
## Installation
```sh
pip install slumber-python
```
## Usage
First, [create a Slumber collection](https://slumber.lucaspickering.me/getting_started.html).
```py
import asyncio
from slumber import Collection
collection = Collection()
response = asyncio.run(collection.request('example_get'))
print(response.text)
```
For more usage examples, [see the docs](https://slumber.lucaspickering.me/integration/python.html).
## Versioning
For simplicity, the version of this package is synced to the main Slumber version and follows the same [releases](https://github.com/LucasPickering/slumber/releases). That means there may be releases of this package that don't actually change anything, or version bumps that are higher than necessary (e.g. minor versions with only patch changes). The versioning **is still semver-compliant**.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | https://slumber.lucaspickering.me | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | maturin/1.12.3 | 2026-02-20T15:37:13.730164 | slumber_python-5.1.0-cp310-cp310-win32.whl | 4,699,467 | b2/0e/3822eee719a1fcd18afe73b7c18e8c3943991f0ea89d49287ae521655dec/slumber_python-5.1.0-cp310-cp310-win32.whl | cp310 | bdist_wheel | null | false | 9fda243cf7a070e50439f63608cfe8b4 | 722bb367934f45d62ec0122238895a666964b38d945ee7c8c188b08d15c38d07 | b20e3822eee719a1fcd18afe73b7c18e8c3943991f0ea89d49287ae521655dec | null | [] | 4,481 |
2.4 | infragraph | 0.7.1 | Infrastructure as a Graph | [](https://pypi.org/project/infragraph/)
# InfraGraph (INFRAstructure GRAPH)
InfraGraph defines a [model-driven, vendor-neutral API](https://infragraph.dev/openapi.html) for capturing a system of systems suitable for use in co-designing AI/HPC solutions.
The model and API allow physical infrastructure to be defined using standardized, graph-like terminology.
In addition to the base graph definition, user-provided `annotations` can `extend the graph`, allowing for an unlimited number of different physical and/or logical characteristics/views.
Additional information such as background, schema and examples can be found in the [online documentation](https://infragraph.dev).
Contributions can be made in the following ways:
- [open an issue](https://github.com/keysight/infragraph/issues) in the repository
- [fork the models repository](https://github.com/keysight/infragraph) and submit a PR
# Versioning Rules
Infragraph follows a structured versioning scheme to maintain consistency across releases and ensure clear dependency management. Each version reflects compatibility, schema evolution, and API stability expectations.
For versioning rules, refer to [this README](docs/src/version.md). | text/markdown | null | TBD <tbd@infragraph.org> | null | null | MIT | astrasim, chakra, graph, infrastructure | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"grpcio",
"networkx",
"protobuf",
"pyyaml",
"requests",
"semantic-version"
] | [] | [] | [] | [
"Homepage, https://infragraph.dev/",
"Repository, https://github.com/Keysight/infragraph"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T15:37:04.109122 | infragraph-0.7.1.tar.gz | 76,224 | 07/fc/880a02e6344bcc2a1e70f2ecdeb3934482605b4b6498897f2c40de1ef116/infragraph-0.7.1.tar.gz | source | sdist | null | false | 939792a95f7500c73312864cf154f06c | 60df8725c997a18c46006a4ca3432f6e76f87ee66612f4182991da7857e2cd4a | 07fc880a02e6344bcc2a1e70f2ecdeb3934482605b4b6498897f2c40de1ef116 | null | [
"LICENSE"
] | 223 |
2.4 | fattureincloud-mcp | 1.8.0 | MCP Server for Fatture in Cloud API - Italian electronic invoicing with Claude AI | # Fatture in Cloud MCP Server
[🇮🇹 Italiano](#italiano) | [🇬🇧 English](#english)
<!-- mcp-name: io.github.aringad/fattureincloud-mcp -->
---
## Italiano
Server MCP (Model Context Protocol) per integrare **Fatture in Cloud** con Claude AI e altri assistenti compatibili.
Permette di gestire fatture elettroniche italiane tramite conversazione naturale.
### ✨ Funzionalità (20 tool)
| Tool | Descrizione |
|------|-------------|
| `list_invoices` | Lista fatture/NDC/proforma emesse per anno/mese |
| `get_invoice` | Dettaglio completo documento |
| `get_pdf_url` | URL PDF e link web documento |
| `list_clients` | Lista clienti con filtro |
| `get_company_info` | Info azienda collegata |
| `create_client` | 🆕 Crea nuovo cliente in anagrafica |
| `update_client` | 🆕 Aggiorna dati cliente esistente |
| `create_invoice` | Crea nuova fattura (bozza) con codice SDI automatico |
| `create_credit_note` | Crea nota di credito (bozza) |
| `create_proforma` | Crea proforma (bozza, non inviabile SDI) |
| `convert_proforma_to_invoice` | 🆕 Converte proforma in fattura elettronica |
| `update_document` | Modifica parziale documento bozza |
| `duplicate_invoice` | Duplica fattura con codice SDI aggiornato |
| `delete_invoice` | Elimina documento bozza (non inviato) |
| `send_to_sdi` | Invia fattura allo SDI |
| `get_invoice_status` | Stato fattura elettronica SDI |
| `send_email` | Invia copia cortesia via email |
| `list_received_documents` | Fatture passive (fornitori) |
| `get_situation` | Dashboard: fatturato netto, incassato, costi, margine |
| `check_numeration` | Verifica continuità numerica fatture |
> **Nota:** La marcatura dei pagamenti come "pagato" non è supportata. Usa il pannello web di Fatture in Cloud per questa operazione.
### 🚀 Installazione
#### Prerequisiti
- Python 3.10+
- Account [Fatture in Cloud](https://www.fattureincloud.it/) con API attive
- [Claude Desktop](https://claude.ai/download) o altro client MCP
#### 1. Clona il repository
```bash
git clone https://github.com/aringad/fattureincloud-mcp.git
cd fattureincloud-mcp
```
#### 2. Crea ambiente virtuale e installa dipendenze
```bash
python -m venv venv
source venv/bin/activate # Linux/Mac
# oppure: venv\Scripts\activate # Windows
pip install -r requirements.txt
```
#### 3. Configura le credenziali
Copia il file di esempio e inserisci i tuoi dati:
```bash
cp .env.example .env
```
Modifica `.env`:
```env
FIC_ACCESS_TOKEN=a/xxxxx.yyyyy.zzzzz
FIC_COMPANY_ID=123456
FIC_SENDER_EMAIL=fatturazione@tuaazienda.it
```
**Come ottenere le credenziali:**
1. Accedi a [Fatture in Cloud](https://secure.fattureincloud.it/)
2. Vai su *Impostazioni > API e Integrazioni*
3. Crea un **Token Manuale** con i permessi necessari
4. Il `COMPANY_ID` è visibile nell'URL quando sei loggato
#### 4. Configura Claude Desktop
Modifica `~/Library/Application Support/Claude/claude_desktop_config.json` (Mac) o `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
"mcpServers": {
"fattureincloud": {
"command": "/percorso/completo/fattureincloud-mcp/venv/bin/python",
"args": ["/percorso/completo/fattureincloud-mcp/server.py"],
"env": {
"FIC_ACCESS_TOKEN": "a/xxxxx.yyyyy.zzzzz",
"FIC_COMPANY_ID": "123456",
"FIC_SENDER_EMAIL": "fatturazione@tuaazienda.it"
}
}
}
}
```
#### 5. Riavvia Claude Desktop
Chiudi completamente Claude Desktop (Cmd+Q su Mac) e riaprilo.
### 💬 Esempi d'uso
```
"Mostrami le fatture di dicembre 2024"
"Qual è la situazione finanziaria del 2025?"
"Duplica la fattura 310 cambiando 2025 in 2026"
"Invia la fattura 326 allo SDI"
"Manda la copia cortesia via email"
"Quali fatture devo ancora incassare?"
"Verifica la numerazione delle fatture 2025"
"Converti la proforma 12 in fattura"
"Crea un nuovo cliente: Rossi SRL, P.IVA 01234567890"
```
### ⚠️ Note di sicurezza
- Le operazioni di scrittura (create, send_to_sdi) richiedono **sempre conferma**
- L'invio allo SDI è **irreversibile**
- Le fatture vengono create come **bozze** (draft)
- Il codice univoco SDI viene recuperato **automaticamente** dall'anagrafica cliente
- Il metodo di pagamento di default è **MP05** (bonifico)
### 📋 Changelog
Vedi [CHANGELOG.md](CHANGELOG.md)
### 📄 Licenza
MIT - Vedi [LICENSE](LICENSE)
### 👨‍💻 Autore
Sviluppato da **[Mediaform s.c.r.l.](https://media-form.it)** - Genova, Italia
---
## English
MCP (Model Context Protocol) Server to integrate **Fatture in Cloud** with Claude AI and other compatible assistants.
Manage Italian electronic invoices through natural conversation.
### ✨ Features (20 tools)
| Tool | Description |
|------|-------------|
| `list_invoices` | List invoices/credit notes/proforma by year/month |
| `get_invoice` | Full document details |
| `get_pdf_url` | PDF URL and web link for document |
| `list_clients` | List clients with filter |
| `get_company_info` | Connected company info |
| `create_client` | 🆕 Create new client in registry |
| `update_client` | 🆕 Update existing client data |
| `create_invoice` | Create new invoice (draft) with automatic SDI code |
| `create_credit_note` | Create credit note (draft) |
| `create_proforma` | Create proforma (draft, not sendable to SDI) |
| `convert_proforma_to_invoice` | 🆕 Convert proforma to electronic invoice |
| `update_document` | Partial update of draft document |
| `duplicate_invoice` | Duplicate invoice with updated SDI code |
| `delete_invoice` | Delete draft document (not yet sent) |
| `send_to_sdi` | Send invoice to SDI (Italian e-invoice system) |
| `get_invoice_status` | E-invoice SDI status |
| `send_email` | Send courtesy copy via email |
| `list_received_documents` | Received invoices (suppliers) |
| `get_situation` | Dashboard: net revenue, collected, costs, margin |
| `check_numeration` | Verify invoice numbering continuity |
> **Note:** Marking payments as "paid" is not supported. Use the Fatture in Cloud web panel for this operation.
### 🚀 Installation
#### Prerequisites
- Python 3.10+
- [Fatture in Cloud](https://www.fattureincloud.it/) account with API enabled
- [Claude Desktop](https://claude.ai/download) or other MCP client
#### 1. Clone the repository
```bash
git clone https://github.com/aringad/fattureincloud-mcp.git
cd fattureincloud-mcp
```
#### 2. Create virtual environment and install dependencies
```bash
python -m venv venv
source venv/bin/activate # Linux/Mac
# or: venv\Scripts\activate # Windows
pip install -r requirements.txt
```
#### 3. Configure credentials
Copy the example file and fill in your data:
```bash
cp .env.example .env
```
Edit `.env`:
```env
FIC_ACCESS_TOKEN=a/xxxxx.yyyyy.zzzzz
FIC_COMPANY_ID=123456
FIC_SENDER_EMAIL=billing@yourcompany.com
```
**How to get credentials:**
1. Log into [Fatture in Cloud](https://secure.fattureincloud.it/)
2. Go to *Settings > API and Integrations*
3. Create a **Manual Token** with required permissions
4. The `COMPANY_ID` is visible in the URL when logged in
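The credentials above are plain `KEY=value` pairs; the server loads them via `python-dotenv` (a declared dependency). As a rough illustration of what that loading does — the helper name `read_env_file` is ours, not part of this package or of `python-dotenv`:

```python
import os

def read_env_file(path=".env"):
    """Illustrative minimal .env parser (the real server uses python-dotenv)."""
    values = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                # Skip blank lines and comments; keep only KEY=value pairs
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # a missing .env simply yields no values
    return values
```

In practice the values end up in the process environment (`os.environ`), which is also why they can be supplied directly in the Claude Desktop config instead of a `.env` file.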
#### 4. Configure Claude Desktop
Edit `~/Library/Application Support/Claude/claude_desktop_config.json` (Mac) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
"mcpServers": {
"fattureincloud": {
"command": "/full/path/to/fattureincloud-mcp/venv/bin/python",
"args": ["/full/path/to/fattureincloud-mcp/server.py"],
"env": {
"FIC_ACCESS_TOKEN": "a/xxxxx.yyyyy.zzzzz",
"FIC_COMPANY_ID": "123456",
"FIC_SENDER_EMAIL": "billing@yourcompany.com"
}
}
}
}
```
#### 5. Restart Claude Desktop
Fully quit Claude Desktop (Cmd+Q on Mac) and reopen it.
### 💬 Usage examples
```
"Show me invoices from December 2024"
"What's the financial situation for 2025?"
"Duplicate invoice 310 changing 2025 to 2026"
"Send invoice 326 to SDI"
"Send the courtesy copy via email"
"Which invoices are still pending payment?"
"Check invoice numbering for 2025"
"Convert proforma 12 to invoice"
"Create a new client: Rossi SRL, VAT 01234567890"
```
### ⚠️ Security notes
- Write operations (create, send_to_sdi) **always require confirmation**
- Sending to SDI is **irreversible**
- Invoices are created as **drafts**
- SDI unique code is **automatically retrieved** from client registry
- Default payment method is **MP05** (bank transfer)
### 📋 Changelog
See [CHANGELOG.md](CHANGELOG.md)
### 📄 License
MIT - See [LICENSE](LICENSE)
### 👨‍💻 Author
Developed by **[Mediaform s.c.r.l.](https://media-form.it)** - Genova, Italy
| text/markdown | null | "Mediaform s.c.r.l." <info@media-form.it> | null | null | MIT | mcp, fattureincloud, fatture, invoicing, e-invoice, sdi, claude, anthropic, italy, italian | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial :: Accounting"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fattureincloud-python-sdk>=2.0.0",
"mcp>=1.0.0",
"python-dotenv>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/aringad/fattureincloud-mcp",
"Repository, https://github.com/aringad/fattureincloud-mcp",
"Issues, https://github.com/aringad/fattureincloud-mcp/issues",
"Author, https://media-form.it"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:36:15.590497 | fattureincloud_mcp-1.8.0.tar.gz | 14,605 | 01/66/5b86acf5bcd02c116b43c9d25df2e085170b66ef150e21531d45abaac408/fattureincloud_mcp-1.8.0.tar.gz | source | sdist | null | false | 1015df10e82200f2f0bbb739962728ec | bceccc32c065b8ae18a52dd7d5a7d4793ee299aa3ba195daaa004a5d4f4bd593 | 01665b86acf5bcd02c116b43c9d25df2e085170b66ef150e21531d45abaac408 | null | [
"LICENSE"
] | 205 |
2.4 | chellow | 1771601729.0.0 | Web Application for checking UK energy bills. | # Chellow
A web application for checking UK electricity and gas bills for organizations with
a large number of supplies and / or high consumption.
Website: https://www.chellow.org/
## Licence
Chellow is released under the [GPL v3](http://www.gnu.org/licenses/gpl.html).
## Introduction
Chellow is a web application for checking UK electricity and gas bills. It's designed
for organizations with high electricity consumption. The software is hosted at
https://github.com/WessexWater/chellow.
[](https://github.com/WessexWater/chellow/actions/workflows/chellow.yml)
## Installation
Chellow is a Python web application that uses the PostgreSQL database. To install
Chellow, follow these steps:
* Install [PostgreSQL](http://www.postgresql.org/) 15
* Create a PostgreSQL database: `createdb --encoding=UTF8 chellow`
* Set up the following environment variables to configure Chellow:
| Name | Default | Description |
| ------------ | ----------- | ---------------------- |
| `PGUSER` | `postgres` | Postgres user name |
| `PGPASSWORD` | `postgres` | Postgres password |
| `PGHOST` | `localhost` | Postgres host name |
| `PGPORT` | `5432` | Postgres port |
| `PGDATABASE` | `chellow` | Postgres database name |
In bash an environment variable can be set by doing:
`export PGUSER=postgres`
In Windows an environment variable can be set by doing:
`set PGUSER=postgres`
* Install Python 3.11
* Create a virtual environment: `python3 -m venv venv`
* Activate the virtual environment: `source venv/bin/activate`
* Install Chellow: `pip install chellow`
* Start Chellow: `waitress-serve --host=0.0.0.0 --port=8080 --call chellow:create_app`
* You should now be able to visit `http://localhost:8080/` in a browser. You should be
prompted to enter a username and password. Enter the admin user name
`admin@example.com` and the password `admin`, and then the home page should appear.
Change the admin password from the `users` page.
### Azure
SSH into the machine and you'll be in ``/home/site/wwwroot``.
* Create a virtual environment: ``python -m venv antenv``
* Activate the virtual environment with: ``source antenv/bin/activate``
* In ``/home/site/wwwroot`` create a file called ``app.py`` containing the following:

  ```py
  import chellow

  app = chellow.create_app()
  ```
### Manually Upgrading Chellow
To upgrade to the latest version of Chellow do: `pip install --upgrade chellow`
### Automatically Upgrading Chellow
The bash script [chellow\_upgrader.sh](bin/chellow_upgrader.sh) will check for a new
version of Chellow and if one's there it will stop Chellow, then do the upgrade, then
start Chellow.
### Using with systemd
Copy the following files into the `/etc/systemd/system` directory:
* [chellow.service](systemd/chellow.service)
* [chellow\_upgrader.service](systemd/chellow_upgrader.service)
* [chellow\_upgrader.timer](systemd/chellow_upgrader.timer)
and modify them as appropriate. The `chellow_upgrader.service` uses the
`chellow_upgrader.sh` script above. Then you should be able to use the following
commands:
* Start Chellow: `sudo systemctl start chellow`
* Make Chellow run on startup: `sudo systemctl enable chellow`
* Start the Chellow Upgrader: `sudo systemctl start chellow_upgrader.timer`
* Make Chellow Upgrader run on startup: `sudo systemctl enable chellow_upgrader.timer`
### Using A Different Webserver
Chellow comes bundled with the
[Waitress](http://docs.pylonsproject.org/projects/waitress/en/latest/) webserver, but
it is also a Python WSGI web application, so Chellow can be used with any WSGI-compliant
application server, eg Gunicorn. The WSGI app that should be specified is `chellow.app`.
## Getting Started
This is a brief guide to setting things up after you've installed Chellow. It
assumes that you have a basic knowledge of
[UK electricity billing](https://en.wikipedia.org/wiki/Electricity_billing_in_the_UK). It goes through the steps of adding a half-hourly (HH) metered supply,
and producing virtual bills for it, and then importing an actual bill and
running a bill check.
Chellow can handle non-half-hourly supplies as well as half-hourly, and it can
also deal with gas supplies, but we'll use a half-hourly electricity supply for
this example.
### View the Chellow home page
Assuming you've installed Chellow correctly, you should be able to open your
browser, type in the URL of the Chellow application, and see the Chellow home
page.
### Users
Before any users are added, if you access Chellow from `localhost` you'll have
read / write access. Once users are added, you have to log in as one of those
users. Users are added from the 'users' page.
### Importing Market Domain Data
Follow the instructions at `/e/mdd_imports` to import the Market Domain Data into
Chellow.
### Add HHDC Contracts
Every supply must have a data collector. Add a new HHDC by going to the
'HHDC Contracts' page and then clicking on the 'Add' link.
### Add MOP Contracts
Every supply must have a meter operator. Add a new MOP by going to the
'MOP Contracts' page and then clicking on the 'Add' link. For now just put in a
simple virtual bill for the MOP, so in the 'script' field enter:
```
from chellow.utils import reduce_bill_hhs
def virtual_bill_titles():
return ['net-gbp']
def virtual_bill(data_source):
for hh in data_source.hh_data:
bill_hh = data_source.mop_bill_hhs[hh['start-date']]
if hh['utc-is-month-end']:
bill_hh['net-gbp'] = 10
data_source.mop_bill = reduce_bill_hhs(data_source.mop_bill_hhs)
```
### Add Supplier Contracts
Click on the 'supplier contracts' link and then fill out the 'Add a contract'
form. For the Charge Script field enter:
```
from chellow.utils import reduce_bill_hhs
def virtual_bill_titles():
return ['net-gbp', 'day-kwh', 'day-gbp', 'night-kwh', 'night-gbp']
def virtual_bill(data_source):
bill = data_source.supplier_bill
for hh in data_source.hh_data:
bill_hh = data_source.supplier_bill_hhs[hh['start-date']]
if 0 < hh['utc-decimal-hour'] < 8:
bill_hh['night-kwh'] = hh['msp-kwh']
bill_hh['night-gbp'] = hh['msp-kwh'] * 0.05
else:
bill_hh['day-kwh'] = hh['msp-kwh']
bill_hh['day-gbp'] = hh['msp-kwh'] * 0.1
bill_hh['net-gbp'] = sum(
v for k, v in bill_hh.items() if k[-4:] == '-gbp')
data_source.supplier_bill = reduce_bill_hhs(
data_source.supplier_bill_hhs)
```
This will generate a simple virtual bill based on a day / night tariff.
Supplier contract scripts can be much more sophisticated than this, including
DUoS, TNUoS, BSUoS, RO and many other standard charges. These will be addressed
later on in this guide.
Also, don't worry about the 'properties' field for now.
### Add a Site
Go to the 'sites' link on the home page, and click 'add'. Fill out the form
and create the site.
### Add a Supply
To add a supply to a site, go to the site's page and click on 'edit'. Half-way
down the page there's an 'Insert an electricity supply' form. For a standard
electricity supply the 'source' is 'net'. Make sure the profile class (PC) is
'00' to indicate to Chellow that it's a half-hourly metered supply. The SSC
field is left blank for a half-hourly as they don't have an SSC.
A supply is formed from a series of eras. Each era has different
characteristics to capture the history of a supply.
### Run a Virtual Bill
At this stage it should be possible to run a virtual bill for the supply you've
added. Go to the supply's page and click on the 'Supplier Virtual Bill' link.
That should return a page showing the virtual bill for the supply.
Of course, the consumption will be zero because we haven't added in any
half-hourly data yet.
### Add Some HH Data
On the page of the supply you've created, you'll see that there's a 'channels'
link, with an 'add' link next to it. Add an active import channel for the
half-hourly data to be attached to.
Back on the supply page a link to the channel you just created will have
appeared. Click on this and fill out the form for adding a half-hour of data.
If you then re-run the virtual bill for the period in which you added the
half-hour, it should show up in the virtual bill.
It's tedious to add HH data one by one, so if you go to the page of the HHDC
contract that you've created, you'll see a 'HH Data Imports' link. Click on
this and there's a form for uploading HH data in bulk in a variety of formats.
Chellow can also be set up to import files automatically from an FTP server.
### Virtual Bills For A Contract
To see the virtual bills for a supplier contract, go to the contract page and
follow the Virtual Bills link.
### Data Structure
* Site
* Supply
* Supply Era
* Site
* MOP Contract
* DC Contract
* Profile Class
* Imp / Exp Supplier Contract
* Imp / Exp Mpan Core
* Imp / Exp LLFC
* Imp / Exp Supply Capacity
* Imp / Exp Channels
* HH Data
* Supplier Contracts (Same for DC and MOP)
* Rate Scripts
* Batches
* Bills
* Supply
* Register Reads
* DNOs (Distribution Network Operators)
* LLFCs (Line Loss Factor Classes)
### General Imports
The menu has a link called 'General Import' which takes you to a page for doing
bulk insert / update / delete operations on Chellow data (eg. Sites, Supplies,
LLFCs etc.) using a CSV file.
## Common Tasks
### Merging Two Supplies
Say there are two supplies A and B, and you want to end up with just A. The
steps are:
1. Back up the data by taking a snapshot of the database.
2. Check that A and B have the same header data (LLFC, MTC etc).
3. See if there are any overlapping channels, eg. do both A and B have import
kVArh? If there are, then decide which one is going to be kept.
4. Load the hh data for the required channels from the backup file. First
take a copy of the file, then edit out the data you don't want, then
further edit the file so that it loads into the new supply.
5. Delete supply B.
### Local Reports
Core reports come with Chellow, but it's possible for users to create custom
reports. Reports are written in Python, and often use a Jinja2 template. You
can display a link to a list of user reports by adding the `local_reports_id`
property to the `configuration` non-core contract.
#### Default users
Default users can be automatically assigned to requests from certain IP
addresses. To associate an IP address to a user, go to the non-core contract
`configuration` and add a line to the 'properties' field similar to the
following:
```
{
'ips': {'127.0.0.1': 'implicit-user@localhost'}
}
```
Note that multiple IP addresses can be mapped to the same user.
It's also possible to use Microsoft Active Directory to authenticate users
with a reverse proxy server. Edit the `configuration` non-core contract and add
something like:
```
{
"ad_authentication": {
"default_user": "readonly@example.com",
"on": true
  }
}
```
## Design Decisions
Why don't you use the +/- infinity values for timestamps? The problem is that it's not clear how this would translate into Python. So we currently use null for infinity, which naturally translates into None in Python.
## Contributing
The web interface uses HTML, CSS and [HTMX](https://htmx.org/).
| text/markdown | null | null | null | null | null | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"flask-restx==1.3.2",
"flask==3.1.3",
"jsonschema==4.26.0",
"markdown-it-py==4.0.0",
"odio==0.0.23",
"openpyxl==3.1.5",
"paramiko==4.0.0",
"pg8000==1.31.5",
"pip>=9.0.1",
"psutil==7.1.1",
"pympler==1.1",
"pypdf==6.7.1",
"python-dateutil==2.8.2",
"pytz==2025.2",
"requests==2.32.5",
"sqlalchemy==2.0.45",
"waitress==3.0.2",
"xlrd==2.0.2",
"zish==0.1.12"
] | [] | [] | [] | [
"Homepage, https://github.com/WessexWater/chellow"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:35:53.918198 | chellow-1771601729.0.0.tar.gz | 651,361 | 60/f8/0f262c26b0ff8ddb82c694fb951bb0fa2c2675dc24952710b3584b85646f/chellow-1771601729.0.0.tar.gz | source | sdist | null | false | 80fd04846098d690e994d797dda11406 | fbde9bd46165e152987c0700ea56e6abbbc7958c8b7de8639c0eb637962a0c13 | 60f80f262c26b0ff8ddb82c694fb951bb0fa2c2675dc24952710b3584b85646f | null | [] | 219 |
2.4 | bsvae | 0.3.0 | Gaussian Mixture VAE for multi-modal biological module discovery in omics data. | # BSVAE: Gaussian Mixture VAE for Module Discovery
[](https://pytorch.org/)
[](https://python.org/)
[](LICENSE)
[](https://bsvae.readthedocs.io/)
BSVAE is a PyTorch package centered on `GMMModuleVAE`, a Gaussian-mixture variational autoencoder for feature-level module discovery in omics data.
## What It Does
- Trains a two-phase GMM-VAE (`bsvae-train`)
- Extracts feature-feature networks from trained models (`bsvae-networks`)
- Extracts module assignments and optional eigengenes (`bsvae-networks`)
- Exports latents (`mu`, `logvar`, `gamma`) as `.npz`
- Simulates synthetic datasets and benchmarks module recovery (`bsvae-simulate`)
## Installation
From PyPI:
```bash
pip install bsvae
```
From source:
```bash
git clone https://github.com/heart-gen/BSVAE.git
cd BSVAE
pip install -e .
```
## CLI Entry Points
- `bsvae-train`
- `bsvae-networks`
- `bsvae-simulate`
## Quickstart
For a full walkthrough (minimal run, production run, post-training analysis, simulation benchmark, troubleshooting, and migration), see `docs/tutorial.md`.
### 1. Train
Input matrix must be `features x samples` with feature IDs in row index and sample IDs in columns.
```bash
bsvae-train exp1 \
--dataset data/expression.csv \
--epochs 100 \
--n-modules 20 \
--latent-dim 32
```
### 2. Extract networks
```bash
bsvae-networks extract-networks \
--model-path results/exp1 \
--dataset data/expression.csv \
--output-dir results/exp1/networks \
--methods mu_cosine gamma_knn
```
### 3. Extract modules
```bash
bsvae-networks extract-modules \
--model-path results/exp1 \
--dataset data/expression.csv \
--output-dir results/exp1/modules
```
### 4. Export latents
```bash
bsvae-networks export-latents \
--model-path results/exp1 \
--dataset data/expression.csv \
--output results/exp1/latents.npz
```
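The exported archive is a standard NumPy `.npz` container. A minimal sketch of reading it back in Python — the helper name `load_latents` is ours; the key names `mu`, `logvar`, and `gamma` match the arrays listed above, and their shapes depend on the model configuration:

```python
import numpy as np

def load_latents(path):
    """Read the mu/logvar/gamma arrays from a BSVAE latents export."""
    with np.load(path) as data:
        return data["mu"], data["logvar"], data["gamma"]

# e.g. mu, logvar, gamma = load_latents("results/exp1/latents.npz")
```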
### 5. Simulate and benchmark
```bash
bsvae-simulate generate \
--output data/sim_expr.csv \
--save-ground-truth data/sim_truth.csv
bsvae-simulate benchmark \
--dataset data/sim_expr.csv \
--ground-truth data/sim_truth.csv \
--model-path results/exp1 \
--output results/exp1/sim_metrics.json
```
## Training Outputs
`bsvae-train` writes to `results/<experiment>/`:
- `model.pt` (weights)
- `specs.json` (metadata and run args)
- `train_losses.csv` (epoch/component losses)
- `model-<epoch>.pt` checkpoints when `--checkpoint-every` is set
## Data Formats
The loader supports:
- `.csv` / `.csv.gz`
- `.tsv` / `.tsv.gz`
- `.h5` / `.hdf5`
- `.h5ad` (optional `anndata` dependency)
## Python API (Minimal)
```python
from bsvae.utils.modelIO import load_model
from bsvae.networks.extract_networks import create_dataloader_from_expression, run_extraction
model = load_model("results/exp1", is_gpu=False)
loader, feature_ids, _ = create_dataloader_from_expression("data/expression.csv", batch_size=128)
results = run_extraction(model, loader, feature_ids=feature_ids, methods=["mu_cosine"], top_k=50)
print(results[0].method, results[0].adjacency.shape)
```
## License
This project is licensed under the [GNU General Public License v3.0](LICENSE).
| text/markdown | Kynon J Benjamin | kj.benjamin90@gmail.com | null | null | GPL-3.0-only | variational-autoencoder, gaussian-mixture-model, pytorch, bioinformatics, gene-expression, module-discovery, omics | [
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"torch<3.0.0,>=2.8.0",
"numpy<2.3",
"pandas<3.0.0,>=2.3.2",
"scikit-learn<2.0.0,>=1.7.2",
"tqdm<5.0.0,>=4.67.1",
"scipy<2.0.0,>=1.16.2",
"h5py>=3.0",
"faiss-cpu>=1.7"
] | [] | [] | [] | [
"homepage, https://bsvae.readthedocs.io/en/latest/",
"repository, https://github.com/heart-gen/BSVAE",
"Bug Tracker, https://github.com/heart-gen/BSVAE/issues"
] | poetry/2.2.0 CPython/3.10.9 Linux/4.18.0-553.22.1.el8_10.x86_64 | 2026-02-20T15:35:33.562290 | bsvae-0.3.0.tar.gz | 68,628 | 28/da/3a51e808fcc454cd5940b3b83e41b1311bdee5df99f8a4f5341d18ec456d/bsvae-0.3.0.tar.gz | source | sdist | null | false | 2f298cda1cfed1eaac4f433d9207bba4 | 8e60c6e44d4e1fea8112cb278e12850cb310ac1721362a9c5c4b95e826bf0d25 | 28da3a51e808fcc454cd5940b3b83e41b1311bdee5df99f8a4f5341d18ec456d | null | [] | 190 |
2.4 | lambda-guard-boosting | 0.2.3 | Overfitting detection for Gradient Boosting models using λ-Guard methodology. | <p align="center">
<img src="docs/logo.png" alt="λ-Guard" width="160"/>
</p>
<p align="center">
<strong>Overfitting detection for Gradient Boosting</strong> — <em>no validation set required</em><br>
<i>Detect the moment when your model stops learning signal and starts memorizing structure.</i>
</p>
<p align="center">
<a href="https://github.com/faberBI/lambdaguard/actions/workflows/tests.yml">
<img src="https://img.shields.io/github/actions/workflow/status/faberBI/lambdaguard/tests.yml?branch=main&logo=github" alt="Tests Status">
</a>
<a href="https://coveralls.io/github/faberBI/lambdaguard">
<img src="https://img.shields.io/coveralls/github/faberBI/lambdaguard/main.svg" alt="Coverage Status">
</a>
<a href="https://pypi.org/project/lambda-guard-boosting/">
<img src="https://img.shields.io/pypi/v/lambdaguard?logo=python" alt="PyPI Version">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License MIT">
</a>
</p>
---
## ❓ Why λ-Guard?
In Gradient Boosting, overfitting often appears **before the validation error rises**.
By that point, the model is already:
- ✂️ Splitting features into extremely fine regions
- 🍃 Fitting leaves supported by very few observations
- 🌪 Sensitive to tiny perturbations
It’s **no longer improving predictions**; it’s **memorizing the training dataset**.
**λ-Guard detects that moment automatically.**
---
## 🧠 Core Intuition
A boosting model learns two things simultaneously:
| Component | Role |
|-----------|------|
| Geometry | partitions the feature space |
| Predictor | assigns values to each region |
Overfitting occurs when:
*"Geometry keeps growing, but predictor stops extracting real information."*
λ-Guard measures three key signals:
- 📦 **Capacity** → structural complexity
- 🎯 **Alignment** → extracted signal
- 🌊 **Stability** → fragility of predictions
---
## 🧩 Representation Matrix
Every tree divides the feature space into **leaves**.
We record where each observation falls:
```
Z[i,j] = 1 if sample i falls in leaf j
Z[i,j] = 0 otherwise
```
- Rows → observations
- Columns → leaves across all trees
Think of **Z** as the **representation learned by the ensemble**.
- Linear regression → hat matrix **H**
- Boosting → representation **Z**
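As a concrete sketch (not λ-Guard's internal code), **Z** can be built from the per-tree leaf assignments that scikit-learn exposes via `model.apply(X)`. The expansion below works on any `(n_samples, n_trees)` leaf-index array; here it is shown on a toy array:

```python
import numpy as np

def representation_matrix(leaves: np.ndarray) -> np.ndarray:
    """Expand per-tree leaf assignments into the binary matrix Z.

    `leaves` has shape (n_samples, n_trees), e.g. as returned by
    sklearn's GradientBoostingRegressor.apply(X). Columns of Z
    enumerate every (tree, leaf) pair that actually occurs.
    """
    n_samples, n_trees = leaves.shape
    blocks = []
    for t in range(n_trees):
        ids = np.unique(leaves[:, t])  # leaf ids used by tree t
        # one-hot encode this tree's leaf membership
        block = (leaves[:, t][:, None] == ids[None, :]).astype(int)
        blocks.append(block)
    return np.hstack(blocks)

# Toy leaf assignments: 4 samples, 2 trees
leaves = np.array([[0, 1],
                   [0, 2],
                   [1, 1],
                   [1, 2]])
Z = representation_matrix(leaves)
# Each row activates exactly one leaf per tree
assert (Z.sum(axis=1) == 2).all()
```

Each row of **Z** has exactly one nonzero entry per tree, so the row sum always equals the number of trees.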
---
## 📦 Capacity — Structural Complexity
- 🔹 Low C → few effective regions
- 🔹 High C → model fragments space
Late-stage boosting **increases C quickly**, often without improving predictions.
---
## 🎯 Alignment — Useful Information
- 🔹 High A → trees add real predictive signal
- 🔹 Low A → trees mostly refine boundaries
*"After some trees, alignment saturates."*
Boosting continues **growing structure** even after predictions stop improving.
---
## 🌊 Stability — Sensitivity to Perturbations
- 🔹 Low S → smooth, robust model
- 🔹 High S → brittle, sensitive model
**Stability is the first signal to explode during overfitting.**
---
## 🔥 The Overfitting Index λ
| Situation | λ |
|-----------|---|
| Compact structure + stable predictions | low |
| Many regions + weak signal | high |
| Unstable predictions | very high |
**Interpretation:** measures how much structural complexity is wasted.
Normalized λ ∈ [0,1] can be used to **compare models**.
## 🧪 Structural Overfitting Test
Detect if a few training points dominate the model using **approximate leverage**:
```
H_ii ≈ Σ_trees (learning_rate / leaf_size)

T1 = mean(H_ii)              # global complexity
T2 = max(H_ii) / mean(H_ii)  # local memorization
```
**Bootstrap procedure:**
1. Repeat B times: resample training data, recompute T1 & T2
2. Compute p-values:
   - `p1 = P(T1_boot ≥ T1_obs)`
   - `p2 = P(T2_boot ≥ T2_obs)`

Reject structural stability if `p1 < α` or `p2 < α`.
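The approximate-leverage test above can be sketched with plain NumPy on a leaf-assignment matrix (shape `(n_samples, n_trees)`, e.g. from `model.apply(X_train)`). Function names and the exact resampling details here are illustrative, not the package's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def leverage_stats(leaves, learning_rate):
    """Approximate leverage H_ii ~ sum over trees of lr / leaf_size,
    plus the summary statistics T1 (mean) and T2 (max/mean)."""
    n_samples, n_trees = leaves.shape
    h = np.zeros(n_samples)
    for t in range(n_trees):
        _, inv, counts = np.unique(
            leaves[:, t], return_inverse=True, return_counts=True
        )
        h += learning_rate / counts[inv]  # each sample's leaf size in tree t
    return h.mean(), h.max() / h.mean()

def bootstrap_pvalues(leaves, learning_rate=0.1, B=200):
    """Resample rows B times, recompute (T1, T2), return (p1, p2)."""
    t1_obs, t2_obs = leverage_stats(leaves, learning_rate)
    n = leaves.shape[0]
    t1_b, t2_b = [], []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)  # bootstrap resample of the rows
        t1, t2 = leverage_stats(leaves[idx], learning_rate)
        t1_b.append(t1)
        t2_b.append(t2)
    p1 = np.mean(np.array(t1_b) >= t1_obs)
    p2 = np.mean(np.array(t2_b) >= t2_obs)
    return p1, p2
```

With `p1, p2 = bootstrap_pvalues(model.apply(X_train), learning_rate=model.learning_rate)`, the decision rule above is just `p1 < alpha or p2 < alpha`.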
---
## 📊 What λ-Guard Distinguishes
| Regime | Meaning |
|--------|---------|
| ✅ Stable | smooth generalization |
| 📈 Global overfitting | too many effective parameters |
| ⚠️ Local memorization | few points dominate |
| 💥 Extreme | interpolation behavior |
---
## 🧭 When to Use
- Monitor boosting during training
- Hyperparameter tuning
- Small datasets (no validation split)
- Diagnose late-stage performance collapse
---
## ⚙️ Installation
Install via GitHub:
```bash
pip install git+https://github.com/faberBI/lambdaguard.git
```

## 🚀 Quick Start

```python
from sklearn.ensemble import GradientBoostingRegressor
from lambdaguard.ofi import overfitting_index
from lambdaguard.lambda_guard import lambda_guard_test, interpret
from lambdaguard.cusum import detect_structural_overfitting_cusum_robust
import pandas as pd
from sklearn.datasets import make_regression

# Fit a model on example data (any regression dataset works)
X_train, y_train = make_regression(n_samples=200, n_features=5, random_state=0)
model = GradientBoostingRegressor(n_estimators=50, max_depth=3)
model.fit(X_train, y_train)
# Compute Overfitting Index
ofi_res = overfitting_index(model, X_train, y_train)
# Lambda-guard test
lg_res = lambda_guard_test(model, X_train)
print(interpret(lg_res))
# CUSUM-based detection
df = pd.DataFrame([
{"model": "GBR", "n_estimators": 50, "max_depth": 3, "A": 0.8, "OFI_norm": 0.2},
{"model": "GBR", "n_estimators": 100, "max_depth": 5, "A": 0.85, "OFI_norm": 0.3},
])
cusum_res = detect_structural_overfitting_cusum_robust(df, model_name="GBR")
```
## 📜 Citation
If you use **λ-Guard** in your research or projects, please cite the following:
**Fabrizio Di Sciorio, PhD**
*Universidad de Almeria — Business and Economics Department*
> "λ-Guard: Structural Overfitting Detection for Gradient Boosting Models"
| text/markdown | null | "Fabrizio Di Sciorio, PhD" <fabriziodisciorio91@gmail.com> | null | null | MIT | machine-learning, gradient-boosting, overfitting, boosting, lambda-guard | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy<2.2,>=1.26",
"pandas<3.0,>=2.2",
"scikit-learn<2.0,>=1.3",
"matplotlib<4.0,>=3.8",
"seaborn<0.14,>=0.12",
"xgboost<4.0,>=1.7",
"lightgbm<5.0,>=4.4",
"catboost<2.0,>=1.1"
] | [] | [] | [] | [
"Homepage, https://github.com/faberBI/lambdaguard",
"Documentation, https://github.com/faberBI/lambdaguard",
"BugTracker, https://github.com/faberBI/lambdaguard/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:35:32.293572 | lambda_guard_boosting-0.2.3.tar.gz | 9,519 | 84/66/4ab15caa70162c4fcdb50146bb2e6af9c2b80f9089aa11c0c3435cb67375/lambda_guard_boosting-0.2.3.tar.gz | source | sdist | null | false | f2bced1c54a26844a68cb4537bc9ca03 | 246da0ab60b1fc1c7f3ad8290271cb446acb239a458a8ebf83bf79c66b0661ff | 84664ab15caa70162c4fcdb50146bb2e6af9c2b80f9089aa11c0c3435cb67375 | null | [
"LICENSE.md"
] | 207 |
2.4 | workos | 5.42.1 | WorkOS Python Client | # WorkOS Python Library

[](https://workos.semaphoreci.com/projects/workos-python)
The WorkOS library for Python provides convenient access to the WorkOS API from applications written in Python, [hosted on PyPI](https://pypi.org/project/workos/).
## Documentation
See the [API Reference](https://workos.com/docs/reference/client-libraries) for Python usage examples.
## Installation
To install from PyPi, run the following:
```
pip install workos
```
To install from source, clone the repo and run the following:
```
python -m pip install .
```
## Configuration
The package needs to be configured with your [API key and client ID](https://dashboard.workos.com/api-keys).
```python
from workos import WorkOSClient
workos_client = WorkOSClient(
api_key="sk_1234", client_id="client_1234"
)
```
The SDK also provides asyncio support for some methods via the async client:
```python
from workos import AsyncWorkOSClient
async_workos_client = AsyncWorkOSClient(
api_key="sk_1234", client_id="client_1234"
)
```
## SDK Versioning
For our SDKs, WorkOS follows Semantic Versioning ([SemVer](https://semver.org/)): every release has an X.Y.Z version (like 1.0.0), where Z is a bug fix (e.g., 1.0.1), Y a minor release (1.1.0), and X a major release (2.0.0). Breaking changes are only released in major versions, and we strongly recommend reading changelogs before making any major version upgrade.
## Beta Releases
WorkOS has features in Beta that can be accessed via Beta releases. We would love for you to try these
and share feedback with us before these features reach general availability (GA). To install a Beta version,
please follow the [installation steps](#installation) above using the Beta release version.
> Note: there can be breaking changes between Beta versions. Therefore, we recommend pinning the package version to a
> specific version. This way you can install the same version each time without breaking changes unless you are
> intentionally looking for the latest Beta version.
We highly recommend keeping an eye on when the Beta feature you are interested in goes from Beta to stable so that you
can move to using the stable version.
## More Information
- [Single Sign-On Guide](https://workos.com/docs/sso/guide)
- [Directory Sync Guide](https://workos.com/docs/directory-sync/guide)
- [Admin Portal Guide](https://workos.com/docs/admin-portal/guide)
- [Magic Link Guide](https://workos.com/docs/magic-link/guide)
| text/markdown | WorkOS | WorkOS <team@workos.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"cryptography>=44.0.2",
"httpx~=0.28.1",
"pydantic>=2.10.4",
"pyjwt>=2.10.0; python_full_version >= \"3.9\"",
"pyjwt<2.10,>=2.9.0; python_full_version == \"3.8.*\""
] | [] | [] | [] | [
"Changelog, https://workos.com/docs/sdks/python",
"Documentation, https://workos.com/docs/reference",
"Homepage, https://workos.com/docs/sdks/python"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:34:45.816487 | workos-5.42.1.tar.gz | 60,500 | cc/4a/11225c4ea1b23499171f73a7da7849423af5b31b5205d5284ba1b169e382/workos-5.42.1.tar.gz | source | sdist | null | false | 62d6ac05a9165afff508679f2fa90b89 | 0d8c4c1fcb244d48c0c6c3b1a2f3a3ef7429f7c6170e702ca2d41a12cd14457c | cc4a11225c4ea1b23499171f73a7da7849423af5b31b5205d5284ba1b169e382 | MIT | [] | 8,980 |
2.4 | apecode | 0.0.2 | A terminal code agent powered by AI | # ApeCode 🦧
A nano terminal code agent in Python — a minimal but complete implementation of a tool-calling AI agent (like Claude Code / Codex CLI / Kimi CLI), built for learning and experimentation.
Powered by [ApeCode.ai](https://apecode.ai)
## Features
- **Tool-calling agent loop** — `user → model → tool calls → tool results → model → response`, with configurable max-steps guard
- **Multi-provider model adapters** — OpenAI, Anthropic, and Kimi (OpenAI-compatible), all conforming to a unified `ChatModel` protocol
- **7 built-in tools** — `list_files`, `read_file`, `write_file`, `replace_in_file`, `grep_files`, `exec_command`, `update_plan`
- **Sandbox + approval model** — `SandboxMode` (read-only / workspace-write / danger-full-access) restricts path mutations; `ApprovalPolicy` (on-request / always / never) controls interactive confirmation for mutating operations
- **Plugin system** — declarative `apecode_plugin.json` manifests contribute tools, slash commands, and skills
- **MCP integration** — load external tools from `.mcp.json` / `apecode_mcp.json` via the `fastmcp` SDK
- **Slash commands** — `/help`, `/tools`, `/skills`, `/skill`, `/plan`, `/subagents`, `/delegate`, `/exit`
- **Subagent delegation** — isolated read-only agents with three default profiles: `general`, `reviewer`, `researcher`
- **Skill templates** — discoverable from `skills/*/SKILL.md` directories or plugins
- **REPL + one-shot mode** — interactive session with prompt-toolkit (history, tab-completion, multi-line via Alt+Enter) or single-prompt execution
- **Thinking model support** — displays `reasoning_content` from thinking models (e.g. Kimi K2.5)
- **AGENTS.md chain** — walks from workspace root to filesystem root, loading `AGENTS.md` files for project-specific instructions
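The AGENTS.md chain described above is easy to picture: starting from the workspace root, each ancestor directory is checked for an `AGENTS.md` file on the way up to the filesystem root. A minimal sketch (illustrative only; ApeCode's actual loader lives in `system_prompt.py`):

```python
from pathlib import Path

def agents_md_chain(workspace_root: Path) -> list[Path]:
    """Collect AGENTS.md files from the workspace root up to the
    filesystem root (a sketch of the chain described above)."""
    chain = []
    for directory in [workspace_root, *workspace_root.parents]:
        candidate = directory / "AGENTS.md"
        if candidate.is_file():
            chain.append(candidate)
    return chain
```

The nearest file comes first, so project-local instructions can be layered on top of broader ones when the system prompt is assembled.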
## Installation
```bash
uv sync
```
Dependencies: `openai`, `anthropic`, `fastmcp`, `typer`, `rich`, `prompt-toolkit`.
## Usage
### API keys
```bash
export OPENAI_API_KEY=your_key # for provider=openai (default)
export ANTHROPIC_API_KEY=your_key # for provider=anthropic
export KIMI_API_KEY=your_key # for provider=kimi
```
### Interactive REPL
```bash
uv run ape
```
### One-shot mode
```bash
uv run ape "read README.md and summarize project structure"
```
### CLI flags
```bash
uv run ape --provider openai --model gpt-4.1-mini # default
uv run ape --provider anthropic --model claude-sonnet-4-20250514
uv run ape --provider kimi --model kimi-k2.5
uv run ape --max-steps 30 --timeout 180
uv run ape --cwd /path/to/repo
uv run ape --sandbox-mode read-only --approval-policy never
uv run ape --plugin-dir ./plugins
uv run ape --mcp-config ./.mcp.json
uv run ape --skill-dir ./custom-skills
uv run ape --yolo "apply a simple refactor in src/"
uv run ape --version
```
### Slash commands (inside REPL)
```
/help — list all commands
/tools — list registered tools
/skills — list discovered skills
/skill concise-review review src/apecode/cli.py — run a skill with extra request
/plan — show the current task plan
/subagents — list subagent profiles
/delegate reviewer:: review src/apecode/cli.py — delegate to a subagent
/exit — quit
```
## Architecture
```
user input
│
▼
┌──────────────────────────────────────────────┐
│ cli.py — Typer app, _build_runtime, REPL │
│ ┌────────────────────────────────────────┐ │
│ │ NanoCodeAgent (agent.py) │ │
│ │ ┌──────────┐ ┌──────────────────┐ │ │
│ │ │ ChatModel │◄──│ openai_client.py │ │ │
│ │ │ protocol │ │ OpenAI/Anthropic/ │ │ │
│ │ │ │ │ Kimi adapters │ │ │
│ │ └──────────┘ └──────────────────┘ │ │
│ │ ┌──────────────────────────────────┐ │ │
│ │ │ ToolRegistry (tools.py) │ │ │
│ │ │ 7 built-in + plugin + MCP tools │ │ │
│ │ │ ToolContext: sandbox + approval │ │ │
│ │ └──────────────────────────────────┘ │ │
│ └────────────────────────────────────────┘ │
│ commands.py — slash command registry │
│ plugins.py — apecode_plugin.json loader │
│ mcp.py — fastmcp stdio bridge │
│ skills.py — SKILL.md discovery + catalog │
│ subagents.py — isolated read-only delegates │
│ system_prompt.py — prompt builder + AGENTS.md│
│ console.py — Rich + prompt-toolkit I/O │
└──────────────────────────────────────────────┘
```
### Module breakdown
| Module | Purpose |
|---|---|
| `cli.py` | Typer entry point, assembles runtime (`_build_runtime`), runs REPL or one-shot |
| `agent.py` | `NanoCodeAgent` — the core tool-calling loop with `ChatModel` protocol |
| `tools.py` | `ToolRegistry`, `ToolContext` (sandbox/approval), 7 built-in tool handlers |
| `openai_client.py` | `OpenAIChatCompletionsClient`, `AnthropicMessagesClient`, `KimiChatCompletionsClient` — all adapters convert to/from internal OpenAI message format |
| `commands.py` | `CommandRegistry` + `SlashCommand` — `/help`, `/tools`, `/exit`, etc. |
| `plugins.py` | Loads `apecode_plugin.json` manifests; registers tools, commands, skills |
| `mcp.py` | Parses `.mcp.json`, connects via `fastmcp.Client`, registers MCP tools |
| `skills.py` | `SkillCatalog` — discovers `SKILL.md` files, supports plugin-contributed skills |
| `subagents.py` | `SubagentRunner` — spawns isolated agents with read-only tools and capped steps |
| `system_prompt.py` | Builds system prompt with environment info, AGENTS.md chain, skill catalog |
| `console.py` | Rich console output (panels, spinners, tool call display) + prompt-toolkit input session |
## Environment Variables
| Variable | Default | Description |
|---|---|---|
| `APECODE_PROVIDER` | `openai` | Model provider (`openai` / `anthropic` / `kimi`) |
| `APECODE_MODEL` | `gpt-4.1-mini` | Model name |
| `APECODE_SANDBOX_MODE` | `workspace-write` | Sandbox mode (`read-only` / `workspace-write` / `danger-full-access`) |
| `APECODE_APPROVAL_POLICY` | `on-request` | Approval policy (`on-request` / `always` / `never`) |
| `OPENAI_API_KEY` | — | OpenAI API key |
| `OPENAI_BASE_URL` | `https://api.openai.com/v1` | Custom OpenAI-compatible endpoint |
| `ANTHROPIC_API_KEY` | — | Anthropic API key |
| `ANTHROPIC_BASE_URL` | `https://api.anthropic.com/v1` | Custom Anthropic endpoint |
| `ANTHROPIC_API_VERSION` | `2023-06-01` | Anthropic API version header |
| `KIMI_API_KEY` | — | Kimi API key |
| `KIMI_BASE_URL` | `https://api.moonshot.cn/v1` | Kimi endpoint |
## Plugin System
Place a plugin manifest as `apecode_plugin.json` in a plugin directory:
```json
{
"name": "EchoPlugin",
"tools": [
{
"name": "echo_text",
"description": "Echo text from JSON args",
"parameters": {
"type": "object",
"properties": { "text": { "type": "string" } },
"required": ["text"],
"additionalProperties": false
},
"argv": ["python3", "/absolute/path/to/tool.py"],
"mutating": false,
"timeout_sec": 60
}
],
"commands": [
{
"name": "quick-review",
"description": "Run plugin prompt template",
"usage": "/quick-review <task>",
"output": "Running quick review...",
"agent_input_template": "Review this task:\\n{args}"
}
],
"skills": [
{
"name": "plugin-skill",
"description": "A plugin-provided skill",
"content": "# Plugin Skill\\n\\nKeep output concise."
}
]
}
```
- Tools use either `argv` (recommended) or `command` to specify the executable.
- Tool processes receive JSON arguments on `stdin` and write results to `stdout`.
- Commands support `{args}` placeholder in `agent_input_template`.
- Skills can use inline `content` or a `file` path relative to the manifest.
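Following the stdin/stdout protocol above, the `tool.py` referenced by the `echo_text` manifest entry could be as small as the sketch below. Only the JSON-arguments-on-`stdin`, result-on-`stdout` contract comes from the docs; the function name and structure are illustrative. A real script would call `run_tool(sys.stdin, sys.stdout)` under an `if __name__ == "__main__":` guard:

```python
import io
import json

def run_tool(stdin, stdout) -> None:
    """Plugin tool body: JSON arguments in on stdin, result out on stdout."""
    args = json.load(stdin)                 # arguments arrive as a JSON object
    print(args.get("text", ""), file=stdout)  # result is whatever hits stdout

# Simulate how the agent would invoke the tool process:
fake_in = io.StringIO('{"text": "hello from plugin"}')
fake_out = io.StringIO()
run_tool(fake_in, fake_out)
assert fake_out.getvalue() == "hello from plugin\n"
```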
## MCP Config
Load MCP tools from `.mcp.json` or `apecode_mcp.json` in workspace root, or via `--mcp-config`:
```json
{
"mcpServers": {
"demo": {
"command": "python3",
"args": ["/absolute/path/to/mcp_server.py"],
"timeout_sec": 30
}
}
}
```
## Skills
Create a skill as `skills/<name>/SKILL.md`:
```markdown
# concise-review
Review code and answer with concise bullet points.
```
Use inside REPL:
```
/skill concise-review review src/apecode/agent.py
```
## Development
```bash
# Install dev dependencies
uv sync
# Run tests
uv run pytest
# Run a single test file
uv run pytest tests/test_tools.py -v
# Lint
uv run ruff check src/ tests/
# Lint with auto-fix
uv run ruff check --fix src/ tests/
# Format
uv run ruff format src/ tests/
```
## Project Structure
```
src/apecode/
├── __init__.py # package version
├── __main__.py # python -m apecode entry
├── cli.py # Typer CLI app + runtime assembly
├── agent.py # NanoCodeAgent core loop
├── tools.py # tool registry + built-in tools
├── openai_client.py # model adapters (OpenAI/Anthropic/Kimi)
├── commands.py # slash command framework
├── plugins.py # plugin manifest loader
├── mcp.py # MCP stdio bridge
├── skills.py # skill discovery + catalog
├── subagents.py # subagent delegation
├── system_prompt.py # system prompt builder
└── console.py # Rich + prompt-toolkit I/O
tests/
├── test_agent.py
├── test_commands.py
├── test_mcp.py
├── test_model_adapters.py
├── test_plugins.py
├── test_skills.py
├── test_subagents.py
└── test_tools.py
```
## License
Apache-2.0
| text/markdown | ApeCode Team | null | null | null | Apache-2.0 | agent, ai, cli, code, terminal | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"anthropic>=0.80.0",
"fastmcp>=2.0.0",
"openai>=2.0.0",
"prompt-toolkit>=3.0.0",
"rich>=14.0.0",
"typer>=0.15.0"
] | [] | [] | [] | [] | uv/0.7.15 | 2026-02-20T15:34:03.365468 | apecode-0.0.2.tar.gz | 90,236 | 71/3c/07a541e11afbe60fd31df2e538761f8f45bd0094895f38edebfc5a62ffd7/apecode-0.0.2.tar.gz | source | sdist | null | false | 7d23ecd4cd386ebc61fbb06366c46612 | d50380038d64310c4e47aeca74b2b891b555996063698ef57a0b923a8b60eef7 | 713c07a541e11afbe60fd31df2e538761f8f45bd0094895f38edebfc5a62ffd7 | null | [
"LICENSE"
] | 198 |
2.4 | physicell-settings | 0.5.0 | User-friendly Python package for generating PhysiCell_settings.xml configuration files | # PhysiCell Settings
A powerful, modular Python package for generating PhysiCell_settings.xml configuration files with comprehensive parameter coverage, intuitive API design, and maintainable architecture.
[](https://www.python.org/downloads/)
[](https://pypi.org/project/physicell-settings/)
[](https://www.gnu.org/licenses/gpl-3.0)
## 🚀 Overview
The PhysiCell Settings package provides a powerful yet simple API for creating complex PhysiCell simulations. Built with a modern modular architecture, it handles all aspects of PhysiCell configuration with a focus on ease of use, maintainability, and compatibility with existing PhysiCell standards.
## 📦 Installation
Install from PyPI using pip:
```bash
pip install physicell-settings
```
**Requirements:**
- Python 3.8 or higher
- No external dependencies (uses only Python standard library)
## 🚀 Quick Start
```python
from physicell_config import PhysiCellConfig
# Create a new configuration
config = PhysiCellConfig()
# Set up simulation domain
config.domain.set_bounds(x_min=-500, x_max=500, y_min=-500, y_max=500)
# Add substrates
config.substrates.add_substrate(
name="oxygen",
diffusion_coefficient=100000.0,
decay_rate=0.1
)
# Add cell type
config.cell_types.add_cell_type(
name="cancer_cell",
cycle_model="Ki67_basic"
)
# Save configuration
config.save("PhysiCell_settings.xml")
```
## Development Status
**Current Version:** 0.5.0
This package is stable and actively maintained. All core features are working and covered by regression tests.
## ✨ Key Features
- **🏗️ Modular Architecture** - Well-organized, maintainable codebase with focused modules
- **🎯 Simple & Intuitive** - Clean API with sensible defaults and method chaining
- **🔧 Comprehensive Coverage** - All PhysiCell features: domain, substrates, cells, rules, PhysiBoSS
- **✅ Built-in Validation** - Configuration validation with detailed error reporting
- **🔄 Full Compatibility** - Generates standard PhysiCell XML, reproduces existing configs
- **🧬 Advanced Features** - Cell rules, PhysiBoSS integration, initial conditions, enhanced visualization
- **📊 Cell Rules CSV** - Context-aware generation of rules.csv files with signal/behavior validation
- **📚 Well Documented** - Extensive examples and clear modular documentation
### 🎯 Perfect For
- **Researchers** building new PhysiCell models with complex requirements
- **Developers** programmatically generating parameter sweeps and batch simulations
- **Teams** collaborating on large simulation projects with maintainable code
- **Educators** teaching computational biology with clear, reproducible examples
## 🏗️ Modular Architecture
The configuration builder uses a modular composition pattern that provides:
- **Clean Separation**: Each module handles one aspect of configuration
- **Easy Maintenance**: Small, focused files instead of monolithic code
- **Team Development**: Multiple developers can work on different modules
- **Extensibility**: Easy to add new modules without affecting existing code
### Module Structure
```
├── config_builder_modular.py # Main configuration class
└── modules/
├── domain.py # Simulation domain and mesh
├── substrates.py # Microenvironment substrates
├── cell_types.py # Cell definitions and phenotypes
├── cell_rules.py # Cell behavior rules
├── cell_rules_csv.py # rules.csv generation with context awareness
├── physiboss.py # PhysiBoSS boolean networks
├── initial_conditions.py # Initial cell placement
├── save_options.py # Output and visualization
└── options.py # Simulation parameters
```
## 🔧 Advanced Modular Usage
```python
# Direct module access for advanced features
config.domain.set_bounds(-500, 500, -500, 500)
config.substrates.add_substrate("glucose", diffusion_coefficient=50000.0)
config.cell_types.add_cell_type("immune_cell")
config.cell_types.set_motility("immune_cell", speed=2.0, enabled=True)
config.cell_types.add_secretion("immune_cell", "oxygen", uptake_rate=5.0)
config.cell_rules.add_rule("oxygen", "proliferation", "cancer_cell")
config.physiboss.enable_physiboss("boolean_model.bnd")
config.initial_conditions.add_cell_cluster("cancer_cell", x=0, y=0, radius=100)
```
## 📖 Examples
### Complete Tumor-Immune Simulation
```python
from physicell_config import PhysiCellConfig
# Create configuration
config = PhysiCellConfig()
# Setup domain
config.domain.set_bounds(-600, 600, -600, 600)
config.domain.set_mesh(20.0, 20.0)
# Add substrates
config.substrates.add_substrate("oxygen",
diffusion_coefficient=100000.0,
decay_rate=0.1,
initial_condition=38.0)
config.substrates.add_substrate("glucose",
diffusion_coefficient=50000.0,
decay_rate=0.01,
initial_condition=10.0)
# Add cell types
config.cell_types.add_cell_type("cancer_cell")
config.cell_types.set_motility("cancer_cell", speed=0.5, enabled=True)
config.cell_types.add_secretion("cancer_cell", "oxygen", uptake_rate=10.0)
config.cell_types.add_cell_type("immune_cell")
config.cell_types.set_motility("immune_cell", speed=2.0, enabled=True)
# Add initial conditions
config.initial_conditions.add_cell_cluster("cancer_cell", x=0, y=0, radius=150, num_cells=100)
config.initial_conditions.add_cell_cluster("immune_cell", x=300, y=300, radius=50, num_cells=20)
# Add cell rules to XML
config.cell_rules.add_rule(
signal="oxygen",
behavior="proliferation",
cell_type="cancer_cell",
min_signal=0.0,
max_signal=38.0,
min_behavior=0.0,
max_behavior=0.05
)
# Configure visualization
config.save_options.set_svg_options(
interval=120.0,
plot_substrate=True,
substrate_to_plot="oxygen",
cell_color_by="cell_type"
)
# Save configuration
config.save_xml("tumor_immune_simulation.xml")
```
### Loading Cell Rules from CSV
```python
# Create rules CSV file
import csv
rules = [
{"signal": "oxygen", "behavior": "proliferation", "cell_type": "cancer_cell",
"min_signal": 0.0, "max_signal": 38.0, "min_behavior": 0.0, "max_behavior": 0.05},
{"signal": "pressure", "behavior": "apoptosis", "cell_type": "cancer_cell",
"min_signal": 0.0, "max_signal": 1.0, "min_behavior": 0.0, "max_behavior": 0.1}
]
with open("cell_rules.csv", "w", newline="") as f:
writer = csv.DictWriter(f, fieldnames=rules[0].keys())
writer.writeheader()
writer.writerows(rules)
# Load rules in configuration
config.cell_rules.load_rules_from_csv("cell_rules.csv")
```
## 🧪 Testing and Validation
### Run Demo
```bash
python demo_modular.py
```
### Configuration Validation
```python
# Built-in validation
issues = config.validate()
if issues:
for issue in issues:
print(f"⚠️ {issue}")
else:
print("✅ Configuration is valid!")
# Get configuration summary
summary = config.get_summary()
print(f"Substrates: {summary['substrates']}")
print(f"Cell types: {summary['cell_types']}")
```
## 📁 Project Structure
```
physicell_config/
├── README.md # This file
├── config_builder.py # Main configuration class
├── demo_modular.py # Demonstration script
├── modules/ # Modular components
│ ├── __init__.py
│ ├── base.py # Common utilities
│ ├── domain.py # Domain configuration
│ ├── substrates.py # Substrate management
│ ├── cell_types.py # Cell type definitions
│ ├── cell_rules.py # Cell behavior rules
│ ├── physiboss.py # PhysiBoSS integration
│ ├── initial_conditions.py # Initial cell placement
│ ├── save_options.py # Output configuration
│ └── options.py # Simulation options
├── examples/ # Example configurations
│ ├── PhysiCell_settings.xml # Reference PhysiCell config
│ ├── basic_tumor.py # Basic tumor example
│ ├── cancer_immune.py # Cancer-immune interaction
│ └── physiboss_integration.py # PhysiBoSS example
├── MODULAR_ARCHITECTURE.md # Detailed architecture docs
├── MODULARIZATION_COMPLETE.md # Project completion summary
└── setup.py # Package setup
```
## 🔧 Advanced Features
### PhysiBoSS Integration
```python
# Enable PhysiBoSS boolean networks
config.physiboss.enable_physiboss("boolean_model.bnd")
config.physiboss.add_mutation("mutant_cell", "p53", False)
config.physiboss.add_initial_value("EGFR", True)
```
### Complex Initial Conditions
```python
# Multiple initial condition types
config.initial_conditions.add_cell_cluster("cancer", 0, 0, radius=100)
config.initial_conditions.add_single_cell("stem_cell", 200, 200)
config.initial_conditions.add_rectangular_region("stromal", -300, 300, -300, 300, density=0.3)
```
### Enhanced Visualization
```python
# Advanced SVG options
config.save_options.set_svg_options(
plot_substrate=True,
substrate_to_plot="oxygen",
cell_color_by="cell_type",
interval=60.0
)
```
### Cell Rules CSV Generation
```python
# Create cell rules CSV with context awareness
rules = config.cell_rules_csv
# Explore available signals and behaviors
rules.print_available_signals(filter_by_type="contact")
rules.print_available_behaviors(filter_by_type="motility")
rules.print_context() # Shows current cell types and substrates
# Add rules following PhysiCell CSV format
rules.add_rule("tumor", "oxygen", "decreases", "necrosis", 0, 3.75, 8, 0)
rules.add_rule("tumor", "contact with immune_cell", "increases", "apoptosis", 0.1, 0.5, 4, 0)
# Generate PhysiCell-compatible CSV file
rules.generate_csv("config/differentiation/rules.csv")
```
### Intracellular Models (PhysiBoSS)
```python
# Add intracellular models to cell types
config.cell_types.add_intracellular_model("T_cell", "maboss")
config.cell_types.set_intracellular_settings("T_cell",
bnd_filename="tcell.bnd",
cfg_filename="tcell.cfg")
config.cell_types.add_intracellular_mutation("T_cell", "FOXP3", 0)
```
## 🤝 Contributing
We welcome contributions! The modular architecture makes it easy to:
- Add new modules for additional PhysiCell features
- Enhance existing modules with new functionality
- Improve documentation and examples
- Add comprehensive test suites
## 📧 Support & Contact
- **Author:** Marco Ruscone
- **Email:** m.ruscone94@gmail.com
- **PyPI:** https://pypi.org/project/physicell-settings/
For questions, suggestions, or bug reports, please feel free to reach out via email.
## 📄 License
This project is licensed under the GNU General Public License v3.0 - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- PhysiCell development team for creating the simulation framework
- The open-source community for inspiration and best practices
### Contributors
Thanks to our external contributors:
- [@zacsims](https://github.com/zacsims) — fixed Dirichlet boundary boolean attribute casing (`enabled="False"` → `enabled="false"`) in the XML output ([#5](https://github.com/mruscone/PhysiCell_Settings/pull/5))
| text/markdown | Marco Ruscone | ym.ruscone94@gmail.com | null | null | null | physicell, multicellular, simulation, biology, computational-biology, bioinformatics, cancer, tissue, xml, configuration | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | https://github.com/mruscone/PhysiCell_Settings | null | >=3.8 | [] | [] | [] | [
"pytest>=6.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"flake8; extra == \"dev\"",
"mypy; extra == \"dev\"",
"sphinx>=4.0; extra == \"docs\"",
"sphinx-rtd-theme; extra == \"docs\"",
"myst-parser; extra == \"docs\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/mruscone/PhysiCell_Settings/issues",
"Documentation, https://github.com/mruscone/PhysiCell_Settings#readme",
"Source Code, https://github.com/mruscone/PhysiCell_Settings"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:32:59.706598 | physicell_settings-0.5.0.tar.gz | 79,219 | 96/3b/46c15de2b32822f32ba4cd27d40238450871d0b1b2abfbf764210ed198fb/physicell_settings-0.5.0.tar.gz | source | sdist | null | false | cb6f8f668561daf0ee032981744bbd0c | f2fe3e5610ef785813b7464ffcc5fff6604e4aadfbc728d1e843b90f917f1ff0 | 963b46c15de2b32822f32ba4cd27d40238450871d0b1b2abfbf764210ed198fb | null | [
"LICENSE"
] | 192 |
2.4 | hexlab | 0.0.6 | A feature-rich color exploration and manipulation tool. | # hexlab
[](https://pypi.org/project/hexlab/)
[](https://pypi.org/project/hexlab/)
[](https://pepy.tech/project/hexlab)
A professional, feature-rich hex color exploration and manipulation tool for the command line.
## Introduction
**hexlab** is a powerful CLI utility for developers, designers, and accessibility experts. It provides deep insight into 24-bit colors, supporting advanced color spaces like **OKLAB, OKLCH, CIE LAB, CIE XYZ**, and standard formats like RGB, HSL, and CMYK.
Beyond inspection, hexlab offers sophisticated manipulation capabilities, including gradient generation using interpolation in perceptual color spaces, vision simulation for color blindness, and a robust adjustment pipeline for fine-tuning colors.
## Installation
hexlab requires **Python 3.7+**.
### Via PyPI (Recommended)
```bash
pip install hexlab
```
### From Source
```bash
git clone https://github.com/mallikmusaddiq1/hexlab.git
cd hexlab
pip install .
```
## Quick Start
Inspect a hex color with visual bars and WCAG contrast:
```bash
hexlab -H FF5733 -rgb -hsl -wcag
```
Generate a smooth gradient in OKLAB space:
```bash
hexlab gradient -H FF0000 -H 0000FF -cs oklab --steps 15
```
Simulate color blindness (Deuteranopia):
```bash
hexlab vision -cn "chartreuse" -d
```
## Main Command
The base command allows you to inspect a single color, view its neighbors, and retrieve technical specifications in various formats.
```bash
usage: hexlab [-h] [-H HEX | -r | -cn NAME | -di INDEX] [OPTIONS...]
```
### Input Options
Exactly one input method is required.
| Flag | Description |
|-----------------------|-------------|
| `-H, --hex` | 6-digit hex color code (e.g., `FF5500`). Do not include the `#`. |
| `-r, --random` | Generate a random 24-bit color. |
| `-cn, --color-name` | Use a named color (e.g., `tomato`, `azure`). See `--list-color-names`. |
| `-di, --decimal-index`| Input integer index (0 to 16777215). Useful for programmatic iteration. |
| `-s, --seed` | Seed for random generation reproducibility. |
### Navigation & Modifications
| Flag | Description |
|------------------|-------------|
| `-n, --next` | Show the next color (index + 1). |
| `-p, --previous` | Show the previous color (index - 1). |
| `-N, --negative` | Show the inverse (negative) color. |
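Under the hood, every 24-bit color is just an integer in `0..16777215`, which is what makes `-di`, `-n`, `-p`, and `-N` composable. The arithmetic can be sketched in Python like this (illustrative only — not hexlab's actual source):

```python
MAX_INDEX = 0xFFFFFF  # 16777215, the last 24-bit color

def hex_to_index(hex_code):
    """Map a 6-digit hex color to its decimal index (0..16777215)."""
    return int(hex_code, 16)

def index_to_hex(index):
    """Map a decimal index back to a 6-digit hex code."""
    return f"{index:06X}"

def negative(hex_code):
    """Inverse (negative) color: every channel flipped, i.e. 0xFFFFFF - index."""
    return index_to_hex(MAX_INDEX - hex_to_index(hex_code))
```

So `-n`/`-p` are just index ± 1, and the negative of `FF5733` is `00A8CC`.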
### Technical Information Flags
Toggle specific color space outputs or information blocks.
| Flag | Description |
|-----------------|-------------|
| `-all` | Show **all** available technical information. |
| `-wcag` | Show WCAG contrast ratios (AA/AAA) against Black and White. |
| `-hb, --hide-bars` | Hide the visual ANSI color bars (raw text output only). |
| `-rgb` | Red, Green, Blue (0-255). |
| `-hsl` | Hue, Saturation, Lightness. |
| `-hsv` | Hue, Saturation, Value. |
| `-hwb` | Hue, Whiteness, Blackness. |
| `-cmyk` | Cyan, Magenta, Yellow, Key (Black). |
| `-lab` | CIE 1976 Lab (Perceptually uniform). |
| `--oklab` | OKLAB (Improved perceptual uniformity). |
| `--oklch` | OKLCH (Cylindrical form of OKLAB). |
| `-xyz` | CIE 1931 XYZ. |
| `-lch` | CIE LCH (Cylindrical Lab). |
| `--cieluv` | CIE 1976 LUV. |
| `-l, --luminance` | Relative Luminance (0.0 - 1.0). |
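The `-l` and `-wcag` outputs follow the standard WCAG 2.x definitions of relative luminance and contrast ratio. A minimal Python sketch of those formulas (hexlab's own implementation may differ in rounding or presentation):

```python
def srgb_to_linear(channel):
    """Linearize one 0-255 sRGB channel per the WCAG 2.x definition."""
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """Relative luminance in 0.0-1.0, as reported by -l."""
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio (1:1 up to 21:1), the basis of the -wcag AA/AAA checks."""
    l1, l2 = relative_luminance(*rgb1), relative_luminance(*rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)
```

Black against white gives the maximum 21:1 ratio; AA requires at least 4.5:1 for normal text.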
### Meta Options
| Flag | Description |
|-------------------------------|-------------|
| `--list-color-names [fmt]` | List all supported color names. Format can be `text`, `json`, or `prettyjson`. |
| `-hf, --help-all` | Print help for the main command AND all subcommands. |
---
## Subcommands
hexlab features a suite of specialized tools invoked via `hexlab <subcommand>`.
### 1. Gradient
Generate interpolated color steps between two or more colors. Supports interpolation in perceptual spaces like OKLAB for smoother results.
```bash
hexlab gradient -H FF0000 -H 00FF00 -S 10 -cs oklab
```
| Option | Description |
|---------------------|-------------|
| `-H, -cn, -di, -r` | Input colors. **Must provide at least 2 inputs** (or use `-c` with `-r`). |
| `-S, --steps` | Total number of steps in the gradient (default: 10). |
| `-cs, --colorspace` | Interpolation space. Choices: `srgb`, `srgblinear`, `lab`, `lch`, `oklab` (default), `oklch`, `luv`. |
### 2. Mix
Mix (average) multiple colors together. Useful for finding the midpoint or blending pigments conceptually.
```bash
hexlab mix -cn red -cn blue -a 50
```
| Option | Description |
|------------------|-------------|
| `-a, --amount` | Mix ratio for 2 colors (0-100%). Default 50% (perfect average). |
| `-cs, --colorspace` | Mixing space. Averaging in `srgblinear` often yields more physically accurate light mixing than `srgb`. |
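Averaging raw sRGB values darkens the midpoint because sRGB is gamma-encoded; mixing in `srgblinear` first decodes each channel to linear light. A hedged Python sketch of the idea (illustrative formulas, not hexlab internals):

```python
def to_linear(v):
    """Decode one 0-255 sRGB channel to linear light."""
    c = v / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def to_srgb(c):
    """Encode linear light back to a 0-255 sRGB channel."""
    s = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return round(s * 255)

def mix_linear(a, b, amount=0.5):
    """Blend two (r, g, b) tuples in linear-light space; amount=0.5 is an even mix."""
    return tuple(
        to_srgb((1 - amount) * to_linear(x) + amount * to_linear(y))
        for x, y in zip(a, b)
    )
```

Mixing black and white this way yields roughly `(188, 188, 188)`, noticeably lighter than the naive sRGB average `(128, 128, 128)`.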
### 3. Scheme
Generate standard color harmonies based on color theory wheels.
```bash
hexlab scheme -H FF5733 -triadic -hm oklch
```
| Option | Description |
|-------------------------|-------------|
| `-hm, --harmony-model` | The color wheel model to rotate hue on. Choices: `hsl` (classic), `lch`, `oklch` (modern). |
| `-co, --complementary` | 180° rotation. |
| `-sco, --split-complementary` | 150° and 210° rotations. |
| `-tr, --triadic` | 120° and 240° rotations. |
| `-an, --analogous` | -30° and +30° rotations. |
| `-tsq, -trc` | Tetradic Square (90° steps) and Rectangular (60°/180°). |
| `-mch` | Monochromatic (Lightness variations). |
| `-cs` | Custom degree shift (e.g., `-cs 45`). |
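Each harmony flag reduces to rotating the hue channel of the chosen wheel model by fixed offsets. In Python terms (hue in degrees assumed; hexlab's exact rounding may differ):

```python
def harmony(hue, *offsets):
    """Rotate a hue (in degrees) by each offset, wrapping at 360."""
    return [(hue + off) % 360 for off in offsets]

def complementary(hue):        # -co: 180 degree rotation
    return harmony(hue, 180)

def triadic(hue):              # -tr: 120 and 240 degree rotations
    return harmony(hue, 120, 240)

def split_complementary(hue):  # -sco: 150 and 210 degree rotations
    return harmony(hue, 150, 210)
```

The custom shift `-cs 45` is the same operation with a user-supplied offset.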
### 4. Vision
Simulate various forms of color vision deficiency (CVD) to test accessibility.
```bash
hexlab vision -r -all
```
| Flag | Description |
|-----------------------|-------------|
| `-p, --protanopia` | Red-blind simulation. |
| `-d, --deuteranopia` | Green-blind simulation (most common). |
| `-t, --tritanopia` | Blue-blind simulation. |
| `-a, --achromatopsia` | Total color blindness (grayscale). |
### 5. Similar
Find perceptually similar colors by searching the 24-bit space around a base color.
```bash
hexlab similar -H 336699 -dm oklab -c 5
```
| Option | Description |
|-----------------------|-------------|
| `-dm, --distance-metric` | Algorithm to calculate "similarity". Choices: `lab` (CIEDE2000), `oklab` (Euclidean), `rgb`. |
| `-dv, --dedup-value` | Threshold below which two colors are considered "the same". Higher values produce more visually distinct results. |
| `-c, --count` | Number of similar colors to generate. |
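Conceptually, `-dm` picks a distance function and `-dv` is a threshold used to filter near-duplicates. A generic sketch of that dedup step, using plain RGB Euclidean distance for brevity (the `lab`/`oklab` metrics work the same way in their own coordinate spaces):

```python
import math

def rgb_distance(a, b):
    """Euclidean distance between two (r, g, b) tuples."""
    return math.dist(a, b)

def dedup(colors, threshold):
    """Keep only colors farther than `threshold` from every color kept so far."""
    kept = []
    for c in colors:
        if all(rgb_distance(c, k) > threshold for k in kept):
            kept.append(c)
    return kept
```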
### 6. Distinct
Generate a palette of visually distinct colors starting from a base. Uses a greedy algorithm to maximize distance.
```bash
hexlab distinct -r -c 10 -dm oklab
```
| Option | Description |
|------------------|-------------|
| `-c, --count` | Number of distinct colors to find. |
### 7. Convert
Utility to convert numerical color strings between formats.
```bash
hexlab convert -f hex -t rgb -v "FF0000"
hexlab convert -f oklch -t hex -v "oklch(0.6 0.15 45deg)"
```
| Option | Description |
|--------------------|-------------|
| `-f, --from-format`| Source format (e.g., `oklch`, `rgb`, `hex`). |
| `-t, --to-format` | Target format. |
| `-v, --value` | The value string. Use quotes! |
| `-V, --verbose` | Show input -> output format. |
### 8. Adjust
An advanced color manipulation pipeline. Operations are deterministic. By default, a fixed pipeline order is used, but you can define a custom order.
```bash
hexlab adjust -H 663399 --lighten 20 --rotate 15 --posterize 8
```
#### Tone & Vividness
| Flag | Description |
|-------------------------------|-------------|
| `--brightness / --brightness-srgb` | Adjust linear or sRGB brightness (-100% to 100%). |
| `--contrast` | Adjust contrast. |
| `--gamma` | Apply gamma correction. |
| `--exposure` | Adjust exposure in stops. |
| `--chroma-oklch` | Scale chroma using OKLCH space. |
| `--vibrance-oklch` | Smart saturation that boosts low-chroma colors more than high-chroma ones. |
| `--warm-oklab / --cool-oklab` | Shift color temperature. |
| `--target-rel-lum` | Force the color to a specific relative luminance (0.0 - 1.0). |
| `--min-contrast-with` | Ensure the result meets a contrast ratio against this hex code. |
#### Filters
| Flag | Description |
|-----------------|-------------|
| `--grayscale` | Convert to B&W. |
| `--sepia` | Apply retro sepia filter. |
| `--invert` | Invert color channels. |
| `--posterize` | Reduce color depth to N levels. |
| `--solarize` | Solarize effect based on OKLAB Lightness. |
| `--threshold` | Binarize color (Black/White) based on luminance. |
| `--tint` | Tint towards a specific Hex color. |
#### Pipeline Control
| Flag | Description |
|-------------------------|-------------|
| `-cp, --custom-pipeline`| Apply adjustments exactly in the order flags are passed in CLI (disables fixed pipeline). |
| `-V, --verbose` | Log every step of the adjustment pipeline. |
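Because the individual adjustments don't commute, the order in which they run changes the result — which is exactly what `-cp` exposes. An abstract Python sketch of the idea, with hypothetical stand-in operations (not hexlab's code):

```python
def run_pipeline(value, operations):
    """Apply each operation in sequence; with -cp the sequence is the CLI flag order."""
    for op in operations:
        value = op(value)
    return value

# Hypothetical stand-ins for two adjustments that do not commute:
double = lambda v: v * 2
add_ten = lambda v: v + 10
```

`run_pipeline(5, [double, add_ten])` gives 20, while the reverse order gives 30 — same flags, different result.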
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a feature branch (`git checkout -b feature/amazing-feature`).
3. Commit your changes.
4. Push to the branch.
5. Open a Pull Request.
Please ensure any new color math is backed by tests in the `tests/` directory.
## Author & License
**hexlab** is developed and maintained by:
### Mallik Mohammad Musaddiq
Email: [mallikmusaddiq1@gmail.com](mailto:mallikmusaddiq1@gmail.com)
© 2025 Hexlab. Open Source Software.
| text/markdown | null | Mallik Mohammad Musaddiq <mallikmusaddiq1@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Multimedia :: Graphics",
"Environment :: Console"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T15:32:57.640355 | hexlab-0.0.6.tar.gz | 48,545 | 0a/aa/0134ab63ac253be905043961e9d49b10659733ff2b4dafb07ebc12227f99/hexlab-0.0.6.tar.gz | source | sdist | null | false | f66f5f36f718fdcfae8c9415a97e09fd | aad5e90009f3980a97a8a247d942b22c046bd825a2b109c16046eab46398a8e2 | 0aaa0134ab63ac253be905043961e9d49b10659733ff2b4dafb07ebc12227f99 | null | [] | 213 |
2.4 | kombo | 1.0.0 | The official Python SDK for the Kombo Unified API | # Kombo Python SDK
Developer-friendly & type-safe Python SDK for the [Kombo Unified API](https://docs.kombo.dev/introduction).
<div align="left">
<a href="https://www.speakeasy.com/?utm_source=kombo-python&utm_campaign=python">
<img src="https://custom-icon-badges.demolab.com/badge/-built%20with%20speakeasy-212015?style=flat-square&logoColor=FBE331&logo=speakeasy&labelColor=545454" />
</a>
<a href="https://pypi.org/project/kombo/">
<img src="https://img.shields.io/pypi/v/kombo?style=flat-square" />
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/license-MIT-blue?style=flat-square" />
</a>
</div>
<br />
> [!NOTE]
> The Kombo Python SDK is **currently in beta**. The core API structure, methods, and input/output objects are considered stable. We may still make minor adjustments such as renames to exported type classes or fixes for code generator oddities, but all changes will be clearly documented in the changelog. We **do not foresee** any blockers for production use.
<!-- Start Table of Contents [toc] -->
## Table of Contents
<!-- $toc-max-depth=2 -->
* [Kombo Python SDK](https://github.com/kombohq/python-sdk/blob/master/#kombo-python-sdk)
* [SDK Installation](https://github.com/kombohq/python-sdk/blob/master/#sdk-installation)
* [SDK Example Usage](https://github.com/kombohq/python-sdk/blob/master/#sdk-example-usage)
* [Region Selection](https://github.com/kombohq/python-sdk/blob/master/#region-selection)
* [Available Resources and Operations](https://github.com/kombohq/python-sdk/blob/master/#available-resources-and-operations)
* [Pagination](https://github.com/kombohq/python-sdk/blob/master/#pagination)
* [Error Handling](https://github.com/kombohq/python-sdk/blob/master/#error-handling)
* [Retries](https://github.com/kombohq/python-sdk/blob/master/#retries)
* [Custom HTTP Client](https://github.com/kombohq/python-sdk/blob/master/#custom-http-client)
* [Resource Management](https://github.com/kombohq/python-sdk/blob/master/#resource-management)
* [Debugging](https://github.com/kombohq/python-sdk/blob/master/#debugging)
* [Development](https://github.com/kombohq/python-sdk/blob/master/#development)
* [Contributions](https://github.com/kombohq/python-sdk/blob/master/#contributions)
<!-- End Table of Contents [toc] -->
<!-- Start SDK Installation [installation] -->
## SDK Installation
> [!NOTE]
> **Python version upgrade policy**
>
> Once a Python version reaches its [official end of life date](https://devguide.python.org/versions/), a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum Python version supported by the SDK will be updated.
The SDK can be installed with *uv*, *pip*, or *poetry* package managers.
### uv
*uv* is a fast Python package installer and resolver, designed as a drop-in replacement for pip and pip-tools. It's recommended for its speed and modern Python tooling capabilities.
```bash
uv add kombo
```
### PIP
*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install kombo
```
### Poetry
*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add kombo
```
### Shell and script usage with `uv`
You can use this SDK in a Python shell with [uv](https://docs.astral.sh/uv/) and the `uvx` command that comes with it like so:
```shell
uvx --from kombo python
```
It's also possible to write a standalone Python script without needing to set up a whole project like so:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "kombo",
# ]
# ///
from kombo import Kombo
sdk = Kombo(
    # SDK arguments
)
# Rest of script here...
```
Once that is saved to a file, you can run it with `uv run script.py` where
`script.py` can be replaced with the actual file name.
<!-- End SDK Installation [installation] -->
## SDK Example Usage
```python
from kombo import SDK

with SDK(
    api_key="<YOUR_BEARER_TOKEN_HERE>",
) as sdk:
    res = sdk.general.check_api_key()

    # Handle response
    print(res)
```
### Specifying an integration ID
The majority of Kombo API endpoints are for interacting with a single "integration" (i.e., a single connection to one of your customers' systems). To use these, make sure to specify the `integration_id` parameter when initializing the SDK:
```python
from kombo import SDK

with SDK(
    api_key="<YOUR_BEARER_TOKEN_HERE>",
    integration_id="workday:HWUTwvyx2wLoSUHphiWVrp28",
) as sdk:
    res = sdk.hris.get_employees()

    # Handle response
    print(res)
```
## Region Selection
The Kombo platform is available in two regions: Europe and the United States.
By default, the SDK will use the EU region. If you're using the US region (hosted under `api.us.kombo.dev`), make sure to specify the `server` parameter when initializing the SDK.
#### Example
```python
from kombo import SDK

with SDK(
    server="us",
    api_key="<YOUR_BEARER_TOKEN_HERE>",
) as sdk:
    res = sdk.general.check_api_key()

    # Handle response
    print(res)
```
<!-- Start Available Resources and Operations [operations] -->
## Available Resources and Operations
<details open>
<summary>Available methods</summary>
### [Assessment](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/assessment/README.md)
* [get_packages](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/assessment/README.md#get_packages) - Get packages
* [set_packages](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/assessment/README.md#set_packages) - Set packages
* [get_open_orders](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/assessment/README.md#get_open_orders) - Get open orders
* [update_order_result](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/assessment/README.md#update_order_result) - Update order result
### [Ats](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md)
* [get_applications](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_applications) - Get applications
* [move_application_to_stage](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#move_application_to_stage) - Move application to stage
* [add_application_result_link](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#add_application_result_link) - Add result link to application
* [add_application_note](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#add_application_note) - Add note to application
* [get_application_attachments](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_application_attachments) - Get application attachments
* [add_application_attachment](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#add_application_attachment) - Add attachment to application
* [reject_application](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#reject_application) - Reject application
* [get_candidates](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_candidates) - Get candidates
* [create_candidate](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#create_candidate) - Create candidate
* [get_candidate_attachments](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_candidate_attachments) - Get candidate attachments
* [add_candidate_attachment](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#add_candidate_attachment) - Add attachment to candidate
* [add_candidate_result_link](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#add_candidate_result_link) - Add result link to candidate
* [add_candidate_tag](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#add_candidate_tag) - Add tag to candidate
* [remove_candidate_tag](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#remove_candidate_tag) - Remove tag from candidate
* [get_tags](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_tags) - Get tags
* [get_application_stages](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_application_stages) - Get application stages
* [get_jobs](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_jobs) - Get jobs
* [create_application](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#create_application) - Create application
* [get_users](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_users) - Get users
* [get_offers](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_offers) - Get offers
* [get_rejection_reasons](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_rejection_reasons) - Get rejection reasons
* [get_interviews](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#get_interviews) - Get interviews
* [import_tracked_application](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/ats/README.md#import_tracked_application) - Import tracked application
### [Connect](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/connect/README.md)
* [create_connection_link](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/connect/README.md#create_connection_link) - Create connection link
* [get_integration_by_token](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/connect/README.md#get_integration_by_token) - Get integration by token
### [General](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md)
* [check_api_key](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#check_api_key) - Check API key
* [trigger_sync](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#trigger_sync) - Trigger sync
* [send_passthrough_request](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#send_passthrough_request) - Send passthrough request
* [delete_integration](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#delete_integration) - Delete integration
* [get_integration_details](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#get_integration_details) - Get integration details
* [set_integration_enabled](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#set_integration_enabled) - Set integration enabled
* [create_reconnection_link](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#create_reconnection_link) - Create reconnection link
* [get_integration_fields](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#get_integration_fields) - Get integration fields
* [update_integration_field](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#update_integration_field) - Update an integration field's passthrough setting
* [get_custom_fields](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#get_custom_fields) - Get custom fields with current mappings
* [update_custom_field_mapping](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#update_custom_field_mapping) - Put custom field mappings
* [get_tools](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/general/README.md#get_tools) - Get tools
### [Hris](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md)
* [get_employees](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_employees) - Get employees
* [get_employee_form](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_employee_form) - Get employee form
* [create_employee_with_form](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#create_employee_with_form) - Create employee with form
* [add_employee_document](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#add_employee_document) - Add document to employee
* [get_employee_document_categories](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_employee_document_categories) - Get employee document categories
* [get_groups](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_groups) - Get groups
* [get_employments](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_employments) - Get employments
* [get_locations](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_locations) - Get work locations
* [get_absence_types](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_absence_types) - Get absence types
* [get_time_off_balances](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_time_off_balances) - Get time off balances
* [get_absences](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_absences) - Get absences
* [create_absence](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#create_absence) - Create absence
* [delete_absence](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#delete_absence) - Delete absence
* [get_legal_entities](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_legal_entities) - Get legal entities
* [get_timesheets](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_timesheets) - Get timesheets
* [get_performance_review_cycles](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_performance_review_cycles) - Get performance review cycles
* [get_performance_reviews](https://github.com/kombohq/python-sdk/blob/master/docs/sdks/hris/README.md#get_performance_reviews) - Get performance reviews
</details>
<!-- End Available Resources and Operations [operations] -->
<!-- Start Pagination [pagination] -->
## Pagination
Some of the endpoints in this SDK support pagination. To use pagination, you make your SDK calls as usual, but the
returned response object will have a `next` method that can be called to pull down the next group of results. If the
return value of `next` is `None`, then there are no more pages to be fetched.
Here's an example of one such pagination call:
```python
from kombo import Kombo

with Kombo(
    api_key="<YOUR_BEARER_TOKEN_HERE>",
) as k_client:
    res = k_client.general.get_integration_fields(integration_id="<id>", page_size=100)

    while res is not None:
        # Handle items
        res = res.next()
```
<!-- End Pagination [pagination] -->
<!-- Start Error Handling [errors] -->
## Error Handling
[`SDKError`](https://github.com/kombohq/python-sdk/blob/master/./src/kombo/errors/sdkerror.py) is the base class for all HTTP error responses. It has the following properties:
| Property | Type | Description |
| ------------------ | ---------------- | --------------------------------------------------------------------------------------- |
| `err.message` | `str` | Error message |
| `err.status_code` | `int` | HTTP response status code eg `404` |
| `err.headers` | `httpx.Headers` | HTTP response headers |
| `err.body` | `str` | HTTP body. Can be empty string if no body is returned. |
| `err.raw_response` | `httpx.Response` | Raw HTTP response |
| `err.data` | | Optional. Some errors may contain structured data. [See Error Classes](https://github.com/kombohq/python-sdk/blob/master/#error-classes). |
### Example
```python
from kombo import Kombo, errors

with Kombo(
    api_key="<YOUR_BEARER_TOKEN_HERE>",
) as k_client:
    res = None
    try:
        res = k_client.general.check_api_key()

        # Handle response
        print(res)

    except errors.SDKError as e:
        # The base class for HTTP error responses
        print(e.message)
        print(e.status_code)
        print(e.body)
        print(e.headers)
        print(e.raw_response)

        # Depending on the method, different errors may be thrown
        if isinstance(e, errors.KomboGeneralError):
            print(e.data.status)  # models.KomboGeneralErrorStatus
            print(e.data.error)  # models.KomboGeneralErrorError
```
### Error Classes
**Primary error:**
* [`SDKError`](https://github.com/kombohq/python-sdk/blob/master/./src/kombo/errors/sdkerror.py): The base class for HTTP error responses.
<details><summary>Less common errors (8)</summary>
<br />
**Network errors:**
* [`httpx.RequestError`](https://www.python-httpx.org/exceptions/#httpx.RequestError): Base class for request errors.
* [`httpx.ConnectError`](https://www.python-httpx.org/exceptions/#httpx.ConnectError): HTTP client was unable to make a request to a server.
* [`httpx.TimeoutException`](https://www.python-httpx.org/exceptions/#httpx.TimeoutException): HTTP request timed out.
**Inherit from [`SDKError`](https://github.com/kombohq/python-sdk/blob/master/./src/kombo/errors/sdkerror.py)**:
* [`KomboAtsError`](https://github.com/kombohq/python-sdk/blob/master/./src/kombo/errors/komboatserror.py): The standard error response with the error codes for the ATS use case. Applicable to 27 of 58 methods.*
* [`KomboHrisError`](https://github.com/kombohq/python-sdk/blob/master/./src/kombo/errors/kombohriserror.py): The standard error response with the error codes for the HRIS use case. Applicable to 17 of 58 methods.*
* [`KomboGeneralError`](https://github.com/kombohq/python-sdk/blob/master/./src/kombo/errors/kombogeneralerror.py): The standard error response with just the platform error codes. Applicable to 14 of 58 methods.*
* [`ResponseValidationError`](https://github.com/kombohq/python-sdk/blob/master/./src/kombo/errors/responsevalidationerror.py): Type mismatch between the response data and the expected Pydantic model. Provides access to the Pydantic validation error via the `cause` attribute.
</details>
\* Check [the method documentation](https://github.com/kombohq/python-sdk/blob/master/#available-resources-and-operations) to see if the error is applicable.
<!-- End Error Handling [errors] -->
<!-- Start Retries [retries] -->
## Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:
```python
from kombo import Kombo
from kombo.utils import BackoffStrategy, RetryConfig

with Kombo(
    api_key="<YOUR_BEARER_TOKEN_HERE>",
) as k_client:
    res = k_client.general.check_api_key(
        RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))

    # Handle response
    print(res)
```
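The four positional `BackoffStrategy` arguments configure exponential backoff; in Speakeasy-generated SDKs these are typically the initial interval, maximum interval, growth exponent, and maximum elapsed time (in milliseconds) — confirm against the `kombo.utils` source if the exact semantics matter. The schedule they imply can be sketched as:

```python
def backoff_schedule(initial, max_interval, exponent, max_elapsed):
    """Illustrative delays (ms) for successive retries: each delay grows by
    `exponent`, is capped at `max_interval`, and retrying stops once the
    cumulative delay would exceed `max_elapsed`. Not the SDK's internal code."""
    delays, elapsed, attempt = [], 0.0, 0
    while True:
        delay = min(initial * exponent ** attempt, max_interval)
        if elapsed + delay > max_elapsed:
            break
        delays.append(delay)
        elapsed += delay
        attempt += 1
    return delays
```

With the values from the example above, `backoff_schedule(1, 50, 1.1, 100)` starts at 1 ms and grows by 10% per attempt until the 100 ms budget is exhausted.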
If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:
```python
from kombo import Kombo
from kombo.utils import BackoffStrategy, RetryConfig

with Kombo(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    api_key="<YOUR_BEARER_TOKEN_HERE>",
) as k_client:
    res = k_client.general.check_api_key()

    # Handle response
    print(res)
```
<!-- End Retries [retries] -->
<!-- Start Custom HTTP Client [http-client] -->
## Custom HTTP Client
The Python SDK makes API calls using the [httpx](https://www.python-httpx.org/) HTTP library. In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance.
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of `HttpClient` or `AsyncHttpClient` respectively, which are Protocols that ensure the client has the necessary methods to make API calls.
This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can just pass an instance of `httpx.Client` or `httpx.AsyncClient` directly.
For example, you could specify a header for every request that this SDK makes as follows:
```python
from kombo import Kombo
import httpx
http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = Kombo(client=http_client)
```
or you could wrap the client with your own custom logic:
```python
from typing import Any, Optional, Union

import httpx

from kombo import Kombo
from kombo.httpclient import AsyncHttpClient


class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        request.headers["Client-Level-Header"] = "added by client"

        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )


s = Kombo(async_client=CustomClient(httpx.AsyncClient()))
```
<!-- End Custom HTTP Client [http-client] -->
<!-- Start Resource Management [resource-management] -->
## Resource Management
The `Kombo` class implements the context manager protocol and registers a finalizer function to close the underlying sync and async HTTPX clients it uses under the hood. This will close HTTP connections, release memory and free up other resources held by the SDK. In short-lived Python programs and notebooks that make a few SDK method calls, resource management may not be a concern. However, in longer-lived programs, it is beneficial to create a single SDK instance via a [context manager][context-manager] and reuse it across the application.
[context-manager]: https://docs.python.org/3/reference/datamodel.html#context-managers
```python
from kombo import Kombo

def main():
    with Kombo(
        api_key="<YOUR_BEARER_TOKEN_HERE>",
    ) as k_client:
        # Rest of application here...
        ...

# Or when using async:
async def amain():
    async with Kombo(
        api_key="<YOUR_BEARER_TOKEN_HERE>",
    ) as k_client:
        # Rest of application here...
        ...
```
<!-- End Resource Management [resource-management] -->
<!-- Start Debugging [debug] -->
## Debugging
You can set up your SDK to emit debug logs for SDK requests and responses.
You can pass your own logger class directly into your SDK.
```python
from kombo import Kombo
import logging
logging.basicConfig(level=logging.DEBUG)
s = Kombo(debug_logger=logging.getLogger("kombo"))
```
<!-- End Debugging [debug] -->
<!-- Placeholder for Future Speakeasy SDK Sections -->
# Development
## Contributions
While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation.
We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.
### SDK Created by [Speakeasy](https://www.speakeasy.com/?utm_source=kombo-python&utm_campaign=python)
<!-- No Summary [summary] -->
<!-- No SDK Example Usage [usage] -->
<!-- No IDE Support [idesupport] -->
<!-- No Authentication [security] -->
<!-- No Global Parameters [global-parameters] -->
<!-- No Server Selection [server] -->
| text/markdown | Kombo Technologies GmbH | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpcore>=1.0.9",
"httpx>=0.28.1",
"jsonpath-python>=1.0.6",
"pydantic>=2.11.2"
] | [] | [] | [] | [
"repository, https://github.com/kombohq/python-sdk.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:32:26.406045 | kombo-1.0.0-py3-none-any.whl | 406,937 | d5/21/d74536721ace631891e1a3d00909a4b6c191ff412827cfbce72fe340b855/kombo-1.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 398343d270e1c187ba66bb358c7c6c7a | 77ba9c72aa3904579d88b684b061573a485a905119aa409a4321ee383eaab514 | d521d74536721ace631891e1a3d00909a4b6c191ff412827cfbce72fe340b855 | null | [] | 194 |
2.4 | pyDEM | 1.2.1 | Software for calculating Topographic Wetness Index (TWI) | # pyDEM: A python digital elevation model analysis package
-----------------------------------------------------------
PyDEM is a package for topographic (terrain) analysis written in Python (with a
bit of Cython). It takes in digital elevation model (DEM) rasters, and it
outputs quantities like slope, aspect, upstream area, and topographic wetness
index. PyDEM closely follows the approach taken by TauDEM to calculate these
quantities. It is designed to be fast, easily extensible, and capable of
handling very large datasets.
PyDEM can be used either from the command line or programmatically via its Python
API. Examples of both usages are given below and in the examples directory. It
can operate on individual elevation rasters, or on entire directories
simultaneously. Most processing steps can be performed in parallel with any
number of independent pyDEM processes. Note that pyDEM depends on TauDEM for
certain steps (e.g., pitfilling) and it also makes extensive use of the GDAL
library for working with geospatial rasters.
## 1. Installation
For installation notes, see [install.md](INSTALL.md)
## 2. Basic Usage
Examples and tests can be found in the `pydem/examples` directory.
### 2.1 Python Module Usage
#### 2.1.1. Calculate quantities on a single elevation tile
Import the `DEMProcessor` class:
from pydem.dem_processing import DEMProcessor
Set the path to a pit filled elevation file in WGS84 coordinates:
filename_to_elevation_geotiff = 'test.tiff'
Instantiate an instance of the `DEMProcessor` class:
dem_proc = DEMProcessor(filename_to_elevation_geotiff)
The following three commands do **not** need to be called in order.
Calculate the aspect and slope magnitude:
mag, aspect = dem_proc.calc_slopes_directions()
Calculate the upstream contributing area:
uca = dem_proc.calc_uca()
Calculate the TWI:
twi = dem_proc.calc_twi()
#### 2.1.2 Calculate TWI on a directory of elevation tiles
The `ProcessManager` class orchestrates multiple processes to compute TWI over multiple tiles in parallel on the same machine. Example usage is as follows:
```python
from pydem.process_manager import ProcessManager
elevation_source_path = r'/home/twi-users/elevation'
manager = ProcessManager(
    n_workers=64,  # Number of worker processes to use
    in_path=elevation_source_path,
    out_path='/home/twi-users/temporary_compute_storage/',  # Where to save intermediate data
    dem_proc_kwargs={},  # Dictionary of values used to initialize DEMProcessor
)
# Start the processing
manager.process_twi() # If this fails (e.g. machine goes down), can restart to pick up where it left off
```
You can also call individual parts of the processing:
```python
manager.compute_grid()  # Figures out how the elevation files in the folder are tiled
manager.process_elevation()  # Fills flats, handles pits, fixes artifacts in the elevation data
manager.process_aspect_slope()
manager.process_uca()  # Computes upstream contributing area (UCA) for individual tiles (embarrassingly parallel)
manager.process_uca_edges()  # Iteratively fixes upstream flow contributions across tile edges
manager.process_twi()  # Calls all of the above functions internally, but skips any work already done
```
Finally, to export results to a single GeoTiff with overviews, use:
```python
manager.save_non_overlap_data_geotiff(
    'float32',  # NumPy-recognized dtype
    new_path=output_path,
    keys=['elev', 'uca', 'aspect', 'slope', 'twi'],  # list can contain a subset of these
    overview_type='average',
)
```
> Note: The name `non_overlap_data` comes from an implementation quirk. The temporary data is saved as
> a zarr file, which overlaps data at the edges of tiles to make it easier to compute UCA
> across edges.
#### 2.1.3 DEMProcessor options
The following options are used by the `DEMProcessor` object. They can be modified by setting the value before processing.
dem_proc = DEMProcessor(filename_to_elevation_geotiff)
dem_proc.fill_flats = False
*Slopes & Directions*
* `fill_flats`: Fill/interpolate the elevation for flat regions before calculating slopes and directions. The direction cannot be calculated in regions where the slope is 0 because the nominal elevation is all the same. This can happen in very gradual terrain or in lake and river beds, particularly when the input elevation is composed of integers. When `True`, the elevation is interpolated in those regions so that a reasonable slope and direction can be calculated. Default `True`.
* `fill_flats_below_sea`: Interpolate the elevation for flat regions that are below sea level. Water will never flow out of these "pits", so in many cases you can ignore these regions and achieve faster processing times. Default `False`.
* `fill_flats_source_tol`: When filling flats, the algorithm finds adjacent "source" pixels and "drain" pixels for each flat region and interpolates the elevation using these data points. This sets the tolerance for the elevation of source pixels above the flat region (i.e. shallow sources are used as sources but not steep cliffs). Default `1`.
* `fill_flats_peaks`: Interpolate the elevation for flat regions that are "peaks" (local maxima). These regions have a higher elevation than all adjacent pixels, so there are no "source" pixels to use for interpolation. When `True`, a single pixel is selected approximately in the center of the flat region as the "peak"/"source". Default `True`.
* `fill_flats_pits`: Interpolate the elevation for flat regions that are "pits" (local minima). These regions have a lower elevation than all adjacent pixels, so there are no "drain" pixels to use for interpolation. When `True`, a single pixel is selected approximately in the center of the flat region as the "pit"/"drain". Default `True`.
* `fill_flats_max_iter`: Default `10`.
*UCA*
* `drain_pits`: Drain from "pits" to nearby but non-adjacent pixels. Pits have no lower adjacent pixels to drain to directly. *Note that with `fill_flats_pits` off, this setting will still drain each pixel in large flat regions, but it may be slower and produces less reasonable results.* Default `True`.
* `drain_pits_path`: Default `True`.
* `drain_pits_min_border`: Default `False`.
* `drain_pits_spill`: Default `False`.
* `drain_pits_max_iter`: Maximum number of iterations to look for drain pixels for pits. Generally, "nearby drains" for a pit/flat region are found by expanding the region upward/outward iteratively. Default `300`.
* `drain_pits_max_dist`: Maximum distance in coordinate-space to (non-adjacent) drains for pits. Pits that are too far from another pixel with a lower elevation will not drain. Default `32`.
* `drain_pits_max_dist_XY`: Maximum distance in real-space to (non-adjacent) drains for pits. Pits that are too far from another pixel with a lower elevation will not drain. This filter is applied after `drain_pits_max_dist`; if the X and Y resolution are similar, this filter is generally unnecessary. Default `None`.
* `drain_flats`: *[Deprecated, replaced by `drain_pits`]* Drains flat regions and pits by draining all pixels in the region to an arbitrary pixel in the region and then draining that pixel to the border of the flat region. Ignored if `drain_pits` is `True`. Default `False`.
* `apply_uca_limit_edges`: Mark edges as completed if the maximum UCA is reached when resolving drainage across edges. If `True`, it may speed up large calculations. Default `False`.
* `uca_saturaion_limit`: Default `32`.
*TWI*
* `apply_twi_limits`: When calculating TWI, limit TWI to max value. Default `False`.
* `apply_twi_limits_on_uca`: When calculating TWI, limit UCA to max value. Default `False`.
* `twi_min_slope`: Default `1e-3`.
* `twi_min_area`: Default `Infinity`.
* `circular_ref_maxcount`: Default `50`.
* `maximum_pit_area`: Default `32`.
### 2.2 Commandline Usage
When pyDEM is installed using the provided setup.py file, the command-line utilities `TWIDinf`, `AreaDinf`, and `DinfFlowDir` are registered with the operating system.
#### TWIDinf :
usage: TWIDinf-script.py [-h] [--save-all]
Input_Pit_Filled_Elevation [Input_Number_of_Chunks]
[Output_D_Infinity_TWI]
Calculates a grid of topographic wetness index which is the log_e(uca / mag),
that is, the natural log of the ratio of contributing area per unit contour
length and the magnitude of the slope. Note, this function takes the elevation
as an input, and it calculates the slope, direction, and contributing area as
intermediate steps.
positional arguments:
Input_Pit_Filled_Elevation
The input pit-filled elevation file in geotiff format.
Input_Number_of_Chunks
The approximate number of chunks that the input file
will be divided into for processing (potentially on
multiple processors).
Output_D_Infinity_TWI
Output filename for the topographic wetness index.
Default value = twi.tif .
optional arguments:
-h, --help show this help message and exit
--save-all, --sa If set, will save all intermediate files as well.
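The quantity `TWIDinf` computes can be illustrated directly from the definition quoted above. This is a toy NumPy sketch of the formula only, not pyDEM's implementation; the array values are made-up examples:

```python
import numpy as np

# TWI = ln(uca / mag): natural log of specific catchment area over slope magnitude.
uca = np.array([100.0, 1000.0])  # example contributing areas per unit contour length
mag = np.array([0.1, 0.01])      # example slope magnitudes
twi = np.log(uca / mag)
# Larger contributing area and gentler slope both push TWI (wetness) up.
```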
#### AreaDinf :
usage: AreaDinf-script.py [-h] [--save-all]
Input_Pit_Filled_Elevation [Input_Number_of_Chunks]
[Output_D_Infinity_Specific_Catchment_Area]
Calculates a grid of specific catchment area which is the contributing area
per unit contour length using the multiple flow direction D-infinity approach.
Note, this is different from the equivalent tauDEM function, in that it takes
the elevation (not the flow direction) as an input, and it calculates the
slope and direction as intermediate steps.
positional arguments:
Input_Pit_Filled_Elevation
The input pit-filled elevation file in geotiff format.
Input_Number_of_Chunks
The approximate number of chunks that the input file
will be divided into for processing (potentially on
multiple processors).
Output_D_Infinity_Specific_Catchment_Area
Output filename for the specific catchment area. Default value = uca.tif.
optional arguments:
-h, --help show this help message and exit
--save-all, --sa If set, will save all intermediate files as well.
#### DinfFlowDir :
usage: DinfFlowDir-script.py [-h]
Input_Pit_Filled_Elevation
[Input_Number_of_Chunks]
[Output_D_Infinity_Flow_Direction]
[Output_D_Infinity_Slope]
Assigns a flow direction based on the D-infinity flow method using the
steepest slope of a triangular facet (Tarboton, 1997, "A New Method for the
Determination of Flow Directions and Contributing Areas in Grid Digital
Elevation Models," Water Resources Research, 33(2): 309-319).
positional arguments:
Input_Pit_Filled_Elevation
The input pit-filled elevation file in geotiff format.
Input_Number_of_Chunks
The approximate number of chunks that the input file
will be divided into for processing (potentially on
multiple processors).
Output_D_Infinity_Flow_Direction
Output filename for the flow direction. Default value = ang.tif .
Output_D_Infinity_Slope
Output filename for the slope magnitude. Default value = mag.tif.
optional arguments:
-h, --help show this help message and exit
## 3. Description of package Contents
* `commandline_utils.py` : Contains the functions that wrap the python modules into command line utilities.
* `dem_processing.py`: Contains the main algorithms.
* Re-implements the D-infinity method from Tarboton (1997).
* Implements a new upstream contributing area algorithm. This performs essentially the same task as previous upstream contributing area algorithms, but with some added functionality. This version deals with areas where the elevation is flat or has no data values and can be updated from the edges without re-calculating the upstream contributing area for the entire tile.
* Re-implements the calculation of the [Topographic Wetness Index](http://en.wikipedia.org/wiki/Topographic_Wetness_Index).
* `process_manager.py`: Implements a class that manages the calculation of TWI for a directory of files.
* Manages the calculation of the upstream contributing area that drains across tile edges.
* Stores errors in the processing.
* Allows multiple processes to work on the same directory without causing conflicts.
* `test_pydem.py`: A few helper utilities that create analytic test-cases used to develop/test pyDEM.
* `utils.py`: A few helper utility functions.
* Renames files in a directory (deprecated, no longer needed).
* Parses file names.
* Wraps some `gdal` functions for reading and writing geotiff files.
* Sorts the rows in an array.
* `cyfuncs`: Directory containing cythonized versions of python functions in `dem_processing.py`. These should be compiled during installation.
* `cyfuncs.cyutils.pyx`: Computationally efficient implementations of algorithms used to calculate upstream contributing area.
* `examples`: Directory containing a few examples, along with an end-to-end test of the cross-tile calculations.
* `examples.compare_tile_to_chunk.py`: Compares the calculation of the upstream contributing area over a full tile compared to multiple chunks in a file. This tests that the upstream contributing area calculation correctly drains across tile edges.
* `examples.process_manager_directory.py`: This shows how to use the `ProcessingManager` to calculate all of the elevation files within a directory.
* `aws`: Directory containing experiments for running pyDEM on Amazon Web Services.
* `pydem.test.test_end_to_end.py`: A few integration tests. Can be run from the commandline using the `pytest` package: `cd pydem; cd test; pytest .`
## 4. References
Tarboton, D. G. (1997). A new method for the determination of flow directions and upslope areas in grid digital elevation models. Water resources research, 33(2), 309-319.
Ueckermann, Mattheus P., et al. (2015). "pyDEM: Global Digital Elevation Model Analysis." In K. Huff & J. Bergstra (Eds.), Scipy 2015: 14th Python in Science Conference. Paper presented at Austin, Texas, 6 - 12 July (pp. 117 - 124). [http://conference.scipy.org/proceedings/scipy2015/mattheus_ueckermann.html](http://conference.scipy.org/proceedings/scipy2015/mattheus_ueckermann.html)
## 5. Attributions
pyDEM uses [lib.pyx](https://github.com/alexlib/openpiv-python/blob/cython_safe_installation/openpiv/c_src/lib.pyx) from the [OpenPIV](https://github.com/alexlib/openpiv-python) project for [inpainting](pydem/pydem/reader/inpaint.pyx) missing values in the final outputs.
| text/markdown | Creare | null | null | null | null | null | [] | [] | null | null | >=3 | [] | [] | [] | [
"rasterio",
"numpy",
"scipy",
"geopy",
"traitlets",
"traittypes",
"zarr",
"cython"
] | [] | [] | [] | [
"Homepage, https://github.com/creare-com/pydem"
] | twine/6.2.0 CPython/3.11.2 | 2026-02-20T15:31:46.637602 | pydem-1.2.1.tar.gz | 279,601 | b9/08/3a9a702df8175fe2e9c7d25ed743e19a218c448e0097eefe307ef6e376da/pydem-1.2.1.tar.gz | source | sdist | null | false | 13a102e9f1aa030eb64ce4d841ad7650 | e14e2ed1193a1a57d8b0d8a54d1f347e48ed836309c975828eb908f339918413 | b9083a9a702df8175fe2e9c7d25ed743e19a218c448e0097eefe307ef6e376da | Apache-2.0 | [
"LICENSE.txt"
] | 0 |
2.4 | agent-ros-bridge | 0.3.5 | Agent ROS Bridge - Universal ROS1/ROS2 bridge for AI agents to control robots and embodied intelligence systems | # Agent ROS Bridge
**Universal ROS1/ROS2 bridge for AI agents to control robots and embodied intelligence systems.**
[](https://github.com/webthree549-bot/agent-ros-bridge/actions/workflows/ci.yml)
[](https://pypi.org/project/agent-ros-bridge/)
[](https://opensource.org/licenses/MIT)
---
## 🔐 Security-First Design
**JWT authentication is always required and cannot be disabled.**
```bash
# Generate a secure secret (REQUIRED - no exceptions)
export JWT_SECRET=$(openssl rand -base64 32)
```
The bridge will **fail to start** without JWT_SECRET. This is by design — security is not optional.
See [SECURITY.md](SECURITY.md) for complete security guidelines.
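If `openssl` is unavailable, an equivalent secret can be generated with Python's standard library. This is a sketch that mirrors the shell command above (32 random bytes, base64-encoded):

```python
import base64
import os

# Equivalent of: openssl rand -base64 32
secret = base64.b64encode(os.urandom(32)).decode()
print(secret)  # export this value as JWT_SECRET
```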
---
## Quick Start
### Production (Native ROS)
**Requirements:** Ubuntu 20.04/22.04 with ROS1 Noetic or ROS2 Humble/Jazzy
```bash
# Install
pip install agent-ros-bridge
# Set required secret
export JWT_SECRET=$(openssl rand -base64 32)
# Start bridge
agent-ros-bridge --config config/bridge.yaml
```
### Docker Examples (Recommended for Testing)
All examples run in isolated Docker containers with simulated robots (no ROS installation needed).
```bash
# Clone repository
git clone https://github.com/webthree549-bot/agent-ros-bridge.git
cd agent-ros-bridge
# Generate JWT secret
export JWT_SECRET=$(openssl rand -base64 32)
# Run example in Docker
cd examples/quickstart
docker-compose up
# Test connection
curl http://localhost:8765/health
```
### Available Docker Examples
| Example | Description | Run |
|---------|-------------|-----|
| `examples/quickstart/` | Basic bridge | `docker-compose up` |
| `examples/fleet/` | Multi-robot fleet | `docker-compose up` |
| `examples/arm/` | Robot arm control | `docker-compose up` |
All examples include:
- Isolated Docker container
- Pre-configured JWT auth
- Simulated robot environment
- Localhost-only binding (127.0.0.1)
---
## Installation
### Via PyPI (Production)
```bash
pip install agent-ros-bridge
```
### Via ClawHub
```bash
openclaw skills add agent-ros-bridge
```
### From Source
```bash
git clone https://github.com/webthree549-bot/agent-ros-bridge.git
cd agent-ros-bridge
pip install -e ".[dev]"
```
---
## Usage
### Python API
```python
import asyncio

from agent_ros_bridge import Bridge
from agent_ros_bridge.gateway_v2.transports.websocket import WebSocketTransport

async def main():
    # Bridge requires the JWT_SECRET environment variable to be set
    bridge = Bridge()
    bridge.transport_manager.register(WebSocketTransport({"port": 8765}))
    await bridge.start()

asyncio.run(main())
```
### CLI
```bash
# Set required secret
export JWT_SECRET=$(openssl rand -base64 32)
# Start bridge
agent-ros-bridge --config config/bridge.yaml
# Generate token for client
python scripts/generate_token.py --secret $JWT_SECRET --role operator
```
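The exact claims written by `scripts/generate_token.py` are not documented here, but the HS256 token structure it relies on can be sketched with the standard library alone. The `make_token` helper and the bare `role` claim below are illustrative assumptions, not the script's actual payload:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64 for each segment
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(secret: str, role: str) -> str:
    # Hypothetical payload: the real script may include more claims (exp, sub, ...)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"role": role}).encode())
    signature = b64url(
        hmac.new(secret.encode(), f"{header}.{payload}".encode(), hashlib.sha256).digest()
    )
    return f"{header}.{payload}.{signature}"

print(make_token("example-secret", "operator"))
```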
---
## Features
- **Security-First**: JWT auth always required, no bypass
- **Multi-Protocol**: WebSocket, gRPC, MQTT
- **Multi-ROS**: ROS1 Noetic + ROS2 Humble/Jazzy
- **Fleet Orchestration**: Multi-robot coordination
- **Arm Control**: UR, xArm, Franka manipulation
- **Docker Examples**: Isolated testing environments
- **Production Ready**: Native Ubuntu deployment
---
## Documentation
| Document | Description |
|----------|-------------|
| [User Manual](docs/USER_MANUAL.md) | Complete guide (23,000+ words) |
| [API Reference](docs/API_REFERENCE.md) | Full API documentation |
| [Native ROS](docs/NATIVE_ROS.md) | Ubuntu/ROS installation |
| [Multi-ROS](docs/MULTI_ROS.md) | Fleet management |
| [Docker vs Native](docs/DOCKER_VS_NATIVE.md) | Deployment comparison |
| [SECURITY.md](SECURITY.md) | Security policy |
---
## Development
```bash
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
make test
# Build package
make build
```
---
## License
[MIT License](LICENSE)
---
**Security is not optional. JWT auth always required.**
| text/markdown | null | Agent ROS Bridge Team <dev@agent-ros-bridge.org> | null | null | MIT | agent, ai, automation, bridge, embodied-intelligence, gateway, iot, robotics, ros, ros2 | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Hardware",
"Typing :: Typed"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"paho-mqtt>=1.6.0",
"prometheus-client>=0.17.0",
"psutil>=5.9.5",
"pyjwt>=2.8.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0.1",
"typing-extensions>=4.7.0; python_version < \"3.10\"",
"aioresponses>=0.7.4; extra == \"all\"",
"bandit[toml]>=1.7.5; extra == \"all\"",
"black>=23.7.0; extra == \"all\"",
"commitizen>=3.6.0; extra == \"all\"",
"factory-boy>=3.3.0; extra == \"all\"",
"faker>=19.0.0; extra == \"all\"",
"grpcio-tools>=1.56.0; extra == \"all\"",
"grpcio>=1.56.0; extra == \"all\"",
"isort>=5.12.0; extra == \"all\"",
"mkdocs-gen-files>=0.5.0; extra == \"all\"",
"mkdocs-literate-nav>=0.6.0; extra == \"all\"",
"mkdocs-material>=9.1.0; extra == \"all\"",
"mkdocs-section-index>=0.3.0; extra == \"all\"",
"mkdocs>=1.5.0; extra == \"all\"",
"mkdocstrings[python]>=0.22.0; extra == \"all\"",
"mypy>=1.5.0; extra == \"all\"",
"paho-mqtt>=1.6.0; extra == \"all\"",
"pre-commit>=3.3.0; extra == \"all\"",
"pymavlink>=2.4.30; extra == \"all\"",
"pymodbus>=3.5.0; extra == \"all\"",
"pytest-asyncio>=0.21.0; extra == \"all\"",
"pytest-benchmark>=4.0.0; extra == \"all\"",
"pytest-cov>=4.1.0; extra == \"all\"",
"pytest-xdist>=3.3.0; extra == \"all\"",
"pytest>=7.4.0; extra == \"all\"",
"rclpy>=3.3.0; extra == \"all\"",
"responses>=0.23.0; extra == \"all\"",
"ruff>=0.0.280; extra == \"all\"",
"safety>=2.3.0; extra == \"all\"",
"websockets>=11.0; extra == \"all\"",
"black>=23.7.0; extra == \"dev\"",
"commitizen>=3.6.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pre-commit>=3.3.0; extra == \"dev\"",
"ruff>=0.0.280; extra == \"dev\"",
"mkdocs-gen-files>=0.5.0; extra == \"docs\"",
"mkdocs-literate-nav>=0.6.0; extra == \"docs\"",
"mkdocs-material>=9.1.0; extra == \"docs\"",
"mkdocs-section-index>=0.3.0; extra == \"docs\"",
"mkdocs>=1.5.0; extra == \"docs\"",
"mkdocstrings[python]>=0.22.0; extra == \"docs\"",
"pymavlink>=2.4.30; extra == \"drones\"",
"pymodbus>=3.5.0; extra == \"industrial\"",
"rclpy>=3.3.0; extra == \"ros2\"",
"bandit[toml]>=1.7.5; extra == \"security\"",
"safety>=2.3.0; extra == \"security\"",
"aioresponses>=0.7.4; extra == \"test\"",
"factory-boy>=3.3.0; extra == \"test\"",
"faker>=19.0.0; extra == \"test\"",
"pytest-asyncio>=0.21.0; extra == \"test\"",
"pytest-benchmark>=4.0.0; extra == \"test\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest-xdist>=3.3.0; extra == \"test\"",
"pytest>=7.4.0; extra == \"test\"",
"responses>=0.23.0; extra == \"test\"",
"grpcio-tools>=1.56.0; extra == \"transports\"",
"grpcio>=1.56.0; extra == \"transports\"",
"paho-mqtt>=1.6.0; extra == \"transports\"",
"websockets>=11.0; extra == \"transports\""
] | [] | [] | [] | [
"Homepage, https://github.com/webthree549-bot/agent-ros-bridge",
"Documentation, https://agent-ros-bridge.readthedocs.io",
"Repository, https://github.com/webthree549-bot/agent-ros-bridge",
"Issues, https://github.com/webthree549-bot/agent-ros-bridge/issues",
"Changelog, https://github.com/webthree549-bot/agent-ros-bridge/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T15:31:27.835163 | agent_ros_bridge-0.3.5.tar.gz | 105,865 | 65/64/c2c4a1af0cba2077b9ac8c53365d6846cd85a4f825d67adc9f8eb674fb44/agent_ros_bridge-0.3.5.tar.gz | source | sdist | null | false | 6718f21158104f0f84773caa5e8a9d61 | 946bd986627f39e78363a700f877b4adc6cfabe772bfefde6aa4b2e8c37df79a | 6564c2c4a1af0cba2077b9ac8c53365d6846cd85a4f825d67adc9f8eb674fb44 | null | [
"LICENSE"
] | 212 |