metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | rusmppyc | 0.4.0a2 | An async SMPP v5 Python client powered by Rust | # Rusmppyc
[](https://github.com/Rusmpp/Rusmpp?tab=readme-ov-file#license)

[](https://pypi.org/project/rusmppyc/)
[](https://pepy.tech/projects/rusmppyc)
An async [SMPP v5](https://smpp.org/SMPP_v5.pdf) `Python` client powered by `Rust`.
## Example
```python
import logging
import asyncio
from rusmppyc import (
BindTransceiverResp,
Client,
CommandId,
DataCoding,
Event,
Events,
InterfaceVersion,
MessagePayload,
MessageSubmissionRequestTlvValue,
Npi,
SubmitSmResp,
Ton,
)
from rusmppyc.exceptions import RusmppycException
async def handle_events(events: Events, client: Client):
    async for event in events:
        match event:
            case Event.Incoming(cmd):
                logging.debug(f"Received Command: {cmd.id}")
                match cmd.id:
                    case CommandId.DeliverSm():
                        try:
                            await client.deliver_sm_resp(
                                cmd.sequence_number, "the message id"
                            )
                        except RusmppycException as e:
                            logging.error(f"Failed to send DeliverSm response: {e}")
            case Event.Error(err):
                logging.error(f"Error occurred: {err}")
            case _:
                logging.warning(f"Unknown event: {event}")
    logging.debug("Event handling completed")

async def main():
    try:
        client, events = await Client.connect(
            url="smpp://127.0.0.1:2775",
            enquire_link_interval=5000,
            enquire_link_response_timeout=2000,
            response_timeout=2000,
            max_command_length=4096,
        )
        asyncio.create_task(handle_events(events, client))
        bind_response: BindTransceiverResp = await client.bind_transceiver(
            system_id="test",
            password="test",
            system_type="test",
            interface_version=InterfaceVersion.Smpp5_0(),
            addr_ton=Ton.Unknown(),
            addr_npi=Npi.National(),
        )
        logging.info(f"Bind response: {bind_response}")
        logging.info(f"Bind response system_id: {bind_response.system_id}")
        logging.info(
            f"Bind response sc_interface_version: {bind_response.sc_interface_version}"
        )
        submit_sm_response: SubmitSmResp = await client.submit_sm(
            source_addr_ton=Ton.International(),
            source_addr_npi=Npi.National(),
            source_addr="1234567890",
            dest_addr_ton=Ton.International(),
            dest_addr_npi=Npi.National(),
            destination_addr="0987654321",
            data_coding=DataCoding.McSpecific(),
            short_message=b"Hello, World!",
            tlvs=[
                # The message payload will supersede the short_message field
                # and should only be used if short_message is empty
                MessageSubmissionRequestTlvValue.MessagePayload(
                    MessagePayload(b"Big Message" * 10)
                )
            ],
        )
        logging.info(f"SubmitSm response: {submit_sm_response}")
        await asyncio.sleep(5)
        await client.unbind()
        await client.close()
        await client.closed()
        logging.debug("RUSMPP connection closed")
    except RusmppycException as e:
        logging.error(f"An error occurred: {e}")

if __name__ == "__main__":
    logging.basicConfig(
        format="%(asctime)-15s %(levelname)s %(name)s %(filename)s:%(lineno)d %(message)s"
    )
    logging.getLogger().setLevel(logging.DEBUG)
    logging.getLogger("rusmpp").setLevel(logging.INFO)
    logging.getLogger("rusmppc").setLevel(logging.INFO)
    logging.getLogger("rusmppyc").setLevel(logging.DEBUG)
    asyncio.run(main())
```
For more examples, see the [examples directory](https://github.com/Rusmpp/Rusmpp/tree/main/rusmppy/rusmppyc/python/examples).
## Develop
- Install [`maturin`](https://www.maturin.rs/installation.html)
```bash
pip install "maturin[patchelf]"
```
- Create a virtual environment:
```bash
python3 -m venv venv
source venv/bin/activate
```
- Generate the `pyi` stubs:
```bash
cargo run --bin stub-gen
```
- Generate the bindings:
```bash
maturin develop
```
- The bindings are now available in the virtual environment. You can test them by running:
```bash
python3 -c "import rusmppyc; print(rusmppyc.__version__)"
```
| text/markdown; charset=UTF-8; variant=GFM | null | "Jad K. Haddad" <jadkhaddad@gmail.com> | null | "Jad K. Haddad" <jadkhaddad@gmail.com> | MIT OR Apache-2.0 | smpp, smsc, messaging, networking, protocol | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Rusmpp/Rusmpp",
"Repository, https://github.com/Rusmpp/Rusmpp.git"
] | maturin/1.12.3 | 2026-02-21T09:44:59.792173 | rusmppyc-0.4.0a2-cp314-cp314-win32.whl | 2,679,551 | 2f/d5/034525f114aa177d0307724b50de708a4851ae0daa6751a051823ff4d22f/rusmppyc-0.4.0a2-cp314-cp314-win32.whl | cp314 | bdist_wheel | null | false | a6f0162e117f2c1e72e89d2775de80d4 | 7183abac1a184dcf91a40b3f0d5e107d8f0626fb72fee49337ede152488b643c | 2fd5034525f114aa177d0307724b50de708a4851ae0daa6751a051823ff4d22f | null | [] | 2,961 |
2.4 | xppenfbsd | 0.0.1 | FreeBSD XP-Pen unlocker and evdev bridge | # XP-Pen Deco Mini7 v2 FreeBSD Daemon (Unofficial)
A small userspace daemon that allows simple operation of the [XP-Pen Deco Mini7 V2](https://amzn.to/4kPG1C7)
(_note_: affiliate link) on FreeBSD (support for other tablet models will most likely be added later).
* [Rationale / some history](#rationale--some-history)
* [Introduction](#introduction)
* [Requirements](#requirements)
* [Installation](#installation)
* [Usage](#usage)

## Rationale / some history
This repository emerged out of the frustration that it has been hard to get an otherwise excellent
XP-Pen Deco Mini7 V2 up and running easily on FreeBSD. Every time the device was attached it
turned up as a `pointer` and a `keyboard` device as expected (two `HID` device instances), but the
pointer device never started streaming stylus positions. I played around trying to install existing
drivers like `OpenTabletDriver`, tried out `xf86-input-wacom`, `libwacom`, and many other solutions,
but they either failed to compile due to heavy dependencies and unsupported platforms or
plainly did not work due to untraceable errors.
A little research revealed that these tablets require an `activation` sequence to enable
output on the HID nodes, after which they start emitting reports. In addition, the HID messages for
_styli_ are not compatible with the standard USB mouse driver `ums`. This led to the quick
overnight development of `xppen-mini7-v2-fbsd`, based on information captured on Windows using
[Wireshark](https://www.wireshark.org/) and [USBPcap](https://desowin.org/usbpcap/).
## Introduction
`xppen-mini7-v2-fbsd` is a Python daemon that:
- Locates XP-Pen Deco Mini7 V2 tablets on FreeBSD using `PyUSB` by their vendor and product ID.
- Replays the Windows-style initialization sequence so the stylus interface enters reporting mode.
- Creates a virtual `/dev/input/event*` node via FreeBSD's `uinput` driver and forwards stylus
packets with pressure/tilt data.
- Optionally forwards packets to a Unix domain socket instead of `uinput`. This feature exists for
some applications this author had in mind; it will most likely be useless for anyone else.
- Can run once against an explicit `/dev/ugenX.Y` path (for example to be executed via a `devd` hook)
or stay resident, periodically scanning for devices and auto-binding when the tablet appears.
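To give a flavor of what the forwarding step does, decoding a stylus packet boils down to unpacking fixed byte offsets from the HID report. The layout below is purely hypothetical, for illustration only; the real report format is whatever the Windows captures revealed:

```python
import struct

def decode_stylus_report(report: bytes) -> dict:
    """Unpack a HYPOTHETICAL 8-byte stylus report:
    report id, flag byte, x (le16), y (le16), pressure (le16)."""
    rid, flags, x, y, pressure = struct.unpack("<BBHHH", report)
    return {
        "id": rid,
        "tip_down": bool(flags & 0x01),  # bit 0 as a made-up tip-contact flag
        "x": x,
        "y": y,
        "pressure": pressure,
    }

# Example: a fabricated packet round-tripped through the decoder.
pkt = struct.pack("<BBHHH", 7, 1, 100, 200, 512)
print(decode_stylus_report(pkt))
```

The real daemon does the equivalent with the offsets observed in the USB captures before feeding the values into `uinput` events.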
## Requirements
- FreeBSD 13 or higher with `evdev`, `uinput`, and `hid` support available (either compiled into the kernel like
for `GENERIC`, or loaded via `kldload evdev uinput hid` if your kernel omits them). If `/dev/uinput` exists
the daemon will most likely work.
- Access to `/dev/uinput` and to the USB device (typically run as `root`, or grant the necessary
`devfs` permissions; we need access to `usbconfig` and other low-level routines).
- Python 3.11 or higher with `pyusb` installed.
## Installation
```
pip install xppenfbsd
```
## Usage
Foreground daemon that scans for the tablet and logs all received messages and events verbosely:
```
xppen-fbsd-daemon --scan --verbose
```
Daemon performing the same action in background:
```
xppen-fbsd-daemon --scan --daemonize
```
Launching with an explicit device path (useful, for example, when launched from `devd`):
```
xppen-fbsd-daemon --device ugen0.7 --detach --event-mode 660 --event-group wheel
```
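For the `devd` case above, a hook along these lines could launch the daemon on attach. This is a sketch only: it assumes XP-Pen's USB vendor ID `0x28bd` and an install path of `/usr/local/bin`, and that `$cdev` expands to the `ugenX.Y` node on your system; verify the IDs with `usbconfig` first.

```
# /usr/local/etc/devd/xppen.conf (hypothetical example)
notify 100 {
    match "system"    "USB";
    match "subsystem" "DEVICE";
    match "type"      "ATTACH";
    match "vendor"    "0x28bd";  # XP-Pen/UGEE vendor ID -- verify with usbconfig
    action "/usr/local/bin/xppen-fbsd-daemon --device $cdev --detach";
};
```

After dropping the file in place, restart the daemon with `service devd restart`.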
Socket-only mode:
```
xppen-fbsd-daemon --socket /var/run/xppen.sock --no-uinput
```
| text/markdown | null | Thomas Spielauer <pypipackages01@tspi.at> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: BSD :: FreeBSD"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pyusb>=1.2"
] | [] | [] | [] | [
"Homepage, https://www.tspi.at/TODO"
] | twine/6.1.0 CPython/3.11.11 | 2026-02-21T09:44:42.347276 | xppenfbsd-0.0.1.tar.gz | 12,565 | 70/32/af14b574881ca0ef9a6cb83b48686ed1b6674a43698a012c7edb99df3be5/xppenfbsd-0.0.1.tar.gz | source | sdist | null | false | 01c998d27b443e782fac5dc1294e777e | f7c867b22992e175ccce480fe18d81bc5842391842587fdf45d9ed9be409e88a | 7032af14b574881ca0ef9a6cb83b48686ed1b6674a43698a012c7edb99df3be5 | BSD-3-Clause | [
"LICENSE.md"
] | 225 |
2.4 | dialoghelper | 0.2.7 | Helper functions for solveit dialogs | # dialoghelper
A Python library for programmatic dialog manipulation in [Solveit](https://solve.it.com), fast.ai's Dialog Engineering web application. It provides both user-callable functions and AI-accessible tools for creating, reading, updating, and managing dialog messages.
## What is Solveit?
**Solveit** is a "Dialog Engineering" web application that combines interactive code execution with AI assistance. Unlike ChatGPT (pure chat) or Jupyter (pure code), Solveit merges both paradigms into a single workspace.
### Core Concepts
- **Instance**: A persistent Linux container with your files and running kernels. Each user can have multiple instances.
- **Dialog**: An `.ipynb` file containing messages. Like a Jupyter notebook, but with AI integration. Each open dialog runs its own Python kernel.
- **Message**: The fundamental unit—similar to a Jupyter cell, but with three types:
| Type | Purpose | Example |
|------|---------|---------|
| `code` | Python execution | `print("hello")` |
| `note` | Markdown documentation | `# My Notes` |
| `prompt` | AI interaction | "Explain this function" |
### How AI Context Works
When you send a prompt to the AI:
1. **All messages above** the current prompt are collected
2. Messages marked as "hidden" (`skipped=True`) are excluded
3. If context exceeds the model limit, oldest non-pinned messages are dropped
4. The AI sees code, outputs, notes, and previous prompts/responses
Key implications:
- Working at the **bottom** of a dialog = **more context** (all messages above)
- Working **higher up** = less context
- **Pinning** a message (`p` key) keeps it in context even when truncation occurs
### Tools: AI-Callable Functions
Solveit lets the AI call Python functions directly. Users declare tools in messages using `&` followed by backticks:
```
&`my_function` # Expose single tool
&`[func1, func2, func3]` # Expose multiple tools
```
When the AI needs to use a tool, Solveit executes it in the kernel and returns the result.
## Installation
The latest version is always pre-installed in Solveit. To manually install (not recommended):
```bash
pip install dialoghelper
```
## What is dialoghelper?
dialoghelper is a programmatic interface to Solveit dialogs. It enables:
- **Dialog manipulation**: Add, update, delete, and search messages
- **AI tool integration**: Expose functions as tools the AI can call
- **Context generation**: Convert folders, repos, and symbols into AI context
- **Screen capture**: Capture browser screenshots for AI analysis
- **Tmux integration**: Read terminal buffers from tmux sessions
## Modules
| Module | Source Notebook | Description |
|--------|-----------------|-------------|
| `core` | `nbs/00_core.ipynb` | Core dialog manipulation (add/update/delete messages, search, context helpers) |
| `capture` | `nbs/01_capture.ipynb` | Screen capture functionality for AI vision |
| `inspecttools` | `nbs/02_inspecttools.ipynb` | Symbol inspection (`symsrc`, `getval`, `getdir`, etc.) |
| `tmux` | `nbs/03_tmux.ipynb` | Tmux buffer reading tools |
| `stdtools` | — | Re-exports all tools from dialoghelper + fastcore.tools |
## Solveit Tools
**Tools** are functions the AI can call directly during a conversation. A function is usable as a tool if it has:
1. **Type annotations** for ALL parameters
2. **A docstring** describing what it does
```python
# Valid tool
def greet(name: str) -> str:
"Greet someone by name"
return f"Hello, {name}!"
# Not a tool (missing type annotation)
def greet(name):
"Greet someone by name"
return f"Hello, {name}!"
# Not a tool (missing docstring)
def greet(name: str) -> str: return f"Hello, {name}!"
```
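As a rough illustration of these two requirements (a stdlib sketch, not dialoghelper's actual validation logic), a function can be checked with `inspect`:

```python
import inspect

def looks_like_tool(fn) -> bool:
    """Heuristic: a function qualifies as a tool if it has a docstring
    and every parameter carries a type annotation."""
    if not inspect.getdoc(fn):
        return False
    sig = inspect.signature(fn)
    return all(p.annotation is not inspect.Parameter.empty
               for p in sig.parameters.values())

def greet(name: str) -> str:
    "Greet someone by name"
    return f"Hello, {name}!"

def greet_untyped(name):
    "Greet someone by name"
    return f"Hello, {name}!"

print(looks_like_tool(greet))          # True
print(looks_like_tool(greet_untyped))  # False
```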
### Exposing Tools to the AI
In a Solveit dialog, reference tools using `&` followed by backticks:
```
&`greet` # Single tool
&`[add_msg, update_msg, del_msg]` # Multiple tools
```
### Tool Info Functions
These functions add notes to your dialog listing available tools:
| Function | Lists tools from |
|----------|------------------|
| `tool_info()` | `dialoghelper.core` |
| `fc_tool_info()` | `fastcore.tools` (rg, sed, view, create, etc.) |
| `inspect_tool_info()` | `dialoghelper.inspecttools` |
| `tmux_tool_info()` | `dialoghelper.tmux` |
### Tools vs Programmatic Functions
Some functions are designed for AI tool use; others are meant to be called directly from code:
| AI Tools | Programmatic Use |
|----------|------------------|
| `add_msg`, `update_msg`, `del_msg` | |
| `find_msgs`, `read_msg`, `view_dlg` | `call_endp` (raw endpoint access) |
| `symsrc`, `getval`, `getdir` | `resolve` (returns actual Python object) |
## Usage Examples
```python
from dialoghelper import *
# Add a note message
add_msg("Hello from code!", msg_type='note')
# Add a code message
add_msg("print('Hello')", msg_type='code')
# Search for messages
results = find_msgs("pattern", msg_type='code')
# View entire dialog structure
print(view_dlg())
# Generate context from a folder
ctx_folder('.', types='py', max_total=5000)
```
## Development: nbdev Project Structure
dialoghelper is an [nbdev](https://nbdev.fast.ai) project. **Notebooks are the source of truth**—the `.py` files are auto-generated.
### Notebook ↔ Python File Mapping
| Notebook | Generated File |
|----------|----------------|
| `nbs/00_core.ipynb` | `dialoghelper/core.py` |
| `nbs/01_capture.ipynb` | `dialoghelper/capture.py` |
| `nbs/02_inspecttools.ipynb` | `dialoghelper/inspecttools.py` |
| `nbs/03_tmux.ipynb` | `dialoghelper/tmux.py` |
### Workflow
1. Edit notebooks in `nbs/`
2. Run `nbdev_export()` to generate `.py` files
3. Never edit `.py` files directly—they'll be overwritten
## License
Apache 2.0
| text/markdown | null | Jeremy Howard <github@jhoward.fastmail.fm> | null | null | Apache-2.0 | nbdev, jupyter, notebook, python | [
"Natural Language :: English",
"Intended Audience :: Developers",
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastcore>=1.12.15",
"ghapi",
"ipykernel-helper>=0.0.28",
"ast-grep-cli",
"ast-grep-py",
"MonsterUI",
"lisette>=0.0.13",
"pillow",
"toolslm>=0.3.30",
"restrictedpython",
"restrictedpython-async",
"python-fasthtml; extra == \"dev\"",
"tracefunc; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/AnswerDotAI/dialoghelper",
"Documentation, https://AnswerDotAI.github.io/dialoghelper"
] | twine/6.2.0 CPython/3.12.0 | 2026-02-21T09:44:40.625042 | dialoghelper-0.2.7.tar.gz | 36,208 | 02/bf/bec35c99eded41fb69b1f5d2f8abf81640bbe5e18857df4edd4d7d96aa87/dialoghelper-0.2.7.tar.gz | source | sdist | null | false | d798a538da8c50b5b0ae94777d2bd75f | a8835b7df5ce591eefde776668705e34b4e5723549b3a819cac88441aa9f8bb7 | 02bfbec35c99eded41fb69b1f5d2f8abf81640bbe5e18857df4edd4d7d96aa87 | null | [
"LICENSE"
] | 213 |
2.4 | xorfice | 0.1.15 | SOTA Multimodal Inference Engine (S2S, I2I, V2V) for Xoron-Dev. | # 🚀 Xoron-Dev: Unified Multimodal AI Model
<div align="center">





**A state-of-the-art multimodal MoE model that unifies text, image, video, and audio understanding and generation.**
[Architecture](#-architecture-overview) | [Features](#-features) | [Installation](#-installation) | [Usage](#-usage) | [Training](#-training) | [Documentation](./docs/README.md)
</div>
---
## 🏗️ Architecture Overview
Xoron-Dev is built on a modular, mixture-of-experts architecture designed for maximum flexibility and performance.
### 🧠 LLM Backbone (Mixture of Experts)
- **12 Layers, 1024d, 16 Heads** - Optimized for efficient inference and training.
- **Aux-Lossless MoE** - 8 experts with top-2 routing and configurable shared expert isolation.
- **Ring Attention** - Memory-efficient processing for up to **128K context**.
- **Qwen2.5 Tokenizer** - High-density 151K vocabulary for multilingual and code support.
### 👁️ Vision & Video
- **SigLIP-2 Encoder** - 384px native resolution with multi-scale support (128-512px).
- **TiTok 1D Tokenization** - Compressed visual representation (256 tokens) for faster processing.
- **VidTok 3D VAE** - Efficient spatiotemporal video encoding with 4x8x8 compression.
- **3D-RoPE & Temporal MoE** - Sophisticated motion pattern recognition and spatial awareness.
### 🎤 Audio System
- **Raw Waveform Processing** - Direct 16kHz audio input/output (no Mel spectrograms required).
- **Conformer + RMLA** - Advanced speech-to-text with KV compression.
- **BigVGAN Waveform Decoder** - High-fidelity direct waveform generation with Snake activation.
- **Zero-Shot Voice Cloning** - Clone voices from short reference clips using speaker embeddings.
---
## 🌟 Features
### **Multimodal Capabilities**
| Modality | Input | Output | Strategy |
|----------|-------|--------|----------|
| **Text** | 128K Context | Reasoning, Code, Agentic | MoE LLM |
| **Image**| 128-512px | Understanding & SFT | SigLIP + TiTok |
| **Video**| 8-24 Frames | Understanding | VidTok + 3D-RoPE |
| **Audio**| 16kHz Waveform | ASR & TTS | Conformer + BigVGAN |
### **Agentic & Tool Calling**
- **250+ Special Tokens** for structured agent behaviors.
- **Native Tool Use**: Execute shell commands, Python scripts, and Jupyter notebooks.
- **Reasoning**: Advanced Chain-of-Thought (`<|think|>`, `<|plan|>`) for complex tasks.
- **Safety**: Anti-hallucination tokens (`<|uncertain|>`, `<|cite|>`) and confidence scores.
### **Optimization**
- **LoRA Variants**: LoRA+, rsLoRA, and DoRA (r=32, α=64).
- **Lookahead Optimizer**: Enhanced stability and faster convergence.
- **8-bit Optimization**: Save up to 75% optimizer memory with bitsandbytes.
- **Continuous-Scale Training**: Adaptive resolution sampling for optimal VRAM usage.
---
## 🚀 Installation
```bash
# Clone the repository
git clone https://github.com/nigfuapp-web/Xoron-Dev.git
cd Xoron-Dev
# Install dependencies
pip install -r requirements.txt
```
---
## 💻 Usage
### Quick Start (Inference)
```python
from load import load_xoron_model
# Load model and tokenizer
model, tokenizer, device, config = load_xoron_model("Backup-bdg/Xoron-Dev-MultiMoe")
# Generate response
output = model.generate_text("Explain quantum entanglement.", tokenizer)
print(output)
```
### CLI Training
The `build.py` script provides a powerful interface for training and building models.
```bash
# Build a new model from scratch
python build.py --build
# Targeted Fine-tuning
python build.py --hf --text --math # Fine-tune on Math
python build.py --hf --text --agent # Fine-tune on Agentic tasks
python build.py --hf --video # Fine-tune on Video datasets
python build.py --hf --voice # Fine-tune on Audio/Voice
```
### Granular Text Training Flags
| Flag | Description |
|------|-------------|
| `--math` | Focus on mathematical reasoning and steps. |
| `--agent` | Tool use, code execution, and system operations. |
| `--software` | High-quality software engineering and coding. |
| `--cot` | Chain-of-Thought and logical reasoning. |
| `--medical` | Medical knowledge and clinical reasoning. |
| `--hallucination` | Anti-hallucination and truthfulness. |
---
## 🏋️ Training
### Weighted Loss Strategy
The trainer applies specialized weights to ensure high performance on critical tokens:
- **Reasoning (CoT)**: 1.5x
- **Tool Calling**: 1.3x
- **Anti-Hallucination**: 1.2x
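In spirit (a framework-agnostic sketch, not the project's actual trainer code), span weighting amounts to scaling each token's loss by its span type before averaging; the weight table below just mirrors the values above:

```python
# Hypothetical span-weight table mirroring the values listed above.
SPAN_WEIGHTS = {"cot": 1.5, "tool": 1.3, "antihalluc": 1.2, "default": 1.0}

def weighted_token_loss(token_losses, span_tags):
    """Average per-token losses, scaling each token by its span's weight.

    token_losses: per-token loss values (e.g. negative log-likelihoods).
    span_tags:    one tag per token; unknown tags fall back to "default".
    """
    assert len(token_losses) == len(span_tags)
    weights = [SPAN_WEIGHTS.get(t, SPAN_WEIGHTS["default"]) for t in span_tags]
    total = sum(l * w for l, w in zip(token_losses, weights))
    return total / sum(weights)
```

The same idea ports directly to a tensor framework by multiplying the per-token loss vector with a weight mask before reduction.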
### Continuous-Scale Strategy
Xoron-Dev dynamically samples resolutions during training:
- **Image**: 128px to 384px (step=32)
- **Video**: 8 to 24 frames, 128px to 320px
---
## 📦 Export & Quantization
Export your models for efficient deployment:
```bash
# Export to GGUF (for llama.cpp)
python build.py --hf --gguf --gguf-quant q4_k_m
# Export to ONNX
python build.py --hf --onnx --quant-bits 4
```
---
## 🤝 Contributing
Contributions are welcome! If you have ideas for new modalities or optimizations, please open an issue or PR.
---
## 📄 License
This project is licensed under the MIT License.
---
<div align="center">
Built with ❤️ by the Xoron-Dev Team
</div>
| text/markdown | null | Xoron-Dev <contact@xoron.dev> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"torch>=2.0.0",
"triton",
"transformers",
"fastapi",
"uvicorn",
"pydantic",
"safetensors",
"hf_transfer"
] | [] | [] | [] | [
"Homepage, https://github.com/xoron-dev/xorfice"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T09:43:59.804600 | xorfice-0.1.15.tar.gz | 153,655 | 34/03/f9fd8f42a16a65599ee27f14e115afed9be98afb8bb89c70fda212345e6e/xorfice-0.1.15.tar.gz | source | sdist | null | false | a592ed6e23e6e19fe2ec994bcd1b5fae | 117f24c5a4ff7c2e111e6d750c0400ad7257fc0facd5c60f36d5d79cf9ef33c4 | 3403f9fd8f42a16a65599ee27f14e115afed9be98afb8bb89c70fda212345e6e | null | [] | 205 |
2.1 | tfp-nightly | 0.26.0.dev20260221 | Probabilistic modeling and statistical inference in TensorFlow | # TensorFlow Probability
TensorFlow Probability is a library for probabilistic reasoning and statistical
analysis in TensorFlow. As part of the TensorFlow ecosystem, TensorFlow
Probability provides integration of probabilistic methods with deep networks,
gradient-based inference via automatic differentiation, and scalability to
large datasets and models via hardware acceleration (e.g., GPUs) and distributed
computation.
__TFP also works as "Tensor-friendly Probability" in pure JAX!__:
`from tensorflow_probability.substrates import jax as tfp` --
Learn more [here](https://www.tensorflow.org/probability/examples/TensorFlow_Probability_on_JAX).
Our probabilistic machine learning tools are structured as follows.
__Layer 0: TensorFlow.__ Numerical operations. In particular, the LinearOperator
class enables matrix-free implementations that can exploit special structure
(diagonal, low-rank, etc.) for efficient computation. It is built and maintained
by the TensorFlow Probability team and is now part of
[`tf.linalg`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/ops/linalg)
in core TF.
__Layer 1: Statistical Building Blocks__
* Distributions ([`tfp.distributions`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/distributions)):
A large collection of probability
distributions and related statistics with batch and
[broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
semantics. See the
[Distributions Tutorial](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb).
* Bijectors ([`tfp.bijectors`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/bijectors)):
Reversible and composable transformations of random variables. Bijectors
provide a rich class of transformed distributions, from classical examples
like the
[log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution)
to sophisticated deep learning models such as
[masked autoregressive flows](https://arxiv.org/abs/1705.07057).
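As a stdlib-only illustration of the change-of-variables idea behind bijectors (this is not TFP's API; `tfp.bijectors.Exp` combined with a transformed distribution handles this for you), pushing a standard normal through `exp` yields the log-normal density:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of Normal(mu, sigma) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def lognormal_pdf(y, mu=0.0, sigma=1.0):
    """Push Normal through exp: p_Y(y) = p_X(log y) * |d(log y)/dy|,
    i.e. the base density at log y times the inverse-Jacobian factor 1/y."""
    return normal_pdf(math.log(y), mu, sigma) / y
```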
__Layer 2: Model Building__
* Joint Distributions (e.g., [`tfp.distributions.JointDistributionSequential`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/distributions/joint_distribution_sequential.py)):
Joint distributions over one or more possibly-interdependent distributions.
For an introduction to modeling with TFP's `JointDistribution`s, check out
[this colab](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Modeling_with_JointDistribution.ipynb)
* Probabilistic Layers ([`tfp.layers`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/layers)):
Neural network layers with uncertainty over the functions they represent,
extending TensorFlow Layers.
__Layer 3: Probabilistic Inference__
* Markov chain Monte Carlo ([`tfp.mcmc`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/mcmc)):
Algorithms for approximating integrals via sampling. Includes
[Hamiltonian Monte Carlo](https://en.wikipedia.org/wiki/Hamiltonian_Monte_Carlo),
random-walk Metropolis-Hastings, and the ability to build custom transition
kernels.
* Variational Inference ([`tfp.vi`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/vi)):
Algorithms for approximating integrals via optimization.
* Optimizers ([`tfp.optimizer`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/python/optimizer)):
Stochastic optimization methods, extending TensorFlow Optimizers. Includes
[Stochastic Gradient Langevin Dynamics](http://www.icml-2011.org/papers/398_icmlpaper.pdf).
* Monte Carlo ([`tfp.monte_carlo`](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/python/monte_carlo)):
Tools for computing Monte Carlo expectations.
TensorFlow Probability is under active development. Interfaces may change at any
time.
## Examples
See [`tensorflow_probability/examples/`](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/)
for end-to-end examples. It includes tutorial notebooks such as:
* [Linear Mixed Effects Models](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb).
A hierarchical linear model for sharing statistical strength across examples.
* [Eight Schools](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb).
A hierarchical normal model for exchangeable treatment effects.
* [Hierarchical Linear Models](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb).
Hierarchical linear models compared among TensorFlow Probability, R, and Stan.
* [Bayesian Gaussian Mixture Models](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb).
Clustering with a probabilistic generative model.
* [Probabilistic Principal Components Analysis](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_PCA.ipynb).
Dimensionality reduction with latent variables.
* [Gaussian Copulas](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb).
Probability distributions for capturing dependence across random variables.
* [TensorFlow Distributions: A Gentle Introduction](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb).
Introduction to TensorFlow Distributions.
* [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb).
How to distinguish between samples, batches, and events for arbitrarily shaped
probabilistic computations.
* [TensorFlow Probability Case Study: Covariance Estimation](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Probability_Case_Study_Covariance_Estimation.ipynb).
A user's case study in applying TensorFlow Probability to estimate covariances.
It also includes example scripts such as:
* [Variational Autoencoder](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/vae.py).
Representation learning with a latent code and variational inference.
* [Vector-Quantized Autoencoder](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/vq_vae.py).
Discrete representation learning with vector quantization.
* [Disentangled Sequential Variational Autoencoder](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/disentangled_vae.py)
Disentangled representation learning over sequences with variational inference.
* [Bayesian Neural Networks](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/bayesian_neural_network.py).
Neural networks with uncertainty over their weights.
* [Bayesian Logistic Regression](https://github.com/tensorflow/probability/tree/main/tensorflow_probability/examples/logistic_regression.py).
Bayesian inference for binary classification.
## Installation
For additional details on installing TensorFlow, guidance installing
prerequisites, and (optionally) setting up virtual environments, see the
[TensorFlow installation guide](https://www.tensorflow.org/install).
### Stable Builds
To install the latest stable version, run the following:
```shell
# Notes:
# - The `--upgrade` flag ensures you'll get the latest version.
# - The `--user` flag ensures the packages are installed to your user directory
# rather than the system directory.
# - TensorFlow 2 packages require a pip >= 19.0
python -m pip install --upgrade --user pip
python -m pip install --upgrade --user tensorflow tensorflow_probability
```
For CPU-only usage (and a smaller install), install with `tensorflow-cpu`.
To use a pre-2.0 version of TensorFlow, run:
```shell
python -m pip install --upgrade --user "tensorflow<2" "tensorflow_probability<0.9"
```
Note: Since [TensorFlow](https://www.tensorflow.org/install) is *not* included
as a dependency of the TensorFlow Probability package (in `setup.py`), you must
explicitly install the TensorFlow package (`tensorflow` or `tensorflow-cpu`).
This allows us to maintain one package instead of separate packages for CPU and
GPU-enabled TensorFlow. See the
[TFP release notes](https://github.com/tensorflow/probability/releases) for more
details about dependencies between TensorFlow and TensorFlow Probability.
### Nightly Builds
There are also nightly builds of TensorFlow Probability under the pip package
`tfp-nightly`, which depends on one of `tf-nightly` or `tf-nightly-cpu`.
Nightly builds include newer features, but may be less stable than the
versioned releases. Both stable and nightly docs are available
[here](https://www.tensorflow.org/probability/api_docs/python/tfp?version=nightly).
```shell
python -m pip install --upgrade --user tf-nightly tfp-nightly
```
### Installing from Source
You can also install from source. This requires the [Bazel](
https://bazel.build/) build system. It is highly recommended that you install
the nightly build of TensorFlow (`tf-nightly`) before trying to build
TensorFlow Probability from source. The most recent version of Bazel that TFP
currently supports is 6.4.0; support for 7.0.0+ is a work in progress.
```shell
# sudo apt-get install bazel git python-pip # Ubuntu; others, see above links.
python -m pip install --upgrade --user tf-nightly
git clone https://github.com/tensorflow/probability.git
cd probability
bazel build --copt=-O3 --copt=-march=native :pip_pkg
PKGDIR=$(mktemp -d)
./bazel-bin/pip_pkg $PKGDIR
python -m pip install --upgrade --user $PKGDIR/*.whl
```
## Community
As part of TensorFlow, we're committed to fostering an open and welcoming
environment.
* [Stack Overflow](https://stackoverflow.com/questions/tagged/tensorflow): Ask
or answer technical questions.
* [GitHub](https://github.com/tensorflow/probability/issues): Report bugs or
make feature requests.
* [TensorFlow Blog](https://blog.tensorflow.org/): Stay up to date on content
from the TensorFlow team and best articles from the community.
* [YouTube Channel](http://youtube.com/tensorflow/): Follow TensorFlow shows.
* [tfprobability@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/tfprobability):
Open mailing list for discussion and questions.
See the [TensorFlow Community](https://www.tensorflow.org/community/) page for
more details. Check out our latest publicity here:
+ [Coffee with a Googler: Probabilistic Machine Learning in TensorFlow](
https://www.youtube.com/watch?v=BjUkL8DFH5Q)
+ [Introducing TensorFlow Probability](
https://medium.com/tensorflow/introducing-tensorflow-probability-dca4c304e245)
## Contributing
We're eager to collaborate with you! See [`CONTRIBUTING.md`](CONTRIBUTING.md)
for a guide on how to contribute. This project adheres to TensorFlow's
[code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to
uphold this code.
## References
If you use TensorFlow Probability in a paper, please cite:
+ _TensorFlow Distributions._ Joshua V. Dillon, Ian Langmore, Dustin Tran,
Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt
Hoffman, Rif A. Saurous.
[arXiv preprint arXiv:1711.10604, 2017](https://arxiv.org/abs/1711.10604).
(We're aware there's a lot more to TensorFlow Probability than Distributions, but the Distributions paper lays out our vision and is a fine thing to cite for now.)
| text/markdown | Google LLC | no-reply@google.com | null | null | Apache 2.0 | tensorflow probability statistics bayesian machine learning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | http://github.com/tensorflow/probability | null | >=3.9 | [] | [] | [] | [
"absl-py",
"six>=1.10.0",
"numpy>=1.13.3",
"decorator",
"cloudpickle>=1.3",
"gast>=0.3.2",
"dm-tree",
"jax; extra == \"jax\"",
"jaxlib; extra == \"jax\"",
"tf-nightly; extra == \"tf\"",
"tf-keras-nightly; extra == \"tf\"",
"tfds-nightly; extra == \"tfds\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.8.10 | 2026-02-21T09:43:32.577428 | tfp_nightly-0.26.0.dev20260221-py2.py3-none-any.whl | 6,975,585 | 58/e4/dba89a04022e04529fc642fe857df7937bb2a4b4311a342e25bf2fabc1f4/tfp_nightly-0.26.0.dev20260221-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | 31bf5149585d0c1d7f371f9cfa906ab7 | 2ba2497dfc227bc6a421ef3a44efc6ea6718fd268f9d26f06c37412f8fd68e18 | 58e4dba89a04022e04529fc642fe857df7937bb2a4b4311a342e25bf2fabc1f4 | null | [] | 420 |
2.4 | deadline-cloud-for-blender | 0.6.1 | AWS Deadline Cloud for Blender | # AWS Deadline Cloud for Blender
### [User guide](https://aws-deadline.github.io/) | [Service documentation](https://docs.aws.amazon.com/deadline-cloud/) | [Deadline Cloud on GitHub](https://github.com/aws-deadline/)
[](https://pypi.python.org/pypi/deadline-cloud-for-blender)
[](https://pypi.python.org/pypi/deadline-cloud-for-blender)
[](https://github.com/aws-deadline/deadline-cloud-for-blender/blob/mainline/LICENSE)
AWS Deadline Cloud for Blender is a python package that allows users to create [AWS Deadline Cloud][deadline-cloud] jobs from within Blender. Using the [Open Job Description (OpenJD) Adaptor Runtime][openjd-adaptor-runtime] this package also provides a command line application that adapts Blender's command line interface to support the [OpenJD specification][openjd].
[deadline-cloud]: https://docs.aws.amazon.com/deadline-cloud/latest/userguide/what-is-deadline-cloud.html
[deadline-cloud-client]: https://github.com/aws-deadline/deadline-cloud
[openjd]: https://github.com/OpenJobDescription/openjd-specifications/wiki
[openjd-adaptor-runtime]: https://github.com/OpenJobDescription/openjd-adaptor-runtime-for-python
[openjd-adaptor-runtime-lifecycle]: https://github.com/OpenJobDescription/openjd-adaptor-runtime-for-python/blob/release/README.md#adaptor-lifecycle
## Compatibility
This library requires:
1. Blender 3.6 to 4.5;
1. Python 3.9 to 3.12; and
1. Linux, Windows, or macOS.
   * The adaptor only supports Linux and macOS.
## Submitter
This package provides a Blender plugin that creates jobs for AWS Deadline Cloud using the [AWS Deadline Cloud client library][deadline-cloud-client]. Based on the loaded scene it determines the files required, allows the user to specify render options, and builds an [OpenJD template][openjd] that defines the workflow.
### Getting Started
If you have installed the submitter using the Deadline Cloud submitter installer you can follow the guide to [Setup Deadline Cloud submitters](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/submitter.html#load-dca-plugin) for the manual steps needed after installation.
If you are setting up the submitter for a developer workflow or manual installation you can follow the instructions in the [DEVELOPMENT](https://github.com/aws-deadline/deadline-cloud-for-blender/blob/mainline/DEVELOPMENT.md#manual-installation) file.
#### Experimental: Install and auto-update Blender add-on
The Blender submitter can be installed and updated within Blender. Requires Blender 4.2 or newer. This add-on and install method are still experimental and may be removed in the future. Use at your own risk.
1. Open Blender. Click the **Edit** menu then click **Preferences...**. In the **Preferences** window, click on **Get Extensions** on the left side bar. In the top right of the **Preferences** window, click **Repositories**, click the **+** icon, then click **Add Remote Repository**.
2. Set the **URL** to `https://github.com/aws-deadline/deadline-cloud-for-blender/releases/latest/download/index.json` and check the box for **Check for Updates on Startup**. Click **Create**.
<img alt="Screenshot of the Blender preferences window with an open pop-up for adding an extension repository" src="./docs/install-01-adding-repo.png" width="300" />
3. Now, under the **Available** list, there should be an entry for **AWS Deadline Cloud**. Click its **Install** button. A progress bar will track the download then disappear when the installation is complete.
<img alt="Screenshot of the Blender preferences window with the AWS Deadline Cloud add-on available for installation" src="./docs/install-02-repo-added.png" width="300" />
4. The add-on is now installed! You can now use the new **Submit to AWS Deadline Cloud** option in the **Render** menu. If an update later becomes available, an **Update** button will appear next to the **AWS Deadline Cloud** entry in the **Get Extensions** section.
## Adaptor
The Blender Adaptor implements the [OpenJD][openjd-adaptor-runtime] interface that allows render workloads to launch Blender and feed it commands. This gives the following benefits:
* a standardized render application interface, and
* sticky rendering, where the application stays open between tasks.
Jobs created by the submitter use this adaptor by default and require that both the installed adaptor
and the Blender executable be available on the PATH of the user running the jobs.
Alternatively, set the `BLENDER_EXECUTABLE` environment variable to point to the Blender executable.
### Getting Started
The adaptor can be installed by the standard python packaging mechanisms:
```sh
$ pip install deadline-cloud-for-blender
```
After installation it can then be used as a command line tool:
```sh
$ blender-openjd --help
```
For more information on the commands the OpenJD adaptor runtime provides, see [here][openjd-adaptor-runtime-lifecycle].
## Versioning
This package's version follows [Semantic Versioning 2.0](https://semver.org/), but is still considered to be in its
initial development, thus backwards incompatible versions are denoted by minor version bumps. To help illustrate how
versions will increment during this initial development stage, they are described below:
1. The MAJOR version is currently 0, indicating initial development.
2. The MINOR version is currently incremented when backwards incompatible changes are introduced to the public API.
3. The PATCH version is currently incremented when bug fixes or backwards compatible changes are introduced to the public API.
## Security
See [CONTRIBUTING](https://github.com/aws-deadline/deadline-cloud-for-blender/blob/release/CONTRIBUTING.md#security-issue-notifications) for more information.
## Telemetry
See [telemetry](https://github.com/aws-deadline/deadline-cloud-for-blender/blob/release/docs/telemetry.md) for more information.
## License
This project is licensed under the Apache-2.0 License.
| text/markdown | Amazon Web Services | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"deadline<0.55,>=0.54",
"openjd-adaptor-runtime<0.10,>=0.7"
] | [] | [] | [] | [
"Homepage, https://github.com/aws-deadline/deadline-cloud-for-blender",
"Source, https://github.com/aws-deadline/deadline-cloud-for-blender"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:42:58.912349 | deadline_cloud_for_blender-0.6.1.tar.gz | 38,931 | 8d/fc/8f2fa667445ec9d0c3645ef36326f1334bd4b293891ccb377118d21ec51b/deadline_cloud_for_blender-0.6.1.tar.gz | source | sdist | null | false | 1401b6c68f39782de6edf33f18cb252b | c5bf25aa001a56fd392954f0ba5bf8d36816683f6bf367fb7927e89409f191d5 | 8dfc8f2fa667445ec9d0c3645ef36326f1334bd4b293891ccb377118d21ec51b | Apache-2.0 | [
"LICENSE",
"NOTICE"
] | 202 |
2.4 | hololinked | 0.3.11 | A ZMQ-based protocol-agnostic object oriented RPC toolkit primarily focussed for instrumentation control, data acquisition or IoT | # hololinked - Pythonic Object-Oriented Supervisory Control & Data Acquisition / Internet of Things
## Description
`hololinked` is a beginner-friendly, extensible pythonic tool suited for instrumentation control and data acquisition over network (IoT & SCADA).
If, as a novice, you need to control and capture data from your hardware, say in your electronics or science lab, and want to show the data in a dashboard, provide a PyQt GUI or run automated scripts, `hololinked` can help. Even for isolated desktop applications or a small setup without networking, one can still separate the concerns of the tools that interact with the hardware from the hardware itself.
If you are a web developer or an industry professional looking for a web standards compatible (high-speed) IoT runtime, `hololinked` can be a decent choice. By conforming to [W3C Web of Things](https://www.w3.org/WoT/), one can expect a consistent API and flexible bidirectional message flow to interact with your devices, irrespective of the underlying protocol. Currently HTTP, MQTT & ZMQ are supported. See [Use Cases Table](https://docs.hololinked.dev/introduction/use-cases).
This implementation is based on RPC, built ground-up in python keeping both the latest web technologies and python principles in mind.
[](https://github.com/hololinked-dev/hololinked/actions/workflows/ci-pipeline.yml) [](https://github.com/hololinked-dev/docs) [](https://docs.hololinked.dev/introduction/security-scanning)  [](https://pypi.org/project/hololinked/) [](https://anaconda.org/conda-forge/hololinked) [](https://codecov.io/github/hololinked-dev/hololinked) [](https://anaconda.org/conda-forge/hololinked) [](https://pypistats.org/packages/hololinked) [](https://doi.org/10.5281/zenodo.12802841) [](https://discord.com/invite/kEz87zqQXh) [](mailto:info@hololinked.dev) [](https://forms.gle/FB4XwkUDt1wV4GGPA)
## To Install
From pip - `pip install hololinked` <br>
From conda - <br>
`pip install aiomqtt` (needs to be installed separately) <br>
`conda install -c conda-forge hololinked` <br>
Or, clone the repository (main branch for the latest codebase, which may also contain bugs) and install with `pip install .` / `pip install -e .`. The [uv environment `uv.lock`](#setup-development-environment) can also help to set up all dependencies. Currently the dependencies are hard-pinned to promote stability, so consider using a virtual environment.
## Usage/Quickstart
Each device or thing can be controlled systematically when their design in software is segregated into properties, actions and events. In object oriented terms:
- the hardware is represented by a class
- properties are validated get-set attributes of the class which may be used to model settings, hold captured/computed data or generic network accessible quantities
- actions are methods which issue commands like connect/disconnect, execute a control routine, start/stop measurement, or run arbitrary python logic
- events can asynchronously communicate/push arbitrary data to a client, like alarm messages, streaming measured quantities etc.
For example, consider an optical spectrometer, the following code is possible:
### Import Statements
```python
from hololinked.core import Thing, Property, action, Event # interactions with hardware
from hololinked.core.properties import String, Integer, Number, List # some property types
from seabreeze.spectrometers import Spectrometer # a device driver
```
### Definition of one's own Hardware Controlling Class
subclass the `Thing` class to make a "network accessible Thing":
```python
class OceanOpticsSpectrometer(Thing):
    """
    OceanOptics spectrometers using seabreeze library. Device is identified by serial number.
    """
```
### Instantiating Properties
Say, we wish to make device serial number, integration time and the captured intensity as properties. There are certain predefined properties available like `String`, `Number`, `Boolean` etc. or one may define one's own using [pydantic or JSON schema](https://docs.hololinked.dev/howto/articles/properties/#schema-constrained-property). To create properties:
```python
class OceanOpticsSpectrometer(Thing):
    """class doc"""

    serial_number = String(default=None, allow_None=True,
                        doc="serial number of the spectrometer to connect/or connected")
    integration_time = Number(default=1000, bounds=(0.001, None), crop_to_bounds=True,
                        doc="integration time of measurement in milliseconds")
    intensity = List(default=None, allow_None=True, doc="captured intensity", readonly=True,
                        fget=lambda self: self._intensity)

    def __init__(self, id, serial_number, **kwargs):
        super().__init__(id=id, serial_number=serial_number, **kwargs)
```
In non-expert terms, properties look like class attributes however their data containers are instantiated at object instance level by default. This is possible due to [python descriptor protocol](https://realpython.com/python-descriptors/). For example, the `integration_time` property defined above as `Number`, whenever set/written, will be validated as a float or int, cropped to bounds and assigned as an attribute to each **instance** of the `OceanOpticsSpectrometer` class with an internally generated name. It is not necessary to know this internally generated name as the property value can be accessed again in any python logic using the dot operator, say, `print(self.integration_time)`.
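The descriptor mechanism described above can be illustrated with a plain-Python sketch. The `BoundedNumber` class below is a hypothetical stand-in, not hololinked's actual `Number` implementation; it only shows how validation and bound-cropping can hook into attribute access:

```python
# Sketch only: a hypothetical BoundedNumber descriptor illustrating how a
# property type like `Number` can validate and crop values per instance.
class BoundedNumber:
    def __init__(self, default=0.0, bounds=(None, None)):
        self.default = default
        self.bounds = bounds

    def __set_name__(self, owner, name):
        # the internally generated, per-instance attribute name
        self._name = f"_{name}_value"

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        return getattr(instance, self._name, self.default)

    def __set__(self, instance, value):
        if not isinstance(value, (int, float)):
            raise TypeError("expected a number")
        low, high = self.bounds
        if low is not None:
            value = max(value, low)   # crop to the lower bound
        if high is not None:
            value = min(value, high)  # crop to the upper bound
        setattr(instance, self._name, value)


class SpectrometerSketch:
    integration_time = BoundedNumber(default=1000, bounds=(0.001, None))


s = SpectrometerSketch()
s.integration_time = -5
print(s.integration_time)  # cropped to the lower bound: 0.001
```

Accessing `self.integration_time` inside the class goes through the same `__get__`/`__set__` machinery, which is why the dot operator works transparently.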
One may overload the get-set (or read-write) of properties to customize their behavior:
```python
class OceanOpticsSpectrometer(Thing):

    integration_time = Number(default=1000, bounds=(0.001, None), crop_to_bounds=True,
                        doc="integration time of measurement in milliseconds")

    @integration_time.setter
    def set_integration_time(self, value: float):
        self.device.write_integration_time_micros(int(value * 1000))
        # seabreeze does not provide a write_integration_time_micros method,
        # this is only an example

    @integration_time.getter
    def get_integration_time(self) -> float:
        try:
            return self.device.read_integration_time_micros() / 1000
            # seabreeze does not provide a read_integration_time_micros method,
            # this is only an example
        except AttributeError:
            return self.properties["integration_time"].default
```
In this case, instead of generating a data container with an internal name, the setter method is called when `integration_time` property is set/written. One might add the hardware device driver logic here (say, supplied by the manufacturer) or a protocol that applies the property directly onto the device. One would also want the getter to read from the device directly as well.
Those familiar with Web of Things (WoT) terminology may note that these properties generate the property affordance. An example for `integration_time` is as follows:
```JSON
"integration_time": {
"title": "integration_time",
"description": "integration time of measurement in milliseconds",
"type": "number",
"forms": [{
"href": "https://example.com/spectrometer/integration-time",
"op": "readproperty",
"htv:methodName": "GET",
"contentType": "application/json"
},{
"href": "https://example.com/spectrometer/integration-time",
"op": "writeproperty",
"htv:methodName": "PUT",
"contentType": "application/json"
}
],
"minimum": 0.001
},
```
If you are **not familiar** with Web of Things or the term "property affordance", consider the above JSON as a description of
what the property represents and how to interact with it from somewhere else (in this case, over HTTP). Such a JSON is both human-readable, yet consumable by any application that may use the property - say, a client provider to create a client object to interact with the property or a GUI application to autogenerate a suitable input field for this property.
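As a sketch of that consumability, the affordance JSON can be parsed with nothing more than a JSON library to discover how to interact with the property (the URL below is the placeholder from the example above, not a live endpoint):

```python
import json

# the (abbreviated) property affordance from the example above
affordance = json.loads("""
{
    "title": "integration_time",
    "type": "number",
    "minimum": 0.001,
    "forms": [
        {"href": "https://example.com/spectrometer/integration-time",
         "op": "readproperty", "htv:methodName": "GET",
         "contentType": "application/json"},
        {"href": "https://example.com/spectrometer/integration-time",
         "op": "writeproperty", "htv:methodName": "PUT",
         "contentType": "application/json"}
    ]
}
""")

# a client picks the form matching the operation it wants to perform
read_form = next(f for f in affordance["forms"] if f["op"] == "readproperty")
print(read_form["htv:methodName"], read_form["href"])
```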
[](https://docs.hololinked.dev/beginners-guide/articles/properties/) [](https://control-panel.hololinked.dev/#https://examples.hololinked.dev/simulations/oscilloscope/resources/wot-td)
### Specify Methods as Actions
decorate a python method with the `action` decorator to expose it as a network accessible method:
```python
class OceanOpticsSpectrometer(Thing):

    @action(input_schema={"type": "object", "properties": {"serial_number": {"type": "string"}}})
    def connect(self, serial_number=None):
        """connect to spectrometer with given serial number"""
        if serial_number is not None:
            self.serial_number = serial_number
        self.device = Spectrometer.from_serial_number(self.serial_number)
        self._wavelengths = self.device.wavelengths().tolist()

    @action()
    def disconnect(self):
        """disconnect from the spectrometer"""
        self.device.close()
```
Methods that are neither decorated with action decorator nor acting as getters-setters of properties remain as plain python methods and are **not** accessible on the network.
In WoT Terminology, again, such a method becomes specified as an action affordance (or a description of what the action represents and how to interact with it):
```JSON
"connect": {
"title": "connect",
"description": "connect to spectrometer with given serial number",
"forms": [
{
"href": "https://example.com/spectrometer/connect",
"op": "invokeaction",
"htv:methodName": "POST",
"contentType": "application/json"
}
],
"input": {
"type": "object",
"properties": {
"serial_number": {
"type": "string"
}
},
"additionalProperties": false
}
},
```
> input and output schemas (the "input" field above, which describes the argument `serial_number`) are optional and discussed in the docs
[](https://docs.hololinked.dev/beginners-guide/articles/actions/) [](https://control-panel.hololinked.dev/#https://examples.hololinked.dev/simulations/oscilloscope/resources/wot-td)
### Defining and Pushing Events
create a named event using `Event` object that can push any arbitrary serializable data:
```python
import datetime
import threading
import time

class OceanOpticsSpectrometer(Thing):

    intensity_measurement_event = Event(name='intensity-measurement-event',
            doc="""event generated on measurement of intensity,
            max 30 per second even if measurement is faster.""",
            schema=intensity_event_schema)
    # schema is optional and will be discussed in documentation,
    # assume the intensity_event_schema variable is valid

    def capture(self):  # not an action, but a plain python method
        self._run = True
        last_time = time.time()
        while self._run:
            self._intensity = self.device.intensities(
                correct_dark_counts=False,
                correct_nonlinearity=False
            )
            curtime = datetime.datetime.now()
            measurement_timestamp = curtime.strftime('%d.%m.%Y %H:%M:%S.') + '{:03d}'.format(
                                                    int(curtime.microsecond / 1000))
            if time.time() - last_time > 0.033:  # restrict speed to avoid overloading
                self.intensity_measurement_event.push({
                    "timestamp": measurement_timestamp,
                    "value": self._intensity.tolist()
                })
                last_time = time.time()

    @action()
    def start_acquisition(self):
        if self._acquisition_thread is not None and self._acquisition_thread.is_alive():
            return
        self._acquisition_thread = threading.Thread(target=self.capture)
        self._acquisition_thread.start()

    @action()
    def stop_acquisition(self):
        self._run = False
```
Events can stream live data without polling or push data to a client whose generation in time is uncontrollable.
In WoT Terminology, such an event becomes specified as an event affordance (or a description of
what the event represents and how to subscribe to it) with subprotocol SSE:
```JSON
"intensity_measurement_event": {
"title": "intensity-measurement-event",
"description": "event generated on measurement of intensity, max 30 per second even if measurement is faster.",
"forms": [
{
"href": "https://example.com/spectrometer/intensity/measurement-event",
"subprotocol": "sse",
"op": "subscribeevent",
"htv:methodName": "GET",
"contentType": "text/plain"
}
],
"data": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"type": "number"
}
},
"timestamp": {
"type": "string"
}
}
}
}
```
> the data schema (the "data" field above, which describes the event payload) is optional and discussed in the documentation
Events follow a pub-sub model with '1 publisher to N subscribers' per `Event` object, through any supported protocol like HTTP server sent events (brokerless) or MQTT (brokered).
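The '1 publisher to N subscribers' model can be sketched in-process with plain Python. This is only an illustration of the dispatch pattern, not hololinked's implementation, which runs over HTTP SSE, MQTT or ZMQ:

```python
# Sketch only: one publisher, N subscribers, dispatched in-process
class SimpleEvent:
    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def push(self, payload):
        # every subscriber receives every pushed payload
        for callback in self._subscribers:
            callback(payload)


event = SimpleEvent("intensity-measurement-event")
received = []
event.subscribe(received.append)                       # subscriber 1
event.subscribe(lambda p: print("got:", p["value"]))   # subscriber 2
event.push({"timestamp": "01.01.2026 00:00:00.000", "value": [1.0, 2.0]})
```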
[](https://docs.hololinked.dev/beginners-guide/articles/events/) [](https://control-panel.hololinked.dev/#https://examples.hololinked.dev/simulations/oscilloscope/resources/wot-td)
### Start with a Protocol Server
One can start the Thing object with one or more protocols simultaneously. Currently HTTP, MQTT & ZMQ are supported. With an HTTP server:
```python
import ssl, os, logging

if __name__ == '__main__':
    ssl_context = ssl.SSLContext(protocol=ssl.PROTOCOL_TLS_SERVER)
    ssl_context.load_cert_chain(f'assets{os.sep}security{os.sep}certificate.pem',
                                keyfile=f'assets{os.sep}security{os.sep}key.pem')
    ssl_context.minimum_version = ssl.TLSVersion.TLSv1_3

    OceanOpticsSpectrometer(
        id='spectrometer',
        serial_number='S14155',
        log_level=logging.DEBUG
    ).run_with_http_server(
        port=9000, ssl_context=ssl_context
    )
```
The base URL is constructed as `http(s)://<hostname>:<port>/<thing_id>`
With ZMQ:
```python
if __name__ == '__main__':
    OceanOpticsSpectrometer(
        id='spectrometer',
        serial_number='S14155',
    ).run(
        access_points=['IPC', 'tcp://*:9999']
    )
    # both interprocess communication & TCP
```
Multiple:
```python
from hololinked.server import HTTPServer, MQTTPublisher, ZMQServer

if __name__ == '__main__':
    http_server = HTTPServer(port=9000, ssl_context=http_ssl_context)
    mqtt_publisher = MQTTPublisher(hostname='mqtt.example.com', ssl_context=mqtt_ssl_context)

    OceanOpticsSpectrometer(
        id='spectrometer',
        serial_number='S14155',
    ).run(
        servers=[http_server, mqtt_publisher]
    )
    # HTTP & MQTT
```
There are other improved ways to configure protocol servers; please refer to the documentation for details.
[](https://docs.hololinked.dev/beginners-guide/articles/protocols/general/)
[](#resources)
## Client Side Applications
To compose client objects, the JSON descriptions of the properties, actions and events are used, which are summarized into a [Thing Description](https://www.w3.org/TR/wot-thing-description11). These descriptions are autogenerated, so at least in the beginner stages, you don't need to know how they work. The following code would be possible:
### Python Clients
Import the `ClientFactory` and create an instance of the client for the desired protocol:
```python
from hololinked.client import ClientFactory
# for HTTP
thing = ClientFactory.http(url="http://localhost:8000/spectrometer/resources/wot-td")
# For HTTP, one needs to append `/resources/wot-td` to the base URL to construct the
# full URL as `http(s)://<hostname>:<port>/<thing_id>/resources/wot-td`. At this endpoint,
# the Thing Description will be autogenerated and loaded to compose a client.
# zmq IPC
thing = ClientFactory.zmq(thing_id='spectrometer', access_point='IPC')
# zmq TCP
thing = ClientFactory.zmq(thing_id='spectrometer', access_point='tcp://localhost:9999')
# For ZMQ, Thing Description loading is automatically mediated simply by specifying how to access the Thing
```
To issue operations:
<details open>
<summary>Read Property</summary>
```python
thing.read_property("integration_time")
# or use dot operator
thing.integration_time
```
within an async function:
```python
async def func():
    await thing.async_read_property("integration_time")
    # dot operator not supported
```
</details>
<details open>
<summary>Write Property</summary>
```python
thing.write_property("integration_time", 2000)
# or use dot operator
thing.integration_time = 2000
```
within an async function:
```python
async def func():
    await thing.async_write_property("integration_time", 2000)
    # dot operator not supported
```
</details>
<details open>
<summary>Invoke Action</summary>
```python
thing.invoke_action("connect", serial_number="S14155")
# or use dot operator
thing.connect(serial_number="S14155")
```
within an async function:
```python
async def func():
    await thing.async_invoke_action("connect", serial_number="S14155")
    # dot operator not supported
```
</details>
<details open>
<summary>Subscribe to Event</summary>
```python
thing.subscribe_event("intensity_measurement_event", callbacks=lambda value: print(value))
```
There is no async subscribe, as events by nature arrive at arbitrary times, only when pushed by the server. However, events can be listened to asynchronously and callbacks can be invoked asynchronously; please refer to the documentation. To unsubscribe:
```python
thing.unsubscribe_event("intensity_measurement_event")
```
</details>
<details open>
<summary>Observe Property</summary>
```python
thing.observe_property("integration_time", callbacks=lambda value: print(value))
```
Only observable properties (property where `observable` was set to `True`) can be observed. To unobserve:
```python
thing.unobserve_property("integration_time")
```
</details>
Operations which rely on request-reply pattern (properties and actions) also support one-way and no-block calls:
- `oneway` - issue the operation and don't collect the reply
- `noblock` - issue the operation, obtain a message ID and collect the reply whenever needed
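The `noblock` pattern can be sketched generically with stdlib futures. `NoblockClient` and its methods below are hypothetical and do not reflect hololinked's actual client API:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of the noblock request-reply pattern: issue an
# operation, get a message ID back immediately, collect the reply later.
class NoblockClient:
    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=4)
        self._pending = {}

    def invoke_noblock(self, operation, *args):
        message_id = str(uuid.uuid4())  # handed back without waiting
        self._pending[message_id] = self._pool.submit(operation, *args)
        return message_id

    def collect_reply(self, message_id, timeout=None):
        # blocks only now, when the caller actually wants the reply
        return self._pending.pop(message_id).result(timeout=timeout)


client = NoblockClient()
msg_id = client.invoke_noblock(lambda x: x * 2, 21)
# ... do other work here ...
print(client.collect_reply(msg_id))  # 42
```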
[](https://docs.hololinked.dev/beginners-guide/articles/object-proxy/)
### Javascript Clients
Similarly, one could consume the Thing Description in a Node.js script using Eclipse [ThingWeb node-wot](https://github.com/eclipse-thingweb/node-wot):
```js
const { Servient } = require("@node-wot/core");
const HttpClientFactory = require("@node-wot/binding-http").HttpClientFactory;

const servient = new Servient();
servient.addClientFactory(new HttpClientFactory());
servient.start().then((WoT) => {
    fetch("http://localhost:8000/spectrometer/resources/wot-td")
        .then((res) => res.json())
        .then((td) => WoT.consume(td))
        .then((thing) => {
            thing.readProperty("integration_time").then(async (interactionOutput) => {
                console.log("Integration Time: ", await interactionOutput.value());
            });
        });
});
```
If you're using HTTPS, just make sure the server certificate is valid or trusted by the client.
```js
const HttpsClientFactory = require("@node-wot/binding-http").HttpsClientFactory;
servient.addClientFactory(new HttpsClientFactory({ allowSelfSigned: true }));
```
(example [here](https://gitlab.com/hololinked/examples/clients/node-clients/phymotion-controllers-app/-/blob/main/src/App.tsx?ref_type=heads#L77))
To issue operations:
<details open>
<summary>Read Property</summary>
```js
thing.readProperty("integration_time").then(async (interactionOutput) => {
    console.log("Integration Time:", await interactionOutput.value());
});
```
</details>
<details open>
<summary>Write Property</summary>
```js
thing.writeProperty("integration_time", 2000).then(() => {
    console.log("Integration Time updated");
});
```
</details>
<details open>
<summary>Invoke Action</summary>
```js
thing.invokeAction("connect", { serial_number: "S14155" }).then(() => {
    console.log("Device connected");
});
```
</details>
<details open>
<summary>Subscribe to Event</summary>
```js
thing.subscribeEvent("intensity_measurement_event", async (interactionOutput) => {
    console.log("Received event:", await interactionOutput.value());
});
```
</details>
<details open>
<summary>Observe Property</summary>
```js
thing.observeProperty("integration_time", async (interactionOutput) => {
    console.log("Observed integration_time:", await interactionOutput.value());
});
```
</details>
<details>
<summary>Links to React Examples</summary>
In React, the Thing Description may be fetched inside a `useEffect` hook and the client passed via a `useContext` hook (or a global state manager). The individual operations can be performed in their own callbacks attached to DOM elements:
- [fetch TD](https://gitlab.com/hololinked/examples/clients/node-clients/phymotion-controllers-app/-/blob/main/src/App.tsx?ref_type=heads#L96)
- [issue operations](https://gitlab.com/hololinked/examples/clients/node-clients/phymotion-controllers-app/-/blob/main/src/components/movements.tsx?ref_type=heads#L54)
</details>
<br>
[](https://thingweb.io/docs/node-wot/API)
## Resources
- [examples repository](https://github.com/hololinked-dev/examples) - detailed examples for both clients and servers
- [infrastructure components](https://github.com/hololinked-dev/daq-system-infrastructure) - docker compose files to setup postgres or mongo databases with admin interfaces, Identity and Access Management system among other components.
- [helper GUI](https://github.com/hololinked-dev/thing-control-panel) - view & interact with your object's actions, properties and events.
- [live demo](https://control-panel.hololinked.dev/#https://examples.hololinked.dev/simulations/oscilloscope/resources/wot-td) - an example of an oscilloscope available for live test
> You may use a script deployment/automation tool to remotely stop and start servers, thereby controlling your hardware scripts.
## Contributing
See [organization info](https://github.com/hololinked-dev) for details regarding contributing to this package. There are:
- [good first issues](https://github.com/hololinked-dev/hololinked/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
- [setup development environment](https://docs.hololinked.dev/introduction/contributing#setup-development-environment)
- [discord group](https://discord.com/invite/kEz87zqQXh)
- [weekly meetings](https://github.com/hololinked-dev/#monthly-meetings) and
- [project planning](https://github.com/orgs/hololinked-dev/projects/4) to discuss activities around this repository.
## Currently Supported Features
Other features that are currently supported:
- use a custom finite state machine.
- database support (Postgres, MySQL, SQLite - based on SQLAlchemy) for storing and loading properties when the object dies and restarts.
- auto-generate a Thing Description for Web of Things applications.
- use a serializer of your choice (except for HTTP) - MessagePack, JSON, pickle etc. - and extend serialization to suit your requirements.
- asyncio event loops on the server side.
| text/markdown | null | Vignesh Vaidyanathan <info@hololinked.dev> | null | null | null | data-acquisition, zmq-rpc, SCADA, IoT, Web of Things, remote data logging | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Manufacturing",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Education",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Browsers",
"Topic :: Scientific/Engineering :: Human Machine Interfaces",
"Topic :: System :: Hardware",
"Development Status :: 4 - Beta"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"msgspec>=0.18.6",
"pyzmq<26.2,>=25.1.0",
"pydantic<3.0.0,>=2.8.0",
"tornado>=6.3.3",
"jsonschema<5.0,>=4.22.0",
"httpx<29.0,>=0.28.1",
"sniffio<2.0,>=1.3.1",
"aiomqtt>=2.4.0",
"structlog>=25.5.0",
"ifaddr<0.3,>=0.2.0",
"fastjsonschema==2.20.0",
"serpent<2.0,>=1.41",
"bcrypt==4.3.0",
"argon2-cffi>=23.1.0",
"pyjwt>=2.11.0",
"cryptography>=46.0.5",
"sqlalchemy>2.0.21",
"sqlalchemy-utils>=0.41",
"psycopg2-binary>=2.9.11",
"pymongo>=4.15.2"
] | [] | [] | [] | [
"Documentation, https://docs.hololinked.dev",
"Repository, https://github.com/hololinked-dev/hololinked"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:42:21.479365 | hololinked-0.3.11.tar.gz | 276,994 | a2/02/6c3ce52fa3a0c2b18f26347c75a585f9728a49ec4da72a94db24000de459/hololinked-0.3.11.tar.gz | source | sdist | null | false | 5e28dd15735c7a527ed48bc3878082d3 | 794ce975fe4893ccbdb22b45b0c568f1986cc76521b388361a53fa4d2607ed7d | a2026c3ce52fa3a0c2b18f26347c75a585f9728a49ec4da72a94db24000de459 | Apache-2.0 | [
"licenses/labthings-fastapi-LICENSE.txt",
"licenses/param-LICENSE.txt",
"licenses/pyro-LICENSE.txt",
"licenses/wotpy-LICENSE.txt",
"license.txt"
] | 209 |
2.4 | matcher-py | 0.7.1 | A high-performance matcher designed to solve LOGICAL and TEXT VARIATIONS problems in word matching, implemented in Rust. | # Matcher Rust Implementation with PyO3 Binding
A high-performance matcher designed to solve **LOGICAL** and **TEXT VARIATIONS** problems in word matching, implemented in Rust.
For detailed implementation, see the [Design Document](../DESIGN.md).
## Features
- **Multiple Matching Methods**:
  - Simple Word Matching
  - Regex-Based Matching
  - Similarity-Based Matching
- **Text Normalization**:
  - **Fanjian**: Simplify traditional Chinese characters to simplified ones.
    Example: `蟲艸` -> `虫艹`
  - **Delete**: Remove specific characters.
    Example: `*Fu&*iii&^%%*&kkkk` -> `Fuiiikkkk`
  - **Normalize**: Normalize special characters to identifiable characters.
    Example: `𝜢𝕰𝕃𝙻𝝧 𝙒ⓞᵣℒ𝒟!` -> `hello world!`
  - **PinYin**: Convert Chinese characters to Pinyin for fuzzy matching.
    Example: `西安` -> ` xi an `, matches `洗按` -> ` xi an `, but not `先` -> ` xian `
  - **PinYinChar**: Convert Chinese characters to Pinyin.
    Example: `西安` -> `xian`, matches `洗按` and `先` -> `xian`
- **AND OR NOT Word Matching**:
  - Takes into account the number of repetitions of words.
  - Example: `hello&world` matches `hello world` and `world,hello`
  - Example: `无&法&无&天` matches `无无法天` (because `无` is repeated twice), but not `无法天`
  - Example: `hello~helloo~hhello` matches `hello` but not `helloo` and `hhello`
- **Customizable Exemption Lists**: Exclude specific words from matching.
- **Efficient Handling of Large Word Lists**: Optimized for performance.
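The `&`/`~` semantics above can be sketched in plain Python. The function below is an illustration of the matching rules only, not the `matcher_py` implementation, and assumes naive substring counting:

```python
from collections import Counter

def simple_word_match(pattern: str, text: str) -> bool:
    """Illustrative sketch of '&' (AND, repetition-aware) and '~' (NOT) patterns."""
    positive, *negatives = pattern.split("~")
    # Every '&'-joined word must occur at least as many times as it is repeated
    required = Counter(positive.split("&"))
    if not all(text.count(word) >= n for word, n in required.items()):
        return False
    # No '~'-excluded word may appear in the text
    return not any(neg in text for neg in negatives)

assert simple_word_match("hello&world", "world,hello")
assert simple_word_match("无&法&无&天", "无无法天")    # '无' occurs twice
assert not simple_word_match("无&法&无&天", "无法天")  # '无' occurs only once
assert not simple_word_match("hello~helloo~hhello", "helloo")
```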
## Installation
### Use pip
```shell
pip install matcher_py
```
### Install pre-built binary
Visit the [release page](https://github.com/Lips7/Matcher/releases) to download the pre-built binary.
## Usage
All relevant types are defined in [extension_types.py](./python/matcher_py/extension_types.py).
### Explanation of the configuration
* `Matcher`'s configuration is defined by the `MatchTableMap = Dict[int, List[MatchTable]]` type, the key of `MatchTableMap` is called `match_id`, **for each `match_id`, the `table_id` inside is required to be unique**.
* `SimpleMatcher`'s configuration is defined by the `SimpleTable = Dict[ProcessType, Dict[int, str]]` type, the value `Dict[int, str]`'s key is called `word_id`, **`word_id` is required to be globally unique**.
#### MatchTable
* `table_id`: The unique ID of the match table.
* `match_table_type`: The type of the match table.
* `word_list`: The word list of the match table.
* `exemption_process_type`: The type of the exemption simple match.
* `exemption_word_list`: The exemption word list of the match table.
For each match table, word matching is performed over the `word_list`, and exemption word matching is performed over the `exemption_word_list`. If the exemption word matching result is True, the word matching result will be False.
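This rule can be modeled in a few lines of plain Python. Naive substring matching stands in for the real match tables here; the sketch only illustrates the exemption logic:

```python
def table_matches(text: str, word_list: list[str], exemption_word_list: list[str]) -> bool:
    """A match table hits only when a word matches AND no exemption word matches."""
    matched = any(word in text for word in word_list)
    exempted = any(word in text for word in exemption_word_list)
    return matched and not exempted

assert table_matches("hello", ["hello", "world"], ["word"])
assert not table_matches("hello word", ["hello", "world"], ["word"])  # exempted by 'word'
```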
#### MatchTableType
* `Simple`: Supports simple multiple-pattern matching with text normalization defined by `process_type`.
  * It can handle combination patterns and repetition-sensitive matching, delimited by `&` and `~`; for example, `hello&world&hello` will match `hellohelloworld` and `worldhellohello`, but not `helloworld`, because `hello` must appear twice.
* `Regex`: Supports regex pattern matching.
  * `SimilarChar`: Supports similar character matching using regex.
    * `["hello,hallo,hollo,hi", "word,world,wrd,🌍", "!,?,~"]` will match `helloworld!`, `hollowrd?`, `hi🌍~` ··· any combination of the words split by `,` in the list.
  * `Acrostic`: Supports acrostic matching using regex **(currently only supports Chinese and simple English sentences)**.
    * `["h,e,l,l,o", "你,好"]` will match `hope, endures, love, lasts, onward.` and `你的笑容温暖, 好心情常伴。`.
  * `Regex`: Supports standard regex matching.
    * `["h[aeiou]llo", "w[aeiou]rd"]` will match `hello`, `world`, `hillo`, `wurld` ··· any text that matches a regex in the list.
* `Similar`: Supports similar text matching based on distance and threshold.
  * `Levenshtein`: Supports similar text matching based on Levenshtein distance.
#### ProcessType
* `None`: No transformation.
* `Fanjian`: Traditional Chinese to simplified Chinese transformation. Based on [FANJIAN](../matcher_rs/process_map/FANJIAN.txt).
  * `妳好` -> `你好`
  * `現⾝` -> `现身`
* `Delete`: Delete all punctuation, special characters and white spaces. Based on [TEXT_DELETE](../matcher_rs/process_map/TEXT-DELETE.txt) and `WHITE_SPACE`.
  * `hello, world!` -> `helloworld`
  * `《你∷好》` -> `你好`
* `Normalize`: Normalize all English character variations and number variations to basic characters. Based on [NORM](../matcher_rs/process_map/NORM.txt) and [NUM_NORM](../matcher_rs/process_map/NUM-NORM.txt).
  * `ℋЀ⒈㈠Õ` -> `he11o`
  * `⒈Ƨ㊂` -> `123`
* `PinYin`: Convert all unicode Chinese characters to pinyin with boundaries. Based on [PINYIN](../matcher_rs/process_map/PINYIN.txt).
  * `你好` -> ` ni hao `
  * `西安` -> ` xi an `
* `PinYinChar`: Convert all unicode Chinese characters to pinyin without boundaries. Based on [PINYIN](../matcher_rs/process_map/PINYIN.txt).
  * `你好` -> `nihao`
  * `西安` -> `xian`
You can combine these transformations as needed. Pre-defined combinations like `DeleteNormalize` and `FanjianDeleteNormalize` are provided for convenience.
Avoid combining `PinYin` and `PinYinChar`: `PinYin` is a more limited version of `PinYinChar`. In some cases, such as `xian`, the text can be treated as two words, `xi` and `an`, or as a single word, `xian`.
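Combining transformations amounts to composing the individual steps. The sketch below uses hypothetical stand-ins for the Delete and Normalize steps, not the `matcher_py` API, just to show the idea:

```python
def compose(*steps):
    """Chain text transformations left to right, like DeleteNormalize."""
    def run(text: str) -> str:
        for step in steps:
            text = step(text)
        return text
    return run

# Hypothetical stand-ins for the Delete and Normalize steps
delete = lambda s: "".join(ch for ch in s if ch.isalnum())
normalize = str.lower

delete_normalize = compose(delete, normalize)
assert delete_normalize("Hello, World!") == "helloworld"
```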
### Text Process Usage
Here’s an example of how to use the `reduce_text_process` and `text_process` functions:
```python
from matcher_py import reduce_text_process, text_process
from matcher_py.extension_types import ProcessType
print(reduce_text_process(ProcessType.MatchDeleteNormalize, "hello, world!"))
print(text_process(ProcessType.MatchDelete, "hello, world!"))
```
### Matcher Basic Usage
Here’s an example of how to use the `Matcher`:
```python
import json
from matcher_py import Matcher
from matcher_py.extension_types import MatchTable, MatchTableType, ProcessType, RegexMatchType, SimMatchType

matcher = Matcher(
    json.dumps({
        1: [
            MatchTable(
                table_id=1,
                match_table_type=MatchTableType.Simple(process_type=ProcessType.MatchFanjianDeleteNormalize),
                word_list=["hello", "world"],
                exemption_process_type=ProcessType.MatchNone,
                exemption_word_list=["word"],
            ),
            MatchTable(
                table_id=2,
                match_table_type=MatchTableType.Regex(
                    process_type=ProcessType.MatchFanjianDeleteNormalize,
                    regex_match_type=RegexMatchType.Regex,
                ),
                word_list=["h[aeiou]llo"],
                exemption_process_type=ProcessType.MatchNone,
                exemption_word_list=[],
            ),
        ],
        2: [
            MatchTable(
                table_id=3,
                match_table_type=MatchTableType.Similar(
                    process_type=ProcessType.MatchFanjianDeleteNormalize,
                    sim_match_type=SimMatchType.MatchLevenshtein,
                    threshold=0.5,
                ),
                word_list=["halxo"],
                exemption_process_type=ProcessType.MatchNone,
                exemption_word_list=[],
            ),
        ],
    }).encode()
)

# Check if a text matches
assert matcher.is_match("hello")
assert not matcher.is_match("word")

# Perform process as a list
result = matcher.process("hello")
assert result == [
    {'match_id': 1, 'table_id': 2, 'word_id': 0, 'word': 'h[aeiou]llo', 'similarity': 1.0},
    {'match_id': 1, 'table_id': 1, 'word_id': 0, 'word': 'hello', 'similarity': 1.0},
    {'match_id': 2, 'table_id': 3, 'word_id': 0, 'word': 'halxo', 'similarity': 0.6},
]

# Perform word matching as a dict
assert matcher.word_match(r"hello, world")[1] == [
    {'match_id': 1, 'table_id': 2, 'word_id': 0, 'word': 'h[aeiou]llo', 'similarity': 1.0},
    {'match_id': 1, 'table_id': 1, 'word_id': 0, 'word': 'hello', 'similarity': 1.0},
    {'match_id': 1, 'table_id': 1, 'word_id': 1, 'word': 'world', 'similarity': 1.0},
]

# Perform word matching as a string
result = matcher.word_match_as_string("hello")
assert result == """{"2":[{"match_id":2,"table_id":3,"word_id":0,"word":"halxo","similarity":0.6}],"1":[{"match_id":1,"table_id":2,"word_id":0,"word":"h[aeiou]llo","similarity":1.0},{"match_id":1,"table_id":1,"word_id":0,"word":"hello","similarity":1.0}]}"""
```
### Simple Matcher Basic Usage
Here’s an example of how to use the `SimpleMatcher`:
```python
import json
from matcher_py import SimpleMatcher
from matcher_py.extension_types import ProcessType

simple_matcher = SimpleMatcher(
    json.dumps(
        {
            ProcessType.MatchNone: {
                1: "hello&world",
                2: "word&word~hello",
            },
            ProcessType.MatchDelete: {
                3: "hallo",
            },
        }
    ).encode()
)

# Check if a text matches
assert simple_matcher.is_match("hello^&!#*#&!^#*()world")

# Perform simple processing
result = simple_matcher.process("hello,world,word,word,hallo")
assert result == [{'word_id': 1, 'word': 'hello&world'}, {'word_id': 3, 'word': 'hallo'}]
```
## Contributing
Contributions to `matcher_py` are welcome! If you find a bug or have a feature request, please open an issue on the [GitHub repository](https://github.com/Lips7/Matcher). If you would like to contribute code, please fork the repository and submit a pull request.
## License
`matcher_py` is licensed under the MIT OR Apache-2.0 license.
## More Information
For more details, visit the [GitHub repository](https://github.com/Lips7/Matcher).
| text/markdown; charset=UTF-8; variant=GFM | null | Foster Guo <f975793771@gmail.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python",
"Programming Language :: Rust",
"Typing :: Typed"
] | [] | https://github.com/Lips7/Matcher | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"changelog, https://github.com/Lips7/Matcher/blob/master/CHANGELOG.md",
"homepage, https://github.com/Lips7/Matcher",
"repository, https://github.com/Lips7/Matcher"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T09:41:49.614459 | matcher_py-0.7.1.tar.gz | 332,175 | 42/da/fa514b2713558bced0058e75ccb3a30f520f408e07f8362877532764fc6c/matcher_py-0.7.1.tar.gz | source | sdist | null | false | da24e40847ac7a9893c63d661a8c8475 | 794d4cdc312f1d84103d5686e70f034d22afa06e59e8b2f6842709e53b4d2ab6 | 42dafa514b2713558bced0058e75ccb3a30f520f408e07f8362877532764fc6c | null | [] | 2,098 |
2.1 | odoo14-addon-product-supplierinfo-intercompany | 14.0.1.1.3 | Product SupplierInfo Intercompany | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=================================
Product SupplierInfo Intercompany
=================================
..
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! This file is generated by oca-gen-addon-readme !!
   !! changes will be overwritten.                   !!
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! source digest: sha256:8a056971e685d26ea0c3f5d02f7fb3acc9712c369ffd82b75c3bbcf31dfddbb1
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
    :target: https://odoo-community.org/page/development-status
    :alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
    :target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
    :alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fmulti--company-lightgray.png?logo=github
    :target: https://github.com/OCA/multi-company/tree/14.0/product_supplierinfo_intercompany
    :alt: OCA/multi-company
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
    :target: https://translation.odoo-community.org/projects/multi-company-14-0/multi-company-14-0-product_supplierinfo_intercompany
    :alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
    :target: https://runboat.odoo-community.org/builds?repo=OCA/multi-company&target_branch=14.0
    :alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows you to manage intercompany pricelists, to sell products
between companies in a multi-company environment.
Choose any company as the seller company, and for it, define a pricelist and flag it as intercompany. For a product P and its cost C, whenever you add a price on that intercompany pricelist, all other companies will get a new supplierinfo for P at price C.
⚠ Tips:
If you have multiple intercompany pricelists, or an intercompany pricelist with prices per quantity, we strongly recommend installing the module product_supplierinfo_group_intercompany in order to manage supplier ordering correctly.
Indeed, Odoo's supplierinfo ordering is broken by design when you have prices per quantity!
The module product_supplierinfo_group reintroduces the supplier group concept and manages a sequence on the group to solve this.
The module product_supplierinfo_group_intercompany is a glue module between these two modules, adding the possibility to define a global sequence on the intercompany pricelist that will be applied to the generated supplier group.
**Table of contents**
.. contents::
   :local:
Usage
=====
- go to Sales > Product > Pricelist and set a Company in a pricelist
- flag "Is Intercompany Supplier" is now visible; if enabled, a supplierinfo will be created for each product:

  1) which has an applicable pricelist line
  2) where "Can be sold" and "Can be purchased" are set

- a supplierinfo will be created automatically for each newly created product for which the conditions above are verified
- the supplierinfo will be updated when the pricelist or product is updated
- note: the supplierinfo is not visible when using the same Company as the one set in the "Is Intercompany Supplier" pricelist, unless the technical group "Access to all supplier info" is enabled for the user.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/multi-company/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/multi-company/issues/new?body=module:%20product_supplierinfo_intercompany%0Aversion:%2014.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* Akretion
Contributors
~~~~~~~~~~~~
* Pierrick Brun <pierrick.brun@akretion.com>
* Sebastien Beau <sebastien.beau@akretion.com>
* Kevin Khao <kevin.khao@akretion.com>
* `Ooops404 <https://www.ooops404.com>`__:
* Ilyas <irazor147@gmail.com>
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
   :alt: Odoo Community Association
   :target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-PierrickBrun| image:: https://github.com/PierrickBrun.png?size=40px
    :target: https://github.com/PierrickBrun
    :alt: PierrickBrun
.. |maintainer-sebastienbeau| image:: https://github.com/sebastienbeau.png?size=40px
    :target: https://github.com/sebastienbeau
    :alt: sebastienbeau
.. |maintainer-kevinkhao| image:: https://github.com/kevinkhao.png?size=40px
    :target: https://github.com/kevinkhao
    :alt: kevinkhao
Current `maintainers <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-PierrickBrun| |maintainer-sebastienbeau| |maintainer-kevinkhao|
This module is part of the `OCA/multi-company <https://github.com/OCA/multi-company/tree/14.0/product_supplierinfo_intercompany>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | Odoo Community Association (OCA), Akretion | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 14.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/multi-company | null | >=3.6 | [] | [] | [] | [
"odoo14-addon-purchase-sale-inter-company",
"odoo<14.1dev,>=14.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T09:41:11.651068 | odoo14_addon_product_supplierinfo_intercompany-14.0.1.1.3-py3-none-any.whl | 43,588 | 93/15/29344ec8b011d4edbc40a9ce540c77faba5eddc924ce59d7388f91b8f65e/odoo14_addon_product_supplierinfo_intercompany-14.0.1.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 7c932e7baa28c11aed3aa4f502c68e1e | 42ade35732ba2083226e3f59baf078dc49292788cf76032b4c64d6e17c78c66f | 931529344ec8b011d4edbc40a9ce540c77faba5eddc924ce59d7388f91b8f65e | null | [] | 79 |
2.4 | inq | 0.2.1 | CLI for user input | # Inq
<div align="center">

</div>
<div align="center">
*CLI for user input 🎙️*
</div>
Use `inq` for prompts in your shell commands/scripts.
For example, to show a checkbox:
```shell
uvx inq checkbox -m "Choose your toppings" -c Pepperoni -c Mushrooms -c Onions
```

| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"inquirer-textual>=0.4.0",
"typer>=0.21.1"
] | [] | [] | [] | [
"Changelog, https://github.com/robvanderleek/inq/blob/master/CHANGELOG.md",
"Documentation, https://robvanderleek.github.io/inquirer-textual/",
"Source, https://github.com/robvanderleek/inq"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:41:00.358636 | inq-0.2.1.tar.gz | 71,500 | 0a/e4/65c00826bd2b6aeca10095df67b848b9f7d34db1aac00ae8dd0b1872dfd7/inq-0.2.1.tar.gz | source | sdist | null | false | ba4a223a409fe0d97975d41904cb12fa | adfb1cafb5ecf91f0550562b8c341a2ea8bb190c5566362bf786f0860d3750ee | 0ae465c00826bd2b6aeca10095df67b848b9f7d34db1aac00ae8dd0b1872dfd7 | null | [
"LICENSE"
] | 209 |
2.4 | tlm-cli | 0.2.2 | TLM — AI Tech Lead that enforces TDD, tests, and spec compliance in Claude Code. | # TLM — AI Tech Lead for Claude Code
> The annoying agent that makes Claude do the right thing.
TLM sits inside [Claude Code](https://claude.ai/code) and enforces TDD, tests, and spec compliance — automatically. It scans your project, identifies gaps, and hooks into Claude to enforce quality.
## Quick Start
```bash
pipx install tlm-cli
tlm auth <your-api-key>
cd your-project
tlm install
```
That's it. Open Claude Code and start working — TLM activates automatically.
## What Happens
1. **`tlm auth <key>`** — Save your API key (one-time)
2. **`tlm install`** — Scans your project, sends to server for assessment, shows gaps, selects quality tier, installs Claude Code hooks
3. **Work in Claude Code** — TLM interviews you before features, enforces TDD, blocks deploys, warns on gaps
4. **`tlm check`** — Manual quality gate before deployment
## How It Works
TLM scans your project and sends the file tree + samples to the server for assessment. The server returns recommendations (gaps) ranked by severity:
```
Phase 5: Project Gaps (4 found)
┌──────────────────────────────────────────┐
│ critical CI/CD No CI/CD pipeline │
│ critical Testing No tests found │
│ high Infra No staging env │
│ medium Ops No monitoring │
└──────────────────────────────────────────┘
```
You choose a quality tier (high/standard/relaxed) and TLM enforces accordingly.
## Commands
| Command | What it does |
|---------|-------------|
| `tlm auth <key>` | Save API key |
| `tlm install` | Full setup: scan, assess, quality tier, gaps, hooks |
| `tlm scan` | Re-scan and show recommendations |
| `tlm gaps` | Show active project gaps |
| `tlm check` | Run quality gate before deploy |
## Quality Tiers
| Tier | Behavior |
|------|----------|
| **HIGH** | Block on any gap. Spec review required. Multi-model code review. |
| **STANDARD** | Block on critical gaps. Spec review on blockers. Single-model review. |
| **RELAXED** | Warn only. No spec blocking. Mechanical checks only. |
## Hooks
TLM installs 6 Claude Code hooks:
- **session_start** — Loads project context, gaps, quality rules
- **prompt_submit** — Phase-specific guidance, gap warnings
- **guard** (Write/Edit) — Blocks code during interview, enforces TDD
- **compliance** (Bash) — Review gate on git commit
- **deployment** (Bash) — Always blocks deploys until `tlm check` passes
- **stop** — Session cleanup
## License
MIT
| text/markdown | null | TLM <hello@tlm.dev> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Quality Assurance",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27",
"click>=8.0",
"rich>=13.0",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"respx>=0.22; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://tlm.dev",
"Source, https://github.com/tlm-dev/tlm"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T09:40:48.933844 | tlm_cli-0.2.2.tar.gz | 29,420 | f0/41/7d0bc5f97995bbdbda8c7f2cb5b361a2ee113a2612f96452c7c52ef9b9d1/tlm_cli-0.2.2.tar.gz | source | sdist | null | false | fe93087cac167869c0a96ca6519a25a0 | ee5b62591f4d4931cd15c2c65599df8054ef74b18aeda974619da9dbc176b172 | f0417d0bc5f97995bbdbda8c7f2cb5b361a2ee113a2612f96452c7c52ef9b9d1 | MIT | [] | 209 |
2.4 | pulumi-eks | 4.3.0a1771661619 | Pulumi Amazon Web Services (AWS) EKS Components. | [](https://github.com/pulumi/pulumi-eks/actions/workflows/master.yml)
[](https://slack.pulumi.com)
[](https://badge.fury.io/js/@pulumi%2Feks)
[](https://pypi.org/project/pulumi-eks)
[](https://badge.fury.io/nu/pulumi.eks)
[](https://pkg.go.dev/github.com/pulumi/pulumi-eks/sdk/go/eks)
# Pulumi Amazon Web Services (AWS) EKS Components
The Pulumi EKS library provides a Pulumi component that creates and manages the resources necessary to run an EKS Kubernetes cluster in AWS. This component exposes the Crosswalk for AWS functionality documented in the [Pulumi Elastic Kubernetes Service guide](https://www.pulumi.com/docs/guides/crosswalk/aws/eks/) as a package available in all Pulumi languages.
This includes:
- The EKS cluster control plane.
- The cluster's worker nodes configured as node groups, which are managed by an auto scaling group.
- The AWS CNI Plugin [`aws-k8s-cni`](https://github.com/aws/amazon-vpc-cni-k8s/) to manage pod networking in Kubernetes.
<div>
<a href="https://www.pulumi.com/templates/kubernetes/aws/" title="Get Started">
<img src="https://www.pulumi.com/images/get-started.svg?" width="120">
</a>
</div>
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install it using either `npm`:
```bash
$ npm install @pulumi/eks
```
or `yarn`:
```bash
$ yarn add @pulumi/eks
```
### Python
To use from Python, install using `pip`:
```bash
$ pip install pulumi_eks
```
### Go
To use from Go, use `go get` to grab the latest version of the library:

```bash
$ go get github.com/pulumi/pulumi-eks/sdk/go
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
$ dotnet add package Pulumi.Eks
```
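Once the SDK is installed, a minimal Pulumi program provisions a cluster and exports its kubeconfig. This is a sketch of infrastructure code, assuming default VPC settings and valid AWS credentials; it only does real work when run via `pulumi up`:

```python
import pulumi
import pulumi_eks as eks

# Provision an EKS cluster with the component's defaults
# (control plane, default node group, and the AWS CNI plugin)
cluster = eks.Cluster("my-cluster")

# Export the kubeconfig so kubectl can reach the new cluster
pulumi.export("kubeconfig", cluster.kubeconfig)
```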
## References
* [Tutorial](https://www.pulumi.com/blog/easily-create-and-manage-aws-eks-kubernetes-clusters-with-pulumi/)
* [Reference Documentation](https://www.pulumi.com/registry/packages/eks/api-docs/)
* [Examples](./examples)
* [Crosswalk for AWS - EKS Guide](https://www.pulumi.com/docs/guides/crosswalk/aws/eks/)
## Contributing
If you are interested in contributing, please see the [contributing docs][contributing].
## Code of Conduct
Please follow the [code of conduct][code-of-conduct].
[contributing]: CONTRIBUTING.md
[code-of-conduct]: CODE-OF-CONDUCT.md
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, aws, eks | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"pulumi-aws<8.0.0,>=7.14.0",
"pulumi-kubernetes<5.0.0,>=4.19.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.com",
"Repository, https://github.com/pulumi/pulumi-eks"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-21T09:40:27.284678 | pulumi_eks-4.3.0a1771661619.tar.gz | 91,337 | 6e/06/c63267a3cf9fd2f721066820a32c4be9cd98106b5818b2b25396f396961a/pulumi_eks-4.3.0a1771661619.tar.gz | source | sdist | null | false | 265b6bfa1e235ea9c29330308e4128e0 | 4b64908c60048cdfda06137d674c4c85ef06e30e357433fa0352b9fced1b5713 | 6e06c63267a3cf9fd2f721066820a32c4be9cd98106b5818b2b25396f396961a | null | [] | 186 |
2.4 | agentswarm-bedrock-agentcore | 0.3.1 | AWS Bedrock AgentCore implementation for ai-agentswarm. | # agentswarm-bedrock-agentcore
An implementation of the [AWS Bedrock AgentCore SDK](https://github.com/aws/bedrock-agentcore-sdk-python) for the `ai-agentswarm` framework.
This library eliminates the boilerplate needed to create and execute agents on the Bedrock infrastructure.
## Installation
```bash
pip install agentswarm-bedrock-agentcore
```
## Usage
### 1. Defining and Hosting your Agent
Extend `BedrockAgent` to create your agent. You can then use the `.serve()` method to start a Bedrock AgentCore compatible WebSocket server.
```python
from agentswarm.bedrock_agentcore import BedrockAgent
from agentswarm.datamodels import Context
from agentswarm.llms import GoogleGenAI

# Initialize your LLM
llm = GoogleGenAI(model_name="gemini-1.5-pro")

class MyBedrockAgent(BedrockAgent):
    def id(self) -> str:
        return "my-awesome-agent"

    async def execute(self, user_id: str, context: Context, input: str = None):
        # Your custom agent logic here
        # The context will have the default_llm if passed to .serve()
        return f"Bedrock Agent says: I processed '{input}'"

if __name__ == "__main__":
    # Start the server and pass the default_llm
    MyBedrockAgent().serve(port=8000, default_llm=llm)
```
### 2. Invoking your Agent Remotely
Use `BedrockRemoteAgent` to call an agent that is already running. It supports both local WebSocket endpoints and AWS Bedrock ARNs.
```python
from agentswarm.bedrock_agentcore import BedrockRemoteAgent
from agentswarm.datamodels import Context

# Use an ARN for cloud invocation (after deployment)
# or "http://localhost:8000" for local testing
AGENT_ENDPOINT = "arn:aws:bedrock-agentcore:us-west-2:123456789012:runtime/my-agent-abc"

# Create a proxy for the remote agent
remote_agent = BedrockRemoteAgent(
    endpoint_url=AGENT_ENDPOINT,
    remote_agent_id="my-awesome-agent"
)

# Use it like any other AgentSwarm agent
# Note: Provide required context arguments for initialization
result = await remote_agent.execute(
    user_id="user-123",
    context=Context(trace_id="trace-1", messages=[], store=None, tracing=None),
    input="Hello Bedrock!"
)
print(result)
```
## Deployment
To deploy your agent natively to the **Amazon Bedrock AgentCore Runtime**, use the official **Bedrock AgentCore Starter Toolkit**.
### 1. Install the Toolkit
```bash
pip install bedrock-agentcore-starter-toolkit
```
### 2. Configure your Agent
Run the interactive configuration tool to set up your deployment (entrypoint, region, runtime, etc.).
```bash
agentcore configure
```
This will create or update a `.bedrock_agentcore.yaml` file with your settings.
### 3. Launch to AWS Bedrock
The `launch` command packages your code, installs dependencies from `requirements.txt`, and deploys it to the managed Bedrock runtime.
```bash
# Recommended: Cloud-based deployment (Direct Code Deploy)
agentcore launch
```
This command will return an **Agent ARN** which you can then use to invoke your agent via `BedrockRemoteAgent`.
## Configuration
### requirements.txt
Ensure your `requirements.txt` includes the necessary libraries for the remote runtime:
```text
ai-agentswarm>=0.5.1
agentswarm-bedrock-agentcore>=0.1.0
```
### default_llm Support
You can configure the model via environment variables in your `.bedrock_agentcore.yaml`:
```yaml
env:
  DEFAULT_LLM_MODEL: gemini-2.5-flash
```
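On the server side, the entrypoint can pick this variable up at startup. A hypothetical sketch using only the standard library (the fallback model name is an assumption, mirroring the YAML above):

```python
import os

# Fall back to a default model when the env var is absent
model_name = os.environ.get("DEFAULT_LLM_MODEL", "gemini-2.5-flash")
```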
## Quick Start: Testing and Deployment
### 1. Install Dependencies
```bash
# Core logic and Bedrock implementation
pip install ai-agentswarm agentswarm-bedrock-agentcore
# Deployment Toolkit
pip install bedrock-agentcore-starter-toolkit
```
### 2. Local Testing
Verify the bridge before deploying.
**Server:**
```bash
export RUN_SERVER=true
python examples/bedrock_demo.py
```
**Client (Proxy):**
```bash
# In another terminal
python examples/bedrock_demo.py
```
### 3. Native Bedrock Deployment
1. `export UV_CACHE_DIR=./.uv_cache` (optional, for permissions fix)
2. `agentcore configure`
3. `agentcore launch`
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | null | Luca Roverelli <luca.roverelli@gmail.com> | null | null | null | ai, agents, aws, bedrock, agentcore | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ai-agentswarm>=0.5.1",
"bedrock-agentcore",
"pydantic>=2.0.0",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"twine; extra == \"dev\"",
"build; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"python-dotenv; extra == \"examples\""
] | [] | [] | [] | [
"Homepage, https://github.com/ai-agentswarm/agentswarm-bedrock-agentcore",
"Bug Tracker, https://github.com/ai-agentswarm/agentswarm-bedrock-agentcore/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:40:13.282029 | agentswarm_bedrock_agentcore-0.3.1.tar.gz | 9,570 | d2/ea/623bb53efbe6899bc2fac434d6cf57f3838428a7591526317e5ae09843c8/agentswarm_bedrock_agentcore-0.3.1.tar.gz | source | sdist | null | false | 13aeeb3b357e3e871596c36b3da04528 | 698e0f6f51d087e798f729e7ca68ec15418d5612d985216a27b712c0312873f4 | d2ea623bb53efbe6899bc2fac434d6cf57f3838428a7591526317e5ae09843c8 | null | [] | 210 |
2.4 | onql-client | 0.1.4 | ONQL Python client | # ONQL Python Client
This is the ONQL Python client package.
## Installation
```bash
pip install onql-client
```
## Usage
```python
from onqlclient import ... # your usage here
```
| text/markdown | Paras Virk | Paras Virk <team@autobit.co> | null | null | MIT | null | [] | [] | https://onql.org | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://onql.org"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-21T09:40:03.484583 | onql_client-0.1.4.tar.gz | 3,843 | 70/ac/9ec853da8c8764ca9f09a891ac645da152decad915e3de1c65ecf4809edb/onql_client-0.1.4.tar.gz | source | sdist | null | false | f9072a262787964ab9708753a9c79025 | 92966996ef0e48ec5dc5dc74f639d50c5b22dc9437a7be12802c51de3306eadd | 70ac9ec853da8c8764ca9f09a891ac645da152decad915e3de1c65ecf4809edb | null | [] | 205 |
2.4 | griffe-warnings-deprecated | 1.1.1 | Griffe extension for `@warnings.deprecated` (PEP 702). | # griffe-warnings-deprecated
[](https://github.com/mkdocstrings/griffe-warnings-deprecated/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-warnings-deprecated/)
[](https://pypi.org/project/griffe-warnings-deprecated/)
[](https://app.gitter.im/#/room/#griffe-warnings-deprecated:gitter.im)
Griffe extension for `@warnings.deprecated`
([PEP 702](https://peps.python.org/pep-0702/)).
## Installation
```bash
pip install griffe-warnings-deprecated
```
## Usage
The option values in the following examples are the defaults;
you can omit them if the defaults suit you.
### Command-line
```bash
griffe dump mypackage -e griffe_warnings_deprecated
```
See [command-line usage in Griffe's documentation](https://mkdocstrings.github.io/griffe/extensions/#on-the-command-line).
### Python
```python
import griffe
griffe.load(
"mypackage",
extensions=griffe.load_extensions(
[{"griffe_warnings_deprecated": {
"kind": "danger",
"title": "Deprecated",
"label": "deprecated"
}}]
)
)
```
See [programmatic usage in Griffe's documentation](https://mkdocstrings.github.io/griffe/extensions/#programmatically).
### MkDocs
```yaml title="mkdocs.yml"
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_warnings_deprecated:
kind: danger
title: Deprecated
```
See [MkDocs usage in Griffe's documentation](https://mkdocstrings.github.io/griffe/extensions/#in-mkdocs).
---
Options:
- `kind`: The admonition kind (default: danger).
- `title`: The admonition title (default: Deprecated).
  Can be set to null to use the message as the title.
- `label`: The label added to deprecated objects (default: deprecated).
Can be set to null.
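For reference, the extension picks up objects decorated with `@warnings.deprecated`. A minimal sketch (using the `typing_extensions` backport on Python < 3.13; the function names are illustrative):

```python
import warnings

try:
    from warnings import deprecated  # Python 3.13+ (PEP 702)
except ImportError:
    from typing_extensions import deprecated  # backport for older Pythons

@deprecated("Use `new_parse` instead.")
def old_parse(csv_line: str) -> list:
    """Split a comma-separated line into fields."""
    return csv_line.split(",")

# Calling the function still works but emits a DeprecationWarning;
# Griffe, with this extension, renders the message as an admonition.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    fields = old_parse("a,b,c")
```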
## Sponsors
<!-- sponsors-start -->
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-warnings-deprecated",
"Documentation, https://mkdocstrings.github.io/griffe-warnings-deprecated",
"Changelog, https://mkdocstrings.github.io/griffe-warnings-deprecated/changelog",
"Repository, https://github.com/mkdocstrings/griffe-warnings-deprecated",
"Issues, https://github.com/mkdocstrings/griffe-warnings-deprecated/issues",
"Discussions, https://github.com/mkdocstrings/griffe-warnings-deprecated/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-warnings-deprecated",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:55.858598 | griffe_warnings_deprecated-1.1.1.tar.gz | 26,262 | da/9e/fc86f1e9270f143a395a601de81aa42a871722c34d4b3c7763658dc2e04d/griffe_warnings_deprecated-1.1.1.tar.gz | source | sdist | null | false | 405e7aa184fc77d89f829982b68435c1 | 9261369bf2acb8b5d24a0dc7895cce788208513d4349031d4ea315b979b2e99f | da9efc86f1e9270f143a395a601de81aa42a871722c34d4b3c7763658dc2e04d | ISC | [
"LICENSE"
] | 348 |
2.4 | griffe-typingdoc | 0.3.1 | Griffe extension for PEP 727 – Documentation Metadata in Typing. | # Griffe TypingDoc
[](https://github.com/mkdocstrings/griffe-typingdoc/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-typingdoc/)
[](https://pypi.org/project/griffe-typingdoc/)
[](https://app.gitter.im/#/room/#griffe-typingdoc:gitter.im)
Griffe extension for [`annotated-doc`](https://pypi.org/project/annotated-doc/) (originally [PEP 727](https://peps.python.org/pep-0727/)):
> Document parameters, class attributes, return types, and variables inline, with Annotated.
## Installation
```bash
pip install griffe-typingdoc
```
To use the extension in a MkDocs project, use this configuration:
```yaml
# mkdocs.yml
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_typingdoc
```
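With the extension enabled, parameters and return values documented inline via `Annotated` are rendered as regular docstring sections. A minimal sketch using `typing_extensions.Doc` (an assumption; `annotated_doc.Doc` works the same way):

```python
from typing import Annotated, get_type_hints

from typing_extensions import Doc

def greet(
    name: Annotated[str, Doc("The name to include in the greeting.")],
) -> Annotated[str, Doc("The rendered greeting.")]:
    return f"Hello, {name}!"

# The documentation travels with the annotation and is introspectable:
hints = get_type_hints(greet, include_extras=True)
name_doc = hints["name"].__metadata__[0].documentation
```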
## Sponsors
<!-- sponsors-start -->
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <pawamoy@pm.me> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Software Development :: Documentation",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0",
"typing-extensions>=4.7"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-typingdoc",
"Documentation, https://mkdocstrings.github.io/griffe-typingdoc",
"Changelog, https://mkdocstrings.github.io/griffe-typingdoc/changelog",
"Repository, https://github.com/mkdocstrings/griffe-typingdoc",
"Issues, https://github.com/mkdocstrings/griffe-typingdoc/issues",
"Discussions, https://github.com/mkdocstrings/griffe-typingdoc/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-typingdoc",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:54.409608 | griffe_typingdoc-0.3.1.tar.gz | 31,218 | ce/26/28182e0c8055842bf3da774dee1d5b789c0f236c078dcbdca1937b5214dc/griffe_typingdoc-0.3.1.tar.gz | source | sdist | null | false | 13af6e489b810f6ce73a11a782f4b4c9 | 2ff4703115cb7f8a65b9fdcdd1f3c3a15f813b6554621b52eaad094c4782ce96 | ce2628182e0c8055842bf3da774dee1d5b789c0f236c078dcbdca1937b5214dc | ISC | [
"LICENSE"
] | 748 |
2.4 | griffe-sphinx | 0.2.1 | Parse Sphinx-comments above attributes as docstrings. | # Griffe Sphinx
[](https://github.com/mkdocstrings/griffe-sphinx/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-sphinx/)
[](https://pypi.org/project/griffe-sphinx/)
[](https://app.gitter.im/#/room/#griffe-sphinx:gitter.im)
Parse Sphinx-comments above attributes as docstrings.
## Installation
```bash
pip install griffe-sphinx
```
## Usage
Griffe Sphinx allows reading Sphinx comments above attribute assignments as docstrings.
```python
# your_module.py
#: Summary of your attribute.
#:
#: This is a longer description of your attribute.
#: You can use any markup in here (Markdown, AsciiDoc, rST, etc.).
#:
#: Be careful with indented blocks: they need 4 spaces plus the initial 1-space indent, so 5.
#:
#: print("hello!")
your_attribute = "Hello Sphinx!"
```
This works for module attributes as well as class and instance attributes.
```python
class Hello:
#: Summary of attribute.
attr1 = "hello"
def __init__(self):
#: Summary of attribute.
self.attr2 = "sphinx"
```
Trailing comments (appearing at the end of a line) are not supported.
You can enable the extension when loading data with Griffe on the command line, in Python code, or with MkDocs.
**On the command-line:**
```bash
griffe dump your_package -e griffe_sphinx
```
**In Python code:**
```python
import griffe
data = griffe.load("your_package", extensions=griffe.load_extensions("griffe_sphinx"))
```
**With [MkDocs](https://www.mkdocs.org/):**
```yaml
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_sphinx
```
## Sponsors
<!-- sponsors-start -->
<div id="premium-sponsors" style="text-align: center;">
<div id="silver-sponsors"><b>Silver sponsors</b><p>
<a href="https://fastapi.tiangolo.com/"><img alt="FastAPI" src="https://raw.githubusercontent.com/tiangolo/fastapi/master/docs/en/docs/img/logo-margin/logo-teal.png" style="height: 200px; "></a><br>
</p></div>
<div id="bronze-sponsors"><b>Bronze sponsors</b><p>
<a href="https://www.nixtla.io/"><picture><source media="(prefers-color-scheme: light)" srcset="https://www.nixtla.io/img/logo/full-black.svg"><source media="(prefers-color-scheme: dark)" srcset="https://www.nixtla.io/img/logo/full-white.svg"><img alt="Nixtla" src="https://www.nixtla.io/img/logo/full-black.svg" style="height: 60px; "></picture></a><br>
</p></div>
</div>
---
<div id="sponsors"><p>
<a href="https://github.com/ofek"><img alt="ofek" src="https://avatars.githubusercontent.com/u/9677399?u=386c330f212ce467ce7119d9615c75d0e9b9f1ce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/samuelcolvin"><img alt="samuelcolvin" src="https://avatars.githubusercontent.com/u/4039449?u=42eb3b833047c8c4b4f647a031eaef148c16d93f&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/tlambert03"><img alt="tlambert03" src="https://avatars.githubusercontent.com/u/1609449?u=922abf0524b47739b37095e553c99488814b05db&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ssbarnea"><img alt="ssbarnea" src="https://avatars.githubusercontent.com/u/102495?u=c7bd9ddf127785286fc939dd18cb02db0a453bce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/femtomc"><img alt="femtomc" src="https://avatars.githubusercontent.com/u/34410036?u=f13a71daf2a9f0d2da189beaa94250daa629e2d8&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmarqu"><img alt="cmarqu" src="https://avatars.githubusercontent.com/u/360986?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/kolenaIO"><img alt="kolenaIO" src="https://avatars.githubusercontent.com/u/77010818?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ramnes"><img alt="ramnes" src="https://avatars.githubusercontent.com/u/835072?u=3fca03c3ba0051e2eb652b1def2188a94d1e1dc2&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/machow"><img alt="machow" src="https://avatars.githubusercontent.com/u/2574498?u=c41e3d2f758a05102d8075e38d67b9c17d4189d7&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/BenHammersley"><img alt="BenHammersley" src="https://avatars.githubusercontent.com/u/99436?u=4499a7b507541045222ee28ae122dbe3c8d08ab5&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/trevorWieland"><img alt="trevorWieland" src="https://avatars.githubusercontent.com/u/28811461?u=74cc0e3756c1d4e3d66b5c396e1d131ea8a10472&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/MarcoGorelli"><img alt="MarcoGorelli" src="https://avatars.githubusercontent.com/u/33491632?u=7de3a749cac76a60baca9777baf71d043a4f884d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/analog-cbarber"><img alt="analog-cbarber" src="https://avatars.githubusercontent.com/u/7408243?u=642fc2bdcc9904089c62fe5aec4e03ace32da67d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/OdinManiac"><img alt="OdinManiac" src="https://avatars.githubusercontent.com/u/22727172?u=36ab20970f7f52ae8e7eb67b7fcf491fee01ac22&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rstudio-sponsorship"><img alt="rstudio-sponsorship" src="https://avatars.githubusercontent.com/u/58949051?u=0c471515dd18111be30dfb7669ed5e778970959b&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/schlich"><img alt="schlich" src="https://avatars.githubusercontent.com/u/21191435?u=6f1240adb68f21614d809ae52d66509f46b1e877&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/butterlyn"><img alt="butterlyn" src="https://avatars.githubusercontent.com/u/53323535?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/livingbio"><img alt="livingbio" src="https://avatars.githubusercontent.com/u/10329983?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/NemetschekAllplan"><img alt="NemetschekAllplan" src="https://avatars.githubusercontent.com/u/912034?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/EricJayHartman"><img alt="EricJayHartman" src="https://avatars.githubusercontent.com/u/9259499?u=7e58cc7ec0cd3e85b27aec33656aa0f6612706dd&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/15r10nk"><img alt="15r10nk" src="https://avatars.githubusercontent.com/u/44680962?u=f04826446ff165742efa81e314bd03bf1724d50e&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/activeloopai"><img alt="activeloopai" src="https://avatars.githubusercontent.com/u/34816118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/roboflow"><img alt="roboflow" src="https://avatars.githubusercontent.com/u/53104118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmclaughlin"><img alt="cmclaughlin" src="https://avatars.githubusercontent.com/u/1061109?u=ddf6eec0edd2d11c980f8c3aa96e3d044d4e0468&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blaisep"><img alt="blaisep" src="https://avatars.githubusercontent.com/u/254456?u=97d584b7c0a6faf583aa59975df4f993f671d121&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/RapidataAI"><img alt="RapidataAI" src="https://avatars.githubusercontent.com/u/104209891?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rodolphebarbanneau"><img alt="rodolphebarbanneau" src="https://avatars.githubusercontent.com/u/46493454?u=6c405452a40c231cdf0b68e97544e07ee956a733&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/theSymbolSyndicate"><img alt="theSymbolSyndicate" src="https://avatars.githubusercontent.com/u/111542255?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blakeNaccarato"><img alt="blakeNaccarato" src="https://avatars.githubusercontent.com/u/20692450?u=bb919218be30cfa994514f4cf39bb2f7cf952df4&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ChargeStorm"><img alt="ChargeStorm" src="https://avatars.githubusercontent.com/u/26000165?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Alphadelta14"><img alt="Alphadelta14" src="https://avatars.githubusercontent.com/u/480845?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Cusp-AI"><img alt="Cusp-AI" src="https://avatars.githubusercontent.com/u/178170649?v=4" style="height: 32px; border-radius: 100%;"></a>
</p></div>
*And 7 more private sponsor(s).*
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-sphinx",
"Documentation, https://mkdocstrings.github.io/griffe-sphinx",
"Changelog, https://mkdocstrings.github.io/griffe-sphinx/changelog",
"Repository, https://github.com/mkdocstrings/griffe-sphinx",
"Issues, https://github.com/mkdocstrings/griffe-sphinx/issues",
"Discussions, https://github.com/mkdocstrings/griffe-sphinx/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-sphinx",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:52.972577 | griffe_sphinx-0.2.1.tar.gz | 28,519 | eb/54/94f4f78b211c76c62f09a1b7acbd675a193f2b1b4e3317b57cb09c05a4dd/griffe_sphinx-0.2.1.tar.gz | source | sdist | null | false | 9f0518483dd79e12e590a3243cd44a2b | ebb939fbc6bfa4595ff963db5cd6ecd43c1c26a07f57f0243b448a1fa1cc5ad6 | eb5494f4f78b211c76c62f09a1b7acbd675a193f2b1b4e3317b57cb09c05a4dd | ISC | [
"LICENSE"
] | 137 |
2.4 | griffe-runtime-objects | 0.3.1 | Make runtime objects available through `extra`. | # griffe-runtime-objects
[](https://github.com/mkdocstrings/griffe-runtime-objects/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-runtime-objects/)
[](https://pypi.org/project/griffe-runtime-objects/)
[](https://app.gitter.im/#/room/#griffe-runtime-objects:gitter.im)
Make runtime objects available through `extra`.
## Installation
```bash
pip install griffe-runtime-objects
```
## Usage
[Enable](https://mkdocstrings.github.io/griffe/guide/users/extending/#using-extensions) the `griffe_runtime_objects` extension. Now all Griffe objects will have access to the corresponding runtime objects in their `extra` attribute, under the `runtime-objects` namespace:
```pycon
>>> import griffe
>>> griffe_data = griffe.load("griffe", extensions=griffe.load_extensions("griffe_runtime_objects"), resolve_aliases=True)
>>> griffe_data["parse"].extra
defaultdict(<class 'dict'>, {'runtime-objects': {'object': <function parse at 0x78685c951260>}})
>>> griffe_data["Module"].extra
defaultdict(<class 'dict'>, {'runtime-objects': {'object': <class '_griffe.models.Module'>}})
```
This extension can be useful in custom mkdocstrings-python templates, for example to iterate over an object's value or attributes.
With MkDocs:
```yaml
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_runtime_objects
```
## Sponsors
<!-- sponsors-start -->
<div id="premium-sponsors" style="text-align: center;">
<div id="silver-sponsors"><b>Silver sponsors</b><p>
<a href="https://fastapi.tiangolo.com/"><img alt="FastAPI" src="https://raw.githubusercontent.com/tiangolo/fastapi/master/docs/en/docs/img/logo-margin/logo-teal.png" style="height: 200px; "></a><br>
</p></div>
<div id="bronze-sponsors"><b>Bronze sponsors</b><p>
<a href="https://www.nixtla.io/"><picture><source media="(prefers-color-scheme: light)" srcset="https://www.nixtla.io/img/logo/full-black.svg"><source media="(prefers-color-scheme: dark)" srcset="https://www.nixtla.io/img/logo/full-white.svg"><img alt="Nixtla" src="https://www.nixtla.io/img/logo/full-black.svg" style="height: 60px; "></picture></a><br>
</p></div>
</div>
---
<div id="sponsors"><p>
<a href="https://github.com/ofek"><img alt="ofek" src="https://avatars.githubusercontent.com/u/9677399?u=386c330f212ce467ce7119d9615c75d0e9b9f1ce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/samuelcolvin"><img alt="samuelcolvin" src="https://avatars.githubusercontent.com/u/4039449?u=42eb3b833047c8c4b4f647a031eaef148c16d93f&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/tlambert03"><img alt="tlambert03" src="https://avatars.githubusercontent.com/u/1609449?u=922abf0524b47739b37095e553c99488814b05db&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ssbarnea"><img alt="ssbarnea" src="https://avatars.githubusercontent.com/u/102495?u=c7bd9ddf127785286fc939dd18cb02db0a453bce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/femtomc"><img alt="femtomc" src="https://avatars.githubusercontent.com/u/34410036?u=f13a71daf2a9f0d2da189beaa94250daa629e2d8&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmarqu"><img alt="cmarqu" src="https://avatars.githubusercontent.com/u/360986?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/kolenaIO"><img alt="kolenaIO" src="https://avatars.githubusercontent.com/u/77010818?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ramnes"><img alt="ramnes" src="https://avatars.githubusercontent.com/u/835072?u=3fca03c3ba0051e2eb652b1def2188a94d1e1dc2&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/machow"><img alt="machow" src="https://avatars.githubusercontent.com/u/2574498?u=c41e3d2f758a05102d8075e38d67b9c17d4189d7&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/BenHammersley"><img alt="BenHammersley" src="https://avatars.githubusercontent.com/u/99436?u=4499a7b507541045222ee28ae122dbe3c8d08ab5&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/trevorWieland"><img alt="trevorWieland" src="https://avatars.githubusercontent.com/u/28811461?u=74cc0e3756c1d4e3d66b5c396e1d131ea8a10472&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/MarcoGorelli"><img alt="MarcoGorelli" src="https://avatars.githubusercontent.com/u/33491632?u=7de3a749cac76a60baca9777baf71d043a4f884d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/analog-cbarber"><img alt="analog-cbarber" src="https://avatars.githubusercontent.com/u/7408243?u=642fc2bdcc9904089c62fe5aec4e03ace32da67d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/OdinManiac"><img alt="OdinManiac" src="https://avatars.githubusercontent.com/u/22727172?u=36ab20970f7f52ae8e7eb67b7fcf491fee01ac22&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rstudio-sponsorship"><img alt="rstudio-sponsorship" src="https://avatars.githubusercontent.com/u/58949051?u=0c471515dd18111be30dfb7669ed5e778970959b&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/schlich"><img alt="schlich" src="https://avatars.githubusercontent.com/u/21191435?u=6f1240adb68f21614d809ae52d66509f46b1e877&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/butterlyn"><img alt="butterlyn" src="https://avatars.githubusercontent.com/u/53323535?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/livingbio"><img alt="livingbio" src="https://avatars.githubusercontent.com/u/10329983?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/NemetschekAllplan"><img alt="NemetschekAllplan" src="https://avatars.githubusercontent.com/u/912034?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/EricJayHartman"><img alt="EricJayHartman" src="https://avatars.githubusercontent.com/u/9259499?u=7e58cc7ec0cd3e85b27aec33656aa0f6612706dd&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/15r10nk"><img alt="15r10nk" src="https://avatars.githubusercontent.com/u/44680962?u=f04826446ff165742efa81e314bd03bf1724d50e&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/activeloopai"><img alt="activeloopai" src="https://avatars.githubusercontent.com/u/34816118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/roboflow"><img alt="roboflow" src="https://avatars.githubusercontent.com/u/53104118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmclaughlin"><img alt="cmclaughlin" src="https://avatars.githubusercontent.com/u/1061109?u=ddf6eec0edd2d11c980f8c3aa96e3d044d4e0468&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blaisep"><img alt="blaisep" src="https://avatars.githubusercontent.com/u/254456?u=97d584b7c0a6faf583aa59975df4f993f671d121&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/RapidataAI"><img alt="RapidataAI" src="https://avatars.githubusercontent.com/u/104209891?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rodolphebarbanneau"><img alt="rodolphebarbanneau" src="https://avatars.githubusercontent.com/u/46493454?u=6c405452a40c231cdf0b68e97544e07ee956a733&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/theSymbolSyndicate"><img alt="theSymbolSyndicate" src="https://avatars.githubusercontent.com/u/111542255?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blakeNaccarato"><img alt="blakeNaccarato" src="https://avatars.githubusercontent.com/u/20692450?u=bb919218be30cfa994514f4cf39bb2f7cf952df4&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ChargeStorm"><img alt="ChargeStorm" src="https://avatars.githubusercontent.com/u/26000165?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Alphadelta14"><img alt="Alphadelta14" src="https://avatars.githubusercontent.com/u/480845?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Cusp-AI"><img alt="Cusp-AI" src="https://avatars.githubusercontent.com/u/178170649?v=4" style="height: 32px; border-radius: 100%;"></a>
</p></div>
*And 7 more private sponsor(s).*
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-runtime-objects",
"Documentation, https://mkdocstrings.github.io/griffe-runtime-objects",
"Changelog, https://mkdocstrings.github.io/griffe-runtime-objects/changelog",
"Repository, https://github.com/mkdocstrings/griffe-runtime-objects",
"Issues, https://github.com/mkdocstrings/griffe-runtime-objects/issues",
"Discussions, https://github.com/mkdocstrings/griffe-runtime-objects/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-runtime-objects",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:51.165626 | griffe_runtime_objects-0.3.1.tar.gz | 28,170 | 2a/38/e80d9a1a52ce7a815f18739e7ce6f8ac3604ae3007ec0e671e85cecdd340/griffe_runtime_objects-0.3.1.tar.gz | source | sdist | null | false | d72fb0f323af36304fc51e9220918c6a | eecb887e1fb6b48bddcfc6115a61f59b3a91aeaadd9b932b8056eb8dd7811a57 | 2a38e80d9a1a52ce7a815f18739e7ce6f8ac3604ae3007ec0e671e85cecdd340 | ISC | [
"LICENSE"
] | 122 |
2.4 | griffe-pydantic | 1.3.1 | Griffe extension for Pydantic. | # griffe-pydantic
[](https://github.com/mkdocstrings/griffe-pydantic/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-pydantic/)
[](https://pypi.org/project/griffe-pydantic/)
[](https://app.gitter.im/#/room/#griffe-pydantic:gitter.im)
[Griffe](https://mkdocstrings.github.io/griffe/) extension for [Pydantic](https://github.com/pydantic/pydantic).
## Installation
```bash
pip install griffe-pydantic
```
## Usage
### Command-line
```bash
griffe dump mypackage -e griffe_pydantic
```
See [command-line usage in Griffe's documentation](https://mkdocstrings.github.io/griffe/extensions/#on-the-command-line).
### Python
```python
import griffe
griffe.load(
"mypackage",
extensions=griffe.load_extensions(
[{"griffe_pydantic": {"schema": True}}]
)
)
```
See [programmatic usage in Griffe's documentation](https://mkdocstrings.github.io/griffe/extensions/#programmatically).
### MkDocs
```yaml title="mkdocs.yml"
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_pydantic:
schema: true
```
See [MkDocs usage in Griffe's documentation](https://mkdocstrings.github.io/griffe/extensions/#in-mkdocs).
## Sponsors
<!-- sponsors-start -->
<div id="premium-sponsors" style="text-align: center;">
<div id="silver-sponsors"><b>Silver sponsors</b><p>
<a href="https://fastapi.tiangolo.com/"><img alt="FastAPI" src="https://raw.githubusercontent.com/tiangolo/fastapi/master/docs/en/docs/img/logo-margin/logo-teal.png" style="height: 200px; "></a><br>
</p></div>
<div id="bronze-sponsors"><b>Bronze sponsors</b><p>
<a href="https://www.nixtla.io/"><picture><source media="(prefers-color-scheme: light)" srcset="https://www.nixtla.io/img/logo/full-black.svg"><source media="(prefers-color-scheme: dark)" srcset="https://www.nixtla.io/img/logo/full-white.svg"><img alt="Nixtla" src="https://www.nixtla.io/img/logo/full-black.svg" style="height: 60px; "></picture></a><br>
</p></div>
</div>
---
<div id="sponsors"><p>
<a href="https://github.com/ofek"><img alt="ofek" src="https://avatars.githubusercontent.com/u/9677399?u=386c330f212ce467ce7119d9615c75d0e9b9f1ce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/samuelcolvin"><img alt="samuelcolvin" src="https://avatars.githubusercontent.com/u/4039449?u=42eb3b833047c8c4b4f647a031eaef148c16d93f&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/tlambert03"><img alt="tlambert03" src="https://avatars.githubusercontent.com/u/1609449?u=922abf0524b47739b37095e553c99488814b05db&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ssbarnea"><img alt="ssbarnea" src="https://avatars.githubusercontent.com/u/102495?u=c7bd9ddf127785286fc939dd18cb02db0a453bce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/femtomc"><img alt="femtomc" src="https://avatars.githubusercontent.com/u/34410036?u=f13a71daf2a9f0d2da189beaa94250daa629e2d8&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmarqu"><img alt="cmarqu" src="https://avatars.githubusercontent.com/u/360986?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/kolenaIO"><img alt="kolenaIO" src="https://avatars.githubusercontent.com/u/77010818?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ramnes"><img alt="ramnes" src="https://avatars.githubusercontent.com/u/835072?u=3fca03c3ba0051e2eb652b1def2188a94d1e1dc2&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/machow"><img alt="machow" src="https://avatars.githubusercontent.com/u/2574498?u=c41e3d2f758a05102d8075e38d67b9c17d4189d7&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/BenHammersley"><img alt="BenHammersley" src="https://avatars.githubusercontent.com/u/99436?u=4499a7b507541045222ee28ae122dbe3c8d08ab5&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/trevorWieland"><img alt="trevorWieland" src="https://avatars.githubusercontent.com/u/28811461?u=74cc0e3756c1d4e3d66b5c396e1d131ea8a10472&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/MarcoGorelli"><img alt="MarcoGorelli" src="https://avatars.githubusercontent.com/u/33491632?u=7de3a749cac76a60baca9777baf71d043a4f884d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/analog-cbarber"><img alt="analog-cbarber" src="https://avatars.githubusercontent.com/u/7408243?u=642fc2bdcc9904089c62fe5aec4e03ace32da67d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/OdinManiac"><img alt="OdinManiac" src="https://avatars.githubusercontent.com/u/22727172?u=36ab20970f7f52ae8e7eb67b7fcf491fee01ac22&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rstudio-sponsorship"><img alt="rstudio-sponsorship" src="https://avatars.githubusercontent.com/u/58949051?u=0c471515dd18111be30dfb7669ed5e778970959b&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/schlich"><img alt="schlich" src="https://avatars.githubusercontent.com/u/21191435?u=6f1240adb68f21614d809ae52d66509f46b1e877&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/butterlyn"><img alt="butterlyn" src="https://avatars.githubusercontent.com/u/53323535?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/livingbio"><img alt="livingbio" src="https://avatars.githubusercontent.com/u/10329983?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/NemetschekAllplan"><img alt="NemetschekAllplan" src="https://avatars.githubusercontent.com/u/912034?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/EricJayHartman"><img alt="EricJayHartman" src="https://avatars.githubusercontent.com/u/9259499?u=7e58cc7ec0cd3e85b27aec33656aa0f6612706dd&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/15r10nk"><img alt="15r10nk" src="https://avatars.githubusercontent.com/u/44680962?u=f04826446ff165742efa81e314bd03bf1724d50e&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/activeloopai"><img alt="activeloopai" src="https://avatars.githubusercontent.com/u/34816118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/roboflow"><img alt="roboflow" src="https://avatars.githubusercontent.com/u/53104118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmclaughlin"><img alt="cmclaughlin" src="https://avatars.githubusercontent.com/u/1061109?u=ddf6eec0edd2d11c980f8c3aa96e3d044d4e0468&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blaisep"><img alt="blaisep" src="https://avatars.githubusercontent.com/u/254456?u=97d584b7c0a6faf583aa59975df4f993f671d121&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/RapidataAI"><img alt="RapidataAI" src="https://avatars.githubusercontent.com/u/104209891?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rodolphebarbanneau"><img alt="rodolphebarbanneau" src="https://avatars.githubusercontent.com/u/46493454?u=6c405452a40c231cdf0b68e97544e07ee956a733&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/theSymbolSyndicate"><img alt="theSymbolSyndicate" src="https://avatars.githubusercontent.com/u/111542255?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blakeNaccarato"><img alt="blakeNaccarato" src="https://avatars.githubusercontent.com/u/20692450?u=bb919218be30cfa994514f4cf39bb2f7cf952df4&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ChargeStorm"><img alt="ChargeStorm" src="https://avatars.githubusercontent.com/u/26000165?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Alphadelta14"><img alt="Alphadelta14" src="https://avatars.githubusercontent.com/u/480845?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Cusp-AI"><img alt="Cusp-AI" src="https://avatars.githubusercontent.com/u/178170649?v=4" style="height: 32px; border-radius: 100%;"></a>
</p></div>
*And 7 more private sponsor(s).*
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-pydantic",
"Documentation, https://mkdocstrings.github.io/griffe-pydantic",
"Changelog, https://mkdocstrings.github.io/griffe-pydantic/changelog",
"Repository, https://github.com/mkdocstrings/griffe-pydantic",
"Issues, https://github.com/mkdocstrings/griffe-pydantic/issues",
"Discussions, https://github.com/mkdocstrings/griffe-pydantic/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-pydantic",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:49.674558 | griffe_pydantic-1.3.1.tar.gz | 38,176 | 5a/bd/d2eaeaf3f9910c9cd72793af0de18ee3d3a3a27bb30ab01cfd7659c08dc4/griffe_pydantic-1.3.1.tar.gz | source | sdist | null | false | e65d7f907ac8e8fc2b0b9d324860d6db | f7caedfa0effedb22893bf01cc411fd567614f7b4de7ce0c1f4293eb7acb5c44 | 5abdd2eaeaf3f9910c9cd72793af0de18ee3d3a3a27bb30ab01cfd7659c08dc4 | ISC | [
"LICENSE"
] | 877 |
2.4 | griffe-public-wildcard-imports | 0.3.1 | Mark wildcard imported objects as public. | # griffe-public-wildcard-imports
[](https://github.com/mkdocstrings/griffe-public-wildcard-imports/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-public-wildcard-imports/)
[](https://pypi.org/project/griffe-public-wildcard-imports/)
[](https://app.gitter.im/#/room/#griffe-public-wildcard-imports:gitter.im)
Mark wildcard imported objects as public.
## Installation
```bash
pip install griffe-public-wildcard-imports
```
## Usage
[Enable](https://mkdocstrings.github.io/griffe/guide/users/extending/#using-extensions) the `griffe_public_wildcard_imports` extension. Once enabled, all objects imported through wildcard imports are considered public, per the convention.
```python
# All imported objects are marked as public.
from somewhere import *
```
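For illustration, the convention itself can be checked with a stdlib-only sketch. This is not the extension's code, just the rule it encodes: find the modules a file wildcard-imports from.

```python
import ast


def wildcard_modules(source: str) -> list[str]:
    """Return the modules a source file wildcard-imports from."""
    modules = []
    for node in ast.walk(ast.parse(source)):
        # `from somewhere import *` parses as an ImportFrom whose
        # alias list contains the single name "*".
        if isinstance(node, ast.ImportFrom):
            if any(alias.name == "*" for alias in node.names):
                modules.append(node.module)
    return modules


print(wildcard_modules("from somewhere import *"))  # ['somewhere']
```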
With MkDocs:
```yaml
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_public_wildcard_imports
```
## Sponsors
<!-- sponsors-start -->
<div id="premium-sponsors" style="text-align: center;">
<div id="silver-sponsors"><b>Silver sponsors</b><p>
<a href="https://fastapi.tiangolo.com/"><img alt="FastAPI" src="https://raw.githubusercontent.com/tiangolo/fastapi/master/docs/en/docs/img/logo-margin/logo-teal.png" style="height: 200px; "></a><br>
</p></div>
<div id="bronze-sponsors"><b>Bronze sponsors</b><p>
<a href="https://www.nixtla.io/"><picture><source media="(prefers-color-scheme: light)" srcset="https://www.nixtla.io/img/logo/full-black.svg"><source media="(prefers-color-scheme: dark)" srcset="https://www.nixtla.io/img/logo/full-white.svg"><img alt="Nixtla" src="https://www.nixtla.io/img/logo/full-black.svg" style="height: 60px; "></picture></a><br>
</p></div>
</div>
---
<div id="sponsors"><p>
<a href="https://github.com/ofek"><img alt="ofek" src="https://avatars.githubusercontent.com/u/9677399?u=386c330f212ce467ce7119d9615c75d0e9b9f1ce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/samuelcolvin"><img alt="samuelcolvin" src="https://avatars.githubusercontent.com/u/4039449?u=42eb3b833047c8c4b4f647a031eaef148c16d93f&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/tlambert03"><img alt="tlambert03" src="https://avatars.githubusercontent.com/u/1609449?u=922abf0524b47739b37095e553c99488814b05db&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ssbarnea"><img alt="ssbarnea" src="https://avatars.githubusercontent.com/u/102495?u=c7bd9ddf127785286fc939dd18cb02db0a453bce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/femtomc"><img alt="femtomc" src="https://avatars.githubusercontent.com/u/34410036?u=f13a71daf2a9f0d2da189beaa94250daa629e2d8&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmarqu"><img alt="cmarqu" src="https://avatars.githubusercontent.com/u/360986?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/kolenaIO"><img alt="kolenaIO" src="https://avatars.githubusercontent.com/u/77010818?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ramnes"><img alt="ramnes" src="https://avatars.githubusercontent.com/u/835072?u=3fca03c3ba0051e2eb652b1def2188a94d1e1dc2&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/machow"><img alt="machow" src="https://avatars.githubusercontent.com/u/2574498?u=c41e3d2f758a05102d8075e38d67b9c17d4189d7&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/BenHammersley"><img alt="BenHammersley" src="https://avatars.githubusercontent.com/u/99436?u=4499a7b507541045222ee28ae122dbe3c8d08ab5&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/trevorWieland"><img alt="trevorWieland" src="https://avatars.githubusercontent.com/u/28811461?u=74cc0e3756c1d4e3d66b5c396e1d131ea8a10472&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/MarcoGorelli"><img alt="MarcoGorelli" src="https://avatars.githubusercontent.com/u/33491632?u=7de3a749cac76a60baca9777baf71d043a4f884d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/analog-cbarber"><img alt="analog-cbarber" src="https://avatars.githubusercontent.com/u/7408243?u=642fc2bdcc9904089c62fe5aec4e03ace32da67d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/OdinManiac"><img alt="OdinManiac" src="https://avatars.githubusercontent.com/u/22727172?u=36ab20970f7f52ae8e7eb67b7fcf491fee01ac22&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rstudio-sponsorship"><img alt="rstudio-sponsorship" src="https://avatars.githubusercontent.com/u/58949051?u=0c471515dd18111be30dfb7669ed5e778970959b&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/schlich"><img alt="schlich" src="https://avatars.githubusercontent.com/u/21191435?u=6f1240adb68f21614d809ae52d66509f46b1e877&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/butterlyn"><img alt="butterlyn" src="https://avatars.githubusercontent.com/u/53323535?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/livingbio"><img alt="livingbio" src="https://avatars.githubusercontent.com/u/10329983?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/NemetschekAllplan"><img alt="NemetschekAllplan" src="https://avatars.githubusercontent.com/u/912034?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/EricJayHartman"><img alt="EricJayHartman" src="https://avatars.githubusercontent.com/u/9259499?u=7e58cc7ec0cd3e85b27aec33656aa0f6612706dd&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/15r10nk"><img alt="15r10nk" src="https://avatars.githubusercontent.com/u/44680962?u=f04826446ff165742efa81e314bd03bf1724d50e&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/activeloopai"><img alt="activeloopai" src="https://avatars.githubusercontent.com/u/34816118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/roboflow"><img alt="roboflow" src="https://avatars.githubusercontent.com/u/53104118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmclaughlin"><img alt="cmclaughlin" src="https://avatars.githubusercontent.com/u/1061109?u=ddf6eec0edd2d11c980f8c3aa96e3d044d4e0468&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blaisep"><img alt="blaisep" src="https://avatars.githubusercontent.com/u/254456?u=97d584b7c0a6faf583aa59975df4f993f671d121&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/RapidataAI"><img alt="RapidataAI" src="https://avatars.githubusercontent.com/u/104209891?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rodolphebarbanneau"><img alt="rodolphebarbanneau" src="https://avatars.githubusercontent.com/u/46493454?u=6c405452a40c231cdf0b68e97544e07ee956a733&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/theSymbolSyndicate"><img alt="theSymbolSyndicate" src="https://avatars.githubusercontent.com/u/111542255?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blakeNaccarato"><img alt="blakeNaccarato" src="https://avatars.githubusercontent.com/u/20692450?u=bb919218be30cfa994514f4cf39bb2f7cf952df4&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ChargeStorm"><img alt="ChargeStorm" src="https://avatars.githubusercontent.com/u/26000165?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Alphadelta14"><img alt="Alphadelta14" src="https://avatars.githubusercontent.com/u/480845?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Cusp-AI"><img alt="Cusp-AI" src="https://avatars.githubusercontent.com/u/178170649?v=4" style="height: 32px; border-radius: 100%;"></a>
</p></div>
*And 7 more private sponsor(s).*
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-public-wildcard-imports",
"Documentation, https://mkdocstrings.github.io/griffe-public-wildcard-imports",
"Changelog, https://mkdocstrings.github.io/griffe-public-wildcard-imports/changelog",
"Repository, https://github.com/mkdocstrings/griffe-public-wildcard-imports",
"Issues, https://github.com/mkdocstrings/griffe-public-wildcard-imports/issues",
"Discussions, https://github.com/mkdocstrings/griffe-public-wildcard-imports/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-public-wildcard-imports",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:47.882116 | griffe_public_wildcard_imports-0.3.1.tar.gz | 27,607 | 88/b4/bab08ba73fece60828a0d35b252fd3ec2399a4afd901a82e837d39a0483c/griffe_public_wildcard_imports-0.3.1.tar.gz | source | sdist | null | false | 1da9d858b010a942cc650281186fad15 | bca005c76ff1a4b2f143e64d8445758e0a13f1ad439c41102045316ce2fa905e | 88b4bab08ba73fece60828a0d35b252fd3ec2399a4afd901a82e837d39a0483c | ISC | [
"LICENSE"
] | 130 |
2.4 | griffe-public-redundant-aliases | 0.3.1 | Mark objects imported with redundant aliases as public. | # griffe-public-redundant-aliases
[](https://github.com/mkdocstrings/griffe-public-redundant-aliases/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-public-redundant-aliases/)
[](https://pypi.org/project/griffe-public-redundant-aliases/)
[](https://app.gitter.im/#/room/#griffe-public-redundant-aliases:gitter.im)
Mark objects imported with redundant aliases as public.
## Installation
```bash
pip install griffe-public-redundant-aliases
```
## Usage
[Enable](https://mkdocstrings.github.io/griffe/guide/users/extending/#using-extensions) the `griffe_public_redundant_aliases` extension. Once enabled, all objects imported with redundant aliases are marked as public, per the convention.
```python
# The following objects will be marked as public.
from somewhere import Thing as Thing
from somewhere import Other as Other
# The following object won't be marked as public.
from somewhere import Stuff
```
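The redundant-alias rule can likewise be sketched with the stdlib `ast` module (an illustration of the convention, not the extension's actual implementation):

```python
import ast


def redundant_alias_names(source: str) -> list[str]:
    """Names imported with a redundant alias (``import X as X``)."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            # An alias is redundant when the bound name equals the
            # imported name, e.g. `Thing as Thing`.
            names += [a.name for a in node.names if a.asname == a.name]
    return names


src = (
    "from somewhere import Thing as Thing\n"
    "from somewhere import Other as Other\n"
    "from somewhere import Stuff\n"
)
print(redundant_alias_names(src))  # ['Thing', 'Other']
```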
With MkDocs:
```yaml
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_public_redundant_aliases
```
## Sponsors
<!-- sponsors-start -->
<div id="premium-sponsors" style="text-align: center;">
<div id="silver-sponsors"><b>Silver sponsors</b><p>
<a href="https://fastapi.tiangolo.com/"><img alt="FastAPI" src="https://raw.githubusercontent.com/tiangolo/fastapi/master/docs/en/docs/img/logo-margin/logo-teal.png" style="height: 200px; "></a><br>
</p></div>
<div id="bronze-sponsors"><b>Bronze sponsors</b><p>
<a href="https://www.nixtla.io/"><picture><source media="(prefers-color-scheme: light)" srcset="https://www.nixtla.io/img/logo/full-black.svg"><source media="(prefers-color-scheme: dark)" srcset="https://www.nixtla.io/img/logo/full-white.svg"><img alt="Nixtla" src="https://www.nixtla.io/img/logo/full-black.svg" style="height: 60px; "></picture></a><br>
</p></div>
</div>
---
<div id="sponsors"><p>
<a href="https://github.com/ofek"><img alt="ofek" src="https://avatars.githubusercontent.com/u/9677399?u=386c330f212ce467ce7119d9615c75d0e9b9f1ce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/samuelcolvin"><img alt="samuelcolvin" src="https://avatars.githubusercontent.com/u/4039449?u=42eb3b833047c8c4b4f647a031eaef148c16d93f&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/tlambert03"><img alt="tlambert03" src="https://avatars.githubusercontent.com/u/1609449?u=922abf0524b47739b37095e553c99488814b05db&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ssbarnea"><img alt="ssbarnea" src="https://avatars.githubusercontent.com/u/102495?u=c7bd9ddf127785286fc939dd18cb02db0a453bce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/femtomc"><img alt="femtomc" src="https://avatars.githubusercontent.com/u/34410036?u=f13a71daf2a9f0d2da189beaa94250daa629e2d8&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmarqu"><img alt="cmarqu" src="https://avatars.githubusercontent.com/u/360986?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/kolenaIO"><img alt="kolenaIO" src="https://avatars.githubusercontent.com/u/77010818?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ramnes"><img alt="ramnes" src="https://avatars.githubusercontent.com/u/835072?u=3fca03c3ba0051e2eb652b1def2188a94d1e1dc2&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/machow"><img alt="machow" src="https://avatars.githubusercontent.com/u/2574498?u=c41e3d2f758a05102d8075e38d67b9c17d4189d7&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/BenHammersley"><img alt="BenHammersley" src="https://avatars.githubusercontent.com/u/99436?u=4499a7b507541045222ee28ae122dbe3c8d08ab5&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/trevorWieland"><img alt="trevorWieland" src="https://avatars.githubusercontent.com/u/28811461?u=74cc0e3756c1d4e3d66b5c396e1d131ea8a10472&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/MarcoGorelli"><img alt="MarcoGorelli" src="https://avatars.githubusercontent.com/u/33491632?u=7de3a749cac76a60baca9777baf71d043a4f884d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/analog-cbarber"><img alt="analog-cbarber" src="https://avatars.githubusercontent.com/u/7408243?u=642fc2bdcc9904089c62fe5aec4e03ace32da67d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/OdinManiac"><img alt="OdinManiac" src="https://avatars.githubusercontent.com/u/22727172?u=36ab20970f7f52ae8e7eb67b7fcf491fee01ac22&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rstudio-sponsorship"><img alt="rstudio-sponsorship" src="https://avatars.githubusercontent.com/u/58949051?u=0c471515dd18111be30dfb7669ed5e778970959b&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/schlich"><img alt="schlich" src="https://avatars.githubusercontent.com/u/21191435?u=6f1240adb68f21614d809ae52d66509f46b1e877&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/butterlyn"><img alt="butterlyn" src="https://avatars.githubusercontent.com/u/53323535?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/livingbio"><img alt="livingbio" src="https://avatars.githubusercontent.com/u/10329983?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/NemetschekAllplan"><img alt="NemetschekAllplan" src="https://avatars.githubusercontent.com/u/912034?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/EricJayHartman"><img alt="EricJayHartman" src="https://avatars.githubusercontent.com/u/9259499?u=7e58cc7ec0cd3e85b27aec33656aa0f6612706dd&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/15r10nk"><img alt="15r10nk" src="https://avatars.githubusercontent.com/u/44680962?u=f04826446ff165742efa81e314bd03bf1724d50e&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/activeloopai"><img alt="activeloopai" src="https://avatars.githubusercontent.com/u/34816118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/roboflow"><img alt="roboflow" src="https://avatars.githubusercontent.com/u/53104118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmclaughlin"><img alt="cmclaughlin" src="https://avatars.githubusercontent.com/u/1061109?u=ddf6eec0edd2d11c980f8c3aa96e3d044d4e0468&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blaisep"><img alt="blaisep" src="https://avatars.githubusercontent.com/u/254456?u=97d584b7c0a6faf583aa59975df4f993f671d121&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/RapidataAI"><img alt="RapidataAI" src="https://avatars.githubusercontent.com/u/104209891?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rodolphebarbanneau"><img alt="rodolphebarbanneau" src="https://avatars.githubusercontent.com/u/46493454?u=6c405452a40c231cdf0b68e97544e07ee956a733&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/theSymbolSyndicate"><img alt="theSymbolSyndicate" src="https://avatars.githubusercontent.com/u/111542255?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blakeNaccarato"><img alt="blakeNaccarato" src="https://avatars.githubusercontent.com/u/20692450?u=bb919218be30cfa994514f4cf39bb2f7cf952df4&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ChargeStorm"><img alt="ChargeStorm" src="https://avatars.githubusercontent.com/u/26000165?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Alphadelta14"><img alt="Alphadelta14" src="https://avatars.githubusercontent.com/u/480845?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Cusp-AI"><img alt="Cusp-AI" src="https://avatars.githubusercontent.com/u/178170649?v=4" style="height: 32px; border-radius: 100%;"></a>
</p></div>
*And 7 more private sponsor(s).*
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-public-redundant-aliases",
"Documentation, https://mkdocstrings.github.io/griffe-public-redundant-aliases",
"Changelog, https://mkdocstrings.github.io/griffe-public-redundant-aliases/changelog",
"Repository, https://github.com/mkdocstrings/griffe-public-redundant-aliases",
"Issues, https://github.com/mkdocstrings/griffe-public-redundant-aliases/issues",
"Discussions, https://github.com/mkdocstrings/griffe-public-redundant-aliases/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-public-redundant-aliases",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:46.117311 | griffe_public_redundant_aliases-0.3.1.tar.gz | 27,855 | fc/a2/fed1980f87d4b26668aa6dd29dcb58e6b86d0e946a224356f28be6f5a9d6/griffe_public_redundant_aliases-0.3.1.tar.gz | source | sdist | null | false | 534b088df29d39f9b07cc5f1e1de9b7c | 8feff8b746b142f4020c2031d423faf3a1af8734c34c01b1261ea9846eb0ba42 | fca2fed1980f87d4b26668aa6dd29dcb58e6b86d0e946a224356f28be6f5a9d6 | ISC | [
"LICENSE"
] | 132 |
2.4 | griffe-inherited-docstrings | 1.1.3 | Griffe extension for inheriting docstrings. | # Griffe Inherited Docstrings
[](https://github.com/mkdocstrings/griffe-inherited-docstrings/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-inherited-docstrings/)
[](https://pypi.org/project/griffe-inherited-docstrings/)
[](https://app.gitter.im/#/room/#griffe-inherited-docstrings:gitter.im)
Griffe extension for inheriting docstrings.
## Installation
```bash
pip install griffe-inherited-docstrings
```
## Usage
With Python:
```python
import griffe
griffe.load("...", extensions=griffe.load_extensions(["griffe_inherited_docstrings"]))
```
With MkDocs and mkdocstrings:
```yaml
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_inherited_docstrings
```
The extension iterates over every class and its members,
setting docstrings from parent classes when they are not already defined.
The extension accepts a `merge` option which, when set to true,
merges all parent docstrings in the class hierarchy
into the child docstring, if any.
```yaml
plugins:
- mkdocstrings:
handlers:
python:
options:
extensions:
- griffe_inherited_docstrings:
merge: true
```
```python
class A:
def method(self):
"""Method in A."""
class B(A):
def method(self):
...
class C(B):
...
class D(C):
def method(self):
"""Method in D."""
class E(D):
def method(self):
"""Method in E."""
```
With the code above, docstrings will be merged as follows:
Class | Method docstring
----- | ----------------
`A` | Method in A.
`B` | Method in A.
`C` | Method in A.
`D` | Method in A.<br><br>Method in D.
`E` | Method in A.<br><br>Method in D.<br><br>Method in E.
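The merge semantics shown in the table can be sketched in plain Python: walk the MRO from the most distant ancestor down and collect each docstring actually defined along the way. This is an illustrative reimplementation, not the extension's code, which operates on Griffe's object model rather than live classes.

```python
import inspect


def merged_docstring(cls: type, name: str) -> str:
    """Concatenate docstrings defined along the MRO, ancestors first."""
    parts = []
    for base in reversed(cls.__mro__):  # object, ..., cls
        func = base.__dict__.get(name)  # own definitions only
        # Skip classes that don't define the method, and definitions
        # without a docstring (like B.method below).
        if func is not None and func.__doc__:
            parts.append(inspect.cleandoc(func.__doc__))
    return "\n\n".join(parts)


class A:
    def method(self):
        """Method in A."""


class B(A):
    def method(self): ...


class C(B): ...


class D(C):
    def method(self):
        """Method in D."""


print(merged_docstring(D, "method"))  # Method in A.\n\nMethod in D.
```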
WARNING: **Limitation**
This extension runs once on whole packages. There is no way to toggle merging or simple inheritance for specific objects.
## Sponsors
<!-- sponsors-start -->
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-inherited-docstrings",
"Documentation, https://mkdocstrings.github.io/griffe-inherited-docstrings",
"Changelog, https://mkdocstrings.github.io/griffe-inherited-docstrings/changelog",
"Repository, https://github.com/mkdocstrings/griffe-inherited-docstrings",
"Issues, https://github.com/mkdocstrings/griffe-inherited-docstrings/issues",
"Discussions, https://github.com/mkdocstrings/griffe-inherited-docstrings/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-inherited-docstrings",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:44.312662 | griffe_inherited_docstrings-1.1.3.tar.gz | 26,738 | cb/da/fd002dc5f215cd896bfccaebe8b4aa1cdeed8ea1d9d60633685bd61ff933/griffe_inherited_docstrings-1.1.3.tar.gz | source | sdist | null | false | e088e3d54876526dc570580ebc44f3d7 | cd1f937ec9336a790e5425e7f9b92f5a5ab17f292ba86917f1c681c0704cb64e | cbdafd002dc5f215cd896bfccaebe8b4aa1cdeed8ea1d9d60633685bd61ff933 | ISC | [
"LICENSE"
] | 671 |
2.4 | weather-scanner | 0.1.7 | Check weather | # Weather Scanner (Tester Package)


**DISCLAIMER: This is a tester package.**
This project is created for educational purposes to learn how to distribute Python libraries via PyPI. Features are limited and not intended for production use.
## Description
A simple Python library to check current temperatures for major cities in Indonesia using the Open-Meteo free API.
```py
from weather_scanner import InfoCuaca
app = InfoCuaca()
print(app.checkweather("jakarta"))
```
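Under the hood the library queries Open-Meteo's free forecast endpoint. A minimal sketch of such a lookup might look like the following; the city coordinates and helper function here are illustrative assumptions, not the package's actual internals.

```python
# Hypothetical coordinate table for a couple of Indonesian cities.
CITY_COORDS = {
    "jakarta": (-6.2088, 106.8456),
    "surabaya": (-7.2575, 112.7521),
}


def build_forecast_url(city: str) -> str:
    """Build an Open-Meteo current-weather request URL for a known city."""
    lat, lon = CITY_COORDS[city.lower()]
    return (
        "https://api.open-meteo.com/v1/forecast"
        f"?latitude={lat}&longitude={lon}&current_weather=true"
    )


print(build_forecast_url("Jakarta"))
```

Fetching that URL (e.g. with `requests.get`) returns JSON containing a `current_weather` object with the temperature.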
| text/markdown | null | Alwy <alwymourteza1@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"ruff; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:38:43.379483 | weather_scanner-0.1.7.tar.gz | 3,529 | 4e/4b/f1ebd85d7348e7dd7204aa52333c13050e09627e253e4e04cc684087c3b7/weather_scanner-0.1.7.tar.gz | source | sdist | null | false | 1395424b0a83ff405fbf11cf52e535b9 | 1f3ce087f52f007cf8a89d342c203bcacdcb59a3590ae007563372bf2d0bd32c | 4e4bf1ebd85d7348e7dd7204aa52333c13050e09627e253e4e04cc684087c3b7 | null | [
"LICENSE"
] | 203 |
2.4 | griffe-autodocstringstyle | 0.2.1 | Set docstring style to 'auto' for external packages. | # griffe-autodocstringstyle
[](https://github.com/mkdocstrings/griffe-autodocstringstyle/actions?query=workflow%3Aci)
[](https://mkdocstrings.github.io/griffe-autodocstringstyle/)
[](https://pypi.org/project/griffe-autodocstringstyle/)
[](https://app.gitter.im/#/room/#griffe-autodocstringstyle:gitter.im)
Set docstring style to 'auto' for external packages.
## Installation
```bash
pip install griffe-autodocstringstyle
```
## Usage
[Enable](https://mkdocstrings.github.io/griffe/guide/users/extending/#using-extensions) the `griffe_autodocstringstyle` extension. Now all packages loaded from a virtual environment will have their docstrings parsed with the `auto` style (automatically guessing the docstring style).
Use the `exclude` option to pass package names that shouldn't be considered. This can be useful if you must first install your sources as a package before loading/documenting them (meaning they end up in the virtual environment too).
With MkDocs:
```yaml
plugins:
- mkdocstrings:
    handlers:
      python:
        options:
          extensions:
          - griffe_autodocstringstyle:
              # only useful if your sources can't be found
              # in the current working directory
              exclude:
              - my_package
```
## Sponsors
<!-- sponsors-start -->
<div id="premium-sponsors" style="text-align: center;">
<div id="silver-sponsors"><b>Silver sponsors</b><p>
<a href="https://fastapi.tiangolo.com/"><img alt="FastAPI" src="https://raw.githubusercontent.com/tiangolo/fastapi/master/docs/en/docs/img/logo-margin/logo-teal.png" style="height: 200px; "></a><br>
</p></div>
<div id="bronze-sponsors"><b>Bronze sponsors</b><p>
<a href="https://www.nixtla.io/"><picture><source media="(prefers-color-scheme: light)" srcset="https://www.nixtla.io/img/logo/full-black.svg"><source media="(prefers-color-scheme: dark)" srcset="https://www.nixtla.io/img/logo/full-white.svg"><img alt="Nixtla" src="https://www.nixtla.io/img/logo/full-black.svg" style="height: 60px; "></picture></a><br>
</p></div>
</div>
---
<div id="sponsors"><p>
<a href="https://github.com/ofek"><img alt="ofek" src="https://avatars.githubusercontent.com/u/9677399?u=386c330f212ce467ce7119d9615c75d0e9b9f1ce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/samuelcolvin"><img alt="samuelcolvin" src="https://avatars.githubusercontent.com/u/4039449?u=42eb3b833047c8c4b4f647a031eaef148c16d93f&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/tlambert03"><img alt="tlambert03" src="https://avatars.githubusercontent.com/u/1609449?u=922abf0524b47739b37095e553c99488814b05db&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ssbarnea"><img alt="ssbarnea" src="https://avatars.githubusercontent.com/u/102495?u=c7bd9ddf127785286fc939dd18cb02db0a453bce&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/femtomc"><img alt="femtomc" src="https://avatars.githubusercontent.com/u/34410036?u=f13a71daf2a9f0d2da189beaa94250daa629e2d8&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmarqu"><img alt="cmarqu" src="https://avatars.githubusercontent.com/u/360986?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/kolenaIO"><img alt="kolenaIO" src="https://avatars.githubusercontent.com/u/77010818?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ramnes"><img alt="ramnes" src="https://avatars.githubusercontent.com/u/835072?u=3fca03c3ba0051e2eb652b1def2188a94d1e1dc2&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/machow"><img alt="machow" src="https://avatars.githubusercontent.com/u/2574498?u=c41e3d2f758a05102d8075e38d67b9c17d4189d7&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/BenHammersley"><img alt="BenHammersley" src="https://avatars.githubusercontent.com/u/99436?u=4499a7b507541045222ee28ae122dbe3c8d08ab5&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/trevorWieland"><img alt="trevorWieland" src="https://avatars.githubusercontent.com/u/28811461?u=74cc0e3756c1d4e3d66b5c396e1d131ea8a10472&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/MarcoGorelli"><img alt="MarcoGorelli" src="https://avatars.githubusercontent.com/u/33491632?u=7de3a749cac76a60baca9777baf71d043a4f884d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/analog-cbarber"><img alt="analog-cbarber" src="https://avatars.githubusercontent.com/u/7408243?u=642fc2bdcc9904089c62fe5aec4e03ace32da67d&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/OdinManiac"><img alt="OdinManiac" src="https://avatars.githubusercontent.com/u/22727172?u=36ab20970f7f52ae8e7eb67b7fcf491fee01ac22&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rstudio-sponsorship"><img alt="rstudio-sponsorship" src="https://avatars.githubusercontent.com/u/58949051?u=0c471515dd18111be30dfb7669ed5e778970959b&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/schlich"><img alt="schlich" src="https://avatars.githubusercontent.com/u/21191435?u=6f1240adb68f21614d809ae52d66509f46b1e877&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/butterlyn"><img alt="butterlyn" src="https://avatars.githubusercontent.com/u/53323535?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/livingbio"><img alt="livingbio" src="https://avatars.githubusercontent.com/u/10329983?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/NemetschekAllplan"><img alt="NemetschekAllplan" src="https://avatars.githubusercontent.com/u/912034?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/EricJayHartman"><img alt="EricJayHartman" src="https://avatars.githubusercontent.com/u/9259499?u=7e58cc7ec0cd3e85b27aec33656aa0f6612706dd&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/15r10nk"><img alt="15r10nk" src="https://avatars.githubusercontent.com/u/44680962?u=f04826446ff165742efa81e314bd03bf1724d50e&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/activeloopai"><img alt="activeloopai" src="https://avatars.githubusercontent.com/u/34816118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/roboflow"><img alt="roboflow" src="https://avatars.githubusercontent.com/u/53104118?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/cmclaughlin"><img alt="cmclaughlin" src="https://avatars.githubusercontent.com/u/1061109?u=ddf6eec0edd2d11c980f8c3aa96e3d044d4e0468&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blaisep"><img alt="blaisep" src="https://avatars.githubusercontent.com/u/254456?u=97d584b7c0a6faf583aa59975df4f993f671d121&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/RapidataAI"><img alt="RapidataAI" src="https://avatars.githubusercontent.com/u/104209891?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/rodolphebarbanneau"><img alt="rodolphebarbanneau" src="https://avatars.githubusercontent.com/u/46493454?u=6c405452a40c231cdf0b68e97544e07ee956a733&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/theSymbolSyndicate"><img alt="theSymbolSyndicate" src="https://avatars.githubusercontent.com/u/111542255?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/blakeNaccarato"><img alt="blakeNaccarato" src="https://avatars.githubusercontent.com/u/20692450?u=bb919218be30cfa994514f4cf39bb2f7cf952df4&v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/ChargeStorm"><img alt="ChargeStorm" src="https://avatars.githubusercontent.com/u/26000165?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Alphadelta14"><img alt="Alphadelta14" src="https://avatars.githubusercontent.com/u/480845?v=4" style="height: 32px; border-radius: 100%;"></a>
<a href="https://github.com/Cusp-AI"><img alt="Cusp-AI" src="https://avatars.githubusercontent.com/u/178170649?v=4" style="height: 32px; border-radius: 100%;"></a>
</p></div>
*And 7 more private sponsor(s).*
<!-- sponsors-end -->
| text/markdown | null | =?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Software Development",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffelib>=2.0"
] | [] | [] | [] | [
"Homepage, https://mkdocstrings.github.io/griffe-autodocstringstyle",
"Documentation, https://mkdocstrings.github.io/griffe-autodocstringstyle",
"Changelog, https://mkdocstrings.github.io/griffe-autodocstringstyle/changelog",
"Repository, https://github.com/mkdocstrings/griffe-autodocstringstyle",
"Issues, https://github.com/mkdocstrings/griffe-autodocstringstyle/issues",
"Discussions, https://github.com/mkdocstrings/griffe-autodocstringstyle/discussions",
"Gitter, https://gitter.im/mkdocstrings/griffe-autodocstringstyle",
"Funding, https://github.com/sponsors/pawamoy"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:38:42.228467 | griffe_autodocstringstyle-0.2.1.tar.gz | 27,989 | 00/40/e1694363344952a8115256e6e5bb5a6c4623975fe6399c4560898f7cf4fd/griffe_autodocstringstyle-0.2.1.tar.gz | source | sdist | null | false | 5eb5e27c149c39aa32bf90918dc6b6e6 | 6695430968bc5b2a493017cbd47013ac41de6828f1c57b280446c69fcaa5c485 | 0040e1694363344952a8115256e6e5bb5a6c4623975fe6399c4560898f7cf4fd | ISC | [
"LICENSE"
] | 134 |
2.4 | pymc | 5.28.0 | Probabilistic Programming in Python: Bayesian Modeling and Probabilistic Machine Learning with PyTensor | .. image:: https://cdn.rawgit.com/pymc-devs/pymc/main/docs/logos/svg/PyMC_banner.svg
   :height: 100px
   :alt: PyMC logo
   :align: center
|Build Status| |Coverage| |NumFOCUS_badge| |Binder| |Dockerhub| |DOIzenodo| |Conda Downloads|
PyMC (formerly PyMC3) is a Python package for Bayesian statistical modeling
focusing on advanced Markov chain Monte Carlo (MCMC) and variational inference (VI)
algorithms. Its flexibility and extensibility make it applicable to a
large suite of problems.
Check out the `PyMC overview <https://docs.pymc.io/en/latest/learn/core_notebooks/pymc_overview.html>`__, or
one of `the many examples <https://www.pymc.io/projects/examples/en/latest/gallery.html>`__!
For questions on PyMC, head on over to our `PyMC Discourse <https://discourse.pymc.io/>`__ forum.
Features
========
- Intuitive model specification syntax, for example, ``x ~ N(0,1)``
  translates to ``x = Normal('x',0,1)``
- **Powerful sampling algorithms**, such as the `No U-Turn
  Sampler <http://www.jmlr.org/papers/v15/hoffman14a.html>`__, allow complex models
  with thousands of parameters with little specialized knowledge of
  fitting algorithms.
- **Variational inference**: `ADVI <http://www.jmlr.org/papers/v18/16-107.html>`__
  for fast approximate posterior estimation as well as mini-batch ADVI
  for large data sets.
- Relies on `PyTensor <https://pytensor.readthedocs.io/en/latest/>`__ which provides:

  * Computation optimization and dynamic C or JAX compilation
  * NumPy broadcasting and advanced indexing
  * Linear algebra operators
  * Simple extensibility

- Transparent support for missing value imputation
Linear Regression Example
==========================
Plant growth can be influenced by multiple factors, and understanding these relationships is crucial for optimizing agricultural practices.
Imagine we conduct an experiment to predict the growth of a plant based on different environmental variables.
.. code-block:: python
    import pymc as pm

    # Taking draws from a normal distribution
    seed = 42
    x_dist = pm.Normal.dist(shape=(100, 3))
    x_data = pm.draw(x_dist, random_seed=seed)

    # Independent Variables:
    # Sunlight Hours: Number of hours the plant is exposed to sunlight daily.
    # Water Amount: Daily water amount given to the plant (in milliliters).
    # Soil Nitrogen Content: Percentage of nitrogen content in the soil.
    # Dependent Variable:
    # Plant Growth (y): Measured as the increase in plant height (in centimeters) over a certain period.

    # Define coordinate values for all dimensions of the data
    coords = {
        "trial": range(100),
        "features": ["sunlight hours", "water amount", "soil nitrogen"],
    }

    # Define generative model
    with pm.Model(coords=coords) as generative_model:
        x = pm.Data("x", x_data, dims=["trial", "features"])

        # Model parameters
        betas = pm.Normal("betas", dims="features")
        sigma = pm.HalfNormal("sigma")

        # Linear model
        mu = x @ betas

        # Likelihood
        # Assuming we measure deviation of each plant from baseline
        plant_growth = pm.Normal("plant growth", mu, sigma, dims="trial")

    # Generating data from model by fixing parameters
    fixed_parameters = {
        "betas": [5, 20, 2],
        "sigma": 0.5,
    }
    with pm.do(generative_model, fixed_parameters) as synthetic_model:
        idata = pm.sample_prior_predictive(random_seed=seed)  # Sample from prior predictive distribution.
        synthetic_y = idata.prior["plant growth"].sel(draw=0, chain=0)

    # Infer parameters conditioned on observed data
    with pm.observe(generative_model, {"plant growth": synthetic_y}) as inference_model:
        idata = pm.sample(random_seed=seed)
        summary = pm.stats.summary(idata, var_names=["betas", "sigma"])

    print(summary)
From the summary, we can see that the means of the inferred parameters are very close to the fixed parameters:
===================== ====== ===== ======== ========= =========== ========= ========== ========== =======
Params mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
===================== ====== ===== ======== ========= =========== ========= ========== ========== =======
betas[sunlight hours] 4.972 0.054 4.866 5.066 0.001 0.001 3003 1257 1
betas[water amount] 19.963 0.051 19.872 20.062 0.001 0.001 3112 1658 1
betas[soil nitrogen] 1.994 0.055 1.899 2.107 0.001 0.001 3221 1559 1
sigma 0.511 0.037 0.438 0.575 0.001 0 2945 1522 1
===================== ====== ===== ======== ========= =========== ========= ========== ========== =======
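The same parameter-recovery idea can be sanity-checked outside PyMC. Here is a minimal plain-Python sketch (a single no-intercept coefficient; the sample size and seed are arbitrary assumptions, not part of the model above) showing that ordinary least squares on synthetic data recovers a fixed coefficient, just as the posterior means above sit close to the fixed ``betas``:

```python
import random

random.seed(42)

# Fixed "true" parameter, analogous to the fixed betas above
true_beta = 5.0
sigma = 0.5

# Synthetic data: y = beta * x + noise
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
ys = [true_beta * x + random.gauss(0.0, sigma) for x in xs]

# Ordinary least squares estimate for a single no-intercept coefficient
beta_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(round(beta_hat, 2))  # close to 5.0
```

With 1000 draws the standard error is roughly ``sigma / sqrt(sum(x**2))``, about 0.016 here, so the estimate lands well within a tenth of the true value.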
.. code-block:: python
    # Simulate new data conditioned on inferred parameters
    new_x_data = pm.draw(
        pm.Normal.dist(shape=(3, 3)),
        random_seed=seed,
    )
    new_coords = coords | {"trial": [0, 1, 2]}

    with inference_model:
        pm.set_data({"x": new_x_data}, coords=new_coords)
        pm.sample_posterior_predictive(
            idata,
            predictions=True,
            extend_inferencedata=True,
            random_seed=seed,
        )

    pm.stats.summary(idata.predictions, kind="stats")
The new data conditioned on inferred parameters would look like:
================ ======== ======= ======== =========
Output mean sd hdi_3% hdi_97%
================ ======== ======= ======== =========
plant growth[0] 14.229 0.515 13.325 15.272
plant growth[1] 24.418 0.511 23.428 25.326
plant growth[2] -6.747 0.511 -7.740 -5.797
================ ======== ======= ======== =========
.. code-block:: python
    # Simulate new data, under a scenario where the first beta is zero
    with pm.do(
        inference_model,
        {inference_model["betas"]: inference_model["betas"] * [0, 1, 1]},
    ) as plant_growth_model:
        new_predictions = pm.sample_posterior_predictive(
            idata,
            predictions=True,
            random_seed=seed,
        )

    pm.stats.summary(new_predictions, kind="stats")
The new data, under the above scenario would look like:
================ ======== ======= ======== =========
Output mean sd hdi_3% hdi_97%
================ ======== ======= ======== =========
plant growth[0] 12.149 0.515 11.193 13.135
plant growth[1] 29.809 0.508 28.832 30.717
plant growth[2] -0.131 0.507 -1.121 0.791
================ ======== ======= ======== =========
Getting started
===============
If you already know about Bayesian statistics:
----------------------------------------------
- `API quickstart guide <https://www.pymc.io/projects/examples/en/latest/introductory/api_quickstart.html>`__
- The `PyMC tutorial <https://docs.pymc.io/en/latest/learn/core_notebooks/pymc_overview.html>`__
- `PyMC examples <https://www.pymc.io/projects/examples/en/latest/gallery.html>`__ and the `API reference <https://docs.pymc.io/en/stable/api.html>`__
Learn Bayesian statistics with a book together with PyMC
--------------------------------------------------------
- `Bayesian Analysis with Python <http://bap.com.ar/>`__ (third edition) by Osvaldo Martin: Great introductory book.
- `Probabilistic Programming and Bayesian Methods for Hackers <https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers>`__: Fantastic book with many applied code examples.
- `PyMC port of the book "Doing Bayesian Data Analysis" by John Kruschke <https://github.com/cluhmann/DBDA-python>`__ as well as the `first edition <https://github.com/aloctavodia/Doing_bayesian_data_analysis>`__.
- `PyMC port of the book "Statistical Rethinking A Bayesian Course with Examples in R and Stan" by Richard McElreath <https://github.com/pymc-devs/resources/tree/master/Rethinking>`__
- `PyMC port of the book "Bayesian Cognitive Modeling" by Michael Lee and EJ Wagenmakers <https://github.com/pymc-devs/resources/tree/master/BCM>`__: Focused on using Bayesian statistics in cognitive modeling.
See also the section on books using PyMC on `our website <https://www.pymc.io/projects/docs/en/stable/learn/books.html>`__.
Audio & Video
-------------
- Here is a `YouTube playlist <https://www.youtube.com/playlist?list=PL1Ma_1DBbE82OVW8Fz_6Ts1oOeyOAiovy>`__ gathering several talks on PyMC.
- You can also find all the talks given at **PyMCon 2020** `here <https://discourse.pymc.io/c/pymcon/2020talks/15>`__.
- The `"Learning Bayesian Statistics" podcast <https://www.learnbayesstats.com/>`__ helps you discover and stay up-to-date with the vast Bayesian community. Bonus: it's hosted by Alex Andorra, one of the PyMC core devs!
Installation
============
To install PyMC on your system, follow the instructions on the `installation guide <https://www.pymc.io/projects/docs/en/latest/installation.html>`__.
Citing PyMC
===========
Please choose from the following:
- |DOIpaper| *PyMC: A Modern and Comprehensive Probabilistic Programming Framework in Python*, Abril-Pla O, Andreani V, Carroll C, Dong L, Fonnesbeck CJ, Kochurov M, Kumar R, Lao J, Luhmann CC, Martin OA, Osthege M, Vieira R, Wiecki T, Zinkov R. (2023)
- BibTeX version
.. code:: bibtex
    @article{pymc2023,
      title = {{PyMC}: A Modern and Comprehensive Probabilistic Programming Framework in {P}ython},
      author = {Oriol Abril-Pla and Virgile Andreani and Colin Carroll and Larry Dong and Christopher J. Fonnesbeck and Maxim Kochurov and Ravin Kumar and Junpeng Lao and Christian C. Luhmann and Osvaldo A. Martin and Michael Osthege and Ricardo Vieira and Thomas Wiecki and Robert Zinkov},
      journal = {{PeerJ} Computer Science},
      volume = {9},
      number = {e1516},
      doi = {10.7717/peerj-cs.1516},
      year = {2023}
    }
- |DOIzenodo| A DOI for all versions.
- DOIs for specific versions are shown on Zenodo and under `Releases <https://github.com/pymc-devs/pymc/releases>`_
.. |DOIpaper| image:: https://img.shields.io/badge/DOI-10.7717%2Fpeerj--cs.1516-blue.svg
   :target: https://doi.org/10.7717/peerj-cs.1516
.. |DOIzenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.4603970.svg
   :target: https://doi.org/10.5281/zenodo.4603970
Contact
=======
We are using `discourse.pymc.io <https://discourse.pymc.io/>`__ as our main communication channel.
To ask a question regarding modeling or usage of PyMC, we encourage posting to our Discourse forum under the `“Questions” Category <https://discourse.pymc.io/c/questions>`__. You can also suggest features in the `“Development” Category <https://discourse.pymc.io/c/development>`__.
Requests for non-technical information about the project are also welcome on Discourse;
we also use it internally for general announcements and governance-related processes.
You can also follow us on these social media platforms for updates and other announcements:
- `LinkedIn @pymc <https://www.linkedin.com/company/pymc/>`__
- `YouTube @PyMCDevelopers <https://www.youtube.com/c/PyMCDevelopers>`__
- `X @pymc_devs <https://x.com/pymc_devs>`__
- `Mastodon @pymc@bayes.club <https://bayes.club/@pymc>`__
To report an issue with PyMC please use the `issue tracker <https://github.com/pymc-devs/pymc/issues>`__.
License
=======
`Apache License, Version
2.0 <https://github.com/pymc-devs/pymc/blob/main/LICENSE>`__
Software using PyMC
===================
General purpose
---------------
- `Bambi <https://github.com/bambinos/bambi>`__: BAyesian Model-Building Interface (BAMBI) in Python.
- `calibr8 <https://calibr8.readthedocs.io>`__: A toolbox for constructing detailed observation models to be used as likelihoods in PyMC.
- `gumbi <https://github.com/JohnGoertz/Gumbi>`__: A high-level interface for building GP models.
- `SunODE <https://github.com/aseyboldt/sunode>`__: Fast ODE solver, much faster than the one that comes with PyMC.
- `pymc-learn <https://github.com/pymc-learn/pymc-learn>`__: Custom PyMC models built on top of pymc3_models/scikit-learn API
Domain specific
---------------
- `Exoplanet <https://github.com/dfm/exoplanet>`__: a toolkit for modeling of transit and/or radial velocity observations of exoplanets and other astronomical time series.
- `beat <https://github.com/hvasbath/beat>`__: Bayesian Earthquake Analysis Tool.
- `CausalPy <https://github.com/pymc-labs/CausalPy>`__: A package focusing on causal inference in quasi-experimental settings.
- `PyMC-Marketing <https://github.com/pymc-labs/pymc-marketing>`__: Bayesian marketing toolbox for marketing mix modeling, customer lifetime value, and more.
See also the `ecosystem page <https://www.pymc.io/about/ecosystem.html>`__ on our website. Please contact us if your software is not listed here.
Papers citing PyMC
==================
See Google Scholar `here <https://scholar.google.com/scholar?cites=6357998555684300962>`__ and `here <https://scholar.google.com/scholar?cites=6936955228135731011>`__ for a continuously updated list.
Contributors
============
The `GitHub contributor page <https://github.com/pymc-devs/pymc/graphs/contributors>`__ shows the people who have added content to this repo.
It includes a large portion of contributors to the PyMC project, but not all of them: other
contributors have added content to other repos of the ``pymc-devs`` GitHub organization, or have contributed
through project spaces outside of GitHub, such as `our Discourse forum <https://discourse.pymc.io/>`__.
If you are interested in contributing yourself, read our `Code of Conduct <https://github.com/pymc-devs/pymc/blob/main/CODE_OF_CONDUCT.md>`__
and `Contributing guide <https://www.pymc.io/projects/docs/en/latest/contributing/index.html>`__.
Support
=======
PyMC is a non-profit project under NumFOCUS umbrella. If you want to support PyMC financially, you can donate `here <https://numfocus.org/donate-to-pymc>`__.
Professional Consulting Support
===============================
You can get professional consulting support from `PyMC Labs <https://www.pymc-labs.io>`__.
Sponsors
========
|NumFOCUS|
|PyMCLabs|
|OpenWoundResearch|
Thanks to our contributors
==========================
|contributors|
.. |Binder| image:: https://mybinder.org/badge_logo.svg
   :target: https://mybinder.org/v2/gh/pymc-devs/pymc/main?filepath=%2Fdocs%2Fsource%2Fnotebooks
.. |Build Status| image:: https://github.com/pymc-devs/pymc/workflows/tests/badge.svg
   :target: https://github.com/pymc-devs/pymc/actions?query=workflow%3Atests+branch%3Amain
.. |Coverage| image:: https://codecov.io/gh/pymc-devs/pymc/branch/main/graph/badge.svg
   :target: https://codecov.io/gh/pymc-devs/pymc
.. |Dockerhub| image:: https://img.shields.io/docker/automated/pymc/pymc.svg
   :target: https://hub.docker.com/r/pymc/pymc
.. |NumFOCUS_badge| image:: https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A
   :target: http://www.numfocus.org/
.. |NumFOCUS| image:: https://github.com/pymc-devs/brand/blob/main/sponsors/sponsor_logos/sponsor_numfocus.png?raw=true
   :target: http://www.numfocus.org/
.. |PyMCLabs| image:: https://github.com/pymc-devs/brand/blob/main/sponsors/sponsor_logos/sponsor_pymc_labs.png?raw=true
   :target: https://pymc-labs.com
.. |OpenWoundResearch| image:: https://github.com/pymc-devs/brand/blob/main/sponsors/sponsor_logos/owr/sponsor_owr.png?raw=true
   :target: https://www.openwoundresearch.com/
.. |contributors| image:: https://contrib.rocks/image?repo=pymc-devs/pymc
   :target: https://github.com/pymc-devs/pymc/graphs/contributors
.. |Conda Downloads| image:: https://anaconda.org/conda-forge/pymc/badges/downloads.svg
   :target: https://anaconda.org/conda-forge/pymc
| text/x-rst | null | null | PyMC Developers | pymc.devs@gmail.com | Apache License, Version 2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: Apache Software License",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Operating System :: OS Independent"
] | [] | http://github.com/pymc-devs/pymc | null | >=3.11 | [] | [] | [] | [
"arviz>=0.13.0",
"cachetools<7,>=4.2.1",
"cloudpickle",
"numpy>=1.25.0",
"pandas>=0.24.0",
"pytensor<2.39,>=2.38.0",
"rich>=13.7.1",
"scipy>=1.4.1",
"threadpoolctl<4.0.0,>=3.1.0",
"typing-extensions>=3.7.4"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:38:22.551823 | pymc-5.28.0.tar.gz | 506,476 | c0/7b/b8ccacceb44ba06806611c2fd35e05429c051497f46d55f1f0ee61a6c442/pymc-5.28.0.tar.gz | source | sdist | null | false | dc439062b97554c0e9713e383d258879 | cea264e72b1040399db1f33a4585ce45b7223eb81c29c299e7ed6e3e26ec2129 | c07bb8ccacceb44ba06806611c2fd35e05429c051497f46d55f1f0ee61a6c442 | null | [
"LICENSE"
] | 2,095 |
2.4 | vietnam-provinces | 2026.2.1 | Library to provide list of Vietnam administrative divisions (tỉnh thành, quận huyện, phường xã). | ================
VietnamProvinces
================
|image love| |image pypi| |common changelog|
[`Tiếng Việt <vietnamese_>`_]
Library to provide list of Vietnam administrative divisions (tỉnh thành, phường xã, after the rearrangement in July 2025) with the name and code as defined by `National Statistics Office of Viet Nam <nso_vn_>`_.
Example:
.. code-block:: json
    {
      "name": "Tuyên Quang",
      "code": 8,
      "codename": "tuyen_quang",
      "division_type": "tỉnh",
      "phone_code": 207,
      "wards": [
        {
          "name": "Xã Thượng Lâm",
          "code": 2269,
          "codename": "xa_thuong_lam",
          "division_type": "xã",
          "short_codename": "thuong_lam"
        },
        {
          "name": "Xã Lâm Bình",
          "code": 2266,
          "codename": "xa_lam_binh",
          "division_type": "xã",
          "short_codename": "lam_binh"
        }
      ]
    }
This library provides data in these forms:
1. JSON
This data is suitable for applications which don't need to access the data often. They are fine with loading JSON and extracting information from it. The JSON files are saved in the *data* folder. You can get the file path via the ``vietnam_provinces.NESTED_DIVISIONS_JSON_PATH`` variable.
Note that this variable only holds the path of the file, not the content. It is up to the application developer to use any method to parse the JSON. For example:
.. code-block:: python
    import orjson
    import rapidjson

    from vietnam_provinces import NESTED_DIVISIONS_JSON_PATH

    # With rapidjson
    with NESTED_DIVISIONS_JSON_PATH.open() as f:
        rapidjson.load(f)

    # With orjson
    orjson.loads(NESTED_DIVISIONS_JSON_PATH.read_bytes())
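The standard library's ``json`` module works the same way. A self-contained sketch, parsing an inline fragment shaped like the sample shown earlier (an inline string is used here instead of the real data file):

```python
import json

# Inline fragment mirroring the structure of the nested-divisions JSON
sample = """
{
  "name": "Tuyên Quang",
  "code": 8,
  "codename": "tuyen_quang",
  "division_type": "tỉnh",
  "phone_code": 207,
  "wards": [
    {"name": "Xã Thượng Lâm", "code": 2269, "codename": "xa_thuong_lam"}
  ]
}
"""

province = json.loads(sample)
print(province["name"], province["phone_code"])    # Tuyên Quang 207
print([w["codename"] for w in province["wards"]])  # ['xa_thuong_lam']
```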
2. Python data type
This data is useful for some applications which need to access the data more often.
There are two kinds of objects: the first represents a single province or ward; the second is a province or ward code in the form of an ``enum``. You can import them in Python code:
.. code-block:: python
    >>> from vietnam_provinces import ProvinceCode, Province, WardCode, Ward
    >>> Province.from_code(ProvinceCode.P_15)
    Province(name='Tỉnh Lào Cai', code=<ProvinceCode.P_15: 15>, division_type=<VietNamDivisionType.TINH: 'tỉnh'>, codename='lao_cai', phone_code=214)
    >>> Ward.from_code(23425)
    Ward(name='Xã Tu Mơ Rông', code=<WardCode.W_23425: 23425>, division_type=<VietNamDivisionType.XA: 'xã'>, codename='xa_tu_mo_rong', province_code=<ProvinceCode.P_51: 51>)
    >>> # Search current wards by legacy data (pre-2025)
    >>> Ward.search_from_legacy(name='phu my')
    (Ward(name='Phường Phú Mỹ', ...), Ward(name='Xã Phú Mỹ', ...), ...)
    >>> # Get legacy wards that were merged to form a new ward
    >>> ward = Ward.from_code(4)  # Phường Ba Đình
    >>> ward.get_legacy_sources()
    (Ward(name='Phường Trúc Bạch', ...), Ward(name='Phường Quán Thánh', ...), ...)
    >>> # Search current wards by legacy district (districts were dissolved in 2025)
    >>> Ward.search_from_legacy_district(code=748)  # Thành phố Bà Rịa (old)
    (Ward(name='Phường Bà Rịa', ...), Ward(name='Phường Long Hương', ...), ...)
    >>> # Search current provinces by legacy province code (pre-2025)
    >>> Province.search_from_legacy(code=77)  # Tỉnh Bà Rịa - Vũng Tàu
    (Province(name='Thành phố Hồ Chí Minh', ...),)
    >>> # Get legacy provinces that were merged to form a new province
    >>> province = Province.from_code(79)  # Thành phố Hồ Chí Minh
    >>> province.get_legacy_sources()
    (Province(name='Tỉnh Bình Dương', ...), Province(name='Tỉnh Bà Rịa - Vũng Tàu', ...), Province(name='Thành phố Hồ Chí Minh', ...))
The pre-2025 data types can then be used as:
.. code-block:: python
    from vietnam_provinces.legacy import Province, District, Ward
    from vietnam_provinces.legacy.codes import ProvinceCode

    # Look up by code
    province = Province.from_code(ProvinceCode.P_01)

    # Iterate over all
    for p in Province.iter_all():
        print(p.name)
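For illustration, the ``codename`` values seen above can be approximated by stripping Vietnamese diacritics from the display name. A rough stdlib sketch (not the library's actual implementation, which may differ):

```python
import unicodedata

def codename(name: str) -> str:
    """Approximate a codename slug such as 'tuyen_quang' from a display name."""
    # đ/Đ do not decompose under NFD, so map them explicitly
    name = name.replace("đ", "d").replace("Đ", "D")
    # Decompose accented characters, then drop the combining marks
    decomposed = unicodedata.normalize("NFD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return "_".join(stripped.lower().split())

print(codename("Tuyên Quang"))    # tuyen_quang
print(codename("Xã Thượng Lâm"))  # xa_thuong_lam
```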
To know if the data is up-to-date, check the ``__data_version__`` attribute of the module:
.. code-block:: python
    >>> import vietnam_provinces
    >>> vietnam_provinces.__data_version__
    '2026-02-21'
Install
-------
.. code-block:: sh
    python -m pip install vietnam-provinces
    # or
    uv add vietnam-provinces
This library is compatible with Python 3.12+.
Development
-----------
In development, this project has a tool to scrape and convert data from the National Statistics Office website.
The tool is tested on Linux only (may not run on Windows).
Update data
~~~~~~~~~~~
To scrape data directly from the National Statistics Office website and generate JSON:
.. code-block:: sh
    python3 -m dev scrape -f nested-json -o vietnam_provinces/data/nested-divisions.json
Or to generate Python code directly:
.. code-block:: sh
    python3 -m dev scrape -f python
You can run
.. code-block:: sh
    python3 -m dev scrape --help
to see more options of that tool.
Note that this tool is only available in the source folder (cloned from Git). It is not included in the distributable Python package.
Generate Python code
~~~~~~~~~~~~~~~~~~~~
.. code-block:: sh
    python3 -m dev scrape -f python
Generate code for pre-2025 data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To generate Python code for pre-2025 administrative divisions (3-level hierarchy: Province -> District -> Ward):
.. code-block:: sh
    python3 -m dev gen-legacy -c dev/seed-data/Pre-2025-07/Xa_2025-01-04.csv
This generates two files:
1. *vietnam_provinces/legacy/codes.py* - Enum definitions for ``ProvinceCode``, ``DistrictCode``, ``WardCode``.
2. *vietnam_provinces/legacy/lookup.py* - Lookup mappings for ``Province``, ``District``, ``Ward`` objects.
Data source
~~~~~~~~~~~
- Name and code of provinces, and wards: `National Statistics Office of Viet Nam <nso_vn_>`_.
- Phone area code: `Thái Bình province's department of Information and Communication <tb_ic_>`_.
Credit
------
Given to you by `Nguyễn Hồng Quân <quan_>`_, after nights and weekends.
.. |image love| image:: https://madewithlove.now.sh/vn?heart=true&colorA=%23ffcd00&colorB=%23da251d
.. |image pypi| image:: https://badgen.net/pypi/v/vietnam-provinces
:target: https://pypi.org/project/vietnam-provinces/
.. |common changelog| image:: https://common-changelog.org/badge.svg
:target: https://common-changelog.org
.. _vietnamese: README.vi_VN.rst
.. _nso_vn: https://danhmuchanhchinh.nso.gov.vn/
.. _draft_new_units: https://chinhphu.vn/du-thao-vbqppl/du-thao-quyet-dinh-cua-thu-tuong-chinh-phu-ban-hanh-bang-danh-muc-va-ma-so-cac-don-vi-hanh-chinh-7546
.. _tb_ic: https://sotttt.thaibinh.gov.vn/tin-tuc/buu-chinh-vien-thong/tra-cuu-ma-vung-dien-thoai-co-dinh-mat-dat-ma-mang-dien-thoa2.html
.. _dataclass: https://docs.python.org/3/library/dataclasses.html
.. _pydantic: https://pypi.org/project/pydantic/
.. _quan: https://quan.hoabinh.vn
| text/x-rst | null | Nguyễn Hồng Quân <ng.hong.quan@gmail.com> | null | null | null | Vietnam, administrative, division, locality | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Information Technology",
"Intended Audience :: Telecommunications Industry",
"Natural Language :: Vietnamese",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Localization"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"Changelog, https://github.com/sunshine-tech/VietnamProvinces/blob/main/CHANGELOG.md",
"Documentation, https://vietnamprovinces.readthedocs.io",
"Homepage, https://github.com/sunshine-tech/VietnamProvinces",
"Repository, https://github.com/sunshine-tech/VietnamProvinces.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"25.10","id":"questing","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T09:37:54.777524 | vietnam_provinces-2026.2.1-py3-none-any.whl | 693,428 | 81/66/9d73097ba7e5e7fb8876aa1527f69b3bc16763c829be65c7e2a1c4498ba2/vietnam_provinces-2026.2.1-py3-none-any.whl | py3 | bdist_wheel | null | false | ec283bb4065baeb4c5d0cac303723d39 | fee8acfd63b8d626412ee6c576daf3397bd17286df07fa4e72edc82ac487350e | 81669d73097ba7e5e7fb8876aa1527f69b3bc16763c829be65c7e2a1c4498ba2 | GPL-3.0-or-later | [] | 212 |
2.4 | mindbot | 0.1.0 | MindBot - AI Assistant powered by Thryve | # MindBot Development Plan
> A multi-channel AI assistant built on Thryve
## Project Vision
MindBot is an out-of-the-box, multi-channel AI assistant.
## Phases
| Phase | Scope | Priority |
|------|------|--------|
| [Phase 1](phase1_core.md) | Project initialization and core classes | P0 |
| [Phase 2](phase2_cli.md) | CLI implementation | P0 |
| [Phase 3](phase3_channels.md) | Channel implementations | P1 |
| [Phase 4](phase4_tools.md) | Tool system | P1 |
| [Phase 5](phase5_integration.md) | Polish and integration | P2 |
## Tech Stack
- **Core framework**: Thryve v0.2.0
- **CLI**: typer + prompt_toolkit + rich
- **Channels**: aiohttp/FastAPI
- **Tools**: built-in + MCP
## Quick Start
```bash
# Install
pip install mindbot
# Initialize
mindbot onboard
# Chat
mindbot chat -m "Hello"
# Interactive shell
mindbot shell
# Start the server
mindbot serve --port 8080
```
| text/markdown | MindBot Team | null | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"thryve>=0.1.0",
"pydantic>=2.0",
"pydantic-settings>=2.0",
"pyyaml>=6.0",
"loguru>=0.7",
"typer>=0.12",
"prompt-toolkit>=3.0",
"rich>=13.0",
"croniter>=1.4",
"aiohttp>=3.9"
] | [] | [] | [] | [] | poetry/2.3.0 CPython/3.12.12 Darwin/25.2.0 | 2026-02-21T09:37:22.663097 | mindbot-0.1.0-py3-none-any.whl | 28,166 | 75/ca/50ae4d82b992c11342b6c15284879db872eb75f50e2d6be04f4f343260e3/mindbot-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 9413d16c8d1e8b225e50e5181fbde77e | 8dea65c3d2598a04108835acf9ab8a3b2229c36d61feace71ce76ade9abfb64e | 75ca50ae4d82b992c11342b6c15284879db872eb75f50e2d6be04f4f343260e3 | null | [
"LICENSE"
] | 222 |
2.4 | soracli | 1.0.0 | Real-time terminal ambient environment engine that synchronizes real-world weather conditions with animated terminal simulations | # SoraCLI 🌦️
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
**SoraCLI** is a terminal weather simulation tool that displays animated ASCII weather (rain, snow, thunderstorms, fog) in your terminal. It can sync with real-world weather data or run in demo mode.
```
┌────────────────────────────────────────────────────────────────┐
│ ┃ │ ┃ │ ┃ │ ┃ │ ┃ │ ┃ │
│ │ ┃ │ ┃ │ ┃ │ ┃ │ ┃ │ │
│ ┃ │ ┃ │ ┃ │ ┃ │ ┃ │ ┃ │ ┃ │ │
│ · ° · ° · ° · ° · ° · ° · ° · ° · ° │
└────────────────────────────────────────────────────────────────┘
```
## Installation
### Option 1: Install from source
```bash
git clone https://github.com/yourusername/SoraCLI.git
cd SoraCLI
pip install .
```
### Option 2: Development mode
```bash
git clone https://github.com/yourusername/SoraCLI.git
cd SoraCLI
pip install -e .
```
### Verify installation
```bash
soracli --version
soracli --help
```
> **Note:** If the `soracli` command is not found, use `python -m soracli` instead, or add `~/.local/bin` to your PATH.
## Quick Start
### Demo Mode (No API key needed!)
```bash
# Rain
soracli demo --rain
# Snow
soracli demo --snow
# Thunderstorm with lightning
soracli demo --storm
# Fog/mist
soracli demo --fog
# Clear night sky (twinkling stars)
soracli demo --clear --night
# With custom theme and intensity
soracli demo --rain --theme cyberpunk --intensity 1.5
# Run for specific duration (30 seconds)
soracli demo --rain --duration 30
```
**Press `Ctrl+C` to exit any animation.**
### Live Weather Mode (Requires free API key)
1. **Get a free API key** from [OpenWeatherMap](https://openweathermap.org/api)
2. **Set your API key:**
```bash
# Linux/macOS
export OPENWEATHERMAP_API_KEY="your_api_key_here"
# Windows PowerShell
$env:OPENWEATHERMAP_API_KEY = "your_api_key_here"
# Windows CMD
set OPENWEATHERMAP_API_KEY=your_api_key_here
```
3. **Start live weather:**
```bash
soracli start --location "Tokyo"
soracli start --location "New York" --theme minimal
```
## Daemon Mode (Background Weather Bar)
Run a persistent weather animation bar at the top of your terminal using tmux.
### Requirements
```bash
# Ubuntu/Debian
sudo apt install tmux
# macOS
brew install tmux
# Arch Linux
sudo pacman -S tmux
```
### Usage
```bash
# Start daemon (5-line weather bar at top)
soracli daemon start
# Start with larger panel
soracli daemon start --panel-size 10
# Check status
soracli daemon status
# Attach to see the split view
soracli daemon attach
# Detach: Press Ctrl+B, then D
# Stop daemon
soracli daemon stop
```
### Layout
```
┌────────────────────────────────────────────┐
│ │╿|┃╿||│⸽╽╿ (rain animation) │ ← 5 lines
├────────────────────────────────────────────┤
│ $ your normal shell │
│ $ commands work as usual │ ← Rest of terminal
└────────────────────────────────────────────┘
```
## All Commands
| Command | Description |
|---------|-------------|
| `soracli demo --rain` | Rain simulation |
| `soracli demo --snow` | Snow simulation |
| `soracli demo --storm` | Thunderstorm with lightning |
| `soracli demo --fog` | Fog/mist effect |
| `soracli demo --clear --night` | Stars at night |
| `soracli start -l "City"` | Live weather for location |
| `soracli init` | Interactive configuration |
| `soracli status` | Show current config |
| `soracli themes` | List themes |
| `soracli daemon start` | Start background mode |
| `soracli daemon stop` | Stop background mode |
## Options
### Demo Options
```bash
soracli demo --rain \
--theme cyberpunk \ # default, cyberpunk, minimal, nature
--intensity 1.5 \ # 0.1 to 2.0
--duration 60 # seconds (0 = infinite)
```
### Daemon Options
```bash
soracli daemon start \
--panel-size 8 \ # Height in lines (default: 5)
--theme cyberpunk # Visual theme
```
## Themes
- **default** - Classic terminal colors
- **cyberpunk** - Neon cyan/magenta
- **minimal** - Subtle, clean
- **nature** - Earthy greens
### Custom Theme
Create `~/.soracli/themes/mytheme.json`:
```json
{
"name": "mytheme",
"rain_colors": ["cyan", "blue", "white"],
"snow_colors": ["white", "bright_white"],
"lightning_color": "bright_yellow"
}
```
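Theme files are plain JSON, so you can sanity-check one before SoraCLI picks it up. This is an illustrative sketch only; the required keys are taken from the example above, and anything beyond them is an assumption, not SoraCLI's actual validation logic:

```python
import json
from pathlib import Path

REQUIRED_KEYS = ("name", "rain_colors", "snow_colors", "lightning_color")

def check_theme(path: Path) -> dict:
    """Load a theme JSON file and verify the keys used in the example above."""
    theme = json.loads(path.read_text())
    missing = [key for key in REQUIRED_KEYS if key not in theme]
    if missing:
        raise KeyError(f"theme {path.name} is missing keys: {missing}")
    return theme
```

For example, `check_theme(Path.home() / ".soracli" / "themes" / "mytheme.json")` returns the parsed theme dict, or raises `KeyError` naming whatever is missing.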
## Troubleshooting
### "soracli: command not found"
```bash
# Use python -m instead
python -m soracli demo --rain
# Or add to PATH
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```
### "No API key found"
Use demo mode (no key needed) or set the environment variable.
### Daemon issues
```bash
tmux kill-server # Kill stuck sessions
soracli daemon start # Start fresh
```
## Development
```bash
pip install -e ".[dev]"
pytest # Run tests
pytest --cov=soracli # With coverage
```
## Requirements
- Python 3.8+
- click, requests, pyyaml
- tmux (daemon mode only)
## License
MIT License
---
**Enjoy your ambient terminal weather!** 🌧️❄️⚡🌫️✨
| text/markdown | SoraCLI Contributors | null | SoraCLI Contributors | null | MIT | cli, terminal, weather, animation, ascii, ambient, simulation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Terminals",
"Topic :: Multimedia :: Graphics",
"Topic :: Scientific/Engineering :: Atmospheric Science",
"Typing :: Typed"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.0.0",
"requests>=2.25.0",
"pyyaml>=6.0",
"aiohttp>=3.8.0; extra == \"async\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.20.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"types-requests>=2.28.0; extra == \"dev\"",
"types-PyYAML>=6.0.0; extra == \"dev\"",
"soracli[async,dev]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/soracli/soracli",
"Documentation, https://github.com/soracli/soracli#readme",
"Repository, https://github.com/soracli/soracli.git",
"Issues, https://github.com/soracli/soracli/issues",
"Changelog, https://github.com/soracli/soracli/releases"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-21T09:36:46.900325 | soracli-1.0.0.tar.gz | 41,056 | 25/61/fb2a12d5f276ca0e044044fb18212ea9d4168155f1bd03b278df71566e47/soracli-1.0.0.tar.gz | source | sdist | null | false | 3554484c2aa88fe75918d5df0c022587 | a99df2b00700637cab6859d6e3405e321b180a45bf8f5f445e541f3d33a660a4 | 2561fb2a12d5f276ca0e044044fb18212ea9d4168155f1bd03b278df71566e47 | null | [
"LICENSE"
] | 231 |
2.4 | uipath-dev | 0.0.55 | UiPath Developer Console | # UiPath Developer Console
[](https://pypi.org/project/uipath-dev/)
[](https://pypi.org/project/uipath-dev/)
[](https://pypi.org/project/uipath-dev/)
Interactive terminal application for building, testing, and debugging UiPath Python runtimes, agents, and automation scripts.
## Overview
The Developer Console provides a local environment for developers who are building or experimenting with Python-based UiPath runtimes.
It integrates with the [`uipath-runtime`](https://github.com/uipath/uipath-runtime-python) SDK to execute agents and visualize their behavior in real time using the [`textual`](https://github.com/textualize/textual) framework.
This tool is designed for:
- Developers building **UiPath agents** or **custom runtime integrations**
- Python engineers testing **standalone automation scripts** before deployment
- Contributors exploring **runtime orchestration** and **execution traces**
## Installation
```bash
uv add uipath-dev
```
## Features
- Run and inspect Python runtimes interactively
- View structured logs, output, and OpenTelemetry traces
- Export and review execution history
---



## Development
Launch the Developer Console with mocked data:
```bash
uv run uipath-dev
```
To run tests:
```bash
pytest
```
### :heart: Special Thanks
A huge thank-you to the open-source community and the maintainers of the libraries that make this project possible:
- [OpenTelemetry](https://github.com/open-telemetry/opentelemetry-python) for observability and tracing.
- [Pyperclip](https://github.com/asweigart/pyperclip) for cross-platform clipboard operations.
- [Textual](https://github.com/Textualize/textual) for the powerful TUI framework that powers the developer console.
| text/markdown | null | null | null | Cristian Pufu <cristian.pufu@uipath.com> | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastapi>=0.128.8",
"pyperclip<2.0.0,>=1.11.0",
"textual<8.0.0,>=7.5.0",
"uipath-runtime<0.10.0,>=0.9.0",
"uvicorn[standard]>=0.40.0"
] | [] | [] | [] | [
"Homepage, https://uipath.com",
"Repository, https://github.com/UiPath/uipath-dev-python",
"Documentation, https://uipath.github.io/uipath-python/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:36:14.274100 | uipath_dev-0.0.55.tar.gz | 8,426,099 | 10/29/bef894a02f1a5ab37ced0bf51e012e06d44ebadd87eaeda71b992ea83645/uipath_dev-0.0.55.tar.gz | source | sdist | null | false | 68470d8c0a7c3761b2c4366f2e9d4db8 | 7e24494c26d32729e9168968014dceb205c5cea36ee998a8d40c745aa8011412 | 1029bef894a02f1a5ab37ced0bf51e012e06d44ebadd87eaeda71b992ea83645 | null | [
"LICENSE"
] | 331 |
2.4 | tapl-lang | 0.2.2 | Tapl Language | # Tapl Lang
<!--
Part of the Tapl Language project, under the Apache License v2.0 with LLVM
Exceptions. See /LICENSE for license information.
SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-->
[](https://pypi.org/project/tapl-lang)
[](https://pypi.org/project/tapl-lang)
-----
## Table of Contents
- [Installation](#installation)
- [License](#license)
## Installation
```console
pip install tapl-lang
```
## License
`tapl-lang` is distributed under the terms of the [Apache-2.0 WITH LLVM-exception](https://spdx.org/licenses/Apache-2.0.html) license.
| text/markdown | null | Orti Bazar <orti.bazar@gmail.com> | null | null | Apache-2.0 WITH LLVM-exception | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://tapl-lang.org",
"Issues, https://github.com/tapl-org/tapl/issues",
"Source, https://github.com/tapl-org/tapl"
] | Hatch/1.16.3 cpython/3.14.3 HTTPX/0.28.1 | 2026-02-21T09:34:56.141963 | tapl_lang-0.2.2-py3-none-any.whl | 63,046 | 52/05/f5d5e034d79c5ad723c0c1cd49c3f4ca739f28fa152dc5eb472e4dee01c3/tapl_lang-0.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | d1dcf5b644a7edc491f3a001e6689f58 | 43151a3f7ec33ce97ba2f29ab365bc3a58510608bbb4cc66ab4ff81ba96be4b7 | 5205f5d5e034d79c5ad723c0c1cd49c3f4ca739f28fa152dc5eb472e4dee01c3 | null | [
"LICENSE.txt"
] | 216 |
2.4 | dogcat | 0.10.2 | lightweight, file-based issue tracking for AI agents (and humans) | # Dogcat - lightweight, file-based issue tracking and memory upgrade for AI agents (and humans!)
`dogcat` is a memory upgrade for AI agents. No more tracking issues and progress in Markdown files and burning your context window on them. With a simple command line utility (and some TUI niceties!) you can create, edit, manage and display issues.
- [Installation](#installation)
- [Usage](#usage)
- [Telling your agent to use dogcat](#telling-your-agent-to-use-dogcat)
- [Command cheat sheet](#command-cheat-sheet)
- [Screenshots](#screenshots)
- [Tips & tricks](#tips--tricks)
- [FAQ](#faq)
## Relation to Beads
Heavily inspired by [steveyegge/beads](https://github.com/steveyegge/beads).
Beads is great, but it is ever expanding and slowly getting more and more complicated as its author builds Kubernetes for agents.
Dogcat is a simpler, more minimal version that focuses on the core functionality. The goal is to keep it simple and not chase orchestration of tens of agents running at the same time.
It also avoids some complexity by not using a daemon and/or SQL database, and only relying on the `issues.jsonl` file.
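Because everything lives in that one `issues.jsonl` file, it can be inspected outside dogcat with a few lines of Python. This is a sketch only; the field names in the comment are illustrative assumptions, not dogcat's actual schema:

```python
import json
from pathlib import Path

def read_jsonl(path: Path) -> list[dict]:
    """Parse a JSONL file: one JSON object per non-blank line."""
    return [
        json.loads(line)
        for line in path.read_text().splitlines()
        if line.strip()
    ]

# e.g. count open issues, assuming a hypothetical "status" field:
# open_issues = [i for i in read_jsonl(Path(".dogcats/issues.jsonl"))
#                if i.get("status") != "closed"]
```

This line-per-record layout is also what makes the file friendly to git merges compared with one large JSON document.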
## Installation
### Homebrew (macOS)
```bash
brew install oroddlokken/tap/dogcat
```
This installs `dcat`/`dogcat` and handles Python and dependencies automatically via `uv`.
### pip / pipx / uv (all platforms)
```bash
# With uv (recommended for CLI tools)
uv tool install dogcat
# With pipx
pipx install dogcat
# With pip
pip install dogcat
```
### From source
Install `uv`, then run `./dcat.py`.
## Usage
Run `dcat init` to initialize the program. Then you can run `dcat prime` to see the information an AI agent should use.
For a guide more suited for humans, run `dcat guide`.
Alternatively, you can run `dcat init --use-existing-folder /home/me/project/.dogcats` to use a shared dogcat database.
If you don't want to store issues in git, use `dcat init --no-git`.
### Telling your agent to use dogcat
In your `AGENTS.md`/`CLAUDE.md` file, add something like the following:
``````text
# Agent Instructions
## Issue tracking
This project uses **dcat** for issue tracking. You MUST run `dcat prime --opinionated` for instructions.
Then run `dcat list --agent-only` to see the list of issues. Generally we work on bugs first, and always on high priority issues first.
When running multiple `dcat` commands, make separate parallel Bash tool calls instead of chaining them with `&&` and `echo` separators.
Mark each issue `in_progress` right when you start working on it — not before. Set `in_review` when work on that issue is done before moving on. The status should reflect what you are *actually* working on right now.
It is okay to work on multiple related issues at the same time, but do NOT batch-mark an entire backlog as `in_progress` upfront. If there is a priority conflict, ask the user which to focus on first.
If the user brings up a new bug, feature or anything else that warrants changes to the code, ALWAYS ask if we should create an issue for it before you start working on the code. When creating issues, set appropriate labels using `--labels` based on the issue content (e.g. `cli`, `tui`, `api`, `docs`, `testing`, `refactor`, `ux`, `performance`, etc.).
When research or discussion produces findings relevant to an existing issue, ask these as **separate questions in order**:
1. First ask: "Should I update issue [id] with these findings?"
2. Only after that, separately ask: "Should I start working on the implementation?"
Do NOT combine these into one question. The user may want to update the issue without starting work.
### Closing Issues - IMPORTANT
NEVER close issues without explicit user approval. When work is complete:
1. Set status to `in_review`: `dcat update --status in_review $issueId`
2. Ask the user to test
3. Ask if we can close it: "Can I close issue [id] '[title]'?"
4. Only run `dcat close` after user confirms
5. Ask: "Should I add this to CHANGELOG.md?" — update if yes
``````
This is only a starting point - it's up to you to decide how dogcat fits best in your workflow!
You can always run `dcat example-md` to get an example of what to put in your AGENTS.md/CLAUDE.md file.
`dcat prime` mainly concerns itself with how to use the dcat CLI, not with how your workflow should be structured.
`dcat prime --opinionated` is a more opinionated version of the guide for agents, with stricter guidelines.
You can run `diff <(dcat prime) <(dcat prime --opinionated)` to see the differences.
### Command cheat sheet
| Command | Action |
| --- | --- |
| **Creating** | |
| `dcat create "My first bug" -t bug -p 0` | Create a bug issue, with priority 0 |
| `dcat c b 0 "My first bug"` | Same as above, using `dcat c` shorthands for type and priority |
| `dcat create "Turn off the lights" --manual` | Create a manual issue (not for agents) |
| **Viewing** | |
| `dcat list` | List all open issues |
| `dcat list --tree` | List issues as a parent-child tree |
| `dcat show $id` | Show full details about an issue |
| `dcat search "login"` | Search issues across all fields |
| `dcat search "bug" --type bug` | Search with type filter |
| `dcat labels` | List all labels with counts |
| **Visualizing** | |
| `dcat graph` | Show the full dependency graph as ASCII |
| `dcat graph $id` | Show the subgraph reachable from an issue |
| **Filtering** | |
| `dcat ready` | List issues not blocked by other issues |
| `dcat blocked` | List all blocked issues |
| `dcat in-progress` | List issues currently in progress |
| `dcat in-review` | List issues currently in review |
| `dcat pr` | List issues in progress and in review |
| `dcat manual` | List issues marked as manual |
| `dcat recently-added` | List recently added issues |
| `dcat recently-closed` | List recently closed issues |
| **Updating** | |
| `dcat update $id --status in_progress` | Update an issue's status |
| `dcat close $id --reason "Fixed the bug"` | Close an issue with reason |
| `dcat reopen $id` | Reopen a closed issue |
| `dcat delete $id` | Delete an issue (soft delete) |
| **TUI** | |
| `dcat tui` | Launch the interactive TUI dashboard |
| `dcat new` | Interactive TUI for creating a new issue |
| `dcat edit [$id]` | Interactive TUI for editing an issue |
| **Git & maintenance** | |
| `dcat git setup` | Install the JSONL merge driver for git |
| `dcat history` | Show change history timeline |
| `dcat diff` | Show uncommitted issue changes |
| `dcat doctor` | Run health checks on issue data |
| `dcat archive` | Archive closed issues to reduce startup load |
| `dcat prune` | Permanently remove deleted issues |
| `dcat config` | Manage dogcat configuration |
| `dcat stream` | Stream issue changes in real-time (JSONL) |
## Screenshots
Compact table view showing tasks with ID, Parent, Type, Priority, and Title columns:

Hierarchical tree view displaying parent-child issue relationships:

Detailed list view with status indicators and full issue information:

Ready view showing unblocked issues available for work:

Detailed issue view with description, acceptance criteria, and metadata:

TUI for creating new issues (`dcat new`):

TUI for editing issues, select the one you want to edit (`dcat edit`):

TUI for editing issues (`dcat edit $id`):

List issues in progress:

List issues in review:

## Tips & tricks
Personally, I use these aliases:
```bash
alias dcl="dcat list --tree"
alias dct="dcat list --table"
alias dcn="dcat new"
alias dce="dcat edit"
```
## FAQ
**What's a dogcat?**
¯\_(ツ)_/¯ Some cats are dog-like, and some dogs are cat-like.
**Why a new project and just not use or fork beads?**
Dogcat started out as some tooling on top of beads, that quickly grew into its own separate project. I found it tricky to integrate against beads, and instead of trying to keep up with changes in beads, it was more fun to just build my own.
**Why Python?**
I wanted to use [Textual](https://textual.textualize.io/), which is awesome for making TUIs with. It's also the language I am the most familiar with.
## Migrating from beads
If you already have a collection of issues in Beads, you can import them into dogcat. In a folder without a `.dogcats` folder, run `dogcat import-beads /path/to/project/.beads/issues.jsonl`.
## Development
`dogcat` is now in a state where it can be dogfooded. The included `issues.jsonl` file contains the project's current issues.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"orjson>=3.11.7",
"rich>=14.3.2",
"textual>=7.5.0",
"tomli-w>=1.0.0",
"tomli>=2.0.0; python_version < \"3.11\"",
"typer>=0.14.0",
"watchdog>=4.0.0",
"fastapi>=0.115.0; extra == \"web\"",
"jinja2>=3.1.0; extra == \"web\"",
"python-multipart>=0.0.9; extra == \"web\"",
"uvicorn[standard]>=0.30.0; extra == \"web\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:34:52.336403 | dogcat-0.10.2.tar.gz | 3,743,397 | 32/9a/9a20ebcbdc6ba5cdb5ebf7718484b186862616a2e1da79e3756b81ede04e/dogcat-0.10.2.tar.gz | source | sdist | null | false | e7916b57ce3d3674aa6acefa9fd80d3f | 1e446f7e7720215f36ff85c91a7545d0f81266c6e52ab851a25bfa7a51f09ad8 | 329a9a20ebcbdc6ba5cdb5ebf7718484b186862616a2e1da79e3756b81ede04e | null | [
"LICENSE"
] | 219 |
2.4 | umierrorcorrect2 | 0.32.2 | Pipeline for analyzing barcoded amplicon sequencing data with Unique Molecular Identifiers (UMI) | # UMIErrorCorrect2
[](https://badge.fury.io/py/umierrorcorrect2)
[](https://github.com/sfilges/umierrorcorrect2/actions/workflows/ci.yml)
[](https://codecov.io/gh/sfilges/umierrorcorrect2)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/astral-sh/ruff)
A modern, high-performance pipeline for analyzing barcoded amplicon sequencing data with Unique Molecular Identifiers (UMI).
This package is a **complete modernization** of the original [UMIErrorCorrect](https://github.com/stahlberggroup/umierrorcorrect) published in *Clinical Chemistry* (2022).
## Key Features
- **High Performance**: Parallel processing of genomic regions and fastp-based preprocessing.
- **Modern Tooling**: Built with `typer`, `pydantic`, `loguru`, and `hatch`.
- **Easy Installation**: Fully PEP 621 compliant, installable via `pip` or `uv`.
- **Comprehensive**: From raw FASTQ to error-corrected VCFs and consensus statistics.
- **Robust**: Extensive test suite and type safety.
## Dependencies
### Mandatory
- [bwa](https://github.com/lh3/bwa) for alignment
### Optional
- [fastp](https://github.com/OpenGene/fastp) for preprocessing
- [fastqc](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) for quality control
- [multiqc](https://seqera.io/multiqc/) for quality control / report aggregation
Fastp is **highly recommended**, but not mandatory, for preprocessing. If you do not have fastp installed or run with `--no-fastp`, the pipeline will use `cutadapt` for adapter trimming only.
The `--no-qc` flag disables quality control steps. If QC is enabled (the default) but fastqc or multiqc is not installed, the pipeline will emit a warning but finish successfully.
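Whether those optional tools are available can be checked up front with `shutil.which`; this is an illustrative snippet, not the pipeline's own code:

```python
import shutil

OPTIONAL_TOOLS = ("fastp", "fastqc", "multiqc")

def missing_tools(tools: tuple[str, ...] = OPTIONAL_TOOLS) -> list[str]:
    """Return the optional external tools that are not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

for tool in missing_tools():
    print(f"warning: optional tool '{tool}' not found on PATH")
```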
## Installation
Use [uv](https://github.com/astral-sh/uv) for lightning-fast installation:
```bash
# Installs globally
uv tool install umierrorcorrect2
# Install in your venv
uv pip install umierrorcorrect2
```
Or standard pip:
```bash
pip install umierrorcorrect2
```
## Quick Start
The command-line tool is named `umierrorcorrect2`. Run the full pipeline on a single sample:
```bash
umierrorcorrect2 run \
-r1 sample_R1.fastq.gz \
-r2 sample_R2.fastq.gz \
-r hg38.fa \
-o results/
```
Run the pipeline on multiple samples in a folder (searches recursively for FASTQ files):
```bash
umierrorcorrect2 run \
-i folder_with_fastq_files/ \
-r hg38.fa \
-o results/
```
For detailed instructions, see the **[User Guide](docs/USER_GUIDE.md)** or run:
```bash
umierrorcorrect2
```
## Documentation
- [User Guide](docs/USER_GUIDE.md): Detailed usage instructions for all commands.
- [Docker Guide](docs/DOCKER.md): Running with containers.
- [Implementation Details](docs/IMPLEMENTATION.md): Architecture and design overview.
## Citation
> Osterlund T., Filges S., Johansson G., Stahlberg A. *UMIErrorCorrect and UMIAnalyzer: Software for Consensus Read Generation, Error Correction, and Visualization Using Unique Molecular Identifiers*, Clinical Chemistry, 2022. [doi:10.1093/clinchem/hvac136](https://doi.org/10.1093/clinchem/hvac136)
| text/markdown | null | Stefan Filges <stefan.filges@pm.me>, Tobias Osterlund <tobias.osterlund@gu.se> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cutadapt>=5.2",
"loguru>=0.7.3",
"matplotlib>=3.10.8",
"multiqc>=1.33",
"numba>=0.57.0",
"pydantic>=2.12.0",
"pysam>=0.23.3",
"scipy>=1.15.0",
"typer>=0.21.1",
"mypy>=1.19.1; extra == \"dev\"",
"pre-commit>=4.5.1; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"ruff>=0.14.13; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/sfilges/umierrorcorrect2",
"Documentation, https://github.com/sfilges/umierrorcorrect2/wiki",
"Repository, https://github.com/sfilges/umierrorcorrect2"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"25.10","id":"questing","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T09:33:46.029241 | umierrorcorrect2-0.32.2.tar.gz | 61,291 | eb/7b/331b2012cecb75ec5c90b1b5baa9f7ace42f6156b3a9035f4b5f4fb851d1/umierrorcorrect2-0.32.2.tar.gz | source | sdist | null | false | 9aa0724d334b66f654df0d611d8faa9e | 6ddc4528c3a1a72c6c14c9e1a63c9689fbe3700d9f4795bd009929ff318b5318 | eb7b331b2012cecb75ec5c90b1b5baa9f7ace42f6156b3a9035f4b5f4fb851d1 | MIT | [
"LICENSE.txt"
] | 222 |
2.3 | mcp-guide | 1.0.0b5 | MCP server for handling guidelines, project rules and controlled development | # mcp-guide
[](https://github.com/deeprave/mcp-guide/actions/workflows/python-mcp-docs.yml)
[](https://github.com/deeprave/mcp-guide/security/code-scanning)
[](https://github.com/deeprave/mcp-guide)
[](https://pypi.org/project/mcp-guide/)
[](https://pypi.org/project/mcp-guide/)
[](https://pypi.org/project/mcp-guide/)
**Structured content delivery for AI agents via Model Context Protocol**
mcp-guide is an MCP server that provides AI agents with organised access to project guidelines, documentation, and context. It helps agents understand your project's standards, follow development workflows, and access relevant information through a flexible content management system.
## Key Features
- **Content Management** - Organise documents, instructions and prompts by category and collection
- **Template Support** - Dynamic content with Mustache/Chevron templates
- **Multiple Transports** - STDIO, HTTP, and HTTPS modes
- **Feature Flags** - Project-specific and global configuration
- **Workflow Management** - Structured development phase tracking
- **Profile System** - Pre-configured setups for common scenarios
- **Docker Support** - Containerised deployment with SSL
- **OpenSpec Integration** - Spec-driven development workflow
## Quick Start
mcp-guide is launched through your AI agent's MCP configuration and is not usually run directly, at least in stdio transport mode.
In stdio mode, standard input and output carry the communication with the MCP server, so the agent needs to control both in order to operate it.
In HTTP mode, however, the server provides web (HTTP) transport and may be started standalone, not necessarily by the agent directly (although it typically is).
The configurations below show setup for a few CLI agents; almost all other clients will be similar.
### Configure with AI Agents
#### JSON configuration
These blocks can be inserted into the agent's configuration as-is.
Stdio mode is the most straightforward configuration, although it requires the `uv` tool to be installed.
##### Stdio
```json
{
"mcpServers": {
"mcp-guide": {
"command": "uvx",
"args": ["mcp-guide"]
}
}
}
```
If the "mcpServers" block already exists, add the "mcp-guide" entry at the end of it, ensuring that the previously last entry, if any, has a terminating comma.
#### Kiro-CLI
Add the above JSON block to `~/.kiro/settings/mcp.json`.
#### Claude Code
Add the above JSON block to `~/.claude/settings.json`.
#### GitHub Copilot CLI
Add the above JSON block to `~/.config/.copilot/mcp.json`.
Other clients offer similar configuration.
See the [Installation Guide](docs/user/installation.md) for more detail on use with various clients, with Docker, and with the http/sse transport mode.
## Content Organisation
mcp-guide organises content using **frontmatter** (optional YAML metadata at the start of documents) to define document properties and behaviour.
Content is classified into three types via the `type:` field in frontmatter:
- **user/information** - Content displayed to users
- **agent/information** - Context for AI agents
- **agent/instruction** - Directives for agent behaviour
Content is organised using **categories** (file patterns and directories) and **collections** (groups of categories). Collections act as "macros" to provide targeted context for specific tasks or purposes.
See [Content Management](docs/user/content-management.md) for details.
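As a concrete illustration, a document classified as an agent instruction might begin with frontmatter like this (the `type:` field is the one documented above; the rest of the document is an assumed example):

```markdown
---
type: agent/instruction
---
Always run the project linter before committing changes.
```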
## Feature Flags
Feature flags control behaviour, capabilities and special features and may be set globally or per project:
- **workflow** - Enable workflow phase tracking
- **openspec** - Enable OpenSpec integration
- **content-style** - Output format (None, plain, mime)
See [Feature Flags](docs/user/feature-flags.md) for more information.
## Documentation
- **[Documentation Index](docs/index.md)** - Documentation overview
- **[Getting Started](docs/user/getting-started.md)** - First-time setup and basic concepts
- **[Changelog](CHANGELOG.md)** - Release notes and version history
## Links
- **Documentation**: [docs/user/](docs/index.md)
- **Issues**: [GitHub Issues](https://github.com/deeprave/mcp-guide/issues)
- **MCP Protocol**: [modelcontextprotocol.io](https://modelcontextprotocol.io/)
## License
MIT License - See [LICENSE.md](LICENSE.md) for details.
| text/markdown | David Nugent | David Nugent <davidn@uniquode.io> | null | null | # MIT License
Copyright (c) 2025 David L Nugent
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Communications"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"mcp[cli]>=1.16.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"aioshutil>=1.6",
"aiofiles>=23.2.1",
"chevron>=0.14.0",
"click>=8.3.0",
"uuid7>=0.1.0",
"patch-ng>=1.19.0",
"packaging>=24.0",
"mkdocs>=1.5.0; extra == \"docs\"",
"mkdocs-material>=9.5.0; extra == \"docs\"",
"mike>=2.0.0; extra == \"docs\"",
"yq>=3.0.0; extra == \"docs\"",
"uvicorn>=0.27.0; extra == \"http\""
] | [] | [] | [] | [
"Homepage, https://github.com/deeprave/mcp-guide",
"Documentation, https://deeprave.github.io/mcp-guide/",
"Repository, https://github.com/deeprave/mcp-guide",
"Issues, https://github.com/deeprave/mcp-guide/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:33:25.004230 | mcp_guide-1.0.0b5.tar.gz | 219,866 | 52/24/23d2320b2a2669322ff078730b3faa26860d199521db3a15eeb2e38335a8/mcp_guide-1.0.0b5.tar.gz | source | sdist | null | false | fd83f2057693d11a550158db50fc658e | 40c97415faa496b3e4ca1762c299ae03d93a1cbbcc88f42cb0b2ed1f76a7756e | 522423d2320b2a2669322ff078730b3faa26860d199521db3a15eeb2e38335a8 | null | [] | 209 |
2.4 | cerebro-ai | 1.5.3 | A cognitive memory system for AI agents — 49 MCP tools for persistent memory, causal reasoning, and predictive intelligence | <div align="center">
<img src="https://readme-typing-svg.demolab.com?font=Orbitron&weight=900&size=60&duration=3000&pause=1000&color=8B5CF6&center=true&vCenter=true&width=600&height=80&lines=CEREBRO" alt="Cerebro" />
<br/>
<em>The Brain Behind the Code</em>
<br/><br/>
<strong>A cognitive memory system that plugs into Claude Code (or any MCP client) and gives your AI persistent memory, learning, causal reasoning, and predictive intelligence — across every session, every project, forever.</strong>
<br/><br/>
<img src="docs/images/neural-banner.svg" width="900" alt="Neural Network"/>
<br/><br/>
<sub>49 MCP tools. 3-tier memory. Local-first. Install in under 3 minutes.</sub>
</div>
<br/>
<div align="center">
[](LICENSE)
[](https://python.org)
[](docs/MCP_TOOLS.md)
[](https://pypi.org/project/cerebro-ai/)
[](docs/ARCHITECTURE.md)
[](https://cerebro.life)
</div>
---
## Why Cerebro?
<table>
<tr>
<td align="center" width="33%">
### :brain: Remember Everything
Your AI gets **total recall**. Conversations, facts, and context carry across sessions — nothing is ever forgotten.
- **Episodic** memory for events, **semantic** for facts, **working** for active reasoning
- Hybrid **semantic + keyword** search across all memories
- Session continuity — **pick up exactly where you left off**
</td>
<td align="center" width="33%">
### :gear: Learn and Adapt
Your AI gets **smarter with every interaction**. Solutions, failures, and patterns are tracked automatically.
- Auto-detects solutions, failures, and **antipatterns**
- Patterns **auto-promote** to trusted knowledge after 3+ confirmations
- Tracks past mistakes and **avoids repeating them**
</td>
<td align="center" width="33%">
### :crystal_ball: Reason and Predict
Go beyond retrieval into **genuine reasoning**. Cerebro builds causal models and catches problems before they happen.
- Causal models with **"what-if" simulation**
- **Predictive failure anticipation** from historical patterns
- **Hallucination detection** and confidence scoring
</td>
</tr>
</table>
---
## Quick Start
### Prerequisites
- **Python 3.10+**
- **Claude Code** or any [MCP-compatible client](https://modelcontextprotocol.io)
### 1. Install
```bash
pip install cerebro-ai
```
For **semantic search** (recommended — uses FAISS + sentence-transformers):
```bash
pip install cerebro-ai[embeddings]
```
> Without `[embeddings]`, Cerebro falls back to keyword-only search. Still functional, but semantic search is significantly more powerful.
### 2. Initialize
```bash
cerebro init
```
This creates your local memory store at `~/.cerebro/data`.
### 3. Add to Claude Code
Add this to your MCP config (`~/.claude/mcp.json`):
```json
{
"mcpServers": {
"cerebro": {
"command": "cerebro",
"args": ["serve"]
}
}
}
```
### 4. Verify
Restart Claude Code and run `/mcp` — you should see 49 Cerebro tools. Start a conversation and Cerebro will automatically begin building your memory.
### Health Check
```bash
cerebro doctor
```
---
<img src="https://capsule-render.vercel.app/api?type=waving&color=0:0a0a1a,50:4c1d95,100:7c3aed&height=80&section=header&reversal=true" width="100%"/>
<div align="center">
## The Full Experience
The MCP tools give your AI persistent memory. **Cerebro Pro** wraps it in a
complete cognitive desktop — where your AI thinks, acts, and evolves autonomously.
</div>
<div align="center">
<br/>
<a href="https://cerebro.life">
<img src="https://img.shields.io/badge/Explore_Cerebro_Pro-%E2%86%92_cerebro.life-A855F7?style=for-the-badge&labelColor=1a1a2e" alt="Cerebro Pro"/>
</a>
<br/><br/>
</div>
<img src="https://capsule-render.vercel.app/api?type=waving&color=0:7c3aed,50:4c1d95,100:0a0a1a&height=80&section=footer" width="100%"/>
---
## What You Get
These are the tools you'll use daily. Cerebro has 49 total — here are the highlights:
| Tool | What it does |
|------|-------------|
| **`search`** | Find anything in memory — hybrid semantic + keyword search across all conversations, facts, and learnings |
| **`record_learning`** | Save a solution, failure, or antipattern. Next time you hit the same problem, Cerebro surfaces it |
| **`get_corrections`** | Check what your AI got wrong before — so it doesn't repeat the same mistakes |
| **`check_session_continuation`** | Pick up where you left off. Detects in-progress work and restores full context |
| **`working_memory`** | Active reasoning state: hypotheses, evidence chains, scratch notes that persist across compactions |
| **`causal`** | Build cause-effect models. Ask "what causes X?" or simulate "what if I do Y?" |
| **`predict`** | Anticipate failures before they happen based on patterns from your history |
| **`get_user_profile`** | Your AI knows your preferences, projects, environment, and goals — no re-explaining |
> **See all 49 tools below** or browse the full [MCP Tools Reference](docs/MCP_TOOLS.md).
<p align="center">
<br/>
<img src="docs/images/cerebro-flow.svg" width="800" alt="Cerebro Pipeline"/>
<br/><br/>
</p>
---
## All 49 MCP Tools
Cerebro exposes **49 tools** through the [Model Context Protocol](https://modelcontextprotocol.io), organized into 10 categories. Every tool works with any MCP-compatible AI client.
<details>
<summary><strong>Memory Core</strong> (5 tools) — Store, search, and retrieve memories</summary>
| Tool | Description |
|------|-------------|
| `save_conversation_ultimate` | Save conversations with comprehensive extraction of facts, entities, actions, and code snippets |
| `search` | Hybrid semantic + keyword search across all memories (recommended default) |
| `search_knowledge_base` | Search the central knowledge base for facts, learnings, and discoveries |
| `search_by_device` | Filter memory searches by device origin (e.g., only laptop conversations) |
| `get_chunk` | Retrieve specific memory chunks by ID for context injection |
</details>
<details>
<summary><strong>Knowledge Graph</strong> (5 tools) — Entities, timelines, and user context</summary>
| Tool | Description |
|------|-------------|
| `get_entity_info` | Get information about any entity (tool, person, server, etc.) with conversation history |
| `get_timeline` | Chronological timeline of actions and decisions for a given month |
| `find_file_paths` | Find all file paths mentioned in conversations with purpose and context |
| `get_user_context` | Comprehensive user context: goals, preferences, technical environment |
| `get_user_profile` | Full personal profile: identity, relationships, projects, preferences |
</details>
<details>
<summary><strong>3-Tier Memory</strong> (6 tools) — Episodic, semantic, and working memory</summary>
| Tool | Description |
|------|-------------|
| `memory_type: query_episodic` | Query event memories by date, actor, or emotional state |
| `memory_type: query_semantic` | Query general facts by domain or keyword |
| `memory_type: save_episodic` | Save event memories with emotional state and outcome |
| `memory_type: save_semantic` | Save factual knowledge with domain classification |
| `working_memory` | Active reasoning state: hypotheses, evidence chains, scratch notes |
| `consolidate` | Cluster episodes, create abstractions, strengthen connections, prune redundancies |
</details>
<details>
<summary><strong>Reasoning</strong> (5 tools) — Causal models, prediction, and self-awareness</summary>
| Tool | Description |
|------|-------------|
| `reason` | Active reasoning over memories: analyze, find insights, validate hypotheses |
| `causal` | Causal models: add cause-effect links, find causes/effects, simulate "what-if" interventions |
| `predict` | Predictive simulation: anticipate failures, check patterns, suggest preventive actions |
| `self_model` | Continuous self-modeling: confidence tracking, uncertainty, hallucination checks |
| `analyze` | Pattern analysis, knowledge gap detection, skill development tracking |
</details>
<details>
<summary><strong>Learning</strong> (4 tools) — Solutions, corrections, and antipatterns</summary>
| Tool | Description |
|------|-------------|
| `record_learning` | Record solutions, failures, or antipatterns with tags and context |
| `find_learning` | Search for proven solutions or known antipatterns by problem description |
| `analyze_conversation_learnings` | Extract learnings from a past conversation automatically |
| `get_corrections` | Retrieve corrections Claude learned from the user to avoid repeating mistakes |
</details>
<details>
<summary><strong>Session Continuity</strong> (6 tools) — Never lose your place</summary>
| Tool | Description |
|------|-------------|
| `check_session_continuation` | Check for recent work-in-progress to continue |
| `get_continuation_context` | Get full context for resuming a previous session |
| `update_active_work` | Track current project state for session handoff |
| `session_handoff` | Save and restore working memory across sessions |
| `working_memory: export/import` | Export active reasoning state for handoff, import to restore |
| `session` | Session info: thread history, active sessions, summaries, continuation detection |
</details>
<details>
<summary><strong>User Intelligence</strong> (5 tools) — Preferences, goals, and proactive suggestions</summary>
| Tool | Description |
|------|-------------|
| `preferences` | Track and evolve user preferences with confidence weighting and contradiction detection |
| `personality` | Personality evolution: traits, consistency checks, feedback-driven adaptation |
| `goals` | Detect, track, and reason about user goals with blocker identification |
| `suggest_questions` | Generate questions to fill knowledge gaps in the user profile |
| `get_suggestions` | Proactive context-aware suggestions based on current situation and history |
</details>
<details>
<summary><strong>Projects</strong> (2 tools) — Project tracking and version evolution</summary>
| Tool | Description |
|------|-------------|
| `projects` | Project lifecycle: state, active list, stale detection, auto-update, activity summaries |
| `project_evolution` | Version tracking: record releases, view timeline, manage superseded versions |
</details>
<details>
<summary><strong>Quality</strong> (5 tools) — Maintenance, health, and self-improvement</summary>
| Tool | Description |
|------|-------------|
| `rebuild_vector_index` | Rebuild the FAISS vector search index after bulk updates |
| `decay` | Storage decay management: run decay cycles, preview, manage golden (protected) items |
| `self_report` | Self-improvement reports: performance metrics, before/after tracking |
| `system_health_check` | Health check across all components: storage, embeddings, indexes, database |
| `quality` | Memory quality: deduplication, merge, fact linking, quality scoring |
</details>
<details>
<summary><strong>Meta</strong> (6 tools) — Retrieval optimization, privacy, and exploration</summary>
| Tool | Description |
|------|-------------|
| `meta_learn` | Retrieval strategy optimization: A/B testing, parameter tuning, performance tracking |
| `memory_type` | Query and manage episodic vs semantic memory types with stats and migration |
| `privacy` | Secret detection, redaction statistics, sensitive conversation identification |
| `device` | Device registration and identification for multi-device memory isolation |
| `branch` | Exploration branches: create divergent reasoning paths, mark chosen/abandoned |
| `conversation` | Conversation management: tagging, notes, relevance scoring |
</details>
---
## How It Works
```mermaid
graph LR
A[Your AI Client] <-->|MCP Protocol| B[Cerebro Server]
B --> C[FAISS Vector Search]
B --> D[Knowledge Base]
B --> E[File Storage]
```
All data stays on your machine. No cloud, no API keys, no telemetry.
---
## Free vs Pro
| Capability | Free (This Repo) | Pro ([cerebro.life](https://cerebro.life)) |
|---|---|---|
| **Memory** | 49-tool MCP server. Full cognitive architecture. | Everything in Free + dashboard visualization of your memory graph and health stats. |
| **Interface** | Claude Code CLI or any MCP client. | Native desktop app with Mind Chat, 3D neural constellation, real-time activity. |
| **Agents** | Single Claude session with persistent memory. | Agent swarms — multiple Claudes collaborating on complex tasks autonomously. |
| **Browser** | Not included. | Autonomous browser agents: research, navigate, extract — with live video preview. |
| **Automations** | Not included. | Calendar-driven recurring tasks, scheduled research, automated workflows. |
| **Cognitive Loop** | Not included. | OODA cycle: Observe-Orient-Decide-Act. Your AI thinks and acts continuously. |
<div align="center">
<br/>
<a href="https://cerebro.life">
<img src="https://img.shields.io/badge/See_Cerebro_Pro_in_Action-%E2%86%92_cerebro.life-A855F7?style=for-the-badge&labelColor=1a1a2e" alt="Cerebro Pro"/>
</a>
<br/><br/>
</div>
---
## Configuration
Cerebro works out of the box with zero configuration. All settings are optional and controlled via environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `CEREBRO_DATA_DIR` | `~/.cerebro/data` | Base directory for all Cerebro data |
| `CEREBRO_EMBEDDING_MODEL` | `all-mpnet-base-v2` | Sentence transformer model for semantic search |
| `CEREBRO_EMBEDDING_DIM` | `768` | Embedding vector dimensions |
| `CEREBRO_LOG_LEVEL` | `INFO` | Logging level |
| `CEREBRO_LLM_URL` | *(none)* | Optional local LLM endpoint for deeper reasoning |
| `CEREBRO_LLM_MODEL` | *(none)* | Optional local LLM model name |
Set them in your MCP config:
```json
{
"mcpServers": {
"cerebro": {
"command": "cerebro",
"args": ["serve"],
"env": {
"CEREBRO_DATA_DIR": "/path/to/your/data"
}
}
}
}
```
---
## Contributing
Contributions are welcome — bug fixes, new MCP tools, documentation improvements, or feature ideas.
Please read the [Contributing Guide](CONTRIBUTING.md) before submitting a pull request. All contributions must be compatible with the AGPL-3.0 license.
---
## License & Attribution
```
Copyright (C) 2026 Michael Lopez (Professor-Low)
Cerebro is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
See LICENSE for details.
```
**What AGPL-3.0 means:** If you use Cerebro's code in your own product — including as a network service — you **must** release your modified source code under the same license and give proper attribution. This protects the project from being taken proprietary.
**Created and maintained by** [Michael Lopez](https://github.com/Professor-Low) (Professor-Low)
<div align="center">
<br/>
<p>
<a href="#quick-start">Get Started</a> ·
<a href="https://cerebro.life"><strong>Cerebro Pro</strong></a> ·
<a href="docs/ARCHITECTURE.md">Architecture</a> ·
<a href="https://github.com/Professor-Low/Cerebro/issues">Issues</a>
</p>
<sub>If Cerebro helps you, consider giving it a star — it helps others find the project.</sub>
<br/><br/>
<a href="https://github.com/Professor-Low/Cerebro">
<img src="https://img.shields.io/github/stars/Professor-Low/Cerebro?style=social" alt="GitHub stars" />
</a>
<br/><br/>
<a href="https://cerebro.life"><strong>cerebro.life</strong></a>
</div>
| text/markdown | null | Michael Lopez <lopez.michael19007@gmail.com> | null | null | null | ai, claude, cognitive, llm, mcp, memory, reasoning | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anyio>=4.0.0",
"mcp>=1.25.0",
"numpy>=2.0.0",
"pydantic>=2.5.0",
"python-dateutil>=2.8.0",
"mypy>=1.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"faiss-cpu>=1.13.0; extra == \"embeddings\"",
"sentence-transformers>=5.0.0; extra == \"embeddings\"",
"faiss-gpu>=1.7.4; extra == \"gpu\"",
"torch>=2.0.0; extra == \"gpu\""
] | [] | [] | [] | [
"Homepage, https://github.com/Professor-Low/Cerebro",
"Documentation, https://github.com/Professor-Low/Cerebro/tree/main/docs",
"Repository, https://github.com/Professor-Low/Cerebro",
"Issues, https://github.com/Professor-Low/Cerebro/issues"
] | twine/6.2.0 CPython/3.13.8 | 2026-02-21T09:33:14.699410 | cerebro_ai-1.5.3.tar.gz | 456,766 | d4/a1/591f038659dc22268f0e2f64017e00dd77c929de578d97c0cc9d6e793ba0/cerebro_ai-1.5.3.tar.gz | source | sdist | null | false | 9b75f52db1ab9957b0b61894549a8f0e | e57ebece1d6f08187da1633ef2ac15a36903e2494c639b170c4ae95db6c49539 | d4a1591f038659dc22268f0e2f64017e00dd77c929de578d97c0cc9d6e793ba0 | AGPL-3.0-only | [
"LICENSE"
] | 215 |
2.3 | skivvy | 0.708 | A simple tool for testing JSON/HTTP APIs | # skivvy — a JSON-native CLI for HTTP API integration tests
Skivvy is a tiny, Unix-style runner for API tests where the tests themselves are **JSON**.
### What makes skivvy similar to postman / bruno / curl / jq / etc
- Support for all HTTP verbs; HTTP headers and cookies handled the way you expect them to be; data from responses can be passed into other requests, making things like OAuth easy to deal with
- Rich support for verifying / asserting responses
- Good diffs when tests fail
- Setup / Teardown functionality
- Arbitrary number of environment configs (eg local / staging / etc)
### What makes skivvy *different* (compared to postman, bruno)
- Assert **only what you care about** (whether that's just the status, a substring in the response, or a particular field among other fields you don't care about - snapshots are an anti-pattern leading to brittle, flaky tests and false positives). This is probably the most central and distinguishing aspect of skivvy:
although it can technically do snapshot-like asserting, that is strongly discouraged, and if snapshots are what you're
after you should probably stick to some other tool.
- Unix-philosophy: do one thing well
- Lightweight
- Simple, clear, declarative, text-based (.json) tests. Much simpler than postman, bruno etc
- CI-friendly, no GUI
- Very tiny API (if it could even be called that)
- Simple to extend by implementing tiny custom python functions
- Predictable deterministic execution
- MIT license
At [my current company](https://www.mindmore.com/) we use it for **all backend API tests** - as far as I know there's never been a false positive (hello cypress).
## try it out
If you use **uv**, **pipx**, **nix** or **docker**, you don't need to install skivvy.
The following bash one-liners download the examples to `/tmp` and run a subset of the test suite:
### uv
```bash
pushd /tmp && curl -L https://raw.githubusercontent.com/hyrfilm/skivvy/refs/heads/master/examples.tar.gz | tar -xz -C . && \
uvx skivvy examples/typicode/default.json && popd
```
Those tests should all pass, but you tend to care more about the circumstances when a test *does not pass*.
The following line demonstrates just that:
```bash
pushd /tmp && curl -L https://raw.githubusercontent.com/hyrfilm/skivvy/refs/heads/master/examples.tar.gz | tar -xz -C . && \
uvx skivvy examples/typicode/failing.json && popd
```
### pipx
This line runs both the passing and the failing suites above:
```
pushd /tmp && curl -L https://raw.githubusercontent.com/hyrfilm/skivvy/refs/heads/master/examples.tar.gz | tar -xz -C . && pipx run skivvy examples/typicode/all.json && popd
```
### pip & virtualenv
Installing it into a new virtualenv directory `skivvy` using **pip** and **virtualenv**:
```bash
python -m venv skivvy && source skivvy/bin/activate && pip install skivvy
skivvy --version
```
This should print out the version installed.
You can of course install it via **pipx** or **uv**. If you're running it in a throwaway environment (eg in a CI/CD container), installing globally works fine as well.
### docker
```
docker run --rm hyrfilm/skivvy:examples
```
Running this container will simply print out its version.
To run the default example tests:
```
docker run --rm hyrfilm/skivvy:examples skivvy examples/typicode/default.json
```
If you want to poke around you can attach an interactive terminal:
```
docker run --rm -it hyrfilm/skivvy:examples
```
This will print out the version and then you're inside the container where the examples are located.
#### running skivvy through docker (using bind mounts)
If you have a test suite you can bind mount it into the container to run your tests.
Assuming the current directory contains your tests and the root of that directory contains a
configuration file `cfg.json`, you can bind mount that directory and run skivvy like so:
```sh
docker run --rm --mount type=bind,source="$(pwd)",target="/app" hyrfilm/skivvy cfg.json
```
## Why Skivvy (vs GUI suites)
GUI tools (Postman/Bruno) are good for exploration, but heavier and more brittle when used in an actual CI/CD environment for testing your entire API. What's worse, they push you toward bad habits such as overly complicated imperative JS hooks and snapshot-style assertions. This is unnecessary and encourages writing bad, brittle tests. Having JS code also introduces its own set of issues, like learning an unwieldy API in an unwieldy language like JS (this is not meant as flame-bait ;) I happen to write JS code for a living, but that doesn't have to mean I think it's a good language for all tasks).
Skivvy tests are plain json files you keep in git.
## Quick look
Assert what you care about, whether it be only the status code:
```json
{ "url": "/api/items", "status": 200 }
```
Or whether you get back some particular thing among others:
```json
{
"url": "/api/items",
"response": {
"results": [{ "name": "Widget42" }]
}
}
```
Or maybe you only care about the number of results:
```json
{
"url": "/api/items?limit=200",
"response": {
"results": "$len 200"
}
}
```
Pass state between steps (via variables or persisted as files):
```json
{
"_comment": "Login and retrieve user settings",
"url": "/login",
"method": "post",
"response": {
"region": "$store region",
"language": "$store language",
"dashboard": "$store dashboard-id",
"profile": { "pic": "$valid_url" }
}
}
```
```json
{ "url": "/home/<region>/<dashboard-id>?i18n=<language>", "status": 200 }
```
This brace expansion works consistently everywhere: whether you're verifying that a field (or part of a field) has some value, or passing something matched or returned in one test's response body in as, say, a header value for another test.
Match a **subset** anywhere under a node:
```json
{
"url": "/project",
"response": { "project": { "name": "MKUltra" } }
}
```
(Works even if the server nests it under `project.department.name`.)
This can be disabled globally or per test, if so preferred.
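To make the subset-matching semantics concrete, here is a small standalone Python sketch (an illustration only, not skivvy's actual implementation) of what "match a subset anywhere under a node" means:

```python
def subset_matches(expected, actual):
    """Return True if `expected` occurs as a subset somewhere under `actual`.

    Illustrative sketch only; not skivvy's actual code.
    """
    if isinstance(expected, dict) and isinstance(actual, dict):
        # Direct match: every expected key/value pair must be satisfied here
        if all(k in actual and subset_matches(v, actual[k])
               for k, v in expected.items()):
            return True
        # Otherwise keep searching anywhere deeper in the tree
        return any(subset_matches(expected, v) for v in actual.values())
    if isinstance(expected, list) and isinstance(actual, list):
        # Every expected entry must match at least one actual entry
        return all(any(subset_matches(e, a) for a in actual) for e in expected)
    if isinstance(actual, list):
        # Default "at least one entry matches" semantics
        return any(subset_matches(expected, item) for item in actual)
    return expected == actual

# The /project example above: "name" is actually nested under
# project.department, yet the subset still matches.
resp = {"project": {"department": {"name": "MKUltra", "head": "Sidney"}}}
print(subset_matches({"project": {"name": "MKUltra"}}, resp))  # True
```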
## Why not just curl + jq + bash?
Well... you can — and you'll slowly re-invent something like skivvy (reusable assertions, readable diffs, state passing, config/env handling, CI output). Skivvy is the minimal framework you'd build anyway.
## Readable diffs
Intentional failure:
```json
{
"url": "/project",
"match_subsets": true,
"response": { "project": { "name": "WrongName" } }
}
```
Typical output (abridged):
```
✗ GET /project
response.project.name
expected: "WrongName"
actual: "MKUltra"
diff:
- WrongName
+ MKUltra
```
## CLI filters (this example illustrates how a setup/teardown could be implemented)
```bash
skivvy cfg.json -i '00_setup' -i '99_teardown' -e 'flaky' -i $1
```
Wrap that in a small shell function or script and you can simply do:
```bash
skivvy 01_login_tests
```
In your local / CI environment you could, for example, seed the database in the setup and tear it down after running the test pattern you specified.
Includes are applied first, then excludes.
## Config (high-value keys)
A minimal config / env file might look like this:
```json
{
  "tests": "./api/tests",
  "base_url": "https://example.com",
  "ext": ".json"
}
```
Or specifying all currently supported settings:
```json
{
  "tests": "./api/tests",
  "ext": ".json",
  "base_url": "https://api.example.com",
  "colorize": true,
  "fail_fast": false,
  "brace_expansion": true,
  "validate_variable_names": true,
  "auto_coerce": true,
  "matchers": "./matchers"
}
```
## Test file keys (most used)
- `url` (string, required)
### All other fields are optional
- `method` (`get` by default)
- `status` (expected HTTP status, only checked if specified)
- `response` (object or matcher string, only checked if specified)
- `match_subsets` (false by default, allows you to check fields or parts of objects, occurring somewhere in the response)
- `skip_empty_objects` (false by default, only relevant with `match_subsets`; when true, empty objects are skipped during verification)
- `skip_empty_arrays` (false by default, only relevant with `match_subsets`; when true, empty arrays are skipped during verification)
- `match_every_entry` (false by default, when true every actual array entry must satisfy the expected template — as opposed to the default "at least one" semantics)
- `match_falsiness` (true by default)
- `brace_expansion` (true by default, enables `<variable>` substitution in URLs, headers and response checks)
- `validate_variable_names` (true by default) - enforces variable names starting with a letter and using only `[a-z0-9_-.,/\\]`; set to false to relax (not recommended)
- `auto_coerce` - makes an educated guess about what `"field": "<variable>"` should be interpreted as. If the value can be parsed as a boolean (eg "true"/"false") you get `"field": true`; "42" results in `"field": 42`, and so on. If it can't be coerced into any JSON primitive other than a string, it is simply left as a string: eg if the variable is "42 years old", then `"field": "42 years old"`.
- `_comment` or `comment` or `note` or `whatever` (unrecognized top-level entries are simply ignored)
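Combining several of the keys above, a hypothetical test file might look like this (endpoint and field names are made up for illustration):

```json
{
  "_comment": "create an item and check only the fields we care about",
  "url": "/api/items",
  "method": "post",
  "status": 201,
  "match_subsets": true,
  "response": {
    "item": { "name": "Widget42", "id": "$store item-id" }
  }
}
```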
## Built-in matchers (common)
Format: `"$matcher args..."`
- `$contains <text>` — substring present
- `$store <name>` - store a field's value in a variable; variables are scoped to the current directory
- `$fetch <name>` - value is equal to the value of a variable
- `$gt N`, `$lt N`, `$between <min> <max>` — numeric comparisons (between is inclusive)
- `$len N`, `$len_gt N`, `$len_lt N` — length checks
- `$valid_url` — value is an http(s) URL that returns 2xx
- `$regexp <pattern>` — regex match
- `$~ <number> [threshold <ratio>]` — approximate numeric
- `$date [format?]` — value parses as date
- `$expr <python_expr>` — escape hatch (`actual` bound to value)
- `$write_file <filename>` — write value to `<filename>` for later use
- `$read_file <filename>` — read value from file
Negation: prefix with `$!` (e.g., `$!contains foo`).
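For instance, a hypothetical test asserting that a message field contains no traceback and that a count stays within bounds (endpoint and field names are assumptions for illustration):

```json
{
  "url": "/api/items",
  "response": {
    "message": "$!contains Traceback",
    "total": "$between 1 100"
  }
}
```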
**Custom matchers:** create `./matchers/<name>.py` with:
```python
def match(expected, actual):
# return True/False, optionally with a message
...
```
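Assuming the matcher's file name doubles as the matcher name (which the layout above suggests; treat this as an assumption), a custom matcher saved as, say, `./matchers/is_uuid.py` could then be referenced from a test like any built-in:

```json
{
  "url": "/api/items",
  "response": { "id": "$is_uuid" }
}
```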
## Docker & ephemeral DBs
We often seed a DB in `00_setup/` and teardown in `9999_teardown/`. With bind mounts, state files (IDs/tokens) are inspectable:
```bash
docker run --rm -v "$PWD":/work -w /work hyrfilm/skivvy skivvy cfg.json
```
## FAQ
- **Isn’t this just curl + jq / grep?** YES, it is. Especially if you like writing a lot of bash. But over time you might want reusable assertions, diffs, state, filters, and CI-friendly output, and then you've ended up re-implementing something like skivvy. Or not! I say go for it!
- **Why JSON (not YAML/JS)?** JSON matches your payloads; zero DSL. JS is not supported as a conscious decision; if you want a non-declarative tool for testing your APIs, there's always bruno/postman etc.
- **Why serial by default?** Determinism. For concurrency, run multiple processes with distinct state dirs. (This might get elevated to support true concurrency in the future, if the total cost of complexity is low and fits with the other design-goals mentioned above.)
- **Comments?** `_comment` is supported and ignored at runtime.
## License
Keeping it MIT dude
Keep rockin' in the free world
# skivvy
A simple tool for testing JSON/HTTP APIs
Skivvy was developed in order to facilitate automated testing of web-APIs. If you've written an API that consumes or
produces JSON, skivvy makes it easy to create test-suites for these APIs.
You can think of skivvy as a simpler-minded cousin of cURL - it can't do many of the things cURL can - but the few things it can do it does well.
#### running skivvy through docker (using bind mounts)
Assuming the current directory would contain your tests and that the root of that directory would contain a
configuration file `cfg.json` you could bind mount that directory and run skivvy like so:
```sh
docker run --rm --mount type=bind,source="$(pwd)",target="/app" hyrfilm/skivvy skivvy cfg.json
```
This allows you to keep your tests and configuration outside the container while mounting them inside it.
## Documentation
### CLI flags
As is common for most testing frameworks, you can pass a number of flags to filter which files get included in the suite
that skivvy runs. `-i regexp` is used for including files, `-e regexp` is used for excluding files.
Running `skivvy cfg.json -i file1 -i file2` only includes paths that match either the regexp `file1` or `file2`. `skivvy cfg.json` is functionally
equivalent to `skivvy cfg.json -i '.*'`. In other words, all files that skivvy finds are included.
Running `skivvy cfg.json -e file3` excludes paths that match the `file3` regexp.
Stacking multiple flags is allowed: `skivvy cfg.json -i path1.* -i path2.* -e some.*file`.
Filtering is done by first applying the `-i` filters and then the `-e` filters.
### config settings
A skivvy config file can contain the following settings that change how tests are performed:
##### mandatory config settings
* *tests* - directory where to look for tests (recursively)
* *ext* - file extension to look for (like ".json")
* *base_url* - base URL that will be prefixed for all tests
##### optional config settings
* *log_level* - a low value like 10 shows ALL logging; a value like 20 shows only info and more severe
* *colorize* - terminal colors for diffs (default is true)
* *fail_fast* - aborts the test run immediately when a testcase fails instead of running the whole suite (default is false)
* *matchers* - directory where you place your own matchers (eg "./matchers")
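Putting the mandatory and optional settings together, a minimal `cfg.json` could look like this (the values are illustrative):

```json
{
  "tests": "./tests",
  "ext": ".json",
  "base_url": "http://localhost:8080",
  "log_level": 20,
  "fail_fast": false,
  "matchers": "./matchers"
}
```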
#### mandatory settings for a testcase
* *url* - the URL that skivvy should send a HTTP request to
* *base_url* - a base URL (like "https://example.com") that will be prefixed to the URL
* *method* - which HTTP verb should be used (optional, defaults to GET)
#### optional settings for testcase
* *brace_expansion* - whether brace expansion should be used for URLs containing \<variable> (these variables can be retrieved from a file in the path, or from a file created using $write_file)
* *expected_status* - the expected HTTP status of the call
* *response* - the _expected_ response that should be checked against _actual_ response received from the API
* *data* - data to be sent in a POST or PUT request
* *json_encode_body* - setting this to false makes skivvy not json encode the body of a PUT or POST and instead sends it as form-data
* *headers* - headers to send with the request
* *content_type* - defaults to _"application/json"_
* *write_headers* - headers that should be retrieved from the HTTP response and dumped to a file, for example: ````"write_headers": {"headers.json": ["Set-Cookie", "Cache-Control"]}, ````
* *read_headers* - specifies a file containing headers to be sent in the request, for example: ````"read_headers": "headers.json"````
* *response_headers* - expected response headers to verify (case-insensitive keys, supports matchers)
* *match_subsets* - (boolean, default is false) - controls whether skivvy will allow matching a subset of a dict found in a list
* *skip_empty_objects* - (boolean, default is false) - only with *match_subsets*; when true, empty objects are skipped during verification
* *skip_empty_arrays* - (boolean, default is false) - only with *match_subsets*; when true, empty arrays are skipped during verification
* *match_every_entry* - (boolean, default is false) - when true, the expected array element acts as a template that every actual array entry must satisfy (default is "at least one" semantics)
* *match_falsiness* - (boolean, default is false) - controls whether skivvy will consider falsy values (such as null, empty string, etc) as equivalent
* *upload* - see below for an example of uploading files
* *auto_coerce* - default is true; if the content of a file (read using [$read_file](#read_file) or [brace expansion](#brace-expansion)) can be interpreted as an integer it will be converted to one.
#### variables
Parts of a request may need to vary depending on context. Skivvy provides a number of ways to facilitate this:
* *If a response* contains a value you want to store, use
[$write_file](#$write_file)
* *If a response* contains a value you want to verify matches a value in a stored file, use [$read_file](#$read_file)
* If the *body of a request* should contain one or more values from a stored file, use [brace expansion](#brace-expansion)
* If the parts of the *url of a request* should contain one or more values from a stored file, use [brace expansion](#brace-expansion)
* If the *headers of a response* should be saved to a file, use *write_headers*
* If the *headers of a request* should be read from a file, use *read_headers*
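A common pattern is storing an id from one response and reusing it in a later request. A sketch, with made-up endpoints and filenames:

```json
{
  "_comment": "01_create.json - saves the created id to the file 'id'",
  "url": "/things",
  "method": "post",
  "data": {"name": "dude"},
  "response": {"id": "$write_file id"}
}
```

```json
{
  "_comment": "02_fetch.json - brace expansion reads the file 'id'",
  "url": "/things/<id>",
  "brace_expansion": true,
  "expected_status": 200
}
```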
#### brace expansion
```json
{
"url": "http://example.com/<some>/<file>",
"a": "<foo>",
"b": "<bar>"
}
```
If `brace_expansion` is set to `true`, the value for `a` will be read from the file `foo` and the value for `b` will be read from the file `bar`.
The first part of the url path will be replaced with the contents of the file `some` and the second part by the contents of the file `file`. The file is expected to be in the path, otherwise no brace expansion will occur and the value is set as-is.
By default any value that can be interpreted as an integer will be coerced to an int. Disable this by setting `auto_coerce` to false.
#### file uploads
POSTs support both file uploading & sending a post body as JSON. You can't have both (because that would result in conflicting HTTP-headers).
Uploads take precedence, which means that if you have enabled file uploads for a testcase it will happily ignore the POST data you pass in.
Enabling a file upload would look like this:
```json
{
"url": "http://example.com/some/path",
"upload": {"file": "./path/to/some/file"}
}
```
When skivvy sees an upload field like the one above, it will try to open that file and pass it along in the field specified ("file"
in the example above). Currently only one upload is supported.
The file needs to be either an absolute path or relative to where skivvy is executing. If the file can't be found skivvy will
complain and mark the testcase as failed.
### matchers
Skivvy's matcher-syntax is a simple, extensible notation that allows greater expressiveness than vanilla JSON would allow for.
For example, let's say you want to check that the field "email" contains some characters followed by an @-sign,
some more characters followed by a dot and then some more characters (I don't recommend this as a way to check if an email is valid, but let's just pretend it was that simple).
Then you could write:
```
"email": "$regexp (.+)@(.+)\.(.+)"
```
The format for all matchers is as follows:
```
$matcher_name expected [parameter1 parameter2... parameterN]
```
The number of parameters a particular matcher takes depends on which matcher you are using. The matchers supported out-of-the-box are listed in the [matcher reference](#matcher-reference) below.
### Extending skivvy
Skivvy can be extended with custom matchers, written in Python. This allows you to provide your own
matchers if you feel that some are missing. A matcher is just a Python function that you write yourself; you
can use all the typical Python modules in the standard library like os, urllib etc.
A matcher is expected to return a boolean and a message, or just a boolean if you're lazy. You're recommended to
provide a message for the case where the matcher returns false, which will make skivvy treat that testcase as failed.
Technically a matcher can do whatever you want (like `$write_file`, for example) as long as it returns a boolean.
#### Example: creating a custom matcher
Let's say you want a sort of useless matcher that checks whether the json has a key containing the
word "dude". You would use it like this: `$dude nickname`, which would verify that the response has a key
`nickname` whose value contains `dude`.
1. In the config, add the key `"matchers": "./matchers"`
2. Create a directory, `./matchers` next to the config directory
3. Create a file `dude.py` like so:
```python
def match(expected, actual):
    expected = str(expected)  # would contain "nickname"
    # actual would contain json like {"nickname": "dude", ...}
    field = actual.get(expected.strip(), {})
    if not field:
        return False, "Missing key: %s" % expected
    return "dude" in field, "Didn't find 'dude' in %s" % expected
```
4. Use it in a testcase:
```json
{"url": "/some/url",
"method": "get",
"status": 200,
"response": "$dude nickname"}
```
**NOTE:** If you were to use the matcher on a specific field, `actual` would refer to that part of the response,
for example:
```json
{
...
"response": {"somekey": "$dude"}
}
```
... then the variable `actual` would refer to what `somekey` contains when the matcher runs.
### matcher reference
#### $valid_url
Matches any URL that returns a 200 status.
Example:
```
"some_page": "$valid_url"
```
would pass if `some_page` was `http://google.com`
#### $contains
Matches a string inside a field, good for finding nested information when you
don't care about the structure of what's returned.
Example:
```"foo": "$contains dude!" ```
would for example pass if `foo` was
```json
{"movies": [{"title": "dude! where's my car?"}]}
```
#### $len
Matches the length of a JSON array.
Example:
```"foo": "$len 3" ```
would for example pass if `foo` was
```json
["a", "b", "c"]
```
#### $len_gt
Passes if the length of a JSON array is greater than a certain amount.
Example:
```"foo": "$len_gt 3" ```
would for example pass if `foo` was
```json
["a", "b", "c", "d"]
```
#### $len_lt
Passes if the length of a JSON array is less than a certain amount.
Example:
```"foo": "$len_gt 3" ```
would for example pass if `foo` was
```json
["a", "b"]
```
#### $gt / $lt
Numeric comparisons (non-length). Examples:
```
"rating": "$gt 3"
"discount": "$lt 0.5"
```
#### $between
Numeric comparison that checks a value falls within an inclusive range.
Example:
```
"rating": "$between 1 5"
```
passes for any value >=1 and <=5. If the lower bound exceeds the upper bound the matcher fails early.
#### $~
Matches an approximate value.
Example:
```
"foo": "$~ 100 threshold 0.1"
```
would match values between 90 and 110.
#### $write_file
Writes the value of a field to a file, which can then be passed to another test.
This is useful for scenarios where you want to save a field (like a database id) that
should be passed in to a subsequent test.
Example:
```
"foo": "$write_file dude.txt"
```
Would write the value of `foo` to the file `dude.txt`
#### $read_file
Reads the value of a file and sets a field to it (most useful in the body of a POST)
```json
...
"body": {
"foo": "$read_file dude.txt"
}
```
Would read the contents of the file `dude.txt` and assign it to the field `foo`.
#### $regexp
Matches using a regular expression.
Example:
```"foo": "$regexp [a-z]+" ```
Would require `foo` to contain at least one occurrence of a lowercase letter (a through z).
#### $expr
Dynamically evaluates the string as a Python expression on the data received; if the expression evaluates to True it passes.
(Be careful with this one, don't use it on untrusted data etc :)
Example:
```
"foo": "$expr (int(actual)%3)==0"
```
Would try to convert the data in the field `foo` to an integer and see if it was
evenly divisible by 3. If so it would pass, otherwise fail.
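Conceptually (an illustration only, not skivvy's actual implementation), `$expr` amounts to evaluating the expression with `actual` bound to the field's value:

```python
# Illustrative sketch of what "$expr (int(actual)%3)==0" boils down to;
# skivvy's real evaluation may differ in its details.
def check(actual, expression="(int(actual)%3)==0"):
    # 'actual' is bound to the field's value when the expression runs
    return bool(eval(expression, {"actual": actual}))

print(check("9"))   # divisible by 3, so the matcher would pass
print(check("10"))  # not divisible by 3, so it would fail
```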
#### negation
Note that all matchers (including custom ones) automatically get a negating matcher. For example, there's a matcher
called `$contains` that checks that the result contains some text string, like so: `$contains what's up?`. This would
succeed if the response contained the string "what's up?". The negating matcher would look like this:
`$!contains what's up?` - and will succeed if the response does NOT contain the string "what's up?". It operates
as a NOT expression, in other words; the prefix `$!` is used instead of `$`. This will work even for
custom matchers that you create yourself.
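In testcase form (endpoint and field made up), a negated matcher looks like any other matcher:

```json
{
  "url": "/health",
  "expected_status": 200,
  "response": {"status": "$!contains error"}
}
```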
| text/markdown | Jonas Holmer | Jonas Holmer <jonas.holmer@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Programming Language :: Python"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"pyopenssl",
"requests",
"docopt-ng",
"rich>=14.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/hyrfilm/skivvy"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:33:11.783428 | skivvy-0.708.tar.gz | 29,202 | 23/4c/d664a0170aed8d9f138d088d150175fca48605e076deaab92032e7d5054e/skivvy-0.708.tar.gz | source | sdist | null | false | f319ab4d13b577ed8f6f4974aed82dd7 | 2715bef529d41529096b1bcaccea4a196b71b02a7c581af0554fd69deb302777 | 234cd664a0170aed8d9f138d088d150175fca48605e076deaab92032e7d5054e | null | [] | 222 |
2.4 | isage-benchmark | 0.1.1.1 | SAGE Benchmark - RAG and experimental benchmarks for SAGE framework | # SAGE Benchmark
> Comprehensive benchmarking tools and RAG examples for the SAGE framework
[](https://www.python.org/downloads/)
[](../../LICENSE)
## 📋 Overview
**SAGE Benchmark** provides a comprehensive suite of benchmarking tools and RAG (Retrieval-Augmented
Generation) examples for evaluating SAGE framework performance. This package enables researchers and
developers to:
- **Benchmark RAG pipelines** with multiple retrieval strategies (dense, sparse, hybrid)
- **Compare vector databases** (Milvus, ChromaDB, FAISS) for RAG applications
- **Evaluate multimodal retrieval** with text, image, and video data
- **Run reproducible experiments** with standardized configurations and metrics
This package is designed for both research experiments and production system evaluation.
## ✨ Key Features
- **Multiple RAG Implementations**: Dense, sparse, hybrid, and multimodal retrieval
- **Vector Database Support**: Milvus, ChromaDB, FAISS integration
- **Experiment Framework**: Automated benchmarking with configurable experiments
- **Evaluation Metrics**: Comprehensive metrics for RAG performance
- **Sample Data**: Included test data for quick start
- **Extensible Design**: Easy to add new benchmarks and retrieval methods
## 📦 Package Structure
```
sage-benchmark/
├── src/
│ └── sage/
│ └── benchmark/
│ ├── __init__.py
│ └── benchmark_rag/ # RAG benchmarking
│ ├── __init__.py
│ ├── implementations/ # RAG implementations
│ │ ├── pipelines/ # RAG pipeline scripts
│ │ │ ├── qa_dense_retrieval_milvus.py
│ │ │ ├── qa_sparse_retrieval_milvus.py
│ │ │ ├── qa_multimodal_fusion.py
│ │ │ └── ...
│ │ └── tools/ # Supporting tools
│ │ ├── build_chroma_index.py
│ │ ├── build_milvus_dense_index.py
│ │ └── loaders/
│ ├── evaluation/ # Experiment framework
│ │ ├── pipeline_experiment.py
│ │ ├── evaluate_results.py
│ │ └── config/
│ ├── config/ # RAG configurations
│ └── data/ # Test data
│ # Future benchmarks:
│ # ├── benchmark_agent/ # Agent benchmarking
│ # └── benchmark_anns/ # ANNS benchmarking
├── tests/
├── pyproject.toml
└── README.md
```
## 🚀 Installation
### Quick Start (Recommended)
Clone the repository with submodules and set up development environment:
```bash
# 1. Clone repository
git clone --recurse-submodules https://github.com/intellistream/sage-benchmark.git
cd sage-benchmark
# Or if already cloned, initialize submodules
./quickstart.sh
# 2. Install package with development dependencies
pip install -e ".[dev]"
# 3. Install pre-commit hooks (IMPORTANT for contributors)
pre-commit install
```
The `quickstart.sh` script will automatically:
- Initialize all Git submodules (LibAMM, SAGE-DB-Bench, sageData)
- Check environment and dependencies
- Display submodule status
**Why install pre-commit?** Pre-commit hooks automatically check code quality (formatting, import sorting, linting) before each commit, preventing CI/CD failures.
### Manual Installation
If you prefer manual setup:
```bash
# Clone repository
git clone https://github.com/intellistream/sage-benchmark.git
cd sage-benchmark
# Initialize submodules (direct level only, not recursive)
git submodule update --init
# Install package
pip install -e .
```
Or with development dependencies:
```bash
pip install -e ".[dev]"
```
### Git Submodules
This repository uses Git submodules for external components:
- **benchmark_amm** (`src/sage/benchmark/benchmark_amm/`) → [LibAMM](https://github.com/intellistream/LibAMM)
- **benchmark_anns** (`src/sage/benchmark/benchmark_anns/`) → [SAGE-DB-Bench](https://github.com/intellistream/SAGE-DB-Bench)
- **sage.data** (`src/sage/data/`) → [sageData](https://github.com/intellistream/sageData)
All submodules track the `main-dev` branch and must be initialized before use.
## 📊 RAG Benchmarking
The benchmark_rag module provides comprehensive RAG benchmarking capabilities:
### RAG Implementations
Various RAG approaches for performance comparison:
**Vector Databases:**
- **Milvus**: Dense, sparse, and hybrid retrieval
- **ChromaDB**: Local vector database with simple setup
- **FAISS**: Efficient similarity search
**Retrieval Methods:**
- Dense retrieval (embeddings-based)
- Sparse retrieval (BM25, sparse vectors)
- Hybrid retrieval (combining dense + sparse)
- Multimodal fusion (text + image + video)
### Quick Start
#### 1. Build Vector Index
First, prepare your vector index:
```bash
# Build ChromaDB index (simplest)
python -m sage.benchmark.benchmark_rag.implementations.tools.build_chroma_index
# Or build Milvus dense index
python -m sage.benchmark.benchmark_rag.implementations.tools.build_milvus_dense_index
```
#### 2. Run a RAG Pipeline
Test individual RAG pipelines:
```bash
# Dense retrieval with Milvus
python -m sage.benchmark.benchmark_rag.implementations.pipelines.qa_dense_retrieval_milvus
# Sparse retrieval
python -m sage.benchmark.benchmark_rag.implementations.pipelines.qa_sparse_retrieval_milvus
# Hybrid retrieval (dense + sparse)
python -m sage.benchmark.benchmark_rag.implementations.pipelines.qa_hybrid_retrieval_milvus
```
#### 3. Run Benchmark Experiments
Execute full benchmark suite:
```bash
# Run comprehensive benchmark
python -m sage.benchmark.benchmark_rag.evaluation.pipeline_experiment
# Evaluate and generate reports
python -m sage.benchmark.benchmark_rag.evaluation.evaluate_results
```
#### 4. View Results
Results are saved in `benchmark_results/`:
- `experiment_TIMESTAMP/` - Individual experiment runs
- `metrics.json` - Performance metrics
- `comparison_report.md` - Comparison report
## 📖 Quick Start
### Basic Example
```python
from sage.benchmark.benchmark_rag.implementations.pipelines import (
qa_dense_retrieval_milvus,
)
from sage.benchmark.benchmark_rag.config import load_config
# Load configuration
config = load_config("config_dense_milvus.yaml")
# Run RAG pipeline
results = qa_dense_retrieval_milvus.run_pipeline(query="What is SAGE?", config=config)
# View results
print(f"Retrieved {len(results)} documents")
for doc in results:
print(f"- {doc.content[:100]}...")
```
### Run Custom Benchmark
```python
from sage.benchmark.benchmark_rag.evaluation import PipelineExperiment
# Define experiment configuration
experiment = PipelineExperiment(
name="custom_rag_benchmark",
pipelines=["dense", "sparse", "hybrid"],
queries=["query1.txt", "query2.txt"],
metrics=["precision", "recall", "latency"],
)
# Run experiment
results = experiment.run()
# Generate report
experiment.generate_report(results)
```
### Configuration
Configuration files are located in `sage/benchmark/benchmark_rag/config/`:
- `config_dense_milvus.yaml` - Dense retrieval configuration
- `config_sparse_milvus.yaml` - Sparse retrieval configuration
- `config_hybrid_milvus.yaml` - Hybrid retrieval configuration
- `config_qa_chroma.yaml` - ChromaDB configuration
Experiment configurations in `sage/benchmark/benchmark_rag/evaluation/config/`:
- `experiment_config.yaml` - Benchmark experiment settings
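As a rough sketch of the kind of settings such an experiment config holds (the field names below are illustrative; consult the shipped `experiment_config.yaml` for the actual schema):

```yaml
# Illustrative only - check the shipped experiment_config.yaml for the real schema
name: rag_comparison
pipelines: [dense, sparse, hybrid]
queries: data/queries.jsonl
metrics: [precision, recall, latency]
output_dir: benchmark_results/
```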
## 📖 Data
Test data is included in the package:
- **Benchmark Data** (`benchmark_rag/data/`):
- `queries.jsonl` - Sample queries for testing
- `qa_knowledge_base.*` - Knowledge base in multiple formats (txt, md, pdf, docx)
- `sample/` - Additional sample documents for testing
- **Benchmark Config** (`benchmark_rag/config/`):
- `experiment_config.yaml` - RAG benchmark configurations
## 🔧 Development
### Running Tests
```bash
pytest packages/sage-benchmark/
```
### Code Formatting
```bash
# Format code
black packages/sage-benchmark/
# Lint code
ruff check packages/sage-benchmark/
```
## 📚 Documentation
For detailed documentation on each component:
- See `src/sage/benchmark/rag/README.md` for RAG examples
- See `src/sage/benchmark/benchmark_rag/README.md` for benchmark details
## 🔮 Future Components
- **benchmark_agent**: Agent system performance benchmarking
- **benchmark_anns**: Approximate Nearest Neighbor Search benchmarking
- **benchmark_llm**: LLM inference performance benchmarking
## 🤝 Contributing
This package follows the same contribution guidelines as the main SAGE project. See the main
repository's `CONTRIBUTING.md`.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](../../LICENSE) file for details.
## 🔗 Related Packages
- **sage-kernel**: Core computation engine for running benchmarks
- **sage-libs**: RAG components and utilities
- **sage-middleware**: Vector database services (Milvus, ChromaDB)
- **sage-common**: Common utilities and data types
## 📮 Support
- **Documentation**: https://intellistream.github.io/SAGE-Pub/guides/packages/sage-benchmark/
- **Issues**: https://github.com/intellistream/SAGE/issues
- **Discussions**: https://github.com/intellistream/SAGE/discussions
______________________________________________________________________
**Part of the SAGE Framework** | [Main Repository](https://github.com/intellistream/SAGE)
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | MIT | sage, benchmark, rag, retrieval-augmented-generation, evaluation, experiments, intellistream | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | ==3.11.* | [] | [] | [] | [
"isage-common>=0.2.3",
"isage-platform>=0.2.3",
"isage-kernel>=0.2.3",
"isage-middleware>=0.2.3",
"isage-libs>=0.2.0.6",
"sentence-transformers<4.0.0,>=3.1.0",
"pyyaml>=6.0",
"chromadb>=1.0.20",
"pymilvus[model]>=2.4.0",
"faiss-cpu<2.0.0,>=1.7.0",
"pandas>=2.0.0",
"numpy<2.3.0,>=1.26.0",
"matplotlib>=3.7.0",
"seaborn>=0.12.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff==0.14.6; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/SAGE",
"Documentation, https://intellistream.github.io/SAGE",
"Repository, https://github.com/intellistream/SAGE",
"Issues, https://github.com/intellistream/SAGE/issues"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-21T09:32:54.533637 | isage_benchmark-0.1.1.1.tar.gz | 97,390 | 64/44/c906d1e3defc756a75114641ab4dbce9f70c0d14c8fd383d33c76ed40975/isage_benchmark-0.1.1.1.tar.gz | source | sdist | null | false | a9c5d977a80212e40e0f83aa066b7ec4 | 73dafca32d026d029b6068f2705f0e9168b264e7c2748cc1054754e5f21bdd0c | 6444c906d1e3defc756a75114641ab4dbce9f70c0d14c8fd383d33c76ed40975 | null | [
"LICENSE"
] | 210 |
2.4 | codespy-ai | 0.3.2 | Code review agent powered by DSPy | <p align="center">
<img src="assets/codespy-logo.png" alt="CodeSpy logo">
</p>
<h1 align="center">Code<a href="https://github.com/khezen/codespy">Spy</a></h1>
<p align="center">
An open-source AI reviewer that catches bugs, improves code quality, and integrates directly into your PR workflow, without sacrificing control or security.
</p>
<p align="center">
<a href="https://github.com/khezen/codespy/actions">
<img src="https://img.shields.io/github/actions/workflow/status/khezen/codespy/ci.yml">
</a>
<a href="https://github.com/khezen/codespy/blob/main/LICENSE">
<img src="https://img.shields.io/github/license/khezen/codespy">
</a>
<a href="https://github.com/khezen/codespy/stargazers">
<img src="https://img.shields.io/github/stars/khezen/codespy">
</a>
<a href="https://github.com/khezen/codespy/issues">
<img src="https://img.shields.io/github/issues/khezen/codespy">
</a>
</p>
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Why CodeSpy?](#why-codespy)
- [Features](#features)
- [Installation](#installation)
- [Using pip](#using-pip)
- [Using Homebrew (macOS/Linux)](#using-homebrew-macoslinux)
- [Using Docker](#using-docker)
- [Using Poetry (for development)](#using-poetry-for-development)
- [Quick Start](#quick-start)
- [Usage](#usage)
- [Command Line](#command-line)
- [IDE Integration (MCP Server)](#ide-integration-mcp-server)
- [Using Docker](#using-docker-1)
- [GitHub Action](#github-action)
- [Configuration](#configuration)
- [Setup](#setup)
- [Git Platform Tokens](#git-platform-tokens)
- [GitHub Token](#github-token)
- [GitLab Token](#gitlab-token)
- [LLM Provider](#llm-provider)
- [Advanced Configuration (YAML)](#advanced-configuration-yaml)
- [Recommended Model Strategy](#recommended-model-strategy)
- [Output](#output)
- [Markdown (default)](#markdown-default)
- [GitHub/GitLab Review Comments](#githubgitlab-review-comments)
- [Architecture](#architecture)
- [DSPy Signatures](#dspy-signatures)
- [Supported Languages](#supported-languages)
- [Development](#development)
- [Contributors](#contributors)
- [License](#license)
---
## Why CodeSpy?
Most AI code reviewers are:
- ❌ Black boxes
- ❌ SaaS-only
- ❌ Opaque about reasoning
- ❌ Risky for sensitive codebases
**CodeSpy is different:**
- 🔍 Transparent reasoning
- 🔐 Self-hostable
- 🧠 Configurable review rules
- 🔄 Native PR integration
- 🧩 Extensible architecture
- 📦 100% open-source
Built for **engineering teams that care about correctness, security, and control.**
---
## Features
- 🔒 **Security Analysis** - Detects common vulnerabilities (injection, auth issues, data exposure, etc.) with CWE references
- 🐛 **Bug Detection** - Identifies logic errors, null references, resource leaks, edge cases
- 📝 **Documentation Review** - Checks for missing docstrings, outdated comments, incomplete docs
- 🔍 **Intelligent Scope Detection** - Automatically identifies code scopes (frontend, backend, infra, microservice in mono repo, etc...)
- 🔄 **Smart Deduplication** - LLM-powered issue deduplication across reviewers
- 💰 **Cost Tracking** - Track LLM calls, tokens, and costs per review
- 🤖 **Model Agnostic** - Works with OpenAI, AWS Bedrock, Anthropic, Ollama, and more via LiteLLM
- 🐳 **Docker Ready** - Run locally or in the cloud with Docker
- <img src="assets/GitHub_Invertocat_Black.svg" height="20" alt="GitHub"> <img src="assets/gitlab-logo-500-rgb.png" height="20" alt="GitLab"> **GitHub & GitLab** - Works with both platforms, auto-detects from URL
- 🖥️ **Local Reviews** - Review local git changes without GitHub/GitLab — diff against any branch, ref, or review uncommitted work
- 🧩 **MCP Server** - IDE integration via Model Context Protocol — trigger reviews from AI coding assistants like Cline without leaving your editor
- 🔌 **GitHub Action** - One-line integration for automatic PR reviews
---
## Installation
### Using pip
```bash
pip install codespy-ai
```
### Using Homebrew (macOS/Linux)
```bash
brew tap khezen/codespy
brew install codespy
```
### Using Docker
```bash
# Pull the pre-built image from GitHub Container Registry
docker pull ghcr.io/khezen/codespy:latest
# Or build locally
docker build -t codespy .
```
### Using Poetry (for development)
```bash
# Clone the repository
git clone https://github.com/khezen/codespy.git
cd codespy
# Install dependencies
poetry install
# Or install only production dependencies
poetry install --only main
```
---
## Quick Start
Get up and running in 30 seconds:
```bash
# 1. Set your Git token (or let codespy auto-discover from gh/glab CLI)
export GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx # For GitHub
# OR
export GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx # For GitLab
# 2. Set your LLM provider (example with Anthropic)
export DEFAULT_MODEL=anthropic/claude-opus-4-6
export ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxx
# 3. Review a PR or MR!
codespy review https://github.com/owner/repo/pull/123
# OR
codespy review https://gitlab.com/group/project/-/merge_requests/123
```
codespy auto-discovers credentials from standard locations (`~/.aws/credentials`, `gh auth token`, `glab auth token`, etc.) - see [Configuration](#configuration) for details.
---
## Usage
### Command Line
```bash
# Review GitHub Pull Request
codespy review https://github.com/owner/repo/pull/123
# Review GitLab Merge Request
codespy review https://gitlab.com/group/project/-/merge_requests/123
# GitLab with nested groups
codespy review https://gitlab.com/group/subgroup/project/-/merge_requests/123
# Self-hosted GitLab
codespy review https://gitlab.mycompany.com/team/project/-/merge_requests/123
# Output as JSON
codespy review https://github.com/owner/repo/pull/123 --output json
# Use a specific model
codespy review https://github.com/owner/repo/pull/123 --model anthropic/claude-opus-4-6
# Use a custom config file
codespy review https://github.com/owner/repo/pull/123 --config path/to/config.yaml
codespy review https://github.com/owner/repo/pull/123 -f staging.yaml
# Disable stdout output (useful with --git-comment)
codespy review https://github.com/owner/repo/pull/123 --no-stdout
# Post review as GitHub/GitLab comment
codespy review https://github.com/owner/repo/pull/123 --git-comment
# Combine: only post to Git platform, no stdout
codespy review https://github.com/owner/repo/pull/123 --no-stdout --git-comment
# Show current configuration
codespy config
# Show configuration from a specific file
codespy config --config path/to/config.yaml
# Show version
codespy --version
# Review local git changes (no GitHub/GitLab needed)
codespy review-local # Review current dir vs main
codespy review-local /path/to/repo # Review specific repo
codespy review-local --base develop # Compare against develop
codespy review-local --base origin/main # Compare against origin/main
codespy review-local --base HEAD~5 # Compare against 5 commits back
# Review uncommitted changes (staged + unstaged)
codespy review-uncommitted # Review current dir
codespy review-uncommitted /path/to/repo
codespy review-uncommitted --output json
```
### IDE Integration (MCP Server)
CodeSpy can run as an MCP (Model Context Protocol) server for integration with AI coding assistants like Cline, enabling code reviews directly from your editor without leaving your workflow.
```bash
# Start the MCP server
codespy serve
# Use a custom config file
codespy serve --config path/to/config.yaml
```
**Configure your IDE** (example for Cline in VS Code):
Add to `cline_mcp_settings.json`:
```json
{
"mcpServers": {
"codespy-reviewer": {
"command": "codespy",
"args": ["serve"],
"env": {
"DEFAULT_MODEL": "anthropic/claude-opus-4-6",
"ANTHROPIC_API_KEY": "your-key-here"
}
}
}
}
```
Or for AWS Bedrock:
```json
{
"mcpServers": {
"codespy-reviewer": {
"command": "codespy",
"args": ["serve"],
"env": {
"DEFAULT_MODEL": "bedrock/us.anthropic.claude-opus-4-6-v1",
"AWS_REGION": "us-east-1",
"AWS_ACCESS_KEY_ID": "your-access-key",
"AWS_SECRET_ACCESS_KEY": "your-secret-key"
}
}
}
}
```
**Available MCP Tools:**
- `review_local_changes(repo_path, base_ref)` — Review branch changes vs base (e.g., vs `main`)
- `review_uncommitted(repo_path)` — Review staged + unstaged working tree changes
- `review_pr(mr_url)` — Review a GitHub PR or GitLab MR by URL
Then ask your AI assistant: *"Review my local changes"* or *"Review uncommitted work in /path/to/repo"*
### Using Docker
```bash
# With docker run (using GHCR image)
docker run --rm \
-e GITHUB_TOKEN=$GITHUB_TOKEN \
-e DEFAULT_MODEL=anthropic/claude-opus-4-6 \
-e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
ghcr.io/khezen/codespy:latest review https://github.com/owner/repo/pull/123
# Or use a specific version
docker run --rm \
-e GITHUB_TOKEN=$GITHUB_TOKEN \
-e DEFAULT_MODEL=anthropic/claude-opus-4-6 \
-e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
ghcr.io/khezen/codespy:0.2.1 review https://github.com/owner/repo/pull/123
```
### GitHub Action
Add CodeSpy to your repository for automatic PR reviews:
**Trigger on `/codespy review` comment:**
```yaml
# .github/workflows/codespy-review.yml
name: CodeSpy Code Review
on:
issue_comment:
types: [created]
jobs:
review:
# Only run on PR comments containing '/codespy review'
if: |
github.event.issue.pull_request &&
contains(github.event.comment.body, '/codespy review')
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- name: Run CodeSpy Review
uses: khezen/codespy@v1
with:
model: 'anthropic/claude-opus-4-6'
anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
```
**Trigger automatically on every PR:**
```yaml
# .github/workflows/codespy-review.yml
name: CodeSpy Code Review
on:
pull_request:
types: [opened, synchronize, reopened]
jobs:
review:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- name: Run CodeSpy Review
uses: khezen/codespy@v1
with:
model: 'anthropic/claude-opus-4-6'
anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
```
See [`.github/workflows/codespy-review.yml.example`](.github/workflows/codespy-review.yml.example) for more examples.
---
## Configuration
codespy supports two configuration methods:
- **`.env` file** - Simple environment variables for basic setup
- **`codespy.yaml`** - Full YAML configuration for advanced options (per-module settings)
Priority: CLI options > environment variables > YAML config > defaults
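That precedence amounts to a first-match lookup. The sketch below is illustrative only; `resolve_setting` is a hypothetical helper, not part of codespy's API:

```python
import os

def resolve_setting(name, cli_value=None, yaml_config=None, default=None):
    """Resolve one setting with the documented precedence:
    CLI option > environment variable > YAML config > default."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(name.upper())  # e.g. DEFAULT_MODEL
    if env_value is not None:
        return env_value
    if yaml_config and yaml_config.get(name) is not None:
        return yaml_config[name]
    return default
```

For example, `--model` on the command line beats a `default_model:` entry in `codespy.yaml`, which in turn beats the built-in default.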
### Setup
```bash
# Copy the example file
cp .env.example .env
```
### Git Platform Tokens
codespy automatically detects the platform (GitHub or GitLab) from the URL and discovers tokens from multiple sources.
#### GitHub Token
Auto-discovered from:
- `GITHUB_TOKEN` or `GH_TOKEN` environment variables
- GitHub CLI (`gh auth token`)
- Git credential helper
- `~/.netrc` file
Or create a token at https://github.com/settings/tokens with `repo` scope:
```bash
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
```
To disable auto-discovery:
```bash
GITHUB_AUTO_DISCOVER_TOKEN=false
```
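The discovery chain tries each source in order and returns the first hit. The sketch below covers the env-var, `gh` CLI, and `~/.netrc` sources (the git credential helper step is omitted); `discover_github_token` is a hypothetical name for illustration, not codespy's actual code:

```python
import os
import subprocess
from netrc import netrc, NetrcParseError

def discover_github_token():
    """Return the first GitHub token found, or None."""
    # 1. Environment variables
    for var in ("GITHUB_TOKEN", "GH_TOKEN"):
        if os.environ.get(var):
            return os.environ[var]
    # 2. GitHub CLI
    try:
        proc = subprocess.run(["gh", "auth", "token"],
                              capture_output=True, text=True, timeout=5)
        if proc.returncode == 0 and proc.stdout.strip():
            return proc.stdout.strip()
    except (FileNotFoundError, subprocess.TimeoutExpired):
        pass
    # 3. ~/.netrc: password field of the github.com entry
    try:
        auth = netrc().authenticators("github.com")
        if auth:
            return auth[2]
    except (OSError, NetrcParseError):
        pass
    return None
```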
#### GitLab Token
Auto-discovered from:
- `GITLAB_TOKEN` or `GITLAB_PRIVATE_TOKEN` environment variables
- GitLab CLI (`glab auth token`)
- Git credential helper
- `~/.netrc` file
- python-gitlab config files (`~/.python-gitlab.cfg`, `/etc/python-gitlab.cfg`)
Or create a token at https://gitlab.com/-/user_settings/personal_access_tokens with `api` scope:
```bash
GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
```
For self-hosted GitLab:
```bash
GITLAB_URL=https://gitlab.mycompany.com
GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
```
To disable auto-discovery:
```bash
GITLAB_AUTO_DISCOVER_TOKEN=false
```
### LLM Provider
codespy auto-discovers credentials for all providers:
**Anthropic** (auto-discovers from `$ANTHROPIC_API_KEY`, `~/.config/anthropic/`, `~/.anthropic/`):
```bash
DEFAULT_MODEL=anthropic/claude-opus-4-6
# Optional - set explicitly or let codespy auto-discover:
# ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxx
```
**AWS Bedrock** (auto-discovers from `~/.aws/credentials`, AWS CLI, env vars):
```bash
DEFAULT_MODEL=bedrock/us.anthropic.claude-sonnet-4-5-20250929-v1:0
AWS_REGION=us-east-1
# Optional - uses ~/.aws/credentials by default, or set explicitly:
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
```
**OpenAI** (auto-discovers from `$OPENAI_API_KEY`, `~/.config/openai/`, `~/.openai/`):
```bash
DEFAULT_MODEL=openai/gpt-5
# Optional - set explicitly or let codespy auto-discover:
# OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
```
**Google Gemini** (auto-discovers from `$GEMINI_API_KEY`, `$GOOGLE_API_KEY`, gcloud ADC):
```bash
DEFAULT_MODEL=gemini/gemini-2.5-pro
# Optional - set explicitly or let codespy auto-discover:
# GEMINI_API_KEY=xxxxxxxxxxxxxxxxxxxx
```
**Local Ollama:**
```bash
DEFAULT_MODEL=ollama/llama3
```
To disable auto-discovery for specific providers:
```bash
AUTO_DISCOVER_AWS=false
AUTO_DISCOVER_OPENAI=false
AUTO_DISCOVER_ANTHROPIC=false
AUTO_DISCOVER_GEMINI=false
```
### Advanced Configuration (YAML)
For per-signature settings, use `codespy.yaml`. See [`codespy.yaml`](codespy.yaml) for all available options including:
- LLM provider settings and auto-discovery
- Git platform configuration (GitHub/GitLab)
- Per-signature model and iteration overrides
- Output format and destination settings
- Directory exclusions
Override YAML settings via environment variables using `_` separator:
```bash
# Default settings
export DEFAULT_MODEL=anthropic/claude-opus-4-6
export DEFAULT_MAX_ITERS=20
# Per-signature settings (use signature name, not module name)
export CODE_REVIEW_MODEL=anthropic/claude-sonnet-4-5-20250929
# Output settings
export OUTPUT_STDOUT=false
export OUTPUT_GIT=true
```
See `codespy.yaml` for full configuration options.
### Recommended Model Strategy
codespy uses a tiered model approach to balance review quality and cost:
| Tier | Role | Default | Recommended Model | Used By |
|------|------|---------|-------------------|---------|
| 🧠 **Smart** | Core analysis & reasoning | `DEFAULT_MODEL` | `anthropic/claude-opus-4-6` | Code & doc review, supply chain, scope identification |
| ⚡ **Mid-tier** | Extraction & deduplication | Falls back to `DEFAULT_MODEL` | `anthropic/claude-sonnet-4-5-20250929` | TwoStepAdapter field extraction, issue deduplication |
| 💰 **Cheap** | Summarization | Falls back to `DEFAULT_MODEL` | `anthropic/claude-haiku-4-5-20251001` | PR summary generation |
By default, **all models use `DEFAULT_MODEL`** (`anthropic/claude-opus-4-6`). This works out of the box — just set your API credentials and go.
To optimize costs, override the mid-tier and cheap models:
```bash
# .env or environment variables
DEFAULT_MODEL=anthropic/claude-opus-4-6 # Smart tier (default)
EXTRACTION_MODEL=anthropic/claude-sonnet-4-5-20250929 # Mid-tier: field extraction
DEDUPLICATION_MODEL=anthropic/claude-sonnet-4-5-20250929 # Mid-tier: issue deduplication
SUMMARIZATION_MODEL=anthropic/claude-haiku-4-5-20251001 # Cheap tier: PR summary
```
Or in `codespy.yaml`:
```yaml
default_model: anthropic/claude-opus-4-6
extraction_model: anthropic/claude-sonnet-4-5-20250929
signatures:
deduplication:
model: anthropic/claude-sonnet-4-5-20250929
summarization:
model: anthropic/claude-haiku-4-5-20251001
```
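The fallback behavior can be pictured as a small resolver: a per-signature override wins, then a tier-specific key, then `default_model`. This is an illustrative sketch of the documented precedence (`model_for` is not a codespy function):

```python
def model_for(signature, config):
    """Pick the model for a pipeline stage, falling back to default_model."""
    per_signature = config.get("signatures", {}).get(signature, {})
    return (per_signature.get("model")            # signatures.<name>.model
            or config.get(f"{signature}_model")   # e.g. extraction_model
            or config["default_model"])           # smart-tier default
```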
---
## Output
### Markdown (default)
```markdown
# Code Review: Add user authentication
**PR:** [owner/repo#123](https://github.com/owner/repo/pull/123)
**Reviewed at:** 2024-01-15 10:30 UTC
**Model:** anthropic/claude-opus-4-6
## Summary
This PR implements user authentication with JWT tokens...
## Statistics
- **Total Issues:** 3
- **Critical:** 1
- **Security:** 1
- **Bugs:** 1
- **Documentation:** 1
## Issues
### 🔴 Critical (1)
#### SQL Injection Vulnerability
**Location:** `src/auth/login.py:45`
**Category:** security
The user input is directly interpolated into the SQL query...
**Code:**
query = f"SELECT * FROM users WHERE username = '{username}'"
**Suggestion:**
Use parameterized queries instead...
**Reference:** [CWE-89](https://cwe.mitre.org/data/definitions/89.html)
```
### GitHub/GitLab Review Comments
CodeSpy can post reviews directly to GitHub PRs or GitLab MRs as native review comments with inline annotations.
**Enable via CLI:**
```bash
# GitHub
codespy review https://github.com/owner/repo/pull/123 --git-comment
# GitLab
codespy review https://gitlab.com/group/project/-/merge_requests/123 --git-comment
# Combine: only post to platform, no stdout
codespy review https://github.com/owner/repo/pull/123 --no-stdout --git-comment
```
**Enable via configuration:**
```bash
# Environment variable
export OUTPUT_GIT=true
# Or in codespy.yaml
output_git: true
```
**Features:**
- 🎯 **Inline Comments** - Issues are posted as review comments on the exact lines where they occur
- 📏 **Multi-line Support** - Issues spanning multiple lines are annotated with start/end line ranges
- 🔴🟠🟡🔵 **Severity Indicators** - Visual emoji markers for Critical, High, Medium, Low severity
- 📦 **Collapsible Sections** - Organized review body with expandable details:
- 📋 Summary of changes
- 🎯 Quality Assessment
- 📊 Statistics table
- 💰 Cost breakdown per signature
- 💡 Recommendation
- 🔗 **CWE References** - Security issues link directly to MITRE CWE database
---
## Architecture
```
┌─────────────────────────────────────────────────────────────────────┐
│ codespy CLI │
├─────────────────────────────────────────────────────────────────────┤
│ review <pr_url> [--config ...] [--output json|md] [--model ...] │
└──────────────────────────────┬──────────────────────────────────────┘
│
┌──────────────────────────────▼──────────────────────────────────────┐
│ Git Platform Integration │
│ - GitHub: Fetch PR diff, changed files, commit messages │
│ - GitLab: Fetch MR diff, changed files, commit messages │
│ - Auto-detects platform from URL │
│ - Clone/access full repository for context │
└──────────────────────────────┬──────────────────────────────────────┘
│
┌──────────────────────────────▼──────────────────────────────────────┐
│ DSPy Review Pipeline │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Scope Identifier │ │
│ │ (identifies code scopes: frontend, backend, infra, etc.) │ │
│ └──────────────────────────┬─────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼─────────────────────────────────┐ │
│ │ Parallel Review Modules │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────┐ │ │
│ │ │ Supply Chain │ │ Code │ │ Doc │ │ │
│ │ │ Auditor │ │ Reviewer │ │ Reviewer │ │ │
│ │ │ │ │ (bug+sec+ │ │ │ │ │
│ │ │ │ │ smell) │ │ │ │ │
│ │ └──────────────┘ └──────────────┘ └──────────┘ │ │
│ └──────────────────────────┬─────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼─────────────────────────────────┐ │
│ │ Issue Deduplicator │ │
│ │ (LLM-powered deduplication across reviewers) │ │
│ └──────────────────────────┬─────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼─────────────────────────────────┐ │
│ │ PR Summarizer │ │
│ │ (generates summary, quality assessment, recommendation) │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ Cost Tracker (tokens, calls, $) │
└──────────────────────────────┬──────────────────────────────────────┘
│
┌──────────────────────────────▼──────────────────────────────────────┐
│ Tools Layer │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌──────────────┐ │
│ │ Filesystem │ │ Git │ │ Web │ │ Cyber/OSV │ │
│ │ │ │ (GH + GL) │ │ │ │ │ │
│ └────────────┘ └────────────┘ └────────────┘ └──────────────┘ │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ Parsers │ │
│ │ ┌─────────────────┐ ┌────────────────────────────────────┐ │ │
│ │ │ Ripgrep │ │ Tree-sitter │ │ │
│ │ │ (code search) │ │ (multi-language AST parsing) │ │ │
│ │ └─────────────────┘ └────────────────────────────────────┘ │ │
│ └────────────────────────────────────────────────────────────────┘ │
└──────────────────────────────┬──────────────────────────────────────┘
│
┌──────────────────────────────▼──────────────────────────────────────┐
│ LLM Backend (LiteLLM) │
│ Bedrock | OpenAI | Anthropic | Ollama | Any OpenAI-compatible │
└─────────────────────────────────────────────────────────────────────┘
```
## DSPy Signatures
The review is powered by DSPy signatures that structure the LLM's analysis:
| Signature | Config Key | Description |
|-----------|------------|-------------|
| **ScopeIdentifierSignature** | `scope` | Identifies code scopes (frontend, backend, infra, microservice in a monorepo, etc.) |
| **CodeReviewSignature** | `code_review` | Detects verified bugs, security vulnerabilities, removed defensive code, and code smells |
| **DocReviewSignature** | `doc` | Detects stale or wrong documentation caused by code changes |
| **SupplyChainSecuritySignature** | `supply_chain` | Analyzes artifacts (Dockerfiles) and dependencies for supply chain security |
| **IssueDeduplicationSignature** | `deduplication` | LLM-powered deduplication of issues across reviewers |
| **MRSummarySignature** | `summarization` | Generates summary, quality assessment, and recommendation |
## Supported Languages
Tree-sitter based parsing for context-aware analysis:
| Language | Extensions | Features |
|----------|-----------|----------|
| Python | `.py` | Functions, classes, imports |
| JavaScript | `.js`, `.jsx` | Functions, classes, imports |
| TypeScript | `.ts`, `.tsx` | Functions, classes, interfaces |
| Go | `.go` | Functions, structs, interfaces |
| Java | `.java` | Methods, classes, packages |
| Kotlin | `.kt` | Functions, classes, objects |
| Swift | `.swift` | Functions, classes, structs |
| Objective-C | `.m`, `.h` | Methods, interfaces, protocols |
| Rust | `.rs` | Functions, structs, traits, impl blocks |
| Terraform | `.tf` | Resources, data sources, modules, variables |
Files in any language are still covered by the security, bug, and documentation analysis; the table above lists where Tree-sitter adds AST-aware context.
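A dispatcher over the table above might look like the following sketch (illustrative; `grammar_for` is not part of codespy). Files whose extension has no grammar fall back to plain-text analysis:

```python
from pathlib import Path

# Extension-to-grammar mapping taken from the table above.
TREE_SITTER_LANGUAGES = {
    ".py": "python", ".js": "javascript", ".jsx": "javascript",
    ".ts": "typescript", ".tsx": "typescript", ".go": "go",
    ".java": "java", ".kt": "kotlin", ".swift": "swift",
    ".m": "objc", ".h": "objc", ".rs": "rust", ".tf": "hcl",
}

def grammar_for(path):
    """Return the Tree-sitter grammar for a file, or None for plain-text."""
    return TREE_SITTER_LANGUAGES.get(Path(path).suffix)
```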
## Development
```bash
# Quick setup (creates .env and installs dependencies)
make setup
# Or manually with Poetry:
poetry install # Install all dependencies including dev
poetry lock # Update lock file
# Available make targets
make help
# Run commands with Poetry
make lint # Run ruff linter
make format # Format code with ruff
make typecheck # Run mypy type checker
make test # Run pytest tests
make build # Build package with Poetry
make clean # Clean build artifacts
# Or run directly:
poetry run codespy review https://github.com/owner/repo/pull/123
poetry run ruff check src/
poetry run mypy src/
```
---
## Contributors
* @khezen
* @pranavsriram8
---
## License
MIT
| text/markdown | khezen | khezen@users.noreply.github.com | null | null | MIT | code-review, ai, dspy, llm, github, pull-request, security, bug-detection, static-analysis | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"PyGithub>=2.5.0",
"beautifulsoup4>=4.12.0",
"boto3>=1.35.0",
"cachetools>=5.0.0",
"cloudpickle<4.0.0,>=3.1.2",
"ddgs>=8.0.0",
"dspy[mcp]<4.0.0,>=3.1.3",
"gitpython>=3.1.0",
"httpx>=0.28.0",
"json-repair<0.56.0,>=0.55.1",
"litellm<2.0.0,>=1.81.6",
"markdownify>=0.13.0",
"mcp>=1.0.0",
"pydantic>=2.10.0",
"pydantic-settings>=2.6.0",
"python-gitlab>=4.0.0",
"rich>=13.9.0",
"tree-sitter>=0.23",
"tree-sitter-go>=0.23",
"tree-sitter-hcl>=1.2.0",
"tree-sitter-java>=0.23",
"tree-sitter-javascript>=0.23",
"tree-sitter-kotlin>=1.0",
"tree-sitter-objc>=3.0",
"tree-sitter-python>=0.23",
"tree-sitter-rust>=0.23",
"tree-sitter-swift>=0.0.1",
"tree-sitter-typescript>=0.23",
"typer>=0.12.0"
] | [] | [] | [] | [
"Documentation, https://github.com/khezen/codespy#readme",
"Homepage, https://github.com/khezen/codespy",
"Repository, https://github.com/khezen/codespy"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:32:46.914407 | codespy_ai-0.3.2.tar.gz | 100,480 | b1/3b/f2ee38c4d7d7a3a034baf098566b647bec4153cdc9c0b9aae8ef2d20c061/codespy_ai-0.3.2.tar.gz | source | sdist | null | false | d74a92bdee92fac740b8fdf40a166733 | 6ccf13b8b27a1642fd7ef95476df32a3610ea5d16d1270a43ac861cd1add0ed9 | b13bf2ee38c4d7d7a3a034baf098566b647bec4153cdc9c0b9aae8ef2d20c061 | null | [
"LICENSE"
] | 211 |
2.4 | translatebot-django | 0.7.1 | Automate Django .po file translation with AI. Repeatable, consistent, and pennies per language. | # translatebot-django
[](https://pypi.org/project/translatebot-django/) [](https://pepy.tech/project/translatebot-django) [](https://github.com/gettranslatebot/translatebot-django/actions/workflows/test.yml) [](https://codecov.io/gh/gettranslatebot/translatebot-django) [](https://opensource.org/licenses/MPL-2.0)
[](https://www.python.org/) [](https://www.djangoproject.com/)
⚡ **Automate Django translations with AI.** Repeatable, consistent, and pennies per language.
Documentation: **[https://translatebot.dev/docs/](https://translatebot.dev/docs/)**
## The Problem
Translating a Django app sounds simple until it isn't:
- **Manual workflow doesn't scale.** Copy strings to Google Translate, paste back, fix placeholders, repeat for every language. It works for 20 strings. It falls apart at 200.
- **AI assistants work once, but not repeatedly.** You can ask ChatGPT or Claude Code to translate a `.po` file, and it'll do a decent job. Once. Next sprint, when 15 new strings appear, you're prompting from scratch, re-translating the whole file, and hoping it stays consistent.
- **SaaS translation platforms are expensive overkill.** Paid localization services charge per-word subscriptions and come with portals, review workflows, and team features you don't need for a solo project or small team.
## Why TranslateBot
TranslateBot is a dedicated tool that sits between "do it by hand" and "pay for a platform":
- **Incremental.** Only translates new and changed strings. Add 10 strings in a sprint, pay for 10 strings, not the whole file.
- **Consistent.** A `TRANSLATING.md` file in your repo acts as a version-controlled glossary: terminology, tone, brand rules. Every translation run uses it.
- **Cost-efficient.** Batches strings into optimized API requests. A typical app costs under $0.01 per language with GPT-4o-mini.
- **Scales to many languages.** One command translates all your configured languages. Adding a new locale is a one-liner.
- **Automatable.** A CLI command you can script or hook into your workflow. No browser, no portal.
- **Placeholder-safe.** Preserves `%(name)s`, `{0}`, `%s`, and HTML tags with 100% test coverage on format string handling.
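A placeholder-safety check of this kind can be approximated with a regex over both strings: extract every placeholder from the source and the translation and require the sets to match. This is an illustrative sketch, not TranslateBot's actual validator:

```python
import re

# Covers %(name)s-style, {0}/{name}-style, and bare %s/%d placeholders.
PLACEHOLDER_RE = re.compile(r"%\([^)]+\)[sd]|\{[^}]*\}|%[sd]")

def placeholders_preserved(source, translation):
    """True if the translation keeps exactly the source's placeholders."""
    return (sorted(PLACEHOLDER_RE.findall(source))
            == sorted(PLACEHOLDER_RE.findall(translation)))
```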
## Installation
TranslateBot is a development tool, so we recommend installing it as a dev dependency:
```bash
uv add --dev translatebot-django
```
## Quick Start
```python
# settings.py
INSTALLED_APPS = [
# ...
'translatebot_django',
]
TRANSLATEBOT_API_KEY = "your-api-key-here"
```
```bash
# Translate to Dutch
python manage.py translate --target-lang nl
# Preview without saving
python manage.py translate --target-lang nl --dry-run
```
## Features
- **Multiple AI Providers**: OpenAI, Anthropic, Google Gemini, Azure, and [many more](https://docs.litellm.ai/docs/providers)
- **Smart Translation**: Preserves placeholders (`%(name)s`, `{0}`, `%s`) and HTML tags
- **Model Field Translation**: Supports [django-modeltranslation](https://github.com/deschler/django-modeltranslation)
- **Flexible Configuration**: Django settings, environment variables, or CLI arguments
- **Well Tested**: 100% code coverage
## When to Use TranslateBot
For a one-off translation of 20 strings, ChatGPT works fine. TranslateBot is for **ongoing projects** with multiple languages where translations need to stay in sync as your code changes.
Use TranslateBot when:
- You're actively developing and strings change every sprint
- You support 3+ languages and want them all updated at once
- You want consistent terminology across translation runs
- You want translations done in seconds, not hours of manual work
## Documentation
For full documentation, visit **[translatebot.dev/docs/](https://translatebot.dev/docs/)**
- [Installation](https://translatebot.dev/docs/getting-started/installation)
- [Configuration](https://translatebot.dev/docs/getting-started/configuration)
- [Command Reference](https://translatebot.dev/docs/usage/command-reference)
- [Model Translation](https://translatebot.dev/docs/usage/model-translation)
- [Supported AI Models](https://translatebot.dev/docs/integrations/ai-models)
- [FAQ](https://translatebot.dev/docs/faq)
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
```bash
# Setup
git clone https://github.com/gettranslatebot/translatebot-django.git
cd translatebot-django
uv sync --extra dev
# Run tests
uv run pytest
```
## License
This project is licensed under the Mozilla Public License 2.0 - see the [LICENSE](LICENSE) file for details.
## Credits
- Built with [LiteLLM](https://github.com/BerriAI/litellm) for universal LLM provider support
- Uses [polib](https://github.com/izimobil/polib) for `.po` file manipulation
---
Made with ❤️ for the Django community
| text/markdown | Bjorn the Builder | bjornthebuilder@proton.me | null | null | MPL-2.0 | django, translation, i18n, localization, gettext, po, openai | [
"Development Status :: 3 - Alpha",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Internationalization",
"Topic :: Software Development :: Localization"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=4.2",
"polib>=1.2.0",
"litellm>=1.80.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-django>=4.5.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"django-modeltranslation>=0.18; extra == \"dev\"",
"django-modeltranslation>=0.18; extra == \"modeltranslation\""
] | [] | [] | [] | [
"homepage, https://github.com/gettranslatebot/translatebot-django",
"documentation, https://translatebot.dev/docs/",
"repository, https://github.com/gettranslatebot/translatebot-django",
"issues, https://github.com/gettranslatebot/translatebot-django/issues",
"changelog, https://github.com/gettranslatebot/translatebot-django/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:32:08.037180 | translatebot_django-0.7.1.tar.gz | 40,109 | bd/87/717ffee51abdf7ae2a93e8283ae5ab96a8ec25b70e9aa9ded59e362d67c3/translatebot_django-0.7.1.tar.gz | source | sdist | null | false | 14278fec6d5b1ffa225fbd1500d17016 | fef776d7dea5adb77568e01e05270be440f097f15daf246c12683a47adb49608 | bd87717ffee51abdf7ae2a93e8283ae5ab96a8ec25b70e9aa9ded59e362d67c3 | null | [
"LICENSE"
] | 223 |
2.4 | fdavrs | 0.1.2 | Federated Drift-Aware Vision Reliability System SDK | FDAVRS
Federated Drift-Aware Vision Reliability System
FDAVRS is a PyTorch-compatible SDK that acts as a self-healing wrapper for Computer Vision models.
When edge devices (such as drones or autonomous vehicles) encounter environmental drift (fog, snow, blur, lighting shifts), traditional models fail silently.
FDAVRS:
Detects drift in real time
Applies Unsupervised Test-Time Adaptation (TTA)
Heals the model mathematically
Securely packages adaptation weights for Federated Learning
Preserves the original model architecture
nstallation
Install the latest version from PyPI:
pip install fdavrs
## Quick Start

Using FDAVRS is seamless: if you know how to use PyTorch, you already know how to use FDAVRS.

### 1️⃣ Import the SDK

```python
import torch
import torchvision

from fdavrs import FDAVRS
```

### 2️⃣ Wrap Your Model

Load your base vision model (e.g., ResNet or YOLO) and wrap it.

```python
# 1. Load your standard pre-trained model
base_model = torchvision.models.resnet18(pretrained=True)

# 2. Wrap it with FDAVRS
robust_model = FDAVRS(
    client_model=base_model,
    feature_layer='avgpool',  # Layer to monitor for drift
    threshold=0.3,            # Drift score boundary
)
```

### 3️⃣ Calibrate the Baseline

Before deployment, the SDK analyzes clean images to establish a baseline embedding.

```python
# Pass a standard PyTorch DataLoader containing clean images
robust_model.fit(clean_data_loader)
```

### 4️⃣ Live Inference & Self-Healing

During live inference, FDAVRS operates automatically.

```python
for images, labels in live_camera_feed:
    # The SDK calculates drift, applies local adaptation if needed,
    # and returns corrected predictions
    predictions = robust_model.predict(images)

    # Inspect SDK behavior
    current_status = robust_model.status()
    print(f"Action Taken: {current_status['action']}")
```
## 📖 API Reference

### Class: `FDAVRS`

The main wrapper class that orchestrates the Monitor, Brain, and Adapter layers.

#### 🔧 Parameters

- `client_model` (`torch.nn.Module`): The PyTorch vision model to make robust. The SDK freezes the feature extractor to prevent catastrophic forgetting.
- `feature_layer` (`str`): Name of the internal layer to attach a forward hook to. Examples: `'avgpool'` for ResNet, `'model.model.9'` for YOLO.
- `threshold` (`float`, default `0.3`): Drift score boundary:
  - Score < threshold → IDLE (highly reliable)
  - threshold < Score < 0.8 → LOCAL_ADAPTATION
  - Score > 0.8 → REQUEST_SERVER_CURE
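The triage rule above maps directly to a three-way branch. A minimal sketch (illustrative restatement of the thresholds, not the SDK's implementation):

```python
def triage(score, threshold=0.3, cure_threshold=0.8):
    """Map a composite drift score to a policy action."""
    if score < threshold:
        return "IDLE"                 # highly reliable
    if score < cure_threshold:
        return "LOCAL_ADAPTATION"     # recoverable drift, heal locally
    return "REQUEST_SERVER_CURE"      # too blind, ask the server
```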
### Core Methods

#### `fit(dataloader)`

Calibrates the Monitor layer by computing a baseline reference embedding.

- **Parameters:** `dataloader` (`torch.utils.data.DataLoader`): clean, in-distribution images
- **Returns:** `None`

#### `predict(images)` / `forward(images)`

Main inference execution. The SDK calculates a Composite Drift Score using:

- Entropy
- Cosine Shift
- Confidence

If drift is detected:

- BatchNorm layers switch to `.train()` mode
- Running statistics (μ and σ²) are recalculated
- 5 adaptation iterations are performed
- Corrected logits are returned

- **Parameters:** `images` (`torch.Tensor`), shape `(B, C, H, W)`
- **Returns:** `logits` (`torch.Tensor`)
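Two of the score's ingredients, entropy and cosine shift, are standard formulas and can be sketched in plain Python (illustrative math, not the SDK's implementation):

```python
import math

def entropy(probs):
    """Shannon entropy of a softmax distribution (high = uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def cosine_shift(baseline, current):
    """1 - cosine similarity between the baseline embedding and the
    current batch embedding (high = distribution shift)."""
    dot = sum(a * b for a, b in zip(baseline, current))
    norm = (math.sqrt(sum(a * a for a in baseline))
            * math.sqrt(sum(b * b for b in current)))
    return 1.0 - dot / norm
```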
#### `status()`

Returns the most recent decision taken by the Policy Engine.

```
{
  "action": "IDLE | LOCAL_ADAPTATION | PASSIVE_DISCOVERY | REQUEST_SERVER_CURE",
  "metrics": {
    "score": float,
    "entropy": float,
    "shift": float,
    "confidence": float
  }
}
```

## Architecture Overview

When `predict()` is called, FDAVRS executes a Teacher–Student federated learning protocol:

### 1️⃣ Monitor Layer

- PyTorch forward hooks intercept internal activations
- Distribution shift is calculated from those activations

### 2️⃣ Decision Layer

Triage logic determines whether the model is reliable enough to teach others, or whether it is blind and needs external correction.

### 3️⃣ Local Adaptation Layer

If drift is recoverable:

- Test-Time Adaptation (TTA) is triggered
- BatchNorm statistics (μ, σ²) are recalculated
- No permanent architecture changes occur

### 4️⃣ Knowledge Vault

Successful fixes are:

- Stripped down to normalization weights (`.pt`)
- Paired with a drift signature (`.json`)
- Uploaded to the Federated Server
- Aggregated using FedAvg
| text/markdown | Srivandhi | null | null | null | null | null | [] | [] | https://github.com/Srivandhi/FDAVRS | null | null | [] | [] | [] | [
"torch>=2.0.0",
"numpy>=1.24.0",
"scipy>=1.11.0"
] | [] | [] | [] | [
"Clients, https://github.com/Srivandhi/FDAVRS/tree/main/clients"
] | twine/6.2.0 CPython/3.11.4 | 2026-02-21T09:31:56.866686 | fdavrs-0.1.2.tar.gz | 9,183 | 14/61/85d6ba5b752fa3390e81fe9936862a423d8f64e511f3d1e37aed6c286ea5/fdavrs-0.1.2.tar.gz | source | sdist | null | false | ec99005f5569c23a83498389c61810e1 | fd43be9fd5344ecf27350709224ac1308cade16085245fc15620aafd2a7bf5fb | 146185d6ba5b752fa3390e81fe9936862a423d8f64e511f3d1e37aed6c286ea5 | null | [] | 224 |
2.2 | lief | 0.17.4 | Library to instrument executable formats | About
=====
The purpose of this project is to provide a cross platform library that can parse, modify and
abstract ELF, PE and MachO formats.
Main features:
* **Parsing**: LIEF can parse ELF, PE, MachO, OAT, DEX, VDEX, and ART, and provides a user-friendly API to access format internals.
* **Modify**: LIEF can modify some parts of these formats.
* **Abstract**: The three formats share common features such as sections, symbols, and an entry point; LIEF factors them into a common abstraction.
* **API**: LIEF can be used in C, C++, Python and Rust
LIEF Extended:
* DWARF/PDB Support
* Objective-C Metadata
* dyld shared cache
Checkout: https://lief.re/doc/latest/extended/intro.html for the details
Getting Started
================
.. code-block:: console
$ pip install lief
.. code-block:: python
import lief
elf = lief.ELF.parse("/bin/ls")
for section in elf.sections:
print(section.name, len(section.content))
pe = lief.PE.parse("cmd.exe")
for imp in pe.imports:
print(imp.name)
fat = lief.MachO.parse("/bin/dyld")
for macho in fat:
for sym in macho.symbols:
print(sym)
Documentation
=============
* `Main documentation <https://lief.re/doc/latest/index.html>`_
* `API <https://lief.re/doc/latest/api/python/index.html>`_
Contact
=======
* **Mail**: contact at lief.re
* **Discord**: `LIEF <https://discord.gg/jGQtyAYChJ>`_
Authors
=======
Romain Thomas `@rh0main <https://x.com/rh0main>`_
----
LIEF is provided under the `Apache 2.0 license <https://github.com/lief-project/LIEF/blob/0.15.1/LICENSE>`_
| text/x-rst | null | Romain Thomas <contact@lief.re> | null | null | Apache License 2.0 | parser, elf, pe, macho, reverse-engineering | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: C++",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://lief-project.github.io/",
"documentation, https://lief-project.github.io/doc/latest/",
"repository, https://github.com/lief-project/LIEF",
"changelog, https://lief-project.github.io/doc/latest/changelog.html",
"Funding, https://github.com/sponsors/lief-project",
"Tracker, https://github.com/lief-project/LIEF/issues"
] | twine/6.0.1 CPython/3.13.1 | 2026-02-21T09:31:52.591729 | lief-0.17.4.tar.gz | 7,174 | e3/23/84cb2be183c1d1e08923b5863e60faf98ab6da4a30802311ef8e8d741fe4/lief-0.17.4.tar.gz | source | sdist | null | false | f6bd018615d47eb8d96bea12b01023d1 | 3653120460a3bee7c648a713fbf982ff24b8de56efa305614aeff87edc0485e5 | e32384cb2be183c1d1e08923b5863e60faf98ab6da4a30802311ef8e8d741fe4 | null | [] | 11,322 |
2.4 | activitysmith | 0.1.3 | Official ActivitySmith Python SDK | # ActivitySmith Python Library
The ActivitySmith Python library provides convenient access to the ActivitySmith API from Python applications.
## Documentation
See the [API reference](https://activitysmith.com/docs/api-reference/introduction).
## Installation
This package is available on PyPI:
```sh
pip install activitysmith
```
Alternatively, install from source with:
```sh
python -m pip install .
```
## Setup
```python
import os
from activitysmith import ActivitySmith
activitysmith = ActivitySmith(
api_key=os.environ["ACTIVITYSMITH_API_KEY"],
)
```
## Usage
### Send a Push Notification
```python
response = activitysmith.notifications.send(
{
"title": "New subscription 💸",
"message": "Customer upgraded to Pro plan",
"channels": ["devs", "ops"], # Optional
}
)
print(response.success)
print(response.devices_notified)
```
### Start a Live Activity
```python
start = activitysmith.live_activities.start(
{
"content_state": {
"title": "Nightly database backup",
"subtitle": "create snapshot",
"number_of_steps": 3,
"current_step": 1,
"type": "segmented_progress",
"color": "yellow",
},
"channels": ["devs", "ops"], # Optional
}
)
activity_id = start.activity_id
```
### Update a Live Activity
```python
update = activitysmith.live_activities.update(
{
"activity_id": activity_id,
"content_state": {
"title": "Nightly database backup",
"subtitle": "upload archive",
"current_step": 2,
}
}
)
print(update.devices_notified)
```
### End a Live Activity
```python
end = activitysmith.live_activities.end(
{
"activity_id": activity_id,
"content_state": {
"title": "Nightly database backup",
"subtitle": "verify restore",
"current_step": 3,
"auto_dismiss_minutes": 2,
}
}
)
print(end.success)
```
## Error Handling
```python
try:
activitysmith.notifications.send(
{
"title": "New subscription 💸",
}
)
except Exception as err:
print("Request failed:", err)
```
## API Surface
- `activitysmith.live_activities`
- `activitysmith.notifications`
Request/response models are included and can be imported from `activitysmith_openapi.models`.
## Requirements
- Python 3.9 or newer
## License
MIT
| text/markdown | null | ActivitySmith <adam@activitysmith.com> | null | null | MIT | activitysmith, live activities, push notifications, api, sdk | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"urllib3>=1.25.3",
"python-dateutil>=2.8.2",
"pydantic>=2",
"pytest>=7; extra == \"dev\"",
"ruff>=0.5; extra == \"dev\""
] | [] | [] | [] | [
"homepage, https://activitysmith.com",
"documentation, https://activitysmith.com/docs",
"source, https://github.com/ActivitySmithHQ/activitysmith-python",
"issues, https://github.com/ActivitySmithHQ/activitysmith-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:30:57.692016 | activitysmith-0.1.3.tar.gz | 29,679 | 27/c1/38237cabb64eb6deadf9e1f5e314b4c1293685e373f4ecfd6c21f8bc3f83/activitysmith-0.1.3.tar.gz | source | sdist | null | false | e528d8e137b1330212cbbd8f27ec7ee2 | 1044b0334d89902fb9661f856ce45d075942e3eb23777007febf5c68f1298e1b | 27c138237cabb64eb6deadf9e1f5e314b4c1293685e373f4ecfd6c21f8bc3f83 | null | [
"LICENSE"
] | 226 |
2.4 | ocrmypdf | 17.3.0 | OCRmyPDF adds an OCR text layer to scanned PDF files, allowing them to be searched | <!-- SPDX-FileCopyrightText: 2014 Julien Pfefferkorn -->
<!-- SPDX-FileCopyrightText: 2015 James R. Barlow -->
<!-- SPDX-License-Identifier: CC-BY-SA-4.0 -->
<img src="docs/images/logo.svg" width="240" alt="OCRmyPDF">
[](https://github.com/ocrmypdf/OCRmyPDF/actions/workflows/build.yml) [![PyPI version][pypi]](https://pypi.org/project/ocrmypdf/) ![Homebrew version][homebrew] ![ReadTheDocs][docs] ![Python versions][pyversions]
[pypi]: https://img.shields.io/pypi/v/ocrmypdf.svg "PyPI version"
[homebrew]: https://img.shields.io/homebrew/v/ocrmypdf.svg "Homebrew version"
[docs]: https://readthedocs.org/projects/ocrmypdf/badge/?version=latest "RTD"
[pyversions]: https://img.shields.io/pypi/pyversions/ocrmypdf "Supported Python versions"
OCRmyPDF adds an OCR text layer to scanned PDF files, allowing them to be searched or copy-pasted.
```bash
ocrmypdf # it's a scriptable command line program
-l eng+fra # it supports multiple languages
--rotate-pages # it can fix pages that are misrotated
--deskew # it can deskew crooked PDFs!
--title "My PDF" # it can change output metadata
--jobs 4 # it uses multiple cores by default
--output-type pdfa # it produces PDF/A by default
input_scanned.pdf # takes PDF input (or images)
output_searchable.pdf # produces validated PDF output
```
[See the release notes for details on the latest changes](https://ocrmypdf.readthedocs.io/en/latest/release_notes.html).
## Main features
- Generates a searchable [PDF/A](https://en.wikipedia.org/?title=PDF/A) file from a regular PDF
- Places OCR text accurately below the image to ease copy / paste
- Keeps the exact resolution of the original embedded images
- When possible, inserts OCR information as a "lossless" operation without disrupting any other content
- Optimizes PDF images, often producing files smaller than the input file
- If requested, deskews and/or cleans the image before performing OCR
- Validates input and output files
- Distributes work across all available CPU cores
- Uses [Tesseract OCR](https://github.com/tesseract-ocr/tesseract) engine to recognize more than [100 languages](https://github.com/tesseract-ocr/tessdata)
- Keeps your private data private.
- Scales properly to handle files with thousands of pages.
- Battle-tested on millions of PDFs.
<img src="misc/screencast/demo.svg" alt="Demo of OCRmyPDF in a terminal session">
For details: please consult the [documentation](https://ocrmypdf.readthedocs.io/en/latest/).
## Motivation
I searched the web for a free command line tool to OCR PDF files: I found many, but none of them were really satisfying:
- Either they produced PDF files with misplaced text under the image (making copy/paste impossible)
- Or they did not handle accents and multilingual characters
- Or they changed the resolution of the embedded images
- Or they generated ridiculously large PDF files
- Or they crashed when trying to OCR
- Or they did not produce valid PDF files
- On top of that, none of them produced PDF/A files (a format dedicated to long-term storage)
...so I decided to develop my own tool.
## Installation
Linux, Windows, macOS and FreeBSD are supported. Docker images are also available for both x64 and ARM.
| Operating system | Install command |
| ----------------------------- | ------------------------------|
| Debian, Ubuntu | ``apt install ocrmypdf`` |
| Windows Subsystem for Linux | ``apt install ocrmypdf`` |
| Fedora | ``dnf install ocrmypdf`` |
| macOS (Homebrew) | ``brew install ocrmypdf`` |
| macOS (MacPorts) | ``port install ocrmypdf`` |
| macOS (nix) | ``nix-env -i ocrmypdf`` |
| LinuxBrew | ``brew install ocrmypdf`` |
| FreeBSD | ``pkg install py-ocrmypdf`` |
| OpenBSD | ``pkg_add ocrmypdf`` |
| Ubuntu Snap | ``snap install ocrmypdf`` |
For everyone else, [see our documentation](https://ocrmypdf.readthedocs.io/en/latest/installation.html) for installation steps.
## Languages
OCRmyPDF uses Tesseract for OCR, and relies on its language packs. For Linux users, you can often find packages that provide language packs:
```bash
# Debian/Ubuntu users
apt-cache search tesseract-ocr # Display a list of all Tesseract language packs
apt-get install tesseract-ocr-chi-sim # Example: Install Chinese Simplified language pack
# Arch Linux users
pacman -S tesseract-data-eng tesseract-data-deu # Example: Install the English and German language packs
# OpenBSD users
pkg_info -aQ tesseract # Display a list of all Tesseract language packs
pkg_add tesseract-cym # Example: Install the Welsh language pack
# brew macOS users
brew install tesseract-lang
# Fedora users
dnf search tesseract-langpack # Display a list of all Tesseract language packs
dnf install tesseract-langpack-ita # Example: Install the Italian language pack
```
You can then pass the `-l LANG` argument to OCRmyPDF to give a hint as to what languages it should search for. Multiple languages can be requested.
OCRmyPDF supports Tesseract 4.1.1+. It will automatically use whichever version it finds first on the `PATH` environment variable. On Windows, if `PATH` does not provide a Tesseract binary, we use the highest version number that is installed according to the Windows Registry.
## Documentation and support
Once OCRmyPDF is installed, the built-in help which explains the command syntax and options can be accessed via:
```bash
ocrmypdf --help
```
Our [documentation is served on Read the Docs](https://ocrmypdf.readthedocs.io/en/latest/index.html).
Please report issues on our [GitHub issues](https://github.com/ocrmypdf/OCRmyPDF/issues) page, and follow the issue template for quick response.
## Feature demo
```bash
# Add an OCR layer and require PDF/A
ocrmypdf --output-type pdfa input.pdf output.pdf
# Convert an image to single page PDF
ocrmypdf input.jpg output.pdf
# Add OCR to a file in place (only modifies file on success)
ocrmypdf myfile.pdf myfile.pdf
# OCR with non-English languages (look up your language's ISO 639-3 code)
ocrmypdf -l fra LeParisien.pdf LeParisien.pdf
# OCR multilingual documents
ocrmypdf -l eng+fra Bilingual-English-French.pdf Bilingual-English-French.pdf
# Deskew (straighten crooked pages)
ocrmypdf --deskew input.pdf output.pdf
```
For more features, see the [documentation](https://ocrmypdf.readthedocs.io/en/latest/index.html).
## Requirements
In addition to the required Python version, OCRmyPDF requires external program installations of Ghostscript and Tesseract OCR. OCRmyPDF is pure Python, and runs on pretty much everything: Linux, macOS, Windows and FreeBSD.
## Plugins
OCRmyPDF provides a plugin interface allowing its capabilities to be extended or replaced. Here are some plugins we are aware of:
- [OCRmyPDF-AppleOCR](https://github.com/mkyt/ocrmypdf-AppleOCR): replaces the standard Tesseract OCR engine with Apple Vision Framework. Requires macOS.
- [OCRmyPDF-EasyOCR](https://github.com/ocrmypdf/OCRmyPDF-EasyOCR): replaces the standard Tesseract OCR engine with EasyOCR, a newer OCR engine based on PyTorch. GPU strongly recommended.
- [OCRmyPDF-PaddleOCR](https://github.com/clefru/ocrmypdf-paddleocr): replaces the standard Tesseract OCR engine with PaddleOCR, a powerful GPU accelerated OCR engine.
[paperless-ngx](https://docs.paperless-ngx.com/) provides integration of OCRmyPDF into a searchable document management system.
## Press & Media
- [Going paperless with OCRmyPDF](https://medium.com/@ikirichenko/going-paperless-with-ocrmypdf-e2f36143f46a)
- [Converting a scanned document into a compressed searchable PDF with redactions](https://medium.com/@treyharris/converting-a-scanned-document-into-a-compressed-searchable-pdf-with-redactions-63f61c34fe4c)
- [c't 1-2014, page 59](https://heise.de/-2279695): Detailed presentation of OCRmyPDF v1.0 in the leading German IT magazine c't
- [heise Open Source, 09/2014: Texterkennung mit OCRmyPDF](https://heise.de/-2356670)
- [heise Durchsuchbare PDF-Dokumente mit OCRmyPDF erstellen](https://www.heise.de/ratgeber/Durchsuchbare-PDF-Dokumente-mit-OCRmyPDF-erstellen-4607592.html)
- [Excellent Utilities: OCRmyPDF](https://www.linuxlinks.com/excellent-utilities-ocrmypdf-add-ocr-text-layer-scanned-pdfs/)
- [LinuxUser Texterkennung mit OCRmyPDF und Scanbd automatisieren](https://www.linux-community.de/ausgaben/linuxuser/2021/06/texterkennung-mit-ocrmypdf-und-scanbd-automatisieren/)
- [Y Combinator discussion](https://news.ycombinator.com/item?id=32028752)
## Business enquiries
OCRmyPDF would not be the software that it is today without companies and users choosing to provide support for feature development and consulting enquiries. We are happy to discuss all enquiries, whether for extending the existing feature set, or integrating OCRmyPDF into a larger system.
## License
The OCRmyPDF software is licensed under the Mozilla Public License 2.0 (MPL-2.0). This license permits integration of OCRmyPDF with other code, including commercial and closed-source code, but asks you to publish source-level modifications you make to OCRmyPDF.
Some components of OCRmyPDF have other licenses, as indicated by standard SPDX license identifiers or the DEP5 copyright and licensing information file. Generally speaking, non-core code is licensed under MIT, and the documentation and test files are licensed under Creative Commons ShareAlike 4.0 (CC-BY-SA 4.0).
## Disclaimer
The software is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
| text/markdown | null | "James R. Barlow" <james@purplerock.ca> | null | null | null | OCR, PDF, PDF/A, optical character recognition, scanning | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Science/Research",
"Intended Audience :: System Administrators",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: POSIX :: BSD",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Image Recognition",
"Topic :: Text Processing :: Indexing",
"Topic :: Text Processing :: Linguistic"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"deprecation>=2.1.0",
"fpdf2>=2.8.0",
"img2pdf>=0.5",
"packaging>=20",
"pdfminer-six>=20220319",
"pi-heif",
"pikepdf>=10",
"pillow>=10.0.1",
"pluggy>=1",
"pydantic>=2.12.5",
"pypdfium2>=5.0.0",
"rich>=13",
"uharfbuzz>=0.53.2",
"cyclopts>=3; extra == \"watcher\"",
"python-dotenv; extra == \"watcher\"",
"watchdog>=1.0.2; extra == \"watcher\"",
"streamlit>=1.41.0; extra == \"webservice\""
] | [] | [] | [] | [
"Documentation, https://ocrmypdf.readthedocs.io/",
"Source, https://github.com/ocrmypdf/OCRmyPDF",
"Tracker, https://github.com/ocrmypdf/OCRmyPDF/issues",
"Changelog, https://github.com/ocrmypdf/OCRmyPDF/tree/main/docs/releasenotes"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:30:07.207857 | ocrmypdf-17.3.0.tar.gz | 7,378,015 | fa/fe/60bdc79529be1ad8b151d426ed2020d5ac90328c54e9ba92bd808e1535c1/ocrmypdf-17.3.0.tar.gz | source | sdist | null | false | b1929a4b4f018a959b3526708efd5ef4 | 4022f13aad3f405e330056a07aa8bd63714b48b414693831b56e2cf2c325f52d | fafe60bdc79529be1ad8b151d426ed2020d5ac90328c54e9ba92bd808e1535c1 | MPL-2.0 | [
"LICENSE"
] | 1,815 |
2.4 | sphinx-fortran-domain | 0.0.0 | A modern Sphinx domain for Fortran | # Sphinx Fortran Domain
Fortran-lang's base Sphinx domain to document Fortran projects.
> **WARNING**: This project is under construction. You can use it at this stage, but expect missing features and some rendering bugs. Your friendly feedback will be very important in getting this project into shape.
## Install
Editable install for development:
`pip install -e .`
## Build the docs
Install with documentation dependencies:
`pip install -e ".[docs]"`
Build HTML documentation:
```
cd docs
make html
```
## Enable the extension
In `conf.py`:
```python
extensions = [
"sphinx_fortran_domain",
]
# Where your Fortran sources live (directories, files, or glob patterns)
fortran_sources = [
"../src", # directory
"../example/*.f90", # glob pattern
]
# Exclude sources from parsing (directories, files, or glob patterns)
fortran_sources_exclude = [
"../example/legacy", # directory
"../example/skip_this.f90", # file
"../example/**/generated_*.f90", # glob
]
# Select a lexer (built-in: "regex")
fortran_lexer = "regex"
# Doc comment convention
# Examples: '!>' or '!!' or '!@'
fortran_doc_chars = [">", "!"]
```
## Directives and roles
Manual declarations (create targets for cross-references):
```rst
.. f:function:: add_vectors(vec1, vec2)
.. f:subroutine:: normalize_vector(vec)
```
Autodoc-style views from parsed sources:
```rst
.. f:module:: example_module
.. f:submodule:: stdlib_quadrature_trapz
.. f:program:: test_program
```
Cross-references:
```rst
See :f:mod:`example_module` and :f:subr:`normalize_vector`.
```
## Writing a lexer plugin
See the full step-by-step guide in the documentation: ``docs/api/lexers.rst``.
External packages can register a lexer at import/setup time:
```python
from sphinx_fortran_domain.lexers import register_lexer
def setup(app):
register_lexer("my-lexer", lambda: MyLexer())
```
Then use `fortran_lexer = "my-lexer"`.
## Math in doc comments
This extension parses Fortran doc comments as reStructuredText fragments, so Sphinx
roles/directives work inside docs (including math when `sphinx.ext.mathjax` is enabled).
Supported math styles:
- Recommended (reST):
```Fortran
!> .. math:: \hat{v} = \frac{\vec{v}}{|\vec{v}|}
```
Inline math also works via `:math:`:
```Fortran
!> The magnitude is :math:`|\vec{v}| = \sqrt{x^2 + y^2 + z^2}`.
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"Sphinx>=6",
"myst-parser; extra == \"docs\"",
"pydata-sphinx-theme; extra == \"docs\"",
"pytest>=7; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:29:21.326721 | sphinx_fortran_domain-0.0.0.tar.gz | 35,063 | 1a/c0/c692d58a1475380ecebba5293fac2a8e018bf0b5f849639ab91d7252a69f/sphinx_fortran_domain-0.0.0.tar.gz | source | sdist | null | false | b4c87e3c25a7cd86a3e585437b8d0656 | 76008ec8b06d14e2e520318559d966070728c0a6e644a41052096cc2c8c6c03b | 1ac0c692d58a1475380ecebba5293fac2a8e018bf0b5f849639ab91d7252a69f | null | [
"LICENSE"
] | 251 |
2.4 | axoniq | 0.2.1 | Graph-powered code intelligence engine — indexes codebases into a knowledge graph, exposed via MCP tools for AI agents and a CLI for developers. | # Axon
**Graph-powered code intelligence engine** — indexes your codebase into a knowledge graph and exposes it via MCP tools for AI agents and a CLI for developers.
```
axon analyze .
Phase 1: Walking files... 142 files found
Phase 3: Parsing code... 142/142
Phase 5: Tracing calls... 847 calls resolved
Phase 7: Analyzing types... 234 type relationships
Phase 8: Detecting communities... 8 clusters found
Phase 9: Detecting execution flows... 34 processes found
Phase 10: Finding dead code... 12 unreachable symbols
Phase 11: Analyzing git history... 18 coupled file pairs
Done in 4.2s — 623 symbols, 1,847 edges, 8 clusters, 34 flows
```
Most code intelligence tools treat your codebase as flat text. Axon builds a **structural graph** — every function, class, import, call, type reference, and execution flow becomes a node or edge in a queryable knowledge graph. AI agents using Axon don't just search for keywords; they understand how your code is connected.
---
## Why Axon?
**For AI agents (Claude Code, Cursor):**
- "What breaks if I change this function?" → blast radius via call graph + type references + git coupling
- "What code is never called?" → dead code detection with framework-aware exemptions
- "Show me the login flow end-to-end" → execution flow tracing from entry points through the call graph
- "Which files always change together?" → git history change coupling analysis
**For developers:**
- Instant answers to architectural questions without grepping through files
- Find dead code, tightly coupled files, and execution flows automatically
- Raw Cypher queries against your codebase's knowledge graph
- Watch mode that re-indexes on every save
**Zero cloud dependencies.** Everything runs locally — parsing, graph storage, embeddings, search. No API keys, no data leaving your machine.
---
## Features
### 11-Phase Analysis Pipeline
Axon doesn't just parse your code — it builds a deep structural understanding through 11 sequential analysis phases:
| Phase | What It Does |
|-------|-------------|
| **File Walking** | Walks repo respecting `.gitignore`, filters by supported languages |
| **Structure** | Creates File/Folder hierarchy with CONTAINS relationships |
| **Parsing** | tree-sitter AST extraction — functions, classes, methods, interfaces, enums, type aliases |
| **Import Resolution** | Resolves import statements to actual files (relative, absolute, bare specifiers) |
| **Call Tracing** | Maps function calls with confidence scores (1.0 = exact match, 0.5 = fuzzy) |
| **Heritage** | Tracks class inheritance (EXTENDS) and interface implementation (IMPLEMENTS) |
| **Type Analysis** | Extracts type references from parameters, return types, and variable annotations |
| **Community Detection** | Leiden algorithm clusters related symbols into functional communities |
| **Process Detection** | Framework-aware entry point detection + BFS flow tracing |
| **Dead Code Detection** | Multi-pass analysis with override, protocol, and decorator awareness |
| **Change Coupling** | Git history analysis — finds files that always change together |
### Hybrid Search (BM25 + Vector + RRF)
Three search strategies fused with Reciprocal Rank Fusion:
- **BM25 full-text search** — fast exact name and keyword matching via KuzuDB FTS
- **Semantic vector search** — conceptual queries via 384-dim embeddings (BAAI/bge-small-en-v1.5)
- **Fuzzy name search** — Levenshtein fallback for typos and partial matches
Test files are automatically down-ranked (0.5×), while source-level functions and classes are boosted (1.2×).
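To make the fusion step concrete, here is a minimal sketch of Reciprocal Rank Fusion in plain Python. The symbol names and the per-strategy rankings are illustrative, not Axon's internal API, and the sketch omits Axon's test-file down-ranking and source-level boosts:

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked result lists with Reciprocal Rank Fusion.

    Each ranking is a list of ids, best first. A result's fused score is
    the sum of 1 / (k + rank) over every list it appears in.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-strategy results for the query "validate auth"
bm25 = ["auth.login", "auth.validate", "user.save"]
vector = ["auth.validate", "session.create", "auth.login"]
fuzzy = ["auth.validate"]

print(rrf_fuse([bm25, vector, fuzzy]))
# auth.validate wins: it appears high in all three lists
```

A result that ranks moderately well in every strategy beats one that tops a single list, which is why RRF is a common default for combining lexical and semantic search.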
### Dead Code Detection
Finds unreachable symbols with intelligence — not just "zero callers" but a multi-pass analysis:
1. **Initial scan** — flags functions/methods/classes with no incoming calls
2. **Exemptions** — entry points, exports, constructors, test code, dunder methods, `__init__.py` public symbols, decorated functions, `@property` methods
3. **Override pass** — un-flags methods that override non-dead base class methods (handles dynamic dispatch)
4. **Protocol conformance** — un-flags methods on classes conforming to Protocol interfaces
5. **Protocol stubs** — un-flags all methods on Protocol classes (interface contracts)
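The first two passes can be sketched in a few lines. This is a simplified, hypothetical model — Axon's real implementation works over the graph and includes the override and Protocol passes listed above:

```python
def find_dead_symbols(calls, symbols, exempt):
    """Passes 1–2 of a dead-code scan: zero incoming calls, minus exemptions.

    calls: set of (caller, callee) edges; symbols: all known symbol ids;
    exempt: entry points, exports, constructors, tests, dunders, etc.
    """
    called = {callee for _, callee in calls}
    return {s for s in symbols if s not in called and s not in exempt}

symbols = {"main", "handle_login", "old_charge", "__init__"}
calls = {("main", "handle_login")}
exempt = {"main", "__init__"}  # entry point and constructor are never flagged
print(find_dead_symbols(calls, symbols, exempt))  # {'old_charge'}
```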
### Impact Analysis (Blast Radius)
When you're about to change a symbol, Axon traces upstream through:
- **Call graph** — every function that calls this one, recursively
- **Type references** — every function that takes, returns, or stores this type
- **Git coupling** — files that historically change alongside this one
### Community Detection
Uses the [Leiden algorithm](https://www.nature.com/articles/s41598-019-41695-z) (igraph + leidenalg) to automatically discover functional clusters in your codebase. Each community gets a cohesion score and auto-generated label based on member file paths.
### Execution Flow Tracing
Detects entry points using framework-aware patterns:
- **Python**: `@app.route`, `@router.get`, `@click.command`, `test_*` functions, `__main__` blocks
- **JavaScript/TypeScript**: Express handlers, exported functions, `handler`/`middleware` patterns
Then traces BFS execution flows from each entry point through the call graph, classifying flows as intra-community or cross-community.
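The BFS tracing step can be sketched as follows. The call graph and entry point name are hypothetical, and the depth limit is an assumption for the sketch, not Axon's configured value:

```python
from collections import deque

def trace_flow(entry, call_graph, max_depth=10):
    """BFS from an entry point through a call graph, recording visit order."""
    order, seen = [], {entry}
    queue = deque([(entry, 0)])
    while queue:
        node, depth = queue.popleft()
        order.append(node)
        if depth == max_depth:
            continue
        for callee in call_graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append((callee, depth + 1))
    return order

graph = {
    "login_route": ["validate_user", "create_session"],
    "validate_user": ["hash_password"],
}
print(trace_flow("login_route", graph))
```

Each resulting visit order is one candidate execution flow; comparing the communities of its members is what classifies it as intra- or cross-community.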
### Change Coupling (Git History)
Analyzes 6 months of git history to find hidden dependencies that static analysis misses:
```
coupling(A, B) = co_changes(A, B) / max(changes(A), changes(B))
```
Files with coupling strength ≥ 0.3 and at least 3 co-changes are linked, and coupled files are surfaced in impact analysis.
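The formula above translates directly to code. The commit counts here are made up for illustration:

```python
def coupling(co_changes, changes_a, changes_b):
    """Change-coupling strength between two files, per the formula above."""
    return co_changes / max(changes_a, changes_b)

# user.py changed in 10 commits, user_service.py in 8, together in 6
strength = coupling(6, 10, 8)
print(strength)  # 0.6 — above the documented 0.3 threshold, so linked
```

Dividing by the more frequently changed file keeps a hot file from appearing coupled to everything merely because it changes often.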
### Watch Mode
Live re-indexing powered by a Rust-based file watcher (watchfiles):
```bash
$ axon watch
Watching /Users/you/project for changes...
[10:32:15] src/auth/validate.py modified → re-indexed (0.3s)
[10:33:02] 2 files modified → re-indexed (0.5s)
```
- File-local phases (parse, imports, calls, types) run immediately on change
- Global phases (communities, processes, dead code) batch every 30 seconds
### Branch Comparison
Structural diff between branches using git worktrees (no stashing required):
```bash
$ axon diff main..feature
Symbols added (4):
+ process_payment (Function) -- src/payments/stripe.py
+ PaymentIntent (Class) -- src/payments/models.py
Symbols modified (2):
~ checkout_handler (Function) -- src/routes/checkout.py
Symbols removed (1):
- old_charge (Function) -- src/payments/legacy.py
```
---
## Supported Languages
| Language | Extensions | Parser |
|----------|-----------|--------|
| Python | `.py` | tree-sitter-python |
| TypeScript | `.ts`, `.tsx` | tree-sitter-typescript |
| JavaScript | `.js`, `.jsx`, `.mjs`, `.cjs` | tree-sitter-javascript |
---
## Installation
```bash
# With pip
pip install axoniq
# With uv (recommended)
uv add axoniq
# With Neo4j backend support
pip install axoniq[neo4j]
```
Requires **Python 3.11+**.
### From Source
```bash
git clone https://github.com/harshkedia177/axon.git
cd axon
uv sync --all-extras
uv run axon --help
```
---
## Quick Start
### 1. Index Your Codebase
```bash
cd your-project
axon analyze .
```
### 2. Query It
```bash
# Search for symbols
axon query "authentication handler"
# Get full context on a symbol
axon context validate_user
# Check blast radius before changing something
axon impact UserModel --depth 3
# Find dead code
axon dead-code
# Run a raw Cypher query
axon cypher "MATCH (n:Function) WHERE n.is_dead = true RETURN n.name, n.file_path"
```
### 3. Keep It Updated
```bash
# Watch mode — re-indexes on every save
axon watch
# Or re-analyze manually
axon analyze .
```
---
## CLI Reference
```
axon analyze [PATH] Index a repository (default: current directory)
--full Force full rebuild (skip incremental)
axon status Show index status for current repo
axon list List all indexed repositories
axon clean Delete index for current repo
--force / -f Skip confirmation prompt
axon query QUERY Hybrid search the knowledge graph
--limit / -n N Max results (default: 20)
axon context SYMBOL 360-degree view of a symbol
axon impact SYMBOL Blast radius analysis
--depth / -d N BFS traversal depth (default: 3)
axon dead-code List all detected dead code
axon cypher QUERY Execute a raw Cypher query (read-only)
axon watch Watch mode — live re-indexing on file changes
axon diff BASE..HEAD Structural branch comparison
axon setup Print MCP configuration JSON
--claude For Claude Code
--cursor For Cursor
axon mcp Start the MCP server (stdio transport)
axon serve Start the MCP server (same as axon mcp)
--watch, -w Enable live file watching with auto-reindex
axon --version Print version
```
---
## MCP Integration
Axon exposes its full intelligence as an MCP server, giving AI agents like Claude Code and Cursor deep structural understanding of your codebase.
### Setup for Claude Code
Add to your `.claude/settings.json` or project `.mcp.json`:
```json
{
"mcpServers": {
"axon": {
"command": "axon",
"args": ["serve", "--watch"]
}
}
}
```
This starts the MCP server **with live file watching** — the knowledge graph updates automatically as you edit code. To run without watching, use `"args": ["mcp"]` instead.
Or run the setup helper:
```bash
axon setup --claude
```
### Setup for Cursor
Add to your Cursor MCP settings:
```json
{
"axon": {
"command": "axon",
"args": ["serve", "--watch"]
}
}
```
Or run:
```bash
axon setup --cursor
```
### MCP Tools
Once connected, your AI agent gets access to these tools:
| Tool | Description |
|------|-------------|
| `axon_list_repos` | List all indexed repositories with stats |
| `axon_query` | Hybrid search (BM25 + vector + fuzzy) across all symbols |
| `axon_context` | 360-degree view — callers, callees, type refs, community, processes |
| `axon_impact` | Blast radius — all symbols affected by changing the target |
| `axon_dead_code` | List all unreachable symbols grouped by file |
| `axon_detect_changes` | Map a `git diff` to affected symbols in the graph |
| `axon_cypher` | Execute read-only Cypher queries against the knowledge graph |
Every tool response includes a **next-step hint** guiding the agent through a natural investigation workflow:
```
query → "Next: Use context() on a specific symbol for the full picture."
context → "Next: Use impact() if planning changes to this symbol."
impact → "Tip: Review each affected symbol before making changes."
```
### MCP Resources
| Resource URI | Description |
|-------------|-------------|
| `axon://overview` | Node and relationship counts by type |
| `axon://dead-code` | Full dead code report |
| `axon://schema` | Graph schema reference for writing Cypher queries |
---
## Knowledge Graph Model
### Nodes
| Label | Description |
|-------|-------------|
| `File` | Source file |
| `Folder` | Directory |
| `Function` | Top-level function |
| `Class` | Class definition |
| `Method` | Method within a class |
| `Interface` | Interface / Protocol definition |
| `TypeAlias` | Type alias |
| `Enum` | Enumeration |
| `Community` | Auto-detected functional cluster |
| `Process` | Detected execution flow |
### Relationships
| Type | Description | Key Properties |
|------|-------------|----------------|
| `CONTAINS` | Folder → File/Symbol hierarchy | — |
| `DEFINES` | File → Symbol it defines | — |
| `CALLS` | Symbol → Symbol it calls | `confidence` (0.0–1.0) |
| `IMPORTS` | File → File it imports from | `symbols` (names list) |
| `EXTENDS` | Class → Class it extends | — |
| `IMPLEMENTS` | Class → Interface it implements | — |
| `USES_TYPE` | Symbol → Type it references | `role` (param/return/variable) |
| `EXPORTS` | File → Symbol it exports | — |
| `MEMBER_OF` | Symbol → Community it belongs to | — |
| `STEP_IN_PROCESS` | Symbol → Process it participates in | `step_number` |
| `COUPLED_WITH` | File → File that co-changes with it | `strength`, `co_changes` |
### Node ID Format
```
{label}:{relative_path}:{symbol_name}
Examples:
function:src/auth/validate.py:validate_user
class:src/models/user.py:User
method:src/models/user.py:User.save
```
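Building an id in this format is a one-liner; a sketch, useful when constructing ids for Cypher queries by hand (the helper name is illustrative, not part of Axon's public API):

```python
def node_id(label, relative_path, symbol_name):
    """Build a node id in the documented {label}:{path}:{symbol} format."""
    return f"{label}:{relative_path}:{symbol_name}"

print(node_id("method", "src/models/user.py", "User.save"))
# method:src/models/user.py:User.save
```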
---
## Architecture
```
Source Code (.py, .ts, .js, .tsx, .jsx)
│
▼
┌──────────────────────────────────────────────┐
│ Ingestion Pipeline (11 phases) │
│ │
│ walk → structure → parse → imports → calls │
│ → heritage → types → communities → processes │
│ → dead_code → coupling │
└──────────────────────┬───────────────────────┘
│
▼
┌─────────────────┐
│ KnowledgeGraph │ (in-memory during build)
└────────┬────────┘
│
┌────────────┼────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ KuzuDB │ │ FTS │ │ Vector │
│ (graph) │ │ (BM25) │ │ (HNSW) │
└────┬────┘ └────┬────┘ └────┬────┘
└────────────┼────────────┘
│
StorageBackend Protocol
│
┌────────┴────────┐
▼ ▼
┌──────────┐ ┌──────────┐
│ MCP │ │ CLI │
│ Server │ │ (Typer) │
│ (stdio) │ │ │
└────┬─────┘ └────┬─────┘
│ │
Claude Code Terminal
/ Cursor (developer)
```
### Tech Stack
| Layer | Technology | Purpose |
|-------|-----------|---------|
| Parsing | tree-sitter | Language-agnostic AST extraction |
| Graph Storage | KuzuDB | Embedded graph database with Cypher, FTS, and vector support |
| Graph Algorithms | igraph + leidenalg | Leiden community detection |
| Embeddings | fastembed | ONNX-based 384-dim vectors (~100MB, no PyTorch) |
| MCP Protocol | mcp SDK (FastMCP) | AI agent communication via stdio |
| CLI | Typer + Rich | Terminal interface with progress bars |
| File Watching | watchfiles | Rust-based file system watcher |
| Gitignore | pathspec | Full `.gitignore` pattern matching |
### Storage
Everything lives locally in your repo:
```
your-project/
└── .axon/
├── kuzu/ # KuzuDB graph database (graph + FTS + vectors)
└── meta.json # Index metadata and stats
```
Add `.axon/` to your `.gitignore`.
The storage layer is abstracted behind a `StorageBackend` Protocol — KuzuDB is the default, with an optional Neo4j backend available via `pip install axoniq[neo4j]`.
---
## Example Workflows
### "I need to refactor the User class — what breaks?"
```bash
# See everything connected to User
axon context User
# Check blast radius
axon impact User --depth 3
# Find which files always change with user.py
axon cypher "MATCH (a:File)-[r:CodeRelation]->(b:File) WHERE a.name = 'user.py' AND r.rel_type = 'coupled_with' RETURN b.name, r.strength ORDER BY r.strength DESC"
```
### "Is there dead code we should clean up?"
```bash
axon dead-code
```
### "What are the main execution flows in our app?"
```bash
axon cypher "MATCH (p:Process) RETURN p.name, p.properties ORDER BY p.name"
```
### "Which parts of the codebase are most tightly coupled?"
```bash
axon cypher "MATCH (a:File)-[r:CodeRelation]->(b:File) WHERE r.rel_type = 'coupled_with' RETURN a.name, b.name, r.strength ORDER BY r.strength DESC LIMIT 20"
```
---
## How It Compares
| Capability | grep/ripgrep | LSP | Axon |
|-----------|-------------|-----|------|
| Text search | Yes | No | Yes (hybrid BM25 + vector) |
| Go to definition | No | Yes | Yes (graph traversal) |
| Find all callers | No | Partial | Yes (full call graph with confidence) |
| Type relationships | No | Yes | Yes (param/return/variable roles) |
| Dead code detection | No | No | Yes (multi-pass, framework-aware) |
| Execution flow tracing | No | No | Yes (entry point → flow) |
| Community detection | No | No | Yes (Leiden algorithm) |
| Change coupling (git) | No | No | Yes (6-month co-change analysis) |
| Impact analysis | No | No | Yes (calls + types + git coupling) |
| AI agent integration | No | Partial | Yes (full MCP server) |
| Structural branch diff | No | No | Yes (node/edge level) |
| Watch mode | No | Yes | Yes (Rust-based, 500ms debounce) |
| Works offline | Yes | Yes | Yes |
---
## Development
```bash
git clone https://github.com/harshkedia177/axon.git
cd axon
uv sync --all-extras
# Run tests
uv run pytest
# Lint
uv run ruff check src/
# Run from source
uv run axon --help
```
---
## License
MIT
---
Built by [@harshkedia177](https://github.com/harshkedia177)
| text/markdown | harshkedia177 | harshkedia177 <harshkedia717@gmail.com> | null | null | null | code-intelligence, knowledge-graph, mcp, tree-sitter, static-analysis, dead-code, claude-code | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Code Generators"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"typer>=0.15.0",
"rich>=13.0.0",
"tree-sitter>=0.25.0",
"tree-sitter-python>=0.23.0",
"tree-sitter-javascript>=0.23.0",
"tree-sitter-typescript>=0.23.0",
"kuzu>=0.11.0",
"igraph>=1.0.0",
"leidenalg>=0.11.0",
"fastembed>=0.7.0",
"mcp>=1.0.0",
"watchfiles>=1.0.0",
"pathspec>=1.0.4",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"ruff>=0.9.0; extra == \"dev\"",
"neo4j>=5.0.0; extra == \"neo4j\""
] | [] | [] | [] | [
"Homepage, https://github.com/harshkedia177/axon",
"Repository, https://github.com/harshkedia177/axon",
"Issues, https://github.com/harshkedia177/axon/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T09:28:59.105806 | axoniq-0.2.1.tar.gz | 67,305 | e8/ba/e283f6abab3e143420b4971a0f4eb974fe07e7399e0f3b7f795b04290650/axoniq-0.2.1.tar.gz | source | sdist | null | false | 47c58cd6d3f4d38ca2d338f27a2f4684 | df994c6dda7776302e50680f5b3d1f3d62b8c072510c985f31f4c90cb9109b6d | e8bae283f6abab3e143420b4971a0f4eb974fe07e7399e0f3b7f795b04290650 | MIT | [] | 222 |
2.4 | mcp-watchdog | 0.1.1 | MCP security proxy that detects and blocks all known MCP attack classes | # mcp-watchdog
[](https://github.com/bountyyfi/mcp-watchdog/actions/workflows/ci.yml)
[](https://pypi.org/project/mcp-watchdog/)
<!-- mcp-name: io.github.bountyyfi/mcp-watchdog -->
MCP security proxy that sits between AI coding assistants and MCP servers, detecting and blocking all known MCP attack classes.
Catches **Rug Pulls**, **Tool Poisoning**, **Tool Shadowing**, **Name Squatting**, **Parameter Injection**, **SSRF**, **Command Injection**, **SQL Injection**, **Reverse Shell**, **Supply Chain Impersonation**, **Token Leakage**, **OAuth Confused Deputy**, **Session Smuggling**, **Context Leakage**, **Email Header Injection**, **False-Error Escalation**, **Preference Manipulation**, **ANSI Escape Injection**, **MCP Parasite**, **Thanatos** (all 4 layers), and **SANDWORM_MODE**-style prompt injection - before any of it reaches your AI assistant.
## Why this exists
MCP (Model Context Protocol) servers have full access to your AI assistant's context. A malicious or compromised server can:
- **Inject hidden instructions** into tool descriptions (`<IMPORTANT>` blocks telling the AI to exfiltrate credentials)
- **Silently redefine tools** after initial approval (Rug Pull attacks)
- **Shadow trusted tools** by injecting cross-server override instructions in tool descriptions (100% ASR on Claude Desktop)
- **Squat tool names** by registering duplicate tool names from different servers
- **Steal system prompts and conversation history** via parameter injection (HiddenLayer attack)
- **Access cloud metadata** via SSRF through URI parameters (MCP fURI, 36.7% of servers vulnerable)
- **Execute arbitrary commands** via shell metacharacters in tool arguments
- **Inject SQL** via tool arguments to SQLite/database MCP servers (Trend Micro disclosure)
- **Spawn reverse shells** to C2 servers (JFrog found 3 PyPI + npm packages with identical reverse shell payloads)
- **Impersonate legitimate packages** via typosquatting (fake Postmark MCP server - 1,643 downloads)
- **Leak API keys and tokens** (GitHub PATs, AWS keys, Slack tokens, JWTs) in responses
- **Hijack OAuth flows** via malformed authorization endpoints (CVE-2025-6514, 437K+ dev environments)
- **Inject messages into sessions** via agent session smuggling (A2A attacks)
- **Silently BCC emails** to attacker addresses via email header injection (postmark-mcp incident)
- **Trigger privilege escalation** via fake error messages designed to manipulate AI into granting elevated access
- **Manipulate tool selection** via persuasive language in descriptions biasing which tools the AI chooses
- **Hide instructions** via ANSI escape sequences and bidirectional text overrides invisible in terminal UIs
- **Profile your behavior** by collecting commit timestamps, deploy windows, and activity patterns
- **Encode payloads steganographically** inside normal-looking JSON responses
- **Propagate across servers** - output from Server A influences calls to Server B
- **Persist across sessions** by writing state to project files outside declared scope
- **Escape filesystem sandboxes** via symlink attacks bypassing path restrictions
- **Exfiltrate data via URL params** - sensitive tokens embedded in `https://evil.com/steal?data=SECRET`
- **Poison schema fields** - injection in parameter defaults, enums, and nested schema values (CyberArk FSP)
- **Flood approval requests** to cause consent fatigue, then slip in destructive actions
- **Replay OAuth tokens** across servers via audience mismatch (RFC 8707 violation)
- **Inject fake notifications** (`tools/list_changed`) to trigger tool re-fetching for rug pulls
mcp-watchdog intercepts all JSON-RPC traffic and applies multi-layer detection before any data reaches your AI model.
## What it catches
| Attack Class | Detection Layer | Rule | Severity |
|---|---|---|---|
| SANDWORM_MODE `<IMPORTANT>` injection | SMAC-L3 | SMAC-5 | Critical |
| HTML comment injection | SMAC-L3 | SMAC-1 | High |
| Zero-width unicode steganography | SMAC-L3 | SMAC-1 | High |
| ANSI escape sequence injection | SMAC-L3 | SMAC-1 | High |
| Bidirectional text overrides (LRE/RLO/LRI) | SMAC-L3 | SMAC-1 | High |
| Markdown reference link exfiltration | SMAC-L3 | SMAC-2 | High |
| Credential-seeking patterns | SMAC-L3 | SMAC-5 | Critical |
| Token/secret leakage (GitHub, AWS, Slack, JWT, OpenAI) | SMAC-L3 | SMAC-6 | Critical |
| Rug Pull (silent tool redefinition) | Tool Registry | RUG-PULL | Critical |
| Tool removal after establishing trust | Tool Registry | RUG-PULL | High |
| Tool Shadowing (cross-server desc pollution) | Tool Shadow | SHADOW | Critical |
| Tool Name Squatting (duplicate names across servers) | Tool Shadow | SHADOW | Critical |
| Preference Manipulation (biasing tool selection) | Tool Shadow | SHADOW | High |
| Cross-server tool reference in descriptions | Tool Shadow | SHADOW | High |
| Parameter injection (`system_prompt`, `conversation_history`) | Param Scanner | PARAM-INJECT | Critical |
| Suspicious parameter patterns | Param Scanner | PARAM-INJECT | High |
| Full Schema Poisoning (defaults, enums, nested fields) | Param Scanner | PARAM-INJECT | Critical |
| SSRF to cloud metadata (AWS/GCP/Azure IMDS) | URL Filter | SSRF | Critical |
| SSRF to localhost / internal networks | URL Filter | SSRF | High |
| Data exfiltration via URL parameters (Slack CVE-2025-34072) | URL Filter | EXFIL | Critical |
| Shell metacharacter injection | Input Sanitizer | CMD-INJECT | Critical |
| Command injection patterns | Input Sanitizer | CMD-INJECT | Critical |
| Path traversal attacks | Input Sanitizer | CMD-INJECT | High |
| SQL injection (UNION SELECT, DROP TABLE, etc.) | Input Sanitizer | SQL-INJECT | Critical |
| Reverse shell patterns (bash /dev/tcp, nc -e, mkfifo) | Input Sanitizer | REVERSE-SHELL | Critical |
| Email header injection (BCC exfiltration) | Tool Shadow | EMAIL-INJECT | Critical |
| False-error escalation (fake errors triggering privilege escalation) | Tool Shadow | ESCALATION | High |
| Supply chain typosquatting | Registry Checker | SUPPLY-CHAIN | Critical |
| Known malicious server patterns | Registry Checker | SUPPLY-CHAIN | Critical |
| OAuth authorization endpoint injection (CVE-2025-6514) | OAuth Guard | OAUTH | Critical |
| Excessive OAuth scope requests | OAuth Guard | OAUTH | High |
| Suspicious OAuth redirect URIs | OAuth Guard | OAUTH | Critical |
| Token audience mismatch / replay (RFC 8707) | OAuth Guard | TOKEN-REPLAY | Critical |
| MCP sampling exploitation | Proxy | SAMPLING | High |
| Session smuggling (orphaned/injected responses) | Flow Tracker | SESSION | Critical |
| Cross-server data propagation | Flow Tracker | CROSS-SERVER | High |
| Context leakage between servers | Proxy | CONTEXT-LEAK | High |
| Consent fatigue / approval flooding | Rate Limiter | RATE-LIMIT | High |
| Burst flooding (rapid-fire tool calls) | Rate Limiter | RATE-LIMIT | Critical |
| Notification event injection (tools/list_changed) | Rate Limiter | NOTIF-INJECT | Critical |
| Behavioral fingerprinting | Behavioral Monitor | DRIFT | High |
| Scope creep (credential field access) | Behavioral Monitor | DRIFT | Critical |
| Phase transitions (sudden behavior change) | Behavioral Monitor | DRIFT | Critical |
| Steganographic C2 payloads | Entropy Analyzer | ENTROPY | Medium |
| Hidden instructions in tool responses | Entropy + Semantic | ENTROPY | High |
| Structural anomalies (unusual JSON depth) | Entropy Analyzer | ENTROPY | Low |
| Out-of-scope filesystem writes | Scope Enforcer | SCOPE-L4 | Critical |
| MCP config file modification | Scope Enforcer | SCOPE-L4 | Critical |
| Symlink escape attacks | Scope Enforcer | SCOPE-L4 | Critical |
## SMAC-L3 compliance
mcp-watchdog implements the SMAC (Structured MCP Audit Controls) Level 3 preprocessing standard:
- **SMAC-1**: Strip HTML comments, zero-width unicode, ANSI escape sequences, and bidirectional text overrides
- **SMAC-2**: Strip markdown reference links used for data exfiltration
- **SMAC-4**: Log all violations with content hashes and timestamps
- **SMAC-5**: Detect and strip `<IMPORTANT>` instruction blocks and credential-seeking patterns
- **SMAC-6**: Detect and redact leaked tokens/secrets (GitHub PATs, AWS keys, Slack tokens, JWTs, OpenAI keys)
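The stripping rules above amount to regex passes over every tool description and response. A minimal sketch (the patterns below are illustrative, not mcp-watchdog's actual rule set):

```python
import re

# Illustrative patterns; the real SMAC rule set is broader.
HTML_COMMENT = re.compile(r"<!--.*?-->", re.DOTALL)
IMPORTANT_BLOCK = re.compile(r"<IMPORTANT>.*?</IMPORTANT>", re.DOTALL | re.IGNORECASE)
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")


def smac_strip(text: str) -> str:
    """Apply SMAC-1/SMAC-5-style stripping to a tool description or response."""
    for pattern in (HTML_COMMENT, IMPORTANT_BLOCK, ANSI_ESCAPE, ZERO_WIDTH):
        text = pattern.sub("", text)
    return text
```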
## Install
```bash
git clone https://github.com/bountyyfi/mcp-watchdog.git
cd mcp-watchdog
pip install -e ".[dev]"
```
With optional dependencies:
```bash
pip install -e ".[all]" # Includes anthropic SDK + watchdog filesystem monitoring
pip install -e ".[semantic]" # Just the LLM semantic classifier
pip install -e ".[filesystem]" # Just filesystem monitoring
```
## Usage
Wrap any MCP server command with `mcp-watchdog --verbose --`. The original server command goes after `--`:
```bash
# Proxy mode — wrap an upstream MCP server:
mcp-watchdog --verbose -- npx -y @modelcontextprotocol/server-filesystem /tmp
# Standalone scanner — pipe MCP messages through for testing:
echo '{"jsonrpc":"2.0","method":"tools/list"}' | mcp-watchdog
```
## Configuration
Replace your existing MCP server entry with mcp-watchdog + the original command as args after `--`.
### Claude Desktop
```json
{
"mcpServers": {
"filesystem-watchdog": {
"command": "mcp-watchdog",
"args": ["--verbose", "--", "npx", "-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"],
"env": {}
}
}
}
```
### Cursor / Windsurf
Same pattern — see `configs/` for IDE-specific examples.
## Live demo
See every detection layer fire in real time:
```bash
python demo.py
```
The demo starts a real proxy wrapping a fake MCP server, sends clean traffic and 7 different attack types through it, and shows what gets caught vs what passes through.
## Detection layers
### Layer 0: SMAC-L3 Preprocessing
Static pattern matching applied to every tool response. Strips injection patterns, zero-width characters, ANSI escape sequences, bidirectional text overrides, hidden instructions, and redacts leaked tokens/secrets before they reach the AI model.
### Layer 1: Behavioral Drift Detection
Monitors MCP server behavior over time. Detects scope creep, behavioral fingerprinting, and phase transitions after establishing a baseline.
### Layer 2: Entropy + Semantic Analysis
Shannon entropy analysis detects base64-encoded payloads, instruction-like language, and structural anomalies. Optional LLM semantic classifier (Claude Haiku) catches steganographic payloads that are statistically normal but semantically malicious.
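Shannon entropy over a response body is cheap to compute; encoded or compressed payloads score noticeably higher bits-per-character than natural language. A sketch (scoring thresholds are illustrative):

```python
import math
from collections import Counter


def shannon_entropy(data: str) -> float:
    """Bits per character; high values suggest base64/compressed payloads."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```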
### Layer 3: Cross-Server Flow Tracking + Session Integrity
Tracks tokens across servers to detect cross-server propagation. Monitors request/response message sequences to detect session smuggling and injected responses.
### Layer 4: Filesystem Scope Enforcement
Blocks writes to `.git/config`, `.ssh/`, `.aws/`, MCP config files via inotify/FSEvents monitoring. Resolves symlinks to prevent sandbox escape attacks.
### Layer 5: Tool Integrity + Shadow Detection
Hashes every tool definition on first load. Detects rug pulls (silent redefinition), tool removal, and schema changes. Scans parameter names for injection patterns that leak system prompts and conversation history. Detects tool shadowing (cross-server description pollution), name squatting (duplicate tool names), and preference manipulation (persuasive language biasing tool selection).
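Rug-pull detection reduces to comparing a canonical hash of each tool definition against the hash recorded on first load. An illustrative sketch (not the actual registry implementation):

```python
import hashlib
import json


def tool_fingerprint(tool: dict) -> str:
    """Stable digest of a tool definition; key order must not change the hash."""
    canon = json.dumps(tool, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode()).hexdigest()


class ToolRegistry:
    """Sketch of rug-pull detection: alert when a known tool's hash changes."""

    def __init__(self) -> None:
        self.seen: dict[str, str] = {}

    def check(self, tool: dict) -> str:
        name, digest = tool["name"], tool_fingerprint(tool)
        prev = self.seen.get(name)
        self.seen[name] = digest
        if prev is None:
            return "new"
        return "ok" if prev == digest else "RUG-PULL"
```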
### Layer 6: Network Security + Injection Prevention
SSRF protection blocks requests to cloud metadata endpoints (AWS IMDS, GCP, Azure), localhost, and internal networks. Command injection scanner catches shell metacharacters and injection patterns in tool arguments. SQL injection scanner detects UNION SELECT, DROP TABLE, and boolean-based injection. Reverse shell detector catches bash /dev/tcp, nc -e, mkfifo, and Python socket/subprocess patterns.
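The SSRF check boils down to extracting the URL's host and rejecting metadata endpoints, loopback, link-local, and private ranges. A standard-library sketch (the blocklist contents are illustrative):

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative blocklist; the real filter covers more endpoints.
METADATA_HOSTS = {"169.254.169.254", "metadata.google.internal"}


def is_ssrf_target(url: str) -> bool:
    """True if the URL points at cloud metadata, localhost, or internal nets."""
    host = urlparse(url).hostname or ""
    if host in METADATA_HOSTS:
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return host == "localhost"  # non-IP hostnames: only block localhost here
    return ip.is_private or ip.is_loopback or ip.is_link_local
```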
### Layer 7: Supply Chain + Auth + Email
Typosquatting detection via Levenshtein distance against known-good server registry. OAuth flow validation catches malformed authorization endpoints (CVE-2025-6514), excessive scopes, and suspicious redirects. Token audience validation prevents replay attacks across servers (RFC 8707). Email header injection detector catches BCC exfiltration attacks (postmark-mcp style).
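Typosquat detection flags names within a small edit distance of a known-good name while not being an exact match. A sketch (the registry contents are illustrative):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


# Illustrative registry; the real known-good list is maintained separately.
KNOWN_GOOD = {"postmark-mcp", "server-filesystem", "server-github"}


def typosquat_suspects(name: str, max_dist: int = 2) -> list[str]:
    """Known-good names this server name is suspiciously close to."""
    return [g for g in KNOWN_GOOD if 0 < levenshtein(name, g) <= max_dist]
```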
### Layer 8: Escalation + Response Integrity
False-error escalation detector catches fake error messages designed to trick AI into granting elevated access. Response content is scanned for patterns like "permission denied, need admin access" that manipulate the AI's decision-making.
### Layer 9: Rate Limiting + Notification Guard
Consent fatigue protection monitors tool call frequency per server. Detects both sustained flooding and burst patterns designed to desensitize user approval. Notification event injection detector catches rapid `notifications/tools/list_changed` events used to trigger rug pull re-fetches.
## Running tests
```bash
# Full suite
pytest tests/ -v
# E2E only (starts real proxy subprocess)
pytest tests/test_e2e_proxy.py -v
# Unit/integration only
pytest tests/ -v --ignore=tests/test_e2e_proxy.py
```
158+ tests across unit, integration, and end-to-end suites.
**Unit tests** test each detection module in isolation. **Integration tests** test the `MCPWatchdogProxy` class across multi-server sequences. **End-to-end tests** start the actual proxy binary as a subprocess, connect it to a fake MCP server, and push real JSON-RPC traffic through stdin/stdout.
## Architecture
```
AI Assistant <-> mcp-watchdog proxy <-> MCP Server(s)
|
|-- SMAC-L3 preprocessor (token redaction + ANSI stripping)
|-- Entropy analyzer
|-- Behavioral monitor
|-- Flow tracker + session integrity
|-- Tool registry (rug pull detection)
|-- Tool shadow detector (shadowing + squatting)
|-- Parameter scanner
|-- URL filter (SSRF)
|-- Input sanitizer (cmd + SQL + reverse shell)
|-- Registry checker (supply chain)
|-- OAuth guard (+ token replay detection)
|-- Rate limiter (consent fatigue + notification injection)
|-- Email injection detector
|-- Escalation detector
|-- Semantic classifier (optional)
+-- Scope enforcer (filesystem + symlink)
```
mcp-watchdog is a transparent JSON-RPC proxy. Clean responses pass through unmodified; only malicious content is stripped, and alerts are raised.
## License
MIT
## Credits
Open source by [Bountyy Oy](https://github.com/bountyyfi).
Research references:
- [Bountyy Oy - SMAC: Structured MCP Audit Controls](https://github.com/bountyyfi/invisible-prompt-injection/blob/main/SMAC.md)
- [Bountyy Oy - Thanatos MCP Attack Framework](https://github.com/bountyyfi/thanatos-mcp)
- [Bountyy Oy - ProjectMemory: MCP Parasite PoC](https://github.com/bountyyfi/ProjectMemory)
- [Invariant Labs - Tool Poisoning & Rug Pull Attacks](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks)
- [HiddenLayer - Parameter Injection](https://hiddenlayer.com/innovation-hub/exploiting-mcp-tool-parameters/)
- [BlueRock - MCP fURI SSRF](https://www.bluerock.io/post/mcp-furi-microsoft-markitdown-vulnerabilities)
- [Unit 42 - MCP Sampling Attacks](https://unit42.paloaltonetworks.com/model-context-protocol-attack-vectors/)
- [OWASP MCP Top 10](https://owasp.org/www-project-mcp-top-10/)
- [Docker - MCP Supply Chain Horror Stories](https://www.docker.com/blog/mcp-horror-stories-the-supply-chain-attack/)
- [Elastic Security Labs - MCP Attack Vectors](https://www.elastic.co/security-labs/mcp-tools-attack-defense-recommendations)
- [Pillar Security - MCP Risks](https://www.pillar.security/blog/the-security-risks-of-model-context-protocol-mcp)
- [Trail of Bits - Line Jumping Attack](https://blog.trailofbits.com/2025/04/21/jumping-the-line-how-mcp-servers-can-attack-you-before-you-ever-use-them/)
- [Trail of Bits - mcp-context-protector](https://blog.trailofbits.com/2025/07/28/we-built-the-security-layer-mcp-always-needed/)
- [JFrog - Malicious MCP PyPI Reverse Shells](https://research.jfrog.com/post/3-malicious-mcps-pypi-reverse-shell/)
- [Snyk - Malicious postmark-mcp on npm](https://snyk.io/blog/malicious-mcp-server-on-npm-postmark-mcp-harvests-emails/)
- [Noma Security - Unicode Exploits in MCP](https://noma.security/blog/invisible-mcp-vulnerabilities-risks-exploits-in-the-ai-supply-chain/)
- [CoSAI - MCP Security White Paper](https://www.coalitionforsecureai.org/securing-the-ai-agent-revolution-a-practical-guide-to-mcp-security/)
- [MCPSecBench - Security Benchmark](https://arxiv.org/abs/2508.13220)
- [MCP-Guard - Defense Framework](https://arxiv.org/abs/2508.10991)
- [Breaking the Protocol - MCPSec](https://arxiv.org/abs/2601.17549)
- [CVE-2026-0755 - Gemini MCP Tool Command Injection](https://cybersecuritynews.com/gemini-mcp-tool-0-day-vulnerability/)
- [Invariant Labs - WhatsApp MCP Exfiltration](https://invariantlabs.ai/blog/whatsapp-mcp-exploited)
| text/markdown | Bountyy Oy | null | null | null | null | mcp, security, proxy, model-context-protocol, ai-security, prompt-injection, smac, tool-poisoning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"websockets>=12.0",
"rich>=13.0",
"fastapi>=0.110.0",
"uvicorn>=0.27.0",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"anthropic>=0.40.0; extra == \"semantic\"",
"watchdog>=4.0; extra == \"filesystem\"",
"anthropic>=0.40.0; extra == \"all\"",
"watchdog>=4.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/bountyyfi/mcp-watchdog",
"Repository, https://github.com/bountyyfi/mcp-watchdog",
"Issues, https://github.com/bountyyfi/mcp-watchdog/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:28:04.503309 | mcp_watchdog-0.1.1.tar.gz | 55,985 | 60/ed/28b82b64d223b97e6de8e0e2ffe02e5e418a043dd35410c3fa7da62fefc2/mcp_watchdog-0.1.1.tar.gz | source | sdist | null | false | 1c62e20e15b77c151023e1b6b0eb988c | 8b6860c5cb4d2210fff296f824cd605de18553b74bbf5b001dc090fbe87f57a1 | 60ed28b82b64d223b97e6de8e0e2ffe02e5e418a043dd35410c3fa7da62fefc2 | MIT | [] | 252 |
2.4 | plum-dispatch | 2.7.1 | Multiple dispatch in Python | # [Plum: Multiple Dispatch in Python](https://github.com/beartype/plum)
[](https://zenodo.org/badge/latestdoi/110279931)
[](https://github.com/beartype/plum/actions/workflows/ci.yml)
[](https://coveralls.io/github/beartype/plum?branch=master)
[](https://beartype.github.io/plum)
[](https://github.com/psf/black)
Everybody likes multiple dispatch, just like everybody likes plums.
The design philosophy of Plum is to provide an implementation of multiple dispatch that is Pythonic, yet close to how [Julia](http://julialang.org/) does it.
[See here for a comparison between Plum, `multipledispatch`, and `multimethod`.](https://beartype.github.io/plum/comparison.html)
*Note:*
Plum 2 is now powered by [Beartype](https://github.com/beartype/beartype)!
If you notice any issues with the new release, please open an issue.
# Installation
Plum requires Python 3.10 or higher.
```bash
pip install plum-dispatch
```
# [Documentation](https://beartype.github.io/plum)
See [here](https://beartype.github.io/plum).
# What's This?
Plum brings your type annotations to life:
```python
from numbers import Number
from plum import dispatch
@dispatch
def f(x: str):
return "This is a string!"
@dispatch
def f(x: int):
return "This is an integer!"
@dispatch
def f(x: Number):
return "This is a number, but I don't know which type."
```
```python
>>> f("1")
'This is a string!'
>>> f(1)
'This is an integer!'
>>> f(1.0)
"This is a number, but I don't know which type."
>>> f(object())
NotFoundLookupError: `f(<object object at 0x7fd3b01cd330>)` could not be resolved.
Closest candidates are the following:
f(x: str)
<function f at 0x7fd400644ee0> @ /<ipython-input-2-c9f6cdbea9f3>:6
f(x: int)
<function f at 0x7fd3a0235ca0> @ /<ipython-input-2-c9f6cdbea9f3>:11
f(x: numbers.Number)
<function f at 0x7fd3a0235d30> @ /<ipython-input-2-c9f6cdbea9f3>:16
```
> [!IMPORTANT]
> Dispatch, as implemented by Plum, is based on the _positional_ arguments to a function.
> Keyword arguments are not used in the decision making for which method to call.
> In particular, this means that _positional arguments without a default value must
> always be given as positional arguments_!
>
> Example:
> ```python
> from plum import dispatch
>
> @dispatch
> def f(x: int):
> return x
>
> >>> f(1) # OK
> 1
>
> >>> try: f(x=1) # Not OK
> ... except Exception as e: print(f"{type(e).__name__}: {e}")
> NotFoundLookupError: `f()` could not be resolved...
> ```
This also works for multiple arguments, enabling some neat design patterns:
```python
from numbers import Number, Real, Rational
from plum import dispatch
@dispatch
def multiply(x: Number, y: Number):
return "Performing fallback implementation of multiplication..."
@dispatch
def multiply(x: Real, y: Real):
return "Performing specialised implementation for reals..."
@dispatch
def multiply(x: Rational, y: Rational):
return "Performing specialised implementation for rationals..."
```
```python
>>> multiply(1, 1)
'Performing specialised implementation for rationals...'
>>> multiply(1.0, 1.0)
'Performing specialised implementation for reals...'
>>> multiply(1j, 1j)
'Performing fallback implementation of multiplication...'
>>> multiply(1, 1.0) # For mixed types, it automatically chooses the right optimisation!
'Performing specialised implementation for reals...'
```
# Projects Using Plum
The following projects are using Plum to do multiple dispatch!
Would you like to add your project here?
Please feel free to open a PR to add it to the list!
- [Coordinax](https://github.com/GalacticDynamics/coordinax) implements coordinates in JAX.
- [`fasttransform`](https://github.com/AnswerDotAI/fasttransform) provides the main building block of data pipelines in `fastai`.
- [GPAR](https://github.com/wesselb/gpar) is an implementation of the [Gaussian Process Autoregressive Model](https://arxiv.org/abs/1802.07182).
- [GPCM](https://github.com/wesselb/gpcm) is an implementation of various [Gaussian Process Convolution Models](https://arxiv.org/abs/2203.06997).
- [Galax](https://github.com/GalacticDynamics/galax) does galactic and gravitational dynamics.
- [Geometric Kernels](https://github.com/GPflow/GeometricKernels) implements kernels on non-Euclidean spaces, such as Riemannian manifolds, graphs, and meshes.
- [LAB](https://github.com/wesselb/lab) uses Plum to provide backend-agnostic linear algebra (something that works with PyTorch/TF/JAX/etc).
- [MLKernels](https://github.com/wesselb/mlkernels) implements standard kernels.
- [MMEval](https://github.com/open-mmlab/mmeval) is a unified evaluation library for multiple machine learning libraries.
- [Matrix](https://github.com/wesselb/matrix) extends LAB and implements structured matrix types, such as low-rank matrices and Kronecker products.
- [NetKet](https://github.com/netket/netket), a library for machine learning with JAX/Flax targeted at quantum physics, uses Plum extensively to pick the right, efficient implementation for a large combination of objects that interact.
- [NeuralProcesses](https://github.com/wesselb/neuralprocesses) is a framework for composing Neural Processes.
- [OILMM](https://github.com/wesselb/oilmm) is an implementation of the [Orthogonal Linear Mixing Model](https://arxiv.org/abs/1911.06287).
- [PySAGES](https://github.com/SSAGESLabs/PySAGES) is a suite for advanced general ensemble simulations.
- [Quax](https://github.com/patrick-kidger/quax) implements multiple dispatch over abstract array types in JAX.
- [Unxt](https://github.com/GalacticDynamics/unxt) implements unitful quantities in JAX.
- [Varz](https://github.com/wesselb/varz) uses Plum to provide backend-agnostic tools for non-linear optimisation.
[See the docs for a comparison of Plum to other implementations of multiple dispatch.](https://beartype.github.io/plum/comparison.html)
| text/markdown | null | Wessel Bruinsma <wessel.p.bruinsma@gmail.com> | null | null | MIT | multiple dispatch | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"beartype>=0.16.2",
"rich>=10.0",
"typing-extensions>=4.9.0"
] | [] | [] | [] | [
"repository, https://github.com/beartype/plum",
"documentation, https://beartype.github.io/plum"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:27:49.964501 | plum_dispatch-2.7.1.tar.gz | 242,770 | 4b/7a/5bbae2b6431df921757188f742be6e919393aa55787b582aecb281d1a4bf/plum_dispatch-2.7.1.tar.gz | source | sdist | null | false | 2453b5551238bb65f93f25c94d42d5dd | 38f04f42f2cc4f726083244e52ee04cfd155f53fad55423ec0fd1bfee3fc97a9 | 4b7a5bbae2b6431df921757188f742be6e919393aa55787b582aecb281d1a4bf | null | [
"LICENCE.txt"
] | 2,816 |
2.4 | annal | 0.2.0 | Semantic memory server for AI agent teams | # Annal
*A tool built by tools, for tools.*
> Early stage — this project is under active development and not yet ready for production use. APIs, config formats, and storage schemas may change without notice. If you're curious, feel free to explore and open issues, but expect rough edges.
Semantic memory server for AI agent teams. Stores, searches, and retrieves knowledge across sessions using ChromaDB with local ONNX embeddings, exposed as an MCP server.
Designed for multi-agent workflows where analysts, architects, developers, and reviewers need shared institutional memory — decisions made months ago surface automatically when relevant, preventing contradictions and preserving context that no single session can hold.
## How it works
Annal runs as a persistent MCP server (stdio or HTTP) and provides tools for storing, searching, updating, and managing memories. Memories are embedded locally using all-MiniLM-L6-v2 (ONNX) and stored in ChromaDB, namespaced per project.
File indexing is optional. Point Annal at directories to watch and it will chunk markdown files by heading, track modification times for incremental re-indexing, and keep the store current via watchdog filesystem events. For large repos, file watching can be disabled per-project — agents trigger re-indexing on demand via `index_files`.
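Heading-based chunking can be sketched in a few lines (illustrative only, not Annal's actual chunker):

```python
import re


def chunk_by_heading(markdown: str) -> list[dict]:
    """Split a markdown document into (heading, text) chunks."""
    chunks: list[dict] = []
    heading, body = "(preamble)", []
    for line in markdown.splitlines():
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            if any(s.strip() for s in body):
                chunks.append({"heading": heading, "text": "\n".join(body).strip()})
            heading, body = m.group(2), []
        else:
            body.append(line)
    if any(s.strip() for s in body):
        chunks.append({"heading": heading, "text": "\n".join(body).strip()})
    return chunks
```

Each chunk then gets embedded and stored with the `indexed` tag alongside agent memories.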
Indexing is non-blocking. `init_project` and `index_files` return immediately while reconciliation runs in the background. Agents poll `index_status` to track progress, which shows elapsed time and chunk counts.
Agent memories and file-indexed content coexist in the same search space but are distinguished by tags (`memory`, `decision`, `pattern`, `bug`, `indexed`, etc.), so agents can search everything or filter to just what they need.
A web dashboard (HTMX + Jinja2) runs alongside the server, providing a browser-based view of memories with search, browsing, bulk delete, and live SSE updates when memories are stored or indexing is in progress.
## Quick start
```bash
pip install annal
# One-shot setup: creates service, configures MCP clients, starts the daemon
annal install
```
Or from source:
```bash
git clone https://github.com/heyhayes/annal.git
cd annal
pip install -e ".[dev]"
# Run in stdio mode (single session)
annal
# Run as HTTP daemon (shared across sessions)
annal --transport streamable-http
```
`annal install` detects your OS and sets up the appropriate service (systemd on Linux, launchd on macOS, scheduled task on Windows). It also writes MCP client configs for Claude Code, Codex, and Gemini CLI.
## MCP client integration
### Claude Code
Add to `~/.mcp.json` for stdio mode:
```json
{
"mcpServers": {
"annal": {
"command": "/path/to/annal/.venv/bin/annal"
}
}
}
```
For HTTP daemon mode (recommended when running multiple concurrent sessions):
```json
{
"mcpServers": {
"annal": {
"type": "http",
"url": "http://localhost:9200/mcp"
}
}
}
```
### Codex / Gemini CLI
`annal install` writes the appropriate config files automatically. See `annal install` output for paths.
## Project setup
On first use, call `init_project` with watch paths for file indexing, or just start storing memories — unknown projects are auto-registered in the config.
```
init_project(project_name="myapp", watch_paths=["/home/user/projects/myapp"])
```
Every tool takes a `project` parameter. Use the directory name of the codebase you're working in (e.g. "myapp", "annal").
## Tools
`store_memory` — Store knowledge with tags and source attribution. Near-duplicates (>95% similarity) are automatically skipped.
`search_memories` — Natural language search with optional tag filtering and similarity scores. Supports `mode="probe"` for compact summaries (saves context window) and `mode="full"` for complete content. Optional `min_score` filter suppresses low-relevance noise.
`expand_memories` — Retrieve full content for specific memory IDs. Use after a probe search to fetch details for relevant results.
`update_memory` — Revise content, tags, or source on an existing memory without losing its ID or creation timestamp. Tracks `updated_at` alongside the original.
`delete_memory` — Remove a specific memory by ID.
`list_topics` — Show all tags and their frequency counts.
`init_project` — Register a project with watch paths, patterns, and exclusions for file indexing. Indexing starts in the background and returns immediately.
`index_files` — Full re-index: clears all file-indexed chunks and re-indexes from scratch. Use after changing exclude patterns to remove stale chunks.
`index_status` — Per-project diagnostics: total chunks, file-indexed vs agent memory counts, indexing state with elapsed time, and last reconcile timestamp.
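A typical retrieval flow chains a probe search with a targeted expand, in the same call style as the `init_project` example above (illustrative only; the exact name of the ID parameter is an assumption):

```
hits = search_memories(project="myapp", query="auth token refresh", mode="probe")
expand_memories(project="myapp", ids=[...])  # fetch full content only for the relevant hits
```

This keeps probe results cheap in context and pays the token cost only for the memories that matter.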
## Configuration
`~/.annal/config.yaml`:
```yaml
data_dir: ~/.annal/data
port: 9200
projects:
myapp:
watch_paths:
- /home/user/projects/myapp
watch_patterns:
- "**/*.md"
- "**/*.yaml"
- "**/*.toml"
- "**/*.json"
watch_exclude:
- "**/node_modules/**"
- "**/vendor/**"
- "**/.git/**"
- "**/.venv/**"
- "**/__pycache__/**"
- "**/dist/**"
- "**/build/**"
large-repo:
watch: false # disable file watching, use index_files on demand
watch_paths:
- /home/user/projects/large-repo
```
## Running as a daemon
The recommended approach is `annal install`, which sets up the service for your OS automatically.
For manual setup, use the service scripts in `contrib/`:
### Linux (systemd)
```bash
cp contrib/annal.service ~/.config/systemd/user/
# Edit ExecStart path, then:
systemctl --user daemon-reload
systemctl --user enable --now annal
```
### macOS (launchd)
```bash
cp contrib/com.annal.server.plist ~/Library/LaunchAgents/
# Edit the ProgramArguments path, then:
launchctl load ~/Library/LaunchAgents/com.annal.server.plist
```
### Windows (scheduled task)
```powershell
.\contrib\annal-service.ps1 -Action install -AnnalPath "C:\path\to\annal\.venv\Scripts\annal.exe"
Start-ScheduledTask -TaskName "Annal MCP Server"
```
## Dashboard
When running as an HTTP daemon, the dashboard is available at `http://localhost:9200`. It provides:
- Memory browsing with pagination and filters (by type, source, tags)
- Full-text search across memories
- Expandable content previews
- Bulk delete by filter
- Live SSE updates when memories are stored, deleted, or indexing is in progress
Disable with `--no-dashboard` if not needed.
## Development
```bash
pip install -e ".[dev]"
pytest -v
```
95 tests covering store operations, search, indexing, file watching, dashboard routes, SSE events, and CLI installation.
## License
MIT — see [LICENSE](LICENSE).
| text/markdown | heyhayes | null | null | null | null | ai-agents, chromadb, mcp, memory, semantic-search | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"chromadb>=0.5.0",
"jinja2>=3.1",
"mcp[cli]>=1.2.0",
"pyyaml>=6.0",
"watchdog>=4.0.0",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/heyhayes/annal",
"Repository, https://github.com/heyhayes/annal",
"Issues, https://github.com/heyhayes/annal/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T09:27:44.722058 | annal-0.2.0.tar.gz | 130,398 | 30/9c/7697e1954d89c6d1c5c538e27ac76793e8ef0267f6f32a9ad13ee491cd2f/annal-0.2.0.tar.gz | source | sdist | null | false | 4f206919fad883d80a2af419fceee801 | 3fc4f1dc6386ca8a2c6b9d331e9ee3c43d155861b36e3d1aaa05136507d74ce6 | 309c7697e1954d89c6d1c5c538e27ac76793e8ef0267f6f32a9ad13ee491cd2f | MIT | [
"LICENSE"
] | 218 |
2.4 | momahub | 0.1.0 | Distributed AI inference hub — Mixture of Models on Ollama | # MoMa Hub
**Distributed AI inference on consumer GPUs — Mixture of Models on Ollama**
```
pip install momahub
```
---
## What is MoMa Hub?
MoMa Hub is the infrastructure layer for the **MoMa (Mixture of Models on Ollama)** vision:
a federated network where anyone with a gaming GPU can contribute inference capacity to the
global AI commons — and route tasks to the right model on the right GPU automatically.
| Inspiration | What they shared | MoMa Hub equivalent |
|-------------|-----------------|---------------------|
| SETI@home | Idle CPU cycles | Idle GPU cycles |
| Airbnb | Spare bedrooms | Spare VRAM |
| Docker Hub | Container images | Ollama runtime configs |
| GitHub | Source code | SPL prompt scripts |
A **GTX 1080 Ti** (11 GB VRAM, ~$150 used) runs typical quantized 7B models in real time.
There are millions sitting idle in gaming PCs. MoMa Hub organises them.
---
## Quick Start
```bash
# 1. Install
pip install momahub
# 2. Start the hub server
momahub serve --port 8765
# 3. Register your Ollama node (in another terminal)
momahub register --node-id home-gpu-0 \
--url http://localhost:11434 \
--gpu "GTX 1080 Ti" \
--vram 11 \
--models qwen2.5:7b --models mistral:7b
# 4. Check nodes
momahub nodes
# 5. Run inference through the hub
momahub infer --model qwen2.5:7b --prompt "Explain attention mechanisms"
```
### Python SDK
```python
import asyncio
from momahub import MoMaHub, NodeInfo, InferenceRequest
hub = MoMaHub()
hub.register(NodeInfo(
node_id="home-gpu-0",
url="http://localhost:11434",
gpu_model="GTX 1080 Ti",
models=["qwen2.5:7b"],
))
resp = asyncio.run(hub.infer(InferenceRequest(
model="qwen2.5:7b",
prompt="Hello from MoMa Hub!",
)))
print(resp.content)
```
---
## Architecture
```
Consumer GPUs (GTX 1080 Ti × N)
│ Ollama serve (one per GPU)
│
▼
MoMa Hub ── FastAPI registry + round-robin router
│
▼
Client CLI / Python SDK / SPL scripts
```
See [docs/DESIGN.md](docs/DESIGN.md) for the full roadmap and hardware reference.
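The round-robin routing in the hub layer can be sketched in a few lines. This is a simplified illustration, not the actual momahub implementation — the real registry also tracks node URLs, VRAM, and health, and the class and method names here are hypothetical:

```python
from itertools import count

class RoundRobinRouter:
    """Rotate inference requests across nodes that host the requested model."""

    def __init__(self):
        self.nodes = {}       # node_id -> set of hosted model names
        self._turn = count()  # monotonically increasing pick index

    def register(self, node_id, models):
        self.nodes[node_id] = set(models)

    def pick(self, model):
        # Only nodes that actually host the requested model are candidates.
        candidates = sorted(nid for nid, hosted in self.nodes.items() if model in hosted)
        if not candidates:
            raise LookupError(f"no registered node serves {model!r}")
        return candidates[next(self._turn) % len(candidates)]
```

With two nodes registered for the same model, successive `pick` calls alternate between them, spreading load without any per-node state.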
---
## Roadmap
| Version | Milestone |
|---------|-----------|
| **v0.1** (now) | Local MVP: register nodes, round-robin routing, CLI + SDK |
| v0.2 | Persistent registry, capability-aware routing, heartbeat daemon |
| v0.3 | LAN mDNS discovery, SPL integration (`USING HUB momahub://...`) |
| v0.4 | Internet federation, auth, public hub registry |
---
## Contributing
MoMa Hub is planned as open-source (Apache 2.0). Once the MVP is proven on 4× GTX 1080 Ti
at home, the repo will go public. Watch the GitHub repository for updates.
```
https://github.com/digital-duck/momahub
```
---
## License
Apache 2.0
| text/markdown | null | null | null | null | Apache-2.0 | distributed-inference, gpu-sharing, llm, moma, ollama | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"dd-config>=0.1.0",
"dd-db>=0.1.2",
"dd-embed>=0.1.0",
"dd-llm>=0.1.0",
"dd-logging>=0.1.0",
"dd-vectordb>=0.1.2",
"fastapi>=0.104.0",
"httpx>=0.25.0",
"pydantic>=2.0.0",
"spl-flow>=0.1.0",
"spl-llm>=0.1.0",
"uvicorn>=0.24.0",
"httpx; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/momahub",
"Repository, https://github.com/digital-duck/momahub",
"Issues, https://github.com/digital-duck/momahub/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-21T09:27:33.116929 | momahub-0.1.0.tar.gz | 14,733 | ab/56/8815a1c4d68656231982ecc544bafca3a9bb6f6a9fa4e09ca1c43dbc4f47/momahub-0.1.0.tar.gz | source | sdist | null | false | ce0f93124f0b6149b1bd41c72b9c7086 | ad8b6e3a44fa7ef82e9a7e219ad92b6f821d779fd7a29e7abc8edfeffad6a6b1 | ab568815a1c4d68656231982ecc544bafca3a9bb6f6a9fa4e09ca1c43dbc4f47 | null | [
"LICENSE"
] | 251 |
2.4 | llmdebug | 2.14.6 | Structured debug snapshots for LLM-assisted debugging | <p align="center">
<img src="logo/bird.png" alt="llmdebug logo" width="200">
</p>
# llmdebug
Structured debug snapshots for LLM-assisted debugging.
When your code fails, `llmdebug` captures the exception, stack frames, local variables, and environment info in a JSON format optimized for LLM consumption. This enables **evidence-based debugging** instead of the "guess → patch → rerun" loop.
Current feature status is documented in this README (source of truth for shipped capabilities). Research context and forward-looking priorities live in `docs/research-improvement-roadmap.md`.
## Why?
Without observability, LLMs debug by guessing:
```
fail → guess patch → rerun → repeat (LLM roulette)
```
With `llmdebug`, failures produce rich snapshots automatically:
```
fail → read snapshot → ranked hypotheses → minimal patch → verify
```
The key insight: **baseline instrumentation should always be on**, so the first failure already has the evidence needed to diagnose it.
## Installation
```bash
pip install llmdebug # Core library + pytest plugin
pip install llmdebug[cli] # CLI for viewing snapshots
pip install llmdebug[mcp] # MCP server for IDE integration (Claude Code, etc.)
pip install llmdebug[jupyter] # Jupyter/IPython integration
pip install llmdebug[toon] # TOON output format for maximum token savings
```
## Quick Start
### Pytest (automatic - recommended)
Just install the package. Test failures automatically generate snapshots.
```bash
pytest # Failures create .llmdebug/latest.json
```
### Decorator
```python
from llmdebug import debug_snapshot
@debug_snapshot()
def main():
data = load_data()
process(data)
if __name__ == "__main__":
main()
```
### Context Manager
For targeted instrumentation when you need more detail:
```python
from llmdebug import snapshot_section
with snapshot_section("data_processing"):
result = transform(data)
```
### Snapshot Privacy Defaults
`debug_snapshot()` and `snapshot_section()` keep backward-compatible redaction defaults.
If you do not pass any redaction settings, llmdebug emits a `UserWarning` so you can
opt into a safer profile explicitly.
Recommended:
```python
from llmdebug import debug_snapshot, snapshot_section
@debug_snapshot(redaction_profile="ci") # safer dev/CI default
def run_job():
...
with snapshot_section("checkout", redaction_profile="prod"): # stricter production profile
...
```
### Jupyter / IPython
Automatic snapshot capture in notebooks with rich HTML display:
```python
# In a notebook cell:
%load_ext llmdebug
# Or programmatically:
import llmdebug
llmdebug.load_jupyter()
```
After any cell error, a compact banner shows the exception, crash location, and hints. Use magic commands for deeper analysis:
```python
%llmdebug # Show full snapshot with locals and context
%llmdebug hypothesize # Generate ranked debugging hypotheses
%llmdebug diff # Compare latest vs previous snapshot
%llmdebug list # List recent snapshots
%llmdebug config # Show active configuration
```
Requires the `jupyter` extra: `pip install llmdebug[jupyter]`
### Production Hooks
Capture unhandled exceptions automatically in production applications:
```python
import llmdebug
llmdebug.install_hooks(out_dir=".llmdebug")
# Any unhandled exception, thread crash, or unraisable exception
# will now produce a snapshot automatically.
# Optional: uninstall when done
llmdebug.uninstall_hooks()
```
Hooks install into `sys.excepthook`, `threading.excepthook`, and `sys.unraisablehook`. They include rate limiting (default: 10 captures/min) and automatic PII redaction.
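The sliding-window behavior of such a capture limit can be pictured as follows (an illustration of the concept only, not llmdebug's actual implementation — the class name is hypothetical):

```python
import time
from collections import deque

class CaptureRateLimiter:
    """Allow at most `max_events` captures per `window` seconds (sliding window)."""

    def __init__(self, max_events=10, window=60.0):
        self.max_events = max_events
        self.window = window
        self._stamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self._stamps and now - self._stamps[0] > self.window:
            self._stamps.popleft()
        if len(self._stamps) < self.max_events:
            self._stamps.append(now)
            return True
        return False
```

Captures beyond the budget are simply skipped until older ones age out of the window, so a crash loop cannot flood the snapshot directory.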
### Web Middleware
Zero-config crash capture for web frameworks:
```python
# Flask
from llmdebug import LLMDebugWSGIMiddleware
app.wsgi_app = LLMDebugWSGIMiddleware(app.wsgi_app)

# FastAPI
from llmdebug import LLMDebugASGIMiddleware
app.add_middleware(LLMDebugASGIMiddleware)

# Django WSGI
from llmdebug import LLMDebugWSGIMiddleware
application = LLMDebugWSGIMiddleware(application)
```
Middleware captures request context (method, path, query string) alongside the crash snapshot, with automatic PII redaction on query parameters.
## CLI
View and manage snapshots in the terminal with rich formatting:
```bash
llmdebug # Show latest snapshot (crash-level detail)
llmdebug show --detail full # Show all stack frames
llmdebug show --detail context # Everything including repro, git, env
llmdebug show --json # Output raw expanded JSON
llmdebug show --raw-session # Output raw DebugSession envelope JSON
llmdebug list # List recent snapshots
llmdebug frames -i 0 # Inspect a specific frame
llmdebug diff # Compare latest vs previous snapshot
llmdebug git-context # On-demand enhanced git metadata
llmdebug git-context --json # Enhanced git metadata as JSON
llmdebug hypothesize # Auto-generate debugging hypotheses
llmdebug clean -k 5 # Keep only 5 most recent snapshots
```
All commands accept `--dir <path>` to point at a custom snapshot directory.
Requires the `cli` extra: `pip install llmdebug[cli]`
### Detail Levels
The `show` command defaults to **crash** level for minimal token usage. Use `--detail` to control verbosity:
| Level | Content | Typical Size |
|-------|---------|--------------|
| `crash` (default) | Exception + crash frame only | ~2K tokens |
| `full` | All frames + traceback | ~5K tokens |
| `context` | Everything (repro, git, env, coverage) | ~10K tokens |
### Snapshot Diffing
Compare two snapshots to see what changed between runs:
```bash
llmdebug diff # Compare latest vs previous
llmdebug diff old.json new.json # Compare specific files
llmdebug diff --json # Output diff as JSON
```
### Enhanced Git Context
Get richer git-aware debugging metadata on demand (without inflating snapshot capture payloads):
```bash
llmdebug git-context # Latest snapshot, text view
llmdebug git-context --json # JSON output for tooling
llmdebug git-context '#2' # Specific snapshot reference
```
Outputs metadata only:
- crash-line blame metadata
- recent commit metadata + shortstats
- crash-file diffstat metadata
### Hypothesis Generation
Auto-generate ranked debugging hypotheses from snapshot patterns:
```bash
llmdebug hypothesize # Analyze latest snapshot
llmdebug hypothesize --json # Output as JSON array
```
The hypothesis engine includes 10 pattern detectors that identify common bug patterns (empty arrays, shape mismatches, None values, off-by-one errors, etc.) and provide actionable suggestions.
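One such detector can be pictured as a small predicate over the summarized locals. This is a hypothetical sketch of the idea, not the engine's code:

```python
def detect_empty_arrays(frame_locals):
    """Flag array-like locals whose summarized shape contains a zero dimension."""
    hits = []
    for name, summary in frame_locals.items():
        shape = summary.get("shape") if isinstance(summary, dict) else None
        if shape and 0 in shape:
            hits.append((name, shape))
    return hits

# Locals summarized the way llmdebug stores them: shape + dtype, not raw data.
crash_locals = {
    "x": {"__array__": "jax.Array", "shape": [0, 64], "dtype": "float32"},
    "residual": {"__array__": "jax.Array", "shape": [32, 64], "dtype": "float32"},
    "i": 3,
}
print(detect_empty_arrays(crash_locals))  # [('x', [0, 64])]
```

A hit like this maps directly to the "empty array, upstream data issue" signal in the checklist further below.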
## MCP Server
`llmdebug` includes an MCP server for direct IDE integration (Claude Code, Cursor, etc.):
```bash
llmdebug-mcp # Start the MCP server (stdio transport)
```
Install with: `pip install llmdebug[mcp]`
### Available Tools
| Tool | Description |
|------|-------------|
| `llmdebug_diagnose` | Concise crash summary optimized for LLM consumption |
| `llmdebug_show` | Full expanded JSON snapshot with detail level control |
| `llmdebug_list` | List available snapshots with metadata |
| `llmdebug_frame` | Detailed view of a specific stack frame |
| `llmdebug_git_context` | On-demand enhanced git metadata for crash triage |
| `llmdebug_diff` | Compare two snapshots to show what changed |
| `llmdebug_hypothesize` | Generate ranked debugging hypotheses |
| `llmdebug_rca_status` | Show latest RCA state for a session |
| `llmdebug_rca_history` | Show RCA attempt history |
| `llmdebug_rca_advance` | Manually advance RCA state machine |
`llmdebug_diagnose`/`llmdebug_show` support detail controls; RCA-related tools return JSON state payloads.
### Evidence-First Defaults
MCP evidence tools are evidence-only by default and optimized for model consumption:
- `response_format="json"` by default on evidence tools (`diagnose`, `show`, `frame`, `git_context`,
`diff`, `hypothesize`).
- `with_rca=false` by default on evidence tools (RCA coaching/state is opt-in).
- `evidence_schema="summary"` omits heavy payloads by default.
This keeps tool outputs compact and neutral. LLM reasoning remains primary; the protocol focuses on
transporting high-signal evidence and retry deltas.
Default JSON envelope shape:
```json
{
"tool": "llmdebug_diagnose",
"format_version": "2.0",
"evidence": {
"exception": {"type": "", "message": "", "category": "", "suggestion": ""},
"crash_frame": {"index": 0, "file_rel": "", "line": 0, "function": "", "code": ""},
"locals_summary": [],
"repro": {"argv": [], "nodeid": ""},
"evidence_ids": [],
"sections": []
},
"metadata": {
"response_format": "json",
"with_rca": false,
"evidence_schema": "summary"
}
}
```
Notable MCP parameters:
- `llmdebug_show(raw_session=true)` returns the raw DebugSession envelope.
- `llmdebug_show(with_rca=true)` returns `{snapshot, rca}` JSON.
- `gate_mode=off|soft|strict` and `exploratory=true|false` are available on RCA-aware tools.
### RCA Workflow (Opt-In)
Evidence tools omit RCA metadata unless `with_rca=true`.
Use `llmdebug_rca_status` and `llmdebug_rca_history` to inspect progression, or
`llmdebug_rca_advance` for custom/manual agent workflows.
RCA prompt contract reference: `docs/rca_prompt_contract.md`.
### Claude Code Configuration
Add to your project's `.mcp.json`:
```json
{
"mcpServers": {
"llmdebug": {
"command": "llmdebug-mcp"
}
}
}
```
## Output
On failure, `.llmdebug/latest.json` stores a versioned DebugSession envelope:
```json
{
"schema_version": "2.0",
"kind": "llmdebug.debug_session",
"session": {
"name": "test_training_step",
"timestamp_utc": "2026-01-27T14:30:52Z",
"llmdebug_version": "2.3.0"
},
"snapshot": {
"exception": {
"type": "ValueError",
"message": "operands could not be broadcast together..."
},
"frames": [
{
"file": "training.py",
"line": 42,
"function": "train_step",
"code": "output = model(x) + residual",
"locals": {
"x": {"__array__": "jax.Array", "shape": [32, 64], "dtype": "float32"},
"residual": {"__array__": "jax.Array", "shape": [32, 128], "dtype": "float32"}
}
}
]
},
"context": {
"env": {"python": "3.12.0", "platform": "Darwin-24.0.0-arm64"}
}
}
```
For compatibility, `get_latest_snapshot()` and loader APIs return a normalized flat view by default:
```python
from llmdebug import get_latest_snapshot
flat = get_latest_snapshot() # default: normalized flat snapshot
from llmdebug.output import get_latest_snapshot as get_raw_snapshot
raw = get_raw_snapshot(normalize=False) # raw DebugSession envelope
```
**Key features:**
- Crash frame is at index 0 (most relevant first)
- Arrays summarized with `shape` and `dtype` (not raw data)
- Source snippet around the failing line
- Environment info for reproducibility
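Given that layout, a triage script can pull the headline facts with plain dict access (the snapshot below hard-codes the example envelope above; field names follow the shapes shown there):

```python
# The "snapshot" portion of the envelope above, as a plain dict.
snapshot = {
    "exception": {"type": "ValueError", "message": "operands could not be broadcast together..."},
    "frames": [
        {"file": "training.py", "line": 42, "function": "train_step",
         "locals": {"x": {"shape": [32, 64]}, "residual": {"shape": [32, 128]}}},
    ],
}

exc = snapshot["exception"]
crash = snapshot["frames"][0]  # crash frame is first
shapes = {n: m["shape"] for n, m in crash["locals"].items()
          if isinstance(m, dict) and "shape" in m}

print(f"{exc['type']} at {crash['file']}:{crash['line']} in {crash['function']}")
print(f"array shapes: {shapes}")
```

The mismatched trailing dimensions (64 vs 128) are exactly the kind of evidence the checklist below asks you to read before patching.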
### Snapshot Enrichment
Snapshots are automatically enriched with contextual data:
- **Schema metadata**: `schema_version`, `llmdebug_version`, `crash_frame_index`
- **Exception detail**: `qualified_type`, `args`, `notes`, `cause`, `context`, `exceptions` (ExceptionGroup), `error_category` with auto-classification and suggestions
- **Frame metadata**: `module`, `file_rel`, `locals_meta` (type/size hints), truncation markers
- **Git context**: commit hash, branch, dirty status
- **Pytest context**: `longrepr`, `capstdout`, `capstderr`, params, `repro` command
- **Coverage data**: executed/missing lines, branch stats (when pytest-cov is active)
- **Async context**: asyncio task name and state
- **Log records**: recent log entries (opt-in via `capture_logs=True`)
- **Capture config**: frames, locals_mode, truncation limits, redaction patterns
## For Claude Code / LLM Users
Add this to your project's `CLAUDE.md`:
```markdown
## Debug Snapshots (llmdebug)
This project uses `llmdebug` for structured crash diagnostics.
### On any failure:
1. **Read `.llmdebug/latest.json` first** (or run `llmdebug show --json`) - never patch before reading
2. Analyze the snapshot:
- **Exception type/message** - what went wrong
- **Crash frame (index 0)** - where it happened
- **Locals** - variable values at crash time
- **Array shapes** - look for empty arrays, shape mismatches
3. **Produce 2-4 ranked hypotheses** based on evidence
4. Apply minimal fix for the most likely hypothesis
5. Re-run to verify
### Key signals:
- `shape: [0, ...]` - empty array, upstream data issue
- `None` where object expected - initialization bug
- Shape mismatch in binary op - broadcasting error
- `i=10` with `len(arr)=10` - off-by-one
### When the snapshot isn't enough:
If locals show the symptom but not the cause:
1. Add `with snapshot_section("stage_x")` around suspect code
2. Re-run to get a better snapshot
3. Repeat hypothesis→patch loop
### Don't:
- Guess without reading the snapshot first
- Make multiple speculative changes at once
- Refactor until tests pass
```
## Configuration
```python
@debug_snapshot(
out_dir=".llmdebug", # Output directory
frames=5, # Stack frames to capture
source_context=3, # Lines of source before/after crash
source_mode="all", # "all" | "crash_only" | "none"
locals_mode="safe", # "safe" | "meta" | "none"
max_str=500, # Truncate long strings
max_items=50, # Truncate large collections
redaction_profile="dev", # Optional: "dev" | "ci" | "prod"
redact=[r"api_key=.*"], # Regex patterns to redact
redact_keys=False, # Keep dict keys stable by default
redact_traceback=False, # Redact traceback text
redact_exception_strings=False, # Redact exception message/args/notes
include_env=True, # Include Python/platform info
max_snapshots=50, # Auto-cleanup old snapshots (0 = unlimited)
output_format="json_compact", # "json" | "json_compact" | "toon"
include_git=True, # Git commit/branch/dirty status
include_args=True, # Separate function arguments from locals
categorize_errors=True, # Auto-classify errors with suggestions
include_async_context=True, # Asyncio task info
include_array_stats=False, # Compute min/max/mean/std for arrays
capture_logs=False, # Capture recent log records
log_max_records=20, # Max log records to capture
include_coverage=True, # Pytest-plugin coverage enrichment toggle
include_modules=None, # Filter frames by module prefix (None = all)
max_exception_depth=5, # Exception chain recursion limit
lock_timeout=5.0, # Seconds to wait for file lock
)
```
Redaction defaults to leaf string values only. This avoids accidental key collisions in nested dicts.
Set `redact_keys=True` only if you explicitly need key-name redaction and can accept possible key merging.
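Leaf-only redaction amounts to walking the structure and rewriting string values while leaving dict keys untouched. The sketch below illustrates the assumed behavior and is not llmdebug's code:

```python
import re

def redact_leaves(obj, patterns, replacement="[REDACTED]"):
    """Recursively redact string *values*; dict keys are never rewritten."""
    if isinstance(obj, dict):
        return {k: redact_leaves(v, patterns, replacement) for k, v in obj.items()}
    if isinstance(obj, list):
        return [redact_leaves(v, patterns, replacement) for v in obj]
    if isinstance(obj, str):
        for pattern in patterns:
            obj = re.sub(pattern, replacement, obj)
        return obj
    return obj

locals_view = {"headers": {"authorization": "api_key=abc123"}, "retries": 3}
print(redact_leaves(locals_view, [r"api_key=\S+"]))
# {'headers': {'authorization': '[REDACTED]'}, 'retries': 3}
```

Because keys are never touched, two distinct keys can never be redacted into the same name and silently merged — the collision risk `redact_keys=True` reintroduces.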
`redaction_profile` provides preset behavior:
- `dev`: minimal redaction defaults
- `ci`: stronger string redaction for non-local workflows
- `prod`: strictest defaults (includes traceback/exception-string redaction)
Profiles are additive: explicit `redact`/`redact_*` options always take precedence.
`include_coverage` currently applies to pytest-plugin failure captures only. Coverage data is
attached when pytest-cov is active and `LLMDEBUG_INCLUDE_COVERAGE` is enabled.
Decorator/context-manager captures do not currently add coverage payloads.
### Environment Variables
All configuration options can also be set via environment variables for pytest:
```bash
LLMDEBUG_OUTPUT_FORMAT=json pytest # Use pretty JSON
LLMDEBUG_INCLUDE_GIT=false pytest # Disable git context
LLMDEBUG_CAPTURE_LOGS=true pytest # Enable log capture
LLMDEBUG_REDACTION_PROFILE=ci pytest # Use CI redaction profile
LLMDEBUG_REDACT_TRACEBACK=true pytest # Redact traceback text
LLMDEBUG_REDACT_EXCEPTION_STRINGS=true pytest # Redact exception strings
LLMDEBUG_RCA_MAX_RECORDS=5000 pytest # Cap persisted RCA history records
```
### Output Formats
llmdebug supports multiple output formats to optimize for different use cases:
| Format | Size | Best For |
|--------|------|----------|
| `json` | baseline | Human readability, external tools |
| `json_compact` (default) | ~40% smaller | LLM context efficiency |
| `toon` | ~50% smaller | Maximum token savings |
**Compact JSON** uses abbreviated keys (e.g., `_exc` instead of `exception`) to reduce token usage. The `get_latest_snapshot()` function auto-expands keys and normalizes DebugSession envelopes by default, so your code works identically regardless of format.
### Pytest Opt-out
Skip snapshot capture for specific tests:
```python
import pytest
@pytest.mark.no_snapshot
def test_expected_failure():
...
```
## API
```python
from llmdebug import (
# Capture
debug_snapshot, # Decorator for exception capture
snapshot_section, # Context manager for targeted capture
get_latest_snapshot, # Read the most recent snapshot (auto-expands keys)
SnapshotConfig, # Configuration dataclass
RedactionProfile, # Type alias: "dev" | "ci" | "prod"
resolve_redaction_policy,# Resolve profile + explicit redaction settings
# Analysis
generate_hypotheses, # Auto-generate debugging hypotheses from a snapshot
Hypothesis, # Hypothesis dataclass (confidence, pattern, evidence, suggestion)
filter_snapshot, # Layered disclosure: filter to crash/full/context detail
DetailLevel, # Type alias: "crash" | "full" | "context"
# Production hooks
install_hooks, # Install sys.excepthook + thread + unraisable hooks
uninstall_hooks, # Restore original hooks
PII_PATTERNS, # Default PII redaction patterns (email, API keys, etc.)
# Jupyter / IPython
load_jupyter, # Install into current IPython/Jupyter session
# Web middleware
LLMDebugWSGIMiddleware, # WSGI middleware (Flask, Django)
LLMDebugASGIMiddleware, # ASGI middleware (FastAPI, Starlette)
# Log capture
enable_log_capture, # Install log handler to capture recent records
)
# Read the most recent snapshot programmatically
snapshot = get_latest_snapshot() # Returns dict or None
# Filter to minimal detail for LLM context
from llmdebug import filter_snapshot
filtered = filter_snapshot(snapshot, "crash") # Exception + crash frame only
# Generate debugging hypotheses
from llmdebug import generate_hypotheses
hypotheses = generate_hypotheses(snapshot)
for h in hypotheses:
print(f"[{h.confidence:.0%}] {h.description}")
print(f" Suggestion: {h.suggestion}")
```
## Project Docs
- Contributing guide: `CONTRIBUTING.md`
- Security policy: `SECURITY.md`
- Code of conduct: `CODE_OF_CONDUCT.md`
## License
MIT
| text/markdown | null | Nicolas Schuler <schuler.nicolas@proton.me> | null | null | MIT | crash-reporting, debugging, llm, pytest | [
"Development Status :: 4 - Beta",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Debuggers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"filelock>=3.0",
"polars>=1.12.0",
"scikit-learn>=1.4.0",
"scipy>=1.13",
"click>=8.0; extra == \"cli\"",
"rich>=13.0; extra == \"cli\"",
"bandit>=1.8.0; extra == \"dev\"",
"click>=8.0; extra == \"dev\"",
"deptry>=0.22.0; extra == \"dev\"",
"diff-cover>=9.2; extra == \"dev\"",
"import-linter>=2.0; extra == \"dev\"",
"ipython>=8.0; extra == \"dev\"",
"mcp>=1.0; extra == \"dev\"",
"mutmut>=3.2; extra == \"dev\"",
"numpy>=1.20; extra == \"dev\"",
"pip-audit>=2.9.0; extra == \"dev\"",
"pyright>=1.1.390; extra == \"dev\"",
"pytest-asyncio>=0.25.0; extra == \"dev\"",
"pytest-benchmark>=4.0; extra == \"dev\"",
"pytest-cov>=6.0; extra == \"dev\"",
"pytest>=9.0; extra == \"dev\"",
"python-semantic-release>=9.0; extra == \"dev\"",
"radon>=6.0; extra == \"dev\"",
"rich>=13.0; extra == \"dev\"",
"ruff>=0.12.0; extra == \"dev\"",
"toons>=0.1; extra == \"dev\"",
"vulture>=2.14; extra == \"dev\"",
"xenon>=0.9.3; extra == \"dev\"",
"datasets>=2.0; extra == \"evals\"",
"swebench==4.1.0; extra == \"evals\"",
"testcontainers>=4.13.2; extra == \"evals\"",
"ipython>=8.0; extra == \"jupyter\"",
"mcp>=1.0; extra == \"mcp\"",
"toons>=0.1; extra == \"toon\""
] | [] | [] | [] | [
"Homepage, https://github.com/NicolasSchuler/llmdebug",
"Repository, https://github.com/NicolasSchuler/llmdebug"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T09:25:38.572716 | llmdebug-2.14.6.tar.gz | 5,902,219 | 4b/e6/f79ff06a6a9727e4c4a29dfa3ebb6566c973664ee55c5e5d13b8385457ae/llmdebug-2.14.6.tar.gz | source | sdist | null | false | df10ae02dda6b0e2aa7ef9a3707925fe | c5c326dd86d6b7cf5b18d5ecdf339999f5e98c973ca43e5d4075064bb31a6f93 | 4be6f79ff06a6a9727e4c4a29dfa3ebb6566c973664ee55c5e5d13b8385457ae | null | [
"LICENSE"
] | 226 |
2.2 | gangdan | 1.0.2 | Offline Development Assistant powered by Ollama and ChromaDB | # GangDan - Offline Dev Assistant
A local-first, offline programming assistant powered by [Ollama](https://ollama.ai/) and [ChromaDB](https://www.trychroma.com/). Chat with LLMs, build a vector knowledge base from documentation, run terminal commands, and get AI-generated shell suggestions -- all from a single browser tab.
> **GangDan (纲担)** -- Principled and Accountable.

## Features
- **RAG Chat** -- Ask questions with optional retrieval from a local ChromaDB knowledge base and/or web search (DuckDuckGo, SearXNG, Brave). Responses stream in real-time via SSE. A **Knowledge Base Scope Selector** lets you pick exactly which KBs to query.
- **AI Command Assistant** -- Describe what you want to do in natural language; the assistant generates a shell command you can drag-and-drop into the terminal, execute, and have the output auto-summarized.
- **Built-in Terminal** -- Run commands directly in the browser with stdout/stderr display.
- **Documentation Manager** -- One-click download and indexing of 30+ popular library docs (Python, Rust, Go, JS, C/C++, CUDA, Docker, SciPy, Scikit-learn, SymPy, Jupyter, etc.). Batch operations and GitHub repo search included.
- **Custom Knowledge Base Upload** -- Upload your own Markdown (.md) and plain text (.txt) documents to create named knowledge bases. Files are automatically indexed for RAG retrieval.
- **10-Language UI** -- Switch between Chinese, English, Japanese, French, Russian, German, Italian, Spanish, Portuguese, and Korean without page reload.
- **Proxy Support** -- None / system / manual proxy modes for both the chat backend and documentation downloads.
- **Offline by Design** -- Runs entirely on your machine. No cloud APIs required.
## Screenshots
| Chat | Terminal |
|:----:|:--------:|
|  |  |
| Documentation | Settings |
|:-------------:|:--------:|
|  |  |
| Upload Documents | KB Scope Selection |
|:----------------:|:------------------:|
|  |  |
## Requirements
- Python 3.10+
- [Ollama](https://ollama.ai/) running locally (default `http://localhost:11434`)
- A chat model pulled in Ollama (e.g. `ollama pull qwen2.5`)
- An embedding model for RAG (e.g. `ollama pull nomic-embed-text`)
## Installation
### Method 1: Install from PyPI (Recommended)
```bash
pip install gangdan
```
After installation, launch directly:
```bash
# Start GangDan
gangdan
# Or use python -m
python -m gangdan
# Custom host and port
gangdan --host 127.0.0.1 --port 8080
# Specify a custom data directory
gangdan --data-dir /path/to/my/data
```
### Method 2: Install from Source (Development)
```bash
# 1. Clone the repository
git clone https://github.com/cycleuser/GangDan.git
cd GangDan
# 2. (Optional) Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate # Linux/macOS
# .venv\Scripts\activate # Windows
# 3. Install in editable mode with all dependencies
pip install -e .
# 4. Launch GangDan
gangdan
```
### Ollama Setup
Make sure Ollama is installed and running before starting GangDan:
```bash
# Start Ollama service
ollama serve
# Pull a chat model
ollama pull qwen2.5
# Pull an embedding model for RAG
ollama pull nomic-embed-text
```
Open [http://127.0.0.1:5000](http://127.0.0.1:5000) in your browser.
## CLI Options
```
gangdan [OPTIONS]
Options:
--host TEXT Host to bind to (default: 0.0.0.0)
--port INT Port to listen on (default: 5000)
--debug Enable Flask debug mode
--data-dir PATH Custom data directory
--version Show version and exit
```
## Project Structure
```
GangDan/
├── pyproject.toml # Package metadata & build config
├── MANIFEST.in # Source distribution manifest
├── LICENSE # GPL-3.0-or-later
├── README.md # English documentation
├── README_CN.md # Chinese documentation
├── gangdan/
│ ├── __init__.py # Package version
│ ├── __main__.py # python -m gangdan entry
│ ├── cli.py # CLI argument parsing & startup
│ ├── app.py # Flask backend (routes, Ollama, ChromaDB, i18n)
│ ├── templates/
│ │ └── index.html # Jinja2 HTML template
│ └── static/
│ ├── css/
│ │ └── style.css # Application styles (dark theme)
│ └── js/
│ ├── i18n.js # Internationalization & state management
│ ├── utils.js # Panel switching & toast notifications
│ ├── markdown.js # Markdown / LaTeX (KaTeX) rendering
│ ├── chat.js # Chat panel & SSE streaming
│ ├── terminal.js # Terminal & AI command assistant
│ ├── docs.js # Documentation download & indexing
│ └── settings.js # Settings panel & initialization
├── images/ # Screenshots
├── publish.py # PyPI publish helper script
└── test_package.py # Comprehensive package test suite
```
Runtime data (created automatically):
```
~/.gangdan/ # Default when installed via pip
├── gangdan_config.json # Persisted settings
├── docs/ # Downloaded documentation
└── chroma/ # ChromaDB vector store
```
## Architecture
The frontend and backend are fully decoupled:
- **Backend** (`app.py`) -- A single Python file containing Flask routes, the Ollama client, ChromaDB manager, documentation downloader, web searcher, and conversation manager. All server-side configuration is injected into the template via a `window.SERVER_CONFIG` block.
- **Frontend** (`templates/` + `static/`) -- Pure HTML/CSS/JS with no build step. JavaScript files are loaded in dependency order and share state through global functions. KaTeX is loaded from CDN for LaTeX rendering.
ChromaDB is initialized with automatic corruption recovery: if the database is damaged, it is backed up and recreated transparently.
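The recovery pattern can be sketched in a few lines; the function and directory names here are illustrative, not GangDan's actual internals:

```python
import shutil
import time
from pathlib import Path

def init_with_recovery(data_dir, init_store):
    """Open the store; if that fails, move the damaged directory aside
    (keeping it for inspection) and recreate an empty one."""
    path = Path(data_dir)
    try:
        return init_store(path)
    except Exception:
        if path.exists():
            backup = path.with_name(f"{path.name}.corrupt.{int(time.time())}")
            shutil.move(str(path), str(backup))  # back up the damaged data
        path.mkdir(parents=True, exist_ok=True)
        return init_store(path)  # second attempt on a fresh directory
```

The trade-off is an empty index after recovery: previously indexed documents have to be re-ingested.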
## Configuration
All settings are managed through the **Settings** tab in the UI:
| Setting | Description |
|---------|-------------|
| Ollama URL | Ollama server address (default `http://localhost:11434`) |
| Chat Model | Model for conversation (e.g. `qwen2.5:7b-instruct`) |
| Embedding Model | Model for RAG embeddings (e.g. `nomic-embed-text`) |
| Reranker Model | Optional reranker for better search results |
| Proxy Mode | `none` / `system` / `manual` for network requests |
Settings are persisted to `gangdan_config.json` in the data directory.
## License
GPL-3.0-or-later. See [LICENSE](LICENSE) for details.
| text/markdown | GangDan Contributors | null | null | null | GPL-3.0-or-later | ollama, chromadb, rag, offline, assistant, flask | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Flask",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"flask>=3.0",
"flask-cors>=4.0",
"requests>=2.31",
"chromadb>=1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/cycleuser/GangDan",
"Repository, https://github.com/cycleuser/GangDan",
"Issues, https://github.com/cycleuser/GangDan/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T09:25:36.592640 | gangdan-1.0.2.tar.gz | 77,448 | 13/08/e294d4245ef8c2bdfcf5526fb14d611dd91cc25bd754f9c5096c21d9e548/gangdan-1.0.2.tar.gz | source | sdist | null | false | 47e2b960b6ee901ca4b8267edf98c3f0 | 0232af844c45dff8dd825fd852c8bb549a77b19b4220200181f5c08ed2134fb8 | 1308e294d4245ef8c2bdfcf5526fb14d611dd91cc25bd754f9c5096c21d9e548 | null | [] | 227 |
2.4 | neuromemory | 0.3.0 | Memory management framework for AI agents | # NeuroMemory
**A memory framework for AI agents**
Provides memory management capabilities for AI agent developers. Use it directly inside your Python program; no server deployment required.
---
## Installation
### Method 1: Install from PyPI (Recommended)
```bash
# Base install (core features)
pip install neuromemory
# Or install all optional dependencies (recommended)
pip install neuromemory[all]
# Install extras as needed
pip install neuromemory[s3]    # S3/MinIO file storage
pip install neuromemory[pdf]   # PDF file processing
pip install neuromemory[docx]  # Word document processing
```
**Dependencies installed automatically**: core dependencies such as SQLAlchemy, asyncpg, pgvector, and httpx are pulled in automatically.
### Method 2: Install from Source (Development)
```bash
git clone https://github.com/zhuqingxun/NeuroMemory
cd NeuroMemory
pip install -e ".[dev]"  # includes testing tools
```
---
## External Dependencies
NeuroMemory requires the following external services (**not included in the pip package**):
### 1. PostgreSQL 16+ with pgvector (required)
```bash
# Use the Docker Compose file shipped with the project
docker compose -f docker-compose.yml up -d db
# Or use the official image
docker run -d -p 5432:5432 \
    -e POSTGRES_USER=neuromemory \
    -e POSTGRES_PASSWORD=neuromemory \
    -e POSTGRES_DB=neuromemory \
    ankane/pgvector:pg16
```
### 2. Embedding Provider (required; choose one of three)
- **Local model** (no API key required): `pip install sentence-transformers`, uses a local transformer model
- **SiliconFlow**: [siliconflow.cn](https://siliconflow.cn/), requires an API key
- **OpenAI**: [platform.openai.com](https://platform.openai.com/), requires an API key
### 3. LLM API Key (for automatic memory extraction; optional)
- [OpenAI](https://platform.openai.com/) or [DeepSeek](https://platform.deepseek.com/)
- Without an LLM you can still add memories manually via `add_memory()` and retrieve them with `recall()`/`search()`
### 4. MinIO/S3 (optional; only for file storage)
```bash
docker compose -f docker-compose.yml up -d minio
```
---
## Quick Start
```python
import asyncio
from neuromemory import NeuroMemory, SiliconFlowEmbedding, OpenAILLM

async def main():
    async with NeuroMemory(
        database_url="postgresql+asyncpg://neuromemory:neuromemory@localhost:5432/neuromemory",
        embedding=SiliconFlowEmbedding(api_key="your-key"),
        llm=OpenAILLM(api_key="your-openai-key"),  # used for automatic memory extraction
        auto_extract=True,  # on by default; extracts memories in real time, like mem0
    ) as nm:
        # 1. Store conversation messages → memories extracted automatically (facts/episodes/relations)
        await nm.conversations.add_message(
            user_id="alice", role="user",
            content="I work at ABC Company as a software engineer"
        )
        # → extracted in the background: fact: "works at ABC Company", relation: (alice)-[works_at]->(ABC Company)
        # 2. Three-factor retrieval (relevance × recency × importance)
        result = await nm.recall(user_id="alice", query="Where does Alice work?")
        for r in result["merged"]:
            print(f"[{r['score']:.2f}] {r['content']}")
        # 3. Generate insights and an emotion profile (optional; call periodically)
        insights = await nm.reflect(user_id="alice")
        print(f"Generated {insights['insights_generated']} insights")

asyncio.run(main())
```
### Core Workflow
NeuroMemory usage revolves around three operations:
**Inserting memories** (automatic mode, default):
- Conversation-driven: `add_message()` stores the conversation **and automatically extracts memories** (recommended; like mem0)
- Direct insert: `add_memory(user_id, content, memory_type)` (type specified manually; no LLM required)
**Recalling memories (recall)**:
- `await nm.recall(user_id, query)` weighs relevance, recency, and importance to find the best-matching memories
- Use in conversation: lets the agent "remember" relevant history when responding to the user
**Generating insights (reflect)** (optional; call periodically):
- `await nm.reflect(user_id)` performs high-level memory analysis:
  1. **Distill insights**: generate high-level understanding (behavior patterns, period summaries) from already-extracted memories
  2. **Update the profile**: consolidate emotion data and refresh the user's emotion profile
- Elevates memories from "facts" to "insights"
> **Key change** (v0.2.0): `add_message()` now extracts memories automatically by default (`auto_extract=True`); there is no need to call `extract_memories()` or `reflect()` manually. `reflect()` focuses on generating insights and the emotion profile and no longer extracts basic memories.
**How the pieces fit together**:
```
conversation in progress → store message (add_message) → memories extracted automatically
        ↓
agent needs context → recall memories (recall)
        ↓
periodic analysis → generate insights (reflect) → insights + emotion profile
```
**Configuration options**:
- **Automatic mode** (default, recommended): `auto_extract=True`; every `add_message` extracts memories
- **Manual mode**: `auto_extract=False`; call `extract_memories()` yourself
- **Strategy mode**: `auto_extract=False` + `ExtractionStrategy(message_interval=10)`; triggers every 10 messages
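The message-interval trigger can be sketched as a simple counter; `IntervalTrigger` is an illustrative stand-in, not the library's implementation:

```python
class IntervalTrigger:
    """Counter-based trigger, an illustrative stand-in for how
    message_interval could behave (not the library's internals)."""

    def __init__(self, interval, on_trigger):
        self.interval = interval
        self.count = 0
        self.on_trigger = on_trigger

    def add_message(self, message):
        self.count += 1
        # interval == 0 disables automatic triggering
        if self.interval and self.count % self.interval == 0:
            self.on_trigger()

fired = []
trigger = IntervalTrigger(interval=3, on_trigger=lambda: fired.append(True))
for i in range(7):
    trigger.add_message(f"msg {i}")
# the callback fires after messages 3 and 6, i.e. twice
```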
---
## Core Features
### Memory Types
NeuroMemory provides 7 memory types, each with its own storage and retrieval path:
| Memory Type | Storage Method | Backing Store | Retrieval | Example |
|---------|---------|---------|---------|------|
| **Fact** | Embedding + Graph | pgvector + Apache AGE | `nm.recall(user_id, query)` | "works at Google" |
| **Episode** | Embedding | pgvector | `nm.recall(user_id, query)` | "was nervous at yesterday's interview" |
| **Relation** | Graph Store | Apache AGE | `nm.graph.get_neighbors(user_id, type, id)` | `(user)-[works_at]->(Google)` |
| **Insight** | Embedding | pgvector | `nm.search(user_id, query, memory_type="insight")` | "the user tends to work at night" |
| **Emotion profile** | Table | PostgreSQL | updated automatically by `reflect()` | "prone to anxiety; excited about tech" |
| **Preference** | KV (Profile) | PostgreSQL | `nm.kv.get(user_id, "profile", "preferences")` | `["likes coffee", "prefers dark mode"]` |
| **General** | Embedding | pgvector | `nm.search(user_id, query)` | content added manually via `add_memory()` |
### Three-Factor Hybrid Retrieval
More than a thin wrapper over a vector database: `recall()` scores results on three factors and fuses in graph traversal:
```python
Score      = rrf_score × recency × importance
rrf_score  = RRF(vector_rank, bm25_rank)                  # hybrid vector + BM25 keyword retrieval (RRF fusion)
recency    = e^(-t / (decay_rate × (1 + arousal × 0.5)))  # recency; high emotional arousal slows decay
importance = metadata.importance / 10                     # LLM-assessed importance (0.1-1.0)
```
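As a rough sketch of how the three factors might combine (assuming the exponent divides by the decay constant, so higher arousal slows decay; `k=60` is the conventional RRF constant, not necessarily the library's):

```python
import math

def rrf(vector_rank, bm25_rank, k=60):
    # Reciprocal Rank Fusion of two 1-based ranks
    return 1.0 / (k + vector_rank) + 1.0 / (k + bm25_rank)

def recall_score(vector_rank, bm25_rank, age_days, importance, arousal=0.0, decay_rate=30.0):
    relevance = rrf(vector_rank, bm25_rank)
    # high-arousal memories get a larger effective decay constant, so they fade more slowly
    recency = math.exp(-age_days / (decay_rate * (1.0 + 0.5 * arousal)))
    return relevance * recency * (importance / 10.0)

old = recall_score(vector_rank=1, bm25_rank=2, age_days=365, importance=6)
new = recall_score(vector_rank=2, bm25_rank=1, age_days=7, importance=9)
# the fresh, important memory outranks the stale one despite a worse vector rank
```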
| Dimension | Pure Vector Retrieval | Three-Factor Retrieval |
|---------|-----------|-----------|
| **Time awareness** | ❌ a year-old memory weighs the same as yesterday's | ✅ exponential decay (Ebbinghaus forgetting curve) |
| **Emotional impact** | ❌ ignores emotional intensity | ✅ high-arousal memories decay 50% more slowly |
| **Importance** | ❌ trivia and major events treated alike | ✅ important events rank higher |
**Worked example**: the user asks "Where do I work?":
| Memory Content | Age | Pure Vector | Three-Factor | Should It Return? |
|---------|------|--------|--------|---------|
| "I work at Google" | 1 year ago | 0.95 | 0.008 | ❌ outdated |
| "Left Google last week" | 7 days ago | 0.85 | 0.67 | ✅ recent and important |
**Graph entity retrieval**: structured relations are looked up in the knowledge graph (`(alice)-[works_at]->(Google)`) and merged with the vector results after deduplication. `recall()` returns `vector_results`, `graph_results`, and the combined `merged` list.
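The shape of that merge step can be illustrated with plain dicts; the field names and the preference for the scored vector hit are assumptions, not the library's exact behavior:

```python
def merge_results(vector_results, graph_results):
    """Deduplicate by content, keeping the scored vector hit when both
    stores return the same memory. Field names are illustrative."""
    seen, merged = set(), []
    for r in vector_results + graph_results:  # vector hits first
        if r["content"] not in seen:
            seen.add(r["content"])
            merged.append(r)
    # unscored graph entries sort after scored vector entries
    merged.sort(key=lambda r: r.get("score", 0.0), reverse=True)
    return merged

vector_hits = [{"content": "works at Google", "score": 0.83}]
graph_hits = [{"content": "works at Google"},
              {"content": "(alice)-[knows]->(bob)"}]
merged = merge_results(vector_hits, graph_hits)
# two entries: the scored vector hit plus the unique graph relation
```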
### Three-Layer Emotion Architecture
The only open-source memory framework to implement a three-layer emotion design:
| Layer | Type | Storage | Timescale | Example |
|------|------|---------|--------|------|
| **Micro** | Per-event emotion tags | memory metadata (valence/arousal/label) | momentary | "nervous when talking about the interview (valence=-0.6)" |
| **Meso** | Recent emotional state | emotion_profiles.latest_state | 1-2 weeks | "under work stress lately; feeling low" |
| **Macro** | Long-term emotion profile | emotion_profiles.* | stable long-term | "prone to anxiety, but excited about technical topics" |
- Micro: captures momentary emotions to enrich memory detail
- Meso: tracks recent state, so the agent can ask "how have you been lately?"
- Macro: understands long-term traits to build a genuine user profile
> **Privacy compliance**: NeuroMemory does not automatically infer user personality (Big Five) or values; EU AI Act Article 5 prohibits such automated profiling. Personality and values should be set by the developer through the system prompt that defines the agent's persona.
### LLM-Driven Memory Extraction and Reflection
- **Extraction** (`extract_memories`): automatically identifies facts, episodes, and relations in conversations, attaching emotion tags (valence/arousal/label) and importance scores (1-10); preferences are stored in the user profile
- **Reflection** (`reflect`): periodically distills high-level insights (behavior patterns, period summaries) from recent memories and updates the emotion profile
- **Access tracking**: automatically records access_count and last_accessed_at, in line with the ACT-R memory model
Theoretical grounding: the Reflection mechanism of Generative Agents (Park 2023), LeDoux's emotional tagging, the Ebbinghaus forgetting curve, and the ACT-R memory model.
### Comparison with Similar Frameworks
| Feature | NeuroMemory | Mem0 | LangChain Memory |
|------|------------|------|-----------------|
| Three-layer emotion architecture | ✅ micro + meso + macro | ❌ | ❌ |
| Emotion tagging | ✅ valence/arousal/label | ❌ | ❌ |
| Importance scoring + three-factor retrieval | ✅ | 🔶 scoring only | ❌ |
| Reflection mechanism | ✅ insights + profile updates | ❌ | ❌ |
| Knowledge graph | ✅ Apache AGE (Cypher) | 🔶 simple graph | 🔶 LangGraph |
| Multimodal files | ✅ PDF/DOCX extraction | ✅ | ❌ |
| Embeds as a library | ✅ Python library | ✅ | ✅ |
| Privacy compliance | ✅ no personality inference | ❓ | ❓ |
---
## API Usage Guide
> See **[docs/API.md](docs/API.md)** for the full API reference, covering every method's signature, parameters, return values, and examples.
NeuroMemory has three pairs of easily confused APIs; here is a quick comparison:
### ✏️ Write APIs: add_message() vs add_memory()
| API | Purpose | Writes To | When to Use |
|-----|------|---------|---------|
| **add_message()** ⭐ | Store conversation messages | conversation history → memories extracted later via `reflect()` | **everyday use (recommended)** |
| **add_memory()** | Write a memory directly | memory table (embedding), searchable immediately | manual imports, bulk initialization, known structured information |
```python
# add_message(): conversation-driven (recommended); store the conversation, then extract memories with reflect()
await nm.conversations.add_message(user_id="alice", role="user",
                                   content="I work at Google doing backend development")
await nm.reflect(user_id="alice")
# → extracted automatically: fact: "works at Google" + emotion tags + importance score + insights

# add_memory(): direct write (everything specified manually)
await nm.add_memory(user_id="alice", content="works at Google",
                    memory_type="fact", metadata={"importance": 8})
```
### 📚 Retrieval APIs: recall() vs search()
| API | Purpose | Retrieval Method | When to Use |
|-----|------|---------|---------|
| **recall()** ⭐ | Smart hybrid retrieval | three-factor vector scoring (relevance × recency × importance) + graph entity lookup + deduplication | **everyday use (recommended)** |
| **search()** | Pure semantic retrieval | embedding cosine similarity only | when you want semantic similarity alone, ignoring time and importance |
```python
# recall(): weighs all three factors; recent, important memories rank first
result = await nm.recall(user_id="alice", query="work")
# → "interviewed at Google yesterday" (recent + important) outranks "interned at Microsoft last year" (long ago)

# search(): semantics only; may surface very old memories
results = await nm.search(user_id="alice", query="work")
# → both "interned at Microsoft last year" and "interviewed at Google yesterday" may return, ordered by similarity alone
```
### 🧠 Memory Management APIs: reflect() vs extract_memories()
| API | Purpose | What It Does | When to Use |
|-----|------|---------|---------|
| **reflect()** ⭐ | One-stop memory processing | extracts facts/episodes/relations + generates insights + updates the profile | **recommended** for manual memory processing |
| **extract_memories()** | Extract new memories only | extracts facts/episodes/relations from conversations (no insights) | low-level method, invoked automatically by `ExtractionStrategy` |
```python
# reflect(): recommended; one-stop processing (extraction + insights + profile)
await nm.conversations.add_message(user_id="alice", role="user", content="I work at Google")
result = await nm.reflect(user_id="alice")
# → extracted: fact: "works at Google", relation: (alice)-[works_at]->(Google)
# → insight: "the user has been job-hunting recently, interviewing at Google and Microsoft"
# → profile: emotional state updated

# extract_memories(): low-level method (usually no need to call it directly);
# invoked automatically by ExtractionStrategy for frequent, lightweight incremental extraction
```
### Strategy Configuration (ExtractionStrategy)
`ExtractionStrategy` controls automatic memory management; once configured, `add_message()` triggers extraction whenever a condition is met:
```python
from neuromemory import ExtractionStrategy
nm = NeuroMemory(
    ...,
    extraction=ExtractionStrategy(
        message_interval=10,     # extract memories every 10 messages (0 = disabled)
        idle_timeout=600,        # extract after 10 minutes of inactivity (0 = disabled)
        reflection_interval=50,  # run reflect() after every 50 extractions (0 = disabled)
        on_session_close=True,   # extract when a session closes
        on_shutdown=True,        # extract on program shutdown
    )
)
```
**Recommended configurations**:
- **Real-time apps** (chatbots): `message_interval=10, reflection_interval=50`
- **Batch processing** (daily summaries): `message_interval=0, on_session_close=True`, call `reflect()` manually
- **Development and debugging**: set everything to 0 and trigger extraction and reflection manually
---
## Complete Agent Example
> A runnable end-to-end example lives in **[example/](example/)**, with terminal interaction, command queries, and automatic memory extraction. No embedding API key required.
Below is the core implementation of a chat agent with memory:
```python
from neuromemory import NeuroMemory, SiliconFlowEmbedding, OpenAILLM, ExtractionStrategy
from openai import AsyncOpenAI

class MemoryAgent:
    def __init__(self, nm: NeuroMemory, openai_client: AsyncOpenAI):
        self.nm = nm
        self.llm = openai_client

    async def chat(self, user_id: str, user_input: str) -> str:
        """Handle user input and return the agent's reply."""
        # === Step 1: store the user message (memories extracted automatically) ===
        await self.nm.conversations.add_message(
            user_id=user_id,
            role="user",
            content=user_input
        )
        # → with auto_extract enabled, memories have already been extracted

        # === Step 2: recall relevant memories ===
        recall_result = await self.nm.recall(user_id=user_id, query=user_input, limit=5)
        memories = recall_result["merged"]
        # fetch user preferences (from the profile)
        lang_kv = await self.nm.kv.get(user_id, "profile", "language")
        language = lang_kv.value if lang_kv else "zh-CN"
        # fetch recent insights
        insights = await self.nm.search(user_id, user_input, memory_type="insight", limit=3)

        # === Step 3: build a prompt that includes the memories ===
        memory_context = "\n".join([
            f"- {m['content']} (importance: {m.get('metadata', {}).get('importance', 5)})"
            for m in memories[:3]
        ]) if memories else "No relevant memories yet"
        insight_context = "\n".join([
            f"- {i['content']}" for i in insights
        ]) if insights else "No deep understanding yet"
        system_prompt = f"""You are an AI assistant with memory. Reply in the {language} language.
**Specific memories about the user**:
{memory_context}
**Deep understanding of the user (insights)**:
{insight_context}
Drawing on these memories and this understanding, respond to the user naturally, in a friendly tone."""

        # === Step 4: call the LLM to generate a reply ===
        response = await self.llm.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_input}
            ]
        )
        assistant_reply = response.choices[0].message.content

        # === Step 5: store the assistant reply ===
        await self.nm.conversations.add_message(
            user_id=user_id,
            role="assistant",
            content=assistant_reply
        )
        return assistant_reply

# Usage example
async def main():
    async with NeuroMemory(
        database_url="postgresql+asyncpg://...",
        embedding=SiliconFlowEmbedding(api_key="..."),
        llm=OpenAILLM(api_key="..."),
        auto_extract=True,  # on by default; every add_message extracts memories (recommended)
    ) as nm:
        agent = MemoryAgent(nm, AsyncOpenAI(api_key="..."))
        # First turn
        reply1 = await agent.chat("alice", "I work at Google doing backend development, and I've been under a lot of stress lately")
        print(f"Agent: {reply1}")
        # → add_message extracts memories automatically:
        #   fact: "works at Google", episodic: "under a lot of stress lately"
        #   relation: (alice)-[works_at]->(Google)

        # Second turn (days later); the agent "remembers" the earlier conversation
        reply2 = await agent.chat("alice", "Any suggestions for relieving stress?")
        print(f"Agent: {reply2}")

        # Generate insights periodically (optional, e.g. nightly or every 100 messages)
        result = await nm.reflect(user_id="alice")
        print(f"Generated {result['insights_generated']} insights")
```
**Key points**:
1. **Real-time extraction**: `auto_extract=True` makes every `add_message` extract memories immediately (like mem0)
2. **Recall**: before each turn, `recall()` finds the relevant memories
3. **Prompt injection**: the memories are injected as context into the LLM's system prompt
4. **Insight generation**: `reflect()` is called periodically to distill high-level insights and update the emotion profile
5. **Continuous learning**: as conversations accumulate, the agent's understanding of the user deepens
---
## Architecture
### Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│                  NeuroMemory Architecture                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────────────────────────────────────────────┐   │
│  │         Application Layer (Your Agent Code)          │   │
│  │    from neuromemory import NeuroMemory               │   │
│  │    nm = NeuroMemory(database_url=..., embedding=...) │   │
│  └──────────────────────┬───────────────────────────────┘   │
│                         │                                   │
│  ┌──────────────────────▼───────────────────────────────┐   │
│  │                    Facade Layer                      │   │
│  │    nm.kv   nm.conversations   nm.files   nm.graph    │   │
│  └──────────────────────┬───────────────────────────────┘   │
│                         │                                   │
│  ┌──────────────────────▼───────────────────────────────┐   │
│  │                    Service Layer                     │   │
│  │  SearchService │ KVService │ ConversationService     │   │
│  │  FileService │ GraphService │ MemoryExtractionService│   │
│  └──────────────────────┬───────────────────────────────┘   │
│                         │                                   │
│  ┌──────────────────────▼───────────────────────────────┐   │
│  │              Provider Layer (pluggable)              │   │
│  │  EmbeddingProvider │ LLMProvider │ ObjectStorage     │   │
│  └──────────────────────┬───────────────────────────────┘   │
│                         │                                   │
│  ┌──────────────────────▼───────────────────────────────┐   │
│  │                    Storage Layer                     │   │
│  │  PostgreSQL + pgvector + AGE │ MinIO/S3 (optional)   │   │
│  └──────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
```
### Tech Stack
| Component | Technology | Notes |
|------|------|------|
| **Framework** | Python 3.12+ async | embeds directly in your agent program |
| **Database** | PostgreSQL 16 + pgvector | vector retrieval + structured storage |
| **Graph database** | Apache AGE | Cypher query language |
| **ORM** | SQLAlchemy 2.0 (async) | asyncpg driver |
| **Embedding** | pluggable provider | SiliconFlow / OpenAI |
| **LLM** | pluggable provider | OpenAI / DeepSeek |
| **File storage** | S3-compatible | MinIO / AWS S3 / Huawei Cloud OBS |
### Pluggable Providers
```
EmbeddingProvider (ABC)
├── SiliconFlowEmbedding   # BAAI/bge-m3, 1024 dimensions
└── OpenAIEmbedding        # text-embedding-3-small, 1536 dimensions

LLMProvider (ABC)
└── OpenAILLM              # compatible with OpenAI / DeepSeek

ObjectStorage (ABC)
└── S3Storage              # compatible with MinIO / AWS S3 / Huawei Cloud OBS
```
---
## Documentation
| Document | Description |
|------|------|
| **[API Reference](docs/API.md)** | complete Python API documentation (recall, search, extract_memories, etc.) |
| **[Getting Started](docs/GETTING_STARTED.md)** | 10-minute quick-start guide |
| **[Architecture](docs/ARCHITECTURE.md)** | system architecture, provider pattern, data model |
| **[Usage Guide](docs/SDK_GUIDE.md)** | API usage and code examples |
| **[Why No Web UI](docs/WHY_NO_WEB_UI.md)** | design rationale and alternatives |
---
## Roadmap
### Phase 1 (complete)
- [x] Unified storage on PostgreSQL + pgvector
- [x] Vector semantic search
- [x] Time-range queries and timeline aggregation
- [x] KV storage
- [x] Conversation management
- [x] File upload and text extraction
- [x] Apache AGE graph database
- [x] LLM-based memory classification and extraction
- [x] Pluggable providers (Embedding/LLM/Storage)
### Phase 2 (complete)
- [x] Emotion tagging (valence / arousal / label)
- [x] Importance scoring (1-10)
- [x] Three-factor retrieval (relevance × recency × importance)
- [x] Access tracking (access_count / last_accessed_at)
- [x] Reflection mechanism (generating high-level insights from memories)
- [x] Background task system (automatic triggering via ExtractionStrategy)
### Phase 3 (planned)
- [ ] Benchmark: LoCoMo (ACL 2024, long-conversation memory benchmark, 10 multi-turn dialogues + 1,986 QA pairs)
- [ ] Benchmark: LongMemEval (ICLR 2025, very-long-memory benchmark, 500 questions, 115k-1.5M tokens)
- [ ] Natural forgetting (active memory cleanup/archival)
- [ ] Multimodal embeddings (images, audio)
- [ ] Distributed deployment support
---
## Contributing
Contributions of code, documentation, and suggestions are welcome!
1. Fork the project
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
---
## License
MIT License - see the [LICENSE](LICENSE) file for details
---
**NeuroMemory** - Give your AI a memory
| text/markdown | null | Jacky <jacky@example.com> | null | null | MIT | ai, memory, agent, llm, rag, vector-database | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"sqlalchemy[asyncio]>=2.0.0",
"asyncpg>=0.30.0",
"pgvector>=0.3.0",
"httpx>=0.27.0",
"boto3>=1.34.0; extra == \"s3\"",
"pypdf>=4.0.0; extra == \"pdf\"",
"python-docx>=1.1.0; extra == \"docx\"",
"boto3>=1.34.0; extra == \"all\"",
"pypdf>=4.0.0; extra == \"all\"",
"python-docx>=1.1.0; extra == \"all\"",
"sentence-transformers>=3.0.0; extra == \"local-embedding\"",
"tqdm>=4.66.0; extra == \"eval\"",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"boto3>=1.34.0; extra == \"dev\"",
"pypdf>=4.0.0; extra == \"dev\"",
"python-docx>=1.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/zhuqingxun/NeuroMemory",
"Documentation, https://github.com/zhuqingxun/NeuroMemory",
"Repository, https://github.com/zhuqingxun/NeuroMemory",
"Issues, https://github.com/zhuqingxun/NeuroMemory/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T09:25:10.552758 | neuromemory-0.3.0.tar.gz | 69,831 | 24/00/b27e411b544c002f2593faa0dfe992affc1a787762d25079e996325af2ab/neuromemory-0.3.0.tar.gz | source | sdist | null | false | 51ebce1441dbacbe87e750786d2bf712 | ade01305d2091eef34fd705af991b9c7e6cf9f1aa1af715e55ae75fcb95b66d8 | 2400b27e411b544c002f2593faa0dfe992affc1a787762d25079e996325af2ab | null | [
"LICENSE"
] | 227 |
2.4 | unhwp | 0.2.1 | High-performance HWP/HWPX document extraction library | # unhwp
High-performance Python library for extracting HWP/HWPX Korean word processor documents to Markdown.
## Installation
```bash
pip install unhwp
```
## Quick Start
```python
import unhwp
# Simple conversion
markdown = unhwp.to_markdown("document.hwp")
print(markdown)
# Extract plain text
text = unhwp.extract_text("document.hwp")
# Full parsing with images
with unhwp.parse("document.hwp") as result:
    print(result.markdown)
    print(f"Sections: {result.section_count}")
    print(f"Paragraphs: {result.paragraph_count}")
    # Save images
    for img in result.images:
        img.save(f"output/{img.name}")
```
## Features
- **Fast**: Native Rust library with zero-copy parsing
- **Complete**: Extracts text, tables, images, and document structure
- **Clean Output**: Optional cleanup pipeline for polished Markdown
- **Format Support**: HWP 5.0, HWPX, and HWP 3.x (legacy)
## API Reference
### Functions
#### `to_markdown(path) -> str`
Convert an HWP/HWPX document to Markdown.
```python
markdown = unhwp.to_markdown("document.hwp")
```
#### `to_markdown_with_cleanup(path, cleanup_options=None) -> str`
Convert with optional cleanup.
```python
markdown = unhwp.to_markdown_with_cleanup(
    "document.hwp",
    cleanup_options=unhwp.CleanupOptions.aggressive()
)
```
#### `extract_text(path) -> str`
Extract plain text content.
```python
text = unhwp.extract_text("document.hwp")
```
#### `parse(path, render_options=None) -> ParseResult`
Parse a document with full access to content and images.
```python
with unhwp.parse("document.hwp") as result:
    print(result.markdown)
    print(result.text)
    for img in result.images:
        print(img.name, len(img.data))
```
#### `detect_format(path) -> int`
Detect the document format.
```python
fmt = unhwp.detect_format("document.hwp")
if fmt == unhwp.FORMAT_HWP5:
    print("HWP 5.0 format")
elif fmt == unhwp.FORMAT_HWPX:
    print("HWPX format")
```
### Classes
#### `RenderOptions`
Options for Markdown rendering.
```python
opts = unhwp.RenderOptions(
    include_frontmatter=True,
    image_path_prefix="images/",
    preserve_line_breaks=False,
)
```
#### `CleanupOptions`
Options for output cleanup.
```python
# Presets
opts = unhwp.CleanupOptions.minimal()
opts = unhwp.CleanupOptions.default()
opts = unhwp.CleanupOptions.aggressive()
opts = unhwp.CleanupOptions.disabled()
# Custom
opts = unhwp.CleanupOptions(
    enabled=True,
    preset=1,
    detect_mojibake=True,
)
```
### Constants
- `FORMAT_UNKNOWN` - Unknown format
- `FORMAT_HWP5` - HWP 5.0 binary format
- `FORMAT_HWPX` - HWPX XML format
- `FORMAT_HWP3` - HWP 3.x legacy format
## Platform Support
- Windows (x64)
- Linux (x64)
- macOS (x64, ARM64)
## License
MIT License - see [LICENSE](../../LICENSE) for details.
## Links
- [GitHub Repository](https://github.com/iyulab/unhwp)
- [Rust Crate](https://crates.io/crates/unhwp)
- [NuGet Package](https://www.nuget.org/packages/Unhwp)
| text/markdown | null | iyulab <contact@iyulab.com> | null | null | MIT | hwp, hwpx, korean, document, markdown, parser | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Rust",
"Topic :: Text Processing",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/iyulab/unhwp",
"Repository, https://github.com/iyulab/unhwp",
"Documentation, https://github.com/iyulab/unhwp#readme",
"Issues, https://github.com/iyulab/unhwp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:25:08.522462 | unhwp-0.2.1.tar.gz | 7,726,050 | ff/46/839287a68111fa2d65e49cfdddb451460562e0c04266525ac9e61bd8a5f9/unhwp-0.2.1.tar.gz | source | sdist | null | false | d7b6eaede26c24b66b6fb214af4e84d6 | 6019958ec8ee02458ea49c4732163c34b5de42542b7b7b5b507d55c209181e01 | ff46839287a68111fa2d65e49cfdddb451460562e0c04266525ac9e61bd8a5f9 | null | [] | 241 |
2.4 | cacherator | 1.2.1 | Persistent JSON caching for Python with async support - cache function results and object state effortlessly. | # Cacherator
**Persistent JSON caching for Python with async support** - Cache function results and object state effortlessly.
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Overview
Cacherator is a Python library that provides persistent JSON-based caching for class state and function results. It enables developers to cache expensive operations with minimal configuration, supporting both synchronous and asynchronous functions.
### Key Features
- **Zero-configuration caching** - Simple inheritance and decorator pattern
- **Async/await support** - Native support for asynchronous functions
- **Persistent storage** - Cache survives program restarts
- **TTL (Time-To-Live)** - Automatic cache expiration
- **Selective caching** - Fine-grained control over what gets cached
- **Cache management** - Built-in methods for inspection and clearing
- **Flexible logging** - Global and per-instance control
- **DynamoDB backend** - Optional L2 cache for cross-machine sharing
## Installation
```bash
pip install cacherator
```
### Optional: DynamoDB Support
For cross-machine cache sharing via DynamoDB:
```bash
pip install boto3
```
## Quick Start
### Basic Function Caching
```python
from cacherator import JSONCache, Cached
import time
class Calculator(JSONCache):
    def __init__(self):
        super().__init__(data_id="calc")

    @Cached()
    def expensive_calculation(self, x, y):
        time.sleep(2)  # Simulate expensive operation
        return x ** y

calc = Calculator()
result = calc.expensive_calculation(2, 10)  # Takes 2 seconds
result = calc.expensive_calculation(2, 10)  # Instant!
```
### Async Function Caching
```python
class APIClient(JSONCache):
    @Cached(ttl=1)  # Cache for 1 day
    async def fetch_user(self, user_id):
        # Expensive API call
        response = await api.get(f"/users/{user_id}")
        return response.json()

client = APIClient()
user = await client.fetch_user(123)  # API call
user = await client.fetch_user(123)  # Cached!
```
### State Persistence
```python
class GameState(JSONCache):
    def __init__(self, game_id):
        super().__init__(data_id=f"game_{game_id}")
        if not hasattr(self, "score"):
            self.score = 0
            self.level = 1

    def add_points(self, points):
        self.score += points
        self.json_cache_save()

# Session 1
game = GameState("player1")
game.add_points(100)

# Session 2 (after restart)
game = GameState("player1")
print(game.score)  # 100 - persisted!
```
## Advanced Usage
### DynamoDB Backend (Cross-Machine Cache Sharing)
Enable optional DynamoDB L2 cache for sharing cache across multiple machines:
```python
from cacherator import JSONCache, Cached
class WebScraper(JSONCache):
    def __init__(self):
        super().__init__(dynamodb_table='my-cache-table')

    @Cached(ttl=7)
    def scrape_expensive_data(self, url):
        # Expensive operation
        return fetch_data(url)

# On machine 1 (laptop)
scraper = WebScraper()
data = scraper.scrape_expensive_data("https://example.com")  # Scrapes and caches

# On machine 2 (EC2 instance) - same code
scraper = WebScraper()
data = scraper.scrape_expensive_data("https://example.com")  # Uses cached data!
```
**How it works:**
- **L1 (local JSON)**: Checked first for instant access
- **L2 (DynamoDB)**: Checked on L1 miss, then written to L1
- **Writes**: Saved to both L1 and L2 simultaneously
- **No table specified**: Works as local-only cache
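Using plain dicts to stand in for the local JSON file and the DynamoDB table (an illustrative sketch, not the library's internals), the documented lookup order can be expressed as:

```python
def read_through(key, l1, l2=None):
    """L1-first lookup with L2 backfill on a local miss."""
    if key in l1:
        return l1[key]
    if l2 is not None and key in l2:
        l1[key] = l2[key]  # backfill L1 so the next read stays local
        return l1[key]
    return None  # miss in both layers

def write_through(key, value, l1, l2=None):
    """Writes land in both layers at once."""
    l1[key] = value
    if l2 is not None:
        l2[key] = value

local, shared = {}, {}
write_through("scrape:example.com", "<html>...", local, shared)
fresh_local = {}  # a second machine starts with an empty L1
# its first read falls through to L2 and backfills its own L1
```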
**DynamoDB table:**
- Auto-created if missing (requires IAM permissions)
- Partition key: `cache_id` (String)
- TTL enabled for automatic expiry
- Pay-per-request billing mode
**AWS credentials** via standard boto3 chain:
- Environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`
- IAM role (recommended for EC2/Lambda)
- AWS credentials file (`~/.aws/credentials`)
### Custom TTL Configuration
```python
class WeatherService(JSONCache):
    @Cached(ttl=0.25)  # 6 hours (0.25 days)
    def get_forecast(self, city):
        return fetch_weather(city)

    @Cached(ttl=30)  # 30 days
    def get_historical(self, city, year):
        return fetch_historical(city, year)
```
### Excluding Variables from Cache
```python
class DataProcessor(JSONCache):
    def __init__(self):
        self._excluded_cache_vars = ["temp_data", "api_key"]
        super().__init__()
        self.results = {}
        self.temp_data = []      # Won't be cached
        self.api_key = "secret"  # Won't be cached
```
### Cache Management
```python
processor = DataProcessor()
# Get cache statistics
stats = processor.json_cache_stats()
print(stats)
# {'total_entries': 5, 'functions': {'process': 3, 'analyze': 2}}
# Clear specific function cache
processor.json_cache_clear("process")
# Clear all cache
processor.json_cache_clear()
```
### Logging Control
```python
from cacherator import JSONCache
# Disable logging globally
JSONCache.set_logging(False)
# Enable logging globally (default)
JSONCache.set_logging(True)
# Per-instance control
processor = DataProcessor(logging=False) # Silent mode
```
**When logging is enabled:**
- DynamoDB operations are logged (table creation, reads, writes)
- Local JSON operations are silent (fast, not interesting)
**When logging is disabled:**
- All operations are silent
## Configuration
### JSONCache Constructor
```python
JSONCache(
    data_id="unique_id",    # Unique identifier (default: class name)
    directory="cache",      # Cache directory (default: "data/cache")
    clear_cache=False,      # Clear existing cache on init
    ttl=999,                # Default TTL in days
    logging=True,           # Enable logging (True/False)
    dynamodb_table=None     # DynamoDB table name (optional)
)
```
### @Cached Decorator
```python
@Cached(
    ttl=7,              # Time-to-live in days (default: class ttl)
    clear_cache=False   # Clear cache for this function
)
```
## Use Cases
### API Client with Caching
```python
class GitHubClient(JSONCache):
    def __init__(self):
        super().__init__(data_id="github_client", ttl=1)

    @Cached(ttl=0.5)  # 12 hours
    async def get_user(self, username):
        async with aiohttp.ClientSession() as session:
            async with session.get(f"https://api.github.com/users/{username}") as resp:
                return await resp.json()

    @Cached(ttl=7)  # 1 week
    async def get_repos(self, username):
        async with aiohttp.ClientSession() as session:
            async with session.get(f"https://api.github.com/users/{username}/repos") as resp:
                return await resp.json()
```
### Database Query Caching
```python
class UserRepository(JSONCache):
    def __init__(self):
        super().__init__(data_id="user_repo", ttl=0.1)  # 2.4 hours

    @Cached()
    def get_user_by_id(self, user_id):
        return db.query("SELECT * FROM users WHERE id = ?", user_id)

    @Cached(ttl=1)
    def get_user_stats(self, user_id):
        return db.query("SELECT COUNT(*) FROM posts WHERE user_id = ?", user_id)
```
### Machine Learning Model Predictions
```python
class ModelPredictor(JSONCache):
    def __init__(self):
        super().__init__(data_id="ml_predictor")
        self.model = load_model()

    @Cached(ttl=30)
    def predict(self, features_hash, features):
        # Cache predictions by feature hash
        return self.model.predict(features)
```
## Best Practices
### Recommended Use Cases
- Expensive API calls and network requests
- Database queries with relatively static data
- Heavy computational operations
- Machine learning model predictions
- Data transformations and aggregations
### When to Use TTL
- Set short TTL (minutes to hours) for frequently changing data
- Set long TTL (days to weeks) for stable reference data
- Consider data freshness requirements for your application
### What Not to Cache
- Non-deterministic functions (random number generation, timestamps)
- Very fast operations (overhead exceeds benefit)
- Non-JSON-serializable objects without custom handling
- Real-time data without appropriate TTL configuration
## Performance
Cacherator introduces minimal overhead:
- **Cache hit**: ~0.1ms
- **Cache miss**: Function execution time + ~1ms
- **Disk I/O**: Non-blocking, asynchronous operations
### Performance Improvements
- API calls (100ms - 5s) reduced to ~0.1ms
- Database queries (10ms - 1s) reduced to ~0.1ms
- Heavy computations (1s+) reduced to ~0.1ms
## Compatibility
- **Python**: 3.7 and above
- **Async**: Full support for async/await syntax
- **Operating Systems**: Windows, macOS, Linux
- **Data Types**: All JSON-serializable types plus datetime objects
- **Optional Dependencies**: boto3 (for DynamoDB backend), dynamorator
## Changelog
### Version 1.2.0
- **Added**: Optional DynamoDB backend for cross-machine cache sharing via dynamorator
- **Added**: Two-layer cache architecture (L1: local JSON, L2: DynamoDB)
- **Added**: Constructor parameter `dynamodb_table` for enabling DynamoDB
- **Added**: Automatic DynamoDB table creation with TTL support
- **Changed**: DynamoDB backend now uses dynamorator package
- **Changed**: Simplified logging to boolean (True/False)
- **Removed**: Environment variable configuration (use constructor parameter)
- **Removed**: LogLevel enum (simplified to boolean)
## Troubleshooting
### Cache Not Persisting
```python
# Explicitly save cache
obj.json_cache_save()
# Check for serialization errors
obj._excluded_cache_vars = ["problematic_attr"]
```
### Cache Not Being Used
```python
# Verify TTL hasn't expired
obj = MyClass(ttl=30) # Increase TTL
# Ensure arguments are identical (type matters)
obj.func(1, 2) # Different from
obj.func(1.0, 2) # (int vs float)
```
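To see why `1` and `1.0` can land in different cache entries, consider a serialization-based key builder (a hypothetical sketch, not Cacherator's actual internals): the int and the float serialize differently, so they produce distinct keys.

```python
import json

def cache_key(func_name, args, kwargs):
    # Hypothetical key builder: JSON-serialize the call signature.
    # int 1 serializes as "1", float 1.0 as "1.0" -> distinct keys.
    return json.dumps({"fn": func_name, "args": args, "kw": kwargs},
                      sort_keys=True, default=str)

print(cache_key("func", [1, 2], {}) == cache_key("func", [1.0, 2], {}))  # False
```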
### Large Cache Files
```python
# Exclude large attributes
self._excluded_cache_vars = ["large_data"]
# Use separate cache instances
processor1 = DataProcessor(data_id="dataset1")
processor2 = DataProcessor(data_id="dataset2")
```
## Contributing
Contributions are welcome. Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Resources
- **GitHub Repository**: https://github.com/Redundando/cacherator
- **Issue Tracker**: https://github.com/Redundando/cacherator/issues
- **PyPI Package**: https://pypi.org/project/cacherator/
---
Developed by [Arved Klöhn](https://github.com/Redundando)
| text/markdown | null | Arved Klöhn <arved.kloehn@gmail.com> | null | null | null | cache, caching, json, persistent, async, decorator, memoization, storage, dynamodb, aws | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Libraries",
"Topic :: Utilities",
"Framework :: AsyncIO"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"python-slugify>=8.0.0",
"logorator>=1.0.0",
"dynamorator>=0.1.5",
"boto3>=1.26.0; extra == \"dynamodb\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"build>=0.10.0; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\"",
"numpy>=1.26.0; extra == \"dev\"",
"pandas>=2.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Source, https://github.com/Redundando/cacherator",
"Issues, https://github.com/Redundando/cacherator/issues",
"Documentation, https://github.com/Redundando/cacherator#readme"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:23:49.940859 | cacherator-1.2.1.tar.gz | 22,210 | 95/bf/90ba6c6ee63cc919c24357aa559f519ce89eb54dd74429024a36d927f56c/cacherator-1.2.1.tar.gz | source | sdist | null | false | 4a37eea537f49acd4c9d63f165a09862 | 3353db36a3923922c4ce4f6181955d80d3c48a88de911fa79ce0717b47079d5e | 95bf90ba6c6ee63cc919c24357aa559f519ce89eb54dd74429024a36d927f56c | MIT | [
"LICENSE"
] | 237 |
2.4 | room-env | 3.0.6 | The Room environment | # The Room environments (compatible with gymnasium)
[](https://zenodo.org/doi/10.5281/zenodo.10876436)
[](https://badge.fury.io/py/room-env)
At the moment, there are four versions of the Room environments.
## README for each version
- [RoomEnv-v0](./README-v0.md)
- [RoomEnv-v1](./README-v1.md)
- [RoomEnv-v2](./README-v2.md)
- [RoomEnv-v3](./README-v3.md)
## List of academic papers that use the Room environments
- ["A Machine With Human-Like Memory Systems"](https://arxiv.org/abs/2204.01611)
- ["A Machine with Short-Term, Episodic, and Semantic Memory Systems"](https://arxiv.org/abs/2212.02098)
- ["Temporal Knowledge-Graph Memory in a Partially Observable Environment"](https://arxiv.org/abs/2408.05861)
## pdoc documentation
Click on [this link](https://humemai.github.io/room-env) to see the HTML-rendered docstrings.
| text/markdown | Taewoon Kim | info@humem.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/humemai/room-env | null | >=3.8 | [] | [] | [] | [
"gymnasium<1,>=0.27.1",
"torch>=1.12.1",
"PyYAML>=6.0",
"networkx>=3.5",
"tqdm",
"matplotlib",
"Ipython",
"rdflib"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/humemai/room-env/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-21T09:23:29.369731 | room_env-3.0.6.tar.gz | 277,715 | ea/05/d487a3ade956d12af577ee2dc9fe480598550edde7ace99d9a6db1cad313/room_env-3.0.6.tar.gz | source | sdist | null | false | 63529db551281a06d8b68840f2550f1e | 90ad7d2038e8cf501b168f3bf3af5f96721257be376143d44c93df63a86d9f69 | ea05d487a3ade956d12af577ee2dc9fe480598550edde7ace99d9a6db1cad313 | null | [
"LICENSE"
] | 228 |
2.4 | uipath-agent-framework | 0.0.6 | Python SDK that enables developers to build and deploy Microsoft Agent Framework agents to the UiPath Cloud Platform | # UiPath Agent Framework Python SDK
[](https://pypi.org/project/uipath-agent-framework/)
[](https://pypi.org/project/uipath-agent-framework/)
[](https://pypi.org/project/uipath-agent-framework/)
A Python SDK that enables developers to build and deploy [Agent Framework](https://github.com/microsoft/agent-framework) agents to the UiPath Cloud Platform. It provides programmatic interaction with UiPath Cloud Platform services.
This package is an extension to the [UiPath Python SDK](https://github.com/UiPath/uipath-python) and implements the [UiPath Runtime Protocol](https://github.com/UiPath/uipath-runtime-python).
Check out these [sample projects](https://github.com/UiPath/uipath-integrations-python/tree/main/packages/uipath-agent-framework/samples) to see the SDK in action.
## Requirements
- Python 3.11 or higher
- UiPath Automation Cloud account
## Installation
```bash
pip install uipath-agent-framework
```
Or using `uv`:
```bash
uv add uipath-agent-framework
```
For Anthropic model support:
```bash
pip install 'uipath-agent-framework[anthropic]'
```
## Configuration
### Environment Variables
Create a `.env` file in your project root with the following variables:
```
UIPATH_URL=https://cloud.uipath.com/ACCOUNT_NAME/TENANT_NAME
UIPATH_ACCESS_TOKEN=YOUR_TOKEN_HERE
```
## Command Line Interface (CLI)
The SDK provides a command-line interface for creating, packaging, and deploying Agent Framework agents:
### Authentication
```bash
uipath auth
```
This command opens a browser for authentication and creates/updates your `.env` file with the proper credentials.
### Initialize a Project
```bash
uipath init
```
Running `uipath init` will process the agent definitions in the `agent_framework.json` file and create the corresponding `entry-points.json` file needed for deployment.
For more details on the configuration format, see the [UiPath configuration specifications](https://github.com/UiPath/uipath-python/blob/main/specs/README.md).
### Debug a Project
```bash
uipath run AGENT [INPUT]
```
Executes the agent with the provided JSON input arguments.
### Package a Project
```bash
uipath pack
```
Packages your project into a `.nupkg` file that can be deployed to UiPath.
**Note:** Your `pyproject.toml` must include:
- A description field (avoid characters: &, <, >, ", ', ;)
- Author information
Example:
```toml
description = "Your package description"
authors = [{name = "Your Name", email = "your.email@example.com"}]
```
### Publish a Package
```bash
uipath publish
```
Publishes the most recently created package to your UiPath Orchestrator.
## Project Structure
To properly use the CLI for packaging and publishing, your project should include:
- A `pyproject.toml` file with project metadata
- An `agent_framework.json` file with your agent definitions (e.g., `"agents": {"agent": "main.py:agent"}`)
- An `entry-points.json` file (generated by `uipath init`)
- A `bindings.json` file (generated by `uipath init`) to configure resource overrides
- Any Python files needed for your automation
## Development
### Developer Tools
Check out [uipath-dev](https://github.com/uipath/uipath-dev-python) - an interactive terminal application for building, testing, and debugging UiPath Python runtimes, agents, and automation scripts.
### Setting Up a Development Environment
Please read our [contribution guidelines](https://github.com/UiPath/uipath-integrations-python/blob/main/packages/uipath-agent-framework/CONTRIBUTING.md) before submitting a pull request.
### Special Thanks
A huge thank-you to the open-source community and the maintainers of the libraries that make this project possible:
- [Agent Framework](https://github.com/microsoft/agent-framework) for providing a flexible framework for building AI agents.
- [OpenInference](https://github.com/Arize-ai/openinference) for observability and instrumentation support.
- [Pydantic](https://github.com/pydantic/pydantic) for reliable, typed configuration and validation.
| text/markdown | null | null | null | Cristian Pufu <cristian.pufu@uipath.com> | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"agent-framework-core>=1.0.0rc1",
"agent-framework-orchestrations>=1.0.0b260219",
"aiosqlite>=0.20.0",
"openinference-instrumentation-agent-framework>=0.1.0",
"uipath-runtime<0.10.0,>=0.9.0",
"uipath<2.9.0,>=2.8.41",
"agent-framework-anthropic>=1.0.0b260219; extra == \"anthropic\"",
"anthropic>=0.43.0; extra == \"anthropic\""
] | [] | [] | [] | [
"Homepage, https://uipath.com",
"Repository, https://github.com/UiPath/uipath-integrations-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:23:22.998121 | uipath_agent_framework-0.0.6.tar.gz | 231,841 | 42/d2/21c1592dbb0cde3702331d2c3299a82146a48565e1ab5bf3632ee1d291f2/uipath_agent_framework-0.0.6.tar.gz | source | sdist | null | false | 02d52932728425d05275f669b42544e0 | 4d7a4b6c2508389eca9a1cc98e990e15810615e8d63dc383e0e09525acf82bfe | 42d221c1592dbb0cde3702331d2c3299a82146a48565e1ab5bf3632ee1d291f2 | null | [] | 230 |
2.4 | soprano-sdk | 0.2.51 | YAML-driven workflow engine with AI agent integration for building conversational SOPs | # Conversational SOP Framework
A YAML-driven workflow engine with AI agent integration for building conversational Standard Operating Procedures (SOPs).
## Features
- **YAML Configuration**: Define workflows declaratively using YAML
- **AI Agent Integration**: Built-in support for conversational data collection using OpenAI models
- **State Management**: Powered by LangGraph for robust workflow execution
- **External Context Injection**: Support for pre-populated fields from external orchestrators
- **Pattern Matching**: Flexible transition logic based on patterns and conditions
- **Visualization**: Generate workflow graphs as images or Mermaid diagrams
- **Follow-up Conversations**: Handle user follow-up questions with full workflow context
- **Intent Detection**: Route users between collector nodes based on detected intent
- **Out-of-Scope Detection**: Signal when user queries are unrelated to the current workflow
- **Outcome Humanization**: LLM-powered transformation of outcome messages into natural, context-aware responses
- **Per-Turn Localization**: Dynamic language and script switching for multi-language support
## Installation
```bash
pip install soprano-sdk
```
Or using uv:
```bash
uv add soprano-sdk
```
## Quick Start
### 1. Define a Workflow in YAML
```yaml
name: "User Greeting Workflow"
description: "Collects user information and provides a personalized greeting"
version: "1.0"
data:
- name: name
type: text
description: "User's name"
label: "Full Name" # Optional: User-friendly label for UI display
- name: age
type: number
description: "User's age in years"
label: "Age (years)" # Optional: User-friendly label for UI display
steps:
- id: get_name
action: collect_input_with_agent
field: name
max_attempts: 3
agent:
name: "NameCollector"
model: "gpt-4o-mini"
instructions: |
Your goal is to capture the user's name.
Start with a friendly greeting and ask for their name.
Once you have a clear name, respond with: 'NAME_CAPTURED: [name]'
transitions:
- pattern: "NAME_CAPTURED:"
next: get_age
- pattern: "NAME_FAILED:"
next: end_failed
- id: get_age
action: collect_input_with_agent
field: age
max_attempts: 3
agent:
name: "AgeCollector"
model: "gpt-4o-mini"
instructions: |
Ask for the user's age.
Once you have a valid age, respond with: 'AGE_CAPTURED: [age]'
transitions:
- pattern: "AGE_CAPTURED:"
next: end_success
- pattern: "AGE_FAILED:"
next: end_failed
outcomes:
- id: end_success
type: success
message: "Hello {name}! You are {age} years old."
- id: end_failed
type: failure
message: "Sorry, I couldn't complete the workflow."
```
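The `pattern` transitions above match marker prefixes in the agent's response. Conceptually, picking the next step and extracting the captured value is a string split; this is an illustrative sketch, not the engine's actual implementation:

```python
def match_transition(response: str, transitions: dict):
    # transitions maps a marker like "NAME_CAPTURED:" to the next step id
    for marker, next_step in transitions.items():
        if marker in response:
            value = response.split(marker, 1)[1].strip()
            return next_step, value
    return None, None

step, value = match_transition(
    "NAME_CAPTURED: Alice",
    {"NAME_CAPTURED:": "get_age", "NAME_FAILED:": "end_failed"},
)
print(step, value)  # get_age Alice
```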
### 2. Load and Execute the Workflow
```python
from soprano_sdk import load_workflow
from langgraph.types import Command
import uuid
# Load workflow
graph, engine = load_workflow("greeting_workflow.yaml")
# Setup execution
thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}
# Start workflow
result = graph.invoke({}, config=config)
# Interaction loop
while True:
if "__interrupt__" in result and result["__interrupt__"]:
# Get prompt from workflow
prompt = result["__interrupt__"][0].value
print(f"Bot: {prompt}")
# Get user input
user_input = input("You: ")
# Resume workflow with user input
result = graph.invoke(Command(resume=user_input), config=config)
else:
# Workflow completed
message = engine.get_outcome_message(result)
print(f"Bot: {message}")
break
```
## Data Fields Configuration
Data fields define the information collected and processed by your workflow. Each field supports the following properties:
### Field Properties
```yaml
data:
- name: field_name # Required: Unique identifier
type: text # Required: text, number, boolean, date, etc.
description: "..." # Required: Used for agent understanding
label: "Display Name" # Optional: User-friendly label for UI display
default: "value" # Optional: Default value
```
### Label Field
The `label` field provides a user-friendly display name for fields. When present, it appears in the `field_details` object returned by the workflow tool, making it easier to build consumer UIs.
**Example:**
```yaml
data:
- name: email
type: text
description: "User's email address"
label: "Email Address"
- name: phone
type: text
description: "User's phone number"
label: "Phone Number"
- name: return_reason
type: text
description: "Reason for return"
# No label - field_details will only include name and value
```
**Field Details Output:**
```python
# When workflow interrupts for user input
result = tool.execute(thread_id="123", user_message="hi")
# Result includes field_details with labels
result.field_details
# [
# {"name": "email", "value": "user@example.com", "label": "Email Address"},
# {"name": "phone", "value": "1234567890", "label": "Phone Number"},
# {"name": "return_reason", "value": "damaged item"} # No label
# ]
```
**Use Cases:**
- **UI Rendering**: Display friendly labels instead of field names (`"Email Address"` vs `"email"`)
- **Internationalization**: Use labels for localized field names while keeping internal field names in English
- **Better UX**: Show contextual labels (`"Age (years)"` is clearer than `"age"`)
- **Backward Compatibility**: Optional - existing workflows without labels continue to work
### 3. External Context Injection
You can inject external context into workflows:
```python
# Pre-populate fields from external orchestrator
result = graph.invoke({
"name": "Alice",
"age": 30
}, config=config)
# Workflow will automatically skip collection steps
# and proceed to validation/processing
```
### 4. Persistence
The library supports pluggable persistence through LangGraph's checkpointer system.
#### In-Memory (Default)
```python
# No persistence - state lost when process ends
graph, engine = load_workflow("workflow.yaml")
```
#### MongoDB Persistence
```python
from soprano_sdk import load_workflow
from langgraph.checkpoint.mongodb import MongoDBSaver
from pymongo import MongoClient
# Setup MongoDB persistence (local)
client = MongoClient("mongodb://localhost:27017")
checkpointer = MongoDBSaver(client=client, db_name="workflows")
# Or MongoDB Atlas (cloud)
client = MongoClient("mongodb+srv://user:pass@cluster.mongodb.net")
checkpointer = MongoDBSaver(client=client, db_name="workflows")
# Load workflow with persistence
graph, engine = load_workflow("workflow.yaml", checkpointer=checkpointer)
# Execute with thread_id for state tracking
config = {"configurable": {"thread_id": "user-123-return"}}
result = graph.invoke({}, config=config)
# Later, resume using same thread_id
result = graph.invoke(Command(resume="continue"), config=config)
```
#### Thread ID Strategies
Choose a thread_id strategy based on your use case:
| Strategy | Thread ID Pattern | Best For |
|----------|-------------------|----------|
| **Entity-Based** | `f"return_{order_id}"` | One workflow per business entity |
| **Conversation** | `str(uuid.uuid4())` | Multiple concurrent workflows |
| **User+Workflow** | `f"{user_id}_{workflow_type}"` | One workflow type per user |
| **Session-Based** | `session_id` | Web apps with sessions |
**Examples**: See `examples/persistence/` for detailed examples of each strategy.
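The patterns in the table are ordinary Python f-strings; for instance:

```python
import uuid

# Example identifiers (hypothetical values for illustration)
order_id, user_id, workflow_type = "ORDER-123", "u42", "return"

entity_thread = f"return_{order_id}"                 # entity-based
conversation_thread = str(uuid.uuid4())              # conversation-based
user_workflow_thread = f"{user_id}_{workflow_type}"  # user+workflow

print(entity_thread)         # return_ORDER-123
print(user_workflow_thread)  # u42_return
```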
## Workflow Actions
### collect_input_with_agent
Collects user input using an AI agent with conversation history.
```yaml
- id: collect_field
action: collect_input_with_agent
field: field_name
max_attempts: 5
agent:
name: "CollectorAgent"
model: "gpt-4o-mini"
instructions: |
Instructions for the agent...
transitions:
- pattern: "SUCCESS:"
next: next_step
- pattern: "FAILED:"
next: failure_outcome
```
### call_function
Calls a Python function with workflow state.
```yaml
- id: process_data
action: call_function
function: "my_module.my_function"
inputs:
field1: "{field_name}"
field2: "static_value"
output: result_field
transitions:
- condition: true
next: success_step
- condition: false
next: failure_step
```
### call_async_function
Calls an async function that may return a pending status, triggering an interrupt until the async operation completes.
```yaml
- id: verify_payment
action: call_async_function
function: "payments.start_verification"
output: verification_result
transitions:
- condition: "verified"
next: payment_approved
- condition: "failed"
next: payment_rejected
```
### follow_up
Handles follow-up questions from users. Unlike `collect_input_with_agent` where the agent asks first, here the **user initiates** by asking questions. The agent responds using full workflow context.
```yaml
- id: handle_questions
action: follow_up
next: final_confirmation # Where to go when user says "done"
closure_patterns: # Optional: customize closure detection
- "ok"
- "thank you"
- "done"
agent:
name: "FollowUpAssistant"
model: "gpt-4o-mini"
description: "Answering questions about the order"
instructions: |
Help the user with any questions about their order.
Be concise and helpful.
detect_out_of_scope: true # Signal when user asks unrelated questions
transitions: # Optional: route based on patterns
- pattern: "ROUTE_TO_PAYMENT:"
next: payment_step
```
**Key features:**
- **User initiates**: No initial prompt - waits for user to ask a question
- **Full state context**: Agent sees all collected workflow data
- **Closure detection**: Detects "ok", "thanks", "done" → proceeds to next step
- **Intent change**: Routes to collector nodes when user wants to change data
- **Out-of-scope**: Signals to parent orchestrator for unrelated queries
## Interrupt Types
The workflow engine uses three interrupt types to pause execution and communicate with the caller:
| Type | Marker | Triggered By | Use Case |
|------|--------|--------------|----------|
| **USER_INPUT** | `__WORKFLOW_INTERRUPT__` | `collect_input_with_agent`, `follow_up` | Waiting for user input |
| **ASYNC** | `__ASYNC_INTERRUPT__` | `call_async_function` | Waiting for async operation callback |
| **OUT_OF_SCOPE** | `__OUT_OF_SCOPE_INTERRUPT__` | `collect_input_with_agent`, `follow_up` | User query unrelated to current task |
### Handling Interrupts
```python
result = graph.invoke({}, config=config)
if "__interrupt__" in result and result["__interrupt__"]:
interrupt_value = result["__interrupt__"][0].value
# Check interrupt type
if isinstance(interrupt_value, dict):
if interrupt_value.get("type") == "async":
# Async interrupt - wait for external callback
pending_metadata = interrupt_value.get("pending")
# ... handle async operation ...
result = graph.invoke(Command(resume=async_result), config=config)
elif interrupt_value.get("type") == "out_of_scope":
# Out-of-scope - user asking unrelated question
reason = interrupt_value.get("reason")
user_message = interrupt_value.get("user_message")
# ... route to different workflow or handle appropriately ...
else:
# User input interrupt - prompt is a string
prompt = interrupt_value
user_input = input(f"Bot: {prompt}\nYou: ")
result = graph.invoke(Command(resume=user_input), config=config)
```
### Out-of-Scope Detection
Data collector and follow-up nodes can detect when user queries are unrelated to the current task. This is useful for multi-workflow systems where a supervisor agent needs to route users to different SOPs.
**Configuration:**
```yaml
agent:
detect_out_of_scope: true # Disabled by default, set to true to enable
scope_description: "collecting order information for returns" # Optional
```
**Response format:**
```
__OUT_OF_SCOPE_INTERRUPT__|{thread_id}|{workflow_name}|{"reason":"...","user_message":"..."}
```
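Since the final segment of this format is JSON, a parent orchestrator can parse it with a bounded split (a sketch assuming the field order shown above; the example payload values are hypothetical):

```python
import json

def parse_out_of_scope(response: str):
    # Three bounded splits: everything after the third "|" is the JSON payload
    marker, thread_id, workflow_name, payload = response.split("|", 3)
    if marker != "__OUT_OF_SCOPE_INTERRUPT__":
        raise ValueError("not an out-of-scope interrupt")
    return thread_id, workflow_name, json.loads(payload)

raw = ('__OUT_OF_SCOPE_INTERRUPT__|t-1|return_workflow|'
       '{"reason":"unrelated to returns","user_message":"what is the weather?"}')
tid, wf, details = parse_out_of_scope(raw)
print(tid, wf, details["reason"])  # t-1 return_workflow unrelated to returns
```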
## Outcome Types
Workflows can define three types of outcomes to represent different completion states:
### 1. Success Outcome
Represents successful workflow completion.
```yaml
outcomes:
- id: order_approved
type: success
message: "Order {{order_id}} has been approved!"
humanize: true # Optional, default: true
```
### 2. Failure Outcome
Represents workflow completion with an error or failure state.
```yaml
outcomes:
- id: order_rejected
type: failure
message: "Order {{order_id}} could not be processed."
humanize: true # Optional, default: true
```
### 3. Redirect Outcome
Redirects to another workflow or system. Returns a special formatted response for external orchestrators to route the conversation.
```yaml
outcomes:
- id: transfer_to_support
type: redirect
redirect_to: "customer_support_workflow"
message: "Let me transfer you to customer support for {{issue_type}}."
humanize: true
```
**Response format:**
```
__REDIRECT__|{thread_id}|{workflow_name}|{redirect_to}|{message}
```
**Key features:**
- `redirect_to` is mandatory when `type` is `redirect`
- If `redirect_to` is provided, `type` must be `redirect`
- The `redirect_to` field supports Jinja2 templates (e.g., `"support_{{issue_type}}"`)
- Messages are humanized (if enabled) before being included in the redirect response
- Orchestrators can parse this format to route users to appropriate workflows
**Example:**
```yaml
outcomes:
- id: escalate_to_billing
type: redirect
redirect_to: "billing_workflow"
message: "I'll connect you with our billing department for assistance with {{issue_type}}."
```
If triggered with `issue_type: "refund"`, returns:
```
__REDIRECT__|thread-123|order_workflow|billing_workflow|I'll connect you with our billing department for assistance with refund.
```
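An orchestrator-side parser for this format might look like the following sketch; it splits on the first four `|` characters, so the trailing message can contain arbitrary text:

```python
def parse_redirect(response: str):
    # Four bounded splits: everything after the fourth "|" is the message
    marker, thread_id, workflow, redirect_to, message = response.split("|", 4)
    if marker != "__REDIRECT__":
        raise ValueError("not a redirect response")
    return {"thread_id": thread_id, "workflow": workflow,
            "redirect_to": redirect_to, "message": message}

raw = ("__REDIRECT__|thread-123|order_workflow|billing_workflow|"
       "I'll connect you with our billing department for assistance with refund.")
parsed = parse_redirect(raw)
print(parsed["redirect_to"])  # billing_workflow
```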
## Outcome Humanization
Outcome messages can be automatically humanized using an LLM to transform template-based messages into natural, context-aware responses. This feature uses the full conversation history to generate responses that match the tone and context of the interaction.
### How It Works
1. **Template rendering**: The outcome message template is first rendered with state values (e.g., `{{order_id}}` → `1234`)
2. **LLM humanization**: The rendered message is passed to an LLM along with the conversation history
3. **Natural response**: The LLM generates a warm, conversational response while preserving all factual details
### Configuration
Humanization is **enabled by default**. Configure it at the workflow level:
```yaml
name: "Return Processing Workflow"
version: "1.0"
# Humanization configuration (optional - enabled by default)
humanization_agent:
model: "gpt-4o" # Override model for humanization (optional)
base_url: "https://custom-api.com/v1" # Override base URL (optional)
instructions: | # Custom instructions (optional)
You are a friendly customer service representative.
Rewrite the message to be warm and empathetic.
Always thank the customer for their patience.
outcomes:
- id: success
type: success
message: "Return approved for order {{order_id}}. Reason: {{return_reason}}."
- id: technical_error
type: failure
humanize: false # Disable humanization for this specific outcome
message: "Error code: {{error_code}}. Contact support."
```
### Example Transformation
| Template Message | Humanized Response |
|-----------------|-------------------|
| `"Return approved for order 1234. Reason: damaged item."` | `"Great news! I've approved the return for your order #1234. I completely understand about the damaged item - that's so frustrating. You'll receive an email shortly with return instructions. Is there anything else I can help you with?"` |
### Disabling Humanization
**Globally** (for entire workflow):
```yaml
humanization_agent:
enabled: false
```
**Per-outcome**:
```yaml
outcomes:
- id: error_code
type: failure
humanize: false # Keep exact message for debugging/logging
message: "Error: {{error_code}}"
```
### Model Configuration
The humanization agent inherits the workflow's runtime `model_config`. You can override specific settings:
```python
config = {
"model_config": {
"model_name": "gpt-4o-mini", # Base model for all agents
"api_key": os.getenv("OPENAI_API_KEY"),
}
}
# In YAML, humanization_agent.model overrides model_name for humanization only
```
## Per-Turn Localization
The framework supports per-turn localization, allowing dynamic language and script switching during workflow execution. Each call to `execute()` can specify a different target language/script.
### How It Works
1. **Per-turn parameters**: Pass `target_language` and `target_script` to `execute()`
2. **Instruction injection**: Localization instructions are prepended to agent system prompts
3. **No extra LLM calls**: The same agent that generates the response handles localization
### Usage
**Per-turn language switching:**
```python
from soprano_sdk import WorkflowTool
tool = WorkflowTool(
yaml_path="return_workflow.yaml",
name="return_processor",
description="Process returns",
checkpointer=checkpointer,
config=config
)
# Turn 1: English (no localization)
result = tool.execute(thread_id="123", user_message="hi")
# Turn 2: Switch to Tamil
result = tool.execute(
thread_id="123",
user_message="my order id is 1234",
target_language="Tamil",
target_script="Tamil"
)
# Turn 3: Back to English (no localization params)
result = tool.execute(thread_id="123", user_message="yes")
```
### YAML Defaults (Optional)
You can set default localization in the workflow YAML. These are used when `target_language`/`target_script` are not passed to `execute()`:
```yaml
name: "Return Workflow"
version: "1.0"
localization:
language: "Tamil"
script: "Tamil"
instructions: | # Optional: custom instructions
Use formal Tamil suitable for customer service.
Always be polite and respectful.
# ... rest of workflow
```
### Key Points
- **Localization affects**: Data collector prompts, follow-up responses, and humanized outcome messages
- **Outcome messages require humanization**: If `humanize: false`, outcome messages stay in English (template output)
- **Per-turn override**: Runtime parameters always override YAML defaults
## Examples
See the `examples/` directory for complete workflow examples:
- `greeting_workflow.yaml` - Simple user greeting workflow
- `return_workflow.yaml` - Customer return processing workflow
- Function modules with business logic (`greeting_functions.py`, `return_functions.py`)
- `persistence/` - Persistence strategy examples (entity-based, conversation-based, SQLite demo)
## Running Workflows
### CLI Demo
```bash
# Basic usage (in-memory)
python scripts/workflow_demo.py examples/greeting_workflow.yaml
# With MongoDB persistence (local)
python scripts/workflow_demo.py examples/greeting_workflow.yaml --mongodb mongodb://localhost:27017
# Resume existing workflow
python scripts/workflow_demo.py examples/greeting_workflow.yaml --mongodb mongodb://localhost:27017 --thread-id abc-123
# With MongoDB Atlas
python scripts/workflow_demo.py examples/greeting_workflow.yaml --mongodb mongodb+srv://user:pass@cluster.mongodb.net
```
### Gradio UI
```bash
# Basic usage (in-memory)
python scripts/workflow_demo_ui.py examples/greeting_workflow.yaml
# With MongoDB persistence
python scripts/workflow_demo_ui.py examples/greeting_workflow.yaml --mongodb mongodb://localhost:27017
# With MongoDB Atlas
python scripts/workflow_demo_ui.py examples/greeting_workflow.yaml --mongodb mongodb+srv://user:pass@cluster.mongodb.net
```
### Persistence Examples
```bash
cd examples/persistence
# Entity-based (order ID as thread ID)
python entity_based.py ORDER-123
# Conversation-based (UUID with supervisor pattern)
python conversation_based.py ../return_workflow.yaml --order-id ORDER-456
# MongoDB demo with pause/resume
python mongodb_demo.py
# Use MongoDB Atlas
python mongodb_demo.py --mongodb mongodb+srv://user:pass@cluster.mongodb.net
```
### Visualize Workflow
```bash
python scripts/visualize_workflow.py examples/greeting_workflow.yaml
```
## Development
### Setup
```bash
git clone https://github.com/dnivra26/soprano_sdk_framework.git
cd soprano_sdk_framework
uv sync --dev
```
### Run Tests
```bash
python tests/test_external_values.py
```
## Architecture
- **soprano_sdk/**: Core library package
- `engine.py`: Workflow engine implementation
- `__init__.py`: Public API exports
- **examples/**: Example workflows and persistence patterns
- Workflow YAML definitions
- Function modules with business logic
- `persistence/`: Different persistence strategy examples
- **scripts/**: Utility tools for running and visualizing workflows
- `workflow_demo.py`: CLI runner with persistence support
- `workflow_demo_ui.py`: Gradio UI with thread management
- `visualize_workflow.py`: Workflow graph generator
- **tests/**: Test suite
- **legacy/**: Previous implementations (FSM, direct LangGraph)
## Requirements
### Core Dependencies
- Python >= 3.12
- agno >= 2.0.7
- langgraph >= 0.6.8
- openai >= 1.108.1
- pyyaml >= 6.0
### Optional Dependencies
For MongoDB persistence:
```bash
# Using pip
pip install langgraph-checkpoint-mongodb pymongo
# Using uv (recommended)
uv add langgraph-checkpoint-mongodb pymongo --optional persistence
# Or install library with persistence support
pip install soprano-sdk[persistence]
```
For development (includes Gradio UI and tests):
```bash
pip install soprano-sdk[dev]
# or
uv sync --dev
```
## License
MIT
## Contributing
Contributions are welcome! Please open an issue or submit a pull request.
## To Do
- ✅ Database persistence (SqliteSaver, PostgresSaver supported)
- ✅ Pluggable checkpointer system
- ✅ Thread ID strategies and examples
- ✅ Follow-up node for conversational Q&A
- ✅ Out-of-scope detection for multi-workflow routing
- ✅ Outcome humanization with LLM
- ✅ Per-turn localization for multi-language support
- Additional action types (webhook, conditional branching, parallel execution)
- More workflow examples (customer onboarding, support ticketing, approval flows)
- Workflow testing utilities
- Metrics and monitoring hooks
## Links
- [GitHub Repository](https://github.com/dnivra26/soprano_sdk_framework)
- [Issues](https://github.com/dnivra26/soprano_sdk_framework/issues)
| text/markdown | Arvind Thangamani | null | null | null | MIT | agent, ai, conversational, langgraph, sop, soprano, workflow | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"agno>=2.0.7",
"crewai>=0.186.1",
"jsonschema>=4.0.0",
"langchain-community>=0.4.1",
"langchain-core>=0.3.67",
"langchain-openai>=1.0.3",
"langchain>=1.0.7",
"langfuse>=3.10.1",
"langgraph==1.0.2",
"litellm>=1.74.9",
"openai>=1.92.1",
"pydantic-ai>=1.22.0",
"pydantic>=2.0.0",
"pytest>=9.0.1",
"pyyaml>=6.0",
"gradio>=5.46.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff==0.14.13; extra == \"dev\"",
"langgraph-checkpoint-mongodb>=0.2.0; extra == \"persistence\"",
"pymongo>=4.0.0; extra == \"persistence\"",
"crewai>=0.1.0; extra == \"supervisors\"",
"langchain-openai>=0.3.34; extra == \"supervisors\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T09:22:09.174887 | soprano_sdk-0.2.51-py3-none-any.whl | 83,416 | c8/8c/d6951cd8da57890bc522ee9616968c87a540f9ffcb37676d93138eb22c84/soprano_sdk-0.2.51-py3-none-any.whl | py3 | bdist_wheel | null | false | 7916b796a597b76aacab7ab68be3352b | 9a29c868de49f59bee399e6f7727b24f5272a3100c2a8cfadb153acdc1d880c5 | c88cd6951cd8da57890bc522ee9616968c87a540f9ffcb37676d93138eb22c84 | null | [
"LICENSE"
] | 305 |
2.4 | itdpy | 0.3.2 | Python SDK for ИТД.com API | # ITDpy
Python SDK for the итд.com social network.
> ⚠️ Unofficial API client.
> The SDK is intended for building client applications and testing the API within the platform's current rules.
## Installation
### Via pip
```bash
pip install itdpy
```
### Via git
```bash
git clone https://github.com/Gam5510/ITDpy
cd ITDpy
pip install -r requirements.txt
pip install -e .
```
## Documentation
- [Documentation](documentation/index.md)
- [Quick start](documentation/quickstart.md)
- [Navigation](documentation/NAVIGATION.md)
---
## Modules
- [Clans](documentation/clans.md)
- [Comments](documentation/comments.md)
- [Discovery](documentation/discovery.md)
- [Formatting](documentation/formatting.md)
- [Notifications](documentation/notifications.md)
- [Pins](documentation/pins.md)
- [Polls](documentation/polls.md)
- [Posts](documentation/posts.md)
- [Profile](documentation/profile.md)
- [Settings](documentation/settings.md)
- [Upload](documentation/upload.md)
- [Users](documentation/users.md)
---
## Models
- [Actor](documentation/models/actor.md)
- [Comment](documentation/models/comment.md)
- [Comments](documentation/models/comments.md)
- [Discovery](documentation/models/discovery.md)
- [Notification](documentation/models/notification.md)
- [Notifications](documentation/models/notifications.md)
- [Pagination](documentation/models/pagination.md)
- [Pins](documentation/models/pins.md)
- [Poll](documentation/models/poll.md)
- [Post](documentation/models/post.md)
- [Posts](documentation/models/posts.md)
- [Settings](documentation/models/settings.md)
- [Users](documentation/models/users.md)
## Quick Start
How to get a token:
```python
from itdpy.client import ITDClient
client = ITDClient(refresh_token="your refresh token")
me = client.get_me()
print(me.id)
print(me.username)
```
### Script to update the display name
```python
from itdpy.client import ITDClient
from datetime import datetime
import time
client = ITDClient(refresh_token="your_token")
while True:
client.update_profile(display_name=f"Фазлиддин |{datetime.now().strftime('%m.%d %H:%M:%S')}|")
time.sleep(1)
```
### Script to update the banner
```python
from itdpy.client import ITDClient
client = ITDClient(refresh_token="your_token")
file = client.upload_file("matrix-rain-effect-animation-photoshop-editor.gif")
print(file.id)
update = client.update_profile(banner_id=file.id)
print(update.banner)
```
# Custom Requests
## ✅ Basic custom GET example
```python
response = client.get("/api/users/me")
data = response.json()
print(data)
```
### Any endpoint can be queried this way
----------
## ✅ POST with JSON
```python
response = client.post(
    "/api/posts",
    json={"content": "Hello from a custom request"},
)
print(response.status_code)
print(response.json())
```
----------
## ✅ PUT / PATCH
```python
response = client.patch(
    "/api/profile",
    json={"displayName": "Фазлиддин 😎"},
)
```
----------
## ✅ DELETE
```python
client.delete("/api/posts/POST_ID")
```
----------
## ✅ Passing query parameters
```python
response = client.get(
    "/api/posts",
    params={"limit": 50, "sort": "popular"},
)
```
## Roadmap
- Async version of the library (`aioitd`)
- Improved error handling and formatting
- Logging (via `logging`)
- Expanded object model (Post, Comment, User, etc.)
- Additional API endpoints as they become available
- Better documentation and examples
## Miscellaneous
The project is under active development.
If you have ideas or suggestions, please open an issue or a pull request.
| text/markdown | Gam5510 | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.28.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Gam5510/ITDpy",
"Repository, https://github.com/Gam5510/ITDpy"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-21T09:20:48.458373 | itdpy-0.3.2.tar.gz | 17,899 | cb/03/b3ab2e85e90e2dac98ba27da25cfd9def26497eecd4c605c62a7a3ab259b/itdpy-0.3.2.tar.gz | source | sdist | null | false | b324611e1bf4713455d48a80b76e98e8 | a331a5630b1339ef9322989fd12e0f898fe9f8048d20ee3436adb3cbb08458c2 | cb03b3ab2e85e90e2dac98ba27da25cfd9def26497eecd4c605c62a7a3ab259b | null | [
"LICENSE"
] | 240 |
2.4 | defenx-nlp | 0.2.0 | Semantic NLP intelligence toolkit — encoding, embeddings, GPU/CPU device handling, and reusable inference interfaces. | # defenx-nlp
**Semantic NLP Intelligence Toolkit**
> A domain-agnostic library for semantic sentence encoding, embedding generation,
> GPU/CPU-aware device handling, and reusable inference interfaces.
[](https://pypi.org/project/defenx-nlp/)
[](https://pypi.org/project/defenx-nlp/)
[](LICENSE)
---
## Overview
`defenx-nlp` is a standalone, pip-installable semantic NLP library. It is designed to be **domain-agnostic** so
the same encoder that understands human chat intent can be repurposed for:
| Use case | What you embed |
|---|---|
| NLP classification | User sentences → intent labels |
| Anomaly detection | System log lines → outlier scores |
| Log intelligence | Server events → semantic clusters |
| Behavioural analytics | User actions → behavioural patterns |
| Semantic search | Documents → retrieval ranking |
---
## Installation
### Standard (CPU)
```bash
pip install defenx-nlp
```
### With CUDA 12 (RTX 30/40 series, recommended)
```bash
pip install defenx-nlp
pip install torch --index-url https://download.pytorch.org/whl/cu128
```
### Development install (editable + test tools)
```bash
git clone https://github.com/defenx-sec/defenx-nlp.git
cd defenx-nlp
pip install -e ".[dev]"
```
---
## Quick Start
```python
from defenx_nlp import SemanticEncoder
# Auto-detects CUDA — falls back to CPU silently
enc = SemanticEncoder()
# Encode a single sentence → (384,) float32 numpy array
embedding = enc.encode("Neural networks are universal approximators.")
print(embedding.shape) # (384,)
print(embedding.dtype) # float32
# Batch encode — much faster than looping
embeddings = enc.encode_batch(["Hello", "Goodbye", "Help me please"])
print(embeddings.shape) # (3, 384)
```
### Semantic similarity
```python
from defenx_nlp import SemanticEncoder, cosine_similarity
enc = SemanticEncoder()
e1 = enc.encode("I love machine learning")
e2 = enc.encode("I enjoy deep learning")
sim = cosine_similarity(e1, e2)
print(f"Similarity: {sim:.3f}") # ~0.87
```
### Top-k retrieval
```python
from defenx_nlp import SemanticEncoder, top_k_similar
enc = SemanticEncoder()
corpus = ["Help me", "Goodbye", "Great job!", "What is AI?"]
query = "Can you assist me?"
c_embs = [enc.encode(t) for t in corpus]
q_emb = enc.encode(query)
results = top_k_similar(q_emb, c_embs, k=1)
print(corpus[results[0][0]]) # "Help me"
```
### Text preprocessing
```python
from defenx_nlp import clean_text, batch_clean
text = clean_text(" HELLO WORLD! ", lowercase=True)
# → "hello world!"
texts = batch_clean([" A ", " B "], lowercase=True)
# → ["a", "b"]
```
### CUDA warmup (for production services)
```python
enc = SemanticEncoder(lazy=False)
enc.warmup() # initialise CuDNN kernels at startup, not first request
```
---
## API Summary
| Symbol | Description |
|---|---|
| `SemanticEncoder` | Main encoder class — lazy, thread-safe, CUDA-aware |
| `BaseEncoder` | Abstract base for custom encoder backends |
| `BaseInferenceEngine` | Abstract base for downstream classifiers |
| `get_device(preferred)` | Resolve `"auto"/"cuda"/"cpu"/"mps"` → `torch.device` |
| `device_info()` | Hardware diagnostic dictionary |
| `clean_text(text, **opts)` | Configurable single-text cleaner |
| `batch_clean(texts, **opts)` | Apply `clean_text` to a list |
| `truncate(text, max_chars)` | Hard-truncate with optional ellipsis |
| `cosine_similarity(a, b)` | Scalar cosine similarity in `[-1, 1]` |
| `batch_cosine_similarity(q, M)` | Vectorised query-vs-matrix similarity `(N,)` |
| `top_k_similar(q, corpus, k)` | Top-k retrieval → `[(idx, score)]` |
| `normalize_embedding(v)` | L2-normalise a single embedding |
| `normalize_batch(M)` | Row-wise L2-normalise `(N, D)` matrix |
Full API docs: [`docs/api_reference.md`](docs/api_reference.md)
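For reference, the similarity and normalization helpers in the table reduce to a few NumPy operations. The following is a minimal pure-NumPy sketch of the underlying math, shown here only to clarify the semantics; it is independent of the library's actual implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Scalar cosine similarity in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def normalize_batch(m: np.ndarray) -> np.ndarray:
    # Row-wise L2 normalization of an (N, D) matrix.
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def batch_cosine_similarity(q: np.ndarray, m: np.ndarray) -> np.ndarray:
    # Query-vs-matrix similarity: after L2 normalization, cosine
    # similarity is just a matrix-vector dot product.
    qn = q / np.linalg.norm(q)
    return normalize_batch(m) @ qn

q = np.array([1.0, 0.0])
m = np.array([[1.0, 0.0], [0.0, 1.0]])
print(batch_cosine_similarity(q, m))  # [1. 0.]
```

Pre-normalizing a corpus once and reusing it is what makes `top_k_similar`-style retrieval cheap at query time.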
---
## Hardware Requirements
### Minimum
| Component | Requirement |
|---|---|
| CPU | Dual-core, 64-bit |
| RAM | 4 GB |
| Disk | 500 MB (model cache) |
| GPU | None (CPU mode) |
| Python | 3.9+ |
### Recommended
| Component | Requirement |
|---|---|
| CPU | 6+ cores (AMD Ryzen 7 / Intel Core i7+) |
| RAM | 16 GB |
| GPU | NVIDIA RTX 20-series or newer |
| VRAM | 4+ GB |
| CUDA | 11.8 or 12.x |
| Python | 3.11+ |
> **Tested on:** AMD Ryzen 7 4800H + NVIDIA RTX 3050 6 GB (CUDA 12.8) on Kali Linux (WSL2).
> Average inference latency: **~15 ms/sentence on CUDA**, **~80 ms on CPU**.
---
## Supported Operating Systems
| OS | CPU mode | CUDA mode | Notes |
|---|---|---|---|
| **Linux** (Ubuntu 20.04+, Debian 11+, Kali) | ✅ | ✅ | Fully tested |
| **Windows 10 / 11** | ✅ | ✅ | Use WSL2 for CUDA support |
| **macOS 12+** (Intel) | ✅ | — | No NVIDIA CUDA support |
| **macOS 12+** (Apple Silicon M1/M2/M3) | ✅ | MPS | Use `device="mps"` |
---
## Extending the Library
### Custom encoder backend
```python
import numpy as np
import torch
from defenx_nlp import BaseEncoder
class OpenAIEncoder(BaseEncoder):
"""Drop-in encoder using OpenAI embeddings API."""
def __init__(self, api_key: str):
import openai
openai.api_key = api_key
self._client = openai.OpenAI()
def encode(self, text: str) -> np.ndarray:
resp = self._client.embeddings.create(
model="text-embedding-3-small", input=text
)
return np.array(resp.data[0].embedding, dtype=np.float32)
def encode_batch(self, texts):
resp = self._client.embeddings.create(
model="text-embedding-3-small", input=texts
)
return np.array([d.embedding for d in resp.data], dtype=np.float32)
@property
def embedding_dim(self) -> int: return 1536
@property
def device(self) -> torch.device: return torch.device("cpu")
```
---
## Running Tests
```bash
# Install dev extras first
pip install -e ".[dev]"
# Run all tests
pytest tests/ -v
# With coverage
pytest tests/ -v --cov=defenx_nlp --cov-report=term-missing
```
Expected output:
```
tests/test_encoder.py::TestSemanticEncoder::test_encode_shape PASSED
tests/test_encoder.py::TestSemanticEncoder::test_embedding_dim_property PASSED
...
13 passed in 42.3s
```
---
## Running Examples
```bash
# Basic single-sentence usage + similarity + retrieval
python examples/basic_usage.py
# Batch throughput benchmark + similarity matrix
python examples/batch_encoding.py
```
---
## Publishing to PyPI
### 1. Build the distribution
```bash
pip install build twine
python -m build
# Creates dist/defenx_nlp-0.1.0.tar.gz and dist/defenx_nlp-0.1.0-py3-none-any.whl
```
### 2. Test on TestPyPI first (always)
```bash
twine upload --repository testpypi dist/*
pip install --index-url https://test.pypi.org/simple/ defenx-nlp
```
### 3. Publish to real PyPI
```bash
twine upload dist/*
```
### 4. Verify the install
```bash
pip install defenx-nlp
python -c "from defenx_nlp import SemanticEncoder; print(SemanticEncoder())"
```
### Versioning
Update `version` in `pyproject.toml` before each release.
Follow [Semantic Versioning](https://semver.org/): `MAJOR.MINOR.PATCH`.
---
## Project Structure
```
defenx-nlp/
├── defenx_nlp/
│ ├── __init__.py Public API surface — all exports live here
│ ├── encoder.py SemanticEncoder — lazy, thread-safe, CUDA-aware
│ ├── device.py get_device() and device_info() helpers
│ ├── preprocessing.py clean_text, batch_clean, truncate, deduplicate
│ ├── interfaces.py BaseEncoder and BaseInferenceEngine ABCs
│ └── utils.py cosine_similarity, top_k_similar, normalize_*
│
├── tests/
│ └── test_encoder.py pytest suite — encoder, device, preprocessing, utils
│
├── examples/
│ ├── basic_usage.py Single-sentence encode, similarity, retrieval
│ └── batch_encoding.py Throughput benchmark, similarity matrix
│
├── docs/
│ └── api_reference.md Full API documentation
│
├── README.md This file
├── pyproject.toml PEP 621 package metadata + build config
└── LICENSE MIT
```
---
## License
MIT — see [LICENSE](LICENSE).
---
## Acknowledgements
Built on top of:
- [sentence-transformers](https://www.sbert.net/) by UKPLab
- [PyTorch](https://pytorch.org/) by Meta AI
- [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) by Microsoft
| text/markdown | null | DEFENX <defenx@zohomail.in> | null | null | MIT License
Copyright (c) 2026 DefenX
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| nlp, sentence-transformers, embeddings, semantic-search, cuda, pytorch, anomaly-detection, behavioral-analytics | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Text Processing :: Linguistic",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.24",
"sentence-transformers>=2.2.0",
"torch>=2.0",
"torch>=2.0; extra == \"cpu\"",
"pytest>=7.4; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"twine; extra == \"dev\"",
"build; extra == \"dev\"",
"mkdocs>=1.5; extra == \"docs\"",
"mkdocs-material>=9.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/defenx-sec/defenx-nlp",
"Repository, https://github.com/defenx-sec/defenx-nlp",
"Issues, https://github.com/defenx-sec/defenx-nlp/issues",
"Changelog, https://github.com/defenx-sec/defenx-nlp/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:20:03.651795 | defenx_nlp-0.2.0.tar.gz | 20,423 | e6/ce/81cb46c3d0efd7f60176da547301359e94d29242a298d9fd87086db183a0/defenx_nlp-0.2.0.tar.gz | source | sdist | null | false | df03bc88634697066406c688cfe895be | 2b6bc6c0b9cd12a7246aa2b8f3322be5a0f71de72babe8020124bd9598a34d5c | e6ce81cb46c3d0efd7f60176da547301359e94d29242a298d9fd87086db183a0 | null | [
"LICENSE"
] | 229 |
2.4 | agent-dump | 0.1.1 | AI Coding Assistant Session Export Tool | 
# Agent Dump
An AI coding assistant session export tool — exports sessions from the session data of multiple AI coding tools as JSON.
## Supported AI Tools
- **OpenCode** - open-source AI programming assistant
- **Claude Code** - Anthropic's AI coding tool *(planned)*
- **Code X** - GitHub Copilot Chat *(planned)*
- **More tools** - PRs adding support for other AI coding tools are welcome
## Features
- **Interactive selection**: friendly command-line UI built on questionary
- **Batch export**: export all sessions from the last N days
- **Targeted export**: export specific sessions by session ID
- **Session listing**: list sessions without exporting
- **Statistics**: exports include token usage, cost, and other statistics
- **Message detail**: session messages, tool calls, and other details are fully preserved
## Installation
### Option 1: Install with uv tool (recommended)
```bash
# Install from PyPI (available after release)
uv tool install agent-dump
# Install directly from GitHub
uv tool install git+https://github.com/xingkaixin/agent-dump
```
### Option 2: Run directly with uvx (no installation)
```bash
# Run from PyPI (available after release)
uvx agent-dump --help
# Run directly from GitHub
uvx --from git+https://github.com/xingkaixin/agent-dump agent-dump --help
```
### Option 3: Local development
```bash
# Clone the repository
git clone https://github.com/xingkaixin/agent-dump.git
cd agent-dump
# Install dependencies with uv
uv sync
# Install locally for testing
uv tool install . --force
```
## Usage
### Interactive export (default)
```bash
# Option 1: use the CLI entry point
uv run agent-dump
# Option 2: run as a module
uv run python -m agent_dump
```
This lists sessions from the last 7 days; use space to select/deselect and Enter to confirm the export.
### Command-line arguments
```bash
uv run agent-dump --days 3                  # export sessions from the last 3 days
uv run agent-dump --agent claude            # specify the agent tool name
uv run agent-dump --output ./my-sessions    # specify the output directory
uv run agent-dump --list                    # only list sessions
uv run agent-dump --export ses_abc,ses_xyz  # export sessions by ID
```
### Full argument reference
| Argument | Description | Default |
|------|------|--------|
| `--days` | Query sessions from the last N days | 7 |
| `--agent` | Agent tool name | opencode |
| `--output` | Output directory | ./sessions |
| `--export` | Export sessions by ID (comma-separated) | - |
| `--list` | Only list sessions, do not export | - |
## Project Structure
```
.
├── src/
│   └── agent_dump/        # main package
│       ├── __init__.py    # package init
│       ├── __main__.py    # python -m agent_dump entry point
│       ├── cli.py         # command-line interface
│       ├── db.py          # database access
│       ├── exporter.py    # export logic
│       └── selector.py    # interactive selection
├── tests/                 # test suite
├── pyproject.toml         # project configuration
├── Makefile               # automation commands
├── ruff.toml              # code style configuration
├── data/                  # database directory
│   └── opencode/
│       └── opencode.db
└── sessions/              # export directory
    └── {agent-name}/      # exports grouped by tool
        └── ses_xxx.json
```
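Exported sessions are plain JSON files under `sessions/{agent-name}/`, which makes them easy to post-process. A hedged sketch for enumerating valid exports follows; the `list_exports` helper is illustrative and not part of agent-dump, and it deliberately avoids assuming field names since the export schema is not documented here.

```python
import json
from pathlib import Path

def list_exports(export_dir: str) -> list[str]:
    """Return file names of valid JSON session exports in a directory."""
    names = []
    for path in sorted(Path(export_dir).glob("ses_*.json")):
        json.loads(path.read_text(encoding="utf-8"))  # raises if the file is corrupt
        names.append(path.name)
    return names
```

For example, `list_exports("sessions/opencode")` would return the exported session file names for the opencode tool.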
## Development
```bash
# Lint
make lint
# Auto-fix lint issues
make lint.fix
# Format code
make lint.fmt
# Type check
make check
```
## Dependencies
- Python >= 3.14
- prompt-toolkit >= 3.0.0
- questionary >= 2.1.1
- ruff >= 0.15.2 (dev)
- ty >= 0.0.18 (dev)
## License
MIT
| text/markdown | null | XingKaiXin <xingkaixin@gmail.com> | null | null | null | ai, chat, cli, export, opencode | [
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14",
"Topic :: Utilities"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"prompt-toolkit>=3.0.0",
"questionary>=2.1.1"
] | [] | [] | [] | [
"Homepage, https://github.com/xingkaixin/agent-dump",
"Repository, https://github.com/xingkaixin/agent-dump",
"Issues, https://github.com/xingkaixin/agent-dump/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T09:20:00.271559 | agent_dump-0.1.1-py3-none-any.whl | 9,176 | 49/b4/ef88d3ac8f29fc6c4144f5ce4aa8f2f5fea97dd19d3d903014a3e6e021f3/agent_dump-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | db7eab585df41bfe1f893a7e4fc03c3a | 3c77bcc356ca94a61fbf06705114515def478f4645ca2aa73daa0a6b2239d5b8 | 49b4ef88d3ac8f29fc6c4144f5ce4aa8f2f5fea97dd19d3d903014a3e6e021f3 | MIT | [
"LICENSE"
] | 225 |
2.4 | devloop | 0.10.0 | Intelligent background agents for development workflow automation | # DevLoop
> **Intelligent background agents for development workflow automation** — automate code quality checks, testing, documentation, and more while you code.
[](https://www.python.org/downloads/)
[](#testing)
[](#status)
[](LICENSE)
---
## Why DevLoop?
### The Problem
Modern development workflows have a critical gap: **code quality checks happen too late**.
**Without DevLoop:**
- Write code → Save → Push → Wait for CI → **❌ Build fails** → Context switch back
- 10-30 minutes wasted per CI failure
- Broken `main` branch blocks the team
- Finding issues after push disrupts flow state
**The hidden costs:**
- ⏱️ **Time**: 30+ min per day waiting for CI feedback
- 🔄 **Context switching**: 4-8 interruptions per day
- 😤 **Frustration**: Breaking builds, blocking teammates
- 💸 **Money**: CI minutes aren't free at scale
### The Solution
DevLoop runs intelligent agents **in the background** that catch issues **before commit**, not after push.
```bash
# Traditional workflow (slow feedback)
edit code → save → commit → push → wait for CI → ❌ fails
# DevLoop workflow (instant feedback)
edit code → save → ✅ agents run automatically → ✅ all checks pass → commit → push
```
**Key benefits:**
- 🎯 **Catch 90%+ of CI failures locally**[^1] before they reach your repository
- ⚡ **Sub-second feedback** on linting, formatting, type errors
- 🔒 **Pre-commit enforcement** prevents bad commits from ever being created
- 🧠 **Smart test selection** runs only affected tests, not the entire suite
- 💰 **Reduce CI costs** by 60%+[^2] through fewer pipeline runs
---
## Quick Win: 2-Minute Setup
Get value immediately with zero configuration:
```bash
pip install devloop
devloop init /path/to/project # Interactive setup
devloop watch . # Start monitoring
```
**What happens next:**
- ✅ Agents automatically run on file save
- ✅ Pre-commit hook prevents bad commits
- ✅ Issues caught before CI even runs
- ✅ Faster feedback = faster development
Try it on a side project first. See results in minutes, not days.
---
## Status & Trust Signals
🔬 **Alpha Release** — Feature-complete development automation system undergoing active testing and hardening.
### What's Working ✅
DevLoop has a **production-grade** foundation with 737+ passing tests:
- **Core stability**: Event system, agent coordination, context management - all battle-tested
- **Code quality**: Black, Ruff, mypy, pytest - works reliably across 1000s of file changes
- **Git integration**: Pre-commit hooks, CI monitoring - deployed in multiple projects
- **Security scanning**: Bandit, Snyk integration - catches real vulnerabilities
- **Performance**: Sub-second latency, <5% idle CPU, 50MB memory footprint
- **Resource management**: CPU/memory limits, process isolation, graceful degradation
**Real-world usage**: DevLoop developers use it daily to build DevLoop itself (dogfooding).
### Known Limitations ⚠️
DevLoop has been thoroughly tested (737+ tests) with production-grade implementations. Remaining limitations are minor:
| Risk | Current Status | Mitigation |
|------|---------------|------------|
| Auto-fix safety | Fully implemented with configurable safety levels (`safe_only`, `medium`, `all`) | Reviews available via git diff before commit |
| Resource isolation | Graceful CPU/memory limits with configurable thresholds | Use `resourceLimits` in `.devloop/agents.json` |
| Daemon restart | Automatic supervision and restart handling on failure | Logs available at `.devloop/devloop.log` |
| Config migrations | Automated with schema versioning system | Handled automatically on version upgrades |
[View complete risk assessment →](./history/RISK_ASSESSMENT.md)
### Recommended Use
✅ **Safe to use:**
- Side projects and personal code
- Development environments (not production systems)
- Testing automation workflows
- Learning about agent-based development
⚠️ **Use with caution:**
- Work projects (keep git backups)
- Auto-fix feature (review all changes)
❌ **Not recommended:**
- Production deployments
- Critical infrastructure code
- Untrusted/malicious codebases
**Best practice**: Try it on a side project first. See results in 2 minutes. Scale up when confident.
---
## How DevLoop Compares
**Why not just use CI/CD or pre-commit hooks?**
| Feature | CI/CD Only | Pre-commit Hooks | **DevLoop** |
|---------|-----------|------------------|-------------|
| **Feedback Speed** | 10-30 min | On commit only | **<1 second** (as you type) |
| **Coverage** | Full suite | Basic checks | **Comprehensive** (11 agents) |
| **Context Switching** | High (wait for CI) | Medium (at commit) | **Minimal** (background) |
| **CI Cost** | High (every push) | Medium (fewer failures) | **Low** (60%+[^2] reduction) |
| **Smart Test Selection** | ❌ Runs all tests | ❌ Manual selection | **✅ Automatic** |
| **Learning System** | ❌ Static rules | ❌ Static rules | **✅ Adapts** to your patterns |
| **Security Scanning** | ✅ On push | ❌ Rarely | **✅ On save** |
| **Performance Profiling** | ❌ Manual | ❌ Manual | **✅ Automatic** |
| **Auto-fix** | ❌ None | ⚠️ Limited | **✅ Configurable** safety levels |
**The DevLoop advantage**: Combines the comprehensiveness of CI with the speed of local checks, plus intelligence that neither provides.
**Real impact**:
- **Before DevLoop**: 6-8 CI failures per day[^3] × 15 min = 90-120 min wasted
- **After DevLoop**: 1-2 CI failures per day × 15 min = 15-30 min wasted
- **Time saved**: ~75-90 minutes per developer per day[^3]
---
## Features
DevLoop runs background agents that automatically:
### Code Quality & Testing
- **🔍 Linting & Type Checking** — Detect issues as you code (mypy, custom linters)
- **📝 Code Formatting** — Auto-format files with Black, isort, and more
- **✅ Testing** — Run relevant tests on file changes
### Security & Performance
- **🔐 Security Scanning** — Find vulnerabilities with Bandit
- **⚡ Performance** — Track performance metrics and detect regressions
### Workflow & Documentation
- **📚 Documentation** — Keep docs in sync with code changes
- **🎯 Git Integration** — Generate smart commit messages
- **🤖 Custom Agents** — Create no-code agents via builder pattern
### Agent Marketplace (NEW!)
- **🏪 Agent Marketplace** — Discover and share agents with the community
- **📦 Agent Publishing** — Publish your agents with semantic versioning & signing
- **✍️ Cryptographic Signing** — SHA256 checksums + directory hashing for tamper detection
- **⭐ Ratings & Reviews** — Community ratings, user reviews, and agent statistics
- **🔍 Agent Discovery** — Full-text search, category filtering, install tracking
- **🔄 Version Management** — Manage agent versions, deprecation notices, and updates
- **🛠️ Tool Dependencies** — Automatic dependency resolution for agent tools
- **🌐 REST API Server** — Run a local/remote marketplace with HTTP API
### IDE & Editor Integration
- **VSCode Extension** — Real-time agent feedback with inline quick fixes and status bar integration
- **LSP Server** — Language Server Protocol for multi-editor support
- **Agent Status Display** — View findings and metrics directly in your editor
### Developer Experience & Reliability
- **Daemon Supervision** — Automatic process monitoring and restart handling
- **Transactional I/O** — Atomic writes, checksums, corruption recovery
- **Config Schema Versioning** — Automatic migration between configuration versions
- **Self-healing Filesystem** — Detects and repairs corrupted data files
- **Event Logging** — Structured SQLite audit trail with 30-day retention
### Workflow Integration
- **Beads Task Integration** — Auto-create issues from detected patterns
- **Amp Thread Context** — Cross-thread pattern detection and analytics
- **Multi-CI Support** — GitHub Actions, GitLab CI, Jenkins, CircleCI
- **Multi-Registry Support** — PyPI, npm, Docker, and custom registries
### Advanced Features
- **📊 Learning System** — Automatically learn patterns and optimize behavior
- **🔄 Auto-fix** — Safely apply fixes (configurable safety levels)
- **🔐 Token Security** — Secure credential management with OAuth2 and validation
- **🧹 Cache Management** — Smart cleanup of stale caches and temporary files
All agents run **non-intrusively in the background**, respecting your workflow.
---
## Quick Start
### ⚠️ Before You Start
**ALPHA SOFTWARE DISCLAIMER:**
- This is research-quality code. Data loss is possible.
- Only use on projects you can afford to lose or easily recover.
- Make sure to commit your code to git before enabling DevLoop.
- Do not enable auto-fix on important code.
- Some agents may fail silently (see logs for details).
### Installation
**Prerequisites:**
- Python 3.11 or later
- For release workflow: Poetry 1.7+ and GitHub CLI 2.78+
#### Option 1: From PyPI (Recommended)
```bash
# Basic installation (all default agents)
pip install devloop
# With marketplace API server
pip install devloop[marketplace-api]
# With optional agents (Snyk security scanning)
pip install devloop[snyk]
# With multiple optional agents
pip install devloop[snyk,code-rabbit,marketplace-api]
# With all optional agents
pip install devloop[all-optional]
```
**Available extras:**
- `marketplace-api` — Marketplace registry HTTP server and publishing tools (FastAPI + uvicorn)
- `snyk` — Dependency vulnerability scanning via Snyk CLI
- `code-rabbit` — AI-powered code analysis
- `ci-monitor` — CI/CD pipeline monitoring
- `all-optional` — All of the above
**Optional sandbox enhancements:**
- **Pyodide WASM Sandbox** (cross-platform Python sandboxing)
- Requires: Node.js 18+ (system dependency)
- Install: See [Pyodide Installation Guide](./docs/PYODIDE_INSTALLATION.md)
- Works in POC mode without installation for testing
#### System Dependencies
DevLoop automatically detects and uses several system tools. Install them for full functionality:
**For Pre-Push CI Verification (Optional but Recommended):**
```bash
# GitHub CLI 2.78+ (for checking CI status before push)
# Ubuntu/Debian:
sudo apt-get install -y gh
# macOS:
brew install gh
# Verify installation
gh --version
```
**For Release Management (Optional but Recommended for Publishing):**
```bash
# Poetry 1.7+ (for package management and publishing)
curl -sSL https://install.python-poetry.org | python3 -
# Verify installation
poetry --version
# Configure PyPI credentials (get token from https://pypi.org/account/)
poetry config pypi-token.pypi "pypi-AgEIcHlwaS5vcmc..."
```
**For Task Management Integration (Optional):**
```bash
# Beads task tracking (integrates findings with task queue)
pip install beads-mcp
```
**What happens if missing:**
- `gh` (2.78+): Pre-push CI verification is skipped (but DevLoop still works)
- `poetry` (1.7+): Release workflow unavailable (but development still works)
- `bd`: Task creation on push won't work (but DevLoop still works)
DevLoop will warn you during `devloop init` if any tools are missing and provide installation instructions. You can install them later and they'll be detected automatically.
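The detection step is easy to sketch with the standard library. This is a hypothetical illustration of how a check like the one in `devloop init` could work, not DevLoop's actual implementation; the tool names mirror the list above:

```python
import shutil

# Optional tools from the table above; minimum-version checks omitted.
OPTIONAL_TOOLS = ("gh", "poetry", "bd")

def missing_tools(tools=OPTIONAL_TOOLS) -> list[str]:
    """Return the names of tools that are not on PATH."""
    return [name for name in tools if shutil.which(name) is None]

for name in missing_tools():
    print(f"warning: {name} not found; related features will be skipped")
```

Because the check only consults `PATH`, installing a tool later is picked up automatically on the next run, which matches the behavior described above.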
#### Option 2: From Source
```bash
# Clone the repository
git clone https://github.com/wioota/devloop
cd devloop
# Install poetry (if needed)
curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies
poetry install
# Activate virtual environment
poetry shell
```
### Initialize & Run (Fully Automated)
```bash
# 1. Initialize in your project (interactive setup)
devloop init /path/to/your/project
```
The `init` command will:
- ✅ Set up .devloop directory with default agents
- ✅ Ask which optional agents you want to enable:
- **Snyk** — Scan dependencies for vulnerabilities
- **Code Rabbit** — AI-powered code analysis
- **CI Monitor** — Track CI/CD pipeline status
- ✅ Create configuration file with your selections
- ✅ Set up git hooks (if git repo)
- ✅ Register the Amp integration (if running inside Amp)
```bash
# 1a. Alternative: Non-interactive setup (skip optional agent prompts)
devloop init /path/to/your/project --non-interactive
```
Then just:
```bash
# 2. Start watching for changes
cd /path/to/your/project
devloop watch .
# 3. Make code changes and watch agents respond
```
**That's it!** No manual configuration needed. DevLoop will automatically monitor your project, run agents on file changes, and enforce commit discipline.
[View the installation automation details →](./INSTALLATION_AUTOMATION.md)
### Common Commands
```bash
# Watch a directory for changes
devloop watch .
# Show agent status and health
devloop status
# Agent publishing and management
devloop agent publish ./my-agent # Publish agent to marketplace
devloop agent check ./my-agent # Check readiness to publish
devloop agent version ./my-agent patch # Bump version (major/minor/patch)
devloop agent verify ./my-agent # Verify agent signature
devloop agent info ./my-agent --signature # Show agent metadata & signature
devloop agent deprecate my-agent -m "Use new-agent" # Mark version as deprecated
devloop agent sign ./my-agent # Cryptographically sign agent
# Marketplace server management
devloop marketplace server start --port 8000 # Start HTTP registry server
devloop marketplace server stop # Stop running server
devloop marketplace status # Show registry statistics
devloop marketplace install my-agent-name 1.0.0 # Install agent from registry
devloop marketplace search "formatter" # Search agents in registry
devloop marketplace list-categories # List available categories
# Tool dependency management
devloop agent dependencies check ./my-agent # Verify all dependencies available
devloop agent dependencies resolve ./my-agent # Install missing dependencies
devloop agent dependencies list ./my-agent # Show agent's dependencies
# View current findings in Amp
/agent-summary # Recent findings
/agent-summary today # Today's findings
/agent-summary --agent linter --severity error
# Create a custom agent
devloop custom-create my_agent pattern_matcher
```
[View all CLI commands →](./docs/cli-commands.md)
### Verify Installation & Version Compatibility
After installation, verify everything is working:
```bash
# Check DevLoop version
devloop --version
# Verify system dependencies are detected
devloop init --check-requirements
# Check daemon status (if running)
devloop status
# Verify git hooks are installed (in your project)
cat .git/hooks/pre-commit # Should exist
cat .git/hooks/pre-push # Should exist
```
**Version compatibility:**
- DevLoop 0.4.1+ requires Python 3.11+
- Release workflow requires Poetry 1.7+ and GitHub CLI 2.78+
- AGENTS.md template was updated in DevLoop 0.4.0+
**If you're upgrading DevLoop:**
```bash
# Upgrade to latest
pip install --upgrade devloop
# Update your project's AGENTS.md (templates may have changed)
devloop init --merge-templates /path/to/your/project
# Restart the daemon
devloop stop
devloop watch .
```
**📖 See [docs/UPGRADE_GUIDE.md](docs/UPGRADE_GUIDE.md) for:**
- Detailed upgrade procedures
- Version compatibility matrix
- Breaking changes and migrations
- Rollback instructions
- Troubleshooting
---
## Architecture
```
File Changes → Collectors → Event Bus → Agents → Results
 (Filesystem)  (Git, Etc)   (Pub/Sub)  (11 built-in + custom)
↓
Context Store
(shared state)
↓
Findings & Metrics
```
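The pipeline above can be illustrated with a minimal pub/sub bus. This is a sketch of the flow only, with made-up names — not DevLoop's actual Event Bus API:

```python
import asyncio
from collections import defaultdict

class MiniEventBus:
    """Collectors publish to topics; agents subscribe and produce findings."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    async def publish(self, topic, payload):
        # Fan out to all subscribed agents concurrently.
        await asyncio.gather(*(h(payload) for h in self._handlers[topic]))

async def main():
    bus = MiniEventBus()
    findings = []

    async def linter_agent(event):
        findings.append(f"lint:{event['path']}")

    bus.subscribe("file:modified", linter_agent)
    await bus.publish("file:modified", {"path": "app.py"})
    return findings

print(asyncio.run(main()))
```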
### Core Components
| Component | Purpose |
|-----------|---------|
| **Event Bus** | Pub/sub system for agent coordination |
| **Collectors** | Monitor filesystem, git, process, system events |
| **Agents** | Process events and produce findings |
| **Context Store** | Shared development context |
| **CLI** | Command-line interface and Amp integration |
| **Config** | JSON-based configuration system |
[Read the full architecture guide →](./docs/architecture.md)
---
## Agents
DevLoop includes **11 built-in agents** out of the box:
### Code Quality
- **Linter Agent** — Runs linters on changed files
- **Formatter Agent** — Auto-formats code (Black, isort, etc.)
- **Type Checker Agent** — Background type checking (mypy)
- **Code Rabbit Agent** — AI-powered code analysis and insights
### Testing & Security
- **Test Runner Agent** — Runs relevant tests on changes
- **Security Scanner Agent** — Detects code vulnerabilities (Bandit)
- **Snyk Agent** — Scans dependencies for known vulnerabilities
- **Performance Profiler Agent** — Tracks performance metrics
### Development Workflow
- **Git Commit Assistant** — Suggests commit messages
- **CI Monitor Agent** — Tracks GitHub Actions status
- **Doc Lifecycle Agent** — Manages documentation organization
### Custom Agents
Create your own agents without writing code:
```python
from devloop.core.custom_agent import AgentBuilder, CustomAgentType
# Create a custom pattern matcher
config = (
    AgentBuilder("todo_finder", CustomAgentType.PATTERN_MATCHER)
    .with_description("Find TODO comments")
    .with_triggers("file:created", "file:modified")
    .with_config(patterns=[r"#\s*TODO:.*"])
    .build()
)
```
[View agent architecture and categories →](./ARCHITECTURE.md)
---
## Agent Marketplace
DevLoop includes a complete **agent marketplace** for discovering, publishing, and managing community agents.
### Publishing Your Agent
Share your custom agents with the community:
```bash
# Publish an agent to the marketplace
devloop agent publish ./my-agent
# Check if agent is ready to publish
devloop agent check ./my-agent
# Bump version (semantic versioning)
devloop agent version ./my-agent minor
# Deprecate an old version
devloop agent deprecate my-agent --message "Use my-agent-v2 instead"
```
### Agent Signing & Verification
DevLoop automatically signs agents for integrity and tamper detection:
```bash
# Agent signing is automatic (SHA256 checksums + directory hashing)
# Verify agent authenticity
devloop agent verify ./my-agent
# View signature information
devloop agent info ./my-agent --signature
```
### Agent Ratings & Reviews
Community-driven quality metrics help you find reliable agents:
```bash
# View agent ratings and reviews
devloop agent reviews my-agent
# Rate an agent (1-5 stars)
devloop agent rate my-agent 5 --message "Works great, very fast!"
# List highest-rated agents in a category
devloop marketplace search --category code-quality --sort rating
# View detailed agent statistics
devloop agent info my-agent --reviews --stats
```
**Ratings help you:**
- Find trusted, well-maintained agents
- Avoid buggy or abandoned agents
- Give feedback to agent developers
- Build community trust and transparency
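The "average + count" rating shape shown in agent metadata can be maintained incrementally, without storing every vote. A small sketch of the arithmetic (not the registry's actual code):

```python
def add_rating(average: float, count: int, new_rating: float) -> tuple[float, int]:
    """Fold one new 1-5 star rating into a running average."""
    if not 1.0 <= new_rating <= 5.0:
        raise ValueError("ratings are 1-5 stars")
    return (average * count + new_rating) / (count + 1), count + 1

# e.g. an agent rated 4.5 over 120 reviews receiving another 5-star review
avg, n = add_rating(4.5, 120, 5.0)
```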
### Marketplace Registry API
Programmatically discover and manage agents:
```python
from devloop.marketplace import RegistryAPI, create_registry_client
from pathlib import Path
# Initialize
client = create_registry_client(Path("~/.devloop/registry"))
api = RegistryAPI(client)
# Search agents
response = api.search_agents(query="formatter", categories=["formatting"])
print(f"Found {response.data['total_results']} agents")
# Get agent details
response = api.get_agent("my-formatter")
if response.success:
    print(f"Rating: {response.data['rating']['average']}")
# Rate an agent
api.rate_agent("my-formatter", 5.0)
```
### Marketplace HTTP Server
Run a local marketplace registry with REST API endpoints:
```bash
# Start the marketplace server (persistent background service)
devloop marketplace server start --port 8000
# With additional options
devloop marketplace server start --port 8000 --host 0.0.0.0 --workers 4
# View server logs
devloop marketplace server logs
# Stop the running server
devloop marketplace server stop
# Interactive API docs (Swagger UI) at http://localhost:8000/docs
# Alternative API reference (ReDoc) at http://localhost:8000/redoc
```
**Available REST API endpoints:**
- `GET /api/v1/agents/search?q=formatter&category=code-quality` — Search agents with filters
- `GET /api/v1/agents/{name}` — Get agent details including ratings
- `GET /api/v1/agents/{name}/versions` — List all versions of an agent
- `POST /api/v1/agents` — Register new agent with metadata
- `POST /api/v1/agents/{name}/rate` — Rate an agent (1-5 stars)
- `POST /api/v1/agents/{name}/review` — Leave a text review
- `GET /api/v1/agents/{name}/reviews` — Get agent reviews and ratings
- `GET /api/v1/categories` — List available categories
- `GET /api/v1/stats` — Registry statistics (agent count, total installations, etc.)
- `POST /api/v1/install/{name}/{version}` — Record agent installation
[Full marketplace API documentation →](./docs/MARKETPLACE_API.md)
### Tool Dependency Management
Agents can declare and manage their tool dependencies (binaries, packages, services):
```bash
# Check if all dependencies are available
devloop agent dependencies check ./my-agent
# Automatically resolve and install missing dependencies
devloop agent dependencies resolve ./my-agent
# List declared dependencies
devloop agent dependencies list ./my-agent
```
**Declaring dependencies in agent metadata:**
```json
{
"name": "security-scanner",
"version": "2.0.0",
"toolDependencies": {
"bandit": {
"type": "python",
"minVersion": "1.7.0",
"package": "bandit"
},
"shellcheck": {
"type": "binary",
"minVersion": "0.8.0",
"install": "apt-get install shellcheck"
},
"npm": {
"type": "npm-global",
"minVersion": "8.0.0",
"package": "npm"
}
},
"pythonVersion": ">=3.11",
"devloopVersion": ">=0.5.0"
}
```
**Supported dependency types:**
- `python` — Python packages (installed via pip)
- `npm-global` — npm packages installed globally
- `binary` — System binaries/executables
- `venv` — Virtual environment executables
- `docker` — Docker images
When installing an agent, DevLoop automatically detects missing dependencies and prompts you to install them.
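The `minVersion` checks above come down to numeric tuple comparison. A sketch, assuming plain dotted versions (DevLoop's real resolver may handle more, e.g. pre-release tags):

```python
def parse_version(version: str) -> tuple[int, ...]:
    """'1.7.0' -> (1, 7, 0); numeric components only."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed: str, min_version: str) -> bool:
    """Tuple comparison gets '1.10.0' >= '1.9.0' right, unlike string comparison."""
    return parse_version(installed) >= parse_version(min_version)
```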
### Agent Metadata Schema
```json
{
"name": "my-agent",
"version": "1.0.0",
"description": "What this agent does",
"author": "Your Name",
"license": "MIT",
"homepage": "https://example.com",
"repository": "https://github.com/you/my-agent",
"categories": ["code-quality"],
"keywords": ["quality", "analysis"],
"pythonVersion": ">=3.11",
"devloopVersion": ">=0.5.0",
"toolDependencies": {
"tool-name": {
"type": "python|binary|npm-global|docker",
"minVersion": "1.0.0",
"package": "package-name",
"install": "apt-get install tool-name"
}
},
"configSchema": {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"severity": {"type": "string", "enum": ["low", "medium", "high"]}
}
},
"publishedAt": "2025-12-13T10:30:00Z",
"deprecated": false,
"deprecationMessage": null,
"maintainer": "username",
"rating": {
"average": 4.5,
"count": 120
}
}
```
**Schema explanation:**
- `toolDependencies` — External tools/packages this agent requires
- `configSchema` — JSON schema defining agent configuration options
- `publishedAt` — When agent was first published
- `deprecated` — Whether agent is deprecated and shouldn't be used
- `deprecationMessage` — Suggested alternative if deprecated
- `maintainer` — DevLoop username of agent maintainer
- `rating` — Community ratings and review count (auto-updated)
---
## VSCode Extension
DevLoop provides a VSCode extension for real-time agent feedback directly in your editor.
### Installation
**Option 1: From VSCode Marketplace**
```
Open VSCode → Extensions → Search "devloop" → Click Install
```
**Option 2: Manual Installation**
```bash
# Install from the devloop repository
git clone https://github.com/wioota/devloop
cd devloop/vscode-extension
npm install
npm run compile
# Extension is now installed in ~/.vscode/extensions/devloop-*
```
### Features
- **Real-time Findings** — View linting, type checking, and security issues inline
- **Quick Fixes** — Apply auto-fixes directly from the editor
- **Status Bar** — Shows agent status, finding count, and health metrics
- **Diagnostics Panel** — Detailed findings organized by agent and severity
- **Multi-language Support** — Python, JavaScript, TypeScript, and more
### Usage
Once installed, DevLoop automatically:
1. Monitors your editor for file changes
2. Runs background agents via the LSP server
3. Displays findings as inline diagnostics
4. Provides quick fix actions for auto-fixable issues
**View findings:**
- Hover over squiggly lines to see details
- Click quick fix actions to apply changes
- Open Problems panel (Ctrl+Shift+M) to see all findings
- Check status bar for agent health
**Configuration:**
Extension settings are automatically synced with `.devloop/agents.json`. No separate configuration needed.
---
### Code Rabbit Integration
Code Rabbit Agent provides AI-powered code analysis with insights on code quality, style, and best practices.
**Setup:**
```bash
# 1. Install code-rabbit CLI
npm install -g @code-rabbit/cli
# or
pip install code-rabbit
# 2. Set your API key
export CODE_RABBIT_API_KEY="your-api-key-here"
# 3. Agent runs automatically on file changes
# Results appear in agent findings and context store
```
**Configuration:**
```json
{
"code-rabbit": {
"enabled": true,
"triggers": ["file:modified", "file:created"],
"config": {
"apiKey": "${CODE_RABBIT_API_KEY}",
"minSeverity": "warning",
"filePatterns": ["**/*.py", "**/*.js", "**/*.ts"]
}
}
}
```
**Features:**
- Real-time code analysis as you type
- AI-generated insights on code improvements
- Integration with DevLoop context store
- Configurable severity filtering
- Automatic debouncing to avoid excessive runs
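Debouncing means collapsing a burst of rapid events into a single agent run after a quiet period. A minimal asyncio sketch — illustrative only, not DevLoop's code, with the delay chosen arbitrarily:

```python
import asyncio

class Debouncer:
    """Run `action` once, `delay` seconds after the *last* trigger."""

    def __init__(self, delay: float):
        self.delay = delay
        self._task = None

    def trigger(self, action):
        # Each new trigger cancels the pending run and restarts the timer.
        if self._task is not None:
            self._task.cancel()
        self._task = asyncio.get_running_loop().create_task(self._fire(action))

    async def _fire(self, action):
        await asyncio.sleep(self.delay)
        await action()
```

Three rapid file saves inside the window therefore produce one analysis run instead of three.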
### Snyk Integration
Snyk Agent provides security vulnerability scanning for project dependencies across multiple package managers.
**Setup:**
```bash
# 1. Install snyk CLI
npm install -g snyk
# or
brew install snyk
# 2. Authenticate with Snyk (creates ~/.snyk token)
snyk auth
# 3. Set your API token for DevLoop
export SNYK_TOKEN="your-snyk-token"
# 4. Agent runs automatically on dependency file changes
# Results appear in agent findings and context store
```
**Configuration:**
```json
{
"snyk": {
"enabled": true,
"triggers": ["file:modified", "file:created"],
"config": {
"apiToken": "${SNYK_TOKEN}",
"severity": "high",
"filePatterns": [
"**/package.json",
"**/requirements.txt",
"**/Gemfile",
"**/pom.xml",
"**/go.mod",
"**/Cargo.toml"
]
}
}
}
```
**Features:**
- Scans all major package managers (npm, pip, Ruby, Maven, Go, Rust)
- Detects known security vulnerabilities in dependencies
- Shows CVSS scores and fix availability
- Integration with DevLoop context store
- Configurable severity filtering (critical/high/medium/low)
- Automatic debouncing to avoid excessive scans
**Supported Package Managers:**
- **npm** / **yarn** / **pnpm** (JavaScript/Node.js)
- **pip** / **pipenv** / **poetry** (Python)
- **bundler** (Ruby)
- **maven** / **gradle** (Java)
- **go mod** (Go)
- **cargo** (Rust)
---
## Multi-CI/Registry Provider System
DevLoop uses a provider abstraction layer for CI/CD and package registry support. This means you can use DevLoop with any CI system and publish to any package registry.
### Supported CI Platforms
DevLoop auto-detects and works with:
- **GitHub Actions** — Default, with pre-push CI verification
- **GitLab CI/CD** — Full support with pipeline status monitoring
- **Jenkins** — Classic and declarative pipelines
- **CircleCI** — OAuth2 and API token authentication
- **Custom CI** — Via manual configuration
### Supported Package Registries
Publish agents and packages to:
- **PyPI** — Python Package Index (via Poetry or Twine)
- **npm** — Node Package Manager
- **Docker Registry** — Docker Hub or custom registries
- **GitHub Releases** — Attach artifacts to releases
- **Custom Registries** — Via manual configuration (Artifactory, etc.)
### Release Workflow
DevLoop provides a unified release process across all providers:
```bash
# Check if ready to release
devloop release check 1.2.3
# Publish to detected CI/registry
devloop release publish 1.2.3
# Specify explicit providers
devloop release publish 1.2.3 --ci github --registry pypi
# Dry-run to see what would happen
devloop release publish 1.2.3 --dry-run
```
The release workflow automatically:
1. Validates preconditions (clean git, passing CI, valid version)
2. Creates annotated git tag
3. Publishes to registry
4. Pushes tag to remote repository
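The tag-and-push portion of these steps (2 and 4) can be sketched with plain git invocations. This is illustrative, not the `devloop release` implementation:

```python
import subprocess

def release_commands(version: str) -> list[list[str]]:
    """Annotated tag plus push, as git command lines."""
    return [
        ["git", "tag", "-a", f"v{version}", "-m", f"Release {version}"],
        ["git", "push", "origin", f"v{version}"],
    ]

def run_release(version: str, dry_run: bool = False) -> None:
    for cmd in release_commands(version):
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```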
[Full provider documentation →](./docs/PROVIDER_SYSTEM.md)
---
## Configuration
Configure agent behavior in `.devloop/agents.json`:
```json
{
"global": {
"autonomousFixes": {
"enabled": false,
"safetyLevel": "safe_only"
},
"maxConcurrentAgents": 5,
"resourceLimits": {
"maxCpu": 25,
"maxMemory": "500MB"
}
},
"agents": {
"linter": {
"enabled": true,
"triggers": ["file:save", "git:pre-commit"],
"config": {
"debounce": 500,
"filePatterns": ["**/*.py"]
}
}
}
}
```
**Safety levels (Auto-fix):**
- `safe_only` — Only fix whitespace/indentation (default, recommended)
- `medium_risk` — Include import/formatting fixes
- `all` — Apply all fixes (use with caution)
⚠️ **Auto-fix Warning:** Currently auto-fixes run without backups or review. **DO NOT enable auto-fix in production** or on critical code. Track [secure auto-fix with backups issue](https://github.com/wioota/devloop/issues/emc).
### Token Security
DevLoop securely manages API keys and tokens for agent integrations:
```bash
# Use environment variables for all credentials
export SNYK_TOKEN="your-token"
export CODE_RABBIT_API_KEY="your-key"
export GITHUB_TOKEN="your-token"
# DevLoop automatically:
# - Hides tokens in logs and process lists
# - Validates token format and expiry
# - Warns about placeholder values ("changeme", "token", etc.)
# - Never logs full token values
```
**Best practices:**
- ✅ Use environment variables (never command-line arguments)
- ✅ Enable token expiry and rotation (30-90 days recommended)
- ✅ Use read-only or project-scoped tokens when possible
- ✅ Store tokens in `.env` file (add to `.gitignore`)
- ❌ Never commit tokens to git
- ❌ Never pass tokens as command arguments
- ❌ Never hardcode tokens in code
**Token validation:**
```bash
# DevLoop validates token format during initialization
devloop init /path/to/project
# View token status
devloop status --show-token-info
```
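Placeholder detection and log redaction are simple to illustrate. The placeholder list and the four-character prefix below are assumptions for the sketch, not DevLoop's actual rules:

```python
# Assumed placeholder values; DevLoop's real list may differ.
PLACEHOLDER_VALUES = {"changeme", "token", "your-token", "xxx", "secret"}

def looks_like_placeholder(value: str) -> bool:
    return value.strip().lower() in PLACEHOLDER_VALUES

def redact(value: str) -> str:
    """Log-safe form: keep a short prefix, mask the rest."""
    return value[:4] + "****" if len(value) > 8 else "********"
```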
[Full token security guide →](./docs/TOKEN_SECURITY.md)
[Full configuration reference →](./docs/configuration.md)
---
## Integration with Beads
DevLoop integrates with [Beads](https://github.com/wioota/devloop) task tracking to create actionable work items from detected patterns.
### Auto-Issue Creation
DevLoop automatically creates Beads issues for significant findings:
```bash
# View auto-created issues from DevLoop findings
bd ready # Shows unblocked work
bd show bd-abc123 # View specific issue created by DevLoop
```
**What gets tracked:**
- Security vulnerabilities (high/critical only)
- Performance regressions
- Pattern discoveries (e.g., "same issue found 3 times")
- Failing tests in CI
- Deprecated dependencies
**Issue linking:**
DevLoop uses `discovered-from` dependencies to link:
- Findings → Beads issues → Original agent
- Patterns across multiple findings
- Root cause analysis chains
### Thread Context Capture
When using DevLoop in [Amp](https://ampcode.com) threads:
```bash
# Automatically captures thread context (if AMP_THREAD_ID is set)
devloop watch .
```
DevLoop logs:
- Thread ID and URL
- Commands executed
- Agent findings and results
- Patterns detected across sessions
This enables cross-thread pattern detection:
> "This type of error appeared in 5 different threads — likely a documentation gap"
---
## Event Logging & Observability
DevLoop maintains a complete audit trail of agent activity for debugging and analysis.
### Event Store
All agent actions are logged to an SQLite event store in `.devloop/events.db`:
```bash
# View recent agent activity
devloop audit query --limit 20
# Filter by agent
devloop audit query --agent linter
# View agent health metrics
devloop health
```
**Event data includes:**
- Agent name and execution time
- Success/failure status
- Finding count and types
- Resource usage (CPU, memory)
- Timestamps and correlations
### Log Files
Application logs are stored in `.devloop/devloop.log` with rotation:
```bash
# View logs in real-time
tail -f .devloop/devloop.log
# View verbose logs during watch
devloop watch . --verbose --foreground
# Check log disk usage
du -sh .devloop/devloop.log*
```
**Log rotation:**
- Max file size: 100MB
- Keep 3 backups (300MB max)
- Auto-cleanup logs older than 7 days
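The size-based part of this policy maps directly onto the standard library's `RotatingFileHandler` (the path below is a temp-dir stand-in for `.devloop/devloop.log`; the age-based cleanup would need a separate job):

```python
import logging
import tempfile
from logging.handlers import RotatingFileHandler
from pathlib import Path

log_path = Path(tempfile.mkdtemp()) / "devloop.log"
handler = RotatingFileHandler(
    log_path,
    maxBytes=100 * 1024 * 1024,  # rotate at 100MB
    backupCount=3,               # keep devloop.log.1 .. devloop.log.3
)
logger = logging.getLogger("devloop-example")
logger.addHandler(handler)
logger.warning("agent run complete")
handler.flush()
```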
---
## CI/CD Integration
DevLoop includes GitHub Actions integration with automated security scanning.
### GitHub Actions Workflow
The default CI pipeline includes:
1. **Tests** — Run pytest on Python 3.11 & 3.12
2. **Lint** — Check code formatting (Black) and style (Ruff)
3. **Type Check** — Verify type safety with mypy
4. **Security (Bandit)** — Scan code for security issues
5. **Security (Snyk)** — Scan dependencies for vulnerabilities
### Setting Up Snyk in CI
To enable Snyk scanning in your CI pipeline:
**1. Get a Snyk API Token:**
```bash
# Create account at https://snyk.io
# Get token from https://app.snyk.io/account/
```
**2. Add token to GitHub secrets:**
```bash
# In your GitHub repository:
# Settings → Secrets and variables → Actions
# Add new secret: SNYK_TOKEN = your-token
```
**3. Snyk job runs automatically:**
- Scans all dependencies for known vulnerabilities
- Fails build if high/critical vulnerabilities found
- Uploads report as artifact for review
- Works with all supported package managers
**Configuration:**
- **Severity threshold:** high (fails on critical or high)
- **Supported managers:** npm, pip, Ruby, Maven, Go, Rust
- **Report:** `snyk-report.json` available as artifact
---
## Usage Examples
### Example 1: Auto-Format on Save
```bash
# Agent automatically runs Black, isort when you save a file
echo "x=1" > app.py # Auto-formatted to x = 1
# View findings
/agent-summary recent
```
### Example 2: Run Tests on Changes
```bash
# Test runner agent detects changed test files
# Automatically runs: pytest path/to/changed_test.py
# Or view all test results
/agent-summary --agent test-runner
```
### Example 3: Create Custom Pattern Matcher
```bash
# Create agent to find TODO comments
devloop custom-create find_todos pattern_matcher \
--description "Find TODO comments" \
--triggers file:created,file:modified
# List your custom agents
devloop custom-list
```
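Under the hood, a `pattern_matcher` agent amounts to running the configured regex over changed files. An illustrative reimplementation (not DevLoop's code), reusing the pattern from the builder example above:

```python
import re

TODO_PATTERN = re.compile(r"#\s*TODO:.*")  # same pattern as the AgentBuilder example

def find_todos(source: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_text) for each TODO comment."""
    return [
        (lineno, match.group(0))
        for lineno, line in enumerate(source.splitlines(), start=1)
        if (match := TODO_PATTERN.search(line))
    ]
```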
### Example 4: Learn & Optimize
```bash
# View learned patterns
devloop learning-insights --agent linter
# Get recommendations
devloop learning-recommendations linter
# Check performance data
devloop perf-summary --agent formatter
```
[More examples →](./examples/)
---
## Testing
```bash
# Run all tests
poetry run pytest
# Run with coverage report
poetry run pytest --cov=devloop
# Run specific test file
poetry run pytest tests/unit/agents/test_linter.py -v
# Run tests with output
poetry run pytest -v
```
**Current status:** ✅ 737+ tests passing
[View test strategy →](./docs/testing.md)
---
## Development
### Project Structure
```
devloop/
├── src/devloop/
│ ├── core/ # Event system, agents, context
│ ├── collectors/ # Event collectors
│ ├── agents/ # Built-in agents
│ └── cli/ # CLI interface
├── tests/ # Unit and integration tests
├── docs/ # Documentation
├── examples/ # Usage examples
└── pyproject.toml # Poetry configuration
```
### Adding a New Agent
1. Create `src/devloop/agents/my_agent.py`:
```python
from devloop.core.agent import Agent, AgentResult
from devloop.core.event import Event
class MyAgent(Agent):
    async def handle(self, event: Event) -> AgentResult:
        # Your logic here
        return AgentResult(
            agent_name=self.name,
            success=True,
            duration=0.1,
            message="Processed successfully",
        )
```
2. Register in `src/devloop/cli/main.py`
3. Add tests in `tests/unit/agents/test_my_agent.py`
[Developer guide →](./docs/development.md)
### Code Style
- **Formatter:** Black
- **Linter:** Ruff
- **Type Checker:** mypy
- **Python Version:** 3.11+
Run formatters:
```bash
poetry run black src tests
poetry run ruff check --fix src tests
poetry run mypy src
```
---
## Documentation
### User Guides
- **[Getting Started Guide](./docs/getting-started.md)** — Installation and basic usage
- **[Architecture Guide](./docs/architecture.md)** — System design and components
- **[Configuration Guide](./docs/configuration.md)** — Full config reference
- **[CLI Commands](./docs/cli-commands.md)** — Command reference
### Agent Development
- **[Agent Development Guide](./docs/AGENT_DEVELOPMENT.md)** — Tutorial on creating agents with patterns and best practices
- **[Agent API Reference](./docs/AGENT_API_REFERENCE.md)** — Complete API documentation for all agent classes and interfaces
- **[Agent Examples](./docs/AGENT_EXAMPLES.md)** — Real-world examples from simple to advanced implementations
- **[Agent Troubleshooting](./docs/AGENT_TROUBLESHOOTING.md)** — Common issues and solutions
### Marketplace
- **[Marketplace Guide](./docs/MARKETPLACE_GUIDE.md)** — Discovering, installing, and publishing agents
- **[Marketplace API Guide](./docs/MARKETPLACE_API.md)** — Agent registry API reference
- **[Agent Reference](./ARCHITECTURE.md)** — Agent categories and architecture
### Advanced
- **[Development Guide](./docs/development.md)** — Contributing guide
- **[Implementation Status](./IMPLEMENTATION_STATUS.md)** — What's implemented
- **[Learning & Optimization](./PHASE3_COMPLETE.md)** — Advanced features
---
## Design Principles
DevLoop follows these core principles:
✅ **Non-Intrusive** — Runs in background without blocking workflow
✅ **Event-Driven** — All actions triggered by observable events
✅ **Configurable** — Fine-grained control over agent behavior
✅ **Context-Aware** — Understands your project structure
✅ **Parallel** — Multiple agents run concurrently
✅ **Lightweight** — Respects system resources
[Read the AI agent workflow guide →](./AGENTS.md) | [System architecture →](./ARCHITECTURE.md)
---
## Troubleshooting
### ⚠️ If Something Goes Wrong
**Recovery steps:**
1. Stop the daemon: `devloop stop .`
2. Check the logs: `tail -100 .devloop/devloop.log`
3. Verify your code in git: `git status`
4. Recover from git if files were modified: `git checkout <file>`
5. Report the issue: [GitHub Issues](https://github.com/wioota/devloop/issues)
### Agents not running
```bash
# Check status
devloop status
# View logs
```
 | text/markdown | DevLoop Contributors | devloop@example.com | null | null | MIT | agents, development, automation, code-quality, testing, linting, security, continuous-integration, devops | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Build Tools",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Utilities"
] | [] | https://github.com/wioota/devloop | null | <4.0,>=3.11 | [] | [] | [] | [
"pydantic<3.0,>=2.5",
"watchdog<4.0,>=3.0",
"typer<1.0,>=0.15",
"rich<14.0,>=13.7",
"aiofiles<24.0,>=23.2",
"psutil<6.0,>=5.9",
"pygls<2.0.0,>=1.3.0",
"lsprotocol<2024.0.0,>=2023.0.0",
"mcp<2.0.0,>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/wioota/devloop",
"Repository, https://github.com/wioota/devloop",
"Documentation, https://github.com/wioota/devloop#readme"
] | poetry/2.2.1 CPython/3.12.3 Linux/5.15.167.4-microsoft-standard-WSL2 | 2026-02-21T09:19:43.185272 | devloop-0.10.0.tar.gz | 303,917 | 90/53/f56be176e1dd0f6a4ed9bf398caf4dc09832b203bdfd0b4a7da6ffceffbf/devloop-0.10.0.tar.gz | source | sdist | null | false | 5f3a85b2e460b4c53df791e6814e3b25 | 53b94d7412bdc6295a80597d87fc83c5cb142ecfda788f1871190d4d22eb37d2 | 9053f56be176e1dd0f6a4ed9bf398caf4dc09832b203bdfd0b4a7da6ffceffbf | null | [] | 222 |
2.4 | undoc | 0.1.16 | High-performance Microsoft Office document extraction to Markdown | # undoc
High-performance Microsoft Office document extraction to Markdown.
## Installation
```bash
pip install undoc
```
## Usage
### Basic Usage
```python
from undoc import parse_file
# Parse a document
doc = parse_file("document.docx")
# Convert to Markdown
markdown = doc.to_markdown()
print(markdown)
# Convert to plain text
text = doc.to_text()
# Convert to JSON
json_data = doc.to_json()
```
### With Context Manager
```python
from undoc import parse_file
with parse_file("document.xlsx") as doc:
    print(doc.to_markdown(frontmatter=True))
    print(f"Sections: {doc.section_count}")
    print(f"Resources: {doc.resource_count}")
```
### Parse from Bytes
```python
from undoc import parse_bytes
with open("document.pptx", "rb") as f:
    data = f.read()

doc = parse_bytes(data)
markdown = doc.to_markdown()
```
### Extract Resources (Images)
```python
from undoc import parse_file
doc = parse_file("document.docx")
# Get all resource IDs
resource_ids = doc.get_resource_ids()
for rid in resource_ids:
    # Get resource metadata
    info = doc.get_resource_info(rid)
    print(f"Resource: {info['filename']} ({info['mime_type']})")

    # Get resource binary data
    data = doc.get_resource_data(rid)

    # Save to file
    with open(info['filename'], 'wb') as f:
        f.write(data)
```
### Document Metadata
```python
from undoc import parse_file
doc = parse_file("document.docx")
print(f"Title: {doc.title}")
print(f"Author: {doc.author}")
print(f"Sections: {doc.section_count}")
print(f"Resources: {doc.resource_count}")
```
## Supported Formats
- **DOCX** - Microsoft Word documents
- **XLSX** - Microsoft Excel spreadsheets
- **PPTX** - Microsoft PowerPoint presentations
## Features
- **RAG-Ready Output**: Structured Markdown optimized for RAG/LLM applications
- **High Performance**: Native Rust implementation via FFI
- **Asset Extraction**: Images and embedded resources
- **Metadata Preservation**: Document properties, styles, formatting
- **Cross-Platform**: Windows, Linux, macOS (Intel & ARM)
## API Reference
### Functions
- `parse_file(path)` - Parse document from file path
- `parse_bytes(data)` - Parse document from bytes
- `version()` - Get library version
### Undoc Class
#### Conversion Methods
- `to_markdown(frontmatter=False, escape_special=False, paragraph_spacing=False)` - Convert to Markdown
- `to_text()` - Convert to plain text
- `to_json(compact=False)` - Convert to JSON
- `plain_text()` - Get plain text (fast extraction)
#### Properties
- `title` - Document title
- `author` - Document author
- `section_count` - Number of sections
- `resource_count` - Number of resources
#### Resource Methods
- `get_resource_ids()` - List of resource IDs
- `get_resource_info(id)` - Resource metadata
- `get_resource_data(id)` - Resource binary data
## License
MIT License - see [LICENSE](../../LICENSE) for details.
| text/markdown | null | iyulab <tech@iyulab.com> | null | null | MIT | office, docx, xlsx, pptx, markdown, extraction | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS :: MacOS X",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Markup"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/iyulab/undoc",
"Documentation, https://github.com/iyulab/undoc#readme",
"Repository, https://github.com/iyulab/undoc",
"Issues, https://github.com/iyulab/undoc/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:19:15.509422 | undoc-0.1.16-py3-none-any.whl | 3,800,503 | c7/86/2f9c42c2ac1d5adfbb5f670db428a6a7c7cda33f9d3fa32c9309d1494190/undoc-0.1.16-py3-none-any.whl | py3 | bdist_wheel | null | false | 554fe1774cecd6d69077a89433ebdef7 | bce4cfba7fa3fcae4cc086344e91a19687eb3b6506c58272669c72e9db7c9e98 | c7862f9c42c2ac1d5adfbb5f670db428a6a7c7cda33f9d3fa32c9309d1494190 | null | [] | 84 |
2.4 | video-extensions | 1.0.0 | Check if a file or extension is a video type, and iterate over 36 known video file formats including mp4, mov, avi, mkv, and more. Python port of video-extensions npm package. | # Video Extensions
[](https://www.python.org/downloads/)
[](https://github.com/ysskrishna/video-extensions/blob/main/LICENSE)

[](https://pypi.org/project/video-extensions/)
[](https://pepy.tech/projects/video-extensions)
Check if a file or extension is a video type, and iterate over 36 known video file formats including mp4, mov, avi, mkv, and more. Python port of [video-extensions](https://github.com/sindresorhus/video-extensions) npm package.
## Features
- Immutable collection of 36 known video file extensions
- Fast membership checks using `frozenset`
- Case-insensitive and dot-aware checks
- Works for both extensions and full file paths
- Supports dotfiles (e.g., `.mov`)
- Zero dependencies, minimal overhead
## Installation
```bash
pip install video-extensions
```
Or using `uv`:
```bash
uv add video-extensions
```
## Usage
### Check if an extension is a video type
```python
from video_extensions import is_video_extension
is_video_extension("mp4") # True
is_video_extension(".mov") # True (dot-aware)
is_video_extension("AVI") # True (case-insensitive)
is_video_extension("txt") # False
```
### Check if a file path has a video extension
```python
from video_extensions import is_video_path
is_video_path("movie.mp4") # True
is_video_path("/path/to/video.MOV") # True (case-insensitive)
is_video_path("presentation.avi") # True
is_video_path("document.txt") # False
is_video_path(".mov") # True (dotfile support)
```
### Access the list of video extensions
```python
from video_extensions import VIDEO_EXTENSIONS, VIDEO_EXTENSIONS_LOWER
# VIDEO_EXTENSIONS is a frozenset of all known video extensions
print(len(VIDEO_EXTENSIONS)) # 36
"mp4" in VIDEO_EXTENSIONS # True
"txt" in VIDEO_EXTENSIONS # False
# VIDEO_EXTENSIONS_LOWER contains all extensions in lowercase
# Useful for case-insensitive lookups: lowercase the input once before checking
"MP4".lower() in VIDEO_EXTENSIONS_LOWER # True (case-insensitive)
# Iterate over all extensions
for ext in sorted(VIDEO_EXTENSIONS):
print(ext)
```
## Supported Extensions
The package includes support for 36 video file extensions:
- **Common formats**: mp4, mov, avi, mkv, webm, flv, wmv, mpg, mpeg
- **Streaming**: m3u8, m4v, m4p, ogv, ogg
- **Professional**: mxf, drc, aaf, roq
- **Legacy**: 3gp, 3g2, asf, vob, rm, rmvb, qt
- **Specialized**: avchd, m2v, mp2, mpe, mpv, mng, nsv, svi, yuv
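As a rough illustration of how the dot-aware, case-insensitive checks can work, here is a minimal reimplementation sketch (not the package's actual source; only a small subset of the 36 extensions is included for brevity):

```python
import os

# Subset of the known video extensions, for brevity in this sketch
_VIDEO = frozenset({"mp4", "mov", "avi", "mkv", "webm"})

def is_video_extension(ext: str) -> bool:
    # Dot-aware ("mp4" and ".mp4" both work) and case-insensitive
    return ext.lstrip(".").lower() in _VIDEO

def is_video_path(path: str) -> bool:
    base = os.path.basename(path)
    if base.startswith(".") and base.count(".") == 1:
        ext = base  # dotfile such as ".mov"
    else:
        ext = os.path.splitext(base)[1]
    return is_video_extension(ext)
```

Pre-lowering the set and normalizing the input once keeps each membership check an O(1) `frozenset` lookup.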
## Contributing
Contributions are welcome! Please read our [Contributing Guide](https://github.com/ysskrishna/video-extensions/blob/main/CONTRIBUTING.md) for details on our code of conduct, development setup, and the process for submitting pull requests.
## Support
If you find this library helpful:
- ⭐ Star the repository
- 🐛 Report issues
- 🔀 Submit pull requests
- 💝 [Sponsor on GitHub](https://github.com/sponsors/ysskrishna)
## Credits
This package is a Python port of the [video-extensions](https://github.com/sindresorhus/video-extensions) npm package by [Sindre Sorhus](https://github.com/sindresorhus).
## License
MIT © [Y. Siva Sai Krishna](https://github.com/ysskrishna) - see [LICENSE](https://github.com/ysskrishna/video-extensions/blob/main/LICENSE) for details.
---
<p align="left">
<a href="https://github.com/ysskrishna">Author's GitHub</a> •
<a href="https://linkedin.com/in/ysskrishna">Author's LinkedIn</a> •
<a href="https://pypi.org/project/video-extensions/">Package on PyPI</a> •
<a href="https://github.com/ysskrishna/video-extensions">GitHub Repository</a> •
<a href="https://github.com/ysskrishna/video-extensions/issues">Report Issues</a> •
<a href="https://github.com/ysskrishna/video-extensions/blob/main/CHANGELOG.md">Changelog</a> •
<a href="https://github.com/ysskrishna/video-extensions/releases">Release History</a>
</p>
| text/markdown | null | ysskrishna <sivasaikrishnassk@gmail.com> | null | null | MIT | extension, extensions, file, file-detection, file-extensions, file-type, file-utils, mime-type, mit license, utilities, utils, video, video-detection, video-files, ysskrishna | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/ysskrishna/video-extensions",
"Repository, https://github.com/ysskrishna/video-extensions.git",
"Issues, https://github.com/ysskrishna/video-extensions/issues",
"Changelog, https://github.com/ysskrishna/video-extensions/blob/main/CHANGELOG.md",
"Discussions, https://github.com/ysskrishna/video-extensions/discussions"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T09:19:00.452696 | video_extensions-1.0.0.tar.gz | 16,749 | 63/5f/bd1b2325d16f35427bdc5b65727107688c5d8f26df3f93bcfd253d195539/video_extensions-1.0.0.tar.gz | source | sdist | null | false | 3d6f48571db5024bc65d349bae16d4ef | b69c342600db0af88bcaad2dd6f6732d10b6bbde2b49f3161685870c95bea958 | 635fbd1b2325d16f35427bdc5b65727107688c5d8f26df3f93bcfd253d195539 | null | [
"LICENSE"
] | 251 |
2.4 | clickpesa-python-sdk | 0.1.0 | Production-grade Python SDK for the ClickPesa API — sync & async, collections, payouts, BillPay and more | # ClickPesa Python SDK
[](https://pypi.org/project/clickpesa-python-sdk/)
[](https://pypi.org/project/clickpesa-python-sdk/)
[](https://opensource.org/licenses/MIT)
Production-grade Python SDK for the [ClickPesa API](https://docs.clickpesa.com) — supports both **sync** and **async** usage, with automatic token management, checksum injection, retry logic, and a full exception hierarchy.
---
## Features
- **Sync & Async** — `ClickPesa` for blocking code, `AsyncClickPesa` for `asyncio` / FastAPI / async frameworks
- **Auto Auth** — JWT tokens are fetched and cached automatically (55-minute window, 1-hour API TTL)
- **Checksum injection** — HMAC-SHA256 checksums added to every mutating request when a `checksum_key` is configured
- **Retries** — exponential backoff on transient 5xx errors (configurable)
- **Thread-safe** — safe to share a single client across threads or async tasks
- **Context manager** — `with` / `async with` support for automatic cleanup
- **Typed exceptions** — structured error hierarchy with `status_code` and `response` attributes
- **PEP 561 compliant** — ships with `py.typed` for mypy / pyright support
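Conceptually, the checksum injection above amounts to a standard HMAC-SHA256 over the request payload. A rough sketch — the canonicalization shown here (compact, key-sorted JSON) is an assumption; the SDK computes the real checksum internally:

```python
import hashlib
import hmac
import json

def compute_checksum(payload: dict, checksum_key: str) -> str:
    # Hypothetical canonicalization; ClickPesa's actual scheme may differ.
    message = json.dumps(payload, separators=(",", ":"), sort_keys=True)
    return hmac.new(
        checksum_key.encode(), message.encode(), hashlib.sha256
    ).hexdigest()
```

Sorting the keys makes the digest independent of dict insertion order, so the same logical payload always produces the same checksum.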
---
## Installation
```bash
pip install clickpesa-python-sdk
```
Requires **Python 3.10+**.
---
## Quick Start
### Sync
```python
from clickpesa import ClickPesa
with ClickPesa(
client_id="YOUR_CLIENT_ID",
api_key="YOUR_API_KEY",
checksum_key="YOUR_CHECKSUM_KEY", # optional but recommended
sandbox=True, # set False for production
) as client:
balance = client.account.get_balance()
print(balance)
```
### Async
```python
import asyncio
from clickpesa import AsyncClickPesa
async def main():
async with AsyncClickPesa(
client_id="YOUR_CLIENT_ID",
api_key="YOUR_API_KEY",
checksum_key="YOUR_CHECKSUM_KEY",
sandbox=True,
) as client:
balance = await client.account.get_balance()
print(balance)
asyncio.run(main())
```
---
## Configuration
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `client_id` | `str` | required | Your ClickPesa application Client ID |
| `api_key` | `str` | required | Your ClickPesa application API key |
| `checksum_key` | `str \| None` | `None` | Enables HMAC-SHA256 checksum on every mutating request |
| `sandbox` | `bool` | `False` | Target sandbox (`api-sandbox.clickpesa.com`) instead of production |
| `timeout` | `float` | `30.0` | Per-request timeout in seconds |
| `max_retries` | `int` | `3` | Max retry attempts on transient server errors |
> **Note:** `order_id` / `orderReference` values must be **alphanumeric only** (no hyphens, underscores, or special characters). The API will reject any order reference containing non-alphanumeric characters.
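The retry behaviour controlled by `max_retries` follows a typical exponential-backoff pattern. A sketch of what such a schedule looks like, assuming a hypothetical base delay and jitter (the SDK's actual timing is internal):

```python
import random

def backoff_delays(max_retries: int = 3, base: float = 0.5):
    """Yield one delay (in seconds) per retry attempt."""
    for attempt in range(max_retries):
        # Exponential growth plus a little jitter to avoid thundering herds
        yield base * (2 ** attempt) + random.uniform(0, 0.1)
```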
---
## Collections
### USSD Push
```python
# 1. Preview — check available methods and fees before charging
preview = client.payments.preview_ussd_push(
amount="5000",
order_id="ORD20240001",
phone="255712345678", # optional: include to get sender details
fetch_sender_details=True, # optional: returns accountName / accountProvider
)
print(preview["activeMethods"]) # [{"name": "TIGO-PESA", "status": "AVAILABLE", "fee": 580, ...}]
# 2. Initiate — triggers PIN prompt on the customer's phone
transaction = client.payments.initiate_ussd_push(
amount="5000",
phone="255712345678",
order_id="ORD20240001",
currency="TZS", # only TZS supported
)
print(transaction["id"], transaction["status"])
```
### Card Payment
```python
# 1. Preview — check card method availability
preview = client.payments.preview_card(amount="50", order_id="CARD001")
# 2. Initiate — generate a hosted payment link
result = client.payments.initiate_card(
amount="50",
order_id="CARD001",
currency="USD", # only USD supported
customer={
"fullName": "John Doe",
"email": "john@example.com",
"phoneNumber": "255712345678",
},
# or use an existing customer ID:
# customer={"id": "CUST_123"}
)
print(result["cardPaymentLink"]) # redirect customer here
```
### Query Payments
```python
# Single payment by order reference
payments = client.payments.get_status("ORD20240001")
# Paginated list with filters
page = client.payments.list_all(
status="SUCCESS",
collectedCurrency="TZS",
startDate="2024-01-01",
endDate="2024-12-31",
limit=20,
skip=0,
)
print(page["totalCount"], page["data"])
```
---
## Disbursements
### Mobile Money Payout
```python
# 1. Preview — verify fees and recipient before sending
preview = client.payouts.preview_mobile_money(
amount=10000,
phone="255712345678",
order_id="PAY20240001",
currency="TZS", # TZS or USD; recipient always receives TZS
)
print(preview["fee"], preview["receiver"]["accountName"])
# 2. Create — disburse funds
payout = client.payouts.create_mobile_money(
amount=10000,
phone="255712345678",
order_id="PAY20240001",
)
print(payout["id"], payout["status"]) # status: AUTHORIZED → PROCESSING → SUCCESS
```
### Bank Payout (ACH / RTGS)
```python
# Get list of supported banks and their BIC codes
banks = client.payouts.get_banks()
# [{"name": "EQUITY BANK TANZANIA LIMITED", "value": "equity_bank_tanzania_limited", "bic": "EQBLTZTZ"}, ...]
# 1. Preview
preview = client.payouts.preview_bank(
amount=500000,
account_number="1234567890",
bic="EQBLTZTZ",
order_id="BANK20240001",
transfer_type="ACH", # "ACH" or "RTGS"
currency="TZS",
)
# 2. Create
payout = client.payouts.create_bank(
amount=500000,
account_number="1234567890",
account_name="Jane Doe",
bic="EQBLTZTZ",
order_id="BANK20240001",
transfer_type="RTGS",
currency="TZS",
)
```
### Query Payouts
```python
# Single payout by order reference
payouts = client.payouts.get_status("PAY20240001")
# All payouts with filters
page = client.payouts.list_all(
channel="MOBILE MONEY",
status="SUCCESS",
limit=50,
)
```
---
## BillPay
ClickPesa BillPay lets customers pay using a numeric control number through mobile money, SIM banking, and CRDB Wakalas. There are two types of control numbers:
- **Order** — one-time, closes after payment. Ideal for invoices and e-commerce orders.
- **Customer** — static and reusable per customer. Ideal for subscriptions and recurring payments.
> **Note:** Every ClickPesa merchant has a 4-digit **Merchant BillPay-Namba** visible on the dashboard. Order control numbers can also be generated *offline* (no API call) by concatenating your Merchant BillPay-Namba with any internal order reference (e.g. `1122` + `231256` = `1122231256`). The SDK only covers API-based generation.
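The offline generation described in the note is plain string concatenation (values here are the illustrative ones from the note, not real identifiers):

```python
merchant_billpay_namba = "1122"  # 4-digit value from your dashboard
order_reference = "231256"       # any internal order reference
control_number = merchant_billpay_namba + order_reference
print(control_number)  # 1122231256
```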
### Create Control Numbers
```python
# Order control number (one-time)
cn = client.billpay.create_order_control_number(
bill_reference="INVOICE001", # optional — becomes the control number; auto-generated if omitted
amount=90900,
description="Water Bill - July 2024",
payment_mode="EXACT", # "EXACT" or "ALLOW_PARTIAL_AND_OVER_PAYMENT"
)
print(cn["billPayNumber"]) # share this with your customer
# Customer control number (persistent / recurring)
cn = client.billpay.create_customer_control_number(
customer_name="John Doe",
phone="255712345678", # phone or email required
email="john@example.com",
amount=50000,
payment_mode="ALLOW_PARTIAL_AND_OVER_PAYMENT",
)
```
### Bulk Create (up to 50 per request)
```python
# Bulk order control numbers
result = client.billpay.bulk_create_order_numbers([
{"billAmount": 10000, "billDescription": "Invoice #001", "billPaymentMode": "EXACT"},
{"billAmount": 20000, "billDescription": "Invoice #002"},
{"billReference": "MYREF003", "billAmount": 5000},
])
print(result["created"], result["failed"])
print(result["billPayNumbers"])
# Bulk customer control numbers
result = client.billpay.bulk_create_customer_numbers([
{"customerName": "Alice", "customerPhone": "255712345678", "billAmount": 15000},
{"customerName": "Bob", "customerEmail": "bob@example.com"},
])
```
### Manage Existing Numbers
```python
# Query details
details = client.billpay.get_details("55042914871931")
# Update amount, description or payment mode
client.billpay.update_reference(
"55042914871931",
amount=120000,
description="Updated Water Bill",
payment_mode="EXACT",
)
# Activate / deactivate
client.billpay.update_status("55042914871931", "INACTIVE")
client.billpay.update_status("55042914871931", "ACTIVE")
```
---
## Hosted Links
```python
# Checkout link — customer chooses their payment method
result = client.links.generate_checkout(
order_id="LINK001",
order_currency="TZS",
total_price="15000",
customer_name="Jane Doe",
customer_email="jane@example.com",
customer_phone="255712345678",
description="Order LINK001",
)
print(result["checkoutLink"]) # redirect customer here
# With itemised order instead of a flat total
result = client.links.generate_checkout(
order_id="LINK002",
order_currency="USD",
order_items=[
{"name": "Widget A", "price": "25.00", "quantity": 2},
{"name": "Widget B", "price": "10.00", "quantity": 1},
],
)
# Payout link — recipient enters their own bank / mobile details
result = client.links.generate_payout(amount="50000", order_id="POUT001")
print(result["payoutLink"])
```
---
## Account & Exchange
```python
# Account balances
result = client.account.get_balance()
# {"balances": [{"currency": "TZS", "balance": 39700}, {"currency": "USD", "balance": 0}]}
print(result["balances"])
# Transaction statement
statement = client.account.get_statement(
currency="TZS",
start_date="2024-01-01",
end_date="2024-12-31",
)
print(statement["accountDetails"])
print(statement["transactions"])
# Exchange rates
rates = client.exchange.get_rates() # all pairs
rates = client.exchange.get_rates(source="USD", target="TZS") # specific pair
# [{"source": "USD", "target": "TZS", "rate": 2510, "date": "..."}]
```
---
## Async Usage
Every method on `AsyncClickPesa` is the `await`-able equivalent:
```python
import asyncio
from clickpesa import AsyncClickPesa
async def run_payments():
async with AsyncClickPesa(
client_id="YOUR_CLIENT_ID",
api_key="YOUR_API_KEY",
sandbox=True,
) as client:
# Run multiple API calls concurrently
balance, rates = await asyncio.gather(
client.account.get_balance(),
client.exchange.get_rates(source="USD"),
)
print(balance, rates)
# Collections
tx = await client.payments.initiate_ussd_push(
amount="3000",
phone="255712345678",
order_id="ASYNC001",
)
# Disbursements
payout = await client.payouts.create_mobile_money(
amount=3000,
phone="255712345678",
order_id="ASYNC002",
)
asyncio.run(run_payments())
```
---
## Webhook Verification
```python
from clickpesa import WebhookValidator
# In your webhook endpoint (Flask / FastAPI / Django etc.)
def webhook_handler(request):
is_valid = WebhookValidator.verify(
payload=request.json,
signature=request.headers["X-ClickPesa-Signature"],
checksum_key="YOUR_CHECKSUM_KEY",
)
if not is_valid:
return {"error": "Invalid signature"}, 401
# Process event ...
```
---
## Error Handling
All errors are subclasses of `ClickPesaError` and carry `.status_code` and `.response`:
```python
from clickpesa.exceptions import (
AuthenticationError, # 401 — invalid credentials / expired token
ForbiddenError, # 403 — feature not enabled on your account
ValidationError, # 400 — bad payload
InsufficientFundsError, # 400 — not enough balance (subclass of ValidationError)
NotFoundError, # 404 — resource not found
ConflictError, # 409 — duplicate orderReference / billReference
RateLimitError, # 429 — payout request already in progress
ServerError, # 5xx — ClickPesa server error
ClickPesaError, # base class — catches all of the above
)
try:
client.payments.initiate_ussd_push("5000", "255712345678", "ORD001")
except InsufficientFundsError as e:
print(f"Not enough balance: {e}")
except ConflictError as e:
print(f"Order reference already used: {e}")
print(f"HTTP {e.status_code} — {e.response}")
except ClickPesaError as e:
print(f"Unexpected API error [{e.status_code}]: {e}")
```
---
## Health Check
```python
# Returns True if the API is reachable and credentials are valid
if client.is_healthy():
print("Connected to ClickPesa")
else:
print("API unreachable or credentials invalid")
# Async equivalent
healthy = await client.is_healthy()
```
---
## Development
```bash
git clone https://github.com/JAXPARROW/clickpesa-python-sdk
cd clickpesa-python-sdk
# Install with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run tests with coverage
pytest --cov=clickpesa --cov-report=term-missing
```
---
## License
MIT — see [LICENSE](LICENSE) for details.
| text/markdown | null | Jackson Linus <jacksonlinus95@gmail.com> | null | null | null | clickpesa, payments, fintech, tanzania, sdk, api | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Office/Business :: Financial",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.24.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"respx>=0.20; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/JAXPARROW/clickpesa-python-sdk",
"Documentation, https://docs.clickpesa.com",
"Repository, https://github.com/JAXPARROW/clickpesa-python-sdk",
"Bug Tracker, https://github.com/JAXPARROW/clickpesa-python-sdk/issues",
"Changelog, https://github.com/JAXPARROW/clickpesa-python-sdk/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T09:18:48.323963 | clickpesa_python_sdk-0.1.0.tar.gz | 30,675 | 98/af/a4ba59668f4c647a259ec71db75e524d80ea9ef7f65db125a76c55d31d11/clickpesa_python_sdk-0.1.0.tar.gz | source | sdist | null | false | 3fcfd74ea56a66db2c56f694d217689f | 427e7934301abc335bdbac0723baf2b5ed05ad2d621cde9e3184efde8becf4d2 | 98afa4ba59668f4c647a259ec71db75e524d80ea9ef7f65db125a76c55d31d11 | MIT | [
"LICENSE"
] | 246 |
2.4 | kcli | 99.0.202602210918 | Provisioner/Manager for Libvirt/Vsphere/Aws/Gcp/Hcloud/Kubevirt/Ovirt/Openstack/IBM Cloud and containers | Provisioner/Manager for Libvirt/Vsphere/Aws/Gcp/Hcloud/Kubevirt/Ovirt/Openstack/IBM Cloud and containers
| null | Karim Boumedhel | karimboumedhel@gmail.com | null | null | ASL | null | [] | [] | http://github.com/karmab/kcli | null | null | [] | [] | [] | [
"argcomplete",
"PyYAML",
"prettytable",
"jinja2",
"libvirt-python>=2.0.0",
"pyghmi; extra == \"all\"",
"podman; extra == \"all\"",
"websockify; extra == \"all\"",
"boto3; extra == \"all\"",
"google-api-python-client; extra == \"all\"",
"google-auth-httplib2; extra == \"all\"",
"google-cloud-dns; extra == \"all\"",
"google-cloud-storage; extra == \"all\"",
"google-cloud-container; extra == \"all\"",
"google-cloud-compute; extra == \"all\"",
"python-cinderclient; extra == \"all\"",
"python-neutronclient; extra == \"all\"",
"python-glanceclient; extra == \"all\"",
"python-keystoneclient; extra == \"all\"",
"python-novaclient; extra == \"all\"",
"python-swiftclient; extra == \"all\"",
"ovirt-engine-sdk-python; extra == \"all\"",
"pyvmomi; extra == \"all\"",
"cryptography; extra == \"all\"",
"google-crc32c==1.1.2; extra == \"all\"",
"ibm_vpc; extra == \"all\"",
"ibm-cos-sdk; extra == \"all\"",
"ibm-platform-services; extra == \"all\"",
"ibm-cloud-networking-services; extra == \"all\"",
"azure-mgmt-compute; extra == \"all\"",
"azure-mgmt-network; extra == \"all\"",
"azure-mgmt-core; extra == \"all\"",
"azure-identity; extra == \"all\"",
"azure-mgmt-resource; extra == \"all\"",
"azure-mgmt-marketplaceordering; extra == \"all\"",
"azure-storage-blob; extra == \"all\"",
"azure-mgmt-dns; extra == \"all\"",
"azure-mgmt-containerservice; extra == \"all\"",
"azure-mgmt-storage; extra == \"all\"",
"azure-mgmt-msi; extra == \"all\"",
"azure-mgmt-authorization; extra == \"all\"",
"hcloud; extra == \"all\"",
"proxmoxer; extra == \"all\"",
"boto3; extra == \"aws\"",
"azure-mgmt-compute; extra == \"azure\"",
"azure-mgmt-network; extra == \"azure\"",
"azure-mgmt-core; extra == \"azure\"",
"azure-identity; extra == \"azure\"",
"azure-mgmt-resource; extra == \"azure\"",
"azure-mgmt-marketplaceordering; extra == \"azure\"",
"azure-storage-blob; extra == \"azure\"",
"azure-mgmt-dns; extra == \"azure\"",
"azure-mgmt-containerservice; extra == \"azure\"",
"azure-mgmt-storage; extra == \"azure\"",
"azure-mgmt-msi; extra == \"azure\"",
"azure-mgmt-authorization; extra == \"azure\"",
"google-api-python-client; extra == \"gcp\"",
"google-auth-httplib2; extra == \"gcp\"",
"google-cloud-dns; extra == \"gcp\"",
"google-cloud-storage; extra == \"gcp\"",
"google-cloud-container; extra == \"gcp\"",
"google-cloud-compute; extra == \"gcp\"",
"hcloud; extra == \"hcloud\"",
"google-crc32c==1.1.2; extra == \"ibm\"",
"ibm_vpc; extra == \"ibm\"",
"ibm-cos-sdk; extra == \"ibm\"",
"ibm-platform-services; extra == \"ibm\"",
"ibm-cloud-networking-services; extra == \"ibm\"",
"python-cinderclient; extra == \"openstack\"",
"python-neutronclient; extra == \"openstack\"",
"python-glanceclient; extra == \"openstack\"",
"python-keystoneclient; extra == \"openstack\"",
"python-novaclient; extra == \"openstack\"",
"python-swiftclient; extra == \"openstack\"",
"ovirt-engine-sdk-python; extra == \"ovirt\"",
"proxmoxer; extra == \"proxmox\"",
"pyvmomi; extra == \"vsphere\"",
"cryptography; extra == \"vsphere\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T09:18:18.928437 | kcli-99.0.202602210918.tar.gz | 1,441,542 | 92/1f/4d7b2e859e68ef8cb61ba460be92766affe068599e31a4a8aa563e5abce9/kcli-99.0.202602210918.tar.gz | source | sdist | null | false | 5053a9aa7c9cd9d53eeb8c7ee187ffaf | 83f5ae02780b71f4d882e510b57e1405888724d4f6f020c4f5866abf3b3601bb | 921f4d7b2e859e68ef8cb61ba460be92766affe068599e31a4a8aa563e5abce9 | null | [
"LICENSE"
] | 241 |
2.4 | chrome-cookies-to-playwright | 0.1.1 | Export Chrome cookies to Playwright storage state format (macOS) | # chrome-cookies-to-playwright
Export your macOS Chrome cookies to [Playwright](https://playwright.dev/) storage state format — with full `httpOnly`, `sameSite`, and expiry metadata.
## Quick Start
```bash
# Zero-install (requires Python 3.9+)
uvx chrome-cookies-to-playwright
# Or install globally
pip install chrome-cookies-to-playwright
chrome-cookies-to-playwright
```
## What It Does
Playwright's built-in cookie APIs cannot access `httpOnly` or `sameSite` flags from a real browser profile. This tool works around that by:
1. Using [browser-cookie3](https://github.com/borisbabic/browser_cookie3) to **decrypt** Chrome's cookie values via the macOS Keychain.
2. Reading Chrome's **SQLite Cookies database** directly to extract `httpOnly`, `sameSite`, `secure`, and precise expiry metadata.
3. Joining the two data sources into a single **Playwright storage state JSON** file that you can load with `browserContext.addCookies()` or the Playwright CLI.
The result is a complete, accurate cookie export that preserves all the metadata Playwright needs.
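Step 2 boils down to a plain SQLite query against the Cookies database. A sketch using column names from Chrome's schema — these can vary between Chrome versions, and the tool's actual query may differ:

```python
import sqlite3

def read_cookie_flags(db_path: str) -> dict:
    """Read per-cookie metadata that browser-cookie3 does not expose."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT host_key, name, is_httponly, samesite, is_secure, expires_utc "
            "FROM cookies"
        ).fetchall()
    finally:
        conn.close()
    # Keyed by (domain, name) so the results can be joined with the
    # decrypted values from browser-cookie3.
    return {
        (host, name): {
            "httpOnly": bool(http_only),
            "sameSite": same_site,
            "secure": bool(secure),
            "expires_utc": expires,
        }
        for host, name, http_only, same_site, secure, expires in rows
    }
```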
## Usage
```
chrome-cookies-to-playwright [--output FILE] [--profile NAME] [--domain FILTER]
```
| Option | Description |
|---|---|
| `--output`, `-o` | Output file path (default: `/tmp/chrome-cookies-state.json`) |
| `--profile`, `-p` | Chrome profile directory name (default: `Default`) |
| `--domain`, `-d` | Only export cookies whose domain contains this string |
### Examples
```bash
# Export all cookies
chrome-cookies-to-playwright
# Export only GitHub cookies
chrome-cookies-to-playwright --domain github.com
# Use a specific Chrome profile and custom output path
chrome-cookies-to-playwright --profile "Profile 1" --output ./cookies.json
```
### Using the output with Playwright
```python
# Python
context = browser.new_context(storage_state="/tmp/chrome-cookies-state.json")
```
```javascript
// JavaScript
const context = await browser.newContext({
storageState: '/tmp/chrome-cookies-state.json'
});
```
## How It Works
Chrome stores cookies in an SQLite database at:
```
~/Library/Application Support/Google/Chrome/<Profile>/Cookies
```
Cookie *values* are encrypted with a key stored in the macOS Keychain. `browser-cookie3` handles this decryption. However, it doesn't expose `httpOnly` or `sameSite` metadata.
This tool reads the SQLite database directly to get those fields, then joins the results with the decrypted values to produce a complete Playwright-compatible storage state.
### Chrome timestamp conversion
Chrome uses a custom epoch (1601-01-01 00:00:00 UTC) with microsecond precision. The tool converts these to Unix timestamps that Playwright expects.
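That conversion can be sketched as follows (illustrative, not the tool's exact code):

```python
from datetime import datetime, timezone

CHROME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)
UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
# 11,644,473,600 seconds separate the two epochs
EPOCH_OFFSET_S = int((UNIX_EPOCH - CHROME_EPOCH).total_seconds())

def chrome_to_unix(chrome_us: int) -> float:
    """Convert Chrome's microseconds-since-1601 to a Unix timestamp."""
    return chrome_us / 1_000_000 - EPOCH_OFFSET_S
```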
## Requirements
- **macOS** (relies on Chrome's Keychain-based cookie encryption)
- **Google Chrome** installed
- **Full Disk Access** permission for your terminal (System Settings → Privacy & Security → Full Disk Access)
- **Python 3.9+**
## License
MIT
| text/markdown | Richard Liu | null | null | null | null | chrome, cookies, playwright, browser, testing | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"browser-cookie3"
] | [] | [] | [] | [
"Homepage, https://github.com/richardzone/chrome-cookies-to-playwright",
"Repository, https://github.com/richardzone/chrome-cookies-to-playwright",
"Issues, https://github.com/richardzone/chrome-cookies-to-playwright/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T09:17:58.996495 | chrome_cookies_to_playwright-0.1.1.tar.gz | 5,615 | b6/80/78c8f31ac227473d22f163da981e9f2f101402943b367a9bd9c6f3e2bc66/chrome_cookies_to_playwright-0.1.1.tar.gz | source | sdist | null | false | be16dae3c6369966cb14d2a2f3f27042 | e3a77ebf019619f605edb66152ce8b58e0e91cbb59bf5e82d4fd5f503bbf69d2 | b68078c8f31ac227473d22f163da981e9f2f101402943b367a9bd9c6f3e2bc66 | MIT | [
"LICENSE"
] | 243 |
2.4 | xpycode-master | 0.1.7 | XPyCode Master - Python scripting for Excel with VBA-like interface | # XPyCode
[](https://www.python.org/downloads/)
[](LICENSE)
**XPyCode** is an Excel-Python integration platform that enables you to write, execute, and manage Python code directly within Microsoft Excel workbooks. It provides a seamless bridge between Excel and Python, featuring a full-featured IDE, custom function publishing, package management, and real-time debugging.
## Project status
⚠️ This version is at an early stage: it is an Alpha release, approaching Beta.
🚨 Don't use it in production or in sensitive environments 🚨
## Help Keep the Project Alive
⭐ Add a star on [GitHub](https://xpycode.com/stars) to promote the project
💵 [Donate](https://xpycode.com/donate) (for instance 3€ per month)
💬 Join the XPyCode Slack community to ask questions, share feedback, and connect with other users:
👉 [Invite link](https://xpycode.com/slack_invite)
👉 [Workspace](https://xpycode.com/slack)
## Features
- 🐍 **Python Execution in Excel** - Run Python code with full access to Excel objects
- 📝 **Integrated IDE** - Monaco-based code editor with IntelliSense, syntax highlighting, and debugging
- 📦 **Package Manager** - Install and manage Python packages per workbook with dependency resolution
- 🔧 **Custom Functions (UDFs)** - Publish Python functions as Excel formulas
- 🎯 **Event Handling** - React to Excel events (worksheet changes, selections, etc.) with Python
- 🔍 **Object Management** - Save and re-use objects in the Python kernel
- 🐛 **Debugger** - Set breakpoints, step through code, and inspect variables
- 🎨 **Theming** - Customizable dark/light themes for the IDE
## Requirements
- **Operating System**: Windows 10/11 (64-bit); other platforms are supported but not tested
- **Python**: 3.9 or higher
- **Microsoft Excel**: 2016 or later (with Office.js Add-in support)
## Installation
```bash
pip install xpycode_master
```
### Quick Start
1. Install XPyCode:
```bash
pip install xpycode_master
```
2. Start the XPyCode Master server:
```bash
python -m xpycode_master
```
3. The Excel Add-in will be registered automatically.
Open Excel and look for XPyCode under **Add-Ins -> More Add-Ins -> Shared Folder**
4. Launch in Excel:
```
[Open Console] --> [<> Editor]
```
5. In **XPyCode Editor**
- Add a python module: Right click on the workbook -> **New Module**
- Start coding, using xpycode module:
```python
def updateExcelFromPython():
    import xpycode

    ws = xpycode.worksheets.getActiveWorksheet()
    rA1 = ws.getRange("A1")
    rA1.values = "Hello"
    rA1.format.fill.color = "yellow"
```
## Running as a Service
XPyCode can run as a system service for automatic startup:
```bash
# Install and start as a service
python -m xpycode_master service install
# Check status
python -m xpycode_master service status
# Stop service
python -m xpycode_master service stop
```
Supported on Windows, Linux (systemd), and macOS (launchd). See [Service Management](https://docs.xpycode.com/user-guide/service-management/) for details.
## Upgrading
Check for and install updates:
```bash
# Check if an update is available
python -m xpycode_master --upgrade --check
# Upgrade interactively
python -m xpycode_master --upgrade
# Upgrade without confirmation
python -m xpycode_master --upgrade --yes
```
If XPyCode is running as a service, the upgrade process will automatically stop and restart the service.
## Addin Hosting Modes
XPyCode supports two modes for running the Excel add-in:
### External Mode (Default)
The add-in UI is served from `https://addin.xpycode.com`. This is the default mode and requires no certificate management.
```bash
python -m xpycode_master
```
### Local Mode
The add-in UI is served from a local HTTPS server on your machine. Requires self-signed certificates.
```bash
python -m xpycode_master --use-local-addin
```
!!! warning "Mode Switch Cache Clearing"
When switching between local and external modes, XPyCode will automatically clear the Office add-in cache. This affects all Office add-ins, not just XPyCode. You may need to restart Excel after switching modes.
## Usage
### Running Python Code
1. Open a workbook in Excel
2. Click "Open Console" in the XPyCode ribbon
3. Open the Editor with the "<>" button
4. Right-click the workbook name and add a Python module
5. Write Python code in the editor
6. Press F5 or click "Run" to execute
### Publishing Custom Functions
```python
# In your module, define a function
def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b
```
Then use the Function Publisher in the IDE to expose it as an Excel formula: `=ADD_NUMBERS(A1, B1)`
### Package Management
1. Open the Package Manager panel in the IDE
2. Search for a package (e.g., "pandas")
3. Select version and optional extras
4. Click "Install/Update" to install for the current workbook
## Configuration
Configure themes, PyPI URLs, console preferences, and more in the **File -> Settings** menu.
## Excel Sample
You will find an Excel workbook sample in `xpycode_master\excel_sample`.
## Dependencies
### Core Dependencies
- **fastapi** >= 0.100.0 - Web framework for the Business Layer API
- **uvicorn** >= 0.22.0 - ASGI server for FastAPI
- **websockets** >= 12.0 - WebSocket client/server implementation
- **aiohttp** >= 3.8.0 - Async HTTP client for package index queries
- **packaging** >= 21.0 - Version parsing and specifier handling
- **PySide6** >= 6.5.0 - Qt bindings for the IDE GUI (includes WebEngine for Monaco Editor embedding)
- **jedi** >= 0.19.0 - Python autocompletion and static analysis
- **orjson** >= 3.9.15 - Fast JSON serialization (recommended)
- **keyring** >= 24.0.0 - Secure credential storage for AI providers
- **unearth** >= 0.14.0 - Enhanced package discovery
## Architecture
XPyCode consists of several interconnected components:
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Excel Add-in │◄───►│ Business Layer │◄───►│ Python IDE │
│ (Office.js) │ │ (FastAPI) │ │ (PySide6) │
└─────────────────┘ └────────┬────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ Python Kernel │
│ (per workbook) │
└─────────────────┘
```
- **Excel Add-in**: Office.js-based add-in providing the Excel interface
- **Business Layer**: FastAPI server acting as message broker between components
- **Python Kernel**: Per-workbook Python execution environment
- **Python IDE**: PySide6-based development environment with Monaco Editor
## License
This project is licensed under the **MIT License with Commons Clause**.
You are free to use, modify, and distribute this software for any purpose. However, you may not sell the software or include it as a substantial part of a commercial product or service.
See the [LICENSE](https://xpycode.com/LICENSE) file for full details.
## Author
**BGE Advisory**
## Feedback
Contributions are welcome! Please feel free to submit issues.
## Support
- **Issues**: [GitHub Issues](https://xpycode.com/issues)
- **Documentation**: [Docs](https://docs.xpycode.com/)
## Acknowledgments
- [Monaco Editor](https://microsoft.github.io/monaco-editor/) - Code editor component
- [Office.js](https://docs.microsoft.com/en-us/office/dev/add-ins/) - Excel Add-in API
- [FastAPI](https://fastapi.tiangolo.com/) - Modern Python web framework
- [PySide6](https://doc.qt.io/qtforpython/) - Qt bindings for Python
| text/markdown | XPyCode Team | null | null | null | null | excel, python, vba, automation, office | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Cython",
"Operating System :: OS Independent",
"Topic :: Office/Business",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"fastapi>=0.115.0",
"uvicorn>=0.30.0",
"websockets>=12.0",
"aiohttp>=3.9.0",
"packaging>=21.0",
"python-lsp-server>=1.9.0",
"jedi>=0.18.0",
"pyflakes>=3.0.0",
"cryptography>=3.0",
"pip-system-certs",
"requests-ntlm",
"PySide6>=6.6.0",
"qasync>=0.27.0",
"keyring>=24.0.0",
"pandas>=2.0.0",
"mkdocs>=1.5.3",
"mkdocs-material>=9.5.0",
"mkdocs-glightbox>=0.3.7",
"mkdocs-print-site-plugin>=2.3.0",
"pymdown-extensions>=10.7",
"orjson>=3.9.15",
"unearth>=0.14.0",
"pywin32>=305; sys_platform == \"win32\"",
"polars>=0.20.0; extra == \"polars\""
] | [] | [] | [] | [
"Homepage, https://xpycode.com",
"Repository, https://xpycode.com/repo",
"Issues, https://xpycode.com/issues",
"Documentation, https://docs.xpycode.com"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T09:16:24.212535 | xpycode_master-0.1.7.tar.gz | 65,080,146 | 0b/9c/8e12bfd14a7761dca98e384bbb3bc933bdde32a998a011b03aa9c84b2eaa/xpycode_master-0.1.7.tar.gz | source | sdist | null | false | 782401086d4e47ba6eebcac8092cb548 | 006a1ae51180ce08f1f60d346f5ec2df7041a6b8421cdcd25049b1483d19bbd1 | 0b9c8e12bfd14a7761dca98e384bbb3bc933bdde32a998a011b03aa9c84b2eaa | MIT | [
"LICENSE"
] | 968 |
2.4 | varicon-observability | 1.0.24 | Unified observability package for Varicon services - logs, traces, and metrics | # Varicon Observability
Unified observability package for logs, traces, and metrics across all Varicon services.
## Features
- **Universal Log Capture**: Captures all logs regardless of how they're created
- **Distributed Tracing**: Automatic trace correlation across services
- **Metrics**: System telemetry and performance metrics
- **Zero Code Changes**: Works with existing logging code
- **Framework Support**: Auto-detects and instruments Django, FastAPI, Celery
## Installation
### From Local Source (Development)
```bash
cd varicon_observability
pip install -e .
# Or with optional dependencies
pip install -e ".[full]"
```
### From Git Repository
```bash
pip install git+https://github.com/your-org/varicon-observability.git
# With optional dependencies
pip install "git+https://github.com/your-org/varicon-observability.git#egg=varicon-observability[full]"
```
### Build and Install from Source
```bash
cd varicon_observability
python -m build
pip install dist/varicon_observability-*.whl
```
### Installation Options
- **Basic**: `pip install varicon-observability`
- **Django**: `pip install varicon-observability[django]`
- **FastAPI**: `pip install varicon-observability[fastapi]`
- **Celery**: `pip install varicon-observability[celery]`
- **Full**: `pip install varicon-observability[full]`
## Quick Start
### Django (varicon)
```python
# varicon/varicon/asgi.py or settings.py
from varicon_observability import setup_observability
setup_observability(service_name="varicon-django")
```
### FastAPI (integrations_service)
```python
# integrations_service/main.py
from varicon_observability import setup_observability
setup_observability(service_name="integration-service")
```
## Configuration
Set environment variables:
```bash
OTEL_ENABLED=true
OTEL_SERVICE_NAME=my-service
OTEL_EXPORTER_OTLP_ENDPOINT=http://signoz-otel-collector:4318
OTEL_EXPORTER_OTLP_PROTOCOL=grpc # or http
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=your-key
```
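For reference, reading these variables back in Python looks like the sketch below; `otel_config` is an illustrative helper, not part of this package's API:

```python
import os

def otel_config() -> dict:
    """Collect the OTEL_* environment variables documented above, with simple defaults."""
    return {
        "enabled": os.getenv("OTEL_ENABLED", "false").lower() == "true",
        "service_name": os.getenv("OTEL_SERVICE_NAME", "unknown-service"),
        "endpoint": os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", ""),
        "protocol": os.getenv("OTEL_EXPORTER_OTLP_PROTOCOL", "grpc"),
    }
```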
## What Gets Captured
- All Python logging (`logging.getLogger()`, `IntegrationLogger()`, etc.)
- Framework logs (Django, FastAPI, Uvicorn, Celery)
- HTTP requests (requests, httpx)
- Database queries (PostgreSQL via psycopg2)
- Redis operations
- Custom traces and metrics
**Note**: SQLAlchemy logs are disabled by default (set `ENABLE_SQLALCHEMY_LOGS=true` to enable)
## Architecture
```
Application Code
↓
Python Logging (any pattern)
↓
Root Logger Handler
↓
OpenTelemetry LoggingHandler
↓
OTLP Exporter
↓
SigNoz
```
All logs automatically include trace context for correlation.
| text/markdown | null | samir Thapa <samir.thapa@varicon.com.au> | null | null | MIT | observability, logging, tracing, metrics, opentelemetry, signoz | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Logging",
"Topic :: System :: Monitoring"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"opentelemetry-api<2.0.0,>=1.24.0",
"opentelemetry-sdk<2.0.0,>=1.24.0",
"opentelemetry-exporter-otlp<2.0.0,>=1.24.0",
"opentelemetry-semantic-conventions<0.61b0,>=0.45b0",
"django>=3.2; extra == \"django\"",
"opentelemetry-instrumentation-django<0.61b0,>=0.45b0; extra == \"django\"",
"opentelemetry-instrumentation-psycopg2<0.61b0,>=0.45b0; extra == \"django\"",
"opentelemetry-instrumentation-asgi<0.61b0,>=0.45b0; extra == \"django\"",
"fastapi>=0.100.0; extra == \"fastapi\"",
"opentelemetry-instrumentation-fastapi<0.61b0,>=0.45b0; extra == \"fastapi\"",
"opentelemetry-instrumentation-httpx<0.61b0,>=0.45b0; extra == \"fastapi\"",
"celery>=5.0.0; extra == \"celery\"",
"opentelemetry-instrumentation-celery<0.61b0,>=0.45b0; extra == \"celery\"",
"django>=3.2; extra == \"full\"",
"fastapi>=0.100.0; extra == \"full\"",
"celery>=5.0.0; extra == \"full\"",
"opentelemetry-instrumentation-django<0.61b0,>=0.45b0; extra == \"full\"",
"opentelemetry-instrumentation-fastapi<0.61b0,>=0.45b0; extra == \"full\"",
"opentelemetry-instrumentation-celery<0.61b0,>=0.45b0; extra == \"full\"",
"opentelemetry-instrumentation-psycopg2<0.61b0,>=0.45b0; extra == \"full\"",
"opentelemetry-instrumentation-httpx<0.61b0,>=0.45b0; extra == \"full\"",
"opentelemetry-instrumentation-redis<0.61b0,>=0.45b0; extra == \"full\"",
"opentelemetry-instrumentation-requests<0.61b0,>=0.45b0; extra == \"full\"",
"opentelemetry-instrumentation-asgi<0.61b0,>=0.45b0; extra == \"full\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-21T09:15:42.099758 | varicon_observability-1.0.24.tar.gz | 23,141 | d6/57/716900b718d1f35b736b7c209ed28e4c0e54f0c08ce85a45949315e35e36/varicon_observability-1.0.24.tar.gz | source | sdist | null | false | 8c5112d82e4a3e60b236d8c87eef4752 | f6a824f4ae0944bfb57c3c4b95247e7a56fd9dcaf222588ded490574aac4061a | d657716900b718d1f35b736b7c209ed28e4c0e54f0c08ce85a45949315e35e36 | null | [
"LICENSE"
] | 240 |
2.4 | wa1kpcap | 0.1.2 | Fast PCAP analysis library — extract flow-level features and multi-layer protocol fields from network traffic captures | # wa1kpcap
[](https://pypi.org/project/wa1kpcap/)
[](https://pypi.org/project/wa1kpcap/)
[](https://github.com/ljs-2002/wa1kpcap/blob/main/LICENSE)
[](https://github.com/ljs-2002/wa1kpcap/actions/workflows/tests.yml)
[](https://pypi.org/project/wa1kpcap/)
[中文文档](https://github.com/ljs-2002/wa1kpcap/blob/main/README_CN.md)
Fast PCAP analysis library for Python. Extracts multi-level flow features and protocol fields across all layers from network traffic captures, with a native C++ parsing engine.
## Installation
```bash
pip install wa1kpcap
```
Optional dependencies:
```bash
pip install wa1kpcap[dpkt] # dpkt engine support
pip install wa1kpcap[export] # pandas DataFrame export
pip install wa1kpcap[crypto] # TLS certificate parsing
pip install wa1kpcap[dev] # development (pytest, scapy, etc.)
```
## Quick Start
```python
from wa1kpcap import Wa1kPcap
analyzer = Wa1kPcap()
flows = analyzer.analyze_file('traffic.pcap')
for flow in flows:
print(f"{flow.key} packets={flow.packet_count} duration={flow.duration:.3f}s")
```
## Supported Protocols
| Layer | Protocols |
|-------|-----------|
| Link | Ethernet, VLAN (802.1Q), Linux SLL/SLL2, Raw IP, BSD Loopback, NFLOG |
| Network | IPv4, IPv6, ARP, ICMP, ICMPv6 |
| Tunnel | GRE, VXLAN, MPLS |
| Transport | TCP, UDP |
| Application | TLS (SNI/ALPN/certs), DNS, HTTP, DHCP, DHCPv6, QUIC (Initial decryption, SNI/ALPN) |
All protocols have C++ fast-path implementations. Tunnel protocols (GRE, VXLAN, MPLS) support recursive inner-packet dispatch.
## Features
- Fast C++ native parsing engine with Python API, also supports dpkt as alternative engine (`pip install wa1kpcap[dpkt]`)
- Flow-level feature extraction with signed directional packet lengths
- 8 sequence features per flow: packet_lengths, ip_lengths, trans_lengths, app_lengths, timestamps, iats, tcp_flags, tcp_window_sizes
- Statistical aggregation: mean, std, var, min, max, range, median, skew, kurt, cv, plus up/down directional breakdowns
- Multi-layer protocol field extraction from link layer to application layer
- BPF filter with protocol-aware keywords (dhcp, dhcpv6, vlan, gre, vxlan, mpls)
- IP fragment, TCP stream, and TLS record reassembly
- Export to DataFrame, CSV, JSON
- Custom incremental feature registration
- YAML-based protocol extension for adding new protocols without C++ code
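Signed directional packet lengths make per-direction breakdowns a simple sign filter; a standalone illustration of the idea (the sign convention and function names here are assumptions, not wa1kpcap's API):

```python
import statistics

def directional_stats(signed_lengths: list[int]) -> dict:
    """Split signed packet lengths into up/down streams and compute basic aggregates.

    Assumes one direction is encoded as positive lengths and the other as
    negative; wa1kpcap's actual convention may differ.
    """
    up = [l for l in signed_lengths if l > 0]
    down = [-l for l in signed_lengths if l < 0]

    def agg(xs: list[int]) -> dict:
        if not xs:
            return {"mean": 0.0, "min": 0, "max": 0}
        return {"mean": statistics.mean(xs), "min": min(xs), "max": max(xs)}

    return {"up": agg(up), "down": agg(down)}
```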
## Documentation
For detailed usage, API reference, and examples, see [docs/README.md](https://github.com/ljs-2002/wa1kpcap/blob/main/docs/README.md).
## Roadmap
- More application protocols (QUIC 0-RTT/Handshake decryption, HTTP/3, SSH, SMTP)
- CLI tool for quick pcap inspection
- Multi-process parallel parsing for large captures
- Flow-level QUIC connection tracking and migration detection
## License
MIT License
## Author
1in_js
| text/markdown | null | 1in_js <ljs_2002@163.com> | null | null | null | pcap, network, traffic, flow, packet, protocol, tls, dns, feature-extraction, nids | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Networking :: Monitoring",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"PyYAML>=6.0",
"numpy>=1.23",
"dpkt>=1.9; extra == \"dpkt\"",
"pandas>=1.5; extra == \"export\"",
"cryptography>=41.0; extra == \"crypto\"",
"pytest>=7.0; extra == \"dev\"",
"scapy>=2.5; extra == \"dev\"",
"pandas>=1.5; extra == \"dev\"",
"cryptography>=41.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ljs-2002/wa1kpcap",
"Repository, https://github.com/ljs-2002/wa1kpcap",
"Issues, https://github.com/ljs-2002/wa1kpcap/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:15:34.631059 | wa1kpcap-0.1.2.tar.gz | 831,612 | 64/5a/521a5dee71a5526931076d3f0d51255c504e3098c1976fa8659fdd7f10db/wa1kpcap-0.1.2.tar.gz | source | sdist | null | false | 316c21b82a853465297196d6dedf300e | a53eb44c267c81c05576032aa63774aba1c32df349ce987b91837924a8d6afbe | 645a521a5dee71a5526931076d3f0d51255c504e3098c1976fa8659fdd7f10db | MIT | [] | 1,160 |
2.4 | deep-trainer | 0.2.0 | Yet another simple & effective PyTorch trainer | # Deep-Trainer
[](https://github.com/raphaelreme/deep-trainer/raw/main/LICENSE)
[](https://pypi.org/project/deep-trainer)
[](https://pypi.org/project/deep-trainer)
[](https://pypi.org/project/deep-trainer)
[](https://codecov.io/github/raphaelreme/deep-trainer)
[](https://github.com/raphaelreme/deep-trainer/actions/workflows/tests.yml)
Lightweight training utilities for PyTorch projects.
`deep-trainer` provides a minimal yet flexible training loop abstraction
for PyTorch projects, including:
- Training & evaluation loops
- Automatic Mixed Precision (AMP) support
- Checkpointing (best / last / all)
- Metric handling system with aggregation
- TensorBoard logging (or custom loggers)
- Easy subclassing for custom training behavior
------------------------------------------------------------------------
## ⚠️ Project Status
This project was originally developed as a personal baseline training
framework.
- The codebase is **functional but relatively old**
- APIs may evolve in future versions
- Some refactoring and cleanup are planned
- Backward compatibility is not guaranteed for future major
updates
If you use this project in production or research, please consider
pinning a version.
Contributions and improvements are welcome.
------------------------------------------------------------------------
## 🚀 Installation
### Install with pip
``` bash
pip install deep-trainer
```
### Install from source
``` bash
git clone https://github.com/raphaelreme/deep-trainer.git
cd deep-trainer
pip install .
```
------------------------------------------------------------------------
## 🏁 Getting Started
Below is a minimal training example for a classification task.
``` python
import torch
from deep_trainer import PytorchTrainer
# ======================
# Dataset
# ======================
trainset = ...
valset = ...
testset = ...
train_loader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
val_loader = torch.utils.data.DataLoader(valset, batch_size=256)
test_loader = torch.utils.data.DataLoader(testset, batch_size=256)
# ======================
# Model
# ======================
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ...
model.to(device)
# ======================
# Optimizer & Scheduler
# ======================
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Step every batch (scheduler is stepped per training step)
scheduler = torch.optim.lr_scheduler.StepLR(
optimizer,
step_size=len(train_loader) * 50, # decay every 50 epochs
gamma=0.1,
)
# ======================
# Loss
# ======================
criterion = torch.nn.CrossEntropyLoss()
# ======================
# Training
# ======================
trainer = PytorchTrainer(
model,
optimizer,
scheduler=scheduler,
save_mode="small", # keep best + last checkpoint
device=device,
use_amp=True, # optional mixed precision
)
trainer.train(
epochs=150,
train_loader=train_loader,
criterion=criterion,
val_loader=val_loader,
)
# ======================
# Testing (Best model)
# ======================
trainer.load("experiments/checkpoints/best.ckpt") # Reload best checkpoint
test_metrics = trainer.evaluate(test_loader)
print(test_metrics)
```
------------------------------------------------------------------------
## Features Overview
### ✔ Simple Trainer Abstraction
The `PytorchTrainer` handles:
- Forward / backward passes
- Optimizer and scheduler stepping
- Mixed precision scaling
- Metric tracking
- Validation & best checkpoint selection
- Logging
You will likely need to override one or more of the following methods:
- `process_train_batch`
- `train_step`
- `backward`
- `eval_step`
to customize behavior (multi-loss, gradient accumulation, multiple
optimizers, self-supervised learning, etc.).
------------------------------------------------------------------------
### ✔ Flexible Metric System
The metric system supports:
- Per-batch metrics
- Aggregated metrics
- Validation metric selection
- Custom metrics via subclassing
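Aggregated metrics typically reduce to a batch-size-weighted running mean; a minimal standalone sketch of the idea (not deep-trainer's actual metric API):

```python
class RunningMean:
    """Aggregate per-batch values into a dataset-level mean, weighted by batch size."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value: float, n: int = 1) -> None:
        # Weight each batch mean by the number of samples it covers.
        self.total += value * n
        self.count += n

    @property
    def value(self) -> float:
        return self.total / max(self.count, 1)
```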
------------------------------------------------------------------------
### ✔ Logging
By default, logs are written to TensorBoard.
``` bash
tensorboard --logdir experiments/logs/
```
You can also use:
- `DictLogger` (in-memory logging)
- `MultiLogger` (combine multiple loggers)
- Or implement your own logger by subclassing `TrainLogger`
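An in-memory logger in the spirit of `DictLogger` can be sketched standalone; the `log` method name below is illustrative, not `TrainLogger`'s real interface:

```python
class InMemoryLogger:
    """Minimal in-memory logger sketch: stores (step, value) pairs per metric tag."""

    def __init__(self):
        self.history: dict[str, list[tuple[int, float]]] = {}

    def log(self, tag: str, value: float, step: int) -> None:
        # Append to the tag's series, creating it on first use.
        self.history.setdefault(tag, []).append((step, value))
```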
------------------------------------------------------------------------
## Example
An example training script is available in:
`example/example.py`
It demonstrates training a **PreActResNet18** on CIFAR-10.
To use it:
``` bash
# Show available hyperparameters
python example.py -h
# Launch training
python example.py
# Monitor training
tensorboard --logdir experiments/logs/
```
With default parameters, it reaches approximately **94--95% validation
accuracy** on CIFAR-10.
------------------------------------------------------------------------
## Design Philosophy
`deep-trainer` aims to be:
- Minimal (no heavy abstractions)
- Transparent (easy to read & debug)
- Hackable (easy to override core behavior)
- Suitable for research baselines
It is **not** intended to replace full-featured training frameworks
like:
- PyTorch Lightning
- HuggingFace Trainer
- Accelerate
Instead, it provides a lightweight middle ground between raw PyTorch
loops and larger ecosystems.
------------------------------------------------------------------------
## Contributing
Contributions are welcome!
If you'd like to:
- Improve documentation
- Refactor old components
- Add new metrics
- Improve testing
- Modernize APIs
Feel free to open an issue or submit a pull request.
------------------------------------------------------------------------
## 📜 License
MIT License. See `LICENSE` file for details.
| text/markdown | Raphael Reme | Raphael Reme <raphaelreme-dev@protonmail.com> | Raphael Reme | Raphael Reme <raphaelreme-dev@protonmail.com> | null | PyTorch, Training, Evaluation, Reproducibility | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"setuptools<82",
"tensorboard",
"torch>=2.3.0",
"tqdm",
"typing-extensions; python_full_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/raphaelreme/deep-trainer",
"Documentation, https://github.com/raphaelreme/deep-trainer",
"Repository, https://github.com/raphaelreme/deep-trainer",
"Issues, https://github.com/raphaelreme/deep-trainer/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-21T09:14:15.718966 | deep_trainer-0.2.0.tar.gz | 19,357 | 6e/6e/2ebef68a6b92579eb75db880c04aa19a4d6674ef59f7e6d6cb1742c22c6c/deep_trainer-0.2.0.tar.gz | source | sdist | null | false | e7d7dbaae670166cddc9e6e243ef1d64 | 5c80296559305bfd26ae062c5c7c0e988ea4e56e26dbe4bae3762c6720839c69 | 6e6e2ebef68a6b92579eb75db880c04aa19a4d6674ef59f7e6d6cb1742c22c6c | MIT | [
"LICENSE"
] | 236 |
2.1 | gges | 0.0.1 | Geography and Environmental Science | Read/write and process RS/GIS-related data, especially atmospheric remote-sensing data.
| null | Songyan Zhu | Songyan.Zhu@soton.ac.uk | null | null | MIT Licence | geography, env science | [] | [
"any"
] | https://github.com/songyanzhu/gges | null | null | [] | [] | [] | [
"geetools"
] | [] | [] | [] | [] | twine/5.1.0 CPython/3.11.7 | 2026-02-21T09:13:42.876535 | gges-0.0.1.tar.gz | 1,518 | 72/0f/d7ebece0d4a501b9395251dbd1711978840c7b4f2f15882f174d3da243a3/gges-0.0.1.tar.gz | source | sdist | null | false | 95b48b3f3f5ed304fe78ac3b2563ea2e | 9fd84b6b46b13f58544692f2bfe0821fdf1ccf3cc399af2b5316ce0f87cb529b | 720fd7ebece0d4a501b9395251dbd1711978840c7b4f2f15882f174d3da243a3 | null | [] | 253 |
2.4 | atlas-meshtastic-bridge | 0.1.20 | Reliable offline bridge for Atlas Command over Meshtastic radios. | # Atlas Meshtastic Bridge
Reliable offline access to Atlas Command over Meshtastic radios. The bridge runs in two modes:
- **Gateway** - connected to Atlas Command over IP. It receives Meshtastic requests, calls the HTTP API, and returns responses over radio.
- **Client** - runs next to a field asset. It issues Atlas Command requests via the gateway and renders the responses locally.
> The protocol and chunking design are described in `docs/SYSTEMS_DESIGN.md`. This README focuses on day-to-day setup, usage, and troubleshooting.
## Reliability guarantees
- Application-level ACKs are emitted for every reassembled message; senders track pending messages until ACKed.
- Outgoing messages are durably spooled to disk (JSON file) and retried with exponential backoff + jitter until acknowledged.
- Pending messages are replayed automatically after restarts; gateways flush the outbox each poll cycle and clients flush before sending.
- ACK envelopes are filtered from application handlers so existing client/gateway flows remain unchanged.
- Spool location is configurable via `--spool-path` (default: `~/.atlas_meshtastic_spool.json`).
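The retry schedule described above (exponential backoff plus jitter) can be sketched generically; the function and parameters below are illustrative, not the bridge's internal API:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter: 2**attempt growth, capped, then randomized.

    Jitter spreads retries out so many clients that failed together do not
    all retransmit at the same instant.
    """
    exp = min(cap, base * (2 ** attempt))
    return random.uniform(0.0, exp)
```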
## Payload limits
- **Chunk**: the smallest on-air packet sent over Meshtastic (what the transport already splits and ACKs today).
- Current chunk limit is up to **230 bytes** (`MAX_CHUNK_SIZE`) before transport-level retries/ack handling.
- The hardware harness enforces a **10 KB** limit for object uploads. Larger transfers are currently disabled; use the Atlas HTTP API instead.
- Keep request payloads small and metadata-focused when operating over radio links.
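The chunking rule is straightforward to illustrate; this splitter is a sketch of the idea, not the transport's actual implementation (which also adds sequencing and ACK metadata):

```python
MAX_CHUNK_SIZE = 230  # the documented on-air chunk limit

def split_into_chunks(payload: bytes, size: int = MAX_CHUNK_SIZE) -> list[bytes]:
    """Split a payload into chunks no larger than `size` bytes; order is preserved."""
    return [payload[i:i + size] for i in range(0, len(payload), size)]
```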
## Prerequisites
- Python 3.10+
- Atlas Command reachable from the gateway (HTTPS recommended)
- Meshtastic radios flashed with current firmware
- `atlas-asset-client>=0.3.0` (required)
- `meshtastic>=2.3.0` (optional for real radios; not needed for `--simulate-radio`)
- Optional: virtual (in-memory) radio for local testing
Install the bridge as a standalone package (and optionally meshtastic-python for real radios):
```bash
cd Atlas_Client_SDKs/connection_packages/atlas_meshtastic_bridge
pip install -e .[meshtastic] # includes meshtastic
# or, for simulation-only workflows without hardware drivers:
pip install -e .
```
## Meshtastic hardware setup
1. Flash both radios with the same firmware and channel settings. Verify they can message each other using the Meshtastic app.
2. Connect the gateway radio to the machine that can reach Atlas Command over IP.
3. Note the serial port for each device:
- Linux: `/dev/ttyUSB0`, `/dev/ttyACM0`, or `dmesg | grep tty`
- macOS: `/dev/cu.usbserial-*`
- Windows: `COM3`, `COM4`, etc.
4. Recommended radio config:
- Same channel name/psk on all nodes
- `hop_limit` and `power` appropriate for your mesh size
- Unique, meaningful node IDs (set via Meshtastic app)
5. Run with `--radio-port` pointing at the serial device. Use `--simulate-radio` to bypass hardware during development.
## Configuration
CLI flags (client and gateway):
| Flag | Description |
| --- | --- |
| `--mode {gateway,client}` | Run as gateway or client (required). |
| `--gateway-node-id` | Meshtastic node ID of the gateway (required). |
| `--api-base-url` | Atlas Command base URL (required). The gateway uses it for HTTP calls; clients must still supply it because the flag is required. |
| `--api-token` | Atlas Command bearer token (gateway mode, optional). |
| `--timeout` | Client request timeout in seconds (default: 5). |
| `--simulate-radio` | Use in-memory radio instead of hardware. |
| `--radio-port` | Serial port path (hardware mode). |
| `--node-id` | Override local node ID (default: `gateway` or `client`). |
| `--command` | Client command to run (client mode). |
| `--data` | JSON payload for the command (client mode). |
| `--log-level` | Logging level (default: `INFO`). |
| `--metrics-host` | Host interface for metrics/health server (default: `0.0.0.0`). |
| `--metrics-port` | Port for metrics/health server (default: `9700`). |
| `--disable-metrics` | Disable metrics and health endpoints. |
Environment variables (gateway):
| Variable | Purpose |
| --- | --- |
| `ATLAS_API_BASE_URL` | Convenience only; the bridge does **not** read this directly. Export it and pass via `--api-base-url "$ATLAS_API_BASE_URL"`. |
| `ATLAS_API_TOKEN` | Convenience only; the bridge does **not** read this directly. Export it and pass via `--api-token "$ATLAS_API_TOKEN"`. |
## Quick start (simulated radios)
Terminal 1 - start gateway:
```bash
python -m atlas_meshtastic_bridge.cli \
--mode gateway \
--gateway-node-id gw-1 \
--api-base-url http://localhost:8000 \
--api-token "$ATLAS_API_TOKEN" \
--simulate-radio \
--node-id gw-1
```
Terminal 2 - run a client request:
```bash
python -m atlas_meshtastic_bridge.cli \
--mode client \
--gateway-node-id gw-1 \
--api-base-url http://localhost:8000 \
--simulate-radio \
--node-id field-1 \
--command list_entities \
--data '{"limit":5}'
```
## Usage examples
### Entity registration / creation
Entity creation is performed over the HTTP API (the mesh bridge keeps payloads small). Register the asset via HTTP first, then use the bridge for telemetry and tasking:
```bash
curl -X POST "$ATLAS_API_BASE_URL/entities" \
-H "Authorization: Bearer $ATLAS_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id":"DRONE-001","entity_type":"asset","subtype":"uav","alias":"DRONE-001"}'
```
### Check-in workflow (over Meshtastic)
1. Ensure the entity exists (HTTP as above).
2. Send a check-in from the asset:
```bash
python -m atlas_meshtastic_bridge.cli \
--mode client \
--gateway-node-id gw-1 \
--simulate-radio \
--command checkin_entity \
--data '{"entity_id":"DRONE-001","latitude":40.0,"longitude":-105.0}'
```
3. Fetch outstanding tasks for the entity:
```bash
python -m atlas_meshtastic_bridge.cli \
--mode client \
--gateway-node-id gw-1 \
--simulate-radio \
--command get_tasks_by_entity \
--data '{"entity_id":"DRONE-001","limit":5}'
```
4. Report progress or telemetry:
```bash
python -m atlas_meshtastic_bridge.cli \
--mode client \
--gateway-node-id gw-1 \
--simulate-radio \
--command update_telemetry \
--data '{"entity_id":"DRONE-001","altitude_m":1200,"speed_m_s":14}'
```
### Task execution (start / complete / fail)
```bash
# Start a task
python -m atlas_meshtastic_bridge.cli --mode client --gateway-node-id gw-1 \
--simulate-radio --command start_task --data '{"task_id":"TASK-123"}'
# Complete a task with result data
python -m atlas_meshtastic_bridge.cli --mode client --gateway-node-id gw-1 \
--simulate-radio --command complete_task --data '{"task_id":"TASK-123","result":{"summary":"scan complete"}}'
```
### Object download
Request object metadata or bytes (small payloads only):
```bash
python -m atlas_meshtastic_bridge.cli --mode client --gateway-node-id gw-1 \
--simulate-radio --command get_object \
--data '{"object_id":"OBJ-1","download":true}'
```
If `download` is omitted or false, only metadata is returned.
### Object upload
Large uploads are intentionally not sent over Meshtastic. Use the Atlas Command HTTP API to obtain a presigned upload URL, upload the file via HTTPS, and then reference the object by ID in subsequent mesh requests.
### Change feed / incremental sync
Fetch changes since an RFC3339 timestamp:
```bash
python -m atlas_meshtastic_bridge.cli --mode client --gateway-node-id gw-1 \
--simulate-radio --command get_changed_since \
--data '{"since":"2025-01-01T00:00:00Z","limit_per_type":50}'
```
## End-to-end workflow example
1. Register entity via HTTP.
2. Start gateway (hardware or simulated) pointed at Atlas Command.
3. Client check-in over Meshtastic; gateway forwards to Atlas Command.
4. Client requests tasks; gateway responds with pending assignments.
5. Client downloads any referenced objects (metadata/bytes) as needed.
6. Client reports task status (start/complete/fail) and telemetry updates.
## Command reference
| Command | Description | Payload keys |
| --- | --- | --- |
| `list_entities` | List entities with pagination. | `limit`, `offset` |
| `get_entity` | Fetch entity by ID. | `entity_id` |
| `get_entity_by_alias` | Fetch entity by alias. | `alias` |
| `create_entity` | Create a new entity. | `entity_id`, `entity_type`, `alias`, `subtype`, optional `components` |
| `update_entity` | Update entity metadata/components. | `entity_id`, optional `subtype`, `components` |
| `delete_entity` | Delete entity by ID. | `entity_id` |
| `checkin_entity` | Check in an entity with optional telemetry and task filters. | `entity_id`, telemetry fields, optional `status_filter`, `limit`, `since`, `fields` |
| `update_telemetry` | Update entity telemetry. | `entity_id`, telemetry fields (`latitude`, `longitude`, `altitude_m`, `speed_m_s`, `heading_deg`) |
| `list_tasks` | List tasks. | `limit`, optional `status` |
| `get_task` | Fetch task by ID. | `task_id` |
| `get_tasks_by_entity` | Tasks scoped to an entity. | `entity_id`, `limit` |
| `create_task` | Create a task. | `task_id`, optional `status`, `entity_id`, `components`, `extra` |
| `update_task` | Update an existing task. | `task_id`, optional `status`, `entity_id`, `components`, `extra` |
| `delete_task` | Delete task by ID. | `task_id` |
| `transition_task_status` | Transition task to a new status. | `task_id`, `status` |
| `start_task` | Mark a task as started. | `task_id` |
| `complete_task` | Mark task complete. | `task_id`, optional `result` |
| `fail_task` | Mark task failed. | `task_id`, optional `error_message`, `error_details` |
| `list_objects` | List objects. | `limit`, `offset` |
| `create_object` | Create/upload an object (small payloads only). | `object_id`, `content_b64`, `content_type`, optional `file_name`, `usage_hint`, `type`, `referenced_by` |
| `get_object` | Get object metadata or bytes. | `object_id`, optional `download` |
| `update_object` | Update object metadata. | `object_id`, optional `usage_hints`, `referenced_by` |
| `delete_object` | Delete object by ID. | `object_id` |
| `get_objects_by_entity` | Objects linked to an entity. | `entity_id`, optional `limit` |
| `get_objects_by_task` | Objects linked to a task. | `task_id`, optional `limit` |
| `add_object_reference` | Link object to entity/task. | `object_id`, `entity_id` or `task_id` |
| `remove_object_reference` | Unlink object from entity/task. | `object_id`, `entity_id` or `task_id` |
| `find_orphaned_objects` | Find objects with no references. | optional `limit`, `offset` |
| `get_object_references` | List current object references. | `object_id` |
| `validate_object_references` | Validate object references. | `object_id` |
| `cleanup_object_references` | Cleanup invalid object references. | `object_id` |
| `get_changed_since` | Incremental change feed (includes deleted_entities/deleted_tasks/deleted_objects; ~1h in-memory TTL). | `since`, `limit_per_type` |
| `get_full_dataset` | Fetch complete dataset snapshot. | optional `entity_limit`, `task_limit`, `object_limit` |
| `health_check` | Check Atlas Command health. | none |
| `test_echo` | Echo payload for connectivity testing. | free-form |
## Troubleshooting
- **No response from gateway**: Confirm the gateway process is running, radio IDs match `--gateway-node-id`, and Atlas Command is reachable at `--api-base-url` (check `/health`).
- **Serial port errors**: Verify the port path and permissions (`sudo usermod -a -G dialout $USER` on Linux). Try `--simulate-radio` to isolate radio issues.
- **Large payloads dropped**: Messages are chunked to fit Meshtastic limits (up to 230 bytes per chunk). Avoid sending large JSON or binary content; use HTTP uploads instead.
- **Duplicate or missing responses**: Ensure both radios share the same channel/PSK and that clocks are roughly in sync. The CLI generates a fresh UUID for every request; custom integrations should do the same and reuse that UUID when retrying so deduplication works as expected.
- **Timeouts**: Increase `--timeout` for slow links. Poor RF conditions may require retries on the client side.
- **Observability checks**: Metrics and health endpoints are available by default on `http://<metrics-host>:<metrics-port>/metrics`, `/health`, `/ready`, and `/status` unless `--disable-metrics` is set.
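For custom integrations, the chunking and deduplication points above suggest a pattern like the following sketch. The transport callable and JSON field names are assumptions; only the 230-byte chunk size and the reuse-the-UUID-on-retry rule come from the notes above.

```python
import json
import uuid

MAX_CHUNK = 230  # Meshtastic per-chunk payload limit noted above

def chunk(encoded: bytes, size: int = MAX_CHUNK) -> list:
    """Split an encoded request into radio-sized chunks."""
    return [encoded[i:i + size] for i in range(0, len(encoded), size)]

def send_with_retries(transport_send, command: str, data: dict, attempts: int = 3):
    """Generate the request UUID once, before the retry loop, and reuse it on
    every attempt so the gateway can deduplicate repeated deliveries."""
    request_id = str(uuid.uuid4())
    payload = json.dumps({"request_id": request_id, "command": command, "data": data})
    last_err = None
    for _ in range(attempts):
        try:
            return transport_send(chunk(payload.encode()))
        except TimeoutError as err:
            last_err = err
    raise last_err
```

A fresh call to `send_with_retries` gets a fresh UUID; retries within one call do not.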
## Where to go next
- Protocol details: `docs/SYSTEMS_DESIGN.md`
- Code reference: `atlas_meshtastic_bridge.gateway` and `atlas_meshtastic_bridge.transport`
- Atlas Command HTTP client: `connection_packages/atlas_asset_http_client_python`
| text/markdown | ATLAS Team | null | null | null | null | atlas, meshtastic, bridge, mesh, telemetry | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Communications",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"atlas-asset-client>=0.3.0",
"msgpack>=1.0.7",
"zstandard>=0.22.0",
"meshtastic>=2.3.0; extra == \"meshtastic\"",
"pypubsub>=4.0.3; extra == \"meshtastic\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/the-Drunken-coder/ATLAS",
"Repository, https://github.com/the-Drunken-coder/ATLAS"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:13:21.841790 | atlas_meshtastic_bridge-0.1.20.tar.gz | 90,470 | 41/5f/f09d4bd1774e9e4898341aed630de16fb8cf2b1cad0c04ac9681b07b041a/atlas_meshtastic_bridge-0.1.20.tar.gz | source | sdist | null | false | 3a39cd2ed3631ab757d8f7801b7f3a5c | b1b28a4074012c0778b997ddb6a32408c85c53d10854bd0a1f57b569f868fac6 | 415ff09d4bd1774e9e4898341aed630de16fb8cf2b1cad0c04ac9681b07b041a | MIT | [] | 228 |
2.4 | tree-sitter-rsm | 1.0.1 | RSM grammar for tree-sitter | # tree-sitter-rsm
This is the reference implementation of the Readable Science Markup (RSM) language,
written as a tree-sitter grammar. RSM is one of the cornerstone components of the
[Aris](https://github.com/leotrs/aris) system. For more information [see
here](https://aris.pub).
## Development
The two main files are `grammar.js` and `src/scanner.c` which implement the language
grammar and the external scanner, respectively. The tests are defined in
`test/corpus/*.txt`, and can be executed via `npx tree-sitter test`.
Compile the grammar locally by executing
```bash
npx tree-sitter generate --abi 14
```
and build locally by executing
```bash
npx tree-sitter build
```
Once development of a feature is complete, submit a PR.
## Publishing
The grammar is released as a PyPI package by following these
[instructions](https://tree-sitter.github.io/tree-sitter/creating-parsers/6-publishing.html).
At the time of writing, a summarized version of the instructions is as follows:
+ Bump the grammar version with `npx tree-sitter version <version>` and commit the generated
changes.
+ Tag the commit with `git tag -- v<version>`.
+ Push the commit and tag with `git push --tags origin main`.
+ The `publish.yml` GitHub workflow will take care of the rest.
| text/markdown | null | null | null | null | MIT | incremental, parsing, tree-sitter, rsm | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Topic :: Software Development :: Compilers",
"Topic :: Text Processing :: Linguistic",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"tree-sitter~=0.22; extra == \"core\""
] | [] | [] | [] | [
"Homepage, https://github.com/leotrs/tree-sitter-rsm"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:12:41.903250 | tree_sitter_rsm-1.0.1.tar.gz | 62,436 | 05/46/4df382e31f2e096c7551e2afca33a563ce84c05f80109325947124ad8509/tree_sitter_rsm-1.0.1.tar.gz | source | sdist | null | false | a8cf4819bcef9c3549d5004876c866f1 | a22f68cdf0fa2505054acdb2dafa8cfc532cb9b8d6fb8f9d56b025eba4b66b6d | 05464df382e31f2e096c7551e2afca33a563ce84c05f80109325947124ad8509 | null | [] | 601 |
2.4 | binance-and-crypto-payment | 0.1.0 | Binance and Crypto Payment Checkout | # binance-and-crypto-payment
Official Python SDK for Binance and Crypto Payment integration.
## Installation
```bash
pip install binance-and-crypto-payment
```
## Usage
```python
from binance_and_crypto_payment import CryptoPaymentClient

client = CryptoPaymentClient("PUBLIC_KEY", "SECRET_KEY")

response = client.payment(
    invoice_id="INV001",
    amount=1.00,
    items=[{"name": "Product", "qty": "1", "price": "1.00"}],
    data={
        "first_name": "John",
        "last_name": "Doe",
        "email": "john@example.com",
        "redirect_url": "https://example.com/success",
        "notify_url": "https://example.com/notify",
        "cancel_url": "https://example.com/cancel",
    },
)
print(response)
```
| text/markdown | PayerURL | null | null | null | MIT | null | [] | [] | null | null | null | [] | [] | [] | [
"requests"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.13 | 2026-02-21T09:12:38.587026 | binance_and_crypto_payment-0.1.0.tar.gz | 2,576 | 9e/73/53567459038fbfa1dad0eaa973e2542ea796c3f36203e1c22bc29a9d27ac/binance_and_crypto_payment-0.1.0.tar.gz | source | sdist | null | false | ca4df2b7052211e07c77d1864ffe83bf | 792f8af5f0a68c65e7b354a7c5230abe0f0a54553b644654fa306ff32aa2b551 | 9e7353567459038fbfa1dad0eaa973e2542ea796c3f36203e1c22bc29a9d27ac | null | [] | 260 |
2.4 | warp-lang | 1.12.0.dev20260221 | A Python framework for high-performance simulation and graphics programming | [](https://badge.fury.io/py/warp-lang)
[](https://opensource.org/licenses/Apache-2.0)

[](https://pepy.tech/project/warp-lang)
[](https://codecov.io/github/NVIDIA/warp)

# NVIDIA Warp
Warp is a Python framework for writing high-performance simulation and graphics code. Warp takes
regular Python functions and JIT compiles them to efficient kernel code that can run on the CPU or GPU.
Warp is designed for [spatial computing](https://en.wikipedia.org/wiki/Spatial_computing)
and comes with a rich set of primitives that make it easy to write
programs for physics simulation, perception, robotics, and geometry processing. In addition, Warp kernels
are differentiable and can be used as part of machine-learning pipelines with frameworks such as PyTorch, JAX and Paddle.
Please refer to the project [Documentation](https://nvidia.github.io/warp/) for API and language reference and
[CHANGELOG.md](https://github.com/NVIDIA/warp/blob/main/CHANGELOG.md) for release history.
<div align="center">
<img src="https://github.com/NVIDIA/warp/raw/main/docs/img/header.jpg">
<p><i>A selection of physical simulations computed with Warp</i></p>
</div>
## Installing
Python version 3.9 or newer is required. Warp can run on x86-64 and ARMv8 CPUs on Windows, Linux, and macOS.
GPU support requires a CUDA-capable NVIDIA GPU and driver (minimum GeForce GTX 9xx).
The easiest way to install Warp is from [PyPI](https://pypi.org/project/warp-lang/):
```text
pip install warp-lang
```
You can also use `pip install warp-lang[examples]` to install additional dependencies for running examples and USD-related features.
The binaries hosted on PyPI are currently built with the CUDA 12 runtime.
We also provide binaries built with the CUDA 13.0 runtime on the [GitHub Releases](https://github.com/NVIDIA/warp/releases) page.
Copy the URL of the appropriate wheel file (`warp-lang-{ver}+cu13-py3-none-{platform}.whl`) and pass it to
the `pip install` command, e.g.
| Platform | Install Command |
| --------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| Linux aarch64 | `pip install https://github.com/NVIDIA/warp/releases/download/v1.11.1/warp_lang-1.11.1+cu13-py3-none-manylinux_2_34_aarch64.whl` |
| Linux x86-64 | `pip install https://github.com/NVIDIA/warp/releases/download/v1.11.1/warp_lang-1.11.1+cu13-py3-none-manylinux_2_28_x86_64.whl` |
| Windows x86-64 | `pip install https://github.com/NVIDIA/warp/releases/download/v1.11.1/warp_lang-1.11.1+cu13-py3-none-win_amd64.whl` |
The `--force-reinstall` option may need to be used to overwrite a previous installation.
### Nightly Builds
Nightly builds of Warp from the `main` branch are available on the [NVIDIA Package Index](https://pypi.nvidia.com/warp-lang/).
To install the latest nightly build, use the following command:
```text
pip install -U --pre warp-lang --extra-index-url=https://pypi.nvidia.com/
```
Note that the nightly builds are built with the CUDA 12 runtime and are not published for macOS.
If you plan to install nightly builds regularly, you can simplify future installations by adding NVIDIA's package
repository as an extra index via the `PIP_EXTRA_INDEX_URL` environment variable. For example:
```text
export PIP_EXTRA_INDEX_URL="https://pypi.nvidia.com"
```
This ensures the index is automatically used for `pip` commands, avoiding the need to specify it explicitly.
### CUDA Requirements
* Warp packages built with CUDA Toolkit 12.x require NVIDIA driver 525 or newer.
* Warp packages built with CUDA Toolkit 13.x require NVIDIA driver 580 or newer.
This applies to pre-built packages distributed on PyPI and GitHub and also when building Warp from source.
Note that building Warp with the `--quick` flag changes the driver requirements. The quick build skips CUDA backward compatibility, so the minimum required driver is determined by the CUDA Toolkit version. Refer to the [latest CUDA Toolkit release notes](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html) to find the minimum required driver for different CUDA Toolkit versions (e.g., [this table from CUDA Toolkit 12.6](https://docs.nvidia.com/cuda/archive/12.6.0/cuda-toolkit-release-notes/index.html#id5)).
Warp checks the installed driver during initialization and will report a warning if the driver is not suitable, e.g.:
```text
Warp UserWarning:
Insufficient CUDA driver version.
The minimum required CUDA driver version is 12.0, but the installed CUDA driver version is 11.8.
Visit https://github.com/NVIDIA/warp/blob/main/README.md#installing for guidance.
```
This will make CUDA devices unavailable, but the CPU can still be used.
To remedy the situation there are a few options:
* Update the driver.
* Install a compatible pre-built Warp package.
* Build Warp from source using a CUDA Toolkit that's compatible with the installed driver.
## Tutorial Notebooks
The [NVIDIA Accelerated Computing Hub](https://github.com/NVIDIA/accelerated-computing-hub) contains the current,
actively maintained set of Warp tutorials:
| Notebook | Colab Link |
|----------|------------|
| [Introduction to NVIDIA Warp](https://github.com/NVIDIA/accelerated-computing-hub/blob/32fe3d5a448446fd52c14a6726e1b867cbfed2d9/Accelerated_Python_User_Guide/notebooks/Chapter_12_Intro_to_NVIDIA_Warp.ipynb) | [](https://colab.research.google.com/github/NVIDIA/accelerated-computing-hub/blob/32fe3d5a448446fd52c14a6726e1b867cbfed2d9/Accelerated_Python_User_Guide/notebooks/Chapter_12_Intro_to_NVIDIA_Warp.ipynb) |
| [GPU-Accelerated Ising Model Simulation in NVIDIA Warp](https://github.com/NVIDIA/accelerated-computing-hub/blob/32fe3d5a448446fd52c14a6726e1b867cbfed2d9/Accelerated_Python_User_Guide/notebooks/Chapter_12.1_IsingModel_In_Warp.ipynb) | [](https://colab.research.google.com/github/NVIDIA/accelerated-computing-hub/blob/32fe3d5a448446fd52c14a6726e1b867cbfed2d9/Accelerated_Python_User_Guide/notebooks/Chapter_12.1_IsingModel_In_Warp.ipynb) |
Several notebooks in the [notebooks](https://github.com/NVIDIA/warp/tree/main/notebooks) directory
provide further examples and cover key Warp features:
| Notebook | Colab Link |
|----------|------------|
| [Warp Core Tutorial: Basics](https://github.com/NVIDIA/warp/blob/main/notebooks/core_01_basics.ipynb) | [](https://colab.research.google.com/github/NVIDIA/warp/blob/main/notebooks/core_01_basics.ipynb) |
| [Warp Core Tutorial: Generics](https://github.com/NVIDIA/warp/blob/main/notebooks/core_02_generics.ipynb) | [](https://colab.research.google.com/github/NVIDIA/warp/blob/main/notebooks/core_02_generics.ipynb) |
| [Warp Core Tutorial: Points](https://github.com/NVIDIA/warp/blob/main/notebooks/core_03_points.ipynb) | [](https://colab.research.google.com/github/NVIDIA/warp/blob/main/notebooks/core_03_points.ipynb) |
| [Warp Core Tutorial: Meshes](https://github.com/NVIDIA/warp/blob/main/notebooks/core_04_meshes.ipynb) | [](https://colab.research.google.com/github/NVIDIA/warp/blob/main/notebooks/core_04_meshes.ipynb) |
| [Warp Core Tutorial: Volumes](https://github.com/NVIDIA/warp/blob/main/notebooks/core_05_volumes.ipynb) | [](https://colab.research.google.com/github/NVIDIA/warp/blob/main/notebooks/core_05_volumes.ipynb) |
| [Warp PyTorch Tutorial: Basics](https://github.com/NVIDIA/warp/blob/main/notebooks/pytorch_01_basics.ipynb) | [](https://colab.research.google.com/github/NVIDIA/warp/blob/main/notebooks/pytorch_01_basics.ipynb) |
| [Warp PyTorch Tutorial: Custom Operators](https://github.com/NVIDIA/warp/blob/main/notebooks/pytorch_02_custom_operators.ipynb) | [](https://colab.research.google.com/github/NVIDIA/warp/blob/main/notebooks/pytorch_02_custom_operators.ipynb) |
## Running Examples
The [warp/examples](https://github.com/NVIDIA/warp/tree/main/warp/examples) directory contains a number of scripts categorized under subdirectories
that show how to implement various simulation methods using the Warp API.
Most examples will generate USD files containing time-sampled animations in the current working directory.
Before running examples, install the optional example dependencies using:
```text
pip install warp-lang[examples]
```
On Linux aarch64 systems (e.g., NVIDIA DGX Spark), the `[examples]` extra automatically installs
[`usd-exchange`](https://pypi.org/project/usd-exchange/) instead of `usd-core` as a drop-in replacement,
since `usd-core` wheels are not available for that platform.
Examples can be run from the command-line as follows:
```text
python -m warp.examples.<example_subdir>.<example>
```
To browse the example source code, you can open the directory where the files are located like this:
```text
python -m warp.examples.browse
```
Most examples can be run on either the CPU or a CUDA-capable device, but a handful require a CUDA-capable device. These are marked at the top of the example script.
USD files can be viewed or rendered inside [NVIDIA Omniverse](https://developer.nvidia.com/omniverse), Pixar's UsdView, and Blender. Note that Preview in macOS is not recommended as it has limited support for time-sampled animations.
Built-in unit tests can be run from the command-line as follows:
```text
python -m warp.tests
```
### warp/examples/core
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_dem.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_dem.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_fluid.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_fluid.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_graph_capture.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_graph_capture.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_marching_cubes.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_marching_cubes.png"></a></td>
</tr>
<tr>
<td align="center">dem</td>
<td align="center">fluid</td>
<td align="center">graph capture</td>
<td align="center">marching cubes</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_mesh.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_mesh.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_nvdb.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_nvdb.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_raycast.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_raycast.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_raymarch.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_raymarch.png"></a></td>
</tr>
<tr>
<td align="center">mesh</td>
<td align="center">nvdb</td>
<td align="center">raycast</td>
<td align="center">raymarch</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_sample_mesh.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_sample_mesh.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_sph.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_sph.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_torch.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_torch.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_wave.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/core_wave.png"></a></td>
</tr>
<tr>
<td align="center">sample mesh</td>
<td align="center">sph</td>
<td align="center">torch</td>
<td align="center">wave</td>
</tr>
</tbody>
</table>
### warp/examples/fem
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_diffusion_3d.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_diffusion_3d.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_mixed_elasticity.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_mixed_elasticity.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_apic_fluid.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_apic_fluid.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_streamlines.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_streamlines.png"></a></td>
</tr>
<tr>
<td align="center">diffusion 3d</td>
<td align="center">mixed elasticity</td>
<td align="center">apic fluid</td>
<td align="center">streamlines</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_distortion_energy.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_distortion_energy.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_navier_stokes.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_navier_stokes.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_burgers.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_burgers.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_magnetostatics.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_magnetostatics.png"></a></td>
</tr>
<tr>
<td align="center">distortion energy</td>
<td align="center">navier stokes</td>
<td align="center">burgers</td>
<td align="center">magnetostatics</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_adaptive_grid.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_adaptive_grid.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_nonconforming_contact.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_nonconforming_contact.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_darcy_ls_optimization.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_darcy_ls_optimization.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_elastic_shape_optimization.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/fem_elastic_shape_optimization.png"></a></td>
</tr>
<tr>
<td align="center">adaptive grid</td>
<td align="center">nonconforming contact</td>
<td align="center">darcy level-set optimization</td>
<td align="center">elastic shape optimization</td>
</tr>
</tbody>
</table>
### warp/examples/optim
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_diffray.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/optim_diffray.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_fluid_checkpoint.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/optim_fluid_checkpoint.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_particle_repulsion.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/optim_particle_repulsion.png"></a></td>
<td></td>
</tr>
<tr>
<td align="center">diffray</td>
<td align="center">fluid checkpoint</td>
<td align="center">particle repulsion</td>
<td align="center"></td>
</tr>
</tbody>
</table>
### warp/examples/tile
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/tile/example_tile_mlp.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/tile_mlp.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/tile/example_tile_nbody.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/tile_nbody.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/tile/example_tile_mcgp.py"><img src="https://media.githubusercontent.com/media/NVIDIA/warp/refs/heads/main/docs/img/examples/tile_mcgp.png"></a></td>
<td></td>
</tr>
<tr>
<td align="center">mlp</td>
<td align="center">nbody</td>
<td align="center">mcgp</td>
<td align="center"></td>
</tr>
</tbody>
</table>
## Building
For developers who want to build the library themselves, the following tools are required:
* Microsoft Visual Studio 2019 upwards (Windows)
* GCC 9.4 upwards (Linux)
* CUDA Toolkit 12.0 or higher
* [Git LFS](https://git-lfs.github.com/) installed
After cloning the repository, users should run:
```text
python build_lib.py
```
Upon success, the script will output platform-specific binary files in `warp/bin/`.
The build script will look for the CUDA Toolkit in its default installation path.
This path can be overridden by setting the `CUDA_PATH` environment variable. Alternatively,
the path to the CUDA Toolkit can be passed to the build command as
`--cuda-path="..."`. After building, the Warp package should be installed using:
```text
pip install -e .
```
This ensures that subsequent modifications to the library will be reflected in the Python package.
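As a concrete illustration of the CUDA Toolkit override described above (the toolkit path is an example; substitute your own installation):

```text
CUDA_PATH=/usr/local/cuda-12.4 python build_lib.py
python build_lib.py --cuda-path="/usr/local/cuda-12.4"
```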
## Learn More
Please see the following resources for additional background on Warp:
* [Product Page](https://developer.nvidia.com/warp-python)
* [SIGGRAPH 2024 Course Slides](https://dl.acm.org/doi/10.1145/3664475.3664543)
* [GTC 2024 Presentation](https://www.nvidia.com/en-us/on-demand/session/gtc24-s63345/)
* [GTC 2022 Presentation](https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41599)
* [GTC 2021 Presentation](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31838)
* [SIGGRAPH Asia 2021 Differentiable Simulation Course](https://dl.acm.org/doi/abs/10.1145/3476117.3483433)
The underlying technology in Warp has been used in a number of research projects at NVIDIA including the following publications:
* Accelerated Policy Learning with Parallel Differentiable Simulation - Xu, J., Makoviychuk, V., Narang, Y., Ramos, F., Matusik, W., Garg, A., & Macklin, M. [(2022)](https://short-horizon-actor-critic.github.io)
* DiSECt: Differentiable Simulator for Robotic Cutting - Heiden, E., Macklin, M., Narang, Y., Fox, D., Garg, A., & Ramos, F [(2021)](https://github.com/NVlabs/DiSECt)
* gradSim: Differentiable Simulation for System Identification and Visuomotor Control - Murthy, J. Krishna, Miles Macklin, Florian Golemo, Vikram Voleti, Linda Petrini, Martin Weiss, Breandan Considine et al. [(2021)](https://gradsim.github.io)
## Frequently Asked Questions
See the [FAQ](https://nvidia.github.io/warp/faq.html) in the Warp documentation.
## Support
Problems, questions, and feature requests can be opened on [GitHub Issues](https://github.com/NVIDIA/warp/issues).
For inquiries not suited for GitHub Issues, please email <warp-python@nvidia.com>.
## Versioning
Versions take the format X.Y.Z, similar to [Python itself](https://devguide.python.org/developer-workflow/development-cycle/#devcycle):
* Increments in X are reserved for major reworks of the project causing disruptive incompatibility (or reaching the 1.0 milestone).
* Increments in Y are for regular releases with a new set of features.
* Increments in Z are for bug fixes; in principle, there are no new features. The Z component can be omitted if it is 0 or not relevant.
This is similar to [Semantic Versioning](https://semver.org/) but is less strict regarding backward compatibility.
Like with Python, some breaking changes can be present between minor versions if well-documented and gradually introduced.
Note that prior to 0.11.0, this schema was not strictly adhered to.
## License
Warp is provided under the Apache License, Version 2.0.
Please see [LICENSE.md](https://github.com/NVIDIA/warp/blob/main/LICENSE.md) for full license text.
This project will download and install additional third-party open source software projects.
Review the license terms of these open source projects before use.
## Contributing
Contributions and pull requests from the community are welcome.
Please see the [Contribution Guide](https://nvidia.github.io/warp/user_guide/contribution_guide.html) for more
information on contributing to the development of Warp.
## Publications & Citation
### Research Using Warp
Our [PUBLICATIONS.md](https://github.com/NVIDIA/warp/blob/main/PUBLICATIONS.md) file lists academic and research
publications that leverage the capabilities of Warp.
We encourage you to add your own published work using Warp to this list.
### Citing Warp
To cite Warp itself in your own publications, please use the following BibTeX entry:
```bibtex
@misc{warp2022,
title = {Warp: A High-performance Python Framework for GPU Simulation and Graphics},
author = {Miles Macklin},
month = {March},
year = {2022},
note = {NVIDIA GPU Technology Conference (GTC)},
howpublished = {\url{https://github.com/nvidia/warp}}
}
```
| text/markdown | null | NVIDIA Corporation <warp-python@nvidia.com> | null | null | Apache-2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Environment :: GPU :: NVIDIA CUDA",
"Environment :: GPU :: NVIDIA CUDA :: 12",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"nvidia-sphinx-theme; python_version >= \"3.9\" and extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"pre-commit; extra == \"docs\"",
"myst_parser; extra == \"docs\"",
"usd-core>=25.5; (platform_machine != \"aarch64\" and python_version < \"3.14\") and extra == \"benchmark\"",
"usd-exchange>=2.2; (python_version >= \"3.10\" and python_version < \"3.13\" and platform_machine == \"aarch64\") and extra == \"benchmark\"",
"blosc>=1.11.1; extra == \"examples\"",
"matplotlib>=3.7.5; extra == \"examples\"",
"pillow>=10.4.0; extra == \"examples\"",
"psutil>=7.1.0; extra == \"examples\"",
"pyglet>=2.1.9; extra == \"examples\"",
"usd-core>=25.5; (platform_machine != \"aarch64\" and python_version < \"3.14\") and extra == \"examples\"",
"usd-exchange>=2.2; (python_version >= \"3.10\" and python_version < \"3.13\" and platform_machine == \"aarch64\") and extra == \"examples\"",
"warp-lang[examples]; extra == \"torch-cu12\"",
"torch>=2.7.0; python_version >= \"3.9\" and extra == \"torch-cu12\"",
"warp-lang[examples]; extra == \"dev\"",
"warp-lang[docs]; extra == \"dev\"",
"nvtx; extra == \"dev\"",
"coverage[toml]; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://developer.nvidia.com/warp-python",
"Documentation, https://nvidia.github.io/warp",
"Repository, https://github.com/NVIDIA/warp",
"Issues, https://github.com/NVIDIA/warp/issues",
"Changelog, https://github.com/NVIDIA/warp/blob/main/CHANGELOG.md"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T09:11:09.994258 | warp_lang-1.12.0.dev20260221.tar.gz | 7,253 | d5/e5/e7706e53571a41a1abc15eb9904d9915a8855a18b3b194937c67bee79ed9/warp_lang-1.12.0.dev20260221.tar.gz | source | sdist | null | false | 9fb37f7e368e31bdf8d957da52cdb47b | de02544beae98c8f199fec32093a5b2fdb515e31ff9dfc7439d4a267b992aed6 | d5e5e7706e53571a41a1abc15eb9904d9915a8855a18b3b194937c67bee79ed9 | null | [] | 182 |
2.4 | waste-predictor | 1.0.5 | Production-grade machine learning system for industrial waste prediction | # Waste Prediction Module V4
A production-grade machine learning system for predicting industrial waste compositions based on production volume and environmental parameters.
## 📊 Model Performance
| Metric | Score |
|--------|-------|
| **R² Score** | **0.98** |
| **MAE** | 1,607 |
| **RMSE** | 3,092 |
| **CV R² (5-Fold)** | 0.974 ± 0.005 |
### Per-Target R² Scores
| Waste Type | R² |
|------------|-----|
| Total_Waste_kg | 0.96 |
| Solid_Waste_Limestone_kg | 0.98 |
| Solid_Waste_Gypsum_kg | 0.99 |
| Solid_Waste_Industrial_Salt_kg | 0.98 |
| Liquid_Waste_Bittern_Liters | 0.99 |
| Potential_Epsom_Salt_kg | 0.98 |
| Potential_Potash_kg | 0.99 |
| Potential_Magnesium_Oil_Liters | 0.98 |
## 🚀 Quick Start
### Installation
#### Install from source (local development)
```bash
pip install -e .
```
#### Install from wheel file
```bash
pip install waste_predictor-1.0.5-py3-none-any.whl
```
#### Install from GitHub (if hosted)
```bash
pip install git+https://github.com/yourusername/waste-predictor.git
```
#### Install from PyPI (if published)
```bash
pip install waste-predictor
```
## 📈 Usage
### Making Predictions
The package provides a simple `get_waste_prediction` function:
```python
from waste_predictor import get_waste_prediction
# Prepare input data
input_data = {
'production_volume': 50000,
'rain_sum': 200,
'temperature_mean': 28,
'humidity_mean': 85,
'wind_speed_mean': 15,
'month': 6
}
# Get prediction
result = get_waste_prediction(input_data)
print(result)
# {
# 'Total_Waste_kg': 101804.91,
# 'Solid_Waste_Limestone_kg': 7322.23,
# 'Solid_Waste_Gypsum_kg': 32448.03,
# 'Solid_Waste_Industrial_Salt_kg': 62034.64,
# 'Liquid_Waste_Bittern_Liters': 40547.23,
# 'Potential_Epsom_Salt_kg': 2128.86,
# 'Potential_Potash_kg': 379.34,
# 'Potential_Magnesium_Oil_Liters': 4055.32
# }
```
### Training from MongoDB
Users can train their own models using data from MongoDB:
```python
from waste_predictor import train_from_mongodb
# Train with local MongoDB
results = train_from_mongodb(
mongo_uri='mongodb://localhost:27017',
database='waste_db',
collection='training',
output_model_path='my_custom_model.pkl'
)
print(f"Model R²: {results['metrics']['r2']:.4f}")
print(f"Model saved to: {results['model_path']}")
```
#### Training with MongoDB Atlas (Cloud)
```python
from waste_predictor import train_from_mongodb
results = train_from_mongodb(
mongo_uri='mongodb+srv://cluster.mongodb.net',
database='waste_production_db',
username='your_username',
password='your_password',
collection='training',
output_model_path='waste_predictor_custom.pkl',
verbose=True
)
```
#### Training with Full Connection String
```python
from waste_predictor import train_from_mongodb
connection_string = "mongodb+srv://user:pass@cluster.mongodb.net/dbname?retryWrites=true"
results = train_from_mongodb(
mongo_uri=connection_string,
database='waste_db',
collection='training'
)
```
### Training from DataFrame
You can also train from a pandas DataFrame:
```python
import pandas as pd
from waste_predictor import train_from_dataframe
# Load your data
df = pd.read_csv('training_data.csv')
# Train model
results = train_from_dataframe(
df=df,
output_model_path='custom_model.pkl',
verbose=True
)
print(f"Training R²: {results['metrics']['r2']:.4f}")
```
### Updating Model from S3
Download and update your model directly from AWS S3:
```python
from waste_predictor import update_model_from_s3
# Update model from S3
result = update_model_from_s3(
bucket_name='my-models-bucket',
s3_key='models/waste_predictor_v5.pkl',
aws_access_key_id='YOUR_ACCESS_KEY_ID',
aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
region_name='us-east-1'
)
if result['success']:
print(f"✓ {result['message']}")
else:
print(f"✗ {result['message']}")
```
#### Using IAM Role (EC2/Lambda)
When running on AWS infrastructure with IAM roles:
```python
from waste_predictor import update_model_from_s3
# No credentials needed - uses IAM role
result = update_model_from_s3(
bucket_name='my-models-bucket',
s3_key='models/waste_predictor_v5.pkl'
)
```
#### Restore from Backup
If an update fails, restore the previous model:
```python
from waste_predictor import restore_model_from_backup
result = restore_model_from_backup()
print(result['message'])
```
**📚 For detailed S3 setup and examples, see [S3_UPDATE_GUIDE.md](S3_UPDATE_GUIDE.md)**
## 📋 Required Data Format
### MongoDB Document Format
Each document in your MongoDB training collection should have:
```json
{
"Year": 2000,
"Month": 1,
"production_volume": 43163.99,
"rain_sum": 270.87,
"temperature_mean": 26.53,
"humidity_mean": 100,
"wind_speed_mean": 19.68,
"Total_Waste_kg": 96664.3838,
"Solid_Waste_Limestone_kg": 5080.5243,
"Solid_Waste_Gypsum_kg": 23189.168,
"Solid_Waste_Industrial_Salt_kg": 68394.6915,
"Liquid_Waste_Bittern_Liters": 31725.5458,
"Potential_Epsom_Salt_kg": 1605.914,
"Potential_Potash_kg": 206.5851,
"Potential_Magnesium_Oil_Liters": 3163.2316
}
```
### Required Fields
**Input Features:**
- `production_volume` - Production volume (numeric)
- `rain_sum` - Total rainfall in mm (numeric)
- `temperature_mean` - Average temperature in °C (numeric)
- `humidity_mean` - Average humidity percentage (numeric)
- `wind_speed_mean` - Average wind speed (numeric)
- `Month` - Month number 1-12 (integer)
**Output Targets (for training only):**
- `Total_Waste_kg`
- `Solid_Waste_Limestone_kg`
- `Solid_Waste_Gypsum_kg`
- `Solid_Waste_Industrial_Salt_kg`
- `Liquid_Waste_Bittern_Liters`
- `Potential_Epsom_Salt_kg`
- `Potential_Potash_kg`
- `Potential_Magnesium_Oil_Liters`
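As a quick sanity check before inserting training documents, the required fields above can be verified programmatically. A minimal sketch (the helper `missing_training_fields` is ours, not part of the package):

```python
REQUIRED_FEATURES = [
    "production_volume", "rain_sum", "temperature_mean",
    "humidity_mean", "wind_speed_mean", "Month",
]
REQUIRED_TARGETS = [
    "Total_Waste_kg", "Solid_Waste_Limestone_kg", "Solid_Waste_Gypsum_kg",
    "Solid_Waste_Industrial_Salt_kg", "Liquid_Waste_Bittern_Liters",
    "Potential_Epsom_Salt_kg", "Potential_Potash_kg",
    "Potential_Magnesium_Oil_Liters",
]

def missing_training_fields(doc):
    """Return the names of required fields absent from a training document."""
    return [field for field in REQUIRED_FEATURES + REQUIRED_TARGETS if field not in doc]
```

Documents with a non-empty result should be fixed (or skipped) before training.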
## 📁 Project Structure
```
local-module/
├── data/
│ └── training/
│ └── training.csv # Training dataset (312 samples)
├── train_v4.py # V4 training (PRODUCTION - R²=0.98)
├── predict_v4.py # V4 inference module
├── waste_predictor_v4.pkl # Trained model checkpoint
├── waste_predictor_v4_metadata.json
├── train_v3.py # V3 training (Neural network)
├── model_v2.py # Enhanced neural network model
├── feature_engineering.py # Feature engineering pipeline
├── requirements.txt # Dependencies
└── README.md
```
## 🔧 Input Features
| Feature | Description | Range |
|---------|-------------|-------|
| `production_volume` | Production volume | 0 - 200,000 |
| `rain_sum` | Total rainfall (mm) | 0 - 1,000 |
| `temperature_mean` | Average temperature (°C) | 0 - 50 |
| `humidity_mean` | Average humidity (%) | 0 - 100 |
| `wind_speed_mean` | Average wind speed (km/h) | 0 - 50 |
| `month` | Month of year | 1 - 12 |
## 📤 Output Predictions
| Output | Description |
|--------|-------------|
| `Total_Waste_kg` | Total waste produced (kg) |
| `Solid_Waste_Limestone_kg` | Limestone solid waste (kg) |
| `Solid_Waste_Gypsum_kg` | Gypsum solid waste (kg) |
| `Solid_Waste_Industrial_Salt_kg` | Industrial salt waste (kg) |
| `Liquid_Waste_Bittern_Liters` | Bittern liquid waste (L) |
| `Potential_Epsom_Salt_kg` | Potential Epsom salt byproduct (kg) |
| `Potential_Potash_kg` | Potential Potash byproduct (kg) |
| `Potential_Magnesium_Oil_Liters` | Potential Magnesium oil (L) |
## 🧠 Model Architecture (V4)
### Weighted Ensemble of 3 Model Types:
1. **XGBoost Gradient Boosting** (weight ~33%)
- 500 estimators, max_depth=6
- Per-target models with log-transformed outputs
2. **Stacked Ensemble** (weight ~34%)
- Level 0: XGBoost + LightGBM + Random Forest + GBR
- Level 1: Ridge regression meta-learner
- 5-fold stacking with passthrough
3. **Deep Neural Network** (weight ~33%)
- Architecture: 256 → 512 → 256 → 128 with skip connections
- GELU activation, BatchNorm, Dropout
- Cosine annealing LR schedule
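The ensemble's final output is a weighted average of the three sub-models' predictions. A minimal sketch of that combination step (the helper name, the sample numbers, and the exact weights are illustrative, not the trained values):

```python
def weighted_ensemble(predictions, weights):
    """Combine per-model predictions for one target into a single value."""
    total = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Illustrative sub-model outputs for Total_Waste_kg (kg); not real model output.
combined = weighted_ensemble(
    [101_500.0, 102_100.0, 101_800.0],  # XGBoost, stacked ensemble, DNN
    [0.33, 0.34, 0.33],                 # ~33% / ~34% / ~33% as above
)
# combined ≈ 101803.0
```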
### Feature Engineering (30+ features):
- Log/sqrt/squared production transforms
- Cyclical month encoding (sin/cos)
- Weather condition indices (wet, dry, evaporation)
- Production × weather interactions
- Domain-driven ratio features
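As an example, the cyclical month encoding maps months onto the unit circle so December and January end up adjacent rather than 11 apart. A minimal sketch:

```python
import math

def encode_month(month):
    """Map a 1-12 month to (sin, cos) coordinates on the unit circle."""
    angle = 2 * math.pi * (month - 1) / 12
    return math.sin(angle), math.cos(angle)

# December and January land next to each other on the circle,
# unlike the raw integers 12 and 1.
december = encode_month(12)
january = encode_month(1)
```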
## 📈 Performance Comparison
| Version | R² Score | MAE | Key Technique |
|---------|----------|-----|---------------|
| V1 (Original) | 0.47 | 7,873 | Simple MLP |
| V3 | 0.77 | 5,575 | Log transform + Feature eng |
| **V4 (Production)** | **0.98** | **1,607** | XGBoost + Stacked + DNN Ensemble |
## 🔄 Retraining
To retrain the model with new data:
1. Add new data to `data/training/training.csv`
2. Run training:
```bash
python train_v4.py
```
3. Model will be saved to `waste_predictor_v4.pkl`
## 📝 License
MIT License
| text/markdown | Research Project Team | null | null | null | MIT | machine-learning, waste-prediction, industrial, ml, prediction, ensemble | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"torch>=2.0.0",
"pandas>=2.0.0",
"numpy>=1.24.0",
"scikit-learn>=1.3.0",
"xgboost>=2.0.0",
"lightgbm>=4.0.0",
"pymongo>=4.0.0",
"boto3>=1.28.0",
"botocore>=1.31.0",
"pytest>=7.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"flake8>=6.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"flask>=3.0.0; extra == \"api\"",
"flask-cors>=4.0.0; extra == \"api\""
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/waste-predictor",
"Documentation, https://github.com/yourusername/waste-predictor#readme",
"Repository, https://github.com/yourusername/waste-predictor",
"Issues, https://github.com/yourusername/waste-predictor/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T09:09:54.957886 | waste_predictor-1.0.5.tar.gz | 10,873,523 | eb/1c/b7bffec15fef0d2c37143061306e0d3a5ed4d34d8e4e5447525f04417faf/waste_predictor-1.0.5.tar.gz | source | sdist | null | false | a7dc82d6820edc0bb19ae9631f8245ef | afbed50459ad08231a065e99f42714cfae021ed215879bf5ecac466421a3690d | eb1cb7bffec15fef0d2c37143061306e0d3a5ed4d34d8e4e5447525f04417faf | null | [
"LICENSE"
] | 225 |
2.4 | atlas-asset-client | 0.3.14 | Async HTTP client for Atlas Command. | # Atlas Command HTTP Client (Python)
`atlas-asset-client` is a lightweight async wrapper around the Atlas Command REST API. It provides strongly typed convenience methods for working with
entities, tasks, objects, and query endpoints via HTTP.
## Installation
```bash
pip install atlas-asset-client
```
During local development inside this repository:
```bash
pip install -e Atlas_Client_SDKs/connection_packages/atlas_asset_http_client_python
```
## Import Options
You can import the client using either module name:
```python
# Option 1: Package-name-matching import (recommended)
from atlas_asset_client import AtlasCommandHttpClient
# Option 2: Full module name (also works, maintained for backward compatibility)
from atlas_asset_http_client_python import AtlasCommandHttpClient
```
Both import paths work identically and provide the same functionality.
## Quickstart
### Using Typed Components (Recommended)
The client now supports typed component parameters that provide IDE autocomplete, type checking, and validation before transmission:
```python
import asyncio
from atlas_asset_client import (
AtlasCommandHttpClient,
EntityComponents,
TelemetryComponent,
HealthComponent,
CommunicationsComponent,
TaskCatalogComponent,
)
async def main() -> None:
async with AtlasCommandHttpClient("http://localhost:8000") as client:
# Create entity with typed components
components = EntityComponents(
telemetry=TelemetryComponent(
latitude=40.7128,
longitude=-74.0060,
altitude_m=120,
speed_m_s=8.2,
heading_deg=165,
),
health=HealthComponent(battery_percent=85),
communications=CommunicationsComponent(link_state="connected"),
task_catalog=TaskCatalogComponent(
supported_tasks=["move_to_location", "survey_grid"]
),
)
entity = await client.create_entity(
entity_id="drone-01",
entity_type="asset",
alias="Demo Drone",
subtype="drone",
components=components,
)
print("Created entity:", entity["entity_id"])
asyncio.run(main())
```
## Typed Component Reference
### Entity Components
The `EntityComponents` class accepts the following typed component fields:
| Component | Type | Description |
|-----------|------|-------------|
| `telemetry` | `TelemetryComponent` | Position and motion data |
| `geometry` | `GeometryComponent` | GeoJSON geometry for geoentities |
| `task_catalog` | `TaskCatalogComponent` | Supported task identifiers |
| `media_refs` | `List[MediaRefItem]` | References to media objects |
| `mil_view` | `MilViewComponent` | Military tactical classification |
| `health` | `HealthComponent` | Health and vital statistics |
| `sensor_refs` | `List[SensorRefItem]` | Sensor configurations |
| `communications` | `CommunicationsComponent` | Network link status |
| `task_queue` | `TaskQueueComponent` | Current and queued work items |
| `status` | `StatusComponent` | Operational status marker |
| `heartbeat` | `HeartbeatComponent` | Last heartbeat timestamp |
| `custom_*` | `Any` | Custom components (must be prefixed with `custom_`) |
#### TelemetryComponent
```python
TelemetryComponent(
latitude=40.7128, # degrees (WGS84)
longitude=-74.0060, # degrees (WGS84)
altitude_m=120, # meters above sea level
speed_m_s=8.2, # horizontal speed in m/s
heading_deg=165, # heading (0=N, 90=E)
)
```
#### GeometryComponent
```python
# Point
GeometryComponent(type="Point", coordinates=[-74.0060, 40.7128])
# LineString
GeometryComponent(type="LineString", coordinates=[[-74.0060, 40.7128], [-74.0070, 40.7138]])
# Polygon
GeometryComponent(type="Polygon", coordinates=[[[-74.0060, 40.7128], [-74.0070, 40.7128], [-74.0070, 40.7130], [-74.0060, 40.7128]]])
```
#### HealthComponent
```python
HealthComponent(battery_percent=85) # 0-100
```
#### CommunicationsComponent
```python
CommunicationsComponent(link_state="connected") # connected/disconnected/degraded/unknown
```
#### MilViewComponent
```python
MilViewComponent(
classification="friendly", # friendly/hostile/neutral/unknown/civilian
last_seen="2025-11-23T10:05:00Z",
)
```
#### StatusComponent
```python
StatusComponent(
value="active",
last_update="2025-12-01T10:30:00Z",
)
```
#### HeartbeatComponent
```python
HeartbeatComponent(last_seen="2025-12-01T10:30:00Z")
```
### Task Components
The `TaskComponents` class accepts:
| Component | Type | Description |
|-----------|------|-------------|
| `command` | `CommandComponent` | Task command/work type identifier |
| `parameters` | `TaskParametersComponent` | Command parameters for task execution |
| `progress` | `TaskProgressComponent` | Runtime telemetry about execution |
```python
from atlas_asset_http_client_python import (
CommandComponent,
TaskComponents,
TaskParametersComponent,
TaskProgressComponent,
)
components = TaskComponents(
command=CommandComponent(type="move_to_location"),
parameters=TaskParametersComponent(
latitude=40.123,
longitude=-74.456,
altitude_m=120,
),
progress=TaskProgressComponent(
percent=65,
updated_at="2025-11-25T08:45:00Z",
status_detail="En route to destination",
),
)
task = await client.create_task(
task_id="task-1",
entity_id="asset-1",
components=components,
)
```
### Custom Components
Custom components must be prefixed with `custom_`:
```python
components = EntityComponents(
telemetry=TelemetryComponent(latitude=40.7128),
custom_weather={"wind_speed": 12, "gusts": 18}, # Custom component
)
```
## Entity Types Guide
Atlas Command supports several entity types, each with different purposes and component structures. All entities are created using the `create_entity()` method, but the `entity_type` parameter and `components` structure differ based on what you're representing.
### Assets
**Purpose:** Assets represent taskable autonomous agents that can execute commands and report telemetry. Examples include drones, rovers, security cameras, and other controllable hardware.
**When to use:** Register any physical or virtual device that can receive tasks from Atlas Command.
**Required fields:**
- `entity_id`: Unique identifier for the asset (string)
- `entity_type`: Must be `"asset"` (string)
- `alias`: Human-readable name for the asset (string)
- `subtype`: Asset subtype (string, e.g., "drone", "rover", "camera")
**Common components:**
- `telemetry`: Location and movement data (`latitude`, `longitude`, `altitude_m`, `speed_m_s`, `heading_deg`)
- `task_catalog`: Supported task types the asset can execute
- `health`: System status (e.g., `battery_percent`)
- `communications`: Connection state (`link_state`)
- `sensor_refs`: Array of attached sensor configurations
- `media_refs`: Array of object references for camera feeds or thumbnails
**Example:**
```python
from atlas_asset_client import (
    AtlasCommandHttpClient,
    EntityComponents,
    TelemetryComponent,
    TaskCatalogComponent,
    HealthComponent,
    CommunicationsComponent,
)
async with AtlasCommandHttpClient("http://localhost:8000") as client:
asset = await client.create_entity(
entity_id="drone-alpha-01",
entity_type="asset",
alias="Drone Alpha 01",
subtype="drone",
components=EntityComponents(
telemetry=TelemetryComponent(
latitude=40.7128,
longitude=-74.0060,
altitude_m=120,
speed_m_s=8.2,
heading_deg=165,
),
task_catalog=TaskCatalogComponent(
supported_tasks=["move_to_location", "survey_grid"]
),
health=HealthComponent(battery_percent=76),
communications=CommunicationsComponent(link_state="connected"),
),
)
```
### Tracks
**Purpose:** Tracks represent observed entities detected by sensors or other assets. They are passive entities that track movement and characteristics of detected objects, but cannot receive tasks.
**When to use:** Register any detected object or entity that needs to be monitored but not controlled. Examples include vehicles, people, or other objects detected by security cameras or radar systems.
**Required fields:**
- `entity_id`: Unique identifier for the track (string)
- `entity_type`: Must be `"track"` (string)
- `alias`: Human-readable name for the track (string)
- `subtype`: Track subtype (string, e.g., "vehicle", "person", "unknown")
**Common components:**
- `telemetry`: Current location and movement data
- `mil_view`: Classification and tracking information (`classification`: friendly/hostile/neutral/unknown/civilian, `last_seen`)
- `sensor_refs`: Sensors that detected this track
- `media_refs`: Object references for images or videos of the track
**Example:**
```python
from atlas_asset_client import AtlasCommandHttpClient, EntityComponents, TelemetryComponent, MilViewComponent
async with AtlasCommandHttpClient("http://localhost:8000") as client:
track = await client.create_entity(
entity_id="target-alpha",
entity_type="track",
alias="Target Alpha",
subtype="vehicle",
components=EntityComponents(
telemetry=TelemetryComponent(
latitude=40.7128,
longitude=-74.0060,
altitude_m=120,
speed_m_s=8.2,
heading_deg=165,
),
mil_view=MilViewComponent(
classification="unknown",
last_seen="2025-11-23T10:05:00Z",
),
),
)
```
### Geofeatures
**Purpose:** Geofeatures represent geographic features or zones on the map. They can be points, lines, polygons, or circles representing waypoints, routes, boundaries, restricted areas, or other geographic annotations.
**When to use:** Register any geographic annotation that needs to be displayed on the map. Common use cases include waypoints, patrol routes, no-fly zones, survey areas, or boundaries.
**Required fields:**
- `entity_id`: Unique identifier for the geofeature (string)
- `entity_type`: Must be `"geofeature"` (string)
- `alias`: Human-readable name for the geofeature (string)
- `subtype`: Geofeature subtype (string, e.g., "waypoint", "route", "zone", "boundary")
- `components.geometry`: Geometry definition based on type
**Geometry types:**
#### Point Geofeature
A single coordinate location. Use for waypoints or point-of-interest markers.
```python
from atlas_asset_client import AtlasCommandHttpClient, EntityComponents, GeometryComponent
async with AtlasCommandHttpClient("http://localhost:8000") as client:
point = await client.create_entity(
entity_id="waypoint-alpha",
entity_type="geofeature",
alias="Waypoint Alpha",
subtype="waypoint",
components=EntityComponents(
geometry=GeometryComponent(
type="Point",
coordinates=[-74.0060, 40.7128],
),
),
)
```
#### LineString Geofeature
A path or route defined by multiple coordinates. Use for patrol routes, flight paths, or boundaries.
```python
from atlas_asset_client import AtlasCommandHttpClient, EntityComponents, GeometryComponent
async with AtlasCommandHttpClient("http://localhost:8000") as client:
linestring = await client.create_entity(
entity_id="patrol-route-alpha",
entity_type="geofeature",
alias="Patrol Route Alpha",
subtype="route",
components=EntityComponents(
geometry=GeometryComponent(
type="LineString",
coordinates=[
[-74.0060, 40.7128],
[-74.0070, 40.7130],
[-74.0080, 40.7135],
[-74.0090, 40.7140],
],
),
),
)
```
#### Polygon Geofeature
A closed area defined by coordinates. The first and last coordinate must be the same to close the polygon. Use for restricted zones, survey areas, or regions of interest.
```python
from atlas_asset_client import AtlasCommandHttpClient, EntityComponents, GeometryComponent
async with AtlasCommandHttpClient("http://localhost:8000") as client:
polygon = await client.create_entity(
entity_id="area-of-interest-alpha",
entity_type="geofeature",
alias="Area of Interest Alpha",
subtype="zone",
components=EntityComponents(
geometry=GeometryComponent(
type="Polygon",
coordinates=[[
[-74.0060, 40.7128],
[-74.0070, 40.7128],
[-74.0070, 40.7130],
[-74.0060, 40.7130],
[-74.0060, 40.7128],
]],
),
),
)
```
#### Circle Geofeature
A circular area defined by a center point and radius. Use for circular zones, coverage areas, or proximity alerts.
```python
from atlas_asset_client import AtlasCommandHttpClient, EntityComponents, GeometryComponent
async with AtlasCommandHttpClient("http://localhost:8000") as client:
circle = await client.create_entity(
entity_id="perimeter-epsilon",
entity_type="geofeature",
alias="Perimeter Epsilon",
subtype="zone",
components=EntityComponents(
geometry=GeometryComponent(
type="circle",
point_lat=40.7128,
point_lng=-74.0060,
radius_m=500,
),
),
)
```
**Common components for geofeatures:**
- `geometry`: Geometry definition (required)
- `geometry_type`: Explicit type specification (for circles: `"circle"`)
- `description`: Human-readable description of the geofeature
- `mil_view`: Classification metadata if applicable
## Features
- Uses `httpx.AsyncClient` under the hood with pluggable transport/timeouts.
- Convenience methods for every public endpoint:
- `get_root`, `get_health`, `get_readiness`
- `list_entities`, `get_entity`, `create_entity`, `update_entity`, `delete_entity`,
`get_entity_by_alias`, `update_entity_telemetry`, `checkin_entity`
- `list_tasks`, `create_task`, `get_task`, `update_task`, `delete_task`,
`get_tasks_by_entity`, `start_task`, `complete_task`, `transition_task_status`, `fail_task`
  - `list_objects`, `create_object` (uploads a file via `/objects/upload`), `get_object`,
    `download_object`, `create_object_metadata`, `update_object`, `delete_object`, `view_object`,
    `get_objects_by_entity`, `get_objects_by_task`, `find_orphaned_objects`,
    `add_object_reference`, `remove_object_reference`, `get_object_references`,
    `validate_object_references`, `cleanup_object_references`
- `get_changed_since`, `get_full_dataset`
- Optional bearer token support via the `token=` constructor parameter.
- Context manager support (`async with client:`) to manage connection lifecycle.
## Field reference
### Client configuration
- `AtlasCommandHttpClient(base_url, *, token=None, timeout=10.0, transport=None)` – requires `base_url`,
optional `token`, `timeout`, and `transport`.
### Service
- `get_root()` – returns the API root metadata.
- `get_health()` – returns `/health` status payload.
- `get_readiness()` – returns `/readiness` status payload.
### Entities
- `list_entities(*, limit=100, offset=0)` – optional pagination parameters with the defaults shown.
- `get_entity(entity_id)` – requires `entity_id`.
- `get_entity_by_alias(alias)` – requires `alias`.
- `create_entity(*, entity_id, entity_type, alias, subtype, components=None)` – requires `entity_id`, `entity_type`, `alias`, and `subtype`;
`components` are optional.
- `update_entity(entity_id, *, components=None, subtype=None)` – requires `entity_id`; at least one of `components` or `subtype` must be provided.
- `delete_entity(entity_id)` – requires `entity_id`.
- `update_entity_telemetry(entity_id, *, latitude=None, longitude=None, altitude_m=None, speed_m_s=None, heading_deg=None)`
– requires `entity_id`; telemetry values are optional and only set when provided.
- `checkin_entity(entity_id, *, status=None, latitude=None, longitude=None, altitude_m=None, speed_m_s=None, heading_deg=None, status_filter="pending,in_progress", limit=10, since=None, fields=None)`
– requires `entity_id`; optional status/telemetry and task filters are accepted (`fields="minimal"` is supported).
Response includes `entity`, `tasks`, `task_count`, and `task_limit`.
### Tasks
- `list_tasks(*, status=None, limit=25, offset=0)` – optional `status`, page size, and offset.
- `get_task(task_id)` – requires `task_id`.
- `create_task(*, task_id, status="pending", entity_id=None, components=None, extra=None)` – requires `task_id`;
`status` defaults to `"pending"`, `entity_id`, `components`, and `extra` are optional.
- `update_task(task_id, *, status=None, entity_id=None, components=None, extra=None)` – requires `task_id`;
all other parameters are optional and only update when provided.
- `delete_task(task_id)` – requires `task_id`.
- `get_tasks_by_entity(entity_id, *, status=None, limit=25, offset=0)` – requires `entity_id`; filters optional.
- `start_task(task_id)` – requires `task_id`.
- `complete_task(task_id, *, result=None)` – requires `task_id`; optional `result` payload.
- `transition_task_status(task_id, status, *, validate=True, extra=None)` – requires `task_id` and `status`; optional validation toggle and `extra` metadata.
- `fail_task(task_id, *, error_message=None, error_details=None)` – requires `task_id`; error info optional.
### Objects
- `list_objects(*, content_type=None, type=None, limit=100, offset=0)` – optional filters.
- `get_object(object_id)` – requires `object_id`.
- `create_object(file, *, object_id, content_type, usage_hint=None, referenced_by=None, object_type=None)` – requires `file` data, `object_id`, and a MIME `content_type`;
`usage_hint`, `referenced_by`, and `object_type` optional.
- `download_object(object_id)` – returns `(bytes_content, content_type, content_length)`.
- `create_object_metadata(*, object_id, path=None, bucket=None, size_bytes=None, content_type=None, object_type=None, usage_hints=None, referenced_by=None, extra=None)`
– creates object metadata entries via `/objects`.
- `update_object(object_id, *, usage_hints=None, referenced_by=None)` – requires `object_id` and at least one of `usage_hints` or `referenced_by`.
- `delete_object(object_id)` – requires `object_id`.
- `view_object(object_id)` – returns `(text_content, content_type, content_length)`.
- `get_objects_by_entity(entity_id, *, limit=50, offset=0)` – requires `entity_id`, optional pagination.
- `get_objects_by_task(task_id, *, limit=50, offset=0)` – requires `task_id`, optional pagination.
- `add_object_reference(object_id, *, entity_id=None, task_id=None)` / `remove_object_reference(...)`
– require `object_id`; provide either `entity_id` or `task_id` to target the reference.
- `find_orphaned_objects(*, limit=100, offset=0)` – optional pagination.
- `get_object_references(object_id)` / `validate_object_references(object_id)` / `cleanup_object_references(object_id)` –
each requires `object_id`.
**Pagination metadata:** Atlas list endpoints expose `X-Total-Count`, `X-Limit`, `X-Offset`, and
`X-Returned-Count` headers for page bookkeeping.
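A sketch of how these headers could drive offset pagination (the helper `next_offset` is ours; the header semantics are as described above):

```python
from typing import Optional

def next_offset(headers) -> Optional[int]:
    """Compute the offset for the next page from Atlas pagination headers,
    or return None once the final page has been fetched."""
    fetched = int(headers["X-Offset"]) + int(headers["X-Returned-Count"])
    return fetched if fetched < int(headers["X-Total-Count"]) else None
```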
### Queries
- `get_changed_since(since, *, limit_per_type=None)` – requires `since`; optional per-type limit. Response includes `deleted_entities`, `deleted_tasks`, and `deleted_objects` (in-memory, ~1h TTL).
- `get_full_dataset(*, entity_limit=None, task_limit=None, object_limit=None)` – filters are optional.
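Because deletion records are kept in memory with roughly a one-hour TTL, clients syncing incrementally should apply them promptly. A sketch of the cache-pruning step (the cache layout and helper are ours; the deleted-id lists are as documented above):

```python
def apply_deletions(cache, changed):
    """Drop entities, tasks, and objects reported as deleted by
    get_changed_since from a local cache of id -> document maps.

    Assumes `cache` has "entities", "tasks", and "objects" keys and that
    `changed` carries the deleted-id lists returned by the endpoint.
    """
    for kind, deleted_key in (
        ("entities", "deleted_entities"),
        ("tasks", "deleted_tasks"),
        ("objects", "deleted_objects"),
    ):
        for item_id in changed.get(deleted_key, []):
            cache[kind].pop(item_id, None)
    return cache
```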
## Configuration
```python
client = AtlasCommandHttpClient(
"https://atlas.example.com",
token="my-api-token",
timeout=30.0,
)
```
You can also pass a custom `httpx` transport for testing:
```python
transport = httpx.MockTransport(my_handler)
client = AtlasCommandHttpClient("http://testserver", transport=transport)
```
## Error Handling
The client raises exceptions in the following scenarios:
### HTTP Errors
All API calls use `httpx.Response.raise_for_status()` which raises `httpx.HTTPStatusError` for 4xx and 5xx responses:
```python
import httpx
from atlas_asset_http_client_python import AtlasCommandHttpClient
async with AtlasCommandHttpClient("http://localhost:8000") as client:
try:
entity = await client.get_entity("nonexistent-id")
except httpx.HTTPStatusError as e:
if e.response.status_code == 404:
print("Entity not found")
else:
print(f"HTTP error: {e.response.status_code}")
```
### Client Errors
| Exception | Condition |
|-----------|-----------|
| `RuntimeError("Client is closed")` | Attempting to use the client after calling `aclose()` or exiting the context manager |
| `ValueError("update_entity requires at least one of: components, subtype")` | Calling `update_entity()` without providing either `components` or `subtype` |
| `RuntimeError` | `create_object()` with `referenced_by` but the upload response doesn't include `object_id` |
### Validation Errors
When using typed components, dataclass validation may raise `ValueError` or `TypeError`:
```python
from atlas_asset_http_client_python import EntityComponents, HealthComponent
try:
components = EntityComponents(
health=HealthComponent(battery_percent=150) # Invalid: must be 0-100 (inclusive)
)
except ValueError as e:
print(f"Validation error: {e}")
try:
components = EntityComponents(
health=HealthComponent(battery_percent="high") # Invalid: battery_percent must be numeric
)
except TypeError as e:
print(f"Type validation error: {e}")
try:
components = EntityComponents(
unknown_component={"foo": "bar"} # Invalid: unknown component
)
except ValueError as e:
print(f"Unknown component: {e}") # "Unknown component 'unknown_component'. Custom components must be prefixed with 'custom_'"
```
## Testing
Run the test suite with:
```bash
pip install -e .[dev]
pytest
```
The tests use `httpx.MockTransport` so they do not require a running Atlas Command instance.
| text/markdown | ATLAS Team | null | null | null | null | atlas, command, http, asset | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: System :: Monitoring",
"Topic :: Software Development :: Libraries",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"httpx>=0.27",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/atlas/command",
"Repository, https://github.com/atlas/command",
"Documentation, https://github.com/atlas/command/wiki"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:09:46.912593 | atlas_asset_client-0.3.14.tar.gz | 25,291 | 19/ab/e5109ff55c6166e9d051c4703a34d85f6dc08e78915e936caf6c4470d07d/atlas_asset_client-0.3.14.tar.gz | source | sdist | null | false | 0324e57a300a50660c690baf5d0a1ad5 | 23a56947b7d59064ee29c02242218e8e9b1368901ae4f19291d57ec3f34e3878 | 19abe5109ff55c6166e9d051c4703a34d85f6dc08e78915e936caf6c4470d07d | MIT | [
"LICENSE"
] | 241 |
2.4 | wrought | 0.0.1 | Wrought by FluxForge AI — Engineering and operations control system. Full package coming soon. | # Wrought
**Engineering and operations control system by [FluxForge AI](https://github.com/fluxforgeai).**
Wrought is a full-lifecycle control system for AI-assisted engineering and production operations — standardizing feature delivery, incident response, RCA, and governance into repeatable workflows with durable artifacts.
*If it ain't Wrought, it's fraught.*
Full package coming soon.
| text/markdown | null | FluxForge AI <hello@fluxforge.ai> | null | null | null | ai, control-system, engineering, incident-management, mcp | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/fluxforgeai/wrought"
] | uv/0.9.24 {"installer":{"name":"uv","version":"0.9.24","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T09:09:40.859842 | wrought-0.0.1-py3-none-any.whl | 1,529 | 32/42/6c0a7b9bf18131cadf5da0676dbdf77df46fe85282bb9f33860831da7947/wrought-0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | bdb443c5cbf9580999ed3e2b0deed384 | 11042e8523cc0872c3105f7f4ee8a9a992c344c69871728cc2965fa654f88f0c | 32426c0a7b9bf18131cadf5da0676dbdf77df46fe85282bb9f33860831da7947 | MIT | [] | 252 |
2.4 | autogluon | 1.5.1b20260221 | Fast and Accurate ML in 3 Lines of Code |
<div align="center">
<img src="https://user-images.githubusercontent.com/16392542/77208906-224aa500-6aba-11ea-96bd-e81806074030.png" width="350">
## Fast and Accurate ML in 3 Lines of Code
[](https://github.com/autogluon/autogluon/releases)
[](https://anaconda.org/conda-forge/autogluon)
[](https://pypi.org/project/autogluon/)
[](https://pepy.tech/project/autogluon)
[](./LICENSE)
[](https://discord.gg/wjUmjqAc2N)
[](https://twitter.com/autogluon)
[](https://github.com/autogluon/autogluon/actions/workflows/continuous_integration.yml)
[](https://github.com/autogluon/autogluon/actions/workflows/platform_tests-command.yml)
[Installation](https://auto.gluon.ai/stable/install.html) | [Documentation](https://auto.gluon.ai/stable/index.html) | [Release Notes](https://auto.gluon.ai/stable/whats_new/index.html)
</div>
AutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.
## 💾 Installation
AutoGluon is supported on Python 3.10 - 3.13 and is available on Linux, macOS, and Windows.
You can install AutoGluon with:
```bash
pip install autogluon
```
Visit our [Installation Guide](https://auto.gluon.ai/stable/install.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.
## :zap: Quickstart
Build accurate end-to-end ML models in just 3 lines of code!
```python
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
predictions = predictor.predict("test.csv")
```
| AutoGluon Task | Quickstart | API |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
| TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
## :mag: Resources
### Hands-on Tutorials / Talks
Below is a curated list of recent tutorials and talks on AutoGluon. A comprehensive list is available [here](AWESOME.md#videos--tutorials).
| Title | Format | Location | Date |
|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|
| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https://www.youtube.com/watch?v=SwPq9qjaN2Q) | Tutorial | [AutoML 2024](https://2024.automl.cc/) | 2024/09/09 |
| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https://www.youtube.com/watch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https://2023.automl.cc/) | 2023/09/12 |
| :sound: [AutoGluon: The Story](https://automlpodcast.com/episode/autogluon-the-story) | Podcast | [The AutoML Podcast](https://automlpodcast.com/) | 2023/09/05 |
| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https://youtu.be/Lwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin | 2023/06/20 |
| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https://www.youtube.com/watch?v=J1UQUCPB88I) | Tutorial | PyData Seattle | 2023/06/20 |
| :tv: [The AutoML Revolution](https://www.youtube.com/watch?v=VAAITEds-28) | Tutorial | [Fall AutoML School 2022](https://sites.google.com/view/automl-fall-school-2022) | 2022/10/18 |
### Scientific Publications
- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/pdf/2003.06505.pdf) (*Arxiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))
- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://proceedings.neurips.cc/paper/2020/hash/62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))
- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
- [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
### Articles
- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/) (*AWS Open Source Blog*, Mar 2020)
- [AutoGluon overview & example applications](https://towardsdatascience.com/autogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)
### Train/Deploy AutoGluon in the Cloud
- [AutoGluon Cloud](https://auto.gluon.ai/cloud/stable/index.html) (Recommended)
- [AutoGluon on SageMaker AutoPilot](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/autopilot-autogluon.html)
- [AutoGluon on Amazon SageMaker](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/cloud-aws-sagemaker-train-deploy.html)
- [AutoGluon Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)
- [AutoGluon Official Docker Container](https://hub.docker.com/r/autogluon/autogluon)
- [AutoGluon-Tabular on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism) (Not maintained by us)
## :pencil: Citing AutoGluon
If you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).
## :wave: How to get involved
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md) to get started.
## :classical_building: License
This library is licensed under the Apache 2.0 License.
| text/markdown | AutoGluon Community | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Customer Service",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/autogluon/autogluon | null | <3.14,>=3.10 | [] | [] | [] | [
"autogluon.core[all]==1.5.1b20260221",
"autogluon.features==1.5.1b20260221",
"autogluon.tabular[all]==1.5.1b20260221",
"autogluon.multimodal==1.5.1b20260221",
"autogluon.timeseries[all]==1.5.1b20260221",
"autogluon.tabular[tabarena]==1.5.1b20260221; extra == \"tabarena\""
] | [] | [] | [] | [
"Documentation, https://auto.gluon.ai",
"Bug Reports, https://github.com/autogluon/autogluon/issues",
"Source, https://github.com/autogluon/autogluon/",
"Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T09:09:38.771209 | autogluon-1.5.1b20260221.tar.gz | 10,155 | 5c/f2/cbb3ea61f77ef97d2302055e816916d04c86134cc7ad788afabe5594de47/autogluon-1.5.1b20260221.tar.gz | source | sdist | null | false | 8bc9bdf57cc6149cfad798a12f9e7342 | d43f236c1bb7731020c30ed92aa9328fc15119106df055499b78768a33c7a430 | 5cf2cbb3ea61f77ef97d2302055e816916d04c86134cc7ad788afabe5594de47 | null | [
"LICENSE",
"NOTICE"
] | 231 |
2.4 | autogluon.timeseries | 1.5.1b20260221 | Fast and Accurate ML in 3 Lines of Code |
<div align="center">
<img src="https://user-images.githubusercontent.com/16392542/77208906-224aa500-6aba-11ea-96bd-e81806074030.png" width="350">
## Fast and Accurate ML in 3 Lines of Code
[](https://github.com/autogluon/autogluon/releases)
[](https://anaconda.org/conda-forge/autogluon)
[](https://pypi.org/project/autogluon/)
[](https://pepy.tech/project/autogluon)
[](./LICENSE)
[](https://discord.gg/wjUmjqAc2N)
[](https://twitter.com/autogluon)
[](https://github.com/autogluon/autogluon/actions/workflows/continuous_integration.yml)
[](https://github.com/autogluon/autogluon/actions/workflows/platform_tests-command.yml)
[Installation](https://auto.gluon.ai/stable/install.html) | [Documentation](https://auto.gluon.ai/stable/index.html) | [Release Notes](https://auto.gluon.ai/stable/whats_new/index.html)
</div>
AutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.
## 💾 Installation
AutoGluon is supported on Python 3.10 - 3.13 and is available on Linux, macOS, and Windows.
You can install AutoGluon with:
```bash
pip install autogluon
```
Visit our [Installation Guide](https://auto.gluon.ai/stable/install.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.
## :zap: Quickstart
Build accurate end-to-end ML models in just 3 lines of code!
```python
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
predictions = predictor.predict("test.csv")
```
| AutoGluon Task | Quickstart | API |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
| TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
## :mag: Resources
### Hands-on Tutorials / Talks
Below is a curated list of recent tutorials and talks on AutoGluon. A comprehensive list is available [here](AWESOME.md#videos--tutorials).
| Title | Format | Location | Date |
|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|
| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https://www.youtube.com/watch?v=SwPq9qjaN2Q) | Tutorial | [AutoML 2024](https://2024.automl.cc/) | 2024/09/09 |
| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https://www.youtube.com/watch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https://2023.automl.cc/) | 2023/09/12 |
| :sound: [AutoGluon: The Story](https://automlpodcast.com/episode/autogluon-the-story) | Podcast | [The AutoML Podcast](https://automlpodcast.com/) | 2023/09/05 |
| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https://youtu.be/Lwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin | 2023/06/20 |
| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https://www.youtube.com/watch?v=J1UQUCPB88I) | Tutorial | PyData Seattle | 2023/06/20 |
| :tv: [The AutoML Revolution](https://www.youtube.com/watch?v=VAAITEds-28) | Tutorial | [Fall AutoML School 2022](https://sites.google.com/view/automl-fall-school-2022) | 2022/10/18 |
### Scientific Publications
- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/pdf/2003.06505.pdf) (*Arxiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))
- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://proceedings.neurips.cc/paper/2020/hash/62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))
- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
- [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
### Articles
- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/) (*AWS Open Source Blog*, Mar 2020)
- [AutoGluon overview & example applications](https://towardsdatascience.com/autogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)
### Train/Deploy AutoGluon in the Cloud
- [AutoGluon Cloud](https://auto.gluon.ai/cloud/stable/index.html) (Recommended)
- [AutoGluon on SageMaker AutoPilot](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/autopilot-autogluon.html)
- [AutoGluon on Amazon SageMaker](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/cloud-aws-sagemaker-train-deploy.html)
- [AutoGluon Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)
- [AutoGluon Official Docker Container](https://hub.docker.com/r/autogluon/autogluon)
- [AutoGluon-Tabular on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism) (Not maintained by us)
## :pencil: Citing AutoGluon
If you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).
## :wave: How to get involved
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md) to get started.
## :classical_building: License
This library is licensed under the Apache 2.0 License.
| text/markdown | AutoGluon Community | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Customer Service",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/autogluon/autogluon | null | <3.14,>=3.10 | [] | [] | [] | [
"joblib<1.7,>=1.2",
"numpy<2.4.0,>=1.25.0",
"scipy<1.17,>=1.5.4",
"pandas<2.4.0,>=2.0.0",
"torch<2.10,>=2.6",
"lightning<2.6,>=2.5.1",
"transformers[sentencepiece]<4.58,>=4.51.0",
"accelerate<2.0,>=0.34.0",
"gluonts<0.17,>=0.15.0",
"networkx<4,>=3.0",
"statsforecast<2.0.2,>=1.7.0",
"mlforecast<0.15.0,>=0.14.0",
"utilsforecast<0.2.12,>=0.2.3",
"coreforecast<0.0.17,>=0.0.12",
"fugue>=0.9.0",
"tqdm<5,>=4.38",
"orjson~=3.9",
"einops<1,>=0.7",
"chronos-forecasting<2.4,>=2.2.2",
"peft<0.18,>=0.13.0",
"tensorboard<3,>=2.9",
"autogluon.core==1.5.1b20260221",
"autogluon.common==1.5.1b20260221",
"autogluon.features==1.5.1b20260221",
"autogluon.tabular[catboost,lightgbm,xgboost]==1.5.1b20260221",
"pytest; extra == \"tests\"",
"ruff>=0.0.285; extra == \"tests\"",
"flaky<4,>=3.7; extra == \"tests\"",
"pytest-timeout<3,>=2.1; extra == \"tests\"",
"autogluon.core[raytune]==1.5.1b20260221; extra == \"ray\"",
"autogluon.core[raytune]==1.5.1b20260221; extra == \"all\""
] | [] | [] | [] | [
"Documentation, https://auto.gluon.ai",
"Bug Reports, https://github.com/autogluon/autogluon/issues",
"Source, https://github.com/autogluon/autogluon/",
"Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T09:09:35.296522 | autogluon_timeseries-1.5.1b20260221.tar.gz | 205,719 | 1d/9a/c5139dfc5dc5aa75a248f2e39464311424cd5898e4c4a5917fb85fbb5f5f/autogluon_timeseries-1.5.1b20260221.tar.gz | source | sdist | null | false | 219ef2e454c17c835f186ca9c3b91191 | be6f539b59dd1da3d746e611c725f4834921920f38abc43f610b3c1ddd7a24a2 | 1d9ac5139dfc5dc5aa75a248f2e39464311424cd5898e4c4a5917fb85fbb5f5f | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | autogluon.multimodal | 1.5.1b20260221 | Fast and Accurate ML in 3 Lines of Code |
<div align="center">
<img src="https://user-images.githubusercontent.com/16392542/77208906-224aa500-6aba-11ea-96bd-e81806074030.png" width="350">
## Fast and Accurate ML in 3 Lines of Code
[](https://github.com/autogluon/autogluon/releases)
[](https://anaconda.org/conda-forge/autogluon)
[](https://pypi.org/project/autogluon/)
[](https://pepy.tech/project/autogluon)
[](./LICENSE)
[](https://discord.gg/wjUmjqAc2N)
[](https://twitter.com/autogluon)
[](https://github.com/autogluon/autogluon/actions/workflows/continuous_integration.yml)
[](https://github.com/autogluon/autogluon/actions/workflows/platform_tests-command.yml)
[Installation](https://auto.gluon.ai/stable/install.html) | [Documentation](https://auto.gluon.ai/stable/index.html) | [Release Notes](https://auto.gluon.ai/stable/whats_new/index.html)
</div>
AutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.
## 💾 Installation
AutoGluon is supported on Python 3.10 - 3.13 and is available on Linux, macOS, and Windows.
You can install AutoGluon with:
```bash
pip install autogluon
```
Visit our [Installation Guide](https://auto.gluon.ai/stable/install.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.
## :zap: Quickstart
Build accurate end-to-end ML models in just 3 lines of code!
```python
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
predictions = predictor.predict("test.csv")
```
| AutoGluon Task | Quickstart | API |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
| TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
## :mag: Resources
### Hands-on Tutorials / Talks
Below is a curated list of recent tutorials and talks on AutoGluon. A comprehensive list is available [here](AWESOME.md#videos--tutorials).
| Title | Format | Location | Date |
|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|
| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https://www.youtube.com/watch?v=SwPq9qjaN2Q) | Tutorial | [AutoML 2024](https://2024.automl.cc/) | 2024/09/09 |
| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https://www.youtube.com/watch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https://2023.automl.cc/) | 2023/09/12 |
| :sound: [AutoGluon: The Story](https://automlpodcast.com/episode/autogluon-the-story) | Podcast | [The AutoML Podcast](https://automlpodcast.com/) | 2023/09/05 |
| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https://youtu.be/Lwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin | 2023/06/20 |
| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https://www.youtube.com/watch?v=J1UQUCPB88I) | Tutorial | PyData Seattle | 2023/06/20 |
| :tv: [The AutoML Revolution](https://www.youtube.com/watch?v=VAAITEds-28) | Tutorial | [Fall AutoML School 2022](https://sites.google.com/view/automl-fall-school-2022) | 2022/10/18 |
### Scientific Publications
- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/pdf/2003.06505.pdf) (*Arxiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))
- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://proceedings.neurips.cc/paper/2020/hash/62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))
- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
- [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
### Articles
- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/) (*AWS Open Source Blog*, Mar 2020)
- [AutoGluon overview & example applications](https://towardsdatascience.com/autogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)
### Train/Deploy AutoGluon in the Cloud
- [AutoGluon Cloud](https://auto.gluon.ai/cloud/stable/index.html) (Recommended)
- [AutoGluon on SageMaker AutoPilot](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/autopilot-autogluon.html)
- [AutoGluon on Amazon SageMaker](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/cloud-aws-sagemaker-train-deploy.html)
- [AutoGluon Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)
- [AutoGluon Official Docker Container](https://hub.docker.com/r/autogluon/autogluon)
- [AutoGluon-Tabular on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism) (Not maintained by us)
## :pencil: Citing AutoGluon
If you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).
## :wave: How to get involved
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md) to get started.
## :classical_building: License
This library is licensed under the Apache 2.0 License.
| text/markdown | AutoGluon Community | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Customer Service",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/autogluon/autogluon | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy<2.4.0,>=1.25.0",
"scipy<1.17,>=1.5.4",
"pandas<2.4.0,>=2.0.0",
"scikit-learn<1.8.0,>=1.4.0",
"Pillow<12,>=10.0.1",
"tqdm<5,>=4.38",
"boto3<2,>=1.10",
"torch<2.10,>=2.6",
"lightning<2.6,>=2.5.1",
"transformers[sentencepiece]<4.58,>=4.51.0",
"accelerate<2.0,>=0.34.0",
"fsspec[http]<=2025.3",
"requests<3,>=2.30",
"jsonschema<4.24,>=4.18",
"seqeval<1.3.0,>=1.2.2",
"evaluate<0.5.0,>=0.4.0",
"timm<1.0.7,>=0.9.5",
"torchvision<0.25.0,>=0.21.0",
"scikit-image<0.26.0,>=0.19.1",
"text-unidecode<1.4,>=1.3",
"torchmetrics<1.8,>=1.2.0",
"omegaconf<2.4.0,>=2.1.1",
"autogluon.core[raytune]==1.5.1b20260221",
"autogluon.features==1.5.1b20260221",
"autogluon.common==1.5.1b20260221",
"pytorch-metric-learning<2.9,>=1.3.0",
"nlpaug<1.2.0,>=1.1.10",
"nltk<3.10,>=3.4.5",
"openmim<0.4.0,>=0.3.7",
"defusedxml<0.7.2,>=0.7.1",
"jinja2<3.2,>=3.0.3",
"tensorboard<3,>=2.9",
"pytesseract<0.4,>=0.3.9",
"nvidia-ml-py3<8.0,>=7.352.0",
"pdf2image<1.19,>=1.17.0",
"ruff; extra == \"tests\"",
"datasets<3.6.0,>=2.16.0; extra == \"tests\"",
"tensorrt<10.9.1,>=8.6.0; (platform_system == \"Linux\" and python_version < \"3.11\") and extra == \"tests\"",
"onnx!=1.16.2,<1.21.0,>=1.13.0; platform_system == \"Windows\" and extra == \"tests\"",
"onnx<1.21.0,>=1.13.0; platform_system != \"Windows\" and extra == \"tests\"",
"onnxruntime<1.24.0,>=1.17.0; extra == \"tests\"",
"onnxruntime-gpu<1.24.0,>=1.17.0; (platform_system != \"Darwin\" and platform_machine != \"aarch64\") and extra == \"tests\""
] | [] | [] | [] | [
"Documentation, https://auto.gluon.ai",
"Bug Reports, https://github.com/autogluon/autogluon/issues",
"Source, https://github.com/autogluon/autogluon/",
"Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T09:09:31.810773 | autogluon_multimodal-1.5.1b20260221.tar.gz | 368,405 | 82/62/afbfa830dce2d1e355eb6bbb7e510f1221aafceb07610ee17ed89f2627ac/autogluon_multimodal-1.5.1b20260221.tar.gz | source | sdist | null | false | b7bd322f863548e1af2be7c7fe00b57f | c4ed0ba094b2cf0787f32381a07348a5173ea549ef3590b4e57b5e63f914ef16 | 8262afbfa830dce2d1e355eb6bbb7e510f1221aafceb07610ee17ed89f2627ac | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | autogluon.tabular | 1.5.1b20260221 | Fast and Accurate ML in 3 Lines of Code |
<div align="center">
<img src="https://user-images.githubusercontent.com/16392542/77208906-224aa500-6aba-11ea-96bd-e81806074030.png" width="350">
## Fast and Accurate ML in 3 Lines of Code
[](https://github.com/autogluon/autogluon/releases)
[](https://anaconda.org/conda-forge/autogluon)
[](https://pypi.org/project/autogluon/)
[](https://pepy.tech/project/autogluon)
[](./LICENSE)
[](https://discord.gg/wjUmjqAc2N)
[](https://twitter.com/autogluon)
[](https://github.com/autogluon/autogluon/actions/workflows/continuous_integration.yml)
[](https://github.com/autogluon/autogluon/actions/workflows/platform_tests-command.yml)
[Installation](https://auto.gluon.ai/stable/install.html) | [Documentation](https://auto.gluon.ai/stable/index.html) | [Release Notes](https://auto.gluon.ai/stable/whats_new/index.html)
</div>
AutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.
## 💾 Installation
AutoGluon supports Python 3.10–3.13 and is available on Linux, macOS, and Windows.
You can install AutoGluon with:
```bash
pip install autogluon
```
Visit our [Installation Guide](https://auto.gluon.ai/stable/install.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.
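Because wheels are only published for the supported interpreter range, a quick stdlib check can fail fast before attempting an install. This is a minimal sketch (not part of AutoGluon itself); the bounds mirror the package's `requires_python` specifier, `>=3.10,<3.14`:

```python
import sys

# Supported interpreter range, matching requires_python = ">=3.10,<3.14".
LOWER = (3, 10)
UPPER = (3, 14)  # exclusive


def interpreter_supported(version=None):
    """Return True if `version` (default: the running interpreter)
    falls inside the half-open [LOWER, UPPER) range."""
    if version is None:
        version = sys.version_info[:2]
    return LOWER <= tuple(version) < UPPER
```

For example, `interpreter_supported((3, 13))` is `True`, while `interpreter_supported((3, 14))` is `False` because the upper bound is exclusive.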
## :zap: Quickstart
Build accurate end-to-end ML models in just 3 lines of code!
```python
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
predictions = predictor.predict("test.csv")
```
| AutoGluon Task | Quickstart | API |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
| TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
## :mag: Resources
### Hands-on Tutorials / Talks
Below is a curated list of recent tutorials and talks on AutoGluon. A comprehensive list is available [here](AWESOME.md#videos--tutorials).
| Title | Format | Location | Date |
|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|
| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https://www.youtube.com/watch?v=SwPq9qjaN2Q) | Tutorial | [AutoML 2024](https://2024.automl.cc/) | 2024/09/09 |
| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https://www.youtube.com/watch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https://2023.automl.cc/) | 2023/09/12 |
| :sound: [AutoGluon: The Story](https://automlpodcast.com/episode/autogluon-the-story) | Podcast | [The AutoML Podcast](https://automlpodcast.com/) | 2023/09/05 |
| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https://youtu.be/Lwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin | 2023/06/20 |
| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https://www.youtube.com/watch?v=J1UQUCPB88I) | Tutorial | PyData Seattle | 2023/06/20 |
| :tv: [The AutoML Revolution](https://www.youtube.com/watch?v=VAAITEds-28) | Tutorial | [Fall AutoML School 2022](https://sites.google.com/view/automl-fall-school-2022) | 2022/10/18 |
### Scientific Publications
- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/pdf/2003.06505.pdf) (*arXiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))
- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://proceedings.neurips.cc/paper/2020/hash/62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))
- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
- [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*arXiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
### Articles
- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/) (*AWS Open Source Blog*, Mar 2020)
- [AutoGluon overview & example applications](https://towardsdatascience.com/autogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)
### Train/Deploy AutoGluon in the Cloud
- [AutoGluon Cloud](https://auto.gluon.ai/cloud/stable/index.html) (Recommended)
- [AutoGluon on SageMaker AutoPilot](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/autopilot-autogluon.html)
- [AutoGluon on Amazon SageMaker](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/cloud-aws-sagemaker-train-deploy.html)
- [AutoGluon Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)
- [AutoGluon Official Docker Container](https://hub.docker.com/r/autogluon/autogluon)
- [AutoGluon-Tabular on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism) (Not maintained by us)
## :pencil: Citing AutoGluon
If you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).
## :wave: How to get involved
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md) to get started.
## :classical_building: License
This library is licensed under the Apache 2.0 License.
| text/markdown | AutoGluon Community | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Customer Service",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/autogluon/autogluon | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy<2.4.0,>=1.25.0",
"scipy<1.17,>=1.5.4",
"pandas<2.4.0,>=2.0.0",
"scikit-learn<1.8.0,>=1.4.0",
"networkx<4,>=3.0",
"autogluon.core==1.5.1b20260221",
"autogluon.features==1.5.1b20260221",
"lightgbm<4.7,>=4.0; extra == \"lightgbm\"",
"catboost<1.3,>=1.2; extra == \"catboost\"",
"xgboost<3.2,>=2.0; extra == \"xgboost\"",
"pytabkit<1.8,>=1.7.2; extra == \"realmlp\"",
"interpret-core<0.8,>=0.7.2; extra == \"interpret\"",
"spacy<3.9; extra == \"fastai\"",
"torch<2.10,>=2.6; extra == \"fastai\"",
"fastai<2.8.6,>=2.3.1; extra == \"fastai\"",
"torch<2.10,>=2.6; extra == \"tabm\"",
"tabpfn<6.2.1,>=6.2.0; extra == \"tabpfn\"",
"tabdpt<1.2,>=1.1.11; extra == \"tabdpt\"",
"torch<2.10,>=2.6; extra == \"tabpfnmix\"",
"huggingface_hub[torch]<1.0; extra == \"tabpfnmix\"",
"einops<0.9,>=0.7; extra == \"tabpfnmix\"",
"loguru; extra == \"mitra\"",
"einx; extra == \"mitra\"",
"omegaconf; extra == \"mitra\"",
"torch<2.10,>=2.6; extra == \"mitra\"",
"transformers; extra == \"mitra\"",
"huggingface_hub[torch]<1.0; extra == \"mitra\"",
"einops<0.9,>=0.7; extra == \"mitra\"",
"tabicl<2.1,>=2.0; extra == \"tabicl\"",
"autogluon.core[all]==1.5.1b20260221; extra == \"ray\"",
"scikit-learn-intelex<2025.10,>=2025.0; extra == \"skex\"",
"imodels<2.1.0,>=1.3.10; extra == \"imodels\"",
"skl2onnx<1.20.0,>=1.15.0; extra == \"skl2onnx\"",
"onnx!=1.16.2,<1.21.0,>=1.13.0; platform_system == \"Windows\" and extra == \"skl2onnx\"",
"onnx<1.21.0,>=1.13.0; platform_system != \"Windows\" and extra == \"skl2onnx\"",
"onnxruntime<1.24.0,>=1.17.0; extra == \"skl2onnx\"",
"onnxruntime-gpu<1.24.0,>=1.17.0; (platform_system != \"Darwin\" and platform_machine != \"aarch64\") and extra == \"skl2onnx\"",
"torch<2.10,>=2.6; extra == \"all\"",
"catboost<1.3,>=1.2; extra == \"all\"",
"transformers; extra == \"all\"",
"einx; extra == \"all\"",
"huggingface_hub[torch]<1.0; extra == \"all\"",
"loguru; extra == \"all\"",
"fastai<2.8.6,>=2.3.1; extra == \"all\"",
"autogluon.core[all]==1.5.1b20260221; extra == \"all\"",
"spacy<3.9; extra == \"all\"",
"einops<0.9,>=0.7; extra == \"all\"",
"omegaconf; extra == \"all\"",
"lightgbm<4.7,>=4.0; extra == \"all\"",
"xgboost<3.2,>=2.0; extra == \"all\"",
"pytabkit<1.8,>=1.7.2; extra == \"tabarena\"",
"torch<2.10,>=2.6; extra == \"tabarena\"",
"catboost<1.3,>=1.2; extra == \"tabarena\"",
"transformers; extra == \"tabarena\"",
"tabicl<2.1,>=2.0; extra == \"tabarena\"",
"einx; extra == \"tabarena\"",
"huggingface_hub[torch]<1.0; extra == \"tabarena\"",
"loguru; extra == \"tabarena\"",
"fastai<2.8.6,>=2.3.1; extra == \"tabarena\"",
"autogluon.core[all]==1.5.1b20260221; extra == \"tabarena\"",
"spacy<3.9; extra == \"tabarena\"",
"einops<0.9,>=0.7; extra == \"tabarena\"",
"omegaconf; extra == \"tabarena\"",
"lightgbm<4.7,>=4.0; extra == \"tabarena\"",
"interpret-core<0.8,>=0.7.2; extra == \"tabarena\"",
"tabdpt<1.2,>=1.1.11; extra == \"tabarena\"",
"tabpfn<6.2.1,>=6.2.0; extra == \"tabarena\"",
"xgboost<3.2,>=2.0; extra == \"tabarena\"",
"interpret-core<0.8,>=0.7.2; extra == \"tests\"",
"tabdpt<1.2,>=1.1.11; extra == \"tests\"",
"tabicl<2.1,>=2.0; extra == \"tests\"",
"tabpfn<6.2.1,>=6.2.0; extra == \"tests\"",
"pytabkit<1.8,>=1.7.2; extra == \"tests\"",
"torch<2.10,>=2.6; extra == \"tests\"",
"huggingface_hub[torch]<1.0; extra == \"tests\"",
"einops<0.9,>=0.7; extra == \"tests\"",
"imodels<2.1.0,>=1.3.10; extra == \"tests\"",
"skl2onnx<1.20.0,>=1.15.0; extra == \"tests\"",
"onnx!=1.16.2,<1.21.0,>=1.13.0; platform_system == \"Windows\" and extra == \"tests\"",
"onnx<1.21.0,>=1.13.0; platform_system != \"Windows\" and extra == \"tests\"",
"onnxruntime<1.24.0,>=1.17.0; extra == \"tests\"",
"onnxruntime-gpu<1.24.0,>=1.17.0; (platform_system != \"Darwin\" and platform_machine != \"aarch64\") and extra == \"tests\""
] | [] | [] | [] | [
"Documentation, https://auto.gluon.ai",
"Bug Reports, https://github.com/autogluon/autogluon/issues",
"Source, https://github.com/autogluon/autogluon/",
"Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T09:09:27.755641 | autogluon_tabular-1.5.1b20260221.tar.gz | 414,824 | 08/60/c8052e5cf5a7c57e1cc6e84a6b824fb201bcb07d030aa4c3eb9d7f7b235c/autogluon_tabular-1.5.1b20260221.tar.gz | source | sdist | null | false | ff2245e74c3ba09c0b4ffaff6f9c104b | 7654c00d90adc7c4dad240e3dbf839b47ef8915aa018e37e4f7f512926771d52 | 0860c8052e5cf5a7c57e1cc6e84a6b824fb201bcb07d030aa4c3eb9d7f7b235c | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | autogluon.features | 1.5.1b20260221 | Fast and Accurate ML in 3 Lines of Code |
<div align="center">
<img src="https://user-images.githubusercontent.com/16392542/77208906-224aa500-6aba-11ea-96bd-e81806074030.png" width="350">
## Fast and Accurate ML in 3 Lines of Code
[](https://github.com/autogluon/autogluon/releases)
[](https://anaconda.org/conda-forge/autogluon)
[](https://pypi.org/project/autogluon/)
[](https://pepy.tech/project/autogluon)
[](./LICENSE)
[](https://discord.gg/wjUmjqAc2N)
[](https://twitter.com/autogluon)
[](https://github.com/autogluon/autogluon/actions/workflows/continuous_integration.yml)
[](https://github.com/autogluon/autogluon/actions/workflows/platform_tests-command.yml)
[Installation](https://auto.gluon.ai/stable/install.html) | [Documentation](https://auto.gluon.ai/stable/index.html) | [Release Notes](https://auto.gluon.ai/stable/whats_new/index.html)
</div>
AutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.
## 💾 Installation
AutoGluon supports Python 3.10–3.13 and is available on Linux, macOS, and Windows.
You can install AutoGluon with:
```bash
pip install autogluon
```
Visit our [Installation Guide](https://auto.gluon.ai/stable/install.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.
## :zap: Quickstart
Build accurate end-to-end ML models in just 3 lines of code!
```python
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
predictions = predictor.predict("test.csv")
```
| AutoGluon Task | Quickstart | API |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
| TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
## :mag: Resources
### Hands-on Tutorials / Talks
Below is a curated list of recent tutorials and talks on AutoGluon. A comprehensive list is available [here](AWESOME.md#videos--tutorials).
| Title | Format | Location | Date |
|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|
| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https://www.youtube.com/watch?v=SwPq9qjaN2Q) | Tutorial | [AutoML 2024](https://2024.automl.cc/) | 2024/09/09 |
| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https://www.youtube.com/watch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https://2023.automl.cc/) | 2023/09/12 |
| :sound: [AutoGluon: The Story](https://automlpodcast.com/episode/autogluon-the-story) | Podcast | [The AutoML Podcast](https://automlpodcast.com/) | 2023/09/05 |
| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https://youtu.be/Lwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin | 2023/06/20 |
| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https://www.youtube.com/watch?v=J1UQUCPB88I) | Tutorial | PyData Seattle | 2023/06/20 |
| :tv: [The AutoML Revolution](https://www.youtube.com/watch?v=VAAITEds-28) | Tutorial | [Fall AutoML School 2022](https://sites.google.com/view/automl-fall-school-2022) | 2022/10/18 |
### Scientific Publications
- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/pdf/2003.06505.pdf) (*arXiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))
- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://proceedings.neurips.cc/paper/2020/hash/62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))
- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
- [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*arXiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
### Articles
- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/) (*AWS Open Source Blog*, Mar 2020)
- [AutoGluon overview & example applications](https://towardsdatascience.com/autogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)
### Train/Deploy AutoGluon in the Cloud
- [AutoGluon Cloud](https://auto.gluon.ai/cloud/stable/index.html) (Recommended)
- [AutoGluon on SageMaker AutoPilot](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/autopilot-autogluon.html)
- [AutoGluon on Amazon SageMaker](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/cloud-aws-sagemaker-train-deploy.html)
- [AutoGluon Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)
- [AutoGluon Official Docker Container](https://hub.docker.com/r/autogluon/autogluon)
- [AutoGluon-Tabular on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism) (Not maintained by us)
## :pencil: Citing AutoGluon
If you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).
## :wave: How to get involved
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md) to get started.
## :classical_building: License
This library is licensed under the Apache 2.0 License.
| text/markdown | AutoGluon Community | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Customer Service",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/autogluon/autogluon | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy<2.4.0,>=1.25.0",
"pandas<2.4.0,>=2.0.0",
"scikit-learn<1.8.0,>=1.4.0",
"autogluon.common==1.5.1b20260221"
] | [] | [] | [] | [
"Documentation, https://auto.gluon.ai",
"Bug Reports, https://github.com/autogluon/autogluon/issues",
"Source, https://github.com/autogluon/autogluon/",
"Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T09:09:23.148929 | autogluon_features-1.5.1b20260221.tar.gz | 87,110 | bd/a6/2aa6289f46c851886547f3adc5feb9f4db585df29e53992ab40c3df27962/autogluon_features-1.5.1b20260221.tar.gz | source | sdist | null | false | ac35e7d0044b0bc804c67af439c0394a | d72473ee7c3cba8cadfcdc99b5693801d38548907c540658442cf66cd2e5ee9e | bda62aa6289f46c851886547f3adc5feb9f4db585df29e53992ab40c3df27962 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | autogluon.core | 1.5.1b20260221 | Fast and Accurate ML in 3 Lines of Code |
<div align="center">
<img src="https://user-images.githubusercontent.com/16392542/77208906-224aa500-6aba-11ea-96bd-e81806074030.png" width="350">
## Fast and Accurate ML in 3 Lines of Code
[](https://github.com/autogluon/autogluon/releases)
[](https://anaconda.org/conda-forge/autogluon)
[](https://pypi.org/project/autogluon/)
[](https://pepy.tech/project/autogluon)
[](./LICENSE)
[](https://discord.gg/wjUmjqAc2N)
[](https://twitter.com/autogluon)
[](https://github.com/autogluon/autogluon/actions/workflows/continuous_integration.yml)
[](https://github.com/autogluon/autogluon/actions/workflows/platform_tests-command.yml)
[Installation](https://auto.gluon.ai/stable/install.html) | [Documentation](https://auto.gluon.ai/stable/index.html) | [Release Notes](https://auto.gluon.ai/stable/whats_new/index.html)
</div>
AutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.
## 💾 Installation
AutoGluon supports Python 3.10–3.13 and is available on Linux, macOS, and Windows.
You can install AutoGluon with:
```bash
pip install autogluon
```
Visit our [Installation Guide](https://auto.gluon.ai/stable/install.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.
## :zap: Quickstart
Build accurate end-to-end ML models in just 3 lines of code!
```python
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
predictions = predictor.predict("test.csv")
```
| AutoGluon Task | Quickstart | API |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
| TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
## :mag: Resources
### Hands-on Tutorials / Talks
Below is a curated list of recent tutorials and talks on AutoGluon. A comprehensive list is available [here](AWESOME.md#videos--tutorials).
| Title | Format | Location | Date |
|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|
| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https://www.youtube.com/watch?v=SwPq9qjaN2Q) | Tutorial | [AutoML 2024](https://2024.automl.cc/) | 2024/09/09 |
| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https://www.youtube.com/watch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https://2023.automl.cc/) | 2023/09/12 |
| :sound: [AutoGluon: The Story](https://automlpodcast.com/episode/autogluon-the-story) | Podcast | [The AutoML Podcast](https://automlpodcast.com/) | 2023/09/05 |
| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https://youtu.be/Lwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin | 2023/06/20 |
| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https://www.youtube.com/watch?v=J1UQUCPB88I) | Tutorial | PyData Seattle | 2023/06/20 |
| :tv: [The AutoML Revolution](https://www.youtube.com/watch?v=VAAITEds-28) | Tutorial | [Fall AutoML School 2022](https://sites.google.com/view/automl-fall-school-2022) | 2022/10/18 |
### Scientific Publications
- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/pdf/2003.06505.pdf) (*arXiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))
- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://proceedings.neurips.cc/paper/2020/hash/62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))
- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
- [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
### Articles
- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/) (*AWS Open Source Blog*, Mar 2020)
- [AutoGluon overview & example applications](https://towardsdatascience.com/autogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)
### Train/Deploy AutoGluon in the Cloud
- [AutoGluon Cloud](https://auto.gluon.ai/cloud/stable/index.html) (Recommended)
- [AutoGluon on SageMaker AutoPilot](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/autopilot-autogluon.html)
- [AutoGluon on Amazon SageMaker](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/cloud-aws-sagemaker-train-deploy.html)
- [AutoGluon Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)
- [AutoGluon Official Docker Container](https://hub.docker.com/r/autogluon/autogluon)
- [AutoGluon-Tabular on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism) (Not maintained by us)
## :pencil: Citing AutoGluon
If you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).
## :wave: How to get involved
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md) to get started.
## :classical_building: License
This library is licensed under the Apache 2.0 License.
| text/markdown | AutoGluon Community | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Customer Service",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/autogluon/autogluon | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy<2.4.0,>=1.25.0",
"scipy<1.17,>=1.5.4",
"scikit-learn<1.8.0,>=1.4.0",
"networkx<4,>=3.0",
"pandas<2.4.0,>=2.0.0",
"tqdm<5,>=4.38",
"requests",
"matplotlib<3.11,>=3.7.0",
"boto3<2,>=1.10",
"autogluon.common==1.5.1b20260221",
"autogluon.features==1.5.1b20260221",
"ray[default]<2.54,>=2.43.0; (platform_system != \"Windows\" or python_version != \"3.13\") and extra == \"ray\"",
"pyarrow>=15.0.0; extra == \"raytune\"",
"ray[default,tune]<2.54,>=2.43.0; (platform_system != \"Windows\" or python_version != \"3.13\") and extra == \"raytune\"",
"hyperopt<0.2.8,>=0.2.7; extra == \"raytune\"",
"stevedore<5.5; extra == \"raytune\"",
"setuptools<82; extra == \"raytune\"",
"pre-commit; extra == \"tests\"",
"types-setuptools; extra == \"tests\"",
"flake8; extra == \"tests\"",
"pytest-mypy; extra == \"tests\"",
"pytest; extra == \"tests\"",
"types-requests; extra == \"tests\"",
"pyarrow>=15.0.0; extra == \"all\"",
"stevedore<5.5; extra == \"all\"",
"hyperopt<0.2.8,>=0.2.7; extra == \"all\"",
"setuptools<82; extra == \"all\"",
"ray[default]<2.54,>=2.43.0; (platform_system != \"Windows\" or python_version != \"3.13\") and extra == \"all\"",
"ray[default,tune]<2.54,>=2.43.0; (platform_system != \"Windows\" or python_version != \"3.13\") and extra == \"all\""
] | [] | [] | [] | [
"Documentation, https://auto.gluon.ai",
"Bug Reports, https://github.com/autogluon/autogluon/issues",
"Source, https://github.com/autogluon/autogluon/",
"Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T09:09:19.434008 | autogluon_core-1.5.1b20260221.tar.gz | 207,438 | 8c/70/7ce5a7fc204f161d49fb1d1c4574e5d700f3b961e741cbaa29cb6a4e11b9/autogluon_core-1.5.1b20260221.tar.gz | source | sdist | null | false | 71cde6aa62da0ecfabeba295f9738658 | 191276b15ddc9eaad3139e200715c5021d090eff74fb9ddea651f375e727e512 | 8c707ce5a7fc204f161d49fb1d1c4574e5d700f3b961e741cbaa29cb6a4e11b9 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | autogluon.common | 1.5.1b20260221 | Fast and Accurate ML in 3 Lines of Code |
<div align="center">
<img src="https://user-images.githubusercontent.com/16392542/77208906-224aa500-6aba-11ea-96bd-e81806074030.png" width="350">
## Fast and Accurate ML in 3 Lines of Code
[](https://github.com/autogluon/autogluon/releases)
[](https://anaconda.org/conda-forge/autogluon)
[](https://pypi.org/project/autogluon/)
[](https://pepy.tech/project/autogluon)
[](./LICENSE)
[](https://discord.gg/wjUmjqAc2N)
[](https://twitter.com/autogluon)
[](https://github.com/autogluon/autogluon/actions/workflows/continuous_integration.yml)
[](https://github.com/autogluon/autogluon/actions/workflows/platform_tests-command.yml)
[Installation](https://auto.gluon.ai/stable/install.html) | [Documentation](https://auto.gluon.ai/stable/index.html) | [Release Notes](https://auto.gluon.ai/stable/whats_new/index.html)
</div>
AutoGluon, developed by AWS AI, automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.
## 💾 Installation
AutoGluon is supported on Python 3.10 - 3.13 and is available on Linux, macOS, and Windows.
You can install AutoGluon with:
```bash
pip install autogluon
```
Visit our [Installation Guide](https://auto.gluon.ai/stable/install.html) for detailed instructions, including GPU support, Conda installs, and optional dependencies.
## :zap: Quickstart
Build accurate end-to-end ML models in just 3 lines of code!
```python
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="class").fit("train.csv", presets="best")
predictions = predictor.predict("test.csv")
```
| AutoGluon Task | Quickstart | API |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| TabularPredictor | [](https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html) |
| TimeSeriesPredictor | [](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.html) |
| MultiModalPredictor | [](https://auto.gluon.ai/stable/tutorials/multimodal/multimodal_prediction/multimodal-quick-start.html) | [](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.html) |
## :mag: Resources
### Hands-on Tutorials / Talks
Below is a curated list of recent tutorials and talks on AutoGluon. A comprehensive list is available [here](AWESOME.md#videos--tutorials).
| Title | Format | Location | Date |
|--------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|------------|
| :tv: [AutoGluon: Towards No-Code Automated Machine Learning](https://www.youtube.com/watch?v=SwPq9qjaN2Q) | Tutorial | [AutoML 2024](https://2024.automl.cc/) | 2024/09/09 |
| :tv: [AutoGluon 1.0: Shattering the AutoML Ceiling with Zero Lines of Code](https://www.youtube.com/watch?v=5tvp_Ihgnuk) | Tutorial | [AutoML 2023](https://2023.automl.cc/) | 2023/09/12 |
| :sound: [AutoGluon: The Story](https://automlpodcast.com/episode/autogluon-the-story) | Podcast | [The AutoML Podcast](https://automlpodcast.com/) | 2023/09/05 |
| :tv: [AutoGluon: AutoML for Tabular, Multimodal, and Time Series Data](https://youtu.be/Lwu15m5mmbs?si=jSaFJDqkTU27C0fa) | Tutorial | PyData Berlin | 2023/06/20 |
| :tv: [Solving Complex ML Problems in a few Lines of Code with AutoGluon](https://www.youtube.com/watch?v=J1UQUCPB88I) | Tutorial | PyData Seattle | 2023/06/20 |
| :tv: [The AutoML Revolution](https://www.youtube.com/watch?v=VAAITEds-28) | Tutorial | [Fall AutoML School 2022](https://sites.google.com/view/automl-fall-school-2022) | 2022/10/18 |
### Scientific Publications
- [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/pdf/2003.06505.pdf) (*Arxiv*, 2020) ([BibTeX](CITING.md#general-usage--autogluontabular))
- [Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://proceedings.neurips.cc/paper/2020/hash/62d75fb2e3075506e8837d8f55021ab1-Abstract.html) (*NeurIPS*, 2020) ([BibTeX](CITING.md#tabular-distillation))
- [Benchmarking Multimodal AutoML for Tabular Data with Text Fields](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/9bf31c7ff062936a96d3c8bd1f8f2ff3-Paper-round2.pdf) (*NeurIPS*, 2021) ([BibTeX](CITING.md#autogluonmultimodal))
- [XTab: Cross-table Pretraining for Tabular Transformers](https://proceedings.mlr.press/v202/zhu23k/zhu23k.pdf) (*ICML*, 2023)
- [AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting](https://arxiv.org/abs/2308.05566) (*AutoML Conf*, 2023) ([BibTeX](CITING.md#autogluontimeseries))
- [TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications](https://arxiv.org/pdf/2311.02971.pdf) (*AutoML Conf*, 2024)
- [AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models](https://arxiv.org/pdf/2404.16233) (*AutoML Conf*, 2024) ([BibTeX](CITING.md#autogluonmultimodal))
- [Multi-layer Stack Ensembles for Time Series Forecasting](https://arxiv.org/abs/2511.15350) (*AutoML Conf*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
- [Chronos-2: From Univariate to Universal Forecasting](https://arxiv.org/abs/2510.15821) (*Arxiv*, 2025) ([BibTeX](CITING.md#autogluontimeseries))
### Articles
- [AutoGluon-TimeSeries: Every Time Series Forecasting Model In One Library](https://towardsdatascience.com/autogluon-timeseries-every-time-series-forecasting-model-in-one-library-29a3bf6879db) (*Towards Data Science*, Jan 2024)
- [AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions](https://aws.amazon.com/blogs/opensource/machine-learning-with-autogluon-an-open-source-automl-library/) (*AWS Open Source Blog*, Mar 2020)
- [AutoGluon overview & example applications](https://towardsdatascience.com/autogluon-deep-learning-automl-5cdb4e2388ec?source=friends_link&sk=e3d17d06880ac714e47f07f39178fdf2) (*Towards Data Science*, Dec 2019)
### Train/Deploy AutoGluon in the Cloud
- [AutoGluon Cloud](https://auto.gluon.ai/cloud/stable/index.html) (Recommended)
- [AutoGluon on SageMaker AutoPilot](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/autopilot-autogluon.html)
- [AutoGluon on Amazon SageMaker](https://auto.gluon.ai/stable/tutorials/cloud_fit_deploy/cloud-aws-sagemaker-train-deploy.html)
- [AutoGluon Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#autogluon-training-containers) (Security certified & maintained by the AutoGluon developers)
- [AutoGluon Official Docker Container](https://hub.docker.com/r/autogluon/autogluon)
- [AutoGluon-Tabular on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-n4zf5pmjt7ism) (Not maintained by us)
## :pencil: Citing AutoGluon
If you use AutoGluon in a scientific publication, please refer to our [citation guide](CITING.md).
## :wave: How to get involved
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the [Contributing Guide](https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md) to get started.
## :classical_building: License
This library is licensed under the Apache 2.0 License.
| text/markdown | AutoGluon Community | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Customer Service",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | https://github.com/autogluon/autogluon | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy<2.4.0,>=1.25.0",
"pandas<2.4.0,>=2.0.0",
"pyarrow<21.0.0,>=7.0.0",
"boto3<2,>=1.10",
"psutil<7.2.0,>=5.7.3",
"tqdm<5,>=4.38",
"requests",
"joblib<1.7,>=1.2",
"pyyaml>=5.0",
"pytest; extra == \"tests\"",
"pytest-mypy; extra == \"tests\"",
"types-setuptools; extra == \"tests\"",
"types-requests; extra == \"tests\""
] | [] | [] | [] | [
"Documentation, https://auto.gluon.ai",
"Bug Reports, https://github.com/autogluon/autogluon/issues",
"Source, https://github.com/autogluon/autogluon/",
"Contribute!, https://github.com/autogluon/autogluon/blob/master/CONTRIBUTING.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T09:09:15.489810 | autogluon_common-1.5.1b20260221.tar.gz | 68,271 | 4a/4f/5da1826aa3b29581e25489293134bdcaa88fb32249ae8746b8ab2eec3eef/autogluon_common-1.5.1b20260221.tar.gz | source | sdist | null | false | a054ff2d50c80dbe132b9406c2cae775 | eaac70ac084327e6937142bb9269b106e3706669f79b2f394a75bc3caff10b58 | 4a4f5da1826aa3b29581e25489293134bdcaa88fb32249ae8746b8ab2eec3eef | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | kiss-agent-framework | 0.1.22 | KISS Agent Framework - A simple and portable agent framework for building and evolving AI agents | 
**Version:** 0.1.22
# When Simplicity Becomes Your Superpower: Meet KISS Multi Agent Multi Optimization Framework
*"Everything should be made as simple as possible, but not simpler." — Albert Einstein*
______________________________________________________________________
KISS stands for ["Keep it Simple, Stupid"](https://en.wikipedia.org/wiki/KISS_principle), a well-known software engineering principle.
## Installation
Install from PyPI with pip:
```bash
pip install kiss-agent-framework
python -m kiss.agents.assistant.assistant
```
## 🎯 The Problem with AI Agent Frameworks Today
Let's be honest. The AI agent ecosystem has become a jungle. Every week brings a new framework promising to revolutionize how we build AI agents. They come loaded with abstractions on top of abstractions, bloated with techniques that are unnecessary. By the time you've figured out how to make your first tool call, you've already burned through half your patience and all your enthusiasm.
**What if there was another way?**
What if building AI agents could be as straightforward as the name suggests?
Enter **KISS** — the *Keep It Simple, Stupid* Agent Framework.
## 🚀 Your First Agent in 30 Seconds
Let me show you something beautiful:
```python
from kiss.core.kiss_agent import KISSAgent

def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # demo only: eval is unsafe on untrusted input

agent = KISSAgent(name="Math Buddy")
result = agent.run(
    model_name="gemini-2.5-flash",
    prompt_template="Calculate: {question}",
    arguments={"question": "What is 15% of 847?"},
    tools=[calculate]
)
print(result)  # 127.05
```
That's a fully functional AI agent that uses tools. No annotations. No boilerplate. No ceremony. Just intent, directly expressed.
Well, you might ask, "**Why not use LangChain, DSPy, OpenHands, MiniSweAgent, CrewAI, Google ADK, Claude Agent SDK, or some other well-established agent framework?**" Here is my response:
- **KISS comes with [Repo Optimizer](src/kiss/agents/coding_agents/repo_optimizer.py) and [Agent Optimizer](src/kiss/agents/coding_agents/agent_optimizer.py) which enables you to optimize a repository of code (and AI agents) for your metric of choice (e.g., cost and running time or test coverage or code quality/readability).**
- **It has the GEPA prompt optimizer builtin with a simple API.**
- **It has a [RelentlessCodingAgent](src/kiss/agents/coding_agents/relentless_coding_agent.py), which is straightforward in implementation yet can handle very long tasks. It was self-evolved over time and is still evolving.**
- **No bloat and simple codebase.**
- **Optimization strategies can be written in plain English.**
- **New techniques will be incorporated to the framework as I research them.**
- **The project effectively applies various programming language and software engineering principles and concepts that I learned since 1995.**
## 🤝 Multi-Agent Orchestration is Function Composition
Here's where KISS really shines — composing multiple agents into systems greater than the sum of their parts.
Since agents are just functions, you orchestrate them with plain Python. Here's a complete **research-to-article pipeline** with three agents:
```python
from kiss.core.kiss_agent import KISSAgent

# Agent 1: Research a topic
researcher = KISSAgent(name="Researcher")
research = researcher.run(
    model_name="gpt-4o",
    prompt_template="List 3 key facts about {topic}. Be concise.",
    arguments={"topic": "Python asyncio"},
    is_agentic=False  # Simple generation, no tools
)

# Agent 2: Write a draft using the research
writer = KISSAgent(name="Writer")
draft = writer.run(
    model_name="claude-sonnet-4-5",
    prompt_template="Write a 2-paragraph intro based on:\n{research}",
    arguments={"research": research},
    is_agentic=False
)

# Agent 3: Polish the draft
editor = KISSAgent(name="Editor")
final = editor.run(
    model_name="gemini-2.5-flash",
    prompt_template="Improve clarity and fix any errors:\n{draft}",
    arguments={"draft": draft},
    is_agentic=False
)

print(final)
```
**That's it.** Each agent can use a different model. Each agent saves its own trajectory. And you compose them with the most powerful orchestration tool ever invented: **regular Python code**.
No special orchestration framework needed. No message buses. No complex state machines. Just Python functions calling Python functions.
## 💪 Using Relentless Coding Agent
The **flagship** coding agent of KISS is the [relentless coding agent](src/kiss/agents/coding_agents/relentless_coding_agent.py). For very long-running coding tasks, use the `RelentlessCodingAgent`. The agent will work relentlessly to complete your task using a single-agent architecture with smart continuation. It can run for hours to days to complete a task:
```python
from kiss.agents.coding_agents.relentless_coding_agent import RelentlessCodingAgent

agent = RelentlessCodingAgent(name="Simple Coding Agent")
result = agent.run(
    prompt_template="""
    Create a Python script that reads a CSV file,
    filters rows where age > 18, and writes to a new file.
    """,
    model_name="claude-sonnet-4-5",
    work_dir="./workspace",
    max_steps=200,
    max_sub_sessions=200
)
print(f"Result: {result}")
```
**Running with Docker:**
You can optionally run bash commands inside a Docker container for isolation:
```python
from kiss.agents.coding_agents.relentless_coding_agent import RelentlessCodingAgent

agent = RelentlessCodingAgent(name="Dockered Relentless Coding Agent")
result = agent.run(
    prompt_template="""
    Install numpy and create a script that generates
    a random matrix and computes its determinant.
    """,
    docker_image="python:3.11-slim",  # Bash commands run in Docker
    max_steps=200,
    max_sub_sessions=2000
)
print(f"Result: {result}")
```
**Key Features:**
- **Single-Agent with Auto-Continuation**: A single agent executes the task across multiple sub-sessions, automatically continuing where it left off via structured JSON progress tracking
- **Structured Progress Tracking**: Each sub-session reports completed and remaining tasks in JSON format (done/next items), which is deduplicated and passed to subsequent sub-sessions
- **Retry with Context**: Failed sub-sessions automatically pass structured progress summaries to the next sub-session
- **Configurable Sub-Sessions**: Set high sub-session counts (e.g., 200+) for truly relentless execution
- **Docker Support**: Optional isolated execution via Docker containers
- **Path Access Control**: Enforces read/write permissions on file system paths
- **Built-in Tools**: Bash, Read, Edit, Write, search_web, and fetch_url tools for file and web operations
- **Budget & Token Tracking**: Automatic cost and token usage monitoring across all sub-sessions
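The done/next bookkeeping described above can be pictured with a small sketch. This is illustrative only: the framework's actual progress format and merge logic may differ, and `merge_progress` is a hypothetical helper, not part of the KISS API.

```python
# Illustrative sketch (not the framework's internals): merging "done"/"next"
# progress reports across sub-sessions with order-preserving deduplication.
def merge_progress(reports: list[dict]) -> dict:
    """Combine per-sub-session progress reports, dropping duplicates."""
    done: list[str] = []
    next_items: list[str] = []
    for report in reports:
        for item in report.get("done", []):
            if item not in done:
                done.append(item)
        for item in report.get("next", []):
            if item not in next_items:
                next_items.append(item)
    # Anything already completed is no longer pending.
    next_items = [item for item in next_items if item not in done]
    return {"done": done, "next": next_items}

reports = [
    {"done": ["read CSV"], "next": ["filter rows", "write output"]},
    {"done": ["filter rows"], "next": ["write output"]},
]
print(merge_progress(reports))
# {'done': ['read CSV', 'filter rows'], 'next': ['write output']}
```

The merged summary is what a subsequent sub-session would receive as its starting context.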
## 💬 Browser-Based Assistant
KISS includes a browser-based assistant UI for interacting with agents. It provides a rich web interface with real-time streaming output and task history with autocomplete.
```bash
# Launch the assistant (opens browser automatically)
uv run assistant
# Or with a custom working directory
uv run assistant --work-dir ./my-project
```
The assistant features:
- **Real-time streaming**: See agent thinking, tool calls, and results as they happen
- **Task history**: Previously submitted tasks are saved and available via autocomplete
- **Modern UI**: Dark theme with collapsible sections for tool calls and thinking
## 🔧 Using Repo Optimizer
**This is one of the most important and useful features of KISS.** The `RepoOptimizer` (`repo_optimizer.py`) uses the `RelentlessCodingAgent` to optimize code within your own project repository. It runs a specified command, monitors output in real time, fixes errors, and iteratively optimizes for specified metrics, all without changing the agent's interface. The code can be found [here](src/kiss/agents/coding_agents/repo_optimizer.py).
```bash
# Optimize a program for speed and cost
uv run python -m kiss.agents.coding_agents.repo_optimizer \
--command "uv run python src/kiss/agents/coding_agents/relentless_coding_agent.py" \
--metrics "running time and cost" \
--work-dir .
```
**CLI Options:**
| Flag | Default | Description |
|------|---------|-------------|
| `--command` | (required) | Command to run and monitor |
| `--metrics` | (required) | Metrics to minimize (e.g., "running time and cost") |
| `--work-dir` | `.` | Working directory for the agent |
**How It Works:**
1. Runs the specified command and monitors output in real time
1. If repeated errors are observed, fixes the code and reruns
1. Once the command succeeds, analyzes output and optimizes the source to minimize the specified metrics
1. Repeats until the metrics are reduced significantly
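The steps above can be sketched as a run-monitor-improve loop. This is a rough illustration only: the real optimizer delegates the fixing and optimizing steps to the `RelentlessCodingAgent`, whereas the `improve` callback here is a hypothetical stand-in.

```python
# Minimal sketch of the run-monitor-fix loop (not the actual RepoOptimizer).
import subprocess
import sys

def optimize_loop(command, improve, max_rounds=3):
    """Run `command`; on failure ask `improve` to fix, on success to optimize."""
    for _ in range(max_rounds):
        proc = subprocess.run(command, capture_output=True, text=True)
        if proc.returncode != 0:
            improve(error=proc.stderr)   # step 2: fix the code, then rerun
            continue
        improve(output=proc.stdout)      # step 3: analyze output, optimize metrics
        return proc.stdout
    return ""

# Toy usage: the monitored "program" is a one-liner; `improve` just records calls.
calls = []
out = optimize_loop(
    [sys.executable, "-c", "print('elapsed: 1.2s')"],
    improve=lambda **kw: calls.append(kw),
    max_rounds=1,
)
print(out.strip())  # elapsed: 1.2s
```

In the real workflow, `improve` would be an agent run that edits the repository before the next round.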
📖 **For the full story of how the repo optimizer self-optimized the RelentlessCodingAgent, see [BLOG.md](src/kiss/agents/coding_agents/BLOG.md)**
## 🎨 Output Formatting
Unlike other agentic systems, you do not need to specify an output schema for the agent. Just create a suitable "finish" function with parameters; the parameters act as the top-level keys of the resulting JSON output.
**Example: Custom Structured Output**
```python
from kiss.core.kiss_agent import KISSAgent

# Define a custom finish function with your desired output structure
def finish(
    sentiment: str,
    confidence: float,
    key_phrases: str,
    summary: str
) -> str:
    """
    Complete the analysis with structured results.

    Args:
        sentiment: The overall sentiment ('positive', 'negative', or 'neutral')
        confidence: Confidence score between 0.0 and 1.0
        key_phrases: Comma-separated list of key phrases found in the text
        summary: A brief summary of the analysis

    Returns:
        The formatted analysis result
    """
    ...
```
The agent will automatically use your custom `finish` function instead of the default one which returns its argument. The function's parameters define what information the agent must provide, and the docstring helps the LLM understand how to format each field.
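To make the idea concrete, here is a minimal sketch of how a `finish` signature can be read as an output schema. This is illustrative only; KISS's actual schema extraction may differ, and `schema_from_finish` is a hypothetical helper.

```python
# Sketch: a finish function's parameters double as top-level output keys.
import inspect

def finish(sentiment: str, confidence: float, key_phrases: str, summary: str) -> str:
    """Complete the analysis with structured results."""
    ...

def schema_from_finish(fn) -> dict:
    """Map each parameter name to the name of its annotated type."""
    sig = inspect.signature(fn)
    return {name: p.annotation.__name__ for name, p in sig.parameters.items()}

print(schema_from_finish(finish))
# {'sentiment': 'str', 'confidence': 'float', 'key_phrases': 'str', 'summary': 'str'}
```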
## 📊 Trajectory Saving and Visualization
Agent trajectories are automatically saved to the artifacts directory (default: `artifacts/`). Each trajectory includes:
- Complete message history with token usage and budget information appended to each message
- Tool calls and results
- Configuration used
- Timestamps
- Budget and token usage statistics
### Visualizing Trajectories
The framework includes a web-based trajectory visualizer for viewing agent execution histories:
```bash
# Run the visualizer server
uv run python -m kiss.viz_trajectory.server artifacts
# Or with custom host/port
uv run python -m kiss.viz_trajectory.server artifacts --host 127.0.0.1 --port 5050
```
Then open your browser to `http://127.0.0.1:5050` to view the trajectories.
The visualizer provides:
- **Modern UI**: Dark theme with smooth animations
- **Sidebar Navigation**: List of all trajectories sorted by start time
- **Markdown Rendering**: Full markdown support for message content
- **Code Highlighting**: Syntax highlighting for fenced code blocks
- **Message Display**: Clean, organized view of agent conversations
- **Metadata Display**: Shows agent ID, model, steps, tokens, and budget information

📖 **For detailed trajectory visualizer documentation, see [Trajectory Visualizer README](src/kiss/viz_trajectory/README.md)**
## 📖 Features of The KISS Framework
KISS is a lightweight yet powerful multi-agent framework that implements a ReAct (Reasoning and Acting) loop for LLM agents. The framework provides:
- **Simple Architecture**: Clean, minimal core that's easy to understand and extend
- **Multi-Tool Execution**: Agents can execute multiple tool calls in a single step for faster task completion
- **Relentless Coding Agent**: Single-agent coding system with smart auto-continuation for long-running tasks
- **Browser-Based Assistant**: Interactive web UI for agents with real-time streaming and task history
- **Repo Optimizer**: Uses RelentlessCodingAgent to iteratively optimize code in your project for speed and cost (💡 new idea)
- **IMO Agent**: Verification-and-refinement pipeline for solving competition math problems (based on [arXiv:2507.15855](https://arxiv.org/abs/2507.15855))
- **GEPA Implementation From Scratch**: Genetic-Pareto prompt optimization for compound AI systems
- **KISSEvolve Implementation From Scratch**: Evolutionary algorithm discovery framework with LLM-guided mutation and crossover
- **Model Agnostic**: Support for multiple LLM providers (OpenAI, Anthropic, Gemini, Together AI, OpenRouter)
- **Native Function Calling**: Seamless tool integration using native function calling APIs (OpenAI, Anthropic, Gemini, Together AI, and OpenRouter)
- **Docker Integration**: Built-in Docker manager for running agents in isolated environments
- **Trajectory Tracking**: Automatic saving of agent execution trajectories with unified state management
- **Structured Result Display**: Console and browser printers parse YAML result content to show success/failure status with markdown rendering
- **Token Streaming**: Real-time token streaming via async callback for all providers (OpenAI, Anthropic, Gemini, Together AI, OpenRouter), including thinking/reasoning tokens and tool execution output
- **Token Usage Tracking**: Built-in token usage tracking with automatic context length detection and step counting
- **Budget Tracking**: Automatic cost tracking and budget monitoring across all agent runs
- **Self-Evolution**: Framework for agents to evolve and refine other multi-agent systems
- **SWE-bench Dataset Support**: Built-in support for downloading and working with SWE-bench Verified dataset
- **RAG Support**: Simple retrieval-augmented generation system with in-memory vector store
- **Useful Agents**: Pre-built utility agents including prompt refinement and general bash execution agents
- **Multiprocessing Support**: Utilities for parallel execution of functions using multiprocessing
- **Trajectory Visualization**: Web-based visualizer for viewing agent execution trajectories with modern UI
## 📦 Installation
```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone/download KISS and navigate to the directory
cd kiss
# Create virtual environment
uv venv --python 3.13
# Install all dependencies (full installation)
uv sync
# (Optional) activate the venv for convenience (uv run works without activation)
source .venv/bin/activate
# Set up API keys (optional, for LLM providers)
export GEMINI_API_KEY="your-key-here"
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export TOGETHER_API_KEY="your-key-here"
export OPENROUTER_API_KEY="your-key-here"
```
### Selective Installation (Dependency Groups)
KISS supports selective installation via dependency groups for minimal footprints:
```bash
# Minimal core only (no model SDKs) - for custom integrations
uv sync --group core
# Core + specific provider support
uv sync --group claude # Core + Anthropic Claude
uv sync --group openai # Core + OpenAI Compatible Models
uv sync --group gemini # Core + Google Gemini
# Docker support (for running agents in isolated containers)
uv sync --group docker
# Evals dependencies (for running benchmarks)
uv sync --group evals
# Development tools (mypy, ruff, pytest, jupyter, etc.)
uv sync --group dev
# Combine multiple groups as needed
uv sync --group claude --group dev
```
**Dependency Group Contents:**
| Group | Description | Key Packages |
|-------|-------------|--------------|
| `core` | Minimal core module | pydantic, rich, requests, beautifulsoup4, playwright, flask |
| `claude` | Core + Anthropic | core + anthropic |
| `openai` | Core + OpenAI | core + openai |
| `gemini` | Core + Google | core + google-genai |
| `docker` | Docker integration | docker, types-docker |
| `evals` | Benchmark running | datasets, swebench, orjson, scipy, scikit-learn |
| `dev` | Development tools | mypy, ruff, pyright, pytest, jupyter, notebook |
> **Optional Dependencies:** All LLM provider SDKs (`openai`, `anthropic`, `google-genai`) are optional. You can import `kiss.core` and `kiss.agents` without installing all of them. When you try to use a model whose SDK is not installed, KISS raises a clear `KISSError` telling you which package to install.
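This behavior follows the common lazy-import pattern, sketched below with a stand-in definition of `KISSError` and a hypothetical `load_sdk` helper; it is not the framework's actual code.

```python
# Illustrative lazy-import pattern: import a provider SDK only when needed,
# and fail with an actionable message naming the missing package.
import importlib

class KISSError(Exception):
    """Stand-in for the framework's error type."""

def load_sdk(module_name: str, group: str):
    """Import a provider SDK lazily, raising a clear error if it is missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise KISSError(
            f"The '{module_name}' package is required for this model. "
            f"Install it with: uv sync --group {group}"
        ) from e

# stdlib modules are always importable, so this succeeds:
load_sdk("json", group="core")
```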
## 📚 KISSAgent API Reference
📖 **For detailed KISSAgent API documentation, see [API.md](API.md)**
## 🎯 Using GEPA for Prompt Optimization
KISS has a fresh implementation of GEPA with some key improvements. GEPA (Genetic-Pareto) is a prompt optimization framework that uses natural language reflection to evolve prompts. It maintains an instance-level Pareto frontier of top-performing prompts and combines complementary lessons through structural merge. It also supports optional batched evaluation via `batched_agent_wrapper`, so you can plug in prompt-merging inference pipelines to process more datapoints per API call. GEPA is based on the paper ["GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning"](https://arxiv.org/pdf/2507.19457).
📖 **For detailed GEPA documentation, see [GEPA README](src/kiss/agents/gepa/README.md)**
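As a standalone illustration of the instance-level Pareto frontier idea (hypothetical code, not GEPA's implementation): a candidate prompt survives if no other candidate scores at least as well on every evaluation instance and strictly better on at least one.

```python
def pareto_frontier(scores: dict[str, list[float]]) -> set[str]:
    """Return the non-dominated candidates given per-instance scores.

    `scores` maps a candidate name to its score on each evaluation instance.
    Candidate a dominates b if a >= b on every instance and a > b on at least one.
    """

    def dominates(a: list[float], b: list[float]) -> bool:
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    return {
        name
        for name, s in scores.items()
        if not any(dominates(other, s) for o, other in scores.items() if o != name)
    }
```

Keeping the whole frontier rather than a single best prompt is what lets GEPA-style optimizers later merge candidates that are strong on complementary subsets of instances.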
## 🧪 Using KISSEvolve for Algorithm Discovery
KISSEvolve is where I started building an optimizer for agents. I then switched to the [`agent evolver`](src/kiss/agents/create_and_optimize_agent/agent_evolver.py) because KISSEvolve was expensive to run, and finally to the [`repo optimizer`](src/kiss/agents/coding_agents/repo_optimizer.py) for efficiency and simplicity, but KISSEvolve remains available. KISSEvolve is an evolutionary algorithm discovery framework that uses LLM-guided mutation and crossover to evolve code variants. It supports advanced features including island-based evolution, novelty rejection sampling, and multiple parent sampling methods.
📖 **For usage examples, API reference, and configuration options, see the [KISSEvolve README](src/kiss/agents/kiss_evolve/README.md)**
## ⚡ Multiprocessing
KISS provides utilities for parallel execution of Python functions using multiprocessing. This is useful for running multiple independent tasks concurrently to maximize CPU utilization.
### Basic Usage
```python
from kiss.multiprocessing import run_functions_in_parallel

def add(a, b):
    return a + b

def multiply(x, y):
    return x * y

# Define tasks as (function, arguments) tuples
tasks = [(add, [1, 2]), (multiply, [3, 4])]
results = run_functions_in_parallel(tasks)
print(results)  # [3, 12]
```
### With Keyword Arguments
```python
from kiss.multiprocessing import run_functions_in_parallel_with_kwargs

def greet(name, title="Mr."):
    return f"Hello, {title} {name}!"

functions = [greet, greet]
args_list = [["Alice"], ["Bob"]]
kwargs_list = [{"title": "Dr."}, {}]
results = run_functions_in_parallel_with_kwargs(functions, args_list, kwargs_list)
print(results)  # ["Hello, Dr. Alice!", "Hello, Mr. Bob!"]
```
### 💻 Checking Available Cores
```python
from kiss.multiprocessing import get_available_cores
num_cores = get_available_cores()
print(f"Available CPU cores: {num_cores}")
```
The multiprocessing utilities automatically scale to the number of available CPU cores, using at most as many workers as there are tasks to avoid unnecessary overhead.
## 🐳 Docker Manager
KISS provides a `DockerManager` class for managing Docker containers and executing commands inside them. This is useful for running code in isolated environments, testing with specific dependencies, or working with SWE-bench tasks.
### Basic Usage
```python
from kiss.docker import DockerManager

# Create a Docker manager for an Ubuntu container
with DockerManager(image_name="ubuntu", tag="22.04", workdir="/app") as docker:
    # Run commands inside the container
    output = docker.run_bash_command("echo 'Hello from Docker!'", "Print greeting")
    print(output)

    output = docker.run_bash_command("python3 --version", "Check Python version")
    print(output)
```
### Manual Lifecycle Management
```python
from kiss.docker import DockerManager

docker = DockerManager(image_name="python", tag="3.11", workdir="/workspace")
docker.open()  # Pull image and start container
try:
    output = docker.run_bash_command("pip install numpy", "Install numpy")
    output = docker.run_bash_command("python -c 'import numpy; print(numpy.__version__)'", "Check numpy")
    print(output)
finally:
    docker.close()  # Stop and remove container
```
### Port Mapping
```python
from kiss.docker import DockerManager

# Map container port 80 to host port 8080
with DockerManager(image_name="nginx", ports={80: 8080}) as docker:
    # Start a web server
    docker.run_bash_command("nginx", "Start nginx")

    # Get the actual host port (useful when Docker assigns a random port)
    host_port = docker.get_host_port(80)
    print(f"Server available at http://localhost:{host_port}")
```
### Configuration Options
- `image_name`: Docker image name (e.g., 'ubuntu', 'python:3.11')
- `tag`: Image tag/version (default: 'latest')
- `workdir`: Working directory inside the container (default: '/')
- `mount_shared_volume`: Whether to mount a shared volume for file transfer (default: True)
- `ports`: Port mapping from container to host (e.g., `{8080: 8080}`)
The Docker manager automatically handles image pulling, container lifecycle, and cleanup of temporary directories.
## 📁 Project Structure
```
kiss/
├── src/kiss/
│ ├── agents/ # Agent implementations
│ │ ├── assistant/ # Assistant agent with coding + browser tools
│ │ │ ├── assistant_agent.py # AssistantAgent with coding and browser automation
│ │ │ ├── assistant.py # Browser-based assistant UI
│ │ │ ├── relentless_agent.py # RelentlessAgent base class
│ │ │ └── config.py # Assistant agent configuration
│ │ ├── create_and_optimize_agent/ # Agent evolution and improvement
│ │ │ ├── agent_evolver.py # Evolutionary agent optimization
│ │ │ ├── improver_agent.py # Agent improvement through generations
│ │ │ ├── config.py # Agent creator configuration
│ │ │ ├── BLOG.md # Blog post about agent evolution
│ │ │ └── README.md # Agent creator documentation
│ │ ├── gepa/ # GEPA (Genetic-Pareto) prompt optimizer
│ │ │ ├── gepa.py
│ │ │ ├── config.py # GEPA configuration
│ │ │ └── README.md # GEPA documentation
│ │ ├── imo_agent/ # IMO mathematical problem-solving agent
│ │ │ ├── __init__.py
│ │ │ ├── imo_agent.py # Verification-and-refinement pipeline (arXiv:2507.15855)
│ │ │ ├── imo_problems.py # IMO 2025 problem statements, validation criteria, and difficulty
│ │ │ ├── imo_agent_creator.py # Repo agent that created the IMO agent
│ │ │ └── config.py # IMO agent configuration
│ │ ├── kiss_evolve/ # KISSEvolve evolutionary algorithm discovery
│ │ │ ├── kiss_evolve.py
│ │ │ ├── novelty_prompts.py # Prompts for novelty-based evolution
│ │ │ ├── config.py # KISSEvolve configuration
│ │ │ └── README.md # KISSEvolve documentation
│ │ ├── coding_agents/ # Coding agents for software development tasks
│ │ │ ├── relentless_coding_agent.py # Single-agent system with smart auto-continuation
│ │ │ ├── claude_coding_agent.py # Claude-based coding agent
│ │ │ ├── repo_optimizer.py # Iterative code optimizer using RelentlessCodingAgent
│ │ │ ├── repo_agent.py # Repo-level task agent using RelentlessCodingAgent
│ │ │ ├── agent_optimizer.py # Meta-optimizer that optimizes agent source code
│ │ │ ├── config.py # Coding agent configuration (RelentlessCodingAgent)
│ │ │ └── BLOG.md # Blog post about self-optimization
│ │ ├── self_evolving_multi_agent/ # Self-evolving multi-agent system
│ │ │ ├── agent_evolver.py # Agent evolution logic
│ │ │ ├── multi_agent.py # Multi-agent orchestration
│ │ │ ├── config.py # Configuration
│ │ │ └── README.md # Documentation
│ │ └── kiss.py # Utility agents (prompt refiner, bash agent)
│ ├── core/ # Core framework components
│ │ ├── base.py # Base class with common functionality for all KISS agents
│ │ ├── kiss_agent.py # KISS agent with native function calling (supports multi-tool execution)
│ │ ├── printer.py # Abstract Printer base class and MultiPrinter
│ │ ├── print_to_console.py # ConsolePrinter: Rich-formatted terminal output
│ │ ├── print_to_browser.py # BrowserPrinter: SSE streaming to browser UI
│ │ ├── browser_ui.py # Browser UI base components and utilities
│ │ ├── config.py # Configuration
│ │ ├── config_builder.py # Dynamic config builder with CLI support
│ │ ├── kiss_error.py # Custom error class
│ │ ├── utils.py # Utility functions (finish, resolve_path, is_subpath, etc.)
│ │ ├── useful_tools.py # UsefulTools class with path-restricted Read, Write, Bash, Edit, search_web, fetch_url
│ │ ├── web_use_tool.py # WebUseTool class with Playwright-based browser automation
│ │ └── models/ # Model implementations
│ │ ├── model.py # Model interface with TokenCallback streaming support
│ │ ├── gemini_model.py # Gemini model implementation
│ │ ├── openai_compatible_model.py # OpenAI-compatible API model (OpenAI, Together AI, OpenRouter)
│ │ ├── anthropic_model.py # Anthropic model implementation
│ │ └── model_info.py # Model info: context lengths, pricing, and capabilities
│ ├── docker/ # Docker integration
│ │ └── docker_manager.py
│ ├── evals/ # Benchmark and evaluation integrations
│ │ ├── algotune/ # AlgoTune benchmark integration
│ │ │ ├── run_algotune.py # AlgoTune task evolution
│ │ │ └── config.py # AlgoTune configuration
│ │ ├── arvo_agent/ # ARVO vulnerability detection agent
│ │ │ ├── arvo_agent.py # Arvo-based vulnerability detector
│ │ │ └── arvo_tags.json # Docker image tags for Arvo
│ │ ├── hotpotqa/ # HotPotQA benchmark integration
│ │ │ ├── hotpotqa_benchmark.py # HotPotQA benchmark runner
│ │ │ └── README.md # HotPotQA documentation
│ │ └── swe_agent_verified/ # SWE-bench Verified benchmark integration
│ │ ├── run_swebench.py # Main runner with CLI support
│ │ ├── config.py # Configuration for SWE-bench runs
│ │ └── README.md # SWE-bench documentation
│ ├── multiprocessing/ # Multiprocessing utilities
│ │ └── multiprocess.py
│ ├── rag/ # RAG (Retrieval-Augmented Generation)
│ │ └── simple_rag.py # Simple RAG system with in-memory vector store
│ ├── demo/ # Demo scripts
│ │ └── kiss_demo.py # Interactive demo with streaming output to terminal and browser
│ ├── scripts/ # Utility scripts
│ │ ├── check.py # Code quality check script
│ │ ├── notebook.py # Jupyter notebook launcher and utilities
│ │ └── kissevolve_bubblesort.py # KISSEvolve example: evolving bubble sort
│ ├── tests/ # Test suite
│ │ ├── conftest.py # Pytest configuration and fixtures
│ │ ├── test_kiss_agent_agentic.py
│ │ ├── test_kiss_agent_non_agentic.py
│ │ ├── test_kiss_agent_coverage.py # Coverage tests for KISSAgent
│ │ ├── test_kissevolve_bubblesort.py
│ │ ├── test_gepa_hotpotqa.py
│ │ ├── test_gepa_batched.py # Tests for GEPA batched wrapper behavior and performance
│ │ ├── test_gepa_progress_callback.py # Tests for GEPA progress callbacks
│ │ ├── test_docker_manager.py
│ │ ├── test_model_implementations.py # Integration tests for model implementations
│ │ ├── run_all_models_test.py # Comprehensive tests for all models
│ │ ├── test_multiprocess.py
│ │ ├── test_internal.py
│ │ ├── test_core_branch_coverage.py # Branch coverage tests for core components
│ │ ├── test_gemini_model_internals.py # Tests for Gemini model internals
│ │ ├── test_cli_options.py # Tests for CLI option parsing
│ │ ├── test_claude_coding_agent.py # Tests for coding agents
│ │ ├── test_evolver_progress_callback.py # Tests for AgentEvolver progress callbacks
│ │ ├── test_token_callback.py # Tests for async token streaming callback
│ │ ├── test_coding_agent_token_callback.py # Tests for token callback in coding agents
│ │ ├── test_a_model.py # Tests for model implementations
│ │ ├── test_print_to_console.py # Tests for ConsolePrinter output
│ │ ├── test_print_to_browser.py # Tests for BrowserPrinter browser output
│ │ ├── test_search_web.py
│ │ ├── test_useful_tools.py
│ │ ├── test_web_use_tool.py # Tests for WebUseTool browser automation
│ │ ├── test_chatbot_tasks.py # Tests for assistant task handling
│ │ ├── integration_test_assistant_agent.py # Integration tests for AssistantAgent
│ │ ├── integration_test_gmail_login.py # Integration tests for Gmail login
│ │ ├── integration_test_google_search.py # Integration tests for Google search
│ │ └── integration_test_web_use_tool.py # Integration tests for WebUseTool
│ ├── py.typed # PEP 561 marker for type checking
│ └── viz_trajectory/ # Trajectory visualization
│ ├── server.py # Flask server for trajectory visualization
│ ├── README.md # Trajectory visualizer documentation
│ └── templates/ # HTML templates for the visualizer
│ └── index.html
├── scripts/ # Repository-level scripts
│ └── release.sh # Release script
├── API.md # KISSAgent API reference
├── BLOG.md # Blog post about the KISS framework
├── CLAUDE.md # Code style guidelines for LLM assistants
├── kiss.ipynb # Interactive tutorial Jupyter notebook
├── LICENSE # Apache-2.0 license
├── pyproject.toml # Project configuration
└── README.md
```
## 🏷️ Versioning
The project uses semantic versioning (MAJOR.MINOR.PATCH). The version is defined in a single source of truth:
- **Version file**: `src/kiss/_version.py` - Edit this file to update the version
- **Package access**: `kiss.__version__` - Access the version programmatically
- **Build system**: `pyproject.toml` automatically reads the version from `_version.py` using dynamic versioning
Example:
```python
from kiss import __version__
print(f"KISS version: {__version__}")
```
To update the version, simply edit `src/kiss/_version.py`:
```python
__version__ = "0.2.0" # Update to new version
```
## ⚙️ Configuration
Configuration is managed through environment variables and the `DEFAULT_CONFIG` object:
- **API Keys**: Set `GEMINI_API_KEY`, `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `TOGETHER_API_KEY`, `OPENROUTER_API_KEY`, and/or `MINIMAX_API_KEY` environment variables
- **Agent Settings**: Modify `DEFAULT_CONFIG.agent` in `src/kiss/core/config.py`:
  - `max_steps`: Maximum iterations in the ReAct loop (default: 100)
  - `verbose`: Enable verbose output (default: True)
  - `debug`: Enable debug mode (default: False)
  - `max_agent_budget`: Maximum budget per agent run in USD (default: 10.0)
  - `global_max_budget`: Maximum total budget across all agents in USD (default: 200.0)
  - `use_web`: Automatically add web browsing and search tools if enabled (default: False)
  - `print_to_console`: Enable ConsolePrinter for Rich terminal output (default: True). Can be overridden per-call via the `print_to_console` parameter on `run()`.
  - `print_to_browser`: Enable BrowserPrinter for live browser UI output (default: False). Can be overridden per-call via the `print_to_browser` parameter on `run()`.
  - `artifact_dir`: Directory for agent artifacts (default: auto-generated with timestamp)
- **Relentless Coding Agent Settings**: Modify `DEFAULT_CONFIG.coding_agent.relentless_coding_agent` in `src/kiss/agents/coding_agents/config.py`:
  - `model_name`: Model for task execution (default: "claude-opus-4-6")
  - `max_sub_sessions`: Maximum number of sub-sessions for auto-continuation (default: 200)
  - `max_steps`: Maximum steps per sub-session (default: 25)
  - `max_budget`: Maximum budget in USD (default: 200.0)
- **IMO Agent Settings**: Modify `DEFAULT_CONFIG.imo_agent` in `src/kiss/agents/imo_agent/config.py`:
  - `solver_model`: Model for solving IMO problems (default: "o3")
  - `verifier_model`: Model for verifying solutions (default: "gemini-2.5-pro")
  - `validator_model`: Model for independent validation against known answers (default: "gemini-3-pro-preview")
  - `max_refinement_rounds`: Max verification-refinement iterations per attempt (default: 2)
  - `num_verify_passes`: Number of verification passes required to accept a solution (default: 1)
  - `max_attempts`: Max independent attempts per problem (default: 1)
  - `max_budget`: Maximum budget in USD per problem (default: 50.0)
- **GEPA Settings**: Modify `DEFAULT_CONFIG.gepa` in `src/kiss/agents/gepa/config.py`:
  - `reflection_model`: Model to use for reflection (default: "gemini-3-flash-preview")
  - `max_generations`: Maximum number of evolutionary generations (default: 10)
  - `population_size`: Number of candidates to maintain in population (default: 8)
  - `pareto_size`: Maximum size of Pareto frontier (default: 4)
  - `mutation_rate`: Probability of mutating a prompt template (default: 0.5)
- **KISSEvolve Settings**: Modify `DEFAULT_CONFIG.kiss_evolve` in `src/kiss/agents/kiss_evolve/config.py`:
  - `max_generations`: Maximum number of evolutionary generations (default: 10)
  - `population_size`: Number of variants to maintain in population (default: 8)
  - `mutation_rate`: Probability of mutating a variant (default: 0.7)
  - `elite_size`: Number of best variants to preserve each generation (default: 2)
  - `num_islands`: Number of islands for island-based evolution, 1 = disabled (default: 2)
  - `migration_frequency`: Number of generations between migrations (default: 5)
  - `migration_size`: Number of individuals to migrate between islands (default: 1)
  - `migration_topology`: Migration topology: 'ring', 'fully_connected', or 'random' (default: "ring")
  - `enable_novelty_rejection`: Enable code novelty rejection sampling (default: False)
  - `novelty_threshold`: Cosine similarity threshold for rejecting code (default: 0.95)
  - `max_rejection_attempts`: Maximum rejection attempts before accepting (default: 5)
  - `parent_sampling_method`: Parent sampling: 'tournament', 'power_law', or 'performance_novelty' (default: "power_law")
  - `power_law_alpha`: Power-law sampling parameter for rank-based selection (default: 1.0)
  - `performance_novelty_lambda`: Selection pressure parameter for sigmoid (default: 1.0)
- **Self-Evolving Multi-Agent Settings**: Modify `DEFAULT_CONFIG.self_evolving_multi_agent` in `src/kiss/agents/self_evolving_multi_agent/config.py`:
  - `model`: LLM model to use for the main agent (default: "gemini-3-flash-preview")
  - `sub_agent_model`: Model for sub-agents (default: "gemini-3-flash-preview")
  - `evolver_model`: Model for evolution (default: "gemini-3-flash-preview")
  - `max_steps`: Maximum orchestrator steps (default: 100)
  - `max_budget`: Maximum budget in USD (default: 10.0)
  - `max_retries`: Maximum retries on error (default: 3)
  - `sub_agent_max_steps`: Maximum steps for sub-agents (default: 50)
  - `sub_agent_max_budget`: Maximum budget for sub-agents in USD (default: 2.0)
  - `docker_image`: Docker image for execution (default: "python:3.12-slim")
  - `workdir`: Working directory in container (default: "/workspace")
## 🛠️ Available Commands
### Development
- `uv sync` - Install all dependencies (full installation)
- `uv sync --group dev` - Install dev tools (mypy, ruff, pytest, jupyter, etc.)
- `uv sync --group <name>` - Install specific dependency group (see [Selective Installation](#selective-installation-dependency-groups))
- `uv build` - Build the project package
### Testing
- `uv run pytest` - Run all tests (uses testpaths from pyproject.toml)
- `uv run pytest src/kiss/tests/ -v` - Run all tests with verbose output
- `uv run pytest src/kiss/tests/test_kiss_agent_agentic.py -v` - Run agentic agent tests
- `uv run pytest src/kiss/tests/test_kiss_agent_non_agentic.py -v` - Run non-agentic agent tests
- `uv run pytest src/kiss/tests/test_multiprocess.py -v` - Run multiprocessing tests
- `uv run python -m unittest src.kiss.tests.test_docker_manager -v` - Run docker manager tests (unittest)
- `uv run python -m unittest discover -s src/kiss/tests -v` - Run all tests using unittest
### Code Quality
- `uv run check` - Run all code quality checks (fresh dependency install, build, lint, and type check)
- `uv run check --clean` - Run all code quality checks after removing previous build artifacts
- `uv run ruff format src/` - Format code with ruff (line-length: 100, target: py313)
- `uv run ruff check src/` - Lint code with ruff (selects: E, F, W, I, N, UP)
- `uv run mypy src/` - Type check with mypy (python_version: 3.13)
- `uv run pyright src/` - Type check with pyright (alternative to mypy, stricter checking)
### Notebook
- `uv run notebook --test` - Test all imports and basic functionality
- `uv run notebook --lab` - Open the tutorial notebook in JupyterLab (recommended)
- `uv run notebook --run` - Open the tutorial notebook in Jupyter Notebook
- `uv run notebook --execute` - Execute notebook cells and update outputs in place
- `uv run notebook --convert` - Convert notebook to Python | text/markdown | null | Koushik Sen <ksen@berkeley.edu> | null | null | Apache-2.0 | agent, ai, anthropic, docker, evolution, framework, function-calling, gemini, genetic-algorithm, llm, openai, rag, react, swe-bench, together | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"anthropic>=0.40.0",
"beautifulsoup4>=4.12.0",
"claude-agent-sdk>=0.1.19",
"datasets>=2.0.0",
"docker>=7.0.0",
"flask>=3.0.0",
"google-adk>=0.1.0",
"google-genai>=0.30.0",
"numpy>=1.26.0",
"openai-agents>=0.0.3",
"openai>=2.13.0",
"playwright>=1.40.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"pyyaml>=6.0.0",
"requests>=2.28.0",
"rich>=14.2.0",
"starlette>=0.38.0",
"swebench>=1.0.0",
"types-docker>=7.1.0.20251202",
"types-pyyaml>=6.0.0",
"uvicorn>=0.30.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ksen/kiss",
"Repository, https://github.com/ksen/kiss"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T09:07:26.614102 | kiss_agent_framework-0.1.22.tar.gz | 386,319 | b2/09/bdb562fef1a7b9d9e57a567e0600050357114e9d8e40c6ba5cc7335dcd5d/kiss_agent_framework-0.1.22.tar.gz | source | sdist | null | false | b56ff108c8228a86ea3ce0a2dc90cb9c | 20ad0c542948e3e84fab377c1807a697a64ddcefc22076a524d12fa93baca4b3 | b209bdb562fef1a7b9d9e57a567e0600050357114e9d8e40c6ba5cc7335dcd5d | null | [
"LICENSE"
] | 232 |
2.4 | splox | 0.0.8 | Official Python SDK for the Splox API — run workflows, manage chats, and monitor execution | # Splox Python SDK
Official Python SDK for the [Splox API](https://docs.splox.io) — run workflows, manage chats, browse the MCP catalog, and monitor execution programmatically.
## Installation
```bash
pip install splox
```
## Quick Start
```python
from splox import SploxClient

client = SploxClient(api_key="your-api-key")

# Create a chat session
chat = client.chats.create(
    name="My Session",
    resource_id="your-workflow-id",
)

# Run a workflow
result = client.workflows.run(
    workflow_version_id="your-version-id",
    chat_id=chat.id,
    start_node_id="your-start-node-id",
    query="Summarize the latest sales report",
)
print(result.workflow_request_id)

# Get execution tree
tree = client.workflows.get_execution_tree(result.workflow_request_id)
for node in tree.execution_tree.nodes:
    print(f"{node.node_label}: {node.status}")
```
## Async Support
```python
import asyncio

from splox import AsyncSploxClient

async def main():
    client = AsyncSploxClient(api_key="your-api-key")

    chat = await client.chats.create(
        name="Async Session",
        resource_id="your-workflow-id",
    )
    result = await client.workflows.run(
        workflow_version_id="your-version-id",
        chat_id=chat.id,
        start_node_id="your-start-node-id",
        query="Hello from async!",
    )

    # Stream execution events via SSE
    async for event in client.workflows.listen(result.workflow_request_id):
        if event.node_execution:
            print(f"Node {event.node_execution.status}: {event.node_execution.output_data}")
        if event.workflow_request and event.workflow_request.status in ("completed", "failed"):
            break

    await client.close()

asyncio.run(main())
```
## Streaming (SSE)
### Listen to workflow execution
```python
# Sync
for event in client.workflows.listen(workflow_request_id):
    print(event)

# Async
async for event in async_client.workflows.listen(workflow_request_id):
    print(event)
```
### Listen to chat messages
Stream real-time chat events including text deltas, tool calls, and more:
```python
# Async example — collect streamed response
async for event in client.chats.listen(chat_id):
    if event.event_type == "text_delta":
        print(event.text_delta, end="", flush=True)
    elif event.event_type == "tool_call_start":
        print(f"\nCalling tool: {event.tool_name}")
    elif event.event_type == "done":
        print("\nIteration complete")

    # Stop when workflow completes
    if event.workflow_request and event.workflow_request.status == "completed":
        break
```
**Event types:**
| Type | Fields | Description |
|------|--------|-------------|
| `text_delta` | `text_delta` | Streamed text chunk |
| `reasoning_delta` | `reasoning_delta`, `reasoning_type` | Thinking content |
| `tool_call_start` | `tool_call_id`, `tool_name` | Tool call initiated |
| `tool_call_delta` | `tool_call_id`, `tool_args_delta` | Tool arguments delta |
| `tool_start` | `tool_name`, `tool_call_id` | Tool execution started |
| `tool_complete` | `tool_name`, `tool_call_id`, `tool_result` | Tool finished |
| `tool_error` | `tool_name`, `tool_call_id`, `error` | Tool failed |
| `done` | `iteration`, `run_id` | Iteration complete |
| `error` | `error` | Error occurred |
## Run & Wait
Convenience method that runs a workflow and waits for completion:
```python
execution = client.workflows.run_and_wait(
    workflow_version_id="your-version-id",
    chat_id=chat.id,
    start_node_id="your-start-node-id",
    query="Process this request",
    timeout=300,  # 5 minutes
)
print(execution.status)  # "completed"

for node in execution.nodes:
    print(f"{node.node_label}: {node.output_data}")
```
## Memory
Inspect and manage agent context memory — list instances, read messages, summarize, trim, clear, or export.
```python
# List memory instances (paginated)
result = client.memory.list("workflow-version-id", limit=20)
for inst in result.chats:
    print(f"{inst.memory_node_label}: {inst.message_count} messages")

# Paginate
if result.has_more:
    more = client.memory.list("workflow-version-id", cursor=result.next_cursor)

# Get messages for an agent node
messages = client.memory.get("agent-node-id", chat_id="session-id", limit=20)
for msg in messages.messages:
    print(f"[{msg.role}] {msg.content}")

# Summarize — compress older messages into an LLM-generated summary
result = client.memory.summarize(
    "agent-node-id",
    context_memory_id="session-id",
    workflow_version_id="version-id",
    keep_last_n=3,
)
print(f"Summary: {result.summary}")

# Trim — drop oldest messages to stay under a limit
client.memory.trim(
    "agent-node-id",
    context_memory_id="session-id",
    workflow_version_id="version-id",
    max_messages=20,
)

# Export all messages without modifying them
exported = client.memory.export(
    "agent-node-id",
    context_memory_id="session-id",
    workflow_version_id="version-id",
)

# Clear all messages
client.memory.clear(
    "agent-node-id",
    context_memory_id="session-id",
    workflow_version_id="version-id",
)

# Delete a specific memory instance
client.memory.delete(
    "session-id",
    memory_node_id="agent-node-id",
    workflow_version_id="version-id",
)
```
## MCP (Model Context Protocol)
Browse the MCP server catalog, manage end-user connections, and generate credential-submission links.
### Catalog
```python
# Search the MCP catalog
catalog = client.mcp.list_catalog(search="github", per_page=10)
for server in catalog.mcp_servers:
    print(f"{server.name} — {server.url}")

# Get featured servers
featured = client.mcp.list_catalog(featured=True)

# Get a single catalog item
item = client.mcp.get_catalog_item("mcp-server-id")
print(item.name, item.auth_type)
```
### Connections
```python
# List all end-user connections
conns = client.mcp.list_connections()
print(f"{conns.total} connections")

# List owner-user MCP servers via the same endpoint
owner_servers = client.mcp.list_connections(scope="owner_user")

# Filter by MCP server or end-user
filtered = client.mcp.list_connections(
    mcp_server_id="server-id",
    end_user_id="user-123",
)

# Delete a connection
client.mcp.delete_connection("connection-id")
```
### Connection Token & Link
Generate signed JWTs for end-user credential submission — no API call required:
```python
from splox import generate_connection_token, generate_connection_link

# Generate a token (expires in 1 hour)
token = generate_connection_token(
    mcp_server_id="mcp-server-id",
    owner_user_id="owner-user-id",
    end_user_id="end-user-id",
    credentials_encryption_key="your-credentials-encryption-key",
)

# Generate a full connection link
link = generate_connection_link(
    base_url="https://app.splox.io",
    mcp_server_id="mcp-server-id",
    owner_user_id="owner-user-id",
    end_user_id="end-user-id",
    credentials_encryption_key="your-credentials-encryption-key",
)
# → https://app.splox.io/tools/connect?token=eyJhbG...
```
Async usage is identical — the token/link functions are synchronous and available on both `client.mcp` and as standalone imports.
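For intuition, a signed, expiring token of this general shape can be built with only the standard library. The claim names and payload layout below are assumptions made for illustration; the real token format is defined by the SDK's `generate_connection_token`:

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_connection_token(
    mcp_server_id: str,
    owner_user_id: str,
    end_user_id: str,
    key: str,
    ttl_seconds: int = 3600,
) -> str:
    """Build an HS256 JWT with a default 1-hour expiry (claim names are assumed)."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "mcp_server_id": mcp_server_id,
        "owner_user_id": owner_user_id,
        "end_user_id": end_user_id,
        "exp": int(time.time()) + ttl_seconds,
    }
    # header.payload, each compact-JSON encoded then base64url encoded
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    signature = hmac.new(key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(signature)}"
```

Because the signature is an HMAC over the encoded header and payload, the receiving server can verify the token offline with the shared key, which is why no API call is needed to generate it.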
## Webhooks
```python
# Trigger a workflow via webhook (no auth required)
from splox import SploxClient

client = SploxClient()  # No API key needed for webhooks
result = client.events.send(
    webhook_id="your-webhook-id",
    payload={"order_id": "12345", "status": "paid"},
)
print(result.event_id)
```
## Error Handling
```python
from splox import SploxClient
from splox.exceptions import (
    SploxAPIError,
    SploxAuthError,
    SploxRateLimitError,
    SploxNotFoundError,
)

client = SploxClient(api_key="your-api-key")

try:
    result = client.workflows.run(...)
except SploxAuthError:
    print("Invalid or expired API token")
except SploxRateLimitError as e:
    print(f"Rate limited. Retry after: {e.retry_after}")
except SploxNotFoundError:
    print("Resource not found")
except SploxAPIError as e:
    print(f"API error {e.status_code}: {e.message}")
```
## Custom Base URL
```python
client = SploxClient(
    api_key="your-api-key",
    base_url="https://your-self-hosted-instance.com/api/v1",
)
```
## API Reference
### `SploxClient` / `AsyncSploxClient`
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `api_key` | `str \| None` | `SPLOX_API_KEY` env | API authentication token |
| `base_url` | `str` | `https://app.splox.io/api/v1` | API base URL |
| `timeout` | `float` | `30.0` | Request timeout in seconds |
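The `api_key` fallback in the table above (explicit argument first, then the `SPLOX_API_KEY` environment variable) follows a common pattern, sketched here for illustration rather than taken from the SDK source:

```python
import os
from typing import Optional


def resolve_api_key(
    api_key: Optional[str] = None, env_var: str = "SPLOX_API_KEY"
) -> Optional[str]:
    """Prefer an explicitly passed key; otherwise fall back to the environment."""
    return api_key if api_key is not None else os.environ.get(env_var)
```

This is why `SploxClient()` with no arguments still authenticates when the environment variable is set.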
### `client.workflows`
| Method | Description |
|--------|-------------|
| `run(...)` | Trigger a workflow execution |
| `listen(id)` | Stream execution events (SSE) |
| `get_execution_tree(id)` | Get complete execution hierarchy |
| `get_history(id, ...)` | Get paginated execution history |
| `stop(id)` | Stop a running workflow |
| `run_and_wait(...)` | Run and wait for completion |
### `client.chats`
| Method | Description |
|--------|-------------|
| `create(...)` | Create a new chat session |
| `get(id)` | Get a chat by ID |
| `listen(id)` | Stream chat events (SSE) |
### `client.events`
| Method | Description |
|--------|-------------|
| `send(webhook_id, ...)` | Send event via webhook |
### `client.memory`
| Method | Description |
|--------|-------------|
| `list(version_id, ...)` | List memory instances (paginated) |
| `get(node_id, ...)` | Get paginated messages |
| `summarize(node_id, ...)` | Summarize older messages with LLM |
| `trim(node_id, ...)` | Drop oldest messages |
| `clear(node_id, ...)` | Remove all messages |
| `export(node_id, ...)` | Export all messages |
| `delete(memory_id, ...)` | Delete a memory instance |
### `client.mcp`
| Method | Description |
|--------|-------------|
| `list_catalog(...)` | Search/list MCP catalog (paginated) |
| `get_catalog_item(id)` | Get a single catalog item |
| `list_connections(...)` | List MCP links by identity scope (`end_user` or `owner_user`) |
| `delete_connection(id)` | Delete an end-user connection |
| `generate_connection_token(...)` | Create a signed JWT (1 hr expiry) |
| `generate_connection_link(...)` | Build a full connection URL |
### Standalone functions
| Function | Description |
|----------|-------------|
| `generate_connection_token(server_id, owner_id, end_user_id, key)` | Create a signed JWT |
| `generate_connection_link(base_url, server_id, owner_id, end_user_id, key)` | Build a full connection URL |
## License
MIT
| text/markdown | null | Splox <support@splox.io> | null | null | null | agent, ai, sdk, splox, workflow | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25.0",
"mypy>=1.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-httpx>=0.30; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://splox.io",
"Documentation, https://docs.splox.io",
"Repository, https://github.com/splox-ai/python-sdk",
"Issues, https://github.com/splox-ai/python-sdk/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T09:07:10.724546 | splox-0.0.8.tar.gz | 32,878 | d2/59/d7095d34c97978a8b3c50a83814a01de9711baac8cb9b3043f5020e5dcfd/splox-0.0.8.tar.gz | source | sdist | null | false | d99d59b778b9dd6b1e65fb78ffd8131f | a50af91dd2e6c85a8a8a609755db8b308d967d93202d14d9ff8a06ef42acc2a6 | d259d7095d34c97978a8b3c50a83814a01de9711baac8cb9b3043f5020e5dcfd | MIT | [
"LICENSE"
] | 263 |
2.4 | polaris-studio | 26.2.21.dev1 | Transportation System Simulation Tool | # polaris-studio
The polaris-studio package is the Python gateway for all things Polaris. The package is
divided into several submodules, ranging from data preparation to result analysis, and is
the source of truth for the data model required by Polaris.
For the package's full description and documentation, see: https://polaris.taps.anl.gov/polaris/index.html
For release notes, see: https://polaris.taps.anl.gov/polaris/releases/index.html
The standard installation of Polaris-Studio brings a minimal set of dependencies, namely those required
for running a simulation. For a full installation, install the `builder` extra:
`pip install polaris-studio[builder]`
We recommend using virtual environments to prevent dependency clashes. In the root folder of the repo you will find
a script called `setup_venv.sh` (Linux) or `setup_venv.bat` (Windows) which creates a virtual environment
in a `venv` sub-directory of the repo and installs it as a Jupyter kernel (named "polaris-studio") that can be used
with any notebooks you are running.
```bash
./setup_venv.sh
```
## Documentation
Polaris-studio is also responsible for building the documentation website (https://polaris.taps.anl.gov). The
steps for building are outlined in the `ci/documentation_ci.yml` gitlab CI definition file and require the
cloning of the `polaris-linux` and `QPolaris` repositories. This can be achieved in a local environment by running
```bash
./docs/build_all_locally.sh
```
There are a number of example notebooks in the documentation that require additional data files to run. These can be
downloaded and run locally using the following commands:
```bash
./docs/build_consolidated_docs.sh grab
./docs/build_consolidated_docs.sh run_notebooks [optional_pattern]
```
The optional pattern can be used to run only a single notebook, for example when adding a new one or debugging one that has
developed a problem.
| text/markdown | null | Pedro Camargo <pveigadecamargo@anl.gov>, Jamie Cook <james.cook@anl.gov>, Polaris Team <polaris@anl.gov> | null | null | null | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click",
"diskcache",
"numpy>=2.0.0",
"openmatrix",
"pandas>=2.2.0",
"psutil",
"psycopg[binary,pool]",
"pyarrow",
"pydantic",
"pyyaml",
"retry2",
"scipy",
"sqlalchemy>=2",
"tables>=3.8",
"tzlocal",
"numexpr<=2.13.1",
"globus_sdk<4,>3.10; extra == \"hpc\"",
"aequilibrae>=1.6.0; extra == \"builder\"",
"appdirs; extra == \"builder\"",
"census>=0.8.22; extra == \"builder\"",
"duckdb==1.3.0; extra == \"builder\"",
"geopandas>=1.1.1; extra == \"builder\"",
"gmnspy; extra == \"builder\"",
"matplotlib; extra == \"builder\"",
"networkx; extra == \"builder\"",
"partridge; extra == \"builder\"",
"psycopg2-binary; extra == \"builder\"",
"pygris>=0.2.0; extra == \"builder\"",
"pyproj; extra == \"builder\"",
"rapidfuzz; extra == \"builder\"",
"requests; extra == \"builder\"",
"rtree; extra == \"builder\"",
"scipy; extra == \"builder\"",
"seaborn; extra == \"builder\"",
"shapely>=2.0.1; extra == \"builder\"",
"scikit-learn; extra == \"builder\"",
"tqdm; extra == \"builder\"",
"us>=3.2.0; extra == \"builder\"",
"types-tqdm; extra == \"linting\"",
"types-python-dateutil; extra == \"linting\"",
"types-PyYAML; extra == \"linting\"",
"types-requests; extra == \"linting\"",
"pandas-stubs<3.0; extra == \"linting\"",
"types-requests; extra == \"linting\"",
"types-tzlocal; extra == \"linting\"",
"sqlalchemy; extra == \"linting\"",
"numpy; extra == \"linting\"",
"ruff==0.14.9; extra == \"linting\"",
"black==25.12.0; extra == \"linting\"",
"ty==0.0.1-alpha.23; extra == \"linting\"",
"scipy-stubs; extra == \"linting\"",
"types-networkx; extra == \"linting\"",
"types-seaborn; extra == \"linting\"",
"pyproj; extra == \"linting\"",
"globus_sdk; extra == \"linting\"",
"matplotlib-stubs; extra == \"linting\"",
"duckdb; extra == \"linting\"",
"jupytext; extra == \"devtools\"",
"pytest; extra == \"devtools\"",
"pytest-cov; extra == \"devtools\"",
"pytest-order; extra == \"devtools\"",
"pytest-rerunfailures; extra == \"devtools\"",
"pytest-xdist; extra == \"devtools\"",
"pytest-random-order; extra == \"devtools\"",
"setuptools; extra == \"devtools\"",
"testing.postgresql; sys_platform == \"linux\" and extra == \"devtools\"",
"uv; extra == \"devtools\"",
"pandas>=3.0.0; extra == \"devtools\"",
"ipywidgets; extra == \"devtools\"",
"jupyter; extra == \"devtools\"",
"py7zr; extra == \"devtools\"",
"pygit2; extra == \"devtools\"",
"autodoc_pydantic; extra == \"docs\"",
"pyaml; extra == \"docs\"",
"enum34>=1.1.6; extra == \"docs\"",
"Sphinx; extra == \"docs\"",
"jinja2; extra == \"docs\"",
"pydata-sphinx-theme; extra == \"docs\"",
"sphinx-book-theme; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"mapclassify; extra == \"docs\"",
"sphinx_autodoc_annotation; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"pillow; extra == \"docs\"",
"matplotlib; extra == \"docs\"",
"folium; extra == \"docs\"",
"contextily; extra == \"docs\"",
"requests; extra == \"docs\"",
"sphinx-gallery>=0.17.0; extra == \"docs\"",
"sphinx-design; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"ipython_genutils; extra == \"docs\"",
"sphinxcontrib-youtube; extra == \"docs\"",
"py7zr; extra == \"docs\"",
"polaris-studio[builder,devtools,docs,hpc,linting]; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://vms.taps.anl.gov/tools/polaris//",
"Documentation, https://polaris.taps.anl.gov",
"Issue tracker, https://git-out.gss.anl.gov/polaris/issues/-/issues",
"Release Notes, https://polaris.taps.anl.gov/polaris/releases/index.html"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-21T09:07:10.243574 | polaris_studio-26.2.21.dev1-py3-none-win_amd64.whl | 88,848,699 | 3d/2d/6a14aebfec4e47078409eb1215f7475fbe58cc5b28669649dfa62ed2bb91/polaris_studio-26.2.21.dev1-py3-none-win_amd64.whl | py3 | bdist_wheel | null | false | f18f348029a00b06927647ef24ede109 | ddd28c497b02124ac2ea06ad458836caa3111af837c03737dbd3e809dae5753a | 3d2d6a14aebfec4e47078409eb1215f7475fbe58cc5b28669649dfa62ed2bb91 | LicenseRef-Polaris-Studio | [
"LICENSE.md"
] | 137 |
2.4 | rainbear | 2.1.0 | Build lazy Zarr scans with Polars | # Rainbear
Python + Rust experiment for **lazy Zarr scanning into Polars**, with an API inspired by xarray’s coordinate-based selection.
This repo currently contains:
- A first-pass `scan_zarr(...)` that streams a Zarr store using Rust [`zarrs`] and yields Polars `LazyFrame`s.
- A test suite that compares rainbear against xarray for various Zarr datasets and filter conditions.
## Status / caveats
- **Zarr v3**: the Rust backend uses `zarrs`.
  For this reason, Zarr v2 stores are unlikely to work.
- **Tidy table output**: `scan_zarr` currently emits a “tidy” `DataFrame` with one row per element and columns:
- dimension/coord columns (e.g. `time`, `lat`)
- variable columns (e.g. `temp`)
- **Predicate pushdown**:
- Rust attempts to compile a limited subset of predicates (simple comparisons on coord columns combined with `&`) for **chunk pruning**.
- If Polars `Expr` deserialization fails (typically because Python Polars and the Rust-side Polars ABI/serde don’t match), `scan_zarr` automatically falls back to **Python-side filtering** (correct but slower).
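Chunk pruning from a simple comparison predicate can be sketched in plain Python (illustrative only, not the Rust implementation): each chunk knows the min/max range of a coordinate it covers, so chunks whose range cannot intersect the predicate are skipped before any data is read.

```python
# Illustrative chunk-pruning sketch (not rainbear's Rust implementation).
# Each chunk records the [min, max] range of the `lat` coordinate it covers.
chunks = [
    {"id": 0, "lat_min": 0.0, "lat_max": 30.0},
    {"id": 1, "lat_min": 30.0, "lat_max": 60.0},
    {"id": 2, "lat_min": 60.0, "lat_max": 90.0},
]

def prune(chunks, lo, hi):
    """Keep only chunks whose lat range intersects [lo, hi]."""
    return [c["id"] for c in chunks if c["lat_max"] >= lo and c["lat_min"] <= hi]

# A predicate like (lat >= 32.0) & (lat <= 52.0) only touches chunk 1:
print(prune(chunks, 32.0, 52.0))  # [1]
```

Rows inside a surviving chunk may still fail the predicate, so a per-row filter runs afterwards; pruning only avoids reading chunks that cannot match.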
## Quickstart (uv)
The project is configured as a `maturin` extension module.
- Run a quick import check:
```bash
uv run --with polars python -c "import rainbear; print(rainbear.print_extension_info())"
```
## Using `scan_zarr`
```python
import polars as pl
import rainbear
lf = rainbear.scan_zarr("/path/to/data.zarr")
# Filter the LazyFrame (predicates are pushed down and used for chunk pruning)
lf = lf.filter((pl.col("lat") >= 32.0) & (pl.col("lat") <= 52.0))
df = lf.collect()
print(df)
```
## Caching Backends
Rainbear provides three backend classes that own the store connection and cache metadata and coordinate chunks across multiple scans. This dramatically improves performance for repeated queries on the same dataset.
### `ZarrBackend` (Async)
The **async caching backend** for standard Zarr stores. Best for cloud storage (S3, GCS, Azure) where async I/O provides significant performance benefits.
**Features:**
- Persistent caching of coordinate array chunks and metadata across scans
- Async I/O with configurable concurrency for parallel chunk reads
- Compatible with any ObjectStore (S3, GCS, Azure, HTTP, local filesystem)
- Cache statistics and management (clear cache, view stats)
**When to use:**
- Cloud-based Zarr stores where network latency dominates
- Applications already using async/await patterns
- High-concurrency workloads with many simultaneous chunk reads
```python
import polars as pl
from datetime import datetime
import rainbear
# Create backend from URL
backend = rainbear.ZarrBackend.from_url("s3://bucket/dataset.zarr")
# First scan - reads and caches coordinates
df1 = await backend.scan_zarr_async(pl.col("time") > datetime(2024, 1, 1))
# Second scan - reuses cached coordinates (much faster!)
df2 = await backend.scan_zarr_async(pl.col("time") > datetime(2024, 6, 1))
# Check what's cached
stats = await backend.cache_stats()
print(f"Cached {stats['coord_entries']} coordinate chunks")
# Clear cache if needed
await backend.clear_coord_cache()
```
### `ZarrBackendSync` (Sync)
The **synchronous caching backend** for standard Zarr stores. Best for local filesystem access or simpler synchronous codebases.
**Features:**
- Same persistent caching as `ZarrBackend` (coordinates and metadata)
- Synchronous API - no async/await required
- Blocking I/O suitable for local or low-latency stores
- Additional options: column selection, row limits, batch size control
**When to use:**
- Local filesystem Zarr stores
- Synchronous applications or scripts
- Interactive data exploration (notebooks, REPL)
- When you don't need async concurrency
```python
import polars as pl
from datetime import datetime
import rainbear
# Create backend from URL
backend = rainbear.ZarrBackendSync.from_url("/path/to/local/dataset.zarr")
# Scan with column selection and row limit
df1 = backend.scan_zarr_sync(
predicate=pl.col("time") > datetime(2024, 1, 1),
with_columns=["temp", "pressure"],
n_rows=1000
)
# Second scan reuses cached coordinates
df2 = backend.scan_zarr_sync(pl.col("time") > datetime(2024, 6, 1))
# No await needed for cache operations in sync backend
stats = backend.cache_stats()
backend.clear_coord_cache()
```
### `IcechunkBackend` (Async, Version Control)
The **async-only caching backend** for [Icechunk](https://icechunk.io/)-backed Zarr stores. Icechunk adds Git-like version control to Zarr datasets, enabling branches, commits, and time-travel queries.
**Features:**
- Same persistent caching as `ZarrBackend` (coordinates and metadata)
- Access to versioned Zarr data with branch/snapshot support
- Direct integration with icechunk-python Session objects
- Async-only (Icechunk operations are inherently async)
**When to use:**
- Working with version-controlled Zarr datasets
- Need to query specific branches or historical snapshots
- Collaborative workflows with multiple dataset versions
- Reproducible analysis requiring exact dataset versions
```python
import polars as pl
from datetime import datetime
import rainbear
# Create backend from Icechunk filesystem repository
backend = await rainbear.IcechunkBackend.from_filesystem(
path="/path/to/icechunk/repo",
branch="main" # or specific branch name
)
# Scan like normal - caching works the same
df1 = await backend.scan_zarr_async(pl.col("time") > datetime(2024, 1, 1))
df2 = await backend.scan_zarr_async(pl.col("time") > datetime(2024, 6, 1))
# Or use existing Icechunk session directly
from icechunk import Repository, local_filesystem_storage
storage = local_filesystem_storage("/path/to/repo")
repo = Repository.open(storage)
session = repo.readonly_session("experimental-branch")
# No manual serialization needed!
backend = await rainbear.IcechunkBackend.from_session(session)
df = await backend.scan_zarr_async(pl.col("lat") < 45.0)
```
### Backend Comparison
| Feature | ZarrBackend | ZarrBackendSync | IcechunkBackend |
|---------|-------------|-----------------|-----------------|
| **API Style** | Async | Sync | Async |
| **Caching** | ✓ Coordinates & metadata | ✓ Coordinates & metadata | ✓ Coordinates & metadata |
| **Best For** | Cloud storage (S3, GCS, Azure) | Local filesystem | Version-controlled datasets |
| **Concurrency** | High (configurable) | Single-threaded | High (configurable) |
| **Version Control** | ✗ | ✗ | ✓ (branches, snapshots) |
| **Column Selection** | ✗ | ✓ | ✗ |
| **Row Limits** | ✗ | ✓ | ✗ |
## Running the smoke tests
The Python tests create some local Zarr stores and then scan them.
From the workspace root:
```bash
cd rainbear-tests
uv run pytest
```
# Development
To run the Rust tests:
```bash
cargo test
```
To run the Python tests:
```bash
uv run pytest
```
Profiling:
```bash
samply record -- uv run python -m pytest tests/test_benchmark_novel_queries.py -m 'benchmark' --no-header -rN
```
## Roadmap
### Near Term
- [ ] Geospatial support via ewkb and polars-st
- [x] Interpolation support
- [ ] Tests against cloud storage backends
- [ ] Benchmarks
- [ ] Documentation
### Longer Term
- [ ] Deeper integration with Polars' lazy engine to take full advantage of its query optimizer
- [ ] Caching Support?
- [ ] Writing to zarr?
- [ ] Capability to work with datatrees
- [ ] Allow output to arrow/pandas/etc.
- [ ] Icechunk support
- [ ] Zarr V2 support (backwards compatibility)
## Code map
- **Rust extension module**: `rainbear/src/lib.rs` exports `_core`
- **Zarr store opener (multi-backend URLs)**: `rainbear/src/zarr_store.rs`
- **Metadata loader (dims/coords/vars + schema)**: `rainbear/src/zarr_meta.rs`
- **Streaming IO source**: `rainbear/src/zarr_source.rs` (exposed to Python as `ZarrBackendSync`)
- **Python API**: `rainbear/src/rainbear/__init__.py` (`scan_random`, `scan_zarr`, `ZarrBackendSync`)
- **Tests**: `rainbear-tests/tests/` (separate workspace package)
[`zarrs`]: https://docs.rs/zarrs/latest/zarrs/
| text/markdown; charset=UTF-8; variant=GFM | null | Benjamin Sobel <ben-developer@opayq.com> | null | null | null | zarr, polars, lazy | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"polars==1.37.1"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T09:06:57.779851 | rainbear-2.1.0-cp39-abi3-win32.whl | 35,947,798 | 3a/b7/20266fb396d516d65156e846660210d533dbe4e7f019bbbea3c4f86ed26d/rainbear-2.1.0-cp39-abi3-win32.whl | cp39 | bdist_wheel | null | false | 3fc74c1ed1b5b155ad42e256c65088f7 | 5201eac391a2bda1c1bb2db36a3c990f3f0192e2f7803a1b00b517f9cc1ee95a | 3ab720266fb396d516d65156e846660210d533dbe4e7f019bbbea3c4f86ed26d | null | [
"LICENSE"
] | 1,234 |
2.3 | pydotaconstants | 1.0.0b1 | Add your description here | # PyDotaConstants
PyDotaConstants is a Python library designed to provide structured access to Dota 2 hero and ability data. It allows developers to easily retrieve hero and ability information based on their code names and display names. This project is particularly useful for building applications and tools that require detailed insights into Dota 2's heroes and abilities.
## Features
- Retrieve hero information by codename, ID, or display name.
- Access ability information similarly through codename or display name.
- Data is loaded from pre-compiled binary files, ensuring fast access and processing.
- Supports a structured schema for hero and ability data that can be easily navigated.
## Installation and Setup
To install the PyDotaConstants library, simply clone the repository and ensure the necessary data files are in place. The current data files must be compiled as `.pkl` files as represented in the `src/pydotaconstants/data` directory.
```bash
git clone https://github.com/yourusername/pydotaconstants.git
cd pydotaconstants
```
Make sure to run the `_update.py` script to generate the necessary data files if they are not already provided.
```bash
python src/_update.py
```
## Basic Usage
To use the library, you can import the desired classes and access hero or ability data as shown below:
```python
from pydotaconstants import Hero, Ability
# Get hero by codename
hero = Hero.getByName("npc_dota_hero_axe")
print(hero.displayName) # Output: Axe
# Get ability by display name
ability = Ability.getByDisplayName("Hex")
print(ability.displayDescription)
```
## Configuration
This library expects certain pre-compiled data files:
- `heroes.pkl`: Contains data for all Dota 2 heroes.
- `abilities.pkl`: Contains data for all Dota 2 abilities.
- `locals.pkl`: Contains localization strings for heroes and abilities.
Make sure these files are located in the `src/pydotaconstants/data` directory for the library to function correctly.
## Contributing Guidelines
Contributions to PyDotaConstants are welcome. To contribute:
1. Fork the repository.
2. Create a new branch (`git checkout -b feature-branch`).
3. Make your changes and commit them (`git commit -m 'Add some feature'`).
4. Push to the branch (`git push origin feature-branch`).
5. Open a Pull Request detailing your changes.
Please ensure any new features include tests and documentation updates as necessary.
## License
This project is licensed under the MIT License. See the `LICENSE` file for more details. | text/markdown | r41ngee | r41ngee <r41ngee@yandex.ru> | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.14 {"installer":{"name":"uv","version":"0.9.14","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T09:06:44.172726 | pydotaconstants-1.0.0b1.tar.gz | 7,225,350 | 54/31/9dc36e8bcc52c634747dc672801c9c22d9af6d88791739e49f28432df3c8/pydotaconstants-1.0.0b1.tar.gz | source | sdist | null | false | bf5ace65c827e43aefd55e61497646bb | 1712e84a9e18c9f787ac0a4178786651a686c575f6eeee54fcc781df3e792fdf | 54319dc36e8bcc52c634747dc672801c9c22d9af6d88791739e49f28432df3c8 | null | [] | 229 |
2.3 | tutr | 0.1.3 | AI-powered terminal assistant that generates shell commands from natural language. | # tutr - Terminal Utility for Con(T)extual Responses
A stupid simple, AI-powered terminal assistant that generates commands from natural language.
## What does it do?
Generates terminal commands from natural language queries.
``` bash
> tutr git create and switch to a new branch called testing
git checkout -b testing
```
``` bash
> tutr go back to the previous directory
cd -
```
## Installation
Requires Python 3.10+.
```bash
pipx install tutr
```
Or run it without installing:
```bash
uvx tutr
```
For development from source:
```bash
git clone https://github.com/spi/tutr.git
cd tutr
uv sync
```
## Setup
On first run, tutr launches an interactive setup to select your provider, model, and API key:
```
$ tutr git "show recent commits"
Welcome to tutr! Let's get you set up.
Select your LLM provider:
1. Gemini
2. Anthropic
3. OpenAI
4. Ollama (local, no API key needed)
Enter choice (1-4): 1
Enter your Gemini API key:
API key:
Select a model:
1. Gemini 3 Flash (recommended)
2. Gemini 2.0 Flash
3. Gemini 2.5 Pro
Enter choice (1-3): 1
Configuration saved to ~/.tutr/config.json
```
Setup is skipped if `~/.tutr/config.json` already exists or provider API key environment variables are set.
## Usage
```
tutr <command> <what you want to do>
```
### Examples
```bash
tutr git "create and switch to a new branch called testing"
tutr sed "replace all instances of 'foo' with 'bar' in myfile.txt"
tutr curl "http://example.com and display all request headers"
```
### Arguments
| Argument | Description |
|---|---|
| `command` | The terminal command to get help with (e.g., `git`, `sed`, `curl`) |
| `query` | What you want to do, in natural language |
### Options
| Flag | Description |
|---|---|
| `-h, --help` | Show help message |
| `-V, --version` | Show version |
| `-d, --debug` | Enable debug logging |
| `-e, --explain` | Show LLM explanation and source for the generated command |
## Configuration
Config is stored in `~/.tutr/config.json`. Environment variables override the config file.
| Environment Variable | Description | Default |
|---|---|---|
| `TUTR_MODEL` | LLM model to use ([litellm format](https://docs.litellm.ai/docs/providers)) | `gemini/gemini-3-flash-preview` |
| `GEMINI_API_KEY` | Gemini API key | — |
| `ANTHROPIC_API_KEY` | Anthropic API key | — |
| `OPENAI_API_KEY` | OpenAI API key | — |
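Since environment variables take precedence, a one-off model override looks like this (the model name is one of the options shown during setup; the key value is a placeholder):

```shell
# Environment variables override ~/.tutr/config.json for this shell session.
export TUTR_MODEL="gemini/gemini-2.0-flash"
export GEMINI_API_KEY="your-key-here"  # placeholder, use your real key
```

Any `tutr` invocation in the same shell then uses the overridden model without touching the config file.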
You can also enable command explanations persistently in `~/.tutr/config.json`:
```json
{
"show_explanation": true
}
```
To re-run setup, delete the config file:
```bash
rm ~/.tutr/config.json
```
## Development
Run all quality checks:
```bash
uv run poe check
```
Run tests only:
```bash
uv run pytest
```
Lint and format:
```bash
uv run ruff check .
uv run ruff format .
```
## Publishing to PyPI
Build and validate distribution artifacts:
```bash
uv sync
uv run poe dist
```
Upload to TestPyPI first:
```bash
export TWINE_USERNAME=__token__
export TWINE_PASSWORD=<testpypi-api-token>
uv run poe publish_testpypi
```
Then upload to PyPI:
```bash
export TWINE_USERNAME=__token__
export TWINE_PASSWORD=<pypi-api-token>
uv run poe publish_pypi
```
You can create API tokens in your account settings:
- TestPyPI: https://test.pypi.org/manage/account/token/
- PyPI: https://pypi.org/manage/account/token/
| text/markdown | spi | spi <spi3@pm.me> | null | null | MIT License
Copyright (c) 2026 tutr
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | cli, terminal, llm, developer-tools, ai | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Build Tools",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"litellm>=1.81.13",
"pydantic>=2.12.5"
] | [] | [] | [] | [
"Homepage, https://github.com/spi3/tutr",
"Repository, https://github.com/spi3/tutr",
"Issues, https://github.com/spi3/tutr/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:05:24.997597 | tutr-0.1.3.tar.gz | 17,065 | d2/d8/71d12c90bcbc50f58cd9ce014250351209b498198420fca8f3192174176e/tutr-0.1.3.tar.gz | source | sdist | null | false | dfee44b651e1108a12847b6983430642 | eed735f640d0d787237f22017985260f9a0d16548bba5acb4ade7d47eee05375 | d2d871d12c90bcbc50f58cd9ce014250351209b498198420fca8f3192174176e | null | [] | 248 |
2.4 | lucid-dl | 2.13.8 | Lumerico's Comprehensive Interface for Deep Learning | # Lucid² 💎


[](https://pepy.tech/projects/lucid-dl)



**Lucid** is a minimalist deep learning framework built entirely from scratch in Python. It offers a pedagogically rich environment to explore the foundations of modern deep learning systems, including autodiff, neural network modules, and GPU acceleration — all while staying lightweight, readable, and free of complex dependencies.
Whether you're a student, educator, or an advanced researcher seeking to demystify deep learning internals, Lucid provides a transparent and highly introspectable API that faithfully replicates key behaviors of major frameworks like PyTorch, yet in a form simple enough to study line by line.
[📑 Lucid Documentation](https://chanlumerico.github.io/lucid/build/html/index.html) | [✏️ Lucid DevLog](https://velog.io/@lumerico284/series/Lucid-Development) |
[🤗 Lucid Huggingface](https://huggingface.co/ChanLumerico/lucid)
#### Other Languages
[🇰🇷 Korean](https://github.com/ChanLumerico/lucid/blob/main/README.kr.md)
### 🔥 What's New
- Implemented **RoFormer** (Su et al., 2021) `lucid.models.RoFormer` based on BERT implementation, along with various task wrappers.
- Added **Rotary Positional Embedding** (RoPE; Su et al., 2021): `nn.RotaryPosEmbedding`
- Added new submodule `lucid.data.tokenizers` which contains various tokenizers for NLP tasks along with their ***Fast*** versions, accelerated via **C++ backend**. (e.g. `WordPieceTokenizerFast`)
- Implemented **BERT**(Devlin et al., 2018) `lucid.models.BERT`
## 🔧 How to Install
Lucid is designed to be light, portable, and friendly to all users — no matter your setup.
### ▶️ Basic Installation
Lucid is available directly on PyPI:
```bash
pip install lucid-dl
```
Alternatively, you can install the latest development version from GitHub:
```bash
pip install git+https://github.com/ChanLumerico/lucid.git
```
This installs all the core components needed to use Lucid in CPU mode using NumPy.
### ⚡ Enable GPU (Metal / MLX Acceleration)
If you're using a Mac with Apple Silicon (M1, M2, M3), Lucid supports GPU execution via the MLX library.
To enable Metal acceleration:
1. Install MLX:
```bash
pip install mlx
```
2. Confirm you have a compatible device (Apple Silicon).
3. Run any computation with `device="gpu"`.
### ✅ Verification
Here's how to check whether GPU acceleration is functioning:
```python
import lucid
x = lucid.ones((1024, 1024), device="gpu")
print(x.device) # Should print: 'gpu'
```
## 📐 Tensor: The Core Abstraction
At the heart of Lucid is the `Tensor` class — a generalization of NumPy arrays that supports advanced operations such as gradient tracking, device placement, and computation graph construction.
Each Tensor encapsulates:
- A data array (`ndarray` or `mlx.array`)
- Gradient (`grad`) buffer
- The operation that produced it
- A list of parent tensors from which it was derived
- Whether it participates in the computation graph (`requires_grad`)
### 🔁 Construction and Configuration
```python
from lucid import Tensor
x = Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True, device="gpu")
```
- `requires_grad=True` adds this tensor to the autodiff graph.
- `device="gpu"` allocates the tensor on the Metal backend.
### 🔌 Switching Between Devices
Tensors can be moved between CPU and GPU at any time using `.to()`:
```python
x = x.to("gpu") # Now uses MLX arrays for accelerated computation
y = x.to("cpu") # Moves data back to NumPy
```
You can inspect which device a tensor resides on via:
```python
print(x.device) # Either 'cpu' or 'gpu'
```
## 📉 Automatic Differentiation (Autodiff)
Lucid implements **reverse-mode automatic differentiation**, which is commonly used in deep learning due to its efficiency for computing gradients of scalar-valued loss functions.
It builds a dynamic graph during the forward pass, capturing every operation involving Tensors that require gradients. Each node stores a custom backward function which, when called, computes local gradients and propagates them upstream using the chain rule.
### 📘 Computation Graph Internals
The computation graph is a Directed Acyclic Graph (DAG) in which:
- Each `Tensor` acts as a node.
- Each operation creates edges between inputs and outputs.
- A `_backward_op` method is associated with each Tensor that defines how to compute gradients w.r.t. parents.
The `.backward()` method:
1. Topologically sorts the graph.
2. Initializes the output gradient (usually with 1.0).
3. Executes all backward operations in reverse order.
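These three steps can be condensed into a minimal reverse-mode autodiff sketch (illustrative only, not Lucid's actual code): each node stores a backward closure, `backward()` topologically sorts the DAG, seeds the output gradient with 1.0, and runs the closures in reverse order.

```python
# Minimal reverse-mode autodiff sketch (illustrative, not Lucid's internals).
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents
        self._backward_op = lambda: None

    def __add__(self, other):
        out = Node(self.value + other.value, parents=(self, other))
        def _backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward_op = _backward
        return out

    def __mul__(self, other):
        out = Node(self.value * other.value, parents=(self, other))
        def _backward():
            self.grad += other.value * out.grad   # d(a*b)/da = b
            other.grad += self.value * out.grad   # d(a*b)/db = a
        out._backward_op = _backward
        return out

    def backward(self):
        # 1. Topologically sort the DAG
        order, seen = [], set()
        def visit(n):
            if n not in seen:
                seen.add(n)
                for p in n.parents:
                    visit(p)
                order.append(n)
        visit(self)
        # 2. Seed the output gradient
        self.grad = 1.0
        # 3. Run backward ops in reverse topological order
        for n in reversed(order):
            n._backward_op()

x = Node(3.0)
y = x * x + x      # y = x^2 + x
y.backward()
print(x.grad)      # dy/dx = 2x + 1 = 7.0
```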
### 🧠 Example
```python
import lucid
x = lucid.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2 + 1
z = y.sum()
z.backward()
print(x.grad) # Output: [2.0, 2.0, 2.0]
```
This chain-rule application computes the gradient $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y}\cdot\frac{\partial y}{\partial x} = [2, 2, 2]$.
### 🔄 Hooks & Shape Alignment
Lucid supports:
- **Hooks** for gradient inspection or modification.
- **Shape broadcasting and matching** for non-conforming tensor shapes.
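Shape alignment for broadcast operations can be sketched in plain NumPy (illustrative, not Lucid's internals): the upstream gradient is summed over the axes that broadcasting introduced or stretched, so the gradient handed back to each input matches that input's original shape.

```python
import numpy as np

def unbroadcast(grad, shape):
    # Sum out leading axes that broadcasting added
    while grad.ndim > len(shape):
        grad = grad.sum(axis=0)
    # Sum over axes the input had as size 1 but the output stretched
    for axis, dim in enumerate(shape):
        if dim == 1 and grad.shape[axis] != 1:
            grad = grad.sum(axis=axis, keepdims=True)
    return grad

# Upstream gradient of a (1,3) + (2,1) add, broadcast to shape (2, 3):
g = np.ones((2, 3))
print(unbroadcast(g, (1, 3)))  # [[2. 2. 2.]]
print(unbroadcast(g, (2, 1)))  # [[3.] [3.]]
```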
## 🚀 Metal Acceleration (MLX Backend)
Lucid supports **Metal acceleration** on Apple Silicon devices using [MLX](https://github.com/ml-explore/mlx). This integration allows tensor operations, neural network layers, and gradient computations to run efficiently on the GPU, leveraging Apple’s unified memory and neural engine.
### 📋 Key Features
- Tensors with `device="gpu"` are allocated as `mlx.core.array`.
- Core mathematical operations, matrix multiplications, and backward passes use MLX APIs.
- No change in API: switching to GPU is as simple as `.to("gpu")` or passing `device="gpu"` to tensor constructors.
### 💡 Example 1: Basic Acceleration
```python
import lucid
x = lucid.randn(1024, 1024, device="gpu", requires_grad=True)
y = x @ x.T
z = y.sum()
z.backward()
print(x.grad.device) # 'gpu'
```
### 💡 Example 2: GPU-Based Model
```python
import lucid.nn as nn
import lucid.nn.functional as F
class TinyNet(nn.Module):
def __init__(self):
super().__init__()
self.fc = nn.Linear(100, 10)
def forward(self, x):
return F.relu(self.fc(x))
model = TinyNet().to("gpu")
data = lucid.randn(32, 100, device="gpu", requires_grad=True)
output = model(data)
loss = output.sum()
loss.backward()
```
When training models on GPU using MLX, **you must explicitly evaluate the loss tensor** after each forward pass to prevent the MLX computation graph from growing uncontrollably.
MLX defers evaluation until needed. If you don’t force evaluation (e.g. by calling `.eval()`), the internal graph may grow too deep, leading to performance degradation or memory errors.
### Recommended GPU Training Pattern:
```python
loss = model(input).sum()
loss.eval() # force evaluation on GPU
loss.backward()
```
This ensures that all prior GPU computations are flushed and evaluated **before** backward pass begins.
## 🧱 Neural Networks with `lucid.nn`
Lucid provides a modular PyTorch-style interface to build neural networks via `nn.Module`. Users define model classes by subclassing `nn.Module` and defining parameters and layers as attributes.
Each module automatically registers its parameters, supports device migration (`.to()`), and integrates with Lucid’s autodiff system.
### 🧰 Custom Module Definition
```python
import lucid.nn as nn
class MLP(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.fc1(x)
x = nn.functional.relu(x)
x = self.fc2(x)
return x
```
### 🧩 Parameter Registration
All parameters are registered automatically and can be accessed:
```python
model = MLP()
print(model.parameters())
```
### 🧭 Moving to GPU
```python
model = model.to("gpu")
```
This ensures all internal parameters are transferred to GPU memory.
## 🏋️‍♂️ Training & Evaluation
Lucid supports training neural networks using standard loops, customized optimizers, and tracking gradients over batches of data.
### ✅ Full Training Loop
```python
import lucid
from lucid.nn.functional import mse_loss
model = MLP().to("gpu")
optimizer = lucid.optim.SGD(model.parameters(), lr=0.01)
for epoch in range(100):
preds = model(x_train)
loss = mse_loss(preds, y_train)
loss.eval() # force evaluation
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"Epoch {epoch}, Loss: {loss.item()}")
```
### 🧪 Evaluation without Gradients
```python
with lucid.no_grad():
out = model(x_test)
```
Prevents gradient tracking and reduces memory usage.
## 📦 Loading Pretrained Weights
Lucid supports loading pretrained weights for models using the `lucid.weights` module,
which provides access to standard pretrained initializations.
```python
from lucid.models import lenet_5
from lucid.weights import LeNet_5_Weights
# Load LeNet-5 with pretrained weights
model = lenet_5(weights=LeNet_5_Weights.DEFAULT)
```
You can also initialize models without weights by passing `weights=None`.
## 🧬 Educational by Design
Lucid is not a black box. It’s built to be explored. Every class, every function, and every line is designed to be readable and hackable.
- Use it to build intuition for backpropagation.
- Modify internal operations to test custom autograd.
- Benchmark CPU vs GPU behavior on your own model.
- Debug layer by layer, shape by shape, gradient by gradient.
Whether you're building neural nets from scratch, inspecting gradient flow, or designing a new architecture — Lucid is your transparent playground.
## 🧠 Conclusion
Lucid serves as a powerful educational resource and a minimalist experimental sandbox. By exposing the internals of tensors, gradients, and models — and integrating GPU acceleration — it invites users to **see, touch, and understand** how deep learning truly works.
## 📜 Others
**Dependencies**:
| Library | Purpose |
| ------- | ------- |
| `numpy` | Core Tensor operations for CPU |
| `mlx` | Core Tensor operations for GPU (Apple Silicon) |
| `pandas`, `openml` | Dataset download and fetching |
| `matplotlib` | Various visualizations |
**Inspired By**:



| text/markdown | ChanLumerico | greensox284@gmail.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent",
"Operating System :: MacOS :: MacOS X"
] | [] | https://github.com/ChanLumerico/lucid | null | >=3.12 | [] | [] | [] | [
"numpy",
"pandas",
"openml",
"mlx; platform_system == \"Darwin\" and platform_machine == \"arm64\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-21T09:05:20.144875 | lucid_dl-2.13.8.tar.gz | 253,664 | 8f/69/3fbbf24d173f0d09b928c0bb8fa4c8532bb1b18a337ab4e22e641add8ce0/lucid_dl-2.13.8.tar.gz | source | sdist | null | false | cfab80792d825773f6aaba42fbab4ba5 | ac71c3b05b91f038b15c9559228f21b4967fea33bbcc3f272107945b0bc1250d | 8f693fbbf24d173f0d09b928c0bb8fa4c8532bb1b18a337ab4e22e641add8ce0 | null | [
"LICENSE"
] | 227 |
2.4 | MaaFw | 5.7.1 | An automation black-box testing framework based on image recognition | <!-- markdownlint-disable MD033 MD041 -->
<p align="center">
<img alt="LOGO" src="https://cdn.jsdelivr.net/gh/MaaAssistantArknights/design@main/logo/maa-logo_512x512.png" width="256" height="256" />
</p>
<div align="center">
# MaaFramework
<!-- prettier-ignore-start -->
<!-- markdownlint-disable-next-line MD036 -->
_✨ An automation black-box testing framework based on image recognition ✨_
<!-- prettier-ignore-end -->
</div>
<p align="center">
<img alt="C++" src="https://img.shields.io/badge/C++-20-%2300599C?logo=cplusplus">
<img alt="platform" src="https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20macOS%20%7C%20Android-blueviolet">
<br>
<img alt="license" src="https://img.shields.io/github/license/MaaXYZ/MaaFramework">
<img alt="activity" src="https://img.shields.io/github/commit-activity/m/MaaXYZ/MaaFramework?color=%23ff69b4">
<img alt="stars" src="https://img.shields.io/github/stars/MaaXYZ/MaaFramework?style=social">
<br>
<a href="https://pypi.org/project/MaaFw/" target="_blank"><img alt="pypi" src="https://img.shields.io/pypi/dm/maafw?logo=pypi&label=PyPI"></a>
<a href="https://www.nuget.org/packages/Maa.Framework.Runtimes" target="_blank"><img alt="nuget" src="https://img.shields.io/badge/NuGet-004880?logo=nuget"></a>
<a href="https://www.npmjs.com/package/@maaxyz/maa-node" target="_blank"><img alt="npm" src="https://img.shields.io/badge/npm-CB3837?logo=npm"></a>
<a href="https://pkg.go.dev/github.com/MaaXYZ/maa-framework-go/v3"><img alt="go reference" src="https://pkg.go.dev/badge/github.com/MaaXYZ/maa-framework-go/v3.svg" /></a>
<a href="https://crates.io/crates/maa-framework"><img alt="rust crate" src="https://img.shields.io/badge/Rust-crate-orange?logo=rust" /></a>
<a href="https://mirrorchyan.com/zh/projects?source=maafw-badge" target="_blank"><img alt="mirrorc" src="./docs/static/mirrorc-zh.svg"></a>
<br>
<a href="https://maafw.com/" target="_blank"><img alt="website" src="./docs/static/maafw.svg"></a>
<a href="https://deepwiki.com/MaaXYZ/MaaFramework" target="_blank"><img alt="deepwiki" src="https://deepwiki.com/badge.svg"></a>
</p>
<div align="center">
[English](./README_en.md) | [简体中文](./README.md)
</div>
## Introduction
**MaaFramework** is a next-generation automation black-box testing framework built on image recognition technology, completely rewritten by distilling the development experience of [MAA](https://github.com/MaaAssistantArknights/MaaAssistantArknights).
Low-code yet highly extensible, it aims to be a rich, leading, and practical open-source library that helps developers easily build better black-box testing programs and promotes their adoption.
## Get Started
> [!TIP]
> Visit our [website](https://maafw.com/) for a better documentation reading experience.
> Can't find the docs you need? Try asking the [AI](https://deepwiki.com/MaaXYZ/MaaFramework)!
- [Quick Start](docs/zh_cn/1.1-快速开始.md) & [Glossary](docs/zh_cn/1.2-术语解释.md)
- [Integration Guide](docs/zh_cn/2.1-集成文档.md) & [API](docs/zh_cn/2.2-集成接口一览.md)
- [Pipeline Low-Code Protocol](docs/zh_cn/3.1-任务流水线协议.md)
- [Project Interface (PI) Protocol](docs/zh_cn/3.3-ProjectInterfaceV2协议.md)
## Community Projects
### General-Purpose UIs
- [MFAAvalonia](https://github.com/SweetSmellFox/MFAAvalonia)     [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
A general-purpose GUI based on Avalonia. Powered by MaaFramework!
- [MFW-CFA](https://github.com/overflow65537/MFW-PyQt6)     [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
A general-purpose GUI based on PySide6. Powered by MaaFramework!
- [MXU](https://github.com/MistEO/MXU)     
A lightweight cross-platform general-purpose GUI based on Tauri 2 + React. Powered by MaaFramework!
- [MWU](https://github.com/ravizhan/MWU)     
A lightweight cross-platform general-purpose WebUI based on Vue + FastAPI. Powered by MaaFramework!
### Development Tools
- [MaaDebugger](https://github.com/MaaXYZ/MaaDebugger)     [](https://pypi.org/project/MaaDebugger/)
MaaFramework Pipeline debugger
- [maa-support-extension](https://github.com/neko-para/maa-support-extension)    [](https://marketplace.visualstudio.com/items?itemName=nekosu.maa-support)
MaaFramework VSCode extension
- [MFAToolsPlus](https://github.com/SweetSmellFox/MFAToolsPlus)    
A cross-platform development toolbox built on the Avalonia framework, providing convenient data capture and simulated-testing utilities
- [MaaPipelineEditor](https://github.com/kqcoxn/MaaPipelineEditor)     [](https://mpe.codax.site/stable/)
Read and build Pipelines visually; [feature-complete](https://github.com/kqcoxn/MaaPipelineEditor?tab=readme-ov-file#%E4%BA%AE%E7%82%B9), extremely lightweight and cross-platform, with progressive [local feature extensions](https://mpe.codax.site/docs/guide/server/deploy.html), seamlessly compatible with both old and new projects
- [MaaInspector](https://github.com/TanyaShue/MaaInspector)   
A simple, easy-to-use visual MaaFramework Pipeline editor based on vue-flow, integrating node preview, editing, and debugging
- [MaaMCP](https://github.com/MaaXYZ/MaaMCP)     [](https://pypi.org/project/maa-mcp)
An MCP server based on MaaFramework, giving AI assistants Android device and Windows desktop automation capabilities
- [MaaLogAnalyzer](https://github.com/Windsland52/MAALogAnalyzer)     [](https://maaloganalyzer.maafw.com/)
A MaaFramework user log analysis tool with task execution flow visualization and full-text search
### Applications
- [M9A](https://github.com/MaaXYZ/M9A)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge) [](https://1999.fan)
亿韭韭韭 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MSBA](https://github.com/overflow65537/MAA_SnowBreak)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
尘白禁区 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaYYs](https://github.com/TanyaShue/MaaYYs)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
阴阳师 assistant. Image recognition + simulated control: grind like a cyber hamster, automate your dailies, and free your hands! Powered by MaaFramework!
- [MPA](https://github.com/overflow65537/MAA_Punish)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
战双帕弥什 assistant. Image recognition + simulated control, hands-free! Powered by 玛丽的黑咖啡 2.0!
- [MRA](https://github.com/Saratoga-Official/MRA)     [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
战舰少女R assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaYuan](https://github.com/syoius/MaaYuan)     [](https://mirrorchyan.com/zh/projects?source=maafw-badge) [](https://maayuan.top)
代号鸢/如鸢 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [Maa-HBR](https://github.com/KarylDAZE/Maa-HBR)     [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
炽焰天穹/HBR assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaGF2Exilium](https://github.com/DarkLingYun/MaaGF2Exilium)     [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
少女前线 2: 追放 automation assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaXuexi](https://github.com/ravizhan/MaaXuexi)     
学习强国 automation assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MAA_MHXY_MG](https://github.com/gitlihang/Maa_MHXY_MG)     [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
梦幻西游手游 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaTOT](https://github.com/Coxwtwo/MaaTOT)    
未定事件簿 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaGumballs](https://github.com/KhazixW2/MaaGumballs)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
An assistant for 不思议迷宫 driven by image recognition and simulated-control technology. It frees your hands for one-click brick-smashing adventures, powered by MaaFramework.
- [MMleo](https://github.com/fictionalflaw/MMleo)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
偶像梦幻祭 2 assistant. Image recognition + simulated control, hands-free! Boosting your producer career! Powered by MaaFramework!
- [SLIMEIM_Maa](https://github.com/miaojiuqing/SLIMEIM_Maa)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
魔王与龙的建国谭 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [Maa_bbb](https://github.com/miaojiuqing/Maa_bbb)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
崩坏三 assistant. Image recognition + simulated control, hands-free! Supports both PC and emulator, powered by MaaFramework!
- [MAN](https://github.com/duorua/narutomobile)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge) [](https://naruto.natsuu.top)
火影忍者手游 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaGakumasu](https://github.com/SuperWaterGod/MaaGakumasu)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
学园偶像大师 assistant. Image recognition + simulated control + deep learning, hands-free! Powered by MaaFramework!
- [MaaStarResonance](https://github.com/233Official/MaaStarResonance)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge)
星痕共鸣 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MAG](https://github.com/Kazaorus/MAG)     
深空之眼 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MAAAE](https://github.com/NewWYoming/MAAAE)     
白荆回廊 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MBCCtools](https://github.com/quietlysnow/MBCCtools)    
无期迷途 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaEOV](https://github.com/Tigerisu/MaaEOV)    
异象回声 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MAA Star Resonance](https://github.com/26F-Studio/maa-star-resonance)    
星痕共鸣 assistant. Electron + text/image recognition + ADB simulated control, hands-free! Powered by MaaFramework and Quasar!
- [StellaSora-Auto-Helper](https://github.com/SodaCodeSave/StellaSora-Auto-Helper)    
星塔旅人 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaDuDuL](https://github.com/kqcoxn/MaaDuDuL)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge) [](https://mddl.codax.site)
嘟嘟脸恶作剧 assistant. Image recognition + simulated control, automatic face pinching, hands-free! Powered by MaaFramework!
- [MaaLYSK](https://github.com/Witty36/MaaLYSK)      [](https://mirrorchyan.com/zh/projects?rid=MaaLYSK) [](https://maalysk.top)
恋与深空 assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
- [MaaEnd](https://github.com/MaaEnd/MaaEnd)      [](https://mirrorchyan.com/zh/projects?source=maafw-badge) [](https://maaend.com)
终末地 assistant. Driven by MaaFramework and MXU, in active development!
- [MaaGFNeuralCloud](https://github.com/PinkMMF/MaaGFNeuralCloud)    
少女前线:云图计划 automation assistant. Image recognition + simulated control, hands-free! Powered by MaaFramework!
## Ecosystem Co-Building
MAA is planned to grow into a family of projects, rather than a single piece of software for Arknights.
If your project depends on MaaFramework, we welcome you to name it MaaXXX, MXA, MAX, and so on. Of course, this is permission rather than a requirement; you are also free to pick any other name unrelated to MAA. It is entirely up to you!
We also warmly welcome PRs that add your project to the community project list above!
## Statement & License
### Open-Source License
This project is open-sourced under the [`LGPL-3.0`](./LICENSE.md) license.
### Distribution Notes
This project supports GPU acceleration, which on Windows depends on [DirectML](https://learn.microsoft.com/en-us/windows/ai/directml/), an independent component provided by Microsoft. DirectML is not part of this project's open-source code and is not subject to LGPL-3.0. For convenience, we bundle an unmodified DirectML.dll with the installation package. If you do not need GPU acceleration, you can safely delete this DLL; the software's core functionality will still work normally.
### Disclaimer
#### Intended Use
This project aims to provide **automated black-box testing tools** for software development, covering legitimate technical scenarios such as image recognition and UI interaction simulation. Developers must ensure their usage complies with all applicable laws and regulations as well as the target software's terms of service.
#### Prohibited Misuse
It is prohibited to use this project for the following purposes (including but not limited to):
- Disrupting, bypassing, or interfering with the normal functionality of any software, game, or service (e.g. anti-cheat mechanisms or license verification systems).
- Developing or distributing tools that violate third-party terms of service (e.g. game cheats, hacks, or automation scripts).
- Any act that infringes others' legitimate rights or violates the law (e.g. data theft or cyber attacks).
#### Limitation of Liability
This project is provided "as is". The authors **assume no** direct, indirect, or consequential liability arising from:
- Users' violations of this statement or of laws and regulations.
- Damage caused by tools that third parties develop using this project (e.g. account bans or legal disputes).
- Any technical or financial loss caused by using this project.
#### User Obligations
By using this project, you agree to:
- Bear all risks of use yourself.
- Ensure your use case is lawful and properly authorized (e.g. permitted by the target software's vendor).
- Bear sole responsibility for any legal disputes caused by your actions and hold the project authors harmless.
## Development
_Note: you only need this section if you intend to develop MaaFramework itself. If you just want to build your own application on top of MaaFramework, see [Get Started](#即刻开始) instead._
- [Build Guide](docs/zh_cn/4.1-构建指南.md)
- [Interface Design](docs/zh_cn/4.2-标准化接口设计.md)
## Acknowledgements
### Open-Source Libraries
- [opencv](https://github.com/opencv/opencv)
Open Source Computer Vision Library
- [fastdeploy](https://github.com/PaddlePaddle/FastDeploy)
⚡️An Easy-to-use and Fast Deep Learning Model Deployment Toolkit for ☁️Cloud 📱Mobile and 📹Edge.
- [onnxruntime](https://github.com/microsoft/onnxruntime)
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
- [boost](https://www.boost.org/)
Boost provides free peer-reviewed portable C++ source libraries.
- [libzmq](https://github.com/zeromq/libzmq)
ZeroMQ core engine in C++, implements ZMTP/3.1
- [cppzmq](https://github.com/zeromq/cppzmq)
Header-only C++ binding for libzmq
- [meojson](https://github.com/MistEO/meojson)
✨ Next-gen C++ JSON/JSON5 Serialization Engine | Zero Dependency | Header-Only | Unleash JSON Potential
- [minitouch](https://github.com/DeviceFarmer/minitouch)
Minimal multitouch event producer for Android.
- [maatouch](https://github.com/MaaAssistantArknights/MaaTouch)
Android native implementation of minitouch input protocol
- [minicap](https://github.com/DeviceFarmer/minicap)
Stream real-time screen capture data out of Android devices.
- [zlib](https://github.com/madler/zlib)
A massively spiffy yet delicately unobtrusive compression library.
- [gzip-hpp](https://github.com/mapbox/gzip-hpp)
Gzip header-only C++ library
- [ViGEmClient](https://github.com/nefarius/ViGEmClient)
ViGEm Client SDK for feeder development.
- ~~[protobuf](https://github.com/protocolbuffers/protobuf)~~
~~Protocol Buffers - Google's data interchange format~~
- ~~[grpc](https://github.com/grpc/grpc)~~
~~The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)~~
- ~~[thrift](https://github.com/apache/thrift)~~
~~Apache Thrift~~
### Inspiration
- [MaaAssistantArknights](https://github.com/MaaAssistantArknights/MaaAssistantArknights)
A one-click tool for the daily tasks of Arknights, supporting all clients.
**MaaFramework draws on this project's ADB controller design ideas, but does not use any of its source code.**
- [ok-script](https://github.com/ok-oldking/ok-script)
A brand-new Python game automation framework (supporting Windows and emulators)
**MaaFramework draws on this project's Win32 controller design ideas, but does not use any of its source code.**
### Contributors
Thanks to the following developers for their contributions to MaaFramework:
[](https://github.com/MaaXYZ/MaaFramework/graphs/contributors)
## Community
Developers are welcome to join the official QQ group (595990173) to discuss integration and development practices. The group focuses on development topics only and does not provide day-to-day usage or customer support; to keep discussion quality high, members who stay persistently off-topic or break the rules may be removed.
## Sponsorship
<!-- markdownlint-disable MD045 -->
<a href="https://afdian.com/a/misteo">
<img width="200" src="https://pic1.afdiancdn.com/static/img/welcome/button-sponsorme.png">
</a>
| text/markdown | MaaXYZ | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"maaagentbinary",
"numpy",
"strenum"
] | [] | [] | [] | [
"Homepage, https://github.com/MaaXYZ/MaaFramework"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T09:05:18.681836 | maafw-5.7.1-py3-none-win_arm64.whl | 24,395,445 | 65/4b/352529b61ebccadc0dae1ae20f8037e8708f0d06d229d17d9bb8c3f78751/maafw-5.7.1-py3-none-win_arm64.whl | py3 | bdist_wheel | null | false | 49f0def876024785628fd95b41c0d5c9 | 2dcb6eb0b1e97227cced7dfcf2854e933adc1a5096a116688dbd46dffccaf0f9 | 654b352529b61ebccadc0dae1ae20f8037e8708f0d06d229d17d9bb8c3f78751 | null | [
"LICENSE.md"
] | 0 |
2.4 | probekit | 0.3.1 | A versatile kit for training and using linear probes on neural network activations. | # Probes
A lightweight, modular library for training linear probes and steering vectors on neural network activations.
## Installation
`probekit` is not a minimal dependency package. It pulls in heavy ML dependencies, including `torch`, `scikit-learn`, and `sae-lens`.
Install it in an environment where large binary wheels and ML runtime deps are expected.
```bash
# PyPI install
pip install probekit
# Local editable install (from a cloned repo)
pip install -e .
```
## Core Design (V2)
This library separates **Semantics** (the probe model) from **Fitting** (how it's learned).
### 1. The Models: `LinearProbe` and `ProbeCollection`
- **`LinearProbe`** (`probekit.core.probe`): A container for a single probe (+ normalization stats).
- **`ProbeCollection`** (`probekit.core.collection`): A container for a **batch** of probes.
- `to_tensor()`: Stacks weights into `[B, D]` and biases into `[B]`.
- `best_layer(metric)`: Finds the probe with the best validation accuracy.
### 2. The Fitters
Functional solvers in `probekit.fitters` take training data and return a `LinearProbe` (or `ProbeCollection`).
- `fit_logistic`: Standard L2-regularized Logistic Regression.
- `fit_elastic_net`: ElasticNet (L1 + L2), useful for sparse features (SAEs, Neurons).
- `fit_dim`: Difference-in-Means (Class 1 Mean - Class 0 Mean).
Method choice: use `fit_dim` for strictly linear, overfitting-resistant separation and use `fit_logistic` for standard L2-regularized classification.
#### Batched GPU Fitters
Optimized PyTorch implementations in `probekit.fitters.batch` handle 3D inputs `[B, N, D]` efficiently on GPU:
- `fit_logistic_batch`: Batched IRLS/Newton solver with auto-switch between dense Newton and memory-safe Newton-CG.
- `fit_dim_batch`: Vectorized DiM with median thresholding.
- `fit_elastic_net_path`: Efficiently fits a regularization path (multiple alphas) using warm-starting.
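As a concrete illustration of what a batched DiM fitter computes, the vectorized version reduces to two per-batch class means plus a median threshold. This is a NumPy sketch of the underlying math only, not probekit's actual implementation, and `fit_dim_batch_sketch` is a hypothetical name:

```python
import numpy as np

def fit_dim_batch_sketch(X, y):
    """Vectorized Difference-in-Means over a batch.

    X: [B, N, D] activations, y: [B, N] binary labels.
    Returns weights [B, D] and biases [B] so that score = X @ w + b.
    """
    y = np.broadcast_to(y, X.shape[:2])[..., None]                    # [B, N, 1]
    mean1 = (X * y).sum(axis=1) / np.maximum(y.sum(axis=1), 1)        # class-1 mean [B, D]
    mean0 = (X * (1 - y)).sum(axis=1) / np.maximum((1 - y).sum(axis=1), 1)
    w = mean1 - mean0                                                 # DiM direction
    scores = np.einsum("bnd,bd->bn", X, w)                            # [B, N]
    b = -np.median(scores, axis=1)                                    # median thresholding
    return w, b

# Toy check: two well-separated clusters per batch element.
rng = np.random.default_rng(0)
B, N, D = 2, 100, 4
y = rng.integers(0, 2, size=(B, N))
X = rng.normal(size=(B, N, D)) + 3.0 * y[..., None]
w, b = fit_dim_batch_sketch(X, y)
pred = (np.einsum("bnd,bd->bn", X, w) + b[:, None]) > 0
print((pred == y.astype(bool)).mean())  # prints a value close to 1.0
```

Because DiM is a closed-form estimator (no iterative optimization), this whole batch fits in a handful of vectorized ops, which is why it resists overfitting on small probe datasets.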
## Quick Start
The high-level API supports explicit backend control (`backend="torch"` / `backend="sklearn"`),
and in `backend="auto"` mode it prefers torch when inputs are already torch tensors.
```python
from probekit import sae_probe, dim_probe
# 1. Single Probe (X: [N, D], y: [N])
probe = sae_probe(X_2d, y_1d)
# 2. Batched Probes (X: [B, N, D], y: [B, N] or [N])
# Uses torch batch fitters and returns a ProbeCollection
probes = sae_probe(X_3d, y)
weights, biases = probes.to_tensor() # [B, D], [B]
# 3. Inference with a trained single probe
scores = probe.predict_score(X_2d) # raw margins/logits
pred = probe.predict(X_2d, threshold=0.0) # binary predictions
# 4. Inference with a trained probe collection
batch_scores = probes.predict_score(X_3d) # [B, N]
batch_pred = probes.predict(X_3d, threshold=0.0) # [B, N]
# 5. Force backend explicitly
probe_torch = sae_probe(X_2d_torch, y_1d_torch, backend="torch")
probe_cpu = sae_probe(X_3d_numpy, y_2d_numpy, backend="sklearn")
```
## Copyable Skill Snippet
```md
# Probekit Quick Skill
Goal: Train and run linear probes on activations.
## Core Imports
from probekit import sae_probe, logistic_probe, dim_probe
## Train
# x: [N, D], y: [N]
probe = sae_probe(x, y)
## Inference
scores = probe.predict_score(x)
pred = probe.predict(x, threshold=0.0)
## Batched Training
# xb: [B, N, D], yb: [B, N] or broadcast-compatible
probes = sae_probe(xb, yb)
weights, biases = probes.to_tensor()
batch_scores = probes.predict_score(xb)
batch_pred = probes.predict(xb, threshold=0.0)
## Method choice
# Use dim_probe(...) for strictly linear, overfitting-resistant separation.
# Use logistic_probe(...) for standard L2-regularized classification.
```
## Steering Vectors
You can build steering vectors for individual probes or entire collections:
```python
from probekit import build_steering_vector, build_steering_vectors
# Single
vec = build_steering_vector(probe, sae_model, layer=10)
# Batched (Maps layers to probes)
vecs = build_steering_vectors(probe_collection, sae_model, layers=[8, 9, 10])
```
## Structure
- `probekit/core/`: `LinearProbe` and `ProbeCollection` definitions.
- `probekit/fitters/`:
- `logistic.py`, `elastic.py`, `dim.py`: Single-probe (CPU/sklearn) fitters.
- `batch/`: Optimized GPU-batched fitters (IRLS, ISTA, DiM).
- `probekit/api.py`: High-level aliases and dimension routing.
- `probekit/steering/`: Tools for building steering vectors.
| text/markdown | Probekit Contributors | null | null | null | MIT | interpretability, safety, llm, probes | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.0.0",
"scikit-learn>=1.3.0",
"numpy>=1.24.0",
"tqdm>=4.65.0",
"sae-lens>=3.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"ruff==0.1.6; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ZuiderveldTimJ/probekit",
"Repository, https://github.com/ZuiderveldTimJ/probekit",
"Issues, https://github.com/ZuiderveldTimJ/probekit/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:04:32.966967 | probekit-0.3.1.tar.gz | 30,630 | 24/2b/116fb9dc1b9df0c3fe59857153a29ac59029dbbdc33b0302c9a834d62bfe/probekit-0.3.1.tar.gz | source | sdist | null | false | 35de22c3e7c056a206ea910497e45425 | cbabf950c49cf1f97139605e9c538fad36554b3e943cc99c9b6341df31a20113 | 242b116fb9dc1b9df0c3fe59857153a29ac59029dbbdc33b0302c9a834d62bfe | null | [
"LICENSE"
] | 233 |
2.4 | marshmallow-recipe | 0.0.79a1 | Bake marshmallow schemas based on dataclasses | # marshmallow-recipe
[](https://badge.fury.io/py/marshmallow-recipe)
[](https://pypi.org/project/marshmallow-recipe/)
Library for convenient serialization/deserialization of Python dataclasses using marshmallow.
Originally developed as an abstraction layer over marshmallow to facilitate migration from v2 to v3 for codebases with extensive dataclass usage,
this library has evolved into a powerful tool offering a more concise approach to serialization.
It can be seamlessly integrated into any codebase, providing the following benefits:
1. Automatic schema generation: Marshmallow schemas are generated and cached automatically, while still being accessible when needed
2. Comprehensive Generics support with full nesting and inheritance capabilities
3. Nested cyclic references support
4. Flexible field configuration through `dataclass.field(meta)` or `Annotated[T, meta]`
5. Customizable case formatting support, including built-in `camelCase` and `CamelCase`, via dataclass decorators
6. Configurable None value handling through dataclass decorators
7. PATCH operation support via mr.MISSING value
## Supported Types
**Simple types:** `str`, `bool`, `int`, `float`, `decimal.Decimal`, `datetime.datetime`, `datetime.date`, `datetime.time`, `uuid.UUID`, `enum.StrEnum`, `enum.IntEnum`, `typing.Any`
**Collections:** `list[T]`, `set[T]`, `frozenset[T]`, `tuple[T, ...]`, `dict[K, V]`, `Sequence[T]`, `Set[T]`, `Mapping[K, V]`
**Advanced:** `T | None`, `Optional[T]`, `Generic[T]`, `Annotated[T, ...]`, `NewType('Name', T)`
**Features:** Nested dataclasses, cyclic references, generics with full inheritance
## Examples
### Base scenario
```python
import dataclasses
import datetime
import uuid
import marshmallow_recipe as mr
@dataclasses.dataclass(frozen=True)
class Entity:
id: uuid.UUID
created_at: datetime.datetime
comment: str | None
entity = Entity(
id=uuid.uuid4(),
created_at=datetime.datetime.now(tz=datetime.UTC),
comment=None,
)
# dumps the dataclass instance to a dict
serialized = mr.dump(entity)
# deserializes a dict to the dataclass instance
loaded = mr.load(Entity, serialized)
assert loaded == entity
# provides a generated marshmallow schema for the dataclass
marshmallow_schema = mr.schema(Entity)
```
### Configuration
```python
import dataclasses
import datetime
import decimal
import marshmallow_recipe as mr
from typing import Annotated
@dataclasses.dataclass(frozen=True)
class ConfiguredFields:
with_custom_name: str = dataclasses.field(metadata=mr.meta(name="alias"))
strip_whitespaces: str = dataclasses.field(metadata=mr.str_meta(strip_whitespaces=True))
with_post_load: str = dataclasses.field(metadata=mr.str_meta(post_load=lambda x: x.replace("-", "")))
with_validation: decimal.Decimal = dataclasses.field(metadata=mr.meta(validate=lambda x: x != 0))
decimal_two_places_by_default: decimal.Decimal # Note: 2 decimal places by default
decimal_any_places: decimal.Decimal = dataclasses.field(metadata=mr.decimal_metadata(places=None))
decimal_three_places: decimal.Decimal = dataclasses.field(metadata=mr.decimal_metadata(places=3))
decimal_with_rounding: decimal.Decimal = dataclasses.field(metadata=mr.decimal_metadata(places=2, rounding=decimal.ROUND_UP))
nullable_with_custom_format: datetime.date | None = dataclasses.field(metadata=mr.datetime_meta(format="%Y%m%d"), default=None)
with_default_factory: str = dataclasses.field(default_factory=lambda: "42")
@dataclasses.dataclass(frozen=True)
class AnnotatedFields:
with_post_load: Annotated[str, mr.str_meta(post_load=lambda x: x.replace("-", ""))]
decimal_three_places: Annotated[decimal.Decimal, mr.decimal_metadata(places=3)]
@dataclasses.dataclass(frozen=True)
class AnnotatedListItem:
nullable_value: list[Annotated[str, mr.str_meta(strip_whitespaces=True)]] | None
value_with_nullable_item: list[Annotated[str | None, mr.str_meta(strip_whitespaces=True)]]
@dataclasses.dataclass(frozen=True)
@mr.options(none_value_handling=mr.NoneValueHandling.INCLUDE)
class NoneValueFieldIncluded:
nullable_value: str | None
@dataclasses.dataclass(frozen=True)
@mr.options(none_value_handling=mr.NoneValueHandling.IGNORE)
class NoneValueFieldExcluded:
nullable_value: str | None
@dataclasses.dataclass(frozen=True)
@mr.options(naming_case=mr.CAPITAL_CAMEL_CASE)
class UpperCamelCaseExcluded:
naming_case_applied: str # serialized to `NamingCaseApplied`
naming_case_ignored: str = dataclasses.field(metadata=mr.meta(name="alias")) # serialized to `alias`
@dataclasses.dataclass(frozen=True)
@mr.options(naming_case=mr.CAMEL_CASE)
class LowerCamelCaseExcluded:
naming_case_applied: str # serialized to `namingCaseApplied`
@dataclasses.dataclass(frozen=True, slots=True, kw_only=True)
class DataClass:
str_field: str
data = dict(StrField="foobar")
loaded = mr.load(DataClass, data, naming_case=mr.CAPITAL_CAMEL_CASE)
dumped = mr.dump(loaded, naming_case=mr.CAPITAL_CAMEL_CASE)
```
### Update API
```python
import decimal
import dataclasses
import marshmallow_recipe as mr
@dataclasses.dataclass(frozen=True)
@mr.options(none_value_handling=mr.NoneValueHandling.INCLUDE)
class CompanyUpdateData:
name: str = mr.MISSING
annual_turnover: decimal.Decimal | None = mr.MISSING
company_update_data = CompanyUpdateData(name="updated name")
dumped = mr.dump(company_update_data)
assert dumped == {"name": "updated name"} # Note: no "annual_turnover" here
loaded = mr.load(CompanyUpdateData, {"name": "updated name"})
assert loaded.name == "updated name"
assert loaded.annual_turnover is mr.MISSING
loaded = mr.load(CompanyUpdateData, {"annual_turnover": None})
assert loaded.name is mr.MISSING
assert loaded.annual_turnover is None
```
### Generics
Everything works automatically, with one exception: dumping a generic dataclass declared with `frozen=True` and/or `slots=True` requires the subscripted generic type to be passed explicitly as the first `cls` argument of the `dump` and `dump_many` methods.
```python
import dataclasses
from typing import Generic, TypeVar
import marshmallow_recipe as mr
T = TypeVar("T")
@dataclasses.dataclass()
class RegularGeneric(Generic[T]):
value: T
mr.dump(RegularGeneric[int](value=123)) # it works without explicit cls specification
@dataclasses.dataclass(slots=True)
class SlotsGeneric(Generic[T]):
value: T
mr.dump(SlotsGeneric[int], SlotsGeneric[int](value=123)) # cls required for slots=True generic
@dataclasses.dataclass(frozen=True)
class FrozenGeneric(Generic[T]):
value: T
mr.dump(FrozenGeneric[int], FrozenGeneric[int](value=123)) # cls required for frozen=True generic
@dataclasses.dataclass(slots=True, frozen=True)
class SlotsFrozenNonGeneric(FrozenGeneric[int]):
pass
mr.dump(SlotsFrozenNonGeneric(value=123)) # cls not required for non-generic
```
## More Examples
The [examples/](https://github.com/anna-money/marshmallow-recipe/tree/main/examples) directory contains comprehensive examples covering all library features:
- **[01_basic_usage.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/01_basic_usage.md)** - Basic types, load/dump, schema, NewType
- **[02_nested_and_collections.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/02_nested_and_collections.md)** - Nested dataclasses, collections, collections.abc types
- **[03_field_customization.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/03_field_customization.md)** - Custom field names, string transforms, decimal precision, datetime formats
- **[04_validation.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/04_validation.md)** - Field validation, regex, mr.validate(), collection item validation
- **[05_naming_case_conversion.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/05_naming_case_conversion.md)** - camelCase, PascalCase, UPPER_SNAKE_CASE conversion
- **[06_patch_operations.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/06_patch_operations.md)** - PATCH operations with mr.MISSING
- **[07_generics.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/07_generics.md)** - Generic[T] types
- **[08_global_overrides.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/08_global_overrides.md)** - Runtime parameter overrides (naming_case, none_value_handling, decimal_places)
- **[09_per_dataclass_overrides.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/09_per_dataclass_overrides.md)** - Per-dataclass overrides with @mr.options decorator
- **[10_cyclic_references.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/10_cyclic_references.md)** - Cyclic and self-referencing structures
- **[11_pre_load_hooks.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/11_pre_load_hooks.md)** - @mr.pre_load hooks, add_pre_load()
- **[12_validation_errors.md](https://github.com/anna-money/marshmallow-recipe/blob/main/examples/12_validation_errors.md)** - get_validation_field_errors(), error handling
| text/markdown | null | Yury Pliner <yury.pliner@gmail.com> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust",
"Operating System :: OS Independent",
"Development Status :: 5 - Production/Stable"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"marshmallow<4,>=2.20.5",
"pytest==8.3.5; extra == \"dev\"",
"ruff==0.8.4; extra == \"dev\"",
"pyright==1.1.407; extra == \"dev\"",
"setuptools; extra == \"dev\"",
"maturin<2.0,>=1.9; extra == \"dev\"",
"pyperf>=2.7; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/anna-money/marshmallow-recipe"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T09:04:29.143134 | marshmallow_recipe-0.0.79a1-cp314-cp314-manylinux_2_28_x86_64.whl | 415,741 | 2a/4e/45f1705ce03cc1d4b5bc16f5dd345c86b1e09b4c16c5986e7d092a2d5ba7/marshmallow_recipe-0.0.79a1-cp314-cp314-manylinux_2_28_x86_64.whl | cp314 | bdist_wheel | null | false | 77f64a07e68825bf74bbb2386a56d293 | 64231c914486cf26590d6bee0ddc8f8074e006263db568d146f428aa8b80138e | 2a4e45f1705ce03cc1d4b5bc16f5dd345c86b1e09b4c16c5986e7d092a2d5ba7 | null | [
"LICENSE"
] | 992 |
2.4 | stateprime | 1.0.0 | Local-first, production-ready AI agent tracing SDK with Ed25519 signing and PII redaction | # StatePrime SDK v1.0
**Verifiable Tracing for AI Agents — 100% Local, Open-Source, Production-Ready**
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](#security)
## What is StatePrime?
StatePrime is a lightweight, open-source SDK for tracing AI agent decision-making. It runs **100% locally** by default — no cloud dependency, no data ever leaves your machine. Perfect for building verifiable, auditable agent workflows.
**Core Value**: See exactly what your agents did, why they did it, and prove it with cryptographic signatures.
### One-Line Integration
```python
from stateprime import StatePrime

sp = StatePrime()

@sp.traceable(agent_id="researcher")
def research(topic: str):
    return {"findings": f"Research on {topic}"}

trace, result = research("AI agents in 2026")
print(sp.replay(trace['id']))  # See full execution with signatures
```
## Key Features
- ✅ **100% Local** — All traces stored in SQLite on your machine. Zero cloud calls.
- ✅ **Single-File SDK** — Copy `stateprime.py` into your project. No complex dependencies.
- ✅ **One-Line Integration** — Add `@sp.traceable` to any function (sync or async).
- ✅ **Cryptographic Signing** — Ed25519 signatures on every step for non-repudiation.
- ✅ **Multi-Agent Workflows** — Link agents with parent-child trace relationships.
- ✅ **PII Redaction** — Auto-detect and mask emails, phone numbers, credit cards, SSNs.
- ✅ **Export Options** — Export traces as JSON, CSV, or Markdown for compliance.
- ✅ **Performance Tracking** — Latency per step, total execution time.
- ✅ **Production Ready** — Error handling, type hints, comprehensive comments.
## Security & Privacy
**StatePrime is designed with privacy as a core principle:**
- 🔒 **Local-Only**: By default, no network calls. All traces stored in SQLite database locally.
- 🔐 **Cryptographic**: Ed25519 signatures prove traces haven't been tampered with.
- 🚫 **No Phone-Home**: Zero telemetry, zero cloud dependency.
- 🎯 **Minimal Dependencies**: Only `cryptography` (well-maintained, open-source).
**Note on Cloud Mode**: Future versions may support optional cloud sync (not implemented yet). When enabled, you will have full control over whether data is transmitted.
## Quick Start (3 minutes)
### 1. Install
**Option A: Copy-paste (fastest)**
```bash
# Just copy stateprime.py into your project
cp stateprime.py /path/to/your/project/
```
**Option B: pip install (coming soon)**
```bash
pip install stateprime
```
### 2. Initialize
```python
from stateprime import StatePrime
# Create SDK instance (stores traces in traces.db)
sp = StatePrime()
```
### 3. Trace a Function
```python
@sp.traceable(agent_id="my_agent", intent="Analyze data", reasoning="Step 1 of workflow")
def analyze(data: dict) -> dict:
    """Your function here."""
    return {"analysis": "..."}

# Call it
trace, result = analyze({"value": 42})

# See what happened
print(sp.replay(trace['id']))
```
### 4. Export Traces
```python
# Export for compliance
json_export = sp.export_trace(trace['id'], format="json")
csv_export = sp.export_trace(trace['id'], format="csv")
markdown_export = sp.export_trace(trace['id'], format="markdown")
# PII redaction
clean_data = sp.redact_pii({"email": "user@example.com"}) # → {"email": "[EMAIL]"}
```
## Usage Examples
### Single-Agent Workflow
```python
from stateprime import StatePrime

sp = StatePrime()

@sp.traceable(agent_id="researcher")
def research(topic: str):
    # Your code here
    return {"findings": f"Researched {topic}"}

@sp.traceable(agent_id="analyzer")
def analyze(findings: dict):
    # Your code here
    return {"analysis": f"Analysis: {findings}"}

# Execute
trace1, res1 = research("AI agents")
trace2, res2 = analyze(res1)

# View execution chain
print(sp.replay(trace2['id']))
```
### Multi-Agent Workflow (Parent-Child Linkage)
```python
import time

@sp.traceable(agent_id="researcher")
def research(topic: str):
    return {"findings": f"Research: {topic}"}

# Execute the researcher first
trace1, res1 = research("AI")

# Record the executor step manually; save_trace() returns the new trace's ID,
# and parent_id links it to the researcher's trace
executor_trace_id = sp.save_trace({
    'tool': 'execute',
    'agent_id': 'executor',
    'intent': 'Execute findings',
    'reasoning': 'Next in workflow',
    'input': res1,
    'output': {},
    'timestamp': time.time(),
    'latency_ms': 0
}, parent_id=trace1['id'])

# View linked chain
print(sp.replay(executor_trace_id))
```
### LangGraph Integration
```python
from stateprime import StatePrime, langgraph_node_wrapper
from langgraph.graph import StateGraph

sp = StatePrime()

def my_node(state):
    # Your LangGraph node logic
    return {"result": "..."}

# Wrap the node
traced_node = langgraph_node_wrapper(sp, my_node, agent_id="graph_node")

# Use in your graph
graph = StateGraph(...)
graph.add_node("my_node", traced_node)
```
### CrewAI Integration
```python
from stateprime import StatePrime, CrewTracedTask
from crewai import Agent, Crew

sp = StatePrime()
agent = Agent(role="Researcher", ...)

traced_task = CrewTracedTask(
    sp,
    name="research_task",
    description="Research AI agents",
    agent=agent,
    expected_output="Research report"
)

crew = Crew(agents=[agent], tasks=[traced_task.get_task()])
result = crew.kickoff()

# Trace the result
traced_task.trace_result(result)
```
### AutoGen Integration
```python
from stateprime import StatePrime, AutoGenTracedAssistant

sp = StatePrime()
assistant = AutoGenTracedAssistant(
    sp,
    name="assistant",
    system_message="You are a helpful assistant.",
    agent_id="autogen_assistant"
)
agent = assistant.get_agent()

# Use your agent...
message = "What is AI?"
response = agent.get_response(message)

# Trace the message
assistant.trace_message(message, response)
```
### Async Functions
```python
import asyncio

sp = StatePrime()

@sp.traceable(agent_id="async_agent")
async def fetch_data(url: str):
    # Async code here
    return {"data": "..."}

# Use it
async def main():
    trace, result = await fetch_data("https://api.example.com")
    print(sp.replay(trace['id']))

asyncio.run(main())
```
### PII Redaction
```python
sp = StatePrime()

# Redact sensitive data
sensitive = {
    "email": "user@example.com",
    "phone": "555-123-4567",
    "ssn": "123-45-6789",
    "card": "4532-1234-5678-9123",
    "text": "Contact: john@example.com"
}

clean = sp.redact_pii(sensitive)
# Result: {
#     "email": "[EMAIL]",
#     "phone": "[PHONE]",
#     "ssn": "[SSN]",
#     "card": "[CC]",
#     "text": "Contact: [EMAIL]"
# }
```
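The behavior shown above can be approximated with a few regular expressions. The patterns and recursion below are an illustrative sketch, not StatePrime's actual implementation, and the placeholder names simply mirror the documented output:

```python
import re

# Illustrative patterns only -- StatePrime's real detection rules may differ.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # emails
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),     # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # SSNs
    (re.compile(r"\b(?:\d{4}-){3}\d{4}\b"), "[CC]"),       # dashed credit card numbers
]

def redact(value):
    """Recursively replace PII substrings in strings, dicts, and lists."""
    if isinstance(value, str):
        for pattern, placeholder in PII_PATTERNS:
            value = pattern.sub(placeholder, value)
        return value
    if isinstance(value, dict):
        return {k: redact(v) for k, v in value.items()}
    if isinstance(value, list):
        return [redact(v) for v in value]
    return value
```

Pattern order matters: the phone pattern must run before the SSN pattern would otherwise partially match a phone number, and word boundaries keep card numbers from being mistaken for phones.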
### Export & Compliance
```python
sp = StatePrime()
# ... run your traces ...
# Export full trace chain
trace_id = 1
# JSON (machine-readable)
json_data = sp.export_trace(trace_id, format="json")
# CSV (spreadsheet-ready)
csv_data = sp.export_trace(trace_id, format="csv")
# Markdown (human-readable)
md_data = sp.export_trace(trace_id, format="markdown")
```
## API Reference
### Core Class: `StatePrime`
#### `__init__(db_path="traces.db", local_only=True)`
Initialize SDK. `local_only=True` enforces local-only mode (recommended for security).
#### `@traceable(agent_id="default", intent="", reasoning="")`
Decorator to trace function execution. Returns `(trace_dict, result)`.
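A decorator that returns `(trace_dict, result)` from each wrapped call can be sketched as follows. This is illustrative only: StatePrime's real decorator also signs the trace and persists it to SQLite, which is omitted here:

```python
import functools
import time

def traceable(agent_id="default", intent="", reasoning=""):
    """Wrap a function so each call yields (trace_dict, result)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            trace = {
                "agent_id": agent_id,
                "intent": intent,
                "reasoning": reasoning,
                "tool": func.__name__,
                "latency_ms": (time.perf_counter() - start) * 1000,
            }
            return trace, result
        return wrapper
    return decorator

@traceable(agent_id="demo")
def add(a, b):
    return a + b

trace, result = add(2, 3)
print(trace["tool"], result)
```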
#### `save_trace(trace_dict, parent_id=None) -> int`
Save trace to database. Returns trace_id. Pass `parent_id` to link to parent trace.
#### `replay(trace_id) -> str`
Generate human-readable execution replay. Shows full chain, latencies, signatures.
#### `get_trace_chain(trace_id) -> List[Trace]`
Get list of traces from root to specified trace. Useful for custom visualization.
#### `export_trace(trace_id, format="json") -> str`
Export trace chain as JSON, CSV, or Markdown.
#### `redact_pii(obj) -> Any`
Recursively redact PII from object. Handles emails, phones, SSNs, credit cards.
#### `get_stats() -> Dict`
Get performance stats: total traces, total latency, average latency, unique agents.
### Framework Adapters
#### `langgraph_node_wrapper(sp, node_func, agent_id)`
Wrap LangGraph node to trace execution.
#### `CrewTracedTask(sp, name, description, agent, expected_output)`
Wrapper for CrewAI Task with tracing.
#### `AutoGenTracedAssistant(sp, name, system_message, agent_id)`
Wrapper for AutoGen ConversableAgent with tracing.
## Performance
StatePrime is designed to be lightweight:
- **Decorator overhead**: <1ms per traced function (mainly serialization + signing)
- **Database**: SQLite with indexed queries (millisecond lookups)
- **Memory**: Minimal — no in-memory caching, all traces in DB
- **Storage**: ~1KB per trace in SQLite (compressed on disk)
**Example**: Tracing a 10-step workflow adds ~10ms overhead total.
## Database Schema
StatePrime uses SQLite with the following schema:
```sql
CREATE TABLE traces (
    id INTEGER PRIMARY KEY,          -- Auto-increment trace ID
    parent_id INTEGER DEFAULT NULL,  -- Links to parent trace (multi-agent)
    agent_id TEXT NOT NULL,          -- Agent executing this step
    intent TEXT,                     -- Agent's goal
    reasoning TEXT,                  -- Why it did this
    tool TEXT NOT NULL,              -- Function/tool name
    input_data TEXT,                 -- JSON input
    output_data TEXT,                -- JSON output
    timestamp REAL NOT NULL,         -- Unix timestamp
    signature TEXT NOT NULL,         -- Ed25519 (hex-encoded)
    latency_ms REAL NOT NULL,        -- Execution time
    created_at TIMESTAMP             -- DB insertion time
);
```
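To see how this schema supports parent-child chains, the stdlib-only sketch below creates the table in an in-memory database, inserts two linked rows, and walks `parent_id` links from a child back to the root (values are placeholders, not real signatures):

```python
import json
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE traces (
        id INTEGER PRIMARY KEY, parent_id INTEGER DEFAULT NULL,
        agent_id TEXT NOT NULL, intent TEXT, reasoning TEXT,
        tool TEXT NOT NULL, input_data TEXT, output_data TEXT,
        timestamp REAL NOT NULL, signature TEXT NOT NULL,
        latency_ms REAL NOT NULL, created_at TIMESTAMP
    )
""")

def insert_trace(parent_id, agent_id, tool):
    cur = conn.execute(
        "INSERT INTO traces (parent_id, agent_id, tool, input_data, output_data,"
        " timestamp, signature, latency_ms) VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (parent_id, agent_id, tool, json.dumps({}), json.dumps({}),
         time.time(), "placeholder-signature", 0.5),
    )
    return cur.lastrowid

root_id = insert_trace(None, "researcher", "research")
child_id = insert_trace(root_id, "executor", "execute")

def trace_chain(trace_id):
    """Walk parent_id links from the given trace back to the root."""
    chain = []
    while trace_id is not None:
        row = conn.execute(
            "SELECT id, parent_id, agent_id FROM traces WHERE id = ?", (trace_id,)
        ).fetchone()
        chain.append(row)
        trace_id = row[1]
    return list(reversed(chain))

print([row[2] for row in trace_chain(child_id)])  # agents in root-to-leaf order
```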
No authentication or encryption is built-in (it's local). For sensitive data, use:
- Disk-level encryption (e.g., LUKS on Linux)
- File permissions (readable by owner only)
- PII redaction via `redact_pii()`
## Roadmap
### v1.0 (Current)
- ✅ Core tracing SDK
- ✅ Ed25519 signing
- ✅ SQLite storage
- ✅ Export (JSON/CSV/Markdown)
- ✅ PII redaction
- ✅ Framework adapters (LangGraph, CrewAI, AutoGen)
### v1.1 (Planned)
- 🔄 Visualization (web UI for trace graphs)
- 🔄 Advanced query API (filter by agent, date, etc.)
- 🔄 Batch operations (export multiple traces)
- 🔄 Import from other tools (LangSmith, etc.)
### v2.0 (Future)
- 🔄 Optional cloud sync (user-controlled, opt-in)
- 🔄 Distributed tracing (multi-machine workflows)
- 🔄 Real-time streaming (websocket support)
- 🔄 OpenTelemetry integration
## Demo & Community
- **Try it online**: [StatePrime Demo on Hugging Face Spaces](https://huggingface.co/spaces) (preview only)
- **GitHub**: [stateprime/stateprime](https://github.com) (coming soon)
- **Discussions**: GitHub Discussions (coming soon)
- **Issues**: GitHub Issues (coming soon)
## License
MIT License — Use freely in commercial and personal projects.
## Contributing
We welcome contributions! (Repo coming soon on GitHub)
Guidelines:
- Keep single-file philosophy (or ensure easy integration)
- Maintain local-first principle
- Add tests for new features
- Document with docstrings and examples
## FAQ
**Q: Does StatePrime send data to the cloud?**
A: No. By default, everything runs locally in SQLite. No cloud calls, no telemetry.
**Q: Why Ed25519 signatures?**
A: Non-repudiation — once you trace something, you can't deny you did it. Great for compliance.
**Q: How is this different from LangSmith/Helicone?**
A: StatePrime is local-first and open-source. LangSmith/Helicone are cloud services. Use StatePrime for privacy, them for hosted features.
**Q: Can I use this in production?**
A: Yes! It's designed for production. Use PII redaction for sensitive data.
**Q: What if I have 1000s of traces?**
A: SQLite handles millions of records fine. Consider archiving old traces to separate databases.
**Q: How do I delete traces?**
A: Traces are immutable (crypto-signed). Delete the entire `traces.db` file to start fresh.
## Support
- **Questions**: Open an issue on GitHub (repo coming soon)
- **Bugs**: GitHub Issues with minimal reproduction
- **Features**: GitHub Discussions
## Citation
If you use StatePrime in research or production, please cite:
```bibtex
@software{stateprime2026,
title={StatePrime: Verifiable Tracing for AI Agents},
author={StatePrime Contributors},
year={2026},
url={https://github.com/stateprime/stateprime}
}
```
---
**Get Started Now**: Copy `stateprime.py` into your project and add `@sp.traceable` to your functions! 🚀
| text/markdown | StatePrime Team | StatePrime Team <support@stateprime.dev> | null | null | MIT | AI, tracing, agents, LLM, observability, cryptography, OpenTelemetry | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | https://github.com/stateprime/stateprime | null | >=3.8 | [] | [] | [] | [
"cryptography>=41.0.0",
"langgraph>=0.0.1; extra == \"langgraph\"",
"crewai>=0.1.0; extra == \"crewai\"",
"pyautogen>=0.2.0; extra == \"autogen\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/stateprime/stateprime",
"Bug Tracker, https://github.com/stateprime/stateprime/issues",
"Documentation, https://github.com/stateprime/stateprime#readme",
"Source Code, https://github.com/stateprime/stateprime"
] | twine/6.2.0 CPython/3.11.3 | 2026-02-21T09:04:11.275813 | stateprime-1.0.0.tar.gz | 47,079 | ea/40/07b1cae26c9edc695671e9fcffc8f35927ae616502ef3af848ad33a1741e/stateprime-1.0.0.tar.gz | source | sdist | null | false | 00449af2b8590ba4c5e4253db9a8adbc | 6843420fc03821ce8653bf4fdd46600653d0223d5447bf49c57e7480b0e332c2 | ea4007b1cae26c9edc695671e9fcffc8f35927ae616502ef3af848ad33a1741e | null | [
"LICENSE"
] | 61 |
2.4 | minecraft-schemas | 0.4.1.post1 | Parse Minecraft-relative JSON to structured objects | # minecraft-schemas
A Python library that helps you parse Minecraft-related JSON into structured objects.
**Disclaimer:** Although the project name contains "minecraft", this project is not supported by Mojang Studios or Microsoft.
## Notice for users/developers migrated from `minecraft-schemes`
This project is renamed from [`minecraft-schemes`](https://pypi.org/project/minecraft-schemes), with a full package structure reorganization.
For more information about migration, see the [version history file](HISTORY.md).
(Or scroll down to see the version history if you are reading this on PyPI.)
Due to limited time and energy, I am unable to provide a complete backward compatibility solution for this release. Sorry for the inconvenience.
## Features
### Already implemented
- Easy installation
- Open source
- All public APIs are statically typed
- Supports parsing various file structures used by Mojang and Minecraft ([see below](#supported-file-structures))
- Easy-to-use file structure definitions, powered by [`attrs`](https://www.attrs.org)
- Rapid file parsing, powered by [`cattrs`](https://catt.rs)
- Conditional testing for game/command line options and dependency libraries (in `client.json`)
### Not implemented yet (not a complete list)
- [ ] Parse/build support for `launcher_profiles.json` (used by the official Minecraft Launcher)
- [ ] Concatenating and completing game/JVM command line options
## Supported file structures
### <span id="file-structure-version-manifest"></span>`version_manifest.json` and `version_manifest_v2.json`
**See more:**
[Minecraft Wiki](https://minecraft.wiki/w/Version_manifest.json)
- A JSON file that lists the Minecraft versions available in the official launcher.
### <span id="file-structure-client-manifest"></span>`client.json`
**See more:**
[Minecraft Wiki](https://minecraft.wiki/w/Client.json)
- A JSON file that accompanies client.jar in `.minecraft/versions/<version>` and lists the version's attributes.
- Usually named `<game version>.json`.
- Don't confuse this file with `version.json`; they are fundamentally different.
### <span id="file-structure-asset-index"></span>Asset index file
**See more:**
[Minecraft Wiki (only Chinese version)](https://zh.minecraft.wiki/w/%E6%95%A3%E5%88%97%E8%B5%84%E6%BA%90%E6%96%87%E4%BB%B6#%E8%B5%84%E6%BA%90%E7%B4%A2%E5%BC%95)
- A series of JSON files that map resource paths to the hash values of the corresponding hashed resource files, so that the game can locate and load them.
- Can be downloaded from the URL pointed to by `client.json`: `[Root Tag] > "assetIndex" > "url"`
### <span id="file-structure-version-attributes"></span>`version.json`
**See more:**
[Minecraft Wiki](https://minecraft.wiki/w/Version.json)
- A JSON file that offers some basic information about the version's attributes.
- Embedded within client.jar in `.minecraft/versions/<version>` and `server.jar`.
- Don't confuse this file with `client.json`; they are fundamentally different.
### <span id="file-structure-mojang-java-index-manifest"></span>Mojang Java Runtime index file and manifest files
- A JSON file that lists the manifest files of the Java runtimes provided by Mojang, keyed by their "codename".
- Not documented by Minecraft Wiki or Mojang, but it is believed to serve the purpose described above.
### <span id="file-structure-yggdrasil-api-responses"></span>Yggdrasil API Responses
**See more:**
[Unofficial Yggdrasil server technical specification, provided by
`authlib-injector` (only Chinese version)](https://github.com/yushijinhun/authlib-injector/wiki/Yggdrasil-%E6%9C%8D%E5%8A%A1%E7%AB%AF%E6%8A%80%E6%9C%AF%E8%A7%84%E8%8C%83)
It covers the following structures:
- Error response
- Endpoint `/authenticate` response and its parts
- Endpoint `/refresh` response and its parts
- `authlib-injector`-compatible Yggdrasil API metadata
- `authlib-injector`-compatible Yggdrasil Server metadata
- Included in the `authlib-injector`-compatible Yggdrasil API metadata: `[Root Tag] > "meta"` (JSON) or `api_metadata.serverMetadata` (parsed)
- According to this [
`authlib-injector` Wiki](https://github.com/yushijinhun/authlib-injector/wiki/Yggdrasil-%E6%9C%8D%E5%8A%A1%E7%AB%AF%E6%8A%80%E6%9C%AF%E8%A7%84%E8%8C%83#meta-%E4%B8%AD%E7%9A%84%E5%85%83%E6%95%B0%E6%8D%AE)
section about the server metadata, this structure is not mandatory. Regardless of whether parsing succeeds, users/developers
should access and manipulate it as a regular dict.
- Player texture property
- As a part of Yggdrasil API endpoint `/authenticate` and `/refresh`.
- Usually encoded in Base64. Users/developers should decode and load it manually.
## Install
Install `minecraft-schemas` using pip:
```commandline
pip install minecraft-schemas
```
The release page also provides various versions of wheel files for manual download and installation.
## API Documentation
The most useful functionality of `mcschemas` is exposed through the following APIs:
- `mcschemas.Schemas`
- An enum class that declares the currently supported schemas. All schemas have parsing support, but none currently has build support.
- `mcschemas.parse(obj, schema, /, *, converter=None)`
- Parse `obj` as the given `schema` to the corresponding type.
- `schema` must be a member of enum `mcschemas.Schemas`.
- `converter` can be an instance of `cattrs.BaseConverter` or its subclasses.
- Defaults to `None`; in that case, a `mcschemas.DedicatedConverter` instance is created automatically for internal structuring.
- `mcschemas.loads(s, schema, /, *, converter=None, **json_loads_kwargs)`
- Deserialize the string `s`, then parse the deserialized result as the given `schema` to the corresponding type.
- `schema` and `converter` are identical to those of `mcschemas.parse()`.
- `**json_loads_kwargs` will be passed to `json.loads()`, except the keyword `s`.
- `mcschemas.load(fp, schema, /, *, converter=None, **json_load_kwargs)`
- Identical to `mcschemas.loads()`, but instead of a string, deserialize the file-like object `fp`, then parse the deserialized result as the
given `schema` to the corresponding type.
- `fp` must be a `.read()`-supporting text file.
- `schema` and `converter` are identical to those of `mcschemas.parse()`.
- `**json_load_kwargs` will be passed to `json.load()`, except the keyword `fp`.
- `mcschemas.loadVersionAttrsFromClientJar(file, /, *, converter=None, **json_load_kwargs)`
- A convenience function for loading, deserializing and parsing `version.json` from `client.jar`.
- The schema is fixed to `mcschemas.Schemas.VERSION_ATTRIBUTES`.
- `converter` is identical to `mcschemas.parse()`.
- `**json_load_kwargs` is identical to `mcschemas.load()`.
- _class_ `mcschemas.DedicatedConverter(*, regex_flags=0, detailed_validation=True, forbid_extra_keys=False)`
- A converter for converting between structured and unstructured data according to the data structures defined in this package.
- _classmethod_ `mcschemas.DedicatedConverter.configure(converter, /, *, regex_flags=0)`
- Configure an existing converter to convert some specific types.
- `converter` must be an instance of `cattrs.BaseConverter` or its subclasses.
- Full documentation can be found in the code: [src/mcschemas/parser/converters.py](src/mcschemas/parser/converters.py)
And file structure model declarations:
| Sub Package | `mcschemas.Schemas` Enum Member Name | Corresponding File Structure |
|:----------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|
| `mcschemas.models.versionmanifest` | `VERSION_MANIFEST` | [`version_manifest.json` and `version_manifest_v2.json`](#file-structure-version-manifest) |
| `mcschemas.models.clientmanifest` | `CLIENT_MANIFEST` | [`client.json`](#file-structure-client-manifest) |
| `mcschemas.models.assetindex` | `ASSET_INDEX` | [Asset index file](#file-structure-asset-index) |
| `mcschemas.models.versionattrs` | `VERSION_ATTRIBUTES` | [`version.json`](#file-structure-version-attributes) |
| `mcschemas.models.mojangjava` | `MOJANG_JAVA_RUNTIME_INDEX`<br/>`MOJANG_JAVA_RUNTIME_MANIFEST` | [Mojang Java Runtime index file and manifest files](#file-structure-mojang-java-index-manifest) |
| `mcschemas.models.yggdrasil` | `TEXTURE_PROPERTY`<br/>`ERROR_RESPONSE`<br/>`ENDPOINT_AUTHENTICATE_RESPONSE`<br/>`ENDPOINT_REFRESH_RESPONSE`<br/>`YGGDRASIL_API_METADATA`<br/>`REFERENCE_SERVER_METADATA` | [Yggdrasil API Responses](#file-structure-yggdrasil-api-responses) |
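For intuition, `mcschemas.parse()` turns raw JSON dicts into typed model objects. The real library uses `attrs`/`cattrs` with much richer validation; the stdlib-only sketch below, with a hypothetical `LatestVersions` model, just shows the idea of structuring:

```python
import json
from dataclasses import dataclass, fields

# Hypothetical stand-in for one of the real model classes.
@dataclass
class LatestVersions:
    release: str
    snapshot: str

def structure(data: dict, cls):
    """Naive structuring: map dict keys onto dataclass fields."""
    kwargs = {f.name: data[f.name] for f in fields(cls)}
    return cls(**kwargs)

raw = json.loads('{"release": "1.21.11", "snapshot": "25w07a"}')
latest = structure(raw, LatestVersions)
print(latest.release)
```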
## Usage Example
### Parse `version_manifest.json`
Download it [here](https://piston-meta.mojang.com/mc/game/version_manifest_v2.json).
```python
import mcschemas
from mcschemas.models.enums import VersionType

with open('version_manifest.json', mode='r') as f:
    version_manifest = mcschemas.load(f, mcschemas.Schemas.VERSION_MANIFEST)

print('Latest release:', version_manifest.latest.release)
print('Latest snapshot:', version_manifest.latest.snapshot)
print('Number of available versions:', len(version_manifest.versions))
print('Show information on the first 5 release versions:')
for idx, entity in enumerate(version_manifest.filterVersions(type=VersionType.RELEASE)):
    if idx >= 5:
        break
    print('  The ID of the release version (at index {0}):'.format(version_manifest.index(entity)), entity.id)
    print('  The release time of the release version (at index {0}):'.format(version_manifest.index(entity)), entity.releaseTime)
    print('  The last update time of the release version (at index {0}):'.format(version_manifest.index(entity)), entity.time)
```
### Parse `client.json`
This example code uses `client.json` from Minecraft Java Edition
1.21.11; download it [here](https://piston-meta.mojang.com/v1/packages/3f42d3ea921915b36c581a435ed03683a7023fb1/1.21.11.json).
```python
import mcschemas

with open('1.21.11.json', mode='r') as f:
    client_manifest_1_21_11 = mcschemas.load(f, mcschemas.Schemas.CLIENT_MANIFEST)

print('Version ID:', client_manifest_1_21_11.id)
# The following field is structured as a member of enum mcschemas.models.enums.VersionType
print('Version Type:', str(client_manifest_1_21_11.type))
print('Asset version ID:', client_manifest_1_21_11.assetIndex.id)
print('Main class:', client_manifest_1_21_11.mainClass)
print('Release time:', client_manifest_1_21_11.releaseTime)
print('Last update time:', client_manifest_1_21_11.time)
print('Number of dependency libraries:', len(client_manifest_1_21_11.libraries))

client_jar_file_info = client_manifest_1_21_11.downloads.get('client')
if client_jar_file_info:
    print('URL to download the client JAR file:', client_jar_file_info.url)
```
### Parse asset index file
This example code uses version 29 of the asset index file. You can download
it [here](https://piston-meta.mojang.com/v1/packages/aaf4be9d6e197c384a09b1d9c631c6900d1f077c/29.json).
```python
from pathlib import Path

import mcschemas

with open('29.json', mode='r') as f:
    asset_index = mcschemas.load(f, mcschemas.Schemas.ASSET_INDEX)

print('Number of asset files:', len(asset_index.objects))

asset_file_relative_path = Path('icons/icon_256x256.png')
if asset_file_relative_path in asset_index.objects:
    target_asset_file_info = asset_index.objects[asset_file_relative_path]
    print('Information about asset file {0}: hash={1.hash}, size={1.size}'.format(asset_file_relative_path, target_asset_file_info))
```
### Parse `version.json` from a client JAR file
This example code uses the client JAR file from Minecraft Java Edition 1.21.11. You can download it via the official Minecraft Launcher
or [here](https://piston-data.mojang.com/v1/objects/ba2df812c2d12e0219c489c4cd9a5e1f0760f5bd/client.jar).
```python
from pathlib import Path
import mcschemas
version_attrs = mcschemas.loadVersionAttrsFromClientJar(Path.home().joinpath('.minecraft/versions/1.21.11/1.21.11.jar'))
print('Unique identifier of this client JAR:', version_attrs.id)
print('User-friendly name of this client JAR:', version_attrs.name)
print('Data version of this client JAR:', version_attrs.world_version)
print('Protocol version of this client JAR:', version_attrs.protocol_version)
print('Build time of this client JAR:', version_attrs.build_time)
if version_attrs.series_id:
    print('Series ID (branch name) of this client JAR:', version_attrs.series_id)
```
### Load `client.json`, then filter and concatenate command line
This example code uses `client.json` from Minecraft Java Edition
1.21.11; download it [here](https://piston-meta.mojang.com/v1/packages/3f42d3ea921915b36c581a435ed03683a7023fb1/1.21.11.json).
**Note:** this example only demonstrates basic conditional filtering and concatenation operations, and does not consider the replacement of
placeholder parameters (which may be supported in future versions).
```python
import mcschemas
from mcschemas.tools import rules

with open('1.21.11.json', mode='r') as f:
    client_manifest_1_21_11 = mcschemas.load(f, mcschemas.Schemas.CLIENT_MANIFEST)

features: dict[str, bool] = {
    'is_demo_user': True,
    'has_custom_resolution': True
}

cmdline: list[str] = ['java']
for jvm_arg_entry in client_manifest_1_21_11.arguments.jvm:
    if rules.isArgumentCanBeAppended(jvm_arg_entry, features=features):
        cmdline.extend(jvm_arg_entry.value)
cmdline.append(client_manifest_1_21_11.mainClass)
for game_arg_entry in client_manifest_1_21_11.arguments.game:
    if rules.isArgumentCanBeAppended(game_arg_entry, features=features):
        cmdline.extend(game_arg_entry.value)

print('Concatenated command line (without placeholder replacements):', cmdline)
```
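The missing placeholder step could be sketched as follows. `client.json` arguments contain `${...}` placeholders such as `${auth_player_name}`, which fit Python's `string.Template` syntax; the placeholder names and values below are illustrative, not a complete launcher implementation:

```python
from string import Template

def fill_placeholders(cmdline, values):
    """Substitute ${...} placeholders; leave unknown ones untouched."""
    return [Template(arg).safe_substitute(values) for arg in cmdline]

cmdline = ['java', '-Xmx2G', '--username', '${auth_player_name}', '--version', '${version_name}']
filled = fill_placeholders(cmdline, {'auth_player_name': 'Steve', 'version_name': '1.21.11'})
print(filled)
```

`safe_substitute` (rather than `substitute`) keeps any placeholder without a supplied value intact instead of raising, which is convenient while a command line is still being assembled.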
### Fetch API metadata from an `authlib-injector` compatible Yggdrasil Service
This example code requires [`httpx`](https://pypi.org/project/httpx). You can install it through `pip`.
The Yggdrasil service is provided by [Drasl](https://drasl.unmojang.org).
```python
import sys

import cattrs
import httpx

import mcschemas

try:
    resp = httpx.get('https://drasl.unmojang.org/authlib-injector').raise_for_status()
except httpx.HTTPError as exc:
    print('Failed to fetch the Yggdrasil API metadata: {0}'.format(exc))
    sys.exit(1)

# Parse the structure already deserialized by `resp.json()`
api_metadata = mcschemas.parse(resp.json(), mcschemas.Schemas.YGGDRASIL_API_METADATA)
# Or load directly from the original response text
api_metadata = mcschemas.loads(resp.text, mcschemas.Schemas.YGGDRASIL_API_METADATA)

# Parse the server metadata.
# If parsing fails, we can still access/manipulate it as a regular dict.
try:
    server_metadata = mcschemas.parse(api_metadata.serverMetadata, mcschemas.Schemas.REFERENCE_SERVER_METADATA)
except cattrs.ClassValidationError:
    server_metadata = api_metadata.serverMetadata

print('Skin domain allowlist:')
for allowed_skin_domain in api_metadata.skinDomainAllowlist:
    print(' -', allowed_skin_domain)

print('Player profile signature public key (in PEM format):')
print(api_metadata.signaturePublicKey)
print('Yggdrasil server name (if exists):', server_metadata.get('serverName', ''))
print(
    'Yggdrasil server implementation name and version (if exists):',
    server_metadata.get('implementationName'),
    server_metadata.get('implementationVersion'),
)
print('Yggdrasil server supports non-email account name:', server_metadata.get('feature.non_email_login', False))
```
# History
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased] - TBD
## [0.4.1.post1] - 2026-02-21
### Removed
- **Parsing:** `mcschemas.parser.createConverter()` has been removed; it had previously been declared deprecated.
- This function was removed at the code level in the previous release, but this was not mentioned in the previous release's changelog. Therefore,
this release is specifically for correcting this oversight.
## [0.4.1] - 2026-02-21
### Added
- **Schema:** Supported to parse the response format of Yggdrasil APIs compatible with authlib-injector.
- Corresponding model declarations at `mcschemas.models.yggdrasil`.
- **Schema:** Added the method `filterVersions()` for `mcschemas.models.versionmanifest.VersionManifest`
to help filtering version entries by version type/ID.
### Changed
- **Project metadata:** Overhauled and updated the README.
- **Schema:** Registered `mcschemas.models.versionmanifest.VersionManifest` as a `collections.abc.MutableSequence` by implementing the required
  abstract methods. In short, `mcschemas.models.versionmanifest.VersionManifest` is now a mutable sequence.
  - Its `__getitem__` method behaves a little differently from a regular sequence's: when a string is passed in, it iterates through the internal
    list of version entries and returns the entry whose version ID **exactly matches** the string.
- **Schema:** Registered `mcschemas.models.assetindex.AssetIndex` as a `collections.abc.MutableMapping` by implementing the required abstract methods.
  In short, `mcschemas.models.assetindex.AssetIndex` is now a mutable mapping.
  - Its `__getitem__`, `__setitem__` and `__delitem__` methods behave a little differently from a regular mapping's: when a string is passed as the
    first argument after `self`, it is first converted into a `pathlib.Path` instance before being passed to the internal asset objects mapping.
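The string-to-`Path` key conversion described above behaves roughly like this stdlib sketch (not the library's actual code):

```python
from collections.abc import MutableMapping
from pathlib import Path

class PathKeyedMapping(MutableMapping):
    """A mapping that converts string keys to pathlib.Path before lookup."""

    def __init__(self):
        self._objects = {}

    @staticmethod
    def _key(key):
        return Path(key) if isinstance(key, str) else key

    def __getitem__(self, key):
        return self._objects[self._key(key)]

    def __setitem__(self, key, value):
        self._objects[self._key(key)] = value

    def __delitem__(self, key):
        del self._objects[self._key(key)]

    def __iter__(self):
        return iter(self._objects)

    def __len__(self):
        return len(self._objects)

index = PathKeyedMapping()
index['icons/icon_256x256.png'] = {'hash': 'abc', 'size': 1234}
print(index[Path('icons/icon_256x256.png')])  # same entry, str or Path key
```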
## [0.4.0] - 2026-02-09
This release focuses on renaming package and reorganizing the package internal structure.
Due to limited time and energy, I am unable to design a complete backward compatibility solution for this release. Sorry for the inconvenience.
### Backwards-incompatible Changes
- **Organizational:** The package is renamed to `mcschemas` (formerly `mcschemes`),
  and this project is renamed to `minecraft-schemas` (formerly `minecraft-schemes`).
- Since the word "scheme" [[Merriam-Webster]](https://www.merriam-webster.com/dictionary/scheme) does not accurately reflect the content and
objectives of this project, using "schema" [[Merriam-Webster]](https://www.merriam-webster.com/dictionary/schema) is clearly more appropriate.
- **Organizational:** Package structure changed:
  - All subpackages and modules containing schema declarations have been moved to `mcschemas.models`, as detailed below:
- `mcschemes.assetindex` -> `mcschemas.models.assetindex`
- `mcschemes.clientmanifest` -> `mcschemas.models.clientmanifest`
- `mcschemes.mojangjava` -> `mcschemas.models.mojangjava`
- `mcschemes.versionattrs` -> `mcschemas.models.versionattrs`
- `mcschemes.versionmanifest` -> `mcschemas.models.versionmanifest`
- `mcschemes.enums` -> `mcschemas.models.enums`
- `mcschemes.specials` -> `mcschemas.models.specials`
- Parser submodule `mcschemes.tools.parser` is moved to `mcschemas.parser`.
### Changes
- **Project metadata:** Fixed typos in this version history file and README: `scheme` -> `schema`.
## [0.3.0.post1] - 2026-02-09
**Deprecated:**
- Announced the abandonment of the old package name `mcschemes` and the old project name `minecraft-schemes`.
- The obsolete parts are kept in a separate branch `0.3.0-announced-deprecation`.
## [0.3.0] - 2026-01-31
### Added
- **Schema:** Added support for parsing version information files (the `version.json` embedded within `client.jar`).
- **Schema:** Added parsing support for the index file of Mojang Java Runtimes and the file manifest of each Java runtime.
- **Parsing:** Added a dedicated converter class at `mcschemes.tools.parser.Converter.DedicatedConverter`.
  - It is intended to replace `mcschemes.tools.parser.createConverter()`.
### Backwards-incompatible Changes
- **Schema:** `mcschemes.assetindex.AssetIndex` now uses `Path` objects from the standard library's `pathlib` module to represent relative file
  paths in asset index files.
  1. Previously, it used `str` to represent relative file paths, so you could access information (e.g. hash, size) in the following way:
```python
from mcschemes.models.assetindex import AssetIndex
asset_index: AssetIndex = ... # Some operations to obtain the json and structure it to the AssetIndex instance
file_info = asset_index.objects['icons/icon_128x128.png']
[...] # Do your operations for file_info
```
2. Now, you need to use a `pathlib.Path` object as the key to access the corresponding information:
```python
from pathlib import Path
from mcschemes.models.assetindex import AssetIndex
asset_index: AssetIndex = ... # Some operations to obtain the json and structure it to the AssetIndex instance
file_info = asset_index.objects[Path('icons/icon_128x128.png')]
[...] # Do your operations for file_info
```
### Deprecations
- **Parsing:** `mcschemes.tools.parser.createConverter()` is now marked as deprecated and will be removed in future versions.
  - Passing a converter class based on `cattrs.Converter` via the keyword-only argument `converter_class` no longer determines the type of the
    returned dedicated converter instance.
### Changes
- **Project metadata:** This version history file has been revised to conform to the format described
in [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- **Project metadata:** Fully updated the [README file](README.md):
- Added a summary of the main features and benefits.
- Added a summary of file structures supported by this library.
  - Usage examples are now more useful and better represent typical use cases.
- **Organizational:** Reorganized the project structure:
  - `mcschemes.tools.parser` is now a package.
  - The sub-package `mcschemes.tools.parser.converters` was added to contain dedicated converters.
- **Typing:** `typing-extensions` is now used instead of the stdlib `typing` module for better backward compatibility of type annotations.
- **Schema:** Several changes to the SHA-1 hexdigest container:
  - The comparison between two `mcschemes.specials.Sha1Sum` instances is now based on a case-insensitive comparison of the `hexdigest` attributes
    of both.
- The exception class `mcschemes.specials.ChecksumMismatch` is now exposed.
- **Parsing:** `mcschemes.tools.parser.parse()` now checks its second argument `scheme` in a more robust way.
### Fixed
- **Tooling:** Fixed a mistake in the OS-name comparison in the function `mcschemes.tools.rules.isAllow()`.
## [0.2.0] - 2025-12-11
### Added
- **Project metadata:** Added `MANIFEST.in` for setuptools.
- **Schema:** Added an SHA-1 hexdigest container type for (un)structuring `sha1`/`hash` fields. Its definition can be found at
  `mcschemes.specials.Sha1Sum`.
- **Tooling:** Added some tool functions to evaluate whether a set of rules (an iterable of `mcschemes.clientmanifest.nodes.RuleEntry` instances)
  allows or disallows an operation, such as appending an argument or downloading a library file.
### Changes
- **Project metadata:** Declared the `setuptools` build backend in `pyproject.toml`.
- **Project metadata:** Per [PEP 561](https://peps.python.org/pep-0561), an empty `py.typed` file was added to the root directory of the package.
- **Project metadata:** Corrected the date format for all tier-2 titles in this version history file.
- **Organizational:** Moved `typings.py` to the root directory of the package.
## [0.1.0.post1] - 2025-12-05
### Changes
- **Project metadata:** Added project URLs to `pyproject.toml`.
- **Project metadata:** Added disclaimer in `README.md`.
## [0.1.0] - 2025-12-04
### Added
The initial release.
| text/markdown | null | Izanagi Tokiyuki <izanagitokiyuki@noreply.codeberg.org> | null | null | null | minecraft, schema | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"attrs>=25.4.0",
"cattrs>=25.3.0",
"pendulum>=3.1.0",
"typing-extensions>=4.15.0"
] | [] | [] | [] | [
"Repository, https://codeberg.org/IzanagiTokiyuki/minecraft-schemas",
"Issues, https://codeberg.org/IzanagiTokiyuki/minecraft-schemas/issues",
"Changelog, https://codeberg.org/IzanagiTokiyuki/minecraft-schemas/src/branch/master/HISTORY.md"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T09:02:01.616298 | minecraft_schemas-0.4.1.post1.tar.gz | 37,377 | da/6a/f4affef33fea8dd18af9cba8a0d6409bb8a59917af3f075ad1188ea8c719/minecraft_schemas-0.4.1.post1.tar.gz | source | sdist | null | false | b1a489378bce80a67b5338fdec7cb95d | 18321c95f3316ac0230728f3b9f6e7a4f56f9a70d8f611c7aaf58a7e3f74458e | da6af4affef33fea8dd18af9cba8a0d6409bb8a59917af3f075ad1188ea8c719 | MulanPSL-2.0 | [
"LICENSE"
] | 221 |
2.4 | dynamorator | 0.1.5 | Lightweight DynamoDB JSON storage with automatic TTL support | # Dynamorator
Lightweight DynamoDB JSON storage with automatic TTL support. A simple, reliable wrapper for storing and retrieving JSON data in AWS DynamoDB.
## Features
- Simple key-value JSON storage in DynamoDB
- Automatic TTL (Time To Live) support
- Automatic table creation with proper configuration
- Silent error handling - never crashes your application
- Shared boto3 client for efficiency
- Optional logging with logorator
- Minimal dependencies (boto3, logorator)
## Installation
```bash
pip install dynamorator
```
## Quick Start
```python
from dynamorator import DynamoDBStore
# Initialize (table will be auto-created if it doesn't exist)
store = DynamoDBStore(table_name="my-data-store")
# Store data (expires in 7 days)
store.put("user:123", {"name": "Alice", "score": 100}, ttl_days=7)
# Retrieve data
data = store.get("user:123") # Returns dict or None
print(data) # {'name': 'Alice', 'score': 100}
# List all keys
result = store.list_keys(limit=50)
print(result['keys']) # ['user:123', ...]
# Delete data
store.delete("user:123")
```
## Silent Mode
Disable logging for production environments:
```python
# With logging (default)
store = DynamoDBStore(table_name="my-store")
# Silent mode - no logging
store = DynamoDBStore(table_name="my-store", silent=True)
```
## AWS Credentials Setup
Dynamorator uses boto3, which follows the standard AWS credential chain:
1. Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
2. AWS credentials file (`~/.aws/credentials`)
3. IAM role (when running on EC2, ECS, Lambda, etc.)
See [AWS Boto3 Configuration](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for details.
## Required IAM Permissions
Your AWS credentials need the following DynamoDB permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:CreateTable",
"dynamodb:DescribeTable",
"dynamodb:UpdateTimeToLive",
"dynamodb:PutItem",
"dynamodb:GetItem",
"dynamodb:DeleteItem",
"dynamodb:Scan"
],
"Resource": "arn:aws:dynamodb:*:*:table/your-table-name"
}
]
}
```
If the table already exists, you only need: `PutItem`, `GetItem`, `DeleteItem`, and `Scan`.
## API Reference
### `DynamoDBStore(table_name=None, silent=False)`
Initialize the store.
**Parameters:**
- `table_name` (str, optional): DynamoDB table name. If None, the store is disabled.
- `silent` (bool, optional): If True, disables all logging output. Default is False.
**Behavior:**
- Automatically creates the table if it doesn't exist
- Uses `PAY_PER_REQUEST` billing mode
- Configures TTL on the `ttl` attribute
- Table schema: partition key `cache_id` (String)
### `is_enabled() -> bool`
Check if the store is enabled.
**Returns:** `True` if table_name is set, `False` otherwise.
```python
store = DynamoDBStore(table_name="my-store")
if store.is_enabled():
print("Store is ready!")
```
### `get(key: str) -> Optional[dict]`
Retrieve JSON data by key.
**Parameters:**
- `key` (str): The key to retrieve
**Returns:** Dictionary if found, `None` if not found or on error.
```python
data = store.get("user:123")
if data:
print(f"Found: {data}")
else:
print("Not found")
```
### `put(key: str, data: dict, ttl_days: float)`
Store JSON data with TTL.
**Parameters:**
- `key` (str): The key to store under
- `data` (dict): JSON-serializable dictionary
- `ttl_days` (float): Expiration time in days (can be fractional, e.g., 0.5 for 12 hours)
**Behavior:**
- Silently fails on error (no exceptions raised)
- Automatically handles datetime objects in data using `DateTimeEncoder`
- Stores creation timestamp for tracking
```python
from datetime import datetime
store.put("session:abc", {
"user_id": 123,
"created": datetime.now(),
"expires": datetime(2026, 12, 31)
}, ttl_days=1)
```
### `delete(key: str)`
Delete an entry by key.
**Parameters:**
- `key` (str): The key to delete
**Behavior:**
- Silently fails on error (no exceptions raised)
```python
store.delete("user:123")
```
### `list_keys(limit=100, last_key=None) -> dict`
List keys in the table with pagination support.
**Parameters:**
- `limit` (int): Maximum number of keys to return (default: 100)
- `last_key` (str, optional): Pagination token from previous call
**Returns:** Dictionary with:
- `keys` (list): List of key strings
- `last_key` (str or None): Token for next page, or None if no more results
```python
# Get first page
result = store.list_keys(limit=50)
print(result['keys'])
# Get next page if available
if result['last_key']:
next_result = store.list_keys(limit=50, last_key=result['last_key'])
print(next_result['keys'])
```
## Table Structure
Dynamorator creates tables with the following structure:
```
Partition Key: cache_id (String)
Attributes:
- data (String) - JSON serialized dictionary
- ttl (Number) - Unix timestamp for expiration
- created_at (Number) - Unix timestamp of creation
TTL: Enabled on 'ttl' attribute
Billing: PAY_PER_REQUEST
```
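The `ttl` attribute holds a Unix epoch timestamp. Assuming the obvious conversion (a sketch, not necessarily the library's exact code), `ttl_days` maps to that timestamp like this:

```python
import time

def ttl_timestamp(ttl_days: float) -> int:
    """Convert a ttl_days value into the Unix timestamp DynamoDB's TTL expects."""
    return int(time.time() + ttl_days * 86400)  # 86400 seconds per day

# 0.5 days -> expires roughly 12 hours from now
expires = ttl_timestamp(0.5)
```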
## TTL Behavior
DynamoDB's TTL feature:
- Automatically deletes expired items (usually within 48 hours of expiration)
- Doesn't consume write capacity
- Items may still be returned by queries shortly after expiration
- Free of charge
Example TTL values:
```python
store.put(key, data, ttl_days=7) # 7 days
store.put(key, data, ttl_days=0.5) # 12 hours
store.put(key, data, ttl_days=30) # 30 days
store.put(key, data, ttl_days=365) # 1 year
```
## Error Handling
Dynamorator follows a "silent failure" philosophy:
- `get()` returns `None` on errors
- `put()` and `delete()` fail silently
- Only table creation operations raise exceptions
This design ensures your application continues running even if DynamoDB is temporarily unavailable.
```python
# Safe to use without try/except
data = store.get("key") # Returns None on error
store.put("key", {"value": 1}, ttl_days=1) # Silent on error
store.delete("key") # Silent on error
```
## DateTimeEncoder
Automatically handles datetime serialization:
```python
from datetime import datetime
from dynamorator import DynamoDBStore
store = DynamoDBStore(table_name="events")
# Datetime objects are automatically converted to ISO format
store.put("event:1", {
"name": "Meeting",
"scheduled": datetime(2026, 3, 15, 14, 30),
"created": datetime.now()
}, ttl_days=30)
# Retrieved as ISO strings
data = store.get("event:1")
# {'name': 'Meeting', 'scheduled': '2026-03-15T14:30:00', 'created': '2026-02-21T...'}
```
## Use Cases
- **Session Storage**: Store user sessions with automatic expiration
- **Cache Layer**: Simple caching for API responses or computed data
- **Feature Flags**: Store and retrieve feature flag configurations
- **Temporary Data**: Any data that should automatically expire
- **User Preferences**: Store user settings with optional expiration
## Disabled Mode
Pass `None` as table_name to disable the store (useful for testing or optional features):
```python
import os
# Only enable in production
table_name = os.getenv("DYNAMODB_TABLE") if os.getenv("ENV") == "production" else None
store = DynamoDBStore(table_name=table_name)
# Safe to call even when disabled
store.put("key", {"data": 1}, ttl_days=1) # No-op when disabled
data = store.get("key") # Returns None when disabled
```
## License
MIT License - see LICENSE file for details.
## Author
Arved Klöhn
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | Arved Klöhn | null | null | null | null | dynamodb, aws, json, storage, cache, ttl, database | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Database"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"boto3",
"logorator>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Redundando/dynamorator",
"Repository, https://github.com/Redundando/dynamorator",
"Issues, https://github.com/Redundando/dynamorator/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:01:37.599166 | dynamorator-0.1.5.tar.gz | 9,362 | a1/21/516503462c81ce8f7210a1b70114911385c5ce838f9bfe273c2c127a622e/dynamorator-0.1.5.tar.gz | source | sdist | null | false | 9eab9d484e9b27d31a5bb3587f4748dc | 7d235c81105be380b4a9eaf0dd90d7483b959ed6c9cfe7534829651325e6dbd6 | a121516503462c81ce8f7210a1b70114911385c5ce838f9bfe273c2c127a622e | MIT | [
"LICENSE"
] | 341 |
2.4 | wm-screenshooter | 1.0.10 | A sophisticated cross-platform screenshot tool with timer support and organization features | # WorkMaster ScreenShooter
Screenshooter is a sophisticated cross-platform screenshot TUI application with timer support and organisation features. This modular Python application helps freelancers document their work with automatic (or manual) screenshots and pairs this with contemporaneous notes and the ability to create and email a PDF report to their clients.
## Why?
"As a freelance developer I love to code and help solve my clients' problems. Much of my work is hourly requiring me to keep accurate logs and notes. I created ScreenShooter to keep track of sessions, projects and notes for clients allowing me to focus on the fun parts of coding and not the admin."
- [Conor Ryan - ScreenShooter Creator](https://www.conorjwryan.com)
**Note:** ScreenShooter fully supports macOS; Linux and Windows are supported but still in beta testing.
## Features
- Application lives in your terminal and can be left in the background tracking time while you focus on work
- Screenshot capture can be done automatically at regular intervals or manually when the user requests them.
- Flexible capture targets: all displays, a specific display, or a dedicated window.
- Produce captions (which appear under given screenshots) or notes for communicating ideas to clients
- Ability to pause / resume when doing sensitive work
- Session based reporting system allows for flexibility and time management
- Built in client management features for sending reports and archiving finished projects
- Ability to generate and send reports to client after every session, day or when needed
- Upload reports to S3 bucket and generate link for clients to view
- Optional CLI for quicker session start
- Backup functionality to backup settings, database, and screenshots directory
## Requirements
- Python 3.10 - 3.14
- macOS, Linux, or Windows
- For snippet drag-selection (`w` command):
- `PySide6-Essentials` is optional and only required for GUI drag-selection.
- For desktop notifications (optional):
- macOS: `terminal-notifier` (`brew install terminal-notifier`)
- Linux: `notify-send` (usually pre-installed)
- Windows: PowerShell (built-in)
- Email sending system (e.g. Brevo, SendGrid, Mailgun, etc.)
- S3 bucket (for remote report storage to make it easier to send reports to clients)
- Cloudflare KV (for custom link generation to make it easier to send reports to clients) (optional)
## Installation
### Option 1: Install from PyPI (Recommended)
```bash
pip install wm-screenshooter
screenshooter --version
```
To enable GUI snippet drag-selection (`w`) on top of the base install:
```bash
pip install "wm-screenshooter[snippet-gui]"
```
### Option 2: Install from source (using uv)
1. Clone the repository and install the application using `uv`:
```bash
git clone https://gitlab.com/workmaster/screenshooter.git
cd screenshooter
uv tool install .
screenshooter --version
```
2. (Optional) enable GUI snippet drag-selection dependency:
```bash
uv sync --extra snippet-gui
```
### (Optional) Install notification support
Notification support is optional and will be disabled if not installed.
**macOS:**
```bash
brew install terminal-notifier # macOS
```
**Linux:** (Debian/Ubuntu)
```bash
sudo apt install libnotify-bin # Debian/Ubuntu
```
**Windows:**
No installation required; PowerShell is built in.
## Usage
Screenshooter can be used either via the command-line interface or via the interactive menu. Until you have created a client and project, you will need to use the interactive menu to set up your project and client.
### Interactive Menu (first time use)
Below is the general flow for performing some of the common actions in the ScreenShooter program.
#### How to take screenshots
TL;DR: Skip to step 3 if you have already created a client and project.
1. On the main ScreenShooter screen press option 1.
2. As this is your first time using the application you will be prompted to create a client and a project. Do the following:
- Enter the name of your client (no spaces) - eg: "Testing"
- Press n for no to create a custom directory name for your client
- Enter the Company Name - eg: "Testing Ltd"
- Contact Name - eg: "John Doe"
- Contact Email - eg: "<john.doe@testing.com>"
- Screenshot delivery method [local/email/cloud] (enter):
- Notification preferences [all/important/none] (enter):
- Reporting frequency [daily/weekly/monthly/none] (enter):
- PDF security [none/password] (enter):
- PDF page size [A4/letter] (enter):
- Press enter to save the client information
3. You will now be asked to create a project. Do the following:
- Enter the name of your project (no spaces) - eg: "BigProject"
4. (Start of Session)
- Choose the display mode you want to use (press enter to use all):
- Enter the timer duration in minutes - eg: 15
- Type a note to start the session - eg: "Started working on BigProject"
- Press enter to start the session
A Countdown will start to take the first screenshot. (10 seconds).
- The session is now running, the timer is counting down to the next screenshot.
- Refer to the in-session commands below on how to interact with the session.
- When you are finished with the session you can press 'q' to quit and take a final screenshot.
#### In-session commands
```bash
'n' - to add a note to the log
'c' - to add a caption to the last screenshot
's' - to take a manual screenshot now and reset timer
'o' - to open the last screenshot in Preview
'q' - to quit and take final screenshot
't' - to change the timer duration
'd' - to change the display mode
'p' - to pause the session for sensitive activities
'm' - to switch to manual mode (no timer)
'r' - to archive the last action (screenshot or note)
'e' - to list the last 5 actions
'l' - to open the current session log in TextEdit
'i' - to show time in session, today, and project total
'h' - for help
'z' - to cancel active countdowns
```
### CLI Command Structure
ScreenShooter uses a command-line interface with nested commands:
```bash
screenshooter # Interactive menu
screenshooter client # List all clients
screenshooter client "ClientName" # Manage specific client
screenshooter client "ClientName" projects # List projects
screenshooter shoot --client "Name" --project "Project" [options]
screenshooter report # Interactive report generator
screenshooter report generate --client "Name" --project "Project"
screenshooter open logs|config # Open logs or config directories
screenshooter settings # Manage settings
screenshooter upgrade # Manage upgrade checks and notifications
screenshooter backup <settings|db|all> [--outputdir PATH] # Create backups
screenshooter help # Show help information
```
For more information on the commands and options available, run `screenshooter command --help` for more information.
## Directory Structure and Screenshot Storage
Screenshots are organized in the following structure:
```tree
~/WorkMaster_Screenshooter/
└── CLIENT_NAME/
├── client.json # Client information and preferences
└── PROJECT_NAME/
├── project.json
├── PROJECT_NAME_log.txt
├── reports/
│ └── PROJECT_NAME_Report_YYYYMMDD_HHMMSS.pdf
└── sessions/
└── YYYY-MM-DD_HH-MM-SS/
├── session.log
├── session.json
└── screenshots/
└── PROJECT_NAME_YYYY-MM-DD_HH-MM-SS_*.jpg
```
Above is the general structure of the application.
## Notes
### Email Support and Remote Report Storage
If you want to send the screenshot reports to your client via email, you will need to use a bulk email-sending system. ScreenShooter supports this via the `email` setting in the settings menu.
`Brevo` is recommended as it has a generous free tier and is easy to set up. The one catch with these bulk email senders is that they often don't allow for large attachments. This is why ScreenShooter allows you to upload the report to an S3 bucket and then generate a link to the report.
You can set up an S3 bucket and configure ScreenShooter to use it via the `s3` setting in the settings menu. The S3 bucket will be used to store the screenshot reports and the link to the report will be generated and sent to your client via email.
### Updates
Updates to ScreenShooter are checked automatically at startup and can be checked manually with
`screenshooter upgrade check`.
Two update channels are available:
- `release` (default): checks GitLab releases and uses cached checks.
- `dev`: checks the configured branch head commit on every run (no cache reads).
When running from a local git checkout (for example via `uv run screenshooter`), dev checks use local
`HEAD` as the current commit source.
You can also skip or pin specific versions of ScreenShooter to prevent notifications for these or future versions.
```bash
screenshooter upgrade check # Check for updates
screenshooter upgrade status # Show current channel/branch behavior
screenshooter upgrade channel <release|dev> # Set update channel
screenshooter upgrade branch <branch-name> # Set dev channel branch target
screenshooter upgrade settings --enable --frequency 7 # Configure checks
screenshooter upgrade skip <version> # Skip notifications for a specific version
screenshooter upgrade pin <version> # Pin to a specific version
screenshooter upgrade unpin # Remove pinned version
```
The upgrade itself is not automatically installed and requires you to manually download the latest version and install it.
If installing from PyPI:
```bash
pip install --upgrade wm-screenshooter
screenshooter --version
```
If installing from GitLab you can use the following command to install the latest version:
```bash
git pull
uv tool install .
screenshooter --version
```
## Contributing
For contributor and branch workflow guidance (including dev/release upgrade channels), see
[`docs/DEVELOPMENT.md`](docs/DEVELOPMENT.md).
## Who Made This?
ScreenShooter is being developed by [@conorjwryan](https://gitlab.com/conorjwryan) as a part of the [WorkMaster](https://workmaster.app) project. WorkMaster is a platform for freelancers to manage their work and clients. ScreenShooter is its companion application for capturing screenshots of their work when requested by clients.
The WorkMaster site is under closed development at the moment and is not ready for public use. More information will be available soon.
In the meantime, please checkout [Conor's website](https://www.conorjwryan.com) and his [GitLab](https://gitlab.com/conorjwryan) for more of his projects.
| text/markdown | Conor Ryan | null | null | null | # MIT License Copyright (c) 2025 Conor Ryan Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | screenshot, screenshooter, cli, time tracking, project management, client management, report generation, email delivery, s3 storage, cloudflare kv | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"click>=8.1.3",
"rich>=13.7.0",
"python-dateutil>=2.8.2",
"pathlib>=1.0.1",
"pydantic>=2.12.0",
"reportlab>=4.0.9",
"Pillow>=10.1.0",
"boto3>=1.29.2",
"requests>=2.31.0",
"packaging>=21.0",
"pyzipper>=0.3.6",
"mss>=9.0.0",
"PySide6-Essentials>=6.7.0; extra == \"snippet-gui\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/workmaster/screenshooter"
] | uv/0.9.13 {"installer":{"name":"uv","version":"0.9.13"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T09:01:30.733419 | wm_screenshooter-1.0.10-py3-none-any.whl | 208,758 | ac/4e/2c6f2ab2e7d104aa9c19cddfaa7c866e8352d3fed41f02faceda3ac1920d/wm_screenshooter-1.0.10-py3-none-any.whl | py3 | bdist_wheel | null | false | 724f46cdc6117a68e93bdf325f685895 | c59aae7c0352d59895ca2d33270de3aa1089605582348827524a275183e8cf9a | ac4e2c6f2ab2e7d104aa9c19cddfaa7c866e8352d3fed41f02faceda3ac1920d | null | [
"LICENCE.md"
] | 228 |
2.4 | fraiseql | 2.0.0b3 | FraiseQL v2 - Compiled GraphQL execution engine (schema authoring) | # FraiseQL v2 - Python Schema Authoring
**Python decorators for authoring FraiseQL schemas**
This package provides Python decorators to define GraphQL schemas that are compiled by the FraiseQL Rust engine.
## Architecture
```
Python Decorators → schema.json → fraiseql-cli compile → schema.compiled.json → Rust Runtime
```
**Important**: This package is for **schema authoring only**. It does NOT provide runtime execution.
The compiled schema is executed by the standalone Rust server.
## Installation
```bash
pip install fraiseql
```
## Quick Start
```python
import fraiseql
# Define a GraphQL type
@fraiseql.type
class User:
id: int
name: str
email: str
created_at: str
# Define a query
@fraiseql.query(sql_source="v_user")
def users(limit: int = 10) -> list[User]:
"""Get all users with pagination."""
pass
# Define a mutation
@fraiseql.mutation(sql_source="fn_create_user", operation="CREATE")
def create_user(name: str, email: str) -> User:
"""Create a new user."""
pass
# Export schema to JSON
if __name__ == "__main__":
fraiseql.export_schema("schema.json")
```
## Compile Schema
```bash
# Compile schema.json to optimized schema.compiled.json
fraiseql-cli compile schema.json -o schema.compiled.json
# Start server with compiled schema
fraiseql-server --schema schema.compiled.json
```
## Features
- **Type-safe**: Python type hints map to GraphQL types
- **Database-backed**: Queries map to SQL views, mutations to functions
- **Compile-time**: All validation happens at compile time, zero runtime overhead
- **No FFI**: Pure JSON output, no Python-Rust bindings needed
- **Analytics**: Fact tables and aggregate queries for OLAP workloads
## Analytics / Fact Tables
FraiseQL supports high-performance analytics via fact tables:
```python
import fraiseql
# Define a fact table
@fraiseql.fact_table(
table_name="tf_sales",
measures=["revenue", "quantity", "cost"],
dimension_paths=[
{"name": "category", "json_path": "data->>'category'", "data_type": "text"},
{"name": "region", "json_path": "data->>'region'", "data_type": "text"}
]
)
@fraiseql.type
class Sale:
id: int
revenue: float # Measure (aggregatable)
quantity: int # Measure
cost: float # Measure
customer_id: str # Denormalized filter (indexed)
occurred_at: str # Denormalized filter (indexed)
# Define an aggregate query
@fraiseql.aggregate_query(
fact_table="tf_sales",
auto_group_by=True,
auto_aggregates=True
)
@fraiseql.query
def sales_aggregate() -> list[dict]:
"""Aggregate sales with flexible grouping and filtering."""
```
This generates a GraphQL query that supports:
- **GROUP BY**: Dimensions (`category`, `region`) and temporal buckets (`occurred_at_day`, `occurred_at_month`)
- **Aggregates**: `count`, `revenue_sum`, `revenue_avg`, `quantity_sum`, etc.
- **WHERE**: Pre-aggregation filters (`customer_id`, `occurred_at` range)
- **HAVING**: Post-aggregation filters (`revenue_sum_gt: 1000`)
- **ORDER BY**: Any aggregate or dimension
- **LIMIT/OFFSET**: Pagination
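For illustration only, a request against such a generated query might look like the following; the exact argument and field names are hypothetical and depend on the compiled schema:

```graphql
{
  sales_aggregate(
    group_by: [category, occurred_at_month]
    having: { revenue_sum_gt: 1000 }
    order_by: { revenue_sum: DESC }
    limit: 10
  ) {
    category
    occurred_at_month
    count
    revenue_sum
  }
}
```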
### Fact Table Pattern
```sql
-- Table name starts with tf_ (table fact)
CREATE TABLE tf_sales (
id BIGSERIAL PRIMARY KEY,
-- Measures: Numeric columns for fast aggregation
revenue DECIMAL(10,2) NOT NULL,
quantity INT NOT NULL,
cost DECIMAL(10,2) NOT NULL,
-- Dimensions: JSONB column for flexible GROUP BY
data JSONB NOT NULL,
-- Denormalized filters: Indexed columns for fast WHERE
customer_id UUID NOT NULL,
occurred_at TIMESTAMPTZ NOT NULL
);
CREATE INDEX ON tf_sales(customer_id);
CREATE INDEX ON tf_sales(occurred_at);
```
**Key Principles:**
- **Measures**: SQL columns (numeric types) for fast aggregation
- **Dimensions**: JSONB `data` column for flexible grouping
- **Denormalized Filters**: Indexed SQL columns for fast WHERE clauses
- **No Joins**: All dimensional data denormalized at ETL time
## Type Mapping
| Python Type | GraphQL Type |
|-------------|--------------|
| `int` | `Int` |
| `float` | `Float` |
| `str` | `String` |
| `bool` | `Boolean` |
| `list[T]` | `[T]` |
| `T \| None` | `T` (nullable) |
| Custom class | Object type |
## Documentation
Full documentation: <https://fraiseql.readthedocs.io>
## License
MIT
| text/markdown | FraiseQL Team | null | null | null | MIT | compiler, database, graphql, schema, sql | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Database :: Front-Ends",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://fraiseql.readthedocs.io",
"Homepage, https://github.com/fraiseql/fraiseql",
"Issues, https://github.com/fraiseql/fraiseql/issues",
"Repository, https://github.com/fraiseql/fraiseql"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T08:59:16.872473 | fraiseql-2.0.0b3.tar.gz | 64,889 | eb/22/c1abd5d70e028ffda22e026dc79fb6708c0c863e8bd500bca0ee4adba25b/fraiseql-2.0.0b3.tar.gz | source | sdist | null | false | d08150206796557ebbfa07e59dd65956 | f09b5871e82931d1858197751ec28fcf1a5c480d9770c1df036b3f290f597a87 | eb22c1abd5d70e028ffda22e026dc79fb6708c0c863e8bd500bca0ee4adba25b | null | [] | 227 |
2.4 | haema | 0.2.0 | HAEMA memory framework built on ChromaDB | # HAEMA
[English](README.md) | [한국어](README.ko.md)
HAEMA is an agent memory framework built on ChromaDB.
It provides three memory modes through a single write API:
- `core memory`: durable high-impact identity/policy/user facts (`get_core`)
- `latest memory`: recency slice by timestamp (`get_latest`)
- `long-term memory`: semantic retrieval (`search`)
You only write through `add(contents)`, and HAEMA updates all layers automatically.
## Key Changes (Current)
- `add(contents)` runs a single N:M reconstruction pass per call.
- Embedding is split into query/document interfaces:
- `embed_query(...)`
- `embed_document(...)`
- The no-related special path is removed; a single reconstruction schema is used for all cases.
- reconstruction schema:
- `memories: list[str]`
- `coverage: "complete" | "incomplete"`
## Installation
```bash
pip install haema
```
Development:
```bash
pip install -e ".[dev]"
```
## Quick Start
```python
from haema import Memory

m = Memory(
    path="./haema_store",
    output_dimensionality=1536,
    embedding_client=...,  # your EmbeddingClient implementation
    llm_client=...,        # your LLMClient implementation
    merge_top_k=3,
    merge_distance_cutoff=0.25,
)

m.add([
    "The user prefers concise and actionable responses.",
    "The user is building HAEMA on top of ChromaDB.",
])

print(m.get_core())                    # str
print(m.get_latest(begin=1, count=5))  # list[str]
print(m.search("user preference", 3))  # list[str]
```
Real provider example:
- `examples/google_genai_example.py`
## Public API
### Constructor
`Memory(path, output_dimensionality, embedding_client, llm_client, merge_top_k=3, merge_distance_cutoff=0.25)`
- `path`: storage root directory
- `output_dimensionality`: embedding output dimension
- `embedding_client`: user embedding adapter
- `llm_client`: user structured-output LLM adapter
- `merge_top_k`: related candidate count per new content (default `3`)
- `merge_distance_cutoff`: related-memory distance threshold (default `0.25`)
### Methods
- `get_core() -> str`
- `get_latest(begin: int, count: int) -> list[str]`
- `search(content: str, n: int) -> list[str]`
- `add(contents: str | list[str]) -> None`
## Client Interfaces
### `EmbeddingClient`
- `embed_query(texts, output_dimensionality) -> np.ndarray`
- `embed_document(texts, output_dimensionality) -> np.ndarray`
Both must return:
- 2D `numpy.ndarray`
- dtype `float32`
- shape `(len(texts), output_dimensionality)`
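A minimal stub that satisfies this contract might look like the following. It is hypothetical (hash-seeded random vectors, useful only for wiring up tests), not a real embedding model:

```python
import numpy as np

class DummyEmbeddingClient:
    """Toy adapter returning deterministic pseudo-random float32 vectors."""

    def _embed(self, texts, output_dimensionality):
        rows = []
        for text in texts:
            # Seed per text so the same input always embeds the same way.
            rng = np.random.default_rng(abs(hash(text)) % (2**32))
            rows.append(rng.random(output_dimensionality, dtype=np.float32))
        # shape (len(texts), output_dimensionality), dtype float32
        return np.stack(rows)

    def embed_query(self, texts, output_dimensionality):
        return self._embed(texts, output_dimensionality)

    def embed_document(self, texts, output_dimensionality):
        return self._embed(texts, output_dimensionality)
```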
### `LLMClient`
- `generate_structured(system_prompt, user_prompt, response_model) -> dict[str, Any]`
Must return a dict parseable by the provided Pydantic model.
## Reconstruction Schema
HAEMA uses structured reconstruction output for long-term memory updates:
```python
from typing import Literal

from pydantic import BaseModel

class MemoryReconstructionResponse(BaseModel):
    memories: list[str]
    coverage: Literal["complete", "incomplete"]
```
If output is empty or `coverage == "incomplete"`, HAEMA runs one refinement pass.
If it still fails, HAEMA safely falls back to normalized `contents`.
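The retry-then-fallback behavior can be sketched as follows. `reconstruct_with_fallback` and `run_llm` are illustrative names, not HAEMA internals:

```python
def reconstruct_with_fallback(run_llm, contents):
    """Initial pass plus one refinement pass; then fall back to contents."""
    for _ in range(2):  # initial reconstruction + one refinement pass
        result = run_llm()
        if result["memories"] and result["coverage"] == "complete":
            return result["memories"]
    # Both passes failed: fall back to the normalized input contents.
    return [c.strip() for c in contents]
```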
## Storage Layout
Given `path="./haema_store"`:
- long-term vector DB: `./haema_store/db`
- core markdown: `./haema_store/core.md`
- latest index DB: `./haema_store/latest.sqlite3`
Long-term metadata fields:
- `timestamp` (UTC ISO8601)
- `timestamp_ms` (Unix epoch milliseconds)
## How `add()` Works
1. Normalize input strings.
- if `contents` is a single `str`, HAEMA first expands it into multiple pre-memory items via structured LLM output
2. Batch query-embed all `contents`.
3. For each query, fetch top-k and keep matches with distance cutoff.
4. Union related memories by `id`.
5. Run one reconstruction call with:
- related memory documents (may be empty)
- all new contents
6. Upsert reconstructed memories with document embeddings.
7. Delete replaced related IDs only after upsert succeeds.
8. Update core once per `add()` call.
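Step 3's distance-cutoff filtering can be sketched as below (illustrative names, not HAEMA internals):

```python
def filter_related(ids, distances, cutoff):
    """Keep only neighbors whose distance is within the cutoff."""
    return [i for i, d in zip(ids, distances) if d <= cutoff]

related = filter_related(["m1", "m2", "m3"], [0.10, 0.30, 0.22], 0.25)
# keeps m1 and m3; m2 is beyond the 0.25 cutoff
```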
## Breaking Changes
Compared to older builds:
1. `EmbeddingClient.embed(...)` is removed.
2. `NoRelatedMemoryResponse` is removed.
3. `MemorySynthesisResponse(update: list[str])` is replaced by `MemoryReconstructionResponse`.
4. `merge_top_k` default changed from `5` to `3`.
## Documentation
- `docs/index.md`
- `docs/usage.md`
- `docs/api.md`
- `docs/architecture.md`
- `docs/release.md`
## License
MIT
| text/markdown | HAEMA Contributors | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"chromadb>=0.5.0",
"numpy>=1.26.0",
"pydantic>=2.7.0",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T08:58:44.393764 | haema-0.2.0.tar.gz | 19,964 | 31/4b/f3ba5eecf439fb4b5407533ec0a85ca90b2622c015bf76db47a1cbd016d1/haema-0.2.0.tar.gz | source | sdist | null | false | 7195cab238b60b4d02f6b5ae64051fd6 | 9a5ee3d1894bacb8048be205d502f78a9fcdf8c2f9dd57304037b32c48fce9da | 314bf3ba5eecf439fb4b5407533ec0a85ca90b2622c015bf76db47a1cbd016d1 | null | [] | 238 |