metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | llama-index-vector-stores-weaviate | 1.5.0 | llama-index vector_stores weaviate integration | # LlamaIndex Vector_Stores Integration: Weaviate
| text/markdown | null | Your Name <you@example.com> | null | null | null | null | [] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"llama-index-core<0.15,>=0.13.0",
"weaviate-client<5,>=4.5.7"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:20:19.830964 | llama_index_vector_stores_weaviate-1.5.0.tar.gz | 9,679 | 3d/9c/fd6eae66b87e736807776c6824bebebb44098630af2dac8cd5cdf5938d0d/llama_index_vector_stores_weaviate-1.5.0.tar.gz | source | sdist | null | false | 891f11e00d8634703cf512f445d304e0 | 99ba6dbdcf92e9ec56f464de2d71ed3c0503e3fc5b71f9d74dbc32da981b0cf5 | 3d9cfd6eae66b87e736807776c6824bebebb44098630af2dac8cd5cdf5938d0d | MIT | [
"LICENSE"
] | 280 |
2.4 | fairyfly-therm | 0.10.0 | Fairyfly extension for translating HBJSON files to INP files for eQuest | 
<img src="https://raw.githubusercontent.com/ladybug-tools/artwork/refs/heads/master/icons_components/fairyfly/png/therm.png" alt="THERM" width="200" height="200">
[](https://github.com/ladybug-tools/fairyfly-therm/actions)
[](https://www.python.org/downloads/release/python-3100/)
[](https://www.python.org/downloads/release/python-370/)
[](https://www.python.org/downloads/release/python-270/)
[](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# fairyfly-therm
Fairyfly extension for energy modeling with LBNL THERM.
[THERM](https://windows.lbl.gov/software-tools#therm-heading) is a widely used and accepted freeware for modeling two-dimensional heat-transfer in building components such as windows, walls, foundations, etc.
## Installation
`pip install -U fairyfly-therm`
## QuickStart
```console
import fairyfly_therm
```
## [API Documentation](http://ladybug-tools.github.io/fairyfly-therm/docs)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/fairyfly-therm
# or
git clone https://github.com/ladybug-tools/fairyfly-therm
```
2. Install dependencies:
```
cd fairyfly-therm
pip install -r dev-requirements.txt
pip install -r requirements.txt
```
3. Run Tests:
```console
python -m pytest tests/
```
4. Generate Documentation:
```console
sphinx-apidoc -f -e -d 4 -o ./docs ./fairyfly_therm
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/fairyfly-therm | null | null | [] | [] | [] | [
"fairyfly-core==0.2.17",
"fairyfly-therm-standards==0.1.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T23:20:00.670534 | fairyfly_therm-0.10.0.tar.gz | 71,197 | ad/e8/1801b6f330d4b1e7485c37fad7a0fa94f3a29e153be995f8ab6bd7ab6c6d/fairyfly_therm-0.10.0.tar.gz | source | sdist | null | false | b9a9d363fe662dabd043c79bd4d38748 | 29ff84893ebff4dbe2f018a293e7488b8b290d2693c20c5e86a785e1ba20d5fa | ade81801b6f330d4b1e7485c37fad7a0fa94f3a29e153be995f8ab6bd7ab6c6d | null | [
"LICENSE"
] | 248 |
2.4 | manim | 0.20.0 | Animation engine for explanatory math videos. | <p align="center">
<a href="https://www.manim.community/"><img src="https://raw.githubusercontent.com/ManimCommunity/manim/main/logo/cropped.png"></a>
<br />
<br />
<a href="https://pypi.org/project/manim/"><img src="https://img.shields.io/pypi/v/manim.svg?style=flat&logo=pypi" alt="PyPI Latest Release"></a>
<a href="https://hub.docker.com/r/manimcommunity/manim"><img src="https://img.shields.io/docker/v/manimcommunity/manim?color=%23099cec&label=docker%20image&logo=docker" alt="Docker image"> </a>
<a href="https://mybinder.org/v2/gh/ManimCommunity/jupyter_examples/HEAD?filepath=basic_example_scenes.ipynb"><img src="https://mybinder.org/badge_logo.svg"></a>
<a href="http://choosealicense.com/licenses/mit/"><img src="https://img.shields.io/badge/license-MIT-red.svg?style=flat" alt="MIT License"></a>
<a href="https://www.reddit.com/r/manim/"><img src="https://img.shields.io/reddit/subreddit-subscribers/manim.svg?color=orange&label=reddit&logo=reddit" alt="Reddit" href=></a>
<a href="https://twitter.com/manimcommunity/"><img src="https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40manimcommunity" alt="Twitter">
<a href="https://manim.community/discord/"><img src="https://img.shields.io/discord/581738731934056449.svg?label=discord&color=yellow&logo=discord" alt="Discord"></a>
<a href="https://docs.manim.community/"><img src="https://readthedocs.org/projects/manimce/badge/?version=latest" alt="Documentation Status"></a>
<img src="https://github.com/ManimCommunity/manim/workflows/CI/badge.svg" alt="CI">
<br />
<br />
<i>An animation engine for explanatory math videos</i>
</p>
<hr />
Manim is an animation engine for explanatory math videos. It's used to create precise animations programmatically, as demonstrated in the videos of [3Blue1Brown](https://www.3blue1brown.com/).
> [!NOTE]
> The community edition of Manim (ManimCE) is a version maintained and developed by the community. It was forked from 3b1b/manim, a tool originally created and open-sourced by Grant Sanderson, also creator of the 3Blue1Brown educational math videos. While Grant Sanderson continues to maintain his own repository, we recommend this version for its continued development, improved features, enhanced documentation, and more active community-driven maintenance. If you would like to study how Grant makes his videos, head over to his repository ([3b1b/manim](https://github.com/3b1b/manim)).
## Table of Contents:
- [Installation](#installation)
- [Usage](#usage)
- [Documentation](#documentation)
- [Docker](#docker)
- [Help with Manim](#help-with-manim)
- [Contributing](#contributing)
- [License](#license)
## Installation
> [!CAUTION]
> These instructions are for the community version _only_. Trying to use these instructions to install [3b1b/manim](https://github.com/3b1b/manim) or instructions there to install this version will cause problems. Read [this](https://docs.manim.community/en/stable/faq/installation.html#why-are-there-different-versions-of-manim) and decide which version you wish to install, then only follow the instructions for your desired version.
Manim requires a few dependencies that must be installed prior to using it. If you
want to try it out first before installing it locally, you can do so
[in our online Jupyter environment](https://try.manim.community/).
For local installation, please visit the [Documentation](https://docs.manim.community/en/stable/installation.html)
and follow the appropriate instructions for your operating system.
## Usage
Manim is an extremely versatile package. The following is an example `Scene` you can construct:
```python
from manim import *
class SquareToCircle(Scene):
def construct(self):
circle = Circle()
square = Square()
square.flip(RIGHT)
square.rotate(-3 * TAU / 8)
circle.set_fill(PINK, opacity=0.5)
self.play(Create(square))
self.play(Transform(square, circle))
self.play(FadeOut(square))
```
In order to view the output of this scene, save the code in a file called `example.py`. Then, run the following in a terminal window:
```sh
manim -p -ql example.py SquareToCircle
```
You should see your native video player program pop up and play a simple scene in which a square is transformed into a circle. You may find some more simple examples within this
[GitHub repository](example_scenes). You can also visit the [official gallery](https://docs.manim.community/en/stable/examples.html) for more advanced examples.
Manim also ships with a `%%manim` IPython magic which allows to use it conveniently in JupyterLab (as well as classic Jupyter) notebooks. See the
[corresponding documentation](https://docs.manim.community/en/stable/reference/manim.utils.ipython_magic.ManimMagic.html) for some guidance and
[try it out online](https://mybinder.org/v2/gh/ManimCommunity/jupyter_examples/HEAD?filepath=basic_example_scenes.ipynb).
## Command line arguments
The general usage of Manim is as follows:

The `-p` flag in the command above is for previewing, meaning the video file will automatically open when it is done rendering. The `-ql` flag is for a faster rendering at a lower quality.
Some other useful flags include:
- `-s` to skip to the end and just show the final frame.
- `-n <number>` to skip ahead to the `n`'th animation of a scene.
- `-f` show the file in the file browser.
For a thorough list of command line arguments, visit the [documentation](https://docs.manim.community/en/stable/guides/configuration.html).
## Documentation
Documentation is in progress at [ReadTheDocs](https://docs.manim.community/).
## Docker
The community also maintains a docker image (`manimcommunity/manim`), which can be found [on DockerHub](https://hub.docker.com/r/manimcommunity/manim).
Instructions on how to install and use it can be found in our [documentation](https://docs.manim.community/en/stable/installation/docker.html).
## Help with Manim
If you need help installing or using Manim, feel free to reach out to our [Discord
Server](https://www.manim.community/discord/) or [Reddit Community](https://www.reddit.com/r/manim). If you would like to submit a bug report or feature request, please open an issue.
## Contributing
Contributions to Manim are always welcome. In particular, there is a dire need for tests and documentation. For contribution guidelines, please see the [documentation](https://docs.manim.community/en/stable/contributing.html).
However, please note that Manim is currently undergoing a major refactor. In general,
contributions implementing new features will not be accepted in this period.
The contribution guide may become outdated quickly; we highly recommend joining our
[Discord server](https://www.manim.community/discord/) to discuss any potential
contributions and keep up to date with the latest developments.
Most developers on the project use `uv` for management. You'll want to have uv installed and available in your environment.
Learn more about `uv` at its [documentation](https://docs.astral.sh/uv/) and find out how to install manim with uv at the [manim dev-installation guide](https://docs.manim.community/en/latest/contributing/development.html) in the manim documentation.
## How to Cite Manim
We acknowledge the importance of good software to support research, and we note
that research becomes more valuable when it is communicated effectively. To
demonstrate the value of Manim, we ask that you cite Manim in your work.
Currently, the best way to cite Manim is to go to our
[repository page](https://github.com/ManimCommunity/manim) (if you aren't already) and
click the "cite this repository" button on the right sidebar. This will generate
a citation in your preferred format, and will also integrate well with citation managers.
## Code of Conduct
Our full code of conduct, and how we enforce it, can be read on [our website](https://docs.manim.community/en/stable/conduct.html).
## License
The software is double-licensed under the MIT license, with copyright by 3blue1brown LLC (see LICENSE), and copyright by Manim Community Developers (see LICENSE.community).
| text/markdown | null | The Manim Community Developers <contact@manim.community>, Grant '3Blue1Brown' Sanderson <grant@3blue1brown.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Multimedia :: Graphics",
"Topic :: Multimedia :: Video",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"audioop-lts>=0.2.1; python_full_version >= \"3.13\"",
"av>=15.0",
"beautifulsoup4>=4.12",
"click>=8.0",
"cloup>=2.0.0",
"decorator>=4.3.2",
"isosurfaces>=0.1.1",
"manimpango<1.0.0,>=0.6.1",
"mapbox-earcut>=1.0.0",
"moderngl-window>=2.0.0",
"moderngl<6.0.0,>=5.7.0",
"networkx>=2.6",
"numpy>=2.1",
"pillow>=11.0",
"pycairo<2.0.0,>=1.14",
"pydub>=0.22.0",
"pygments>=2.17",
"rich>=12.0.0",
"scipy>=1.13.0",
"scipy>=1.15.0; python_full_version >= \"3.13\"",
"screeninfo>=0.7.0",
"skia-pathops>=0.9.0",
"srt>=3.0.0",
"svgelements>=1.9.0",
"tqdm>=4.21.0",
"typing-extensions>=4.12.0",
"watchdog>=2.0.0",
"dearpygui>=1.0.0; extra == \"gui\"",
"jupyterlab>=4.3.4; extra == \"jupyterlab\"",
"notebook>=7.3.2; extra == \"jupyterlab\""
] | [] | [] | [] | [
"repository, https://github.com/manimcommunity/manim",
"documentation, https://docs.manim.community/",
"homepage, https://www.manim.community/",
"Bug Tracker, https://github.com/ManimCommunity/manim/issues",
"Changelog, https://docs.manim.community/en/stable/changelog.html",
"X / Twitter, https://x.com/manim_community",
"Bluesky, https://bsky.app/profile/manim.community",
"Discord, https://www.manim.community/discord/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:19:54.270085 | manim-0.20.0.tar.gz | 551,687 | 25/a2/84871307cab8b135378a810fc10fe0493600511e54c9ea36ee55b04fef4b/manim-0.20.0.tar.gz | source | sdist | null | false | 43a2b60dcac49782b056648e2684cbb4 | 9ab10520c431872109aa8dfaa48aeb9872849efdbe2db65d2cc043ba77062938 | 25a284871307cab8b135378a810fc10fe0493600511e54c9ea36ee55b04fef4b | MIT | [
"LICENSE",
"LICENSE.community"
] | 2,550 |
2.4 | mkdocs-zettelkasten | 0.5.0 | Add Zettelkasten features to MkDocs | # MkDocs Zettelkasten
This is a [Zettelkasten](https://zettelkasten.de) theme and plugin for [MkDocs](https://www.mkdocs.org). It renders the MkDocs pages as cards (zettels).
For more information, head on over to [the documentation](https://buvis.github.io/mkdocs-zettelkasten/)
## Install
```bash
pip install mkdocs-zettelkasten
```
## Development
```bash
uv sync # install deps
uv run playwright install # install browsers (first time only)
```
### Testing
Three levels of testing, from fast to thorough:
**1. Unit tests** — plugin logic without building the site:
```bash
make test # ~0.3s, 124 tests
```
**2. E2E tests** — playwright builds the site from `docs/`, serves it, and checks the UI automatically:
```bash
make test-e2e # ~27s, 63 tests
```
**3. Manual acceptance** — build and serve the site from `docs/`, open localhost:8000 in a browser and walk through the checklists in `.local/testscripts/`:
```bash
make run # default (solarized, validation on)
make run-selenized # selenized theme
make run-editor # markdown editor enabled
make run-no-validation # validation disabled
```
## Release
`mise` adds `dev/bin` to PATH. Tags with `rc` in the name publish to TestPyPI; stable tags go to PyPI. Manual workflow dispatch defaults to TestPyPI.
```bash
release patch|minor|major # tag and push -> CI publishes to PyPI
release --pre rc1 # pre-release current version to TestPyPI
release --pre rc1 minor # bump + pre-release to TestPyPI
release # after rc: strip suffix, release stable to PyPI
release --dry-run patch # preview without doing anything
```
**First-time setup** (already done for mkdocs-zettelkasten):
- pypi.org: add trusted publisher (owner: `buvis`, repo: `mkdocs-zettelkasten`, workflow: `publish.yml`, env: `pypi`)
- test.pypi.org: same, env: `testpypi`
- GitHub repo settings: create `pypi` and `testpypi` environments
The release script updates the pinned version in `.github/workflows/requirements.txt` (used by docs deployment), commits, tags, and pushes both. Version derives from git tags via hatch-vcs — no version field in `pyproject.toml`. This works for pure Python packages. Projects with native extensions (like buvis/gems with maturin/Rust) need an explicit version in `pyproject.toml` because maturin reads it at build time.
Stable releases (no `rc` in tag) auto-create a GitHub Release with a changelog generated from conventional commits since the previous tag.
| text/markdown | null | Tomáš Bouška <tomas@buvis.net> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"colorlog==6.10.1",
"gitpython==3.1.46",
"jinja2==3.1.6",
"mkdocs==1.6.1",
"pymdown-extensions==10.21",
"pyyaml==6.0.3",
"tzlocal==5.3.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:19:07.789928 | mkdocs_zettelkasten-0.5.0.tar.gz | 814,423 | cf/81/46cc75746ce41bf0b136830878a5aa56f8f62c000b53e32325cc6a610b79/mkdocs_zettelkasten-0.5.0.tar.gz | source | sdist | null | false | bfd908e84af9c09a08a0285e47342096 | d96d9cec21e9c5bd02f188579f412eb3d164037e36dfa1c76c65cb9c1ce1d5c1 | cf8146cc75746ce41bf0b136830878a5aa56f8f62c000b53e32325cc6a610b79 | null | [
"LICENSE"
] | 234 |
2.4 | open-strix | 0.1.9 | Minimal autonomous agent harness with LangGraph Deep Agents | # open-strix
[](https://pypi.org/project/open-strix/)
Minimal, non-production autonomous agent harness built with LangGraph Deep Agents.
## Install uv
Install `uv` first:
```bash
# macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
```
```powershell
# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Official install docs (alternate methods like Homebrew, pipx, winget):
- https://docs.astral.sh/uv/getting-started/installation/
## Quick start (recommended)
```bash
uvx open-strix setup --home my-agent --github
cd my-agent
uvx open-strix
```
If you run `uvx open-strix` in a plain directory with no git repo, it now auto-runs setup first.
`open-strix setup` bootstraps the target directory with:
- `state/`
- `skills/`
- `blocks/`
- `logs/events.jsonl`
- `logs/journal.jsonl`
- `scheduler.yaml`
- `config.yaml`
- `checkpoint.md`
- `.env` (template)
It also prints a CLI walkthrough with links and step-by-step setup for:
- MiniMax M2.5
- Kimi/Moonshot
- Discord bot creation + permissions
- `config.yaml` values
Then `uvx open-strix` connects to Discord if a token is present (by default `DISCORD_TOKEN`).
Otherwise it runs in local stdin mode.
## Installed mode (optional)
If you prefer a local project install instead of `uvx`:
```bash
uv init --python 3.11
uv add open-strix
uv run open-strix setup --home .
uv run open-strix
```
## Install and auth `gh` (GitHub CLI)
If you want `open-strix setup --github`, install and log into `gh` first.
Install:
```bash
# macOS (Homebrew)
brew install gh
# Ubuntu / Debian
sudo apt install gh
```
```powershell
# Windows (winget)
winget install --id GitHub.cli
```
Authenticate:
```bash
gh auth login
gh auth status
```
Official docs:
- https://cli.github.com/
- https://github.com/cli/cli#installation
## Create a GitHub repo and set remote
`open-strix` auto-syncs with git after each turn, so set up a repo + remote early.
Recommended:
```bash
uvx open-strix setup --home my-agent --github
```
Keep this private, since agent memory and logs can contain sensitive context.
Manual fallback with GitHub CLI (`gh`):
```bash
cd my-agent
gh auth login
gh repo create <repo-name> --private --source=. --remote=origin
git add .
git commit -m "Initial commit"
git push -u origin HEAD
```
Manual fallback with GitHub web UI:
1. Create a new **private** empty repo on GitHub (no README, no `.gitignore`, no license).
2. In your project directory:
```bash
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin git@github.com:<your-user>/<repo-name>.git
git push -u origin main
```
If you prefer HTTPS:
```bash
git remote add origin https://github.com/<your-user>/<repo-name>.git
```
Check remote config:
```bash
git remote -v
```
## Environment setup
Start from the example env file:
```bash
cp .env.example .env
```
Default model setup in this project expects an Anthropic-compatible endpoint:
- `ANTHROPIC_API_KEY`
- `ANTHROPIC_BASE_URL`
Discord runtime uses:
- `DISCORD_TOKEN`
Optional:
- `DISCORD_TEST_CHANNEL_ID`
- `OPEN_STRIX_TEST_MODEL`
## Models
### Default: MiniMax M2.5
This project defaults to:
- `model: MiniMax-M2.5` in `config.yaml`
- provider prefix `anthropic:` internally (so the runtime uses `anthropic:MiniMax-M2.5`)
Use MiniMax's Anthropic-compatible endpoint in your `.env`:
- `ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic`
MiniMax docs:
- Anthropic compatibility + model IDs: https://platform.minimax.io/docs/api-reference/text-anthropic-api
- AI coding tools guide (M2.5 context): https://platform.minimax.io/docs/guides/text-ai-coding-tools
### Alternative: Kimi K2.5
If you want Kimi instead of MiniMax:
1. Point Anthropic-compatible env vars at your Moonshot endpoint (see Moonshot docs for current endpoint details).
2. Set `model` in `config.yaml` to the current Kimi model ID you want.
Moonshot docs:
- Docs overview: https://platform.moonshot.ai/docs/overview
- K2 update post (links to current quick-start): https://platform.moonshot.ai/blog/posts/Kimi_API_Newsletter
Note: the Moonshot update posted on November 8, 2025 references `kimi-k2-thinking` and `kimi-k2-thinking-turbo`. If you refer to these as "K2.5", use the exact current model IDs from Moonshot docs/console.
### Model config behavior
`config.yaml` key:
- `model`
Behavior:
- If `model` has no `:` (example `MiniMax-M2.5`), open-strix treats it as Anthropic-provider and uses `anthropic:<model>`.
- If `model` already includes `provider:model` (example `openai:gpt-4o-mini`), it is passed through unchanged.
## Discord setup
Use Discord's Developer Portal web UI:
1. Go to https://discord.com/developers/applications and create a new application.
2. Open your app, then go to the `Bot` tab.
3. Under `Token`, generate/reset token and copy it (you won't be able to re-view it later).
4. In the same `Bot` tab, enable `Message Content Intent` (required for open-strix message handling).
5. Go to `Installation`.
6. Under `Installation Contexts`, enable `Guild Install`.
7. Under `Install Link`, pick `Discord Provided Link`.
8. Under `Default Install Settings`:
- `Guild Install` scopes: select `bot` (and `applications.commands` if you plan to add slash commands).
- `Permissions`: for this bot, a practical baseline is:
- `View Channels`
- `Send Messages`
- `Send Messages in Threads`
- `Read Message History`
- `Add Reactions`
9. Copy the generated install link, open it in your browser, pick your server, and authorize.
Reference docs for the same flow:
- Getting started (app creation + installation flow): https://docs.discord.com/developers/quick-start/getting-started
- OAuth2 scopes/install links: https://docs.discord.com/developers/topics/oauth2
- Permissions reference: https://docs.discord.com/developers/topics/permissions
- Gateway + intents reference: https://docs.discord.com/developers/events/gateway
Where this is configured in open-strix:
- Token env var name: `config.yaml` -> `discord_token_env` (default `DISCORD_TOKEN`)
- Actual token value: your `.env`
- Bot allowlist behavior: `config.yaml` -> `always_respond_bot_ids`
## `config.yaml` tour
Default:
```yaml
model: MiniMax-M2.5
journal_entries_in_prompt: 90
discord_messages_in_prompt: 10
discord_token_env: DISCORD_TOKEN
always_respond_bot_ids: []
```
Key meanings:
- `model`: model name (or `provider:model`)
- `journal_entries_in_prompt`: how many journal entries go into each prompt
- `discord_messages_in_prompt`: how many recent Discord messages go into each prompt
- `discord_token_env`: env var name to read Discord token from
- `always_respond_bot_ids`: bot author IDs the agent is allowed to respond to
Related files:
- `scheduler.yaml`: cron/time-of-day jobs
- `blocks/*.yaml`: memory blocks surfaced in prompt context
- `checkpoint.md`: returned by `journal` tool after a journal write
- `skills/`: user-editable local skills
- `/.open_strix_builtin_skills/skill-creator/SKILL.md`: packaged built-in skill source mounted as read-only
Runtime behavior note:
- Git sync (`git add -A` -> commit -> push) runs automatically after each processed turn.
## Personality bootstrap
Creating an agent is less about code, and a whole lot more about the time you spend talking to it.
[Lily Luo](https://www.appliedaiformops.com/p/what-building-a-persistent-ai-agent) has a great post on
forming agent personalities.
You should plan on spending time:
* Communication patterns — correct the agent to know when and how often it should use the `send_message` and `react` tools. Agents often initially find it surprising that their final message is ignored, so they need to use their tools instead.
* Talk about things you're interested in, see what the agent becomes interested in
## Tests
```bash
uv run pytest -q
```
Discord coverage includes:
- unit tests with mocked boundaries in `tests/test_discord.py`
- live integration tests against real Discord in `tests/test_discord_live.py`
Live test env vars:
- `DISCORD_TOKEN` (required for live connect test)
- `DISCORD_TEST_CHANNEL_ID` (optional; enables live send-message test)
## Safety baseline
- Agent file writes/edits are blocked outside `state/`.
- Reads still use repository scope.
- This is intentionally simple and should not be treated as production-ready.
| text/markdown | null | Tim Kellogg <timothy.kellogg@gmail.com> | null | Tim Kellogg <timothy.kellogg@gmail.com> | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"apscheduler>=3.11.2",
"deepagents>=0.4.1",
"discord-py>=2.6.4",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.3"
] | [] | [] | [] | [
"Homepage, https://github.com/tkellogg/open-strix"
] | uv/0.7.4 | 2026-02-20T23:18:44.878515 | open_strix-0.1.9.tar.gz | 177,369 | 2d/b4/063e5451550bacb63d81d3dfa72bfd7ea072f86408253bf595ad1806c23b/open_strix-0.1.9.tar.gz | source | sdist | null | false | bf3574477e69e11a1dc909b36752dd82 | c49f2af4caf7e8883fb13ce50b561b2a186b16e0615a91bff6673cc605edf9d7 | 2db4063e5451550bacb63d81d3dfa72bfd7ea072f86408253bf595ad1806c23b | null | [] | 219 |
2.4 | gpype | 3.0.9 | A Python Software Development Kit for BCI/Neuroscience Applications | # g.Pype
[](http://gtec.at)
[](https://pypi.org/project/gpype/)
[](https://pypi.org/project/gpype/)


[](https://github.com/gtec-medical-engineering/gpype/blob/main/LICENSE-GNCL.txt)
[](https://gpype.gtec.at/)
g.Pype is a Python Software Development Kit (SDK) for building neuroscience and Brain-Computer Interface (BCI) applications. It is designed to be simple to use, with a clear and well-documented coding interface with many examples that help you get started quickly. It provides essential building blocks that can be combined and adapted to your needs, while remaining open to integration with other Python packages. g.Pype runs on Windows and macOS.
# Quickstart
Install `gpype` and clone the GitHub repository:
```shell
pip install gpype
git clone https://github.com/gtec-medical-engineering/gpype.git
```
Navigate to the subfolder `./gpype/examples` and run the example scripts directly from your IDE (e.g., VS Code, PyCharm, ...).
# Documentation
Full documentation is available at [gpype.gtec.at](https://gpype.gtec.at).
# License
`gpype` is licensed under the **g.tec Non-Commercial License (GNCL)**. See the [LICENSE](https://github.com/gtec-medical-engineering/gpype/blob/main/LICENSE-GNCL.txt) file for details.
| text/markdown | null | "g.tec medical engineering GmbH" <support@gtec.at> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24.4",
"scipy>=1.10.1",
"sympy>=1.13.0",
"psutil>=6.1.1",
"pynput>=1.7.7",
"pyobjc==10.3.2; sys_platform == \"darwin\"",
"PySide6==6.9.0",
"PySide6_Addons==6.9.0",
"PySide6_Essentials==6.9.0",
"pyqtgraph>=0.13.3",
"pylsl==1.17.6",
"ioiocore>=4.0.7",
"gtec-ble>=2.0.1",
"gtec_gds>=1.3.0; sys_platform == \"win32\"",
"gtec_pp>=1.1.1; sys_platform == \"win32\"",
"gtec_licensing>=1.0.2",
"sphinx>=7.1.2; extra == \"doc\"",
"sphinx-rtd-theme>=2.0.1; extra == \"doc\"",
"myst-parser>=3.0.1; extra == \"doc\"",
"sphinx-autodoc-typehints>=2.0.1; extra == \"doc\"",
"rstcheck; extra == \"doc\"",
"sphinx-lint; extra == \"doc\"",
"sphinx-design; extra == \"doc\"",
"sphinxcontrib-gtagjs; extra == \"doc\"",
"pytest-cov>=5.0.0; extra == \"test\"",
"pytest-html>=4.1.1; extra == \"test\"",
"pytest-qt>=4.4.0; extra == \"test\"",
"coverage>=7.6.1; extra == \"test\"",
"setuptools_scm; extra == \"test\"",
"setuptools>=75.3.2; extra == \"build\"",
"cython>=3.1.1; extra == \"build\"",
"build>=1.2.2; extra == \"build\"",
"wheel>=0.45.1; extra == \"build\"",
"tomli; python_version < \"3.11\" and extra == \"build\"",
"setuptools_scm; extra == \"build\"",
"flake8==7.3.0; extra == \"lint\"",
"black==25.1.0; extra == \"lint\"",
"isort==6.0.1; extra == \"lint\"",
"pyinstaller; extra == \"apps\"",
"twine; extra == \"pypi\"",
"paramiko; extra == \"release\"",
"requests; extra == \"release\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T23:18:14.144520 | gpype-3.0.9-cp313-cp313-win_amd64.whl | 181,825 | c2/35/354d1db6ca08fd65757a2bfd7e14a428089d5e2c67d700ee15984eb38d13/gpype-3.0.9-cp313-cp313-win_amd64.whl | cp313 | bdist_wheel | null | false | 53633db68d60199a51c93e5b9a6f91fa | 6136392363a66979abf4f606eefc30b66817ccaa35d50fc9458aaf6addd1f86b | c235354d1db6ca08fd65757a2bfd7e14a428089d5e2c67d700ee15984eb38d13 | LicenseRef-GNCL | [
"LICENSE-GNCL.txt"
] | 462 |
2.4 | policyengine-us | 1.572.1 | Add your description here. | # PolicyEngine US
[](https://codecov.io/gh/PolicyEngine/policyengine-us)
[](https://badge.fury.io/py/policyengine-us)
[](https://github.com/psf/black)
PolicyEngine US is a microsimulation model of the US state and federal tax and benefit system.
To install, run `pip install policyengine-us`.
| text/markdown | null | PolicyEngine <hello@policyengine.org> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"microdf-python>=1.0.0",
"pandas>=2.0",
"policyengine-core>=3.23.5",
"tqdm>=4.67.1",
"black==26.1.0; extra == \"dev\"",
"build>=1.2.2.post1; extra == \"dev\"",
"coverage>=7.9.2; extra == \"dev\"",
"furo>=2024.8.6; extra == \"dev\"",
"jupyter-book>=1.0.4.post1; extra == \"dev\"",
"setuptools>=80.9.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:18:01.565390 | policyengine_us-1.572.1.tar.gz | 8,675,508 | 7f/8d/60a3fc7b320959347f495286dc7d58d121645595f5b172813a63916b2e57/policyengine_us-1.572.1.tar.gz | source | sdist | null | false | fe8ec3b5c8eeaa8bb28b1381076e0c51 | 63ff4a6f2d83477bad7c482c509943171078a9632bf98a40711d0fb9bdf9b572 | 7f8d60a3fc7b320959347f495286dc7d58d121645595f5b172813a63916b2e57 | null | [
"LICENSE"
] | 512 |
2.4 | python4cpm | 1.0.15 | Python for CPM | # Python4CPM
A simple way of using python scripts with CyberArk CPM rotations. This module levereages the [Credential Management .NET SDK](https://docs.cyberark.com/privilege-cloud-standard/latest/en/content/pasimp/plug-in-netinvoker.htm) from CyberArk to securely offload a password rotation logic into a python script.
This platform allows you to duplicate it multiple times, simply changing its settings from Privilege Cloud/PVWA to point to different python scripts leveraging the module `python4cpm`.
## Installation
### Preparing Python
1. Install Python in CPM. **Python must be installed for all users when running the install wizard**.
2. Create a venv in CPM, by running `py -m venv c:\venv`. Use the default location `c:\venv` or a custom one (e.g., `c:\my-venv-path`).
3. Install `python4cpm` in your venv:
- If your CPM can connect to the internet, install with `c:\venv\Scripts\pip install python4cpm`.
- If your CPM cannot connect to the internet:
- Download the [latest wheel](https://github.com/gonatienza/python4cpm/releases/download/latest/python4cpm-wheel.zip).
- Copy the file to CPM and extract to a temporary location.
- From the temporary location run `c:\venv\Scripts\pip install --no-index --find-links=.\python4cpm-wheel python4cpm`.
### Importing the platform
1. Download the latest [Credential Management .NET SDK](https://community.cyberark.com/marketplace/s/#a3550000000EkA0AAK-a3950000000jjoOAAQ) and place its content in the bin folder of CPM (`C:\Program Files (x86)\CyberArk\Password Manager\bin`).
2. Download the [latest platform zip file](https://github.com/gonatienza/python4cpm/releases/download/latest/python4cpm-platform.zip).
3. Import the platform zip file into Privilege Cloud/PVWA `(Administration -> Platform Management -> Import platform)`.
4. Craft your python script and place it within the bin folder of CPM (`C:\Program Files (x86)\CyberArk\Password Manager\bin`).
5. Duplicate the imported platform in Privilege Cloud/PVWA `(Administration -> Platform Management -> Application -> Python for CPM)` and name it after your application (e.g., My App).
6. Edit the duplicated platform and specify the path of your placed script in the bin folder of CPM, under `Target Account Platform -> Automatic Platform Management -> Additional Policy Settings -> Parameters -> PythonScriptPath -> Value` (e.g., `bin\myapp.py`).
7. If you used a custom venv location, also update `Target Account Platform -> Automatic Platform Management -> Additional Policy Settings -> Parameters -> PythonExePath -> Value` with the custom path for the venv's `python.exe` file (e.g., `c:\my-venv-path\Scripts\python.exe`).
8. If you want to disable logging, update `Target Account Platform -> Automatic Platform Management -> Additional Policy Settings -> Parameters -> PythonLogging -> Value` to `no`.
9. If you want to change the logging level to `debug`, update `Target Account Platform -> Automatic Platform Management -> Additional Policy Settings -> Parameters -> PythonLoggingLevel -> Value` to `debug`.
10. For new applications repeat steps from 4 to 9.
## Python Script
### Example:
```python
from python4cpm import Python4CPM
p4cpm = Python4CPM("MyApp") # this instantiates the object and grabs all arguments and secrets shared by TPC
# These are the usable properties and related methods from the object:
p4cpm.args.action # action requested from CPM
p4cpm.args.address # address from the account address field
p4cpm.args.username # username from the account username field
p4cpm.args.reconcile_username # reconcile username from the linked reconcile account
p4cpm.args.logon_username # logon username from the linked logon account
p4cpm.args.logging # used to carry the platform logging settings for python
p4cpm.secrets.password.get() # get str from password received from the vault
p4cpm.secrets.new_password.get() # get str from new password in case of a rotation
p4cpm.secrets.logon_password.get() # get str from linked logon account password
p4cpm.secrets.reconcile_password.get() # get str from linked reconcile account password
# Logging methods -> Will only log if Automatic Platform Management -> Additional Policy Settings -> Parameters -> PythonLogging is set to yes (default is yes)
p4cpm.log_error("this is an error message") # logs error into Logs/ThirdParty/Python4CPM/MyApp.log
p4cpm.log_warning("this is a warning message") # logs warning into Logs/ThirdParty/Python4CPM/MyApp.log
p4cpm.log_info("this is an info message") # logs info into Logs/ThirdParty/Python4CPM/MyApp.log
# Logging level -> Will only log debug messages if Automatic Platform Management -> Additional Policy Settings -> Parameters -> PythonLoggingLevel is set to debug (default is info)
p4cpm.log_debug("this is an debug message") # logs info into Logs/ThirdParty/Python4CPM/MyApp.log if logging level is set to debug
# Terminate signals -> ALWAYS use one of the following three signals to terminate the script
## p4cpm.close_success() # terminate with success state
## p4cpm.close_fail() # terminate with recoverable failed state
## p4cpm.close_fail(unrecoverable=True) # terminate with unrecoverable failed state
# If no signal is call, CPM will not know if the action was successful and display an error
# Verification example -> verify the username and password are valid
def verify(from_reconcile=False):
if from_reconcile is False:
pass
# Use p4cpm.args.address, p4cpm.args.username, p4cpm.secrets.password.get()
# for your logic in a verification
else:
pass
# Use p4cpm.args.address, p4cpm.args.reconcile_username, p4cpm.secrets.reconcile_password.get()
# for your logic in a verification
result = True
if result is True:
p4cpm.log_info("verification successful") # logs info message into Logs/ThirdParty/Python4CPM/MyApp.log
else:
p4cpm.log_error("something went wrong") # logs error message Logs/ThirdParty/Python4CPM/MyApp.log
raise Exception("verify failed") # raise to trigger failed termination signal
# Rotation example -> rotate the password of the account
def change(from_reconcile=False):
if from_reconcile is False:
pass
# Use p4cpm.args.address, p4cpm.args.username, p4cpm.secrets.password.get()
# and p4cpm.secrets.new_password.get() for your logic in a rotation
else:
pass
# Use p4cpm.args.address, p4cpm.args.username, p4cpm.args.reconcile_username,
# p4cpm.secrets.reconcile_password.get() and p4cpm.secrets.new_password.get() for your logic in a reconciliation
result = True
if result is True:
p4cpm.log_info("rotation successful") # logs info message into Logs/ThirdParty/Python4CPM/MyApp.log
else:
p4cpm.log_error("something went wrong") # logs error message Logs/ThirdParty/Python4CPM/MyApp.log
raise Exception("change failed") # raise to trigger failed termination signal
if __name__ == "__main__":
try:
if p4cpm.args.action == Python4CPM.ACTION_VERIFY: # class attribute ACTION_VERIFY holds the verify action value
verify()
p4cpm.close_success() # terminate with success state
elif p4cpm.args.action == Python4CPM.ACTION_LOGON: # class attribute ACTION_LOGON holds the logon action value
verify()
p4cpm.close_success() # terminate with success state
elif p4cpm.args.action == Python4CPM.ACTION_CHANGE: # class attribute ACTION_CHANGE holds the password change action value
change()
p4cpm.close_success() # terminate with success state
elif p4cpm.args.action == Python4CPM.ACTION_PRERECONCILE: # class attribute ACTION_PRERECONCILE holds the pre-reconcile action value
verify(from_reconcile=True)
p4cpm.close_success() # terminate with success state
# Alternatively ->
## p4cpm.log_error("reconciliation is not supported") # let the logs know that reconciliation is not supported
## p4cpm.close_fail() # let CPM know to check the logs
elif p4cpm.args.action == Python4CPM.ACTION_RECONCILE: # class attribute ACTION_RECONCILE holds the reconcile action value
change(from_reconcile=True)
p4cpm.close_success() # terminate with success state
# Alternatively ->
## p4cpm.log_error("reconciliation is not supported") # let the logs know that reconciliation is not supported
## p4cpm.close_fail() # let CPM know to check the logs
else:
p4cpm.log_error(f"invalid action: '{p4cpm.args.action}'") # logs into Logs/ThirdParty/Python4CPM/MyApp.log
p4cpm.close_fail(unrecoverable=True) # terminate with unrecoverable failed state
except Exception as e:
p4cpm.log_error(f"{type(e).__name__}: {e}")
p4cpm.close_fail()
```
(*) a more realistic examples can be found [here](https://github.com/gonatienza/python4cpm/blob/main/examples).
When doing `verify`, `change` or `reconcile` from Privilege Cloud/PVWA:
1. Verify -> the sciprt will be executed once with the `p4cpm.args.action` as `Python4CPM.ACTION_VERIFY`.
2. Change -> the sciprt will be executed twice, once with the action `p4cpm.args.action` as `Python4CPM.ACTION_LOGON` and once as `Python4CPM.ACTION_CHANGE`.
- If all actions are not terminated with `p4cpm.close_success()` the overall change will fail.
3. Reconcile -> the sciprt will be executed twice, once with the `p4cpm.args.action` as `Python4CPM.ACTION_PRERECONCILE` and once as `Python4CPM.ACTION_RECONCILE`.
- If all actions are not terminated with `p4cpm.close_success()` the overall reconcile will fail.
4. When `p4cpm.args.action` comes as `Python4CPM.ACTION_VERIFY`, `Python4CPM.ACTION_LOGON` or `Python4CPM.ACTION_PRERECONCILE`: `p4cpm.secrets.new_password.get()` will always return an empty string.
5. If a logon account is not linked, `p4cpm.args.logon_username` and `p4cpm.secrets.logon_password.get()` will return an empty string.
6. If a reconcile account is not linked, `p4cpm.args.reconcile_username` and `p4cpm.secrets.reconcile_password.get()` will return an empty string.
### Installing dependancies in python venv
As with any python venv, you can install dependancies in your venv.
1. If your CPM can connect to the internet:
- You can use regular pip install commands (e.g., `c:\venv\Scripts\pip.exe install requests`).
2. If your CPM cannot connect to the internet:
- You can download packages for an offline install. More info [here](https://pip.pypa.io/en/stable/cli/pip_download/).
## Dev Helper:
For dev purposes, `NETHelper` is a companion helper that simplifies the instantiation of the `Python4CPM` object by simulating how the plugin passes arguments and secrets to `Python4CPM`.
Install this module (in a dev workstation) with:
```bash
pip install python4cpm
```
**Note**: As CPM runs in Windows, the plugin was built to pass secrets securely to the `Python4CPM.crypto` module using the Data Protection API (DPAPI). For dev purposes in Linux/Mac dev workstations, those secrets will appear as plaintext in the process environment. This is informational only, the module will use its encryption/decryption capabilities automatically in Windows and you do not have to do anything specific to enable it.
### Example:
```python
from python4cpm import TPCHelper, Python4CPM
from getpass import getpass
# Get secrets for your password, logon account password, reconcile account password and new password
# You can use an empty string if it does not apply
password = getpass("password: ") # password from account
logon_password = getpass("logon_password: ") # password from linked logon account
reconcile_password = getpass("reconcile_password: ") # password from linked reconcile account
new_password = getpass("new_password: ") # new password for the rotation
p4cpm = TPCHelper.run(
action=Python4CPM.ACTION_LOGON, # use actions from Python4CPM.ACTION_*
address="myapp.corp.local", # populate with the address from your account properties
username="jdoe", # populate with the username from your account properties
logon_username="ldoe", # populate with the logon account username from your linked logon account
reconcile_username="rdoe", # ppopulate with the reconcile account username from your linked logon account
logging="yes", # populate with the PythonLogging parameter from the platform: "yes" or "no"
logging_level="info", # populate with the PythonLoggingLevel parameter from the platform: "info" or "debug"
password=password,
logon_password=logon_password,
reconcile_password=reconcile_password,
new_password=new_password
)
# Use the p4cpm object during dev to build your script logic
assert password == p4cpm.secrets.password.get()
p4cpm.log_info("success!")
p4cpm.close_success()
# Remember for your final script:
## changing the definition of p4cpm from TPCHelper.run() to Python4CPM("MyApp")
## remove any secrets prompting
## remove the TPCHelper import
```
Remember for your final script:
- Change the definition of `p4cpm` from `p4cpm = TPCHelper.run(**kwargs)` to `p4cpm = Python4CPM("MyApp")`.
- Remove any secrets prompting or interactive interruptions.
- Remove the import of `TPCHelper`.
| text/markdown | null | Gonzalo Atienza Rela <gonatienza@gmail.com> | null | null | MIT License
Copyright (c) 2026 Gonzalo Atienza Rela
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:17:59.524399 | python4cpm-1.0.15.tar.gz | 13,055 | 95/db/1b4a0884fa679414055d50d5ca97e05bc7d0ad639f9d1d78cf22bb6b77c5/python4cpm-1.0.15.tar.gz | source | sdist | null | false | 43692303863740088bcce832085fee67 | 2d0b719ba8ccebe1d6a68dc9b0e85dfba93fef64c49ed925bd1416f8b2a2a3c8 | 95db1b4a0884fa679414055d50d5ca97e05bc7d0ad639f9d1d78cf22bb6b77c5 | null | [
"LICENSE"
] | 233 |
2.4 | mars-patcher | 0.11.0 | An open source randomizer patcher for Metroid Fusion. | # Metroid Advance Randomizer System Patcher
This is an open source randomizer patcher for Metroid Fusion. It is not intended for standalone use, but rather as a library to help implement plando- or randomizers.
Here is a list of projects that use this library:
- [Randovania](https://randovania.org/)
- [An Archipelago APWorld](https://github.com/Rosalie-A/Archipelago/wiki/Metroid-Fusion-Setup-Guide)
## Developer Info
Running from source:
- Create a venv: `python -m venv venv`
- Activate the venv:
- Windows: `call venv\scripts\activate`
- Unix-based: `source ./venv/bin/activate`
- Install the project as editable: `pip install -e .[tooling]`
- Run: `python -m mars_patcher`
Before running the patcher, you want to initialize the required assembly patches into `src/mars_patcher/data/patches/mf_u/asm`.
The easiest way to do that is by running `python pull-assembly-patches.py`, which will fetch the patches from the correct release.
However for development purposes, you may want to create the assembly patches yourself manually and then copy them to that directory.
| text/markdown | biospark | null | null | null | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.10"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonschema",
"frozendict",
"typing-extensions",
"requests; extra == \"tooling\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-mock; extra == \"test\"",
"mypy; extra == \"typing\"",
"types-requests; extra == \"typing\"",
"types-pyinstaller; extra == \"typing\"",
"types-jsonschema; extra == \"typing\"",
"pre-commit; extra == \"typing\""
] | [] | [] | [] | [
"Repository, https://github.com/MetroidAdvRandomizerSystem/mars-patcher"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:17:18.136983 | mars_patcher-0.11.0.tar.gz | 322,913 | 0e/56/cb647e5b2d7edc32bbe9e0c73714b04a999110ad64ef03d1210c4619ba35/mars_patcher-0.11.0.tar.gz | source | sdist | null | false | bf7a4e48b8a225c99625aab58a691bd1 | add432171a37122b910af952f41d1adab0b374886a41153061a5527259a51fc9 | 0e56cb647e5b2d7edc32bbe9e0c73714b04a999110ad64ef03d1210c4619ba35 | null | [
"LICENSE"
] | 407 |
2.4 | sumo3Dviz | 0.4.0 | 3D visualization of SUMO traffic simulations | <h1>
<center>
<table width="100%">
<tr>
<td align="center">
<img src="resources/Figure_Banner.PNG"
alt="sumo3Dviz"
style="height: 3.5em; vertical-align: middle; margin-right: 0.4em;">
sumo3Dviz <img src="resources/Figure_Banner.PNG"
alt="sumo3Dviz"
style="height: 3.5em; vertical-align: middle; margin-left: 0.4em;">
</td>
</tr>
<tr>
<td align="center">
A three dimensional traffic visualisation
</td>
</tr>
</table>
</center>
</h1>
[](https://pypi.org/project/sumo3Dviz/)
[](https://www.gnu.org/licenses/gpl-3.0)
[](https://github.com/DerKevinRiehl/sumo3Dviz/actions/workflows/build.yml)
[](https://github.com/DerKevinRiehl/sumo3Dviz/actions/workflows/test_rendering.yml)
**sumo3Dviz** is a lightweight, open-source 3D visualisation pipeline for SUMO traffic simulations.
It converts standard SUMO simulation outputs, such as vehicle trajectories and signal states, into high-quality 3D renderings using a Python-based framework.
<table>
<tr>
<td><b>Features:</b></td>
<td><img src="resources/Figure_OSLogo.png" style="height: 2.5em"/></td>
<td>Major OS Support</td>
<td><img src="resources/Figure_PythonLogo.png" style="height: 2.5em"/></td>
<td>Python 3.9 Support</td>
</tr>
</table>
<details>
<summary><strong>Table of Contents</strong></summary>
- [Highlights](#highlights)
- [Installation](#installation)
- [Usage](#usage)
- [Command Line Interface (CLI)](#command-line-interface-cli)
- [Python Code](#python-code)
- [Case Study: Barcelona](#case-study-barcelona)
- [Step 1: Prepare Sumo Simulation](#step-1-prepare-sumo-simulation)
- [Step 2: Prepare Visualisation Configuration](#step-2-prepare-visualisation-configuration)
- [Step 3: Render Video Visualisation with sumo3Dviz](#step-3-render-video-visualisation-with-sumo3dviz)
- [Citations](#citations)
</details>
## Highlights
<table>
<tr>
<td colspan="4"><b><center>Visualisation Modes</center></b></td>
</tr>
<tr>
<td><center>(1) Eulerian</center></td>
<td><center>(2) Lagrangian</center></td>
<td><center>(3) Cinematic</center></td>
<td><center>(4) Interactive</center></td>
</tr>
<tr>
<td><img src="resources/Figure_DemoVis_Euler.png" style="height: 120px"/></td>
<td><img src="resources/Figure_DemoVis_Lagrang.png" style="height: 120px"/></td>
<td><img src="resources/Figure_DemoVis_Cinematic.png" style="height: 120px"/></td>
<td><img src="resources/Figure_DemoVis_Interactive.png" style="height: 120px"/></td>
</tr>
<tr>
<td><img src="resources/video_eulerian.gif" style="height: 120px"/></td>
<td><img src="resources/video_lagrangian.gif" style="height: 120px"/></td>
<td><img src="resources/video_cinematic.gif" style="height: 120px"/></td>
<td><img src="resources/video_interactive.gif" style="height: 120px"/></td>
</tr>
</table>
Video Demos on YouTube:
- https://www.youtube.com/watch?v=wEUbjlqigyg
- https://www.youtube.com/watch?v=dq9pH1Cj7gA
- https://www.youtube.com/watch?v=XvpG5cbv7Ig
## Installation
The python package **sumo3Dviz** can be installed using pip:
```bash
pip install sumo3Dviz
```
**Please note:**
Currently only Python 3.9 is supported on all major operating systems (Windows, Mac iOS, Linux).
## Usage
You can use sumo3Dviz as command line tool (CLI), configure a variety of parameters in the config YAML file, and the run four different visualisation modes:
1. Run sumo3Dviz in Eulerian mode:
```
sumo3Dviz --config config.yaml --mode eulerian --output results/vid_eul.avi
```
2. Run sumo3Dviz in Lagrangian mode:
```
sumo3Dviz --config config.yaml --mode lagrangian --output results/vid_lag.avi
```
3. Run sumo3Dviz in Cinematic mode:
```
sumo3Dviz --config config.yaml --mode cinematic --output results/vid_cin.avi
```
4. Run sumo3Dviz in Interactive mode:
```
sumo3Dviz --config config.yaml --mode interactive
```
## Case Study: Barcelona
### Step 1: Prepare Sumo Simulation
You can run any SUMO simulation and render it to a video.
Just make sure to log vehicle positions and traffic lights (if desired for rendering).
Also, if you want to place trees, fences, buildings, and other objects, please create polygon files with netedit.
In the following explanations how to do it.
Moreover, we provide an example (barcelona_simulation) that demos all outlined information.
(1) **Log Vehicle Positions in your `Configuration.sumocfg`:**
```xml
<!-- YOUR Configuration.sumocfg -->
<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.dlr.de/xsd/sumoConfiguration.xsd">
<!-- ... -->
<!-- INSERT THIS TO LOG VEHICLE POSITIONS -->
<output>
<fcd-output value="simulation_logs/vehicle_positions.xml"/>
<fcd-output.attributes value="x,y,angle"/>
</output>
<!-- ... -->
</configuration>
```
(2) (Optional) **Log Traffic Light States `Configuration.sumocfg`:**
```xml
<!-- YOUR Configuration.sumocfg -->
<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.dlr.de/xsd/sumoConfiguration.xsd">
<!-- ... -->
<!-- INSERT THIS TO LOAD ADDITIONAL FILE tls_logging.add.xml -->
<input>
<additional-files value="tls_logging.add.xml"/>
</input>
<!-- ... -->
</configuration>
```
And create the additional file `tls_logging.add.xml` in the same folder:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<additional>
<timedEvent type="SaveTLSStates"
dest="simulation_logs/signal_states.xml"/>
</additional>
```
(3) (Optional) **Additional Objects (Fences, Trees, Buildings...):**
You can create polygon files (POIs) with Netedit, and store them, for example following `trees.add.xml`:
```xml
<additional xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.dlr.de/xsd/additional_file.xsd">
<!-- Shapes -->
<poi id="poi_0" color="red" layer="202.00" x="19332.99" y="17853.26"/>
<poi id="poi_1" color="red" layer="202.00" x="19398.22" y="17894.70"/>
<poi id="poi_10" color="red" layer="202.00" x="19412.72" y="17919.65"/>
<poi id="poi_100" color="red" layer="202.00" x="18935.17" y="17729.96"/>
<poi id="poi_1000" color="red" layer="202.00" x="20139.72" y="18631.08"/>
<poi id="poi_1001" color="red" layer="202.00" x="20154.28" y="18637.80"/>
<poi id="poi_1002" color="red" layer="202.00" x="20205.22" y="18645.08"/>
<poi id="poi_1003" color="red" layer="202.00" x="20209.14" y="18647.88"/>
<!-- ... -->
```
The simulation's input files (network, POIs), and the generated output log files are then processed by **sumo3Dviz** to generate the visualisation.
### Step 2: Prepare Visualisation Configuration
### Step 3: Render Video Visualisation with sumo3Dviz
#### Command Line Interface (CLI)
```bash
sumo3Dviz --config path/to/your/configuration.yaml
```
#### Python Code
In this repository we provide four example codes to run sumo3Dviz in the four different modes, that can be found in `.examples/`. These examples visualize the aforementioned case study of Barcelona.
1. Run sumo3Dviz in Eulerian mode:
```
python examples/demo_eulerian.py
```
2. Run sumo3Dviz in Lagrangian mode:
```
python examples/demo_lagrangian.py
```
3. Run sumo3Dviz in Cinematic mode:
```
python examples/demo_cinematic.py
```
4. Run sumo3Dviz in Interactive mode:
```
python examples/demo_interactive.py
```
# USAGE
## Option 1: CLI usage (after installation through pip)
```bash
pip install sumo3Dviz
sumo3Dviz --config path/to/your/configuration.yaml
```
To run the example provided in the repository, you can run
```bash
sumo3Dviz --config examples/config_barcelona.yaml
```
## Option 2: Module-based usage (after installation through pip)
Import the relevant modules and run the rendering mechanism as demonstrated in the `render_barcelona.py` script:
```bash
pip install sumo3Dviz
python examples/render_barcelona.py
```
## Citations
Please cite our paper if you find sumo3Dviz useful:
```
@inproceedings{riehl2026sumo3Dviz,
title={sumo3Dviz: A three dimensional traffic visualisation},
author={Riehl, Kevin and Schlapbach, Julius and Kouvelas, Anastasios and Makridis, Michail A.},
booktitle={SUMO Conference Proceedings},
year={2026}
}
```
| text/markdown | null | Kevin Riehl <kriehl@ethz.ch>, Julius Schlapbach <juliussc@ethz.ch> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy==1.24.4",
"opencv-python==4.9.0.80",
"pandas==2.1.1",
"panda3d==1.10.15",
"sumolib==1.25.0",
"PyYAML==6.0.3",
"pandera==0.26.1",
"jsonschema==4.25.1",
"tqdm==4.67.3",
"pytest<9.0.0,>=8.0.0; extra == \"dev\"",
"pylint<4.0.0,>=3.0.0; extra == \"dev\"",
"git-cliff<3.0.0,>=2.11.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/DerKevinRiehl/sumo3Dviz"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T23:17:03.735745 | sumo3dviz-0.4.0.tar.gz | 59,257,790 | 71/25/c1d221d522eb1d8699e1788cbab40181dca9bb4acf58a9003ea2b9f21a29/sumo3dviz-0.4.0.tar.gz | source | sdist | null | false | 27527ba14a3f0520d71f8bf012203e83 | 138e4350f17e8aab95369ae826e8d9222007ccca0a8d4f74a6415ed364ade88d | 7125c1d221d522eb1d8699e1788cbab40181dca9bb4acf58a9003ea2b9f21a29 | null | [
"LICENSE"
] | 0 |
2.4 | llama-index-readers-preprocess | 0.5.0 | llama-index readers preprocess integration (discontinued) | # Preprocess Loader
> **This package has been discontinued.** The Preprocess service is no longer available and will not receive updates or support. Please remove this dependency from your projects.
---
```bash
pip install llama-index-readers-preprocess
```
[Preprocess](https://preprocess.co) is an API service that splits any kind of document into optimal chunks of text for use in language model tasks.
Given documents in input `Preprocess` splits them into chunks of text that respect the layout and semantics of the original document.
We split the content by taking into account sections, paragraphs, lists, images, data tables, text tables, and slides, and following the content semantics for long texts.
We support PDFs, Microsoft Office documents (Word, PowerPoint, Excel), OpenOffice documents (ods, odt, odp), HTML content (web pages, articles, emails), and plain text.
This loader integrates with the `Preprocess` API library to provide document conversion and chunking or to load already chunked files inside LlamaIndex.
## Requirements
Install the Python `Preprocess` library if it is not already present:
```
pip install pypreprocess
```
## Usage
To use this loader, you need to pass the `Preprocess API Key`.
When initializing `PreprocessReader`, you should pass your `API Key`, if you don't have it yet, please ask for one at [support@preprocess.co](mailto:support@preprocess.co). Without an `API Key`, the loader will raise an error.
To chunk a file pass a valid filepath and the reader will start converting and chunking it.
`Preprocess` will chunk your files by applying an internal `Splitter`. For this reason, you should not parse the document into nodes using a `Splitter` or applying a `Splitter` while transforming documents in your `IngestionPipeline`.
If you want to handle the nodes directly:
```python
from llama_index.core import VectorStoreIndex
from llama_index.readers.preprocess import PreprocessReader
# pass a filepath and get the chunks as nodes
loader = PreprocessReader(
api_key="your-api-key", filepath="valid/path/to/file"
)
nodes = loader.get_nodes()
# import the nodes in a Vector Store with your configuration
index = VectorStoreIndex(nodes)
query_engine = index.as_query_engine()
```
By default load_data() returns a document for each chunk, remember to not apply any splitting to these documents
```python
from llama_index.core import VectorStoreIndex
from llama_index.readers.preprocess import PreprocessReader
# pass a filepath and get the chunks as nodes
loader = PreprocessReader(
api_key="your-api-key", filepath="valid/path/to/file"
)
documents = loader.load_data()
# don't apply any Splitter parser to documents
# if you have an ingestion pipeline you should not apply a Splitter in the transformations
# import the documents in a Vector Store, if you set the service_context parameter remember to avoid including a splitter
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
```
If you want to return only the extracted text and handle it with custom pipelines set `return_whole_document = True`
```python
# pass a filepath and get the chunks as nodes
loader = PreprocessReader(
api_key="your-api-key", filepath="valid/path/to/file"
)
document = loader.load_data(return_whole_document=True)
```
If you want to load already chunked files you can do it via `process_id` passing it to the reader.
```python
# pass a process_id obtained from a previous instance and get the chunks as one string inside a Document
loader = PreprocessReader(api_key="your-api-key", process_id="your-process-id")
```
This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/run-llama/llama_index/).
## Other info
`PreprocessReader` is based on `pypreprocess` from [Preprocess](https://github.com/preprocess-co/pypreprocess) library.
For more information or other integration needs please check the [documentation](https://github.com/preprocess-co/pypreprocess).
| text/markdown | null | Your Name <you@example.com> | preprocess | null | null | chunk, chunking, documents, preprocess | [
"Development Status :: 7 - Inactive"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"llama-index-core<0.15,>=0.13.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:16:27.236223 | llama_index_readers_preprocess-0.5.0-py3-none-any.whl | 4,328 | b0/8f/93a9ab87755c9746d2ff133c81916006d01067013be00e84ee25f5cb28b2/llama_index_readers_preprocess-0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6206f020d4de44e14176405b21a8b21f | 8e44c754605ee396aa07ed4499c7d2cd7e260932def54cf2a5d0bd8d810b0f36 | b08f93a9ab87755c9746d2ff133c81916006d01067013be00e84ee25f5cb28b2 | MIT | [
"LICENSE"
] | 222 |
2.4 | elara-core | 0.15.0 | Persistent presence, mood, memory, and self-awareness for AI assistants | # Elara Core
> **Layer 3 of the Elara Protocol — persistent memory, cognition, and awareness for AI systems.**
[](https://github.com/navigatorbuilds/elara-core/actions/workflows/tests.yml)
[](https://pypi.org/project/elara-core/)
[](https://pypi.org/project/elara-core/)
[](https://github.com/navigatorbuilds/elara-core/blob/main/LICENSE)
[](https://elara.navigatorbuilds.com)
This is **one layer** of the [Elara Protocol](https://github.com/navigatorbuilds/elara-protocol) — a post-quantum universal validation layer for digital work. The full stack spans cryptographic signing, a Rust VM, decentralized network consensus, and this cognitive layer.
### Install in one line
```bash
# Linux / macOS
curl -sSL https://raw.githubusercontent.com/navigatorbuilds/elara-core/main/scripts/install.sh | sh
# Windows (PowerShell)
irm https://raw.githubusercontent.com/navigatorbuilds/elara-core/main/scripts/install.ps1 | iex
# Or with pip / pipx
pip install elara-core[network]
```
**Every install is a node.** When you run `elara serve`, your instance joins the Elara mesh network as a LEAF node — sharing anonymized validation records with other nodes. No personal data leaves your machine. Opt out anytime with `elara serve --no-node` or `elara node stop`.
```
The Elara Protocol Stack:
Layer 1 — Post-quantum cryptography (Dilithium3 + SPHINCS+), DAG signing, offline validation
Layer 1.5 — Rust DAM Virtual Machine (PyO3 bindings, record processing)
Layer 2 — Decentralized network (Adaptive Witness Consensus, peer discovery, trust scoring)
Layer 3 — AI cognition (THIS REPO) — memory, mood, reasoning, awareness
```
```
Network Topology:
┌──────┐ ┌──────┐ ┌──────┐
│ LEAF │────▶│RELAY │◀────│ LEAF │ LEAF = your install
└──┬───┘ └──┬───┘ └──┬───┘ RELAY = seed/routing node
│ │ │ WITNESS = attestation authority
▼ ▼ ▼
┌──────────────────────────────┐
│ WITNESS NODES │
│ Cross-sign validation │
│ records across the mesh │
└──────────────────────────────┘
```
**45 tools. 15 modules. 36K+ lines of Python. 222 tests. Everything runs locally.** Cognitive outputs are dual-signed and stored in the cryptographic DAG. Pattern recognition feeds back into the validation chain.
```
You: "Morning."
Elara: "You were debugging the auth module at 2am. Did you sleep?"
You: "What happened this week?"
Elara: "3 work sessions. Auth module shipped. Goal #4 is stalling —
no progress in 9 days. My overnight brain built 2 new models
and flagged a prediction deadline in 3 days."
```
---
## Project Status
**This project is in active development.** The protocol layers are being built and integrated. Here's where things stand:
| Layer | Status | Repository |
|-------|--------|------------|
| **Layer 1** — Post-quantum crypto | Done | Private (pre-release) |
| **Layer 1.5** — Rust DAM VM | Done | [elara-runtime](https://github.com/navigatorbuilds/elara-runtime) |
| **Layer 2** — Network consensus | Active (node-by-default) | Included in this repo (`network/`) |
| **Layer 3** — AI cognition | Done | This repo |
| **Protocol specs** | v0.4.1 | [elara-protocol](https://github.com/navigatorbuilds/elara-protocol) |
| **US Provisional Patent** | Filed | Application No. 63/983,064 (Feb 14, 2026) |
Every install is a node. When you run `elara serve`, your instance participates in the decentralized mesh — sharing anonymized validation records, not personal data. The install scripts handle everything in one line.
---
## What This Repo Contains
Elara Core is the cognitive layer (Layer 3). It provides persistent intelligence for AI assistants via [Model Context Protocol (MCP)](https://modelcontextprotocol.io):
### Core
| Feature | What it does |
|---------|-------------|
| **Semantic memory** | Store and search by meaning, not keywords. "What were we doing last week?" just works. |
| **Long-range memory** | Temporal sweep across time windows at boot — surfaces important items from weeks or months ago, plus landmark memories that never fade. |
| **Mood system** | Tracks valence, energy, openness. Decays naturally between sessions. |
| **Session tracking** | Episodes with milestones, decisions, project tagging, and timeline view. |
| **Goals & corrections** | Persistent goals with staleness detection. Mistakes saved and surfaced before you repeat them. |
| **Session handoff** | Structured carry-forward between sessions so nothing gets lost. |
### Advanced
| Feature | What it does |
|---------|-------------|
| **3D Cognition** | Persistent models (understanding), predictions (foresight), principles (wisdom), and workflow patterns (action) that accumulate over time. |
| **Workflow Patterns** | Learned action sequences from episode history, proactively surfaced when a known trigger is detected mid-session. |
| **Knowledge Graph** | Document cross-referencing with 6-tuple addressing, SQLite + ChromaDB, 4 validators for contradiction detection. |
| **Cortical execution** | 5-layer concurrent architecture (Reflex → Reactive → Deliberative → Contemplative → Social). Hot cache, async event bus, worker pools — concurrent tool calls don't block each other. |
| **Overnight thinking** | Autonomous analysis between sessions — runs 15 phases through a local LLM, builds cognitive models, detects workflow patterns, makes predictions. |
| **Creative drift** | The overnight brain's imagination — random context collisions at high temperature. Writes to an accumulating creative journal. |
| **Dream mode** | Weekly/monthly pattern discovery across sessions, inspired by sleep consolidation. |
| **Reasoning trails** | Track hypothesis chains when debugging. Includes what was abandoned and why. |
| **Self-reflection** | Mood trends, blind spots, growth intentions. |
| **Layer 1 bridge** | Cognitive artifacts are dual-signed (Dilithium3 + SPHINCS+) and stored in the cryptographic DAG. |
| **Layer 2 network** | Peer discovery (mDNS), record exchange, witness attestation, weighted trust scoring. |
> **Note:** Overnight thinking requires [Ollama](https://ollama.ai) with a local LLM. Layer 1 bridge requires [elara-protocol](https://github.com/navigatorbuilds/elara-protocol). Layer 2 network requires `elara-core[network]`.
---
## Tools (45)
**Start here** (5 essential tools):
| Tool | What it does |
|------|-------------|
| `elara_remember` | Save something to memory |
| `elara_recall` | Search memories by meaning |
| `elara_mood` | Check emotional state |
| `elara_episode_start` | Begin tracking a work session |
| `elara_status` | Full status check |
<details>
<summary>All 45 tools by module</summary>
| Module | Tools | Count |
|--------|-------|-------|
| **Memory** | `elara_remember`, `elara_recall`, `elara_recall_conversation`, `elara_conversations` | 4 |
| **Mood** | `elara_mood`, `elara_mood_adjust`, `elara_imprint`, `elara_mode`, `elara_status` | 5 |
| **Episodes** | `elara_episode_start`, `elara_episode_note`, `elara_episode_end`, `elara_episode_query`, `elara_context` | 5 |
| **Goals** | `elara_goal`, `elara_goal_boot`, `elara_correction`, `elara_correction_boot`, `elara_handoff` | 5 |
| **Awareness** | `elara_reflect`, `elara_insight`, `elara_intention`, `elara_observe`, `elara_temperament` | 5 |
| **Dreams** | `elara_dream`, `elara_dream_info` | 2 |
| **Cognitive** | `elara_reasoning`, `elara_outcome`, `elara_synthesis` | 3 |
| **3D Cognition** | `elara_model`, `elara_prediction`, `elara_principle` | 3 |
| **Workflows** | `elara_workflow` | 1 |
| **Knowledge** | `elara_kg_index`, `elara_kg_query`, `elara_kg_validate`, `elara_kg_diff` | 4 |
| **Business** | `elara_business` | 1 |
| **LLM** | `elara_llm` | 1 |
| **Gmail** | `elara_gmail` | 1 |
| **Maintenance** | `elara_rebuild_indexes`, `elara_briefing`, `elara_snapshot`, `elara_memory_consolidation` | 4 |
| **Network** | `elara_network` | 1 |
</details>
**[Full tool reference →](https://elara.navigatorbuilds.com/tools.html)**
---
## Architecture
```
┌─────────────────────────────────────────────────┐
│ YOUR MCP CLIENT │
│ Claude Code · Cursor · Windsurf · Cline │
└────────────────────┬────────────────────────────┘
│ MCP Protocol (stdio)
┌────────────────────▼────────────────────────────┐
│ hooks/ (hippocampus) │
│ │
│ Intention resolver · Rolling message buffer │
│ Semantic recall · Compound queries · Dedup │
│ Frustration detection · Context injection │
│ Long-range temporal sweep · Landmark recall │
├──────────────────────────────────────────────────┤
│ Cortical Execution Model │
│ │
│ L0 REFLEX — Hot cache, instant reads │
│ L1 REACTIVE — Async event bus, cascading fx │
│ L2 DELIBERATE — IO pool (4) + LLM pool (2) │
│ L3 CONTEMPLATE — Overnight brain, dreams │
│ L4 SOCIAL — Peer network, witness consensus │
├──────────────────────────────────────────────────┤
│ elara_mcp/tools/ (45 tools) │
│ │
│ Memory · Mood · Episodes · Goals · Awareness │
│ Dreams · Cognitive · 3D Cognition · Workflows │
│ Knowledge · Business · LLM · Gmail · Maintenance│
│ Network │
└────────────────────┬────────────────────────────┘
│
┌────────────────────▼────────────────────────────┐
│ daemon/ + core/ │
│ │
│ State engine · Emotions · Models · Predictions │
│ Principles · Workflows · Overnight brain · Drift│
└────────┬───────────────────────┬────────────────┘
│ │
┌────────▼────────┐ ┌────────▼────────────────┐
│ ~/.elara/ │ │ Layer 1 Bridge │
│ (all local) │ │ Dilithium3 + SPHINCS+ │
│ │ │ DAG signing │
│ ChromaDB (14) │ │ → Layer 2 Network │
│ JSON state │ │ → Witness consensus │
│ Overnight data │ │ → Trust scoring │
└─────────────────┘ └─────────────────────────┘
```
---
## Development
```bash
git clone https://github.com/navigatorbuilds/elara-core.git
cd elara-core
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -e ".[dev]"
pytest
```
### CLI commands
```
elara init Interactive setup wizard
elara init --yes Non-interactive init (CI/scripts)
elara doctor Diagnostic health check
elara serve Start MCP server + network node (stdio)
elara serve --no-node Start MCP server without network
elara serve --profile full Start with all 45 tool schemas
elara node status Show node info (type, port, peers)
elara node peers List connected peers
elara node stop Disable network node
elara node start Enable network node
elara sign <file> Sign a file with Layer 1 crypto
elara verify <proof> Verify an .elara.proof file
elara identity Show identity info
elara dag stats Show DAG statistics
elara testnet Run Layer 2 testnet demo
elara --version Show version
```
---
## Compatibility
| | Supported |
|---|---|
| **Python** | 3.10, 3.11, 3.12 |
| **OS** | Linux, macOS, Windows (WSL recommended) |
| **MCP Clients** | Claude Code, Claude Desktop, Cursor, Windsurf, Cline |
---
## The Elara Protocol
The [Elara Protocol](https://github.com/navigatorbuilds/elara-protocol) is a post-quantum universal validation layer for digital work — from a poem written on a $30 phone in Kenya to telemetry from a Mars colony. It introduces the **Directed Acyclic Mesh (DAM)**, a novel 5-dimensional data structure with partition-tolerant consensus across planetary distances.
| Document | Where |
|----------|-------|
| **Elara Protocol Whitepaper v0.4.1** | [GitHub](https://github.com/navigatorbuilds/elara-protocol) |
| **Elara Core Whitepaper v1.4.0** | [GitHub](https://github.com/navigatorbuilds/elara-protocol) |
| **US Provisional Patent** | Application No. 63/983,064 (Feb 14, 2026) |
**What Layer 3 adds to the protocol:**
- Cognitive outputs (predictions, models, principles) are dual-signed with Dilithium3 + SPHINCS+ and stored in the cryptographic DAG via the Layer 1 bridge
- Pattern recognition across validation streams — anomaly detection, fraud prediction, routing optimization
- Continuous autonomous thinking — 15-phase analysis engine running every 2 hours
- Dual-use architecture: industrial applications (manufacturing, research) and emotional companionship (humanoid robotics, therapeutic AI) from a single codebase
---
## What's New
**v0.14.0 — One-Line Install + Every Install Is a Node** — One-line install scripts for Linux, macOS, and Windows. `elara serve` now automatically starts a LEAF network node (opt out with `--no-node`). New `elara node` subcommand for node management. Seed node bootstrap with GitHub peer list fallback. PyPI version check on startup. `elara --version` flag.
**v0.13.0 — Cortical Execution Model + Long-Range Memory** — 5-layer concurrent architecture: hot cache (L0), async event bus (L1), worker pools (L2), brain events (L3), network consolidation (L4). All 45 tool handlers are now non-blocking. Plus temporal sweep at boot — surfaces important memories from weeks/months ago and landmark memories that never fade. Timeline view for milestones.
**v0.12.0 — Testnet Hardening** — Witness signature verification (Dilithium3), peer rate limiting, attestation back-propagation, heartbeat protocol, weighted trust with temporal decay + diversity bonus, role enforcement.
**v0.11.0 — Layer 2 Network + CLI Tools** — Minimum viable network: mDNS peer discovery, HTTP record exchange, witness attestation, trust scoring. CLI: `elara sign/verify/identity/dag`. Bridge hardened with validation guards, dedup (10K cache), and rate limiting (120/min).
**v0.10.8 — Layer 1 Bridge** — Cryptographic validation of cognitive artifacts. Predictions, corrections, models, principles, and other significant events are dual-signed (Dilithium3 + SPHINCS+) and stored in a local DAG.
**v0.10.7 — Workflow Patterns** — Learned action sequences from episode history. The overnight brain detects recurring multi-step processes and crystallizes them into workflow patterns.
**v0.10.6 — Knowledge Graph** — Document cross-referencing with 6-tuple addressing. SQLite + ChromaDB backend. 4 validators for contradiction detection.
**v0.10.0 — 3D Cognition** — Persistent models (understanding), predictions (foresight), and principles (wisdom) that accumulate over time. Plus creative drift — the overnight brain's imagination.
**[Full changelog →](CHANGELOG.md)**
---
## Community
- **[Docs](https://elara.navigatorbuilds.com)** — Architecture, tool reference, persona templates
- **[Discussions](https://github.com/navigatorbuilds/elara-core/discussions)** — Questions, ideas, showcase
- **[Issues](https://github.com/navigatorbuilds/elara-core/issues)** — Bug reports and feature requests
- **[Contributing](CONTRIBUTING.md)** — How to help
---
If Elara resonates with you, a star helps others find it. ⭐
| text/markdown | Nenad Vasić | null | null | null | null | ai, claude, mcp, memory, persona | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"chromadb>=0.4.0",
"mcp[cli]>=1.0.0",
"psutil>=5.9.0",
"pydantic>=2.0.0",
"python-dateutil>=2.8.0",
"pyyaml>=6.0",
"flask>=3.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"google-api-python-client>=2.0.0; extra == \"gmail\"",
"google-auth-oauthlib>=1.0.0; extra == \"gmail\"",
"google-auth>=2.0.0; extra == \"gmail\"",
"aiohttp>=3.9.0; extra == \"network\"",
"zeroconf>=0.131.0; extra == \"network\"",
"flask>=3.0.0; extra == \"web\""
] | [] | [] | [] | [
"Homepage, https://elara.navigatorbuilds.com",
"Repository, https://github.com/navigatorbuilds/elara-core",
"Issues, https://github.com/navigatorbuilds/elara-core/issues",
"Documentation, https://github.com/navigatorbuilds/elara-core#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:16:25.971530 | elara_core-0.15.0.tar.gz | 1,186,037 | 4a/27/41522134a47526677660aecfbeeb108aba381ac5057a8c6cf951224babdc/elara_core-0.15.0.tar.gz | source | sdist | null | false | 97248c81c5afc5706a8f7a32c76cd0dd | 1a94483a5fda355accf748bec44f5ab1b687be30cb034cf6410203eca8d15d27 | 4a2741522134a47526677660aecfbeeb108aba381ac5057a8c6cf951224babdc | LicenseRef-BSL-1.1 | [
"LICENSE"
] | 224 |
2.4 | yohou-nixtla | 0.1.0a1 | A Nixtla integration for Yohou | <p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/stateful-y/yohou-nixtla/main/docs/assets/logo_light.png">
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/stateful-y/yohou-nixtla/main/docs/assets/logo_dark.png">
<img src="https://raw.githubusercontent.com/stateful-y/yohou-nixtla/main/docs/assets/logo_light.png" alt="Yohou-Nixtla">
</picture>
</p>
[](https://pypi.org/project/yohou_nixtla/)
[](https://github.com/stateful-y/yohou-nixtla/blob/main/LICENSE)
[](https://pypi.org/project/yohou_nixtla/)
[](https://anaconda.org/conda-forge/yohou_nixtla)
[](https://codecov.io/gh/stateful-y/yohou-nixtla)
## What is Yohou-Nixtla?
**Yohou-Nixtla** brings the power of [Nixtla's forecasting ecosystem](https://nixtla.io/) to [Yohou](https://github.com/stateful-y/yohou), providing Yohou-compatible wrappers for statistical, machine learning, and deep learning time series models.
This integration enables you to use Nixtla's high-performance forecasters (StatsForecast, NeuralForecast) within Yohou's unified API for time series forecasting. All models work seamlessly with Yohou's features: polars DataFrames, panel data support, cross-validation, and hyperparameter search via GridSearchCV/RandomizedSearchCV.
## What are the features of Yohou-Nixtla?
- **Statistical Models**: AutoARIMA, AutoETS, AutoTheta, ARIMA, Holt-Winters, Naive, and more from StatsForecast, providing fast, production-ready statistical forecasters.
- **Neural Models**: NBEATS, NHITS, MLP, PatchTST, TimesNet from NeuralForecast, offering state-of-the-art deep learning architectures.
- **Panel Data**: Native support for multiple time series with `__` column naming convention (e.g., `sales__store_1`, `sales__store_2`).
- **Yohou Compatible**: Full `fit/predict`, `get_params/set_params`, `clone` compatibility. Works with GridSearchCV, pipelines, and the Yohou ecosystem.
- **Polars Native**: All data handling uses polars DataFrames for high-performance time series operations.
> **Note**: Nixtla's MLForecast is not wrapped as Yohou already provides `PointReductionForecaster`, which turns any scikit-learn regressor (Ridge, LightGBM, XGBoost, …) into a recursive multi-step forecaster with full support for feature transformers, target transformers, and panel data.
## How to install Yohou-Nixtla?
Install the Yohou-Nixtla package using `pip`:
```bash
pip install yohou_nixtla
```
or using `uv`:
```bash
uv pip install yohou_nixtla
```
or using `conda`:
```bash
conda install -c conda-forge yohou_nixtla
```
or using `mamba`:
```bash
mamba install -c conda-forge yohou_nixtla
```
or alternatively, add `yohou_nixtla` to your `requirements.txt` or `pyproject.toml` file.
## How to get started with Yohou-Nixtla?
### 1. Fit a Statistical Forecaster
Use AutoARIMA for automatic ARIMA model selection:
```python
import polars as pl
from yohou_nixtla import AutoARIMAForecaster
# Load your time series data (must have a "time" column)
y = pl.DataFrame({
"time": pl.datetime_range(start="2020-01-01", end="2020-12-31", interval="1d", eager=True),
"sales": [100 + i * 0.5 + (i % 7) * 10 for i in range(366)],
})
# Fit and predict
forecaster = AutoARIMAForecaster(season_length=7)
forecaster.fit(y, forecasting_horizon=14)
y_pred = forecaster.predict()
```
### 2. Train Deep Learning Models
Neural models for complex patterns:
```python
from yohou_nixtla import NHITSForecaster
forecaster = NHITSForecaster(input_size=30, max_steps=100)
forecaster.fit(y, forecasting_horizon=14)
y_pred = forecaster.predict()
```
### 3. Panel Data Forecasting
Forecast multiple time series simultaneously:
```python
# Panel data with __ separator
y_panel = pl.DataFrame({
"time": pl.datetime_range(start="2020-01-01", end="2020-12-31", interval="1d", eager=True),
"sales__store_1": [...],
"sales__store_2": [...],
})
forecaster = AutoARIMAForecaster(season_length=7)
forecaster.fit(y_panel, forecasting_horizon=14)
y_pred = forecaster.predict() # Predictions for all stores
```
## How do I use Yohou-Nixtla?
Full documentation is available at [https://yohou-nixtla.readthedocs.io/](https://yohou-nixtla.readthedocs.io/).
Interactive examples are available in the `examples/` directory:
- **Online**: [https://yohou-nixtla.readthedocs.io/en/latest/pages/examples/](https://yohou-nixtla.readthedocs.io/en/latest/pages/examples/)
- **Locally**: Run `marimo edit examples/hello.py` to open an interactive notebook
## Can I contribute?
We welcome contributions, feedback, and questions:
- **Report issues or request features**: [GitHub Issues](https://github.com/stateful-y/yohou-nixtla/issues)
- **Join the discussion**: [GitHub Discussions](https://github.com/stateful-y/yohou-nixtla/discussions)
- **Contributing Guide**: [CONTRIBUTING.md](https://github.com/stateful-y/yohou-nixtla/blob/main/CONTRIBUTING.md)
If you are interested in becoming a maintainer or taking a more active role, please reach out to Guillaume Tauzin on [GitHub Discussions](https://github.com/stateful-y/yohou-nixtla/discussions).
## Where can I learn more?
Here are the main Yohou-Nixtla resources:
- Full documentation: [https://yohou-nixtla.readthedocs.io/](https://yohou-nixtla.readthedocs.io/)
- GitHub Discussions: [https://github.com/stateful-y/yohou-nixtla/discussions](https://github.com/stateful-y/yohou-nixtla/discussions)
- Interactive Examples: [https://yohou-nixtla.readthedocs.io/en/latest/pages/examples/](https://yohou-nixtla.readthedocs.io/en/latest/pages/examples/)
For questions and discussions, you can also open a [discussion](https://github.com/stateful-y/yohou-nixtla/discussions).
## License
This project is licensed under the terms of the [Apache-2.0 License](https://github.com/stateful-y/yohou-nixtla/blob/main/LICENSE).
<p align="center">
<a href="https://stateful-y.io">
<img src="docs/assets/made_by_stateful-y.png" alt="Made by stateful-y" width="200">
</a>
</p>
| text/markdown | null | Guillaume Tauzin <gtauzin@stateful-y.io> | null | Guillaume Tauzin <gtauzin@stateful-y.io> | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"neuralforecast>=3.0",
"numpy>=2.0.0",
"pandas>=2.2.0",
"polars>=0.20",
"sklearn-wrap>=0.1.0a1",
"statsforecast>=2.0",
"yohou"
] | [] | [] | [] | [
"Homepage, https://github.com/stateful-y/yohou-nixtla",
"Documentation, https://yohou-nixtla.readthedocs.io",
"Repository, https://github.com/stateful-y/yohou-nixtla",
"Bug Tracker, https://github.com/stateful-y/yohou-nixtla/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:16:07.463089 | yohou_nixtla-0.1.0a1.tar.gz | 100,728 | 25/d3/1270a217fbcc59136bb4914c6b2f80e934690165cbc5a3aaad092287dfcc/yohou_nixtla-0.1.0a1.tar.gz | source | sdist | null | false | c2efa2c6d2504fcf3a0e10d91c1db8bc | fc83f043cd6bf1d0ce29d4d76ad2cc12d163633771b6747b691351101f3ece73 | 25d31270a217fbcc59136bb4914c6b2f80e934690165cbc5a3aaad092287dfcc | null | [
"LICENSE"
] | 228 |
2.4 | natsort-rs | 0.1.18 | A blazing fast natural sorting library for Python | <h1 align="center">natsort-rs</h1>
<p align="center">
<em>🚀 A blazing fast natural sorting library for Python written in Rust 🦀</em>
</p>
## Installation
```
pip install natsort-rs
```
## Usage
```py
from natsort_rs import natsort
```
### Sort a list of strings
```py
items = ['item 1', 'item 10', 'item 3']
print(natsort(items))
# ['item 1', 'item 3', 'item 10']
```
### Sort case insensitively
```py
items = ['Item 1', 'Item 3', 'item 2']
print(natsort(items, ignore_case=True))
# ['Item 1', 'item 2', 'Item 3']
```
### Sort complex objects based on property
```py
items = [
{'name': 'item 1', 'id': 1},
{'name': 'item 3', 'id': 3},
{'name': 'item 2', 'id': 2}
]
print(natsort(items, key=lambda d: d['name']))
# [{'name': 'item 1', 'id': 1}, {'name': 'item 2', 'id': 2}, {'name': 'item 3', 'id': 3}]
```
### Return the sorting indices
This can be helpful if you only want to get the sorted indices returned, that makes the performance-critical part
useful for custom sorting use cases:
```py
items = ['item 1', 'item 10', 'item 3']
print(natsort(items, return_indices=True))
# [0, 2, 1]
```
## Benchmark
| No. of items | Duration natsort [s] | Duration natsort-rs [s] | Relative speedup |
|-----|-----|-----|-----|
| 10 | 0.00006 | 0.00000 | 16.8 |
| 100 | 0.00094 | 0.00002 | 44.3 |
| 1000 | 0.00281 | 0.00022 | 12.7 |
| 10000 | 0.02835 | 0.00262 | 10.8 |
| 100000 | 0.29712 | 0.03334 | 8.9 |
| 1000000 | 3.31207 | 0.45333 | 7.3 |
Execute `benchmark.py` to reproduce the results.
## Development
### Local build
To build and test the package locally using `uv`:
```bash
uv run maturin develop --release
```
### Running benchmarks
To run benchmarks:
```bash
uv run benchmark.py
```
This will compare the performance of `natsort-rs` against the pure Python `natsort` library and display results in a table format.
## Credits
This Python module is build on top of the [`natord`](https://docs.rs/natord/latest/natord/) crate and inspired by [`natsort`](https://pypi.org/project/natsort/).
## License
MIT License
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.15"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Changelog, https://github.com/valentinstn/natsort-rs/blob/main/CHANGELOG.md",
"Homepage, https://github.com/valentinstn/natsort-rs",
"Repository, https://github.com/valentinstn/natsort-rs"
] | maturin/1.12.3 | 2026-02-20T23:16:04.375067 | natsort_rs-0.1.18.tar.gz | 13,729 | 76/df/faf6a3385b65c538089f783d6b2fe58ba11e92d848ef6a7171d17e2dfe57/natsort_rs-0.1.18.tar.gz | source | sdist | null | false | 9e56ba5d8f5a584f06147c866ef470ab | 3e80ccfbeee0cf59f8bf4846e346ef3ad2f3d6e5bc9df5f5b71bee013effc8ba | 76dffaf6a3385b65c538089f783d6b2fe58ba11e92d848ef6a7171d17e2dfe57 | null | [] | 1,089 |
2.4 | mcp-maker | 0.1.1 | Auto-generate MCP servers from any data source. Zero code required. | # MCP-Maker
[](https://pypi.org/project/mcp-maker/)
[](https://pypi.org/project/mcp-maker/)
[](https://github.com/MrAliHasan/mcp-maker/actions/workflows/tests.yml)
[](https://github.com/MrAliHasan/mcp-maker/blob/main/LICENSE)
### ⚒️ Auto-generate MCP servers from any data source. Zero code required.
> Point MCP-Maker at a database, API, or file directory and get a fully functional [MCP](https://modelcontextprotocol.io/) server in seconds — ready for Claude, ChatGPT, Cursor, and any MCP-compatible AI.
---
## 🚀 Quick Start
```bash
pip install mcp-maker
# From a SQLite database
mcp-maker init sqlite:///my_database.db
mcp-maker serve
# From CSV/JSON files
mcp-maker init ./data/
mcp-maker serve
# That's it! Your AI can now query your data.
```
## Why MCP-Maker?
| | FastMCP | MCP-Maker |
|---|---------|----------|
| **Approach** | You write Python tools | It generates everything |
| **Setup time** | Minutes–hours | Seconds |
| **Code required** | Yes | No |
| **Best for** | Custom logic | Data access |
MCP-Maker uses FastMCP under the hood — it's not competing, it's building on top.
---
## 📋 Commands
| Command | Description |
|---------|-------------|
| `mcp-maker init <source>` | Generate an MCP server from a data source |
| `mcp-maker inspect <source>` | Preview what would be generated (dry run) |
| `mcp-maker serve` | Run the generated MCP server |
| `mcp-maker list-connectors` | Show available connectors |
## 🔌 Connectors
### Built-in
| Connector | URI Format | Status |
|-----------|-----------|--------|
| **SQLite** | `sqlite:///path/to/db.sqlite` | ✅ Ready |
| **Files** (CSV, JSON, txt) | `./path/to/directory/` | ✅ Ready |
| **PostgreSQL** | `postgres://user:pass@host/db` | 🔜 Coming |
| **MySQL** | `mysql://user:pass@host/db` | 🔜 Coming |
| **Airtable** | `airtable://appXXXX` | 🔜 Coming |
### Want to add a connector?
Every connector is a single Python file — PRs welcome! See [CONTRIBUTING.md](CONTRIBUTING.md).
---
## 🛠️ What Gets Generated
For each table in your data source, MCP-Maker generates:
| Tool | Description |
|------|-------------|
| `list_{table}(limit, offset)` | Paginated listing |
| `get_{table}_by_{pk}(id)` | Get by primary key |
| `search_{table}(query)` | Full-text search |
| `count_{table}()` | Row count |
| `schema_{table}()` | Column names and types |
For text files, it generates `read_{name}()` resources.
---
## 💡 Example: SQLite Database
```bash
$ mcp-maker init sqlite:///chinook.db
⚒️ MCP-Maker v0.1.0
✅ Connected to sqlite source
┌──────────────────────────────────┐
│ 📊 Discovered Tables (11) │
├──────────┬──────────┬────────────┤
│ Table │ Columns │ Rows │
├──────────┼──────────┼────────────┤
│ albums │ id, ... │ 347 │
│ artists │ id, ... │ 275 │
│ tracks │ id, ... │ 3503 │
└──────────┴──────────┴────────────┘
🎉 Generated: mcp_server.py
$ mcp-maker serve
🚀 MCP-Maker Server running...
```
Now in Claude Desktop, add the server and ask: *"What are the top 5 artists with the most albums?"*
---
## 🔗 Use with Claude Desktop
Add the generated server to your Claude Desktop config (`claude_desktop_config.json`):
```json
{
"mcpServers": {
"my-data": {
"command": "python",
"args": ["/absolute/path/to/mcp_server.py"]
}
}
}
```
Restart Claude Desktop and your data is accessible via natural language!
---
## 🤝 Contributing
MCP-Maker is designed for community contributions — each new **connector** is a self-contained PR.
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed instructions and a step-by-step connector creation guide.
## 📦 Installation
```bash
# Core (SQLite + Files)
pip install mcp-maker
# With PostgreSQL support
pip install mcp-maker[postgres]
# With all connectors
pip install mcp-maker[all]
# Development
pip install mcp-maker[dev]
```
## 📄 License
This project is licensed under the [MIT License](LICENSE).
| text/markdown | null | Ali Hasan <ali@example.com> | null | null | null | ai, claude, generator, llm, mcp, model-context-protocol, server | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Code Generators"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jinja2>=3.1.0",
"mcp>=1.0.0",
"rich>=13.0.0",
"typer>=0.9.0",
"pyairtable>=2.0.0; extra == \"airtable\"",
"psycopg2-binary>=2.9.0; extra == \"all\"",
"pyairtable>=2.0.0; extra == \"all\"",
"pymysql>=1.1.0; extra == \"all\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"pymysql>=1.1.0; extra == \"mysql\"",
"psycopg2-binary>=2.9.0; extra == \"postgres\""
] | [] | [] | [] | [
"Homepage, https://github.com/MrAliHasan/mcp-maker",
"Documentation, https://github.com/MrAliHasan/mcp-maker#readme",
"Repository, https://github.com/MrAliHasan/mcp-maker",
"Issues, https://github.com/MrAliHasan/mcp-maker/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:15:43.382713 | mcp_maker-0.1.1.tar.gz | 18,287 | 27/3d/f62d538313ea1f42086ba445ae845f428b59b330895e456823a0b0db3ce3/mcp_maker-0.1.1.tar.gz | source | sdist | null | false | 2c7e8e79734fb016bedc68f9a6dd6c3d | c6c2d1a014e24c9fec8e6f2b26fc3a62a8c88091c78ef09601478764ad4fa469 | 273df62d538313ea1f42086ba445ae845f428b59b330895e456823a0b0db3ce3 | MIT | [
"LICENSE"
] | 244 |
2.4 | edesto-dev | 0.10.0 | Teach AI coding agents to compile, flash, and validate firmware on real hardware. One command to bridge any agent and any board. | # edesto-dev
Join our [Discord](https://discord.gg/3bu98EcdAC)
**Teach AI coding agents how to compile, flash, and validate firmware on your hardware.**
AI coding agents stop at the terminal. `edesto init` gives them the full embedded development loop: compile, flash, on-device debugging, iterate. Now they can autonomously develop and debug firmware on real hardware. Works with Claude Code, Cursor, Codex, and OpenClaw.
https://github.com/user-attachments/assets/f1d4719d-ed60-406e-a274-0b0f2b06ac21
## Install
```
pip install edesto-dev
```
## Quick Start
```bash
# 1. Plug in your board and run:
edesto init
# 2. Open your AI coding agent in the same directory
claude
# 3. Tell it what to do:
# "The sensor readings are wrong. Find and fix the bug."
```
That's it. `edesto init` auto-detects your board, serial port, and toolchain. It generates a `SKILLS.md` that teaches your agent the write/compile/flash/validate loop, with board-specific pin references, pitfalls, and serial conventions.
You can also specify everything manually:
```bash
edesto init --board esp32 --port /dev/cu.usbserial-0001
edesto init --board esp32 --port /dev/ttyUSB0 --toolchain arduino
```
### JTAG/SWD Flashing
If your board is connected through a JTAG debugger (ST-Link, J-Link, CMSIS-DAP) instead of USB serial:
```bash
edesto init --board stm32-nucleo --upload jtag
```
This walks you through selecting your debug probe and target chip, generates an OpenOCD-based flash command, and optionally configures a serial port for monitoring. If you run `edesto init` with no USB boards detected and OpenOCD installed, it will offer JTAG setup automatically.
## How It Works
`edesto init` detects your project and generates a `SKILLS.md` (plus copies as `CLAUDE.md`, `.cursorrules`, and `AGENTS.md`) that gives your AI agent:
1. **Compile** and **flash** commands for your specific toolchain
2. A **debugging toolkit** — bidirectional serial communication (read output and send commands), safe code instrumentation, project-aware debug scanning, plus auto-detected support for logic analyzers, JTAG/SWD, and oscilloscopes
3. **Board-specific** pin references, capabilities, and common pitfalls
4. **Datasheet intelligence** — guidance on finding, reading, and citing datasheets and reference manuals, with board-family-specific tips for STM32, ESP32, and Nordic nRF documentation
5. **RTOS guidance** — context-aware FreeRTOS or Zephyr RTOS sections with task creation, synchronization primitives, ISR rules, and common concurrency pitfalls (appears automatically for ESP-IDF, Zephyr, and Arduino+ESP32 projects)
6. **Troubleshooting** guidance for common failures (port busy, baud mismatch, upload timeout)
The debugging step is what makes this work. The `edesto serial` and `edesto debug` commands give the agent structured access to your hardware. `edesto debug scan` analyzes your source code to detect logging APIs, boot markers, danger zones (ISRs), and serial commands. The agent uses `edesto serial read` and `edesto serial send` for bidirectional communication with the board, and `edesto debug instrument` for safe code instrumentation with guaranteed cleanup (`edesto debug clean` removes all instrumented lines). For example, your firmware prints structured serial output (`[READY]`, `[ERROR]`, `[SENSOR] key=value`) and the agent reads it to verify its own changes on real hardware. When you have additional debug tools installed, the agent combines serial commands with logic analyzers, oscilloscopes, or JTAG/GDB for end-to-end validation workflows.
## Supported Toolchains
| Toolchain | Detection | Commands |
|-----------|-----------|----------|
| Arduino | `.ino` files | `arduino-cli compile`, `arduino-cli upload` |
| PlatformIO | `platformio.ini` | `pio run`, `pio run --target upload` |
| ESP-IDF | `CMakeLists.txt` + `sdkconfig` | `idf.py build`, `idf.py flash` |
| Zephyr RTOS | `prj.conf`, `west.yml`, or CMake with `find_package(Zephyr` | `west build`, `west flash` |
| CMake/Make (bare-metal) | `Makefile` with cross-compiler or `CMakeLists.txt` with toolchain file | `cmake --build build`, OpenOCD flash |
| MicroPython | `boot.py` / `main.py` | `mpremote connect`, `mpremote cp` |
| Custom | `edesto.toml` | Your commands |
If edesto can't detect your toolchain, it prompts you to enter compile/upload commands and saves them to `edesto.toml` for next time.
## Supported Boards
| Slug | Board | Key Features |
|------|-------|-------------|
| `esp32` | ESP32 | WiFi, Bluetooth, BLE |
| `esp32s3` | ESP32-S3 | WiFi, BLE, USB native |
| `esp32c3` | ESP32-C3 | WiFi, BLE, RISC-V |
| `esp32c6` | ESP32-C6 | WiFi 6, BLE, Zigbee/Thread |
| `esp8266` | ESP8266 | WiFi |
| `arduino-uno` | Arduino Uno | AVR, 32KB flash |
| `arduino-nano` | Arduino Nano | AVR, compact |
| `arduino-mega` | Arduino Mega 2560 | AVR, 256KB flash, 4 serial |
| `rp2040` | Raspberry Pi Pico | Dual-core, PIO, USB |
| `teensy40` | Teensy 4.0 | 600MHz Cortex-M7, USB |
| `teensy41` | Teensy 4.1 | 600MHz, Ethernet, SD card |
| `stm32-nucleo` | STM32 Nucleo-64 | STM32, Arduino headers |
| `stm32f4-discovery` | STM32F4 Discovery | STM32F407, USB OTG, accelerometer, DAC |
| `stm32h7-nucleo` | STM32H7 Nucleo-144 | Dual-core 480MHz, Ethernet |
| `stm32l4-nucleo` | STM32L4 Nucleo-64 | Ultra-low-power, DAC |
| `nrf52840` | Adafruit Feather nRF52840 | BLE 5.0, USB, NFC, QSPI |
| `nrf5340` | nRF5340 DK | Dual-core, BLE 5.3, TrustZone |
Any board works with PlatformIO, ESP-IDF, Zephyr, MicroPython, or a custom toolchain — the table above is for auto-detection with board-specific pin references and pitfalls. Run `edesto boards` to see the full list.
## Debug Tools (Optional)
edesto auto-detects debug tools on your machine and includes them in the generated SKILLS.md. The agent picks the right tool for the problem:
| Tool | What it checks | Detection |
|------|---------------|-----------|
| **Serial** | Bidirectional communication — read output and send commands (always included) | `pyserial` |
| **Logic analyzer** | SPI/I2C/UART protocol timing and bus decoding | [Saleae Logic 2](https://www.saleae.com/) + `logic2-automation` Python package |
| **JTAG/SWD** | CPU state, crashes, HardFaults, registers, memory | `openocd` on PATH |
| **Oscilloscope** | Voltage levels, PWM frequency/duty, rise times | SCPI scope + `pyvisa` Python package |
If a tool isn't installed, its section is simply omitted — the agent won't try to use it. Run `edesto doctor` to see which tools are detected.
## Commands
```bash
# Setup
edesto init # Auto-detect everything
edesto init --board esp32 # Specify board, auto-detect port
edesto init --board esp32 --port /dev/ttyUSB0 # Fully manual
edesto init --board stm32-nucleo --upload jtag # Flash via JTAG/SWD
edesto init --toolchain platformio # Force a specific toolchain
edesto boards # List supported boards
edesto boards --toolchain arduino # Filter by toolchain
edesto doctor # Check your environment
# Serial communication
edesto serial ports # List available serial ports
edesto serial read --duration 10 # Read serial output for 10 seconds
edesto serial read --until "[READY]" # Read until marker appears
edesto serial send "status" # Send command, read response
edesto serial send "reboot" --until "[READY]" # Send command, wait for marker
edesto serial monitor # Stream serial output continuously
# Debug tools
edesto debug scan # Scan project for debug patterns
edesto debug instrument src/main.c:42 --expr val --fmt "%d" # Insert debug print
edesto debug instrument --function my_func # Add entry/exit logging
edesto debug instrument --gpio src/main.c:42 # GPIO toggle for timing
edesto debug clean # Remove all instrumentation
edesto debug clean --dry-run # Preview what would be removed
edesto debug status # Show diagnostic snapshot
edesto debug status --json # Machine-readable status
edesto debug reset # Clear all debug state
# Configuration
edesto config debug.gpio 25 # Set debug GPIO pin
edesto config serial.baud_rate # Get a config value
edesto config --list # Show all config
```
## Examples
Three example projects in `examples/`, each with an intentional bug for your AI agent to find and fix:
- **sensor-debug** — Temperature sensor with a unit conversion bug. Celsius values are correct but Fahrenheit readings are off.
- **wifi-endpoint** — ESP32 HTTP server where `/health` returns JSON with the wrong Content-Type header.
- **ota-update** — ESP32 with OTA support. The agent updates the version string and pushes firmware wirelessly.
## Prerequisites
- A supported board connected via USB or JTAG debugger
- Python 3.10+
- Your toolchain's CLI installed (e.g., `arduino-cli`, `pio`, `idf.py`, `west`, `arm-none-eabi-gcc`, `mpremote`)
- For JTAG flashing: `openocd` on PATH
- Optional: debug tools (`logic2-automation`, `openocd`, `pyvisa`) for advanced debugging
Run `edesto doctor` to check your setup.
## About
Built by [Edesto](https://edesto.com). We build tools for robotics and embedded teams.
| text/markdown | null | Edesto <greg@edesto.com> | null | null | null | embedded, esp32, arduino, firmware, ai, claude, development, platformio, esp-idf, micropython, microcontroller | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Embedded Systems",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"pyserial>=3.5",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://edesto.com",
"Repository, https://github.com/edesto/edesto-dev"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T23:15:06.958666 | edesto_dev-0.10.0.tar.gz | 87,122 | 46/c1/f383b1d0b1f0747d74564c1e993af00269a204d9626b088df3753a27f4d0/edesto_dev-0.10.0.tar.gz | source | sdist | null | false | 76df2002396a6bbd4c29c21bfd8217e4 | ca7664988d74a92421a490d31993b1924105f660889685ac866a4d6d3b7718c2 | 46c1f383b1d0b1f0747d74564c1e993af00269a204d9626b088df3753a27f4d0 | MIT | [
"LICENSE"
] | 222 |
2.4 | stestr | 4.2.1 | A parallel Python test runner built around subunit | ======
stestr
======
.. image:: https://github.com/mtreinish/stestr/actions/workflows/main.yml/badge.svg?branch=main
:target: https://github.com/mtreinish/stestr/actions/workflows/main.yml
:alt: CI Testing status
.. image:: https://img.shields.io/codecov/c/gh/mtreinish/stestr?style=flat-square
:target: https://codecov.io/gh/mtreinish/stestr
:alt: Code coverage
.. image:: https://img.shields.io/pypi/v/stestr.svg?style=flat-square
:target: https://pypi.python.org/pypi/stestr
:alt: Latest Version
.. image:: https://img.shields.io/github/license/mtreinish/stestr.svg?style=popout-square
:target: https://opensource.org/licenses/Apache-2.0
:alt: License:
* Read this in other languages: `English`_, `日本語`_
* You can see the full rendered docs at: http://stestr.readthedocs.io/en/latest/
* The code of the project is on Github: https://github.com/mtreinish/stestr
.. _English: https://github.com/mtreinish/stestr/blob/main/README.rst
.. _日本語: https://github.com/mtreinish/stestr/blob/main/README_ja.rst
.. note:: stestr v2.x.x release series will be the last series that supports
Python 2. Support for Python 2.7 was dropped in stestr release 3.0.0.
Overview
--------
stestr is parallel Python test runner designed to execute `unittest`_ test
suites using multiple processes to split up execution of a test suite. It also
will store a history of all test runs to help in debugging failures and
optimizing the scheduler to improve speed. To accomplish this goal it uses the
`subunit`_ protocol to facilitate streaming and storing results from multiple
workers.
.. _unittest: https://docs.python.org/3/library/unittest.html
.. _subunit: https://github.com/testing-cabal/subunit
stestr originally started as a fork of the `testrepository`_ project. But,
instead of being an interface for any test runner that used subunit, like
testrepository, stestr concentrated on being a dedicated test runner for python
projects. While stestr was originally forked from testrepository it is not
backwards compatible with testrepository. At a high level the basic concepts of
operation are shared between the two projects but the actual usage is not
exactly the same.
.. _testrepository: https://testrepository.readthedocs.org/en/latest
Installing stestr
-----------------
stestr is available via pypi, so all you need to do is run::
pip install -U stestr
to get stestr on your system. If you need to use a development version of
stestr you can clone the repo and install it locally with::
git clone https://github.com/mtreinish/stestr.git && pip install -e stestr
which will install stestr in your python environment in editable mode for local
development
Using stestr
------------
After you install stestr to use it to run tests is pretty straightforward. The
first thing you'll want to do is create a ``.stestr.conf`` file for your
project. This file is used to tell stestr where to find tests and basic
information about how tests are run. A basic minimal example of the
contents of this is::
[DEFAULT]
test_path=./project_source_dir/tests
which just tells stestr the relative path for the directory to use for
test discovery. This is the same as ``--start-directory`` in the standard
`unittest discovery`_.
.. _unittest discovery: https://docs.python.org/3/library/unittest.html#test-discovery
Alternatively, if you're using stestr with
`tox <https://tox.readthedocs.io/en/latest/>`__ you can integrate your stestr
config in a ``stestr`` section in the tox.ini file, for example::
[stestr]
test_path=./project_source_dir/tests
After stestr is configured you should be all set to start using stestr
to run tests. To run tests just use::
stestr run
it will first create a results repository at ``.stestr/`` in the current
working directory and then execute all the tests found by test discovery. If
you're just running a single test (or module) and want to avoid the overhead of
doing test discovery you can use the ``--no-discover``/``-n`` option to specify
that test.
For all the details on these commands and more thorough explanation of options
see the stestr manual: https://stestr.readthedocs.io/en/latest/MANUAL.html
Migrating from testrepository
-----------------------------
If you have a project that is already using testrepository stestr's source repo
contains a helper script for migrating your repo to use stestr. This script
just creates a ``.stestr.conf`` file from a ``.testr.conf`` file.
(assuming it uses a standard subunit.run test command format) To run
this from your project repo just call::
$STESTR_SOURCE_DIR/tools/testr_to_stestr.py
and you'll have a ``.stestr.conf`` created.
Building a manpage
------------------
The stestr manual has been formatted so that it renders well as html and as a
manpage. The html output and is autogenerated and published to:
https://stestr.readthedocs.io/en/latest/MANUAL.html but the manpage has to be
generated by hand. To do this you have to manually run sphinx-build with the
manpage builder. This has been automated in a small script that should be run
from the root of the stestr repository::
tools/build_manpage.sh
which will generate the troff file in doc/build/man/stestr.1 which is ready to
be packaged and or put in your system's man pages.
Contributing
------------
To browse the latest code, see: https://github.com/mtreinish/stestr
To clone the latest code, use: ``git clone https://github.com/mtreinish/stestr.git``
Guidelines for contribution are documented at: http://stestr.readthedocs.io/en/latest/developer_guidelines.html
Use `github pull requests`_ to submit patches. Before you submit a pull request
ensure that all the automated testing will pass by running ``tox`` locally.
This will run the test suite and also the automated style rule checks just as
they will in CI. If CI fails on your change it will not be able to merge.
.. _github pull requests: https://help.github.com/articles/about-pull-requests/
Community
---------
Besides Github interactions there is also a stestr IRC channel:
#stestr on `OFTC <https://oftc.net/>`__
feel free to join to ask questions, or just discuss stestr.
| text/x-rst | null | Matthew Treinish <mtreinish@kortar.org> | null | null | null | null | [
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Testing",
"Topic :: Software Development :: Quality Assurance"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"cliff>=2.8.0",
"python-subunit>=1.4.0",
"fixtures>=3.0.0",
"testtools>=2.2.0",
"PyYAML>=3.10.0",
"voluptuous>=0.8.9",
"tomlkit>=0.11.6"
] | [] | [] | [] | [
"Documentation, https://stestr.readthedocs.io",
"Homepage, https://stestr.readthedocs.io/en/stable/",
"Issues, https://github.com/mtreinish/stestr/issues",
"Repository, https://github.com/mtreinish/stestr"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:14:41.367338 | stestr-4.2.1.tar.gz | 79,815 | e7/d5/2980c018510854adc11835c448f8b6086a99c533b68905adee063a68ac6b/stestr-4.2.1.tar.gz | source | sdist | null | false | 5cb21f8e64c6374cd6ba489fca38bdf1 | b60495225fa783476252572adda29d33ae3706f95fc32564f78fb4e7c9e5df20 | e7d52980c018510854adc11835c448f8b6086a99c533b68905adee063a68ac6b | Apache-2.0 | [
"LICENSE"
] | 3,069 |
2.4 | cybrid-api-bank-python | 0.126.174 | Cybrid Bank API | View our documentation on [Github](https://github.com/Cybrid-app/cybrid-api-bank-python/)
| text/markdown | Cybrid Support | support@cybrid.app | null | null | Apache-2.0 | Cybrid Bank API | [] | [] | https://github.com/Cybrid-app/cybrid-api-bank-python/ | null | >=3.6 | [] | [] | [] | [
"urllib3>=1.25.3",
"python-dateutil"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T23:14:15.931919 | cybrid_api_bank_python-0.126.174.tar.gz | 649,088 | 4b/5b/182c50b5bf714f52d94c08d3563160bf38d06282fdf299ecb2a39409fa2e/cybrid_api_bank_python-0.126.174.tar.gz | source | sdist | null | false | 2b80b3e5c5e781fc90680ca83ab46566 | bc23db5f0f873014d49363cb5d21d25888ad14ccfe82fb821fda430d2ae51d53 | 4b5b182c50b5bf714f52d94c08d3563160bf38d06282fdf299ecb2a39409fa2e | null | [] | 247 |
2.4 | cybrid-api-organization-python | 0.126.174 | Cybrid Organization API | View our documentation on [Github](https://github.com/Cybrid-app/cybrid-api-organization-python/)
| text/markdown | Cybrid Support | support@cybrid.app | null | null | Apache-2.0 | Cybrid Organization API | [] | [] | https://github.com/Cybrid-app/cybrid-api-organization-python/ | null | >=3.6 | [] | [] | [] | [
"urllib3>=1.25.3",
"python-dateutil"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T23:14:01.790380 | cybrid_api_organization_python-0.126.174.tar.gz | 105,629 | 38/3a/988c1e0edbdfde5303bbdf22473b8318e3b72063b285a7f103f1b2ef96d3/cybrid_api_organization_python-0.126.174.tar.gz | source | sdist | null | false | be338eb6e633d39e2620c6bda35c6f2e | 52eb303221f96ba8b18ada095428b54a80a5d9b11f0093dedb987774ea03a34d | 383a988c1e0edbdfde5303bbdf22473b8318e3b72063b285a7f103f1b2ef96d3 | null | [] | 242 |
2.1 | returnn | 1.20260220.234910 | The RWTH extensible training framework for universal recurrent neural networks | ==================
Welcome to RETURNN
==================
`GitHub repository <https://github.com/rwth-i6/returnn>`__.
`RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`_,
`RETURNN paper 2018 <https://arxiv.org/abs/1805.05225>`_.
RETURNN - RWTH extensible training framework for universal recurrent neural networks,
is a PyTorch/TensorFlow-based implementation of modern recurrent neural network architectures.
It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.
The high-level features and goals of RETURNN are:
* **Simplicity**
* Writing config / code is simple & straight-forward (setting up experiment, defining model)
* Debugging in case of problems is simple
* Reading config / code is simple (defined model, training, decoding all becomes clear)
* **Flexibility**
* Allow for many different kinds of experiments / models
* **Efficiency**
* Training speed
* Decoding speed
All items are important for research, decoding speed is esp. important for production.
See our `Interspeech 2020 tutorial "Efficient and Flexible Implementation of Machine Learning for ASR and MT" video <https://www.youtube.com/watch?v=wPKdYqSOlAY>`__
(`slides <https://www-i6.informatik.rwth-aachen.de/publications/download/1154/Zeyer--2020.pdf>`__)
with an introduction of the core concepts.
More specific features include:
- Mini-batch training of feed-forward neural networks
- Sequence-chunking based batch training for recurrent neural networks
- Long short-term memory recurrent neural networks
including our own fast CUDA kernel
- Multidimensional LSTM (GPU only, there is no CPU version)
- Memory management for large data sets
- Work distribution across multiple devices
- Flexible and fast architecture which allows all kinds of encoder-attention-decoder models
See `documentation <https://returnn.readthedocs.io/>`__.
See `basic usage <https://returnn.readthedocs.io/en/latest/basic_usage.html>`__
and `technological overview <https://returnn.readthedocs.io/en/latest/tech_overview.html>`__.
`Here is the video recording of a RETURNN overview talk <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.recording.cut.mp4>`_
(`slides <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.returnn-overview.session1.handout.v1.pdf>`__,
`exercise sheet <https://www-i6.informatik.rwth-aachen.de/web/Software/returnn/downloads/workshop-2019-01-29/01.exercise_sheet.pdf>`__;
hosted by eBay).
There are `many example demos <https://github.com/rwth-i6/returnn/blob/master/demos/>`_
which work on artificially generated data,
i.e. they should work as-is.
There are `some real-world examples <https://github.com/rwth-i6/returnn-experiments>`_
such as setups for speech recognition on the Switchboard or LibriSpeech corpus.
Some benchmark setups against other frameworks
can be found `here <https://github.com/rwth-i6/returnn-benchmarks>`_.
The results are in the `RETURNN paper 2016 <https://arxiv.org/abs/1608.00895>`_.
Performance benchmarks of our LSTM kernel vs CuDNN and other TensorFlow kernels
are in `TensorFlow LSTM benchmark <https://returnn.readthedocs.io/en/latest/tf_lstm_benchmark.html>`__.
There is also `a wiki <https://github.com/rwth-i6/returnn/wiki>`_.
Questions can also be asked on
`StackOverflow using the RETURNN tag <https://stackoverflow.com/questions/tagged/returnn>`_.
.. image:: https://github.com/rwth-i6/returnn/workflows/CI/badge.svg
:target: https://github.com/rwth-i6/returnn/actions
Dependencies
============
pip dependencies are listed in ``requirements.txt`` and ``requirements-dev``,
although some parts of the code may require additional dependencies (e.g. ``librosa``, ``resampy``) on-demand.
RETURNN supports Python >= 3.8. Bumps to the minimum Python version are listed in `CHANGELOG.md <https://github.com/rwth-i6/returnn/blob/master/CHANGELOG.md>`__.
TensorFlow-based setups require TensorFlow >= 2.2.
PyTorch-based setups require Torch >= 1.0.
| text/x-rst | Albert Zeyer | albzey@gmail.com | null | null | RETURNN license | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: GPU",
"Environment :: GPU :: NVIDIA CUDA",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/rwth-i6/returnn/ | null | null | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T23:13:51.279250 | returnn-1.20260220.234910.tar.gz | 2,423,670 | db/4c/bac0b574e12785b409170101e489d4be42cb69db0662d42bc717a7daf751/returnn-1.20260220.234910.tar.gz | source | sdist | null | false | dae113a6f344b04580c4c0fadff7081d | 2f8651903e0978414dcea3fa256daf5de81322f2e3e621dba7da13827d7937ff | db4cbac0b574e12785b409170101e489d4be42cb69db0662d42bc717a7daf751 | null | [] | 255 |
2.4 | lm-deluge | 0.0.120 | Python utility for using LLM API models. | # lm-deluge
`lm-deluge` is a lightweight helper library for maxing out your rate limits with LLM providers. It provides the following:
- **Unified client** – Send prompts to all relevant models with a single client.
- **Files and Images** - Include images easily for multimodal models, and PDF files for models that support them (OpenAI and Anthropic).
- **Massive concurrency with throttling** – Set `max_tokens_per_minute` and `max_requests_per_minute` and let it fly. The client will process as many requests as possible while respecting rate limits and retrying failures.
- **Spray across models/providers** – Configure a client with multiple models from any provider(s), and sampling weights. The client samples a model for each request.
- **Tool Use** – Unified API for defining tools for all providers, and creating tools automatically from python functions.
- **MCP Support** – Instantiate a `Tool` from a local or remote MCP server so that any LLM can use it, whether or not that provider natively supports MCP.
- **Computer Use** – We support computer use for all major providers, and have pre-fabricated tools to integrate with Kernel, TryCUA, and more.
- **Local & Remote Caching** – Use Anthropic caching more easily with common patterns (system-only, tools-only, last N messages, etc.) Use client-side caching to save completions to avoid repeated LLM calls to process the same input.
- **Convenient message constructor** – No more looking up how to build an Anthropic messages list with images. Our `Conversation` and `Message` classes work great with our `LLMClient` or with the `openai` and `anthropic` packages.
- **Sync and async APIs** – Use the client from sync or async code.
**STREAMING IS NOT IN SCOPE.** There are plenty of packages that let you stream chat completions across providers. The sole purpose of this package is to do very fast batch inference using APIs. Sorry!
**Update 06/02/2025:** I lied, it supports (very basic) streaming now via client.stream(...). It will print tokens as they arrive, then return an APIResponse at the end. More sophisticated streaming may or may not be implemented later, don't count on it.
## Installation
```bash
pip install lm-deluge
```
The package relies on environment variables for API keys. Typical variables include `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `COHERE_API_KEY`, `META_API_KEY`, and `GEMINI_API_KEY`. `LLMClient` will automatically load the `.env` file when imported; we recommend using that to set the environment variables. For Bedrock, you'll need to set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
## Quickstart
`LLMClient` uses sensible default arguments for rate limits and sampling parameters so that you don't have to provide a ton of arguments.
```python
from lm_deluge import LLMClient
client = LLMClient("gpt-4.1-mini")
resps = client.process_prompts_sync(["Hello, world!"])
print(resps[0].completion)
```
## Spraying Across Models
To distribute your requests across models, just provide a list of more than one model to the constructor. See all available models in `models.py`. The rate limits for the client apply to the client as a whole, not per-model, so you may want to increase them:
```python
from lm_deluge import LLMClient
client = LLMClient(
["gpt-4.1-mini", "claude-4.5-haiku"],
max_requests_per_minute=10_000
)
resps = client.process_prompts_sync(
["Hello, ChatGPT!", "Hello, Claude!"]
)
print(resps[0].completion)
```
## Configuration
API calls can be customized in a few ways.
1. **Sampling Parameters.** This determines things like structured outputs, maximum completion tokens, nucleus sampling, etc. Provide a custom `SamplingParams` to the `LLMClient` to set temperature, top_p, json_mode, max_new_tokens, and/or reasoning_effort. You can pass 1 `SamplingParams` to use for all models, or a list of `SamplingParams` that's the same length as the list of models.
2. **Arguments to LLMClient.** This is where you set request timeout, rate limits, model name(s), model weight(s) for distributing requests across models, retries, caching, **and progress display style**. Set `progress="rich"` (default), `"tqdm"`, or `"manual"` to choose how progress is reported. The manual option prints an update every 30 seconds.
3. **Arguments to process_prompts.** Per-call, you can set verbosity, whether to display progress, and whether to return just completions (rather than the full APIResponse object). This is also where you provide tools.
Putting it all together:
```python
from lm_deluge import LLMClient, SamplingParams
client = LLMClient(
"gpt-4",
max_requests_per_minute=100,
max_tokens_per_minute=100_000,
max_concurrent_requests=500,
sampling_params=SamplingParams(temperature=0.5, max_new_tokens=30)
)
await client.process_prompts_async(
["What is the capital of Mars?"],
show_progress=False,
return_completions_only=True
)
```
### Queueing individual prompts
You can queue prompts one at a time and track progress explicitly. Iterate over
results as they finish with `as_completed` (or gather them all at once with
`wait_for_all`):
```python
client = LLMClient("gpt-4.1-mini", progress="tqdm")
client.open()
client.start_nowait("hello there")
# ... queue more tasks ...
async for task_id, result in client.as_completed():
print(task_id, result.completion)
client.close()
```
## Multi-Turn Conversations
Constructing conversations to pass to models is notoriously annoying. Each provider has a slightly different way of defining a list of messages, and with the introduction of images/multi-part messages it's only gotten worse. We provide convenience constructors so you don't have to remember all that stuff.
```python
from lm_deluge import Message, Conversation
prompt = Conversation().system("You are a helpful assistant.").add(
Message.user("What's in this image?").add_image("tests/image.jpg")
)
client = LLMClient("gpt-4.1-mini")
resps = client.process_prompts_sync([prompt])
```
This just works. Images can be local images on disk, URLs, bytes, base64 data URLs... go wild. You can use `Conversation.to_openai` or `Conversation.to_anthropic` to format your messages for the OpenAI or Anthropic clients directly.
See a full multi-turn chat example in `examples/multiturn.md`.
## Files
For models that support file uploads (OpenAI, Anthropic, and Gemini), you can easily include PDF files and other documents:
```python
from lm_deluge import LLMClient, Conversation
# Simple file upload
client = LLMClient("gpt-4.1-mini")
conversation = Conversation().user(
"Please summarize this document",
file="path/to/document.pdf"
)
resps = client.process_prompts_sync([conversation])
# You can also create File objects for more control
from lm_deluge import File
file = File("path/to/report.pdf", filename="Q4_Report.pdf")
conversation = Conversation().user("Analyze this financial report")
conversation.messages[0].parts.append(file)
```
Files can be local paths, URLs, bytes, or base64 data URLs, just like images.
## Tool Use
Define tools from Python functions and use them with any model:
```python
from lm_deluge import LLMClient, Tool
def get_weather(city: str) -> str:
return f"The weather in {city} is sunny and 72°F"
tool = Tool.from_function(get_weather)
client = LLMClient("claude-4.5-haiku")
resps = client.process_prompts_sync(
["What's the weather in Paris?"],
tools=[tool]
)
# you can iterate over the tool calls in the response automatically
for tool_call in resps[0].tool_calls:
print(tool_call.name, tool_call.arguments)
```
You can also automatically instantiate tools from MCP servers. Under the hood, the the constructor connects to the server, asks it what tools it has, and then creates a `Tool` from each of them, *with a built-in `call` and `acall` interface*.
```python
from lm_deluge import LLMClient, Tool
# Connect to a local MCP server and get all of its tools
filesystem_tools = Tool.from_mcp(
"filesystem",
command="npx",
args=["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"]
)
# or load ALL the tools from a Claude Desktop like config
config = {
"mcpServers": {
"exa": {
"url": f"https://mcp.exa.ai/mcp?exaApiKey={os.getenv('EXA_API_KEY')}"
},
"zapier": {
"url": f"https://mcp.zapier.com/api/mcp/s/{os.getenv('ZAPIER_MCP_SECRET')}/mcp"
}
}
}
all_tools = Tool.from_mcp_config(config)
# let the model use the tools
client = LLMClient("gpt-4o-mini")
resps = client.process_prompts_sync(
["List the files in the current directory"],
tools=tools
)
# call the tools
for tool_call in resps[0].tool_calls:
# this is dumb sorry will make it better
tool_to_call = [x for x in tools if x.name == tool_call.name][0]
tool_to_call.call(**tool_call.arguments) # in async code, use .acall()
# or use the built-in agent loop to handle this automatically
import asyncio
async def main():
conv = Conversation().user("List the files in the current directory")
conv, resp = await client.run_agent_loop(conv, tools=tools)
print(resp.content.completion)
asyncio.run(main())
```
### Prompt Caching (Anthropic)
For Anthropic models, you can use prompt caching to reduce costs and latency for repeated context. This uses Anthropic's server-side prompt caching. Other providers like OpenAI and Google do this automatically, but Anthropic requires you to manually set cache-control on messages. You can do this in lm-deluge with a simple "cache" argument to `process_prompts_sync` or `process_prompts_async`:
```python
from lm_deluge import LLMClient, Conversation, Message
# Create a conversation with system message
conv = (
Conversation().system("You are an expert Python developer with deep knowledge of async programming.")
.add(Message.user("How do I use asyncio.gather?"))
)
# Use prompt caching to cache system message and tools
client = LLMClient("claude-4.5-sonnet")
resps = client.process_prompts_sync(
[conv],
cache="system_and_tools" # Cache system message and any tools
)
```
Available cache patterns: `"system_and_tools"`, `"tools_only"`, `"last_user_message"`, `"last_2_user_messages"`, `"last_3_user_messages"`.
## Local Caching
Besides caching from model providers (which provides cache reads at a discount, but not for free) `lm_deluge.cache` includes LevelDB, SQLite and custom dictionary based caches to cache prompts locally. Pass an instance via `LLMClient(..., cache=my_cache)` and previously seen prompts will not be re‑sent across different `process_prompts_[...]` calls.
**IMPORTANT:** Caching does not currently work for prompts in the SAME batch. That is, if you call `process_prompts_sync` with the same prompt 100 times, there will be 0 cache hits. If you call `process_prompts_sync` a *second* time with those same 100 prompts, all 100 will be cache hits. The local cache is intended to be persistent and help you save costs across many invocations, but it can't help with a single batch-inference session (yet!).
## Asynchronous Client
Use this in asynchronous code, or in a Jupyter notebook. If you try to use the sync client in a Jupyter notebook, you'll have to use `nest-asyncio`, because internally the sync client uses async code. Don't do it! Just use the async client!
```python
import asyncio
async def main():
responses = await client.process_prompts_async(
["an async call"],
return_completions_only=True,
)
print(responses[0])
asyncio.run(main())
```
## Available Models
We support all models in `src/lm_deluge/models.py`. Vertex support is not planned in the short term, since Google allows you to connect your Vertex account to AI Studio, and Vertex authentication is a huge pain (requires service account credentials, etc.)
## Feature Support
We support structured outputs via `json_mode` parameter provided to `SamplingParams`. Structured outputs with a schema are planned. Reasoning models are supported via the `reasoning_effort` parameter, which is translated to a thinking budget for Claude/Gemini. Passing `None` (or the string `"none"`) disables Gemini thoughts entirely. Image models are supported. We support tool use as documented above. We support logprobs for OpenAI models that return them.
## Built‑in tools
The `lm_deluge.pipelines` module exposes a few helper functions that combine LLMClient with prompt and output parsing to accomplish tasks:
- `extract` – structure text or images into a Pydantic model based on a schema.
- `translate` – translate a list of strings to English.
- `score_llm` – simple yes/no style scoring with optional log probability output.
Experimental embeddings (`embed.embed_parallel_async`) and document reranking (`rerank.rerank_parallel_async`) clients are also provided.
| text/markdown | null | Benjamin Anderson <ben@trytaylor.ai> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-dotenv",
"pyjson5",
"PyYAML",
"aiohttp",
"requests",
"xxhash",
"tqdm",
"pydantic",
"markdownify-rs==0.1.1",
"pillow",
"rich",
"pdf2image; extra == \"pdf\"",
"boto3>=1.28.0; extra == \"aws\"",
"requests-aws4auth; extra == \"aws\"",
"google-auth; extra == \"google\"",
"docker>=7.0.0; extra == \"docker\"",
"tantivy>=0.21.0; extra == \"full-text-search\"",
"lenlp>=0.1.0; extra == \"full-text-search\"",
"modal>=0.64.0; extra == \"sandbox\"",
"daytona-sdk>=0.1.4; extra == \"sandbox\"",
"docker>=7.0.0; extra == \"sandbox\"",
"fastapi>=0.100.0; extra == \"server\"",
"uvicorn>=0.20.0; extra == \"server\"",
"ty; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.0 | 2026-02-20T23:13:47.334980 | lm_deluge-0.0.120.tar.gz | 305,722 | 3e/e4/eab3007918ece525a7465bf1809b70b8a0dbbd1345def9a4b4312dde1ce9/lm_deluge-0.0.120.tar.gz | source | sdist | null | false | b29b667ab047940cc9e8b5c597baf901 | c718a92d316741ba47246098bdeb2b8c6754addffdd594a1389ee60de608c5b7 | 3ee4eab3007918ece525a7465bf1809b70b8a0dbbd1345def9a4b4312dde1ce9 | null | [
"LICENSE"
] | 219 |
2.4 | winebox | 0.5.6 | Wine Cellar Management Application with OCR label scanning | # WineBox
A wine cellar management application with OCR label scanning.
## Features
- **Label Scanning**: Upload wine label images for automatic text extraction via AI
- **Wine Autocomplete**: Search 100K+ wines from the [X-Wines dataset](https://github.com/rogerioxavier/X-Wines) with community ratings
- **Inventory Tracking**: Check-in and check-out bottles with full history
- **Smart Parsing**: Automatically identifies vintage, grape variety, region, and more
- **Search**: Find wines by any criteria
- **Web Interface**: Simple, mobile-friendly interface
## Quick Start
### Prerequisites
- Python 3.11+
- MongoDB 7.0+
- [Tesseract OCR](https://github.com/tesseract-ocr/tesseract) (optional fallback)
### Installation
**From PyPI:**
```bash
pip install winebox
```
**From source:**
```bash
# Clone the repository
git clone https://github.com/jdrumgoole/winebox.git
cd winebox
# Install dependencies
uv sync --all-extras
# Start MongoDB (using Docker)
docker run -d -p 27017:27017 --name mongodb mongo:7
# Install Tesseract OCR (optional)
# macOS:
brew install tesseract
# Ubuntu/Debian:
sudo apt-get install tesseract-ocr
```
### Configuration
WineBox uses TOML configuration files:
```bash
# Copy example configuration
cp config/config.toml.example config.toml
cp config/secrets.env.example secrets.env
# Edit secrets.env with your API keys
nano secrets.env
```
See the [Configuration Guide](https://winebox.readthedocs.io/configuration.html) for full details.
### Running the Server
```bash
# Development mode with auto-reload
invoke start --reload
# Background mode
invoke start-background
# Check status
invoke status
# Stop server
invoke stop
```
### Access the Application
- **Web Interface**: http://localhost:8000/static/index.html
- **API Documentation**: http://localhost:8000/docs
- **Health Check**: http://localhost:8000/health
## Configuration
WineBox uses a TOML-based configuration system with separate secrets management:
| File | Purpose |
|------|---------|
| `config.toml` | Main configuration (server, database, features) |
| `secrets.env` | Sensitive credentials (API keys) |
### Configuration Locations
Files are searched in priority order:
1. `./config.toml` - Project root (development)
2. `~/.config/winebox/config.toml` - User config
3. `/etc/winebox/config.toml` - System config (production)
### Example config.toml
```toml
[server]
host = "127.0.0.1"
port = 8000
debug = false
[database]
mongodb_url = "mongodb://localhost:27017"
mongodb_database = "winebox"
[ocr]
use_claude_vision = true
[email]
backend = "console"
```
### Example secrets.env
```bash
WINEBOX_SECRET_KEY=your-secret-key-here
WINEBOX_ANTHROPIC_API_KEY=sk-ant-api03-...
```
Environment variables can override any configuration value.
## Usage
### Check In Wine
1. Navigate to the Check In page
2. Upload front label image (required)
3. Optionally upload back label image
4. Review/edit auto-detected wine details
5. Set quantity and add notes
6. Click "Check In Wine"
### Check Out Wine
1. Go to the Cellar view
2. Click "Check Out" on a wine card
3. Enter quantity to remove
4. Add optional notes (tasting notes, occasion)
5. Confirm checkout
### Search
Use the Search page to find wines by:
- Text search (name, winery, region)
- Vintage year
- Grape variety
- Region or country
- Stock status
## API
Full REST API available at `/api`:
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/wines/checkin` | POST | Add wine to cellar |
| `/api/wines/{id}/checkout` | POST | Remove wine from cellar |
| `/api/wines` | GET | List all wines |
| `/api/wines/{id}` | GET | Get wine details |
| `/api/cellar` | GET | Current inventory |
| `/api/cellar/summary` | GET | Cellar statistics |
| `/api/transactions` | GET | Transaction history |
| `/api/search` | GET | Search wines |
| `/api/xwines/search` | GET | Autocomplete wine search |
| `/api/xwines/wines/{id}` | GET | X-Wines wine details |
| `/api/xwines/stats` | GET | Dataset statistics |
See `/docs` for interactive API documentation.
## Data Storage
### Database
WineBox uses MongoDB for data storage. Configure the connection in `config.toml`:
```toml
[database]
mongodb_url = "mongodb://localhost:27017"
mongodb_database = "winebox"
```
### Images
Wine label images are stored in the `data/images/` directory by default.
| Item | Default Location | Config Key |
|------|------------------|------------|
| Database | MongoDB `winebox` | `database.mongodb_database` |
| Images | `data/images/` | `storage.data_dir` |
**Note:** Back up your MongoDB database and images directory regularly.
## X-Wines Dataset
WineBox integrates the [X-Wines dataset](https://github.com/rogerioxavier/X-Wines) for wine autocomplete, providing suggestions from 100,646 wines with 21 million community ratings.
### Installing the Dataset
```bash
# Option 1: Test dataset (100 wines, for development)
uv run python -m scripts.import_xwines --version test
# Option 2: Full dataset (100K+ wines, for production)
# First, download from Google Drive
uv pip install gdown
mkdir -p data/xwines
uv run gdown --folder "https://drive.google.com/drive/folders/1LqguJNV-aKh1PuWMVx5ELA61LPfGfuu_?usp=sharing" -O data/xwines/
cp data/xwines/X-Wines_Official_Repository/last/XWines_Full_*.csv data/xwines/
# Then import
uv run python -m scripts.import_xwines --version full
```
The autocomplete appears when typing in the Wine Name field during check-in.
## Label Scanning
WineBox uses AI-powered label scanning to extract wine information from photos.
### Claude Vision (Recommended)
For best results, configure Claude Vision by adding your API key to `secrets.env`:
```bash
WINEBOX_ANTHROPIC_API_KEY=your-api-key
```
Claude Vision provides intelligent label analysis that:
- Handles decorative and artistic fonts
- Understands wine-specific terminology
- Extracts structured data (winery, vintage, grape variety, region, etc.)
- Works with curved or angled text
### Tesseract OCR (Fallback)
If no Anthropic API key is configured, WineBox falls back to Tesseract OCR:
```bash
# macOS
brew install tesseract
# Ubuntu/Debian
sudo apt-get install tesseract-ocr
```
To force Tesseract only (save API costs during development):
```toml
# config.toml
[ocr]
use_claude_vision = false
```
## Authentication
WineBox requires authentication for all API endpoints (except `/health`).
### Creating Users
```bash
# Create an admin user
uv run winebox-admin add admin@example.com --admin --password yourpassword
# Create a regular user
uv run winebox-admin add user@example.com --password yourpassword
# List all users
uv run winebox-admin list
# Disable/enable a user
uv run winebox-admin disable user@example.com
uv run winebox-admin enable user@example.com
# Change password
uv run winebox-admin passwd user@example.com --password newpassword
# Remove a user
uv run winebox-admin remove user@example.com
```
### Server Management
```bash
# Start server (foreground)
uv run winebox-server start --foreground
# Start server (background)
uv run winebox-server start
# Stop server
uv run winebox-server stop
# Restart server
uv run winebox-server restart
# Check status
uv run winebox-server status
```
### API Authentication
The API uses JWT bearer tokens. To authenticate:
1. POST to `/api/auth/token` with email (in the `username` field per OAuth2 spec) and `password` (form-urlencoded)
2. Include the returned token in subsequent requests: `Authorization: Bearer <token>`
Tokens expire after 24 hours.
## Deployment
WineBox includes deployment scripts for Digital Ocean:
```bash
# Initial server setup
uv run python -m invoke deploy-setup --host YOUR_DROPLET_IP
# Deploy to production
uv run python -m invoke deploy
```
See the [Deployment Guide](https://winebox.readthedocs.io/deployment.html) for full instructions.
## Development
### Running Tests
```bash
# Run all tests
invoke test
# With verbose output
invoke test --verbose
# With coverage
invoke test --coverage
# Run without Claude Vision (save API costs)
WINEBOX_USE_CLAUDE_VISION=false invoke test
```
### Project Structure
```
winebox/
├── winebox/ # Application package
│ ├── main.py # FastAPI app
│ ├── config/ # Configuration module
│ ├── models/ # MongoDB/Beanie models
│ ├── schemas/ # API schemas
│ ├── routers/ # API endpoints
│ ├── services/ # Business logic
│ └── static/ # Web interface
├── config/ # Configuration templates
├── deploy/ # Deployment module
├── tests/ # Test suite
├── docs/ # Documentation
└── tasks.py # Build tasks
```
### Building Documentation
```bash
invoke docs-build
invoke docs-serve
```
## Tech Stack
- **FastAPI**: Web framework
- **MongoDB**: Document database
- **Beanie**: MongoDB ODM
- **fastapi-users**: Authentication
- **Tesseract/Claude Vision**: OCR engines
- **Vanilla JS**: Frontend (no frameworks)
## License
MIT License
| text/markdown | null | Joe Drumgoole <joe@joedrumgoole.com> | null | null | MIT | cellar, fastapi, inventory, ocr, wine | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: FastAPI",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Home Automation"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aioboto3>=13.0.0",
"aiofiles>=23.0.0",
"anthropic>=0.40.0",
"bcrypt>=4.1.2",
"beanie>=1.25.0",
"fastapi-users[beanie]>=14.0.0",
"fastapi>=0.109.0",
"jinja2>=3.1.0",
"motor>=3.3.0",
"pillow>=10.0.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"pytesseract>=0.3.10",
"python-jose[cryptography]>=3.3.0",
"python-multipart>=0.0.6",
"pyyaml>=6.0.0",
"slowapi>=0.1.9",
"uvicorn[standard]>=0.27.0",
"httpx>=0.26.0; extra == \"dev\"",
"invoke>=2.2.0; extra == \"dev\"",
"myst-parser>=2.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-playwright>=0.4.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"sphinx-rtd-theme>=2.0.0; extra == \"dev\"",
"sphinx>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jdrumgoole/winebox",
"Documentation, https://winebox.readthedocs.io",
"Repository, https://github.com/jdrumgoole/winebox",
"Issues, https://github.com/jdrumgoole/winebox/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:13:05.143359 | winebox-0.5.6.tar.gz | 5,756,169 | 8f/2e/1146ee1f727eedef0db0f6b6da80178c51003cde734a8011b0e53e793223/winebox-0.5.6.tar.gz | source | sdist | null | false | a90b5056fb9d7dc7857d7e44c44aeb18 | 8a169af18abfcb33796c3ab507d3574e3c701561be8fccb1a89849f74d2de5ad | 8f2e1146ee1f727eedef0db0f6b6da80178c51003cde734a8011b0e53e793223 | null | [
"LICENSE"
] | 231 |
2.4 | indy-hub | 1.14.4 | Indy Hub - An industrial management application for Alliance Auth | # Indy Hub for Alliance Auth
A modern industry and material‑exchange management module for [Alliance Auth](https://allianceauth.org/), focused on blueprint sharing, job tracking, and corp trading workflows for EVE Online alliances and corporations.
______________________________________________________________________
## Table of Contents
- [About](#about)
- [Features](#features)
- [Requirements](#requirements)
- [Installation](#installation)
- [Bare Metal](#bare-metal)
- [Docker](#docker)
- [Common](#common)
- [Permissions](#permissions)
- [Base Access (Required for all users)](#base-access-required-for-all-users)
- [Corporation Management (Optional)](#corporation-management-optional)
- [Material Exchange Administration (Optional)](#material-exchange-administration-optional)
- [Settings](#settings)
- [Updating](#updating)
- [Usage](#usage)
- [Screenshots](#screenshots)
- [Contributing](#contributing)
______________________________________________________________________
## About
### Features
- **Blueprint Library**: Browse, search, and manage personal and corporation blueprints.
- **Industry Jobs**: Track active and completed manufacturing, research, and invention jobs.
- **Blueprint Copy Requests**: Create requests, receive offers, chat with builders, and follow delivery status.
- **Sharing Controls**: Configure who can see and fulfill blueprint copy requests.
- **Material Exchange**: Submit buy/sell orders and follow validation/processing from one hub.
- **Order Tracking**: View clear statuses, timelines, and history for your requests and orders.
- **Notifications**: Receive in-app updates for key events (offers, deliveries, job updates).
- **Admin Tools**: Manage corp blueprint workflows and Material Exchange operations with dedicated admin views.
- **Modern UI**: Responsive, theme-friendly interface designed for daily operational use.
## Requirements
- **Alliance Auth v4+**
- **Python 3.10+**
- **Django** (as required by AA)
- **Alliance Auth AppUtils**
- **django-esi** (OpenAPI client, >=8)
- **django-eveuniverse** (populated with industry data)
- **Celery** (for background sync and notifications)
- *(Optional)* Director characters for corporate dashboards
- *(Optional)* [`aadiscordbot`](https://apps.allianceauth.org/apps/detail/allianceauth-discordbot) (preferred) or [`discordnotify`](https://apps.allianceauth.org/apps/detail/aa-discordnotify) for Discord notifications
______________________________________________________________________
## Installation
### Bare Metal
```text
pip install django-eveuniverse indy-hub
```
Add to your `local.py`:
```python
INSTALLED_APPS = [
"eveuniverse",
"indy_hub",
]
EVEUNIVERSE_LOAD_TYPE_MATERIALS = True
EVEUNIVERSE_LOAD_MARKET_GROUPS = True
```
Run migrations and collect static files:
```text
python manage.py migrate
python manage.py collectstatic --noinput
```
Populate industry data:
```text
python manage.py eveuniverse_load_data types --types-enabled-sections industry_activities type_materials
```
Restart services:
```text
systemctl restart allianceauth
```
### Docker
```text
docker compose exec allianceauth_gunicorn bash
pip install django-eveuniverse indy-hub
exit
```
Add to your `conf/local.py`:
```python
INSTALLED_APPS = [
"eveuniverse",
"indy_hub",
]
EVEUNIVERSE_LOAD_TYPE_MATERIALS = True
EVEUNIVERSE_LOAD_MARKET_GROUPS = True
```
Add to your `conf/requirements.txt` (Always use current versions)
```text
django-eveuniverse==1.6.0
indy-hub==1.14.4
```
Run migrations and collect static files:
```text
docker compose exec allianceauth_gunicorn bash
auth migrate
auth collectstatic --noinput
exit
```
Restart Auth:
```text
docker compose build
docker compose down
docker compose up -d
```
Populate industry data:
```text
docker compose exec allianceauth_gunicorn bash
auth eveuniverse_load_data types --types-enabled-sections industry_activities type_materials
exit
```
### Common
- Set permissions in Alliance Auth (see [Permissions](#permissions)).
- Authorize ESI tokens for blueprints and industry jobs.
______________________________________________________________________
## Permissions
Assign permissions in Alliance Auth to control access levels:
### Base Access (Required for all users)
- **Visible in admin:** "indy_hub | can access Indy_Hub"
- View and manage personal blueprints
- Create and manage blueprint copy requests
- Use Material Exchange (buy/sell orders)
- View personal industry jobs
- Configure personal settings and notifications
### Corporation Management (Optional)
- **Visible in admin:** "indy_hub | can admin Corp"
- View and manage corporation blueprints (director only)
- Handle corporation blueprint copy requests (accept/reject corp BP copy sharing)
- Access corporation industry jobs
- Configure corporation sharing settings
- This role is **not** meant for everyone — only for people who manage corp BPs (they can handle contracts for corpmates)
- Requires ESI director roles for the corporation
### Material Exchange Administration (Optional)
- **Visible in admin:** "indy_hub | can admin MatExchange"
- Configure Material Exchange settings
- Manage stock availability
- View all transactions
- This role is **not** meant for everyone — only for people who manage the hub (they accept/reject buy and sell orders made to the corp)
- Admin panel access
**Note**: Permissions are independent and can be combined. Most users only need `can access Indy_Hub`.
______________________________________________________________________
## Settings
Customize Indy Hub behavior in `local.py`:
```python
# Discord notifications
INDY_HUB_DISCORD_DM_ENABLED = True # Default: True
INDY_HUB_DISCORD_ACTION_TOKEN_MAX_AGE = 86400 # Default: 24 hours
# ESI compatibility date (OpenAPI)
INDY_HUB_ESI_COMPATIBILITY_DATE = "2025-09-30" # Default: app default
# ESI task staggering (rate-limit friendly scheduling)
INDY_HUB_ESI_TASK_STAGGER_THRESHOLD = 400 # Default: 400
INDY_HUB_ESI_TASK_TARGET_PER_MIN_BLUEPRINTS = 30 # Default: 30
INDY_HUB_ESI_TASK_TARGET_PER_MIN_JOBS = 30 # Default: 30
INDY_HUB_ESI_TASK_TARGET_PER_MIN_SKILLS = 40 # Default: 40
INDY_HUB_ESI_TASK_TARGET_PER_MIN_ROLES = 30 # Default: 30
# Stale refresh thresholds (hours)
INDY_HUB_ONLINE_STATUS_STALE_HOURS = 72 # Default: 72
INDY_HUB_SKILL_SNAPSHOT_STALE_HOURS = 24 # Default: 24
INDY_HUB_ROLE_SNAPSHOT_STALE_HOURS = 24 # Default: 24
INDY_HUB_STRUCTURE_NAME_STALE_HOURS = 24 # Default: 24
```
**Scheduled Tasks** (auto-created):
- `indy-hub-update-all-blueprints` → Daily at 03:30 UTC
- `indy-hub-update-all-industry-jobs` → Every 2 hours
- `indy-hub-refresh-stale-snapshots` → Hourly (skills/roles/online/structures)
______________________________________________________________________
## Updating
### Bare Metal Update
```text
# Update the package
pip install --upgrade indy-hub
# Apply migrations
python manage.py migrate
# Collect static files
python manage.py collectstatic --noinput
# Restart services
systemctl restart allianceauth
```
### Docker Update
Update Versions in `conf/requirements.txt` (Always use current versions)
```text
indy-hub==1.14.4
```
Update the Package:
```text
# Exec Into the Container
docker compose exec allianceauth_gunicorn bash
# Update the package
pip install -U indy-hub
# Apply Migrations
auth migrate
# Collect static files
auth collectstatic --no-input
# Restart Services
exit
docker compose build
docker compose down
docker compose up -d
```
If Celery runs in dedicated containers/services in your stack, also restart worker and beat/scheduler containers.
______________________________________________________________________
## Usage
1. **Navigate** to Indy Hub in the Alliance Auth dashboard
1. **Authorize ESI** for blueprints and jobs via the settings
1. **View Your Data**:
- Personal blueprints and industry jobs
- Corporation blueprints (if director)
- Pending blueprint copy requests
- Material Exchange buy/sell orders and transaction history
1. **Share Blueprints**: Set sharing scopes and send copy offers to alliance members
1. **Receive Notifications**: View job completions and copy request updates in the notification feed
______________________________________________________________________
## Screenshots
Below are a few UI highlights from the current release.
### Dashboard Overview

### Blueprint Library

### Blueprint Copy Requests

### Material Exchange Hub

### Order Requests

### Discord Notifications

### User Settings

______________________________________________________________________
## Contributing
- Open an issue or pull request on GitHub for help or to contribute
Or contact me on discord: `erkaek`
______________________________________________________________________
| text/markdown | null | erka Ekanon <erkaekanon@outlook.com> | null | null | null | allianceauth, eveonline, hub, industry, indy | [
"Environment :: Web Environment",
"Framework :: Celery",
"Framework :: Django",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.2",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"allianceauth<5,>=4",
"allianceauth-app-utils>=1",
"django-esi<9,>=7",
"django-eveuniverse>=1.5.7",
"requests>=2.31",
"aadiscordbot; extra == \"aadiscordbot\"",
"aa-discordnotify>=2; extra == \"discordnotify\""
] | [] | [] | [] | [
"Homepage, https://github.com/Erkaek/aa-Indy_Hub",
"Source, https://github.com/Erkaek/aa-Indy_Hub",
"Tracker, https://github.com/Erkaek/aa-Indy_Hub/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T23:12:26.225590 | indy_hub-1.14.4.tar.gz | 467,895 | 44/ce/90bfffbc63b914bd312a4eb8599013bbb8db1d9f879092422a37d634dd5b/indy_hub-1.14.4.tar.gz | source | sdist | null | false | f3e4c6c0cf2fb7eff8241bbcd08161c4 | 718a5108d3ac19f05e569cce4d138c81457c0e3aa8d2afee83c5ed89177d0d32 | 44ce90bfffbc63b914bd312a4eb8599013bbb8db1d9f879092422a37d634dd5b | null | [
"LICENSE"
] | 233 |
2.4 | cybrid-api-id-python | 0.126.174 | Cybrid Identity API | View our documentation on [Github](https://github.com/Cybrid-app/cybrid-api-id-python/)
| text/markdown | Cybrid Support | support@cybrid.app | null | null | Apache-2.0 | Cybrid Identity API | [] | [] | https://github.com/Cybrid-app/cybrid-api-id-python/ | null | >=3.6 | [] | [] | [] | [
"urllib3>=1.25.3",
"python-dateutil"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T23:12:22.413079 | cybrid_api_id_python-0.126.174.tar.gz | 125,262 | ed/da/998db1bbd67eca1c9ad490230b4ff7e7156c0bbd38ddf9a292fa034ed87d/cybrid_api_id_python-0.126.174.tar.gz | source | sdist | null | false | afc93b58a016756c887a739178f7f82f | 5f07d03f2caebf7f60f8a3fface4e187e399e0718c63509c2fc5ca6a6a63a791 | edda998db1bbd67eca1c9ad490230b4ff7e7156c0bbd38ddf9a292fa034ed87d | null | [] | 254 |
2.4 | pythia-datasets | 2026.2.20 | Provides utility functions for accessing data repository for Project Pythia examples/notebooks | | CI | [![GitHub Workflow Status][github-ci-badge]][github-ci-link] [![GitHub Workflow Status][pre-commit-badge]][pre-commit-link] [![Code Coverage Status][codecov-badge]][codecov-link] |
| :---------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| **Docs** | [![Documentation Status][rtd-badge]][rtd-link] |
| **Package** | [![Conda][conda-badge]][conda-link] [![PyPI][pypi-badge]][pypi-link] |
| **License** | [![License][license-badge]][repo-link] |
# pythia-datasets
Data repository for with sample data for the [Pythia Foundations](https://foundations.projectpythia.org) book.
## Sample data sets
These files are used as sample data in [Pythia Foundations](https://foundations.projectpythia.org) and are downloaded by `pythia_datasets` package:
- `NARR_19930313_0000.nc`
- `enso_data.csv`
- `jan-17-co-asos.txt.xz`
- `CESM2_sst_data.nc`
- `CESM2_grid_variables.nc`
- `daymet_v4_precip_sept_2013.nc`
## Adding new datasets
The scope of this data collection is to serve examples for [Pythia Foundations](https://foundations.projectpythia.org).
If you are adding new content to Foundations that requires a new dataset file, please follow these steps:
1. Add the dataset file to the `data/` directory
2. From the command line, run `python make_registry.py` script to update the registry file residing in `pythia_datasets/registry.txt`
3. Commit and push your changes to GitHub
## Using datasets in notebooks and/or scripts
- Ensure the `pythia_datasets` package is installed in your environment
```bash
python -m pip install pythia-datasets
# or
python -m pip install git+https://github.com/ProjectPythia/pythia-datasets
```
- Import `DATASETS` and inspect the registry to find out which datasets are available
```python
In [1]: from pythia_datasets import DATASETS
In [2]: DATASETS.registry_files
Out[2]: ['jan-17-co-asos.txt.xz', 'NARR_19930313_0000.nc']
```
- To fetch a data file of interest, use the `.fetch` method and provide the filename of the data file. This will
- download and cache the file if it doesn't exist already.
- retrieve and return the local path
```python
In [4]: filepath = DATASETS.fetch('jan-17-co-asos.txt.xz')
In [5]: filepath
Out[5]: '/Users/abanihi/Library/Caches/pythia-datasets/jan-17-co-asos.txt.xz'
```
- Once you have access to the local filepath, you can then use it to load your dataset into pandas or xarray or your package of choice:
```python
In [6]: df = pd.read_csv(filepath)
```
## Changing the default data cache location
The default cache location (where the data are saved on your local system) is dependent on the operating system. You can use the `locate()` method to identify it:
```python
from pythia_datasets import locate
locate()
```
The location can be overwritten by the `PYTHIA_DATASETS_DIR` environment
variable to the desired destination.
[github-ci-badge]: https://github.com/ProjectPythia/pythia-datasets/actions/workflows/ci.yaml/badge.svg
[pre-commit-badge]: https://results.pre-commit.ci/badge/github/ProjectPythia/pythia-datasets/main.svg
[github-ci-link]: https://github.com/ProjectPythia/pythia-datasets/actions?query=workflow%3ACI
[pre-commit-link]: https://results.pre-commit.ci/latest/github/ProjectPythia/pythia-datasets/main
[codecov-badge]: https://img.shields.io/codecov/c/github/ProjectPythia/pythia-datasets.svg?logo=codecov
[codecov-link]: https://codecov.io/gh/ProjectPythia/pythia-datasets
[rtd-badge]: https://img.shields.io/readthedocs/pythia-datasets/latest.svg?style=for-the-badge
[rtd-link]: https://pythia-datasets.readthedocs.io/en/latest/?badge=latest
[pypi-badge]: https://img.shields.io/pypi/v/pythia-datasets?logo=pypi&style=for-the-badge
[pypi-link]: https://pypi.org/project/pythia-datasets
[conda-badge]: https://img.shields.io/conda/vn/conda-forge/pythia-datasets?logo=anaconda&style=for-the-badge
[conda-link]: https://anaconda.org/conda-forge/pythia-datasets
[license-badge]: https://img.shields.io/github/license/ProjectPythia/pythia-datasets?style=for-the-badge
[repo-link]: https://github.com/ProjectPythia/pythia-datasets
| text/markdown | null | null | Project Pythia Team | null | null | Pooch, Pythia | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pooch"
] | [] | [] | [] | [
"Documentation, https://github.com/ProjectPythia/pythia-datasets",
"Homepage, https://github.com/ProjectPythia/pythia-datasets",
"Source, https://github.com/ProjectPythia/pythia-datasets",
"Tracker, https://github.com/ProjectPythia/pythia-datasets/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:12:08.973873 | pythia_datasets-2026.2.20.tar.gz | 10,941 | 63/ff/f4b10f942bdc5e55f3fbe59c515640609349a4708f5be8c5d176e608b9d3/pythia_datasets-2026.2.20.tar.gz | source | sdist | null | false | 5679843475fd6791ce7696ecacd0357d | 7ff7427d2ad99b6eca561104eac2c51f401584c380a96b6229599dd045f85be4 | 63fff4b10f942bdc5e55f3fbe59c515640609349a4708f5be8c5d176e608b9d3 | Apache-2.0 | [
"LICENSE"
] | 230 |
2.4 | nexi-xpay-mcp-server | 1.2.0 | MCP server for Nexi XPay Back Office APIs | # nexi-xpay-mcp-server
[](https://modelcontextprotocol.io)
[](https://pypi.org/project/nexi-xpay-mcp-server/)
[](https://opensource.org/licenses/MIT)
MCP server for the **Nexi XPay Back Office APIs**. Enables AI assistants (Claude, Cursor, etc.) to query orders, transaction details, warnings/anomalies, and payment methods from your Nexi XPay merchant account.
## Tools
| Tool | Description |
|------|-------------|
| `list_orders` | List orders with filters (date range, channel, status, transaction code) |
| `order_details` | Full details of a specific transaction |
| `warnings` | Retrieve warnings/anomalies (default: last 7 days) |
| `payment_methods` | List active payment methods for the merchant |
## Prerequisites
- Python >= 3.10
- A Nexi XPay merchant account with Back Office API access
- API credentials: **Alias**, **API Key** and **Secret Key** (from Nexi Back Office)
## Installation
```bash
uvx nexi-xpay-mcp-server
```
## Usage in .mcp.json
Add to your MCP configuration file (`.mcp.json` for Claude Code, `claude_desktop_config.json` for Claude Desktop):
```json
{
"mcpServers": {
"nexi": {
"command": "uvx",
"args": ["nexi-xpay-mcp-server"],
"env": {
"NEXI_ALIAS": "your_alias",
"NEXI_SECRET_KEY": "your_secret_key"
}
}
}
}
```
### Multiple merchants
Use different keys to run one instance per merchant:
```json
{
"mcpServers": {
"nexi-acme": {
"command": "uvx",
"args": ["nexi-xpay-mcp-server"],
"env": {
"NEXI_ALIAS": "acme_merchant",
"NEXI_SECRET_KEY": "acme_secret_key"
}
},
"nexi-globex": {
"command": "uvx",
"args": ["nexi-xpay-mcp-server"],
"env": {
"NEXI_ALIAS": "globex_merchant",
"NEXI_SECRET_KEY": "globex_secret_key"
}
}
}
}
```
## Environment variables
| Variable | Required | Default | Description |
|----------|:--------:|---------|-------------|
| `NEXI_ALIAS` | Yes | — | Merchant alias (also used as API key) |
| `NEXI_SECRET_KEY` | Yes | — | Secret key for MAC calculation |
| `NEXI_TEST` | No | `false` | Set to `true` to use the test environment |
## Development
```bash
git clone https://github.com/stucchi/nexi-xpay-mcp-server.git
cd nexi-xpay-mcp-server
uv sync
```
Local run:
```bash
NEXI_ALIAS=your_alias NEXI_SECRET_KEY=your_secret uv run nexi-xpay-mcp-server
```
## License
MIT
<!-- mcp-name: io.github.stucchi/nexi-xpay -->
| text/markdown | Ing. Luca Stucchi | null | null | null | null | mcp, nexi, xpay, payments, back-office | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Office/Business :: Financial"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp[cli]",
"httpx"
] | [] | [] | [] | [
"Homepage, https://github.com/stucchi/nexi-xpay-mcp-server",
"Repository, https://github.com/stucchi/nexi-xpay-mcp-server",
"Issues, https://github.com/stucchi/nexi-xpay-mcp-server/issues"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T23:12:03.361576 | nexi_xpay_mcp_server-1.2.0.tar.gz | 9,589 | a5/2e/aa3f167153624a14cc592a1fd98724aba3c5f59be4079a46ee10f6ef8ce3/nexi_xpay_mcp_server-1.2.0.tar.gz | source | sdist | null | false | 93c034097888507660db7c51e0a44d58 | bc248cea8448980ca7ae9ff2541861c770232024cae5e6a6c5ef40520ca6781b | a52eaa3f167153624a14cc592a1fd98724aba3c5f59be4079a46ee10f6ef8ce3 | MIT | [
"LICENSE"
] | 225 |
2.4 | NEMO-CE | 7.3.16 | NEMO Community Edition is a laboratory logistics web application based of NEMO. Use it to schedule reservations, control tool access, track maintenance issues, and more. | [](https://github.com/psf/black)
[](https://www.djlint.com)
[](https://www.python.org/downloads/release/python-3110/)
[](https://pypi.org/project/NEMO-CE/)
See the [wiki](https://gitlab.com/nemo-community/nemo-ce/-/wikis/home) for installation steps and new features
| text/markdown | null | Atlantis Labs LLC <atlantis@atlantislabs.io> | null | null | MIT License
Copyright (c) 2025
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | NEMO, NEMO-CE | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Intended Audience :: Science/Research",
"Intended Audience :: System Administrators",
"License :: Public Domain",
"Natural Language :: English",
"Operating System :: OS Independent",
"Framework :: Django :: 4.2",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"cryptography==46.0.3",
"Django==4.2.28",
"django-auditlog==3.4.1",
"django-filter==25.1",
"django-jsonform==2.23.2",
"django-mptt==0.18.0",
"djangorestframework==3.16.1",
"drf-excel==2.5.3",
"drf-flex-fields==1.0.2",
"fastjsonschema==2.21.2",
"ldap3==2.9.1",
"packaging==26.0",
"Pillow==12.1.0",
"pymodbus==3.11.4",
"python-dateutil==2.9.0",
"requests==2.32.5",
"pre-commit; extra == \"dev-tools\"",
"djlint; extra == \"dev-tools\"",
"black; extra == \"dev-tools\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/nemo-community/nemo-ce",
"Changelog, https://gitlab.com/nemo-community/nemo-ce/-/releases",
"Issues, https://gitlab.com/nemo-community/nemo-ce/-/issues",
"CI, https://gitlab.com/nemo-community/nemo-ce/-/pipelines"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T23:11:05.451198 | nemo_ce-7.3.16.tar.gz | 1,661,901 | ff/e5/92bc6f4dcae21a0c3364febfd0123769b72da1a606dc331fec82442f37a8/nemo_ce-7.3.16.tar.gz | source | sdist | null | false | dce2424b898626c228626b6174e0b6c8 | 08b8643d6b32aba1314c8a4891262a54a413c064d67c91888bcb37e647f98cd8 | ffe592bc6f4dcae21a0c3364febfd0123769b72da1a606dc331fec82442f37a8 | null | [
"LICENSE"
] | 0 |
2.4 | patchpal | 0.15.1 | An agentic coding and automation assistant, supporting both local and cloud LLMs | # PatchPal — An Agentic Coding and Automation Assistant
<!---->
<img src="https://raw.githubusercontent.com/amaiya/patchpal/refs/heads/main/assets/patchpal_screenshot.png" alt="PatchPal Screenshot" width="650"/>
> Supporting both local and cloud LLMs, with autopilot mode and extensible tools.
**PatchPal** is an AI coding agent that helps you build software, debug issues, and automate tasks. It supports agent skills, tool use, and executable Python generation, enabling interactive workflows for tasks such as data analysis, visualization, web scraping, API interactions, and research with synthesized findings.
Human-in-the-loop coding agents (e.g., Claude Code, OpenCode, Aider) are typically mutually exclusive with programmatic agent frameworks (e.g., smolagents, PydanticAI). A key goal of this project is to marry both: use the same agent interactively in your terminal (`patchpal`) or in Python scripts (`agent.run("task")`), plus autopilot mode for autonomous runs.
**Key Features**
- [Terminal Interface](https://amaiya.github.io/patchpal/usage/interactive/) for interactive development
- [Python API](https://amaiya.github.io/patchpal/usage/python-api/) for flexibility and extensibility
- [Built-In](https://amaiya.github.io/patchpal/features/tools/) and [Custom Tools](https://amaiya.github.io/patchpal/features/custom-tools/)
- [Skills System](https://amaiya.github.io/patchpal/features/skills/) and [MCP Integration](https://amaiya.github.io/patchpal/features/mcp/)
- [Autopilot Mode](https://amaiya.github.io/patchpal/usage/autopilot/) using [Ralph Wiggum loops](https://ghuntley.com/ralph/)
- [Project Memory](https://amaiya.github.io/patchpal/features/memory/) automatically loads project context from `~/.patchpal/repos/<repo-name>/MEMORY.md` at startup.
PatchPal prioritizes customizability: custom tools, custom skills, a flexible Python API, and support for any tool-calling LLM.
Full documentation is [here](https://amaiya.github.io/patchpal).
## Quick Start
```bash
$ pip install patchpal # install
$ patchpal # start
```
## Setup
0. **Install**: `pip install patchpal`
1. **Get an API key or a Local LLM Engine**:
- **[Cloud]** For Anthropic models (default): Sign up at https://console.anthropic.com/
- **[Cloud]** For OpenAI models: Get a key from https://platform.openai.com/
- **[Local]** For vLLM: Install from https://docs.vllm.ai/ (free - no API charges) **Recommended for Local Use**
- **[Local]** For Ollama: Install from https://ollama.com/ (⚠️ requires `OLLAMA_CONTEXT_LENGTH=32768` - see Ollama section below)
- For other providers: Check the [LiteLLM documentation](https://docs.litellm.ai/docs/providers)
2. **Set up your API key as environment variable**:
```bash
# For Anthropic (default)
export ANTHROPIC_API_KEY=your_api_key_here
# For OpenAI
export OPENAI_API_KEY=your_api_key_here
# For vLLM - API key required only if configured
export HOSTED_VLLM_API_BASE=http://localhost:8000 # depends on your vLLM setup
export HOSTED_VLLM_API_KEY=token-abc123 # optional depending on your vLLM setup
# For Ollama, no API key required
# For other providers, check LiteLLM docs
```
3. **Run PatchPal**:
```bash
# Use default model (anthropic/claude-sonnet-4-5)
patchpal
# Use a specific model via command-line argument
patchpal --model openai/gpt-5.2-codex # or openai/gpt-5-mini, anthropic/claude-opus-4-5, etc.
# Use vLLM (local)
# Note: vLLM server must be started with --tool-call-parser and --enable-auto-tool-choice
# See "Using Local Models (vLLM & Ollama)" section below for details
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
# Use Ollama (local - requires OLLAMA_CONTEXT_LENGTH=32768)
export OLLAMA_CONTEXT_LENGTH=32768
patchpal --model ollama_chat/gpt-oss:20b
# Or set the model via environment variable
export PATCHPAL_MODEL=openai/gpt-5.2
patchpal
```
**Note:** As of this writing, cloud models are much better suited for agentic workflows than local models.
## Beyond Coding: General Problem-Solving
While originally designed for software development, PatchPal is also a general-purpose assistant. With web search, file operations, shell commands, and custom tools/skills, it can help with research, data analysis, document processing, log file analyses, etc.
<img src="https://raw.githubusercontent.com/amaiya/patchpal/refs/heads/main/assets/patchpal_assistant.png" alt="PatchPal as General Assistant" width="650"/>
## Documentation
Full documentation is [available here](https://amaiya.github.io/patchpal/).
| text/markdown | PatchPal Contributors | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"litellm>=1.0.0",
"requests>=2.31.0",
"beautifulsoup4>=4.12.0",
"ddgs>=1.0.0",
"rich>=13.0.0",
"pyyaml>=6.0.0",
"prompt_toolkit>=3.0.0",
"tiktoken>=0.5.0",
"boto3",
"pymupdf>=1.23.0",
"python-docx>=1.0.0",
"python-pptx>=0.6.0",
"tree-sitter-language-pack>=0.3.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff==0.14.13; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"mcp>=0.9.0; extra == \"mcp\"",
"mkdocs-material>=9.5.0; extra == \"docs\"",
"mkdocs-autorefs>=1.0.0; extra == \"docs\"",
"mkdocstrings>=0.24.0; extra == \"docs\"",
"mkdocstrings-python>=1.7.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/amaiya/patchpal",
"Repository, https://github.com/amaiya/patchpal",
"Issues, https://github.com/amaiya/patchpal/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:10:53.389844 | patchpal-0.15.1.tar.gz | 143,961 | f6/0f/8aec48be1cc89458cdbbbfb75c7a0a1cb2bff8327c8f71f274de030a188c/patchpal-0.15.1.tar.gz | source | sdist | null | false | 0bc59d4670ea0b657affdf5d72265132 | 632adc14eb0675b6c78ac0417c282256703a19e99db44d2b551462bf11baa6fd | f60f8aec48be1cc89458cdbbbfb75c7a0a1cb2bff8327c8f71f274de030a188c | Apache-2.0 | [
"LICENSE"
] | 233 |
2.3 | galileo-core | 3.96.0 | Shared schemas and configuration for Galileo's Python packages. | # galileo-core
Shared schemas and configuration for Galileo's Python packages.
## Running Tests
This project uses [Poetry](https://python-poetry.org/) for dependency management and [pytest](https://pytest.org/) as the test runner.
To install the test dependencies and run the test suite, use:
```bash
poetry install --with test
poetry run pytest
```
Or you could run:
```bash
inv test
```
- The first command installs all dependencies, including those needed for testing.
- The second command runs the entire test suite in parallel (as configured in `pyproject.toml`).
If you are developing locally and using this package as a dependency in other projects (e.g., the Galileo API), make sure to use the local path override in your `pyproject.toml`:
```toml
galileo-core = { path = "../galileo-core", develop = true }
```
| text/markdown | Galileo Technologies Inc. | team@rungalileo.io | null | null | Apache-2.0 | llm, quality, language_models, galileo | [
"Development Status :: 5 - Production/Stable",
"Framework :: Pydantic",
"License :: OSI Approved :: Apache Software License",
"Framework :: IPython",
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: ML",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"httpx<0.29.0,>=0.27.0",
"pydantic<3.0.0,>=2.6.0",
"pydantic-partial<0.11.0,>=0.10.1",
"pydantic-settings<3.0.0,>=2.2.1",
"pyjwt<3.0.0,>=2.8.0",
"pytest<9.0.0,>=8.2.1; extra == \"testing\"",
"respx<0.23.0,>=0.22.0; extra == \"testing\"",
"typing-extensions<5.0.0,>=4.12.2",
"uvloop<0.22.0,>=0.21.0; sys_platform != \"win32\""
] | [] | [] | [] | [
"Homepage, https://www.galileo.ai/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:10:32.759797 | galileo_core-3.96.0.tar.gz | 64,521 | 8a/66/eb870d1f26c58a2a8716b5eba584727d2ba579bd13e0cd32b86f3e321fd5/galileo_core-3.96.0.tar.gz | source | sdist | null | false | 0d1e0df42b29ec92b25fac1ae4b3fbc6 | 92f2450c0248a9da3ea10fe7a1fee4b7a09230aba8cbaf35a159162191c692f5 | 8a66eb870d1f26c58a2a8716b5eba584727d2ba579bd13e0cd32b86f3e321fd5 | null | [] | 498 |
2.4 | qsharp-jupyterlab | 1.25.7.dev0 | Q# extension for JupyterLab | # Q# Language Support for JupyterLab
Q# is an open-source, high-level programming language for developing and running quantum algorithms.
The `qsharp-jupyterlab` extension provides syntax highlighting for Q# documents and Q# notebook
cells in JupyterLab.
## Installation
To install the Q# JupyterLab extension, run:
```bash
pip install qsharp-jupyterlab
```
To run Q# in Jupyter notebooks, remember to also install the `qsharp` package: [https://pypi.org/project/qsharp].
## Support
For more information about the Microsoft Quantum Development Kit, visit [https://aka.ms/qdk](https://aka.ms/qdk).
## Contributing
Q# welcomes your contributions! Visit the Q# GitHub repository at [https://github.com/microsoft/qsharp] to find out more about the project.
| text/markdown | Microsoft | null | null | null | null | jupyter, jupyterlab, jupyterlab-extension | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: 4",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/qsharp",
"Bug Tracker, https://github.com/microsoft/qsharp/issues",
"Repository, https://github.com/microsoft/qsharp.git"
] | RestSharp/106.13.0.0 | 2026-02-20T23:10:16.316601 | qsharp_jupyterlab-1.25.7.dev0-py3-none-any.whl | 11,136 | 7b/c5/f2c8a919508979d2ceb9520efa2bd123b5af46e16ec9a2513d431de6bcc7/qsharp_jupyterlab-1.25.7.dev0-py3-none-any.whl | py3 | bdist_wheel | null | false | bec3a5c119dc234a3d1cd709b13c60fd | abcca8ac36f8316cd863fb6c70b2c81bd404ae1b9ba973ad575b4f65e1827f6e | 7bc5f2c8a919508979d2ceb9520efa2bd123b5af46e16ec9a2513d431de6bcc7 | null | [] | 73 |
2.4 | qdk | 1.25.7.dev0 | Quantum Development Kit Python Package | # qdk
The Quantum Development Kit (QDK) provides a single, cohesive Python entry point for compiling, simulating, and estimating resources for quantum programs (Q# and OpenQASM), with optional extras for visualization, cloud workflows, and interoperability with Qiskit and Cirq.
## Install
To install the core functionality, which include Q\# \& OpenQASM simulation, compilation, and resource estimation support:
```bash
pip install qdk
```
To include the Jupyter extra, which adds visualizations using Jupyter Widgets in the `qdk.widgets` submodule and syntax highlighting for Jupyter notebooks in the browser:
```bash
pip install "qdk[jupyter]"
```
To add the Azure Quantum extra, which includes functionality for working with the Azure Quantum service in the `qdk.azure` submodule:
```bash
pip install "qdk[azure]"
```
For Qiskit integration, which exposes Qiskit interop utilities in the `qdk.qiskit` submodule:
```bash
pip install "qdk[qiskit]"
```
For Cirq integration, which exposes Cirq interop utilities in the `qdk.azure.cirq` submodule:
```bash
pip install "qdk[cirq]"
```
To easily install all the above extras:
```bash
pip install "qdk[all]"
```
## Quick Start
```python
from qdk import qsharp
result = qsharp.run("{ use q = Qubit(); H(q); return MResetZ(q); }", shots=100)
print(result)
```
To use widgets (installed via `qdk[jupyter]` extra):
```python
from qdk.qsharp import eval, run
from qdk.widgets import Histogram
eval("""
operation BellPair() : Result[] {
use qs = Qubit[2];
H(qs[0]);CX(qs[0], qs[1]);
MResetEachZ(qs)
}
""")
results = run("BellPair()", shots=1000, noise=(0.005, 0.0, 0.0))
Histogram(results)
```
## Public API Surface
Submodules:
- `qdk.qsharp` – exports the same APIs as the `qsharp` Python package
- `qdk.openqasm` – exports the same APIs as the `openqasm` submodule of the `qsharp` Python package.
- `qdk.estimator` – exports the same APIs as the `estimator` submodule of the `qsharp` Python package.
- `qdk.widgets` – exports the Jupyter widgets available from the `qsharp-widgets` Python package (requires the `qdk[jupyter]` extra to be installed).
- `qdk.azure` – exports the Python APIs available from the `azure-quantum` Python package (requires the `qdk[azure]` extra to be installed).
- `qdk.qiskit` – exports the same APIs as the `interop.qiskit` submodule of the `qsharp` Python package (requires the `qdk[qiskit]` extra to be installed).
### Top level exports
For convenience, the following helpers and types are also importable directly from the `qdk` root (e.g. `from qdk import code, Result`). Algorithm execution APIs (like `run` / `estimate`) remain under `qdk.qsharp` or `qdk.openqasm`.
| Symbol | Type | Origin | Description |
| -------------------- | -------- | --------------------------- | ------------------------------------------------------------------- |
| `code` | module | `qsharp.code` | Exposes operations defined in Q\# or OpenQASM |
| `init` | function | `qsharp.init` | Initialize/configure the QDK interpreter (target profile, options). |
| `set_quantum_seed` | function | `qsharp.set_quantum_seed` | Deterministic seed for quantum randomness (simulators). |
| `set_classical_seed` | function | `qsharp.set_classical_seed` | Deterministic seed for classical host RNG. |
| `dump_machine` | function | `qsharp.dump_machine` | Emit a structured dump of full quantum state (simulator dependent). |
| `Result` | class | `qsharp.Result` | Measurement result token. |
| `TargetProfile` | class | `qsharp.TargetProfile` | Target capability / profile descriptor. |
| `StateDump` | class | `qsharp.StateDump` | Structured state dump object. |
| `ShotResult` | class | `qsharp.ShotResult` | Multi-shot execution results container. |
| `PauliNoise` | class | `qsharp.PauliNoise` | Pauli channel noise model spec. |
| `DepolarizingNoise` | class | `qsharp.DepolarizingNoise` | Depolarizing noise model spec. |
| `BitFlipNoise` | class | `qsharp.BitFlipNoise` | Bit-flip noise model spec. |
| `PhaseFlipNoise` | class | `qsharp.PhaseFlipNoise` | Phase-flip noise model spec. |
## Telemetry
This library sends telemetry. Minimal anonymous data is collected to help measure feature usage and performance.
All telemetry events can be seen in the source file [telemetry_events.py](https://github.com/microsoft/qdk/tree/main/source/pip/qsharp/telemetry_events.py).
| text/markdown | Microsoft | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"qsharp==1.25.7.dev0",
"pyqir<0.12",
"qsharp-widgets==1.25.7.dev0; extra == \"jupyter\"",
"qsharp-jupyterlab==1.25.7.dev0; extra == \"jupyter\"",
"azure-quantum>=3.5.0; extra == \"azure\"",
"qiskit<3.0.0,>=1.2.2; extra == \"qiskit\"",
"cirq-core<=1.4.1,>=1.3.0; extra == \"cirq\"",
"cirq-ionq<=1.4.1,>=1.3.0; extra == \"cirq\"",
"qsharp-widgets==1.25.7.dev0; extra == \"all\"",
"azure-quantum>=3.5.0; extra == \"all\"",
"qiskit<3.0.0,>=1.2.2; extra == \"all\"",
"cirq-core<=1.4.1,>=1.3.0; extra == \"all\"",
"cirq-ionq<=1.4.1,>=1.3.0; extra == \"all\"",
"qsharp-jupyterlab==1.25.7.dev0; extra == \"all\""
] | [] | [] | [] | [] | RestSharp/106.13.0.0 | 2026-02-20T23:10:05.604609 | qdk-1.25.7.dev0-py3-none-any.whl | 8,789 | 43/38/e65b507ed241b21be31c39f0a52c6990cb7b928ecfd58529bde87d0efa51/qdk-1.25.7.dev0-py3-none-any.whl | py3 | bdist_wheel | null | false | efd47bd3db50e8a98ff6414473890d56 | 3ec3136c42aeebcf20c2e6e7105c5b92d0f822fe2770d249782296e1655b9b4d | 4338e65b507ed241b21be31c39f0a52c6990cb7b928ecfd58529bde87d0efa51 | null | [] | 73 |
2.4 | ilum | 0.0.2 | CLI installer and management tool for the Ilum Data Lakehouse platform | # ilum
CLI tool for deploying and managing the [Ilum Data Lakehouse](https://ilum.cloud) platform on Kubernetes.
Manage Helm-based Ilum deployments with module dependency resolution, values safety, interactive wizards, health checks, and structured output — all from a single command.
## Install
```bash
pip install ilum
```
Requires Python 3.12+.
## Quick Start
```bash
# Interactive setup — walks you through cluster selection and module configuration
ilum init
ilum install
# Or one command — detects/creates a cluster and installs with defaults
ilum quickstart
# Check release status
ilum status
# Enable an optional module (resolves dependencies automatically)
ilum module enable sql
```
## Key Features
- **Install and upgrade** — `ilum install` / `ilum upgrade` with module resolution, stuck-release recovery, and breaking-change warnings
- **32 optional modules** — enable/disable with automatic dependency resolution (`ilum module enable langfuse` pulls in postgresql + clickhouse)
- **Values safety** — every upgrade detects external drift, computes a diff, and shows it for confirmation before applying
- **Health checks** — `ilum doctor` runs 13 checks covering binaries, cluster connectivity, pod health, RBAC, PVCs, and endpoints
- **Deployment presets** — `--preset production`, `--preset data-engineering`, `--preset air-gapped`
- **Local clusters** — `ilum cluster create` spins up k3d/minikube/kind with preset resource profiles
- **Log streaming** — `ilum logs core --follow` tails pod logs by module name
- **Resource monitoring** — `ilum top` shows per-module CPU/memory usage
- **Shell access** — `ilum exec core` opens an interactive shell in any module pod
- **Configuration profiles** — named profiles with cross-platform config (Linux, macOS, Windows)
- **Machine-readable output** — `--output json|yaml|csv` on all query commands for CI/CD pipelines
- **Shell completion** — `ilum --install-completion bash/zsh/fish`
## Commands
| Command | Description |
|---------|-------------|
| `ilum init` | Interactive setup wizard |
| `ilum quickstart` | One-command install with defaults |
| `ilum install` | Install the platform |
| `ilum upgrade` | Upgrade an existing installation |
| `ilum status` | Release info, pod readiness, modules |
| `ilum module enable/disable` | Manage modules with dependency resolution |
| `ilum module list` | List all 32 available modules |
| `ilum doctor` | Run health checks |
| `ilum logs <module>` | Stream pod logs |
| `ilum exec <module>` | Shell into a pod |
| `ilum top` | Resource usage per module |
| `ilum values` | View live Helm values |
| `ilum diff` | Compare values across sources |
| `ilum rollback` | Roll back to a previous revision |
| `ilum config` | Manage CLI configuration and profiles |
| `ilum connect` | Attach to an existing Ilum release |
| `ilum cleanup` | Tiered full-environment teardown |
## Prerequisites
`ilum` wraps Helm and kubectl — these must be available on your machine:
| Tool | Minimum Version |
|------|-----------------|
| Helm | 3.12+ |
| kubectl | 1.28+ |
| Docker | 24.0+ (for local clusters) |
Missing tools? `ilum deps install` will install them for you.
## Links
- [Documentation](https://github.com/ilum-cloud/ilum)
- [Ilum Platform](https://ilum.cloud)
- [Changelog](https://github.com/ilum-cloud/ilum/blob/master/CHANGELOG.md)
| text/markdown | null | Ilum <info@ilum.cloud> | null | null | null | helm, ilum, kubernetes, lakehouse, spark | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Installation/Setup",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"kubernetes>=28",
"pydantic>=2.5",
"questionary>=2",
"rich>=13",
"ruamel-yaml>=0.18",
"typer>=0.12",
"fastapi>=0.115; extra == \"api\"",
"httpx>=0.27; extra == \"api\"",
"pyjwt[crypto]>=2.8; extra == \"api\"",
"uvicorn[standard]>=0.30; extra == \"api\"",
"pyinstaller>=6; extra == \"build\"",
"httpx>=0.27; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"pytest-cov>=5; extra == \"dev\"",
"pytest-mock>=3.12; extra == \"dev\"",
"pytest>=8; extra == \"dev\"",
"ruff>=0.5; extra == \"dev\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T23:09:10.952277 | ilum-0.0.2.tar.gz | 349,470 | 05/65/58fc8469cc441d85661b3be137b668ac4437192b18d9fb9876fc3f018b46/ilum-0.0.2.tar.gz | source | sdist | null | false | add1ee2700156143393fcb4f4b7289fe | d54cbade9a7c7e619af6608715f0f9297cda40237a9bfcfaa09c772d3d9c881a | 056558fc8469cc441d85661b3be137b668ac4437192b18d9fb9876fc3f018b46 | Apache-2.0 | [] | 92 |
2.4 | hiddifypanel | 12.0.0b6 | hiddifypanel multi proxy panel | # hiddifypanel
hiddifypanel to create multi proxy using xray mtproxy and others
## 🌎 Translations
<div align=center>
[Translation](https://fink.inlang.com/github.com/hiddify/hiddifypanel)
</div>
Improve existing languages or add new ones by manually editing the JSON files located in `/assets/translations` or by using the [Inlang online editor](https://inlang.com/editor/github.com/hiddify/hiddifypanel).
# How to use it
Please visit https://github.com/hiddify/hiddify-manager/wiki for the installation.
# Source Code: https://github.com/hiddify/HiddifyPanel/
<!--
## Installation
From source:
```bash
git clone https://github.com/hiddify/HiddifyPanel hiddifypanel
cd hiddifypanel
make install
```
From pypi:
```bash
pip install hiddifypanel
```
## Executing
This application has a CLI interface that extends the Flask CLI.
Just run:
```bash
$ hiddifypanel
```
or
```bash
$ python -m hiddifypanel
```
To see the help message and usage instructions.
## First run
```bash
hiddifypanel init-db # run once
echo localhost:9000/$(hiddifypanel admin-path)
hiddifypanel run
```
> **Note**: You can also use `flask run` to run the application.
-->
| text/markdown | hiddify | null | null | null | # Attribution-NonCommercial-ShareAlike 4.0 International
## Summary:
- You are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material
- Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- NonCommercial — You may not use the material for commercial purposes.
- ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
### Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
* __Considerations for licensors:__ Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. [More considerations for licensors](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensors).
* __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor’s permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees).
## Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
### Section 1 – Definitions.
a. __Adapted Material__ means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
b. __Adapter's License__ means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
c. __BY-NC-SA Compatible License__ means a license listed at [creativecommons.org/compatiblelicenses](http://creativecommons.org/compatiblelicenses), approved by Creative Commons as essentially the equivalent of this Public License.
d. __Copyright and Similar Rights__ means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
e. __Effective Technological Measures__ means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
f. __Exceptions and Limitations__ means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
g. __License Elements__ means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution, NonCommercial, and ShareAlike.
h. __Licensed Material__ means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
i. __Licensed Rights__ means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
j. __Licensor__ means the individual(s) or entity(ies) granting rights under this Public License.
k. __NonCommercial__ means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.
l. __Share__ means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
m. __Sui Generis Database Rights__ means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
n. __You__ means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
### Section 2 – Scope.
a. ___License grant.___
1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and
B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.
2. __Exceptions and Limitations.__ For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
3. __Term.__ The term of this Public License is specified in Section 6(a).
4. __Media and formats; technical modifications allowed.__ The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
5. __Downstream recipients.__
A. __Offer from the Licensor – Licensed Material.__ Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
B. __Additional offer from the Licensor – Adapted Material.__ Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply.
C. __No downstream restrictions.__ You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
6. __No endorsement.__ Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
b. ___Other rights.___
1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this Public License.
3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.
### Section 3 – License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the following conditions.
a. ___Attribution.___
1. If You Share the Licensed Material (including in modified form), You must:
A. retain the following if it is supplied by the Licensor with the Licensed Material:
i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of warranties;
v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
b. ___ShareAlike.___
In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License.
2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.
### Section 4 – Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only;
b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and
c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
### Section 5 – Disclaimer of Warranties and Limitation of Liability.
a. __Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.__
b. __To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.__
c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
### Section 6 – Term and Termination.
a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
### Section 7 – Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
### Section 8 – Interpretation.
a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
> Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
>
> Creative Commons may be contacted at creativecommons.org | proxy, panel, multi | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"apiflask[async]==3.0.2",
"babel==2.18.0",
"bleach==6.3.0",
"bootstrap-flask==2.5.0",
"celery<6.0.0,>=5.6.2",
"flask-admin==1.6.1",
"flask-adminlte3==1.0.9",
"flask-classful==0.16.0",
"flask-login==0.6.3",
"flask-restful==0.3.10",
"flask-sqlalchemy==3.1.1",
"flask-session==0.8.0",
"flask-wtf==1.2.2",
"flask==3.1.2",
"markupsafe==3.0.3",
"pymysql==1.1.2",
"pyyaml==6.0.3",
"sqlalchemy-utils==0.42.1",
"sqlalchemy[asyncio]==2.0.46",
"strenum==0.4.15",
"wtforms==3.1.2",
"werkzeug==3.1.5",
"ansi2html==1.9.2",
"bjoern==3.2.2",
"click==8.3.1",
"cloudflare==4.3.1",
"pydantic==2.12.5",
"cryptography<42",
"fastenumplus==1.4.0",
"flask-babel==4.0.0",
"loguru==0.7.3",
"maxminddb==3.0.0",
"ping3==5.1.5",
"psutil==7.2.0",
"python-dateutil==2.9.0.post0",
"python-dotenv==1.2.1",
"python-redis-cache==4.0.2",
"python-slugify==8.0.4",
"redis==7.2.0",
"requests==2.32.5",
"pytelegrambotapi==4.31.0",
"user-agents==2.2.0",
"xtlsapi==3.3.0",
"mysqlclient==2.2.8",
"sonora<1.0.0,>=0.2.3",
"protobuf<6.0.0,>=5.26.1",
"asgiref<4.0.0,>=3.8.1",
"dynaconf<4.0.0,>=3.2.6",
"json5<1.0.0,>=0.9.28",
"jinja2>=3.1.6",
"marshmallow>=4.2.2",
"setuptools==82.0.0",
"dnspython>=2.8.0",
"pytest==8.3.3; extra == \"dev\"",
"coverage==7.6.4; extra == \"dev\"",
"flake8==7.3.0; extra == \"dev\"",
"black==26.1.0; extra == \"dev\"",
"isort==7.0.0; extra == \"dev\"",
"pytest-cov==6.0.0; extra == \"dev\"",
"codecov==2.1.13; extra == \"dev\"",
"mypy==1.19.1; extra == \"dev\"",
"gitchangelog==3.0.4; extra == \"dev\"",
"mkdocs==1.6.1; extra == \"dev\"",
"flask-debugtoolbar==0.16.0; extra == \"dev\"",
"flask-shell-ipython==0.5.3; extra == \"dev\"",
"pytest-flask==1.3.0; extra == \"dev\"",
"grpcio-tools==1.67.1; extra == \"dev\""
] | [] | [] | [] | [
"homepage, https://hiddify.com",
"repository, https://github.com/hiddify/hiddify-manager/",
"documentation, https://hiddify.com/manager"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T23:08:51.365649 | hiddifypanel-12.0.0b6.tar.gz | 9,707,106 | 81/0c/8e0482603e12df47998c68da812b49488685ded37a3100461fe95d93ba10/hiddifypanel-12.0.0b6.tar.gz | source | sdist | null | false | 949f535a0d4d375c02b42eddbb694a55 | decc2b2c3b27cdaee0609280310a4c9c876e025cdb026891b5fbf2cbde2021c0 | 810c8e0482603e12df47998c68da812b49488685ded37a3100461fe95d93ba10 | null | [
"LICENSE.md"
] | 657 |
2.1 | faster-qwen3-tts | 0.1.0 | Real-time Qwen3-TTS inference using manual CUDA graph capture | # Faster Qwen3-TTS
Real-time Qwen3-TTS inference using CUDA graph capture. No Flash Attention, no vLLM, no Triton. Just `torch.cuda.CUDAGraph`. Supports both streaming and non-streaming generation.
## Results
Benchmarks include tokenization + inference (apples-to-apples with baseline). RTF > 1.0 = faster than real-time. TTFA measured as time to first playable audio chunk using streaming (chunk_size=8).
### CustomVoice Models
CustomVoice uses predefined speaker IDs (no reference audio). These benchmarks use the first available speaker ID from the model.
| Model | CUDA Graphs RTF | CUDA Graphs TTFA |
|---|---|---|
| 0.6B CustomVoice | **5.53** | **154ms** |
| 1.7B CustomVoice | **4.78** | **171ms** |
### 0.6B Model
| GPU | Baseline RTF | Baseline TTFA | CUDA Graphs RTF | CUDA Graphs TTFA | Speedup |
|---|---|---|---|---|---|
| Jetson AGX Orin 64GB | 0.175 | 2,572ms | 1.57 | 556ms | 9.0x / 4.6x |
| Jetson Thor | 0.803 | 862ms | 1.50 | 505ms | 1.9x / 1.7x |
| DGX Spark (GB10) | 1.19 | 631ms | 2.26 | 364ms | 1.9x / 1.7x |
| RTX 4090 | 1.34 | 462ms | **5.56** | **152ms** | 4.1x / 3.0x |
| H100 80GB HBM3 | 0.59 | 1,049ms | **4.19** | **224ms** | 7.1x / 4.7x |
### 1.7B Model
| GPU | Baseline RTF | Baseline TTFA | CUDA Graphs RTF | CUDA Graphs TTFA | Speedup |
|---|---|---|---|---|---|
| Jetson AGX Orin 64GB | 0.130 | 2,594ms | 1.27 | 650ms | 9.8x / 4.0x |
| Jetson Thor | 0.772 | 912ms | 1.26 | 595ms | 1.6x / 1.5x |
| DGX Spark (GB10) | 0.975 | 749ms | 1.66 | 464ms | 1.7x / 1.6x |
| RTX 4090 | 1.32 | 468ms | **4.85** | **170ms** | 3.7x / 2.8x |
| H100 80GB HBM3 | 0.59 | 1,045ms | **3.98** | **236ms** | 6.7x / 4.4x |
**Note:** Baseline TTFA values are **streaming TTFA** from the community `Qwen3-TTS-streaming` fork (which adds streaming). The official `Qwen3-TTS` repo does **not** currently support streaming, so its “TTFA” is effectively **time-to-full-audio**. With RTF near 1.0, that means waiting for the entire sentence/paragraph to finish speaking before you hear anything. CUDA graphs uses `generate_voice_clone_streaming(chunk_size=8)` for TTFA. Both include text tokenization for fair comparison. Speedup shows throughput / TTFA improvement. The streaming fork reports additional speedups that appear tied to `torch.compile`; we couldn’t reproduce those on Jetson-class devices where `torch.compile` isn’t available.
**GPU architecture notes:** RTX 4090 (2.5 GHz clocks) outperforms H100 (1.8 GHz) for single-stream workloads. H100's lower baseline (RTF 0.59 vs 4090's 1.34) reflects design optimization for batch processing rather than single-stream inference.
## Quick Start
```bash
git clone https://github.com/andimarafioti/faster-qwen3-tts
cd faster-qwen3-tts
./setup.sh # creates venv with uv, installs deps, downloads models
./benchmark.sh # runs full benchmark, saves JSON + audio samples
```
Requires: Python 3.10+, NVIDIA GPU with CUDA, [uv](https://docs.astral.sh/uv/).
### Install (PyPI)
```bash
pip install faster-qwen3-tts
```
Note: This pulls `qwen-tts` from the official GitHub repo because it is not published on PyPI yet.
Install from source:
```bash
pip install -e .
```
### Benchmark a specific model
```bash
./benchmark.sh 0.6B
./benchmark.sh 1.7B
./benchmark.sh both # default
```
Results are saved as `bench_results_<GPU_NAME>.json` and audio samples as `sample_0.6B.wav` / `sample_1.7B.wav`.
## How It Works
Qwen3-TTS runs two autoregressive transformers per decode step:
1. **Talker** (28 layers): generates the first codebook token from text
2. **Code Predictor** (5 layers): generates 15 additional codebook tokens
A single step involves ~500 small CUDA kernel launches with Python overhead between them. The GPU spends more time waiting for the next kernel than computing.
CUDA graphs capture the entire decode step and replay it as a single GPU operation:
1. **Static KV cache**: pre-allocated fixed-size tensors (no dynamic allocation)
2. **Model's own forward**: SDPA + RoPE via the model's native attention layers
3. **Graph capture**: `torch.cuda.CUDAGraph` for both predictor and talker
4. **Padded attention**: attention mask handles variable-length KV within fixed buffers
### Per-component breakdown (Jetson AGX Orin, 0.6B)
| Component | Before | After |
|---|---|---|
| Talker (28 layers) | 75ms | 12ms |
| Predictor (15 steps) | 190ms | 26ms |
| Overhead | 65ms | 16ms |
| **Total per step** | **330ms** | **54ms** |
## Streaming
CUDA graphs support streaming output — audio chunks are yielded during generation with the same per-step performance as non-streaming mode.
### Chunk size vs performance (Jetson AGX Orin, 0.6B)
| chunk_size | TTFA | RTF | Audio per chunk |
|---|---|---|---|
| 1 | 240ms | 0.750 | 83ms |
| 2 | 266ms | 1.042 | 167ms |
| 4 | 362ms | 1.251 | 333ms |
| 8 | 556ms | 1.384 | 667ms |
| 12 | 753ms | 1.449 | 1000ms |
| Non-streaming | — | 1.36 | all at once |
Smaller chunks = lower latency but more decode overhead. `chunk_size=2` is the smallest that stays real-time on Jetson.
### Usage
```python
from faster_qwen3_tts import FasterQwen3TTS
model = FasterQwen3TTS.from_pretrained("Qwen/Qwen3-TTS-12Hz-0.6B-Base")
# Streaming — yields audio chunks during generation
for audio_chunk, sr, timing in model.generate_voice_clone_streaming(
text="Hello world!", language="English",
ref_audio="ref.wav", ref_text="...",
chunk_size=8, # 8 steps ≈ 667ms of audio per chunk
):
play(audio_chunk, sr) # process/send each chunk immediately
# Non-streaming — returns all audio at once (unchanged API)
audio_list, sr = model.generate_voice_clone(
text="Hello world!", language="English",
ref_audio="ref.wav", ref_text="...",
)
```
### CLI
Voice cloning (reference audio):
```bash
faster-qwen3-tts clone \
--model Qwen/Qwen3-TTS-12Hz-1.7B-Base \
--text "Hello world!" \
--language English \
--ref-audio ref.wav \
--ref-text "Reference transcript" \
--output out.wav
```
CustomVoice (predefined speaker IDs):
```bash
faster-qwen3-tts custom --model Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice --list-speakers
faster-qwen3-tts custom \
--model Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice \
--speaker aiden \
--text "Hello!" \
--language English \
--output out.wav
```
VoiceDesign (instruction-based):
```bash
faster-qwen3-tts design \
--model Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign \
--instruct "Warm, confident narrator with slight British accent" \
--text "Welcome to the show." \
--language English \
--output out.wav
```
Streaming (prints RTF after write):
```bash
faster-qwen3-tts custom \
--model Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice \
--speaker aiden \
--text "Hello!" \
--language English \
--output out.wav \
--streaming
```
Server mode (keep model hot, stop with `exit`):
```bash
faster-qwen3-tts serve \
--mode custom \
--model Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice \
--speaker aiden \
--language English \
--streaming
```
### How it works
The CUDA graphs are unchanged — both predictor and talker graphs are replayed per step. The streaming generator yields codec ID chunks every `chunk_size` steps, and the model wrapper decodes each chunk to audio using a sliding window with 25-frame left context (matching the upstream codec's `chunked_decode` pattern) to avoid boundary artifacts.
## Voice Cloning: ICL Phoneme Artifact
In ICL (In-Context Learning) mode — the default voice cloning path — the model's prefill sequence ends with the last codec token of the reference audio. The model conditions its **first generated token** on whatever phoneme the reference audio happens to end on. If the reference ends mid-word or on a consonant cluster, that phoneme bleeds into the very start of the generated speech.
**The fix is applied automatically.** The wrapper appends 0.5 seconds of silence to the reference audio before encoding it. This ensures the last codec tokens in the prefill represent silence, giving the model a clean starting point regardless of how the reference recording ends — no changes to your calling code required.
## Voice Cloning with Precomputed Speaker Embeddings
For production use, extract the speaker embedding once and reuse it:
```bash
# 1. Extract speaker embedding from reference audio (one-time, ~10s)
python examples/extract_speaker.py --ref_audio voice.wav --output speaker.pt
# 2. Generate speech with CUDA graphs (real-time)
python examples/generate_with_embedding.py --speaker speaker.pt --text "Hello!" --language English --output en.wav
python examples/generate_with_embedding.py --speaker speaker.pt --text "Bonjour!" --language French --output fr.wav
python examples/generate_with_embedding.py --speaker speaker.pt --text "Hallo!" --language German --output de.wav
```
The speaker embedding is a 4KB file (2048-dim bf16 vector). In `x_vector_only` mode:
- **No accent bleed**: native pronunciation per language
- **Shorter prefill**: 10 tokens vs ~80+ in full ICL clone mode
- **No ref audio at runtime**: just the 4KB embedding file
## Files
```
faster_qwen3_tts/
model.py # Wrapper API
generate.py # Non-streaming generation loop
streaming.py # Streaming generation loop
predictor_graph.py # Predictor CUDA graph with StaticCache
talker_graph.py # Talker CUDA graph with StaticCache
examples/
extract_speaker.py # Extract speaker embedding from ref audio
generate_with_embedding.py # Generate with precomputed speaker embedding
benchmarks/
throughput.py # Throughput benchmark (RTF + audio samples)
streaming.py # Streaming benchmark (TTFA + chunk timing)
chunk_sweep.py # Chunk size sweep (RTF vs latency tradeoff)
baseline.py # Baseline qwen-tts benchmark
custom_voice.py # CustomVoice benchmark
benchmark.sh # Run benchmarks
setup.sh # Setup venv + download models
```
## License
MIT
## Acknowledgments
- [Qwen3-TTS](https://github.com/QwenLM/Qwen3-TTS) by the Qwen team
- [Qwen3-TTS-streaming](https://github.com/dffdeeq/Qwen3-TTS-streaming) for ideas and code we adapted for streaming
- [nano-qwen3tts-vllm](https://github.com/tsdocode/nano-qwen3tts-vllm) for inspiration on CUDA graph usage
- NVIDIA for providing the Jetson AGX Orin board
| text/markdown | Andres Marafioti | null | null | null | MIT | tts, qwen, cuda, cudagraph, streaming | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Sound/Audio :: Speech",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"qwen-tts>=0.1.1",
"transformers>=4.57",
"torch>=2.1",
"numpy",
"soundfile",
"huggingface-hub"
] | [] | [] | [] | [
"Homepage, https://github.com/andimarafioti/faster-qwen3-tts",
"Repository, https://github.com/andimarafioti/faster-qwen3-tts"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-20T23:08:13.887821 | faster_qwen3_tts-0.1.0.tar.gz | 38,400 | 35/4a/2359135897b6011363632cb4bbc132201d16a4ec74e770592a00504de71e/faster_qwen3_tts-0.1.0.tar.gz | source | sdist | null | false | 89ab00d42d4c77434e9a2a8817b711a6 | 06c5ce2d1332ee090656e21370dab438556583a36eb9c68af0425eb0c02af995 | 354a2359135897b6011363632cb4bbc132201d16a4ec74e770592a00504de71e | null | [] | 239 |
2.4 | italy-opendata-mcp | 1.0.1 | MCP server for Italian open data (municipalities, provinces, regions, postal codes, coordinates, geographic data) | # italy-opendata-mcp
MCP server exposing Italian open data (municipalities, provinces, regions, postal codes, coordinates, geographic data) through simple, developer-friendly tools.
## Features
- **7 MCP tools** to navigate the Italian administrative hierarchy
- **Official sources**: ISTAT and ANPR where available
- **Lazy download**: data is fetched on first use and cached locally (~1.8 MB SQLite)
- **Offline after first use**: all queries are local
- **No Docker**: installable via `uvx` or `pip`, starts and stops with Claude
## Data sources
| Data | Source | Type |
|------|--------|------|
| Municipalities, provinces, regions, ISTAT codes | [ISTAT](https://www.istat.it/classificazione/codici-dei-comuni-delle-province-e-delle-regioni/) | Official |
| Resident population | [ANPR](https://github.com/italia/anpr-opendata) | Official (daily updates) |
| Surface area, altitude, altimetric zone | [ISTAT](https://www.istat.it/classificazione/principali-statistiche-geografiche-sui-comuni/) | Official |
| Postal codes (CAP) | [comuni-json](https://github.com/matteocontrini/comuni-json) | Community (no official source available) |
| Centroid coordinates | [opendatasicilia](https://github.com/opendatasicilia/comuni-italiani) | Community (no official source available) |
## Installation
```bash
uvx italy-opendata-mcp
```
## Usage in .mcp.json
```json
{
"mcpServers": {
"italy-opendata": {
"command": "uvx",
"args": ["italy-opendata-mcp"]
}
}
}
```
### From source
```bash
git clone https://github.com/stucchi/italy-opendata-mcp.git
cd italy-opendata-mcp
uv venv && uv pip install -e .
```
## Tools
### Hierarchical navigation
```
list_regioni() → list_province(regione="Lombardia") → list_comuni(provincia="MI")
```
| Tool | Parameters | Description |
|------|------------|-------------|
| `list_regioni` | — | All 20 regions with municipality count and population |
| `list_province` | `regione?` | Provinces with optional region filter |
| `list_comuni` | `regione?`, `provincia?`, `limit?` | Municipalities with optional filters (default 400 results) |
### Search
| Tool | Parameters | Description |
|------|------------|-------------|
| `get_comune` | `nome_o_codice` | Full details of a municipality by name or ISTAT code |
| `get_by_cap` | `cap` | Find municipalities associated with a postal code |
### Data management
| Tool | Parameters | Description |
|------|------------|-------------|
| `refresh_dataset` | `force?` | Re-download data from sources |
| `datasets_status` | — | Local cache status |
## Available fields per municipality
Each municipality includes:
- **Registry**: name, ISTAT code, cadastral code, province abbreviation, province, region
- **Demographics**: population (ANPR, daily updates)
- **Geography**: latitude, longitude, surface area (km²), altitude (m), altimetric zone
- **Classification**: coastal, island, urbanization degree
- **Postal**: list of associated CAP codes
## Example output
```
> get_comune("Roma")
{
"codice_istat": "058091",
"nome": "Roma",
"codice_catastale": "H501",
"popolazione": 2802399,
"superficie_kmq": 1288.19,
"altitudine": 20,
"zona_altimetrica": "Pianura",
"litoraneo": 1,
"latitudine": 41.89332,
"longitudine": 12.482932,
"sigla_provincia": "RM",
"provincia": "Roma",
"regione": "Lazio",
"cap": ["00118", "00119", "00120", ...]
}
```
## Cache
Data is saved locally on first use:
| OS | Path |
|----|------|
| macOS / Linux | `~/.cache/italy-opendata-mcp/italia.db` |
| Windows | `%LOCALAPPDATA%\italy-opendata-mcp\italia.db` |
To refresh data, use `refresh_dataset(force=True)`.
## Data coverage
| | Count |
|---|---|
| Regions | 20 |
| Provinces | 107 |
| Municipalities | 7,896 |
| With population | 7,896 |
| With coordinates | 7,889 |
| With geographic data | 7,519 |
| With postal codes | 7,887 |
## License
MIT
<!-- mcp-name: io.github.stucchi/italy-opendata -->
| text/markdown | null | "Ing. Luca Stucchi" <luca.stucchi@gmail.com> | null | null | null | mcp, italy, opendata, istat, municipalities | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp[cli]",
"httpx",
"openpyxl"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T23:07:46.015677 | italy_opendata_mcp-1.0.1.tar.gz | 11,735 | 49/f5/97f2015e93d7ac1529f799c7b425742124e3ce40a2ad632cf233d0530f48/italy_opendata_mcp-1.0.1.tar.gz | source | sdist | null | false | f61bbd3e1f1844af02bb1c9a2d9e1f20 | a85b08ce4a7473ee9003ecd0bd9159a076b5c83fb1b56479f15ddc9d812b8266 | 49f597f2015e93d7ac1529f799c7b425742124e3ce40a2ad632cf233d0530f48 | MIT | [] | 223 |
2.4 | sentimentscopeai | 1.4.2 | Transformer-based review sentiment analysis and actionable insight generation. | 
# SentimentScopeAI
## Fine-Grained Review Sentiment Analysis & Insight Generation
SentimentScopeAI is a Python-based NLP system that leverages PyTorch and HuggingFace Transformers (pre-trained models) to move beyond binary sentiment classification and instead analyze, interpret, and reason over collections of user reviews to help companies improve their products/services
Rather than treating sentiment analysis as a black-box prediction task, this project focuses on semantic interpretation, explainability, and the generation of aggregated insights, simulating how a human analyst would read and summarize large volumes of feedback.
## Project Motivation
SentimentScopeAI is designed to answer this one main question:
* What actionable advice can be derived from collective sentiment?
## Features
1.) Pre-Trained Sentiment Modeling (PyTorch + HuggingFace)
* Uses pre-trained transformer models from HuggingFace
* Integrated via PyTorch for inference and extensibility
* Enables robust sentiment understanding without training from scratch
* Designed so downstream logic operates on model outputs, not raw text
2.) Rating Meaning Inference
* Implemented the infer_rating_meaning() function
* Converts numerical ratings (1–5) into semantic interpretations
* Uses sentiment signals, linguistic tone, and contextual cues
* Handles:
* Mixed sentiment
* Neutral or ambiguous phrasing
* Disagreement between rating score and review text
Example:
```
Rating: 3
→ "Mixed experience with noticeable positives and recurring issues."
```
3.) Explainable, Deterministic Pipeline
* Downstream reasoning is transparent and testable
* No opaque end-to-end predictions
* Model outputs are interpreted rather than blindly trusted
* Designed for debugging, auditing, and future research extension
4.) Summary Generation
* Read all reviews for a given product or service
* Aggregate sentiment signals across users
* Detect recurring strengths and weaknesses
* Generate a summary of all reviews to help stakeholders
These steps transition the system from analysis → reasoning → recommendation generation.
Example:
```
For <Company Name>'s <Service Name>: overall sentiment is mixed reflecting a balance
of positive and negative feedback
The following specific issues were extracted from negative reviews:
1) missed a few appointments
2) not signed into the right account
3) interface is horrible
4) find the interface confusing
5) invitations and acceptances are terrible
```
## System Architecture Overview
```
Reviews
↓
Pre-trained Transformer (HuggingFace + PyTorch)
↓
Sentiment Signals
↓
Rating Meaning Inference
↓
Summary Generation
```
## Tech-Stack
* **Language**: Python
* **Deep Learning**: PyTorch
* **NLP Models**: HuggingFace Transformers (pre-trained), Flan-T5
* **Web Scraping**: Playwright, Playwright-Stealth, SeleniumBase
* **Aggregated Reasoning**: Multi-model ensemble approach
* **Data Handling**: JSON, Python data structures
## Why SentimentScopeAI?
Every organization collects feedback - but reading hundreds or thousands of reviews is time-consuming, inconsistent, and difficult to scale. Important insights are often buried in repetitive comments, while actionable criticism gets overlooked.
SentimentScopeAI is designed to do the heavy lifting:
* Reads and analyzes large volumes of reviews automatically
* Identifies recurring pain points across users
* Pick the one main piece of negative from each review
* Helps teams focus on what to improve rather than sorting through raw text
## Installation & Usage
SentimentScopeAI is distributed as a Python package and can be installed via pip:
```
pip install sentimentscopeai
```
Requirements:
* Python 3.9 or newer (Python 3.10 or above is recommended for best performance and compatibility)
* PyTorch
* HuggingFace Transformers
* Internet connection
All required dependencies are automatically installed with the package.
## Basic Usage:
```python
from sentimentscopeai import SentimentScopeAI
# MAKE SURE TO PASS IN: current_folder/json_file_name, not just json_file_name if the following doesn't work
review_bot = SentimentScopeAI("json_file_name", "company_name", "service_name")
print(review_bot.generate_summary())
```
What Happens Internally
* Reviews are parsed from a structured JSON file
* Sentiment is inferred using pre-trained transformer models (PyTorch + HuggingFace)
* Rating meanings are semantically interpreted
* Flan-T5 finds the negatives from each review and summarizes the whole file
## Important Notice:
1.) JSON Input Format (Required)
SentimentScopeAI only accepts JSON input.
The review file must follow this exact structure:
```json
[
"review_text",
"review_text",
"review_text",
...
]
```
Missing fields, incorrect keys, or non-JSON formats will cause parsing errors.
2.) JSON Must Be Valid
* File must be UTF-8 encoded
* No trailing commas
* No comments
* Must be a list ([]), not a single object
You can use a JSON validator if you are unsure.
3.) One Company & One Service per JSON File (Required)
This restriction is intentional:
* Sentiment aggregation assumes a single shared context
* Summary generation relies on consistent product-level patterns
* Mixing services can produce misleading summaries and recommendations
If you need to analyze multiple products or companies, create separate JSON files and run SentimentScopeAI independently for each dataset.
4.) Model Loading Behavior
* Transformer models are lazy-loaded
* First run may take longer due to:
* Model downloads
* Tokenizer initialization
* Subsequent runs are significantly faster
This design improves startup efficiency and memory usage.
## Web Scraping Feature
SentimentScopeAI now includes an **optional automated review import feature** that can scrape reviews directly from Yelp for analysis.
### Additional Setup for Web Scraping
If you want to use the automated scraping feature, install the required browser:
```bash
playwright install chromium
```
### Example Usage
```python
from SentimentScopeAI import sentimentscopeai as ssAI
bot = ssAI.SentimentScopeAI("iphone_reviews.json", "<Company Name>", "<Service Name>")
bot.import_yelp_reviews("https://www.yelp.com/biz/business-name-city#reviews")
print(bot.generate_summary())
```
### Supported Platforms
- Yelp Reviews [https://www.yelp.com/]
### Important Notes
- Scraping may take several minutes for businesses with many reviews
- The feature includes anti-detection measures and random delays
- Reviews are automatically cleaned and formatted
- For best results, ensure a stable internet connection
### Disclaimer:
SentimentScopeAI is provided **as-is** and is **not liable** for any damages arising from its use. All input data is **processed locally** and is **not used for model training** or retained beyond execution. **Do not include personal, sensitive, or confidential information** in review data. SentimentScopeAI **may produce incomplete summaries or misclassify sentiment**. Always **verify critical insights** before making business decisions. **Web Scraping Notice:** SentimentScopeAI is **not affiliated with, endorsed by, or partnered with Yelp Inc.** Users are **solely responsible for complying with Yelp's Terms of Service** and applicable laws. This feature is provided for **research and personal use only**. Users are **responsible for ensuring ethical and appropriate use** of this system.
| text/markdown | Vignesh Thondikulam | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"torch",
"transformers",
"sentencepiece",
"playwright>=1.40.0",
"playwright-stealth>=0.1.0",
"seleniumbase>=4.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/VigneshT24/SentimentScopeAI",
"Repository, https://github.com/VigneshT24/SentimentScopeAI"
] | twine/6.2.0 CPython/3.12.0 | 2026-02-20T23:06:24.899008 | sentimentscopeai-1.4.2.tar.gz | 16,281 | 68/56/a44ec8f9399f8107f774494b52612c8166234ea02d8f2267b9f10d86e995/sentimentscopeai-1.4.2.tar.gz | source | sdist | null | false | 155e89a109f1d099bb2ab8708a933313 | 23563fc9da3ceaa0cfd6ba67ff5d535d51f4ec1af3eab4cc963ee0269d3a6dad | 6856a44ec8f9399f8107f774494b52612c8166234ea02d8f2267b9f10d86e995 | null | [
"LICENSE"
] | 212 |
2.4 | synix | 0.15.0 | Build system for agent memory | <p align="center">
<img src="./assets/logo.svg" alt="Synix logo" width="120">
</p>
<pre align="center">
███████╗██╗ ██╗███╗ ██╗██╗██╗ ██╗
██╔════╝╚██╗ ██╔╝████╗ ██║██║╚██╗██╔╝
███████╗ ╚████╔╝ ██╔██╗ ██║██║ ╚███╔╝
╚════██║ ╚██╔╝ ██║╚██╗██║██║ ██╔██╗
███████║ ██║ ██║ ╚████║██║██╔╝ ██╗
╚══════╝ ╚═╝ ╚═╝ ╚═══╝╚═╝╚═╝ ╚═╝
</pre>
<h3 align="center">A build system for agent memory.</h3>
<p align="center">
<video src="./templates/02-tv-returns/tv_returns.mp4" width="720" controls></video>
</p>
## The Problem
Agent memory hasn't converged. Mem0, Letta, Zep, LangMem — each bakes in a different architecture because the right one depends on your domain and changes as your agent evolves. Most systems force you to commit to a schema early. Changing your approach means migrations or starting over.
## What Synix Does
Conversations are sources. Prompts are build rules. Summaries and world models are artifacts. Declare your memory architecture in Python, build it, then change it — only affected layers rebuild. Trace any artifact back through the dependency graph to its source conversation.
```bash
uvx synix build pipeline.py
uvx synix search "return policy"
uvx synix validate # experimental
```
## Quick Start
```bash
uvx synix init my-project
cd my-project
```
Add your API key (see `pipeline.py` for provider config), then build:
```bash
uvx synix build
```
Browse, search, and validate:
```bash
uvx synix list # all artifacts, grouped by layer
uvx synix show final-report # render an artifact
uvx synix search "hiking" # full-text search
uvx synix validate # run declared validators (experimental)
```
## Defining a Pipeline
A pipeline is a Python file. Layers are real objects with dependencies expressed as object references.
```python
# pipeline.py
from synix import Pipeline, Source, SearchIndex
from synix.ext import MapSynthesis, ReduceSynthesis
pipeline = Pipeline("my-pipeline")
pipeline.source_dir = "./sources"
pipeline.build_dir = "./build"
pipeline.llm_config = {
"provider": "anthropic",
"model": "claude-haiku-4-5-20251001",
"temperature": 0.3,
"max_tokens": 1024,
}
# Parse source files
bios = Source("bios", dir="./sources/bios")
# 1:1 — apply a prompt to each input
work_styles = MapSynthesis(
"work_styles",
depends_on=[bios],
prompt="Infer this person's work style in 2-3 sentences:\n\n{artifact}",
artifact_type="work_style",
)
# N:1 — combine all inputs into one output
report = ReduceSynthesis(
"report",
depends_on=[work_styles],
prompt="Write a team analysis from these profiles:\n\n{artifacts}",
label="team-report",
artifact_type="report",
)
pipeline.add(bios, work_styles, report)
pipeline.add(SearchIndex("search", sources=[work_styles, report], search=["fulltext"]))
```
This is a complete, working pipeline. `uvx synix build pipeline.py` runs it.
For the full pipeline API, built-in transforms, validators, and advanced patterns, see [docs/pipeline-api.md](docs/pipeline-api.md).
## Configurable Transforms (`synix.ext`)
Most LLM steps follow one of four patterns. The `synix.ext` module provides configurable transforms for each — no custom classes needed.
```python
from synix.ext import MapSynthesis, GroupSynthesis, ReduceSynthesis, FoldSynthesis
```
| Transform | Pattern | Use when... |
|-----------|---------|-------------|
| `MapSynthesis` | 1:1 | Each input gets its own LLM call |
| `GroupSynthesis` | N:M | Group inputs by a metadata key, one output per group |
| `ReduceSynthesis` | N:1 | All inputs become a single output |
| `FoldSynthesis` | N:1 sequential | Accumulate through inputs one at a time |
All four take a `prompt` string with placeholders like `{artifact}`, `{artifacts}`, `{group_key}`, `{accumulated}`. Changing the prompt automatically invalidates the cache.
For full parameter reference and examples of each, see [docs/pipeline-api.md#configurable-transforms](docs/pipeline-api.md#configurable-transforms-synixext).
When you need logic beyond prompt templating — filtering, conditional branching, multi-step chains — write a [custom Transform subclass](docs/pipeline-api.md#custom-transforms).
## Built-in Transforms
Pre-built transforms for common agent memory patterns. Import from `synix.transforms`:
| Class | What it does |
|-------|-------------|
| `EpisodeSummary` | 1 transcript → 1 episode summary |
| `MonthlyRollup` | Group episodes by month, synthesize each |
| `TopicalRollup` | Group episodes by user-defined topics |
| `CoreSynthesis` | All rollups → single core memory document |
| `Merge` | Group artifacts by content similarity (Jaccard) |
## CLI Reference
| Command | What it does |
|---------|-------------|
| `uvx synix init <name>` | Scaffold a new project with sources, pipeline, and README |
| `uvx synix build` | Run the pipeline. Only rebuilds what changed |
| `uvx synix plan` | Dry-run — show what would build without running transforms |
| `uvx synix plan --explain-cache` | Plan with inline cache decision reasons |
| `uvx synix list [layer]` | List all artifacts, optionally filtered by layer |
| `uvx synix show <id>` | Display an artifact. Resolves by label or ID prefix. `--raw` for JSON |
| `uvx synix search <query>` | Full-text search. `--mode hybrid` for semantic |
| `uvx synix validate` | *(Experimental)* Run validators against build artifacts |
| `uvx synix fix` | *(Experimental)* LLM-assisted repair of violations |
| `uvx synix lineage <id>` | Show the full provenance chain for an artifact |
| `uvx synix clean` | Delete the build directory |
| `uvx synix batch-build plan` | *(Experimental)* Dry-run showing which layers would batch vs sync |
| `uvx synix batch-build run` | *(Experimental)* Submit a batch build via OpenAI Batch API. `--poll` to wait |
| `uvx synix batch-build resume <id>` | *(Experimental)* Resume a previously submitted batch build |
| `uvx synix batch-build list` | *(Experimental)* Show all batch build instances and their status |
| `uvx synix batch-build status <id>` | *(Experimental)* Detailed status for a specific batch build. `--latest` for most recent |
| `uvx 'synix[mesh]' mesh create` | *(Experimental)* Create a new mesh with config and token |
| `uvx 'synix[mesh]' mesh provision` | *(Experimental)* Join this machine to a mesh as server or client |
| `uvx 'synix[mesh]' mesh status` | *(Experimental)* Show mesh health, members, and last build |
| `uvx 'synix[mesh]' mesh list` | *(Experimental)* List all meshes on this machine |
## Batch Build (Experimental)
> **Warning:** Batch build is experimental. Commands, state formats, and behavior may change in future releases.
The OpenAI Batch API processes LLM requests asynchronously at **50% cost** with a 24-hour SLA. Synix wraps this into `batch-build` — submit your pipeline, disconnect, come back when it's done.
### Quick Example
```python
# pipeline.py — mixed-provider pipeline
pipeline.llm_config = {
"provider": "openai", # OpenAI layers → batch mode (automatic)
"model": "gpt-4o",
}
episodes = EpisodeSummary("episodes", depends_on=[transcripts])
monthly = MonthlyRollup("monthly", depends_on=[episodes])
# Force this layer to run synchronously via Anthropic
core = CoreSynthesis("core", depends_on=[monthly], batch=False)
core.config = {"llm_config": {"provider": "anthropic", "model": "claude-sonnet-4-20250514"}}
```
```bash
# Submit and wait for completion
uvx synix batch-build run pipeline.py --poll
```
### Poll vs Resume
**Poll workflow** — submit and wait in a single session:
```bash
uvx synix batch-build run pipeline.py --poll --poll-interval 120
```
**Resume workflow** — submit, disconnect, come back later:
```bash
# Submit (exits after first batch is submitted)
uvx synix batch-build run pipeline.py
# Build ID: batch-a1b2c3d4
# Resume with: synix batch-build resume batch-a1b2c3d4 pipeline.py --poll
# Check on it later
uvx synix batch-build status --latest
# Resume and poll to completion
uvx synix batch-build resume batch-a1b2c3d4 pipeline.py --poll
```
### The `batch` Parameter
Each transform accepts an optional `batch` parameter controlling whether it uses the Batch API:
| Value | Behavior |
|-------|----------|
| `None` (default) | Auto-detect: batch if the layer's provider is native OpenAI, sync otherwise. |
| `True` | Force batch mode. Raises an error if the provider is not native OpenAI. |
| `False` | Force synchronous execution, even if the provider supports batch. |
```python
episodes = EpisodeSummary("episodes", depends_on=[transcripts]) # auto
monthly = MonthlyRollup("monthly", depends_on=[episodes], batch=True) # force batch
core = CoreSynthesis("core", depends_on=[monthly], batch=False) # force sync
```
### Provider Restrictions
Batch mode **only works with native OpenAI** (`provider="openai"` with no custom `base_url`). Transforms using Anthropic, DeepSeek, or OpenAI-compatible endpoints via `base_url` always run synchronously. Setting `batch=True` on a non-OpenAI layer is a hard error.
### Transform Requirements
Transforms used in batch builds must be **stateless** — their `execute()` method must be idempotent and produce deterministic prompts from the same inputs. All built-in transforms (`EpisodeSummary`, `MonthlyRollup`, `TopicalRollup`, `CoreSynthesis`) meet this requirement.
See [docs/batch-build.md](docs/batch-build.md) for the full specification including state management, error handling, and the request collection protocol.
## Mesh — Distributed Builds (Experimental)
> **Warning:** Mesh is experimental. Commands, configuration, and behavior may change in future releases.
Synix Mesh distributes pipeline builds across machines over a private network (Tailscale). A central server receives source files from clients, runs builds, and distributes artifact bundles back. Clients automatically watch local directories, submit new files, and pull results.
```bash
# Mesh needs the [mesh] extra for its dependencies
uvx 'synix[mesh]' mesh create --name my-mesh --pipeline ./pipeline.py
uvx 'synix[mesh]' mesh provision --name my-mesh --role server
uvx 'synix[mesh]' mesh provision --name my-mesh --role client --server server-host:7433
# Check status
uvx 'synix[mesh]' mesh status --name my-mesh
```
All mesh state persists in `~/.synix-mesh/` on disk. Features: debounced build scheduling, ETag-based artifact distribution, shared-token auth, automatic leader election with term-based fencing, deploy hooks, webhook notifications.
See [docs/mesh.md](docs/mesh.md) for the full guide — configuration, server API, failover protocol, security model, and data layout.
## Key Capabilities
**Incremental rebuilds** — Change a prompt or add new sources. Only downstream artifacts reprocess.
**Full provenance** — Every artifact chains back to the source conversations that produced it. `uvx synix lineage <id>` shows the full tree.
**Fingerprint-based caching** — Build fingerprints capture inputs, prompts, model config, and transform source code. Change any component and only affected artifacts rebuild. See [docs/cache-semantics.md](docs/cache-semantics.md).
**Altitude-aware search** — Query across episode summaries, rollups, or core memory. Drill into provenance from any result.
**Architecture evolution** — Swap monthly rollups for topic-based clustering. Transcripts and episodes stay cached. No migration scripts.
## Where Synix Fits
| | Mem0 | Letta | Zep | LangMem | **Synix** |
|---|---|---|---|---|---|
| **Approach** | API-first memory store | Agent-managed memory | Temporal knowledge graph | Taxonomy-driven memory | Build system with pipelines |
| **Incremental rebuilds** | — | — | — | — | Yes |
| **Provenance tracking** | — | — | — | — | Full chain to source |
| **Architecture changes** | Migration | Migration | Migration | Migration | Rebuild |
| **Schema** | Fixed | Fixed | Fixed | Fixed | You define it |
Synix is not a memory store. It's the build system that produces one.
## Learn More
| Doc | Contents |
|-----|----------|
| [Pipeline API](docs/pipeline-api.md) | Full Python API — ext transforms, built-in transforms, projections, validators, custom transforms |
| [Entity Model](docs/entity-model.md) | Artifact identity, storage format, cache logic |
| [Cache Semantics](docs/cache-semantics.md) | Rebuild trigger matrix, fingerprint scheme |
| [Batch Build](docs/batch-build.md) | *(Experimental)* OpenAI Batch API for 50% cost reduction |
| [Mesh](docs/mesh.md) | *(Experimental)* Distributed builds across machines via Tailscale |
| [CLI UX](docs/cli-ux.md) | Output formatting, color scheme |
## Links
- [synix.dev](https://synix.dev)
- [GitHub](https://github.com/marklubin/synix)
- [llms.txt](./llms.txt) — machine-readable project summary for LLMs
- [Issue tracker](https://github.com/marklubin/synix/issues) — known limitations and roadmap
- MIT License
| text/markdown | Mark Lubin | null | null | null | null | agent, build-system, llm, memory | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anthropic>=0.40.0",
"click>=8.0",
"fastembed>=0.4",
"openai>=1.0",
"pydantic>=2.0",
"python-dotenv>=1.0",
"pyyaml>=6.0",
"rich>=13.0",
"httpx>=0.27; extra == \"mesh\"",
"starlette>=0.40; extra == \"mesh\"",
"uvicorn[standard]>=0.30; extra == \"mesh\""
] | [] | [] | [] | [
"Homepage, https://github.com/marklubin/synix",
"Repository, https://github.com/marklubin/synix"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:06:20.127905 | synix-0.15.0.tar.gz | 3,821,511 | 7d/7f/3568172ea6e6872c6835c278f812b5a5cd99e77d20be4739bfeff76f32fa/synix-0.15.0.tar.gz | source | sdist | null | false | 85a40eca3e41db4e5ad29aecab1a1e30 | fe71e2c56e4820244ad048f62881bcd89a3c5b53eb28e6a132d55d4b42b77c62 | 7d7f3568172ea6e6872c6835c278f812b5a5cd99e77d20be4739bfeff76f32fa | MIT | [
"LICENSE"
] | 218 |
2.4 | tgcore | 1.0.22 | TGCoreSDK | Enterprise Telegram Bot API Framework. | <h1 align="center">TGCore SDK</h1>
<p align="center">
Enterprise Telegram Bot Framework • Secure • Scalable • Zero-Trust Ready
</p>
<p align="center">
<img src="https://img.shields.io/badge/Framework-TGCore-black?style=for-the-badge">
<img src="https://img.shields.io/badge/API-Services%20Pro-purple?style=for-the-badge">
<img src="https://img.shields.io/badge/Security-AES--256%20GCM-green?style=for-the-badge">
</p>














The most secure Telegram Bot SDK ever built.
## ✨ Features
- ⚡ Async native ("async/await")
- 🔐 Secure API key authentication
- 🤖 Multi-bot token support
- 🔁 Token rotation ready
- 🧩 Builder pattern + simple calls
- 📦 Auto-generated methods from OpenAPI schema
- 📚 Auto docstring generation
- 🏗 Enterprise-ready architecture
---
## 📦 Installation
`pip install tgcore`
Or install locally:
`pip install -e .`
---
## 🔑 Authentication
Create client instance:
```py
from tgcore import Client
client = Client("fw_live_xxx")
await client.telegram.send_message(
chat_id="@channel",
text="hello"
)
```
---
## 👾 Usage
### sendMessage
```py
# Latest version 1.0.16+
# Support Pyrogram/Kurigram (KeyboardBuilder)
from tgcore import Client, KeyboardBuilder
tg = Client()
async def send():
await (
tg.raw
.sendMessage(
chat_id=m.chat.id,
text="Testing",
reply_markup=(
KeyboardBuilder()
.url("This Url", "https://github.com/TeamKillerX/tgcore")
.style("This color", "danger", callback_data="#abc")
.build()
)
)
.execute()
)
```
### New button
```py
# old version: 1.0.14
from tgcore import Client, KeyboardBuilder
tg = Client()
async def use_pyrogram(m):
await tg.telegram.send_message(
chat_id=str(m.chat.id),
text="This Button",
reply_markup=(
KeyboardBuilder()
.row("GitHub", url="https://github.com")
.row("Docs", url="https://www.learnpython.org/")
.row("Pypi", url="https://pypi.org/project/tgcore/")
.build()
)
```
### Simple Call
```py
await client.telegram.send_message(
chat_id="@channel",
text="Hello world"
)
```
---
### Builder Pattern
```py
await (
client.telegram
.send_photo_call(chat_id="@channel", photo="https://img.jpg")
.execute()
)
```
---
## 🔄 Token Rotation Support
The server supports storing encrypted tokens using AES-256-GCM.
The SDK automatically uses the active token version.
## 🔒 Security Model
TgCoreSDK never exposes bot tokens to clients.
Flow:
Client → API Gateway → Decrypt → Telegram API
Benefits:
- prevents token leaks
- safe frontend usage
- safe monitoring dashboards
- supports IP restrictions
---
## Why TGCore?
Unlike traditional Telegram SDKs, TGCore is built as a **secure middleware layer** that prevents token leaks, enforces API-key auth, and supports enterprise-grade scaling.
Designed for production, not demos.
## Compared to Native Telegram API
| Feature | Telegram API | TGCore |
|-------|--------------|--------|
Token Exposure | Yes | No |
Auth Layer | None | API Key + Secret |
Proxy Support | Manual | Built-in |
Multi Bot | Limited | Yes |
Webhook Security | Basic | Zero-Trust |
## 🧾 License
Licensed under Apache License 2.0
You may:
- use commercially
- modify
- distribute
- sublicense
---
## 🤝 Contributing
Pull requests welcome.
For major changes, open an issue first to discuss what you would like to change.
---
## 🔥 Status
Production Ready
---
## 👑 Author
Built with ❤️ by Randy W
| text/markdown | TeamKillerX | null | null | null | Apache-2.0 | telegram, telegram-bot, telegram-api, sdk, framework, tgcore, ryzenth | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Natural Language :: English"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.24.0",
"pydantic"
] | [] | [] | [] | [
"Source, https://github.com/TeamKillerX/tgcore",
"Issues, https://github.com/TeamKillerX/tgcore/issues",
"Documentation, https://services-pro.ryzenths.dpdns.org/api/v2/docs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:05:31.036252 | tgcore-1.0.22.tar.gz | 18,376 | 4c/76/cda4badb6b7d39ceda71a654dce02af480fe1890c277a46f6ae9d7f25939/tgcore-1.0.22.tar.gz | source | sdist | null | false | 644be26d1d904152ad11fa33b09e2713 | e2974d78f7332acd3e810b70585e7d6fa4b4dc3b5c8a7779cd49dc0b203dec2a | 4c76cda4badb6b7d39ceda71a654dce02af480fe1890c277a46f6ae9d7f25939 | null | [
"LICENSE",
"NOTICE"
] | 246 |
2.3 | flwr-nightly | 1.27.0.dev20260220 | Flower: A Friendly Federated AI Framework | # Flower: A Friendly Federated AI Framework
<p align="center">
<a href="https://flower.ai/">
<img src="https://flower.ai/_next/image/?url=%2F_next%2Fstatic%2Fmedia%2Fflwr-head.4d68867a.png&w=384&q=75" width="140px" alt="Flower Website" />
</a>
</p>
<p align="center">
<a href="https://flower.ai/">Website</a> |
<a href="https://flower.ai/blog">Blog</a> |
<a href="https://flower.ai/docs/">Docs</a> |
<a href="https://flower.ai/events/flower-ai-summit-2026/?utm_source=docs">Summit</a> |
<a href="https://flower.ai/join-slack">Slack</a>
<br /><br />
</p>
[](https://github.com/adap/flower/blob/main/LICENSE)
[](https://github.com/adap/flower/blob/main/CONTRIBUTING.md)

[](https://pepy.tech/project/flwr)
[](https://hub.docker.com/u/flwr)
[](https://flower.ai/join-slack)
Flower (`flwr`) is a framework for building federated AI systems. The
design of Flower is based on a few guiding principles:
- **Customizable**: Federated learning systems vary wildly from one use case to
another. Flower allows for a wide range of different configurations depending
on the needs of each individual use case.
- **Extendable**: Flower originated from a research project at the University of
Oxford, so it was built with AI research in mind. Many components can be
extended and overridden to build new state-of-the-art systems.
- **Framework-agnostic**: Different machine learning frameworks have different
strengths. Flower can be used with any machine learning framework, for
example, [PyTorch](https://pytorch.org), [TensorFlow](https://tensorflow.org), [Hugging Face Transformers](https://huggingface.co/), [PyTorch Lightning](https://pytorchlightning.ai/), [scikit-learn](https://scikit-learn.org/), [JAX](https://jax.readthedocs.io/), [TFLite](https://tensorflow.org/lite/), [MONAI](https://docs.monai.io/en/latest/index.html), [fastai](https://www.fast.ai/), [MLX](https://ml-explore.github.io/mlx/build/html/index.html), [XGBoost](https://xgboost.readthedocs.io/en/stable/), [LeRobot](https://github.com/huggingface/lerobot) for federated robots, [Pandas](https://pandas.pydata.org/) for federated analytics, or even raw [NumPy](https://numpy.org/)
for users who enjoy computing gradients by hand.
- **Understandable**: Flower is written with maintainability in mind. The
community is encouraged to both read and contribute to the codebase.
Meet the Flower community on [flower.ai](https://flower.ai)!
## Federated Learning Tutorial
Flower's goal is to make federated learning accessible to everyone. This series of tutorials introduces the fundamentals of federated learning and how to implement them in Flower.
0. **[What is Federated Learning?](https://flower.ai/docs/framework/main/en/tutorial-series-what-is-federated-learning.html)**
1. **[An Introduction to Federated Learning](https://flower.ai/docs/framework/main/en/tutorial-series-get-started-with-flower-pytorch.html)**
2. **[Using Strategies in Federated Learning](https://flower.ai/docs/framework/main/en/tutorial-series-use-a-federated-learning-strategy-pytorch.html)**
3. **[Customize a Flower Strategy](https://flower.ai/docs/framework/main/en/tutorial-series-build-a-strategy-from-scratch-pytorch.html)**
4. **[Communicate Custom Messages](https://flower.ai/docs/framework/main/en/tutorial-series-customize-the-client-pytorch.html)**
Stay tuned, more tutorials are coming soon. Topics include **Privacy and Security in Federated Learning**, and **Scaling Federated Learning**.
## 30-Minute Federated Learning Tutorial
[](https://colab.research.google.com/github/adap/flower/blob/main/examples/flower-in-30-minutes/tutorial.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/examples/flower-in-30-minutes/tutorial.ipynb))
## Documentation
[Flower Docs](https://flower.ai/docs):
- [Installation](https://flower.ai/docs/framework/how-to-install-flower.html)
- [Quickstart (TensorFlow)](https://flower.ai/docs/framework/tutorial-quickstart-tensorflow.html)
- [Quickstart (PyTorch)](https://flower.ai/docs/framework/tutorial-quickstart-pytorch.html)
- [Quickstart (Hugging Face)](https://flower.ai/docs/framework/tutorial-quickstart-huggingface.html)
- [Quickstart (PyTorch Lightning)](https://flower.ai/docs/framework/tutorial-quickstart-pytorch-lightning.html)
- [Quickstart (Pandas)](https://flower.ai/docs/framework/tutorial-quickstart-pandas.html)
- [Quickstart (fastai)](https://flower.ai/docs/framework/tutorial-quickstart-fastai.html)
- [Quickstart (JAX)](https://flower.ai/docs/framework/tutorial-quickstart-jax.html)
- [Quickstart (scikit-learn)](https://flower.ai/docs/framework/tutorial-quickstart-scikitlearn.html)
- [Quickstart (Android [TFLite])](https://flower.ai/docs/framework/tutorial-quickstart-android.html)
- [Quickstart (iOS [CoreML])](https://flower.ai/docs/framework/tutorial-quickstart-ios.html)
## Flower Baselines
Flower Baselines is a collection of community-contributed projects that reproduce the experiments performed in popular federated learning publications. Researchers can build on Flower Baselines to quickly evaluate new ideas. The Flower community loves contributions! Make your work more visible and enable others to build on it by contributing it as a baseline!
- [DASHA](https://github.com/adap/flower/tree/main/baselines/dasha)
- [DepthFL](https://github.com/adap/flower/tree/main/baselines/depthfl)
- [FedBN](https://github.com/adap/flower/tree/main/baselines/fedbn)
- [FedMeta](https://github.com/adap/flower/tree/main/baselines/fedmeta)
- [FedMLB](https://github.com/adap/flower/tree/main/baselines/fedmlb)
- [FedPer](https://github.com/adap/flower/tree/main/baselines/fedper)
- [FedProx](https://github.com/adap/flower/tree/main/baselines/fedprox)
- [FedNova](https://github.com/adap/flower/tree/main/baselines/fednova)
- [HeteroFL](https://github.com/adap/flower/tree/main/baselines/heterofl)
- [FedAvgM](https://github.com/adap/flower/tree/main/baselines/fedavgm)
- [FedRep](https://github.com/adap/flower/tree/main/baselines/fedrep)
- [FedStar](https://github.com/adap/flower/tree/main/baselines/fedstar)
- [FedWav2vec2](https://github.com/adap/flower/tree/main/baselines/fedwav2vec2)
- [FjORD](https://github.com/adap/flower/tree/main/baselines/fjord)
- [MOON](https://github.com/adap/flower/tree/main/baselines/moon)
- [niid-Bench](https://github.com/adap/flower/tree/main/baselines/niid_bench)
- [TAMUNA](https://github.com/adap/flower/tree/main/baselines/tamuna)
- [FedVSSL](https://github.com/adap/flower/tree/main/baselines/fedvssl)
- [FedXGBoost](https://github.com/adap/flower/tree/main/baselines/hfedxgboost)
- [FedPara](https://github.com/adap/flower/tree/main/baselines/fedpara)
- [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist)
- [FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization)
Please refer to the [Flower Baselines Documentation](https://flower.ai/docs/baselines/) for a detailed categorization of baselines and for additional info including:
- [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html)
- [How to contribute a new Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)
## Flower Usage Examples
Several code examples show different usage scenarios of Flower (in combination with popular machine learning frameworks such as PyTorch or TensorFlow).
Quickstart examples:
- [Quickstart (TensorFlow)](https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow)
- [Quickstart (PyTorch)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch)
- [Quickstart (Hugging Face)](https://github.com/adap/flower/tree/main/examples/quickstart-huggingface)
- [Quickstart (PyTorch Lightning)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch-lightning)
- [Quickstart (fastai)](https://github.com/adap/flower/tree/main/examples/quickstart-fastai)
- [Quickstart (Pandas)](https://github.com/adap/flower/tree/main/examples/quickstart-pandas)
- [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart-jax)
- [Quickstart (MONAI)](https://github.com/adap/flower/tree/main/examples/quickstart-monai)
- [Quickstart (scikit-learn)](https://github.com/adap/flower/tree/main/examples/quickstart-sklearn)
- [Quickstart (Android [TFLite])](https://github.com/adap/flower/tree/main/examples/android)
- [Quickstart (iOS [CoreML])](https://github.com/adap/flower/tree/main/examples/ios)
- [Quickstart (MLX)](https://github.com/adap/flower/tree/main/examples/quickstart-mlx)
- [Quickstart (XGBoost)](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart)
Other [examples](https://github.com/adap/flower/tree/main/examples):
- [Raspberry Pi & Nvidia Jetson Tutorial](https://github.com/adap/flower/tree/main/examples/embedded-devices)
- [PyTorch: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated)
- [Vertical FL](https://github.com/adap/flower/tree/main/examples/vertical-fl)
- [Federated Finetuning of OpenAI's Whisper](https://github.com/adap/flower/tree/main/examples/whisper-federated-finetuning)
- [Federated Finetuning of Large Language Model](https://github.com/adap/flower/tree/main/examples/flowertune-llm)
- [Federated Finetuning of a Vision Transformer](https://github.com/adap/flower/tree/main/examples/flowertune-vit)
- [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow)
- [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)
- [Comprehensive Flower+XGBoost](https://github.com/adap/flower/tree/main/examples/xgboost-comprehensive)
- [Flower with KaplanMeierFitter from the lifelines library](https://github.com/adap/flower/tree/main/examples/federated-kaplan-meier-fitter)
- [Sample Level Privacy with Opacus](https://github.com/adap/flower/tree/main/examples/opacus)
- [Flower with a Tabular Dataset](https://github.com/adap/flower/tree/main/examples/fl-tabular)
## Community
Flower is built by a wonderful community of researchers and engineers. [Join Slack](https://flower.ai/join-slack) to meet them, [contributions](#contributing-to-flower) are welcome.
<a href="https://github.com/adap/flower/graphs/contributors">
<img src="https://contrib.rocks/image?repo=adap/flower&columns=10" />
</a>
## Citation
If you publish work that uses Flower, please cite Flower as follows:
```bibtex
@article{beutel2020flower,
title={Flower: A Friendly Federated Learning Research Framework},
author={Beutel, Daniel J and Topal, Taner and Mathur, Akhil and Qiu, Xinchi and Fernandez-Marques, Javier and Gao, Yan and Sani, Lorenzo and Kwing, Hei Li and Parcollet, Titouan and Gusmão, Pedro PB de and Lane, Nicholas D},
journal={arXiv preprint arXiv:2007.14390},
year={2020}
}
```
Please also consider adding your publication to the list of Flower-based publications in the docs, just open a Pull Request.
## Contributing to Flower
We welcome contributions. Please see [CONTRIBUTING.md](CONTRIBUTING.md) to get started!
| text/markdown | The Flower Authors | hello@flower.ai | null | null | Apache-2.0 | Artificial Intelligence, Federated AI, Federated Analytics, Federated Evaluation, Federated Learning, Flower, Machine Learning | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | https://flower.ai | null | <4.0,>=3.10 | [] | [] | [] | [
"numpy<3.0.0,>=1.26.0",
"grpcio<2.0.0,>=1.70.0",
"grpcio-health-checking<2.0.0,>=1.70.0",
"protobuf<7.0.0,>=5.28.0",
"cryptography<47.0.0,>=46.0.5",
"pycryptodome<4.0.0,>=3.18.0",
"iterators<0.0.3,>=0.0.2",
"typer<0.21.0,>=0.12.5",
"tomli<3.0.0,>=2.0.1",
"tomli-w<2.0.0,>=1.0.0",
"pathspec<0.13.0,>=0.12.1",
"rich<14.0.0,>=13.5.0",
"pyyaml<7.0.0,>=6.0.2",
"requests<3.0.0,>=2.31.0",
"click<9.0.0,>=8.0.0",
"SQLAlchemy<3.0.0,>=2.0.45",
"alembic<2.0.0,>=1.18.1",
"ray==2.51.1; (python_version >= \"3.10\" and python_version < \"3.13\") and extra == \"simulation\"",
"ray==2.51.1; (sys_platform != \"win32\" and python_version == \"3.13\") and extra == \"simulation\"",
"starlette<0.51.0,>=0.50.0; extra == \"rest\"",
"uvicorn[standard]<0.41.0,>=0.40.0; extra == \"rest\""
] | [] | [] | [] | [
"Homepage, https://flower.ai",
"Repository, https://github.com/adap/flower",
"Documentation, https://flower.ai"
] | poetry/2.1.3 CPython/3.10.19 Linux/6.8.0-1044-azure | 2026-02-20T23:05:17.912711 | flwr_nightly-1.27.0.dev20260220.tar.gz | 419,935 | 90/08/c4fd77d30cf5d9fa690ff4e264da7e0e4f9cc70a007dc9179b36dcc25375/flwr_nightly-1.27.0.dev20260220.tar.gz | source | sdist | null | false | 5e2a850d661f6bea1fc5552fd080c17e | 8252360f2e8162903c9919362d5bf7a69b450464f357aa2a7aa42e9171e79c09 | 9008c4fd77d30cf5d9fa690ff4e264da7e0e4f9cc70a007dc9179b36dcc25375 | null | [] | 191 |
2.4 | nativ | 0.2.0 | Python SDK for the Nativ AI localization platform | # Nativ Python SDK
The official Python client for the [Nativ](https://usenativ.com) AI localization platform.
Wraps the full Nativ REST API with **sync and async** clients, typed responses, and zero config — just add your API key.
## Installation
```bash
pip install nativ
```
## Quick start
```python
from nativ import Nativ
client = Nativ(api_key="nativ_...") # or set NATIV_API_KEY env var
# Translate text
result = client.translate("Launch your product globally", target_language="French")
print(result.translated_text) # "Lancez votre produit à l'international"
print(result.tm_match) # TM match details (score, source, etc.)
# Batch translate
results = client.translate_batch(
["Sign up", "Log in", "Settings"],
target_language="German",
)
for r in results:
print(r.translated_text)
```
## CLI
The `nativ` command is included when you install the SDK. Set your API key once and use it from any terminal or CI pipeline.
```bash
export NATIV_API_KEY="nativ_..."
```
### Translate
```bash
nativ translate "Launch your product globally" --to French
# Lancez votre produit à l'international
nativ t "Hello" --to German --formality formal --backtranslate --json
```
### Batch translate
```bash
nativ batch "Sign up" "Log in" "Settings" --to Spanish
# Or pipe from stdin (one text per line):
cat strings.txt | nativ batch --to Japanese
```
### Translation memory
```bash
nativ tm search "Hello" --target-lang fr
nativ tm list --target-lang fr --limit 10
nativ tm add "Hello" "Bonjour" --source-lang en --target-lang fr
nativ tm stats
nativ tm delete <entry-id>
```
### Languages, style guides, brand voice
```bash
nativ languages
nativ style-guides
nativ brand-voice
```
### OCR & image inspection
```bash
nativ extract screenshot.png
nativ inspect ad_creative.jpg --countries "Japan,Brazil"
```
### JSON output
Every command supports `--json` for machine-readable output, perfect for shell scripts and CI:
```bash
nativ translate "Hello" --to French --json | jq .translated_text
```
## Async usage
```python
import asyncio
from nativ import AsyncNativ
async def main():
async with AsyncNativ() as client:
result = await client.translate("Hello", target_language="Japanese")
print(result.translated_text)
asyncio.run(main())
```
## Features
### Translation
```python
result = client.translate(
"Welcome to our platform",
target_language="Spanish",
context="SaaS onboarding email subject line",
formality="formal",
backtranslate=True,
)
print(result.translated_text) # translated text
print(result.backtranslation) # back-translation for QA
print(result.rationale) # AI explanation of translation choices
print(result.tm_match.score) # TM match percentage
```
### OCR — extract text from images
```python
result = client.extract_text("screenshot.png")
print(result.extracted_text)
```
### Image culturalization
```python
result = client.culturalize_image(
"banner_en.png",
text="Soldes d'été",
language_code="fr",
num_images=3,
)
for img in result.images:
# img.image_base64 contains the generated image
pass
```
### Cultural sensitivity inspection
```python
result = client.inspect_image("ad_creative.jpg")
print(result.verdict) # "SAFE" or "NOT SAFE"
for issue in result.affected_countries:
print(f"{issue.country}: {issue.issue} → {issue.suggestion}")
```
### Translation memory
```python
# Search
matches = client.search_tm("Sign up", target_language_code="fr")
for m in matches:
print(f"{m.score:.0f}% — {m.source_text} → {m.target_text}")
# Add entry
client.add_tm_entry(
source_text="Sign up",
target_text="S'inscrire",
source_language_code="en",
target_language_code="fr-FR",
name="onboarding CTA",
)
# List & filter
entries = client.list_tm_entries(target_language_code="fr-FR", enabled_only=True)
print(f"{entries.total} entries")
# Stats
stats = client.get_tm_stats()
print(f"{stats.total} total, {stats.enabled} enabled")
```
### Languages
```python
languages = client.get_languages()
for lang in languages:
print(f"{lang.language} ({lang.language_code}) — formality: {lang.formality}")
```
### Style guides & brand voice
```python
# List style guides
guides = client.get_style_guides()
for g in guides:
print(f"{g.title} — {'enabled' if g.is_enabled else 'disabled'}")
# Get brand voice prompt
voice = client.get_brand_voice()
print(voice.prompt)
# Create a style guide
client.create_style_guide(
title="Tone of Voice",
content="Always use active voice. Avoid jargon.",
)
```
## Error handling
```python
from nativ import Nativ, InsufficientCreditsError, AuthenticationError
client = Nativ()
try:
result = client.translate("Hello", target_language="French")
except AuthenticationError:
print("Bad API key")
except InsufficientCreditsError:
print("Top up at dashboard.usenativ.com")
```
All exceptions inherit from `NativError` and carry `status_code` and `body` attributes.
| Exception | HTTP | When |
|----------------------------|------|-------------------------------|
| `AuthenticationError` | 401 | Invalid or missing API key |
| `InsufficientCreditsError` | 402 | Not enough credits |
| `ValidationError` | 400 | Bad request parameters |
| `NotFoundError` | 404 | Resource not found |
| `RateLimitError` | 429 | Too many requests |
| `ServerError` | 5xx | Nativ API server error |
## Configuration
```python
client = Nativ(
api_key="nativ_...", # or NATIV_API_KEY env var
base_url="https://...", # or NATIV_API_URL env var (default: api.usenativ.com)
timeout=120.0, # request timeout in seconds
)
```
## Building on top of this SDK
This SDK is the foundation for Nativ integrations:
- **CLI** — `nativ translate "Hello" --to French` (included, see above)
- **[nativ-mcp](https://pypi.org/project/nativ-mcp/)** — MCP server for Claude, Cursor, etc.
- **[langchain-nativ](https://pypi.org/project/langchain-nativ/)** — LangChain tool for AI agents
- **CrewAI** — works via langchain-nativ (see [CrewAI docs](https://github.com/Nativ-Technologies/nativ-python))
## License
MIT
| text/markdown | null | Nativ <hello@usenativ.com> | null | null | null | ai, cli, i18n, l10n, localization, translation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Localization",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.24.0",
"anyio[trio]>=4.0; extra == \"dev\"",
"pytest-anyio>=0.0.0; extra == \"dev\"",
"pytest-httpx>=0.35.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://usenativ.com",
"Documentation, https://docs.usenativ.com",
"Repository, https://github.com/Nativ-Technologies/nativ-python",
"Issues, https://github.com/Nativ-Technologies/nativ-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:04:49.109545 | nativ-0.2.0.tar.gz | 21,679 | e9/0b/96a1864423bce68b05de8b509df0fe48bb6b4c877920b1314344d944b849/nativ-0.2.0.tar.gz | source | sdist | null | false | e66998fb53f548b3376b69ca5d76ac52 | 319aefdcac743978c7f6f37f6418bf2542790317b3e6fc0ce0690affe26de4ed | e90b96a1864423bce68b05de8b509df0fe48bb6b4c877920b1314344d944b849 | MIT | [
"LICENSE"
] | 316 |
2.4 | qBitrr2 | 5.9.1 | Intelligent automation for qBittorrent and *Arr apps (Radarr/Sonarr/Lidarr) - health monitoring, instant imports, quality upgrades, request integration | # <img src="assets/logov2-clean.png" alt="qBitrr Logo" width="40" style="vertical-align: middle;"/> qBitrr
[](https://pypi.org/project/qBitrr2/)
[](https://pypi.org/project/qBitrr2/)
[](https://hub.docker.com/r/feramance/qbitrr)
[](https://github.com/Feramance/qBitrr/actions/workflows/codeql.yml)
[](https://github.com/Feramance/qBitrr/actions/workflows/nightly.yml)
[](https://results.pre-commit.ci/latest/github/Feramance/qBitrr/master)
[](LICENSE)
> 🧩 The intelligent glue between qBittorrent and the *Arr ecosystem (Radarr, Sonarr, Lidarr). Monitors torrent health, triggers instant imports, automates quality upgrades, manages disk space, integrates with request systems (Overseerr/Ombi), and provides a modern React dashboard for complete visibility and control.
## 📚 Documentation
**Full documentation is available at: https://feramance.github.io/qBitrr/**
- [Getting Started](https://feramance.github.io/qBitrr/getting-started/) – Installation guides for pip, Docker, and native setups
- [Configuration](https://feramance.github.io/qBitrr/configuration/) – qBittorrent, Arr instances, quality profiles, and more
- [Features](https://feramance.github.io/qBitrr/features/) – Health monitoring, automated search, quality management, disk space, auto-updates
- [WebUI](https://feramance.github.io/qBitrr/webui/) – Built-in React dashboard with live monitoring and config editor
- [Troubleshooting](https://feramance.github.io/qBitrr/troubleshooting/) – Common issues and debug logging
- [API Reference](https://feramance.github.io/qBitrr/reference/api/) – REST API documentation
## ⚡ Quick Start
### 🐍 Install with pip
```bash
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install qBitrr2
# First run creates ~/config/config.toml
qbitrr
```
### 🐳 Run with Docker
```bash
docker run -d \
--name qbitrr \
--tty \
-e TZ=Europe/London \
-p 6969:6969 \
-v /path/to/appdata/qbitrr:/config \
-v /path/to/completed/downloads:/completed_downloads:rw \
--restart unless-stopped \
feramance/qbitrr:latest
```
**Docker Compose:**
```yaml
services:
qbitrr:
image: feramance/qbitrr:latest
container_name: qbitrr
restart: unless-stopped
tty: true
environment:
TZ: Europe/London
ports:
- "6969:6969"
volumes:
- /path/to/appdata/qbitrr:/config
- /path/to/completed/downloads:/completed_downloads:rw
```
Access the WebUI at `http://<host>:6969/ui` after startup.
## ✨ Key Features
- **🚀 Multi-qBittorrent Support (v5.7.x+)** – Manage torrents across multiple qBittorrent instances for load balancing, redundancy, and VPN isolation
- **🚑 Torrent Health Monitoring** – Detect stalled/failed downloads, auto-blacklist, trigger re-searches
- **🔍 Automated Search** – Missing media, quality upgrades, custom format scoring
- **🎯 Request Integration** – Pull requests from Overseerr/Ombi, prioritize user-requested media
- **📊 Quality Management** – RSS sync, queue refresh, profile switching, custom format enforcement
- **🌱 Seeding Control** – Per-tracker settings, ratio/time limits, tracker injection
- **🛡️ Hit and Run Protection** – Automatic HnR obligation tracking with configurable thresholds, partial download handling, and dead tracker bypass
- **💾 Disk Space Management** – Auto-pause when low on space, configurable thresholds
- **🔄 Auto-Updates** – GitHub release-based updates with scheduled cron support
- **💻 Modern WebUI** – Live process monitoring, log viewer, Arr insights, config editor
## 🛠️ Essential Configuration
1. **Configure qBittorrent** in `~/config/config.toml`:
```toml
[qBit]
Host = "localhost"
Port = 8080
UserName = "admin"
Password = "adminpass"
```
2. **Add Arr instances**:
```toml
[Radarr-Movies]
URI = "http://localhost:7878"
APIKey = "your-radarr-api-key"
Category = "radarr-movies"
```
3. **Set completed folder**:
```toml
[Settings]
CompletedDownloadFolder = "/path/to/completed"
```
### 🆕 Multi-qBittorrent (v5.7.x+)
Manage torrents across multiple qBittorrent instances:
```toml
[qBit] # Default instance (required)
Host = "localhost"
Port = 8080
UserName = "admin"
Password = "password"
[qBit-seedbox] # Additional instance (optional)
Host = "192.168.1.100"
Port = 8080
UserName = "admin"
Password = "seedboxpass"
```
See [Multi-qBittorrent Guide](MULTI_QBIT_V3_USER_GUIDE.md) for complete documentation.
---
See [Configuration Guide](https://feramance.github.io/qBitrr/configuration/) and [config.example.toml](config.example.toml) for all available options.
## 📖 Resources
- **Documentation:** https://feramance.github.io/qBitrr/
- **PyPI Package:** https://pypi.org/project/qBitrr2/
- **Docker Hub:** https://hub.docker.com/r/feramance/qbitrr
- **Example Config:** [config.example.toml](config.example.toml)
- **API Documentation:** [docs/reference/api.md](docs/reference/api.md)
- **Systemd Setup:** [docs/getting-started/installation/systemd.md](docs/getting-started/installation/systemd.md)
## 🐛 Issues & Support
- **Report Bugs:** [Bug Report Template](.github/ISSUE_TEMPLATE/bug_report.yml)
- **Request Features:** [Feature Request Template](.github/ISSUE_TEMPLATE/feature_request.yml)
- **Discussions:** [GitHub Discussions](https://github.com/Feramance/qBitrr/discussions)
- **Troubleshooting:** [Common Issues](https://feramance.github.io/qBitrr/troubleshooting/)
## 🤝 Contributing
Contributions welcome! See [docs/development/contributing.md](docs/development/contributing.md) for coding guidelines and development setup.
**Development setup:**
```bash
# Python backend
make newenv && make syncenv
make reformat # Format and lint
# WebUI
cd webui && npm ci
npm run dev # Dev server at localhost:5173
```
## ❤️ Support
If qBitrr saves you time and headaches:
- ⭐ **Star the repo** – helps others discover qBitrr
- 💰 **Sponsor:** [Patreon](https://patreon.com/qBitrr) | [PayPal](https://www.paypal.me/feramance)
## 📄 License
Released under the [MIT License](LICENSE). Use it, modify it, share it—commercially or personally.
---
<div align="center">
**Made with ❤️ by the qBitrr community**
[Documentation](https://feramance.github.io/qBitrr/) • [PyPI](https://pypi.org/project/qBitrr2/) • [Docker](https://hub.docker.com/r/feramance/qbitrr) • [GitHub](https://github.com/Feramance/qBitrr)
</div>
| text/markdown | Feramance | fera@fera.wtf | null | null | MIT | qbittorrent, radarr, sonarr, lidarr, arr, automation, torrent, media, plex, jellyfin, overseerr, ombi | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Communications",
"Topic :: Internet",
"Topic :: Multimedia :: Video",
"Topic :: System :: Monitoring",
"Topic :: Terminals",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | https://github.com/Feramance/qBitrr | null | <4,>=3.11 | [] | [] | [] | [
"cachetools>=5.0",
"colorama>=0.4",
"coloredlogs>=15.0",
"flask>=3.0",
"environ-config>=23.1",
"ffmpeg-python>=0.2",
"jaraco.docker>=2.0",
"packaging>=23.0",
"pathos>=0.3",
"peewee>=3.17",
"ping3>=4.0",
"pyarr>=5.2",
"qbittorrent-api>=2024.2",
"requests>=2.31",
"tomlkit>=0.12",
"waitress>=3.0",
"croniter>=2.0",
"certifi>=2024.2",
"black==26.1.0; extra == \"dev\"",
"bump2version==1.0.1; extra == \"dev\"",
"isort==7.0.0; extra == \"dev\"",
"pip-tools==7.5.3; extra == \"dev\"",
"pre-commit==4.5.1; extra == \"dev\"",
"pyinstaller==6.19.0; extra == \"dev\"",
"pyupgrade==3.21.2; extra == \"dev\"",
"twine==6.2.0; extra == \"dev\"",
"ujson==5.11.0; extra == \"dev\"",
"upgrade-pip==0.1.4; extra == \"dev\"",
"ujson==5.11.0; extra == \"fast\"",
"mkdocs>=1.5.3; extra == \"docs\"",
"mkdocs-material>=9.5.0; extra == \"docs\"",
"mkdocs-material-extensions>=1.3.0; extra == \"docs\"",
"mkdocs-git-revision-date-localized-plugin>=1.2.0; extra == \"docs\"",
"mkdocs-minify-plugin>=0.7.0; extra == \"docs\"",
"mkdocs-redirects>=1.2.0; extra == \"docs\"",
"mkdocs-include-markdown-plugin>=6.0.0; extra == \"docs\"",
"pymdown-extensions>=10.0.0; extra == \"docs\"",
"markdown-include>=0.8.0; extra == \"docs\"",
"black==26.1.0; extra == \"all\"",
"bump2version==1.0.1; extra == \"all\"",
"isort==7.0.0; extra == \"all\"",
"pip-tools==7.5.3; extra == \"all\"",
"pre-commit==4.5.1; extra == \"all\"",
"pyinstaller==6.19.0; extra == \"all\"",
"pyupgrade==3.21.2; extra == \"all\"",
"twine==6.2.0; extra == \"all\"",
"ujson==5.11.0; extra == \"all\"",
"upgrade-pip==0.1.4; extra == \"all\"",
"ujson==5.11.0; extra == \"all\"",
"mkdocs>=1.5.3; extra == \"all\"",
"mkdocs-material>=9.5.0; extra == \"all\"",
"mkdocs-material-extensions>=1.3.0; extra == \"all\"",
"mkdocs-git-revision-date-localized-plugin>=1.2.0; extra == \"all\"",
"mkdocs-minify-plugin>=0.7.0; extra == \"all\"",
"mkdocs-redirects>=1.2.0; extra == \"all\"",
"mkdocs-include-markdown-plugin>=6.0.0; extra == \"all\"",
"pymdown-extensions>=10.0.0; extra == \"all\"",
"markdown-include>=0.8.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/Feramance/qBitrr",
"Documentation, https://feramance.github.io/qBitrr/",
"Issue Tracker, https://github.com/Feramance/qBitrr/issues",
"Source Code, https://github.com/Feramance/qBitrr",
"Changelog, https://github.com/Feramance/qBitrr/blob/master/CHANGELOG.md",
"Docker Hub, https://hub.docker.com/r/feramance/qbitrr",
"PyPI, https://pypi.org/project/qBitrr2/",
"Systemd Guide, https://feramance.github.io/qBitrr/getting-started/installation/systemd/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:04:45.944613 | qbitrr2-5.9.1.tar.gz | 2,432,859 | 89/83/f28b93da5074cf7832a029a7cd8c4197e1e58ac03e1b203e2c5dbfde232c/qbitrr2-5.9.1.tar.gz | source | sdist | null | false | 0440b743cb3afe43c2bd44c9fd7f8e73 | 1149921efa8d74579bb35d84931b88b4fb10c7e940ec2dc490dbaad638ef185f | 8983f28b93da5074cf7832a029a7cd8c4197e1e58ac03e1b203e2c5dbfde232c | null | [
"LICENSE"
] | 0 |
2.4 | isso | 0.13.2 | lightweight Disqus alternative | # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
## Features
- **Comments written in Markdown**
Users can edit or delete own comments (within 15 minutes by default).
Comments in moderation queue are not publicly visible before activation.
- **SQLite backend**
*Because comments are not Big Data.*
- **Disqus & WordPress Import**
You can migrate your Disqus/WordPress comments without any hassle.
- **Configurable JS client**
Embed a single JS file, 65kB (20kB gzipped) and you are done.
See **[isso-comments.de](https://isso-comments.de/)** for a **live demo**, more
details and [documentation](https://isso-comments.de/docs/).
## Screenshot

## Getting started
### Requirements
- Python 3.7+ (+ devel headers)
- SQLite 3.3.8 or later
- a working C compiler
Install Isso from [PyPi](https://pypi.python.org/pypi/isso/):
```console
pip install isso
```
Then, follow the [Quickstart](https://isso-comments.de/docs/guides/quickstart/) guide.
If you're stuck, follow the [Install guide](https://isso-comments.de/docs/reference/installation/),
see [Troubleshooting](https://isso-comments.de/docs/guides/troubleshooting/) and browse
the [the full documentation](https://isso-comments.de/docs/).
## Docker
A Docker image with the latest stable release is provided at
`ghcr.io/isso-comments/isso:latest`. See
[Using Docker](https://isso-comments.de/docs/reference/installation/#using-docker).
## Contributing
- Pull requests are very much welcome! These might be
[good first issues](https://github.com/posativ/isso/labels/good-first-issue)
- See [Ways to Contribute](https://isso-comments.de/docs/contributing/)
- [Translate](https://isso-comments.de/docs/contributing/#translations)
### Development
<!-- TODO also mention "Development & Testing" section once new docs uploaded -->
Refer to the docs for
[Installing from Source](https://isso-comments.de/docs/reference/installation/#install-from-source).
### Help
- Join `#isso` via [Matrix](https://matrix.to/#/#isso:libera.chat) or via IRC on
[Libera.Chat](https://libera.chat/)
- Ask a question on [GitHub Discussions](https://github.com/posativ/isso/discussions).
## License
MIT, see [LICENSE](LICENSE).
| text/markdown | Martin Zimmermann | info@posativ.org | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Topic :: Internet",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://github.com/posativ/isso/ | null | >=3.7 | [] | [] | [] | [
"itsdangerous",
"Jinja2",
"misaka<3.0,>=2.0",
"html5lib",
"werkzeug>=1.0",
"bleach",
"Sphinx; extra == \"doc\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T23:04:39.304200 | isso-0.13.2.tar.gz | 419,648 | 2f/8c/7ab1419a53c165acee78cd2d9e9bd8cd669ce08d8e910abe651512ec60fd/isso-0.13.2.tar.gz | source | sdist | null | false | 25165ca2ea5877060b877a1b44684a9c | 1aa94eafa14ff9e82982aaa445c2e35c677107f8be8fefd6adbe486b99d4c928 | 2f8c7ab1419a53c165acee78cd2d9e9bd8cd669ce08d8e910abe651512ec60fd | null | [
"LICENSE"
] | 162 |
2.4 | docsig | 0.79.0 | Check signature params for proper documentation | |
.. image:: https://raw.githubusercontent.com/jshwi/docsig/master/docs/static/docsig.svg
:alt: docsig logo
:width: 50%
:align: center
|
|License| |PyPI| |CI| |CodeQL| |pre-commit.ci status| |codecov.io| |readthedocs.org| |python3.10| |Black| |isort| |pylint| |Security Status| |Known Vulnerabilities|
.. |License| image:: https://img.shields.io/badge/License-MIT-yellow.svg
:target: https://opensource.org/licenses/MIT
:alt: License
.. |PyPI| image:: https://img.shields.io/pypi/v/docsig
:target: https://pypi.org/project/docsig/
:alt: PyPI
.. |CI| image:: https://github.com/jshwi/docsig/actions/workflows/build.yaml/badge.svg
:target: https://github.com/jshwi/docsig/actions/workflows/build.yaml
:alt: CI
.. |CodeQL| image:: https://github.com/jshwi/docsig/actions/workflows/codeql-analysis.yml/badge.svg
:target: https://github.com/jshwi/docsig/actions/workflows/codeql-analysis.yml
:alt: CodeQL
.. |pre-commit.ci status| image:: https://results.pre-commit.ci/badge/github/jshwi/docsig/master.svg
:target: https://results.pre-commit.ci/latest/github/jshwi/docsig/master
:alt: pre-commit.ci status
.. |codecov.io| image:: https://codecov.io/gh/jshwi/docsig/branch/master/graph/badge.svg
:target: https://codecov.io/gh/jshwi/docsig
:alt: codecov.io
.. |readthedocs.org| image:: https://readthedocs.org/projects/docsig/badge/?version=latest
:target: https://docsig.io/en/latest/?badge=latest
:alt: readthedocs.org
.. |python3.10| image:: https://img.shields.io/badge/python-3.10-blue.svg
:target: https://www.python.org/downloads/release/python-390
:alt: python3.10
.. |Black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Black
.. |isort| image:: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336
:target: https://pycqa.github.io/isort/
:alt: isort
.. |pylint| image:: https://img.shields.io/badge/linting-pylint-yellowgreen
:target: https://github.com/PyCQA/pylint
:alt: pylint
.. |Security Status| image:: https://img.shields.io/badge/security-bandit-yellow.svg
:target: https://github.com/PyCQA/bandit
:alt: Security Status
.. |Known Vulnerabilities| image:: https://snyk.io/test/github/jshwi/docsig/badge.svg
:target: https://snyk.io/test/github/jshwi/docsig/badge.svg
:alt: Known Vulnerabilities
Check Python signature params for proper documentation
-------------------------------------------------------
**docsig** is a Python documentation linter that ensures function and method
signature parameters are properly documented in docstrings. It supports multiple
docstring formats including reStructuredText (``Sphinx``), ``NumPy``, and
``Google`` styles.
Maintain accurate and up-to-date Python documentation by automatically checking
that all parameters in function signatures match their docstring documentation.
Use docsig as a standalone tool, integrate it with ``flake8``, or add it as a
``pre-commit`` hook to catch documentation issues before they reach your
repository.
Contributing
------------
If you are interested in contributing to ``docsig``, please read about contributing `here <https://docsig.io/en/latest/development/contributing.html>`__
Installation
------------
.. code-block:: console
$ pip install docsig
Usage
-----
Commandline
***********
.. code-block:: console
usage: docsig [-h] [-V] [-l] [-n] [-v] [--check-class | --check-class-constructor]
[--check-dunders] [--check-nested] [--check-overridden]
[--check-property-returns] [--check-protected]
[--check-protected-class-methods] [--ignore-args] [--ignore-kwargs]
[--ignore-no-params] [--ignore-typechecker] [-d LIST] [-t LIST] [-e PATTERN]
[-E PATH [PATH ...]] [-I] [-s STR]
[path [path ...]]
Check signature params for proper documentation
positional arguments:
path directories or files to check
optional arguments:
-h, --help show this help message and exit
-V, --version show program's version number and exit
-l, --list-checks display a list of all checks and their messages
-n, --no-ansi disable ansi output
-v, --verbose increase output verbosity
--check-class check class docstrings
--check-class-constructor
check __init__ methods
--check-dunders check dunder methods
--check-nested check nested functions and classes
--check-overridden check overridden methods
--check-property-returns
check property return values
--check-protected check protected functions and classes
--check-protected-class-methods
check public methods belonging to protected classes
--ignore-args ignore args prefixed with an asterisk
--ignore-kwargs ignore kwargs prefixed with two asterisks
--ignore-no-params ignore docstrings where parameters are not documented
--ignore-typechecker ignore checking return values
-d LIST, --disable LIST
comma separated list of rules to disable
-t LIST, --target LIST
comma separated list of rules to target
-e PATTERN, --exclude PATTERN
regular expression of files or dirs to exclude from checks
-E PATH [PATH ...], --excludes PATH [PATH ...]
path glob patterns to exclude from checks
-I, --include-ignored
check files even if they match a gitignore pattern
-s STR, --string STR string to parse instead of files
Options can also be configured with the pyproject.toml file
.. code-block:: toml
[tool.docsig]
check-dunders = false
check-overridden = false
check-protected = false
disable = [
"SIG101",
"SIG102",
"SIG402",
]
target = [
"SIG202",
"SIG203",
"SIG201",
]
Flake8
******
``docsig`` can also be used as a ``flake8`` plugin. Install ``flake8`` and
ensure your installation has registered `docsig`
.. code-block:: console
$ flake8 --version
7.3.0 (docsig: 0.79.0, mccabe: 0.7.0, pycodestyle: 2.14.0, pyflakes: 3.4.0) CPython 3.10.19 on Darwin
And now use `flake8` to lint your files
.. code-block:: console
$ flake8 example.py
example.py:1:1: SIG202 includes parameters that do not exist (params-do-not-exist) 'function'
With ``flake8`` the pyproject.toml config will still be the base config, though the
`ini files <https://flake8.pycqa.org/en/latest/user/configuration.html#configuration-locations>`_ ``flake8`` gets it config from will override the pyproject.toml config.
For ``flake8`` all args and config options are prefixed with ``sig`` to
avoid any potential conflicts with other plugins
.. code-block:: ini
[flake8]
sig-check-dunders = True
sig-check-overridden = True
sig-check-protected = True
..
end flake8
API
***
.. code-block:: python
>>> from docsig import docsig
.. code-block:: python
>>> string = '''
... def function(a, b, c) -> None:
... """Docstring summary.
...
... :param a: Description of a.
... :param b: Description of b.
... :param c: Description of c.
... """
... '''
>>> docsig(string=string, no_ansi=True)
0
.. code-block:: python
>>> string = '''
... def function(a, b) -> None:
... """Docstring summary.
...
... :param a: Description of a.
... :param b: Description of b.
... :param c: Description of c.
... """
... '''
>>> docsig(string=string, no_ansi=True)
2 in function
SIG202: includes parameters that do not exist (params-do-not-exist)
1
A full list of checks can be found `here <https://docsig.io/en/latest/usage/messages.html>`__
Message Control
***************
`Documentation on message control <https://docsig.io/en/latest/usage/message-control.html>`_
Classes
*******
`Documenting classes <https://docsig.io/en/latest/usage/configuration.html#classes>`_
pre-commit
**********
``docsig`` can be used as a `pre-commit <https://pre-commit.com>`_ hook
It can be added to your .pre-commit-config.yaml as follows:
Standalone
.. code-block:: yaml
repos:
- repo: https://github.com/jshwi/docsig
rev: v0.79.0
hooks:
- id: docsig
args:
- "--check-class"
- "--check-dunders"
- "--check-overridden"
- "--check-protected"
or integrated with ``flake8``
.. code-block:: yaml
repos:
- repo: https://github.com/PyCQA/flake8
rev: "7.1.0"
hooks:
- id: flake8
additional_dependencies:
- docsig==0.79.0
args:
- "--sig-check-class"
- "--sig-check-dunders"
- "--sig-check-overridden"
- "--sig-check-protected"
| text/x-rst | jshwi | stephen@jshwisolutions.com | jshwi | stephen@jshwisolutions.com | MIT | check, docs, docstring, params, signature | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://docsig.io | null | <4.0,>=3.10 | [] | [] | [] | [
"Sphinx<9,>=7",
"astroid<5.0.0,>=4.0.2",
"pathspec<1.1.0,>=0.12.1",
"tomli<3.0.0,>=2.0.1",
"wcmatch<11.0.0,>=8.5.2"
] | [] | [] | [] | [
"Homepage, https://docsig.io",
"Repository, https://github.com/jshwi/docsig",
"Documentation, https://docsig.io/en/latest"
] | poetry/2.2.1 CPython/3.10.19 Darwin/25.2.0 | 2026-02-20T23:04:37.669304 | docsig-0.79.0.tar.gz | 27,831 | 5f/1a/1da374e8e131500185d1c5324d19adcd952388508ba172b50361ce768216/docsig-0.79.0.tar.gz | source | sdist | null | false | e27306a33d6f9d56f5df725577285e46 | be08f0ef485e270255d9921443d7e5d1842d8784e8ba7557e2a29412410d4282 | 5f1a1da374e8e131500185d1c5324d19adcd952388508ba172b50361ce768216 | null | [] | 469 |
2.4 | httpstate | 0.0.9 | HTTP State, httpstate.com | httpstate.com
| text/markdown | null | "Alex Morales, HTTP State" <alex@httpstate.com> | null | null | null | httpstate | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"websockets>=16.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T23:04:35.682083 | httpstate-0.0.9.tar.gz | 2,046 | 79/d4/f669e8c0546f5960bd870e34d44adc1cb09d11a7808ca997b2386dfa4f35/httpstate-0.0.9.tar.gz | source | sdist | null | false | 5a3a38254eb2ef1998a3fe54d3ede901 | 6ffe3c54de03f5fa9dab5deae377cbcfb06a63a41636817a7ee4c015f1540a43 | 79d4f669e8c0546f5960bd870e34d44adc1cb09d11a7808ca997b2386dfa4f35 | AGPL-3.0 | [
"LICENSE"
] | 216 |
2.4 | amgi-paho-mqtt | 0.36.0 | AMGI MQTT Server | # amgi-paho-mqtt
amgi-paho-mqtt is an [AMGI](https://amgi.readthedocs.io/en/latest/) compatible server to run AMGI applications against
[MQTT](https://mqtt.org/).
## Installation
```
pip install amgi-paho-mqtt==0.36.0
```
## Example
This example uses [AsyncFast](https://pypi.org/project/asyncfast/):
```python
from dataclasses import dataclass
from amgi_paho_mqtt import run
from asyncfast import AsyncFast
app = AsyncFast()
@dataclass
class Order:
item_ids: list[str]
@app.channel("order-topic")
async def order_topic(order: Order) -> None:
# Makes an order
...
if __name__ == "__main__":
run(app, "order-topic")
```
Or the application could be run via the commandline:
```commandline
asyncfast run amgi-paho-mqtt main:app order-topic
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"amgi-common==0.36.0",
"amgi-types==0.36.0",
"paho-mqtt>=2.1.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:34.728354 | amgi_paho_mqtt-0.36.0.tar.gz | 3,715 | ac/6c/659dd1f3b6cb8138204d072b150dc9d484a571150041aa04f54840c946fc/amgi_paho_mqtt-0.36.0.tar.gz | source | sdist | null | false | a7a9ed26bff1d070c1d9538f1a3611a6 | bf4ddb285f655c3275992269fbf7bace4b7572a2dc1af52aa02d4204e5d21c84 | ac6c659dd1f3b6cb8138204d072b150dc9d484a571150041aa04f54840c946fc | MIT | [
"LICENSE"
] | 200 |
2.4 | amgi-aiokafka | 0.36.0 | AMGI Kafka Server | # amgi-aiokafka
amgi-aiokafka is an [AMGI](https://amgi.readthedocs.io/en/latest/) compatible server to run AMGI applications against
[Kafka](https://kafka.apache.org/).
## Installation
```
pip install amgi-aiokafka==0.36.0
```
## Example
This example uses [AsyncFast](https://pypi.org/project/asyncfast/):
```python
from dataclasses import dataclass
from amgi_aiokafka import run
from asyncfast import AsyncFast
app = AsyncFast()
@dataclass
class Order:
item_ids: list[str]
@app.channel("order-topic")
async def order_topic(order: Order) -> None:
# Makes an order
...
if __name__ == "__main__":
run(app, "order-topic")
```
Or the application could be run via the commandline:
```commandline
asyncfast run amgi-aiokafka main:app order-topic
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: AsyncIO",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiokafka>=0.12",
"amgi-common==0.36.0",
"amgi-types==0.36.0",
"typing-extensions>=4.15.0; python_full_version < \"3.11\""
] | [] | [] | [] | [
"Changelog, https://github.com/asyncfast/amgi/blob/main/CHANGELOG.md",
"Homepage, https://github.com/asyncfast/amgi/tree/main/packages/amgi-aiokafka",
"Issues, https://github.com/asyncfast/amgi/issues/",
"Repository, https://github.com/asyncfast/amgi/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:33.755574 | amgi_aiokafka-0.36.0-py3-none-any.whl | 5,191 | ae/77/6d500c57bfde9f846cea506ef2a067a1f75d8b0ecc6cb9b21841549a6de6/amgi_aiokafka-0.36.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 0414170b709c89ce5b6618ecbee83a81 | 042f2b310812301e2bb9bfd2072eaddfd486b6ccbca9ae6672c9307cc4bbdc56 | ae776d500c57bfde9f846cea506ef2a067a1f75d8b0ecc6cb9b21841549a6de6 | MIT | [
"LICENSE"
] | 205 |
2.4 | amgi-aiobotocore | 0.36.0 | AMGI AWS Services Servers | # amgi-aiobotocore
amgi-aiobotocore is an [AMGI](https://amgi.readthedocs.io/en/latest/) compatible server project, currently supporting
running AMGI applications against [SQS](https://aws.amazon.com/sqs/).
## Installation
```
pip install amgi-aiobotocore==0.36.0
```
## Example
This example uses [AsyncFast](https://pypi.org/project/asyncfast/):
```python
from dataclasses import dataclass
from amgi_aiobotocore.sqs import run
from asyncfast import AsyncFast
app = AsyncFast()
@dataclass
class Order:
item_ids: list[str]
@app.channel("order-queue")
async def order_queue(order: Order) -> None:
# Makes an order
...
if __name__ == "__main__":
run(app, "order-queue")
```
Or the application could be run via the commandline:
```commandline
asyncfast run amgi-aiobotocore-sqs main:app order-queue
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiobotocore>=2.25.0",
"amgi-common==0.36.0",
"amgi-types==0.36.0",
"typing-extensions>=4.15.0; python_full_version < \"3.11\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:33.092382 | amgi_aiobotocore-0.36.0-py3-none-any.whl | 6,178 | 58/42/698c9398555446e28947b39c843bdf4667d17f767879aa129e31fe6c6179/amgi_aiobotocore-0.36.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 9a7ca2bf1063cd8fbbc9bfe514583ea2 | 71decf5fd3a02c59e81c0501b0564da6e38822fe340a21ca07547c6fd30517f6 | 5842698c9398555446e28947b39c843bdf4667d17f767879aa129e31fe6c6179 | MIT | [
"LICENSE"
] | 199 |
2.4 | amgi-sqs-event-source-mapping | 0.36.0 | AMGI SQS event source mapping handler | # amgi-sqs-event-source-mapping
amgi-sqs-event-source-mapping is an adaptor for [AMGI](https://amgi.readthedocs.io/en/latest/) applications to run in an
SQS event source mapped Lambda.
## Installation
```
pip install amgi-sqs-event-source-mapping==0.36.0
```
## Example
This example uses [AsyncFast](https://pypi.org/project/asyncfast/):
```python
from dataclasses import dataclass
from amgi_sqs_event_source_mapping import SqsEventSourceMappingHandler
from asyncfast import AsyncFast
app = AsyncFast()
@dataclass
class Order:
item_ids: list[str]
@app.channel("order-queue")
async def order_queue(order: Order) -> None:
# Makes an order
...
handler = SqsEventSourceMappingHandler(app)
```
## What it does
- Converts SQS batch events into AMGI `message.receive` events
- Uses the SQS queue name as the AMGI message address
- Supports partial batch failures so only failed messages are retried
- Sends outbound messages back to SQS efficiently using batching
- Optionally manages application startup and shutdown via AMGI lifespan
- Verifies message integrity using the SQS-provided MD5 checksum and retries corrupted messages
## Record handling
- Record bodies are passed to your app as bytes
- SQS record attributes become AMGI headers
- Records are only acknowledged when your app emits `message.ack`
- Records that are not acknowledged are treated as failures and will be retried
- Corrupted messages are detected automatically and retried
## Lifespan
Lifespan support is enabled by default.
- Startup runs once per Lambda execution environment
- Shutdown is attempted when the environment is terminated
Shutdown handling relies on `signal.SIGTERM`, which is supported by Python 3.12 and later Lambda runtimes.
To use fully stateless, per-invocation behavior, disable lifespan:
```python
handler = SqsEventSourceMappingHandler(app, lifespan=False)
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"amgi-common==0.36.0",
"amgi-types==0.36.0",
"typing-extensions>=4.15.0; python_full_version < \"3.11\"",
"boto3>=1.40.70; extra == \"boto3\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:32.551176 | amgi_sqs_event_source_mapping-0.36.0.tar.gz | 5,676 | fb/e3/1d05b37c85e8563bb74e53488ec8bfe8c8f8bb170d29b285d4963d2128f1/amgi_sqs_event_source_mapping-0.36.0.tar.gz | source | sdist | null | false | ae62a22f8ccf52a6fed9a4bf6e9a7de3 | 5606315ba416db328234effec42d7a58b5af7a8625aa427cdda01f8653c7191a | fbe31d05b37c85e8563bb74e53488ec8bfe8c8f8bb170d29b285d4963d2128f1 | MIT | [
"LICENSE"
] | 197 |
2.4 | asyncfast | 0.36.0 | Add your description here | # AsyncFast
AsyncFast is a modern, event framework for building APIs with Python based on standard Python type hints.
## Installation
```
pip install asyncfast==0.36.0
```
## Example
Create a file `main.py` with:
```python
from asyncfast import AsyncFast
from pydantic import BaseModel
app = AsyncFast()
class Payload(BaseModel):
id: str
name: str
@app.channel("topic")
async def on_topic(payload: Payload) -> None:
print(payload)
```
### Running
To run the app install an AMGI server (at the moment there is only `amgi-aiokafka`) then run:
```
$ asyncfast run amgi-aiokafka main:app topic
```
### AsyncAPI Generation
```
$ asyncfast asyncapi main:app
{
"asyncapi": "3.0.0",
"info": {
"title": "AsyncFast",
"version": "0.1.0"
},
"channels": {
"OnTopic": {
"address": "topic",
"messages": {
"OnTopicMessage": {
"$ref": "#/components/messages/OnTopicMessage"
}
}
}
},
"operations": {
"receiveOnTopic": {
"action": "receive",
"channel": {
"$ref": "#/channels/OnTopic"
}
}
},
"components": {
"messages": {
"OnTopicMessage": {
"payload": {
"$ref": "#/components/schemas/Payload"
}
}
},
"schemas": {
"Payload": {
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
}
},
"required": [
"id",
"name"
],
"title": "Payload",
"type": "object"
}
}
}
}
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: AsyncIO",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"amgi-types==0.36.0",
"pydantic>=2.0.0",
"asyncfast-cli==0.36.0; extra == \"standard\""
] | [] | [] | [] | [
"Changelog, https://github.com/asyncfast/amgi/blob/main/CHANGELOG.md",
"Documentation, https://asyncfast.readthedocs.io/en/latest/",
"Homepage, https://github.com/asyncfast/amgi/tree/main/packages/asyncfast",
"Issues, https://github.com/asyncfast/amgi/issues/",
"Repository, https://github.com/asyncfast/amgi/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:31.484693 | asyncfast-0.36.0-py3-none-any.whl | 14,635 | 23/f6/6843f91af98a616b0ee2dcdb91c51541d71ad2f17bd7213320f6830784ca/asyncfast-0.36.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 4d19cb2ebbcf5fa85f35e5a79d010adb | 9155291c1ba5242abd266bf820edf0080afd7eb0d3c55c1fabd50e416e008ea6 | 23f66843f91af98a616b0ee2dcdb91c51541d71ad2f17bd7213320f6830784ca | MIT | [
"LICENSE"
] | 202 |
2.4 | optionshawk | 0.0.1 | OptionsHawk - by OptionsHawk | # optionshawk
OptionsHawk - by OptionsHawk.
| text/markdown | optionshawk | null | null | null | null | null | [
"Development Status :: 1 - Planning",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/optionshawk/optionshawk"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-20T23:04:30.578825 | optionshawk-0.0.1.tar.gz | 1,214 | ab/79/fc2bb8df22ba05cf903caf0dffafeac2a6163d81f634a10e0fc846e9955d/optionshawk-0.0.1.tar.gz | source | sdist | null | false | 8de4fff65b3aa032984a1101f4c8204b | 55cd6b3214099904f0a9e018ce49b665b25560ff08e312dedc7801311105f3cc | ab79fc2bb8df22ba05cf903caf0dffafeac2a6163d81f634a10e0fc846e9955d | MIT | [] | 218 |
2.4 | amgi-common | 0.36.0 | Add your description here | # amgi-common
This package includes some useful helpers for writing AMGI servers.
## Installation
```
pip install amgi-common==0.36.0
```
## Constructs
### Lifespan
This class handles the AMGI lifespan protocol.
```python
from amgi_common import Lifespan
async def main_loop(app, state):
"""Handle event batches"""
async def serve(app):
async with Lifespan(app) as state:
# handle app calls
await main_loop(app, state)
```
## Stoppable
This class should help with graceful shutdowns. Whenever you have a call to something with a timeout, it cancels that task. .
For example, if you have a client that has a method `fetch_messages`, which has a timeout you could loop like so:
```python
from amgi_common import Stoppable
class Server:
def __init__(self, app):
self._app = app
self._stoppable = Stoppable()
self._client = Client()
self._running = True
async def main_loop(self, app, state):
while self._running:
messages = await self._client.fetch_messages(timeout=10_000)
# Handle messages
def stop(self):
self._running = False
```
There are several ways to deal with this:
1. The above example, where on stop you will have to wait for the timeout plus the time to handle messages
1. Run the main loop in a task, and cancel it on stop
Both have their own problems. In the first case, you could be waiting a long time. In the second case, when you receive
messages, you should probably process them before shutting down.
The class `Stoppable` is there to help you. It will return the results of the callable iterably, but if the stoppable has
been told to stop, it will cancel any tasks it has running in the background, and stop any current iterations.
```python
from amgi_common import Stoppable
class Server:
def __init__(self, app):
self._app = app
self._stoppable = Stoppable()
self._client = Client()
async def main_loop(self, app, state):
async for messages in self._stoppable.call(
self._client.fetch_messages, timeout=10_000
):
# Handle messages
pass
def stop(self):
self._stoppable.stop()
```
## OperationBatcher
This class can simplify batch operations, for example, publishing multiple messages simultaneous. The implementation
allows you to enqueue an item to run the operation on, and the operation will be scheduled in batches:
```python
from amgi_common import OperationBatcher
async def batch_operation(items):
# Do a batch operation to get the batch result
return [item for item in batch_result]
operation_batcher = OperationBatcher(batch_operation)
# You can enqueue an item, it will be processed in the background automatically
result = await operation_batcher.enqueue({"an": "item"})
```
The batch operation should return an iterable of results, or exceptions. This allows for per item errors. The batch
sizes can also be limited, which is usefully when APIs have a hard limit.
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"amgi-types==0.36.0",
"typing-extensions>=4.15.0; python_full_version < \"3.13\""
] | [] | [] | [] | [
"Changelog, https://github.com/asyncfast/amgi/blob/main/CHANGELOG.md",
"Homepage, https://github.com/asyncfast/amgi/tree/main/packages/amgi-common",
"Issues, https://github.com/asyncfast/amgi/issues/",
"Repository, https://github.com/asyncfast/amgi/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:29.504164 | amgi_common-0.36.0-py3-none-any.whl | 6,283 | f5/e6/365a3722a28207d469a2d481dd21aac14890a44cb45f222f439af054a45e/amgi_common-0.36.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b384e4787d12f08dbc3ae2293bb00919 | 6ef333b7ac23b04196b2e262898e43c2bc0f2c1d403062834eee8ae2c56badf9 | f5e6365a3722a28207d469a2d481dd21aac14890a44cb45f222f439af054a45e | MIT | [
"LICENSE"
] | 267 |
2.4 | amgi-types | 0.36.0 | AMGI Types | # amgi-types
This package contains the types for [AMGI](https://amgi.readthedocs.io/en/latest/) applications.
## Installation
```
pip install amgi-types==0.36.0
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: AsyncIO",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typing-extensions>=4.15.0; python_full_version < \"3.11\""
] | [] | [] | [] | [
"Changelog, https://github.com/asyncfast/amgi/blob/main/CHANGELOG.md",
"Homepage, https://github.com/asyncfast/amgi/tree/main/packages/amgi-types",
"Issues, https://github.com/asyncfast/amgi/issues/",
"Repository, https://github.com/asyncfast/amgi/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:28.002931 | amgi_types-0.36.0-py3-none-any.whl | 4,034 | c9/7d/9ecb07f6f0eb23ff7237532ca834d19161048f433d10f9861e88ec607dc0/amgi_types-0.36.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 7b4ee36dac08840a6db191062602fe69 | 0b22b25641f9e09743a2c4f182eb70d08641d61d3243e22e554aad7ae2dd9fb0 | c97d9ecb07f6f0eb23ff7237532ca834d19161048f433d10f9861e88ec607dc0 | MIT | [
"LICENSE"
] | 306 |
2.4 | amgi-redis | 0.36.0 | AMGI Redis Server | # amgi-redis
amgi-redis is an [AMGI](https://amgi.readthedocs.io/en/latest/) compatible server to run AMGI applications against
[Redis](https://redis.io/).
## Installation
```
pip install amgi-redis==0.36.0
```
## Example
This example uses [AsyncFast](https://pypi.org/project/asyncfast/):
```python
from dataclasses import dataclass
from amgi_redis import run
from asyncfast import AsyncFast
app = AsyncFast()
@dataclass
class Order:
item_ids: list[str]
@app.channel("order-channel")
async def order_channel(order: Order) -> None:
# Makes an order
...
if __name__ == "__main__":
run(app, "order-channel")
```
Or the application could be run via the commandline:
```commandline
asyncfast run amgi-redis main:app order-channel
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2025 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"amgi-common==0.36.0",
"amgi-types==0.36.0",
"redis>=7.0.1",
"typing-extensions>=4.15.0; python_full_version < \"3.11\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:26.918470 | amgi_redis-0.36.0-py3-none-any.whl | 4,481 | a0/6f/a45a98c9df465f76cc0fd8384ae836f8b13648bcf0e1fd2bbe52b7d73aa2/amgi_redis-0.36.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 76741b612b74967f51f13212dff60faf | 66bffd3c2c14a09728f52e2cc80de7ca8fb56a40b379df0e65fafa73947a8ba6 | a06fa45a98c9df465f76cc0fd8384ae836f8b13648bcf0e1fd2bbe52b7d73aa2 | MIT | [
"LICENSE"
] | 197 |
2.4 | amgi-kafka-event-source-mapping | 0.36.0 | Kafka event source mapping adaptor for AMGI applications | # amgi-kafka-event-source-mapping
amgi-kafka-event-source-mapping is an adaptor for [AMGI](https://amgi.readthedocs.io/en/latest/) applications to run in
a Kafka event source mapped environment.
## Installation
```bash
pip install amgi-kafka-event-source-mapping==0.36.0
```
## Example
This example uses [AsyncFast](https://pypi.org/project/asyncfast/):
```python
from dataclasses import dataclass
from amgi_kafka_event_source_mapping import KafkaEventSourceMappingHandler
from asyncfast import AsyncFast
app = AsyncFast()
@dataclass
class Order:
item_ids: list[str]
@app.channel("orders")
async def orders(order: Order) -> None:
# Makes an order
...
handler = KafkaEventSourceMappingHandler(app)
```
## What it does
- Converts Kafka batch events into AMGI `message.receive` events
- Uses the Kafka topic name as the AMGI message address
- Supports partial batch failures so only failed records are reported
- Sends outbound messages to Kafka using an async producer
- Outbound messages are sent via the same Kafka broker (bootstrap servers) that the records were received from
- Optionally manages application startup and shutdown via AMGI lifespan
## Record handling
- Record values and keys are passed to your app as bytes
- Kafka record headers become AMGI headers
- Records are only acknowledged when your app emits `message.ack`
- Records that emit `message.nack` or are not acknowledged are treated as failures
## Nack handling
By default, records that are negatively acknowledged, or not acknowledged are logged:
```python
handler = KafkaEventSourceMappingHandler(app, on_nack="log")
```
To fail the invocation when any record is nacked, configure the handler to raise an error instead:
```python
handler = KafkaEventSourceMappingHandler(app, on_nack="error")
```
This is useful when running in environments where a failed invocation should trigger a retry, or alert.
When using this mode, handlers **must be idempotent**. Kafka event source mappings may re-deliver records after
failures, restarts, or rebalances, and your application logic should be safe to execute more than once for the same
record.
## Lifespan
Lifespan support is enabled by default.
- Startup runs once per Lambda execution environment
- Shutdown is attempted when the environment is terminated
Shutdown handling relies on `signal.SIGTERM`, which is supported by Python 3.12 and later Lambda runtimes.
To use fully stateless, per-invocation behavior, disable lifespan:
```python
handler = KafkaEventSourceMappingHandler(app, lifespan=False)
```
## Contact
For questions or suggestions, please contact [jack.burridge@mail.com](mailto:jack.burridge@mail.com).
## License
Copyright 2026 AMGI
| text/markdown | jack.burridge | jack.burridge <jack.burridge@mail.com> | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"amgi-aiokafka==0.36.0",
"typing-extensions>=4.15.0; python_full_version < \"3.11\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T23:04:25.463543 | amgi_kafka_event_source_mapping-0.36.0-py3-none-any.whl | 6,595 | cf/7d/4963dd052b87e760b28b32f3aa825e95657feb41fceaba3cd622c08a323a/amgi_kafka_event_source_mapping-0.36.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 58a2cfff5cf16622f235389f05b42be1 | c46a4eb559ba840b2898fb79049c3e57895fb904e697958c6efd39d836c2c85d | cf7d4963dd052b87e760b28b32f3aa825e95657feb41fceaba3cd622c08a323a | MIT | [
"LICENSE"
] | 192 |
2.4 | appium-pytest-kit | 0.1.6 | Reusable Appium 2.x + pytest mobile test framework | # appium-pytest-kit
[](https://buymeacoffee.com/gianlucasoare)
`appium-pytest-kit` is a reusable Appium 2.x + pytest plugin library for Python 3.11+. Install it once, generate a `.env`, and start writing mobile tests with zero boilerplate.
```bash
pip install appium-pytest-kit
appium-pytest-kit-init --framework --root my-project
```
**Full documentation:** [DOCUMENTATION.md](./DOCUMENTATION.md) · [docs/](./docs/)
---
## What it gives you
| | |
|---|---|
| **Zero-config fixtures** | `driver`, `waiter`, `actions`, `page_factory` — just add to your test function |
| **Auto failure artifacts** | Screenshot + page source + device logs captured automatically on failure |
| **3-tier device resolution** | explicit settings → named profile → auto-detect via adb/xcrun |
| **Session modes** | `clean` (per-test) · `clean-session` (shared) · `debug` (keep alive) |
| **Retry support** | Session reused across retry attempts — no restart cost between tries |
| **Fail-fast** | `--app-fail-fast` stops the suite after retries are exhausted, not before |
| **Explicit waits** | `WaitTimeoutError` with structured `.locator` and `.timeout` context |
| **High-level actions** | tap, type, swipe, scroll, assertions — all wait-safe |
| **Page + flow objects** | Scaffold generates `pages/` and `flows/` with base classes ready to extend |
| **Extension hooks** | Override settings, inject capabilities, run code after driver creation |
| **CLI scaffold** | One command to generate a full project structure |
---
## Dependencies
All required dependencies are **installed automatically** with `pip install appium-pytest-kit`. You do not need a separate `requirements.txt`.
| Auto-installed | Version | Purpose |
|---|---|---|
| `Appium-Python-Client` | ≥ 4.0.0 | Appium WebDriver client |
| `pydantic-settings` | ≥ 2.3.0 | `.env` and env var loading |
| `pytest` | ≥ 8.2.0 | Test runner integration |
**Optional extras** (install only what you need):
```bash
pip install "appium-pytest-kit[yaml]" # device profile YAML support
pip install "appium-pytest-kit[allure]" # Allure report attachments
pip install "appium-pytest-kit[retry]" # pytest-retry for flaky test handling
pip install "appium-pytest-kit[all]" # all optional extras
```
| Extra | Installs | When you need it |
|---|---|---|
| `[yaml]` | PyYAML ≥ 6.0 | Named device profiles in `data/devices.yaml` |
| `[allure]` | allure-pytest ≥ 2.13.0 | Screenshots + page source in Allure reports |
| `[retry]` | pytest-retry ≥ 0.6.0 | Retry flaky tests while reusing the same Appium session |
---
## Installation
### From PyPI
```bash
pip install appium-pytest-kit
```
### From GitHub
```bash
pip install git+https://github.com/gianlucasoare/appium-pytest-kit.git
```
### Local clone (development)
```bash
git clone https://github.com/gianlucasoare/appium-pytest-kit.git
cd appium-pytest-kit
pip install -e ".[dev]"
```
---
## Quickstart: test an app in 5 minutes
### 1 — Scaffold the project
```bash
pip install appium-pytest-kit
appium-pytest-kit-init --framework --root my-project
cd my-project
```
### 2 — Edit `.env` with your device and app
```env
APP_PLATFORM=android
APP_APPIUM_URL=http://127.0.0.1:4723
APP_APP_PACKAGE=com.example.myapp
APP_APP_ACTIVITY=.MainActivity
APP_DEVICE_NAME=emulator-5554
APP_PLATFORM_VERSION=14
```
### 3 — Start Appium and your emulator, then run
```bash
appium &
pytest tests/android/test_smoke.py -v
```
### 4 — Write a real test
```python
# tests/android/test_login.py
import pytest
from appium.webdriver.common.appiumby import AppiumBy
USERNAME = (AppiumBy.ID, "com.example.app:id/username")
PASSWORD = (AppiumBy.ID, "com.example.app:id/password")
LOGIN_BTN = (AppiumBy.ACCESSIBILITY_ID, "login_button")
WELCOME = (AppiumBy.ID, "com.example.app:id/welcome_text")
@pytest.mark.integration
def test_login(actions):
actions.type_text(USERNAME, "testuser")
actions.type_text(PASSWORD, "secret")
actions.tap(LOGIN_BTN)
assert actions.text(WELCOME) == "Welcome, testuser"
```
```bash
pytest -m integration -v
```
---
## Built-in fixtures
| Fixture | Scope | Description |
|---|---|---|
| `settings` | session | Resolved `AppiumPytestKitSettings` |
| `device_info` | session | Resolved device (name, UDID, version) |
| `appium_server` | session | Server URL, optional lifecycle management |
| `driver` | function | Live `appium.webdriver.Remote`, auto-quit |
| `waiter` | function | Explicit waits with `WaitTimeoutError` |
| `actions` | function | High-level UI helpers |
| `page_factory` | function | Factory for page objects: `page_factory(LoginPage)` |
---
## Page objects with `page_factory`
```python
# pages/login_page.py
from appium.webdriver.common.appiumby import AppiumBy
from appium_pytest_kit import Locator
from pages.base_page import BasePage
class LoginPage(BasePage):
_USERNAME: Locator = (AppiumBy.ID, "com.example.app:id/username")
_LOGIN_BTN: Locator = (AppiumBy.ACCESSIBILITY_ID, "login_button")
def log_in(self, username: str, password: str) -> None:
self._actions.type_text(self._USERNAME, username)
self._actions.tap(self._LOGIN_BTN)
def is_loaded(self) -> bool:
return self._actions.is_displayed(self._USERNAME)
```
```python
# tests/test_login.py
def test_login_success(page_factory):
login = page_factory(LoginPage)
login.wait_until_loaded()
login.log_in("testuser", "secret")
# ...
```
See [docs/page-objects.md](docs/page-objects.md) for the full guide.
---
## Session modes
```env
APP_SESSION_MODE=clean # fresh driver per test (default)
APP_SESSION_MODE=clean-session # one shared driver for the whole run (faster)
APP_SESSION_MODE=debug # shared + no restart on failure (local debugging)
```
---
## Retry support
Install the extra, then use `@pytest.mark.flaky(...)` and/or the `--retries` CLI flag:
```bash
pip install "appium-pytest-kit[retry]"
```
```python
# Retry this test up to 2 extra times (3 total attempts)
@pytest.mark.flaky(retries=2)
def test_flaky_animation(actions):
actions.tap(START_BTN)
actions.assert_displayed(RESULT_SCREEN)
```
```bash
# Retry every failed test up to 2 extra times, stop if something is truly broken
pytest --retries 2 --retry-delay 1 --app-fail-fast
```
**How it works:** during retries the same Appium session is reused — no restart between attempts. Once the test passes or all retries are exhausted, the session is quit and the next test starts fresh.
See [docs/cli-reference.md](docs/cli-reference.md) for the full retry flag reference.
---
## Device resolution (3-tier)
1. **Explicit** — `APP_DEVICE_NAME` / `APP_UDID` in `.env` or CLI
2. **Profile** — `APP_DEVICE_PROFILE=pixel7` from `data/devices.yaml`
3. **Auto-detect** — `adb devices` (Android) or `xcrun simctl` / `xctrace` (iOS)
```bash
pytest --app-device-profile pixel7
pytest --app-udid emulator-5554
pytest # auto-detect if nothing set
```
---
## Failure diagnostics
On test failure the framework automatically captures:
- **Screenshot** → `artifacts/screenshots/<test_id>.png`
- **Page source** → `artifacts/pagesource/<test_id>.xml`
- **Device logs** → `artifacts/device_logs/<test_id>.log`
- **Video** (if configured) → `artifacts/videos/<test_id>.mp4`
```env
APP_VIDEO_POLICY=failed # record and save only on failure
APP_VIDEO_POLICY=always # record every test
```
Allure attachments are added automatically when `allure-pytest` is installed.
---
## Configuration
Settings load from `.env` → env vars → CLI flags (highest wins).
```bash
pytest --app-platform ios
pytest --app-device-name "Pixel 7" --app-platform-version 14
pytest --app-appium-url http://192.168.1.10:4723
pytest --app-session-mode clean-session
pytest --app-device-profile pixel7
pytest --app-video-policy failed
pytest --app-override APP_EXPLICIT_WAIT_TIMEOUT=15
pytest --app-capabilities-json '{"autoGrantPermissions": true}'
pytest --app-strict-config
pytest --app-manage-appium-server
pytest --app-reporting-enabled
# Retry support (requires appium-pytest-kit[retry])
pytest --retries 2 --retry-delay 1 # retry all tests up to 2 extra times
pytest --retries 2 --app-fail-fast # stop suite after retries are exhausted
```
See [docs/configuration.md](docs/configuration.md) for all settings.
---
## Extension hooks
```python
# conftest.py
def pytest_appium_pytest_kit_capabilities(capabilities, settings):
"""Add extra capabilities before each driver session."""
if settings.platform == "android":
return {"autoGrantPermissions": True, "language": "en"}
def pytest_appium_pytest_kit_configure_settings(settings):
"""Replace settings at session start."""
return settings.model_copy(update={"explicit_wait_timeout": 20.0})
def pytest_appium_pytest_kit_driver_created(driver, settings):
"""Run setup immediately after each driver is created."""
driver.orientation = "PORTRAIT"
```
---
## Expanded waits
```python
waiter.for_clickable(locator)
waiter.for_invisibility(locator)
waiter.for_text_contains(locator, "partial text")
waiter.for_text_equals(locator, "exact text")
waiter.for_all_visible([loc1, loc2, loc3]) # single timeout for the whole group
waiter.for_all_gone([loc1, loc2])
waiter.for_any_visible([loc1, loc2])
waiter.for_context_contains("WEBVIEW")
waiter.for_android_activity("MainActivity")
```
---
## Expanded actions
```python
# Tap
actions.tap_if_present(locator)
actions.tap_if_present_first_available([l1, l2])
actions.tap_by_coordinates(x, y)
actions.double_tap(locator)
actions.long_press(locator, duration_seconds=2)
# Text
actions.type_if_present(locator, "text")
actions.type_text_slowly(locator, "otp", delay_per_char=0.15)
actions.clear(locator)
# Visibility assertions
actions.is_displayed(locator)
actions.assert_displayed(locator)
actions.is_not_displayed(locator)
actions.assert_not_displayed(locator)
actions.assert_displayed_first_available([l1, l2])
actions.assert_not_displayed_first_available([l1, l2])
# Text assertions
actions.assert_text(locator, "exact text")
actions.assert_text_contains(locator, "partial")
actions.assert_text_not_empty(locator)
# Attribute assertion
actions.assert_attribute(locator, "checked", "true")
# Enabled/disabled state
actions.is_enabled(locator)
actions.assert_enabled(locator)
actions.assert_not_enabled(locator)
# Checked/selected state (checkboxes, toggles)
actions.is_checked(locator)
actions.assert_checked(locator)
actions.assert_not_checked(locator)
# Element count
actions.count(locator) # → int
actions.assert_count(locator, 3)
# Scroll
actions.scroll_down()
actions.scroll_to_element(locator)
# Keyboard
actions.hide_keyboard()
actions.press_keycode(66) # ENTER
# App lifecycle
actions.activate_app("com.example.myapp")
actions.terminate_app("com.example.myapp")
actions.background_app(2)
actions.open_deep_link("myapp://profile", app_id="com.example.myapp")
# Hybrid
actions.switch_to_webview()
actions.switch_to_native()
```
---
## Public API
```python
from appium_pytest_kit import (
AppiumPytestKitSettings,
AppiumPytestKitError,
ConfigurationError, DeviceResolutionError, LaunchValidationError,
WaitTimeoutError, ActionError, DriverCreationError,
DeviceInfo, DriverConfig, MobileActions, Waiter,
Locator, # type alias: tuple[str, str]
build_driver_config, create_driver, load_settings, apply_cli_overrides,
)
```
---
## Fixture lifecycle
```mermaid
flowchart TD
A["pytest start"] --> B["load defaults + .env + env vars"]
B --> C["apply --app-* CLI overrides"]
C --> D["settings fixture (session)"]
D --> E{"APP_MANAGE_APPIUM_SERVER"}
E -->|"true"| F["start local Appium server"]
E -->|"false"| G["use APP_APPIUM_URL"]
F --> H["appium_server fixture (session)"]
G --> H
H --> I{"session_mode"}
I -->|"clean-session / debug"| J["_driver_shared (session)"]
I -->|"clean"| K["driver per test"]
J --> K
K --> L["waiter / actions / page_factory"]
K --> M["test runs"]
M --> N{"failed?"}
N -->|"yes"| O["capture screenshot + page source"]
N --> P["stop video (per policy)"]
O --> P
P --> Q["driver.quit() (clean mode)"]
Q --> R["report summary flush"]
R --> S["server stop (if managed)"]
```
---
## Debug logs
`appium-pytest-kit` logs every action, wait, and session lifecycle event using Python's standard `logging` module. Enable them with a single pytest flag:
```bash
pytest --log-cli-level=INFO # session lifecycle + artifacts
pytest --log-cli-level=DEBUG # full trace (every tap, wait, scroll)
```
Or persist in `pyproject.toml`:
```toml
[tool.pytest.ini_options]
log_cli = true
log_cli_level = "INFO"
```
See [docs/troubleshooting.md](docs/troubleshooting.md) for a full table of log messages.
---
## Local development
```bash
pip install -e ".[dev]"
python -m ruff check .
python -m pytest -q
python -m pytest --collect-only examples/basic/tests -q
```
---
## Documentation
| Topic | File |
|---|---|
| Installation + dependencies | [docs/installation.md](docs/installation.md) |
| Project structure + scaffold | [docs/project-structure.md](docs/project-structure.md) |
| Configuration (all settings) | [docs/configuration.md](docs/configuration.md) |
| **CLI reference (all flags)** | [docs/cli-reference.md](docs/cli-reference.md) |
| Built-in fixtures | [docs/fixtures.md](docs/fixtures.md) |
| Page objects guide | [docs/page-objects.md](docs/page-objects.md) |
| conftest.py guide | [docs/conftest-guide.md](docs/conftest-guide.md) |
| Waits reference | [docs/waits.md](docs/waits.md) |
| Actions reference | [docs/actions.md](docs/actions.md) |
| Session modes | [docs/session-modes.md](docs/session-modes.md) |
| Device resolution | [docs/device-resolution.md](docs/device-resolution.md) |
| Failure diagnostics + video | [docs/diagnostics.md](docs/diagnostics.md) |
| Error reference | [docs/errors.md](docs/errors.md) |
| Troubleshooting | [docs/troubleshooting.md](docs/troubleshooting.md) |
| text/markdown | appium-pytest-kit contributors | null | null | null | null | appium, pytest, mobile, automation, framework | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Framework :: Pytest",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"Appium-Python-Client>=4.0.0",
"pydantic-settings>=2.3.0",
"pytest>=8.2.0",
"ruff>=0.9.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"PyYAML>=6.0; extra == \"yaml\"",
"allure-pytest>=2.13.0; extra == \"allure\"",
"pytest-retry>=0.6.0; extra == \"retry\"",
"PyYAML>=6.0; extra == \"all\"",
"allure-pytest>=2.13.0; extra == \"all\"",
"pytest-retry>=0.6.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/gianlucasoare/appium-pytest-kit",
"Documentation, https://github.com/gianlucasoare/appium-pytest-kit#readme",
"Repository, https://github.com/gianlucasoare/appium-pytest-kit.git",
"Funding, https://buymeacoffee.com/gianlucasoare"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T23:03:12.771656 | appium_pytest_kit-0.1.6.tar.gz | 41,392 | ad/af/7e12fd5fe0dacd839bd76b0c52eb3517db3df58a5791dd844c34b29bcbc2/appium_pytest_kit-0.1.6.tar.gz | source | sdist | null | false | 768379ce24e4f0c83709bf49ce5d3753 | 809d66b40fa3ef92ecd62dfc90ff8f346c8d27be4a92eabf97042e7f478e1947 | adaf7e12fd5fe0dacd839bd76b0c52eb3517db3df58a5791dd844c34b29bcbc2 | MIT | [
"LICENSE"
] | 181 |
2.4 | db-mcp-server | 1.0.2 | MCP server for MySQL and MongoDB databases — one instance per database, no Docker required | # db-mcp-server
MCP server for MySQL and MongoDB databases. One instance per database, no Docker required.
## Installation
```bash
uvx db-mcp-server
```
## Configuration
Configure via environment variables. Each instance connects to a single database.
### MySQL
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `DB_TYPE` | Yes | — | `mysql` |
| `DB_DATABASE` | Yes | — | Database name |
| `DB_PASSWORD` | Yes | — | Password |
| `DB_HOST` | No | `localhost` | Host |
| `DB_PORT` | No | `3306` | Port |
| `DB_USER` | No | `root` | User |
| `DB_MODE` | No | `read-only` | `read-only` or `read-write` |
### MongoDB
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `DB_TYPE` | Yes | — | `mongodb` |
| `DB_DATABASE` | Yes | — | Database name |
| `DB_URL` | Yes | — | Connection URL (`mongodb://...`) |
| `DB_MODE` | No | `read-only` | `read-only` or `read-write` |
## Usage in .mcp.json
```json
{
"mcpServers": {
"db-prod": {
"command": "uvx",
"args": ["db-mcp-server"],
"env": {
"DB_TYPE": "mysql",
"DB_MODE": "read-only",
"DB_HOST": "db.example.com",
"DB_PORT": "3306",
"DB_USER": "root",
"DB_PASSWORD": "secret",
"DB_DATABASE": "myapp"
}
}
}
}
```
For multiple databases, add multiple instances:
```json
{
"mcpServers": {
"db-prod": {
"command": "uvx",
"args": ["db-mcp-server"],
"env": { "DB_TYPE": "mysql", "DB_DATABASE": "prod", "..." : "..." }
},
"db-staging": {
"command": "uvx",
"args": ["db-mcp-server"],
"env": { "DB_TYPE": "mongodb", "DB_DATABASE": "staging", "..." : "..." }
}
}
}
```
## Tools
### MySQL
- **query** — Execute read-only SQL (SELECT, SHOW, DESCRIBE, EXPLAIN, WITH)
- **execute** — Execute write SQL (INSERT, UPDATE, DELETE) — requires `DB_MODE=read-write`
- **describe** — Describe table structure
- **list_tables** — List all tables
- **status** — Show connection info
### MongoDB
- **query** — Find documents in a collection
- **describe** — Collection stats ($collStats)
- **list_collections** — List all collections
- **aggregate** — Execute aggregation pipelines ($out/$merge blocked on read-only)
- **status** — Show connection info
## License
MIT
<!-- mcp-name: io.github.stucchi/db -->
| text/markdown | Ing. Luca Stucchi | null | null | null | null | mcp, database, mysql, mongodb, ai | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Database"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp[cli]",
"aiomysql",
"motor"
] | [] | [] | [] | [
"Homepage, https://github.com/stucchi/db-mcp-server",
"Repository, https://github.com/stucchi/db-mcp-server",
"Issues, https://github.com/stucchi/db-mcp-server/issues"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T23:03:01.255028 | db_mcp_server-1.0.2.tar.gz | 9,426 | d9/75/7d333a61829034040e234fd1623f5bfcdac0d0f6ea73d5813f665656bf22/db_mcp_server-1.0.2.tar.gz | source | sdist | null | false | fed675ed22e83d14dd3e4baefab12eaf | b3cfa4bbc415b55219c33b75fd3910a1210aca6c70aea8ebca8f142fab54ad6a | d9757d333a61829034040e234fd1623f5bfcdac0d0f6ea73d5813f665656bf22 | MIT | [
"LICENSE"
] | 200 |
2.4 | simgen-vla | 2.6.0 | VLA: Zero-Error GPU Arithmetic for PyTorch. Exact results, every calculation. | # SimGen VLA
**Zero-Error GPU Arithmetic for PyTorch**
VLA eliminates floating-point accumulation errors in GPU computations. Every sum, every matrix multiply, every gradient update - mathematically exact.
## Installation
```bash
pip install simgen-vla
```
**Requirements:**
- Python 3.10+
- PyTorch 2.0+
- CUDA GPU (Turing, Ampere, Ada, or Hopper)
- CuPy (auto-installed)
## Quick Start
```python
from simgen import vla
# Exact operations
result = vla.sum(tensor) # Zero accumulation error
result = vla.matmul(a, b) # Exact matrix multiply
result = vla.softmax(logits) # Numerically stable
# Optimizer that doesn't drift
optimizer = vla.AdamW(model.parameters(), lr=1e-3)
# Enable globally
vla.enable()
torch.sum(x) # Now uses VLA automatically!
vla.disable()
# System info
vla.info()
```
## Why VLA?
Standard floating-point arithmetic loses precision with every operation:
```python
# The Problem: Standard FP32 fails
x = torch.tensor([1e8, 1.0, -1e8, 1e-8, 1e-8], device='cuda')
print(x.sum()) # 0.0 (WRONG!)
# VLA gets it right
print(vla.sum(x)) # 1.00000002 (CORRECT!)
```
| Operation | Standard Error | VLA Error |
|-----------|---------------|-----------|
| Sum 1M values | ~10⁻¹⁰ | **0** |
| MatMul 1024x1024 | ~10⁻⁷ | **< 10⁻¹⁵** |
| 1000 Adam steps | Drift | **Exact** |
### Real-World Impact
- **Training stability**: No gradient drift over long runs
- **Reproducibility**: Same bits on any GPU (RTX 4070 = A100 = H100)
- **Financial accuracy**: $0.0001 × 1M transactions = $100 (not lost)
- **Scientific precision**: ODE integration without energy drift
## Operations (55 Kernels)
### Reductions
`sum`, `mean`, `var`, `std`, `norm`, `dot`, `prod`, `cumsum`, `logsumexp`, `min`, `max`, `argmin`, `argmax`
### Matrix Operations
`matmul`, `mm`, `bmm`, `linear`, `einsum`
### Activations
`softmax`, `log_softmax`, `relu`, `gelu`, `silu`, `sigmoid`, `tanh`, `leaky_relu`
### Normalization
`layer_norm`, `rms_norm`, `batch_norm`, `group_norm`
### Loss Functions
`cross_entropy`, `mse_loss`
### Element-wise
`add`, `sub`, `mul`, `div`, `neg`, `abs`, `exp`, `log`, `sqrt`, `rsqrt`, `pow`, `clamp`
### Advanced
`scaled_dot_product_attention`, `conv2d`, `embedding`, `dropout`
### Optimizers
`AdamW`, `SGD` (FP64 state - no drift over 1000s of steps)
## Usage Patterns
### 1. Direct Functions
```python
from simgen import vla
loss = vla.cross_entropy(logits, targets)
normalized = vla.layer_norm(x, weight, bias)
attn = vla.scaled_dot_product_attention(q, k, v)
```
### 2. Global Patching
```python
vla.enable()
# All torch.sum, torch.matmul now use VLA
model.train()
vla.disable()
```
### 3. Context Manager
```python
with vla.mode():
output = model(x)
loss = criterion(output, y)
```
### 4. Exact Optimizer
```python
# FP64 state prevents momentum/variance drift
optimizer = vla.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)
for epoch in range(1000):
loss = model(x)
loss.backward()
optimizer.step() # No drift, ever
```
## Supported GPUs
| Architecture | GPUs | Status |
|--------------|------|--------|
| sm_60 (Pascal) | GTX 1080, P100 | ✓ |
| sm_61 (Pascal) | GTX 1050, 1060 | ✓ |
| sm_70 (Volta) | V100 | ✓ |
| sm_75 (Turing) | T4, RTX 2080 | ✓ |
| sm_80 (Ampere) | A100, A10, RTX 3090 | ✓ |
| sm_86 (Ampere) | RTX 3060/3070 | ✓ |
| sm_89 (Ada) | RTX 4070/4080/4090 | ✓ |
| sm_90 (Hopper) | H100 | ✓ |
## Benchmarks
VLA adds minimal overhead while providing exact results:
| Operation | Standard | VLA | Overhead |
|-----------|----------|-----|----------|
| Sum (1M) | 0.12ms | 0.15ms | 1.25x |
| MatMul (1024²) | 0.8ms | 1.1ms | 1.4x |
| Softmax (batch) | 0.05ms | 0.06ms | 1.2x |
*Benchmarks on RTX 4070. VLA uses FP64 accumulation with error tracking.*
## How It Works
VLA uses proprietary multi-precision accumulation to capture all rounding errors during computation. The result is mathematically exact to the precision of the input.
**Key innovations:**
- Proprietary precision-preserving arithmetic
- Multi-level error capture for reductions
- FP64 optimizer state for training stability
- Precompiled CUDA kernels for each GPU architecture
## Version History
- **v2.4.0** - Clean API (`vla.sum`, `vla.matmul`), 55 kernels, Windows support
- **v2.0.x** - Native CUDA kernels, VLAResult container
- **v1.x** - Initial release with Triton backend
## License
Proprietary. All rights reserved.
© 2025-2026 Clouthier Simulation Labs
**Contact:** kyle@simgen.dev
**Website:** https://simgen.dev
| text/markdown | Clouthier Simulation Labs | Clouthier Simulation Labs <kyle@simgen.dev> | null | null | null | exact-arithmetic, GPU, precision, lossless, scientific-computing, machine-learning, deep-learning, simulation, finance, HPC, cuda, pytorch | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Physics"
] | [] | https://simgen.dev | null | >=3.10 | [] | [] | [] | [
"torch>=2.0",
"cupy-cuda12x>=12.0",
"pytest; extra == \"dev\"",
"mpmath; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://simgen.dev",
"Documentation, https://simgen.dev/docs",
"Repository, https://github.com/clouthier-simulation-labs/simgen"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T23:02:31.648480 | simgen_vla-2.6.0.tar.gz | 1,362,890 | 2c/f5/549b5ae190085f931d7c5bf22c9eab5b19abac6c639e647ed2fd69ae22a0/simgen_vla-2.6.0.tar.gz | source | sdist | null | false | 050e539a2ee72698aa8ba2142cc855de | 314a8b19ecdc4fd992b82d9ea9bb347a95890b692e62560a446c8ee8d0a60b00 | 2cf5549b5ae190085f931d7c5bf22c9eab5b19abac6c639e647ed2fd69ae22a0 | LicenseRef-Proprietary | [
"LICENSE"
] | 224 |
2.4 | aptpath-models | 0.1.3 | Consolidated Django ORM models for the Aptpath platform. | # aptpath-models
Consolidated Django ORM models for the Aptpath platform, published as a reusable pip package.
## Installation
```bash
pip install aptpath-models
```
Or directly from source:
```bash
pip install git+<repo-url>
```
## Usage
Add `aptpath_models` to `INSTALLED_APPS` in your Django settings:
```python
INSTALLED_APPS = [
...
'aptpath_models',
]
```
Import models directly:
```python
from aptpath_models.models import Profile, Company, Internship, LMSCourse
```
## Package structure
```
aptpath_models/
models/
base.py # Abstract base models
neo4j_nodes.py # Neo4j StructuredNode classes (neomodel)
users.py # MongoUser, MongoEmployer, AptPathAdmin, Profile, Permissions
skills.py # MongoSkill, MongoCategories, Language, MongoAccountType, MongoRole
user_profile.py # UserEducation, UserExperience, UserCertificate
company.py # Company, Employment, Invitation, Notification, Activity
college.py # College and related models
courses.py # MongoCourse, LMSSystem, CourseTemplates
jobs.py # Jobs, MongoApplications, JobTemplates, Assessments, DailyStreak
learning_path.py # LearningPath, LearningPathModule and templates
assessments_tests.py # AptpathTests, Durationmodel and templates
nav.py # NavigationItem, NavigationSubheader
xapi.py # XAPIStatement
moodle.py # MoodleCourse, MoodleCourseCompletion, MoodleUser
logs.py # OperationLog (all user types)
internship.py # Internship, InternshipBatches, Coupon, LeaveRequest, etc.
labs.py # Lab, SkillBuilder, CodingAssessment, StudentTasks, etc.
lms_partner.py # LMSCourse, CourseEnrollmentPaymentDetails, UserCourseStatus
blogs.py # AptpathBlog, Blog and related content models
employer_ext.py # JobV2, JobApplicationsV2, Roles, InternJobProfile
payment.py # PaymentDetails
meetings.py # Recordings
```
## Requirements
- Django >= 4.2
- psycopg2-binary (PostgreSQL)
- neomodel >= 5.0 (Neo4j graph nodes)
- django-ckeditor (rich text blog content)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=4.2",
"psycopg2-binary",
"neomodel>=5.0",
"django-ckeditor",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T23:02:20.508373 | aptpath_models-0.1.3.tar.gz | 25,919 | ad/52/479e7f03a3d64e98138743ffa52df8b3f42f3137661b6c9b7963209dfd0b/aptpath_models-0.1.3.tar.gz | source | sdist | null | false | 830b8ae4b073cb8a2d82704d5fcd3e1e | 62f77be34bec9839361b257b4e471ecb1329125b08d9c6cee566c433ad6b3464 | ad52479e7f03a3d64e98138743ffa52df8b3f42f3137661b6c9b7963209dfd0b | LicenseRef-Proprietary | [] | 201 |
2.4 | graph-mcp | 0.1.0 | MCP server for Microsoft Teams, Outlook Calendar, and Mail via Microsoft Graph API | # Graph MCP
An MCP server that connects Claude to Microsoft Teams, Outlook Calendar, and Mail through the Microsoft Graph API.
## What it does
Gives Claude access to 29 tools:
| Category | Tools |
|---|---|
| **Auth** | Check status, login (browser OAuth), logout |
| **Chats** | List chats, read/send messages, create chats, list members |
| **Teams & Channels** | List teams, list channels, read/send messages, list members, read/send replies |
| **Calendar** | List calendars, list events (with date range), get event details |
| **Mail** | List emails, read email, search emails, send email, reply to email |
| **Users** | Search organization directory |
| **Presence** | Get/set your presence, get another user's presence |
| **Search** | Search messages across all chats and channels |
## Prerequisites
You need an Azure App Registration:
1. Go to [Azure Portal](https://portal.azure.com) > App registrations > New registration
2. Set platform to **Mobile and desktop applications**
3. Set redirect URI to `http://localhost:3000/auth/callback`
4. Mark as **Public client** (no client secret needed)
5. Under API permissions, add these **delegated** permissions:
- `offline_access`, `openid`, `profile`, `User.Read`
- `User.ReadBasic.All`
- `Chat.Read`, `Chat.ReadWrite`, `ChatMessage.Send`
- `ChannelMessage.Read.All`, `ChannelMessage.Send`
- `Team.ReadBasic.All`, `Channel.ReadBasic.All`, `ChannelMember.Read.All`
- `Calendars.Read`
- `Mail.Read`, `Mail.Send`
- `Presence.Read`, `Presence.Read.All`, `Presence.ReadWrite`
## Install
```
pip install graph-mcp
```
Or from a cloned repo:
```
pip install -e .
```
## Setup
```
graph-mcp setup
```
This asks for your Azure Client ID and Tenant ID, then gives you the exact command:
```
claude mcp add graph -e AZURE_CLIENT_ID=your-id -e AZURE_TENANT_ID=your-tenant -- /path/to/graph-mcp
```
Paste it, start Claude Code, and ask Claude to log in. It opens your browser for OAuth sign-in — that's it.
## How it works
```
Claude Code ──stdio──> graph-mcp ──HTTPS──> Microsoft Graph API
│
~/.graph-mcp/
tokens.enc (encrypted)
.key (auto-generated)
```
- **Auth**: OAuth2 Authorization Code flow with PKCE. Login opens your browser, a local callback server captures the token. No secrets stored in config.
- **Token persistence**: Tokens are encrypted with Fernet and stored in `~/.graph-mcp/tokens.enc`. The encryption key is auto-generated on first use. Logins survive server restarts.
- **Token refresh**: Access tokens are refreshed automatically before they expire. You only need to log in again if the refresh token itself expires.
- **Rate limiting**: Sliding window counter with exponential backoff. Respects `Retry-After` headers on 429 responses.
## Configuration
All configuration is passed via environment variables (set in the MCP config's `env` block):
| Variable | Required | Default | Description |
|---|---|---|---|
| `AZURE_CLIENT_ID` | Yes | — | From your Azure App Registration |
| `AZURE_TENANT_ID` | No | `common` | Your Azure tenant ID, or `common` for multi-tenant |
| `GRAPH_REDIRECT_URI` | No | `http://localhost:3000/auth/callback` | Must match Azure app registration |
| `GRAPH_TOKEN_ENCRYPTION_KEY` | No | auto-generated | Fernet key for token encryption |
| `GRAPH_DEBUG` | No | `false` | Enable verbose logging |
## Troubleshooting
**"Approval required" error during login**
Your Azure app is requesting scopes that aren't registered or need admin consent. Check the API permissions in the Azure portal match the list above.
**403 Forbidden on specific tools**
The endpoint needs a permission that's not in your Azure app registration, or requires admin consent.
**Login works but tools say "not authenticated"**
Restart the MCP server (`claude mcp restart graph`) — it may be running an old version.
## Disclaimer
This project is an independent open-source effort and is **not affiliated with, endorsed by, or sponsored by Microsoft Corporation**. Microsoft, Microsoft Teams, Outlook, Microsoft 365, Microsoft Graph, and Azure are trademarks of the Microsoft group of companies.
This software is provided "as is", without warranty of any kind. Use it at your own risk. The authors accept no liability for any damages, data loss, or security issues arising from the use of this software. You are responsible for complying with your organization's policies and Microsoft's [API Terms of Use](https://learn.microsoft.com/en-us/legal/microsoft-apis/terms-of-use) when using this tool.
This software accesses Microsoft services on your behalf using your own credentials and Azure app registration. Data retrieved from Microsoft Graph (emails, messages, calendar events, etc.) is passed to the LLM that invoked the tool. Be mindful of your organization's data handling policies when using this with cloud-hosted AI models.
## License
MIT — see [LICENSE](LICENSE).
| text/markdown | JustStas | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=42.0",
"fastmcp>=2.0",
"httpx>=0.27",
"pydantic-settings>=2.0",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/JustStas/Graph-MCP",
"Repository, https://github.com/JustStas/Graph-MCP.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:02:02.579690 | graph_mcp-0.1.0.tar.gz | 16,127 | 0b/1e/0f0a6601ced15842a72d0085ff74e43cbda9234802b83146883f4ceef859/graph_mcp-0.1.0.tar.gz | source | sdist | null | false | 6aaae428c1ff759dd9049b7360b3407a | f0034183715ab7d861faf850ea67c06e3f72a21a06f4b8f370ccfdf6709de3e3 | 0b1e0f0a6601ced15842a72d0085ff74e43cbda9234802b83146883f4ceef859 | MIT | [
"LICENSE"
] | 211 |
2.4 | rda-python-dsquasar | 2.0.4 | RDA Python package to backup and recover RDA data archives to and from GLOBUS Quasar backup server | RDA python package to backup RDA dataset data onto Globus Quasar Server.
| text/markdown | null | Zaihua Ji <zji@ucar.edu> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 5 - Production/Stable"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"rda_python_common",
"rda_python_dsarch"
] | [] | [] | [] | [
"Homepage, https://github.com/NCAR/rda-python-dsquasar"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T23:02:01.623277 | rda_python_dsquasar-2.0.4.tar.gz | 54,198 | 40/8d/ce47e81fb5980e2f4f3525486bef6d58ea1fdf973162877887ee9897955b/rda_python_dsquasar-2.0.4.tar.gz | source | sdist | null | false | 5a4e853380ad9435bee14e1c7a8ebc4c | 7b03ff51564708038b6772ff3129aac7e8687ed0462cc0b0a0736bb93212b713 | 408dce47e81fb5980e2f4f3525486bef6d58ea1fdf973162877887ee9897955b | null | [
"LICENSE"
] | 179 |
2.4 | polli-cosmos | 0.4.1 | Unified ingest + post-processing toolkit (COSM camera MP4 generation + general square crop) | # Cosmos
Unified ingest + post‑processing toolkit
- `cosmos`: COSM camera MP4 generation (ingest) and orchestration
- `squarecrop`: general MP4 post‑processing (square crop)
Two CLIs, one SDK. TUI‑first UX with a stable programmatic API.
## Install
- Production: `pip install polli-cosmos`
- Dev (uv):
```
uv venv .venv
. .venv/bin/activate
uv pip install -e ".[dev]"
```
## CLI quickstart
```
cosmos --help
cosmos process --help
cosmos ingest run --help
cosmos crop run --help
cosmos crop preview --help
squarecrop --help
```
See docs/cosmos-cli.md and docs/squarecrop-cli.md for usage.
## Local runs (uv + make)
1) Create venv and install dev deps
```
make uv-sync
```
2) Run ingest (example)
```
make run.ingest IN=/path/to/raw OUT=./out YES=1 WINDOW=10
```
3) Run squarecrop with a jobs JSON
```
make run.crop INPUT=/path/to/clip.mp4 OUT=_work/out JOBS=_work/job.json YES=1
```
4) Render crop previews (contact sheets + stacked overlays)
```
cosmos crop preview --input /path/to/clip.mp4 --jobs-file _work/job.json --out _work/preview --frame start --frame mid --stack-time 0 --yes
```
5) Inspect provenance
```
make run.provenance DIR=_work/out
```
Jobs JSON fields for squarecrop:
- `targets`: [1536] or multiple sizes
- Offsets (recommended): `offset_x`, `offset_y` in [-1,1], relative to available margin (0=center; +right/down; −left/up)
- Alternative: `center_x`, `center_y` absolute [0..1] of full frame
- Optional trims: `trim_unit: "time"`, `trim_start`, `trim_end`
- All jobs/targets run for each input; outputs include job/size markers in filenames for traceability.
- Provenance files now include width/height/duration/fps and stable clip/view ids usable by downstream tools.
## IDs & provenance
- Clip IDs: `clip-<stem>-<sha8>`; View IDs: `view-<stem>-<sha8>` (content-hash based, deterministic).
- View provenance records offsets/centers, trim windows (seconds), target size, encoder used, and source clip id/sha.
- Video metadata (width_px/height_px/fps/duration_sec) is recorded in both clip and view artifacts; width/height are aliases for backward compatibility.
## SDK quickstart
```python
from pathlib import Path
from cosmos.sdk import ingest, IngestOptions
inputs = Path("/path/to/raw")
outputs = Path("./out")
opts = IngestOptions(quality_mode="balanced", width=3840, height=2160)
produced = ingest(inputs, outputs, manifest=None, options=opts)
```
## Slim E2E (local, optional)
- Set `COSMOS_ENABLE_LOCAL_TESTS=1`
- Run all local tests: `pytest -q tests/e2e_local`
- Slim ingest reproduction (default): `make e2e-repro-slim`
- 4K balanced, 10s window, bicubic scaler; writes `{clip}.cmd.txt` and `{clip}.log.txt` alongside outputs.
- 8K windowed reproduction (heavy, local-only): `make e2e-repro-8k` (uses `COSMOS_RUN_8K_REPRO=1`)
- defaults: `CLIP1`, 2s window, 7680x4320 output
- optional knobs: `COSMOS_8K_CLIPS`, `COSMOS_8K_WINDOW_SECONDS`, `COSMOS_8K_QUALITY_MODE`, `COSMOS_8K_SCALE_FILTER`
- CI boundary: skipped when `CI=1` unless `COSMOS_RUN_8K_IN_CI=1`
- Full 9.5k reproduction (very heavy): `make e2e-repro-full` (uses `COSMOS_FULL_REPRO=1`).
- Fixtures live under `/Users/.../ladybird` or `dev/fixtures/cache` (see dev/fixtures/README.md). CI skips these.
## Dev workflow
- Format + lint: `make fmt && make lint`
- Type‑check: `make typecheck`
- Tests: `make test`
## Encoder policy (cross‑platform)
- macOS: `h264_videotoolbox` > `libx264` (use `--prefer-hevc-hw` on large inputs to try `hevc_videotoolbox` first)
- Linux: `h264_nvenc` > `h264_qsv` > `h264_vaapi` > `libx264`
- Windows: `h264_nvenc` > `h264_qsv` > `h264_amf` > `libx264`
Presets are centralized; filter graphs are CPU‑bound (crop/hstack/vstack/scale). Use `--scale-filter` and thread flags to tune throughput and memory.
Detailed platform behavior and known limits are tracked in `docs/encoder-behavior.md`.
| text/markdown | Polli Labs | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pillow>=10.4.0",
"psutil>=6.1.0",
"pydantic<3,>=2.7",
"questionary>=2.0.1",
"tqdm>=4.67.1",
"typer>=0.12.5",
"mypy>=1.11.2; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.2; extra == \"dev\"",
"ruff>=0.6.9; extra == \"dev\"",
"types-pyyaml; extra == \"dev\"",
"mkdocs-material<10,>=9.5; extra == \"docs\"",
"mkdocs<2,>=1.6; extra == \"docs\"",
"mkdocstrings-python<2,>=1.10; extra == \"docs\"",
"mkdocstrings<1,>=0.24; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T23:01:42.453329 | polli_cosmos-0.4.1-py3-none-any.whl | 62,121 | a1/95/8ec51680f70595f5a976a5d59d9b877d9113170f12520edec4a5adf973fa/polli_cosmos-0.4.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 6a0cbb21c77968def79b1a409ebb6153 | 503f7910da873a54128b6c889fdda9f12ca6e66a53f16139b283f655c3913e88 | a1958ec51680f70595f5a976a5d59d9b877d9113170f12520edec4a5adf973fa | null | [
"LICENSE"
] | 85 |
2.4 | strath | 1.1.0 | This library helps ensuring that a file path is of type str or pathlib.Path. | ## FRANÇAIS
Cette bibliothèque aide à assurer qu'un chemin de fichier soit de type `str` ou
`pathlib.Path`.
En Python, il est possible de représenter des chemins de fichier au moyen de
chaînes de caractères (`str`) ou d'instances de `pathlib.Path`. Ces types
étant employés de façons fort différentes, un programmeur peut avoir besoin de
vérifier le type des objets et d'effectuer des conversions.
La bibliothèque `strath` permet de faire ces deux tâches en un seul appel de
fonction.
### Contenu
#### Le type `Strath`
`Strath` est un alias de type (`TypeAlias`) qui représente les chemins de
fichier. Il correspond à `str | Path`.
#### `ensure_path_is_pathlib`
Si le chemin est une chaîne de caractères, cette fonction le convertit en une
instance de `pathlib.Path` puis renvoie cette dernière. Si le chemin est une
instance de `pathlib.Path`, la fonction renvoie le chemin.
#### `ensure_path_is_str`
Si le chemin est une instance de `pathlib.Path`, cette fonction le convertit en
une chaîne de caractères puis renvoie cette dernière. Si le chemin est une
chaîne de caractères, la fonction renvoie le chemin.
#### Paramètres et exception `TypeError`
Les fonctions ci-dessus ont les mêmes paramètres.
`some_path` (`str` ou `pathlib.Path`): le chemin d'un fichier ou d'un dossier.
`is_none_allowed` (`bool`): détermine si `some_path` peut être `None`.
Si l'argument `some_path` est `None` et l'argument `is_none_allowed` est vrai
(`True`), les fonctions renvoient `None`. Par contre, si `is_none_allowed` est
faux (`False`), une exception `TypeError` est levée.
Si l'argument `some_path` n'est pas `None` ni une instance de `str` ou de
`pathlib.Path`, une exception `TypeError` est levée.
Pour plus d'informations, consultez la documentation des fonctions et les démos
dans le dépôt de code source.
## ENGLISH
This library helps ensuring that a file path is of type `str` or
`pathlib.Path`.
In Python, it is possible to represent file paths with character strings
(`str`) or `pathlib.Path` instances. Since these types are used in very
different ways, a programmer might need to verify the objects' type and to
perform conversions.
Library `strath` allows to do both tasks with one function call.
### Content
#### Type `Strath`
`Strath` is a `TypeAlias` that represents file paths. It corresponds to
`str | Path`.
#### `ensure_path_is_pathlib`
If the path is a string, this function converts it to a `pathlib.Path`
instance, which it returns. If the path is a `pathlib.Path` instance, the
function returns the path.
#### `ensure_path_is_str`
If the path is a `pathlib.Path` instance, this function converts it to a
string, which it returns. If the path is a string, the function returns the
path.
#### Parameters and exception `TypeError`
The above functions have the same parameters.
`some_path` (`str` or `pathlib.Path`): the path to a file or directory.
`is_none_allowed` (`bool`): determines whether `some_path` can be `None`.
If argument `some_path` is `None` and argument `is_none_allowed` is `True`,
the functions return `None`. However, if `is_none_allowed` is `False`, a
`TypeError` is raised.
If argument `some_path` is not `None` nor an instance of `str` or
`pathlib.Path`, a `TypeError` is raised.
For more information, consult the functions' documentation and the demos in the
source code repository.
| text/markdown | Guyllaume Rousseau | null | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Topic :: Utilities"
] | [] | https://github.com/GRV96/strath | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.8 | 2026-02-20T23:01:23.955787 | strath-1.1.0.tar.gz | 5,228 | 49/3b/42eb6f9f4283ced86757bc6aaa6d7d8b276316297015924feff6690ee352/strath-1.1.0.tar.gz | source | sdist | null | false | e2498e236552b0083f50c6047402060a | e3d352ca37305d108342e987389f4798a54c97ac70f7605203163a9a8879ec18 | 493b42eb6f9f4283ced86757bc6aaa6d7d8b276316297015924feff6690ee352 | null | [
"LICENSE"
] | 135 |
2.1 | airbyte-source-github | 2.1.10.dev202602202300 | Source implementation for GitHub. | # Github source connector
This is the repository for the Github source connector, written in Python.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.com/integrations/sources/github).
## Local development
### Prerequisites
- Python (~=3.9)
- Poetry (~=1.7) - installation instructions [here](https://python-poetry.org/docs/#installation)
### Installing the connector
From this connector directory, run:
```bash
poetry install --with dev
```
### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.com/integrations/sources/github)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_github/spec.yaml` file.
Note that any directory named `secrets` is gitignored across the entire Airbyte repo, so there is no danger of accidentally checking in sensitive information.
See `sample_files/sample_config.json` for a sample config file.
### Locally running the connector
```
poetry run source-github spec
poetry run source-github check --config secrets/config.json
poetry run source-github discover --config secrets/config.json
poetry run source-github read --config secrets/config.json --catalog sample_files/configured_catalog.json
```
### Running unit tests
To run unit tests locally, from the connector directory run:
```
poetry run pytest unit_tests
```
### Building the docker image
1. Install [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md)
2. Run the following command to build the docker image:
```bash
airbyte-ci connectors --name=source-github build
```
An image will be available on your host with the tag `airbyte/source-github:dev`.
### Running as a docker container
Then run any of the connector commands as follows:
```
docker run --rm airbyte/source-github:dev spec
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-github:dev check --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-github:dev discover --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/integration_tests:/integration_tests airbyte/source-github:dev read --config /secrets/config.json --catalog /integration_tests/configured_catalog.json
```
### Running our CI test suite
You can run our full test suite locally using [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md):
```bash
airbyte-ci connectors --name=source-github test
```
### Customizing acceptance Tests
Customize `acceptance-test-config.yml` file to configure acceptance tests. See [Connector Acceptance Tests](https://docs.airbyte.com/connector-development/testing-connectors/connector-acceptance-tests-reference) for more information.
If your connector requires to create or destroy resources for use during acceptance tests create fixtures for it and place them inside integration_tests/acceptance.py.
### Dependency Management
All of your dependencies should be managed via Poetry.
To add a new dependency, run:
```bash
poetry add <package-name>
```
Please commit the changes to `pyproject.toml` and `poetry.lock` files.
## Publishing a new version of the connector
You've checked out the repo, implemented a million dollar feature, and you're ready to share your changes with the world. Now what?
1. Make sure your changes are passing our test suite: `airbyte-ci connectors --name=source-github test`
2. Bump the connector version (please follow [semantic versioning for connectors](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#semantic-versioning-for-connectors)):
- bump the `dockerImageTag` value in in `metadata.yaml`
- bump the `version` value in `pyproject.toml`
3. Make sure the `metadata.yaml` content is up to date.
4. Make sure the connector documentation and its changelog is up to date (`docs/integrations/sources/github.md`).
5. Create a Pull Request: use [our PR naming conventions](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#pull-request-title-convention).
6. Pat yourself on the back for being an awesome contributor.
7. Someone from Airbyte will take a look at your PR and iterate with you to merge it into master.
8. Once your PR is merged, the new version of the connector will be automatically published to Docker Hub and our connector registry.
| text/markdown | Airbyte | contact@airbyte.io | null | null | ELv2 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://airbyte.com | null | <3.14,>=3.10 | [] | [] | [] | [
"airbyte-cdk<8.0.0,>=7.4.1",
"sgqlc==16.3"
] | [] | [] | [] | [
"Repository, https://github.com/airbytehq/airbyte",
"Documentation, https://docs.airbyte.com/integrations/sources/github"
] | poetry/1.8.5 CPython/3.11.14 Linux/6.11.0-1018-azure | 2026-02-20T23:00:17.134930 | airbyte_source_github-2.1.10.dev202602202300.tar.gz | 240,262 | 47/49/2711bca71ac0925cfbd73c3992f9c98038f701697212404a728ab8e1ac0b/airbyte_source_github-2.1.10.dev202602202300.tar.gz | source | sdist | null | false | acf518a5883010f18aadc36efb84f45f | 1f5f80d7f3135b32041d9554cee752b91f6bb085337d54b1424d9e2dcafe3746 | 47492711bca71ac0925cfbd73c3992f9c98038f701697212404a728ab8e1ac0b | null | [] | 167 |
2.4 | clawd-code-sdk | 0.1.82 | Python SDK for Claude Code | # Claude Agent SDK for Python
Python SDK for Claude Agent. See the [Claude Agent SDK documentation](https://platform.claude.com/docs/en/agent-sdk/python) for more information.
## Installation
```bash
pip install clawd-code-sdk
```
**Prerequisites:**
- Python 3.10+
**Note:** The Claude Code CLI is automatically bundled with the package - no separate installation required! The SDK will use the bundled CLI by default. If you prefer to use a system-wide installation or a specific version, you can:
- Install Claude Code separately: `curl -fsSL https://claude.ai/install.sh | bash`
- Specify a custom path: `ClaudeAgentOptions(cli_path="/path/to/claude")`
## Quick Start
```python
import anyio
from clawd_code_sdk import query
async def main():
async for message in query(prompt="What is 2 + 2?"):
print(message)
anyio.run(main)
```
## Basic Usage: query()
`query()` is an async function for querying Claude Code. It returns an `AsyncIterator` of response messages. See [src/clawd_code_sdk/query.py](src/clawd_code_sdk/query.py).
```python
from clawd_code_sdk import query, ClaudeAgentOptions, AssistantMessage, TextBlock
# Simple query
async for message in query(prompt="Hello Claude"):
if isinstance(message, AssistantMessage):
for block in message.content:
if isinstance(block, TextBlock):
print(block.text)
# With options
options = ClaudeAgentOptions(
system_prompt="You are a helpful assistant",
max_turns=1
)
async for message in query(prompt="Tell me a joke", options=options):
print(message)
```
### Using Tools
```python
options = ClaudeAgentOptions(
allowed_tools=["Read", "Write", "Bash"],
permission_mode='acceptEdits' # auto-accept file edits
)
async for message in query(
prompt="Create a hello.py file",
options=options
):
# Process tool use and results
pass
```
### Working Directory
```python
from pathlib import Path
options = ClaudeAgentOptions(
cwd="/path/to/project" # or Path("/path/to/project")
)
```
## ClaudeSDKClient
`ClaudeSDKClient` supports bidirectional, interactive conversations with Claude
Code. See [src/clawd_code_sdk/client.py](src/clawd_code_sdk/client.py).
Unlike `query()`, `ClaudeSDKClient` additionally enables **custom tools** and **hooks**, both of which can be defined as Python functions.
### Custom Tools (as In-Process SDK MCP Servers)
A **custom tool** is a Python function that you can offer to Claude, for Claude to invoke as needed.
Custom tools are implemented in-process MCP servers that run directly within your Python application, eliminating the need for separate processes that regular MCP servers require.
For an end-to-end example, see [MCP Calculator](examples/mcp_calculator.py).
#### Creating a Simple Tool
```python
from clawd_code_sdk import tool, create_sdk_mcp_server, ClaudeAgentOptions, ClaudeSDKClient
# Define a tool using the @tool decorator
@tool("greet", "Greet a user", {"name": str})
async def greet_user(args):
return {
"content": [
{"type": "text", "text": f"Hello, {args['name']}!"}
]
}
# Create an SDK MCP server
server = create_sdk_mcp_server(
name="my-tools",
version="1.0.0",
tools=[greet_user]
)
# Use it with Claude
options = ClaudeAgentOptions(
mcp_servers={"tools": server},
allowed_tools=["mcp__tools__greet"]
)
async with ClaudeSDKClient(options=options) as client:
await client.query("Greet Alice")
# Extract and print response
async for msg in client.receive_response():
print(msg)
```
#### Benefits Over External MCP Servers
- **No subprocess management** - Runs in the same process as your application
- **Better performance** - No IPC overhead for tool calls
- **Simpler deployment** - Single Python process instead of multiple
- **Easier debugging** - All code runs in the same process
- **Type safety** - Direct Python function calls with type hints
#### Migration from External Servers
```python
# BEFORE: External MCP server (separate process)
options = ClaudeAgentOptions(
mcp_servers={
"calculator": {
"type": "stdio",
"command": "python",
"args": ["-m", "calculator_server"]
}
}
)
# AFTER: SDK MCP server (in-process)
from my_tools import add, subtract # Your tool functions
calculator = create_sdk_mcp_server(
name="calculator",
tools=[add, subtract]
)
options = ClaudeAgentOptions(
mcp_servers={"calculator": calculator}
)
```
#### Mixed Server Support
You can use both SDK and external MCP servers together:
```python
options = ClaudeAgentOptions(
mcp_servers={
"internal": sdk_server, # In-process SDK server
"external": { # External subprocess server
"type": "stdio",
"command": "external-server"
}
}
)
```
### Hooks
A **hook** is a Python function that the Claude Code _application_ (_not_ Claude) invokes at specific points of the Claude agent loop. Hooks can provide deterministic processing and automated feedback for Claude. Read more in [Claude Code Hooks Reference](https://docs.anthropic.com/en/docs/claude-code/hooks).
For more examples, see examples/hooks.py.
#### Example
```python
from clawd_code_sdk import ClaudeAgentOptions, ClaudeSDKClient, HookMatcher
async def check_bash_command(input_data, tool_use_id, context):
tool_name = input_data["tool_name"]
tool_input = input_data["tool_input"]
if tool_name != "Bash":
return {}
command = tool_input.get("command", "")
block_patterns = ["foo.sh"]
for pattern in block_patterns:
if pattern in command:
return {
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "deny",
"permissionDecisionReason": f"Command contains invalid pattern: {pattern}",
}
}
return {}
options = ClaudeAgentOptions(
allowed_tools=["Bash"],
hooks={
"PreToolUse": [
HookMatcher(matcher="Bash", hooks=[check_bash_command]),
],
}
)
async with ClaudeSDKClient(options=options) as client:
# Test 1: Command with forbidden pattern (will be blocked)
await client.query("Run the bash command: ./foo.sh --help")
async for msg in client.receive_response():
print(msg)
print("\n" + "=" * 50 + "\n")
# Test 2: Safe command that should work
await client.query("Run the bash command: echo 'Hello from hooks example!'")
async for msg in client.receive_response():
print(msg)
```
## Types
See [src/clawd_code_sdk/types.py](src/clawd_code_sdk/types.py) for complete type definitions:
- `ClaudeAgentOptions` - Configuration options
- `AssistantMessage`, `UserMessage`, `SystemMessage`, `ResultMessage` - Message types
- `TextBlock`, `ToolUseBlock`, `ToolResultBlock` - Content blocks
## Error Handling
```python
from clawd_code_sdk import (
ClaudeSDKError, # Base error
CLINotFoundError, # Claude Code not installed
CLIConnectionError, # Connection issues
ProcessError, # Process failed
CLIJSONDecodeError, # JSON parsing issues
)
try:
async for message in query(prompt="Hello"):
pass
except CLINotFoundError:
print("Please install Claude Code")
except ProcessError as e:
print(f"Process failed with exit code: {e.exit_code}")
except CLIJSONDecodeError as e:
print(f"Failed to parse response: {e}")
```
See [src/clawd_code_sdk/\_errors.py](src/clawd_code_sdk/_errors.py) for all error types.
## Available Tools
See the [Claude Code documentation](https://docs.anthropic.com/en/docs/claude-code/settings#tools-available-to-claude) for a complete list of available tools.
## Examples
See [examples/quick_start.py](examples/quick_start.py) for a complete working example.
See [examples/streaming_mode.py](examples/streaming_mode.py) for comprehensive examples involving `ClaudeSDKClient`. You can even run interactive examples in IPython from [examples/streaming_mode_ipython.py](examples/streaming_mode_ipython.py).
## Migrating from Claude Code SDK
If you're upgrading from the Claude Code SDK (versions < 0.1.0), please see the [CHANGELOG.md](CHANGELOG.md#010) for details on breaking changes and new features, including:
- `ClaudeCodeOptions` → `ClaudeAgentOptions` rename
- Merged system prompt configuration
- Settings isolation and explicit control
- New programmatic subagents and session forking features
## Development
If you're contributing to this project, run the initial setup script to install git hooks:
```bash
./scripts/initial-setup.sh
```
This installs a pre-push hook that runs lint checks before pushing, matching the CI workflow. To skip the hook temporarily, use `git push --no-verify`.
### Building Wheels Locally
To build wheels with the bundled Claude Code CLI:
```bash
# Install build dependencies
pip install build twine
# Build wheel with bundled CLI
python scripts/build_wheel.py
# Build with specific version
python scripts/build_wheel.py --version 0.1.4
# Build with specific CLI version
python scripts/build_wheel.py --cli-version 2.0.0
# Clean bundled CLI after building
python scripts/build_wheel.py --clean
# Skip CLI download (use existing)
python scripts/build_wheel.py --skip-download
```
The build script:
1. Downloads Claude Code CLI for your platform
2. Bundles it in the wheel
3. Builds both wheel and source distribution
4. Checks the package with twine
See `python scripts/build_wheel.py --help` for all options.
### Release Workflow
The package is published to PyPI via the GitHub Actions workflow in `.github/workflows/publish.yml`. To create a new release:
1. **Trigger the workflow** manually from the Actions tab with two inputs:
- `version`: The package version to publish (e.g., `0.1.5`)
- `claude_code_version`: The Claude Code CLI version to bundle (e.g., `2.0.0` or `latest`)
2. **The workflow will**:
- Build platform-specific wheels for macOS, Linux, and Windows
- Bundle the specified Claude Code CLI version in each wheel
- Build a source distribution
- Publish all artifacts to PyPI
- Create a release branch with version updates
- Open a PR to main with:
- Updated `pyproject.toml` version
- Updated `src/clawd_code_sdk/_version.py`
- Updated `src/clawd_code_sdk/_cli_version.py` with bundled CLI version
- Auto-generated `CHANGELOG.md` entry
3. **Review and merge** the release PR to update main with the new version information
The workflow tracks both the package version and the bundled CLI version separately, allowing you to release a new package version with an updated CLI without code changes.
## License and terms
Use of this SDK is governed by Anthropic's [Commercial Terms of Service](https://www.anthropic.com/legal/commercial-terms), including when you use it to power products and services that you make available to your own customers and end users, except to the extent a specific component or dependency is covered by a different license as indicated in that component's LICENSE file.
| text/markdown | null | phil65 <philipptemminghoff@gmail.com> | null | null | null | ai, anthropic, claude, sdk | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"anthropic>=0.77.0",
"anyenv>=2.0.15",
"anyio>=4.0.0",
"mcp>=0.1.0",
"anyio[trio]>=4.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.20.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-timeout>=2.4.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://github.com/phil65/claude-agent-sdk-python",
"Homepage, https://github.com/phil65/claude-agent-sdk-python",
"Issues, https://github.com/phil65/claude-agent-sdk-python/issues"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"25.10","id":"questing","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T23:00:08.418899 | clawd_code_sdk-0.1.82.tar.gz | 98,659 | f0/12/c20ca433667f2eb34fdac6e536adb439eb65d8a399ea1ec5b1c9f105bf90/clawd_code_sdk-0.1.82.tar.gz | source | sdist | null | false | 2764f7c89da15b1516cf8783d03ff071 | 417f9bce7f3f62c91a63527128fe6e23f7c30166d2dcb2b86f9cb2a510c4c818 | f012c20ca433667f2eb34fdac6e536adb439eb65d8a399ea1ec5b1c9f105bf90 | MIT | [
"LICENSE"
] | 247 |
2.4 | datacompy | 1.0.0a5 | Dataframe comparisons in Python | # DataComPy

[](https://github.com/astral-sh/ruff)
[](https://badge.fury.io/py/datacompy)
[](https://anaconda.org/conda-forge/datacompy)

DataComPy is a package to compare two DataFrames (or tables) such as Pandas, Spark, Polars, and
even Snowflake. Originally it was created to be something of a replacement
for SAS's ``PROC COMPARE`` for Pandas DataFrames with some more functionality than
just ``Pandas.DataFrame.equals(Pandas.DataFrame)`` (in that it prints out some stats,
and lets you tweak how accurate matches have to be). Supported types include:
- Pandas
- Polars
- Spark
- Snowflake
> [!IMPORTANT]
> datacompy is progressing towards a `v1` release. During this transition, a `support/0.19.x` branch will be maintained solely for `v0.19.x` users.
> This branch will only receive dependency updates and critical bug fixes; no new features will be added.
> All new feature development should target the `v1` branches (`develop` and eventually `main`).
## Quick Installation
```shell
pip install datacompy
```
or
```shell
conda install datacompy
```
### Installing extras
If you would like to use Spark or any other backends please make sure you install via extras:
```shell
pip install datacompy[spark]
pip install datacompy[snowflake]
```
## Supported backends
- Pandas: ([See documentation](https://capitalone.github.io/datacompy/pandas_usage.html))
- Spark: ([See documentation](https://capitalone.github.io/datacompy/spark_usage.html))
- Polars: ([See documentation](https://capitalone.github.io/datacompy/polars_usage.html))
- Snowflake/Snowpark: ([See documentation](https://capitalone.github.io/datacompy/snowflake_usage.html))
## Contributors
We welcome and appreciate your contributions! Before we can accept any contributions, we ask that you please be sure to
sign the [Contributor License Agreement (CLA)](https://cla-assistant.io/capitalone/datacompy).
This project adheres to the [Open Source Code of Conduct](https://developer.capitalone.com/resources/code-of-conduct/).
By participating, you are expected to honor this code.
## Roadmap
Roadmap details can be found [here](https://github.com/capitalone/datacompy/blob/develop/ROADMAP.rst)
| text/markdown | null | Faisal Dosani <faisal.dosani@capitalone.com>, Raymond Haffar <raymond.haffar@capitalone.com>, Jacob Dawang <jacob.dawang@capitalone.com> | null | Faisal Dosani <faisal.dosani@capitalone.com>, Jacob Dawang <jacob.dawang@capitalone.com>, Raymond Haffar <raymond.haffar@capitalone.com> | Apache Software License | null | [
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"jinja2>=3",
"numpy<=2.4.1,>=1.26.4",
"ordered-set<=4.1,>=4.0.2",
"pandas<=3,>=2.2",
"polars[pandas]<=1.37.1,>=0.20.4",
"build; extra == \"build\"",
"twine; extra == \"build\"",
"wheel; extra == \"build\"",
"datacompy[build]; extra == \"dev\"",
"datacompy[docs]; extra == \"dev\"",
"datacompy[qa]; extra == \"dev\"",
"datacompy[snowflake]; extra == \"dev\"",
"datacompy[spark]; extra == \"dev\"",
"datacompy[tests-spark]; extra == \"dev\"",
"datacompy[tests]; extra == \"dev\"",
"furo; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"edgetest; extra == \"edgetest\"",
"edgetest-conda; extra == \"edgetest\"",
"mypy; extra == \"qa\"",
"pandas-stubs; extra == \"qa\"",
"pre-commit; extra == \"qa\"",
"ruff; extra == \"qa\"",
"snowflake-snowpark-python<=1.33,>=1.26; extra == \"snowflake\"",
"pyspark[connect]!=4,<=4.1.1,>=3.5; python_version <= \"3.11\" and extra == \"spark\"",
"pyspark[connect]>=4; python_version >= \"3.12\" and extra == \"spark\"",
"pytest; extra == \"tests\"",
"pytest-benchmark; extra == \"tests\"",
"pytest-cov; extra == \"tests\"",
"pytest-profiling; extra == \"tests\"",
"pytest-spark; extra == \"tests-spark\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/capitalone/datacompy/issues",
"Documentation, https://capitalone.github.io/datacompy/",
"Homepage, https://github.com/capitalone/datacompy",
"Repository, https://github.com/capitalone/datacompy.git",
"Source Code, https://github.com/capitalone/datacompy"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:59:12.380372 | datacompy-1.0.0a5.tar.gz | 108,362 | 9d/27/e40aa93601dec4bf98bdf74014e4f330852efc199172118756d7bc707d7a/datacompy-1.0.0a5.tar.gz | source | sdist | null | false | 48a494e4c2bfbc38e5432c7080b1e974 | a94238b72d46fce646eb210c3fc745f68937ba67634155fb5d566a7b7914121a | 9d27e40aa93601dec4bf98bdf74014e4f330852efc199172118756d7bc707d7a | null | [
"LICENSE"
] | 187 |
2.4 | pytest-gremlins | 1.3.0b0 | Fast-first mutation testing for pytest. Let the gremlins loose, see which ones survive. | # pytest-gremlins
**Fast-first mutation testing for pytest. Speed that makes mutation testing practical for everyday TDD.**
[](https://pypi.org/project/pytest-gremlins/)
[](https://pypi.org/project/pytest-gremlins/)
[](https://github.com/mikelane/pytest-gremlins/actions/workflows/ci.yml)
[](https://codecov.io/gh/mikelane/pytest-gremlins)
[](https://pytest-gremlins.readthedocs.io)
[](LICENSE)
> *Let the gremlins loose. See which ones survive.*
---
## Key Features
- **Speed-First Architecture** - Mutation switching eliminates file I/O and module reloads. Run
gremlins in seconds, not hours.
- **Native pytest Integration** - Zero configuration to start. Just add `--gremlins` to your pytest
command.
- **Coverage-Guided Selection** - Only runs tests that actually cover the mutated code. 10-100x
fewer test executions in well-modularized codebases.
- **Incremental Caching** - Results cached by content hash. Unchanged code skips re-testing entirely.
- **Parallel Execution** - Distribute gremlins across CPU cores for linear speedup.
---
## Quick Start
```bash
# Install
pip install pytest-gremlins
# Run mutation testing
pytest --gremlins
```
That's it. pytest-gremlins will instrument your code, release the gremlins, and report which ones
your tests zapped (good!) and which survived (test gaps!).
---
## Why pytest-gremlins?
**Code coverage lies.** It tells you what code your tests *execute*, but not whether your tests
would catch bugs.
**Mutation testing answers a harder question:** If I introduce a bug, will my tests fail?
### The Problem with Existing Tools
| Tool | Limitation |
| -------------- | ---------------------------------------------------------- |
| **mutmut** | Single-threaded by default, no incremental analysis |
| **Cosmic Ray** | Complex setup; distributed mode requires Celery |
| **MutPy** | Unmaintained (last update 2019), Python 3.4-3.7 only |
| **mutatest** | Unmaintained (last update 2022) |
### Our Solution: Speed Through Architecture
pytest-gremlins is fast because of *how* it works, not just parallelization:
1. **Mutation Switching** - Instrument once, toggle mutations via environment variable
2. **Coverage Guidance** - Only run tests that cover the mutated code
3. **Incremental Analysis** - Skip unchanged code on repeat runs
4. **Parallel Execution** - Safe parallelization with no shared state
---
## Performance
Benchmarked against [mutmut](https://github.com/boxed/mutmut) on a synthetic project:
| Mode | Time | vs mutmut | Speedup |
| ----------------------------------------------- | ------ | --------- | ----------------- |
| `--gremlins` (sequential) | 17.79s | 14.90s | 0.84x (see note) |
| `--gremlins --gremlin-parallel` | 3.99s | 14.90s | **3.73x faster** |
| `--gremlins --gremlin-parallel --gremlin-cache` | 1.08s | 14.90s | **13.82x faster** |
**Key findings:**
- Sequential mode is slower due to subprocess isolation overhead; detailed profiling shows
[1.7x slower on small targets](docs/performance/profiling-report.md)
- Parallel mode delivers **3.73x speedup** over mutmut
- With caching, subsequent runs are **13.82x faster**
- pytest-gremlins found 117 mutations vs mutmut's 86, with 98% kill rate vs 86%
---
## Example Output
```text
================== pytest-gremlins mutation report ==================
Zapped: 142 gremlins (85%)
Survived: 18 gremlins (11%)
Timeout: 5 gremlins (3%)
Error: 2 gremlins (1%)
Top surviving gremlins:
src/auth.py:42 >= -> > (comparison)
src/utils.py:17 + -> - (arithmetic)
src/api.py:88 True -> False (boolean)
Run with --gremlin-report=html for detailed report.
=====================================================================
```
Timeout and Error categories are only shown when their count is greater than zero.
---
## Installation
```bash
# With pip
pip install pytest-gremlins
# With uv
uv add pytest-gremlins
# With poetry
poetry add pytest-gremlins
```
Requires Python 3.11+
---
## Configuration
Zero configuration required for most projects. The plugin auto-discovers source paths from your
`pyproject.toml` setuptools config (e.g., `[tool.setuptools.packages.find]`). If auto-discovery
doesn't find your code, configure paths explicitly:
```toml
[tool.pytest-gremlins]
# Operators to use (default: all)
operators = ["comparison", "arithmetic", "boolean"]
# Paths to mutate (optional -- auto-discovered from setuptools metadata)
paths = ["src"]
# Patterns to exclude
exclude = ["**/migrations/*", "**/test_*"]
# Minimum mutation score to pass
min_score = 80
```
---
## The Gremlins Theme
We use Gremlins movie references as our domain language:
| Traditional Term | Gremlin Term | Meaning |
| ---------------------- | ----------------------- | ---------------------------------- |
| Original code | **Mogwai** | Your clean, untouched source code |
| Start mutation testing | **Feed after midnight** | Begin the mutation process |
| Mutant | **Gremlin** | A mutation injected into your code |
| Kill mutant | **Zap** | Your test caught the mutation |
| Surviving mutant | **Survivor** | Mutation your tests missed |
---
## Documentation
Full documentation: [pytest-gremlins.readthedocs.io](https://pytest-gremlins.readthedocs.io)
- [User Guide](https://pytest-gremlins.readthedocs.io/en/latest/guide/)
- [Configuration Reference](https://pytest-gremlins.readthedocs.io/en/latest/configuration/)
- [API Reference](https://pytest-gremlins.readthedocs.io/en/latest/api/)
---
## Related Projects
- [pytest-test-categories](https://github.com/mikelane/pytest-test-categories) - Enforce Google
test size standards in Python
- [dioxide](https://github.com/mikelane/dioxide) - Rust-backed dependency injection for Python
---
## Contributing
Contributions welcome! See our [Contributing Guide](CONTRIBUTING.md).
This project uses strict TDD discipline with BDD/Gherkin scenarios. All contributions must include
tests written *before* implementation.
> **Note on code coverage:** We target 69% coverage due to inherent limitations in measuring pytest
> plugins (import timing, subprocess execution). See
> [CONTRIBUTING.md](CONTRIBUTING.md#code-coverage) for details.
---
## License
MIT License. See [LICENSE](LICENSE).
---
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for release history.
| text/markdown | null | Mike Lane <mikelane@gmail.com> | null | null | MIT | mutation-testing, pytest, pytest-plugin, quality-assurance, test-quality, testing | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"coverage>=7.12.0",
"pytest>=7.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/mikelane/pytest-gremlins",
"Documentation, https://pytest-gremlins.readthedocs.io",
"Repository, https://github.com/mikelane/pytest-gremlins",
"Issues, https://github.com/mikelane/pytest-gremlins/issues",
"Changelog, https://github.com/mikelane/pytest-gremlins/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:59:11.511536 | pytest_gremlins-1.3.0b0.tar.gz | 235,351 | 41/02/89e6def01d25a905a92be619e3da59fe885fe092e2e0a5b1e5162139d631/pytest_gremlins-1.3.0b0.tar.gz | source | sdist | null | false | c78efafacc5f98e016192f781e4f0ab8 | c2378bbce0eb0e2afb204f111150c165c418165c0e191bf2816698f8e4fc3f17 | 410289e6def01d25a905a92be619e3da59fe885fe092e2e0a5b1e5162139d631 | null | [
"LICENSE"
] | 178 |
2.4 | vplanet | 2.5.36 | The virtual planet simulator | <p align="center">
<img width = "250" src="docs/VPLanetLogo.png?raw=true"/>
</p>
<h1 align="center">VPLanet: The Virtual Planet Simulator</h1>
### NOTE: GitHub is currently blocking testing for this project, but all unit tests are passing and the code is stable!
<p align="center">
<a href="https://VirtualPlanetaryLaboratory.github.io/vplanet">
<img src="https://img.shields.io/badge/Read-the_docs-blue.svg?style=flat">
</a>
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/docs.yml/badge.svg">
<a href="https://iopscience.iop.org/article/10.1088/1538-3873/ab3ce8">
<img src="https://img.shields.io/badge/Read-the_paper-orange.svg?style=flat">
</a>
<a href="https://VirtualPlanetaryLaboratory.github.io/vplanet/conduct.html">
<img src="https://img.shields.io/badge/Code%20of-Conduct-black.svg">
</a>
<a href="https://www.youtube.com/@VPLanetCode/playlists">
<img src="https://img.shields.io/badge/You-Tube-darkred.svg">
</a>
<a href="https://github.com/VirtualPlanetaryLaboratory/vplanet/discussions">
<img src="https://img.shields.io/badge/Discussions-orange.svg">
</a>
<a href="https://virtualplanetarylaboratory.github.io/vplanet/authors.html">
<img src="https://img.shields.io/badge/Authors-purple.svg">
</a>
<br>
<img src="https://img.shields.io/badge/Unit%20Tests-19,667-darkblue.svg">
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/tests-linux.yml/badge.svg">
<img src="https://img.shields.io/badge/Ubuntu%2020-Python%203.6--3.12-7d93c7.svg">
<img src="https://img.shields.io/badge/Ubuntu%2022-Python%203.7--3.12-7d93c7.svg">
<a href="https://codecov.io/gh/VirtualPlanetaryLaboratory/vplanet">
<img src="https://codecov.io/gh/VirtualPlanetaryLaboratory/vplanet/branch/main/graph/badge.svg?token=3LFJQO1M6H">
</a>
<br>
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/tests-macos-intel.yml/badge.svg">
<img src="https://img.shields.io/badge/MacOS%2012--13-Python%203.6--3.12-7d93c7.svg">
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/tests-macos-silicon.yml/badge.svg">
<img src="https://img.shields.io/badge/MacOS%2014-Python%203.8--3.12-7d93c7.svg">
<br>
<img src="https://img.shields.io/badge/Test%20Sims-70-darkblue.svg">
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/memcheck.yml/badge.svg">
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/floatingpoint.yml/badge.svg">
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/sanitizer.yml/badge.svg">
<br>
<a href="examples">
<img src="https://img.shields.io/badge/Examples-41-darkblue.svg">
</a>
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/examples.yml/badge.svg">
<img src="https://img.shields.io/badge/Ubuntu%2022-3.6%20--%203.11-7d93c7.svg">
<img src="https://github.com/VirtualPlanetaryLaboratory/vplanet/actions/workflows/pip-install.yml/badge.svg">
</p>
### Overview
`VPLanet` is software to simulate planetary system evolution, with a focus on habitability. Physical models, typically consisting of ordinary differential equations, are coupled together to simulate evolution, from planetary cores to passing stars, for the age of a system. We strive for full transparency and reproducibility in our software, and this repository contains 1) the [source code](src), 2) [extensive documentation](https://VirtualPlanetaryLaboratory.github.io/vplanet), 3) scripts and files to [generate published figures](examples) and perform [parameter sweeps](https://virtualplanetarylaboratory.github.io/vplanet/parametersweep.html), and 4) [scripts to validate the current release](tests). We can't claim we found life beyond the Earth with closed-source or unreliable software!
To get started, ensure you have clang/gcc installed and follow the [Installation Guide](https://virtualplanetarylaboratory.github.io/vplanet/quickstart.html). You can also watch videos on our [YouTube channel](https://www.youtube.com/@VPLanetCode/playlists) on how to install and run `VPLanet`, as well as updates on recent results .
### Modules
`VPLanet` currently consists of 13 functioning "modules," each containing a set of equations
that simulates a specific physical process:
**AtmEsc**: Roche lobe overflow and thermal escape (energy-limited and radiation-recombination-limited) of an atmosphere, including water photolyzation, hydrogen
escape, oxygen escape, and oxygen build-up.
**Binary**: Orbital evolution of a single circumbinary planet.
**DistOrb**: 2nd and 4th order semi-analytic models of orbital evolution outside
of resonance.
**DistRot**: Evolution of a world's rotational axis due to orbital evolution and
the stellar torque.
**EqTide**: Tidal evolution in the equilibrium tide framework.
**Flare**: Flare frequency distribution and flare XUV luminosity evolution in low-mass stars.
**GalHabit**: Evolution of a wide orbit due to the galactic tide and impulses from
passing stars (including radial migration).
**MagmOc**: Thermal and geochemical evolution of a magma ocean.
**POISE**: Energy balance climate model including dynamic ice sheets and lithospheric
compression/rebound.
**RadHeat**: Radiogenic heating in a world's core, mantle, and crust.
**SpiNBody**: N-body integrator for the evolution of a system of massive particles.
**Stellar**: Evolution of a star's bolometric and XUV luminosity, temperature, radius, and mass concentration. Also includes magnetic braking and stellar wind spin-down.
**ThermInt**: Thermal interior evolution, including magnetic fields, for planets
undergoing plate tectonics or stagnant lid evolution.
Many of these modules can be combined together to simulate numerous phenomena and feedback loops in planetary systems.
### Resources
The [examples/](examples) directory contains input files and scripts for generating the figures in [Barnes et al. (2020)](https://ui.adsabs.harvard.edu/abs/2020PASP..132b4502B/abstract) and subsequent publications. The [Manual/](Manual) directory contains the pdf of [Barnes et al. (2020)](https://ui.adsabs.harvard.edu/abs/2020PASP..132b4502B/abstract) that describes the physics of the first 11 modules, validates the software against observations and/or past results, and includes figures from the [examples/](examples) directory.
An ecosystem of support software is also publicly available in other repositories of the [Virtual Planetary Laboratory](https://vpl.uw.edu/). The [vplot](https://github.com/VirtualPlanetaryLaboratory/vplot) package is both a command line tool to quickly plot the evolution of a single simulation and a Pythom module for generating publication-worthy figures. The [VSPACE](https://github.com/VirtualPlanetaryLaboratory/vspace) script generates input files for a parameter space sweep, which can then be performed on an arbitrary number of cores with [MultiPlanet](https://github.com/VirtualPlanetaryLaboratory/multi-planet). For large parameter sweeps, an enormous amount of data can be generated, which can slow analyses. To overcome this barrier, the [BigPlanet](https://github.com/VirtualPlanetaryLaboratory/bigplanet) code can both compress datasets into HDF5 format, including statistics of an integration, and tools to facilitate plotting. These three scripts can be executed from the command line to seamlessly [perform parameter sweeps](https://virtualplanetarylaboratory.github.io/vplanet/parametersweep.html). These Python scripts are optimized for [anaconda](https://www.anaconda.com/) distributions.
The "pip-install" badge indicates if the latest executables are available for installation via pip for the Python distributions and operating systems for which we perform unit tests (see below). (*Note: This badge is currently broken due to an unknown GitHub issue, so if you pip install, you may want to check your `VPLanet` version, which is printed near the top of the log file that is generated when you perform a simulation.*)
### Code Integrity
We are committed to maintaining a stable tool for scientists to analyze planetary system. Behind the scenes, the `VPLanet` team maintains code integrity through automatic checks at every merge into the main branch. You can see the status of these checks via the badges above. Currently we perform "Unit Tests" for the initial and final conditions across an orthogonal set of "Test Sims" (simulations), with the numbers of tests for each shown via badges. We perform the tests across all permutations of operating systems and Python version shown by the badges.
We also check for memory violations via [valgrind's memcheck tool](http://valgrind.org) ("memcheck") and [address sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer) ("sanitizer"), check for overflow, invalid operation, and divide-by-zero floating point exceptions ("floating-point"), and test if all the [examples](examples/) work across the operating system and Python versions listed after the "examples" badge.
The percentage of the lines of code that are executed by the unit tests is shown with the "codecov" badge, with details available at our <a href="https://codecov.io/gh/VirtualPlanetaryLaboratory/vplanet">Codecov</a> account.
### Community
`VPLanet` is a community project. We're happy to take pull requests; if you want to create one, please issue it to the *dev* branch. The documentation includes [tutorials on adding new features and modules](https://VirtualPlanetaryLaboratory.github.io/vplanet/tutorials.html). `VPLanet` is a platform for planetary science that can grow exponentially, either by adding new physics or by adding competing models for clean comparisons.
A list of additional GitHub repositories with `VPLanet` examples can be found [here](https://VirtualPlanetaryLaboratory.github.io/vplanet/repos.html).
If you have questions or are running into issues, you can read or post to a [Discussion](discussions).
If you believe you have encountered a bug, please raise an issue using the [Issues](https://github.com/VirtualPlanetaryLaboratory/vplanet/issues) tab at the top of this page.
### Acknowledgments
If you use this code to generate results used in any publication or conference contribution, please cite [Barnes, R. et al. (2020), PASP, 132, 24502](https://ui.adsabs.harvard.edu/abs/2020PASP..132b4502B/abstract).
`VPLanet` development has been supported by NASA grants NNA13AA93A, NNX15AN35G, 80NSSC17K048, 13-13NAI7_0024, 80NSSC20K0229, and 80NSSC20K0261. We also acknowledge additional support from the University of Washington, the Carnegie Institute for Science, and the Austrian Space Research Institude (IWF).
Enjoy!
© 2018-2025 The VPLanet Team.
| text/markdown | Rory Barnes | rkb9@uw.edu | null | null | MIT | null | [] | [] | https://github.com/VirtualPlanetaryLaboratory/vplanet | null | >=3.6 | [] | [] | [] | [
"vplot>=1.0.5",
"vspace>=2.3.2",
"bigplanet>=3.0.1",
"multiplanet>=2.0.3",
"astropy>=3.0",
"numpy",
"tqdm",
"seaborn"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:57:54.628024 | vplanet-2.5.36.tar.gz | 701,312 | bc/00/5c826f5fe639679e9bc28384923dc35b9511ab78e7553a4114af216f0215/vplanet-2.5.36.tar.gz | source | sdist | null | false | 73ae1cd16a724f2f0aae31c4f39000fd | 1b2a1238175c7c6f7fb99dea2cd6c01f96804bc25e95470db53e36821fc5c04b | bc005c826f5fe639679e9bc28384923dc35b9511ab78e7553a4114af216f0215 | null | [
"LICENSE"
] | 136 |
2.4 | cmu-graphics | 1.1.46 | The graphics framework used by CMU CS Academy, geared toward beginner CS students. | 
Desktop CMU Graphics is an offline version of CMU Graphics, a
persistent-object graphics package geared towards beginner computer science
students. CMU Graphics and its desktop version are maintained by
[CMU CS Academy](https://academy.cs.cmu.edu/), a Carnegie Mellon University
project which develops free-to-use middle and high school computer science
curriculum.
This package,
including its documentation, is licensed under the
[BSD 3-Clause license](https://github.com/cmu-cs-academy/desktop-cmu-graphics/blob/master/LICENSE).
## Installation
### Choose zip or pip
There are two different ways to install Desktop CMU Graphics on a device:
1. (Mac and Windows only) Use the [zip file installer](https://academy.cs.cmu.edu/desktop) that is available
for download on the CMU CS Academy website.
1. (Mac, Windows, and Linux) Use pip, Python’s built-in package-managing software.
Both methods come with their own advantages and limitations. If you're in doubt
about which to choose, the zip file installer is the most likely to succeed. It
should work regardless of most restrictions in place on school-distributed devices.
For those using devices with Linux operating systems, or for those who are
familiar with the command line/terminal, the pip version of the
package offers a larger degree of versatility.
The remainder of these installation instructions are only for the pip version.
### Install dependencies
If you're using Windows, you don't need to install any dependencies. Skip ahead to "Install CMU Graphics" below.
If you're using a Mac, install [Homebrew](https://brew.sh/).
If you're using a Mac or Linux, install the software packages needed by pycairo. Read their [getting started page](https://pycairo.readthedocs.io/en/latest/getting_started.html) for instructions.
### Install CMU Graphics
Run the following command:
```
pip install cmu-graphics
```
## Getting Started
To run our graphics framework, include the following line at the top of your
Python file:
```
from cmu_graphics import *
```
At the end of your Python file, add this line:
```
cmu_graphics.run()
```
From there, the syntax for using the graphics package is identical to the
online version of the framework. You can find more details about how to use the
graphics framework here on our [documentation page](https://academy.cs.cmu.edu/docs).
## Teacher Support and Documentation
If you are a teacher with a CMU CS Academy Account and you have questions about
CMU Graphics, remember that you can reach out to us through the Support tab on
the left side of the CMU CS Academy website:

If you are an educator but do not have an account with CMU CS Academy, you can
[register for an account here](https://academy.cs.cmu.edu/register).
If you are a student, or you are exploring Desktop CMU Graphics,
there are plenty of resources to help you get started with
the framework. Students can reach out to their teachers for questions about
CMU Graphics, and a full reference documentation for the graphics
framework is available on our [documentation page](https://academy.cs.cmu.edu/docs).
| text/markdown | null | Austin Schick <aschick@andrew.cmu.edu> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"pygame-ce>=2",
"pycairo>=1.20"
] | [] | [] | [] | [
"Homepage, https://academy.cs.cmu.edu/",
"Documentation, https://academy.cs.cmu.edu/docs"
] | twine/6.2.0 CPython/3.9.17 | 2026-02-20T22:57:52.572479 | cmu_graphics-1.1.46.tar.gz | 233,836 | 61/2c/3a8922fe2142ab65175a427f11537f6d97c4b86b3f88ebb729c4187bcb06/cmu_graphics-1.1.46.tar.gz | source | sdist | null | false | 49a08e5e2ab1f56386dbed33bc0ec1a7 | 8dc0ad6dfb50acb270c36f4e4b22a628f47df92ce202507e993174989556c169 | 612c3a8922fe2142ab65175a427f11537f6d97c4b86b3f88ebb729c4187bcb06 | BSD-3-Clause | [
"LICENSE"
] | 198 |
2.4 | aspose-note | 26.2 | Aspose.Note-compatible Python API for reading OneNote (.one) files | # 🗒️ Aspose.Note FOSS for Python
[](https://github.com/aspose-note/aspose-note-python/actions/workflows/ci.yml)
[](https://pypi.org/project/aspose-note/)
[](https://pypi.org/project/aspose-note/)
[](LICENSE)
Quick links: [📚 Examples](examples/) • [📦 PyPI](https://pypi.org/project/aspose-note/)
✅ **Official Aspose project** — **100% free & open-source**. Provides an Aspose.Note-compatible Python API for working with OneNote `.one` files.
This repository provides a Python library with an **Aspose.Note-shaped public API** for reading Microsoft OneNote files (`.one`).
The goal is to offer a familiar surface (`aspose.note.*`) inspired by [Aspose.Note for .NET](https://products.aspose.com/note/net/), backed by this repository’s built-in MS-ONE/OneStore parser.
## ✨ Features
- ✅ Read `.one` from a file path or a binary stream
- ✅ Aspose-like DOM (Document/Page/Outline/…): traversal + type-based search
- ✅ Content extraction
- ✅ Rich text with formatting runs (TextRun/TextStyle) and hyperlinks
- ✅ Images (bytes, file name, dimensions)
- ✅ Attached files (bytes, file name)
- ✅ Tables (rows/cells + cell content)
- ✅ OneNote tags (NoteTag) on text/images/tables/list elements
- ✅ Numbered lists (NumberList) and indent levels
- ✅ PDF export via `Document.Save(..., SaveFormat.Pdf)` (uses ReportLab)
## 🚀 Quick start
```python
from aspose.note import Document
doc = Document("testfiles/SimpleTable.one")
print(doc.DisplayName)
print(doc.Count())
# pages are direct children of Document
for page in doc:
print(page.Title.TitleText.Text)
```
### 📄 Export to PDF
```python
from aspose.note import Document, SaveFormat
doc = Document("testfiles/FormattedRichText.one")
doc.Save("out.pdf", SaveFormat.Pdf)
```
## 📦 Installation
From PyPI:
```bash
python -m pip install aspose-note
```
With PDF export support:
```bash
python -m pip install "aspose-note[pdf]"
```
From a local checkout:
```bash
python -m pip install -e .
```
PDF export requires ReportLab:
```bash
python -m pip install -e ".[pdf]"
```
PyPI release page (maintainers): https://pypi.org/manage/project/aspose-note/releases/
## 🧩 Public API (what is considered supported)
Only the `aspose.note` package is considered **public and supported**.
Everything under `aspose.note._internal` is internal implementation detail and may change.
Below is a complete list of objects exported from `aspose.note.__init__`.
### 🧭 Document and traversal
- `Document(source=None, load_options=None)`
- `DisplayName: str | None`
- `CreationTime: datetime | None`
- `Count() -> int` — number of pages (direct children of Document)
- iteration: `for page in doc: ...`
- `FileFormat -> FileFormat` (best-effort)
- `GetPageHistory(page) -> list[Page]` (currently returns `[page]`)
- `DetectLayoutChanges()` (compatibility stub)
- `Save(target, format_or_options=None)`
- supported: `SaveFormat.Pdf`
- other `SaveFormat` values currently raise `UnsupportedSaveFormatException`
- `DocumentVisitor` — base visitor for traversal:
- `VisitDocumentStart/End`, `VisitPageStart/End`, `VisitTitleStart/End`, `VisitOutlineStart/End`,
`VisitOutlineElementStart/End`, `VisitRichTextStart/End`, `VisitImageStart/End`
- `Node`
- `ParentNode`
- `Document` (property) — walk up to the root `Document`
- `Accept(visitor)`
- `CompositeNode(Node)`
- `FirstChild`, `LastChild`
- `AppendChildLast(node)`, `AppendChildFirst(node)`, `InsertChild(index, node)`, `RemoveChild(node)`
- `GetEnumerator()` / iteration `for child in node: ...`
- `GetChildNodes(Type) -> list[Type]` — recursive search by type
### 🏗️ Document structure
- `Page(CompositeNode)`
- `Title: Title | None`
- `Author: str | None`
- `CreationTime: datetime | None`, `LastModifiedTime: datetime | None`
- `Level: int | None`
- `Clone(deep=False) -> Page` (minimal clone)
- `Title(CompositeNode)`
- `TitleText: RichText | None`
- `TitleDate: RichText | None`
- `TitleTime: RichText | None`
- `Outline(CompositeNode)`
- `X`, `Y`, `Width` (positioning)
- `OutlineElement(CompositeNode)`
- `IndentLevel: int`
- `NumberList: NumberList | None`
- `Tags: list[NoteTag]`
### 📝 Content
- `RichText(CompositeNode)`
- `Text: str`
- `Runs: list[TextRun]` — formatted segments
- `FontSize: float | None`
- `Tags: list[NoteTag]`
- `Append(text, style=None) -> RichText`
- `Replace(old_value, new_value) -> None`
- `TextRun(Node)`
- `Text: str`
- `Style: TextStyle`
- `Start: int | None`, `End: int | None`
- `TextStyle(Node)`
- `Bold/Italic/Underline/Strikethrough/Superscript/Subscript: bool`
- `FontName: str | None`, `FontSize: float | None`
- `FontColor: int | None`, `HighlightColor: int | None`
- `LanguageId: int | None`
- `IsHyperlink: bool`, `HyperlinkAddress: str | None`
- `Image(CompositeNode)`
- `FileName: str | None`, `Bytes: bytes`
- `Width: float | None`, `Height: float | None`
- `AlternativeTextTitle: str | None`, `AlternativeTextDescription: str | None`
- `HyperlinkUrl: str | None`
- `Tags: list[NoteTag]`
- `Replace(image) -> None` — replace image contents
- `AttachedFile(CompositeNode)`
- `FileName: str | None`, `Bytes: bytes`
- `Tags: list[NoteTag]`
- `Table(CompositeNode)`
- `ColumnWidths: list[float]`
- `BordersVisible: bool`
- `Tags: list[NoteTag]`
- `TableRow(CompositeNode)`, `TableCell(CompositeNode)`
- `NoteTag(Node)`
- fields: `shape`, `label`, `text_color`, `highlight_color`, `created`, `completed`
- `CreateYellowStar()` — convenience factory
- `NumberList(Node)`
- `Format: str | None`, `Restart: int | None`, `IsNumbered: bool`
### ⚙️ Load/save options
- `LoadOptions`
- `DocumentPassword: str | None` (password/encryption is **not supported**)
- `LoadHistory: bool`
- `SaveOptions` (base)
- `SaveFormat: SaveFormat`
- `PdfSaveOptions(SaveOptions)` (subset)
- `PageIndex: int`, `PageCount: int | None`
- `TagIconDir: str | None`, `TagIconSize: float | None`, `TagIconGap: float | None`
- `OneSaveOptions`, `HtmlSaveOptions`, `ImageSaveOptions` — declared for API compatibility but not implemented.
### 🔢 Enums
- `SaveFormat`: `One`, `Pdf`, `Html`, plus raster formats (`Jpeg`, `Png`, `Gif`, `Bmp`, `Tiff`)
- `FileFormat`: `OneNote2010`, `OneNoteOnline`, `OneNote2007`
- `HorizontalAlignment`: `Left`, `Center`, `Right`
- `NodeType`: `Document`, `Page`, `Outline`, `OutlineElement`, `RichText`, `Image`, `Table`, `AttachedFile`
### 🚨 Exceptions
- `AsposeNoteError` (base)
- `FileCorruptedException`
- `IncorrectDocumentStructureException`
- `IncorrectPasswordException`
- `UnsupportedFileFormatException` (has a `FileFormat` field)
- `UnsupportedSaveFormatException`
## 📚 MS OneNote Examples
More runnable scripts are available in [examples/](examples/) (MS OneNote `.one` samples).
### 📝 Extract all text from an MS OneNote document
```python
from aspose.note import Document, RichText
doc = Document("testfiles/FormattedRichText.one")
texts = [rt.Text for rt in doc.GetChildNodes(RichText)]
print("\n".join(texts))
```
### 🖼️ Save all images from an MS OneNote document to disk
```python
from aspose.note import Document, Image
doc = Document("testfiles/3ImagesWithDifferentAlignment.one")
for i, img in enumerate(doc.GetChildNodes(Image), start=1):
name = img.FileName or f"image_{i}.bin"
with open(name, "wb") as f:
f.write(img.Bytes)
```
### 🏷️📄 Export an MS OneNote document to PDF (custom tag icons)
```python
from aspose.note import Document, PdfSaveOptions, SaveFormat
doc = Document("testfiles/TagSizes.one")
opts = PdfSaveOptions(
SaveFormat=SaveFormat.Pdf,
TagIconDir="./tag-icons",
TagIconSize=10,
TagIconGap=2,
)
doc.Save("out.pdf", opts)
```
### 📦 Load an MS OneNote document from a binary stream
```python
from pathlib import Path
from aspose.note import Document
one_path = Path("testfiles/SimpleTable.one")
with one_path.open("rb") as f:
doc = Document(f)
print(doc.DisplayName)
print(doc.Count())
```
### 🧭 Traverse MS OneNote document structure (DOM) and print a simple outline
```python
from aspose.note import Document, Page, Outline, OutlineElement, RichText
doc = Document("testfiles/SimpleTable.one")
for page in doc.GetChildNodes(Page):
title = page.Title.TitleText.Text if page.Title and page.Title.TitleText else "(no title)"
print(f"# {title}")
for outline in page.GetChildNodes(Outline):
for oe in outline.GetChildNodes(OutlineElement):
# OutlineElement may contain RichText, Table, Image, etc.
texts = [rt.Text for rt in oe.GetChildNodes(RichText)]
if texts:
print("-", " ".join(t.strip() for t in texts if t.strip()))
```
### 🔎 Count MS OneNote DOM nodes with `DocumentVisitor`
```python
from aspose.note import Document, DocumentVisitor, Page, Image, RichText
class Counter(DocumentVisitor):
def __init__(self) -> None:
self.pages = 0
self.rich_texts = 0
self.images = 0
def VisitPageStart(self, page: Page) -> None: # noqa: N802
self.pages += 1
def VisitRichTextStart(self, rich_text: RichText) -> None: # noqa: N802
self.rich_texts += 1
def VisitImageStart(self, image: Image) -> None: # noqa: N802
self.images += 1
doc = Document("testfiles/3ImagesWithDifferentAlignment.one")
counter = Counter()
doc.Accept(counter)
print(counter.pages, counter.rich_texts, counter.images)
```
### 🔗 Extract hyperlinks from formatted text in an MS OneNote document
```python
from aspose.note import Document, RichText
doc = Document("testfiles/FormattedRichText.one")
for rt in doc.GetChildNodes(RichText):
for run in rt.Runs:
if run.Style.IsHyperlink and run.Style.HyperlinkAddress:
print(run.Text, "->", run.Style.HyperlinkAddress)
```
### 🏷️ Inspect MS OneNote tags (NoteTag) across the document
```python
from aspose.note import Document, RichText, Image, Table
doc = Document("testfiles/TagSizes.one")
def dump_tags(kind: str, tags) -> None:
for t in tags:
print(kind, "tag:", t.label)
for rt in doc.GetChildNodes(RichText):
dump_tags("RichText", rt.Tags)
for img in doc.GetChildNodes(Image):
dump_tags("Image", img.Tags)
for tbl in doc.GetChildNodes(Table):
dump_tags("Table", tbl.Tags)
```
### 🧱 Work with tables in an MS OneNote document (rows/cells)
```python
from aspose.note import Document, Table, TableRow, TableCell, RichText
doc = Document("testfiles/SimpleTable.one")
for table in doc.GetChildNodes(Table):
print("Table columns:", table.ColumnWidths)
for row_index, row in enumerate(table.GetChildNodes(TableRow), start=1):
cells = row.GetChildNodes(TableCell)
values: list[str] = []
for cell in cells:
cell_text = " ".join(rt.Text for rt in cell.GetChildNodes(RichText)).strip()
values.append(cell_text)
print(f"Row {row_index}:", values)
```
### 📎 Extract attached files from an MS OneNote document
```python
from aspose.note import Document, AttachedFile
doc = Document("testfiles/OnePageWithFile.one")
for i, af in enumerate(doc.GetChildNodes(AttachedFile), start=1):
name = af.FileName or f"attachment_{i}.bin"
with open(name, "wb") as f:
f.write(af.Bytes)
print("saved:", name)
```
### 🔢 Inspect numbered lists in an MS OneNote document (NumberList + indentation)
```python
from aspose.note import Document, OutlineElement
doc = Document("testfiles/NumberedListWithTags.one")
for oe in doc.GetChildNodes(OutlineElement):
nl = oe.NumberList
if nl is None:
continue
print(
"indent=", oe.IndentLevel,
"is_numbered=", nl.IsNumbered,
"format=", nl.Format,
"restart=", nl.Restart,
)
```
## ⚠️ Current limitations
- The implementation focuses on **reading** `.one` and building a DOM; writing back to `.one` is not implemented.
- `DocumentPassword` / encrypted documents are not supported (raises `IncorrectPasswordException`).
- Saving formats other than PDF (HTML/images/ONE) are declared for compatibility but not implemented.
## 🌐 Other platforms (official Aspose.Note)
If you need the full-featured Aspose product (writing/conversion, broader compatibility, etc.), see the official libraries:
- Aspose.Note for .NET
- Product: https://products.aspose.com/note/net/
- Documentation: https://docs.aspose.com/note/net/
- Aspose.Note for Java
- Product: https://products.aspose.com/note/java/
- Documentation: https://docs.aspose.com/note/java/
## 🛠️ Development
Run tests:
```bash
python -m pip install -e ".[pdf]"
python -m unittest discover -s tests -p "test_*.py" -v
```
Third-party license notices (e.g., ReportLab used for PDF export) are in [THIRD_PARTY_NOTICES.md](https://github.com/aspose-note/aspose-note-python/blob/main/THIRD_PARTY_NOTICES.md).
| text/markdown | Aspose | null | null | null | null | onenote, one, aspose, parser, pdf | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Topic :: File Formats"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"reportlab>=3.6; extra == \"pdf\"",
"build>=1.2; extra == \"dev\"",
"twine>=5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/aspose-note-foss/Aspose.Note-FOSS-for-Python",
"Repository, https://github.com/aspose-note-foss/Aspose.Note-FOSS-for-Python",
"Issues, https://github.com/aspose-note-foss/Aspose.Note-FOSS-for-Python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:57:51.008736 | aspose_note-26.2.tar.gz | 131,012 | 47/59/f0b381c0d6c32e8cf0d6f8f78414dfd808fc5073c14bc95b26b2ca93fd77/aspose_note-26.2.tar.gz | source | sdist | null | false | d82fffa8fe4bf0b35c14b1c88a763cf0 | dae9d7a18637498e1d9ecf352fa1d5b3f6853300bb77d7c1ebc303f1429f75b3 | 4759f0b381c0d6c32e8cf0d6f8f78414dfd808fc5073c14bc95b26b2ca93fd77 | LicenseRef-Aspose-Split | [
"LICENSE",
"THIRD_PARTY_NOTICES.md"
] | 207 |
2.4 | jamma | 2.4.5 | JAMMA: JAX-Accelerated Mixed Model Association | <p align="center">
<a href="https://github.com/michael-denyer/jamma/actions/workflows/ci.yml"><img src="https://github.com/michael-denyer/jamma/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://pypi.org/project/jamma/"><img src="https://img.shields.io/pypi/v/jamma.svg" alt="PyPI"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.11+-blue.svg" alt="Python 3.11+"></a>
<a href="https://github.com/jax-ml/jax"><img src="https://img.shields.io/badge/JAX-accelerated-9cf.svg" alt="JAX"></a>
<a href="https://numpy.org/"><img src="https://img.shields.io/badge/NumPy-2.0+-013243.svg?logo=numpy" alt="NumPy"></a>
<a href="https://hypothesis.readthedocs.io/"><img src="https://img.shields.io/badge/tested%20with-Hypothesis-blue.svg" alt="Hypothesis"></a>
<a href="https://www.gnu.org/licenses/gpl-3.0"><img src="https://img.shields.io/badge/License-GPL%203.0-blue.svg" alt="License: GPL-3.0"></a>
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/michael-denyer/jamma/master/logos/JAMMA_Large_Logo_v2.png" alt="JAMMA" width="500">
</p>
**JAX-Accelerated Mixed Model Association** — A modern Python reimplementation of [GEMMA](https://github.com/genetics-statistics/GEMMA) for genome-wide association studies (GWAS).
- **GEMMA-compatible**: Drop-in replacement with identical CLI flags and output formats
- **Numerical equivalence**: Validated against GEMMA — 100% significance agreement, 100% effect direction agreement
- **Fast**: Up to 22x faster than GEMMA on kinship and 5x faster on LMM association
- **Memory-safe**: Pre-flight memory checks prevent OOM crashes before allocation
- **Pure Python**: JAX + NumPy stack, no C++ compilation required
- **Large-scale ready**: Optional [numpy-mkl ILP64](https://github.com/michael-denyer/numpy-mkl) wheels (numpy 2.4.2) for >46k sample eigendecomposition
## Installation
```bash
pip install jamma
```
Or with uv:
```bash
uv add jamma
```
## Quick Start
```bash
# Compute kinship matrix (centered relatedness)
jamma -gk 1 -bfile data/my_study -o output
# Run LMM association (Wald test)
jamma -lmm 1 -bfile data/my_study -k output/output.cXX.txt -o results
```
Output files match GEMMA format exactly:
- `output.cXX.txt` — Kinship matrix
- `results.assoc.txt` — Association results (chr, rs, ps, n_miss, allele1, allele0, af, beta, se, logl_H1, l_remle, p_wald)
- `results.log.txt` — Run log
## Python API
### One-call GWAS (recommended)
```python
from jamma import gwas
# Full pipeline: load data → kinship → eigendecomp → LMM → results
result = gwas("data/my_study", kinship_file="data/kinship.cXX.txt")
print(f"Tested {result.n_snps_tested} SNPs in {result.timing['total_s']:.1f}s")
# Compute kinship from scratch and save it
result = gwas("data/my_study", save_kinship=True, output_dir="output")
# With covariates and LRT test
result = gwas("data/my_study", kinship_file="k.txt", covariate_file="covars.txt", lmm_mode=2)
# LOCO analysis (leave-one-chromosome-out)
result = gwas("data/my_study", loco=True)
# Multi-phenotype with eigendecomp reuse
result = gwas("data/my_study", write_eigen=True, phenotype_column=1)
result = gwas("data/my_study", eigenvalue_file="output/result.eigenD.txt",
eigenvector_file="output/result.eigenU.txt", phenotype_column=2)
# SNP filtering
result = gwas("data/my_study", kinship_file="k.txt", snps_file="snps.txt", hwe=0.001)
```
### Low-level API
```python
from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_streaming
from jamma.lmm.eigen import eigendecompose_kinship
# Load PLINK data
data = load_plink_binary("data/my_study")
# Compute kinship
kinship = compute_centered_kinship(data.genotypes)
# Eigendecompose for LMM
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)
# Run association (streaming from disk)
results = run_lmm_association_streaming(
bed_path="data/my_study",
phenotypes=phenotypes,
kinship=kinship,
chunk_size=5000,
)
```
## Memory Safety
Unlike GEMMA, JAMMA includes pre-flight memory checks that prevent out-of-memory crashes:
```python
from jamma.core.memory import estimate_workflow_memory
# Check memory requirements BEFORE loading data
estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
print(f"Peak memory: {estimate.total_gb:.1f}GB")
print(f"Available: {estimate.available_gb:.1f}GB")
print(f"Sufficient: {estimate.sufficient}")
```
**Key features:**
- Pre-flight checks before large allocations (eigendecomposition, genotype loading)
- RSS memory logging at workflow boundaries
- Incremental result writing (no memory accumulation)
- Safe chunk size defaults with hard caps
GEMMA will silently OOM and get killed by the OS. JAMMA fails fast with clear error messages.
## Performance
Benchmark on mouse_hs1940 (1,940 samples × 12,226 SNPs), Apple M3 Pro:
| Operation | GEMMA | JAMMA | Speedup |
|--------------------|--------|-------|-----------|
| Kinship (`-gk 1`) | 24.7s | 1.1s | **22.5x** |
| LMM (`-lmm 1`) | 27.8s | 5.3s | **5.2x** |
| **Total** | 52.5s | 6.4s | **8.2x** |
## Supported Features
### Current
- [x] Kinship matrix computation — centered (`-gk 1`) and standardized (`-gk 2`)
- [x] Univariate LMM Wald test (`-lmm 1`)
- [x] Likelihood ratio test (`-lmm 2`)
- [x] Score test (`-lmm 3`)
- [x] All tests mode (`-lmm 4`)
- [x] LOCO kinship — leave-one-chromosome-out analysis (`-loco`)
- [x] Eigendecomposition reuse — multi-phenotype workflows (`-d`/`-u`/`-eigen`)
- [x] Phenotype column selection (`-n`)
- [x] SNP subset selection for association and kinship (`-snps`/`-ksnps`)
- [x] HWE QC filtering (`-hwe`)
- [x] Pre-computed kinship input (`-k`)
- [x] Covariate support (`-c`)
- [x] PLINK binary format (`.bed/.bim/.fam`) with input dimension validation
- [x] Large-scale streaming I/O (>100k samples via [numpy-mkl ILP64](https://github.com/michael-denyer/numpy-mkl) — numpy 2.4.2)
- [x] JAX acceleration (CPU/GPU)
- [x] Lambda optimization bounds (`-lmin`/`-lmax`)
- [x] Individual weights for kinship (`-widv`)
- [x] Categorical covariates with one-hot encoding (`-cat`)
- [x] Pre-flight memory checks (fail-fast before OOM)
- [x] RSS memory logging at workflow boundaries
- [x] Incremental result writing
### Planned
- [ ] Multivariate LMM (mvLMM)
## Documentation
- [Why JAMMA?](docs/WHY_JAMMA.md) — Key differentiators from GEMMA
- [User Guide](docs/USER_GUIDE.md) — Installation, usage examples, CLI reference
- [Code Map](docs/CODEMAP.md) — Architecture diagrams and source navigation
- [Equivalence Proof](docs/EQUIVALENCE.md) — Mathematical proofs and empirical validation against GEMMA
- [GEMMA Divergences](docs/GEMMA_DIVERGENCES.md) — Known differences from GEMMA
- [Performance](docs/PERFORMANCE.md) — Bottleneck analysis, scale validation, configuration guide
- [Contributing](CONTRIBUTING.md) — Development setup, testing, and PR guidelines
- [Changelog](CHANGELOG.md) — Version history
## Requirements
- Python 3.11+
- JAX 0.8.0+
- NumPy 2.0+
## License
GPL-3.0 (same as GEMMA)
| text/markdown | JAMMA Contributors | null | null | null | GPL-3.0-or-later | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"bed-reader>=1.0.0",
"click>=8.0.0",
"jax>=0.8.0",
"jaxlib>=0.8.0",
"jaxtyping>=0.2.28",
"loguru>=0.7.0",
"numpy>=2.0.0",
"progressbar2>=4.2.0",
"psutil>=5.9.0",
"threadpoolctl>=3.0.0",
"hypothesis>=6.100.0; extra == \"dev\"",
"hypothesis[numpy]>=6.100.0; extra == \"dev\"",
"pandas>=2.0.0; extra == \"dev\"",
"pre-commit>=3.7.0; extra == \"dev\"",
"pytest-benchmark>=5.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-randomly>=3.15.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.5.0; extra == \"dev\"",
"scipy>=1.10.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:57:25.626897 | jamma-2.4.5.tar.gz | 80,063,093 | ed/73/9c2d2ec5b1ce71644a97f420fa36fd107b1a37e5b063670a3a29cd4eed77/jamma-2.4.5.tar.gz | source | sdist | null | false | 44a0516b45438c5044346fc6ce832f96 | ee393aeb689b89a1f27e94b814641b206585e2451f0db2b7bcfb5cd582412d25 | ed739c2d2ec5b1ce71644a97f420fa36fd107b1a37e5b063670a3a29cd4eed77 | null | [
"LICENSE.md"
] | 198 |
2.4 | mail-swarms | 1.3.6 | Multi-Agent Interface Layer reference implementation | # Multi-Agent Interface Layer (MAIL)
Single-swarm example | Multi-swarm example
:-------------------:|:-------------------:
| 
**MAIL** is an **open protocol** for letting autonomous agents communicate, coordinate, and cooperate across local runtimes and distributed swarms. This repository hosts both the normative specification and a production-grade **Python/FastAPI reference implementation** that demonstrate how to build interoperable agent systems on top of the MAIL contract.
---
## Quick Links
- **Protocol specification**: [spec/SPEC.md](/spec/SPEC.md)
- **JSON Schemas**: [spec/MAIL-core.schema.json](/spec/MAIL-core.schema.json), [spec/MAIL-interswarm.schema.json](/spec/MAIL-interswarm.schema.json)
- **REST transport** (OpenAPI 3.1): [spec/openapi.yaml](/spec/openapi.yaml)
- **Reference implementation source**: [src/mail/](/src/mail/__init__.py)
- **Command-line interface**: [docs/cli.md](/docs/cli.md), `uv run mail …`
- **Asynchronous HTTP client**: [docs/client.md](/docs/client.md), [src/mail/client.py](/src/mail/client.py)
- **Deployment examples and docs**: [docs/](/docs/README.md)
## 1. MAIL Protocol Overview
### Goals
- Provide a transport-agnostic **message contract** so agents from different vendors can interoperate.
- Encode **routing, addressing, and task lifecycle semantics** that work for single-swarm and cross-swarm topologies.
- Support reliable inter-swarm federation over **standard HTTP** infrastructure.
- Remain **minimal enough** to embed inside bespoke agent runtimes or platform orchestrators.
### Message Primitives
MAIL defines five core message types that all conforming systems MUST understand. Each payload is validated against `MAIL-core.schema.json`.
| `msg_type` | Required payload fields | Typical use case |
|----------------------|-------------------------------------------------------------------------------------------|---------------------------------------------------|
| `request` | `task_id`, `request_id`, `sender`, `recipient`, `subject`, `body` | Agent-to-agent task delegation |
| `response` | `task_id`, `request_id`, `sender`, `recipient`, `subject`, `body` | Reply that correlates with a prior request |
| `broadcast` | `task_id`, `broadcast_id`, `sender`, `recipients[]`, `subject`, `body` | Notify many agents in a swarm |
| `interrupt` | `task_id`, `interrupt_id`, `sender`, `recipients[]`, `subject`, `body` | High-priority stop/alter instructions |
| `broadcast_complete` | `task_id`, `broadcast_id`, `sender`, `recipients[]`, `subject`, `body` (MAILBroadcast) | Marks task completion by a supervisor agent |
All messages are wrapped in a `MAILMessage` envelope with an `id` (UUID) and RFC 3339 timestamp. Optional fields such as `sender_swarm`, `recipient_swarm`, and `routing_info` carry federation metadata without altering the core contract.
### Addressing & Routing
- **Local agents** are addressed by name (`agent-name`).
- **Interswarm addresses** append the remote swarm (`agent-name@swarm-name`).
- **Routers** MUST wrap cross-swarm traffic in a `MAILInterswarmMessage` that includes source/target swarm identifiers and optional metadata.
- **Priority tiers** ensure urgent system and user messages preempt regular agent chatter. Within a tier, messages are FIFO by enqueue sequence.
### Transport Requirements
- The **normative HTTP binding** is published in [spec/openapi.yaml](/spec/openapi.yaml) and implemented by the reference **FastAPI** service.
- **`/message`** handles user tasks and local agent traffic. **`/tasks`** returns the caller's in-flight and completed tasks, and **`/task`** fetches a specific task record by ID. **`/interswarm/forward`** / **`/interswarm/back`** move agent traffic between swarms, and **`/interswarm/message`** proxies user/admin requests to a remote swarm.
- Implementations MUST replay responses from remote swarms back into the local queue to complete task lifecycles.
### Conformance & Validation
- Use the **included JSON Schemas** for request/response validation in any runtime.
- Run **`uv run spec/validate_samples.py`** to check sample payloads against the schemas.
- Terms defined in the spec follow RFC 2119/RFC 8174 keywords.
## 2. Reference Implementation
### Key Features
- **Persistent swarm runtime** with pluggable agents, tools, and memory backends.
- **Task resume safety** via automatic queue snapshots that stash pending task messages on completion/breakpoints and restore them when the user resumes work.
- **FastAPI HTTP server** exposing REST endpoints, **Server-Sent Events (SSE)** streams, and **interswarm messaging** routes.
- **Task introspection API** surfaces `GET /tasks` and `GET /task` so callers can audit active work, inspect SSE timelines, and resume confidently from any state.
- **CLI launcher** (`mail server`, `mail client`) for running the server and an interactive REPL without writing code.
- **Async MAIL client** (`MAILClient`) mirroring the REST API with SSE helpers for quick integrations.
- Built-in **swarm registry** with **health checks** and **service discovery** for distributed deployments.
- **Configurable authentication layer** that plugs into external auth/token providers.
- **Example agents** (`supervisor`, `weather`, `math`, cross-swarm demos) showcasing MAIL usage patterns.
### Architecture Highlights
- **[src/mail/core/runtime.py](/src/mail/core/runtime.py)**: Mailbox scheduling, task orchestration, priority queues, and tool execution.
- **[src/mail/server.py](/src/mail/server.py)**: FastAPI application with REST + SSE endpoints and interswarm routing.
- **[src/mail/net/router.py](/src/mail/net/router.py)**: HTTP federation between swarms, including metadata rewriting.
- **[src/mail/net/registry.py](/src/mail/net/registry.py)**: Service registry and liveness monitoring for remote swarms.
- **[src/mail/factories/](/src/mail/factories/__init__.py)**: Agent functions that instantiate agents with their LLM/tool configuration.
- **[src/mail/examples/](/src/mail/examples/__init__.py)**: Example agents and prompts.
The runtime processes MAIL messages **asynchronously**, tracks per-task state, and produces `broadcast_complete` events to signal overall task completion.
## 3. Getting Started
### Prerequisites
- **Python 3.12+**
- [`uv`](https://github.com/astral-sh/uv) package manager (recommended) or `pip`
- **[LiteLLM](https://github.com/BerriAI/litellm) proxy endpoint** for LLM calls
- **Authentication service** providing `/auth/login` and `/auth/check` (see below)
### Installation
```bash
# Clone and enter the repository
git clone https://github.com/charonlabs/mail --branch v1.3.6
cd mail
# Install dependencies (preferred)
uv sync
# or, using pip
pip install -e .
```
### Configuration
Set the following **environment variables** before starting the server:
```bash
# Authentication endpoints
export AUTH_ENDPOINT=http://your-auth-server/auth/login
export TOKEN_INFO_ENDPOINT=http://your-auth-server/auth/check
# LLM proxy (required only if your swarm uses use_proxy=true)
export LITELLM_PROXY_API_BASE=http://your-litellm-proxy
# Optional provider keys (required for direct provider calls)
export OPENAI_API_KEY=sk-your-openai-api-key
export ANTHROPIC_API_KEY=sk-your-anthropic-key
# Optional persistence (set to "none" to disable)
export DATABASE_URL=postgresql://...
```
Defaults for host, port, swarm metadata, and client behaviour are loaded from [`mail.toml`](mail.toml). The `[server.settings]` table exposes `task_message_limit`, which bounds how many messages the runtime will process per task when `run_continuous` is active (default `15`), and `print_llm_streams` (default `true`), which controls whether runtime-managed agents print LLM reasoning/response stream chunks to server stdout. Set `print_llm_streams=false` (or pass `mail server --print-llm-streams false`) for quieter server logs; task/event SSE streaming is unaffected. Override the file or point `MAIL_CONFIG_PATH` at an alternate TOML to adjust these values per environment. Use CLI flags such as `--swarm-name`, `--swarm-source`, `--swarm-registry`, and `--print-llm-streams true|false` (or edit `mail.toml`) to override these at launch; `mail server` exports `SWARM_NAME`, `SWARM_SOURCE`, `SWARM_REGISTRY_FILE`, and `BASE_URL` for downstream tools but does not read them as config overrides.
MAIL will create the parent directory for `SWARM_REGISTRY_FILE` on startup if it is missing, so you can rely on the default `registries/` path without committing the folder.
**Swarm definitions** live in [swarms.json](/swarms.json). Each entry declares the agents, entrypoint, tools, and default models for a swarm.
### Run a Local Swarm
```bash
# Start the FastAPI server (includes SSE + registry)
uv run mail server
# or explicitly
uv run -m mail.server
```
### Federate Two Swarms (Example)
```bash
# Terminal 1
uv run mail server --port 8000 --swarm-name swarm-alpha --swarm-registry registries/swarm-alpha.json
# Terminal 2
uv run mail server --port 8001 --swarm-name swarm-beta --swarm-registry registries/swarm-beta.json
# Register each swarm with the other (requires admin bearer token)
curl -X POST http://localhost:8000/swarms \
-H "Authorization: Bearer $ADMIN_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "swarm-beta", "base_url": "http://localhost:8001"}'
curl -X POST http://localhost:8001/swarms \
-H "Authorization: Bearer $ADMIN_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "swarm-alpha", "base_url": "http://localhost:8000"}'
```
Agents can now address peers using `agent-name@swarm-name`, and responses will route back automatically.
## 4. Repository Layout
```
mail/
├── spec/ # Protocol specification, schemas, validation utilities
├── src/mail/ # Reference implementation (core runtime + FastAPI services)
├── docs/ # Supplemental docs (registry, inter-swarm, auth, etc.)
├── swarms.json # Default swarm configurations
├── tests/ # Pytest suite covering protocol + runtime behaviors
├── scripts/ # Operational helpers (deploy, smoke tests, tooling)
├── registries/ # Swarm registry persistence (created as needed)
├── assets/ # Diagrams and static assets (README image, etc.)
└── pyproject.toml # Project metadata and dependency definitions
```
## 5. Development Workflow
- **`uv run mail server`** – run the reference server locally.
- **`uv run pytest -q`** – execute the automated test suite.
- **`uv run ruff check --fix .`** – lint and auto-fix style issues.
- **`uv run spec/validate_samples.py`** – validate example MAIL payloads against the schemas.
## 6. Documentation & Resources
- **Quickstart guide**: [docs/quickstart.md](/docs/quickstart.md)
- **Architecture deep-dive**: [docs/architecture.md](/docs/architecture.md)
- **Protocol message format reference**: [docs/message-format.md](/docs/message-format.md)
- **HTTP/API surface**: [docs/api.md](/docs/api.md)
- **Swarm configuration & registry operations**: [docs/configuration.md](/docs/configuration.md), [docs/registry.md](/docs/registry.md)
- **Database persistence**: [docs/database.md](/docs/database.md)
- **HTTP client usage**: [docs/client.md](/docs/client.md)
- **Security hardening checklist**: [docs/security.md](/docs/security.md)
- **Agents, tools, and examples**: [docs/agents-and-tools.md](/docs/agents-and-tools.md), [docs/examples.md](/docs/examples.md)
- **Testing and troubleshooting**: [docs/testing.md](/docs/testing.md), [docs/troubleshooting.md](/docs/troubleshooting.md)
- **Runtime source directories**: [src/mail/examples/](/src/mail/examples/__init__.py), [src/mail/factories/](/src/mail/factories/__init__.py)
## 7. Contributing
- **Read [CONTRIBUTING.md](/CONTRIBUTING.md)** for branching, issue, and review guidelines.
- All commits require a **Developer Certificate of Origin sign-off** (`git commit -s`).
- Please open an issue to propose significant protocol changes before implementation.
- Core maintainers are listed in [MAINTAINERS.md](/MAINTAINERS.md).
## 8. Licensing & Trademarks
- Reference implementation code: **Apache License 2.0** ([LICENSE](/LICENSE)).
- Specification text: **Creative Commons Attribution 4.0** ([SPEC-LICENSE](/SPEC-LICENSE)).
- Essential patent claims: **Open Web Foundation Final Specification Agreement 1.0** ([SPEC-PATENT-LICENSE](/SPEC-PATENT-LICENSE)).
- Trademarks and descriptive use policy: [TRADEMARKS.md](/TRADEMARKS.md).
Using the spec or code implies acceptance of their respective terms.
---
For questions, bug reports, or feature requests, open an issue or start a discussion in this repository.
| text/markdown | null | null | null | null | null | null | [
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.9.0",
"asyncpg>=0.31.0",
"dict2xml>=1.7.7",
"fastapi>=0.104.1",
"fastmcp>=2.12.5",
"langchain-core>=0.3.72",
"langchain>=0.3.27",
"langgraph>=0.6.3",
"langmem>=0.0.29",
"litellm>=1.76.2",
"numpydoc>=1.9.0",
"openai>=1.106.1",
"pydantic>=2.11.7",
"pyjwt>=2.10.1",
"pyreadline3>=3.5.4; platform_system == \"Windows\"",
"rich>=13.0.0",
"sse-starlette>=3.0.2",
"toml>=0.10.2",
"types-toml>=0.10.8.20240310",
"types-ujson>=5.10.0.20250822",
"ujson>=5.8.0",
"uvicorn>=0.24.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:56:55.677880 | mail_swarms-1.3.6.tar.gz | 3,227,204 | 62/b5/3b868183797eab21c6bab349f0de1c3146614bf2e58a23e670342c8646cc/mail_swarms-1.3.6.tar.gz | source | sdist | null | false | ee2f1fb00e13b65420318812b11efd49 | b2cedc31040f2aaf433825462718dc4ed42115cd3b41497e1c3052e5abf61e51 | 62b53b868183797eab21c6bab349f0de1c3146614bf2e58a23e670342c8646cc | Apache-2.0 | [
"LICENSE",
"NOTICE",
"THIRD_PARTY_NOTICES.md"
] | 196 |
2.4 | monnect | 1.0.0 | Automatically connect or disconnect a Bluetooth speaker when a specific display is connected on macOS. | # Monnect
Automatically connect or disconnect a Bluetooth speaker when a specific monitor is connected on macOS.
Monnect runs as a background LaunchAgent and manages Bluetooth audio based on display state.
---
## Features
- Auto-connect speaker when monitor is detected
- Auto-disconnect when monitor is removed
- Debounced monitor detection (no flicker)
- `monnect doctor` for environment validation
- `monnect status` for live system state
- CLI lifecycle management
- Installable via pipx
---
## Requirements
- macOS
- Python 3.9+
- Homebrew
- blueutil (auto-installed during setup if missing)
---
## Installation
Recommended:
brew install pipx
pipx ensurepath
pipx install monnect
For development:
git clone https://github.com/aki21j/Monnect.git
cd Monnect
pipx install -e .
---
## Setup
Run interactive setup:
monnect setup
This will:
- Detect available displays
- Detect paired Bluetooth devices
- Install background service
---
## Commands
Start service:
monnect start
Stop service:
monnect stop
Check status:
monnect status
Run diagnostics:
monnect doctor
Uninstall service:
monnect uninstall
---
## How It Works
Monnect:
1. Monitors display state using system_profiler
2. Checks Bluetooth state via blueutil
3. Applies debounced logic
4. Connects or disconnects speaker accordingly
5. Runs continuously via macOS LaunchAgent
---
## Development
pipx uninstall monnect
pipx install -e .
---
## License
MIT License
| text/markdown | Ankit Gupta | null | null | null | MIT | null | [
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Environment :: Console"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/aki21j/Monnect"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T22:56:49.382204 | monnect-1.0.0.tar.gz | 5,770 | b1/41/8427eaec872d04e1f3b8ec85982834e54f2a18e2c52eac45268fbb652cfe/monnect-1.0.0.tar.gz | source | sdist | null | false | 27cf0108a2d75fd8aac5abe8b9f8c52f | eadddeedf82e7d59f7ac91000bfe69dc296ae54780fcad3dde868fd498710a45 | b1418427eaec872d04e1f3b8ec85982834e54f2a18e2c52eac45268fbb652cfe | null | [
"LICENSE"
] | 227 |
2.4 | openai-agents | 0.9.3 | OpenAI Agents SDK | # OpenAI Agents SDK [](https://pypi.org/project/openai-agents/)
The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. It is provider-agnostic, supporting the OpenAI Responses and Chat Completions APIs, as well as 100+ other LLMs.
<img src="https://cdn.openai.com/API/docs/images/orchestration.png" alt="Image of the Agents Tracing UI" style="max-height: 803px;">
> [!NOTE]
> Looking for the JavaScript/TypeScript version? Check out [Agents SDK JS/TS](https://github.com/openai/openai-agents-js).
### Core concepts:
1. [**Agents**](https://openai.github.io/openai-agents-python/agents): LLMs configured with instructions, tools, guardrails, and handoffs
2. [**Handoffs**](https://openai.github.io/openai-agents-python/handoffs/): A specialized tool call used by the Agents SDK for transferring control between agents
3. [**Guardrails**](https://openai.github.io/openai-agents-python/guardrails/): Configurable safety checks for input and output validation
4. [**Sessions**](#sessions): Automatic conversation history management across agent runs
5. [**Tracing**](https://openai.github.io/openai-agents-python/tracing/): Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
Explore the [examples](examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.
## Get started
To get started, set up your Python environment (Python 3.10 or newer required), and then install OpenAI Agents SDK package.
### venv
```bash
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install openai-agents
```
For voice support, install with the optional `voice` group: `pip install 'openai-agents[voice]'`.
For Redis session support, install with the optional `redis` group: `pip install 'openai-agents[redis]'`.
### uv
If you're familiar with [uv](https://docs.astral.sh/uv/), installing the package would be even easier:
```bash
uv init
uv add openai-agents
```
For voice support, install with the optional `voice` group: `uv add 'openai-agents[voice]'`.
For Redis session support, install with the optional `redis` group: `uv add 'openai-agents[redis]'`.
## Hello world example
```python
from agents import Agent, Runner
agent = Agent(name="Assistant", instructions="You are a helpful assistant")
result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```
(_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)
(_For Jupyter notebook users, see [hello_world_jupyter.ipynb](examples/basic/hello_world_jupyter.ipynb)_)
## Handoffs example
```python
from agents import Agent, Runner
import asyncio
spanish_agent = Agent(
name="Spanish agent",
instructions="You only speak Spanish.",
)
english_agent = Agent(
name="English agent",
instructions="You only speak English",
)
triage_agent = Agent(
name="Triage agent",
instructions="Handoff to the appropriate agent based on the language of the request.",
handoffs=[spanish_agent, english_agent],
)
async def main():
result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
print(result.final_output)
# ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?
if __name__ == "__main__":
asyncio.run(main())
```
## Functions example
```python
import asyncio
from agents import Agent, Runner, function_tool
@function_tool
def get_weather(city: str) -> str:
return f"The weather in {city} is sunny."
agent = Agent(
name="Hello world",
instructions="You are a helpful agent.",
tools=[get_weather],
)
async def main():
result = await Runner.run(agent, input="What's the weather in Tokyo?")
print(result.final_output)
# The weather in Tokyo is sunny.
if __name__ == "__main__":
asyncio.run(main())
```
## The agent loop
When you call `Runner.run()`, we run a loop until we get a final output.
1. We call the LLM, using the model and settings on the agent, and the message history.
2. The LLM returns a response, which may include tool calls.
3. If the response has a final output (see below for more on this), we return it and end the loop.
4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
5. We process the tool calls (if any) and append the tool responses messages. Then we go to step 1.
There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
### Final output
Final output is the last thing the agent produces in the loop.
1. If you set an `output_type` on the agent, the final output is when the LLM returns something of that type. We use [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) for this.
2. If there's no `output_type` (i.e. plain text responses), then the first LLM response without any tool calls or handoffs is considered as the final output.
As a result, the mental model for the agent loop is:
1. If the current agent has an `output_type`, the loop runs until the agent produces structured output matching that type.
2. If the current agent does not have an `output_type`, the loop runs until the current agent produces a message without any tool calls/handoffs.
## Common agent patterns
The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows including deterministic flows, iterative loops, and more. See examples in [`examples/agent_patterns`](examples/agent_patterns).
## Tracing
The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk), [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration), [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent), and many more. For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing), which also includes a larger list of [external tracing processors](http://openai.github.io/openai-agents-python/tracing/#external-tracing-processors-list).
## Long running agents & human-in-the-loop
There are several options for long-running agents. Refer to [the documentation](https://openai.github.io/openai-agents-python/running_agents/#long-running-agents-human-in-the-loop) for details.
## Sessions
The Agents SDK provides built-in session memory to automatically maintain conversation history across multiple agent runs, eliminating the need to manually handle `.to_input_list()` between turns.
### Quick start
```python
from agents import Agent, Runner, SQLiteSession
# Create agent
agent = Agent(
name="Assistant",
instructions="Reply very concisely.",
)
# Create a session instance
session = SQLiteSession("conversation_123")
# First turn
result = await Runner.run(
agent,
"What city is the Golden Gate Bridge in?",
session=session
)
print(result.final_output) # "San Francisco"
# Second turn - agent automatically remembers previous context
result = await Runner.run(
agent,
"What state is it in?",
session=session
)
print(result.final_output) # "California"
# Also works with synchronous runner
result = Runner.run_sync(
agent,
"What's the population?",
session=session
)
print(result.final_output) # "Approximately 39 million"
```
### Session options
- **No memory** (default): No session memory when session parameter is omitted
- **`session: Session = DatabaseSession(...)`**: Use a Session instance to manage conversation history
```python
from agents import Agent, Runner, SQLiteSession
# SQLite - file-based or in-memory database
session = SQLiteSession("user_123", "conversations.db")
# Redis - for scalable, distributed deployments
# from agents.extensions.memory import RedisSession
# session = RedisSession.from_url("user_123", url="redis://localhost:6379/0")
agent = Agent(name="Assistant")
# Different session IDs maintain separate conversation histories
result1 = await Runner.run(
agent,
"Hello",
session=session
)
result2 = await Runner.run(
agent,
"Hello",
session=SQLiteSession("user_456", "conversations.db")
)
```
### Custom session implementations
You can implement your own session memory by creating a class that follows the `Session` protocol:
```python
from agents.memory import Session
from typing import List
class MyCustomSession:
"""Custom session implementation following the Session protocol."""
def __init__(self, session_id: str):
self.session_id = session_id
# Your initialization here
async def get_items(self, limit: int | None = None) -> List[dict]:
# Retrieve conversation history for the session
pass
async def add_items(self, items: List[dict]) -> None:
# Store new items for the session
pass
async def pop_item(self) -> dict | None:
# Remove and return the most recent item from the session
pass
async def clear_session(self) -> None:
# Clear all items for the session
pass
# Use your custom session
agent = Agent(name="Assistant")
result = await Runner.run(
agent,
"Hello",
session=MyCustomSession("my_session")
)
```
## Development (only needed if you need to edit the SDK/examples)
0. Ensure you have [`uv`](https://docs.astral.sh/uv/) installed.
```bash
uv --version
```
1. Install dependencies
```bash
make sync
```
2. (After making changes) lint/test
```
make check # run tests linter and typechecker
```
Or to run them individually:
```
make tests # run tests
make mypy # run typechecker
make lint # run linter
make format-check # run style checker
```
Format code if `make format-check` fails above by running:
```
make format
```
## Acknowledgements
We'd like to acknowledge the excellent work of the open-source community, especially:
- [Pydantic](https://docs.pydantic.dev/latest/) (data validation) and [PydanticAI](https://ai.pydantic.dev/) (advanced agent framework)
- [LiteLLM](https://github.com/BerriAI/litellm) (unified interface for 100+ LLMs)
- [MkDocs](https://github.com/squidfunk/mkdocs-material)
- [Griffe](https://github.com/mkdocstrings/griffe)
- [uv](https://github.com/astral-sh/uv) and [ruff](https://github.com/astral-sh/ruff)
We're committed to continuing to build the Agents SDK as an open source framework so others in the community can expand on our approach.
| text/markdown | null | OpenAI <support@openai.com> | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"griffe<2,>=1.5.6",
"mcp<2,>=1.19.0; python_version >= \"3.10\"",
"openai<3,>=2.19.0",
"pydantic<3,>=2.12.3",
"requests<3,>=2.0",
"types-requests<3,>=2.0",
"typing-extensions<5,>=4.12.2",
"dapr>=1.16.0; extra == \"dapr\"",
"grpcio>=1.60.0; extra == \"dapr\"",
"cryptography<46,>=45.0; extra == \"encrypt\"",
"litellm<2,>=1.81.0; extra == \"litellm\"",
"websockets<16,>=15.0; extra == \"realtime\"",
"redis>=7; extra == \"redis\"",
"asyncpg>=0.29.0; extra == \"sqlalchemy\"",
"sqlalchemy>=2.0; extra == \"sqlalchemy\"",
"graphviz>=0.17; extra == \"viz\"",
"numpy<3,>=2.2.0; python_version >= \"3.10\" and extra == \"voice\"",
"websockets<16,>=15.0; extra == \"voice\""
] | [] | [] | [] | [
"Homepage, https://openai.github.io/openai-agents-python/",
"Repository, https://github.com/openai/openai-agents-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:56:22.531903 | openai_agents-0.9.3.tar.gz | 2,388,441 | be/bb/22e3fceb0a969f98734de71b51c09df99e66fb0f38609b63415308d7d71f/openai_agents-0.9.3.tar.gz | source | sdist | null | false | 7f4f97e1feee9dde5a1630a6dc54aaa9 | 65b150a86cae36f42da910a3a3793559ffbbe00c6fb8545e570867b84d25dbeb | bebb22e3fceb0a969f98734de71b51c09df99e66fb0f38609b63415308d7d71f | MIT | [
"LICENSE"
] | 41,588 |
2.4 | horsies | 0.1.0a17 | A Python library for distributed task execution | <p align="center">
<img src="https://suleymanozkeskin.github.io/horsies/galloping-horsie.jpg" alt="Horsies Logo" width="200" style="border-radius: 20px" />
</p>
# Horsies
**PostgreSQL-backed background task queue and workflow engine for Python.**
[**Full Documentation**](https://suleymanozkeskin.github.io/horsies/) | [**PyPI**](https://pypi.org/project/horsies/) | [**GitHub**](https://github.com/suleymanozkeskin/horsies)
---
## Monitoring
Horsies includes **Syce**, a terminal-based UI for monitoring your cluster in real-time.

[**Syce Setup & Usage**](https://suleymanozkeskin.github.io/horsies/monitoring/syce-overview/)
## Test Coverage
Overall test coverage: **77%**
| text/markdown | Suleyman Ozkeskin | null | null | null | null | task-queue, workflow-engine, dag, scheduling, distributed, postgres, async | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: System :: Distributed Computing",
"Topic :: System :: Networking",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"greenlet>=3.3.0",
"psutil>=7.2.1",
"psycopg>=3.3.2",
"psycopg-pool>=3.2.0",
"pydantic>=2.12.5",
"sqlalchemy>=2.0.46",
"psycopg-binary>=3.3.2; extra == \"pg-binary\"",
"psycopg-c>=3.3.2; extra == \"pg-c\""
] | [] | [] | [] | [
"Homepage, https://github.com/suleymanozkeskin/horsies",
"Documentation, https://suleymanozkeskin.github.io/horsies/",
"Repository, https://github.com/suleymanozkeskin/horsies",
"Issues, https://github.com/suleymanozkeskin/horsies/issues"
] | uv/0.5.1 | 2026-02-20T22:56:06.355125 | horsies-0.1.0a17.tar.gz | 166,107 | 0b/4d/b108773435d69f2667fd940e36e35eac48b6cc4b0fd2e4e52c73f9354ed0/horsies-0.1.0a17.tar.gz | source | sdist | null | false | aa55341d13946e40287e269b82422ed3 | b70c0663a46e744e05282473816a177e12c7ca996d28f181ed477d0e0f80ed9b | 0b4db108773435d69f2667fd940e36e35eac48b6cc4b0fd2e4e52c73f9354ed0 | null | [] | 160 |
2.4 | domain-check-mcp | 1.0.0 | RDAP-first domain availability checking MCP server | # domain-check-mcp
[](https://pypi.org/project/domain-check-mcp/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/)

[](https://modelcontextprotocol.io)
[](https://claude.ai/)
[](https://github.com/stucchi/domain-check-mcp/stargazers)
[](https://github.com/stucchi/domain-check-mcp/issues)
MCP server for checking domain name availability. Supports 500+ TLDs via RDAP, with WHOIS fallback for .de and .cn.
## Installation
```bash
uvx domain-check-mcp
```
## Usage in .mcp.json
```json
{
"mcpServers": {
"domain-check": {
"command": "uvx",
"args": ["domain-check-mcp"]
}
}
}
```
## Tools
- **check_domain** — Check if a domain name is available for registration
### Example
```
check_domain("example.com")
```
Returns:
```json
{
"domain": "example.com",
"available": false,
"status": "registered"
}
```
## Supported TLDs
### Via RDAP (500+)
All major gTLDs and many ccTLDs with RDAP support, sourced from the [IANA RDAP Bootstrap](https://data.iana.org/rdap/dns.json):
.com, .net, .org, .info, .app, .dev, .io, .xyz, .site, .shop, .uk, .fr, .nl, .pl, .consulting, .cloud, .tech, .blog, .store, .online, and [many more](src/domain_engine/tld_registry.py).
### Via WHOIS
| TLD | WHOIS Server |
|-----|-------------|
| .de | whois.denic.de |
| .cn | whois.cnnic.cn |
| .fj | www.whois.fj |
| .gs | whois.nic.gs |
| .bayern | whois.nic.bayern |
| .cat | whois.nic.cat |
| .eus | whois.nic.eus |
| .radio | whois.nic.radio |
| .scot | whois.nic.scot |
| .sport | whois.nic.sport |
## How it works
1. Extracts the TLD from the domain name
2. Routes to the appropriate adapter (RDAP or WHOIS)
3. **RDAP**: HTTP lookup — 404 means available, 200 means registered
4. **WHOIS**: TCP port 43 lookup — pattern matching on the response
5. Returns a structured result with availability status
## Development
```bash
git clone https://github.com/stucchi/domain-check-mcp.git
cd domain-check-mcp
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pytest
```
## License
MIT
<!-- mcp-name: io.github.stucchi/domain-check -->
| text/markdown | null | "Ing. Luca Stucchi" <luca.stucchi@gmail.com> | null | null | null | mcp, domain, rdap, whois, availability | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx",
"mcp[cli]",
"pytest; extra == \"dev\"",
"pytest-httpx; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T22:55:38.499409 | domain_check_mcp-1.0.0.tar.gz | 11,484 | ab/44/3308529a791c8b9208ea624eb9ed9b39e488f2b41e852ee7a0f0fefafab9/domain_check_mcp-1.0.0.tar.gz | source | sdist | null | false | df8a53b88349b5271b2279405cf93c61 | 3d9758da7d46bbe7628f41dae45494551c9d9fefbfb864051b025be67c00b2f9 | ab443308529a791c8b9208ea624eb9ed9b39e488f2b41e852ee7a0f0fefafab9 | MIT | [] | 167 |
2.4 | file-modification-checker | 0.2.0 | A fast CLI tool to check for modifications. | # Modification Checking Tool
A fast CLI tool to detect new, deleted, and modified files in a folder.
## 0.1.0
### Installing and Importing
> **Installing**
Use:
```bash
pip install file_modfication_checker==0.1.0
```
>**Importing**
Use:
```python
import file_modfication_checker
```
### Command Usage
> **How To Use**
Usage:
```bash
mcheck <folder> [OPTIONS]
```
Arguments:
<folder> Name of folder to check
Options:
-e, --exclude File types to ignore
> **Result**
- Red Text: files that were deleted
- Green Text: files that were added
- Yellow Text: files that were modified
## 0.2.0
### Installing and Importing
> **Installing**
Use:
```bash
pip install file_modfication_checker==0.2.0
```
>**Importing**
Use:
```python
import file_modfication_checker
```
### Comand Usage
> **How To Use**
Usage:
```bash
mcheck <folder> [OPTIONS]
```
Arguments:
<folder> Name of folder to check
Options:
-e, --exclude File types to ignore
-i, --information Type of information to show
> **Result**
- Red Text: files that were deleted - no infromation will be given
- Green Text: files that were added - requested information will be given
- Yellow Text: files that were modified - requested information will be given
| text/markdown | Arel Umut Koyluoglu | null | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.5 | 2026-02-20T22:55:31.178931 | file_modification_checker-0.2.0.tar.gz | 2,707 | e7/60/f5da16f2d76d68a0534110a3821f8bb467ff146151aacb929e15c5a056ef/file_modification_checker-0.2.0.tar.gz | source | sdist | null | false | ffd2ebc1492b97478d6cbb7bad3d9f8f | 96ab84cd645e8eb46151ec9ada4332515dfbc89da7e0e7bdadc14cdbee078056 | e760f5da16f2d76d68a0534110a3821f8bb467ff146151aacb929e15c5a056ef | null | [] | 178 |
2.3 | pdfrest | 1.0.1 | Python client library for interacting with the PDFRest API | # pdfrest
Python client library for the PDFRest service. The project is managed with
[uv](https://docs.astral.sh/uv/) and targets Python 3.9 and newer.
## Running examples
```bash
uvx nox -s examples
uv run nox -s run-example -- examples/delete/delete_example.py
```
## Getting started
```bash
uv sync
uv run python -c "import pdfrest; print(pdfrest.__version__)"
```
## Development
To install the tooling used by CI locally, include the `--group dev` flag:
```bash
uv sync --group dev
```
It is recommended to enable the pre-commit hooks after installation:
```bash
uv run pre-commit install
```
Run the test suite with:
```bash
uv run pytest
```
Check per-function coverage for the client classes:
```bash
uvx nox -s class-coverage
```
To reuse an existing `coverage/py<version>/coverage.json` without rerunning
tests, add `-- --no-tests` (and optional `--coverage-json path`).
## Documentation
Run the docs site locally:
```bash
uv run mkdocs serve
```
Build the static documentation site:
```bash
uv run mkdocs build --strict
```
| text/markdown | Datalogics | null | Datalogics | null | null | api, document-processing, pdf, pdfrest, sdk | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"exceptiongroup>=1.3.0",
"httpx>=0.28.1",
"langcodes>=3.4.0",
"pydantic>=2.12.0"
] | [] | [] | [] | [
"Documentation, https://python.pdfrest.com/",
"Homepage, https://pdfrest.com/",
"Source, https://github.com/pdfrest/pdfrest-python"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T22:55:18.185090 | pdfrest-1.0.1-py3-none-any.whl | 63,761 | a9/5f/aa0f395a3c394881ebc6c0ae36ecb022253761e0c6fd113b4b3393f8a144/pdfrest-1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | d6d43195d9fa472bce03a5adf155719e | b775c3e4eed8c1905ba45736de42f023de5a2930969ee3550612074c08635582 | a95faa0f395a3c394881ebc6c0ae36ecb022253761e0c6fd113b4b3393f8a144 | null | [] | 165 |
2.4 | memctl | 0.8.0 | A Unix-native memory control plane for LLM orchestration | # memctl
**A Unix-native memory control plane for LLM orchestration.**
One file, one truth. Ingest files, recall with FTS5, pipe into any LLM.
```
pip install memctl
memctl init
memctl push "project architecture" --source src/ | llm "Summarize the architecture"
echo "The architecture uses event sourcing" | memctl pull --tags arch
```
---
## Why memctl?
LLMs forget everything between turns. memctl gives them persistent, structured, policy-governed memory backed by a single SQLite file.
- **Zero dependencies** — stdlib only. No numpy, no torch, no compiled extensions.
- **One file** — Everything in `memory.db` (SQLite + FTS5 + WAL).
- **Unix composable** — `push` writes to stdout, `pull` reads from stdin. Pipe freely.
- **Policy-governed** — 35 detection patterns block secrets, injection, instructional content, and PII before storage.
- **Content-addressed** — SHA-256 dedup ensures idempotent ingestion.
- **Forward-compatible** — Identical schema to [RAGIX](https://github.com/ovitrac/RAGIX). Upgrade seamlessly.
---
## Installation
```bash
pip install memctl
```
For Office/ODF document ingestion (.docx, .odt, .pptx, .odp, .xlsx, .ods):
```bash
pip install memctl[docs]
```
For MCP server support (Claude Code / Claude Desktop):
```bash
pip install memctl[mcp]
```
For everything:
```bash
pip install memctl[all]
```
**Requirements:** Python 3.10+ (3.12 recommended). No compiled dependencies for core.
PDF extraction requires `pdftotext` from poppler-utils (`sudo apt install poppler-utils` or `brew install poppler`).
---
## Quickstart
### 1. Initialize a memory workspace
```bash
memctl init
# Creates .memory/memory.db, .memory/config.json, .memory/.gitignore
```
Set the environment variable for convenience:
```bash
eval $(memctl init)
# Sets MEMCTL_DB=.memory/memory.db
```
### 2. Ingest files and recall
```bash
# Ingest source files + recall matching items → injection block on stdout
memctl push "authentication flow" --source src/auth/
# Ingest Office documents (requires memctl[docs])
memctl push "project status" --source reports/*.docx slides/*.pptx
# Ingest PDFs (requires pdftotext)
memctl push "specifications" --source specs/*.pdf
# Recall only (no ingestion)
memctl push "database schema"
```
### 3. Store LLM output
```bash
# Pipe LLM output into memory
echo "We chose JWT for stateless auth" | memctl pull --tags auth,decision --title "Auth decision"
# Or pipe from any LLM CLI
memctl push "API design" | llm "Analyze this" | memctl pull --tags api
```
### 4. Search
```bash
# Human-readable
memctl search "authentication"
# JSON for scripts
memctl search "database" --json -k 5
```
### 5. Inspect a folder (one-liner)
```bash
# Auto-mounts, auto-syncs, and inspects — all in one command
memctl inspect docs/
# Same in JSON (for scripts)
memctl inspect docs/ --json
# Skip sync (use cached state)
memctl inspect docs/ --no-sync
```
`inspect` auto-mounts the folder if needed, checks staleness, syncs only if stale, and produces a structural summary. All implicit actions are announced on stderr.
### 6. Ask a question about a folder
```bash
# One-shot: auto-mount, auto-sync, inspect + recall → LLM → answer
memctl ask docs/ "What authentication risks exist?" --llm "claude -p"
# With Ollama
memctl ask src/ "What is under-documented?" --llm "ollama run granite3.1:2b"
# JSON output with metadata
memctl ask docs/ "Summarize the architecture" --llm "claude -p" --json
```
`ask` combines mount, sync, structural inspection, and scoped recall into a single command. The LLM receives both the folder structure and content context.
### 7. Chat with memory-backed context
```bash
# Interactive chat with any LLM
memctl chat --llm "claude -p" --session
# With pre-ingested files and answer storage
memctl chat --llm "ollama run granite3.1:2b" --source docs/ --store --session
```
Each question recalls from the memory store, sends context + question to the LLM, and displays the answer. `--session` keeps a sliding window of recent Q&A pairs. `--store` persists answers as STM items.
### 8. Manage
```bash
memctl show MEM-abc123def456 # Show item details
memctl stats # Store metrics
memctl stats --json # Machine-readable stats
memctl consolidate # Merge similar STM items
memctl consolidate --dry-run # Preview without writing
```
---
## CLI Reference
```
memctl <command> [options]
```
### Commands
| Command | Description |
|---------|-------------|
| `init [PATH]` | Initialize a memory workspace (default: `.memory`) |
| `push QUERY [--source ...]` | Ingest files + recall matching items to stdout |
| `pull [--tags T] [--title T]` | Read stdin, store as memory items |
| `search QUERY [-k N]` | FTS5 full-text search |
| `show ID` | Display a single memory item |
| `stats` | Store statistics |
| `consolidate [--dry-run]` | Deterministic merge of similar STM items |
| `loop QUERY --llm CMD` | Bounded recall-answer loop with LLM |
| `mount PATH` | Register a folder as a structured source |
| `sync [PATH]` | Delta-sync mounted folders into the store |
| `inspect [PATH]` | Structural inspection with auto-mount and auto-sync |
| `ask PATH "Q" --llm CMD` | One-shot folder Q&A (inspect + scoped recall + loop) |
| `chat --llm CMD` | Interactive memory-backed chat REPL |
| `export [--tier T]` | Export memory items as JSONL to stdout |
| `import [FILE]` | Import memory items from JSONL file or stdin |
| `serve` | Start MCP server (requires `memctl[mcp]`) |
### Global Flags
| Flag | Description |
|------|-------------|
| `--db PATH` | SQLite database path |
| `--config PATH` | Path to `config.json` (auto-detected beside database) |
| `--json` | Machine-readable JSON output |
| `-q, --quiet` | Suppress stderr progress messages |
| `-v, --verbose` | Enable debug logging |
### Command Details
#### `memctl init`
```bash
memctl init [PATH] [--force] [--fts-tokenizer fr|en|raw]
```
Creates the workspace directory, SQLite database with schema, `config.json`, and `.gitignore`. Prints `export MEMCTL_DB="..."` to stdout for eval.
Idempotent: running twice on the same path exits 0 without error.
#### `memctl push`
```bash
memctl push QUERY [--source FILE ...] [--budget N] [--tier TIER] [--tags T] [--scope S]
```
Two-phase command:
1. **Ingest** (optional): processes `--source` files with SHA-256 dedup and paragraph chunking.
2. **Recall**: FTS5 search for QUERY, format matching items as an injection block on stdout.
stdout contains only the injection block (`format_version=1`). Progress goes to stderr.
#### `memctl pull`
```bash
echo "..." | memctl pull [--tags T] [--title T] [--scope S]
```
Reads text from stdin and stores it as memory items. Attempts structured proposal extraction first; falls back to single-note storage. All content passes through the policy engine before storage.
#### `memctl search`
```bash
memctl search QUERY [--tier TIER] [--type TYPE] [-k N] [--json]
```
FTS5 full-text search. Returns human-readable output by default, or JSON with `--json`.
#### `memctl consolidate`
```bash
memctl consolidate [--scope S] [--dry-run] [--json]
```
Deterministic consolidation: clusters STM items by type + tag overlap (Jaccard), merges each cluster (longest content wins), promotes to MTM. High-usage MTM items promote to LTM. No LLM calls.
#### `memctl loop`
```bash
memctl push "question" | memctl loop "question" --llm "claude -p" [--max-calls 3] [--protocol json]
```
Bounded recall-answer loop: sends context + question to an external LLM, parses its response for refinement directives, performs additional recalls from the memory store, and detects convergence. The LLM is never autonomous — it only proposes queries. The controller enforces bounds, dedup, and stopping conditions.
**Protocol:** The LLM must output a JSON first line: `{"need_more": bool, "query": "...", "stop": bool}`, followed by its answer. Supported protocols: `json` (default), `regex`, `passive` (single-pass, no refinement).
**Stopping conditions:**
- `llm_stop` — LLM sets `stop: true`
- `fixed_point` — consecutive answers are similar above threshold (default 0.92)
- `query_cycle` — LLM re-requests a query already tried
- `no_new_items` — recall returns no new items for the proposed query
- `max_calls` — iteration limit reached (default 3)
**Flags:**
| Flag | Default | Description |
|------|---------|-------------|
| `--llm CMD` | *(required)* | LLM command (e.g. `"claude -p"`, `"ollama run granite3.1:2b"`) |
| `--llm-mode` | `stdin` | How to pass the prompt: `stdin` or `file` |
| `--protocol` | `json` | LLM output protocol: `json`, `regex`, `passive` |
| `--system-prompt` | *(auto)* | Custom system prompt (text or file path) |
| `--max-calls` | `3` | Maximum LLM invocations |
| `--threshold` | `0.92` | Answer fixed-point similarity threshold |
| `--query-threshold` | `0.90` | Query cycle similarity threshold |
| `--stable-steps` | `2` | Consecutive stable steps for convergence |
| `--no-stop-on-no-new` | off | Continue even if recall returns no new items |
| `--budget` | `2200` | Token budget for context |
| `--trace` | off | Emit JSONL trace to stderr |
| `--trace-file` | *(none)* | Write JSONL trace to file |
| `--strict` | off | Exit 1 if max-calls reached without convergence |
| `--timeout` | `300` | LLM subprocess timeout (seconds) |
| `--replay FILE` | *(none)* | Replay a trace file (no LLM calls) |
**Example pipeline:**
```bash
# Iterative recall with Claude
memctl push "How does authentication work?" --source docs/ \
| memctl loop "How does authentication work?" --llm "claude -p" --trace
# Sovereign local LLM
memctl push "database schema" --source src/ \
| memctl loop "database schema" --llm "ollama run granite3.1:2b" --protocol json
# Replay a trace (no LLM needed)
memctl loop --replay trace.jsonl "original question"
```
#### `memctl mount`
```bash
memctl mount PATH [--name NAME] [--ignore PATTERN ...] [--lang HINT]
memctl mount --list
memctl mount --remove ID_OR_NAME
```
Registers a folder as a structured source. Stores metadata only — no scanning, no ingestion. The folder contents are synced separately via `sync` or automatically via `inspect`.
#### `memctl sync`
```bash
memctl sync [PATH] [--full] [--json] [--quiet]
```
Delta-syncs mounted folders into the memory store. Uses a 3-tier delta rule:
1. **New file** (not in DB) → ingest
2. **Size + mtime match** → fast skip (no hashing)
3. **Hash compare** → ingest only if content changed
If `PATH` is given but not yet mounted, it is auto-registered first. `--full` forces re-processing of all files.
#### `memctl inspect`
```bash
# Orchestration mode — auto-mounts, auto-syncs, and inspects
memctl inspect PATH [--sync auto|always|never] [--no-sync] [--mount-mode persist|ephemeral]
[--budget N] [--ignore PATTERN ...] [--json] [--quiet]
# Classic mode — inspect an existing mount by ID/name
memctl inspect --mount ID_OR_NAME [--budget N] [--json] [--quiet]
```
When given a positional `PATH`, inspect operates in **orchestration mode**:
1. **Auto-mount** — registers the folder if not already mounted
2. **Staleness check** — compares disk inventory (path/size/mtime triples) against the store
3. **Auto-sync** — runs delta sync only if stale (or always/never per `--sync`)
4. **Inspect** — generates a deterministic structural summary
Output includes file/chunk/size totals, per-folder breakdown, per-extension distribution, top-5 largest files, and rule-based observations. All paths in output are mount-relative (never absolute).
`--mount-mode ephemeral` removes the mount record after inspection (corpus data is preserved). `--no-sync` is shorthand for `--sync never`.
All implicit actions (mount, sync) are announced on stderr. `--quiet` suppresses them.
#### `memctl ask`
```bash
memctl ask PATH "question" --llm CMD [--inspect-cap N] [--budget N]
[--sync auto|always|never] [--no-sync] [--mount-mode persist|ephemeral]
[--protocol passive|json|regex] [--max-calls N] [--json] [--quiet]
```
One-shot folder Q&A. Orchestrates auto-mount, auto-sync, structural inspection, scoped recall, and bounded loop — all in one command.
| Flag | Default | Description |
|------|---------|-------------|
| `--llm CMD` | *(required)* | LLM command (e.g. `"claude -p"`) |
| `--inspect-cap` | `600` | Tokens reserved for structural context |
| `--budget` | `2200` | Total token budget (inspect + recall) |
| `--sync` | `auto` | Sync mode: `auto`, `always`, `never` |
| `--no-sync` | off | Skip sync (shorthand for `--sync never`) |
| `--mount-mode` | `persist` | Keep mount (`persist`) or remove after (`ephemeral`) |
| `--protocol` | `passive` | LLM output protocol |
| `--max-calls` | `1` | Max loop iterations |
**Budget splitting:** `--inspect-cap` tokens go to structural context (folder tree, observations). The remainder (`--budget` minus `--inspect-cap`) goes to content recall (FTS5 results scoped to the folder).
**Scoped recall:** FTS results are post-filtered to include only items from the target folder's mount. Items from other mounts are excluded.
#### `memctl chat`
```bash
memctl chat --llm CMD [--session] [--store] [--folder PATH]
[--protocol passive|json|regex] [--max-calls N] [--budget N]
[--source FILE ...] [--quiet]
```
Interactive memory-backed chat REPL. Each turn: FTS5 recall from the memory store, send context + question to the LLM, display the answer. Persistent readline history (`~/.local/share/memctl/chat_history`) and multi-line input (blank line to send).
**Stateless by default.** Each question sees only the memory store — no hidden conversation state.
| Flag | Default | Description |
|------|---------|-------------|
| `--llm CMD` | *(required)* | LLM command (e.g. `"claude -p"`, `"ollama run granite3.1:2b"`) |
| `--protocol` | `passive` | LLM output protocol. `passive` = single-pass; `json` = iterative refinement |
| `--max-calls` | `1` | Max loop iterations per turn |
| `--session` | off | Enable in-memory session context (sliding window of recent Q&A) |
| `--history-turns` | `5` | Session window size (turns) |
| `--session-budget` | `4000` | Session block character limit |
| `--store` | off | Persist each answer as STM item |
| `--source FILE...` | *(none)* | Pre-ingest files before starting |
| `--folder PATH` | *(none)* | Scope recall to a folder (auto-mount/sync) |
| `--tags` | `chat` | Tags for stored items (comma-separated) |
**Folder-scoped chat:** `--folder PATH` auto-mounts and syncs the folder, then restricts every turn's recall to that folder's items. Combines the convenience of `ask` with the interactivity of `chat`.
**stdout purity:** answers go to stdout only. Prompt, banner, and hints go to stderr.
#### `memctl export`
```bash
memctl export [--tier T] [--type T] [--scope S] [--include-archived]
```
Exports memory items as JSONL (one JSON object per line) to stdout. Each line is a complete `MemoryItem.to_dict()` serialization including full provenance.
```bash
# Export all items
memctl export > backup.jsonl
# Export only LTM decisions
memctl export --tier ltm --type decision > decisions.jsonl
# Pipe between databases
memctl export --db project-a.db | memctl import --db project-b.db
```
**stdout purity:** only JSONL data goes to stdout. Progress goes to stderr.
#### `memctl import`
```bash
memctl import [FILE] [--preserve-ids] [--dry-run]
```
Imports memory items from a JSONL file or stdin. Every item passes through the policy engine. Content-hash deduplication prevents duplicates.
| Flag | Default | Description |
|------|---------|-------------|
| `FILE` | stdin | JSONL file to import |
| `--preserve-ids` | off | Keep original item IDs (default: generate new IDs) |
| `--dry-run` | off | Count items without writing |
```bash
# Import from file
memctl import backup.jsonl --db fresh.db
# Dry run — see what would happen
memctl import backup.jsonl --dry-run
# Preserve original IDs (for controlled migration)
memctl import backup.jsonl --preserve-ids --db replica.db
```
---
## Configuration
memctl reads an optional `config.json` file from beside the database (auto-detected) or from an explicit `--config PATH` flag.
```json
{
"store": {"fts_tokenizer": "fr"},
"inspect": {
"dominance_frac": 0.40,
"low_density_threshold": 0.10,
"ext_concentration_frac": 0.75,
"sparse_threshold": 1
},
"chat": {"history_max": 1000}
}
```
**Precedence:** `CLI --flag` > `MEMCTL_*` env var > `config.json` > compiled default. Missing or invalid config file is silently ignored.
---
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `MEMCTL_DB` | `.memory/memory.db` | Path to SQLite database |
| `MEMCTL_BUDGET` | `2200` | Token budget for injection blocks |
| `MEMCTL_FTS` | `fr` | FTS tokenizer preset (`fr`/`en`/`raw`) |
| `MEMCTL_TIER` | `stm` | Default write tier |
| `MEMCTL_SESSION` | *(unset)* | Session ID for audit provenance |
**Precedence:** `CLI --flag` > `MEMCTL_*` env var > `config.json` > compiled default. Always.
---
## Exit Codes
| Code | Meaning |
|------|---------|
| 0 | Success (including idempotent no-op) |
| 1 | Operational error (bad args, empty input, policy rejection) |
| 2 | Internal failure (unexpected exception, I/O error) |
---
## Shell Integration
Add to `.bashrc`, `.zshrc`, or your project's `env.sh`:
```bash
export MEMCTL_DB=.memory/memory.db
# Shortcuts
meminit() { memctl init "${1:-.memory}"; }
memq() { memctl push "$1"; } # recall only
memp() { memctl push "$1" ${2:+--source "$2"}; } # push with optional source
mempull() { memctl pull --tags "${1:-}" ${2:+--title "$2"}; }
```
### Pipe Recipes
```bash
# Ingest docs + recall + feed to LLM + store output
memctl push "API design" --source docs/ | llm "Summarize" | memctl pull --tags api
# Search and pipe to jq
memctl search "auth" --json | jq '.[].title'
# Batch ingest a directory
memctl push "project overview" --source src/ tests/ docs/ -q
# Export all items as JSONL backup
memctl export > backup.jsonl
# Export only LTM items
memctl export --tier ltm > decisions.jsonl
# Import into a fresh database
memctl import backup.jsonl --db fresh.db
# Pipe between databases
memctl export --db project-a.db | memctl import --db project-b.db
# Dry-run import to check counts
memctl import backup.jsonl --dry-run
# Iterative recall-answer loop with trace
memctl push "auth flow" --source docs/ | memctl loop "auth flow" --llm "claude -p" --trace
# One-liner: inspect a folder (auto-mount + auto-sync)
memctl inspect docs/
# Inspect in JSON, pipe to jq for extension breakdown
memctl inspect src/ --json | jq '.extensions'
# Inspect without syncing (use cached state)
memctl inspect docs/ --no-sync --json
# One-shot folder Q&A (inspect + scoped recall + LLM)
memctl ask docs/ "What are the auth risks?" --llm "claude -p"
# Folder Q&A with JSON output
memctl ask src/ "Summarize the architecture" --llm "claude -p" --json
# Interactive folder-scoped chat
memctl chat --llm "claude -p" --folder docs/ --session --store
# Interactive chat with pre-ingested docs
memctl chat --llm "claude -p" --source docs/ --session --store
```
---
## MCP Server
memctl exposes 14 MCP tools for integration with Claude Code, Claude Desktop, and any MCP-compatible client.
### Quick Install
The installer checks prerequisites, installs `memctl[mcp]`, configures your client, initializes the workspace, and verifies the server starts:
```bash
# Claude Code (default)
./scripts/install_mcp.sh
# Claude Desktop
./scripts/install_mcp.sh --client claude-desktop
# Both clients (non-interactive)
./scripts/install_mcp.sh --client all --yes
# Custom Python / database path
./scripts/install_mcp.sh --python /usr/bin/python3.12 --db ~/my-project/.memory/memory.db
# Preview without changes
./scripts/install_mcp.sh --dry-run
```
The installer:
- Verifies Python 3.10+ and pip
- Runs `pip install -U "memctl[mcp]"` (idempotent)
- Creates `~/.local/share/memctl/memory.db` if missing
- Inserts/updates the `memctl` entry in the client's MCP config (timestamped `.bak` backup)
- Runs `memctl serve --check` to verify the server starts
Supported platforms: macOS and Linux.
### Manual Setup
If you prefer manual configuration:
```bash
# 1. Install
pip install "memctl[mcp]"
# 2. Initialize workspace
memctl init ~/.local/share/memctl
# 3. Verify
memctl serve --check --db ~/.local/share/memctl/memory.db
```
Then add to your client config:
**Claude Code** (`~/.claude/settings.json`):
```json
{
"mcpServers": {
"memctl": {
"command": "memctl",
"args": ["serve", "--db", "~/.local/share/memctl/memory.db"]
}
}
}
```
**Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
"mcpServers": {
"memctl": {
"command": "memctl",
"args": ["serve", "--db", "~/.local/share/memctl/memory.db"]
}
}
}
```
### Start the Server
```bash
memctl serve --db ~/.local/share/memctl/memory.db
# or
python -m memctl.mcp.server --db ~/.local/share/memctl/memory.db
```
### Defense in Depth (v0.8)
The MCP server applies four layers of protection:
| Layer | Component | Purpose |
|-------|-----------|---------|
| **L0** | `ServerGuard` | Path validation (`--db-root`), write size caps, import batch limits |
| **L1** | `RateLimiter` | Token-bucket throttling: 20 writes/min, 120 reads/min per session |
| **L1** | `SessionTracker` | In-memory session state, per-turn write tracking |
| **L1** | `AuditLogger` | Structured JSONL audit trail (schema v1, `rid` correlation) |
| **L2** | `MemoryPolicy` | 35 detection patterns (secrets, injection, instructional, PII) |
| **L3** | Claude Code hooks | Optional: PreToolUse safety guard + PostToolUse audit logger |
**Secure server example:**
```bash
# Default: db-root enforced, rate limits on, audit to stderr
memctl serve --db project/memory.db
# Explicit secure mode with audit file
memctl serve --db memory.db --db-root . --audit-log audit.jsonl
# Disable rate limits (development only)
memctl serve --db memory.db --no-rate-limit
```
**Claude Code hooks** (optional, separate from core):
```bash
# Install safety guard + audit logger hooks
./scripts/install_claude_hooks.sh
# Uninstall
./scripts/uninstall_mcp.sh --hooks-only
```
### MCP Tools
| Tool | Description | Since |
|------|-------------|-------|
| `memory_recall` | Token-budgeted context injection (primary tool) | v0.1 |
| `memory_search` | Interactive FTS5 discovery | v0.1 |
| `memory_propose` | Store findings with policy governance | v0.1 |
| `memory_write` | Direct write (privileged/dev, policy-checked) | v0.1 |
| `memory_read` | Read items by ID | v0.1 |
| `memory_stats` | Store metrics | v0.1 |
| `memory_consolidate` | Trigger deterministic merge | v0.1 |
| `memory_mount` | Register, list, or remove folder mounts | v0.7 |
| `memory_sync` | Sync mounted folders (delta or full) | v0.7 |
| `memory_inspect` | Structural injection block from corpus | v0.7 |
| `memory_ask` | One-shot folder Q&A | v0.7 |
| `memory_export` | JSONL export with filters | v0.7 |
| `memory_import` | JSONL import with policy enforcement | v0.7 |
| `memory_loop` | Bounded recall-answer loop | v0.7 |
Tool names use the `memory_*` prefix for drop-in compatibility with RAGIX.
---
## How It Works
### Architecture
```
memctl/
├── types.py Data model (MemoryItem, MemoryProposal, MemoryEvent, MemoryLink)
├── store.py SQLite + FTS5 + WAL backend (10 tables + schema_meta)
├── extract.py Text extraction (text files + binary format dispatch)
├── ingest.py Paragraph chunking, SHA-256 dedup, source resolution
├── policy.py Write governance (35 patterns: secrets, injection, instructional, PII)
├── config.py Dataclass configuration + JSON config loading
├── similarity.py Stdlib text similarity (Jaccard + SequenceMatcher)
├── loop.py Bounded recall-answer loop controller
├── mount.py Folder mount registration and management
├── sync.py Delta sync with 3-tier change detection
├── inspect.py Structural inspection and orchestration
├── chat.py Interactive chat REPL (readline history, multi-line)
├── ask.py One-shot folder Q&A orchestrator
├── export_import.py JSONL export/import with policy enforcement
├── cli.py 16 CLI commands
├── consolidate.py Deterministic merge (Jaccard clustering, no LLM)
├── proposer.py LLM output parsing (delimiter + regex)
└── mcp/
├── tools.py 14 MCP tools (memory_* prefix)
├── formatting.py Injection block format (format_version=1)
└── server.py FastMCP server entry point
```
22 source files. ~8,500 lines. Zero compiled dependencies for core.
### Memory Tiers
| Tier | Purpose | Lifecycle |
|------|---------|-----------|
| **STM** (Short-Term) | Recent observations, unverified facts | Created by `pull`. Consolidated or expired. |
| **MTM** (Medium-Term) | Verified, consolidated knowledge | Created by `consolidate`. Promoted by usage. |
| **LTM** (Long-Term) | Stable decisions, definitions, constraints | Promoted from MTM by usage count or type. |
### Policy Engine
Every write path passes through the policy engine. No exceptions.
**Hard blocks** (rejected):
- 10 secret detection patterns (API keys, tokens, passwords, private keys, JWTs)
- 8 injection patterns (prompt override, system prompt fragments)
- 8 instructional block patterns (tool invocation syntax, role fragments)
- Oversized content (>2000 chars for non-pointer types)
**Soft blocks** (quarantined to STM with expiry):
- 4 instructional quarantine patterns (imperative self-instructions)
- 5 PII patterns (SSN, credit card, email, phone, IBAN)
- Missing provenance or justification
- Quarantined items stored with `injectable=False`
### FTS5 Tokenizer Presets
| Preset | Tokenizer | Use Case |
|--------|-----------|----------|
| `fr` | `unicode61 remove_diacritics 2` | French-safe default (accent normalization) |
| `en` | `porter unicode61 remove_diacritics 2` | English with Porter stemming |
| `raw` | `unicode61` | No diacritics removal, no stemming |
Expert override: `memctl init --fts-tokenizer "porter unicode61 remove_diacritics 2"`
### Supported Formats
| Category | Extensions | Requirement |
|----------|-----------|-------------|
| Text / Markup | `.md` `.txt` `.rst` `.csv` `.tsv` `.html` `.xml` `.json` `.yaml` `.toml` | None (stdlib) |
| Source Code | `.py` `.js` `.ts` `.jsx` `.tsx` `.java` `.go` `.rs` `.c` `.cpp` `.sh` `.sql` `.css` … | None (stdlib) |
| Office Documents | `.docx` `.odt` | `pip install memctl[docs]` |
| Presentations | `.pptx` `.odp` | `pip install memctl[docs]` |
| Spreadsheets | `.xlsx` `.ods` | `pip install memctl[docs]` |
| PDF | `.pdf` | `pdftotext` (poppler-utils) |
All formats are extracted to plain text before chunking and ingestion. Binary format libraries are lazy-imported — a missing library produces a clear `ImportError` with install instructions.
### Content Addressing
Every ingested file is hashed (SHA-256). Re-ingesting the same file is a no-op. Every memory item stores a `content_hash` for deduplication.
### Consolidation
Deterministic, no-LLM merge pipeline:
1. Collect non-archived STM items
2. Cluster by type + tag overlap (Jaccard similarity)
3. Merge each cluster: longest content wins; tie-break by earliest `created_at`, then lexicographic ID
4. Write merged items at MTM tier + `supersedes` links
5. Archive originals (`archived=True`)
6. Promote high-usage MTM items to LTM
---
## Database Schema
Single SQLite file with WAL mode. 10 tables + 1 FTS5 virtual table:
| Table | Purpose |
|-------|---------|
| `memory_items` | Core memory items (22 columns) |
| `memory_revisions` | Immutable revision history |
| `memory_events` | Audit log (every read/write/consolidate) |
| `memory_links` | Directional relationships (supersedes, supports, etc.) |
| `memory_embeddings` | Reserved for RAGIX (empty in memctl) |
| `corpus_hashes` | SHA-256 file dedup + mount metadata (mount_id, rel_path, ext, size_bytes, mtime_epoch, lang_hint) |
| `corpus_metadata` | Corpus-level metadata |
| `schema_meta` | Schema version, creation info |
| `memory_palace_locations` | Reserved for RAGIX |
| `memory_mounts` | Registered folder mounts (path, name, ignore patterns, lang hint) |
| `memory_items_fts` | FTS5 virtual table for full-text search |
Schema version is tracked in `schema_meta`. Current: `SCHEMA_VERSION=2`. Migration from v1 is additive (ALTER TABLE ADD COLUMN) and idempotent.
---
## Migration to RAGIX
memctl is extracted from [RAGIX](https://github.com/ovitrac/RAGIX) and maintains schema-identical databases. To upgrade:
```bash
git clone git@github.com:ovitrac/RAGIX.git
cd RAGIX
pip install -e .[all]
# Point at the same database — all items carry over
ragix memory stats --db /path/to/your/.memory/memory.db
```
| Feature | memctl | RAGIX |
|---------|--------|-------|
| SQLite schema | Forward-compatible (RAGIX can open memctl DBs) | Superset |
| Injection format | `format_version=1` | `format_version=1` |
| MCP tool names | `memory_*` | `memory_*` |
| FTS5 recall | Yes | Yes (+ hybrid embeddings) |
| Folder mount + sync | Yes (v0.3+) | No |
| Embeddings | No | Yes (FAISS + Ollama) |
| LLM-assisted merge | No | Yes |
| Graph-RAG | No | Yes |
| Reporting | No | Yes |
---
## Python API
```python
from memctl import MemoryStore, MemoryItem, MemoryPolicy
# Open or create a store
store = MemoryStore(db_path=".memory/memory.db")
# Write an item
item = MemoryItem(
title="Architecture decision",
content="We chose event sourcing for state management",
tier="stm",
type="decision",
tags=["architecture", "event-sourcing"],
)
store.write_item(item, reason="manual")
# Search
results = store.search_fulltext("event sourcing", limit=10)
for r in results:
print(f"[{r.tier}] {r.title}: {r.content[:80]}")
# Policy check
policy = MemoryPolicy()
from memctl.types import MemoryProposal
proposal = MemoryProposal(
title="Config", content="Some content",
why_store="Important finding",
provenance_hint={"source_kind": "doc", "source_id": "design.md"},
)
verdict = policy.evaluate_proposal(proposal)
print(verdict.action) # "accept", "quarantine", or "reject"
store.close()
```
---
## Testing
```bash
pip install memctl[dev]
pytest tests/ -v
```
544 tests across 18 test files covering types, store, policy, ingest, text extraction, similarity, loop controller, mount, sync, inspect, ask, chat, export/import, config, forward compatibility, contracts, CLI (subprocess), and pipe composition.
---
## License
MIT License. See [LICENSE](LICENSE) for details.
---
**Author:** Olivier Vitrac, PhD, HDR | [olivier.vitrac@adservio.fr](mailto:olivier.vitrac@adservio.fr) | Adservio Innovation Lab
| text/markdown | null | Olivier Vitrac <olivier.vitrac@adservio.fr> | null | null | MIT | llm, memory, sqlite, fts5, mcp, rag, cli, ai-agents | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-docx>=1.0.0; extra == \"docs\"",
"python-pptx>=0.6.21; extra == \"docs\"",
"openpyxl>=3.1.0; extra == \"docs\"",
"odfpy>=1.4.1; extra == \"docs\"",
"mcp[cli]>=0.1.0; extra == \"mcp\"",
"pytest>=7.0; extra == \"dev\"",
"ruff; extra == \"dev\"",
"memctl[docs]; extra == \"all\"",
"memctl[mcp]; extra == \"all\"",
"memctl[dev]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/ovitrac/memctl",
"Repository, https://github.com/ovitrac/memctl",
"Issues, https://github.com/ovitrac/memctl/issues",
"Changelog, https://github.com/ovitrac/memctl/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:54:49.291518 | memctl-0.8.0.tar.gz | 181,169 | 40/6e/efb84e220c22248155d6d8bbb7edeaa0139a54c548d0231bd1a6c18e2eca/memctl-0.8.0.tar.gz | source | sdist | null | false | e26025fb592edb55778c9b114192020e | a086b3b5ed5ed5bee700ef108d4693f7cbf1996062db943b0623f46be28cf416 | 406eefb84e220c22248155d6d8bbb7edeaa0139a54c548d0231bd1a6c18e2eca | null | [
"LICENSE"
] | 175 |
2.4 | slurmgrid | 0.1.0 | Manage large Slurm job arrays that exceed cluster submission limits | # slurmgrid
[](https://github.com/jgaeb/slurmgrid/actions/workflows/ci.yml)
[](https://codecov.io/gh/jgaeb/slurmgrid)
[](https://pypi.org/project/slurmgrid/)
Manage large Slurm job arrays that exceed your cluster's submission limit.
If you need to run 50,000 small jobs but your cluster caps `MaxArraySize` at
10,000 (or limits total queued jobs), `slurmgrid` handles the tedious cycle of
"submit a batch, wait, submit the next batch" automatically. It chunks your
parameter manifest, submits array jobs via `sbatch`, monitors completion via
`sacct`, retries failures, and persists state so you can resume if interrupted.
## Installation
```bash
pip install slurmgrid
```
Or just clone the repo and run directly (no dependencies beyond Python 3.8+):
```bash
git clone https://github.com/jgaeb/slurmgrid.git
cd slurmgrid
python -m slurmgrid --help
```
## Quick start
1. Create a manifest file (CSV or TSV) with one row per job:
```csv
alpha,beta,seed
0.1,1,42
0.1,2,42
0.5,1,42
0.5,2,42
...
```
2. Run `slurmgrid submit` with your command template:
```bash
python -m slurmgrid submit \
--manifest params.csv \
--command "python train.py --alpha {alpha} --beta {beta} --seed {seed}" \
--partition gpu \
--time 01:00:00 \
--mem 4G \
--max-concurrent 5000
```
That's it. `slurmgrid` will:
- Shuffle and split the manifest into chunks (default: 1/3 of `MaxArraySize`)
- Submit each chunk as a fast array job via `sbatch`, using Slurm's `%throttle`
to limit concurrency to `--max-concurrent`
- Poll `sacct` every 30 seconds to track completion
- Submit the next chunk when the current one finishes
- Batch failed jobs into retry chunks (up to `--max-retries`, default 3)
- Save state to disk after every poll so you can resume if interrupted
## Usage
### Submit a new run
```bash
python -m slurmgrid submit \
--manifest params.csv \
--command "python train.py --alpha {alpha} --beta {beta}" \
--state-dir ./my_run \
--partition gpu \
--time 02:00:00 \
--mem 8G \
--cpus-per-task 4 \
--max-concurrent 5000 \
--max-retries 3 \
--poll-interval 30 \
--preamble "module load python/3.10 && conda activate myenv"
```
The `--command` template uses `{column_name}` placeholders that are resolved
from the manifest columns. Any column in the manifest can be referenced.
### Resume an interrupted run
If you lose your SSH session or Ctrl-C out, running Slurm jobs continue
independently. Resume monitoring with:
```bash
python -m slurmgrid resume --state-dir ./my_run
```
### Check status
```bash
python -m slurmgrid status --state-dir ./my_run
```
```
==================================================
Total jobs: 50000
Completed: 35420 (70.8%)
Active: 4580
Pending: 10000
Failed (retrying): 0
Failed (final): 0
Chunks: 35/50 completed, 5 active, 10 pending
==================================================
```
### Cancel all jobs
```bash
python -m slurmgrid cancel --state-dir ./my_run
```
### Dry run
Generate all chunk files and sbatch scripts without actually submitting:
```bash
python -m slurmgrid submit --manifest params.csv --command "echo {x}" --dry-run
```
Inspect the generated scripts in `./sc_state/scripts/` to verify correctness.
## How it works
1. **Chunking**: The manifest is split into sub-manifests. Each chunk gets its
own sbatch script that uses `SLURM_ARRAY_TASK_ID` to index into the
sub-manifest and extract the parameters for that task.
2. **Shuffling**: Manifest rows are shuffled before chunking (disable with
`--no-shuffle`) so each chunk gets a representative mix of the parameter
space and chunks take roughly the same wall time.
3. **Batch submission**: Each chunk is submitted as a single `sbatch --array`
call with a `%throttle` suffix to limit concurrency, which is orders of
magnitude faster than submitting jobs individually.
4. **Monitoring**: The tool polls `sacct` to track job status. One chunk runs
at a time; when it finishes, the next is submitted.
5. **Retries**: When all regular chunks are done, failed tasks are batched
into a single retry chunk and resubmitted, up to `--max-retries` per task.
6. **State persistence**: All state is saved as JSON after every poll.
Atomic writes (via temp file + rename) prevent corruption. You can
resume at any time.
## State directory layout
```
sc_state/
config.json # Frozen copy of the submission configuration
state.json # Chunk-level status and failure tracking
slurmgrid.log # Tool's own log file
chunks/
chunk_000.chunk # Sub-manifests (internal format)
chunk_001.chunk
scripts/
chunk_000.sh # Generated sbatch scripts
chunk_001.sh
logs/
chunk_000/ # Slurm stdout/stderr per chunk
slurm-12345_0.out
slurm-12345_0.err
```
## All options
| Flag | Default | Description |
|------|---------|-------------|
| `--manifest` | (required) | CSV/TSV manifest file |
| `--command` | (required) | Command template with `{column}` placeholders |
| `--state-dir` | `./sc_state` | Directory for state, chunks, scripts, logs |
| `--delimiter` | auto-detect | Manifest delimiter (`,` for .csv, `\t` for .tsv) |
| `--chunk-size` | auto-detect | Jobs per array chunk (default: `MaxArraySize / 3`) |
| `--max-concurrent` | 10000 | Max simultaneously running tasks (Slurm `%throttle`) |
| `--max-retries` | 3 | Max retries per failed job |
| `--poll-interval` | 30 | Seconds between status checks |
| `--max-runtime` | unlimited | Max seconds to run before saving state and exiting |
| `--dry-run` | false | Generate scripts without submitting |
| `--no-shuffle` | false | Don't shuffle manifest rows before chunking |
| `--partition` | | Slurm partition |
| `--time` | | Wall time limit (e.g., `01:00:00`) |
| `--mem` | | Memory per node (e.g., `4G`) |
| `--mem-per-cpu` | | Memory per CPU |
| `--cpus-per-task` | 1 | CPUs per task |
| `--gpus` | | GPU specification |
| `--gres` | | Generic resource specification |
| `--account` | | Slurm account |
| `--qos` | | Quality of service |
| `--constraint` | | Node constraint |
| `--exclude` | | Nodes to exclude |
| `--job-name-prefix` | `sc` | Prefix for Slurm job names |
| `--preamble` | | Shell commands before the main command |
| `--preamble-file` | | File containing preamble commands |
| `--extra-sbatch` | | Extra `#SBATCH` flags (repeatable) |
## Requirements
- Python 3.8+ (stdlib only, no external dependencies)
- Slurm with `sbatch`, `sacct`, `squeue`, `scancel`, `scontrol` available
- Slurm accounting enabled (`sacct` must work)
## License
MIT
| text/markdown | Johann D. Gaebler | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Operating System :: POSIX",
"Programming Language :: Python :: 3",
"Topic :: System :: Clustering",
"Topic :: System :: Distributed Computing"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/jgaeb/slurmgrid"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:54:25.514355 | slurmgrid-0.1.0.tar.gz | 34,412 | a0/2f/420d44f47898e1b0f67ace1fdcc508dd15636082f7bf2afc1de6a79085fe/slurmgrid-0.1.0.tar.gz | source | sdist | null | false | 40ab86e7324d97a67138a175d280f762 | ebd2e5b6c8f9aa73aff8eeafd3f3b3cddabf85b2839e79f8e2923eed1896876f | a02f420d44f47898e1b0f67ace1fdcc508dd15636082f7bf2afc1de6a79085fe | MIT | [
"LICENSE"
] | 185 |
2.4 | yohou-optuna | 0.1.0a1 | An Optuna integration for hyperparameter tuning in Yohou | <p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/stateful-y/yohou-optuna/main/docs/assets/logo_light.png">
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/stateful-y/yohou-optuna/main/docs/assets/logo_dark.png">
<img src="https://raw.githubusercontent.com/stateful-y/yohou-optuna/main/docs/assets/logo_light.png" alt="Yohou-Optuna">
</picture>
</p>
[](https://pypi.org/project/yohou_optuna/)
[](https://github.com/stateful-y/yohou-optuna/blob/main/LICENSE)
[](https://pypi.org/project/yohou_optuna/)
[](https://anaconda.org/conda-forge/yohou_optuna)
[](https://codecov.io/gh/stateful-y/yohou-optuna)
## What is Yohou-Optuna?
**Yohou-Optuna** brings [Optuna](https://optuna.org/)'s hyperparameter optimization to [Yohou](https://github.com/stateful-y/yohou), providing a Yohou-compatible search class for time series forecasting.
This integration replaces grid and random search with adaptive sampling (TPE, CMA-ES, and more) while keeping Yohou's forecasting API intact. After fitting, `OptunaSearchCV` behaves like a Yohou forecaster, so you can call `predict`, `observe`, and `observe_predict` directly.
It integrates with Optuna's distributions, samplers, and storages, and wraps them for sklearn-style cloning and serialization.
## What are the features of Yohou-Optuna?
- **Adaptive optimization**: Run Optuna studies over Yohou forecasters with TPE, CMA-ES, and other samplers to find better configurations in fewer trials.
- **Forecaster-native API**: `OptunaSearchCV` is a forecaster after fitting, so you can call `predict`, `observe`, `observe_predict`, and interval methods.
- **Clone-safe wrappers**: `Sampler`, `Storage`, and `Callback` wrappers ensure Optuna objects survive sklearn cloning and serialization.
- **Time-series CV support**: Works with Yohou splitters for proper temporal validation and scorer integration.
- **Multi-metric evaluation**: Evaluate multiple scorers and refit on the one that matters most for your use case.
- **(Experimental) Persistence workflows**: Resume studies with storage-backed optimization and continue tuning over time.
## How to install Yohou-Optuna?
Install the Yohou-Optuna package using `pip`:
```bash
pip install yohou_optuna
```
or using `uv`:
```bash
uv pip install yohou_optuna
```
or using `conda`:
```bash
conda install -c conda-forge yohou_optuna
```
or using `mamba`:
```bash
mamba install -c conda-forge yohou_optuna
```
or alternatively, add `yohou_optuna` to your `requirements.txt` or `pyproject.toml` file.
## How to get started with Yohou-Optuna?
### 1. Prepare a forecaster and search space
Define a Yohou forecaster and Optuna distributions for the parameters you want to tune.
```python
from sklearn.linear_model import Ridge
from optuna.distributions import FloatDistribution, IntDistribution
from yohou.point import PointReductionForecaster
from yohou_optuna import OptunaSearchCV
forecaster = PointReductionForecaster(estimator=Ridge())
param_distributions = {
"estimator__alpha": FloatDistribution(1e-4, 10.0, log=True),
"observation_horizon": IntDistribution(3, 30),
}
search = OptunaSearchCV(
forecaster=forecaster,
param_distributions=param_distributions,
n_trials=30,
)
```
### 2. Fit the searcher
Fit the searcher on your time series data (polars DataFrame with a `time` column).
```python
search.fit(y_train, X_train, forecasting_horizon=12)
```
### 3. Predict with the best forecaster
After fitting, `search` behaves like a Yohou forecaster.
```python
y_pred = search.predict(forecasting_horizon=12)
print(search.best_params_)
```
## How do I use Yohou-Optuna?
Full documentation is available at [https://yohou-optuna.readthedocs.io/](https://yohou-optuna.readthedocs.io/).
Interactive examples are available in the `examples/` directory:
- **Online**: [https://yohou-optuna.readthedocs.io/en/latest/pages/examples/](https://yohou-optuna.readthedocs.io/en/latest/pages/examples/)
- **Locally**: Run `marimo edit examples/optuna_search.py` to open an interactive notebook
## Can I contribute?
We welcome contributions, feedback, and questions:
- **Report issues or request features**: [GitHub Issues](https://github.com/stateful-y/yohou-optuna/issues)
- **Join the discussion**: [GitHub Discussions](https://github.com/stateful-y/yohou-optuna/discussions)
- **Contributing Guide**: [CONTRIBUTING.md](https://github.com/stateful-y/yohou-optuna/blob/main/CONTRIBUTING.md)
If you are interested in becoming a maintainer or taking a more active role, please reach out to Guillaume Tauzin on [GitHub Discussions](https://github.com/stateful-y/yohou-optuna/discussions).
## Where can I learn more?
Here are the main Yohou-Optuna resources:
- Full documentation: [https://yohou-optuna.readthedocs.io/](https://yohou-optuna.readthedocs.io/)
- GitHub Discussions: [https://github.com/stateful-y/yohou-optuna/discussions](https://github.com/stateful-y/yohou-optuna/discussions)
- Interactive Examples: [https://yohou-optuna.readthedocs.io/en/latest/pages/examples/](https://yohou-optuna.readthedocs.io/en/latest/pages/examples/)
For questions and discussions, you can also open a [discussion](https://github.com/stateful-y/yohou-optuna/discussions).
## License
This project is licensed under the terms of the [Apache-2.0 License](https://github.com/stateful-y/yohou-optuna/blob/main/LICENSE).
<p align="center">
<a href="https://stateful-y.io">
<img src="docs/assets/made_by_stateful-y.png" alt="Made by stateful-y" width="200">
</a>
</p>
| text/markdown | null | Guillaume Tauzin <gtauzin@stateful-y.io> | null | Guillaume Tauzin <gtauzin@stateful-y.io> | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"optuna",
"scipy",
"sklearn-optuna",
"yohou"
] | [] | [] | [] | [
"Homepage, https://github.com/stateful-y/yohou-optuna",
"Documentation, https://yohou-optuna.readthedocs.io",
"Repository, https://github.com/stateful-y/yohou-optuna",
"Bug Tracker, https://github.com/stateful-y/yohou-optuna/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:53:38.998809 | yohou_optuna-0.1.0a1.tar.gz | 99,900 | 86/aa/4e567253e2c163db8a600cf5523f2e58971ceea576419dc3d37d3a2ac080/yohou_optuna-0.1.0a1.tar.gz | source | sdist | null | false | 26abb9d626af4160f2366059e87c4cfc | 21b1065af3323589bd379a3e455bf0ba329a47e074e2c792475f2d304a4941ae | 86aa4e567253e2c163db8a600cf5523f2e58971ceea576419dc3d37d3a2ac080 | null | [
"LICENSE"
] | 160 |
2.4 | pypong-lib | 1.0.0 | The Pong library for Python. Build terminal Pong games in minutes. Zero dependencies. | # 🏓 PyPong
The Pong library for Python. Build terminal Pong games quickly!
Born from [Console Pong](https://pypi.org/project/console-pong/). Stripped down.
## Install
```bash
pip install pypong
| text/markdown | null | null | null | null | MIT | pong, game, terminal, console, library, framework, arcade, ping-pong, ascii-game, text-game, pure-python, no-dependencies, game-engine, retro | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Games/Entertainment :: Arcade",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/CheezeDeveloper/pypong"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T22:53:18.318017 | pypong_lib-1.0.0.tar.gz | 2,332 | bf/47/b89d46bc18d33a0f8dc2d8e158eea617ce516582163323e9521f0a701a2f/pypong_lib-1.0.0.tar.gz | source | sdist | null | false | 67f26928766d8a00216a1efaf8c04fd7 | 77183846bfbf2259f30dd7db85884bda7075873f4401ca912d5a84099320f72c | bf47b89d46bc18d33a0f8dc2d8e158eea617ce516582163323e9521f0a701a2f | null | [
"LICENSE"
] | 192 |
2.4 | gnuradio-mcp | 2026.2.20 | MCP server for GNU Radio — build, validate, run, and export flowgraphs programmatically. | # GR-MCP: GNU Radio MCP Server
[](https://www.python.org/downloads/)
[](LICENSE)
**GR-MCP** is a [FastMCP](https://gofastmcp.com) server for [GNU Radio](https://www.gnuradio.org/) that enables programmatic, automated, and AI-driven creation of GNU Radio flowgraphs. It exposes 80+ MCP tools for building, validating, running, and exporting `.grc` files — plus block development, protocol analysis, and OOT module management.
> **What can you do with it?**
> - Build and validate flowgraphs programmatically
> - Generate custom GNU Radio blocks from natural language descriptions
> - Parse protocol specifications into decoder pipelines
> - Analyze IQ recordings to detect signal characteristics
> - Export blocks to distributable OOT modules
> - Run flowgraphs in Docker containers with real-time variable control
> - Install and manage OOT modules via Docker
## Quickstart
### 1. Install
```bash
git clone https://git.supported.systems/MCP/gr-mcp
cd gr-mcp
# Create venv with system site-packages (required for gnuradio)
uv venv --system-site-packages --python 3.14
uv sync
```
### 2. Run
```bash
uv run gnuradio-mcp
```
### 3. Add to your MCP client
**Claude Code:**
```bash
claude mcp add gnuradio-mcp -- uv run --directory /path/to/gr-mcp gnuradio-mcp
```
**Claude Desktop / Cursor / other MCP clients:**
```json
{
"mcpServers": {
"gnuradio-mcp": {
"command": "uv",
"args": ["run", "--directory", "/path/to/gr-mcp", "gnuradio-mcp"]
}
}
}
```
### Requirements
- Python >= 3.14
- GNU Radio (tested with GRC v3.10.12.0)
- Docker (optional — for runtime control, block testing, OOT builds)
- [uv](https://docs.astral.sh/uv/) package manager
> **Note:** GR-MCP is designed for single-session use. All connected MCP clients share the same flowgraph state. Run one server instance per concurrent session.
## Features
### Flowgraph Building (30 tools)
Build, edit, validate, and export `.grc` files:
| Category | Tools |
|----------|-------|
| Blocks | `make_block`, `remove_block`, `get_blocks` |
| Parameters | `get_block_params`, `set_block_params` |
| Ports | `get_block_sources`, `get_block_sinks` |
| Connections | `connect_blocks`, `disconnect_blocks`, `get_connections` |
| Validation | `validate_block`, `validate_flowgraph`, `get_all_errors` |
| Persistence | `save_flowgraph`, `load_flowgraph` |
| Code Gen | `generate_code` |
| Discovery | `get_all_available_blocks`, `search_blocks`, `get_block_categories` |
| Options | `get_flowgraph_options`, `set_flowgraph_options` |
| Python | `create_embedded_python_block`, `evaluate_expression` |
| Bypass | `bypass_block`, `unbypass_block` |
| Import/Export | `export_flowgraph_data`, `import_flowgraph_data` |
| OOT Paths | `load_oot_blocks`, `add_block_path`, `get_block_paths` |
### Block Development (18 tools, dynamically registered)
Generate, validate, test, and export custom blocks. These tools are registered on-demand via `enable_block_dev_mode` to minimize context usage:
| Category | Tools |
|----------|-------|
| Generation | `generate_sync_block`, `generate_basic_block`, `generate_interp_block`, `generate_decim_block` |
| Validation | `validate_block_code`, `parse_block_prompt` |
| Testing | `test_block_in_docker` |
| Integration | `inject_generated_block` |
| Protocol | `parse_protocol_spec`, `generate_decoder_chain`, `get_missing_oot_modules` |
| Signal | `analyze_iq_file` |
| OOT Export | `generate_oot_skeleton`, `export_block_to_oot`, `export_from_flowgraph` |
| Mode | `enable_block_dev_mode`, `disable_block_dev_mode`, `get_block_dev_mode` |
### Runtime Control (36 tools)
Run flowgraphs in Docker containers with real-time control:
| Category | Tools |
|----------|-------|
| XML-RPC | `connect`, `disconnect`, `get_status`, `list_variables`, `get_variable`, `set_variable` |
| Execution | `start`, `stop`, `lock`, `unlock` |
| ControlPort | `connect_controlport`, `disconnect_controlport`, `get_knobs`, `set_knobs`, `get_knob_properties`, `get_performance_counters`, `post_message` |
| Docker | `launch_flowgraph`, `list_containers`, `stop_flowgraph`, `remove_flowgraph`, `connect_to_container`, `capture_screenshot`, `get_container_logs` |
| Coverage | `collect_coverage`, `generate_coverage_report`, `combine_coverage`, `delete_coverage` |
| OOT Mgmt | `detect_oot_modules`, `install_oot_module`, `list_oot_images`, `remove_oot_image`, `build_multi_oot_image`, `list_combo_images`, `remove_combo_image` |
### MCP Resources
| Resource URI | Description |
|-------------|-------------|
| `oot://directory` | Curated directory of 20 OOT modules (12 preinstalled) |
| `oot://directory/{module}` | Details for a specific OOT module |
| `prompts://block-generation/*` | Block generation patterns and templates |
| `prompts://protocol-analysis/*` | Decoder pipeline guidance |
## Usage Examples
### Building a flowgraph
```python
# Create blocks
make_block(block_type="analog_sig_source_x", name="sig_source")
make_block(block_type="audio_sink", name="speaker")
# Configure
set_block_params(block_name="sig_source", params={
"freq": "1000",
"amplitude": "0.5",
"waveform": "analog.GR_COS_WAVE"
})
# Wire and save
connect_blocks(
source_block="sig_source", source_port="0",
sink_block="speaker", sink_port="0"
)
validate_flowgraph()
save_flowgraph(path="/tmp/my_flowgraph.grc")
```
### Generating a custom block
```python
enable_block_dev_mode()
generate_sync_block(
name="pm_demod",
description="Phase modulation demodulator",
inputs=[{"dtype": "complex", "vlen": 1}],
outputs=[{"dtype": "float", "vlen": 1}],
parameters=[{"name": "sensitivity", "dtype": "float", "default": 1.0}],
work_logic="Extract instantaneous phase from complex samples"
)
```
### Protocol analysis to decoder chain
```python
enable_block_dev_mode()
# Parse a protocol spec
protocol = parse_protocol_spec(
"GFSK at 250k baud, deviation: 25khz, preamble 0xAA, sync 0x2DD4"
)
# Generate the decoder pipeline
chain = generate_decoder_chain(protocol=protocol, sample_rate=2000000.0)
# Returns: blocks, connections, variables, missing_oot_modules
```
### Exporting to an OOT module
```python
enable_block_dev_mode()
# Generate block
block = generate_sync_block(name="my_filter", ...)
# Export to distributable OOT module
export_block_to_oot(
generated=block,
module_name="mymodule",
output_dir="/path/to/gr-mymodule",
author="Your Name"
)
# Creates: CMakeLists.txt, python/mymodule/my_filter.py, grc/mymodule_my_filter.block.yml
```
### Runtime control (Docker)
```python
# Launch flowgraph in container
launch_flowgraph(
flowgraph_path="/path/to/flowgraph.py",
name="my-sdr",
xmlrpc_port=8080,
enable_vnc=True
)
# Tune in real-time
connect_to_container(name="my-sdr")
set_variable(name="freq", value=2.4e9)
# Inspect and clean up
capture_screenshot(name="my-sdr")
stop_flowgraph(name="my-sdr")
```
## Architecture
```
src/gnuradio_mcp/
├── server.py # FastMCP app entry point
├── models.py # Pydantic models for all tools
├── utils.py # Unique IDs, error formatting
├── oot_catalog.py # Curated OOT module directory
├── middlewares/
│ ├── platform.py # GNU Radio Platform wrapper
│ ├── flowgraph.py # Block/connection management
│ ├── block.py # Parameter/port access
│ ├── ports.py # Port resolution utilities
│ ├── docker.py # Docker container lifecycle
│ ├── xmlrpc.py # XML-RPC variable control
│ ├── thrift.py # ControlPort/Thrift client
│ ├── oot.py # OOT module Docker builds
│ ├── block_generator.py # Code generation for custom blocks
│ ├── oot_exporter.py # Export blocks to OOT modules
│ └── protocol_analyzer.py # Protocol parsing, decoder chains, IQ analysis
└── providers/
├── base.py # PlatformProvider (flowgraph tools)
├── mcp.py # McpPlatformProvider (registers tools)
├── runtime.py # RuntimeProvider (Docker/XML-RPC/Thrift)
├── mcp_runtime.py # McpRuntimeProvider (registers tools)
├── block_dev.py # BlockDevProvider (generation/analysis)
└── mcp_block_dev.py # McpBlockDevProvider (dynamic registration)
```
**Data flow:** GNU Radio objects → Middlewares (validation/rewrite) → Pydantic Models (serialization) → MCP Tools
## Development
```bash
# Install all dependencies
uv sync --all-extras
# Run tests
pytest
# Run specific test suite
pytest tests/unit/
pytest tests/integration/
# Pre-commit hooks (black, flake8, isort, mypy)
pre-commit run --all-files
```
## Docker Images (Optional)
For runtime control and block testing:
```bash
# Runtime image (Xvfb + VNC + ImageMagick)
docker build -f docker/Dockerfile.gnuradio-runtime -t gnuradio-runtime:latest docker/
# Coverage image (adds python3-coverage)
docker build -f docker/Dockerfile.gnuradio-coverage -t gnuradio-coverage:latest docker/
```
## License
[MIT](LICENSE)
| text/markdown | null | Ryan Malloy <ryan@supported.systems> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering",
"Topic :: Communications :: Ham Radio"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"pydantic>=2.12",
"fastmcp>=3.0.0b1",
"mako>=1.3",
"pyyaml>=6.0",
"docker>=7.0; extra == \"runtime\"",
"pytest>=9.0; extra == \"dev\"",
"pytest-asyncio>=1.3; extra == \"dev\"",
"pre-commit>=4.5; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://git.supported.systems/MCP/gr-mcp",
"Repository, https://git.supported.systems/MCP/gr-mcp",
"Issues, https://git.supported.systems/MCP/gr-mcp/issues"
] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"EndeavourOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T22:52:43.092459 | gnuradio_mcp-2026.2.20.tar.gz | 91,841 | c3/26/eb6aad02038ecde028a61e968d7357e272f9f649e5a23ab2622d8ceeec02/gnuradio_mcp-2026.2.20.tar.gz | source | sdist | null | false | 7433c89fc38d7b06f64984f63b1047e3 | a587d7ad2f2dd1adc996a6bc033f8187dc8973b311a8edae5915d749f0366aac | c326eb6aad02038ecde028a61e968d7357e272f9f649e5a23ab2622d8ceeec02 | MIT | [
"LICENSE"
] | 170 |
2.4 | idmtools-platform-comps | 3.0.5 | Comps platform for IDM-Tools | 
# idmtools-platform-comps
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Installing](#installing)
- [Development Tips](#development-tips)
- [Building SSMT Docker Image](#building-ssmt-docker-image)
- [Choose SSMT Docker Image to use in test/script](#choose-ssmt-docker-image-to-use-in-testscript)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Installing
```bash
pip install idmtools-platform-comps
```
# Development Tips
There is a Makefile file available for most common development tasks. Here is a list of commands
```bash
clean - Clean up temproary files
lint - Lint package and tests
test - Run All tests
coverage - Run tests and generate coverage report that is shown in browser
```
On Windows, you can use `pymake` instead of `make`
# Building SSMT Docker Image
To build the SSMT Docker image, follow these steps
1. ```bash
docker login docker-production.packages.idmod.org
```
2. ```bash
make ssmt-image
```
3. When prompted, enter your idm username and password
# Choose SSMT Docker Image to use in test/script
There are three ways to choose which ssmt docker image to use in your script:
1. specify docker_image in SSMTWorkItem creation, for example,
```bash
wi = SSMTWorkItem(name=wi_name, command=command, docker_image='my_test_ssmt_docker_image')
```
2. define docker_image in your idmtools.ini, for example:
```bash
[COMPS2]
type = COMPS
endpoint = https://comps.idmod.org
environment = Calculon
......
docker_image = my_test_ssmt_docker_image
```
3. if not above two cases, idomtools system will determine the default ssmt docker image from platform for you:
if endpoint = https://comps.idmod.org, it will use production docker image
for all other cases, it will use the staging docker image
Note: if user overrode docker image in wi (case #1) and also defined docker image in idmtools.ini (case #2),
it will take #1 as higher priority
| text/markdown | null | Zhaowei Du <zdu@idmod.org>, Sharon Chen <shchen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Benoit Raybaud <braybaud@idmod.org>, Clark Kirkman IV <ckirkman@idmod.org>, Ye Chen <yechen@idmod.org>, Mary Fisher <mafisher@idmod.org>, Mandy Izzo <mizzo@idmod.org>, Jen Schripsema <jschripsema@idmod.org>, Ross Carter <rcarter@idmod.org> | null | null | null | modeling, IDM | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"idmtools~=3.0",
"natsort~=8.4.0",
"idm-pycomps~=2.11.0",
"idmtools[test]; extra == \"test\"",
"idmtools_test; extra == \"test\"",
"idmtools_models; extra == \"test\"",
"flake8~=7.3; extra == \"packaging\"",
"bump2version; extra == \"packaging\"",
"twine~=6.2; extra == \"packaging\"",
"natsort~=8.4; extra == \"packaging\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:29.490982 | idmtools_platform_comps-3.0.5.tar.gz | 89,967 | f9/93/8463c97086e3c461658acfecf155e69d9725fcaee16cf7722cfd710a6c72/idmtools_platform_comps-3.0.5.tar.gz | source | sdist | null | false | 3f1f1489637bcc2a64f315f10f5b66d4 | 73c9cfb3a4179dfc9ee73de5a10b05e5575b5c9ab65e2f93538217d10d0500e5 | f9938463c97086e3c461658acfecf155e69d9725fcaee16cf7722cfd710a6c72 | null | [] | 160 |
2.4 | idmtools-platform-slurm | 3.0.5 | Provides ability to run against Slurm | 
# idmtools-platform-slurm
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Introduction](#introduction)
- [Setting Up Virtual Environment](#setting-up-virtual-environment)
- [Development Tips](#development-tips)
- [Manually run a script as a Slurm job](#manually-run-a-script-as-a-slurm-job)
- [Use SlurmJob to run a script as a Slurm job](#use-slurmjob-to-run-a-script-as-a-slurm-job)
- [With SlurmPlatform to run a script as a Slurm job](#with-slurmplatform-to-run-a-script-as-a-slurm-job)
- [Folder structure:](#folder-structure)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Introduction
**SlurmPlatform** is a platform designed to facilitate the execution of experiments and simulations in slurm cluster.
## Setting Up Virtual Environment
To set up a virtual environment for **SlurmPlatform**, follow these steps:
1. **Install Python**
Ensure you have Python 3.8+ installed on your system.
2. **Create Virtual Environment**
There are multiple ways to create a virtual environment. Below is an example using `venv`:
```bash
python -m venv slurm_env
```
3. **Activate Virtual Environment**
- On Windows:
```bash
slurm_env\Scripts\activate
```
- On Linux:
```bash
source slurm_env/bin/activate
```
4. **Install SlurmPlatform**
```bash
pip install idmtools-platform-slurm
```
5. **Install Dependencies**
```bash
pip install -r requirements.txt
```
6. **Optional(No need step #4 and #5), Install all slurm platform related packages**
```bash
pip install idmtools[slurm]
```
## Development Tips
There is a Makefile file available for most common development tasks. Here is a list of commands
```bash
clean - Clean up temproary files
lint - Lint package and tests
test - Run All tests
coverage - Run tests and generate coverage report that is shown in browser
```
On Windows, you can use `pymake` instead of `make`
## Manually run a script as a Slurm job
Preparation
(1).Have target script ready, say my_script.py, suppose you have folder structure like::
```bash
script_folder
my_script.py
......
```
(2). Created a virtual environment and activated it.
Steps
1. within the target script folder, create a batch file 'sbatch.sh' (without quote) with content:
```bash
#!/bin/bash
#SBATCH --partition=b1139
#SBATCH --time=10:00:00
#SBATCH --account=b1139
#SBATCH --output=stdout.txt
#SBATCH --error=stderr.txt
# replace with your script file
python3 my_script.py
exit $RESULT
```
Note: the content here is based on Northwestern University QUEST Slurm system. For general case, above content (required #SBATCH parameters) may be a little bit different.
2. run your target script as SLURM job
execute the following commands from console (under virtual environment):
cd path_to_script_folder
`sbatch sbatch.sh`
Note: any output information from my_script.py is stored in file stdout.txt under the current folder. For example, if my_script.py kicks out another Slurm job, then its Slurm id information can be found in file stdout.txt.
## Use SlurmJob to run a script as a Slurm job
The example can be simple as the following:
--script.py--
```python
from idmtools.core.platform_factory import Platform
from idmtools_platform_slurm.utils.slurm_job.slurm_job import SlurmJob
script = '<user script path>'
# script = 'example_path/python_sim_slurm.py' # example
platform = Platform('SLURM_LOCAL', job_directory='<job_directory>')
sj = SlurmJob(script_path=script, platform=platform)
sj.run()
```
## With SlurmPlatform to run a script as a Slurm job
We have SlurmJob integrated into SlurmPlatform and any Python script can run as a Slurm job simply doing:
--script.py--
```python
from idmtools.entities.command_task import CommandTask
from idmtools.entities.experiment import Experiment
from idmtools.core.platform_factory import Platform
platform = Platform('SLURM_LOCAL', job_directory='<job_directory>')
# Define task
command = "echo 'Hello, World!'"
task = CommandTask(command=command)
# Run an experiment
experiment = Experiment.from_task(task, name="example")
experiment.run(platform=platform)
```
## Folder structure:
[See Folder Structure](../idmtools_platform_container/README.md#folder-structure)
| text/markdown | null | Sharon Chen <schen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Zhaowei Du <zdu@idmod.org>, Clark Kirkman IV <ckirkman@idmod.org>, Benoit Raybaud <braybaud@idmod.org> | null | null | null | modeling, IDM | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"idmtools_platform_general~=3.0",
"dataclasses-json~=0.6",
"idmtools[test]; extra == \"test\"",
"idmtools_models; extra == \"test\"",
"idmtools_test; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:28.477210 | idmtools_platform_slurm-3.0.5.tar.gz | 47,251 | e9/72/d1383badfb3b783cdf667a604ab3852f80569b0caa7b2fa2839ea79543b9/idmtools_platform_slurm-3.0.5.tar.gz | source | sdist | null | false | 4fea020cb24b7dfca5a350ca87dbb461 | 8ff72fad48192accdca96c7d45ae875361f9f6c45bb2290b3c1ebd776ed9ede2 | e972d1383badfb3b783cdf667a604ab3852f80569b0caa7b2fa2839ea79543b9 | null | [] | 164 |
2.4 | idmtools-cli | 3.0.5 | CLI for IDM-Tools | 
# idmtools-cli
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Installing](#installing)
- [Development tips](#development-tips)
- [Using the CLI](#using-the-cli)
- [Version command](#version-command)
- [Experiment commands for Local Platform](#experiment-commands-for-local-platform)
- [Status](#status)
- [Delete](#delete)
- [Simulation commands for Local Platform](#simulation-commands-for-local-platform)
- [Status](#status-1)
- [GitRepo commands](#gitrepo-commands)
- [View](#view)
- [Repos](#repos)
- [Releases](#releases)
- [Peep](#peep)
- [Download](#download)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Installing
```bash
pip install idmtools-cli
```
# Development tips
There is a Makefile file available for most common development tasks. Here is a list of commands
```bash
clean - Clean up temproary files
lint - Lint package and tests
test - Run All tests
coverage - Run tests and generate coverage report that is shown in browser
```
On Windows, you can use `pymake` instead of `make`
# Using the CLI
The CLI requires the workers service to already be running.
`idmtools`
## Version command
To determine version of idmtools and related plugins, use the version cli command.
```
> idmtools version
```
Example output
```bash
emodpy Version: 1.3.0
Plugins:
EMODTask
idmtools Version: 1.4.0+nightly.0
Plugins:
CommandTask
idmtools-cli Version: 1.4.0+nightly.0
idmtools-models Version: 1.4.0+nightly.0
Plugins:
JSONConfiguredPythonTask
JSONConfiguredRTask
JSONConfiguredTask
PythonTask
RTask
ScriptWrapperTask
TemplatedScriptTask
idmtools-platform-comps Version: 1.4.0+nightly.0
Plugins:
COMPSPlatform
SSMTPlatform
idmtools-platform-slurm Version: 1.0.0+nightly
Plugins:
SlurmPlatform
```
## Experiment commands for Local Platform
### Status
You can check the status of experiments use the follow command. It will also summarize the simulations under
the experiment as a progress bar with green section for completed tasks, yellow for in progress, red for failed, and
white for queued.
```
> idmtools experiment --platform Local status --help
```
In addition, we used in conjunction with a console that supports auto-highlighting of hyperlinks, you will be able to
easily open up the asset directories by clicking on the data path URLs.
You can also perform filtering on the experiments
```bash
> idmtools experiment --platform Local status --tag type PythonExperiment
> idmtools experiment --platform Local status --id 8EHU147Z
```
### Delete
You can delete experiments and their child simulations using the following command. Optionally you can also delete
the associated data directories as well by using the *--data* option.
```
>idmtools experiment --platform Local delete <experiment_id>
```
## Simulation commands for Local Platform
## Status
You can check the status of simulations use the follow command.
```
>idmtools simulation --platform Local status
```
You can also filter by a either id, experiment id, status, and tags or any combination of the aforementioned
```bash
> idmtools simulation --platform Local status --experiment-id EFX6JXBV
> idmtools simulation --platform Local status --id XDT0VMVV
> idmtools simulation --platform Local status --tag a 5 --tag b
> idmtools simulation --platform Local status --experiment-id --status failed
```
## GitRepo commands
### View
You can check idmtools available examples. You can use --raw to determine to display in detailed or simplified format
```
> idmtools gitrepo view
```
### Repos
You can list all public repos for a GitHub owner. You can use --owner to specify an owner and --page for pagination
--owner default to 'institutefordiseasemodeling'
--page default to 1
```
> idmtools gitrepo repos
```
### Releases
You can list all releases of a repo by providing --owner and --repo
--owner default to 'institutefordiseasemodeling' and --repo default to 'idmtools'
```
> idmtools gitrepo releasess
```
### Peep
You can list all current files/dirs of a repo folder by providing --url
```
> idmtools gitrepo peep
```
### Download
You can download files from a public repo to a specified local folder (default to current folder). By default, it will
download idmtools examples. You can also download any files from any public repo by using --url (multiple is supported)
```
> idmtools gitrepo download
```
| text/markdown | null | Sharon Chen <schen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Zhaowei Du <zdu@idmod.org>, Mary Fisher <mfisher@idmod.org>, Clark Kirkman IV <ckirkman@idmod.org>, Benoit Raybaud <braybaud@idmod.org> | null | null | null | modeling, IDM, cli | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click~=8.1.7",
"click-plugins",
"colorama~=0.4.6",
"cookiecutter~=2.6",
"idmtools~=3.0",
"pyperclip~=1.8",
"yaspin~=3.0",
"idmtools[test]; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:26.947146 | idmtools_cli-3.0.5.tar.gz | 21,249 | ff/d3/0b7f6bfb53b416387482e1d8aa31f35ae911d8c79713798fe9f043f99ba4/idmtools_cli-3.0.5.tar.gz | source | sdist | null | false | d16ea24a093f8a13292c08ed966f0a0a | cdafe2680ac410724d6786db8c318667e4843200dc291bd8b52c53c929ac88dd | ffd30b7f6bfb53b416387482e1d8aa31f35ae911d8c79713798fe9f043f99ba4 | null | [] | 188 |
2.4 | idmtools-test | 3.0.5 | Test and demo data for IDM-Tools | 
# idmtools-test
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Installing](#installing)
- [Testing tips](#testing-tips)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Installing
```bash
pip install idmtools-test
```
# Testing tips
Most project have markers defined for tests. You can run select test at the command line using marker filter
For example to run just docker related tests, you can user
`pytest -m "docker"`
The local runner tests make
| text/markdown | null | Sharon Chen <schen@idmod.org>, Zhaowei Du <zdu@idmod.org>, Ye Chen <yechen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Clark Kirkman IV <ckirkman@idmod.org>, Benoit Raybaud <braybaud@idmod.org> | null | null | null | modeling, IDM, test, testdata, demodata | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"idmtools~=3.0",
"pytest~=9.0",
"pywin32>=306; sys_platform == \"win32\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:25.864031 | idmtools_test-3.0.5.tar.gz | 15,476,825 | 1c/1b/028d1175c9c15af67b6419df3134048778a0e6813f64b7fdc09fae1b18ce/idmtools_test-3.0.5.tar.gz | source | sdist | null | false | ab23750058589a9c1dc0264d6c8dc6da | a058049e5135faa553d569872580eeee2a690d1944219f4d0140ad921f7e15ce | 1c1b028d1175c9c15af67b6419df3134048778a0e6813f64b7fdc09fae1b18ce | null | [] | 156 |
2.4 | idmtools | 3.0.5 | Core tools for modeling | 
# idmtools-core
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Overview](#overview)
- [Installing](#installing)
- [Development Tips](#development-tips)
- [Future Work](#future-work)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Overview
idmtools provides the APIS, logic, and other operations to provision, execute, analysis, and manage jobs running on an HPC cluster
To see the full API documentation, see https://institutefordiseasemodeling.github.io/idmtools/idmtools_index.html
# Installing
```bash
pip install idmtools
```
# Development Tips
There is a Makefile file available for most common development tasks. Here is a list of commands
```bash
clean - Clean up temproary files
lint - Lint package and tests
test - Run All tests
coverage - Run tests and generate coverage report that is shown in browser
```
On Windows, you can use `pymake` instead of `make`
# Future Work
* Add new analyze api to platform
* Where does this go?
* Move current code to Comps
* Add support for platform specific bootstrap scripts
| text/markdown | null | Zhaowei Du <zdu@idmod.org>, Sharon Chen <shchen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Benoit Raybaud <braybaud@idmod.org>, Clark Kirkman IV <ckirkman@idmod.org>, Emily Claps <emily.claps@gatesfoundation.org>, Jen Schripsema <jschripsema@idmod.org>, Ross Carter <rcarter@idmod.org>, Mandy Izzo <mizzo@idmod.org>, Mary Fisher <mafisher@idmod.org>, Lauren George <lgeorge@idmod.org> | null | null | null | modeling, IDM, IDMTools | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"backoff~=2.2",
"coloredlogs~=15.0",
"diskcache~=5.4",
"filelock~=3.21",
"more-itertools~=10.3",
"pandas~=3.0; python_version >= \"3.11\"",
"pandas~=2.2; python_version < \"3.11\"",
"numpy<3.0,>=1.26; python_version >= \"3.11\"",
"numpy<2.3,>=1.26; python_version < \"3.11\"",
"pluggy~=1.6",
"PyYAML~=6.0",
"tabulate~=0.9",
"tqdm~=4.67",
"jinja2~=3.1",
"packaging>24.0",
"allure-pytest~=2.15; extra == \"test\"",
"junitparser~=4.0; extra == \"test\"",
"livereload~=2.6; extra == \"test\"",
"pytest~=9.0; extra == \"test\"",
"pytest-cov~=7.0; extra == \"test\"",
"pytest-html~=4.2; extra == \"test\"",
"pytest-mock~=3.15; extra == \"test\"",
"pytest-timeout~=2.4.0; extra == \"test\"",
"pytest-xdist~=3.8; extra == \"test\"",
"flake8~=7.3; extra == \"test\"",
"flake8-docstrings~=1.7; extra == \"test\"",
"coverage~=7.13; python_version >= \"3.10\" and extra == \"test\"",
"coverage<7.11,>=7.6; python_version < \"3.10\" and extra == \"test\"",
"twine~=6.2; extra == \"test\"",
"docker>5.0; extra == \"notebooks\"",
"idmtools_platform_comps; extra == \"idm\"",
"idmtools_cli; extra == \"idm\"",
"idmtools_models; extra == \"idm\"",
"idmtools_platform_comps; extra == \"full\"",
"idmtools_cli; extra == \"full\"",
"idmtools_models; extra == \"full\"",
"idmtools_platform_general; extra == \"full\"",
"idmtools_platform_slurm; extra == \"full\"",
"idmtools_platform_container; extra == \"full\"",
"idmtools_cli; extra == \"container\"",
"idmtools_models; extra == \"container\"",
"idmtools_platform_general; extra == \"container\"",
"idmtools_platform_container; extra == \"container\"",
"idmtools_cli; extra == \"slurm\"",
"idmtools_models; extra == \"slurm\"",
"idmtools_platform_general; extra == \"slurm\"",
"idmtools_platform_slurm; extra == \"slurm\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools",
"Documentation, https://idmtools.readthedocs.io",
"Bug Tracker, https://github.com/InstituteforDiseaseModeling/idmtools/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:23.785223 | idmtools-3.0.5.tar.gz | 132,677 | f0/c0/74509ae9ce9f1d7a6a481a35e1a323c07003f60e8776141a65f3862b4f14/idmtools-3.0.5.tar.gz | source | sdist | null | false | 30b8124d11237e97060ce16a0e32c9a7 | 0f9b8faef972d9a09cceaa6487886d1ca552db0cd03584f683c429e5ec381034 | f0c074509ae9ce9f1d7a6a481a35e1a323c07003f60e8776141a65f3862b4f14 | null | [] | 227 |
2.4 | idmtools-models | 3.0.5 | Core tools for modeling | 
# idmtools-models
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Installing](#installing)
- [Development Tips](#development-tips)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Installing
```bash
pip install idmtools-models
```
# Development Tips
There is a Makefile file available for most common development tasks. Here is a list of commands
```bash
clean - Clean up temproary files
lint - Lint package and tests
test - Run All tests
coverage - Run tests and generate coverage report that is shown in browser
```
On Windows, you can use `pymake` instead of `make`
| text/markdown | null | Ross Carter <rcarter@idmod.org>, Sharon Chen <shchen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Zhaowei Du <zdu@idmod.org>, Mary Fisher <mafisher@idmod.org>, Mandy Izzo <mizzo@idmod.org>, Clark Kirkman IV <ckirkman@idmod.org>, Benoit Raybaud <braybaud@idmod.org>, Jen Schripsema <jschripsema@idmod.org> | null | null | null | modeling, IDM | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"idmtools~=3.0",
"idmtools[test]; extra == \"test\"",
"idmtools_test; extra == \"test\"",
"matplotlib~=3.10; extra == \"test\"",
"idm-pycomps>=2.11.0; extra == \"test\"",
"flake8; extra == \"packaging\"",
"bump2version; extra == \"packaging\"",
"twine; extra == \"packaging\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:22.482007 | idmtools_models-3.0.5.tar.gz | 15,339 | d0/d0/767ad303cd167410d6e983923bfe7eaeefa4ec811bebdc00aea2b61ec021/idmtools_models-3.0.5.tar.gz | source | sdist | null | false | 6b00291c42f0d5bd8d43cdbfb038de98 | 2257fdba1a826770363855e433e93ba5f8607ccf8f5aeffe7156e87c33804fef | d0d0767ad303cd167410d6e983923bfe7eaeefa4ec811bebdc00aea2b61ec021 | null | [] | 164 |
2.4 | idmtools-platform-container | 3.0.5 | Container platform for IDM-Tools | 
# Idmtools platform container
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Introduction](#introduction)
- [Pre-requisites](#pre-requisites)
- [Installation](#installation)
- [Examples for container platform](#examples-for-container-platform)
- [Initialize platform](#initialize-platform)
- [Container Examples](#container-examples)
- [Check result with CLI commands](#check-result-with-cli-commands)
- [Check result files](#check-result-files)
- [Folder structure](#folder-structure)
- [Basic CLI commands](#basic-cli-commands)
- [List running jobs](#list-running-jobs)
- [Check status](#check-status)
- [Cancel job](#cancel-job)
- [View Experiments history](#view-experiments-history)
- [Note](#note)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Introduction
**ContainerPlatform** is a platform designed to facilitate the execution of experiments and simulations within Docker containers. It provides a robust environment with all necessary tools and dependencies installed, allowing for seamless integration and execution of computational tasks.
## Pre-requisites
- Python 3.8/3.9/3.10/3.11/3.12 x64-bit
- OS:
- Windows 10 Pro or Enterprise
- Linux
- macOS (10.15 Catalina or later)
- Docker or Docker Desktop(required for the container platform)
On Windows, please use Docker Desktop 4.0.0 or later
- **Mac user**: Only support Intel based **x86_64** architecture if you want to run emodpy related disease models on **Docker** container platform. Apple based ARM architecture currently is not supported.
## Installation
- **Install python**
Ensure you have Python 3.8+ installed on your system.
- Create and activate a virtual environment:
```
python -m venv venv
source venv/bin/activate # On macOS/Linux
venv\Scripts\activate # On Windows
```
- Install all **container** platform related packages
```bash
pip install idmtools[container]
```
- Optional: Install all **idmtools** packages
```bash
pip install idmtools[full]
```
- To **override** existing idmtools container related packages after installing emodpy, run this command
```bash
pip install idmtools[container] --force-reinstall --no-cache-dir --upgrade
```
**Mac user**: You map need to escape the square brackets with a backslash like `\[container\]` or `\[full\]` in above command.
- Extra steps for Windows user:
- Enable **Developer Mode** on Windows
If you are running the script on Windows, you need to enable Developer Mode. To enable Developer Mode, go to Settings -> Update & Security -> For developers and select Developer Mode on, or refer to this [guide](https://learn.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development).
- Enable **long file path** for Windows
Due to the file/folder structure design outlined in the section below, if running the script on Windows, be aware of the file path length limitation (less than 255 characters).
To allow longer file paths, you can enable Long Path Support in the Windows Group Policy Editor.
Refer to this [guide](https://www.autodesk.com/support/technical/article/caas/sfdcarticles/sfdcarticles/The-Windows-10-default-path-length-limitation-MAX-PATH-is-256-characters.html) for detailed instructions.
## Examples for container platform
### Initialize platform
- This is the example using Container Platform
```python
from idmtools.core.platform_factory import Platform
platform = Platform('CONTAINER', job_directory='<user job directory>')
```
- To trigger MPI, use ntasks >=2:
```python
from idmtools.core.platform_factory import Platform
platform = Platform('CONTAINER', job_directory='<user job directory>', ntasks=2)
```
- More options for container platform initialization:
refer to [ContainerPlatform attributes](hhttps://docs.idmod.org/projects/idmtools/en/latest/platforms/container/index.html#containerplatform-attributes)
### Container Examples
- idmtools examples: https://github.com/InstituteforDiseaseModeling/idmtools/tree/main/examples/platform_container
- emodpy-malaria examples: https://github.com/EMOD-Hub/emodpy-malaria/tree/main/examples-container
### Check result with CLI commands
```bash
idmtools container status <experiment id>
```
### Check result files
- on host: `<job_directory>/<suite_path>/<experiment_path>/<simulation_path>/`
- in container: `/home/container-data/<suite_path>/<experiment_path>/<simulation_path>/`
### Folder structure
By default, `idmtools` now generates simulations with the following structure:
`job_directory/s_<suite_name>_<suite_uuid>/e_<experiment_name>_<experiment_uuid>/simulation_uuid`
- `job_directory` — The base directory that contains all suite, experiment, and simulation folders.
- `s_<suite_name>_<suite_uuid>` — The suite directory, where the suite name (truncated to a maximum of 30 characters) is prefixed with s_, followed by its unique suite UUID.
- `e_<experiment_name>_<experiment_uuid>` — The experiment directory, where the experiment name (also truncated to 30 characters) is prefixed with e_, followed by its unique experiment UUID.
- `simulation_uuid` — The simulation folder identified only by its UUID.
Suite is optional. If the user does not specify a suite, the folder will be:
`job_directory/e_<experiment_name>_<experiment_uuid>/simulation_uuid`
Examples:
If you create a suite named: `my_very_long_suite_name_for_malaria_experiment`
and an experiment named: `test_experiment_with_calibration_phase`
`idmtools` will automatically truncate both names to a maximum of 30 characters and apply the prefixes s_ for suites and e_ for experiments, resulting in a path like:
```
job_directory/
└── s_my_very_long_suite_name_for_m_12345678-9abc-def0-1234-56789abcdef0/
└── e_test_experiment_with_calibrati_abcd1234-5678-90ef-abcd-1234567890ef/
└── 7c9e6679-7425-40de-944b-e07fc1f90ae7/
```
Or for no suite case:
```
job_directory/
└── e_test_experiment_with_calibrati_abcd1234-5678-90ef-abcd-1234567890ef/
└── 7c9e6679-7425-40de-944b-e07fc1f90ae7/
```
Users can customize this structure through the idmtools.ini configuration file:
- `name_directory = False` — Excludes the suite and experiment names (and their prefixes) from the simulation path.
- `sim_name_directory = True` — Includes the simulation name in the simulation folder path when name_directory = True.
## Basic CLI commands
**ContainerPlatform** provides several CLI commands to manage and monitor experiments and simulations. Below are some basic commands:
### List running jobs
To list running experiment or simulation jobs:
```bash
idmtools container jobs [<container-id>] [-l <limit>] [-n <next>]
```
### Check status
To check the status of an experiment or simulation:
```bash
idmtools container status <item-id> [-c <container_id>] [-l <limit>] [--verbose/--no-verbose]
```
### Cancel job
To cancel an experiment or simulation job:
```bash
idmtools container cancel <item-id> [-c <container_id>]
```
### View Experiments history
To view experiments history:
```bash
idmtools container history [<container-id>] [-l <limit>] [-n <next>]
```
## Note
- **WorkItem** is not supported on the Container Platform as it is unnecessary in most cases since the code already runs on the user's local computer.
- **AssetCollection** creation or referencing to an existing AssetCollection are not supported on the Container Platform with the current release. If you've used the COMPS Platform, you may have scripts using these objects. You would need to update these scripts without using these objects in order to run them on the Container Platform.
- Run with **Singularity** is not needed with Container Platform. If you take an existing COMPS example and try to run it with Container Platform, you may need to remove the code that sets up the singularity image.
| text/markdown | null | Zhaowei Du <zdu@idmod.org>, Sharon Chen <shchen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Benoit Raybaud <braybaud@idmod.org>, Clark Kirkman IV <ckirkman@idmod.org>, Ye Chen <yechen@idmod.org>, Mary Fisher <mafisher@idmod.org>, Mandy Izzo <mizzo@idmod.org>, Jen Schripsema <jschripsema@idmod.org>, Ross Carter <rcarter@idmod.org> | null | null | null | modeling, IDM | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"idmtools_platform_general~=3.0",
"docker~=7.0",
"rich~=14.3",
"idmtools[test]; extra == \"test\"",
"idmtools_test; extra == \"test\"",
"idmtools_models; extra == \"test\"",
"flake8; extra == \"packaging\"",
"bump2version; extra == \"packaging\"",
"twine; extra == \"packaging\"",
"natsort; extra == \"packaging\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:21.378691 | idmtools_platform_container-3.0.5.tar.gz | 71,581 | a5/9f/874304cd3aaf5fc980c688783a253c4956a15be409d386a2f05cebeb0f2a/idmtools_platform_container-3.0.5.tar.gz | source | sdist | null | false | e5d1e9f1e7b2c6518ae102bfdc3a47c8 | 1b326772002c626403f736581748429e881f5ae78dc79b0430a1c1eb77540068 | a59f874304cd3aaf5fc980c688783a253c4956a15be409d386a2f05cebeb0f2a | null | [] | 155 |
2.4 | idmtools-platform-general | 3.0.5 | General platform for IDM-Tools | 
# idmtools-platform-general
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Installing](#installing)
- [Development Tips](#development-tips)
- [Use Cases](#use-cases)
- [Feature Roadmap](#feature-roadmap)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Installing
```bash
pip install idmtools-platform-general
```
# Development Tips
There is a Makefile file available for most common development tasks. Here is a list of commands
```bash
clean - Clean up temproary files
lint - Lint package and tests
test - Run All tests
coverage - Run tests and generate coverage report that is shown in browser
```
On Windows, you can use `pymake` instead of `make`
# Use Cases
* Testing
* Test core functionality
* Performance Testing
* Integration with other systems
* Other HPC Systems
* Local Executions Systems
* Jupyter notebooks
* Basis for future local platforms
* Process
* Thread
* Dask
* Asyncio
# Feature Roadmap
* First Version
* Support for basic provisioning on a linux filesystem
| text/markdown | null | Zhaowei Du <zdu@idmod.org>, Sharon Chen <shchen@idmod.org>, Clinton Collins <ccollins@idmod.org>, Jen Schripsema <jschripsema@idmod.org>, Ross Carter <rcarter@idmod.org> | null | null | null | modeling, IDM | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"idmtools~=3.0",
"idmtools_cli~=3.0",
"idmtools[test]; extra == \"test\"",
"idmtools_test; extra == \"test\"",
"flake8~=7.3; extra == \"packaging\"",
"bump2version; extra == \"packaging\"",
"twine~=6.2; extra == \"packaging\"",
"natsort~=8.4; extra == \"packaging\""
] | [] | [] | [] | [
"Homepage, https://github.com/InstituteforDiseaseModeling/idmtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:52:20.341142 | idmtools_platform_general-3.0.5.tar.gz | 45,385 | cf/64/706be38eb07522b59a6cd8289e24c8d30d557d58799b3748b4448ccb39ea/idmtools_platform_general-3.0.5.tar.gz | source | sdist | null | false | 331b50e087183dcaa578bcb6646ae0d6 | 1cbc5b0637dfb59927bdc8a7deb765ebc8ca1dc79e9f628215cfd544076b2c02 | cf64706be38eb07522b59a6cd8289e24c8d30d557d58799b3748b4448ccb39ea | null | [] | 176 |
2.4 | dfvue | 6.6 | dfvue: A minimal GUI for a quick view of csv files | dfvue
=====
A simple GUI to view csv files
------------------------------
.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.10372631.svg
:target: https://doi.org/10.5281/zenodo.10372631
:alt: Zenodo DOI
.. image:: https://badge.fury.io/py/dfvue.svg
:target: https://badge.fury.io/py/dfvue
:alt: PyPI version
.. image:: https://img.shields.io/conda/vn/conda-forge/dfvue.svg
:target: https://anaconda.org/conda-forge/dfvue
:alt: Conda version
.. image:: https://img.shields.io/badge/license-MIT-blue.svg?style=flat
:target: https://github.com/mcuntz/dfvue/blob/master/LICENSE
:alt: License
.. image:: https://github.com/mcuntz/dfvue/workflows/Continuous%20Integration/badge.svg?branch=main
:target: https://github.com/mcuntz/dfvue/actions
:alt: Build status
About dfvue
-----------
``dfvue`` is a minimal GUI for a quick view of csv files. It uses an
input panel similar to Microsoft Excel to check visually that the csv
file is read correctly. It provides most options of the
`pandas.read_csv`_ method to be very versatile on possible csv
formats.
``dfvue`` is a Python script that can be called from within Python or
as a command line tool. It is not supposed to produce
publication-ready plots but rather provide a quick overview of the csv
file.
A more complete documentation for ``dfvue`` is available from:
https://mcuntz.github.io/dfvue/
Installation
------------
``dfvue`` is an application written in Python. It can be installed
with `pip`:
.. code-block:: bash
python -m pip install customtkinter dfvue
or via Conda_:
.. code-block:: bash
conda install -c conda-forge dfvue
``dfvue`` uses CustomTkinter_ if it is installed. CustomTkinter_ is
not on Conda_. One can install CustomTkinter_ with pip on Conda_,
which works well except for Linux.
Sometimes `tkinter` is not enabled in the system's Python version. One
has to do, for example, ``sudo apt install python3-tk`` on Linux or
``brew install python3 python-tk`` on macOS with Homebrew_.
We also provide standalone applications for macOS and Windows that
come with everything needed to run ``dfvue`` including Python:
- macOS: `dfvue 6.6 Intel`_ and `dfvue 6.5.1 ARM`_ for Intel and
ARM processors, resp., for macOS 15+ [Sequoia and newer]. The same
packages without CustomTkinter_ are `dfvue 6.6 Intel aqua`_ and
`dfvue 6.5.1 ARM aqua`_ for Intel and ARM processors,
respectively.
- Windows: `dfvue 6.5.1`_, packaged on Windows 10. The same
package without CustomTkinter_ is `dfvue 6.5.1 azure`_
`dfvue > 6.0` on macOS is either for Intel processors or for Apple
Silicon (ARM) chips. The apps are notarized by Apple and might take a
short while on first opening.
Some people have problems with CustomTkinter's dropdown menus that do
not use scrollbars, e.g. for selecting variables. In this case,
uninstall CustomTkinter:
.. code-block:: bash
python -m pip uninstall customtkinter
or download the standalone package without it. This is less beautiful
but uses scrollbars with menus and might work better on your setup.
Quick usage guide
-----------------
``dfvue`` can be run from the command line:
.. code-block:: bash
dfvue csv_file*.csv
or from within Python:
.. code-block:: python
from dfvue import dfvue
dfvue('csv_file.csv')
where the csv file is optional. The latter can be left out and csv
file(s) can be opened with the "Open File" button from within
``dfvue``.
Note, ``dfvue`` uses the `TkAgg` backend of `matplotlib`. It must be
called before any other call to `matplotlib`. This also means that you
cannot launch it from within `iPython` if it was launched with
`--pylab`. It can be called from within a standard `iPython`, though,
or using `ipython --gui tk`.
General layout
^^^^^^^^^^^^^^
On opening, ``dfvue`` presents currently only one panel for producing
scatter/line plots. Here is the look in macOS light mode (higher
resolution images can be found in the documentation_):
.. image:: https://mcuntz.github.io/dfvue/images/scatter_panel_light.png
:width: 860 px
:align: left
:alt: Graphical documentation of dfvue layout
..
:height: 462 px
The pane is organised in this fashion: the plotting canvas, the
Matplotlib navigation toolbar and the pane, where one can choose the
plotting variables and plotting options. You can open another,
identical window for the same csv file with the button "New Window" on
the top right. You can then also read in a new csv file in one of the
windows with the button "Open File".
Reading a csv file
^^^^^^^^^^^^^^^^^^
The "Read csv file" window opens when a csv file is given.
.. image:: https://mcuntz.github.io/dfvue/images/read_csv_panel.png
:width: 860 px
:align: left
:alt: Read csv file window
One or several csv files can be given on the command line:
.. code-block:: bash
dfvue csv_file*.csv
or from within Python:
.. code-block:: python
from dfvue import dfvue
dfvue('csv_file.csv')
or being selected from the "Choose csv file(s)" selector that opens
when hitting the button "Open File".
The "Read csv file(s)" window reads the first 40 rows of the (first)
csv file with the `pandas.read_csv`_ method using the options given in
the pane. It shows the resulting `pandas.DataFrame`_ in tabulated
format. Changing focus from one option entry to another, for example
by hitting the <tab> key, re-reads the first 40 rows of the csv file
with `pandas.read_csv`_ using the selected options in the
form. Hitting <enter> or <return> within the window reads the entire
csv file(s) using the selected options and returns to the plotting
panels. This is the same than pressing the "Read csv" button in the
lower right corner. Multiple csv files will be read one by one with
`pandas.read_csv`_ using the same options and then concatenated with
`pandas.concat`_.
The options in the form are default options of `pandas.read_csv`_
except for `parse_date`, which is set to `True` instead of
`False`. Hover over the entry boxes to see explanations of the options
in the tooltips.
If the csv file includes a Date/Time column, it is best to set this
column as the index of the `pandas.DataFrame`_ by using
`index_col`. Correct `datetime` is indicated if the index has the data
type `datetime64[ns]` in the plot panels. This is then correctly
interpreted by the underlying Matplotlib when plotting, zooming, or
panning the axes.
`missing_value` is not an option of `pandas.read_csv`_. It is here for
convenience and any number entered in `missing_value` will be added to
pandas `na_values`.
Reading a csv file with options on the command line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following options of `pandas.read_csv`_ can be given on the command line:
.. code-block:: bash
-s separator, --sep separator
Delimiter to use.
-i columns, --index_col columns
Column(s) to use as index, either given as column index
or string name.
-k rows, --skiprows rows
Line number(s) to skip (0-indexed, must include comma,
e.g. "1," for skipping the second row) or number of lines
to skip (int, without comma) at the start of the file.
-p bool/list/dict, --parse_dates bool/list/dict
boolean, if True -> try parsing the index.
list of int or names, e.g. 1,2,3
-> try parsing columns 1, 2, and 3 each as a separate
date column.
list of lists, e.g. [1,3]
-> combine columns 1 and 3 and parse as a single
date column.
dict, e.g. "foo":[1,3]
-> parse columns 1 and 3 as date and call result "foo"
-d format_string, --date_format format_string
Will parse dates according to this format.
For example: "%Y-%m-%d %H:%M%S". See
https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
-m missing_value, --missing_value missing_value
Missing or undefined value set to NaN. For negative values,
use long format, e.g. --missing_value=-9999.
Examples of pandas.read_csv options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`pandas.read_csv`_ is very powerful and can read a lot of different
formats. Here are some examples of csv files and the options for
`pandas.read_csv`_.
The most simple csv file would be like:
.. code-block::
DATETIME,TA_1_1_1,RH_1_1,ALB_1_1_1
2015-01-01 00:30:00,-2.17794549084,97.2958103396,0.0
2015-01-01 01:00:00,-2.02584908489,98.2103903979,0.0
This can simply be read by setting `index_col=0`. The first column
including date and time can simply a be a `ISO8601`_ date, for example
'2015-01-01 00:30:00' or '2015-01-01T00:30:00', or be given by
`date_format`, which would be '%Y-%m-%d %H:%M:%S' in this case. See
the documentation of `pandas.to_datetime`_ or `strftime`_.
Command line options would be:
`dfvue -i 0 csv-file`
or
`dfvue -i 0 -d '%Y-%m-%d %H:%M:%S' csv-file`
A common practice is to put a special value for measurement errors or
similar such as -9999:
.. code-block::
DATETIME,TA_1_1_1,RH_1_1,ALB_1_1_1
2015-01-01 00:30:00,-2.17794549084,97.2958103396,-9999
2015-01-01 01:00:00,-2.02584908489,98.2103903979,-9999
This can be read by setting `missing_value=-9999`. On the command
line, this is:
`dfvue -i 0 --missing_value=-9999 csv-file`
or
`dfvue -i 0 -d '%Y-%m-%d %H:%M:%S' -m '-9999' csv-file`
You have to use either put -9999 in quotes (`-m '-9999`) or use the
long form `--missing_value=-9999` instead of the short form `-m -9999`
in case of negative missing values because the command line would
interpret *-9999* as a separate option and would fail.
Date and time information can be given in different formats, for example:
.. code-block::
Date;rho H1 (kg/m3);alb H1 (-);T_Psy H1 (degC);WS_EC H1 (m/s);Prec H1 (mm/30min)
01.01.2015 00:30;97.2958103396;-9999;-2.17794549084
01.01.2015 01:00;98.2103903979;-9999;-2.02584908489
which can be read by setting the date format:
`date_format=%d.%m.%Y %H:%M`, `index_col=0`, `missing_value=-9999`, as
well as the field separator `sep=;`. On the the command line, this is:
`dfvue -s ';' -i 0 -d '%d.%m.%Y %H:%M' --missing_value=-9999 csv-file`
Or in `FLUXNET`_ / `ICOS`_ / `europe-fluxdata.eu`_ format with a
second row that shows the variable units:
.. code-block::
TIMESTAMP_END,TA_1_1_1,RH_1_1_1,ALB_1_1_1
YYYYMMDDhhmm,degC,%,adimensional
201501010030,-2.17794549084,97.2958103396,-9999
201501010100,-2.02584908489,98.2103903979,-9999
which is read with `date_format=%Y%M%d%H%M`, `index_col=0`,
`skiprows=1,`, and `missing_value=-9999`. Note the comma after '1' in
`skiprows`. Without the command, *skiprows* would be the number of rows
to skip at the beginning, i.e. the first row, which would be
wrong. The comma indicates that *skiprows* is a list and hence a list
of row indexes, that means *1* here and thus skip the second row. This
would be on the command line
`dfvue -i 0 -d '%Y%m%d%H%M' --skiprows=1, --missing_value=-9999 csv-file`
Date and time information can also be in different columns. Here the
second column is the day-of-the-year:
.. code-block::
year,jday,hour,min,tair,rhair,albedo
2015,1,0,30,-2.17794549084,97.2958103396,-9999
2015,1,1,0,-2.02584908489,98.2103903979,-9999
which can be read by setting `parse_dates=[0,1,2,3]`, `index_col=0`,
and `date_format=%Y %j %H %M`, as well as `missing_value=-9999`. Note
the brackets '[]' around `parse_dates`. Without brackets it would
parse columns 0, 1, 2, and 3 each as a separate date column, whereas
with brackets it combines columns 0, 1, 2, and 3 and parses it as a
single date column, with index '0'. It will use a space between column
entries. Hence `index_col=0` sets this combined column as the index,
parsing the dates with the format '%Y %j %H %M' with spaces between
the `strftime`_ formats.
On the command line, this would be:
`dfvue -i 0 -p [0,1,2,3] -d '%Y %j %H %M' --missing_value=-9999 csv-file`
If you want to have spaces in the list of `parse_dates` on the command
line, you have to use the long form: `--parse_dates='[0, 1, 2, 3]'`.
Scatter/Line panel
^^^^^^^^^^^^^^^^^^
Here is the Scatter/Line panel in macOS light mode, describing all
buttons, sliders, entry boxes, spinboxes, and menus:
.. image:: https://mcuntz.github.io/dfvue/images/scatter_panel_light_multiline.png
:width: 860 px
:align: left
:alt: Graphical documentation of Scatter/Line panel
The default plot is a line plot with solid lines (line style 'ls' is
'-'). One can set line style 'ls' to None and set a marker symbol,
e.g. 'o' for circles, to get a scatter plot. A large variety of line
styles, marker symbols and color notations are supported.
Transform panel
^^^^^^^^^^^^^^^
You can do calculations on the Pandas DataFrame. Use the "Transform df" button to open the transform panel:
.. image:: https://mcuntz.github.io/dfvue/images/transform_panel_light.png
:width: 860 px
:align: left
:alt: Graphical documentation of Scatter/Line panel
You can do calculations with the DataFrame. The DataFrame is called
self.df. Its column names are the names of the x, y, and y2 variables
in the drop-down menus without (size, datatype).
You can transform the DataFrame such as doing daily means of all
columns. This transformation is preset in the transform panel for an
easier start on writing DataFrame calculations and transformations:
`self.df = self.df.resample('1D').mean().squeeze()`. Calculations can
have multiple lines, import libraries, etc.
License
-------
``dfvue`` is distributed under the MIT License. See the LICENSE_ file
for details.
Copyright (c) 2023- Matthias Cuntz
``dfvue`` uses CustomTkinter_ if installed. Otherwise it uses the
Azure_ 2.0 theme by rdbende_ on Linux and Windows.
Standalone applications are produced with `cx_Freeze`_, currently
maintained by `Marcelo Duarte`_.
.. _cx_Freeze: https://cx-freeze.readthedocs.io/en/latest/
.. _dfvue 6.5.1: https://www.macu.de/extra/dfvue-6.5.1-win64.msi
.. _dfvue 6.5.1 azure: https://www.macu.de/extra/dfvue-6.5.1-win64-azure.msi
.. _dfvue 6.6 Intel: https://www.macu.de/extra/dfvue-6.6-intel.dmg
.. _dfvue 6.6 Intel aqua: https://www.macu.de/extra/dfvue-6.6-intel-aqua.dmg
.. _dfvue 6.5.1 ARM: https://www.macu.de/extra/dfvue-6.5.1-arm64.dmg
.. _dfvue 6.5.1 ARM aqua: https://www.macu.de/extra/dfvue-6.5.1-arm64-aqua.dmg
.. _documentation: https://mcuntz.github.io/dfvue/
.. _europe-fluxdata.eu: https://www.europe-fluxdata.eu
.. _pandas.concat: https://pandas.pydata.org/docs/reference/api/pandas.concat.html
.. _pandas.read_csv: https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html
.. _pandas.DataFrame: https://pandas.pydata.org/docs/reference/frame.html
.. _pandas.to_datetime: https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html
.. _read_csv: https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html
.. _rdbende: https://github.com/rdbende
.. _strftime: https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
.. _Azure: https://github.com/rdbende/Azure-ttk-theme
.. _Conda: https://docs.conda.io/projects/conda/en/latest/
.. _CustomTkinter: https://customtkinter.tomschimansky.com
.. _FLUXNET: https://fluxnet.org
.. _Homebrew: https://brew.sh
.. _ICOS: https://www.icos-cp.eu
.. _ISO8601: https://en.wikipedia.org/wiki/ISO_8601
.. _LICENSE: https://github.com/mcuntz/dfvue/blob/main/LICENSE
.. _Marcelo Duarte: https://github.com/marcelotduarte
| text/x-rst | Matthias Cuntz | mc@macu.de | Matthias Cuntz | mc@macu.de | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: MacOS",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Atmospheric Science",
"Topic :: Scientific/Engineering :: Hydrology",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development",
"Topic :: Utilities"
] | [
"any"
] | https://github.com/mcuntz/dfvue | null | >=3.8 | [] | [] | [] | [
"numpy",
"matplotlib",
"pandas",
"numpydoc<2,>=1.1; extra == \"doc\"",
"sphinx<4,>=3; extra == \"doc\"",
"sphinx_book_theme>=1.0.1; extra == \"doc\"",
"coverage[toml]<6,>=5.2.1; extra == \"test\"",
"pytest<7,>=6.0; extra == \"test\"",
"pytest-cov<3,>=2.11.0; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://mcuntz.github.io/dfvue/",
"Source, https://github.com/mcuntz/dfvue",
"Tracker, https://github.com/mcuntz/dfvue/issues",
"Changelog, https://github.com/mcuntz/dfvue/blob/main/CHANGELOG.rst",
"Conda-Forge, https://anaconda.org/conda-forge/dfvue"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:51:54.272885 | dfvue-6.6.tar.gz | 1,050,059 | 6e/7e/ea215eca4966cc07a55ec92a23f16562cf1ad74aea4e43ff295f5e32a3aa/dfvue-6.6.tar.gz | source | sdist | null | false | 4c18e4f97dd5a41fcb5c7618cf83711f | fa89e90b0ef0588f0c5ea207ce4914761a6d9ef4eec4b2e6a986740da6a3fae9 | 6e7eea215eca4966cc07a55ec92a23f16562cf1ad74aea4e43ff295f5e32a3aa | null | [
"LICENSE",
"AUTHORS.rst"
] | 153 |
2.4 | magic-spec | 1.0.0 | Magic Specification-Driven Development (SDD) Workflow Installer | # 🪄 Magic Spec
[](https://www.npmjs.com/package/magic-spec)
[](https://pypi.org/project/magic-spec/)
[](./LICENSE)
**Specification-Driven Development (SDD) workflow for AI coding agents.**
Stop your AI from writing code before it understands the problem.
`magic-spec` installs a structured pipeline — *Thought → Spec → Plan → Task → Code* — directly into any project, regardless of stack.
## ✨ What is Magic Spec?
`magic-spec` is a set of **markdown-based workflow instructions** for AI coding agents (Cursor, Claude, Gemini, Copilot, etc.). It acts as an operating system for agentic development, enforcing a rigorous, structured pipeline:
```
💡 Idea → 📋 Specification → 🗺️ Plan → ⚡ Task → 🚀 Code → 🔍 Retrospective
```
Once installed, your AI agent will automatically:
- Convert raw thoughts into structured specification files.
- Build a phased implementation plan from approved specs.
- Decompose the plan into atomic, trackable tasks.
- Analyze its own workflow and suggest improvements.
**No code is written until a specification exists. No spec is implemented without a plan.**
## 🚀 Quick Start
Works with **any project** — Rust, Go, Python, JavaScript, or anything else.
No runtime lock-in. Requires only Node.js *or* Python to install.
### Option A — Node.js (npx)
```bash
npx magic-spec@latest
```
### Option B — Python (uvx)
```bash
uvx magic-spec
```
Both commands do exactly the same thing:
1. Copy `.magic/` (the SDD engine) into your project.
2. Copy `.agent/workflows/magic/` (agent trigger wrappers) into your project.
3. Run the init script — creates your `.design/` workspace with `INDEX.md` and `RULES.md`.
## 🧭 Core Philosophy
| Principle | Description |
| :--- | :--- |
| **Specs First, Code Later** | The agent is forbidden from writing code from raw input. All ideas become specs first. |
| **Deterministic Process** | A strict pipeline is enforced: *Thought → Spec → Plan → Task → Code*. |
| **Constitution-Driven** | All project decisions live in `.design/RULES.md` — the project's living constitution. |
| **Self-Improving** | The Retrospective workflow analyzes real usage and generates improvement recommendations. |
## 📁 What Gets Installed
After running `npx magic-spec@latest` in your project root:
```plaintext
your-project/
│
├── .agent/workflows/magic/ # Agent entry points (slash commands)
│ ├── plan.md
│ ├── retrospective.md
│ ├── rule.md
│ ├── specification.md
│ └── task.md
│
├── .magic/ # SDD Engine (workflow logic, read-only)
│ ├── init.md
│ ├── plan.md
│ ├── retrospective.md
│ ├── rule.md
│ ├── specification.md
│ ├── task.md
│ └── scripts/
│ ├── init.sh # Init for macOS / Linux
│ └── init.ps1 # Init for Windows
│
└── .design/ # Your project workspace (generated)
├── INDEX.md # Spec registry
├── RULES.md # Project constitution
├── PLAN.md # Implementation plan
├── specifications/ # Your specification files
└── tasks/ # Task breakdowns per phase
```
## 🔗 The Workflow Pipeline
```mermaid
graph TD
IDEA["💡 Idea"] --> INIT{"🏗️ Auto-Init"}
INIT -->|.design/ exists| SPEC
INIT -->|.design/ missing| CREATE["Create .design/ structure"] --> SPEC
SPEC["📋 Specification"] <--> RULE["📜 Rule"]
SPEC --> PLAN["🗺️ Plan"]
PLAN --> TASK["⚡ Task"]
TASK --> CODE["🚀 Code"]
CODE --> RETRO["🔍 Retrospective"]
RETRO -.->|Feedback loop| SPEC
```
### Core Workflows
| # | Workflow | Purpose |
| :--- | :--- | :--- |
| 1 | **Specification** | Converts raw thoughts into structured specs. Manages statuses: `Draft → RFC → Stable → Deprecated`. |
| 2 | **Plan** | Reads Stable specs, builds a dependency graph, and produces a phased `PLAN.md`. |
| 3 | **Task** | Decomposes the plan into atomic tasks with sequential and parallel execution tracks. |
### Auxiliary Workflows
| Workflow | Purpose |
| :--- | :--- |
| **Rule** | Manages the project constitution (`RULES.md §7`). Add, amend, or remove conventions. |
| **Retrospective** | Analyzes SDD usage, collects metrics, and generates improvement recommendations. |
## 💬 How to Use (with any AI agent)
Just talk to your AI agent naturally. Initialization is **automatic** — no setup command required.
```plaintext
"Dispatch this thought into specs: I want a user auth system with JWT and Redis..."
→ Runs Specification workflow
"Create an implementation plan"
→ Runs Plan workflow
"Generate tasks for Phase 1"
→ Runs Task workflow
"Execute the next task"
→ Runs Task workflow (execution mode)
"Add rule: always use snake_case for file names"
→ Runs Rule workflow
"Run retrospective"
→ Runs Retrospective workflow
```
The AI reads the corresponding `.magic/*.md` workflow file and executes the request within the bounds of the SDD system. **No code escapes the pipeline.** ✨
## 🔄 Updating
Pull the latest engine improvements without touching your project data:
```bash
# Node.js
npx magic-spec@latest --update
# Python
uvx magic-spec --update
```
The update overwrites `.magic/` (the engine) but **never touches** `.design/` (your specs, plans, and tasks).
## 🤝 Compatibility
Works with any AI coding agent that can read markdown workflow files:
- [Cursor](https://cursor.sh) (`.cursorrules` + Agent mode)
- [Claude](https://claude.ai) (Projects)
- [Gemini](https://gemini.google.com) (via Gemini Code)
- [GitHub Copilot](https://github.com/features/copilot) (Agent mode)
- Any terminal-based or API-driven agent
## 📄 License
[MIT](./LICENSE) © 2026 Oleg Alexandrov
---
## TODO: Magic Spec
| text/markdown | null | Oleg Alexandrov <alexandrovoleg.ru@gmail.com> | null | null | MIT License | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/teratron/magic-spec",
"Repository, https://github.com/teratron/magic-spec"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T22:51:30.256574 | magic_spec-1.0.0.tar.gz | 5,013 | aa/bf/df4aeb5281f333a864cd6a911fbb7bcb2cb400e1d1aa7b4568c7d584b80b/magic_spec-1.0.0.tar.gz | source | sdist | null | false | 379d528c7dde28768290fa54f15fa2e0 | 745d19ea87f1a290883a053cbe9f7c12b231830ff585489cf7c3dcf6378ede32 | aabfdf4aeb5281f333a864cd6a911fbb7bcb2cb400e1d1aa7b4568c7d584b80b | null | [
"LICENSE"
] | 176 |
2.4 | uuid_utils | 0.14.1 | Fast, drop-in replacement for Python's uuid module, powered by Rust. | # Python UUID Utils
<div align="center">
[](https://pypi.org/project/uuid-utils/)
[](https://pypi.org/project/uuid-utils)
[](https://codspeed.io/aminalaee/uuid-utils?utm_source=badge)
</div>
---
Fast, drop-in replacement for Python's uuid module, powered by Rust.
Available UUID versions:
- `uuid1` - Version 1 UUIDs using a timestamp and monotonic counter.
- `uuid3` - Version 3 UUIDs based on the MD5 hash of some data.
- `uuid4` - Version 4 UUIDs with random data.
- `uuid5` - Version 5 UUIDs based on the SHA1 hash of some data.
- `uuid6` - Version 6 UUIDs using a timestamp and monotonic counter.
- `uuid7` - Version 7 UUIDs using a Unix timestamp ordered by time.
- `uuid8` - Version 8 UUIDs using user-defined data.
## Installation
Using `pip`:
```shell
$ pip install uuid-utils
```
or, using `conda`:
```shell
$ conda install -c conda-forge uuid-utils
```
## Example
```shell
>>> import uuid_utils as uuid
>>> # make a random UUID
>>> uuid.uuid4()
UUID('ffe95fcc-b818-4aca-a350-e0a35b9de6ec')
>>> # make a random UUID using a Unix timestamp which is time-ordered.
>>> uuid.uuid7()
UUID('018afa4a-0d21-7e6c-b857-012bc678552b')
>>> # make a UUID using a SHA-1 hash of a namespace UUID and a name
>>> uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
UUID('886313e1-3b8a-5372-9b90-0c9aee199e5d')
>>> # make a UUID using an MD5 hash of a namespace UUID and a name
>>> uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')
UUID('6fa459ea-ee8a-3ca4-894e-db77e160355e')
```
## Compatibility with Python UUID
In some cases, for example if you are using `Django`, you might need `UUID` instances to be returned
from the standard-library `uuid`, not a custom `UUID` class.
In that case you can use the `uuid_utils.compat` which comes with a performance penalty
in comparison with the `uuid_utils` default behaviour, but is still faster than the standard-library.
```py
>>> import uuid_utils.compat as uuid
>>> # make a random UUID
>>> uuid.uuid4()
UUID('ffe95fcc-b818-4aca-a350-e0a35b9de6ec')
```
## Benchmarks
| Benchmark | Min | Max | Mean | Min (+) | Max (+) | Mean (+) |
| ---------------- | ----- | ----- | ----- | ------------- | ------------- | ------------- |
| UUID v1 | 0.061 | 0.299 | 0.194 | 0.019 (3.3x) | 0.019 (15.4x) | 0.019 (10.1x) |
| UUID v3 | 0.267 | 0.307 | 0.293 | 0.035 (7.6x) | 0.041 (7.5x) | 0.039 (7.5x) |
| UUID v4 | 0.073 | 0.119 | 0.083 | 0.005 (15.2x) | 0.005 (24.6x) | 0.005 (17.1x) |
| UUID v5 | 0.058 | 0.189 | 0.146 | 0.008 (7.6x) | 0.038 (5.0x) | 0.016 (9.0x) |
| UUID v6 | 0.032 | 0.033 | 0.032 | 0.003 (10.1x) | 0.003 (10.3x) | 0.003 (10.1x) |
| UUID v7 | 0.063 | 0.063 | 0.063 | 0.004 (16.1x) | 0.004 (16.0x) | 0.004 (16.1x) |
| UUID from hex | 0.128 | 0.139 | 0.135 | 0.016 (8.2x) | 0.017 (8.0x) | 0.016 (8.3x) |
| UUID from bytes | 0.031 | 0.135 | 0.093 | 0.016 (2.0x) | 0.016 (8.6x) | 0.016 (5.9x) |
| UUID from int | 0.027 | 0.102 | 0.043 | 0.003 (8.3x) | 0.004 (25.0x) | 0.003 (12.4x) |
| UUID from fields | 0.031 | 0.162 | 0.077 | 0.005 (6.0x) | 0.005 (30.6x) | 0.005 (14.7x) |
<sup>Benchmark results might vary in different environments, but in most cases the uuid_utils should outperform stdlib uuid.</sup><br>
## How to develop locally
```shell
$ make build
$ make test
```
Or:
```shell
$ maturin develop --release
```
| text/markdown; charset=UTF-8; variant=GFM | null | Amin Alaee <mohammadamin.alaee@gmail.com> | null | null | null | rust, uuid | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust",
"Intended Audience :: Developers",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/aminalaee/uuid-utils",
"Issues, https://github.com/aminalaee/uuid-utils/issues",
"Source, https://github.com/aminalaee/uuid-utils"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T22:50:46.870998 | uuid_utils-0.14.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 345,699 | 54/6e/dcd3fa031320921a12ec7b4672dea3bd1dd90ddffa363a91831ba834d559/uuid_utils-0.14.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | cp39 | bdist_wheel | null | false | 0344e314cf654c4ac63b1d741738e2c7 | ce6743ba194de3910b5feb1a62590cd2587e33a73ab6af8a01b642ceb5055862 | 546edcd3fa031320921a12ec7b4672dea3bd1dd90ddffa363a91831ba834d559 | BSD-3-Clause | [
"LICENSE.md"
] | 0 |
2.4 | demekz | 0.1.8 | template | # Demekz Django App
Приложение для образовательного портала с регистрацией, авторизацией, созданием заявок и панелью администратора.
## Установка
```bash
pip install demekz
| text/markdown | Anonymous | anonim228@mail.com | null | null | null | null | [
"Framework :: Django",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://github.com/ваш-username/demekz | null | >=3.8 | [] | [] | [] | [
"Django>=3.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T22:50:41.614058 | demekz-0.1.8-py3-none-any.whl | 9,475 | 44/51/cab6baf0ddbdd0f087b6f59a4d955106fa7ad9cfb1b3e70ba78400af029c/demekz-0.1.8-py3-none-any.whl | py3 | bdist_wheel | null | false | 55fd770cd1e4efbf9c06e9ad66d90dfc | 1d87e4e6e87ba0b4c43b1e9849da5cb9e48ef8bf97db42609773fe63ef272de0 | 4451cab6baf0ddbdd0f087b6f59a4d955106fa7ad9cfb1b3e70ba78400af029c | null | [] | 81 |
2.3 | pdform | 0.2.5 | A library and command-line tool for working with PDF interactive forms. Uses pikepdf and pdf2htmlex. | ======
PDForm
======
A library and command-line tool for working with PDF interactive forms. It can:
* Describe the available fields in the form
* Convert the PDF to an HTML form
* Populate the PDF form with data
Uses `Pikepdf <https://pikepdf.readthedocs.io/en/latest/index.html>`_.
----------------
Describing Forms
----------------
The `pdform describe` command can be used to get information about a PDF form, such as the names and types of fields, allowable options, and so forth. Use `pdform describe --help` for command-line options.
.. code-block:: shell
pdform describe form.pdf
By default, it will show every field in the form, together with all the relevant details about the field. Command-line options exist to filter this view for easier parsing.
.. code-block::
=========================================================================
stream <_io.BufferedReader name='../../pikepdf/tests/resources/form.pdf'>
=========================================================================
Text1
-----
Label:
Text1
Type:
TextField
Required:
No
Read Only:
No
Multiline:
No
Max Length:
None
Default Value:
Current Value:
... and so on ...
-------------
Filling Forms
-------------
Filling forms is done using the `pdform fill-form` command. Typically, this will be done using JSON-formatted data, such as:
.. code-block:: json
{
"TextField1": "Some Text",
"Checkbox1": true,
"RadioGroup1": "3",
"ChoiceField1": "Option 4",
"SignatureField": "/home/myself/signature.png"
}
You can then call the command with this JSON:
.. code-block:: shell
pdform fill-form template.pdf output.pdf data.json
Or pipe this JSON into the command:
.. code-block:: shell
echo {your json here} | pdform fill-form template.pdf output.pdf -
------------------
Converting to HTML
------------------
Converting to HTML relies on `pdf2htmlEX <https://pdf2htmlex.github.io/pdf2htmlEX/>`_ to generate the initial HTML. We then use `BeautifulSoup <https://beautiful-soup-4.readthedocs.io/en/latest/>`_ to strip away most of the unnessesary code, and add the form fields.
This function can be activated in one of two ways:
1. The `pdform make-html` command. Use `pdform make-html --help` for details.
2. Directly via Python.
The command-line interface is sufficient for basic usage. It provides a handful of different output formats: plain HTML, Jinja, and PHP.
.. code-block:: shell
pdform make-html --jinja input.pdf output.jinja
However, it is likely you may wish to customize the rendered HTML. The Python interfaces gives much more flexibility for this.
.. code-block:: python
from pdform.make_html import FieldRenderer, make_html
# Define your own field renderer to control the emitted code for form fields
class MyFieldRenderer(FieldRenderer):
# See the source code for details on how to implement this class
...
soup = make_html(path, field_renderer_class=MyFieldRenderer)
# Use the BeautifulSoup object to perform any post-processing to the generated HTML
# (See the BeautifulSoup documentation for how to use it to manipulate the DOM)
...
# Output the rendered HTML
print(soup.prettify()) | text/x-rst | Dominick Johnson | dominick.johnson@tylertech.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/dmjohnsson23/pdform-pikpdf | null | >=3.9 | [] | [] | [] | [
"pikepdf<10.0.0,>=9.8.0",
"click<9.0.0,>=8.1.7",
"pillow<12.0.0,>=11.2.1",
"beautifulsoup4<5.0.0,>=4.13.4"
] | [] | [] | [] | [
"Homepage, https://github.com/dmjohnsson23/pdform-pikpdf"
] | poetry/2.0.1 CPython/3.12.12 Linux/4.18.0-553.105.1.el8_10.x86_64 | 2026-02-20T22:50:38.752418 | pdform-0.2.5.tar.gz | 11,820 | 42/ca/e9098f8a3c2c6adbfdfa133f6a164f3a522976925d919ee5684ffd485ca3/pdform-0.2.5.tar.gz | source | sdist | null | false | 9a616bbf20a993b21ef7368a0b8b6022 | c861e16f7bc4c5ec5ce56717765e8f450fa6b3098c016b2808c7f703957282a9 | 42cae9098f8a3c2c6adbfdfa133f6a164f3a522976925d919ee5684ffd485ca3 | null | [] | 162 |
2.4 | soliplex | 0.43.3 | An AI-powered Retrieval-Augmented Generation (RAG) system with a modern web interface. | ERROR: type should be string, got "https://pypi.org/project/soliplex/# Soliplex\n\nAn AI-powered Retrieval-Augmented Generation (RAG) system with a modern web interface.\n\n## Features\n\n- **RAG-Powered Search**: Semantic document retrieval using LanceDB vector database\n- **Multi-Room Architecture**: Independent chat environments (rooms) with separate configurations and knowledge bases\n- **Multiple LLM Providers**: OpenAI, Ollama, and compatible APIs\n- **AI Agent System**: Function calling and tool integration for AI agents\n- **OIDC Authentication**: Enterprise SSO with Keycloak integration\n- **Model Context Protocol (MCP)**: Extended AI capabilities through MCP client or exposing Room as MCP server\n- **Real-time Communication**: WebSocket-based conversation streams\n- **Quiz System**: Custom quizzes with LLM-based evaluation\n- **Observability**: Logfire integration for monitoring\n\n## Architecture\n\n### Backend (`/src/soliplex/`)\n**Python 3.12+ / FastAPI**\n\n- **Core**: FastAPI application with async support\n- **RAG Engine**: [haiku.rag-slim](https://pypi.org/project/haiku.rag-slim/)\n with LanceDB vector storage\n- **AI Integration**: [Pydantic AI](https://pypi.org/project/pydantic-ai/)\n for agent management\n- **Authentication**: Python-Keycloak with OIDC/JWT support\n- **MCP**: [FastMCP](https://pypi.org/project/fastmcp/) server and client\n implementations\n- **Configuration**: YAML-based configuration system\n\nKey modules:\n- `views/` - API endpoints (auth, completions, conversations, rooms, quizzes)\n- `agents.py` - AI agent configuration and management\n- `agui/` - AG-UI thread persistence and retrieval\n- `tools.py` - Tool definitions for AI agents\n- `mcp_server.py` / `mcp_client.py` - Model Context Protocol integration\n- `tui/` - Terminal user interface\n\n### Frontend (`/src/flutter/`)\n**Flutter 3.35+ / Dart 3.10.0+**\n\n- **Framework**: Flutter web with Material Design\n- **State Management**: Riverpod (2.6.1)\n- **Navigation**: Go Router (16.0.0)\n- **Authentication**: Flutter AppAuth (9.0.1) for OIDC\n- **Real-time**: WebSocket communication\n- **Secure Storage**: Flutter Secure Storage for credentials\n\nKey files:\n- `main.dart` - Application entry point\n- `soliplex_client.dart` - Backend API client\n- `oidc_client.dart` - OIDC authentication client\n- `controllers.dart` - Riverpod state management\n- `configure.dart` - Configuration UI\n\n### TUI (`src/soliplex/tui`)\n\nQuick-and-dirty client for room queries\n\n- **Framework**: Python `textual`\n\n## Quick Start\n\nFor detailed installation instructions, see the [Prerequisites Guide](docs/prerequisites.md).\n\n### Install Soliplex and dependencies\n\n```bash\n# Install\npython3.13 -m venv venv\nsource venv/bin/activate\npip install -e .\n\n# Configure environment\ncp .env.example .env\n# Edit .env with your settings\n```\n\n### Index Soliplex docs into RAG database\n\n```bash\nsource venv/bin/activate\nexport OLLAMA_BASE_URL=<your Ollama server / port>\n# Run docling-serve if you have not installed the full haiku.rag\ndocker run -p 5001:5001 -d -e DOCLING_SERVE_ENABLE_UI=1 \\\n quay.io/docling-project/docling-serve\nhaiku-rag --config example/haiku.rag.yaml \\\n init --db db/rag/rag.lancedb\nhaiku-rag --config example/haiku.rag.yaml \\\n add-src --db db/rag/rag.lancedb docs/\n...\n17 documents added successfully.\n```\n\nSee: `docs/rag.md` for more options.\n\n### Backend Server CLI Commands\n\nThe `soliplex-cli` command provides several utilities for managing your Soliplex installation:\n\n#### Check Configuration\nValidate your configuration file and report any missing secrets or environment variables:\n```bash\nsoliplex-cli check-config example/minimal.yaml\n```\n\n#### List Rooms\nShow all configured chat rooms:\n```bash\nsoliplex-cli list-rooms example/minimal.yaml\n```\n\n#### List Completions\nShow all configured completion endpoints:\n```bash\nsoliplex-cli list-completions example/minimal.yaml\n```\n\n#### List Secrets\nDisplay all configured secrets and their status:\n```bash\nsoliplex-cli list-secrets example/minimal.yaml\n```\n\n#### List Environment Variables\nShow all environment variables and their values:\n```bash\nsoliplex-cli list-environment example/minimal.yaml\n```\n\n#### List OIDC Providers\nDisplay configured OIDC authentication providers:\n```bash\nsoliplex-cli list-oidc-auth-providers example/minimal.yaml\n```\n\n#### Export Configuration\nExport the installation configuration as YAML:\n```bash\nsoliplex-cli config example/minimal.yaml\n```\n\n#### Export AG-UI Feature Schemas\nExport AG-UI feature schemas as JSON:\n```bash\nsoliplex-cli agui-feature-schemas example/minimal.yaml\n```\n\n#### Run Backend Server\nStart the Soliplex backend server:\n```bash\nexport OLLAMA_BASE_URL=<your Ollama server / port>\nsoliplex-cli serve example/minimal.yaml --no-auth-mode\n```\n\nServer options:\n- `--no-auth-mode` - Disable authentication (for development/testing)\n- `--host HOST` - Bind to specific host (default: 127.0.0.1)\n- `--port PORT` - Listen on specific port (default: 8000)\n- `--reload {python,config,both}` - Enable hot reload for python code, config, or both\n- `--reload-dirs DIRS` - Additional directories to watch for reload\n- `--reload-includes PATTERNS` - File patterns to include in reload watch\n- `--proxy-headers` - Enable proxy header parsing\n- `--forwarded-allow-ips IPS` - Trusted IP addresses for proxy headers\n\n### Frontend\n\n```bash\ncd src/flutter\nflutter pub get\nflutter run -d chrome --web-port 59001\n```\n\n### TUI\n\nThe TUI does not yet support authentication, so run the back-end with\n`--no-auth-mode` when using the TUI.\n\nWithin the virtual environment where you installed `soliplex`:\n\n```bash\nsoliplex-tui --help\n\n Usage: soliplex-tui [OPTIONS]\n\n╭─ Options ────────────────────────────────────────────────────────────────────╮\n│ --version -v │\n│ --url TEXT Base URL for Soliplex back-end │\n│ [default: http://127.0.0.1:8000] │\n│ --help -h Show this message and exit. │\n╰──────────────────────────────────────────────────────────────────────────────╯\n```\n\n```bash\nsoliplex-tui\n```\n\nBy default, the TUI connects to a Soliplex back-end server running\non port 8000 on your local machine:\n\n```bash\nsoliplex-tui --url http://127.0.0.1:8000\n```\n\n## Development\n\nThis project uses [PEP 735 Dependency Groups](https://peps.python.org/pep-0735/)\nfor managing development dependencies. This is the modern standard supported by\n`uv` and recent versions of `pip`.\n\n### Installing dev dependencies\n\n```bash\n# Using pip (requires pip 24.0+)\npip install -e . --group dev\n\n# Using uv (recommended)\nuv sync --group dev\n```\n\n**Note:** The older syntax `pip install -e \".[dev]\"` is for `[project.optional-dependencies]`\nand will NOT work with `[dependency-groups]`. Always use `--group dev` instead.\n\n### Available dependency groups\n\n| Group | Purpose |\n|-------|---------|\n| `dev` | Testing tools (pytest, ruff, coverage) |\n| `docs` | Documentation (mkdocs, mkdocs-material) |\n| `postgres` | PostgreSQL support (asyncpg) |\n| `tui` | Terminal UI (textual, typer) |\n\n### Running tests\n\n```bash\n# Run unit tests with coverage\npytest\n\n# Run with specific coverage threshold (CI enforces 100%)\npytest --cov-fail-under=100\n\n# Run linting\nruff check\n\n# Check formatting\nruff format --check\n```\n\n## Configuration\n\nYAML-based configuration with:\n- **Installation** (`installation.yaml`) - Main config referencing agents, rooms, and OIDC providers\n- **Rooms** (`rooms/*.yaml`) - Individual chat room configurations with RAG settings\n- **Agents** (`completions/*.yaml`) - LLM provider and model configurations\n- **OIDC** (`oidc/*.yaml`) - Authentication provider settings\n\nSee `example/` directory for sample configurations.\n\n### Environment Variables\n\nNon-secret environment variables can and mostly should be configured\ndirectly in the `installation.yaml` file (e.g. `example/installation.yaml`,\n`example/minimal.yaml`, etc.).\n\nThose files are checked into the Soliplex repository, and cannot know\nthe URL of your Ollama server (if you use Ollama), They therefore declare\nthe `OLLAMA_BASE_URL` variable without a value, meaning that the configuration\nexpects the value to be present in the environments (see:\nhttps://soliplex.github.io/soliplex/config/environment/).\n\nThose files also must not contain secrets (API keys, etc.): instead,\nthey configure secret values to be found from the environment (see\nhttps://soliplex.github.io/soliplex/config/secrets/).\n\nIf your installation configures such values to be found from the OS\nenvironment, you can create a `.env` file which defines them, and arrange\nfor the file to be sourced into your environment before startin the Soliplex\napplication.\n\nCopy `.env.example` to `.env` and edit it to configure your values:\n\n```bash\ncp .env.example .env\n```\n\n## Documentation\n\nComprehensive documentation is available in the `docs/` directory:\n\n- **[Prerequisites Guide](docs/prerequisites.md)** - Step-by-step installation checklist\n- **[Server Setup](docs/server.md)** - Backend server configuration and CLI reference\n- **[Client Setup](docs/client.md)** - Frontend Flutter application setup\n- **[Docker Deployment](docs/docker.md)** - Complete Docker and docker-compose guide\n- **[RAG Setup](docs/rag.md)** - RAG database initialization and management\n- **[Configuration](docs/config/)** - Detailed configuration options\n\n### Running with Docker\n\nSee the [Docker Deployment Guide](docs/docker.md) for complete instructions:\n\n```bash\n# Setup\ncp .env.example .env\n# Edit .env with your settings\n\n# Run\ndocker-compose up\n```\n\nAccess:\n- Backend API: http://localhost:8000\n- API Documentation: http://localhost:8000/docs\n- Frontend Web UI: http://localhost:9000\n\n## Related Repositories\n\n- **[soliplex/flutter](https://github.com/soliplex/flutter)** - Flutter frontend (cross-platform mobile/desktop)\n- **[Documentation](https://soliplex.github.io/)** - Documentation site (MkDocs)\n- **[soliplex/ingester](https://github.com/soliplex/ingester)** - Content ingestion pipeline\n- **[soliplex/ingester-agents](https://github.com/soliplex/ingester-agents)** - Document ingestion agents\n- **[soliplex/whitelabel](https://github.com/soliplex/whitelabel)** - Customer white-label appshell template\n\n## License\n\nMIT License - Copyright (c) 2025 Enfold Systems, Inc.\n" | text/markdown | null | Enfold <info@enfoldsystems.net> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"ag-ui-protocol>=0.1.10",
"aiosqlite",
"authlib",
"fastapi",
"fastmcp<2.15,>=2.14.0",
"greenlet",
"haiku.rag-slim<0.31,>=0.30.1",
"itsdangerous",
"jsonpatch",
"jwcrypto",
"logfire[fastapi]",
"pydantic-ai-slim[google]",
"pyjwt",
"python-keycloak",
"sqlmodel",
"starlette",
"trio",
"uvicorn[standard]",
"certifi>=2025.11.12"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T22:49:40.213511 | soliplex-0.43.3.tar.gz | 87,695 | 3e/a3/45133e4983431e5f1bf1eccc2f0bca613db377757b639386f4c76cba44a2/soliplex-0.43.3.tar.gz | source | sdist | null | false | 63fcd5933761fb1d2476561c1d886f2b | c68a4cc5b7d3c8a79a1c57932fb3f94006068562f0e6151a29c881303e7e0230 | 3ea345133e4983431e5f1bf1eccc2f0bca613db377757b639386f4c76cba44a2 | null | [
"LICENSE"
] | 162 |
2.4 | arkadia-data | 0.1.11 | Arkadia Data Format (AK-DATA) - A versatile data serialization format optimized for AI applications. | # Arkadia Data Format (AKD)
```text
; i :J
U, .j..fraaM. nl
b h.obWMkkWWMMWMCdkvz,k
! .mQWM:o hiMoMW v.uaXMdohbi
hI,MMmaIao.Wo .IMkoh FCMwqoXa
,.c.aWdM. d,aToW . Mb!. MopfQ.L
jhj.xoM :k aCu F: w MpmqMvMMI,I
bzMhz:W .Mw . o lYh ai M iMa pM.j
hzqWWM; M;o.WMWWMkMX f.a aa bModpo.
;tMbbv xp oJMMWWWWMMMM iv dLMXakM:T
mdh MMWWWWWWWbQLCzurjktvMor
,QFw ;M,b .MWWWWWWWMWMWd xz M,kd X
qjMIo IMTW.WWWWWMWWWM.o.I rpULaMdi.
.mMM uoWWWMWWWWWWp qM,,M l M;mMbrI
f nm MMW MWWjMuMj I o LbMac
WWdMWWWW Mv a.b..aauMhMwQf
MoWWW,WWtjonJMWtoMdoaoMI
MMMM Mi xd:Mm tMwo Cr,
xMMc .otqokWMMMao:oio.
MW . C..MkTIo
WW
QWM
WW
uMW
WW
MW
```
> **The High-Density, Token-Efficient Data Protocol for Large Language Models.**
**Arkadia Data Format (AKD)** is a schema-first protocol designed specifically to optimize communication with LLMs. By stripping away redundant syntax (like repeated JSON keys) and enforcing strict typing, AKD offers **up to 30% token savings**, faster parsing, and a metadata layer invisible to your application logic but fully accessible to AI models.
**This Python package includes the full core library and the `akd` CLI tool.**
---
## ✨ Key Features
* **📉 Token Efficiency:** Reduces context window usage by replacing verbose JSON objects with dense Positional Records (Tuples).
* **🛡️ Type Safety:** Enforces types (`int`, `float`, `bool`, `string`) explicitly in the schema before data reaches the LLM.
* **🧠 Metadata Injection:** Use `#tags` and `$attributes` to pass context (e.g., source confidence, deprecation warnings) to the LLM without polluting your data structure.
* **🖥️ Powerful CLI:** Includes the `akd` terminal tool for encoding, decoding, and benchmarking files or streams.
* **⚡ Zero Dependencies:** Pure Python implementation, lightweight and fast.
---
## 📦 Installation
Install directly from PyPI:
```bash
pip install arkadia-data
```
---
## 🚀 Quick Start (Library)
### Basic Usage
```python
import arkadia.data as akd
# 1. Encode: Python Dict -> AKD String
data = { "id": 1, "name": "Alice", "active": True }
encoded = akd.encode(data)
print(encoded)
# Output: <id:number,name:string,active:bool>(1,"Alice",true)
# 2. Decode: AKD String -> Python Dict
input_str = '<score:number>(98.5)'
result = akd.decode(input_str)
if not result.errors:
print(result.node.value) # 98.5
else:
print("Errors:", result.errors)
```
---
## 🛠 CLI Usage
The Python package installs the `akd` (alias: `ak-data`) command globally.
```text
USAGE:
akd / ak-data <command> [flags]
COMMANDS:
enc [ENCODE] Convert JSON/YAML to AK Data format
dec [DECODE] Parse AK Data format back to JSON
benchmark [BENCHMARK] Run performance and token usage tests
```
### Examples
**1. Pipe JSON to AKD (Compact Mode):**
```bash
echo '{ "data": 2}' | akd enc - -c
# Output: <data:number>(2)
```
**2. Decode AKD file to JSON:**
```bash
akd dec payload.akd -f json
```
**3. Run Benchmarks on a directory:**
```bash
akd benchmark ./data_samples
```
---
## ⚡ Benchmarks
Why switch? Because every token counts. **AKCD** (Arkadia Compressed Data) consistently outperforms standard formats.
```text
BENCHMARK SUMMARY:
JSON █████████████████████░░░░ 6921 tok 0.15 ms
AKCD ████████████████░░░░░░░░░ 5416 tok 4.40 ms
AKD ███████████████████░░░░░░ 6488 tok 4.29 ms
TOON █████████████████████████ 8198 tok 2.36 ms
FORMAT TOKENS VS JSON
---------------------------------
AKCD 5416 -21.7%
AKD 6488 -6.3%
JSON 6921 +0.0%
TOON 8198 +18.5%
CONCLUSION: Switching to AKCD saves 1505 tokens (21.7%) compared to JSON.
```
---
## 📖 Syntax Specification
AKD separates structure (Schema) from content (Data).
### 1. Primitives
Primitive values are automatically typed. Strings are quoted, numbers and booleans are bare.
| Type | Input | Encoded Output |
| --- | --- | --- |
| **Integer** | `123` | `<number>123` |
| **String** | `"hello"` | `<string>"hello"` |
| **Boolean** | `true` | `<bool>true` |
| **Null** | `null` | `<null>null` |
### 2. Schema Definition (`@Type`)
Define the structure once to avoid repeating keys.
```akd
/* Define a User type */
@User <
id: number,
name: string,
role: string
>
```
### 3. Data Structures
#### Positional Records (Tuples)
The most efficient way to represent objects. Values must match the schema order.
```akd
/* Schema: <x:number, y:number> */
(10, 20)
```
#### Named Records (Objects)
Flexible key-value pairs, similar to JSON, used when schema is loose or data is sparse.
```akd
{
id: 1,
name: "Admin"
}
```
#### Lists
Dense arrays. Can be homogenous (list of strings) or mixed.
```akd
[ "active", "pending", "closed" ]
```
### 4. Metadata System
AKD allows you to inject metadata that is **visible to the LLM** but **ignored by the parser** when decoding back to your application.
#### Attributes (`$key=value`) & Tags (`#flag`)
```akd
@Product <
$version="2.0"
sku: string,
/* Tagging a field as deprecated */
#deprecated
legacy_id: int
>
```
### 5. Escaped Identifiers (Backticks)
AK-Data allows the use of spaces, symbols, and special characters in names by wrapping them in backticks (```). This applies to schema names, field keys, and metadata attributes.
```akd
@`System User+` <
// $`last-sync`="2024-05-10" //
`Full Name`: string,
`is-active?`: bool,
$`Special ID*` id: number
>
{
`Full Name`: "John Doe",
`is-active?`: true,
id: 101
}
```
### 6. Prompt Output Mode (`--prompt-output`)
This mode is specifically designed for Large Language Models (LLMs). It transforms AK-Data into a **Structural Blueprint**, providing a perfect template for the AI to follow. Instead of raw data values, it renders a recursive, human-readable schema structure.
**Key Features:**
* **Full Structural Expansion:** Anonymous nested types are fully expanded into braces `{}`.
* **Semantic Hinting:** Field-level comments from the schema are injected directly into the template.
* **Representative Sampling:** Lists show a single blueprint element followed by a continuation hint (`...`), saving tokens while maintaining clarity.
**Example Usage:**
```bash
# Generate a structural template for an LLM
echo '<[ /* id */ id: number, name: string, val: <id: string, num: number> ]>' | akd dec -f akd --prompt-output -
```
**Output:**
```akd
[
{
id: number /* id */,
name: string,
val: {
id: string,
num: number
}
},
... /* repeat pattern for additional items */
]
```
**Why use it?**
1. **Reduce Hallucination:** The LLM sees exactly what types and formats are expected for every field.
2. **Context Efficiency:** By showing only one example in a list, you define the logic without wasting the context window on repetitive data.
3. **Implicit Instruction:** The transition from positional `()` to named `{}` in prompt mode helps the AI differentiate between the "Instructions" and the final "Compact Output".
## 📄 License
This project is licensed under the [MIT License]().
<div align="center">
<sub>Built by <strong>Arkadia Solutions</strong>. Engineering the kernel of distributed intelligence.</sub>
</div>
| text/markdown | null | Arkadia Solutions <contact@arkadia.solutions> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"dotenv>=0.9.9",
"openai>=2.14.0",
"pytest>=9.0.2",
"tiktoken>=0.12.0",
"toon-format"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T22:49:34.660201 | arkadia_data-0.1.11.tar.gz | 114,326 | bb/51/9c2de0826bd0f5e2eec08c94b0b48395c91c8cc9193bd165f1dee174df5f/arkadia_data-0.1.11.tar.gz | source | sdist | null | false | 6752f9137d3a144cca6211b68d10e14e | eb34cfd18f5546731c195295fdf2e6fac0f4dcf54f48c9ab39d3b270f946d589 | bb519c2de0826bd0f5e2eec08c94b0b48395c91c8cc9193bd165f1dee174df5f | null | [
"LICENCE"
] | 165 |
2.1 | projen | 0.99.15 | CDK for software projects | <p align="center">
<a href="https://projen.io">
<img src="https://raw.githubusercontent.com/projen/projen/main/logo/projen.svg">
<h3 align="center">projen</h3>
</a>
</p><p align="center">
Define and maintain complex project configuration through code.
</p><p align="center">
<a href="https://projen.io/"><strong>Documentation</strong></a> ·
<a href="https://github.com/projen/projen/releases"><strong>Changelog</strong></a> ·
<a href="#project-types"><strong>Project types</strong></a> ·
<a href="#community"><strong>Join the community</strong></a>
</p><p align="center">
<a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache%202.0-yellowgreen.svg" alt="Apache 2.0 License"></a>
<a href="https://gitpod.io/#https://github.com/projen/projen"><img src="https://img.shields.io/badge/Gitpod-ready--to--code-blue?logo=gitpod" alt="Gitpod ready-to-code"></a>
<a href="https://github.com/projen/projen/actions/workflows/release.yml"><img src="https://github.com/projen/projen/actions/workflows/release.yml/badge.svg" alt="Release badge"></a>
<a href="https://github.com/projen/projen/commits/main"><img src="https://img.shields.io/github/commit-activity/w/projen/projen" alt="Commit activity"></a>
</p><br/>
*projen* synthesizes project configuration files such as `package.json`,
`tsconfig.json`, `.gitignore`, GitHub Workflows, eslint, jest, etc. from a
well-typed definition written in JavaScript.
As opposed to existing templating/scaffolding tools, *projen* is not a one-off
generator. Synthesized files should never be manually edited (in fact, projen
enforces that). To modify your project setup, users interact with rich
strongly-typed class and execute `projen` to update their project configuration
files.
By defining a custom project type and using projen in multiple repositories, it's
possible to update configuration files and CI/CD workflows across dozens (or
hundreds!?) of projects.
Check out [this talk](https://youtu.be/SOWMPzXtTCw) about projen from its creator.
## Getting Started
*projen* doesn't need to be installed. You will be using [npx](https://docs.npmjs.com/cli/v7/commands/npx) to run *projen* which takes care of all required setup steps.
To create a new project, run the following command and follow the instructions:
```console
$ mkdir my-project
$ cd my-project
$ npx projen new PROJECT-TYPE
🤖 Synthesizing project...
...
```
### Project types
Currently supported project types (use `npx projen new` without a type for a
full list):
**Built-in:** (run `npx projen new <type>`)
<!-- <macro exec="node ./scripts/readme-projects.js"> -->
* [awscdk-app-java](https://projen.io/docs/api/awscdk#awscdkjavaapp-) - AWS CDK app in Java.
* [awscdk-app-py](https://projen.io/docs/api/awscdk#awscdkpythonapp-) - AWS CDK app in Python.
* [awscdk-app-ts](https://projen.io/docs/api/awscdk#awscdktypescriptapp-) - AWS CDK app in TypeScript.
* [awscdk-construct](https://projen.io/docs/api/awscdk#awscdkconstructlibrary-) - AWS CDK construct library project.
* [cdk8s-app-py](https://projen.io/docs/api/cdk8s#cdk8spythonapp-) - CDK8s app in Python.
* [cdk8s-app-ts](https://projen.io/docs/api/cdk8s#cdk8stypescriptapp-) - CDK8s app in TypeScript.
* [cdk8s-construct](https://projen.io/docs/api/cdk8s#constructlibrarycdk8s-) - CDK8s construct library project.
* [cdktf-construct](https://projen.io/docs/api/cdktf#constructlibrarycdktf-) - CDKTF construct library project.
* [java](https://projen.io/docs/api/java#javaproject-) - Java project.
* [jsii](https://projen.io/docs/api/cdk#jsiiproject-) - Multi-language jsii library project.
* [nextjs](https://projen.io/docs/api/web#nextjsproject-) - Next.js project using JavaScript.
* [nextjs-ts](https://projen.io/docs/api/web#nextjstypescriptproject-) - Next.js project using TypeScript.
* [node](https://projen.io/docs/api/javascript#nodeproject-) - Node.js project.
* [project](https://projen.io/docs/api/projen#project-) - Base project.
* [python](https://projen.io/docs/api/python#pythonproject-) - Python project.
* [react](https://projen.io/docs/api/web#reactproject-) - React project using JavaScript.
* [react-ts](https://projen.io/docs/api/web#reacttypescriptproject-) - React project using TypeScript.
* [typescript](https://projen.io/docs/api/typescript#typescriptproject-) - TypeScript project.
* [typescript-app](https://projen.io/docs/api/typescript#typescriptappproject-) - TypeScript app.
<!-- </macro> -->
**External:** (run `npx projen new --from <type>`)
* [projen-github-action-typescript](https://github.com/projen/projen-github-action-typescript/blob/main/API.md) - GitHub Action in TypeScript project.
> Use `npx projen new PROJECT-TYPE --help` to view a list of command line
> switches that allows you to specify most project options during bootstrapping.
> For example: `npx projen new jsii --author-name "Jerry Berry"`.
The `new` command will create a `.projenrc.js` file which looks like this for
`jsii` projects:
```js
const { JsiiProject } = require('projen');
const project = new JsiiProject({
authorAddress: "elad.benisrael@gmail.com",
authorName: "Elad Ben-Israel",
name: "foobar",
repository: "https://github.com/eladn/foobar.git",
});
project.synth();
```
This program instantiates the project type with minimal setup, and then calls
`synth()` to synthesize the project files. By default, the `new` command will
also execute this program, which will result in a fully working project.
Once your project is created, you can configure your project by editing
`.projenrc.js` and re-running `npx projen` to synthesize again.
> The files generated by *projen* are considered an "implementation detail" and
> *projen* protects them from being manually edited (most files are marked
> read-only, and an "anti tamper" check is configured in the CI build workflow
> to ensure that files are not updated during build).
For example, to setup PyPI publishing in `jsii` projects, you can use
[`publishToPypi option`](https://projen.io/publisher.html):
```js
const project = new JsiiProject({
// ...
publishToPypi: {
distName: "mydist",
module: "my_module",
}
});
```
Run:
```shell
npx projen
```
And you'll notice that your `package.json` file now contains a `python` section in
its `jsii` config and the GitHub `release.yml` workflow includes a PyPI
publishing step.
We recommend to put this in your shell profile, so you can simply run `pj` every
time you update `.projenrc.js`:
```bash
alias pj='npx projen'
```
Most projects come with an assortment of **tasks** that handle various
development activities, from compiling to publishing. Tasks can be and composed
together, and can be run as local commands or turned into GitHub workflows. You
can list all tasks with `npx projen --help`:
```shell
$ npx projen --help
projen [command]
Commands:
projen new [PROJECT-TYPE-NAME] [OPTIONS] Creates a new projen project
projen clobber hard resets to HEAD of origin and cleans the local repo
projen compile Only compile
projen test Run tests
projen build Full release build (test+compile)
projen upgrade upgrade dependencies (including projen)
...
```
The `build` task is the same task that's executed in your CI builds. It
typically compiles, lints, tests and packages your module for distribution.
### Shell Completions
If installed as a global package, `projen` includes rich shell tab-completion support. To enable this in your shell, run:
```shell
# Bash
projen completion >> ~/.bashrc
# ZSH
projen completion >> ~/.zshrc
```
## Features
Some examples of features built-in to project types:
* Fully synthesize `package.json`
* Standard npm scripts like `compile`, `build`, `test`, `package`
* eslint
* Jest
* jsii: compile, package, api compatibility checks, API.md
* Bump & release scripts with CHANGELOG generation based on conventional commits
* Automated PR builds
* Automated releases to npm, maven, NuGet and PyPI
* Automated dependency upgrades
* Mergify configuration
* LICENSE file generation
* gitignore + npmignore management
* Node "engines" support with coupling to CI build environment and @types/node
* Anti-tamper: CI builds will fail if a synthesized file is modified manually
## Documentation
For documentation including examples and a full API reference, visit [https://projen.io/](https://projen.io/).
## Ecosystem
*projen* takes a "batteries included" approach and aims to offer dozens of different project types out of
the box (we are just getting started). Think `projen new react`, `projen new angular`, `projen new java-maven`,
`projen new awscdk-typescript`, `projen new cdk8s-python` (nothing in projen is tied to javascript or npm!)...
Adding new project types is as simple as submitting a pull request to this repo and exporting a class that
extends `projen.Project` (or one of its derivatives). Projen automatically discovers project types so your
type will immediately be available in `projen new`.
### Projects in external modules
*projen* is bundled with many project types out of the box, but it can also work
with project types and components defined in external jsii modules (the reason
we need jsii is because projen uses the jsii metadata to discover project types
& options in projen new).
Say we have a module in npm called `projen-vuejs` which includes a single project
type for vue.js:
```bash
$ npx projen new --from projen-vuejs
```
If the referenced module includes multiple project types, the type is required.
Switches can also be used to specify initial values based on the project type
APIs. You can also use any package syntax supported by [yarn
add](https://classic.yarnpkg.com/en/docs/cli/add#toc-adding-dependencies) like
`projen-vuejs@1.2.3`, `file:/path/to/local/folder`,
`git@github.com/awesome/projen-vuejs#1.2.3`, etc.
```bash
$ npx projen new --from projen-vuejs@^2 vuejs-ts --description "my awesome vue project"
```
Under the hood, `projen new` will install the `projen-vuejs` module from npm
(version 2.0.0 and above), discover the project types in it and bootstrap the
`vuejs-ts` project type. It will assign the value `"my awesome vue project"` to
the `description` field. If you examine your `.projenrc.js` file, you'll see
that `projen-vuejs` is defined as a dev dependency:
```javascript
const { VueJsProject } = require('projen-vuejs');
const project = new VueJsProject({
name: 'my-vuejs-sample',
description: "my awesome vue project",
// ...
devDeps: [
'projen-vuejs'
]
});
project.synth();
```
## Roadmap
See [Vision](./VISION.md).
## FAQ
### Do I have to write my configuration in JavaScript?
Not at all! JavaScript is the default, but it's also possible to write it in
Java, Python, TypeScript, or even JSON. This is made
possible by the [jsii](https://github.com/aws/jsii) library which allows us
to write APIs once and generate libraries in several languages. You can choose
a different language by passing the `--projenrc-ts`, `--projenrc-py`, `--projenrc-java`, or
`--projenrc-json` flags when running `projen new`.
Note: using a `.projenrc.json` file to specify configuration only allows
accessing a subset of the entire API - the options which are passed to the
constructor of each project type.
### How does projen work with my IDE?
projen has an unofficial [VS Code extension](https://marketplace.visualstudio.com/items?itemName=MarkMcCulloh.vscode-projen). Check it out!
## Community
The projen community can be found within the #projen channel in the [cdk.dev](https://cdk.dev/)
community Slack workspace.
## Contributions
Contributions of all kinds are welcome! Check out our [contributor's
guide](./CONTRIBUTING.md) and our [code of conduct](./CODE_OF_CONDUCT.md).
For a quick start, check out a development environment:
```bash
$ git clone git@github.com:projen/projen
$ cd projen
$ npm ci
$ npm run watch # compile in the background
```
### Contributors
Thanks goes to [our wonderful contributors](https://github.com/projen/projen/graphs/contributors)!
## License
Distributed under the [Apache-2.0](./LICENSE) license.
| text/markdown | Amazon Web Services | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
"Development Status :: 4 - Beta",
"License :: OSI Approved"
] | [] | https://github.com/projen/projen.git | null | ~=3.9 | [] | [] | [] | [
"constructs<11.0.0,>=10.0.0",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/projen/projen.git"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-20T22:49:20.410225 | projen-0.99.15.tar.gz | 4,327,983 | 34/d6/f5fee125de53aaaa3f80da12e7a877cb2d3fd738737d54182055fc92df1c/projen-0.99.15.tar.gz | source | sdist | null | false | 7aac49c430d92a255f2650fd94f00bfc | 3a258fafdc679e009b318ba09fffce83772c1e720959b81044820cb011b1e878 | 34d6f5fee125de53aaaa3f80da12e7a877cb2d3fd738737d54182055fc92df1c | null | [] | 240 |
2.4 | pmcxcl | 0.6.0 | Python bindings for Monte Carlo eXtreme (OpenCL) photon transport simulator | 
# PMCX-CL - Python bindings for Monte Carlo eXtreme (OpenCL) photon transport simulator
- Copyright: (C) Matin Raayai Ardakani (2022-2023) <raayaiardakani.m at northeastern.edu>
and Qianqian Fang (2019-2026) <q.fang at neu.edu>
- License: GNU Public License V3 or later
- Version: 0.6.0
- URL: https://pypi.org/project/pmcxcl/
- Github: https://github.com/fangq/mcxcl
\
\

This module provides a Python binding for Monte Carlo eXtreme for OpenCL (MCXCL).
For other binaries, including the standalone executable and the MATLAB bindings,
see [our website](https://mcx.space).
Monte Carlo eXtreme (MCX) is a fast photon transport simulation software for 3D
heterogeneous turbid media. By taking advantage of the massively parallel
threads and extremely low memory latency in a modern graphics processing unit
(GPU), MCX is capable of performing Monte Carlo (MC) photon simulations at a
blazing speed, typically hundreds to a thousand times faster than a single-threaded
CPU-based MC implementation.
## How to Install
* PIP: ```pip install pmcxcl```, see https://pypi.org/project/pmcxcl/
## Runtime Dependencies
* **CPU or GPU**: An OpenCL-capable CPU or GPU; most modern CPUs or GPUs support OpenCL -
an industrial-standard heterogeneous computing library and specification (https://www.khronos.org/opencl/)
* **OpenCL CPU or GPU runtime/driver**: Both NVIDIA and AMD GPU graphics drivers should contain
out-of-box OpenCL runtimes or drivers; for Intel GPUs, one should install additional OpenCL runtime
support from https://github.com/intel/compute-runtime or install the `intel-opencl-icd` package
if the OS provides (such as Ubuntu 22.04); one can also install an open-source OpenCL runtime
[POCL](http://portablecl.org/), using package manager such as `sudo apt-get install pocl-opencl-icd`. However,
POCL's support is largely limited to CPUs. You **do not need** to install CUDA SDK to use pmcxcl.
* **Python**: Python 3.6 and newer is required. **Python 2 is not supported**.
* **numpy**: Used to pass/receive volumetric information to/from pmcxcl. To install, use either conda or pip
package managers: `pip install numpy` or `conda install numpy`
* (optional) **jdata**: Only needed to read/write JNIfTI output files. To install, use pip: `pip install jdata`
on all operating systems; For Debian-based Linux distributions, you can also install to the system interpreter
using apt-get: `sudo apt-get install python3-jdata`. See https://pypi.org/project/jdata/ for more details.
* (optional) **bjdata**: Only needed to read/write BJData/UBJSON files. To install, run `pip install bjdata`
on all operating systems; For Debian-based Linux distributions, you can also install to the system interpreter
using apt-get: `sudo apt-get install python3-bjdata`. See https://pypi.org/project/bjdata/ for more details.
* (optional) **matplotlib**: For plotting the results. To install, run either `pip install matplotlib` or
`conda install matplotlib`
## Build Instructions
### Build Dependencies
* **Operating System**: pmcxcl and mcxcl can be compiled on most OSes, including Windows, Linux and MacOS.
* **OpenCL library**: compiling mcxcl or pmcxcl requires to link with `libOpenCL.so` on Linux, or `libOpenCL.dylib`
on MacOS or `OpenCL.dll` on Windows. These libraries should have been installed by either graphics driver or
OpenCL runtimes.
* **Python Interpreter**: Python 3.6 or above. The ```pip``` Python package manager and the ```wheel``` package (available
via ```pip```) are not required but recommended.
* **C/C++ Compiler**: pmcxcl can be compiled using a wide variety of C compilers, including
* GNU GCC for Linux, MacOS (intalled via MacPorts or brew), and Windows (installed via msys2, mingw64 or cygwin64)
* Microsoft Visual Studio C/C++ Compiler for Windows.
* Apple Clang for macOS, available via Xcode.
Refer to each OS's online documentations for more in-depth information on how to install these compilers.
MacOS provides built-in OpenCL library support.
* **OpenMP**: The installed C/C++ Compiler should have support for [OpenMP](https://www.openmp.org/).
GCC and Microsoft Visual Studio compiler support OpenMP out of the box. Apple Clang, however, requires manual
installation of OpenMP libraries for Apple Clang. The easiest way to do this is via the [Brew](https://brew.sh/) package
manager, preferably after selecting the correct Xcode version:
```zsh
brew install libomp
brew link --force libomp
```
* **CMake**: CMake version 3.15 and later is required. Refer to the [CMake website](https://cmake.org/download/) for more information on how to download.
CMake is also widely available on package managers across all operating systems.
### Build Steps
1. Ensure that ```cmake```, ```python``` and the C/C++ compiler are all located over your ```PATH```.
This can be queried via ```echo $env:PATH``` on Windows or ```echo $PATH``` on Linux. If not, locate them and add their folder to the ```PATH```.
2. Clone the repository and switch to the ```pmcxcl/``` folder:
```bash
git clone --recursive https://github.com/fangq/mcx.git
cd mcx/pmcxcl
```
3. One can run `python3 setup.py install` or `python3 -m pip install .` to both locally build and install the module
4. If one only wants to locally build the module, one should run `python3 -m pip wheel .`
5. If the binary module is successfully built locally, you should see a binary wheel file `pmcxcl-X.X.X-cpXX-cpXX-*.whl`
stored inside the `mcxcl/pmcxcl` folder. You can install this wheel package using `python3 -m pip install --force-reinstall pmcxcl-*.whl`
to force installing this locally compiled `pmcxcl` module and overwrite any previously installed versions.
## How to use
The PMCXCL module is easy to use. You can use the `pmcxcl.gpuinfo()` function to first verify
if you have NVIDIA/CUDA compatible GPUs installed; if there are NVIDIA GPUs detected,
you can then call the `run()` function to launch a photon simulation.
A simulation can be defined conveniently in two approaches - a one-liner and a two-liner:
* For the one-liner, one simply pass on each MCX simulation setting as positional
argument. The supported setting names are compatible to nearly all the input fields
for the MATLAB version of MCX/MCXCL - [MCXLAB](https://github.com/fangq/mcx/blob/master/mcxlab/mcxlab.m))
```python3
import pmcxcl
import numpy as np
import matplotlib.pyplot as plt
res = pmcxcl.run(nphoton=1000000, vol=np.ones([60, 60, 60], dtype='uint8'), tstart=0, tend=5e-9,
tstep=5e-9, srcpos=[30,30,0], srcdir=[0,0,1], prop=np.array([[0, 0, 1, 1], [0.005, 1, 0.01, 1.37]]))
res['flux'].shape
plt.imshow(np.log10(res['flux'][30,:, :]))
plt.show()
```
* Alternatively, one can also define a Python dict object containing each setting
as a key, and pass on the dict object to `pmcxcl.run()`
```python3
import pmcxcl
import numpy as np
cfg = {'nphoton': 1000000, 'vol':np.ones([60,60,60],dtype='uint8'), 'tstart':0, 'tend':5e-9, 'tstep':5e-9,
'srcpos': [30,30,0], 'srcdir':[0,0,1], 'prop':[[0,0,1,1],[0.005,1,0.01,1.37]]}
res = pmcxcl.run(cfg)
```
| text/markdown | Matin Raayai Ardakani, Qianqian Fang | raayaiardakani.m@northeastern.edu, q.fang@neu.edu | Qianqian Fang | null | GPLv3+ | Monte Carlo simulation, Biophotonics, Ray-tracing, Rendering, GPU, Modeling, Biomedical Optics, Tissue Optics, Simulator, Optics, OpenCL | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Environment :: GPU",
"Topic :: Scientific/Engineering :: Physics",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/fangq/mcxcl | https://mcx.space | >=3.6 | [
"numpy"
] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T22:49:12.510749 | pmcxcl-0.6.0-pp39-pypy39_pp73-macosx_15_0_x86_64.whl | 551,845 | 69/82/ab4c348e0722aca231a2885557a7b9b9314c2afb9bf6e6e0c93439d0f3bf/pmcxcl-0.6.0-pp39-pypy39_pp73-macosx_15_0_x86_64.whl | pp39 | bdist_wheel | null | false | 0487986188db17e0743034e9f4254c8e | 3f5ae5d068a1fd0c066845a494c093fd550ae02c3c68a62036bc64df039bf72e | 6982ab4c348e0722aca231a2885557a7b9b9314c2afb9bf6e6e0c93439d0f3bf | null | [] | 1,901 |
2.3 | mcpsec | 0.4.0 | Security scanner for MCP (Model Context Protocol) server implementations | # ⚡ mcpsec
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
**Security scanner for MCP (Model Context Protocol) server implementations.**
MCP is the universal protocol connecting AI agents (Claude, ChatGPT, Gemini, Cursor) to external tools and data sources. It's adopted by every major AI company — Anthropic, OpenAI, Google, Microsoft. Its security is broken. `mcpsec` finds the vulnerabilities.
```
███╗ ███╗ ██████╗██████╗ ███████╗███████╗ ██████╗
████╗ ████║██╔════╝██╔══██╗██╔════╝██╔════╝██╔════╝
██╔████╔██║██║ ██████╔╝███████╗█████╗ ██║
██║╚██╔╝██║██║ ██╔═══╝ ╚════██║██╔══╝ ██║
██║ ╚═╝ ██║╚██████╗██║ ███████║███████╗╚██████╗
╚═╝ ╚═╝ ╚═════╝╚═╝ ╚══════╝╚══════╝ ╚═════╝
```
## Why?
- **82%** of MCP implementations have path traversal vulnerabilities ([Endor Labs](https://www.endorlabs.com/learn/classic-vulnerabilities-meet-ai-infrastructure-why-mcp-needs-appsec))
- **67%** are vulnerable to code injection
- **~2,000** internet-exposed MCP servers found with **zero authentication** ([Knostic](https://www.descope.com/learn/post/mcp))
- Anthropic's own Git MCP server had **3 critical RCE vulnerabilities** (CVE-2025-68143/44/45)
- Nobody built an open-source scanner for this. Until now.
## Install
```bash
pip install mcpsec
```
Or install from source:
```bash
git clone https://github.com/manthanghasadiya/mcpsec.git
cd mcpsec
pip install -e .
```
## Quick Start
```bash
# Scan an MCP server running via stdio
mcpsec scan --stdio "npx @modelcontextprotocol/server-filesystem /tmp"
# 🧠 Run AI-Powered Scan (Generates payloads + Validates findings)
# Requires DEEPSEEK_API_KEY, OPENAI_API_KEY, or ANTHROPIC_API_KEY
mcpsec scan --stdio "python my_server.py" --ai
# 💥 Run Protocol Fuzzer (Find crashes)
mcpsec fuzz --stdio "python my_server.py" --duration 30
# Scan an MCP server running via HTTP
mcpsec scan --http http://localhost:3000/mcp
# Just enumerate the attack surface (no scanning)
mcpsec info --stdio "python my_server.py"
# Save JSON report
mcpsec scan --stdio "python my_server.py" --output report.json
# Run specific scanners only
mcpsec scan --stdio "python my_server.py" --scanners prompt-injection,path-traversal
# Static Audit (Source Code Analysis)
mcpsec audit --path . --ai
# Scan NPM package (downloads and scans)
mcpsec audit --npm @modelcontextprotocol/server-filesystem
# List available scanners
mcpsec list-scanners
```
## Scanners
| Scanner | Type | What It Detects |
|---------|------|----------------|
| `prompt-injection` | Static | Hidden instructions, base64-encoded payloads, cross-tool manipulation, data exfiltration indicators in tool descriptions |
| `auth-audit` | Static | Missing authentication, over-permissioned tools, dangerous tool combinations, misleading annotations |
| `path-traversal` | Dynamic | File path traversal via `../../` payloads — **proves exploitation** with actual file contents |
| `command-injection` | Dynamic | OS command injection via shell escape characters — **proves exploitation** with command output |
| `ssrf` | Dynamic | Server-Side Request Forgery targeting cloud metadata endpoints and internal services |
| `ai-payloads` | Dynamic | **(New)** Context-aware payloads generated by LLMs (SQLi, Logic bugs, Edge cases) |
| `protocol-fuzzer` | Dynamic | **(New)** Malformed JSON-RPC messages, boundary testing, type confusion to find crashes |
**Static scanners** analyze tool definitions without calling them. **Dynamic scanners** send actual payloads through the MCP protocol and verify exploitability — no exploit, no report.
## How It Works
```
┌─────────┐ MCP Protocol ┌────────────┐
│ mcpsec │ ◄──── JSON-RPC ────► │ Target MCP │
│ client │ (stdio or HTTP) │ Server │
└────┬────┘ └────────────┘
│
├── 1. Connect (stdio subprocess or HTTP)
├── 2. Enumerate tools, resources, prompts
├── 3. Run static scanners (analyze descriptions)
├── 4. Generate & Run dynamic payloads (Fuzzing + AI)
└── 5. Report findings with evidence + remediation
```
## Example Output
```
🔴 CRITICAL Path Traversal detected in parameter 'filepath'
scanner=path-traversal tool=read_file
Payload: ../../../../../../windows/win.ini
Response: ; for 16-bit app support [fonts] [extensions] [Mail] MAPI=1
🔴 CRITICAL Command Injection detected in parameter 'target'
scanner=command-injection tool=run_diagnostics
Payload: | whoami
Response: intruder\username
🧠 CRITICAL AI Exploit: SQL Injection confirmed
scanner=ai-sqli tool=query_db
Payload: ' OR 1=1 --
Response: [Admin, User, Guest]
╔════════════╤═════════╗
║ CRITICAL │ 5 ║
║ HIGH │ 5 ║
║ MEDIUM │ 1 ║
║ LOW │ 8 ║
╟────────────┼─────────╢
║ TOTAL │ 19 ║
╚════════════╧═════════╝
```
## Development
```bash
git clone https://github.com/manthanghasadiya/mcpsec.git
cd mcpsec
pip install -e ".[dev]"
# Run against the included deliberately-vulnerable test server
mcpsec scan --stdio "python tests/vuln_test_server.py"
```
The test server (`tests/vuln_test_server.py`) contains 8 intentional vulnerabilities covering prompt injection, command injection, path traversal, missing auth, and more. Use it to test scanner development.
## Roadmap
- [x] Prompt injection scanner (keyword, imperative, encoding, cross-tool, exfiltration detection)
- [x] Authentication & authorization audit
- [x] Path traversal scanner (dynamic, payload-based)
- [x] Command injection scanner (dynamic, payload-based)
- [x] SSRF scanner (dynamic, payload-based)
- [x] JSON report output
- [x] **Static source code analysis mode** (Taint Analysis & pattern matching)
- [x] **Cross-File Taint Analysis** (Detects vulnerabilities spanning multiple files)
- [x] **Protocol Fuzzer** (Crash detection & boundary testing)
- [x] **AI-Powered Analysis** (Payload generation & Finding validation)
- [ ] SQL injection scanner (Automated with AI)
- [ ] Tool description drift detector (rug pull detection)
- [ ] HTML report dashboard
- [ ] SARIF output for CI/CD integration
- [ ] GitHub Action for automated MCP server security testing
## Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to set up your environment and add new scanners.
## Disclaimer
This tool is intended for authorized security testing only. Only scan MCP servers you own or have explicit permission to test. The authors are not responsible for misuse.
## License
[MIT](LICENSE)
---
*Built by [Manthan](https://www.linkedin.com/in/man-ghasadiya) — because your AI agents deserve a pentest too.*
| text/markdown | Manthan | null | null | null | MIT | ai, mcp, model-context-protocol, pentesting, scanner, security | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Security",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anyio>=4.0.0",
"httpx>=0.27.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"rich>=13.0.0",
"semgrep>=1.90.0",
"typer>=0.12.0",
"openai>=1.0.0; extra == \"ai\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/manthanghasadiya/mcpsec",
"Repository, https://github.com/manthanghasadiya/mcpsec"
] | twine/6.2.0 CPython/3.12.2 | 2026-02-20T22:48:35.836399 | mcpsec-0.4.0.tar.gz | 62,442 | aa/be/28b40b367924d83b3f9fa6f1b96de4fcaeb494ce421043177f43fa297160/mcpsec-0.4.0.tar.gz | source | sdist | null | false | 1cd5acc42f9562dff4d798739b409ecd | cb2b9a3a0d581e534911258180ef2459999c31eb0ce70aff00aa97c458b159cf | aabe28b40b367924d83b3f9fa6f1b96de4fcaeb494ce421043177f43fa297160 | null | [] | 186 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.