metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | skrift | 0.1.0a46 | A lightweight async Python CMS for crafting modern websites | # Skrift
A modern Litestar-powered content management framework with multi-provider OAuth authentication, role-based access control, and WordPress-like template resolution.
## Features
- **Multi-Provider OAuth**: Authenticate with Google, GitHub, Microsoft, Discord, Facebook, or Twitter/X
- **Role-Based Access Control**: Flexible permission system with Admin, Editor, Author, and Moderator roles
- **Setup Wizard**: Guided first-time configuration without manual file editing
- **Admin Interface**: Web-based management for users, pages, and site settings
- **WordPress-like Templates**: Hierarchical template resolution for content pages
- **Dynamic Controllers**: Load controllers from `app.yaml` configuration
- **SQLAlchemy Integration**: Async database support with SQLite/PostgreSQL
- **Client-Side Sessions**: Encrypted cookie sessions for horizontal scalability
- **Hook/Filter System**: WordPress-like extensibility with async support
- **SEO Metadata**: Built-in meta descriptions, OpenGraph tags, and robots directives
- **Content Scheduling**: Schedule pages to publish at a future date
- **Page Revisions**: Automatic content history with restore capability
- **Sitemap & Robots.txt**: Auto-generated with filter extensibility
## Quick Start
### Prerequisites
- Python 3.13+
### Installation
```bash
# Install Skrift
pip install skrift
# Or install from git
pip install git+https://github.com/ZechCodes/skrift.git
```
### Getting Started
Create a project directory and set up your environment:
```bash
mkdir mysite && cd mysite
# Create minimal environment file
echo "SECRET_KEY=$(python -c 'import secrets; print(secrets.token_urlsafe(32))')" > .env
# Start Skrift
skrift
```
Open http://localhost:8080 to launch the setup wizard.
### Setup Wizard
The setup wizard guides you through initial configuration:
1. **Database Configuration**: Choose SQLite (dev) or PostgreSQL (production)
2. **Authentication Providers**: Configure OAuth credentials
3. **Site Settings**: Set site name, tagline, and copyright info
4. **Admin Account**: Create your first admin user via OAuth login
After completing the wizard, an `app.yaml` configuration file is created in your project directory.
### Manual Configuration
Alternatively, create `app.yaml` manually:
```yaml
controllers:
- skrift.controllers.auth:AuthController
- skrift.admin.controller:AdminController
- skrift.controllers.web:WebController
db:
url: sqlite+aiosqlite:///./app.db
auth:
redirect_base_url: http://localhost:8080
providers:
google:
client_id: $GOOGLE_CLIENT_ID
client_secret: $GOOGLE_CLIENT_SECRET
scopes: [openid, email, profile]
```
Then run migrations and start the server:
```bash
skrift-db upgrade head
skrift
```
## Documentation
- **[Full Documentation](docs/README.md)**: Comprehensive guide covering all features
- **[Deployment Guide](docs/deployment.md)**: VPS, Docker, and Kubernetes deployment
- **[CSS Framework](docs/css-framework.md)**: Styling documentation
## Project Structure
```
skrift/
├── skrift/ # Main Python package
│ ├── asgi.py # Application factory
│ ├── config.py # Settings management
│ ├── controllers/ # Route handlers (auth, web, sitemap)
│ ├── admin/ # Admin panel
│ ├── auth/ # RBAC and guards
│ ├── db/ # Models and services
│ │ ├── models/ # Page, User, Role, PageRevision
│ │ └── services/ # page_service, revision_service
│ ├── lib/ # Core utilities
│ │ ├── hooks.py # Hook/filter system
│ │ ├── seo.py # SEO metadata utilities
│ │ ├── flash.py # Enhanced flash messages
│ │ └── template.py # Template resolver
│ └── setup/ # Setup wizard
├── templates/ # Jinja2 templates
├── static/ # Static assets
├── alembic/ # Database migrations
├── docs/ # Documentation
├── app.yaml # Application config (generated)
└── main.py # Development entry point
```
## Configuration
### Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `SECRET_KEY` | Yes | Session encryption key |
| `DEBUG` | No | Enable debug mode (default: false) |
| `DATABASE_URL` | No | Database connection string |
| `OAUTH_REDIRECT_BASE_URL` | No | OAuth callback base URL |
OAuth credentials are configured per-provider (e.g., `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`).
### app.yaml
Application configuration is stored in `app.yaml` (generated by setup wizard):
```yaml
controllers:
- skrift.controllers.auth:AuthController
- skrift.admin.controller:AdminController
- skrift.controllers.web:WebController
db:
url: $DATABASE_URL
pool_size: 5
auth:
redirect_base_url: $OAUTH_REDIRECT_BASE_URL
providers:
google:
client_id: $GOOGLE_CLIENT_ID
client_secret: $GOOGLE_CLIENT_SECRET
```
Environment variables (prefixed with `$`) are interpolated at runtime.
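A rough sketch of how `$`-prefixed interpolation like this can work (illustrative only — Skrift's actual implementation may differ):

```python
import os

def interpolate(value):
    """Recursively replace '$NAME' strings in a parsed config
    with the corresponding environment variable values.
    Unset variables are left as-is."""
    if isinstance(value, dict):
        return {k: interpolate(v) for k, v in value.items()}
    if isinstance(value, list):
        return [interpolate(v) for v in value]
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:], value)
    return value
```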
## Deployment
### Minimal VPS Deployment
```bash
# Install Skrift
pip install skrift
# Create project directory
mkdir -p /opt/skrift && cd /opt/skrift
# Configure environment
cat > .env << EOF
SECRET_KEY=$(python -c "import secrets; print(secrets.token_urlsafe(32))")
DATABASE_URL=sqlite+aiosqlite:///./app.db
OAUTH_REDIRECT_BASE_URL=https://yourdomain.com
EOF
# Start server (use setup wizard or create app.yaml manually)
skrift
```
### Production with Hypercorn
```bash
hypercorn skrift.asgi:app --workers 4 --bind 0.0.0.0:8080
```
See the [Deployment Guide](docs/deployment.md) for detailed instructions including Docker, Docker Compose, and Kubernetes deployments.
## Database Migrations
```bash
# Apply migrations
skrift-db upgrade head
# Create new migration
skrift-db revision --autogenerate -m "description"
# Rollback
skrift-db downgrade -1
```
## Template Resolution
Templates follow WordPress-like hierarchical resolution:
| URL Path | Templates Tried |
|----------|-----------------|
| `/about` | `page-about.html` -> `page.html` |
| `/services/web` | `page-services-web.html` -> `page-services.html` -> `page.html` |
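Conceptually, the resolution order above can be sketched as a function that yields candidate template names from most to least specific (an illustration of the lookup rule, not Skrift's actual resolver code):

```python
def candidate_templates(path: str) -> list[str]:
    """Return template filenames to try for a URL path,
    most specific first, ending with the generic page.html."""
    segments = [s for s in path.strip("/").split("/") if s]
    candidates = [
        f"page-{'-'.join(segments[:i])}.html"
        for i in range(len(segments), 0, -1)  # drop one segment at a time
    ]
    candidates.append("page.html")            # final fallback
    return candidates
```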
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"advanced-alchemy>=0.26.0",
"aiosqlite>=0.20.0",
"alembic>=1.14.0",
"asyncpg>=0.30.0",
"fast-query-parsers>=1.0.2",
"httpx>=0.28.0",
"hypercorn>=0.17.0",
"litestar[cryptography,jinja]>=2.14.0",
"markdown-it-py>=3.0.0",
"mdit-py-plugins>=0.4.0",
"pydantic-settings>=2.7.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0.0",
"ruamel-yaml>=0.18.0",
"sqlalchemy[asyncio]>=2.0.36",
"zensical>=0.0.19; extra == \"docs\"",
"logfire[asgi,httpx,sqlalchemy]>=3.0.0; extra == \"logfire\"",
"redis>=5.0.0; extra == \"redis\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T02:19:22.977001 | skrift-0.1.0a46-py3-none-any.whl | 214,530 | 79/e5/05d741f13b8f0db6b80948e2768841aa4336c223f9d79791a40933c4bd77/skrift-0.1.0a46-py3-none-any.whl | py3 | bdist_wheel | null | false | 85409b3fd37983fa744adfae6f8b4f92 | 342e477324ac4641c67d1099ed87ec7ad55396b6d1b818039abea1b71c37a76c | 79e505d741f13b8f0db6b80948e2768841aa4336c223f9d79791a40933c4bd77 | null | [] | 225 |
2.1 | projen-modules | 1.1.84 | A collection of projen modules | # projen-modules
A collection of custom projen modules, that can be used to bootstrap and maintain consistent project configuration, tooling, dependencies, and builds.
## Getting Started
```sh
yarn install
npx projen build
```
This will:
* Install the dependencies
* Apply any projen changes
* Run tests
* Package project locally
Any files changed by projen should be committed to git.
Running the tests like this will update any snapshot files; these changes should be reviewed and committed to git.
## Testing
Types of testing:
* Snapshot - projen project outputs are stored as a snapshot in the corresponding `__snapshots__` directory. When the project changes, it is expected that these snapshots change too; they should be reviewed and committed alongside the project.
* Unit tests - these assert on specific functionality of the project and should be written for any new functionality added.
## Creating a New Project
```sh
npx projen new {project} --from projen-modules
```
Some projects may have required fields that need to be specified as part of this command; review any errors for details on what needs to be specified.
### Project Types
| Project type | Description |
| ---------------------------------------------- | -------------------------- |
| [cdk-typescript-app](API.md#cdktypescriptapp-) | A typescript CDK app |
| [npm-package](API.md#npmpackage-) | A typescript npm package |
| [python-package](API.md#pythonpackage-) | A python package |
| [jsii-package](API.md#jsiiproject-) | A typescript JSII package |
## Project Structure
All source is located in `src` and is grouped by:
* `components` - these are common building blocks that can be used by projects to implement specific project functionality.
* `projects` - these are projects that can be built from this project (see #something)
* `utils` - these are helper functions that are often reused
`test` contains tests, and mirrors the `src` directory structure. Within here there are `__snapshots__` which contain snapshots of project tests (see #section).
| text/markdown | Dave Shepherd<dave.shepherd@endor.me.uk> | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved"
] | [] | https://github.com/daveshepherd/projen-modules.git | null | ~=3.9 | [] | [] | [] | [
"constructs==10.5.1",
"jsii<2.0.0,>=1.126.0",
"projen<1.0.0,>=0.99.9",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/daveshepherd/projen-modules.git"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-21T02:17:45.556053 | projen_modules-1.1.84.tar.gz | 357,108 | da/ea/50e02cf99cf0d80dd8bf647b5e50ac2e18d5459a371c6779672741d5b85f/projen_modules-1.1.84.tar.gz | source | sdist | null | false | 952989510f13acdf9de50f8a42f8f072 | 6214a2034dafc1986ad064f4c762e8ede1cfee7a7a8f7a316b24c1f4f342533f | daea50e02cf99cf0d80dd8bf647b5e50ac2e18d5459a371c6779672741d5b85f | null | [] | 230 |
2.4 | plasmapy | 2026.2.0 | Python package for plasma science | <div align="center"><img src="https://raw.githubusercontent.com/PlasmaPy/PlasmaPy-logo/main/exports/with-text-dark.png" width="600"/></div>
# PlasmaPy
[PlasmaPy] is an open source, community-developed [Python] package for
plasma research and education. PlasmaPy intends to be for plasma science
what [Astropy] is for astronomy — a collection of functionality commonly
needed by plasma scientists and researchers globally, running within and
leveraging the open source scientific Python ecosystem. The goals of
PlasmaPy are more thoroughly described in [this video]. Many of our
recent presentations are available from the
[PlasmaPy Community on Zenodo].
## Documentation
Please check out our online [**documentation**] to learn more about
PlasmaPy's capabilities.
If you would like an idea of what PlasmaPy can do, go to our
[example gallery] of Jupyter notebooks. To learn more about how to
contribute, check out PlasmaPy's [contributor guide].
## Installing PlasmaPy
PlasmaPy's online documentation has detailed instructions on how to
[**install PlasmaPy**].
To install PlasmaPy on macOS or Linux, open a terminal and run:
```Shell
python -m pip install plasmapy
```
On some systems, it might be necessary to specify the Python version
number, for example by using `python3` or `python3.14` instead of
`python`.
To install PlasmaPy in Windows via PowerShell, run:
```Shell
py -3.14 -m pip install plasmapy
```
The `3.14` may be replaced by any version of Python that is installed
and supported by PlasmaPy.
## Citing PlasmaPy
If you use PlasmaPy for research resulting in a publication, please
[cite PlasmaPy]. It really helps support the project! Citing software
used in research provides credit to its authors, promotes open science &
scientific reproducibility, and helps open source projects demonstrate
to funding agencies that continued development should be supported.
Please check out the [PlasmaPy community on Zenodo] for prior releases
of PlasmaPy and other resources.
## Requesting features
Please [submit a feature request] in our [GitHub repository] if you have
an idea for new or improved functionality. PlasmaPy is community-driven,
and feature requests really help guide the future of the project.
## Submitting bug reports
Please [submit a bug report] on PlasmaPy's GitHub repository if you
notice any problems. We really appreciate it!
## Contributing
If you are interested in contributing, please check out our
[contributor guide] and [code of conduct]. There are a number of
[good first issues] in our GitHub repository. New contributors are very
welcome!
## Events
PlasmaPy has several [meetings] that are on our [calendar]. Events are
usually held on PlasmaPy's [Zoom] room. The most up-to-date information
about these meetings is on the [meetings] page of PlasmaPy's website.
### Community meetings
PlasmaPy's [community meetings] are a place to talk about code
development, event planning, and other community happenings. If you
have an idea for a new feature or would like to become involved in the
PlasmaPy project, community meetings are a great place to start.
## Community
### Contact information
Please feel free to reach out to us at [team@plasmapy.org] or stop by
one of our [community meetings] with any ideas, questions, and/or puns
about computational magnetohydrodynamics.
Please use these links to [submit a feature request],
[submit a bug report], or [privately report a security vulnerability]
on PlasmaPy's GitHub repository.
### GitHub discussions
We're trying out [GitHub discussions] as a place to suggest ideas, bring
up discussion topics, and ask questions.
### Matrix chat
If you have any questions, the quickest way to get a response is to ask
on our [Matrix]/[Gitter] channel. Both of these are the same chat
channel; Gitter uses a bridge to link the two.
### Mailing list
Subscribe to PlasmaPy's low-volume [mailing list] to receive occasional
newsletters and announcements.
## License
PlasmaPy is permissively licensed under a [3-clause BSD license] with
added [protections against software patents].
## Acknowledgments
Development of PlasmaPy has been supported in part by the
[National Science Foundation], [NASA], [Department of Energy], and the
[Smithsonian Institution]. For more details, please see PlasmaPy's
documentation page on [authors and credits].
[**documentation**]: https://docs.plasmapy.org
[**install plasmapy**]: https://docs.plasmapy.org/en/stable/install.html
[3-clause bsd license]: ./LICENSE.md
[astropy]: https://www.astropy.org
[authors and credits]: https://docs.plasmapy.org/en/latest/about/credits.html
[calendar]: https://calendar.google.com/calendar/embed?src=c_sqqq390s24jjfjp3q86pv41pi8%40group.calendar.google.com&ctz=America%2FNew_York
[cite plasmapy]: https://docs.plasmapy.org/en/latest/about/citation.html
[code of conduct]: http://docs.plasmapy.org/en/latest/CODE_OF_CONDUCT.html
[community meetings]: https://www.plasmapy.org/meetings/weekly
[contributor guide]: https://docs.plasmapy.org/en/latest/development/index.html
[department of energy]: https://www.energy.gov
[example gallery]: https://docs.plasmapy.org/en/stable/examples.html
[github discussions]: https://github.com/PlasmaPy/PlasmaPy/discussions
[github repository]: https://github.com/PlasmaPy/PlasmaPy
[gitter]: https://gitter.im/PlasmaPy/Lobby
[good first issues]: https://github.com/PlasmaPy/PlasmaPy/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22
[mailing list]: https://groups.google.com/forum/#!forum/plasmapy
[matrix]: https://app.element.io/#/room/#plasmapy:openastronomy.org
[meetings]: https://www.plasmapy.org/meetings/weekly
[nasa]: https://www.nasa.gov/
[national science foundation]: https://nsf.gov
[plasmapy]: https://www.plasmapy.org
[plasmapy community on zenodo]: https://zenodo.org/communities/plasmapy
[privately report a security vulnerability]: https://github.com/plasmapy/plasmapy/security/advisories/new
[protections against software patents]: ./PATENT.md
[python]: https://www.python.org
[smithsonian institution]: https://www.si.edu
[submit a bug report]: https://github.com/PlasmaPy/PlasmaPy/issues/new?assignees=&labels=Bug&template=bug_report.yml
[submit a feature request]: https://github.com/PlasmaPy/PlasmaPy/issues/new?assignees=&labels=Feature+request&template=feature_request.yml
[team@plasmapy.org]: mailto:team@plasmapy.org
[this video]: https://youtu.be/E8RwQF5wcXM
[zoom]: https://zoom.us/j/91633383503?pwd=QWNkdHpWeFhrYW1vQy91ODNTVG5Ndz09
| text/markdown | null | null | null | null | null | astronomy, fusion, heliophysics, plasma, plasma physics, science, solar physics, space plasmas | [
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Astronomy",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"astropy>=6.1.3",
"h5py>=3.11",
"lmfit>=1.3",
"matplotlib>=3.8",
"mpmath>=1.3",
"numpy>=1.26",
"packaging>=24",
"pandas>=2.1.4",
"requests>=2.30",
"scipy>=1.11",
"tqdm>=4.65",
"wrapt>=1.15",
"xarray>=2024.5"
] | [] | [] | [] | [
"Changelog, https://docs.plasmapy.org/en/stable/whatsnew/index.html",
"Chat, https://plasmapy.org/chat",
"Documentation, https://docs.plasmapy.org/",
"Issues, https://github.com/PlasmaPy/plasmapy/issues/",
"Source, https://github.com/PlasmaPy/plasmapy",
"website, https://www.plasmapy.org"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:17:44.708044 | plasmapy-2026.2.0.tar.gz | 13,148,511 | 76/dd/2343a92a083c3cab6e00650907c99476a6789d7af7c9deca1a0503f31d8a/plasmapy-2026.2.0.tar.gz | source | sdist | null | false | 1fb17da884940571ee63bb9f32989133 | 5cd1afa53f10f0b0332834fa903f49f879519d0bd5f8dc1dc2334ed2a8bedb5b | 76dd2343a92a083c3cab6e00650907c99476a6789d7af7c9deca1a0503f31d8a | null | [
"LICENSE.md"
] | 217 |
2.4 | rb-deeplearning-lib | 0.2.6 | This is a machine learning--more specifically deep learning--library from my independent study on deep learning. This library is both a result of my learning and a tool for AI development. | # Deeplearning Package
## Overview
This package is modeled on PyTorch's building-block approach, providing functions that can be mixed, matched, and customized as desired for any given model. This library is bare bones and only includes the few methods and ideas I learned about while studying *Deep Learning* by Ian Goodfellow et al. AI was used in the project, but it was used sparingly.
## Modules
This project has five main modules:
* `autogradient.py`
* `sequence.py`
* `optimizer.py`
* `neural_net.py`
* `cnn_layers.py`
All of which are automatically part of the initial import of the package.
## Detailed Module Descriptions
### `autogradient.py`
This module forms the core of the automatic differentiation system, enabling the computation of gradients for complex mathematical operations. It introduces the `Values` class, which is central to tracking computational history.
* **`Values` Class**: Encapsulates numerical values (`vals`) and their corresponding gradients (`grad`). It supports a wide array of arithmetic and mathematical operations (e.g., `+`, `-`, `*`, `/`, `@`, `exp`, `log`, `relu`, `abs`, `sum`, `softmax`, `mean`, `__pow__`, `__getitem__`, `pad`, `_` (identity), `T` (transpose)). The `__getattr__` method handles properties like `T` (transpose) and provides a pass-through for underlying numpy array attributes. Each operation automatically builds a computational graph by defining a `_backward` function. A static method, `_broadcast_grad`, meticulously handles gradient broadcasting to correctly match original tensor shapes. The `backward()` method then leverages this graph to efficiently compute and propagate gradients.
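For intuition, here is a minimal scalar sketch of the pattern described above — each operation records a `_backward` closure, and `backward()` replays the closures in reverse topological order. The class below is purely illustrative and far simpler than the numpy-backed `Values`:

```python
class Value:
    """Tiny scalar autograd node: tracks data, grad, and parents."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None  # set by each operation

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():  # d(out)/d(self) = d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():  # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()
```

For example, with `z = x * y + x`, calling `z.backward()` fills `x.grad` with `y + 1` and `y.grad` with `x`.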
### `sequence.py`
This module provides a mechanism to combine multiple operations or layers into a single, cohesive sequential model.
* **`Sequence` Class**: Designed to take a list of callable objects (such as `Layer`, `Dense`, or `Dropout` instances). Its `__call__` method ensures that input data is passed through each item in the sequence in the defined order. The `params()` method is crucial for model training, as it gathers all trainable parameters (weights and biases) from its constituent layers, making them accessible to optimizers.
### `optimizer.py`
The `optimizer.py` module implements various algorithms used to update neural network parameters based on their computed gradients, facilitating the learning process.
* **`Optimizer` Base Class**: Serves as the blueprint for all optimization algorithms, providing a `step` method that each specific optimizer overrides.
* **Subclasses**: The module includes several widely used optimizers:
* `Optim_SGD`: Implements Stochastic Gradient Descent, with an optional learning rate scheduler that adjusts the rate over time.
* `Optim_SGD_Momentum`: Extends SGD by incorporating momentum, which helps accelerate convergence by considering an exponentially weighted average of past gradients.
* `Optim_AdaGrad`: An adaptive learning rate optimizer that adjusts the learning rate for each parameter individually based on the historical sum of its squared gradients.
* `Optim_RMSProp`: Similar to AdaGrad, RMSProp uses a moving average of squared gradients to normalize the learning rate, helping to mitigate issues with vanishing or exploding gradients.
* `Optim_Adam`: A powerful and popular optimizer that combines elements of both Momentum and RMSProp, utilizing moving averages of both the gradients and the squared gradients to provide efficient and robust parameter updates.
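As a reference for what the Adam combination of Momentum and RMSProp actually computes, here is a single update step for one scalar parameter written out in plain Python (an illustrative sketch, not the library's `Optim_Adam` code):

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    m, v are the running first/second moments; t is the 1-based
    step count. Returns the updated (w, m, v)."""
    m = b1 * m + (1 - b1) * grad          # momentum-style first moment
    v = b2 * v + (1 - b2) * grad * grad   # RMSProp-style second moment
    m_hat = m / (1 - b1 ** t)             # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```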
### `cnn_layers.py`
This module introduces fundamental building blocks for Convolutional Neural Networks (CNNs), including convolutional layers and pooling operations.
* **`Convo2D` Class**: Implements a 2D convolutional layer. It takes a `kernel_matrix` as input and supports different `padding` strategies ('valid' and 'same') and `stride` values. The `__call__` method performs the convolution operation, handling padding and strides to produce the output feature map. The `params()` method returns the kernel for optimization.
* **`Pooling` Base Class**: An abstract base class for pooling operations, defining common attributes like `pool_size` and `stride`.
* **`MaxPooling` Class**: A subclass of `Pooling` that implements max pooling. It extracts windows from the input and returns the maximum value within each window, reducing the spatial dimensions of the input.
* **`AvgPooling` Class**: A subclass of `Pooling` that implements average pooling. Similar to max pooling, it extracts windows but returns the average value within each window, providing a smoothed down-sampled representation.
### `neural_net.py`
This module defines the fundamental building blocks for constructing neural networks, including various layers, network architectures, regularization techniques, loss functions, and a comprehensive model training framework.
* **`Layer` Class**: Represents a single fully connected layer, equipped with trainable weights and biases. It supports different activation functions (e.g., `'relu'`, `'softmax'`), which can be specified as strings and dynamically called. The `params()` method provides access to its weights and biases for optimization.
* **`Dense` Class**: Facilitates the creation of multi-layered perceptrons by stacking multiple `Layer` instances. It allows for detailed configuration, including the number of layers, sizes of input, middle, and output layers, and distinct activation functions for hidden and final layers.
* **`Dropout` Class**: Implements the dropout regularization technique, a method to prevent overfitting during training. During training, it randomly sets a fraction of input units to zero at each update, while scaling up the remaining activations to maintain the expected output.
* **Loss Functions**: Essential for quantifying the error of a model. The module includes:
* `cross_entropy_loss`: Calculates the categorical cross-entropy loss, primarily used for classification tasks.
* `mse_loss`: Computes the Mean Squared Error loss, a common choice for regression problems.
* **`Model` Class**: Serves as the central orchestrator for the neural network training process. It takes a list of `blocks` (e.g., `Layer`, `Dense`, `Dropout` instances) to define the network's architecture. It integrates an `optimizer`, a `loss_fn`, and an optional `pen_fn` (penalty function) for regularization. The `train` method manages the entire training loop, including batching, forward passes, loss calculation, backward propagation (leveraging `autogradient`), and parameter updates via the chosen optimizer.
* **Penalty Functions (Regularization)**: These functions are designed to prevent overfitting by adding a penalty term to the loss function.
* `l2_reg`: Implements L2 regularization (also known as weight decay), which adds a penalty proportional to the sum of the squared values of the weights.
* `l1_reg`: Implements L1 regularization, which adds a penalty proportional to the sum of the absolute values of the weights.
## Making and Running a Model
When creating a model, use the `Model` class, which orchestrates most of the functions included in the package. The first argument is a list of layers or blocks; each element is a step in the network. These steps can be `Dense`, `Layer`, or `Dropout` blocks (more will be added); a `Dense` is just multiple layers stacked back to back.
Training a model is done through: `def train(epochs, x_t, y_t, x_v, y_v, val_run=1, l_rate=0.01, _lambda=0.1, batch_size = None) `
Here `epochs` is the number of passes through the data; the `_t` suffix means training data and `_v` means validation data; `x` is input and `y` is output; `val_run` is the number of epochs between validation runs; `l_rate` is the learning rate; `_lambda` is a hyperparameter that determines the strength of the penalty functions; and `batch_size` determines how large batches will be (if the data size isn't a multiple of the batch size, training still runs; the last batch is just smaller than the others).
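The overall shape of that training loop (batching, forward pass, loss, gradient, update) can be illustrated with a self-contained one-parameter example. This sketch uses hand-derived MSE gradients and plain SGD instead of the package's autogradient and optimizers, and omits validation for brevity:

```python
def train(epochs, x_t, y_t, l_rate=0.01, batch_size=None):
    """Mini analogue of Model.train for a 1-parameter linear
    model y = w * x with MSE loss and a hand-derived gradient."""
    w = 0.0
    n = len(x_t)
    batch_size = batch_size or n
    for _ in range(epochs):
        for start in range(0, n, batch_size):   # last batch may be smaller
            xb = x_t[start:start + batch_size]
            yb = y_t[start:start + batch_size]
            preds = [w * x for x in xb]                 # forward pass
            grad = sum(2 * (p - y) * x                  # d(MSE)/dw
                       for p, y, x in zip(preds, yb, xb)) / len(xb)
            w -= l_rate * grad                          # SGD update
    return w
```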
## Dependencies
The autogradient, which is used for backpropagation, relies heavily on `numpy`.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/rylan-berry/DeepLearningIndependentStudy/tree/main/deeplearning_package"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T02:17:01.436796 | rb_deeplearning_lib-0.2.6.tar.gz | 15,158 | 7e/a6/92f97423f306300cf018fc345f47120574c06156ff810d9dde4226db4c23/rb_deeplearning_lib-0.2.6.tar.gz | source | sdist | null | false | 3953347ecbf7f888d9f6fb22ad354642 | 98fdd7f496a40fa1cc55f5bf2dac2540fe6046a5e8c0e17ec917d767fa636643 | 7ea692f97423f306300cf018fc345f47120574c06156ff810d9dde4226db4c23 | MIT | [
"LICENSE"
] | 207 |
2.3 | sprak | 1.6.0 | Pack sprites into a texture atlas | `sprak` is a Python module for packing sprites into a texture atlas.
## Installation
```sh
pip install sprak
```
## Usage
```python
from pathlib import Path
from sprak import SpritePacker
src = Path("path/to/src_folder")
dst = Path("path/to/dst_folder")
packer = SpritePacker()
packer.add_source_folder(src)
packer.pack(dst)
```
## Development
Create a virtual environment:
```sh
uv venv
```
Install requirements:
```sh
uv sync
```
## A note from Andrew
This is a module that I created and maintain for my own personal projects.
Please keep the following in mind:
- Features are added as I need them.
- Issues are fixed as my time and interest allow.
- Version updates may introduce breaking changes.
| text/markdown | akennedy | akennedy <andrewjacobkennedy@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aseprite-reader>=1.1.0",
"pillow>=12.1.1"
] | [] | [] | [] | [
"Homepage, https://github.com/kennedy0/sprak"
] | uv/0.9.10 {"installer":{"name":"uv","version":"0.9.10"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.3","id":"zena","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T02:14:33.082455 | sprak-1.6.0.tar.gz | 5,203 | 08/c8/ec7bac6139cf7bfae2650d2789e36365c6c7797fcf481086b4069b976e81/sprak-1.6.0.tar.gz | source | sdist | null | false | 1b2244449e774fb09bee4a0d391a55f7 | e7bfaf46acf3ae097712633ce714f9f4716437b2dcb6c971f9082bde5fa23b88 | 08c8ec7bac6139cf7bfae2650d2789e36365c6c7797fcf481086b4069b976e81 | null | [] | 241 |
2.4 | repid | 2.0.0a15 | Repid framework: simple to use, fast to run and extensible to adopt job scheduler | <!-- markdownlint-configure-file { "MD013": { "line_length": 100 } } -->
<!-- markdownlint-disable MD033 -->
# repid
<a href="https://www.instagram.com/p/Cd-ob1NNZ84/">
<img alt="Repid's logo" width="350" align="right" src="https://raw.githubusercontent.com/gist/aleksul/fedbe168f1fc59c5aac3ddd17ecff30a/raw/b9467303f55517d99633d6551de223cd6534b149/repid_logo_borders.svg">
</a>
[](https://pypi.org/project/repid/)
[](https://codecov.io/gh/aleksul/repid)
[](https://github.com/aleksul/repid/actions/workflows/tests.yaml)
[](https://pypi.python.org/pypi/repid/)
[](https://repid.aleksul.space)
<br>
**Repid** framework: simple to use, fast to run and extensible to adopt job scheduler.
<br>
```bash
pip install repid
```
## Quickstart
Here is what the simplest producer-consumer application can look like.
Producer:
```python
import asyncio

from repid import Connection, Job, RabbitMessageBroker, Repid

app = Repid(Connection(RabbitMessageBroker("amqp://user:password@localhost:5672")))


async def main() -> None:
    async with app.magic():
        await Job(name="awesome_job").enqueue()


asyncio.run(main())
```
Consumer:
```python
import asyncio

from repid import Connection, RabbitMessageBroker, Repid, Router, Worker

app = Repid(Connection(RabbitMessageBroker("amqp://user:password@localhost:5672")))

router = Router()


@router.actor
async def awesome_job() -> None:
    print("Hello async jobs!")
    await asyncio.sleep(1.0)


async def main() -> None:
    async with app.magic():
        await Worker(routers=[router]).run()


asyncio.run(main())
```
Check out [user guide] to learn more!
## License
**Repid** is distributed under the terms of the MIT license. Please see [License.md] for more information.
**Repid's logo** is distributed under the terms of the [CC BY-NC 4.0] license.
It is originally created by [ari_the_crow_].
[License.md]: https://github.com/aleksul/repid/blob/master/LICENSE
[user guide]: https://repid.aleksul.space
[CC BY-NC 4.0]: https://creativecommons.org/licenses/by-nc/4.0/
[ari_the_crow_]: https://www.instagram.com/p/Cd-ob1NNZ84/
| text/markdown | null | aleksul <me@aleksul.space> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"packaging>=22.0",
"typing-extensions<5.0.0,>=4.3.0; python_version < \"3.11\"",
"google-auth>=2.43.0; extra == \"pubsub\"",
"grpcio>=1.76.0; extra == \"pubsub\"",
"pydantic<3.0.0,>=2.0.0; extra == \"pydantic\"",
"redis<8.0.0,>=7.0.0; extra == \"redis\""
] | [] | [] | [] | [
"documentation, https://repid.aleksul.space",
"funding, https://github.com/sponsors/aleksul",
"repository, https://github.com/aleksul/repid",
"tracker, https://github.com/aleksul/repid/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T02:14:15.454023 | repid-2.0.0a15-py3-none-any.whl | 155,914 | b5/94/c60adc07f8311900f356f3ea843075f29a583fd60cab8078c4ed0e6d4411/repid-2.0.0a15-py3-none-any.whl | py3 | bdist_wheel | null | false | d48c35d8ecce1deee6a97492afaadc6a | 9272db151cb70f7285855904d5c0ac96b36641f7d8ddb7910567db0db98fa172 | b594c60adc07f8311900f356f3ea843075f29a583fd60cab8078c4ed0e6d4411 | null | [
"LICENSE"
] | 210 |
2.4 | suapy | 1.3.0 | Uma biblioteca Python incrível e em pt-BR focada na vida do aluno acessando a API do SUAP (IFRN). | <div align="center">
<img src="https://readme-typing-svg.demolab.com?font=Fira+Code&size=32&pause=1000&color=2E9E4F&center=true&vCenter=true&width=500&lines=Suapy+%F0%9F%90%8D;Seu+SUAP%2C+em+Python%2C+em+portugu%C3%AAs." alt="Typing SVG" />
**The Python library made for Brazilian students who use SUAP.**

Access absences, grades, exams, and much more, with clean code in Portuguese.
[](https://pypi.org/project/suapy/)
[](https://python.org)
[](LICENSE)
[](https://pypi.org/project/suapy/)
</div>
---
## ✨ Why Suapy?

> You want to know **how many absences** you have before you fail. You want to see **when your next exam is**. You want to drop your grades into a Pandas DataFrame and finally make sense of the semester. Suapy does all of that, in Portuguese, in just a few lines.
---
## 📦 Installation
```bash
pip install suapy
```
<details>
<summary>🐼 Using Pandas? Install with the extra</summary>
```bash
pip install suapy[pandas]
```
</details>
---
## 🚀 Getting started
```python
from suapy import Suap

suap = Suap()
suap.login("20201014040001", "sua_senha")

# 👤 Who am I?
aluno = suap.ensino.obter_dados_aluno()
print(f"E aí, {aluno['nome_usual']}! 👋")

# 📅 Next exam
provas = suap.ensino.obter_proximas_avaliacoes()
if provas:
    p = provas[0]
    print(f"📌 Prova de {p['disciplina']} em {p['data_avaliacao']}")

# 📋 Course status
for d in suap.ensino.obter_diarios(2024, 1):
    print(f"• {d['disciplina']}: {d['numero_faltas']} faltas — {d['situacao']}")
```
---
## 🎒 What you can do with `suap.ensino`

| Function                          | What it returns                                       |
| --------------------------------- | ----------------------------------------------------- |
| `obter_dados_aluno()`             | Enrollment, course, quota, and contact info           |
| `obter_diarios(ano, periodo)`     | Absences, grades, and status per course               |
| `obter_boletim(ano, periodo)`     | Final averages and course load                        |
| `obter_proximas_avaliacoes()`     | Dates of registered exams and assignments             |
| `obter_mensagens_aluno()`         | SUAP messages (`'lidas'`, `'nao_lidas'`, `'todas'`)   |
| `obter_turmas_virtuais(ano, per)` | Virtual classroom links and participants              |
| `obter_requisitos_conclusao()`    | Remaining hours and courses needed to graduate        |
---
## 📊 Analyzing your grades with Pandas
```python
from suapy import para_dataframe
boletim = suap.ensino.obter_boletim(2024, 1)
df = para_dataframe(boletim)
media = df['media_final_disciplina'].astype(float).mean()
print(f"📈 Sua média geral: {media:.2f}")
```
---
## 🔐 Handling login errors
```python
from suapy import Suap, SuapAuthError

suap = Suap()

try:
    suap.login("usuario", "senha_errada")
except SuapAuthError:
    print("❌ Usuário ou senha incorretos.")
```
---
<div align="center">
Made with 💚 for the students of **IF** and of every institution that uses **SUAP**

_Not affiliated with IFRN or the official SUAP project._
</div>
| text/markdown | Kellyson | kellyson.m@escolar.ifrn.edu.br | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Intended Audience :: Education"
] | [] | https://github.com/kellyson71/suapy | null | >=3.6 | [] | [] | [] | [
"requests>=2.25.0",
"rich>=10.0.0",
"pandas>=1.0.0; extra == \"pandas\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T02:13:17.697870 | suapy-1.3.0.tar.gz | 12,556 | 55/8b/955018190f7fd411ea633519fff5c32c651a8715b389207a97c43c411160/suapy-1.3.0.tar.gz | source | sdist | null | false | 9ff86d34348c68db30af76d484cf1d59 | 50b5da19172d90ccc5e77f3a2b69c5ddf73bb7116b53bc368f0b8b93b5e4695b | 558b955018190f7fd411ea633519fff5c32c651a8715b389207a97c43c411160 | null | [
"LICENSE"
] | 225 |
2.4 | TornAPIWrapper | 2.1.1 | A Python wrapper for the Torn City API, providing access to Torn City data. | <a href="https://github.com/cxdzc/TornAPIWrapper">
<img src="https://github.com/cxdzc/TornAPIWrapper/assets/110936008/271aa9c8-280e-4fd9-be9e-cd8b88d53329" alt="Banner">
</a>
<div align="center">
<a href="https://pypi.org/project/TornAPIWrapper/"><img src=https://img.shields.io/pypi/v/tornapiwrapper?cacheSeconds=300></a>
<a href="https://pypi.org/project/TornAPIWrapper/"><img src=https://img.shields.io/pypi/pyversions/tornapiwrapper?cacheSeconds=300></a>
<a href="https://pypi.org/project/TornAPIWrapper/"><img src=https://img.shields.io/pepy/dt/tornapiwrapper?color=blue&cacheSeconds=300></a>
<a href="https://www.torn.com/api.html#:~:text=Patch%20Notes"><img src=https://img.shields.io/badge/patch-19.02.2026-c4c4c4?cacheSeconds=300></a>
<a rel="license" href="https://github.com/cxdzc/TornAPIWrapper/blob/main/LICENSE"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-ab1436"/></a>
</div>
# 🌃 TornAPIWrapper
A Python wrapper for the [Torn City API v2](https://www.torn.com/swagger.php), providing access to [Torn City](https://www.torn.com) data.
### ✨ Features
* Sync and async support.
* Built-in API error handling.
* Modern, Pythonic API wrapper.
# 💾 Instructions
1. Install the [TornAPIWrapper](https://pypi.org/project/TornAPIWrapper) package by typing `pip install tornapiwrapper` in your terminal.
2. Check out [examples](https://github.com/cxdzc/TornAPIWrapper/tree/main/Examples) and [documentations](https://github.com/cxdzc/TornAPIWrapper?tab=readme-ov-file#-documentations) to familiarize yourself with the API.
3. Create an [API key](https://www.torn.com/preferences.php#tab=api).
4. Start programming!
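For orientation, the request shape that the wrapper manages under the hood can be sketched with the standard library alone. Everything here is an assumption for illustration (the `v2` base URL, the `build_url` helper, and the query parameters are not part of TornAPIWrapper's API); the wrapper itself and the Swagger docs are the authoritative references:

```python
from urllib.parse import urlencode

API_BASE = "https://api.torn.com/v2"  # assumed v2 base URL; verify against the Swagger docs


def build_url(section: str, selections: list[str], key: str) -> str:
    """Build a hypothetical Torn API v2 request URL.

    Illustration only; TornAPIWrapper constructs and sends requests for you.
    """
    query = urlencode({"selections": ",".join(selections), "key": key})
    return f"{API_BASE}/{section}?{query}"


url = build_url("user", ["basic", "profile"], "YOUR_API_KEY")
print(url)
```

The wrapper's classes add authentication, error handling, and sync/async transport on top of requests like this; the sketch only shows the kind of URL being fetched.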
# 📑 Documentations
There are two sets of official Torn City API documentation that I recommend reading to understand how the Torn API and TornAPIWrapper work.
- **[Official Torn API Docs v2:](https://www.torn.com/swagger.php)** Used by this project - best for endpoints, parameters, response schemas, and searching what to call and request.
- **[Official Torn API Docs v1:](https://www.torn.com/api.html)** Useful reference - explains API keys, access levels, limits, error codes, and ToS details not covered in v2.
# 💝 Contributors
<a href="https://github.com/cxdzc/TornAPIWrapper/graphs/contributors">
<img src="https://contrib.rocks/image?repo=cxdzc/TornAPIWrapper" />
</a>
<br><br>
View [CONTRIBUTING.md](https://github.com/cxdzc/TornAPIWrapper?tab=contributing-ov-file) to contribute.
# 📜 License
> [!NOTE]
> This is not legal advice.
The content and software in this repository are licensed under the [MIT License](https://github.com/cxdzc/TornAPIWrapper?tab=MIT-1-ov-file), a simple and permissive license that allows use, modification, and distribution as long as the license notice is preserved.
| text/markdown | cxdzc | null | null | null | The MIT License (MIT)
Copyright (c) 2023-Present cxdzc
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE. | torn, torn-city, torn-city-api, torncom, wrapper, api, python | [
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.15",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Internet",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities",
"Natural Language :: English"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"requests",
"aiohttp"
] | [] | [] | [] | [
"Homepage, https://github.com/cxdzc/TornAPIWrapper",
"Repository, https://github.com/cxdzc/TornAPIWrapper",
"Issues, https://github.com/cxdzc/TornAPIWrapper/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:12:55.699782 | tornapiwrapper-2.1.1.tar.gz | 33,168 | 0b/bb/9fc47b449e073168386698e24133e4c821b76dc046aace9759449426b9b9/tornapiwrapper-2.1.1.tar.gz | source | sdist | null | false | e21f4f487f5af9fe4b6cee0c24e1a0df | d47702a13e02e0d4dffb298ff81833e2f9cd385eda29a498e137036ce048a196 | 0bbb9fc47b449e073168386698e24133e4c821b76dc046aace9759449426b9b9 | null | [
"LICENSE"
] | 0 |
2.4 | weightwatcher | 0.7.7 | Diagnostic Tool for Deep Neural Networks | [](http://pepy.tech/project/weightwatcher)
[](https://pypi.org/project/weightwatcher/)
[](./LICENSE.txt)
[](https://nature.com/articles/s41467-021-24025-8)
[](https://www.youtube.com/watch?v=Tnafo6JVoJs)
[](https://discord.gg/uVVsEAcfyF)
[](https://www.linkedin.com/in/charlesmartin14/)
[](https://www.calculatedcontent.com)
[](https://weightwatcher.ai)
**WeightWatcher** (WW) is an open-source, diagnostic tool for analyzing Deep Neural Networks (DNN), without needing access to training or even test data. It is based on theoretical research into Why Deep Learning Works, based on our Theory of Heavy-Tailed Self-Regularization (HT-SR). It uses ideas from Random Matrix Theory (RMT), Statistical Mechanics, and Strongly Correlated Systems.
It can be used to:
- analyze pre/trained PyTorch and Keras DNN models (Conv2D and Dense layers)
- monitor models, and the model layers, to see if they are over-trained or over-parameterized
- predict test accuracies across different models, with or without training data
- detect potential problems when compressing or fine-tuning pretrained models
- layer warning labels: over-trained; under-trained
## Quick Links
- Please see [our latest talk from the Silicon Valley ACM meetup](https://www.youtube.com/watch?v=Tnafo6JVoJs)
- Join the [Discord Server](https://discord.gg/uVVsEAcfyF)
- For a deeper dive into the theory,
- Dr. Martin's [invited talk at NeurIPS 2023](https://youtu.be/xEuBwBj_Ov4)
- the deep theory [SETOL monograph](https://arxiv.org/abs/2507.17912)
- the most recent [Grokking paper](https://arxiv.org/abs/2506.04434)
- and some of the most recent Podcasts:
- [Practical AI](https://changelog.com/practicalai/194)
- [The Prompt Desk](https://smartlink.ausha.co/the-prompt-desk/data-free-quality-analysis-of-deep-neural-nets-with-charles-h-martin)
- [The TWIML AI Podcast with Sam Charrington](https://www.youtube.com/watch?v=pxPCKR6ED4s)
- More details and demos can be found on the [Calculated Content Blog](https://calculatedcontent.com/)
- and on the open-source landing page [weightwatcher.ai](https://weightwatcher.ai)
And in the notebooks provided in the [WeightWatcher-examples github repo](https://github.com/CalculatedContent/WeightWatcher-examples)
(the examples folder here is quite old)
If you have some models you would like to analyze and get feedback on, check out [WeightWatcher-Pro](https://weightwatcher-ai.com). It's currently in beta and free.
## Installation: Version 0.7.6
```sh
pip install weightwatcher
```
If this fails, try installing from TestPyPI:
### Current TestPyPI Version 0.7.5.5
```sh
python3 -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple weightwatcher
```
## Usage
```python
import weightwatcher as ww
import torchvision.models as models
model = models.vgg19_bn(pretrained=True)
watcher = ww.WeightWatcher(model=model)
details = watcher.analyze()
summary = watcher.get_summary(details)
```
It is easy to run and generates a pandas dataframe with details (and plots) for each layer

and `summary` dictionary of generalization metrics
```python
{'log_norm': 2.11, 'alpha': 3.06,
'alpha_weighted': 2.78,
'log_alpha_norm': 3.21,
'log_spectral_norm': 0.89,
'stable_rank': 20.90,
'mp_softrank': 0.52}
```
## Advanced Usage
The `watcher` object has several functions and analysis features described below
Notice the `min_evals` setting: the power law fits need at least 50 eigenvalues to make sense, but `describe` and the other methods do not.
```python
watcher.analyze(model=None, layers=[], min_evals=50, max_evals=None,
                plot=True, randomize=True, mp_fit=True, pool=True, savefig=True)
...
watcher.describe(model=None, layers=[], min_evals=0, max_evals=None,
                 plot=True, randomize=True, mp_fit=True, pool=True)
...
watcher.get_details()
watcher.get_summary(details) or get_summary()
watcher.get_ESD()
...
watcher.distances(model_1, model_2)
```
## PEFT / LoRA models (experimental)
To analyze a PEFT / LoRA fine-tuned model, specify the `peft` option.
- `peft=True`: Forms the low-rank BA matrix and analyzes the delta layers, tagged with `lora_BA` in the layer name
```details = watcher.analyze(peft=True)```
- `peft='with_base'`: Analyzes the base model, the delta, and the combined layer weight matrices.
```details = watcher.analyze(peft='with_base')```
The base model and the fine-tuned model must have the same layer names; WeightWatcher will ignore layers that do not share the same name.
Also, at this point, biases are not considered. Finally, both models should be stored in the same format (e.g. safetensors).
Note: If you want to select by layer_ids, you must first run describe(peft=False), and then select *both* the lora_A and lora_B layers
#### Usage: Base Model

## Ploting and Fitting the Empirical Spectral Density (ESD)
WW creates plots for each layer weight matrix to observe how well the power law fits work
```python
details = watcher.analyze(plot=True)
```
For each layer, WeightWatcher plots the ESD--a histogram of the eigenvalues of the layer correlation matrix **X=W<sup>T</sup>W**. It then fits the tail of the ESD to a (Truncated) Power Law, and plots these fits on different axes. The summary metrics (above) characterize the Shape and Scale of each ESD. Here's an example:
<img src="./img/ESD-plots.png" width='800px' height='auto' />
Generally speaking, the ESDs in the best layers, in the best DNNs can be fit to a Power Law (PL), with PL exponents `alpha` closer to `2.0`.
Visually, the ESD looks like a straight line on a log-log plot (above left).
## Generalization Metrics
<details>
<summary>
The goal of the WeightWatcher project is to find generalization metrics that most accurately reflect observed test accuracies, across many different models and architectures, for pre-trained models and models undergoing training.
</summary>
[Our HTSR theory](https://jmlr.org/papers/volume22/20-410/20-410.pdf) says that well trained, well correlated layers should be significantly different from the MP (Marchenko-Pastur) random bulk, and, specifically, should be heavy tailed. There are different layer metrics in WeightWatcher for this, including:
- `rand_distance` : the distance in distribution from the randomized layer
- `alpha` : the slope of the tail of the ESD, on a log-log scale
- `alpha-hat` or `alpha_weighted` : a scale-adjusted form of `alpha` (similar to the alpha-shatten-Norm)
- `stable_rank` : a norm-adjusted measure of the scale of the ESD
- `num_spikes` : the number of spikes outside the MP bulk region
- `max_rand_eval` : scale of the random noise etc
All of these attempt to measure how non-random and/or heavy-tailed the layer ESDs are.
#### Scale Metrics
- log Frobenius norm : <img src="https://render.githubusercontent.com/render/math?math=\log_{10}\Vert\mathbf{W}\Vert^{2}_{F}">
- `log_spectral_norm` : <img src="https://render.githubusercontent.com/render/math?math=\log_{10}\lambda_{max}=\log_{10}\Vert\mathbf{W}\Vert^{2}_{\infty}">
- `stable_rank` : <img src="https://render.githubusercontent.com/render/math?math=R_{stable}=\Vert\mathbf{W}\Vert^{2}_{F}/\Vert\mathbf{W}\Vert^{2}_{\infty}">
- `mp_softrank` : <img src="https://render.githubusercontent.com/render/math?math=R_{MP}=\lambda_{MP}/\lambda_{max}">
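As a toy illustration (not WeightWatcher's implementation), most of the scale metrics above can be computed directly from the eigenvalues of a layer's correlation matrix **X**:

```python
import math


def scale_metrics(evals):
    """Sketch of the scale metrics, given the eigenvalues of X = W^T W.

    Illustration only; WeightWatcher computes these internally during analyze().
    """
    lambda_max = max(evals)
    frob_sq = sum(evals)  # squared Frobenius norm = sum of the eigenvalues
    return {
        "log_norm": math.log10(frob_sq),
        "log_spectral_norm": math.log10(lambda_max),
        "stable_rank": frob_sq / lambda_max,
    }


metrics = scale_metrics([0.1, 0.5, 1.0, 2.0, 10.0])
```

Note that `mp_softrank` additionally needs the MP bulk edge λ<sub>MP</sub>, which comes from fitting the randomized ESD, so it is omitted here.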
#### Shape Metrics
- `alpha` : <img src="https://render.githubusercontent.com/render/math?math=\alpha"> Power Law (PL) exponent
- (Truncated) PL quality of fit `D` : the Kolmogorov-Smirnov distance metric
(advanced usage)
- TPL : (alpha and Lambda) Truncated Power Law Fit
- E_TPL : (alpha and Lambda) Extended Truncated Power Law Fit
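For intuition about what `alpha` measures, here is a crude Hill-style tail-exponent estimator in plain Python. This is only a sketch: WeightWatcher itself uses a maximum-likelihood power-law fit with an `xmin` search, which is what produces the reported `alpha` and `D`:

```python
import math


def hill_alpha(evals, k=None):
    """Crude Hill estimator of a power-law (pdf) exponent alpha,
    using the largest k eigenvalues. Illustration only, not WeightWatcher's fit."""
    xs = sorted(evals, reverse=True)
    k = k or len(xs) // 2
    tail, xmin = xs[:k], xs[k]
    return 1.0 + k / sum(math.log(x / xmin) for x in tail)


# deterministic Pareto-like spectrum with true pdf exponent alpha = 3
evals = [((i + 0.5) / 1000) ** -0.5 for i in range(1000)]
```

On this synthetic spectrum the estimator recovers a value close to 3, matching the exponent the data was built with.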
#### Scale-adjusted Shape Metrics
- `alpha_weighted` : <img src="https://render.githubusercontent.com/render/math?math=\hat{\alpha}=\alpha\log_{10}\lambda_{max}">
- `log_alpha_norm` : (Shatten norm): <img src="https://render.githubusercontent.com/render/math?math=\log_{10}\Vert\mathbf{X}\Vert^{\alpha}_{\alpha}">
#### Direct Correlation Metrics
The random distance metric is a new, non-parametric approach that appears to work well in early testing.
[See this recent blog post](https://calculatedcontent.com/2021/10/17/fantastic-measures-of-generalization-that-actually-work-part-1/)
- `rand_distance` : <img src="https://render.githubusercontent.com/render/math?math=div(\mathbf{W},rand(\mathbf{W}))"> Distance of layer ESD from the ideal RMT MP ESD
There are also related metrics, including the new
- `ww_maxdist`
- `ww_softrank`
#### Misc Details
- `N, M` : Matrix or Tensor Slice Dimensions
- `num_spikes` : number of spikes outside the bulk region of the ESD, when fit to an MP distribution
- `num_rand_spikes` : number of Correlation Traps
- `max_rand_eval` : scale of the random noise in the layer
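The `num_spikes` idea can be illustrated with the Marchenko-Pastur bulk edge. A minimal sketch, assuming the convention λ± = σ²(1 ± 1/√Q)² with aspect ratio Q = N/M; WeightWatcher's actual MP fit estimates σ from the layer ESD rather than taking it as given:

```python
import math


def mp_bulk_edges(n_rows, n_cols, sigma=1.0):
    """Marchenko-Pastur bulk edges for the eigenvalues of (1/N) W^T W,
    where W is an N x M matrix with i.i.d. entries of variance sigma^2."""
    q = n_rows / n_cols  # aspect ratio Q = N/M, assumed >= 1
    return (sigma**2 * (1 - 1 / math.sqrt(q)) ** 2,
            sigma**2 * (1 + 1 / math.sqrt(q)) ** 2)


def count_spikes(evals, n_rows, n_cols, sigma=1.0):
    """Count eigenvalues above the MP bulk edge (toy version of num_spikes)."""
    _, lam_plus = mp_bulk_edges(n_rows, n_cols, sigma)
    return sum(1 for ev in evals if ev > lam_plus)
```

For a square matrix (Q = 1) the bulk spans [0, 4σ²], so any eigenvalue above 4σ² counts as a spike.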
#### Summary Statistics:
The layer metrics are averaged in the **summary** statistics:
Get the average metrics, as a `summary` (dict), from the given (or current) `details` dataframe
```python
details = watcher.analyze(model=model)
summary = watcher.get_summary(details)
```
or just
```python
summary = watcher.get_summary()
```
The summary statistics can be used to gauge the test error of a series of pre/trained models, without needing access to training or test data.
- average `alpha` can be used to compare one or more DNN models with different hyperparameter settings **θ**, when depth is not a driving factor (i.e. transformer models)
- average `log_spectral_norm` is useful to compare models of different depths **L** at a coarse grain level
- average `alpha_weighted` and `log_alpha_norm` are suitable for DNNs of differing hyperparameters **θ** and depths **L** simultaneously. (i.e. CV models like VGG and ResNet)
#### Predicting the Generalization Error
WeightWatcher (WW) can be used to compare the test error for a series of models, trained on similar datasets, but with different hyperparameters **θ**, or even different but related architectures.
Our Theory of HT-SR predicts that models with smaller PL exponents `alpha`, on average, correspond to models that generalize better.
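In code, the comparison amounts to averaging each model's per-layer `alpha` values and ranking the models by that mean, smaller being predicted to generalize better. A minimal sketch; in practice the per-layer alphas would come from each model's `details` dataframe, and the model names here are hypothetical:

```python
def rank_by_mean_alpha(model_alphas):
    """Rank model names by mean per-layer alpha, ascending (best predicted first).

    model_alphas: dict mapping model name -> list of per-layer alpha values.
    Toy illustration; not part of the WeightWatcher API.
    """
    means = {name: sum(alphas) / len(alphas) for name, alphas in model_alphas.items()}
    return sorted(means, key=means.get)


# hypothetical per-layer alphas for two models
ranking = rank_by_mean_alpha({"vgg11": [4.0, 5.0, 4.5], "vgg19": [2.5, 3.0, 2.8]})
```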
Here is an example of the `alpha_weighted` capacity metric for all the current pretrained VGG models.
<img src="https://github.com/CalculatedContent/PredictingTestAccuracies/blob/master/img/vgg-w_alphas.png" width='600px' height='auto' />
Notice: we *did not peek* at the ImageNet test data to build this plot.
This can be reproduced with the Examples Notebooks for [VGG](https://github.com/CalculatedContent/WeightWatcher/blob/master/examples/WW-VGG.ipynb) and also for [ResNet](https://github.com/CalculatedContent/WeightWatcher/blob/master/examples/WW-ResNet.ipynb)
</details>
## Detecting signs of Over-Fitting and Under-Fitting
WeightWatcher can help you detect the signatures of over-fitting and under-fitting in specific layers of a pre/trained Deep Neural Networks.
WeightWatcher will analyze your model, layer-by-layer, and show you where these kind of problems may be lurking.
### Correlation Traps
<details>
<summary>
The <code>randomize</code> option lets you compare the ESD of the layer weight matrix (W) to the ESD of its randomized form.
This is a good way to visualize the correlations in the true ESD, and to detect signatures of over- and under-fitting
</summary>
```python
details = watcher.analyze(randomize=True, plot=True)
```
Fig (a) is well trained; Fig (b) may be over-fit.
That orange spike on the far right is the tell-tale clue; it's called a **Correlation Trap**.
A **Correlation Trap** is characterized by Fig (b); here the actual (green) and random (red) ESDs look almost identical, except for a small shelf of correlation (just right of 0). In the random (red) ESD, the largest eigenvalue (orange) is far to the right of, and separated from, the bulk of the ESD.

When layers look like Figure (b) above, then they have not been trained properly because they look almost random, with only a little bit of information present. And the information the layer learned may even be spurious.
Moreover, the metric `num_rand_spikes` (in the `details` dataframe) contains the number of spikes (or traps) that appear in the layer.
The `SVDSharpness` transform can be used to remove Correlation Traps during training (after each epoch) or after training using
```python
sharpened_model = watcher.SVDSharpness(model=...)
```
Sharpening a model is similar to clipping the layer weight matrices, but uses Random Matrix Theory to do this in a more principled way than simple clipping.
</details>
### Early Stopping
<details>
<summary>
<b>Note:</b> This is experimental but we have seen some success here
</summary>
The WeightWatcher `alpha` metric may be used to detect when to apply early stopping. When the average `alpha` (summary statistic) drops below `2.0`, this indicates that the model may be over-trained and early stopping is necessary.
Below is an example of this, showing the training and test loss curves for a small Transformer model, trained from scratch, along with the average `alpha` summary statistic.

We can see that as the training and test losses decrease, so does `alpha`. But when the test loss saturates and then starts to increase, `alpha` drops below `2.0`.
**Note:** this only works for very well trained models, where the optimal `alpha=2.0` is obtained
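As a sketch of how this could be wired into a training loop: recompute the summary after each epoch, track the average `alpha`, and stop once it stays below `2.0`. The stopping rule below is a toy illustration, not part of the WeightWatcher API:

```python
def should_stop_early(alpha_history, threshold=2.0, patience=2):
    """Toy rule: stop once the average alpha summary metric has stayed
    below `threshold` for `patience` consecutive epochs.
    Illustration only; not part of the WeightWatcher API."""
    recent = alpha_history[-patience:]
    return len(recent) == patience and all(a < threshold for a in recent)


# In a real loop, alpha_history would be appended after each epoch from
# watcher.analyze() followed by watcher.get_summary()['alpha'].
```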
</details>
<hr>
## Additional Features
<details>
<summary>
There are many advanced features, described below
</summary>
<hr>
### Filtering
---
#### filter by layer types
```python
ww.LAYER_TYPE.CONV2D | ww.LAYER_TYPE.DENSE
```
as
```python
details=watcher.analyze(layers=[ww.LAYER_TYPE.CONV2D])
```
#### filter by layer ID or name
```python
details=watcher.analyze(layers=[20])
```
### Calculations
---
#### minimum, maximum number of eigenvalues of the layer weight matrix
Sets the minimum and maximum size of the weight matrices analyzed.
Setting `max_evals` is useful for quick debugging.
```python
details = watcher.analyze(min_evals=50, max_evals=500)
```
#### specify the Power Law fitting procedure
To replicate results using TPL or E_TPL fits, use:
```python
details = watcher.analyze(fit='PL'|'TPL'|'E_TPL')
```
The `details` dataframe will now contain two quality metrics, and for each layer:
- `alpha` : basically (but not exactly) the same PL exponent as before, useful for `alpha > 2.0`
- `Lambda` : a new metric, now useful when the (TPL) `alpha < 2.0`
(The TPL fits correct a problem we have had when the PL fits over-estimate `alpha` for TPL layers)
As with the `alpha` metric, smaller `Lambda` implies better generalization.
### Visualization
---
#### Save all model figures
Saves the layer ESD plots for each layer
```python
watcher.analyze(savefig='/plot_save_directory')
```
generating 4 files per layer
<pre>
ww.layer#.esd1.png
ww.layer#.esd2.png
ww.layer#.esd3.png
ww.layer#.esd4.png
</pre>
**Note:** additional plots will be saved when `randomize` option is used
#### fit ESDs to a Marchenko-Pastur (MP) distribution
The `mp_fit` option tells WW to fit each layer ESD as a Random Matrix as a Marchenko-Pastur (MP) distribution, as described in our papers on HT-SR.
```python
details = watcher.analyze(mp_fit=True, plot=True)
```
and reports the
```python
num_spikes, mp_sigma, and mp_softrank
```
Also works for randomized ESD and reports
```python
rand_num_spikes, rand_mp_sigma, and rand_mp_softrank
```
#### fetch the ESD for a specific layer, for visualization or additional analysis
```python
watcher.analyze()
esd = watcher.get_ESD()
```
### Model Analysis
---
#### describe a model
Describe a model and report the `details` dataframe, without analyzing it
```python
details = watcher.describe(model=model)
```
#### comparing two models
The new distances method reports the distances between two models, such as the norm between the initial weight matrices and the final, trained weight matrices
```python
details = watcher.distances(initial_model, trained_model)
```
### Compatability
---
#### compatability with version 0.2.x
The new 0.4.x version of WeightWatcher treats each layer as a single, unified set of eigenvalues.
In contrast, the 0.2.x versions split the Conv2D layers into n slices, one for each receptive field.
The `pool=False` option provides results which are backward-compatible with the 0.2.x version of WeightWatcher
(it used to be called `ww2x=True`), with details provided for each slice of each layer.
Otherwise, the eigenvalues from each slice of the Conv2D layer are pooled into one ESD.
```python
details = watcher.analyze(pool=False)
```
</details>
<hr>
## Requirements
- Python 3.7+
### Frameworks supported
- Tensorflow 2.x / Keras
- PyTorch 1.x
- HuggingFace
Note: the current version requires both tensorflow and torch; if there is demand, this will be updated to make installation easier.
### Layers supported
- Dense / Linear / Fully Connected (and Conv1D)
- Conv2D
## Tips for First Time Users
<details>
<summary>
When using WeightWatcher for the first time, I recommend selecting at least one trained model and running `weightwatcher` with all analyze options enabled, including the plots. From this, look for:
</summary>
- if the layers ESDs are well formed and heavy tailed
- if any layers are nearly random, indicating they are not well trained
- if all the power law fits appear reasonable, and `xmin` is small enough that the fit captures a reasonable section of the ESD tail
Moreover, the Power Laws and alpha fit only work well when the ESDs are both heavy tailed *and* can be easily fit to a single power law.
Occasionally the power law and/or alpha fits don't work. This happens when
- the ESD is random (not heavy tailed), `alpha > 8.0`
- the ESD is multimodal (rare, but does occur)
- the ESD is heavy tailed, but not well described by a single power law. In these cases, sometimes `alpha` only fits the **very last** part of the tail, and is **too** large. This is easily seen on the Lin-Lin plots
In any of these cases, I usually throw away results where `alpha > 8.0` because they are spurious. If you suspect your layers are undertrained, you have to look both at `alpha` and a plot of the ESD itself (to see if it is heavy tailed or just random-like).
</details>
<hr>
## How to Release
<details>
<summary>
Publishing to the PyPI repository:
</summary>
```sh
# 1. Check in the latest code with the correct revision number (__version__ in __init__.py)
vi weightwatcher/__init__.py # Increase the release number, remove -dev from the revision number
git commit
# 2. Check out latest version from the repo in a fresh directory
cd ~/temp/
git clone https://github.com/CalculatedContent/WeightWatcher
cd WeightWatcher/
# 3. Use the latest version of the tools
python -m pip install --upgrade setuptools wheel twine
# 4. Create the package
python setup.py sdist bdist_wheel
# 5. Test the package
twine check dist/*
# 6. Upload the package to TestPyPI first
twine upload --repository testpypi dist/*.whl
# 7. Test the TestPyPI install
python3 -m pip install --index-url https://test.pypi.org/simple/ weightwatcher
...
# 8. Upload to actual PyPI
twine upload dist/*.whl
# 9. Tag/Release in github by creating a new release (https://github.com/CalculatedContent/WeightWatcher/releases/new)
```
</details>
<hr>
## License
[Apache License 2.0](LICENSE.txt)
<hr>
## Academic Presentations and Media Appearances
This tool is based on state-of-the-art research done in collaboration with UC Berkeley:
<details>
<summary>
WeightWatcher has been featured in top journals like JMLR and Nature:
</summary>
#### Latest papers and talks
- [SETOL: A Semi-Empirical Theory of (Deep) Learning](https://arxiv.org/abs/2507.17912) (in progress)
- [Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics](https://arxiv.org/abs/2106.00734)
- [Evaluating natural language processing models with robust generalization metrics that do not need access to any training or testing data](https://arxiv.org/abs/2202.02842)
- [(Nature paper) Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data](https://www.nature.com/articles/s41467-021-24025-8)
- [Repo for Nature paper](https://github.com/CalculatedContent/ww-trends-2020)
- [(JMLR in press) Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning](https://arxiv.org/abs/1810.01075)
- [Traditional and Heavy Tailed Self Regularization in Neural Network Models](https://arxiv.org/abs/1901.08276)
- [Notebook for the above two papers](https://github.com/CalculatedContent/ImplicitSelfRegularization)
- [ICML 2019 Theoretical Physics Workshop Paper](https://github.com/CalculatedContent/PredictingTestAccuracies/blob/master/ICMLPhysicsWorkshop/icml_prl_TPDLW2019_fin.pdf)
- [Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks](https://arxiv.org/abs/1901.08278)
- [Notebook for paper](https://github.com/CalculatedContent/PredictingTestAccuracies)
- [Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior](https://arxiv.org/abs/1710.09553)
</details>
<details>
<summary>
and has been presented at Stanford, UC Berkeley, KDD, etc:
</summary>
- [NERSC Summer 2018](https://www.youtube.com/watch?v=_Ni5UDrVwYU)
- [UC Berkeley/ICSI 12/13/2018](https://www.youtube.com/watch?v=6Zgul4oygMc)
- [Institute for Pure & Applied Mathematics (IPAM)](https://www.youtube.com/watch?v=fmVuNRKsQa8)
- [Physics Informed Machine Learning](https://www.youtube.com/watch?v=eXhwLtjtUsI)
- [Talk at Stanford ICME 2020](https://www.youtube.com/watch?v=PQUItQi-B-I)
- [Talk at UCL (UK) 2022](https://www.youtube.com/watch?v=sOXROWJ70Pg)
#### KDD2019 Workshop
- [KDD 2019 Workshop: Statistical Mechanics Methods for Discovering Knowledge from Production-Scale Neural Networks](https://dl.acm.org/doi/abs/10.1145/3292500.3332294)
- [KDD 2019 Workshop: Slides](https://www.stat.berkeley.edu/~mmahoney/talks/dnn_kdd19_fin.pdf)
</details>
<details>
<summary>
WeightWatcher has also been featured at local meetups and on many popular podcasts:
</summary>
#### Popular Podcasts and Blogs
- [This Week in ML](https://twimlai.com/meetups/implicit-self-regularization-in-deep-neural-networks/)
- [Data Science at Home Podcast](https://podcast.datascienceathome.com/e/episode-70-validate-neural-networks-without-data-with-dr-charles-martin/)
- [Aggregate Intellect VLog](https://aisc.ai.science/events/2019-11-06)
- [Rebellion Research VLog](https://blog.rebellionresearch.com/blog/theoretical-physicist-dr-charles-martin-on-deep-learning)
- [Rebellion Research Blog](https://www.rebellionresearch.com/why-does-deep-learning-work)
- [LightOn AI Meetup](https://www.youtube.com/watch?v=tciq7t3rj98)
- [The Silicon Valley ACM meetup](https://www.youtube.com/watch?v=Tnafo6JVoJs)
- [Applied AI Community](https://www.youtube.com/watch?v=xLZOf2IDLkc&feature=youtu.be)
- [Practical AI](https://changelog.com/practicalai/194)
- [Latest Results](https://www.youtube.com/watch?v=rojbXvK9mJg)
#### 2021 Short Presentations
- [MLC Research Jam March 2021](presentations/ww_5min_talk.pdf)
- [PyTorch2021 Poster April 2021](presentations/pytorch2021_poster.pdf)
#### Recent talk(s) by Mike Mahoney, UC Berkeley
- [IARAI, the Institute for Advanced Research in Artificial Intelligence](https://www.youtube.com/watch?v=Pirni67ZmRQ)
</details>
<hr>
## Contributors
[Charles H Martin, PhD](https://www.linkedin.com/in/charlesmartin14)
[Calculation Consulting](https://calculationconsulting.com)
[Serena Peng](https://www.linkedin.com/in/serenapeng)
[Christopher Hinrichs](https://www.linkedin.com/in/chris-hinrichs-203a222b/)
<hr>
#### Consulting Practice
[Calculation Consulting homepage](https://calculationconsulting.com)
[Calculated Content Blog](https://calculatedcontent.com)
| text/markdown | Calculation Consulting | info@calculationconsulting.com | Calculation Consulting | info@calculationconsulting.com | Apache License, Version 2.0 | Deep Learning Keras Tensorflow pytorch Deep Learning DNN Neural Networks | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Programming Language :: Python",
"Programming Language :: Python :: 3.7"
] | [] | https://calculationconsulting.com/ | null | >=3.3 | [] | [] | [] | [
"numpy",
"pandas",
"matplotlib",
"matplotlib-inline",
"powerlaw",
"scikit-learn",
"safetensors",
"tqdm"
] | [] | [] | [] | [
"Documentation, https://calculationconsulting.com/",
"Code, https://github.com/calculatedcontent/weightwatcher",
"Issue tracker, https://github.com/calculatedcontent/weightwatcher/issues"
] | twine/6.2.0 CPython/3.10.18 | 2026-02-21T02:12:26.710461 | weightwatcher-0.7.7-py3-none-any.whl | 83,738 | 89/25/a402a06fd912205f53a2c211513e2616d086d28bcfe9c60e2ee7abbb44f2/weightwatcher-0.7.7-py3-none-any.whl | py3 | bdist_wheel | null | false | f6d46269a57110e7d3feade606542ac3 | cdc0ec3b3026af1fa2e1f8363fdc9b009ead5bd5f6a7a90da474c2d5a899a217 | 8925a402a06fd912205f53a2c211513e2616d086d28bcfe9c60e2ee7abbb44f2 | null | [
"LICENSE.txt"
] | 107 |
2.4 | databricks-openai | 0.12.1 | Support for Databricks AI support with OpenAI | # Databricks OpenAI Integration
The `databricks-openai` package provides seamless integration of Databricks AI features into OpenAI applications.
## Installation
### From PyPI
```sh
pip install databricks-openai
```
### From Source
```sh
pip install git+https://git@github.com/databricks/databricks-ai-bridge.git#subdirectory=integrations/openai
```
## Key Features
- **Vector Search:** Store and query vector representations using `VectorSearchRetrieverTool`.
## Getting Started
### Use Vector Search on Databricks
```python
import json

from databricks_openai import VectorSearchRetrieverTool
from openai import OpenAI

client = OpenAI()

# Step 1: call the model with a VectorSearchRetrieverTool defined
dbvs_tool = VectorSearchRetrieverTool(index_name="catalog.schema.my_index_name")
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "user",
"content": "Using the Databricks documentation, answer what is Spark?"
}
]
first_response = client.chat.completions.create(
model="gpt-4o",
messages=messages,
tools=[dbvs_tool.tool]
)
# Step 2: Execute function code – parse the model's response and handle function calls.
tool_call = first_response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = dbvs_tool.execute(query=args["query"]) # For self-managed embeddings, optionally pass in openai_client=client
# Step 3: Supply model with results – so it can incorporate them into its final response.
messages.append(first_response.choices[0].message)
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": json.dumps(result)
})
second_response = client.chat.completions.create(
model="gpt-4o",
messages=messages,
    tools=[dbvs_tool.tool]
)
```
---
## Contribution Guide
We welcome contributions! Please see our [contribution guidelines](https://github.com/databricks/databricks-ai-bridge/tree/main/integrations/langchain) for details.
## License
This project is licensed under the [MIT License](LICENSE).
Thank you for using Databricks OpenAI!
| text/markdown | null | Databricks <agent-feedback@databricks.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"databricks-ai-bridge>=0.14.1",
"databricks-mcp>=0.4.0",
"databricks-vectorsearch>=0.50",
"mlflow>=2.20.1",
"openai-agents>=0.5.0",
"openai>=1.99.9",
"pydantic>2.10.0",
"unitycatalog-openai[databricks]>=0.2.0",
"databricks-ai-bridge[memory]>=0.14.1; extra == \"memory\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:11:41.709659 | databricks_openai-0.12.1.tar.gz | 15,583 | f8/b9/0f57fbf21016e5e5d4bdcd4811efa2ee0c0dde0ea23548ae73f22ff35f2c/databricks_openai-0.12.1.tar.gz | source | sdist | null | false | 21886accda3390548f7279f6ec1794b8 | f2e6c9baa103efe63b443b464170fed68ca8de4cb9553e1a2088a968295170f8 | f8b90f57fbf21016e5e5d4bdcd4811efa2ee0c0dde0ea23548ae73f22ff35f2c | null | [] | 15,670 |
2.4 | multigen | 0.1.115 | Python-to-Many code translation | # MultiGen: Multi-Language Code Generator
MultiGen is a Python-to-multiple-languages code generator that translates Python code to C, C++, Rust, Go, Haskell, OCaml, and LLVM IR while preserving semantics and performance characteristics.
## Overview
MultiGen extends the CGen (Python-to-C) project into a multi-language translation system with enhanced runtime libraries, code generation, and a clean backend architecture.
## Key Features
- **Multi-Language Support**: Generate code for C, C++, Rust, Go, Haskell, OCaml, and LLVM IR
- **Universal Preference System**: Customize code generation for each backend with language-specific preferences
- **Advanced Python Support**: Object-oriented programming, comprehensions, string methods, augmented assignment
- **Modern Libraries**: C++ STL, Rust standard library, Go standard library, Haskell containers, OCaml standard library
- **Clean Architecture**: Extensible backend system with abstract interfaces for adding new target languages
- **Type-Safe Generation**: Leverages Python type annotations for accurate and safe code translation
- **Runtime Libraries**: Enhanced C backend with 50KB+ runtime libraries providing Python-like semantics
- **CLI Interface**: Simple command-line tool with preference customization for conversion and building
- **Production-Ready**: 1183 passing tests ensuring translation accuracy and code quality
- **LLVM Backend**: Native compilation via LLVM IR with O0-O3 optimization levels
## Supported Languages
| Language | Status | Extension | Build System | Advanced Features | Benchmarks |
|----------|-------------|-----------|-------------------|-------------------|------------|
| C | Production | `.c` | Makefile / gcc | OOP, STC containers, string methods, comprehensions | 7/7 (100%) |
| C++ | Production | `.cpp` | Makefile / g++ | OOP, STL containers, string methods, comprehensions | 7/7 (100%) |
| Rust | Production | `.rs` | Cargo / rustc | OOP, ownership-aware, string methods, comprehensions | 7/7 (100%) |
| Go | Production | `.go` | go.mod / go build | OOP, defer pattern, string methods, comprehensions | 7/7 (100%) |
| Haskell | Production | `.hs` | Cabal / ghc | Pure functional, comprehensions, type safety | 7/7 (100%) |
| OCaml | Production | `.ml` | dune / ocamlc | Functional, pattern matching, mutable refs | 7/7 (100%) |
| LLVM | Production | `.ll` | llvmlite / clang | Native compilation, O0-O3 optimization, multi-platform | 7/7 (100%) |
## Benchmark Results
```sh
% make benchmark # ran on m1 macbook air
================================================================================
BENCHMARK SUMMARY
================================================================================
Total: 7 benchmarks × 7 backends = 49 runs
Success: 49 | Failed: 0
Backend Success Compile (s) Run (s) Binary (KB) LOC
--------------------------------------------------------------------------------
c 7/7 0.390 0.275189 94.9 76
cpp 7/7 0.435 0.251988 36.1 51
go 7/7 0.190 0.265097 2365.4 38
haskell 7/7 0.156 0.024035 19944.6 65
llvm 7/7 0.310 0.251354 49.0 321
ocaml 7/7 0.234 0.271373 826.3 27
rust 7/7 0.266 0.250707 443.0 37
===============================================================================
```
## Quick Start
### Installation
Install from PyPI:
```sh
pip install multigen
```
Install from source
```bash
git clone https://github.com/shakfu/multigen
cd multigen
pip install -e .
```
### Optional Dependencies
MultiGen has zero required dependencies for core functionality (C, C++, Rust, Go, Haskell, OCaml backends). Optional features can be installed as needed:
```bash
# LLVM backend support (native compilation, WebAssembly)
pip install multigen[llvm]
# Z3 theorem prover (formal verification)
pip install multigen[z3]
# All optional dependencies
pip install multigen[all]
```
### Basic Usage
```bash
# List available backends
multigen backends
# Convert Python to C (with advanced features)
multigen --target c convert my_script.py
# Convert Python to C++ (with STL support)
multigen --target cpp convert my_script.py
# Convert Python to Rust with build
multigen --target rust build my_script.py
# Convert Python to Go (with enhanced features)
multigen --target go convert my_script.py
# Convert Python to Haskell (with functional programming features)
multigen --target haskell convert my_script.py
# Convert Python to OCaml (with functional programming and pattern matching)
multigen --target ocaml convert my_script.py
# Batch convert all Python files
multigen --target cpp batch --source-dir ./examples
```
### Backend Preferences
Customize code generation for each target language with the `--prefer` flag:
```bash
# Haskell with native comprehensions (idiomatic)
multigen --target haskell convert my_script.py --prefer use_native_comprehensions=true
# C with custom settings
multigen --target c convert my_script.py --prefer use_stc_containers=false --prefer indent_size=2
# C++ with modern features
multigen --target cpp convert my_script.py --prefer cpp_standard=c++20 --prefer use_modern_cpp=true
# Rust with specific edition
multigen --target rust convert my_script.py --prefer rust_edition=2018 --prefer clone_strategy=explicit
# Go with version targeting
multigen --target go convert my_script.py --prefer go_version=1.19 --prefer use_generics=false
# OCaml with functional programming preferences
multigen --target ocaml convert my_script.py --prefer prefer_immutable=true --prefer use_pattern_matching=true
# Multiple preferences
multigen --target haskell build my_script.py \
--prefer use_native_comprehensions=true \
--prefer camel_case_conversion=false \
--prefer strict_data_types=true
```
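Internally, each `--prefer key=value` pair has to be folded into a typed preferences dict. A minimal sketch of that folding (illustrative only, not MultiGen's actual flag parser):

```python
def parse_prefs(pairs: list[str]) -> dict:
    """Fold repeated --prefer key=value flags into a dict,
    coercing booleans and integers (illustrative sketch)."""
    prefs = {}
    for pair in pairs:
        key, _, raw = pair.partition("=")
        if raw in ("true", "false"):
            prefs[key] = raw == "true"
        elif raw.isdigit():
            prefs[key] = int(raw)
        else:
            prefs[key] = raw  # leave values like "c++20" as strings
    return prefs

print(parse_prefs(["use_native_comprehensions=true", "indent_size=2", "cpp_standard=c++20"]))
# → {'use_native_comprehensions': True, 'indent_size': 2, 'cpp_standard': 'c++20'}
```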
## Preference System
MultiGen features a preference system that allows you to choose between **cross-language consistency** (default) and **language-specific idiomatic optimizations**.
### Design Philosophy
- **Default (Consistent)**: Uses runtime library functions for predictable behavior across all languages
- **Idiomatic (Optimized)**: Uses native language features for better performance and familiarity
### Available Preference Categories
| Backend | Key Preferences | Description |
|---------|-----------------|-------------|
| **Haskell** | `use_native_comprehensions`, `camel_case_conversion`, `strict_data_types` | Native vs runtime comprehensions, naming, type system |
| **C** | `use_stc_containers`, `brace_style`, `indent_size` | Container choice, code style, memory management |
| **C++** | `cpp_standard`, `use_modern_cpp`, `use_stl_containers` | Language standard, modern features, STL usage |
| **Rust** | `rust_edition`, `clone_strategy`, `use_iterators` | Edition targeting, ownership patterns, functional style |
| **Go** | `go_version`, `use_generics`, `naming_convention` | Version compatibility, language features, Go idioms |
| **OCaml** | `prefer_immutable`, `use_pattern_matching`, `curried_functions` | Functional style, pattern matching, function curry style |
### Example: Haskell Comprehensions
**Python Source:**
```python
def filter_numbers(numbers):
return [x * 2 for x in numbers if x > 5]
```
**Default (Runtime Consistency):**
```haskell
filterNumbers numbers = listComprehensionWithFilter numbers (\x -> x > 5) (\x -> x * 2)
```
**Native (Idiomatic Haskell):**
```haskell
filterNumbers numbers = [x * 2 | x <- numbers, x > 5]
```
### Example: OCaml Functional Programming
**Python Source:**
```python
def process_items(items):
return [item.upper() for item in items if len(item) > 3]
```
**Default (Runtime Consistency):**
```ocaml
let process_items items =
list_comprehension_with_filter items (fun item -> len item > 3) (fun item -> upper item)
```
**Functional (Idiomatic OCaml):**
```ocaml
let process_items items =
List.filter (fun item -> String.length item > 3) items
|> List.map String.uppercase_ascii
```
For complete preference documentation, see [PREFERENCES.md](PREFERENCES.md).
## Examples
### Simple Functions
**Python Input:**
```python
def add(x: int, y: int) -> int:
return x + y
def main() -> None:
result = add(5, 3)
print(result)
```
**Generated C++:**
```cpp
#include <iostream>
#include <vector>
#include <unordered_map>
#include "runtime/multigen_cpp_runtime.hpp"
using namespace std;
using namespace multigen;
int add(int x, int y) {
return (x + y);
}
void main() {
int result = add(5, 3);
cout << result << endl;
}
```
**Generated C:**
```c
#include <stdio.h>
#include "multigen_runtime.h"
int add(int x, int y) {
return (x + y);
}
void main() {
int result = add(5, 3);
printf("%d\n", result);
}
```
**Generated Go:**
```go
package main
import "multigen"
func add(x int, y int) int {
return (x + y)
}
func main() {
result := add(5, 3)
multigen.Print(result)
}
```
**Generated Rust:**
```rust
// Include MultiGen Rust runtime
mod multigen_rust_runtime;
use multigen_rust_runtime::*;
fn add(x: i32, y: i32) -> i32 {
(x + y)
}
fn main() {
let mut result = add(5, 3);
print_value(result);
}
```
**Generated Haskell:**
```haskell
module Main where
import MultiGenRuntime
import qualified Data.Map as Map
import qualified Data.Set as Set
import Data.Map (Map)
import Data.Set (Set)
add :: Int -> Int -> Int
add x y = (x + y)
main :: IO ()
main = printValue (add 5 3)
```
**Generated OCaml:**
```ocaml
(* Generated OCaml code from Python *)
open Mgen_runtime
let add x y =
(x + y)
let main () =
let result = add 5 3 in
print_value result
let () = print_value "Generated OCaml code executed successfully"
```
### Advanced Features (Object-Oriented Programming)
**Python Input:**
```python
class Calculator:
def __init__(self, name: str):
self.name: str = name
self.total: int = 0
def add(self, value: int) -> None:
self.total += value
def get_result(self) -> str:
return self.name.upper() + ": " + str(self.total)
def process() -> list:
calc = Calculator("math")
calc.add(10)
return [calc.get_result() for _ in range(2)]
```
**Generated C++:**
```cpp
#include <iostream>
#include <string>
#include <vector>
#include "runtime/multigen_cpp_runtime.hpp"
using namespace std;
using namespace multigen;
class Calculator {
public:
std::string name;
int total;
Calculator(std::string name) {
this->name = name;
this->total = 0;
}
void add(int value) {
this->total += value;
}
std::string get_result() {
return (StringOps::upper(this->name) + (": " + to_string(this->total)));
}
};
std::vector<std::string> process() {
Calculator calc("math");
calc.add(10);
return list_comprehension(Range(2), [&](auto _) {
return calc.get_result();
});
}
```
**Generated Go:**
```go
package main
import "multigen"
type Calculator struct {
Name string
Total int
}
func NewCalculator(name string) Calculator {
obj := Calculator{}
obj.Name = name
obj.Total = 0
return obj
}
func (obj *Calculator) Add(value int) {
obj.Total += value
}
func (obj *Calculator) GetResult() string {
return (multigen.StrOps.Upper(obj.Name) + (": " + multigen.ToStr(obj.Total)))
}
func process() []interface{} {
calc := NewCalculator("math")
calc.Add(10)
return multigen.Comprehensions.ListComprehension(multigen.NewRange(2), func(item interface{}) interface{} {
_ := item.(int)
return calc.GetResult()
})
}
```
**Generated Rust:**
```rust
use std::collections::{HashMap, HashSet};
// Include MultiGen Rust runtime
mod multigen_rust_runtime;
use multigen_rust_runtime::*;
#[derive(Clone)]
struct Calculator {
name: String,
total: i32,
}
impl Calculator {
fn new(name: String) -> Self {
Calculator {
name: name,
total: 0,
}
}
fn add(&mut self, value: i32) {
self.total += value;
}
fn get_result(&mut self) -> String {
((StrOps::upper(&self.name) + ": ".to_string()) + to_string(self.total))
}
}
fn process() -> Vec<String> {
let mut calc = Calculator::new("math".to_string());
calc.add(10);
Comprehensions::list_comprehension(new_range(2).collect(), |_| calc.get_result())
}
```
**Generated Haskell:**
```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE FlexibleInstances #-}
module Main where
import MultiGenRuntime
import qualified Data.Map as Map
import qualified Data.Set as Set
import Data.Map (Map)
import Data.Set (Set)
data Calculator = Calculator
{ name :: String
, total :: Int
} deriving (Show, Eq)
newCalculator :: String -> Calculator
newCalculator name = Calculator { name = name, total = 0 }
add :: Calculator -> Int -> ()
add obj value = () -- Haskell immutable approach
getResult :: Calculator -> String
getResult obj = (upper (name obj)) + ": " + (toString (total obj))
process :: [String]
process =
let calc = newCalculator "math"
in listComprehension (rangeList (range 2)) (\_ -> getResult calc)
```
**Generated OCaml:**
```ocaml
(* Generated OCaml code from Python *)
open Mgen_runtime
type calculator = {
name : string;
total : int;
}
let create_calculator name =
{
name = name;
total = 0;
}
let calculator_add (calculator_obj : calculator) value =
(* Functional update creating new record *)
{ calculator_obj with total = calculator_obj.total + value }
let calculator_get_result (calculator_obj : calculator) =
(calculator_obj.name ^ ": " ^ string_of_int calculator_obj.total)
let process () =
let calc = create_calculator "math" in
let updated_calc = calculator_add calc 10 in
list_comprehension (range_list (range 2)) (fun _ -> calculator_get_result updated_calc)
```
## Architecture
MultiGen follows a clean, extensible architecture with well-defined components:
### 7-Phase Translation Pipeline
1. **Validation**: Verify Python source compatibility
2. **Analysis**: Analyze code structure and dependencies
3. **Python Optimization**: Apply Python-level optimizations
4. **Mapping**: Map Python constructs to target language equivalents
5. **Target Optimization**: Apply target language-specific optimizations
6. **Generation**: Generate target language code
7. **Build**: Compile/build using target language toolchain
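The seven phases above can be wired together as a simple function pipeline. The sketch below uses stand-in phase bodies to show the shape of the data flow; it is not MultiGen's real implementation:

```python
from typing import Callable

# Stand-in phases: each takes the working representation (a plain dict
# here) and returns the next one. Real phases transform an AST/IR.
def validate(ir):
    ir["validated"] = True
    return ir

def analyze(ir):
    ir["deps"] = []
    return ir

def optimize_python(ir):
    return ir

def map_constructs(ir):
    ir["target_ast"] = ir["source"]
    return ir

def optimize_target(ir):
    return ir

def generate(ir):
    ir["code"] = "// from: " + ir["source"]
    return ir

def build(ir):
    ir["built"] = True
    return ir

PHASES: list[Callable[[dict], dict]] = [
    validate, analyze, optimize_python, map_constructs,
    optimize_target, generate, build,
]

def run_pipeline(source: str) -> dict:
    ir = {"source": source}
    for phase in PHASES:
        ir = phase(ir)
    return ir

result = run_pipeline("def f(): pass")
print(result["built"], result["code"])  # → True // from: def f(): pass
```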
### Frontend (Language-Agnostic)
- **Type Inference**: Analyzes Python type annotations and infers types
- **Static Analysis**: Validates code compatibility and detects unsupported features
- **AST Processing**: Parses and transforms Python abstract syntax tree
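The type information the frontend works from comes out of Python's own AST. A small standard-library illustration of reading the annotations that such a frontend consumes (not MultiGen's actual frontend code):

```python
import ast

source = "def add(x: int, y: int) -> int:\n    return x + y\n"
fn = ast.parse(source).body[0]

# Collect parameter and return annotations exactly as written in the source.
params = {arg.arg: ast.unparse(arg.annotation) for arg in fn.args.args}
returns = ast.unparse(fn.returns)
print(params, returns)  # → {'x': 'int', 'y': 'int'} int
```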
### Backends (Language-Specific)
Each backend implements abstract interfaces:
- **AbstractEmitter**: Code generation for target language
- **AbstractFactory**: Factory for backend components
- **AbstractBuilder**: Build system integration
- **AbstractContainerSystem**: Container and collection handling
### Runtime Libraries (C Backend)
- **Error Handling** (`multigen_error_handling.h/.c`): Python-like exception system
- **Memory Management** (`multigen_memory_ops.h/.c`): Safe allocation and cleanup
- **Python Operations** (`multigen_python_ops.h/.c`): Python built-ins and semantics
- **String Operations** (`multigen_string_ops.h/.c`): String methods with memory safety
- **STC Integration** (`multigen_stc_bridge.h/.c`): Smart Template Container bridge
## CLI Commands
### Convert
Convert Python files to target language:
```bash
multigen --target <language> convert <input.py>
multigen --target rust convert example.py
```
### Build
Convert and compile/build the result:
```bash
multigen --target <language> build <input.py>
multigen --target go build --makefile example.py # Generate build file
multigen --target c build example.py # Direct compilation
```
### Batch
Process multiple files:
```bash
multigen --target <language> batch --source-dir <dir>
multigen --target rust batch --source-dir ./src --build
```
### Backends
List available language backends:
```bash
multigen backends
```
### Clean
Clean build artifacts:
```bash
multigen clean
```
## Development
### Running Tests
```bash
make test # Run all 1183 tests
make lint # Run code linting with ruff
make type-check # Run type checking with mypy
```
### Test Organization
MultiGen maintains a test suite organized into focused modules:
- `test_backend_c_*.py`: C backend tests (191 tests total)
- Core functionality, OOP, comprehensions, string methods, runtime libraries
- `test_backend_cpp_*.py`: C++ backend tests (104 tests)
- STL integration, modern C++ features, OOP support
- `test_backend_rust_*.py`: Rust backend tests (176 tests)
- Ownership patterns, memory safety, standard library
- `test_backend_go_*.py`: Go backend tests (95 tests)
- Go idioms, standard library, concurrency patterns
- `test_backend_haskell_*.py`: Haskell backend tests (93 tests)
- Functional programming, type safety, comprehensions
- `test_backend_ocaml_*.py`: OCaml backend tests (51 tests)
- Functional programming, pattern matching, immutability
- `test_backend_llvm_*.py`: LLVM backend tests (130 tests)
- Native compilation, optimization levels, IR generation
### Adding New Backends
To add support for a new target language:
1. Create backend directory: `src/multigen/backends/mylang/`
2. Implement required abstract interfaces:
- `MyLangBackend(LanguageBackend)`: Main backend class
- `MyLangFactory(AbstractFactory)`: Component factory
- `MyLangEmitter(AbstractEmitter)`: Code generation
- `MyLangBuilder(AbstractBuilder)`: Build system integration
- `MyLangContainerSystem(AbstractContainerSystem)`: Container handling
- `MyLangPreferences(BasePreferences)`: Language-specific preferences
3. Register backend in `src/multigen/backends/registry.py`
4. Add tests in `tests/test_backend_mylang_*.py`
5. Update documentation
See existing backends (C, C++, Rust, Go, Haskell, OCaml, LLVM) for implementation examples.
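A minimal sketch of what step 2 looks like with `abc`; the method names and registry shape here are illustrative stand-ins, not MultiGen's real interface signatures:

```python
from abc import ABC, abstractmethod

class AbstractEmitter(ABC):
    """Code generation interface each backend implements (sketch)."""
    @abstractmethod
    def emit_function(self, name: str, params: list[str], body: str) -> str: ...

class MyLangEmitter(AbstractEmitter):
    def emit_function(self, name, params, body):
        # Emit a function in the hypothetical target language's syntax.
        return f"func {name}({', '.join(params)}) {{ {body} }}"

# A registry mapping target names to backend components (sketch of step 3).
REGISTRY = {"mylang": MyLangEmitter}

emitter = REGISTRY["mylang"]()
print(emitter.emit_function("add", ["x", "y"], "return x + y"))
# → func add(x, y) { return x + y }
```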
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## License
MIT License - see LICENSE file for details.
## Advanced Features
### Supported Python Features
All backends support core Python features:
- **Object-Oriented Programming**: Classes, methods, constructors, instance variables, method calls
- **Augmented Assignment**: All operators (`+=`, `-=`, `*=`, `/=`, `//=`, `%=`, `|=`, `^=`, `&=`, `<<=`, `>>=`)
- **String Operations**: `upper()`, `lower()`, `strip()`, `find()`, `replace()`, `split()`
- **Comprehensions**: List, dict, and set comprehensions with range iteration and conditional filtering
- **Control Structures**: if/elif/else, while loops, for loops with range()
- **Built-in Functions**: `abs()`, `bool()`, `len()`, `min()`, `max()`, `sum()`
- **Type Inference**: Automatic type detection from annotations and assignments
### Container Support by Language
- **C**: STC (Smart Template Container) library with optimized C containers (864KB integrated library)
- **C++**: STL containers (`std::vector`, `std::unordered_map`, `std::unordered_set`)
- **Rust**: Standard library collections (`Vec`, `HashMap`, `HashSet`) with memory safety
- **Go**: Standard library containers with idiomatic Go patterns
- **Haskell**: Standard library containers with type-safe functional operations
- **OCaml**: Standard library with immutable data structures and pattern matching
### Test Coverage
MultiGen maintains test coverage ensuring translation accuracy:
- **1183 total tests** across all components and backends
- **49/49 benchmarks passing** (100%) across all 7 backends
- Comprehensive backend coverage testing all major Python features
- Test categories: basics, OOP, comprehensions, string methods, augmented assignment, control flow, integration
- All tests passing with zero regressions (100%)
## Development Roadmap
### Completed Milestones
- Multi-language backend system with C, C++, Rust, Go, Haskell, and OCaml support
- Advanced C runtime integration with 50KB+ of runtime libraries
- Sophisticated Python-to-C conversion with complete function and control flow support
- Object-oriented programming support across all backends
- Advanced Python language features: comprehensions, string methods, augmented assignment
- Complete STC library integration (864KB Smart Template Container library)
- Architecture consolidation with unified C backend module
- Professional test organization with 821 tests in focused, single-responsibility files
- Universal preference system with language-specific customization
- Production-ready code generation with clean, efficient output
- 7 production-ready backends (C++, C, Rust, Go, Haskell, OCaml, LLVM) with 100% benchmark success
- Exception handling (try/except/raise) across all backends
- Context managers (with statement) across all backends
### Future Development
- **Advanced Frontend Analysis**: Integrate optimization detection and static analysis engine
- **STC Performance Optimization**: Container specialization and memory layout optimization
- **Formal Verification**: Theorem proving and memory safety proofs integration
- **Cross-Language Runtime**: Extend runtime concepts to other backends (C++, Rust, Go)
- **Performance Benchmarking**: Comprehensive performance analysis across all target languages
- **IDE Integration**: Language server protocol support for MultiGen syntax
- **Web Interface**: Online code conversion tool
- **Plugin System**: External backend support and extensibility
| text/markdown | Shakeeb Alireza | Shakeeb Alireza <shakfu@users.noreply.github.com> | null | null | null | code-generation, compiler, metaprogramming | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Compilers"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"llvmlite>=0.43.0; extra == \"all\"",
"z3-solver>=4.13.0; extra == \"all\"",
"llvmlite>=0.43.0; extra == \"llvm\"",
"z3-solver>=4.13.0; extra == \"verification\"",
"z3-solver>=4.13.0; extra == \"z3\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/shakfu/multigen/issues",
"Documentation, https://multigen.readthedocs.io",
"Homepage, https://github.com/shakfu/multigen",
"Repository, https://github.com/shakfu/multigen"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-21T02:11:29.264944 | multigen-0.1.115.tar.gz | 804,677 | 55/96/f4b8f4c6811a6efe8872c243f33e33637209892e3ef7233dc900d7a7e9bc/multigen-0.1.115.tar.gz | source | sdist | null | false | 3c0aeb166a4f4ceb0348aacb42e495b9 | 14c14ace1246bc75e74b78a9495983bbe3ecc78ef464e208ee560b10d4bb95a9 | 5596f4b8f4c6811a6efe8872c243f33e33637209892e3ef7233dc900d7a7e9bc | MIT | [
"LICENSE"
] | 229 |
2.4 | arglink | 1.1.0 | Linking an argument parser and a callable. | # The "arglink" Package
## Introduction
The `arglink` package provides tools that link the arguments of a parser and a callable.
The core functions are:
- Setup a parser based on arguments of a callable
- Build a kwargs dict to call the callable using the parsing results
## Preparation
To use `arglink`, in the definition of the callable,
each argument should either
- be assigned to a default value other than `None`, or
- have a type annotation.
Only `int`, `float`, `bool`, and `str` are supported.
Users should use `setup_arglink` to decorate the callable.
See the end of this documentation for an example.
`setup_arglink` accepts two optional parameters.
- help_messages
- Help messages for parameters.
- Keys are names of arguments and values are messages.
- ignore_patterns
- A list of regular expression patterns.
- Arguments matching any of those patterns will be ignored.
  - By default, `self`, `cls`, and any argument whose name ends with `_` are ignored.
## Usage
The core functions are:
- `setup_arglink`: It attaches attributes required by this package and analyzes the callable.
See the following example.
- `callable_args_to_parser_args`: Add the arguments of a callable to a parser.
- `parser_args_to_callable_kw_dict`: Build the kwargs dict for calling the callable from the parsing results.
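The general idea behind `callable_args_to_parser_args` can be sketched with the standard library's `inspect` and `argparse` modules. This is an illustration of the mechanism only, not `arglink`'s actual implementation (bool toggles, help messages, and argument grouping are omitted):

```python
import argparse
import inspect

def add_callable_args(fn, parser):
    # Walk the signature: defaulted arguments become optional flags typed
    # by their default; annotated arguments without a default are required.
    for name, p in inspect.signature(fn).parameters.items():
        if name in ("self", "cls") or name.endswith("_"):
            continue  # mirrors arglink's default ignore rules
        flag = "--" + name.replace("_", "-")
        if p.default is not inspect.Parameter.empty and p.default is not None:
            parser.add_argument(flag, type=type(p.default), default=p.default)
        elif p.annotation in (int, float, str):
            parser.add_argument(flag, type=p.annotation, required=True)
        # (bool arguments need toggle flags, as arglink's --*-toggle
        #  options show; omitted here to keep the sketch short)

def target(skip_, var_1: int, var_a=1, var_c="c"):
    pass

parser = argparse.ArgumentParser()
add_callable_args(target, parser)
ns = parser.parse_args(["--var-1", "3"])
print(ns.var_1, ns.var_a, ns.var_c)  # → 3 1 c
```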
## Example
Import methods from `arglink`:
```python
from arglink.core import setup_arglink, callable_args_to_parser_args
```
Decorate the target callable and prepare the parser:
```python
>>> import argparse
>>> import typing
>>> class TargetClass:
... @setup_arglink(
... help_messages={
... 'var_1': 'help message for var_1',
... 'var_a': 'help message for var_a',
... 'var_f': 'help message for var_f'
... }
... )
... def __init__(
... self,
... var_to_skip_1_: list,
... var_to_skip_2_: list,
... var_1: int,
... var_2: float,
... var_3: str,
... var_a=1,
... var_b=1.1,
... var_c='',
... var_d1: int | None = None,
... var_d2: typing.Optional[int] = None,
... var_e=True,
... var_f=False,
... var_to_skip_3_=''
... ):
... pass
>>> parser = argparse.ArgumentParser()
>>> callable_args_to_parser_args(obj=TargetClass.__init__, parser=parser)
>>> parser.print_help() # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
```
The parser looks like:
```
usage: ... [-h] --var-1 INT --var-2 FLOAT --var-3 STR
[--var-a INT] [--var-b FLOAT] [--var-c STR]
[--var-d1 INT] [--var-d2 INT] [--var-e-toggle]
[--var-f-toggle]
<BLANKLINE>
options:
-h, --help show this help message and exit
<BLANKLINE>
arguments for "__main__.TargetClass.__init__":
--var-1 INT help message for var_1
--var-2 FLOAT
--var-3 STR
--var-a INT (default: 1) help message for var_a
--var-b FLOAT (default: 1.1)
--var-c STR (default: '')
--var-d1 INT (default: None)
--var-d2 INT (default: None)
--var-e-toggle (default: True)
--var-f-toggle (default: False) help message for var_f
```
| text/markdown | Jiyuan Yang | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/ygjiyn/arglink_project"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T02:11:10.359026 | arglink-1.1.0.tar.gz | 5,878 | b1/9d/b8730f9d16f56492406400d995ca1bdc771cd92c8c720135c89395f2022e/arglink-1.1.0.tar.gz | source | sdist | null | false | e57e253a144141e9c947c8f149607ff5 | c5d361832270ad3d298da83373c61b7ae1de4c74653990dcb5033f3678fbe6ab | b19db8730f9d16f56492406400d995ca1bdc771cd92c8c720135c89395f2022e | MIT | [
"LICENSE"
] | 242 |
2.4 | songbirdcore | 0.1.7 | core low level api for songbird | # songbirdcore 🐦
Low-level package with common code used across songbird's
cli and api.
See:
- [songbirdcli](https://github.com/cboin1996/songbird.git)
- [songbirdapi](https://github.com/cboin1996/songbirdapi.git)
## Documentation
`songbirdcore`'s documentation may be found [here](https://cboin1996.github.io/songbirdcore)
## Requirements
- Python version >= 3.11
## Installation
To install, run
```bash
pip install songbirdcore
```
To install the latest development version from `test-pypi`
run
```bash
python3 -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ songbirdcore
```
## Development
Once you have cloned the repository, run
```bash
export ENV=dev
make setup
source venv/bin/activate
make requirements
```
This configures a virtual environment locally.
You can then run tests by performing the steps below.
### Updating Requirements
Updating the requirements for this package may be done
through
```bash
make update-requirements
make requirements
```
## Tests
Configure VS Code's test discovery by creating a `.vscode/settings.json`
file with the following contents:
```json
{
    "python.testing.pytestArgs": [
        "tests"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true
}
| text/markdown | Christian Boin | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"annotated-types==0.7.0",
"beautifulsoup4==4.12.3",
"brotli==1.2.0",
"certifi==2024.12.14",
"cffi==2.0.0",
"charset-normalizer==3.4.0",
"cryptography==46.0.5",
"cssselect==1.2.0",
"deprecation==2.1.0",
"eyeD3==0.9.9",
"fake-useragent==2.0.3",
"filetype==1.2.0",
"google-api-core==2.30.0",
"google-api-python-client==2.190.0",
"google-auth==2.48.0",
"google-auth-httplib2==0.3.0",
"google-auth-oauthlib==1.2.4",
"googleapis-common-protos==1.72.0",
"greenlet==3.1.1",
"httplib2==0.31.2",
"idna==3.10",
"lxml==5.3.0",
"lxml_html_clean==0.4.1",
"mutagen==1.47.0",
"oauthlib==3.3.1",
"packaging==26.0",
"parse==1.20.2",
"playwright==1.49.1",
"proto-plus==1.27.1",
"protobuf==6.33.5",
"pyasn1==0.6.2",
"pyasn1_modules==0.4.2",
"pycparser==3.0",
"pycryptodomex==3.23.0",
"pydantic==2.11.10",
"pydantic-settings==2.13.1",
"pydantic_core==2.33.2",
"pyee==12.0.0",
"pyparsing==3.3.2",
"pyquery==2.0.1",
"python-dotenv==1.2.1",
"requests==2.32.3",
"requests-htmlc==0.0.8",
"requests-oauthlib==2.0.0",
"rsa==4.9.1",
"soupsieve==2.6",
"typing-inspection==0.4.2",
"typing_extensions==4.12.2",
"uritemplate==4.2.0",
"urllib3==2.2.3",
"w3lib==2.2.1",
"websockets==16.0",
"yt-dlp==2026.2.4",
"yt-dlp-ejs==0.4.0",
"black; extra == \"dev\"",
"click; extra == \"dev\"",
"isort; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"mkdocs; extra == \"dev\"",
"mkdocs-material; extra == \"dev\"",
"mike; extra == \"dev\"",
"mkdocstrings-python; extra == \"dev\"",
"build; extra == \"package\"",
"twine; extra == \"package\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:11:06.598341 | songbirdcore-0.1.7.tar.gz | 318,439 | c6/15/7a846e77771c10406e17ed5b9edbaa5a69dfe4cfc0dae712bc3bd237f84c/songbirdcore-0.1.7.tar.gz | source | sdist | null | false | f5736b0cdef7076be9006a7cd68ecfb5 | 8f4f311136518ebbab513a9335e9272f9f3f2644052cf7fb204fa35004631e62 | c6157a846e77771c10406e17ed5b9edbaa5a69dfe4cfc0dae712bc3bd237f84c | null | [
"LICENSE"
] | 238 |
2.1 | smooth-cli | 0.4.0 | SmoothDev CLI tool for PR generation, commit message generation, and documentation management | # SmoothDev CLI
🤖 AI-powered development tools for commit messages, pull requests, release notes, and documentation generation.
[](https://pypi.org/project/smooth-cli/)
[](https://www.python.org/downloads/)
[](LICENSE)
## Overview
SmoothDev CLI provides intelligent assistance for your development workflow:
- **📝 Smart Commit Messages** - Generate conventional commit messages from staged changes
- **📋 Automated PR Documentation** - Create professional PR titles and summaries
- **📦 Release Notes Generation** - Generate comprehensive release notes from git history
- **📚 Documentation Generation** - AI-powered repository documentation creation
- **🔐 Flexible Authentication** - Auth0 JWT and API key authentication
- **⚙️ Smart Configuration** - Auto-detection reduces CLI flags by 80%
- **🤖 GitHub Actions Integration** - Automated workflows for PR and release documentation
## Quick Start
### Installation
```bash
pip install smooth-cli
```
### Authentication
```bash
# Interactive login (recommended for personal use)
smooth auth login
# API key authentication (ideal for CI/CD)
smooth auth apikey-set your_api_key_here
```
### First Use
```bash
# Generate a commit message
git add .
smooth commit generate --commit
# Generate PR documentation
smooth pr generate --pr-number 42
# Generate release notes
smooth release generate
```
## Features
### Commit Messages
Generate conventional commit messages that follow best practices:
```bash
# Generate from staged changes
smooth commit generate
# Generate and commit immediately
smooth commit generate --commit
# Associate with issue number
smooth commit generate --issue ENG-123
```
**Features:**
- Conventional Commits specification compliance
- Automatic issue number detection from branch names (e.g., `feature/ENG-123`)
- Multiple latency profiles (fast/balanced/quality)
- Issue key automatically appended to commit footer
### Pull Request Documentation
Create professional PR titles and summaries:
```bash
# Auto-detect from git context
smooth pr generate
# Specify PR number
smooth pr generate --pr-number 42
# Update PR on GitHub
smooth pr generate --pr-number 42 --push
```
**Features:**
- Smart auto-detection of owner/repo/PR number
- Structured summaries with overview, changes, testing sections
- Breaking change detection and migration guidance
- JSON output for automation
### Release Notes
Generate comprehensive release notes between tags:
```bash
# Auto-detect latest tags
smooth release generate
# Specify tag range
smooth release generate --from-tag v1.0.0 --to-tag v1.1.0
# Create GitHub Release
smooth release generate --push --tag v1.1.0
# Create draft GitHub Release
smooth release generate --push --tag v1.1.0 --draft
```
**Features:**
- Semantic versioning support
- Categorized changes (features, fixes, documentation, etc.)
- Breaking change detection with migration steps
- Contributor statistics and acknowledgments
- Draft release support for review before publishing
### Documentation Generation
AI-powered repository documentation:
```bash
# Check documentation status
smooth docs status
# Generate LITE documentation (quick overview)
smooth docs generate
# Generate FULL documentation (comprehensive)
smooth docs generate --full --write
# Validate existing documentation
smooth docs validate
```
**Features:**
- LITE and FULL generation modes
- Documentation validation and quality checks
- PR-focused documentation generation
- Integration with CI/CD workflows
## Authentication
### Auth0 Device Flow (Interactive)
For personal development and interactive use:
```bash
smooth auth login
```
This opens a browser for secure authentication and stores a JWT token.
### API Key Authentication (CI/CD)
For automated workflows and CI/CD:
```bash
# Set API key
smooth auth apikey-set your_api_key_here
# Show current key
smooth auth apikey-show
# Switch to API key mode
smooth auth mode-set api-key
```
### Authentication Modes
```bash
# Show current mode
smooth auth mode-show
# Available modes
smooth auth mode-set jwt # Auth0 (interactive)
smooth auth mode-set api-key # API key (CI/CD)
smooth auth mode-set auto # Auto-detect
```
## Configuration
The CLI uses a four-tier configuration system for smart defaults:
1. **CLI Flags** - Explicit command-line arguments
2. **Git Context** - Auto-detected from your repository
3. **Repository Config** - Team settings in `.smoothdev.json`
4. **User Config** - Personal defaults in `~/.smoothdevio/config.json`
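For illustration, a team-level `.smoothdev.json` might look like the following. The `defaults.output` key mirrors the `smooth config set defaults.output json` command shown in this README; the `owner` key is an assumption inferred from the `config init --owner` flag:

```json
{
  "defaults": {
    "output": "json"
  },
  "owner": "your-org"
}
```

Secrets such as the GitHub token belong in the user config or an environment variable rather than in a file committed to the repository.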
### Quick Setup
```bash
# Initialize user config
smooth config init --owner your-username --output text
# Show current configuration
smooth config show
# Set specific values
smooth config set defaults.output json
```
### GitHub Token
For commands that interact with GitHub:
```bash
# Set in config
smooth config set github_token ghp_your_token_here
# Or use environment variable
export GITHUB_TOKEN=ghp_your_token_here
```
## GitHub Actions Integration
Automate your workflow with our GitHub Actions:
### PR Summary Action
Generate PR titles and summaries automatically:
```yaml
- uses: smoothdev-io/pr-summary-action@v1
with:
api_key: ${{ secrets.SMOOTHDEV_API_KEY }}
push_to_pr: true
```
**Repository:** [smoothdev-io/pr-summary-action](https://github.com/smoothdev-io/pr-summary-action)
### Release Notes Action
Create comprehensive release notes on tag push:
```yaml
- uses: smoothdev-io/release-notes-action@v1
with:
api_key: ${{ secrets.SMOOTHDEV_API_KEY }}
push_to_github: true
release_tag: ${{ github.ref_name }}
```
**Repository:** [smoothdev-io/release-notes-action](https://github.com/smoothdev-io/release-notes-action)
## Command Reference
### Core Commands
- `smooth commit` - Generate commit messages
- `smooth pr` - Generate PR documentation
- `smooth release` - Generate release notes
- `smooth docs` - Generate repository documentation
- `smooth config` - Manage configuration
- `smooth auth` - Manage authentication
### Global Options
- `--help` - Show help information
- `--version` - Show version information
- `--verbose` - Enable verbose output
- `--output <format>` - Output format: `text` or `json`
## Getting Help
- **Documentation**: [docs.smoothdev.io](https://docs.smoothdev.io)
- **GitHub Issues**: [smoothdev-io/sd-smooth-cli](https://github.com/smoothdev-io/sd-smooth-cli/issues)
- **Support**: engineering@smoothdev.io
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
---
**Built with ❤️ by the SmoothDev Team** | [smoothdev.io](https://smoothdev.io)
| text/markdown | SmoothDev | info@smoothdev.io | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"boto3<2.0.0,>=1.34.0",
"click==8.1.7",
"dulwich<0.22.0,>=0.21.7",
"gitpython<4.0.0,>=3.1.40",
"pydantic<3.0.0,>=2.6.0",
"requests<3.0.0,>=2.31.0",
"typer<0.13.0,>=0.12.5"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:10:18.167978 | smooth_cli-0.4.0.tar.gz | 83,106 | 77/66/91c47ddeebc1a8226a354c3d0a22e8abc3e653853c6931ae3e7340e5b93c/smooth_cli-0.4.0.tar.gz | source | sdist | null | false | 14e9a63f2343d8be391e42d446b97162 | 28ff88ed6b151615d6ccf6c3194f05f4e4292082c1c36aafb631b4039299820f | 776691c47ddeebc1a8226a354c3d0a22e8abc3e653853c6931ae3e7340e5b93c | null | [] | 219 |
2.4 | fm-sdk | 0.0.2 | Python SDK for the Flexemarkets API | # fm-sdk-python
Python SDK for the [Flexemarkets](https://fm-data.herokuapp.com) API.
## Requirements
- Python 3.11+
## Install
```bash
python3.11 -m pip install .
```
Or install dependencies directly:
```bash
python3.11 -m pip install httpx websockets
```
## Configuration
The SDK loads credentials and endpoint from these sources (highest priority first):
1. Arguments passed to `Flexemarkets.connect()`
2. Files `~/.fm/credential` and `~/.fm/endpoint` (Java `.properties` format)
3. Environment variable `FM_API_URL` (defaults to `https://fm-data.herokuapp.com`)
### Credential file
Create `~/.fm/credential`:
```
account=myaccount
email=user@example.com
password=secret
```
Or use a bearer token:
```
token=eyJhbGciOiJIUzI1NiJ9...
```
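Reading such a `key=value` file is straightforward; a stdlib-only sketch of the idea (not the SDK's actual loader):

```python
def load_properties(text: str) -> dict[str, str]:
    """Parse simple Java .properties content: key=value lines, # or ! comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "!")):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

creds = load_properties("account=myaccount\nemail=user@example.com\npassword=secret")
print(creds["email"])  # user@example.com
```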
### Endpoint file
Create `~/.fm/endpoint`:
```
endpoint=https://fm-data.herokuapp.com/api/marketplaces/123
```
## SDK usage
```python
from fm import Flexemarkets, OrderBooks, MarketplaceTrades, Session, Holding
# Connect
fm = Flexemarkets.connect(
credential="~/.fm/credential",
endpoint="https://fm-data.herokuapp.com/api/marketplaces/123",
client_description="my-bot",
)
# REST API
marketplace_id = fm.endpoint_marketplace_id
markets = fm.markets(marketplace_id)
session = fm.session(marketplace_id)
holdings = fm.holdings(marketplace_id)
# Submit orders (market_id is the id of one of the entries in `markets`)
order = fm.submit_limit(marketplace_id, market_id, "BUY", units=1, price=950)
fm.submit_cancel(marketplace_id, market_id, order.id)
# WebSocket events
import queue
q = queue.Queue(maxsize=1000)
fm.listen(marketplace_id, q)
books = OrderBooks(markets)
trades = MarketplaceTrades(markets)
while True:
event = q.get()
match event:
case list() as orders:
books.update(orders)
trades.update(orders)
case Session() as s:
print(s.state)
case Holding() as h:
print(h.cash)
fm.close()
```
The client also works as a context manager:
```python
with Flexemarkets.connect(credential, endpoint, "my-bot") as fm:
...
```
## Applications
### ticker
Live terminal display of order book best bid/ask, spread, and recent trade prices.
```bash
python3.11 ticker.py -C ~/.fm/credential -E https://fm-data.herokuapp.com/api/marketplaces/123
```
Options:
| Flag | Description |
|------|-------------|
| `-C`, `--credential` | Credential file path or bearer token |
| `-E`, `--endpoint` | Marketplace endpoint file path or URL |
Output:
```
fm-ticker OPEN
Symbol Bid Ask Spread Last trades
------ ------ ------ ------ -----------
AAPL $ 9.50 $10.50 $ 1.00 $9.50 $10.00
IBM $ 4.00 $ 5.00 $ 1.00 $4.50
```
The display refreshes on each order book update. Press `Ctrl-C` to stop.
| text/markdown | null | Flexemarkets <support@flexemarkets.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27",
"websockets>=13"
] | [] | [] | [] | [
"Homepage, https://github.com/flexemarkets/fm-sdk",
"Documentation, https://github.com/flexemarkets/fm-sdk#readme",
"Repository, https://github.com/flexemarkets/fm-sdk.git",
"Issues, https://github.com/flexemarkets/fm-sdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:10:11.467478 | fm_sdk-0.0.2.tar.gz | 16,489 | 77/f0/2fc79a30208675700dff608ca97e28f94ec6dbce16728bf9a34b57dad616/fm_sdk-0.0.2.tar.gz | source | sdist | null | false | 8fc2769a6dc16f2a1a35c7cf80f82a24 | caddac9ef711aec0edca364b07ab3ab30662ba4946672fe9b7d1aa48ff7054b3 | 77f02fc79a30208675700dff608ca97e28f94ec6dbce16728bf9a34b57dad616 | MIT | [] | 216 |
2.4 | bitswan | 2026.2.21.36 | Bitswan is a framework for building automations and pipelines in Jupyter | [](https://docs.bitswan.space)
Bitswan: A tool for building Pipelines & Automations in Jupyter
===============================================
You can find example pipelines in the [examples](./examples/) directory.
Installation
--------------
This library is part of the bitswan suite, which is managed by the bitswan workspace CLI.
You must first install the [bitswan workspaces](https://github.com/bitswan-space/bitswan-workspaces) CLI before installing and using the bitswan notebooks CLI.
```
$ git clone git@github.com:bitswan-space/BitSwan.git
$ cd BitSwan
$ curl -LsSf https://astral.sh/uv/install.sh | sh
$ uv venv
$ source .venv/bin/activate
$ uv pip install -e ".[dev]"
```
Running pipelines
--------------------
You can run a pipeline with a simple command:
```
$ bitswan notebook examples/WebForms/main.ipynb
```
When developing web endpoints it can be helpful to instruct the pipeline to automatically restart if the source code changes.
```
$ bitswan notebook examples/WebForms/main.ipynb --watch
```
Running Tests
----------------
You can find examples for automatically testing pipelines in the [testing examples](./examples/Testing) directory.
Run tests with the `--test` flag.
```
$ bitswan notebook examples/Testing/InspectError/main.ipynb --test
Running tests for pipeline Kafka2KafkaPipeline.
┌ Testing event: b'foo'
└ Outputs: [b'FOO'] ✔
All tests passed for Kafka2KafkaPipeline.
Running tests for pipeline auto_pipeline_1.
┌ Testing event: b'{"foo":"aaa"}'
└ Outputs: [b'{"foo": "A A A"}'] ✔
┌ Testing event: b'{"foo":"aab"}'
│ Probing after-upper.
└ Outputs: [b'{"foo": "B A A"}'] ✔
┌ Testing event: b'{"foo":"cab"}'
└ Outputs: [b'{"foo": "B A C"}'] ✘
```
You can combine `--test` with `--watch` to automatically rerun tests whenever the source files change.
Licence
-------
Bitswan is open-source software, available under BSD 3-Clause License.
| text/markdown | null | LibertyAces Ltd <timothy.hobbs@libertyaces.com> | null | null | BSD 3-Clause License
Copyright (c) 2018-2019, LibertyAces Ltd
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | null | [] | [] | null | null | null | [] | [] | [] | [
"aiohttp>=3.8.3",
"aiomysql>=0.2.0",
"aiosmtplib>=1.1.3",
"aiozk>=0.25.0",
"certifi>=2023.7.22",
"confluent-kafka>=1.8.2",
"couchdb>=1.2",
"croniter>=1.4.1",
"cryptography>=44.0.1",
"docker",
"dockerfile-parse",
"fastapi",
"fastavro>=0.23.5",
"fastjsonschema<3,>=2.16.2",
"google-api-python-client>=1.7.10",
"idna>=3.7",
"importlib-resources",
"jinja2>=3.1.6",
"jupyter",
"kazoo<3,>=2.9.0",
"matplotlib>=3.8.0",
"mcp==1.20.0",
"mongoquery>=1.3.6",
"motor>=2.1.0",
"mysql-replication>=0.21",
"nest-asyncio==1.6.0",
"netaddr>=0.7.20",
"numpy>=1.19.0",
"orjson",
"paho-mqtt",
"pandas>=0.24.2",
"pika>=1.1.0",
"pyasn1==0.4.8",
"pybind11>=2.6.1",
"pyjwt==v2.10.1",
"pymongo>=3.10.1",
"pymysql>=1.1.1",
"python-dotenv==1.0.1",
"pytz>=2020.1",
"pyyaml>=5.4",
"requests>=2.32.0",
"setuptools>=70.0.0",
"urllib3>=1.26.19",
"uvicorn",
"xxhash>=1.4.4",
"zipp>=3.19.1",
"coverage==4.5.3; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"ruff; extra == \"dev\"",
"watchdog==6.0.0; extra == \"dev\"",
"pyjq>=2.6.0; extra == \"pyjq-support\""
] | [] | [] | [] | [
"homepage, https://github.com/bitswan-space/BitSwan"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-21T02:08:21.491037 | bitswan-2026.2.21.36.tar.gz | 514,952 | 95/25/023385a66516f19639851ed757eabf89db389f2e9b3e71c586e80d7e3430/bitswan-2026.2.21.36.tar.gz | source | sdist | null | false | d6ac95dbc3902572e1dd2168d8e66857 | 0045dd447ab302a8d8c1d4319941a8df1f00520eab377aa322fc2afd9a970da0 | 9525023385a66516f19639851ed757eabf89db389f2e9b3e71c586e80d7e3430 | null | [
"LICENSE"
] | 217 |
2.4 | quran-phonetic-search | 0.1.0 | Fuzzy search for Quran words using phonetic or Latin transliteration | # Quran Phonetic Search
Quran Phonetic Search is a Python package and CLI tool that allows fuzzy searching of Quranic words using their phonetic or simplified Latin transliterations. The package includes a preprocessed CSV dataset mapping Arabic words to their phonetic and Latin representations.
## Features
* Fuzzy search for Arabic words based on phonetic input or Latin transliteration.
* CLI tool for quick querying.
* Configurable result limit.
* Reloadable database for updates.
* Python package ready for pip installation.
## Installation
You can install the package locally from source or via pip once published.
### Local installation
```bash
# Build the package
python -m build
# Install locally
pip install --force-reinstall dist/quran_phonetic_search-0.1.0-py3-none-any.whl
```
### Dependencies
* Python 3.10 or higher
* rapidfuzz >= 3.0, < 4.0
* pandas >= 2.0, < 3.0
## Usage
After installation, the CLI command `quran-search` is available.
### Query a word
```bash
quran-search query <word> [--limit N]
```
* `<word>`: Phonetic or simple Latin representation of the Quranic word.
* `--limit N`: Optional. Maximum number of results to return. Defaults to 3.
**Example:**
```bash
quran-search query besm --limit 3
```
**Output:**
```
Top 3 match(es) for 'besm':
1. بِسْمِ
2. فَتَبَسَّمَ
3. بِسَمْعِهِمْ
```
### Reload the database
```bash
quran-search reload
```
Reloads the CSV dataset into memory in case of updates.
## Python API
You can also use the package programmatically:
```python
from quran_phonetic_search.search import QuranSearch
searcher = QuranSearch()
results = searcher.query_word("besm", limit=3)
print(results)
```
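The fuzzy-matching core can be illustrated with stdlib `difflib` — the package itself uses `rapidfuzz`, which is much faster but follows the same idea of scoring every candidate against the query and keeping the top N:

```python
import difflib

def top_matches(query: str, candidates: list[str], limit: int = 3) -> list[str]:
    """Rank candidates by similarity ratio to the query and keep the best `limit`."""
    scored = sorted(
        candidates,
        key=lambda c: difflib.SequenceMatcher(None, query, c).ratio(),
        reverse=True,
    )
    return scored[:limit]

# Hypothetical Latin transliterations, for illustration only
words = ["bism", "tabassama", "samihim", "rahman"]
print(top_matches("besm", words, limit=2))
```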
## Project Structure
```
QuranPhoneticSearch/
├── quran_phonetic_search/ # Python package
│ ├── __init__.py
│ ├── search.py
│ ├── main.py
│ └── data/
│ └── quran_words.csv
├── data_preprocessing/ # Scripts and notebooks for data cleaning and preprocessing
│ └── preprocessing.ipynb # Shows how the dataset was collected and processed
├── README.md
├── pyproject.toml
```
## License
This project is licensed under the MIT License.
| text/markdown | null | Mostafa Osman <mostafa.osman.fathi@gmail.com> | null | null | MIT | CLI, Quran, search, phonetic, arabic, transliteration | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"rapidfuzz<4.0,>=3.0",
"pandas>=3.0"
] | [] | [] | [] | [
"homepage, https://github.com/MostafaOsmanFathi/QuranPhoneticSearch"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T02:06:19.945788 | quran_phonetic_search-0.1.0.tar.gz | 754,161 | f2/4d/7133b5fd789a6ba522a2df0a0da85c9d79ac02a96ea6598b3818a347bf91/quran_phonetic_search-0.1.0.tar.gz | source | sdist | null | false | beb4c7b0d7a33c0eba56599e065523ba | 60a3c9bce1546ae7fcae4643cfa556ffd42e5f18eccebe89277e25b8ae4b7d7d | f24d7133b5fd789a6ba522a2df0a0da85c9d79ac02a96ea6598b3818a347bf91 | null | [] | 231 |
2.4 | borgitory | 2.6.4 | A comprehensive web-based management interface for BorgBackup repositories with real-time monitoring, automated scheduling, and cloud synchronization capabilities. |
# Borgitory
[](https://codecov.io/gh/mlapaglia/Borgitory)
[](https://github.com/mlapaglia/Borgitory/actions/workflows/release.yml)
[](https://github.com/sponsors/mlapaglia)
[](https://hub.docker.com/r/mlapaglia/borgitory)
[](https://pypi.org/project/borgitory/)
[](https://borgitory.com)
[&replace=%241&logo=borgbackup&label=BorgBackup)](https://borgbackup.readthedocs.io/)
[&replace=%241&logo=rclone&label=Rclone)](https://rclone.org/)
[&replace=%241&logo=python&label=pfuse3)](https://github.com/libfuse/libfuse)
<img alt="borgitory logo" src="./assets/logo.png" width="400">
Borgitory is a comprehensive web-based management interface for BorgBackup repositories that provides real-time monitoring, automated scheduling, and cloud synchronization capabilities. It offers complete backup lifecycle management including on-demand backups, automated pruning policies, interactive archive browsing with file downloads, and cloud sync to S3-compatible storage via Rclone. The FastAPI-powered system features a modern responsive web interface built with HTMX and Tailwind CSS.
## Quick Start
- full documentation is available at <https://borgitory.com>
### Prerequisites
- **Docker Installation (Recommended)**: Docker with Docker Compose for containerized deployment
- **PyPI Installation**: Python 3.14+ for direct installation from PyPI
### Installation
#### Option 1: Docker Installation (Recommended)
1. **Pull and run the Docker image**
```bash
# Using Docker directly
docker run -d \
-p 8000:8000 \
-v ./data:/app/data \
-v /path/to/backup/sources:/backup/sources:ro \
-v /path/to/borg/repos:/repos \
--cap-add SYS_ADMIN \
--device /dev/fuse \
--name borgitory \
mlapaglia/borgitory:latest
```
**Or using Docker Compose** (create a `docker-compose.yml`):
```yaml
version: '3.8'
services:
borgitory:
image: mlapaglia/borgitory:latest
ports:
- "8000:8000"
volumes:
- ./data:/app/data # database and encryption key location
- /path/to/backup/sources:/sources:ro
- /path/to/any/backup/repos:/repos:ro
cap_add:
- SYS_ADMIN # optional, needed to mount borg archives and browse them
devices:
- /dev/fuse # borg uses FUSE to mount archives
restart: unless-stopped
```
```bash
docker-compose up -d
```
2. **Access the web interface**
- Open <http://localhost:8000> in your browser
- Create your first admin account on initial setup
<img width="1237" height="729" alt="image" src="https://github.com/user-attachments/assets/078ce596-3ba2-4b6f-ba3f-c2d8b95e02db" />
#### Option 2: PyPI Installation
Install Borgitory directly from PyPI:
```bash
# Install stable release from PyPI
pip install borgitory
# Start the server
borgitory serve
# Or run with custom settings
borgitory serve --host 0.0.0.0 --port 8000
```
**PyPI Installation Requirements:**
- Python 3.14 or higher
- BorgBackup installed and available in PATH
- Rclone (optional, for cloud sync features)
- FUSE (optional, for browsing archives)
**Windows Requirements:**
- WSL2 (Windows Subsystem for Linux) must be installed and configured
- Inside WSL2, you need:
- BorgBackup installed (`sudo apt install borgbackup` or similar)
- Python 3.14+ installed
- Rclone installed (optional, for cloud sync features)
- BorgBackup does not have a native Windows executable, so WSL2 is required for all backup operations
| text/markdown | null | mlapaglia <mlapaglia@users.noreply.github.com> | null | null | MIT License
Copyright (c) 2025 Matt LaPaglia
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | archive-management, backup, borgbackup, cloud-sync, scheduling, web-interface | [
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14",
"Topic :: System :: Archiving :: Backup",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"aiofiles>=25.1.0",
"aiohttp>=3.13.3",
"aiosqlite>=0.21.0",
"alembic>=1.18.2",
"apscheduler>=3.11.2",
"bcrypt>=5.0.0",
"cron-descriptor>=2.0.6",
"cryptography>=46.0.4",
"docker>=7.1.0",
"fastapi>=0.128.4",
"jinja2>=3.1.6",
"pydantic>=2.12.5",
"python-dotenv>=1.2.1",
"python-multipart>=0.0.22",
"sqlalchemy>=2.0.46",
"sse-starlette>=3.2.0",
"uvicorn[standard]>=0.40.0",
"debugpy>=1.8.20; extra == \"dev\"",
"djlint>=1.36.4; extra == \"dev\"",
"httpx>=0.28.1; extra == \"dev\"",
"mypy>=1.19.1; extra == \"dev\"",
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"ruff>=0.15.0; extra == \"dev\"",
"sqlalchemy[mypy]>=2.0.46; extra == \"dev\"",
"types-aiofiles>=25.1.0.20251011; extra == \"dev\"",
"types-python-dateutil>=2.9.0.20260124; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://borgitory.com",
"Repository, https://github.com/mlapaglia/Borgitory",
"Issues, https://github.com/mlapaglia/Borgitory/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:05:26.098406 | borgitory-2.6.4.tar.gz | 981,372 | c4/fa/06fe32b881494e79f3d41c47910145cb576d0ccc30c90e7f1f0dd954252f/borgitory-2.6.4.tar.gz | source | sdist | null | false | 076b2f8946327e850ff77422bd5afe5c | 07fffcc631e10a33b310a21227cce9caadcd4a1ba443b48fa679f163556e9f38 | c4fa06fe32b881494e79f3d41c47910145cb576d0ccc30c90e7f1f0dd954252f | null | [
"LICENSE"
] | 221 |
2.4 | wcs.keycloak | 1.0.0a2 | Keycloak integration for Plone 6 | # wcs.keycloak
Keycloak integration for Plone 6.
This plugin works best in combination with [pas.plugins.oidc](https://github.com/collective/pas.plugins.oidc) for OpenID Connect authentication. When using both packages together, make sure to **disable user creation in pas.plugins.oidc** (`create_user = False`) since `wcs.keycloak` provides its own `IUserAdderPlugin` that handles user creation in Keycloak. Having both plugins create users will lead to conflicts.
**Performance note on IUserEnumerationPlugin**: The user enumeration plugin checks its local cache first and falls back to a Keycloak API call for every cache miss. Since `getMemberById` is called frequently throughout Plone (e.g. content listings, permission checks), this adds significant overhead when multiple user sources are active. Only activate `IUserEnumerationPlugin` if Keycloak is the sole user source.
History: This plugin was originally implemented in a private project and extracted, with AI assistance, into its own package.
## Features
- **PAS Plugin**: Pluggable Authentication Service plugin for Keycloak integration
- **User Enumeration**: Query and list users from Keycloak
- **User Creation**: Create users in Keycloak through Plone's registration workflow
- **User Properties**: Retrieve user properties (email, fullname) from Keycloak
- **Group Synchronization**: One-way sync of groups and memberships from Keycloak to Plone
- **User Synchronization**: One-way sync of users from Keycloak to the plugin's local storage
## Architecture
The plugin implements multiple PAS (Pluggable Authentication Service) interfaces:
- **IUserAdderPlugin**: Intercepts user creation to create users in Keycloak
- **IUserEnumerationPlugin**: Provides user enumeration from Keycloak
- **IPropertiesPlugin**: Provides user properties from Keycloak
Group and user synchronization is handled separately via event subscribers (automatic on login) and browser views (manual/scheduled).
### Modules
| Module | Description |
|--------|-------------|
| `plugin` | `KeycloakPlugin` PAS plugin with `_v_` volatile client caching |
| `client` | `KeycloakAdminClient` REST API client using OAuth2 client credentials flow with automatic token refresh |
| `sync` | Group sync, membership sync, `sync_all()` orchestrator. Groups are prefixed with `keycloak_` to coexist with native Plone groups |
| `user_sync` | User sync to `_user_storage` OOBTree |
| `interfaces` | `IKeycloakLayer` browser layer, `IKeycloakPlugin` marker interface |
| `browser/base` | `BaseSyncView` base class for the 3 sync views |
| `browser/user_management` | Overrides for Plone's user/group control panels with Keycloak sync buttons and admin links |
### Sync Strategy
Keycloak is the single source of truth. All sync operations are one-way from Keycloak to Plone. Changes to synced groups or users in Plone will be overwritten on the next sync.
Groups synced from Keycloak are prefixed with `keycloak_` to distinguish them from native Plone groups. This allows clear identification, safe deletion, and coexistence with native groups.
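The prefix convention can be sketched in a few lines (helper names are illustrative, not the plugin's actual API):

```python
# Convention used by the sync: groups coming from Keycloak carry a
# "keycloak_" prefix on the Plone side.
PREFIX = "keycloak_"

def plone_group_id(keycloak_group_name: str) -> str:
    """Plone-side id for a group synced from Keycloak."""
    return PREFIX + keycloak_group_name

def is_synced_group(group_id: str) -> bool:
    """Native Plone groups lack the prefix and are never touched by the sync."""
    return group_id.startswith(PREFIX)
```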
### Client Authentication
The `KeycloakAdminClient` authenticates using the `client_credentials` OAuth2 grant type. Tokens are automatically refreshed when they expire (on 401 response). The client provides operations for user management (create, search, get, email actions) and group management (create, delete, search, membership).
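The flow can be sketched with the standard library alone. This is a simplified illustration of the `client_credentials` grant against Keycloak's standard token endpoint (recent Keycloak versions omit the legacy `/auth` path prefix); the class below is hypothetical and not the package's actual `KeycloakAdminClient`:

```python
import json
import time
import urllib.parse
import urllib.request

class AdminTokenClient:
    """Sketch of a client_credentials token fetch with expiry tracking."""

    def __init__(self, server_url: str, realm: str,
                 client_id: str, client_secret: str) -> None:
        self.server_url = server_url.rstrip("/")
        self.realm = realm
        self.client_id = client_id
        self.client_secret = client_secret
        self._token = None
        self._expires_at = 0.0

    @property
    def token_url(self) -> str:
        # Standard Keycloak token endpoint layout
        return (f"{self.server_url}/realms/{self.realm}"
                "/protocol/openid-connect/token")

    def _fetch_token(self) -> None:
        data = urllib.parse.urlencode({
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }).encode()
        with urllib.request.urlopen(
                urllib.request.Request(self.token_url, data=data)) as resp:
            payload = json.loads(resp.read())
        self._token = payload["access_token"]
        # Refresh a little before the server-reported expiry
        self._expires_at = time.time() + payload["expires_in"] - 30

    def get_token(self) -> str:
        if self._token is None or time.time() >= self._expires_at:
            self._fetch_token()
        return self._token
```

The real client additionally retries a request once with a fresh token when it sees a 401 response.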
### Testing Infrastructure
All tests run against a real Keycloak Docker container (no mocks):
| Component | Description |
|-----------|-------------|
| `BaseDockerServiceLayer` | Base layer for running Docker containers as test fixtures |
| `KeyCloakLayer` | Starts Keycloak Docker container and creates test realm |
| `KeycloakTestMixin` | Utilities for admin client creation, authentication, user/group cleanup |
| `KeycloakPluginTestMixin` | Plugin setup with interface activation and service account configuration |
## Installation
Add `wcs.keycloak` to your Plone installation requirements:
```
wcs.keycloak
```
After installation, install the add-on profile through the Plone control panel or via GenericSetup.
## Keycloak Client Setup
Before configuring the plugin, you need to create a service account client in Keycloak with the appropriate permissions.
### Creating the Service Account Client
1. Log into your Keycloak Admin Console
2. Select your realm
3. Navigate to **Clients** and click **Create client**
4. Configure the client:
- **Client ID**: Choose a descriptive name (e.g., `plone-service-account`)
- **Client Protocol**: `openid-connect`
5. On the **Capability config** tab, enable:
- **Client authentication**: On (enables the Credentials tab)
- **Service accounts roles**: On
6. Click **Save**
### Assigning Required Roles
The service account needs permissions to manage users and groups:
1. Go to your client's **Service accounts roles** tab
2. Click **Assign role**
3. Filter by clients and select **realm-management**
4. Assign these roles:
- `manage-users` - Required for creating users and sending emails
- `view-users` - Required for user enumeration
- `query-users` - Required for user search
### Getting the Client Secret
1. Go to your client's **Credentials** tab
2. Copy the **Client secret** value
## Plugin Configuration
### Adding the Plugin via ZMI
1. Navigate to your Plone site's ZMI: `/acl_users/manage_main`
2. Select "Keycloak Plugin" from the dropdown and click **Add**
3. Enter the plugin ID (e.g., `keycloak`)
4. Configure the connection settings
### Connection Properties
| Property | Description | Example |
|----------|-------------|---------|
| **Server URL** | Base URL of your Keycloak server | `https://keycloak.example.com` |
| **Realm** | The Keycloak realm name | `my-realm` |
| **Admin Client ID** | Service account client ID | `plone-service-account` |
| **Admin Client Secret** | Service account client secret | `your-secret-here` |
### User Creation Options
These options control behavior when users are created through Plone's registration:
| Property | Description | Default |
|----------|-------------|---------|
| **Send password reset email** | Send UPDATE_PASSWORD action email | `True` |
| **Send verify email** | Send VERIFY_EMAIL action email | `True` |
| **Require 2FA/TOTP setup** | Require CONFIGURE_TOTP action | `False` |
| **Email link lifespan** | How long email links are valid (seconds) | `86400` (24h) |
| **Redirect URI** | Where to redirect after Keycloak actions | (empty) |
| **Redirect Client ID** | Client ID for redirect | (empty) |
### Group Sync Options
| Property | Description | Default |
|----------|-------------|---------|
| **Enable Keycloak Group Sync** | Sync all groups and the logged-in user's memberships on every login | `False` |
### User Sync Options
| Property | Description | Default |
|----------|-------------|---------|
| **Enable Keycloak User Sync** | Bulk-copy all Keycloak users (email, fullname) into local storage via sync endpoints | `False` |
User sync is only available when IUserEnumerationPlugin is **not** active. When enumeration is active, users are discovered live from Keycloak on every request, making local sync redundant. See [User Synchronization](#user-synchronization) for details.
### Activating Plugin Interfaces
After adding the plugin, activate the required interfaces in ZMI under `acl_users/plugins/manage_main`:
- **IUserAdderPlugin**: Enable to create users in Keycloak during registration
- **IUserEnumerationPlugin**: Enable to enumerate/search users from Keycloak
- **IPropertiesPlugin**: Enable to fetch user properties from Keycloak
## Group Synchronization
The group sync feature provides one-way synchronization from Keycloak to Plone. Keycloak is the authoritative source for group membership.
### How It Works
1. Groups from Keycloak are created in Plone with a `keycloak_` prefix
2. Group memberships are synced to match Keycloak
3. Groups deleted in Keycloak are removed from Plone
4. Native Plone groups (without the prefix) are not affected
### Automatic Sync on Login
When `Enable Keycloak Group Sync` is enabled:
- All groups are synced when any user logs in
- The logged-in user's group memberships are updated
### Manual/Scheduled Group Sync
Trigger a group-only sync by calling the group sync endpoint:
**curl (cron job)**:
```bash
curl -u admin:secret https://plone.example.com/@@sync-keycloak-groups
```
### Group Sync Response Format
```json
{
"success": true,
"message": "Sync complete: 5 groups created, 0 updated, 0 deleted. 12 users added to groups, 0 removed. 0 stale users cleaned up.",
"stats": {
"groups_created": 5,
"groups_updated": 0,
"groups_deleted": 0,
"users_added": 12,
"users_removed": 0,
"users_cleaned": 0,
"errors": 0
}
}
```
## User Synchronization
The user sync feature provides one-way synchronization of users from Keycloak to the plugin's local storage. This ensures that user properties (email, fullname) are available locally without querying Keycloak on every request.
User sync is automatically disabled when IUserEnumerationPlugin is active for the Keycloak plugin. Since active enumeration already discovers users live from Keycloak, storing them locally would be redundant. When enumeration is active:
- The sync button is hidden in the Users control panel
- The `@@sync-keycloak-users` endpoint returns a 400 response
- The `@@sync-keycloak` full sync skips the user sync step
To use user sync, keep IUserEnumerationPlugin deactivated and enable the "Enable Keycloak User Sync" property instead.
### How It Works
1. All users from Keycloak are fetched and stored in the plugin's local storage
2. User properties (email, first name, last name) are kept in sync
3. Users deleted in Keycloak are removed from local storage
### Dedicated User Sync Endpoint
Trigger a standalone user sync by calling the user sync endpoint:
**curl (cron job)**:
```bash
curl -u admin:secret https://plone.example.com/@@sync-keycloak-users
```
### User Sync Response Format
```json
{
"success": true,
"message": "User sync complete: 50 users synced, 2 removed.",
"stats": {
"users_synced": 50,
"users_removed": 2,
"errors": 0
}
}
```
## Full Synchronization
The `@@sync-keycloak` view performs a complete synchronization of all Keycloak data to Plone. It combines group sync, membership sync, user sync (when enabled), and cleanup of deleted users into a single operation.
This is the recommended endpoint for cron jobs that need to keep everything in sync.
**curl (cron job)**:
```bash
curl -u admin:secret https://plone.example.com/@@sync-keycloak
```
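For unattended operation, the call above can be scheduled straight from cron; a sketch (URL, credentials, and log path are placeholders):

```
# Full Keycloak sync every night at 03:00 (add via `crontab -e`)
0 3 * * * curl -sf -u admin:secret https://plone.example.com/@@sync-keycloak >> /var/log/keycloak-sync.log 2>&1
```

The `-f` flag makes curl return a non-zero exit status on HTTP errors, so the job's outcome can be checked by monitoring tools.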
### Full Sync Response Format
When user sync is enabled:
```json
{
"success": true,
"message": "Sync complete: 5 groups created, 0 updated, 0 deleted. 12 users added to groups, 0 removed. User sync: 50 synced, 2 removed.",
"stats": {
"groups_created": 5,
"groups_updated": 0,
"groups_deleted": 0,
"users_added": 12,
"users_removed": 0,
"users_synced": 50,
"users_sync_removed": 2,
"users_cleaned": 0,
"errors": 0
}
}
```
When user sync is disabled, the response includes cleanup stats instead:
```json
{
"success": true,
"message": "Sync complete: 5 groups created, 0 updated, 0 deleted. 12 users added to groups, 0 removed.",
"stats": {
"groups_created": 5,
"groups_updated": 0,
"groups_deleted": 0,
"users_added": 12,
"users_removed": 0,
"users_cleaned": 0,
"errors": 0
}
}
```
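Since every sync response carries an `errors` counter in `stats`, a cron wrapper can inspect the JSON and fail loudly when anything went wrong; a minimal sketch:

```python
import json

def sync_succeeded(response_body: str) -> bool:
    """Return True only if the sync reported success and zero errors."""
    result = json.loads(response_body)
    return (bool(result.get("success"))
            and result.get("stats", {}).get("errors", 0) == 0)
```

For example, after fetching the endpoint with `requests`, pass `resp.text` to `sync_succeeded` and exit non-zero on `False`.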
### Sync Endpoints Overview
| Endpoint | Scope | Use Case |
|----------|-------|----------|
| `@@sync-keycloak` | Groups + memberships + users + cleanup | Recommended for cron jobs |
| `@@sync-keycloak-groups` | Groups + memberships + stale user cleanup | When you only need group data |
| `@@sync-keycloak-users` | Users only | When you only need user data |
## Usage Examples
### Querying Users from Keycloak
**Python (requests)**:
```python
import requests
# Search users via Plone's user enumeration
response = requests.get(
'https://plone.example.com/@users',
params={'query': 'john'},
headers={'Accept': 'application/json'},
auth=('admin', 'secret')
)
users = response.json()
```
**JavaScript (fetch)**:
```javascript
const response = await fetch('https://plone.example.com/@users?query=john', {
headers: {
'Accept': 'application/json',
'Authorization': 'Basic ' + btoa('admin:secret')
}
});
const users = await response.json();
```
### Creating Users via Registration
Users created through Plone's registration form (or `@users` endpoint) are automatically created in Keycloak when the IUserAdderPlugin is active.
**Python (requests)**:
```python
import requests
response = requests.post(
'https://plone.example.com/@users',
json={
'username': 'newuser',
'email': 'newuser@example.com',
'fullname': 'New User'
},
headers={'Accept': 'application/json', 'Content-Type': 'application/json'},
auth=('admin', 'secret')
)
```
The user will:
1. Be created in Keycloak
2. Receive an email with actions based on plugin configuration (password setup, email verification, etc.)
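The email actions map directly to the user-creation options listed earlier. The function below is an illustrative sketch, not the plugin's actual code; the action names are Keycloak's standard required actions:

```python
def required_actions(send_password_reset: bool = True,
                     send_verify_email: bool = True,
                     require_totp: bool = False) -> list[str]:
    """Translate user-creation options into Keycloak required actions."""
    actions = []
    if send_password_reset:
        actions.append("UPDATE_PASSWORD")
    if send_verify_email:
        actions.append("VERIFY_EMAIL")
    if require_totp:
        actions.append("CONFIGURE_TOTP")
    return actions
```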
### Working with Synced Groups
Synced groups can be used like any Plone group:
**Python (requests)**:
```python
import requests
# List groups (includes keycloak_ prefixed groups)
response = requests.get(
'https://plone.example.com/@groups',
headers={'Accept': 'application/json'},
auth=('admin', 'secret')
)
groups = response.json()
# Get members of a synced group
response = requests.get(
'https://plone.example.com/@groups/keycloak_developers',
headers={'Accept': 'application/json'},
auth=('admin', 'secret')
)
group = response.json()
print(group['users'])
```
## Testing
The package includes comprehensive integration tests that run against a real Keycloak instance using Docker.
### Running Tests
```bash
make install
make test
```
Or run specific tests:
```bash
bin/test -s wcs.keycloak -t test_enumeration
bin/test -s wcs.keycloak -t TestKeycloakEnumerateUsers
```
## Development
```bash
# Create virtual environment and install dependencies
make install
# Run tests
make test
# Start development instance
make start
```
## License
GPL-2.0
| text/markdown | null | webcloud7 <info@webcloud7.ch> | null | null | null | CMS, Keycloak, Plone, Python | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: Plone",
"Framework :: Plone :: 6.1",
"Framework :: Plone :: Addon",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"plone",
"plone-restapi",
"requests",
"zest-pocompile; extra == \"release\"",
"zest-releaser[recommended]; extra == \"release\"",
"zestreleaser-towncrier; extra == \"release\"",
"beautifulsoup4; extra == \"test\"",
"paste; extra == \"test\"",
"plone-app-testing; extra == \"test\"",
"plone-testing; extra == \"test\"",
"requests; extra == \"test\"",
"zope-testrunner; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/webcloud7/wcs.keycloak",
"PyPI, https://pypi.org/project/wcs.keycloak",
"Source, https://github.com/webcloud7/wcs.keycloak",
"Tracker, https://github.com/webcloud7/wcs.keycloak/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-21T02:05:23.762740 | wcs_keycloak-1.0.0a2.tar.gz | 61,494 | 85/57/b23350091874c4be500dec5661efec7cc305027da0b3f4e96036257459c1/wcs_keycloak-1.0.0a2.tar.gz | source | sdist | null | false | af9c129f5513f81ade5f722a79259a48 | 96b5f9d46f6ca0b0d69e360d947b2caca236df1a49898636a7ddf66501b492f3 | 8557b23350091874c4be500dec5661efec7cc305027da0b3f4e96036257459c1 | GPL-2.0-only | [] | 0 |
2.4 | veyaauth-sdk | 1.0.0 | Official Python SDK for Veya verification API | # Veya Python SDK
Official Python SDK for the Veya API.
| text/markdown | Veya | support@veya.kr | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://veya.kr | null | >=3.7 | [] | [] | [] | [
"requests>=2.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.8 | 2026-02-21T02:02:47.798517 | veyaauth_sdk-1.0.0.tar.gz | 2,278 | 17/a0/4bc4ccc7b8916bc026d5b4d87c38c9e582003fac55c21e785f313ecae032/veyaauth_sdk-1.0.0.tar.gz | source | sdist | null | false | e109034563c1949985c42f8800f79d96 | d29524d38ef2746b30a9461c68c6f02d8a128a2e05bdee7b64c3c463cbf697be | 17a04bc4ccc7b8916bc026d5b4d87c38c9e582003fac55c21e785f313ecae032 | null | [] | 234 |
2.4 | star-drawing | 0.1.3 | SVG drawing canvas web component for StarHTML/Datastar | # star-drawing
SVG drawing canvas web component for [StarHTML](https://github.com/banditburai/starhtml) and [Datastar](https://data-star.dev/).
One `app.register()` call gives you a `<drawing-canvas>` custom element with drawing, shape, text, and annotation tools — plus a reactive toolbar wired to the canvas state. Built on [starelements](https://github.com/banditburai/starelements), so each canvas instance gets its own scoped signal namespace.
## Why?
Adding a drawing surface to a web app typically means choosing a JS canvas library, wiring up toolbars, managing undo state, and handling pointer events yourself. `star-drawing` wraps all of that into a single starelements component — import it, register it, and you have a working canvas with 10 tools, undo/redo, keyboard shortcuts, and collaboration hooks.
## Features
- **Full drawing toolkit** — pen, highlighter, line, arrow, rectangle, ellipse, diamond, text, and eraser tools out of the box.
- **WYSIWYG text editing** — click to place text, edit inline with font family, size, and alignment options.
- **Toolbar presets** — `drawing_toolbar()` for the full suite, `annotation_toolbar()` for markup, `diagram_toolbar()` for shapes and connectors.
- **Undo/redo and keyboard shortcuts** — Ctrl+Z, Ctrl+Y, Delete, Ctrl+A, Ctrl+D, and arrow-key nudging all wired up.
- **SVG export/import** — `export_svg()` and `import_svg()` for serialization and persistence.
- **Scoped signals** — built on starelements, so each canvas instance has its own signal namespace. No cross-instance interference.
- **Collaboration-ready** — `onElementChange` callback and `applyRemoteChanges` method for syncing state across clients.
- **Configurable palettes** — stroke colors, fill colors, highlighter colors, width presets, font options, and arrowhead styles are all customizable.
- **Readonly mode** — set the `readonly` attribute for view-only canvases.
- **Skeleton loading** — shimmer placeholder shown until the component initializes, preventing layout shift.
## Installation
Requires Python 3.12+ and [StarHTML](https://github.com/banditburai/starhtml).
```
pip install star-drawing
```
## Quick Start
```python
from starhtml import *
from star_drawing import DrawingCanvas, drawing_toolbar
app, rt = star_app()
app.register(DrawingCanvas)
@rt("/")
def index():
canvas = DrawingCanvas(cls="drawing-container")
return Div(
drawing_toolbar(canvas),
canvas,
)
serve()
```
This gives you a full-featured drawing canvas with all tools, color palettes, style options, and undo/redo.
## Toolbar Presets
For focused use cases, use a preset instead of the full toolbar:
```python
# Annotation — pen, highlighter, eraser only
annotation_toolbar(canvas)
# Diagramming — select, shapes, lines, arrows, text
diagram_toolbar(canvas)
```
Both presets accept the same keyword arguments as `drawing_toolbar()` for further customization.
## Configuration
Attributes on `<drawing-canvas>` control defaults:
| Attribute | Default | Description |
|---|---|---|
| `default-tool` | `"pen"` | Initial active tool |
| `default-stroke-color` | `"#1a1a2e"` | Initial stroke color |
| `default-fill-color` | `"#ffffff"` | Initial fill color |
| `default-stroke-width` | `2` | Initial stroke width in px |
| `default-opacity` | `1` | Initial element opacity |
| `default-layer` | `"default"` | Initial active layer |
| `throttle-ms` | `16` | Input throttle interval in milliseconds |
| `readonly` | — | Disables all drawing interaction |
Set these as element attributes in Python:
```python
DrawingCanvas(
default_tool="select",
default_stroke_color="#3568d4",
default_stroke_width=4,
readonly=True,
)
```
## Toolbar Customization
`drawing_toolbar()` accepts keyword arguments to override defaults:
```python
drawing_toolbar(
canvas,
tools=("pen", "line", "rect", "text"), # subset of tools to show
show_colors=True, # color palette panel
show_styles=True, # style options panel
show_undo=True, # undo/redo/clear buttons
color_palette=[("#000", "Black"), ("#fff", "White")],
width_presets=(1, 2, 4, 8),
)
```
## Development
TypeScript sources live in `typescript/` and are bundled with bun:
```
bun run build # build drawing-canvas.js
bun run dev # watch mode
uv run ruff check src/ tests/ # lint Python
uv run ruff format --check src/ # check formatting
npx tsc --noEmit # type-check TypeScript
```
The hatch build hook runs `bun run build` automatically during `pip install` / `uv build`, so the generated JS is never checked into git.
## License
[Apache 2.0](LICENSE)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"starelements>=0.1.3",
"pip-tools; extra == \"dev\"",
"pyright; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pytest-cov>=6.1.1; extra == \"test\"",
"pytest>=8.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:02:04.427489 | star_drawing-0.1.3.tar.gz | 78,585 | 85/be/7bcf00e8cdf127ea1d688163979452aece5cbe9fb3b7d7509a0b16c2c7ac/star_drawing-0.1.3.tar.gz | source | sdist | null | false | 0518bad5b35435596ffb5a7871bcf0f2 | a8f0fd39a257c342056833fd5bac2cd46246103beb5ead650f51cd812bf70f48 | 85be7bcf00e8cdf127ea1d688163979452aece5cbe9fb3b7d7509a0b16c2c7ac | Apache-2.0 | [
"LICENSE"
] | 224 |
2.4 | schrodingers-llama-box | 0.0.1 | Reveals the state of Schrodinger's famous llama and calls to mind the Mandela Effect. | # Schrodinger's Llama Box
Reveals the state of Schrodinger's famous llama and calls to mind the Mandela Effect.
## Installation
```bash
pip install schrodingers_llama_box
```
## Usage
```python
import schrodingers_llama_box
schrodingers_llama_box.reveal()
```
| text/markdown | null | Ernesto Fabio Obscuro <j@veox.ai> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/pypa/sampleproject",
"Issues, https://github.com/pypa/sampleproject/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-21T02:02:02.142472 | schrodingers_llama_box-0.0.1.tar.gz | 2,329 | 51/fc/0f3178df51dd443ba1440869bbb1c8a9cfe6fa309aad93253309c8c27614/schrodingers_llama_box-0.0.1.tar.gz | source | sdist | null | false | ee1ba16496df56439359d6ab9b4b8693 | c34194548e0b2bd971ee227147edf36cb650b38bcb57a1c4346cae58f97fb6e5 | 51fc0f3178df51dd443ba1440869bbb1c8a9cfe6fa309aad93253309c8c27614 | MIT | [
"LICENSE"
] | 238 |
2.4 | viewinline | 0.2.1 | Quick look geospatial viewer for iTerm2 compatible terminals | # viewinline
[](https://pepy.tech/project/viewinline)
[](https://pypi.org/project/viewinline/)
[](https://pypi.org/project/viewinline/)
**Quick-look geospatial viewer for compatible terminals.**
Displays rasters, vectors, and tabular data directly in the terminal with no GUI and no temporary files.
Think of it as `ls` for geospatial files — designed for quick visual inspection at the command line, not a replacement for QGIS, ArcGIS, or analytical workflows.
Particularly useful on HPC systems and remote servers accessed via SSH. Images render on your local terminal without X11 forwarding, VNC, or file downloads.
This tool combines the core display logic of `viewtif` and `viewgeom`, but is **non-interactive**: you can't zoom, pan, or switch colormaps on the fly. Instead, you control everything through command-line options (e.g. `--display`, `--color-by`, `--colormap`).
It uses the iTerm2 inline image protocol (OSC 1337) to render previews. In incompatible terminals, the escape codes are silently ignored with no errors or crashes.
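For reference, the protocol itself is simple to emit; a minimal stdlib sketch (independent of viewinline's own implementation):

```python
import base64

def iterm2_inline_image(image_bytes: bytes, name: str = "preview.png") -> str:
    """Build an OSC 1337 escape sequence that renders an image inline.

    Compatible terminals decode and display the base64 payload; others
    ignore the unknown escape sequence, which is why incompatible
    terminals fail silently rather than crash.
    """
    payload = base64.b64encode(image_bytes).decode("ascii")
    encoded_name = base64.b64encode(name.encode()).decode("ascii")
    return (f"\x1b]1337;File=name={encoded_name};"
            f"size={len(image_bytes)};inline=1:{payload}\x07")
```

Printing the returned string to a compatible terminal (e.g. `print(iterm2_inline_image(open("map.png", "rb").read()))`) renders the image in place.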
## Installation
Requires Python 3.9 or later.
```bash
pip install viewinline
```
## Usage
```bash
# Rasters
viewinline path/to/file.tif
viewinline R.tif G.tif B.tif # RGB composite (also works with --rgbfiles)
viewinline path/to/multiband.tif --rgb 3 2 1
viewinline path/to/folder --gallery 4x3 # show image gallery (e.g. 4x3 grid)
# NetCDF and HDF
viewinline file.nc # list variables
viewinline file.nc --subset 2 # display variable 2
viewinline file.nc --subset 1 --band 10  # variable 1, timestep 10 (--timestep is an alias for --band)
viewinline temp.nc --subset 1 --colormap plasma --vmin 273 --vmax 310
# Vectors
viewinline path/to/vector.geojson
viewinline boundaries.geoparquet --color-by population --colormap viridis
# CSV and Parquet
viewinline data.csv # preview rows and columns
viewinline data.parquet --describe # summary statistics
viewinline data.csv --hist # histograms for all numeric columns
viewinline data.csv --hist area_km2 # histogram for one column
viewinline data.csv --scatter X Y # scatter plot
viewinline data.csv --where "year > 2010" # filter rows
viewinline data.csv --sort population --desc # sort rows
viewinline data.csv --sql "SELECT * FROM data WHERE area > 100 ORDER BY year" # full SQL
# Tabular view of vectors
viewinline counties.shp --table # view shapefile as table
viewinline counties.shp --table --describe # summary statistics
viewinline counties.shp --table --unique STATE_NAME
viewinline data.geoparquet --table --where "POP > 100000" --sort POP --desc
```
## Compatible terminals
The iTerm2 inline image protocol (OSC 1337) is supported by:
- **iTerm2** (macOS)
- **WezTerm** (cross-platform)
- **Konsole** (Linux/KDE)
- **Rio**, **Contour** (cross-platform)
**Not compatible:** Mac Terminal, GNOME Terminal, Kitty (uses a different image protocol), Ghostty, Alacritty
**SSH/HPC usage:** Works over SSH when connecting from a compatible terminal. Images render on your local machine, not the remote server. No X11 forwarding or VNC required.
**tmux/screen:** Inline images don't work inside tmux or screen sessions, even with `allow-passthrough on`. Use a plain terminal tab.
## Features
- Previews rasters, vectors, and tabular data directly in the terminal
- Non-interactive: everything is controlled through command-line options
- **NetCDF/HDF Support:** Display variables from NetCDF (.nc) and HDF5 (.h5, .hdf5) files with automatic nodata detection and multi-slice navigation
- **Parquet/GeoParquet:** Render GeoParquet as vector maps or view as tabular data
- **Tabular View for Vectors:** Use `--table` to access CSV-style operations (filter, sort, describe, hist) on any vector file
## Supported formats
**Rasters**
- GeoTIFF (.tif, .tiff)
- PNG, JPEG (.png, .jpg, .jpeg)
- NetCDF (.nc)
- HDF5 (.h5, .hdf5)
- HDF4 (.hdf) — requires GDAL with HDF4 support
- Single-band or multi-band composites
**Composite inputs**
- You can pass three rasters (e.g. `R.tif G.tif B.tif`) or use `--rgbfiles R.tif G.tif B.tif` to create an RGB composite
- Multi-band files: use `--rgb 3 2 1` to specify band order
**Vectors**
- GeoJSON (`.geojson`)
- Shapefile (`.shp`)
- GeoPackage (`.gpkg`)
- Parquet/GeoParquet (`.parquet`, `.geoparquet`)
**Tabular data (CSV and Parquet)**
- CSV (`.csv`)
- Parquet (`.parquet`) — requires `pyarrow`
- All CSV operations work on parquet files
- Preview file summary (rows, columns, and names)
- Summary statistics with `--describe`
- Inline histograms with `--hist`
- Scatter plots with `--scatter X Y`
- Filter rows with `--where`, sort with `--sort`, limit output with `--limit`
- Full SQL queries with `--sql` (DuckDB required) — use `data` as the table name
**Tabular view of vectors**
- Use `--table` flag to view any vector file (shapefiles, GeoPackage, GeoParquet) as tabular data
- Enables all CSV-style operations: `--describe`, `--hist`, `--scatter`, `--unique`, `--where`, `--sort`
**Gallery view**
- Display all images in a folder with `--gallery 4x4`
**NetCDF/HDF notes:**
- viewinline lists only variables that can be displayed as 2D or 3D arrays
- Variables with additional dimensions (e.g., vertical levels) may be listed but will fail to display with a clear error message
- For a complete variable list, use `ncdump -h file.nc` or `viewtif`
## Dependencies
**Core dependencies** (installed automatically):
- `rasterio` — raster reading (includes GDAL)
- `geopandas`, `pyogrio` — vector reading
- `matplotlib` — vector rendering
- `Pillow` — image encoding
- `numpy`, `pandas` — data handling
**Optional dependencies:**
- `duckdb` — required for `--where`, `--sort`, `--sql`, `--limit` with filtering
```bash
pip install duckdb
```
- `pyarrow` — required for Parquet/GeoParquet files
```bash
pip install pyarrow
```
- `h5py` — fallback for HDF5 files if GDAL lacks HDF5 support (usually not needed)
```bash
pip install h5py
```
**Note on HDF support:**
- **HDF5** (.h5, .hdf5): Supported via rasterio if GDAL has HDF5 support (most installations)
- **HDF4** (.hdf): Requires GDAL compiled with HDF4 support (common in MODIS data)
- **NetCDF** (.nc): Supported via rasterio (uses GDAL's NetCDF driver)
## Available options
```
General:
--display DISPLAY Resize only the displayed image (0.5=smaller, 2=bigger). Default: auto-fit to terminal.
Raster:
--band BAND Band number to display (single raster), or slice number for NetCDF. (default: 1)
--timestep INTEGER Alias for --band when working with NetCDF files.
--subset INTEGER Variable index for NetCDF/HDF files (e.g., --subset 1).
--colormap Apply colormap to single-band rasters. Flag without the color scheme → 'terrain'.
--rgb R G B Three band numbers for RGB display (e.g., --rgb 4 3 2). Overrides default 1 2 3.
  --rgbfiles R G B     Three single-band rasters for RGB composite. Can also be provided as positional arguments.
--vmin VMIN Minimum pixel value for raster display scaling.
--vmax VMAX Maximum pixel value for raster display scaling.
--nodata NODATA Override nodata value for rasters if dataset metadata is missing or incorrect.
--gallery [GRID] Display all PNG/JPG/TIF images in a folder as thumbnails (e.g., 5x5 grid).
Vector:
--color-by COLUMN Column to color vector features by.
--colormap Apply colormap to vector coloring. Flag without value → 'terrain'.
--width WIDTH Line width for vector boundaries. (default: 0.7)
--edgecolor COLOR Edge color for vector outlines (hex or named color). (default: white)
--layer LAYER Layer name for GeoPackage/multi-layer files, or variable name for NetCDF files.
--table Display vector/parquet file as tabular data instead of rendering geometry.
CSV and Parquet:
--describe [COLUMN] Show summary statistics for all numeric columns or specify one column name.
--hist [COLUMN] Show histograms for all numeric columns or specify one column name.
--bins BINS Number of bins for histograms (used with --hist). (default: 20)
--scatter X Y Plot scatter of two numeric columns (e.g., --scatter area_km2 year).
--unique COLUMN Show unique values for a categorical column.
--where EXPR Filter rows using SQL WHERE clause (DuckDB required). Example: --where "year > 2010"
--sort COLUMN Sort rows by column, ascending by default. Use --desc to reverse.
--desc Sort in descending order (used with --sort).
--limit N Limit number of rows shown (e.g., --limit 100).
--select COLUMNS Select specific columns (space separated). Example: --select Country City
--sql QUERY Execute full DuckDB SQL query. Use 'data' as table name. Example: --sql "SELECT * FROM data WHERE Poverty > 40"
```
## Need help?
You can ask questions about usage via the documentation-based assistant:
👉 [Ask the viewtif + viewgeom + viewinline Helper](https://chatgpt.com/g/g-698b61c42f788191b884aed1b99dfcd8-viewtif-viewgeom-viewinline-helper)
👉 For NASA staff: find 'viewtif + viewgeom + viewinline Helper' via the ChatGSFC Agent Marketplace
## License
This project is released under the Apache License 2.0 © 2026 Keiko Nomura.
If you find this tool useful, please consider supporting or acknowledging it in your work.
## Useful links
- [YouTube demo playlist](https://www.youtube.com/playlist?list=PLP9MNCMgJIHj6FvahJ6Tembp1rCyhLtR4)
- [Demo at the initial release](https://www.linkedin.com/posts/keiko-nomura-0231891_just-released-viewinline-for-those-using-activity-7390643680770023424-8Guu?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAA0INsBVIO1f6nS_NkKqFh4Na1ZpoYo2fc)
- [Demo for the v0.1.3](https://www.linkedin.com/posts/keiko-nomura-0231891_just-released-viewinline-v013-for-iterm2-activity-7391633864798081025-dPbk?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAA0INsBVIO1f6nS_NkKqFh4Na1ZpoYo2fc)
- [Demo with GDAL](https://www.linkedin.com/posts/keiko-nomura-0231891_if-you-use-gdal-heres-a-quick-example-workflow-activity-7390892270847373312-XWZ4?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAA0INsBVIO1f6nS_NkKqFh4Na1ZpoYo2fc)
- [User feedback (thank you!)](https://www.linkedin.com/posts/jamshidsodikov_shout-out-to-keiko-nomura-for-viewinline-activity-7390979602539528192-S8JQ?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAA0INsBVIO1f6nS_NkKqFh4Na1ZpoYo2fc)
| text/markdown | Keiko Nomura | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"geopandas",
"matplotlib",
"numpy",
"pandas",
"pillow",
"pyogrio",
"rasterio",
"duckdb; extra == \"all\"",
"h5py; extra == \"all\"",
"pyarrow; extra == \"all\"",
"h5py; extra == \"hdf5\"",
"pyarrow; extra == \"parquet\"",
"duckdb; extra == \"sql\""
] | [] | [] | [] | [
"Homepage, https://github.com/nkeikon/viewinline",
"Repository, https://github.com/nkeikon/viewinline",
"Issues, https://github.com/nkeikon/viewinline/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T02:01:57.756852 | viewinline-0.2.1.tar.gz | 21,431 | 93/1e/c281a335ccc6519358ee9e99f837408d40d683acd902961e3de0d5dc5088/viewinline-0.2.1.tar.gz | source | sdist | null | false | 78f5357d27e64cc57662fc3d0f2f20cb | bf5be0b1ca0692707f3863bfea85564816c734f6f123e8e74508a7a2ebda0323 | 931ec281a335ccc6519358ee9e99f837408d40d683acd902961e3de0d5dc5088 | null | [
"LICENSE"
] | 225 |
2.3 | llamactl | 0.4.12 | A command-line interface for managing LlamaDeploy projects and deployments | # llamactl
A command-line interface for managing LlamaDeploy projects and deployments.
For an end-to-end introduction, see [Getting started with LlamaAgents](https://developers.llamaindex.ai/python/cloud/llamaagents/getting-started).
## Installation
Install from PyPI:
```bash
pip install llamactl
```
Or using uv:
```bash
uv add llamactl
```
## Quick Start
1. **Configure your profile**: Set up connection to your LlamaDeploy control plane
```bash
llamactl profile configure
```
2. **Check health**: Verify connection to the control plane
```bash
llamactl health
```
3. **Create a project**: Initialize a new deployment project
```bash
llamactl project create my-project
```
4. **Deploy**: Deploy your project to the control plane
```bash
llamactl deployment create my-deployment --project-name my-project
```
## Commands
### Profile Management
- `llamactl profile configure` - Configure connection to control plane
- `llamactl profile show` - Show current profile configuration
- `llamactl profile list` - List all configured profiles
### Project Management
- `llamactl project create <name>` - Create a new project
- `llamactl project list` - List all projects
- `llamactl project show <name>` - Show project details
- `llamactl project delete <name>` - Delete a project
### Deployment Management
- `llamactl deployment create <name>` - Create a new deployment
- `llamactl deployment list` - List all deployments
- `llamactl deployment show <name>` - Show deployment details
- `llamactl deployment delete <name>` - Delete a deployment
- `llamactl deployment logs <name>` - View deployment logs
### Health & Status
- `llamactl health` - Check control plane health
- `llamactl serve` - Start local development server
## Configuration
llamactl stores configuration in your home directory at `~/.llamactl/`.
### Profile Configuration
Profiles allow you to manage multiple control plane connections:
```bash
# Configure default profile
llamactl profile configure
# Configure named profile
llamactl profile configure --profile production
# Use specific profile for commands
llamactl --profile production deployment list
```
## Development
This CLI is part of the LlamaDeploy ecosystem. For development setup:
1. Clone the repository
2. Install dependencies: `uv sync`
3. Run tests: `uv run pytest`
## Requirements
- Python 3.12+
- Access to a LlamaDeploy control plane
## License
This project is licensed under the MIT License.
| text/markdown | Adrian Lyjak | Adrian Lyjak <adrianlyjak@gmail.com> | null | null | MIT | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"llama-deploy-core[client]<0.5.0,>=0.4.12",
"llama-deploy-appserver<0.5.0,>=0.4.12",
"vibe-llama-core>=0.1.0",
"rich>=13.0.0",
"questionary>=2.0.0",
"click>=8.2.1",
"python-dotenv>=1.0.0",
"tenacity>=9.1.2",
"textual>=6.0.0",
"aiohttp>=3.12.14",
"copier>=9.10.2",
"pyjwt[crypto]>=2.10.1",
"typing-extensions>=4.15.0",
"typing-extensions>=4.15.0; python_full_version < \"3.11\""
] | [] | [] | [] | [] | uv/0.7.20 | 2026-02-21T02:01:53.406774 | llamactl-0.4.12.tar.gz | 57,984 | cb/df/39740c0a7314c8b22cb43b4edb2fa0b3561bc654f4687ab419f040b136d0/llamactl-0.4.12.tar.gz | source | sdist | null | false | 1cbff3aa454f0b65ffe140186fc92be6 | fcb907b8bc2f8fb5a5d4eac2a71311b608c9c180077ef41416a258e699af410f | cbdf39740c0a7314c8b22cb43b4edb2fa0b3561bc654f4687ab419f040b136d0 | null | [] | 652 |
2.3 | llama-deploy-core | 0.4.12 | Core models and schemas for LlamaDeploy | # llama-deploy-core
Core models and schemas for LlamaDeploy.
For an end-to-end introduction, see [Getting started with LlamaAgents](https://developers.llamaindex.ai/python/cloud/llamaagents/getting-started).
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"fastapi>=0.115.0",
"overrides>=7.7.0",
"pydantic>=2.0.0",
"pyyaml>=6.0.2",
"truststore>=0.10.4",
"types-pyyaml>=6.0.12.20250822",
"tomli>=2.0.1; python_full_version < \"3.11\"",
"httpx<1.0.0,>=0.24.0; extra == \"client\"",
"fastapi>=0.115.0; extra == \"server\""
] | [] | [] | [] | [] | uv/0.7.20 | 2026-02-21T02:01:51.396990 | llama_deploy_core-0.4.12.tar.gz | 20,362 | 37/9a/d287f372cb29ee7714cfc30a64de5ad203ef11ca3a296d39a25b7ce59066/llama_deploy_core-0.4.12.tar.gz | source | sdist | null | false | f93ada4fc583792d5fc406edbe96b3df | 10aecff78c20c02acc23d767c02b5b7f3cf09a8e4e1e09308fce2ecd68336dd5 | 379ad287f372cb29ee7714cfc30a64de5ad203ef11ca3a296d39a25b7ce59066 | null | [] | 723 |
2.3 | llama-deploy-appserver | 0.4.12 | Application server components for LlamaDeploy | # llama-deploy-appserver
Application server components for LlamaDeploy.
For an end-to-end introduction, see [Getting started with LlamaAgents](https://developers.llamaindex.ai/python/cloud/llamaagents/getting-started).
| text/markdown | Massimiliano Pippi, Adrian Lyjak | Massimiliano Pippi <mpippi@gmail.com>, Adrian Lyjak <adrianlyjak@gmail.com> | null | null | MIT | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"llama-index-workflows<3.0.0,>=2.14.0",
"llama-agents-server<0.2.0,>=0.1.1",
"pydantic-settings>=2.10.1",
"fastapi>=0.100.0",
"websockets>=12.0",
"llama-deploy-core<0.5.0,>=0.4.12",
"httpx<1.0.0,>=0.24.0",
"prometheus-fastapi-instrumentator>=7.1.0",
"packaging>=25.0",
"structlog>=25.4.0",
"rich>=14.1.0",
"pyyaml>=6.0.2",
"watchfiles>=1.1.0",
"uvicorn>=0.35.0",
"typing-extensions>=4.15.0; python_full_version < \"3.12\""
] | [] | [] | [] | [] | uv/0.7.20 | 2026-02-21T02:01:49.435724 | llama_deploy_appserver-0.4.12.tar.gz | 24,667 | fb/8c/5790e367ac50de2624c870ba1c76098ce92ce5cff568cc4ff157396994c6/llama_deploy_appserver-0.4.12.tar.gz | source | sdist | null | false | bd79a14eb09c0c11194bb84ff4b2d5b3 | 5e941b6bca578ce0ff20629262a10d22630bf7d05cb923cb4ee5ad33f7366a14 | fb8c5790e367ac50de2624c870ba1c76098ce92ce5cff568cc4ff157396994c6 | null | [] | 662 |
2.4 | django-eav-json-fields | 0.1.0 | Schema-validated JSONFields for Django using an EAV-style DSL | # django-eav-json-fields
Schema-validated JSONFields for Django using an EAV-style DSL.
Define expected keys, types, and constraints on JSON dict values stored in Django `JSONField`s. Storage remains plain JSONB in PostgreSQL -- the EAV layer adds validation, type coercion, and admin widgets.
## Requirements
- Python 3.10+
- Django 4.2+
## Installation
```bash
pip install django-eav-json-fields
```
No `INSTALLED_APPS` entry is needed.
## Quick Start
### Define a schema
```python
from eav_fields import EAVSchema, EAVBoolean, EAVString, EAVDecimal, EAVInteger
class ServiceConfig(EAVSchema):
enabled = EAVBoolean(default=True, help_text="Enable this service")
label = EAVString(max_length=100, help_text="Display name")
price = EAVDecimal(required=False, max_digits=10, decimal_places=2)
max_retries = EAVInteger(min_value=0, max_value=10, default=3, required=False)
```
### Use EAVField on a model
```python
from django.db import models
from eav_fields import EAVField
class Service(models.Model):
name = models.CharField(max_length=100)
config = EAVField(schema=ServiceConfig)
```
The field validates during `full_clean()` that the JSON dict conforms to the schema. Unknown keys are rejected, types are coerced, and constraints are enforced.
### Polymorphic schemas
Select a schema based on another field's value:
```python
class Product(models.Model):
product_type = models.CharField(max_length=20, choices=[("a", "Type A"), ("b", "Type B")])
config = EAVField(
schema_map={"a": SchemaA, "b": SchemaB},
schema_key_field="product_type",
)
```
## EAV Attribute Types
| Type | Python Type | JSON Storage | Constraints |
|------|-------------|-------------|-------------|
| `EAVString` | `str` | string | `max_length`, `choices` |
| `EAVBoolean` | `bool` | boolean | -- |
| `EAVInteger` | `int` | number | `min_value`, `max_value`, `choices` |
| `EAVFloat` | `float` | number | `min_value`, `max_value`, `choices` |
| `EAVDecimal` | `Decimal` | string | `max_digits`, `decimal_places`, `min_value`, `max_value`, `choices` |
All attributes support: `required` (default `True`), `default`, `help_text`, `choices`.
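Note from the table that `EAVDecimal` values are stored as JSON strings rather than numbers, which avoids round-tripping through float. A minimal stdlib sketch of that convention (illustrative only, not this package's API):

```python
import json
from decimal import Decimal

# Storing a Decimal as a JSON string preserves exact precision;
# encoding it as a JSON number would round-trip through float.
payload = {"price": str(Decimal("10.10"))}
encoded = json.dumps(payload)

restored = Decimal(json.loads(encoded)["price"])
print(restored)  # 10.10
```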
## Cross-field Validation
Override `validate_cross` for rules spanning multiple attributes:
```python
from django.core.exceptions import ValidationError
from eav_fields import EAVSchema, EAVInteger
class MyConfig(EAVSchema):
min_val = EAVInteger()
max_val = EAVInteger()
@classmethod
def validate_cross(cls, data):
if data.get("min_val", 0) >= data.get("max_val", 0):
raise ValidationError({"max_val": ["max_val must be greater than min_val."]})
```
## Schema Inheritance
Child schemas inherit parent attributes and can override them:
```python
class BaseConfig(EAVSchema):
name = EAVString()
class ExtendedConfig(BaseConfig):
count = EAVInteger() # inherits 'name' from parent
```
## Admin Widget
`EAVField` automatically provides an `EAVWidget` in the Django admin that renders typed HTML inputs for each schema attribute, with show/hide support for polymorphic schemas.
## Migration Support
Schema references are serialized as dotted import paths in migrations. Both class references and string paths are accepted:
```python
# Both are equivalent:
config = EAVField(schema=MySchema)
config = EAVField(schema="myapp.schemas.MySchema")
```
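String schema paths are resolved in the usual `module.attr` fashion. A rough sketch of how such dotted-path resolution typically works, using a stdlib class for illustration (the package's own resolver may differ in details):

```python
import importlib

def resolve_dotted_path(path: str):
    """Split 'pkg.module.ClassName' into module and attribute, then import."""
    module_path, _, attr = path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

cls = resolve_dotted_path("collections.OrderedDict")
print(cls)  # <class 'collections.OrderedDict'>
```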
## License
MIT
| text/markdown | null | Michael Lavers <kolanos@gmail.com> | null | null | null | django, eav, jsonfield, schema, validation | [
"Development Status :: 3 - Alpha",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"django>=4.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:01:01.253686 | django_eav_json_fields-0.1.0.tar.gz | 9,787 | 1a/4c/bce1088b1f834774e286b87af86064db1a1a187620cdeb318cbcf7ba738e/django_eav_json_fields-0.1.0.tar.gz | source | sdist | null | false | bef48d8904f26678d44c39947a7161d8 | 74a60dab87805fd5e87bb98808f2b9c4f47042e87c860d1d624b0a69a8dc2afb | 1a4cbce1088b1f834774e286b87af86064db1a1a187620cdeb318cbcf7ba738e | MIT | [
"LICENSE"
] | 237 |
2.4 | kalshi-mcp | 0.1.1 | MCP server for Kalshi prediction markets - search, analyze, and trade prediction markets through Claude Desktop | # Kalshi MCP Server
mcp-name: io.github.yakub268/kalshi
[](https://modelcontextprotocol.io)
[](https://www.python.org/downloads/)
**Model Context Protocol server for Kalshi prediction markets.** Search, analyze, and trade prediction markets directly through Claude Desktop.
Built with production-grade authentication and rate limiting from a live trading system with 4+ months of uptime.
## Features
### Tools (6)
- **`search_markets`** - Search by keyword, get prices/volume
- **`get_market_details`** - Full market info + orderbook depth
- **`get_portfolio`** - Account balance + open positions
- **`get_trending_markets`** - Top markets by 24h volume
- **`place_order`** - Execute limit orders
- **`get_series_markets`** - All markets in a series (e.g., Fed events)
### Resources (2)
- **`kalshi://balance`** - Current account balance
- **`kalshi://positions`** - Open positions list
## Installation
### Prerequisites
1. **Kalshi API credentials**: Get from [kalshi.com/profile/api-keys](https://kalshi.com/profile/api-keys)
- Download your API key ID
- Download the RSA private key (.pem file)
2. **Python 3.10+**
### Quick Install (PyPI)
```bash
pip install kalshi-mcp
```
### From Source
```bash
# Clone repository
git clone https://github.com/yakub268/kalshi-mcp.git
cd kalshi-mcp
# Install dependencies
pip install -e .
```
### Claude Desktop Configuration
Add to your Claude Desktop config file:
**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
"mcpServers": {
"kalshi": {
"command": "python",
"args": ["-m", "kalshi_mcp"],
"env": {
"KALSHI_API_KEY": "your-api-key-here",
"KALSHI_PRIVATE_KEY_PATH": "C:\\Users\\YourName\\.trading_keys\\kalshi_private_key.pem"
}
}
}
}
```
**Note**:
- Replace `your-api-key-here` with your actual Kalshi API key
- Update the private key path to where you saved your `.pem` file
- On Windows, use double backslashes (`\\`) in paths
### Test the Connection
Restart Claude Desktop, then try:
```
What's my Kalshi balance?
```
or
```
Search for bitcoin prediction markets
```
## Usage Examples
### Search for Markets
```
Search for markets about the Federal Reserve
```
### Get Market Analysis
```
Show me details for ticker KXFED-26MAR19-B5.25
```
### Check Portfolio
```
What's my current Kalshi portfolio?
```
### Place an Order
```
Buy 10 contracts of KXHIGHNYC-26FEB20-B34.5 YES at 25 cents
```
## Authentication
This server uses **RSA-PSS signature authentication**:
1. Each request is signed with your private key
2. Kalshi verifies the signature with your public key
3. Thread-safe rate limiting (150ms between requests)
4. Automatic retry on 429 rate limit errors
**Security**: Your private key never leaves your machine. The server only signs requests locally.
## Rate Limiting
- Built-in 150ms spacing between requests (~6.6 req/s)
- Automatic exponential backoff on 429 errors (0.5s → 1s → 2s)
- Safe for concurrent use across multiple Claude conversations
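The spacing-plus-backoff behavior above can be sketched in a few lines of stdlib Python; this is an illustrative version, not the server's actual implementation:

```python
import threading
import time

class RateLimiter:
    """Enforce a minimum interval between requests, shared across threads."""

    def __init__(self, min_interval: float = 0.15):
        self.min_interval = min_interval
        self._lock = threading.Lock()
        self._last_request = 0.0

    def wait(self) -> None:
        # Serialize callers so concurrent threads respect the same spacing.
        with self._lock:
            elapsed = time.monotonic() - self._last_request
            if elapsed < self.min_interval:
                time.sleep(self.min_interval - elapsed)
            self._last_request = time.monotonic()

def backoff_delays(retries: int = 3, base: float = 0.5) -> list[float]:
    """Exponential backoff schedule for 429 responses: 0.5s, 1s, 2s."""
    return [base * (2 ** i) for i in range(retries)]

print(backoff_delays())  # [0.5, 1.0, 2.0]
```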
## Architecture
Built on production code from a live Kalshi trading bot:
- **Authentication**: Reused from `kalshi_client.py` (4+ months uptime)
- **Rate limiting**: Shared across all client instances
- **Error handling**: Battle-tested retry logic
- **Market discovery**: Liquidity scoring from `scanner.py`
## Contributing
Issues and PRs welcome! This is an open-source project built to fill a gap in the MCP ecosystem.
## License
MIT License - see LICENSE file
## Acknowledgments
- Built with [FastMCP](https://github.com/jlowin/fastmcp)
- Kalshi API documentation: [docs.kalshi.com](https://docs.kalshi.com)
- Production code from my [trading bot arsenal](https://github.com/yakub268/trading_bot)
---
**Questions?** Open an issue or reach out on GitHub.
| text/markdown | yakub268 | null | null | null | MIT | claude-desktop, kalshi, mcp, prediction-markets, trading | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=42.0.0",
"httpx>=0.27.0",
"mcp>=1.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/yakub268/kalshi-mcp",
"Repository, https://github.com/yakub268/kalshi-mcp",
"Issues, https://github.com/yakub268/kalshi-mcp/issues"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-21T02:00:32.277809 | kalshi_mcp-0.1.1.tar.gz | 7,068,231 | 5b/7f/c8fe12a09ed12ec6bef6892097c72f90d115bb6ca3b5d00d56fdf005052f/kalshi_mcp-0.1.1.tar.gz | source | sdist | null | false | 0e7ac2da5d2410aa8c6041883ff5b79b | 1cd9206506523bc87255d4b6e35d5d3bf754a94847d3759ebf540d93fdfefce4 | 5b7fc8fe12a09ed12ec6bef6892097c72f90d115bb6ca3b5d00d56fdf005052f | null | [
"LICENSE"
] | 238 |
2.4 | cogames | 0.9.0 | Multi-agent cooperative games | # CoGames: A Game Environment for the Alignment League Benchmark
<p align="center">
<a href="https://pypi.org/project/cogames/">
<img src="https://img.shields.io/pypi/v/cogames" alt="PyPi version">
</a>
<a href="https://pypi.org/project/cogames/">
<img src="https://img.shields.io/pypi/pyversions/cogames" alt="Python version">
</a>
<a href="https://discord.gg/secret-hologenesis">
<img src="https://img.shields.io/discord/1309708848730345493?logo=discord&logoColor=white&label=Discord" alt="Discord">
</a>
<a href="https://deepwiki.com/Metta-AI/cogames">
<img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki">
</a>
<a href="https://colab.research.google.com/github/Metta-AI/cogames/blob/main/README.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab">
</a>
<a href="https://softmax.com/">
<img src="https://img.shields.io/badge/Softmax-Website-849EBE?logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyBpZD0iTGF5ZXJfMSIgZGF0YS1uYW1lPSJMYXllciAxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNTI5LjIyIDUzNy40NyI+CiAgPGRlZnM+CiAgICA8c3R5bGU+CiAgICAgIC5jbHMtMSB7CiAgICAgICAgY2xpcC1wYXRoOiB1cmwoI2NsaXBwYXRoKTsKICAgICAgfQoKICAgICAgLmNscy0yIHsKICAgICAgICBmaWxsOiBub25lOwogICAgICB9CgogICAgICAuY2xzLTIsIC5jbHMtMywgLmNscy00LCAuY2xzLTUgewogICAgICAgIHN0cm9rZS13aWR0aDogMHB4OwogICAgICB9CgogICAgICAuY2xzLTMgewogICAgICAgIGZpbGw6ICMwZTI3NTg7CiAgICAgIH0KCiAgICAgIC5jbHMtNCB7CiAgICAgICAgZmlsbDogI2JiY2NmMzsKICAgICAgfQoKICAgICAgLmNscy01IHsKICAgICAgICBmaWxsOiAjODU5ZWJlOwogICAgICB9CiAgICA8L3N0eWxlPgogICAgPGNsaXBQYXRoIGlkPSJjbGlwcGF0aCI+CiAgICAgIDxyZWN0IGNsYXNzPSJjbHMtMiIgd2lkdGg9IjUyOS4yMSIgaGVpZ2h0PSI1MzcuNDciLz4KICAgIDwvY2xpcFBhdGg+CiAgPC9kZWZzPgogIDxnIGNsYXNzPSJjbHMtMSI+CiAgICA8cGF0aCBjbGFzcz0iY2xzLTMiIGQ9Ik00MzUuNzksMTY3LjA5YzEuMzksMTQuNzIsMi4yOCwzNS4xNS4wNyw1OS4xOC0yLjc2LDMwLTguNDEsOTEuNDItNTMuMDYsMTQ0Ljg1LTEyLjcxLDE1LjIxLTMxLjUsMzcuMTctNjQuMjcsNTAuNzUtMjAuMSw4LjMzLTM5LjcyLDExLjE0LTU3LjA4LDExLjE0LTI3LjIxLDAtNDguODctNi44OS01OC4xNC0xMC4xOC0yNC4yOC04LjcxLTQ2LjI2LTEzLjg4LTY0LjgyLTE3LjAzLTkuODEtMS42Ny0xNy4yLTIuODktMjcuNTEtMy4zNy01LjMtLjI0LTExLjItLjU4LTE3LjUyLS41OC0xNC45NywwLTMyLjMxLDEuOTEtNDkuNjYsMTEuNTUtNy4yLDQtMjIuMzYsMTEuODItMzMuMjgsMjguNC0yLjMxLDMuNS01LjczLDcuMjYtNy4xLDEzLjE3LS43NywzLjMzLS44Myw2LjQ1LS41NSw5LjE3LDEwLjE4LDE2LjUsMzEuOSwyNS4xLDcwLjMxLDM5Ljc5LDQwLjU4LDE1LjUyLDc2LjQ2LDIzLjA4LDEwMy4zNiwyNy4yLDM5LjYyLDYuMDcsNzAuMjEsNi4yOSw4OC44NSw2LjM1LjY5LDAsMS4zOSwwLDIuMDgsMCw1MS4zMSwwLDg4Ljg1LTUuOTYsOTYuNzQtNy4yNiwyMS4xNS0zLjQ2LDUwLjY1LTguNDUsODYuOC0yMi4zOSwzOS41Mi0xNS4yNCw2Ni42MS0yNS42OCw3Ni4yNS01MC42OSwxLjUtMy44OCw0LjYzLTEzLjQzLTIuODYtNTUuMDctNy41Ny00Mi4xNS0xOC4xMi03My4xOS0xOS42Ny03Ny42OC0yMC45MS02MC43My0zMS4zNy05MS4wOS00Ny4xNS0xMjAuNTktNy4xNi0xMy4zOC0xNC4zNy0yNS40My0
yMS44Mi0zNi43MiIvPgogICAgPHBhdGggY2xhc3M9ImNscy01IiBkPSJNNDA1LjgsMTI2LjM4Yy0yLjcsMTQuMTMtNy40MywzMy40Ny0xNi4xOSw1NS4zOS05LjMzLDIzLjMzLTE3LjQzLDQzLjU5LTM2LjExLDYzLjctOS43MywxMC40Ny0zNC4xMSwzNi43LTcwLjQ5LDQwLjg5LTMuMjguMzgtNi40Ny41NS05LjYuNTUtMTUuMjQsMC0yOS4xMy00LjE2LTQ2LjQ4LTkuMzYtMjIuNjQtNi43OC0zMy45OC0xNC4zNy02MS42My0xOS4xOC0xMC4xNy0xLjc3LTE2LjI2LTIuODMtMjQuNTMtMi44M2gtLjM0cy02Mi4zLjIzLTEwOC45MSw2NS4xMmMtLjI4LjM5LS41NS43Ny0uNTUuNzctNy4zMiwxMC44Ny0xMS45MSwyMC44OC0xNC44NiwyOC43OC0yLjU1LDYuNzktMy45NywxMi4yNi01LjU0LDE4LjI3LTEuNiw2LjE1LTIuNzksMTEuNjItNi4zMSwzMS4xNy0xLjE0LDYuMzYtMi42MSwxNC41OS00LjI3LDI0LjI1LDYuNC0xMC45MSwxNy4xMi0yNS45LDM0LjItMzkuMywxNC41OS0xMS40NSwyNy45OC0xNy4xMywzMy4wNi0xOS4xNiwyLjg1LTEuMTMsMTMuNzUtNS4zNSwyOC45NS03LjgsMy45My0uNjMsMTIuMTgtMS44LDIzLjItMS44LDguMTMsMCwxNy43Ny42MywyOC4zLDIuNTgsNi42NywxLjIzLDE2LjYxLDMuMTMsMjguNDEsOC4xNSw0LjAxLDEuNywxMS4yMSw1LjAzLDE4Ljg1LDksNC45MiwyLjU2LDguMzcsNC41MywxNC4xOCw3LjE0LDQuOSwyLjIxLDkuMDMsMy43NiwxMS43OCw0Ljc0LDAsMCwxOS4yMyw2LjM2LDQwLjI0LDYuOTguOTkuMDMsMS45Ni4wNCwyLjkxLjA0LDUuMzQsMCw5LjY4LS4zOSw5LjY4LS4zOSw2LjYtLjI2LDE1LjktMS4xOCwyNi41NS00LjIsMzkuMjUtMTEuMTQsNjEuNDQtNDEuMDMsNzQuMDctNTguMDIsNDkuOTUtNjcuMTksNDcuOTMtMTY3Ljg1LDQ3LjQxLTE4NC43Ny01LjE3LTcuMDItMTAuNDktMTMuODYtMTYuMDItMjAuNyIvPgogICAgPHBhdGggY2xhc3M9ImNscy00IiBkPSJNMjYzLjg1LDBjLS4xNywwLS4zMywwLS40OSwwLTkuNTYuMTMtMTguOTcsMy45OC01MS40NSwzMy4zNC0zNC4xLDMwLjgzLTQ4Ljk2LDQ4LjA2LTQ4Ljk2LDQ4LjA2LTQ1Ljg0LDUzLjEzLTY4Ljc3LDc5LjY5LTkyLjQ4LDEyMS40OS0zMC4zLDUzLjQxLTQ0LjkxLDEwMC4wOS01MS4yMywxMjMuMDItLjU4LDIuMS0xLjE1LDQuMzItMS43Myw2LjcxLDIuMzItNS40OSw0Ljk0LTExLjE5LDcuOS0xNy4wNCwxMy4zOS0yNi40MiwzNi42MS03Mi4yMSw4My45My04OC44NiwxMy41NC00Ljc2LDI1Ljk2LTYuMDYsMzMuNzktNi4zNywxLjQ3LS4wNiwyLjkxLS4wOCw0LjMtLjA4LDI3LjM5LDAsMzguNzMsMTAuODIsNzIsMTguODMsMTUuMjcsMy42NywzMS43OCw3Ljg4LDQ5LjUsNy44OCw5LjM2LDAsMTkuMDYtMS4xNywyOS4xLTQuMjIsMzYuNDktMTEuMDYsNTYuNDQtNDEuMzMsNjMuMjUtNTEuNTMsMTUuMzYtMjMuMDIsMTkuOTUtNDUuMjMsMjEuOS01NS4xNCwyLjQ3LTEyLjU3LDMuMTQtMjMuODUsMy4wNC0zMy4xNC02LjMyLTcuMzUtMTIuOTI
tMTQuOTEtMTkuODctMjIuODYsMCwwLTE3LjM5LTE5Ljg5LTUxLjk4LTQ5LjQ4QzI4Mi41MSwzLjM5LDI3Mi41LDAsMjYzLjg1LDAiLz4KICA8L2c+Cjwvc3ZnPg==" alt="Softmax website">
</a>
</p>
The [Alignment League Benchmark (ALB)](https://www.softmax.com/alignmentleague) is a suite of multi-agent games, designed to measure how well AI agents align, coordinate, and collaborate with others (both AIs and humans).
CoGames is the games environment for ALB. You can use it to:
* create new games
* train agents to play existing ALB games
* submit those agents to the ALB leaderboard
There's one ALB game right now: Cogs vs Clips.
# Quick Start
## Step 1: Install CoGames
Install [cogames](https://pypi.org/project/cogames/) as a Python package.
```bash
pip install cogames
```
<details><summary>Using uv</summary>
```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create a virtual environment
uv venv .venv
source .venv/bin/activate
# Install cogames
uv pip install cogames
```
</details>
<details><summary>Using Docker</summary>
```dockerfile
# Ensure Python 3.12 is available
FROM python:3.12-slim
# Ensure C/C++ compiler is available
RUN apt-get update && \
apt-get install -y --no-install-recommends build-essential && \
rm -rf /var/lib/apt/lists/*
# Install cogames
RUN pip install --no-cache-dir cogames
```
</details>
<details><summary>Using Colab</summary>
[](https://colab.research.google.com/github/Metta-AI/cogames/blob/main/README.ipynb)
</details>
## Step 2: Play Cogs vs Clips
Play an easy mission in Cogs vs Clips using:
```bash
cogames tutorial play
```
The game will open in a new window, and the terminal will give you instructions to complete the training mission.
## Step 3: Submit a policy to the leaderboard
1. Log into the ALB leaderboard with your GitHub account.
```bash
cogames login
```
2. Upload a starter policy and submit it to the tournament.
```bash
cogames upload --policy "class=cogames.policy.starter_agent.StarterPolicy" --name "$USER.README-quickstart-starter-policy" --season beta-cvc-no-clips --skip-validation
```
3. Check your submission on the leaderboard.
```bash
cogames leaderboard --season beta-cvc-no-clips
```
# Tutorials
To learn more, see:
1. [Creating a policy](tutorials/01_MAKE_POLICY.md): Creating a custom policy and evaluating it
2. [Training](tutorials/02_TRAIN.md): Training a custom policy and evaluating it
3. [Submitting](tutorials/03_SUBMIT.md): Submitting to the leaderboard and understanding the results
If you want help, or to share your experience, join [the Discord](https://discord.gg/secret-hologenesis).
# About the game
CogsGuard is a cooperative territory-control game. Teams of AI agents ("Cogs") work together to capture and defend
junctions against automated opponents ("Clips") by:
* gathering resources and depositing them at controlled junctions
* acquiring specialized roles (Miner, Aligner, Scrambler, Scout) at gear stations
* capturing neutral junctions using Aligners (costs 1 heart + 1 influence)
* disrupting enemy junctions using Scramblers (costs 1 heart)
* defending territory from Clips expansion
Read [MISSION.md](MISSION.md) for a thorough description of the game mechanics.
<p align="center">
<img src="assets/cvc-reel.gif" alt="CogsGuard reel">
</p>
There are many mission configurations available, with different map sizes, junction layouts, and game rules.
Overall, CogsGuard aims to present rich environments with:
- **Territory control**: Capture and defend junctions to score points each tick
- **Role specialization**: Four roles (Miner, Aligner, Scrambler, Scout) with distinct capabilities and dependencies
- **Dense rewards**: Agents receive reward every tick proportional to territory controlled
- **Partial observability**: Agents have limited visibility of the environment
- **Required multi-agent cooperation**: No single role can succeed alone; Miners need Aligners to capture junctions,
Aligners need Miners for resources, Scramblers must clear enemy territory for Aligners to advance
# About the tournament
## API Docs
The tournament API is documented at [api.observatory.softmax-research.net/docs](https://api.observatory.softmax-research.net/docs). The interactive
OpenAPI spec describes all public endpoints for seasons, matches, leaderboards, and submissions.
## How seasons work
The ALB leaderboard runs in seasons. Each season has two pools:
- **Qualifying pool**: Where new submissions start. Your policy plays matches against other policies in the pool.
- **Competition pool**: Policies that score above a threshold in qualifying get promoted here.
To see active seasons and their pools:
```bash
cogames seasons
```
## How scoring works
When you submit a policy, it gets queued for matches against other policies in its pool. Our focal metric is VORP (Value Over Replacement Policy), which estimates how much your agent improves team performance compared to a baseline.
VORP is calculated by comparing:
- Replacement mean: The average score when only other pool policies play (no candidate)
- Candidate score: The score when your policy plays
The difference tells us how much value your policy adds to a team. A positive VORP means your policy makes teams better; a negative VORP means teams perform worse with your policy than without it.
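In code, the comparison above reduces to a simple difference of means; a sketch under the assumption that scores are averaged per match (the leaderboard's exact aggregation isn't documented here):

```python
def vorp(candidate_match_scores, replacement_match_scores):
    """Value Over Replacement Policy: candidate mean minus replacement mean."""
    candidate_mean = sum(candidate_match_scores) / len(candidate_match_scores)
    replacement_mean = sum(replacement_match_scores) / len(replacement_match_scores)
    return candidate_mean - replacement_mean

# Positive: teams score better with the candidate than with the baseline.
print(vorp([12.0, 14.0], [10.0, 10.0]))  # 3.0
```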
You can evaluate VORP locally before submitting:
```bash
cogames pickup --policy <YOUR_POLICY> --pool <POOL_POLICY>
```
## Viewing results
To check your submission status and match results:
```bash
cogames submissions
cogames leaderboard --season beta-cvc-no-clips
```
# Command Reference
Most commands are of the form `cogames <command> --mission [MISSION] --policy [POLICY] [OPTIONS]`
To specify a `MISSION`, you can:
- Use a mission name from the registry given by `cogames missions` (e.g. `training_facility_1`).
- Use a path to a mission configuration file (e.g. `path/to/mission.yaml`).
- Alternatively, specify a set of missions with `--mission-set`.
To specify a `POLICY`, use one of two formats:
- **URI format** (for checkpoint bundles):
- Point directly at a checkpoint bundle (directory or `.zip` containing `policy_spec.json`)
- Examples: `./train_dir/my_run:v5`, `./train_dir/my_run:v5.zip`, `s3://bucket/path/run:v5.zip`
- Use `:latest` suffix to auto-resolve the highest version: `./train_dir/checkpoints:latest`
- **Key-value format** (for explicit class + weights):
- `class=`: Policy shorthand or full class path from `cogames policies`, e.g. `class=lstm` or
`class=cogames.policy.random.RandomPolicy`.
- `data=`: Optional path to a weights file (e.g., `weights.safetensors`). Must be a file, not a directory.
- `proportion=`: Optional positive float specifying the relative share of agents that use this policy (default: 1.0).
- `kw.<arg>=`: Optional policy `__init__` keyword arguments (all values parsed as strings).
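A rough sketch of how a key-value policy spec might be parsed, assuming comma-separated pairs (a hypothetical helper, not cogames' actual parser):

```python
def parse_policy_spec(spec: str) -> dict:
    """Parse 'class=lstm,proportion=0.5,kw.hidden=256' into its parts."""
    result: dict = {"kwargs": {}}
    for pair in spec.split(","):
        key, _, value = pair.partition("=")
        if key.startswith("kw."):
            result["kwargs"][key[3:]] = value  # kw.* values stay as strings
        elif key == "proportion":
            result[key] = float(value)
        else:
            result[key] = value  # class=, data=, etc.
    return result

print(parse_policy_spec("class=lstm,proportion=0.5,kw.hidden=256"))
```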
You can view all the commands with
```bash
cogames --help
```
and you can view help for a given command with:
```bash
cogames [COMMAND] --help
```
## Missions Commands
### `cogames missions`
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> </span>
<span style="font-weight: bold"> </span><span style="color: #808000; text-decoration-color: #808000; font-weight: bold">Usage: </span><span style="font-weight: bold">cogames missions [OPTIONS] SITE </span>
<span style="font-weight: bold"> </span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> List available missions.
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">This command has three modes:</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f; font-weight: bold">1. List sites:</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f"> Run with no arguments to see all available sites.</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f; font-weight: bold">2. List missions at a site:</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f"> Pass a site name (e.g., 'cogsguard_machina_1') to see its missions.</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f; font-weight: bold">3. Describe a mission:</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f"> Use </span><span style="color: #7fbf7f; text-decoration-color: #7fbf7f; font-weight: bold">-m</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f"> to describe a specific mission. Only in this mode do </span><span style="color: #7fbfbf; text-decoration-color: #7fbfbf; font-weight: bold">--cogs</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">, </span><span style="color: #7fbfbf; text-decoration-color: #7fbfbf; font-weight: bold">--variant</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">, </span><span style="color: #7fbfbf; text-decoration-color: #7fbfbf; font-weight: bold">--format</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">, </span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">and </span><span style="color: #7fbfbf; text-decoration-color: #7fbfbf; font-weight: bold">--save</span><span style="color: #7f7f7f; text-decoration-color: #7f7f7f"> have any effect.</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> site <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">TEXT</span> Filter by site (e.g., cogsguard_machina_1) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Describe ──────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--mission</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">MISSION </span> Mission to describe <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--cogs</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-c</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">INTEGER </span> Override agent count (requires <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span>) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--variant</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-v</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">VARIANT </span> Apply variant (requires <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span>, repeatable) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--difficulty</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">LEVEL </span> Difficulty (easy, medium, hard) controlling clips events (requires <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span>) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--format</span> <span style="color: #bfbf7f; text-decoration-color: #bfbf7f; font-weight: bold">[</span><span style="color: #808000; text-decoration-color: #808000; font-weight: bold">yaml</span><span style="color: #bfbf7f; text-decoration-color: #bfbf7f; font-weight: bold">|</span><span style="color: #808000; text-decoration-color: #808000; font-weight: bold">json</span><span style="color: #bfbf7f; text-decoration-color: #bfbf7f; font-weight: bold">]</span> Output format (requires <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span>) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--save</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-s</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">PATH </span> Save config to file (requires <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span>) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Other ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--help</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-h</span> Show this message and exit <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">Examples:</span>
<span style="color: #008080; text-decoration-color: #008080">cogames missions</span> List all sites
<span style="color: #008080; text-decoration-color: #008080">cogames missions cogsguard_machina_1</span> List missions at site
<span style="color: #008080; text-decoration-color: #008080">cogames missions </span><span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span><span style="color: #008080; text-decoration-color: #008080"> cogsguard_machina_1.basic</span> Describe a mission
<span style="color: #008080; text-decoration-color: #008080">cogames missions </span><span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span><span style="color: #008080; text-decoration-color: #008080"> arena </span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--format</span><span style="color: #008080; text-decoration-color: #008080"> json</span> Output as JSON
</pre>
### `cogames evals`
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> </span>
<span style="font-weight: bold"> </span><span style="color: #808000; text-decoration-color: #808000; font-weight: bold">Usage: </span><span style="font-weight: bold">cogames evals [OPTIONS] </span>
<span style="font-weight: bold"> </span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> List all eval missions
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--help</span> Show this message and exit. <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
### `cogames variants`
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> </span>
<span style="font-weight: bold"> </span><span style="color: #808000; text-decoration-color: #808000; font-weight: bold">Usage: </span><span style="font-weight: bold">cogames variants [OPTIONS] </span>
<span style="font-weight: bold"> </span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> List all available mission variants
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--help</span> Show this message and exit. <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
### `cogames describe`
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> </span>
<span style="font-weight: bold"> </span><span style="color: #808000; text-decoration-color: #808000; font-weight: bold">Usage: </span><span style="font-weight: bold">cogames describe [OPTIONS] MISSION </span>
<span style="font-weight: bold"> </span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Describe a mission and its configuration
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #800000; text-decoration-color: #800000">*</span> mission <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">TEXT</span> Mission name (e.g., hello_world.open_world) <span style="color: #bf7f7f; text-decoration-color: #bf7f7f">[required]</span> <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Configuration ─────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--cogs</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-c</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">INTEGER</span> Number of cogs (agents) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--variant</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-v</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">VARIANT</span> Apply variant (repeatable) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--difficulty</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">LEVEL </span> Difficulty (easy, medium, hard) controlling clips events <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Other ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--help</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-h</span> Show this message and exit <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">Examples:</span>
<span style="color: #008080; text-decoration-color: #008080">cogames describe hello_world.open_world</span> Describe mission
<span style="color: #008080; text-decoration-color: #008080">cogames describe arena </span><span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-c</span><span style="color: #008080; text-decoration-color: #008080"> 4 </span><span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-v</span><span style="color: #008080; text-decoration-color: #008080"> dark_side</span> With 4 cogs and variant
</pre>
### `cogames make-mission`
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> </span>
<span style="font-weight: bold"> </span><span style="color: #808000; text-decoration-color: #808000; font-weight: bold">Usage: </span><span style="font-weight: bold">cogames make-mission [OPTIONS] </span>
<span style="font-weight: bold"> </span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Create a custom mission from a base template
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Mission ───────────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--mission</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-m</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">MISSION</span> Base mission to start from <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #7f7f7f; text-decoration-color: #7f7f7f">╭─ Customization ─────────────────────────────────────────────────────────────────────────────────────────────────╮</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--cogs</span> <span style="color: #008000; text-decoration-color: #008000; font-weight: bold">-c</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">INTEGER RANGE [x>=1</span><span style="color: #bfbf7f; text-decoration-color: #bfbf7f; font-weight: bold">]</span> Number of cogs (agents) <span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span>
<span style="color: #7f7f7f; text-decoration-color: #7f7f7f">│</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">--width</span> <span style="color: #808000; text-decoration-color: #808000; font-weight: bold">INTEGER RANGE [x>=1</span><span sty | text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.12 | [] | [] | [] | [
"mettagrid==0.9.2",
"packaging>=24.0.0",
"pufferlib-core",
"pydantic>=2.11.5",
"pyyaml>=6.0.2",
"questionary>=2.0.1",
"einops>=0.8.0",
"scipy>=1.15.3",
"typer>=0.19.2",
"rich>=13.7.0",
"questionary>=2.1.0",
"fastapi>=0.115.0",
"uvicorn>=0.34.0",
"httpx>=0.28.1",
"nbstripout>=0.8.0",
"nbconvert>=7.16.0",
"jupytext>=1.16.0",
"ipykernel>=6.29.5",
"pytest-httpserver>=1.1.0",
"pytest; extra == \"test\"",
"pytest-xdist; extra == \"test\"",
"ruff; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:00:16.618412 | cogames-0.9.0.tar.gz | 12,813,275 | b6/ad/515fcda2bef318e5132817c585b9bc633255bb2e83ea6248cdc7b5fc98ac/cogames-0.9.0.tar.gz | source | sdist | null | false | 37f5bd5b5b63613c9199a2abaec9f52e | f9ef3adda274beff125c4383d4fae899d44238f74095a97e8c09e77d75f0137f | b6ad515fcda2bef318e5132817c585b9bc633255bb2e83ea6248cdc7b5fc98ac | null | [
"LICENSE"
] | 31,587 |
2.3 | e2b | 2.13.3 | E2B SDK that give agents cloud environments | <p align="center">
<img width="100" src="https://raw.githubusercontent.com/e2b-dev/E2B/refs/heads/main/readme-assets/logo-circle.png" alt="e2b logo">
</p>
<h4 align="center">
<a href="https://pypi.org/project/e2b/">
<img alt="Last 1 month downloads for the Python SDK" loading="lazy" decoding="async" style="color:transparent;width:170px;height:18px" src="https://static.pepy.tech/personalized-badge/e2b?period=monthly&units=INTERNATIONAL_SYSTEM&left_color=BLACK&right_color=GREEN&left_text=PyPi%20Monthly%20Downloads">
</a>
</h4>
## What is E2B?
[E2B](https://www.e2b.dev/) is an open-source infrastructure that allows you to run AI-generated code in secure isolated sandboxes in the cloud. To start and control sandboxes, use our [JavaScript SDK](https://www.npmjs.com/package/@e2b/code-interpreter) or [Python SDK](https://pypi.org/project/e2b_code_interpreter).
## Run your first Sandbox
### 1. Install SDK
```
pip install e2b-code-interpreter
```
### 2. Get your E2B API key
1. Sign up to E2B [here](https://e2b.dev).
2. Get your API key [here](https://e2b.dev/dashboard?tab=keys).
3. Set environment variable with your API key
```
E2B_API_KEY=e2b_***
```
### 3. Execute code with code interpreter inside Sandbox
```py
from e2b_code_interpreter import Sandbox

with Sandbox.create() as sandbox:
    sandbox.run_code("x = 1")
    execution = sandbox.run_code("x+=1; x")
    print(execution.text)  # outputs 2
```
### 4. Check docs
Visit [E2B documentation](https://e2b.dev/docs).
### 5. E2B cookbook
Visit our [Cookbook](https://github.com/e2b-dev/e2b-cookbook/tree/main) to get inspired by examples with different LLMs and AI frameworks.
| text/markdown | e2b | hello@e2b.dev | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://e2b.dev/ | null | <4.0,>=3.10 | [] | [] | [] | [
"python-dateutil>=2.8.2",
"wcmatch<11.0,>=10.1",
"protobuf>=4.21.0",
"httpcore<2.0.0,>=1.0.5",
"httpx<1.0.0,>=0.27.0",
"attrs>=23.2.0",
"packaging>=24.1",
"typing-extensions>=4.1.0",
"dockerfile-parse<3.0.0,>=2.0.1",
"rich>=14.0.0"
] | [] | [] | [] | [
"Homepage, https://e2b.dev/",
"Repository, https://github.com/e2b-dev/e2b/tree/main/packages/python-sdk",
"Bug Tracker, https://github.com/e2b-dev/e2b/issues"
] | poetry/2.1.1 CPython/3.10.19 Linux/6.8.0-1044-azure | 2026-02-21T02:00:02.376845 | e2b-2.13.3.tar.gz | 134,041 | 7f/5f/770ad7c8aed0b79eaa7f4384494aef088fc3483990a1d920545853b401bc/e2b-2.13.3.tar.gz | source | sdist | null | false | 41b9a8f5a3d90e2f8380fb3ecbbc1f1e | b93afdca7a2b2b37f5030d5d19778459e0dc04d8d65387c61f368fe6a975c951 | 7f5f770ad7c8aed0b79eaa7f4384494aef088fc3483990a1d920545853b401bc | null | [] | 18,201 |
2.4 | rlabs-agentguard | 0.1.0 | A quality-assurance engine for LLM-generated code | # AgentGuard
> A quality-assurance engine for LLM-generated code.
> Python engine + HTTP protocol + MCP server + thin SDKs for any language.
---
## What It Does
AgentGuard sits between your AI coding agent and the LLM, ensuring that every piece of generated code is:
- **Structurally sound** — Parses, lints, type-checks before any human sees it
- **Properly scoped** — Project archetypes prevent over/under-engineering
- **Built top-down** — Skeleton → contracts → wiring → logic (general to particular)
- **Self-verified** — The LLM reviews its own output against explicit criteria
- **Cost-tracked** — Every token, every dollar, every model comparison — visible
---
## Installation
Requires **Python 3.11+**.
```bash
# Core library (Anthropic + OpenAI providers included)
pip install agentguard
# With HTTP server (FastAPI + Uvicorn)
pip install "agentguard[server]"
# With MCP server (for Claude Desktop, Cursor, Windsurf, Cline)
pip install "agentguard[mcp]"
# With all optional providers and transports
pip install "agentguard[all]"
```
### Optional LLM providers
```bash
pip install "agentguard[litellm]" # LiteLLM router (Ollama, Together, etc.)
pip install "agentguard[google]" # Google Gemini
```
### Verify installation
```bash
agentguard --version
agentguard list # Show available archetypes
agentguard info api_backend # Show archetype details
```
---
## How It Works
AgentGuard uses a **top-down generation pipeline** that builds code from architecture to implementation, not the other way around:
```
L1 Skeleton → What files exist and what each one does
L2 Contracts → Typed function/class stubs (signatures, no bodies)
L3 Wiring → Import statements and call-chain connections
L4 Logic → Actual function implementations
Validate → Syntax, lint, types, imports — mechanical checks
Challenge → LLM self-reviews against 30+ criteria per archetype
```
Each level constrains the next. The LLM can't hallucinate imports at L4 because L3 already defined them. It can't invent APIs because L2 already declared the signatures. This is why MCP-generated code has better architecture — it was designed before it was implemented.
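The "L3 constrains L4" idea can be made concrete with a small, self-contained sketch — not AgentGuard's actual code, just an illustration using the standard `ast` module: check that a generated module imports only the modules the wiring step declared.

```python
import ast

def imports_in(code: str) -> set[str]:
    """Collect top-level module names imported by a code string."""
    tree = ast.parse(code)
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

# Pretend L3 wiring declared these modules; L4 output must not exceed them.
allowed = {"models", "auth"}
generated = "from models import User\nimport auth\n"
extra = imports_in(generated) - allowed
print(extra)  # → set() — the generated code respects the declared wiring
```

A hallucinated `import requests` in the generated string would show up in `extra` and could be rejected mechanically, before any LLM review.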
### Archetypes
An **archetype** is a project blueprint that configures the entire pipeline. It defines:
- **Tech stack** — language, framework, test runner, linter
- **Expected file structure** — what files should exist and where
- **Validation rules** — what checks to run (syntax, lint, types, imports)
- **Challenge criteria** — what the self-review evaluates (30+ criteria for `react_spa`)
- **Maturity level** — `starter` (minimal) or `production` (full infrastructure)
- **Infrastructure files** — mandatory files the pipeline must generate (ErrorBoundary, logger, constants, etc.)
| Archetype | Use When | Language | Maturity |
|-----------|----------|----------|----------|
| `script` | One-off automation, data processing | Python | starter |
| `cli_tool` | CLI with subcommands, flags, help text | Python | starter |
| `api_backend` | REST API with routes, models, auth | Python | production |
| `web_app` | Full-stack app (React + API) | Python + TS | production |
| `library` | Reusable package with public API | Python | production |
| `react_spa` | Client-side SPA with routing, state, i18n | TypeScript | production |
Pick the archetype that matches your project. Production archetypes generate more infrastructure (error boundaries, logging, code-splitting, constants) — this is intentional.
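As a mental model, an archetype is just a frozen blueprint object that the pipeline reads its configuration from. The sketch below is purely illustrative — the real `Archetype` class and its field names may differ:

```python
from dataclasses import dataclass, field

@dataclass
class ArchetypeSketch:
    """Illustrative stand-in for an archetype blueprint (not the real class)."""
    name: str
    language: str
    maturity: str  # "starter" or "production"
    required_files: list[str] = field(default_factory=list)

# A production archetype mandates infrastructure files up front.
react_spa = ArchetypeSketch(
    name="react_spa",
    language="TypeScript",
    maturity="production",
    required_files=["src/ErrorBoundary.tsx", "src/logger.ts", "src/constants.ts"],
)
print(react_spa.maturity)  # → production
```

Because the blueprint names the mandatory infrastructure files, "did the pipeline generate an ErrorBoundary?" becomes a mechanical check rather than a matter of LLM mood.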
```bash
# See what an archetype expects
agentguard info react_spa
```
---
## Usage
There are **four ways** to use AgentGuard, depending on your setup:
### 1. CLI — Generate from the command line
The simplest way. No code needed.
```bash
# Generate a project from a spec
agentguard generate "A user auth API with JWT tokens, registration, and login" \
--archetype api_backend \
--model anthropic/claude-sonnet-4-20250514 \
--output ./my-api
# Validate existing code files
agentguard validate src/main.py src/models.py --archetype api_backend
# Self-challenge a file against quality criteria
agentguard challenge src/main.py --criteria "No hardcoded secrets" --criteria "Error handling on all I/O"
```
#### CLI Commands Reference
| Command | What It Does |
|---------|-------------|
| `agentguard generate SPEC` | Generate a full project from a natural-language spec |
| `agentguard validate FILES...` | Run structural checks on code files |
| `agentguard challenge FILE` | Self-challenge a file against quality criteria |
| `agentguard serve` | Start the HTTP API server (default port 8420) |
| `agentguard mcp-serve` | Start the MCP server (stdio or SSE transport) |
| `agentguard list` | List available archetypes |
| `agentguard info ARCHETYPE` | Show archetype details (tech stack, structure, rules) |
| `agentguard trace TRACE_FILE` | Display a trace file summary |
All commands support `--help` for full option details. Use `-v` for debug logging.
### 2. Python Library — Direct import
For building custom agents or integrating into existing Python workflows.
```python
import asyncio
from pathlib import Path

from agentguard import Pipeline, Archetype

async def main() -> None:
    # Load an archetype and create a pipeline
    arch = Archetype.load("api_backend")
    pipe = Pipeline(archetype=arch, llm="anthropic/claude-sonnet-4-20250514")

    # Generate code (returns files, trace, and cost)
    result = await pipe.generate(
        spec="A user authentication API with JWT tokens, registration, and login",
    )

    # Write files to disk
    for file_path, content in result.files.items():
        Path(file_path).write_text(content)

    # Inspect what happened
    print(result.trace.summary())
    # → 12 LLM calls | $0.34 total | 3 structural fixes | 1 self-challenge rework

asyncio.run(main())
```
#### Using individual modules
You don't have to use the full pipeline. Each module works standalone:
```python
from agentguard.validation.validator import Validator
from agentguard.challenge.challenger import SelfChallenger
from agentguard.archetypes.base import Archetype

# Validate code without generating it
validator = Validator(archetype=Archetype.load("api_backend"))
report = validator.check({"main.py": code_string})
print(report.passed)  # True/False

# Challenge code against custom criteria (awaited inside an async function)
challenger = SelfChallenger(llm=create_llm_provider("anthropic/claude-sonnet-4-20250514"))
result = await challenger.challenge(
    output=code_string,
    criteria=["No SQL injection", "All endpoints authenticated"],
)
```
#### Supported LLMs
```python
Pipeline(llm="anthropic/claude-sonnet-4-20250514") # Anthropic (built-in)
Pipeline(llm="openai/gpt-4o") # OpenAI (built-in)
Pipeline(llm="google/gemini-2.0-flash") # Google (pip install "agentguard[google]")
Pipeline(llm="litellm/ollama/llama3") # Any LiteLLM model (pip install "agentguard[litellm]")
```
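These model strings follow a `provider/model` convention: presumably the first path segment selects the provider and the remainder — which may itself contain slashes, as in the LiteLLM case — names the model. A minimal sketch of that parsing (an assumption about the routing, not AgentGuard's internals):

```python
def split_model_id(llm: str) -> tuple[str, str]:
    # Split on the FIRST slash only, so "litellm/ollama/llama3"
    # yields provider "litellm" and model "ollama/llama3".
    provider, _, model = llm.partition("/")
    return provider, model

print(split_model_id("anthropic/claude-sonnet-4-20250514"))
# → ('anthropic', 'claude-sonnet-4-20250514')
print(split_model_id("litellm/ollama/llama3"))
# → ('litellm', 'ollama/llama3')
```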
### 3. HTTP Server — For non-Python agents
Run AgentGuard as a service and call it from TypeScript, Go, Rust, or any language with HTTP.
```bash
# Start the server
agentguard serve --host 0.0.0.0 --port 8420
# Optional: require an API key
agentguard serve --api-key "my-secret-key"
# Optional: save traces to disk
agentguard serve --trace-store ./traces
```
Then call from any language:
```typescript
// TypeScript SDK (thin wrapper over HTTP)
import { AgentGuard } from "@agentguard/sdk";

const ag = new AgentGuard({ url: "http://localhost:8420" });
const result = await ag.generate({
  spec: "A user auth API with JWT tokens",
  archetype: "api_backend",
  llm: "anthropic/claude-sonnet-4-20250514",
});
```
```bash
# Or raw HTTP from any language
curl -X POST http://localhost:8420/generate \
  -H "Content-Type: application/json" \
  -d '{"spec": "A user auth API", "archetype": "api_backend"}'
```
### 4. MCP Server — For AI coding tools (recommended)
This is the most powerful integration. Your AI tool (Claude Desktop, Cursor, Windsurf, Cline) gains access to AgentGuard's tools directly. The LLM itself uses the tools during generation — no human in the loop.
**Step 1: Install with MCP support**
```bash
pip install "agentguard[mcp]"
```
**Step 2: Add to your AI tool's config**
```jsonc
// Claude Desktop: ~/.claude/claude_desktop_config.json
// Cursor: .cursor/mcp.json
// Windsurf: ~/.codeium/windsurf/mcp_config.json
// Cline: .vscode/cline_mcp_settings.json
{
  "mcpServers": {
    "agentguard": {
      "command": "agentguard",
      "args": ["mcp-serve"]
    }
  }
}
```
**Step 3: Ask your AI tool to build something**
The LLM will automatically discover and use AgentGuard's tools. A typical generation flow looks like:
```
You: "Build a whitelabel ecommerce SPA with i18n, seller onboarding,
      promo engine, and checkout"

LLM calls: skeleton(spec=..., archetype="react_spa")
           → Returns file tree with tiers and responsibilities

LLM calls: contracts_and_wiring(spec=..., skeleton_json=...)
           → Returns typed stubs + import wiring for every file

LLM: Generates all files following the stubs and wiring

LLM calls: get_challenge_criteria(archetype="react_spa")
           → Returns 36 quality criteria to self-review against

LLM: Reviews its own output, reports pass/fail per criterion
```
No API key is needed for the agent-native tools — the host LLM does all the generation, guided by AgentGuard's structured prompts. This is the key insight: **AgentGuard doesn't replace the LLM, it gives the LLM a disciplined process to follow.**
#### MCP Tools Reference
The MCP server exposes **13 tools** in two categories:
**Agent-native tools (no API key needed — the host LLM does the work):**
| Tool | Step | What It Returns |
|------|------|-----------------|
| `skeleton` | L1 | File tree with responsibilities, tiers (config/foundation/feature), and infrastructure file requirements |
| `contracts_and_wiring` | L2+L3 | Typed function stubs + import wiring per file, merged in one pass (saves ~15K tokens vs separate calls) |
| `contracts` | L2 | Typed stubs only (use `contracts_and_wiring` instead for most cases) |
| `wiring` | L3 | Import connections only (use `contracts_and_wiring` instead for most cases) |
| `logic` | L4 | Instructions for implementing one function body — call once per `NotImplementedError` stub |
| `get_challenge_criteria` | Review | Archetype-specific quality checklist (30+ criteria for `react_spa`) with review format instructions |
| `digest` | Review | Compact project summary (~200 lines) for efficient self-challenge without re-reading every file |
| `validate` | Check | Structural validation: syntax, lint, types, imports — returns pass/fail with details |
| `list_archetypes` | Info | Names and descriptions of all available archetypes |
| `get_archetype` | Info | Full archetype config: tech stack, validation rules, challenge criteria, infrastructure files |
| `trace_summary` | Info | Summary of the last generation: LLM calls, tokens, cost |
**Full-pipeline tools (require a separate LLM API key configured on the server):**
| Tool | What It Does |
|------|-------------|
| `generate` | Runs the entire L1→L2→L3→L4→validate→challenge pipeline using AgentGuard's internal LLM |
| `challenge` | LLM-based self-review using AgentGuard's internal LLM |
> **When to use which:** If your MCP host is already an LLM (Claude Desktop, Cursor, etc.), use the agent-native tools — they're free and the host LLM does better work when it follows the structured prompts itself. Use `generate`/`challenge` only if your MCP client is a thin script without its own LLM.
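To illustrate the kind of zero-cost mechanical gate the `validate` tool's syntax stage performs — this is a sketch under assumed behavior, not the shipped validator — Python's built-in `compile` is enough to catch parse errors before any LLM review happens:

```python
def syntax_ok(code: str, filename: str = "<generated>") -> tuple[bool, str]:
    """Return (passed, message) for the cheapest structural check: parsing."""
    try:
        compile(code, filename, "exec")
        return True, ""
    except SyntaxError as exc:
        return False, f"{filename}:{exc.lineno}: {exc.msg}"

print(syntax_ok("def handler(req):\n    return req")[0])   # → True
print(syntax_ok("def handler(req:\n    return req")[0])    # → False
```

Checks like this cost no tokens at all, which is why the pipeline runs them before the (paid) self-challenge step.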
#### SSE Transport (for remote MCP clients)
```bash
# Default: stdio (for local AI tools)
agentguard mcp-serve
# SSE transport (for network/remote clients)
agentguard mcp-serve --transport sse --port 8421
```
---
## Works With Any Agent Framework
AgentGuard integrates with your existing tooling — it's not a framework, it's infrastructure:
| Framework | Integration |
|-----------|-------------|
| **LangGraph** | Python nodes for each pipeline step |
| **CrewAI** | Python tools for generation + validation |
| **OpenHands** | Python micro-agent integration |
| **Raw Python** | No framework needed — direct library import |
| **TypeScript / Go / Rust / Any** | HTTP server + thin SDK |
| **Claude Desktop / Cursor / Windsurf / Cline** | MCP server — zero integration code |
---
## Core Modules
| Module | What It Does | Use Standalone? |
|--------|-------------|:---------------:|
| **Top-Down Generator** | L1 skeleton → L2 contracts → L3 wiring → L4 logic | ✅ |
| **Structural Validator** | Syntax, lint, types, imports — zero-cost mechanical checks | ✅ |
| **Self-Challenger** | LLM reviews its own output against acceptance criteria | ✅ |
| **Context Recipes** | Right context, right amount, right time — anti-hallucination | ✅ |
| **Archetypes** | Project blueprints that configure the entire pipeline | ✅ |
| **Tracing** | Every LLM call tracked with cost, tokens, and quality metrics | ✅ |
Every module works independently. Use the full pipeline or pick individual pieces.
---
## Benchmarks: MCP vs No-MCP Code Generation
We ran controlled comparisons generating the same project **with** and **without** the MCP pipeline, using the same LLM (Claude) in both cases. The pipeline doesn't make the LLM smarter — it makes it more **disciplined**.
### Test Projects
| Project | Spec | Domain Complexity |
|---------|------|-------------------|
| **Health Agenda** | Patient scheduling + medication tracking + alerts | Medium (3 domains) |
| **Whitelabel Ecommerce** | i18n, seller onboarding, promo engine, pricing, search, checkout | High (8+ domains) |
### Build Metrics
| Metric | Health MCP | Health No-MCP | Ecom MCP | Ecom No-MCP |
|--------|:----------:|:-------------:|:--------:|:-----------:|
| Files | 23 | 14 | 38 | 30 |
| Lines of code | 1,907 | 998 | 5,548 | 3,324 |
| TypeScript errors | 0 | 0 | 0 | 0 |
| Vite build errors | 0 | 0 | 0 | 0 |
| Code-split chunks | — | — | 16 | 1 |
### Self-Challenge Results (Ecommerce — 36 Criteria)
| Result | MCP | No-MCP |
|--------|:---:|:------:|
| **PASS** | **24/36 (67%)** | **23/36 (64%)** |
| **FAIL** | 12/36 | 13/36 |
Both versions share 9 common failures (magic numbers, DRY violations, inline styles, etc.). The key difference is in *what each version fails at*:
- **MCP passed, No-MCP failed:** async-compatible data layer, ErrorBoundary exists, loading/error states, fuller i18n coverage
- **No-MCP passed, MCP failed:** better context splitting (3 focused contexts vs 1 god-context)
### Enterprise Readiness
| Criterion | MCP | No-MCP |
|-----------|:---:|:------:|
| Type safety | 8/10 | 7/10 |
| Modularity | 8/10 | 5/10 |
| Maintainability | 6/10 | 5/10 |
| Accessibility | 5/10 | 4/10 |
| i18n readiness | 6/10 | 5/10 |
| Performance | 8/10 | 5/10 |
| Observability | 4/10 | 2/10 |
| Testability | 5/10 | 4/10 |
| **Overall** | **6.3/10** | **4.6/10** |
### Operational Readiness
| Dimension | MCP | No-MCP | Details |
|-----------|:---:|:------:|---------|
| **Debuggability** | 8/10 | 4/10 | MCP has structured logger, ErrorBoundary, pure reducer (action-traceable). No-MCP has no logging, no error boundary, opaque `useState` callbacks. |
| **Feature extensibility** | 7/10 | 5/10 | MCP's 6-layer architecture (types → utils → contexts → hooks → components → pages) with injectable function signatures. No-MCP has data-layer coupling — `validatePromo` imports seed at module scope. |
| **Cloud scalability** | 8/10 | 4/10 | MCP code-splits into 16 chunks (lazy per page), has centralized logger for Sentry/Datadog swap, constants file for feature flags. No-MCP ships a 240KB monolithic bundle, has zero logging, no error isolation. |
| **API migration cost** | 6/10 | 3/10 | MCP utils take data as arguments (`searchProducts(products, query)`) — injectable. No-MCP bakes `PRODUCTS.find()` into cart context computed values. |
| **Test surface** | 8/10 | 5/10 | MCP has 14+ pure functions testable without React rendering, plus an exportable reducer. No-MCP has 9+ but several have module-level seed imports baked in. |
| **Team onboarding** | 7/10 | 6/10 | MCP's layered DAG lets devs own a layer. No-MCP's flatter structure is simpler but offers fewer boundaries for parallel work. |
### What the MCP Pipeline Generates That No-MCP Skips
| Infrastructure | MCP | No-MCP | Why It Matters |
|----------------|:---:|:------:|----------------|
| ErrorBoundary | ✅ | ❌ | Without it, one page crash white-screens the whole app |
| Structured logger | ✅ | ❌ | Swap one file to connect Sentry/Datadog/CloudWatch |
| Code-splitting | ✅ | ❌ | 207KB initial load vs 240KB; independent chunk cache invalidation |
| Async hook (`useAsync`) | ✅ | ❌ | Loading/error states handled; ready for real API calls |
| Toast notification system | ✅ | ❌ | User feedback for every state mutation |
| Constants file | ✅ | ❌ | Natural home for feature flags and env-var extraction |
| Route constants | ✅ | ❌ | Change a URL in one place, not grep across files |
### Key Insight
> **The MCP pipeline's value isn't in the features it builds — both versions deliver the same checkout, search, and onboarding flows.** The value is in the *invisible infrastructure* it systematically generates: error boundaries, structured logging, code-splitting, pure utility extraction, injectable function signatures, and centralized constants.
>
> These are exactly the things that matter when you go from "it works on my laptop" to "it runs in production at scale." A solo dev building a prototype gets there faster without MCP. But the moment you need a second developer, a staging environment, or a Sentry integration, MCP's infrastructure pays for itself.
### The Gap Narrows With Complexity
| Metric | Health (MCP / No-MCP) | Ecommerce (MCP / No-MCP) |
|--------|:---------------------:|:------------------------:|
| Line ratio | 1.9× | 1.7× |
| Enterprise score | 7.5 / 4.5 | 6.3 / 4.6 |
| First-compile errors | 0 / 0 | 0 / 0 |
As projects grow more complex, the No-MCP agent produces proportionally more code (it can't avoid complexity). But the MCP pipeline's disciplined structure still delivers measurably higher enterprise quality and significantly better operational readiness.
## Demo Projects
The benchmarks above were produced from the following projects, generated by AgentGuard's MCP pipeline, with no-MCP baselines generated directly for comparison. You can regenerate them yourself:
```bash
# Generate the ecommerce SPA via MCP tools
agentguard generate --archetype react_spa \
--spec "Whitelabel ecommerce SPA with i18n, seller onboarding, promo engine, pricing, search, checkout" \
--model claude-sonnet-4-20250514
# Then validate and self-challenge the output
agentguard validate ./output --archetype react_spa
agentguard challenge ./output --archetype react_spa
```
| Project | Description |
|---------|-------------|
| Chess | Interactive chess game — MCP pipeline demo |
| Health Agenda (MCP) | Patient scheduling + medication tracking + alerts — MCP-generated |
| Health Agenda (No-MCP) | Same spec — direct generation baseline |
| Ecommerce (MCP) | Whitelabel ecommerce SPA — MCP-generated (38 files, 5,548 lines) |
| Ecommerce (No-MCP) | Same spec — direct generation baseline (30 files, 3,324 lines) |
## License
MIT — see [LICENSE](LICENSE).
| text/markdown | AgentGuard Team | null | null | null | null | agents, ai, code-generation, llm, quality-assurance | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Quality Assurance",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anthropic>=0.40.0",
"click>=8.1.0",
"jinja2>=3.1.0",
"openai>=1.50.0",
"pydantic>=2.0.0",
"pyyaml>=6.0",
"fastapi>=0.115.0; extra == \"all\"",
"google-genai>=1.0.0; extra == \"all\"",
"litellm>=1.50.0; extra == \"all\"",
"mcp>=1.0.0; extra == \"all\"",
"sse-starlette>=2.0.0; extra == \"all\"",
"uvicorn>=0.34.0; extra == \"all\"",
"httpx>=0.27.0; extra == \"dev\"",
"mypy>=1.13.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"types-pyyaml; extra == \"dev\"",
"google-genai>=1.0.0; extra == \"google\"",
"litellm>=1.50.0; extra == \"litellm\"",
"mcp>=1.0.0; extra == \"mcp\"",
"fastapi>=0.115.0; extra == \"server\"",
"sse-starlette>=2.0.0; extra == \"server\"",
"uvicorn>=0.34.0; extra == \"server\""
] | [] | [] | [] | [
"Homepage, https://github.com/rlabs-cl/AgentGuard",
"Documentation, https://github.com/rlabs-cl/AgentGuard#readme",
"Repository, https://github.com/rlabs-cl/AgentGuard",
"Issues, https://github.com/rlabs-cl/AgentGuard/issues",
"Changelog, https://github.com/rlabs-cl/AgentGuard/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:00:00.989601 | rlabs_agentguard-0.1.0.tar.gz | 174,039 | 47/0d/a8c3dd8dcd9c0fa4552d36aec24996141170b32bdfa6bbbb714257598d7b/rlabs_agentguard-0.1.0.tar.gz | source | sdist | null | false | dda19cf52518f9b6c9ec4363aa931f27 | e24b503ccd1f7d68f34b91c6dca9f931b13ed7a4c845ba97b1bde76d2b19fc6c | 470da8c3dd8dcd9c0fa4552d36aec24996141170b32bdfa6bbbb714257598d7b | MIT | [
"LICENSE"
] | 235 |
2.4 | debugger-decorator | 0.1.4 | A comprehensive Python function debugger with colors, variable tracking, and file output | # Debugger Decorator
A simple Python decorator for debugging functions.
## Installation
```bash
pip install debugger-decorator
```
## Usage
```python
from debugger_decorator import show_information
@show_information()
def my_function(a, b):
return a + b
my_function(1, 2)
```
This will print information about the function call, its arguments, return value, and execution time.
### Parameters
- `debug` (bool, default=True): Enable or disable debugging output.
- `color_scheme` (dict, optional): Custom color scheme for terminal output.
- `track_vars` (list or str, optional): Variables to track for changes during execution.
- `log_file` (str, optional): File path to write output instead of stdout (plain text, no colors).
- `deep_track` (bool, default=False): Use deep copying for variable tracking to detect in-place changes.
## Variable Tracking
Track changes to specific variables during function execution:
```python
@show_information(track_vars=['result'])
def calculate(x, y):
result = x + y
result *= 2
return result
calculate(3, 4)
```
For deep tracking of in-place modifications:
```python
@show_information(track_vars=['data'], deep_track=True)
def modify_list(data):
data.append(5) # Detected with deep_track
return data
```
## Output Redirection
Redirect debug output to a file:
```python
@show_information(log_file='debug.log')
def my_function(a, b):
return a + b
my_function(1, 2) # Output to debug.log
```
## Customizing Colors
You can customize the color scheme for better accessibility or personal preference:
```python
from debugger_decorator import show_information
from colorama import Fore, Style, Back
# High contrast color scheme
high_contrast_colors = {
'header': Style.BRIGHT + Fore.WHITE,
'params': Fore.CYAN,
'running': Style.BRIGHT + Fore.YELLOW,
'return': Back.GREEN + Style.BRIGHT + Fore.BLACK,
'time': Style.BRIGHT + Fore.GREEN,
'error': Style.BRIGHT + Fore.RED,
'dashes': Style.BRIGHT + Fore.WHITE,
'error_dashes': Style.BRIGHT + Fore.RED,
'tracking': Style.BRIGHT + Fore.MAGENTA,
}
@show_information(color_scheme=high_contrast_colors)
def my_function(a, b):
return a + b
my_function(1, 2)
```
Available color keys:
- `header`: Function name and caller info
- `params`: Parameter names and values
- `running`: "Running . . ." message
- `return`: Return value display
- `time`: Execution time
- `error`: Error messages
- `dashes`: Separator lines
- `error_dashes`: Error separator lines
- `tracking`: Variable change messages
Values combine Colorama styles (`Style.DIM`, `Style.NORMAL`, `Style.BRIGHT`) and colors (`Fore.*`, `Back.*`).
## Error Handling
The decorator catches and displays exceptions with colors:
```python
@show_information()
def failing_function():
raise ValueError("Something went wrong")
failing_function() # Shows error message and re-raises
```
## Testing
Run the test suite:
```bash
python -m pytest tests/ -v
```
## Performance Notes
- Variable tracking uses `sys.settrace`, which may slow down execution for large functions.
- A warning is printed if execution takes more than 1 second.
- For production, set `debug=False` to disable output.
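The tracking mechanism can be illustrated with a short stdlib-only sketch (a simplification, not this library's actual implementation): `sys.settrace` installs a callback that fires before every line executes, letting us watch a named local variable for new values.

```python
import sys

def track_local(func, var_name):
    """Call func() and record each new value bound to var_name."""
    changes = []

    def tracer(frame, event, arg):
        # "line" events fire before each line runs inside traced frames.
        if event == "line" and var_name in frame.f_locals:
            value = frame.f_locals[var_name]
            if not changes or changes[-1] != value:
                changes.append(value)
        return tracer  # keep tracing this frame line by line

    sys.settrace(tracer)
    try:
        result = func()
    finally:
        sys.settrace(None)  # always restore normal execution
    return result, changes

def calculate():
    result = 3 + 4   # not yet bound when this line's event fires
    result *= 2      # event sees result == 7 here
    return result    # event sees result == 14 here

value, history = track_local(calculate, "result")
# value == 14, history == [7, 14]
```

Because the trace callback runs on every line of every traced frame, this is also where the library's performance overhead comes from.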
## Development
### Building the Package
To build the package for distribution:
```bash
pip install build
python -m build
```
This creates `dist/debugger_decorator-<version>.tar.gz` and `dist/debugger_decorator-<version>-py3-none-any.whl`.
### Publishing to PyPI
1. Install Twine:
```bash
pip install twine
```
2. Upload to Test PyPI (recommended first):
```bash
twine upload --repository testpypi dist/*
```
3. Upload to Production PyPI:
```bash
twine upload dist/*
```
You'll need a PyPI account and API token. Set it up at [pypi.org](https://pypi.org/account/register/) and use `__token__` as username with your token as password.
| text/markdown | null | Michail Panagiotis Bofos <mbofos@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Debuggers"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"colorama",
"pretty_errors"
] | [] | [] | [] | [
"Homepage, https://github.com/mbofos01/debugger-decorator"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-21T01:57:35.955519 | debugger_decorator-0.1.4.tar.gz | 7,477 | 7f/3f/c5a3c0225ccb72cd4d230570582343d1ba216b6ef992afedcbbb06bf793c/debugger_decorator-0.1.4.tar.gz | source | sdist | null | false | 97e219898dfee89048a73e7bdcb929df | b92fd4ace86f3d8517ca5a57d032d224a81eedcb1b9369878a0f2c707e1dc455 | 7f3fc5a3c0225ccb72cd4d230570582343d1ba216b6ef992afedcbbb06bf793c | MIT | [
"LICENSE"
] | 235 |
2.4 | nwave-ai | 1.1.22 | Modern CLI installer for nwave with rich terminal UI | # nWave
AI agents that guide you from idea to working code — with you in control at every step.
nWave runs inside [Claude Code](https://claude.com/product/claude-code). You describe what to build. Specialized agents handle requirements, architecture, test design, and implementation. You review and approve at each stage.
## Quick Start
**1. Install** (in your terminal — not inside Claude Code):
```bash
pipx install nwave-ai
nwave-ai install
```
No repository clone needed. This installs nWave from PyPI and sets up agents and commands in `~/.claude/`.
> **Don't have pipx?** Install it first: `pip install pipx && pipx ensurepath`, then restart your terminal. [pipx docs](https://pipx.pypa.io).
> **Windows users**: Use WSL, not cmd.exe or PowerShell. Install WSL first: `wsl --install`
Full setup details: **[Installation Guide](https://github.com/nWave-ai/nWave/blob/main/docs/guides/installation-guide.md)**
**2. Use** (inside Claude Code, after reopening it):
```
/nw:discuss "user login with email and password" # Requirements
/nw:design --architecture=hexagonal # Architecture
/nw:distill "user-login" # Acceptance tests
/nw:deliver # TDD implementation
```
Four commands. Four human checkpoints. One working feature.
Full walkthrough: **[Your First Feature](docs/guides/tutorial-first-feature.md)**
## How It Works
```text
machine human machine human machine
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
Agent ──→ Documentation ──→ Review ──→ Decision ──→ Agent ──→ ...
generates artifacts validates approves continues
```
Each wave produces artifacts that you review before the next wave begins. The machine never runs unsupervised end-to-end.
The full workflow has six waves. Use all six for greenfield projects, or jump straight to `/nw:deliver` for brownfield work.
| Wave | Command | Agent | Produces |
|------|---------|-------|----------|
| DISCOVER | `/nw:discover` | product-discoverer | Market validation |
| DISCUSS | `/nw:discuss` | product-owner | Requirements |
| DESIGN | `/nw:design` | solution-architect | Architecture + ADRs |
| DEVOPS | `/nw:devops` | platform-architect | Infrastructure readiness |
| DISTILL | `/nw:distill` | acceptance-designer | Given-When-Then tests |
| DELIVER | `/nw:deliver` | software-crafter | Working implementation |
22 agents total: 6 wave agents, 5 cross-wave specialists, 11 peer reviewers. Full list: **[Commands Reference](docs/reference/commands/index.md)**
## Documentation
### Getting Started
- **[Installation Guide](https://github.com/nWave-ai/nWave/blob/main/docs/guides/installation-guide.md)** — Setup instructions
- **[Your First Feature](docs/guides/tutorial-first-feature.md)** — Build a feature end-to-end (tutorial)
- **[Jobs To Be Done](docs/guides/jobs-to-be-done-guide.md)** — Which workflow fits your task
### Guides & Reference
- **[Agents & Commands Reference](docs/reference/index.md)** — All agents, commands, skills, templates
- **[Invoke Reviewers](docs/guides/invoke-reviewer-agents.md)** — Peer review workflow
- **[Troubleshooting](docs/guides/troubleshooting-guide.md)** — Common issues and fixes
## Community
- **[Discord](https://discord.gg/DeYdSNk6)** — Questions, feedback, success stories
- **[GitHub Issues](https://github.com/nWave-ai/nWave/issues)** — Bug reports and feature requests
- **[Contributing](CONTRIBUTING.md)** — Development setup and guidelines
## License
MIT — see [LICENSE](LICENSE) for details.
| text/markdown | null | Michele Brissoni <michele.brissoni@brix.consulting>, Alessandro Digioia <alessandro.digioia@brix.consulting> | null | null | MIT | ai, automation, cli, developer-tools, installer | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools",
"Topic :: System :: Installation/Setup",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"packaging>=21.0",
"platformdirs>=4.2.0",
"pydantic-settings>=2.1.0",
"pydantic>=2.5.0",
"pyyaml>=6.0",
"rich>=13.7.0",
"tomli>=2.0.0; python_version < \"3.11\"",
"typer>=0.12.0",
"allure-pytest>=2.13.0; extra == \"dev\"",
"bandit[toml]>=1.7.0; extra == \"dev\"",
"click>=8.0.0; extra == \"dev\"",
"colorama>=0.4.0; extra == \"dev\"",
"commitizen>=3.12.0; extra == \"dev\"",
"cosmic-ray>=8.4.0; extra == \"dev\"",
"detect-secrets>=1.4.0; extra == \"dev\"",
"fpdf2>=2.7.0; extra == \"dev\"",
"gitlint>=0.19.0; extra == \"dev\"",
"mutmut>=2.4.0; extra == \"dev\"",
"mypy>=1.8.0; extra == \"dev\"",
"pre-commit>=3.6.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-bdd>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-describe-it>=0.1.0; extra == \"dev\"",
"pytest-describe>=2.1.0; extra == \"dev\"",
"pytest-html>=4.0.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"pytest-pspec>=0.0.4; extra == \"dev\"",
"pytest-watch>=4.2.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"python-semantic-release>=9.0.0; extra == \"dev\"",
"respx>=0.20.0; extra == \"dev\"",
"ruff==0.15.0; extra == \"dev\"",
"tqdm>=4.64.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"test\"",
"pytest-bdd>=7.0.0; extra == \"test\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest-xdist>=3.5.0; extra == \"test\"",
"pytest>=8.0.0; extra == \"test\"",
"respx>=0.20.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/nwave-ai/nwave",
"Documentation, https://github.com/nwave-ai/nwave#readme",
"Repository, https://github.com/nwave-ai/nwave",
"Issues, https://github.com/nwave-ai/nwave/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:55:49.235067 | nwave_ai-1.1.22.tar.gz | 1,940,049 | c4/5a/9bce2ca09deafa10988f850716dd5cd04d34e4b14219288de53a7977f17d/nwave_ai-1.1.22.tar.gz | source | sdist | null | false | 5c778227ce946930679e0a8fd560bdfe | 0c522fa5d472e0d6f793298ce476024964d65d851051570439485eed431d5518 | c45a9bce2ca09deafa10988f850716dd5cd04d34e4b14219288de53a7977f17d | null | [
"LICENSE"
] | 246 |
2.4 | lore-framework-mcp | 1.2.10 | MCP server for Lore Framework - AI-readable project memory with task/ADR/wiki management | # Lore Framework MCP (Python)
MCP server for Lore Framework - AI-readable project memory with task/ADR/wiki management.
> **Important:** This MCP server is just one component of Lore Framework. For the complete experience (skills, hooks, agents), install the full plugin: [lore-framework plugin](https://github.com/maledorak/maledorak-marketplace/tree/main/plugins/lore-framework)
## Installation
```bash
# Using uvx (recommended)
uvx lore-framework-mcp
# Using pip
pip install lore-framework-mcp
```
## Usage
### As MCP Server
Add to your `.mcp.json`:
```json
{
"mcpServers": {
"lore-framework": {
"command": "uvx",
"args": ["lore-framework-mcp"]
}
}
}
```
### As CLI
```bash
# Set current user
lore-framework-mcp set-user <user_id>
lore-framework-mcp set-user --env # from LORE_SESSION_CURRENT_USER
# Set current task
lore-framework-mcp set-task <task_id>
# Show session state
lore-framework-mcp show-session
# List available users
lore-framework-mcp list-users
# Clear current task
lore-framework-mcp clear-task
# Regenerate indexes
lore-framework-mcp generate-index
lore-framework-mcp generate-index --next-only --quiet
```
## MCP Tools
| Tool | Description |
|------|-------------|
| `lore_framework_set_user` | Set current user from team.yaml |
| `lore_framework_set_task` | Set current task by ID (creates symlink) |
| `lore_framework_show_session` | Show current session state (user and task) |
| `lore_framework_list_users` | List available users from team.yaml |
| `lore_framework_clear_task` | Clear current task symlink |
| `lore_framework_generate_index` | Regenerate lore/README.md and next-tasks.md |
## Why Lore?
**LLMs need "why", not just "what".**
Without history, LLMs treat existing code patterns as gospel—replicating legacy hacks, undocumented workarounds, and accidental complexity. Lore provides AI-readable project memory: tasks capture requirements, worklogs show reasoning, ADRs explain decisions.
**Read more:** [Full motivation](https://github.com/maledorak/maledorak-marketplace/tree/main/plugins/lore-framework#motivation)
## Lore Directory Structure
```
lore/
├── 0-session/ # Session state (gitignored)
│ ├── team.yaml # Team members definition
│ ├── current-user.md # Active user (generated)
│ ├── current-task.md # Symlink to active task
│ └── next-tasks.md # Auto-generated task queue
├── 1-tasks/ # Task management
│ ├── active/ # In-progress tasks
│ ├── blocked/ # Blocked tasks
│ ├── backlog/ # Planned tasks
│ └── archive/ # Completed tasks
├── 2-adrs/ # Architecture Decision Records
├── 3-wiki/ # Project documentation
└── README.md # Auto-generated index
```
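The `current-task.md` symlink mechanism is simple enough to sketch with the stdlib (illustrative only, not the package's actual code): setting a task means repointing one symlink, so any tool can resolve "the active task" with a plain file read.

```python
import os
import tempfile
from pathlib import Path

# Build a throwaway lore/ layout to demonstrate the idea.
root = Path(tempfile.mkdtemp())
task = root / "1-tasks" / "active" / "task-001.md"
task.parent.mkdir(parents=True)
task.write_text("# Task 001\n")

session = root / "0-session"
session.mkdir()
link = session / "current-task.md"
if link.is_symlink():
    link.unlink()  # clear any previously set task
# Use a relative target so the lore/ tree stays relocatable.
link.symlink_to(os.path.relpath(task, session))

print(link.read_text())  # resolves through the symlink to the task file
```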
## Documentation
See full documentation: [Lore Framework Plugin](https://github.com/maledorak/maledorak-marketplace/tree/main/plugins/lore-framework)
## Author
<div>
<a href="https://twitter.com/maledorak">
<img src="https://img.shields.io/badge/X/Twitter-000000?style=for-the-badge&logo=x&logoColor=black&color=white" />
</a>
<a href="https://www.linkedin.com/in/mariuszkorzekwa/">
<img src="https://img.shields.io/badge/LinkedIn-000000?style=for-the-badge&logo=linkedin&logoColor=black&color=white" />
</a>
<a href="https://github.com/maledorak">
<img src="https://img.shields.io/badge/GitHub-000000?style=for-the-badge&logo=github&logoColor=black&color=white" />
</a>
</div>
## License
MIT
| text/markdown | null | "Mariusz (Maledorak) Korzekwa" <mariusz@korzekwa.dev> | null | null | null | adr, ai-memory, claude, claude-code, lore, mcp, project-management, task-tracking | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"python-frontmatter>=1.0.0",
"pyyaml>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/maledorak/maledorak-marketplace/tree/main/packages/lore-framework-mcp-py",
"Repository, https://github.com/maledorak/maledorak-marketplace"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:55:16.804017 | lore_framework_mcp-1.2.10.tar.gz | 8,916 | af/56/12b4f7aec63ec9bcb8f54c31d228f554205dbef66f073014f31363a6be9e/lore_framework_mcp-1.2.10.tar.gz | source | sdist | null | false | 7c63e710f86c7ee56ff5d17d4b4d4f9f | 65992259ad28b4d1237c7910e8fc8bf8b67e5252cb095af0794bc19b56fe8882 | af5612b4f7aec63ec9bcb8f54c31d228f554205dbef66f073014f31363a6be9e | MIT | [] | 232 |
2.4 | zpl-toolchain | 0.1.16 | ZPL II toolchain — parse, validate, format, and print Zebra Programming Language files (part of the zpl-toolchain project) | # zpl-toolchain

Python bindings for the [zpl-toolchain](https://github.com/trevordcampbell/zpl-toolchain) — a spec-first, offline, deterministic ZPL II toolchain for parsing, validating, formatting, and printing Zebra Programming Language files.
Built with Rust for performance, exposed to Python via [PyO3](https://pyo3.rs/).
## Installation
```bash
pip install zpl-toolchain
```
## Quick Start
```python
import zpl_toolchain
# Parse ZPL — returns native dict/list structures by default
result = zpl_toolchain.parse("^XA^FDHello^FS^XZ")
print(f"Labels: {len(result['ast']['labels'])}")
# Validate ZPL
validation = zpl_toolchain.validate("^XA^FDHello^FS^XZ")
print(f"Valid: {validation['ok']}")
# Validate with explicit parser tables
# (tables are embedded by default for parse/validate; this is only needed for explicit override flows)
tables_json = open("generated/parser_tables.json").read()
validation2 = zpl_toolchain.validate_with_tables("^XA^FDHello^FS^XZ", tables_json)
print(f"Valid with tables: {validation2['ok']}")
# Format ZPL
formatted = zpl_toolchain.format("^XA^FD Hello ^FS^XZ", "label")
# Format with field compaction
compact = zpl_toolchain.format("^XA^FO30,30^A0N,30,30^FDHello^FS^XZ", "none", "field")
print(formatted)
# Explain a diagnostic code
explanation = zpl_toolchain.explain("ZPL1201")
print(explanation)
```
## Printing
Send ZPL directly to network printers over TCP:
```python
import zpl_toolchain
# Print ZPL to a printer (with optional validation)
result = zpl_toolchain.print_zpl(
"^XA^FDHello^FS^XZ",
"192.168.1.100", # printer address (IP or hostname:port)
)
print(f"Success: {result['success']}, Bytes sent: {result['bytes_sent']}")
# Print with profile-based validation
profile_json = open("profiles/zebra-generic-203.json").read()
result = zpl_toolchain.print_zpl(
"^XA^FDHello^FS^XZ",
"192.168.1.100",
profile_json, # optional printer profile for validation
True, # validate before sending
)
# Query printer status
status = zpl_toolchain.query_printer_status("192.168.1.100")
print(f"Paper out: {status['paper_out']}, Paused: {status['paused']}")
# Query printer status with timeout/config overrides
status = zpl_toolchain.query_printer_status_with_options(
"192.168.1.100",
timeout_ms=2000,
config_json='{"retry":{"max_attempts":2}}',
)
# Query printer identification (~HI)
info = zpl_toolchain.query_printer_info("192.168.1.100")
print(f"Model: {info.get('model')}, Firmware: {info.get('firmware')}")
# Print with timeout/config overrides
result = zpl_toolchain.print_zpl_with_options(
"^XA^FDHello^FS^XZ",
"192.168.1.100",
timeout_ms=1500,
config_json='{"timeouts":{"read_ms":4000}}',
)
```
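Under the hood, network printing is a raw TCP write; Zebra printers conventionally accept raw ZPL on port 9100. A minimal stdlib sketch of the transport (not this package's implementation), exercised here against a local stand-in listener instead of real hardware:

```python
import socket
import threading

ZPL = b"^XA^FDHello^FS^XZ"

def send_zpl(zpl: bytes, host: str, port: int = 9100, timeout: float = 5.0) -> int:
    """Open a TCP connection, write the ZPL payload, and return bytes sent."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(zpl)
    return len(zpl)

# Stand-in "printer": a local listener that just collects the payload.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
received = bytearray()

def accept_once():
    conn, _ = server.accept()
    while chunk := conn.recv(1024):
        received.extend(chunk)
    conn.close()

t = threading.Thread(target=accept_once)
t.start()
sent = send_zpl(ZPL, "127.0.0.1", server.getsockname()[1])
t.join()
server.close()
# sent == len(ZPL); received holds the exact bytes "printed"
```

The real `print_zpl` adds validation, retries, and timeout handling on top of this basic write.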
## API
`parse`, `parse_with_tables`, `validate`, and all print/query functions return native Python `dict`/`list` objects.
`format` returns a plain formatted string, and `explain` returns a plain string (or `None`).
### Core Functions
| Function | Signature | Description |
|----------|-----------|-------------|
| `parse` | `(input: str) -> dict` | Parse ZPL, return AST + diagnostics |
| `parse_with_tables` | `(input: str, tables_json: str) -> dict` | Parse with explicit parser tables |
| `validate` | `(input: str, profile_json: str? = None) -> dict` | Parse + validate (optional profile) |
| `validate_with_tables` | `(input: str, tables_json: str, profile_json: str? = None) -> dict` | Parse + validate using explicit parser tables |
| `format` | `(input: str, indent: str? = None, compaction: str? = None) -> str` | Format ZPL (`indent`: `"none"`, `"label"`, `"field"`; `compaction`: `"none"` or `"field"`) |
| `explain` | `(id: str) -> str?` | Explain a diagnostic code, or `None` |
### Print Functions
| Function | Signature | Description |
|----------|-----------|-------------|
| `print_zpl` | `(zpl: str, addr: str, profile: str? = None, validate: bool = True) -> dict` | Send ZPL to a network printer over TCP |
| `print_zpl_with_options` | `(zpl: str, addr: str, profile: str? = None, validate: bool = True, timeout_ms: int? = None, config_json: str? = None) -> dict` | Print with timeout/config overrides |
| `query_printer_status` | `(addr: str) -> dict` | Query `~HS` host status from a printer |
| `query_printer_status_with_options` | `(addr: str, timeout_ms: int? = None, config_json: str? = None) -> dict` | Query `~HS` with timeout/config overrides |
| `query_printer_info` | `(addr: str) -> dict` | Query `~HI` printer identification |
| `query_printer_info_with_options` | `(addr: str, timeout_ms: int? = None, config_json: str? = None) -> dict` | Query `~HI` with timeout/config overrides |
## Features
- **46 diagnostic codes** covering syntax, semantics, formatting, and preflight checks
- **Printer profiles** for model-specific validation (label dimensions, DPI, memory limits)
- **Deterministic output** — identical input always produces identical results
- **Spec-driven** — parser tables generated from ZPL II command specifications
- **Fast** — native Rust performance with zero Python runtime overhead
## Requirements
- Python 3.9+
- No additional dependencies (self-contained native extension)
## Documentation
- [Print Client Guide](https://github.com/trevordcampbell/zpl-toolchain/blob/main/docs/PRINT_CLIENT.md)
- [Diagnostic Codes](https://github.com/trevordcampbell/zpl-toolchain/blob/main/docs/DIAGNOSTIC_CODES.md)
- [GitHub Repository](https://github.com/trevordcampbell/zpl-toolchain)
## Building from Source
```bash
pip install maturin
# Build parser tables first
cargo run -p zpl_toolchain_spec_compiler -- build --spec-dir spec --out-dir generated
# Build and install (development mode)
maturin develop -m crates/python/Cargo.toml
# Or build a wheel
maturin build -m crates/python/Cargo.toml
```
## License
Dual-licensed under MIT or Apache-2.0.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT OR Apache-2.0 | zpl, zebra, label, printer, parser, printing | [
"Development Status :: 3 - Alpha",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Software Development :: Compilers",
"Topic :: Text Processing",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/trevordcampbell/zpl-toolchain",
"Issues, https://github.com/trevordcampbell/zpl-toolchain/issues",
"Repository, https://github.com/trevordcampbell/zpl-toolchain"
] | maturin/1.12.3 | 2026-02-21T01:55:06.414895 | zpl_toolchain-0.1.16-cp312-cp312-manylinux_2_34_x86_64.whl | 689,877 | dc/e8/44da60890e4a80ebcaf3e7b50888a50848a7621cb1688a17d356a1049efd/zpl_toolchain-0.1.16-cp312-cp312-manylinux_2_34_x86_64.whl | cp312 | bdist_wheel | null | false | 4c781c678edd36902e4ab52e82a4765b | c7a4281fa66ea87cd79a2aba3a66b89dbb604627cb16c8b0d507660825c91e94 | dce844da60890e4a80ebcaf3e7b50888a50848a7621cb1688a17d356a1049efd | null | [] | 80 |
2.4 | gvdot | 1.2.1 | Generate and render optionally themed Graphviz diagrams with simple, easy to maintain code. | [](https://github.com/escreven/gvdot/blob/main/.github/workflows/test.yml)
[](https://gvdot.readthedocs.io)
[](https://github.com/escreven/gvdot)

Generate and render Graphviz diagrams with clear, maintainable code by
separating presentation from structure.
The heart of gvdot is the class `Dot`, a DOT language builder. Applications
create diagrams using `Dot` methods, then either convert the instance to DOT
language text or render it as SVG or an image. Users can also interactively
display Dot objects in notebooks.
### Example
Suppose we want to generate diagrams of nondeterministic finite automata like
this:

represented by instances of
```python
from dataclasses import dataclass

@dataclass
class NFA:
    alphabet: str
    delta: dict[str, list[list[str]]]
    final: list[str]
    start: str
```
where `delta["q"][0]` is the list of states reached from state *q* by epsilon
transitions, and `delta["q"][i]` is the list of states reached from *q* by
symbol `alphabet[i-1]`.
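To make this encoding concrete, here is a small stand-alone helper (not part of gvdot) that expands `delta` into labelled transitions, the same way the diagram code below does:

```python
def transitions(alphabet: str, delta: dict) -> list:
    """Expand the NFA encoding into (source, label, target) triples.

    delta[q][0] holds epsilon targets; delta[q][i] holds the targets
    reached on symbol alphabet[i-1].
    """
    result = []
    for state, rows in delta.items():
        for index, targets in enumerate(rows):
            label = alphabet[index - 1] if index > 0 else "ε"
            for target in targets:
                result.append((state, label, target))
    return result

print(transitions("01", {"q0": [["q1"], ["q0"], []]}))
# [('q0', 'ε', 'q1'), ('q0', '0', 'q0')]
```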
We start by defining a theme, a normal `Dot` object from which other dot
objects can inherit graph attributes, default attributes, and roles.
```python
nfa_theme = (Dot()
    .all_default(fontsize=12)
    .node_default(shape="circle", style="filled", fillcolor="khaki")
    .node_role("init", label="", shape="none", width=0, height=0)
    .node_role("final", shape="doublecircle", penwidth=1.25)
    .graph(rankdir="LR", labelloc="t", fontsize=16))
```
The theme defines two gvdot roles, collections of Graphviz attribute values
that applications can assign to diagram elements by name.
Having isolated presentation attributes in a theme, our generation code is
straightforward.
```python
def nfa_diagram(nfa: NFA, title: str):
    dot = Dot(directed=True).use_theme(nfa_theme)
    dot.graph(label=Markup(f"<b>{title}</b>"))
    init_id = Nonce()
    dot.node(init_id, role="init")
    dot.edge(init_id, nfa.start)
    for state in nfa.final:
        dot.node(state, role="final")
    for state, transitions in nfa.delta.items():
        merged = defaultdict(list)
        for index, targets in enumerate(transitions):
            for target in targets:
                merged[target].append(
                    nfa.alphabet[index - 1] if index > 0 else 'ε')
        for target, symbols in merged.items():
            dot.edge(state, target, label=", ".join(symbols))
    return dot
```
We can render and save the diagram above with
```python
example = NFA("01", {
    "s0": [["q0", "r0"], [], []],
    "q0": [[], ["q1"], ["q0"]],
    "q1": [[], ["q1"], ["q2"]],
    "q2": [[], ["q3"], ["q0"]],
    "q3": [[], ["q1"], ["q4"]],
    "q4": [[], ["q4"], ["q4"]],
    "r0": [[], ["r0"], ["r1"]],
    "r1": [[], ["r0"], ["r2"]],
    "r2": [[], ["r3"], ["r1"]],
    "r3": [[], ["r3"], ["r3"]],
}, ["q4", "r0", "r1", "r2"], "s0")

nfa_diagram(example, "Example NFA").save("example.svg")
```
In a notebook, we can directly display the diagram from a cell containing
```python
nfa_diagram(example, "Example NFA").show()
```
You can find this [NFA
example](https://github.com/escreven/gvdot/blob/main/examples/nfa.ipynb) and
others in the [examples](https://github.com/escreven/gvdot/tree/main/examples)
directory. The documentation includes a [Quick
Tour](https://gvdot.readthedocs.io/en/latest/quicktour.html).
## Installation
Using Python 3.12 or greater, you can install gvdot with
```bash
$ pip install gvdot
```
To ensure the optional notebook support is enabled, use
```bash
$ pip install gvdot[ipython]
```
[Rendering](https://gvdot.readthedocs.io/en/latest/discussion.html#rendering)
requires a Graphviz installation. You can determine if one is in your `PATH`
with
```bash
$ dot -V
```
To install Graphviz, see
[https://graphviz.org/download](https://graphviz.org/download).
## Reliability
gvdot includes automated tests with 100% code coverage, which are run on macOS,
Linux, and Windows with Python 3.12, 3.13, and 3.14. See [the GitHub
workflow](https://github.com/escreven/gvdot/blob/main/.github/workflows/test.yml)
for details.
| text/markdown | Edward Screven | null | null | null | null | graphviz, dot, dot-language, graphs, graph, visualization, rendering, svg, png | [
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Visualization"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"ipython>=7.23.1; extra == \"ipython\""
] | [] | [] | [] | [
"Homepage, https://github.com/escreven/gvdot",
"Repository, https://github.com/escreven/gvdot",
"Documentation, https://gvdot.readthedocs.io",
"Issues, https://github.com/escreven/gvdot/issues"
] | twine/6.1.0 CPython/3.12.10 | 2026-02-21T01:53:52.040657 | gvdot-1.2.1.tar.gz | 18,775 | 64/1c/52654a98441e9fa0700b231fb52c26695923ed31c736b405242543c3763f/gvdot-1.2.1.tar.gz | source | sdist | null | false | 75c333267295863166499df24da7be4c | 2e74fd1d13ddd7dbaf6e3d2f001a177494ecfe2622ffe566d8d6bb1c42199718 | 641c52654a98441e9fa0700b231fb52c26695923ed31c736b405242543c3763f | MIT | [
"LICENSE"
] | 242 |
2.4 | fastapi-swagger | 0.4.43 | This plugin updates the FastAPI app to host latest Swagger UI distribution. | # FastAPI Swagger Plugin
This plugin updates the FastAPI app to include the latest Swagger UI distribution. It also works locally and does not
depend on the @tiangolo domains and CDNs.
## Why?
FastAPI already ships with a Swagger UI. However, the bundled Swagger UI is not updated very often, and its resources
are served from the author's own hosts, which are not always available (especially in Russia). This plugin lets you use
the latest Swagger UI distribution and host it inside your app.
## Usage
### Installation
```bash
pip install fastapi-swagger
```
### Basic Usage
```python
from fastapi import FastAPI
from fastapi_swagger import patch_fastapi
app = FastAPI(docs_url=None, swagger_ui_oauth2_redirect_url=None) # docs url will be at /docs
patch_fastapi(app)
```
## How it works
How it was before:
FastAPI uses the `get_swagger_ui_html` function to render the Swagger UI under the hood.
```python
def get_swagger_ui_html(
    *,
    openapi_url: str,
    title: str,
    swagger_js_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-bundle.js",
    swagger_css_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css",
    swagger_favicon_url: str = "https://fastapi.tiangolo.com/img/favicon.png",
    oauth2_redirect_url: Optional[str] = None,
    init_oauth: Optional[Dict[str, Any]] = None,
    swagger_ui_parameters: Optional[Dict[str, Any]] = None,
) -> HTMLResponse:
    ...
```
How it is now:
We copy the Swagger UI distribution from
the [GitHub releases](https://github.com/swagger-api/swagger-ui/releases) into the `fastapi_swagger` package and serve
it from your app.
The patch creates several additional routes serving the Swagger UI resources, plus one route for the docs page (using
the same `get_swagger_ui_html` function under the hood).
```python
def patch_fastapi(
    app: FastAPI,
    docs_url: str = "/docs",
    *,
    title: Optional[str],
    swagger_js_url: str = "/swagger/swagger-ui-bundle.js",  # relative path from app root
    swagger_css_url: str = "/swagger/swagger-ui.css",  # relative path from app root
    swagger_favicon_url: str = "/swagger/favicon-32x32.png",  # relative path from app root
    oauth2_redirect_url: Optional[str] = None,
    init_oauth: Optional[Dict[str, Any]] = None,
    swagger_ui_parameters: Optional[Dict[str, Any]] = None,
):
    ...
patch_fastapi(app)
# Now there are additional routes: /swagger/swagger-ui-bundle.js, /swagger/swagger-ui.css, /swagger/favicon-32x32.png, and /docs
# None of them depend on external resources.
```
| text/markdown | null | Ruslan Bel'kov <ruslan.belckov@yandex.ru> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | ~=3.10 | [] | [] | [] | [
"fastapi>=0.100"
] | [] | [] | [] | [
"Homepage, https://github.com/dantetemplar/fastapi-swagger",
"Repository, https://github.com/dantetemplar/fastapi-swagger"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:53:30.754749 | fastapi_swagger-0.4.43.tar.gz | 441,349 | 45/92/13de5c2fa147b4169e8763ddedfd2fec4c1eb4b1059fbbb9a499bf6edd4a/fastapi_swagger-0.4.43.tar.gz | source | sdist | null | false | ab9f30a0a35fb597fe973711b1903a7f | f15b5642ae8fa523046f9499d683637465c30dd1a25ae5e0dddd9d0fce850b2e | 459213de5c2fa147b4169e8763ddedfd2fec4c1eb4b1059fbbb9a499bf6edd4a | MIT | [
"LICENSE"
] | 228 |
2.4 | mettagrid | 0.9.2 | A fast grid-based open-ended MARL environment | # MettaGrid Environment
MettaGrid is a multi-agent gridworld environment for studying the emergence of cooperation and social behaviors in
reinforcement learning agents. The environment features a variety of objects and actions that agents can interact with
to manage resources, engage in combat, share with others, and optimize their rewards.
## Requirements
- Bazel 7.0.0 or newer (the project uses Bzlmod and modern Bazel features)
- Python 3.11 or newer
- C++ compiler with C++20 support
## Overview
In MettaGrid, agents navigate a gridworld and interact with various objects to manage their energy, harvest resources,
engage in combat, and cooperate with other agents. The key dynamics include:
- **Energy Management**: Agents must efficiently manage their energy, which is required for all actions. They can
harvest resources and convert them to energy at junction stations.
- **Resource Gathering**: Agents can gather resources from generator objects scattered throughout the environment.
- **Cooperation and Sharing**: Agents have the ability to share resources with other agents.
- **Combat**: Agents can attack other agents to temporarily freeze them and steal their resources. They can also use
shields to defend against attacks.
The environment is highly configurable, allowing for experimentation with different world layouts, object placements,
and agent capabilities.
## Objects
### Agent
<img src="https://github.com/daveey/Griddly/blob/develop/resources/images/oryx/oryx_tiny_galaxy/tg_sliced/tg_monsters/tg_monsters_astronaut_u1.png?raw=true" width="32"/>
The `Agent` object represents an individual agent in the environment. Agents can move, rotate, attack, and interact with
other objects. Each agent has energy, resources, and shield properties that govern its abilities and interactions.
### Converter
<img src="https://github.com/daveey/Griddly/blob/develop/resources/images/oryx/oryx_tiny_galaxy/tg_sliced/tg_items/tg_items_pda_A.png?raw=true" width="32"/>
The `Converter` object allows agents to convert their harvested resources into energy. Agents can use converters by
moving to them and taking the `use` action. Each use of a converter provides a specified amount of energy and has a
cooldown period.
- Using the converter does not cost any energy.
- Using the converter outputs `converter.energy_output.r1` energy
  - see `this.output_energy = cfg[b"energy_output.r1"]` in the Converter C++ class
- Using the converter increments resource 2 by one and decrements resource 1 by one
- There is currently no use for `converter.energy_output.r2` and `converter.energy_output.r3`
- After the converter is used, it waits for the next value in `converter.cooldown` before it can be used again.
Supplying a list of integers causes the converter to cycle through the provided schedule.
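The resource flow described above can be sketched in plain Python (a stand-alone illustration only; the `r1`/`r2` names and single energy output mirror the description, not the actual C++ API):

```python
# Illustrative sketch of a converter `use`: consumes one of resource 1,
# produces one of resource 2, and outputs a configured amount of energy.
def use_converter(inventory: dict, energy_output_r1: int) -> int:
    if inventory.get("r1", 0) < 1:
        return 0  # nothing to convert, no energy produced
    inventory["r1"] -= 1
    inventory["r2"] = inventory.get("r2", 0) + 1
    return energy_output_r1

inv = {"r1": 2, "r2": 0}
gained = use_converter(inv, 100)
print(inv, gained)  # {'r1': 1, 'r2': 1} 100
```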
### Generator
<img src="https://github.com/daveey/Griddly/blob/develop/resources/images/oryx/oryx_fantasy/ore-0.png?raw=true" width="32"/>
The `Generator` object produces resources that agents can harvest. Agents can gather resources from generators by moving
to them and taking the `use` action. Generators have a specified capacity and replenish resources over time.
- Using the generator once gives one resource 1
- After the generator is used, it cannot be used again for the next value in `generator.cooldown` timesteps
### Wall
<img src="https://github.com/daveey/Griddly/blob/develop/resources/images/oryx/oryx_fantasy/wall2-0.png?raw=true" width="32"/>
The `Wall` object acts as an impassable barrier in the environment, restricting agent movement.
### Cooldown
The `cooldown` property holds one or more delays that determine how long objects wait before they can be used again.
When provided as a list, the delays are applied cyclically.
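The cyclic behavior can be sketched with `itertools.cycle` (a stand-alone illustration of the observable pattern, not the engine's actual code):

```python
from itertools import cycle

# Sketch of a cyclic cooldown schedule: a single int is a constant delay,
# a list is applied round-robin, as described above.
class CooldownSchedule:
    def __init__(self, cooldown):
        delays = cooldown if isinstance(cooldown, list) else [cooldown]
        self._delays = cycle(delays)

    def next_delay(self) -> int:
        return next(self._delays)

sched = CooldownSchedule([2, 5])
print([sched.next_delay() for _ in range(5)])  # [2, 5, 2, 5, 2]
```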
## Actions
### Move / Rotate
The `move` action allows agents to move to an adjacent cell in the gridworld. The action has two modes: moving forward
and moving backward relative to the agent's current orientation.
The `rotate` action enables agents to change their orientation within the gridworld. Agents can rotate to face in four
directions: down, left, right, and up.
### Attack
The `attack` action allows agents to attack other agents or objects within their attack range. Successful attacks freeze
the target for `freeze_duration` timesteps and allow the attacker to steal resources. Further, the attacked agent's
energy is set to `0`. Attacks have a cost and inflict a damage value. The agent selects from one of nine coordinates
within its attack range.
### Shield (Toggle)
The `shield` action turns on a shield. When the shield is active, the agent is protected from attacks by other agents.
The shield consumes energy defined by `upkeep.shield` while active. Attack damage is subtracted from the agent's energy,
rather than freezing the agent.
### Transfer
The `transfer` action enables agents to share resources with other agents. Agents can choose to transfer specific
resources to another agent in an adjacent cell. It is currently not implemented.
### Use
The `use` action allows agents to interact with objects such as converters and generators. The specific effects of the
`use` action depend on the target object and can include converting resources to energy or harvesting resources from
generators.
## Configuration
The MettaGrid environment is highly configurable through the use of YAML configuration files. These files specify the
layout of the gridworld, the placement of objects, and various properties of the objects and agents.
**Current settings:**
1. **Ore** - Base resource obtained from mines. Mines produce one ore when used. No resource requirements for use.
   - Reward value: 0.005 per unit (max 2)
   - Used to create batteries and lasers
2. **Battery** - Intermediate resource created from ore at a generator. The generator turns one ore into one battery.
   - Reward value: 0.01 per unit (max 2)
   - Used to create lasers
3. **Laser** - Weapon resource created from ore and batteries. Requires 1 ore and 2 batteries. Created at the lasery.
   - Consumed on use. When hitting an unarmored agent: freezes them and steals their whole inventory. When hitting an
     armored agent, destroys their armor.

**Inventory System**

- Agents have limited inventory space (default max: 50 items)
- Resources provide rewards just by being in inventory (up to their max reward value)
- Resources can be stolen through attacks

**Objects**

Various buildings: Mine, Generator, Armory, Lasery, Lab, Factory, Temple.

- HP — hitpoints, the number of times something can be hit before destruction.
- Cooldown between uses (varies by building)
- Can be damaged and destroyed by attacks
## Environment Architecture
MettaGrid uses a modular architecture designed primarily for the Softmax Studio ML project, with lightweight adapters to
maintain compatibility with external RL frameworks:
### Primary Training Environment
**`PufferMettaGridEnv`** - The main environment actively developed for Softmax Studio training systems
- Full-featured environment with comprehensive stats collection, replay recording, and curriculum support
- PufferLib-compatible environment wrapper that provides reset/step API
- **Exclusively used** by `metta.rl.trainer` and `metta.sim.simulation`
- Continuously developed and optimized for Softmax Studio use cases
- Backward compatible with existing training code
### Core Infrastructure
**`Simulation`** - Core simulation class for running MettaGrid simulations
- Foundation that provides the core game mechanics and performance
- Direct simulation access without environment API (no reset/step)
- Use when you need fine-grained control over simulation steps
### External Framework Compatibility Adapters
Lightweight wrappers around `MettaGridCore` to maintain compatibility with other training systems:
- **`MettaGridGymEnv`** - Gymnasium compatibility for research workflows
- **`MettaGridPettingZooEnv`** - PettingZoo compatibility for multi-agent research
- **`MettaGridPufferEnv`** - PufferLib compatibility for high-performance external training
**Important**: These adapters are **only used with their respective training systems**, not with the Metta trainer.
### Design Philosophy
- **Primary Focus**: `PufferMettaGridEnv` receives active development and new features for Softmax Studio
- **Compatibility Maintenance**: External adapters ensure other frameworks continue working as the core evolves
- **Testing for Compatibility**: Demos verify external frameworks remain functional during core development
- **Clear Separation**: Each environment type serves its specific training system - no mixing between systems
### Compatibility Testing Demos
These demos ensure external framework adapters remain functional as the core environment evolves:
```bash
# Verify PettingZoo compatibility
python -m mettagrid.demos.demo_train_pettingzoo
# Verify PufferLib compatibility
python -m mettagrid.demos.demo_train_puffer
# Verify Gymnasium compatibility
python -m mettagrid.demos.demo_train_gym
```
The demos serve as regression tests to catch compatibility issues during core development, ensuring external users can
continue using their preferred frameworks.
## Building and testing
For local development, refer to the top-level [README.md](../README.md) in this repository.
### Bazel
By default, `uv sync` will run the Bazel build automatically via the custom build backend. If you need to run C++ tests
and benchmarks, invoke `bazel` directly.
Build C++ tests and benchmarks in debug mode:
```sh
# Build with debug flags
bazel build --config=dbg //:mettagrid_c
# Run all tests
bazel test //...
```
For benchmarks you might prefer to use the optimized build:
```sh
# Build with optimizations
bazel build --config=opt //:mettagrid_c
# Run benchmarks
./build-release/benchmarks/grid_object_benchmark
```
For a single-core benchmark of MettaGrid performance (triggers a rebuild on first run):
```bash
bash benchmarks/perf/run.sh # toy config (default)
bash benchmarks/perf/run.sh --config arena # production training config
```
## Debugging C++ Code
MettaGrid is written in C++ with Python bindings via pybind11. You can debug C++ code directly in VSCode/Cursor by
setting breakpoints in the C++ source files.
### Prerequisites
1. **VSCode Extension**: Install the
[Python C++ Debugger](https://marketplace.visualstudio.com/items?itemName=benjamin-simmonds.pythoncpp-debug)
extension (`pythoncpp`)
2. **Debug Build**: Always build with `DEBUG=1` to enable debug symbols and dSYM generation
### Setup
The repository includes pre-configured launch configurations in `.vscode/launch.json`:
- **MettaGrid Demo** and other pythoncpp configurations - Combined Python + C++ debugging session for the demo script
(requires the pythoncpp extension)
- **\_C++ Attach** - Attach C++ debugger to any running Python process (shared by all configurations but can also be run
  manually).
### Quick Start
1. **Build with debug symbols**:
- Clean everything up
```sh
cd packages/mettagrid # (from root of the repository)
bazel clean --expunge
```
- Rebuild with debug flags
```sh
bazel build --config=dbg //:mettagrid_c
```
- Or Reinstall with DEBUG=1 to trigger dSYM generation
```sh
cd ../..
export DEBUG=1
uv sync --reinstall-package mettagrid
```
2. **Set breakpoints** in both Python and C++ files (e.g., `packages/mettagrid/cpp/bindings/mettagrid_c.cpp`,
`packages/mettagrid/demos/demo_train_pettingzoo.py`)
3. **Launch debugger** using the "MettaGrid Demo" or any other pythoncpp configuration from the VSCode Run panel.
4. **Alternatively**, you can use the "\_C++ Attach" configuration to attach the debugger to any running Python process.
It will ask you to select a process - type "metta" or "python" to filter the list.
### Testing C++ Debugging
To verify that C++ breakpoints are working correctly, use a simple test that calls from Python into C++:
#### Quick Test Method
1. **Add a test call** to any Python entrypoint that uses mettagrid:
```python
def test_cpp_debugging() -> None:
    """Test function to trigger C++ code for debugging."""
    try:
        from mettagrid.mettagrid_c import PackedCoordinate

        # Call a simple C++ function
        packed = PackedCoordinate.pack(5, 10)
        print(f"C++ test: PackedCoordinate.pack(5, 10) = {packed}")

        # Unpack it back
        r, c = PackedCoordinate.unpack(packed)
        print(f"C++ test: PackedCoordinate.unpack({packed}) = ({r}, {c})")
    except Exception as e:
        print(f"C++ debugging test failed: {e}")


# Call at module level or early in your script
test_cpp_debugging()
```
2. **Set a C++ breakpoint** in the corresponding C++ implementation:
- Open `packages/mettagrid/cpp/include/mettagrid/systems/packed_coordinate.hpp`
- Find the `pack()` or `unpack()` function implementation
- Set a breakpoint inside the function body (e.g., on the return statement)
3. **Launch your debug configuration** (e.g., "MettaGrid Demo" or any pythoncpp configuration)
4. **Verify the breakpoint hits** when the Python code calls `PackedCoordinate.pack()`
#### Where to Add the Test
Add the test call early in any Python entrypoint that uses mettagrid:
- Demo scripts (e.g., `packages/mettagrid/demos/demo_train_*.py`)
- CLI entrypoints (e.g., `packages/cogames/src/cogames/main.py`)
- Tool runners (e.g., `common/src/metta/common/tool/run_tool.py`)
- Training scripts (e.g., `metta/tools/train.py`)
**Note**: This test is only for verifying your debugging setup. Remove it before committing.
### Configuration Files
- **`.bazelrc`** - Defines the `--config=dbg` build mode with debug flags (`-g`, `-O0`, `--apple_generate_dsym`)
- **`.vscode/launch.json`** - Contains launch configurations for combined Python/C++ debugging
### Important Notes
- **Always use `DEBUG=1`**: Without this environment variable, dSYM files won't be generated and C++ breakpoints won't
work.
- **Source maps**: The launch config includes source maps to correctly locate C++ files in the `packages/mettagrid`
  workspace.
| text/markdown | null | David Bloomin <daveey@gmail.com> | null | null | null | gridworld, minigrid, rl, reinforcement-learning, environment, gym | [] | [] | null | null | <3.13,>=3.12 | [] | [] | [] | [
"boto3>=1.38.32",
"botocore>=1.38.29",
"numpy==2.3.3",
"gymnasium>=1.1.1",
"importnb>=2023.11.1",
"pettingzoo>=1.25",
"pufferlib-core>=3.0.16",
"pydantic>=2.12.2",
"protobuf>=6.31.1",
"pyyaml>=6.0.2",
"scipy-stubs>=1.17.0.0",
"typer>=0.15.0",
"requests>=2.32.0",
"httpx>=0.28.0",
"websockets>=13.0"
] | [] | [] | [] | [
"Homepage, https://daveey.github.io",
"Repository, https://github.com/Metta-AI/mettagrid"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:53:13.532948 | mettagrid-0.9.2.tar.gz | 33,120,899 | a3/72/65811c35e701c9c40411a83874a450573d117fb3514e58b5d2ca42c42771/mettagrid-0.9.2.tar.gz | source | sdist | null | false | cfd815312a6cac6b0e0a9ed08cfee193 | 9e4f07dd820476582dfadf923b067a2c83cdac53b8af0b0b83d27ba3c948c9d6 | a37265811c35e701c9c40411a83874a450573d117fb3514e58b5d2ca42c42771 | MIT | [
"LICENSE"
] | 426 |
2.4 | spandrel | 0.4.2 | Give your project support for a variety of PyTorch model architectures, including auto-detecting model architecture from just .pth files. spandrel gives you arch support. | # Spandrel
[](https://pypi.org/project/spandrel/)
[](https://github.com/chaiNNer-org/spandrel/releases)
[](https://pypi.org/project/spandrel/#files)
[](https://pypi.org/project/spandrel/#files:~:text=Requires%3A%20Python%20%3C3.12%2C%20%3E%3D3.8)
[](https://chainner.app/spandrel/)
[](https://github.com/chaiNNer-org/spandrel/actions)
[](https://github.com/chaiNNer-org/spandrel/blob/main/LICENSE)
[](https://github.com/chaiNNer-org/spandrel/graphs/contributors)
Spandrel is a library for loading and running pre-trained PyTorch models. It automatically detects the model architecture and hyperparameters from model files, and provides a unified interface for running models.
After seeing many projects extract out [chaiNNer](https://github.com/chaiNNer-org/chaiNNer)'s model support into their own projects, I decided to create this PyPi package for the architecture support and model loading functionality. I'm also hoping that by having a central package anyone can use, the community will be encouraged [to help add support for more models](CONTRIBUTING.md).
This package does not yet have easy inference code, but porting that code is planned as well.
## Installation
Spandrel is available through pip:
```shell
pip install spandrel
```
## Basic Usage
While Spandrel supports different kinds of models, this is how you would run a super resolution model (e.g. ESRGAN, SwinIR, HAT, etc.):
```python
from spandrel import ImageModelDescriptor, ModelLoader
import torch
# load a model from disk
model = ModelLoader().load_from_file(r"path/to/model.pth")
# make sure it's an image to image model
assert isinstance(model, ImageModelDescriptor)
# send it to the GPU and put it in inference mode
model.cuda().eval()
# use the model
def process(image: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return model(image)
```
Note that `model` is a [`ModelDescriptor`](https://chainner.app/spandrel/#ModelDescriptor) object, which is a wrapper around the actual PyTorch model. This wrapper provides a unified interface for running models, and also contains metadata about the model. See [`ImageModelDescriptor`](https://chainner.app/spandrel/spandrel.ImageModelDescriptor.html) for more details about the metadata contained and how to call the model.
> **_NOTE: `ImageModelDescriptor` will NOT convert an image to a tensor for you. It is purely making the forward passes of these models more convenient to use, since the actual forward passes are not always as simple as image in/image out._**
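For instance, a typical super-resolution model expects a float BCHW batch rather than a raw HWC `uint8` image. The sketch below shows the usual layout transform with NumPy (an assumption about the common convention, not spandrel API; for real inference you would wrap the result with `torch.from_numpy`):

```python
import numpy as np

# Common preprocessing sketch: HWC uint8 image -> float32 BCHW batch in [0, 1].
def to_model_input(image_hwc: np.ndarray) -> np.ndarray:
    chw = np.transpose(image_hwc, (2, 0, 1))           # HWC -> CHW
    bchw = chw[np.newaxis].astype(np.float32) / 255.0  # add batch dim, scale
    return bchw

img = np.zeros((64, 48, 3), dtype=np.uint8)
print(to_model_input(img).shape)  # (1, 3, 64, 48)
```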
### More architectures
If you are working on a non-commercial open-source project or a private project, you should use `spandrel` and `spandrel_extra_arches` to get everything spandrel has to offer. The `spandrel` package only contains architectures with [permissive and public domain licenses](https://en.wikipedia.org/wiki/Permissive_software_license) (MIT, Apache 2.0, public domain), so it is fit for every use case. Architectures with restrictive licenses (e.g. non-commercial) are implemented in the `spandrel_extra_arches` package.
```python
import spandrel
import spandrel_extra_arches
# add extra architectures before `ModelLoader` is used
spandrel_extra_arches.install()
# load a model from disk
model = spandrel.ModelLoader().load_from_file(r"path/to/model.pth")
... # use model
```
## Supported File Types
Spandrel mainly supports loading `.pth` files for all supported architectures. This is what you will typically find from official repos and community trained models. However, Spandrel also supports loading TorchScript traced models (`.pt`), certain types of `.ckpt` files, and `.safetensors` files for any supported architecture saved in one of these formats.
## Model Architecture Support
> **_NOTE: By its very nature, Spandrel will never be able to support every model architecture. The goal is just to support as many as is realistically possible._**
Spandrel currently supports a limited amount of network architectures. If the architecture you need is not supported, feel free to [request it](https://github.com/chaiNNer-org/spandrel/issues) or try [adding it](CONTRIBUTING.md).
#### Single Image Super Resolution
- [ESRGAN](https://github.com/xinntao/ESRGAN) (RRDBNet)
- This includes regular [ESRGAN](https://github.com/xinntao/ESRGAN), [ESRGAN+](https://github.com/ncarraz/ESRGANplus), "new-arch ESRGAN" ([RealSR](https://github.com/jixiaozhong/RealSR), [BSRGAN](https://github.com/cszn/BSRGAN)), and [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)
- Models: [Community ESRGAN](https://openmodeldb.info) | [ESRGAN+](https://drive.google.com/drive/folders/1lNky9afqEP-qdxrAwDFPJ1g0ui4x7Sin) | [BSRGAN](https://github.com/cszn/BSRGAN/tree/main/model_zoo) | [RealSR](https://github.com/jixiaozhong/RealSR#pre-trained-models) | [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/blob/master/docs/model_zoo.md)
- [Real-ESRGAN Compact](https://github.com/xinntao/Real-ESRGAN) (SRVGGNet) | [Models](https://github.com/xinntao/Real-ESRGAN/blob/master/docs/model_zoo.md)
- [Swift-SRGAN](https://github.com/Koushik0901/Swift-SRGAN) | [Models](https://github.com/Koushik0901/Swift-SRGAN/releases/tag/v0.1)
- [SwinIR](https://github.com/JingyunLiang/SwinIR) | [Models](https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0)
- [Swin2SR](https://github.com/mv-lab/swin2sr) | [Models](https://github.com/mv-lab/swin2sr/releases/tag/v0.0.1)
- [HAT](https://github.com/XPixelGroup/HAT) | [Models](https://drive.google.com/drive/folders/1HpmReFfoUqUbnAOQ7rvOeNU3uf_m69w0)
- [Omni-SR](https://github.com/Francis0625/Omni-SR) | [Models](https://github.com/Francis0625/Omni-SR#preparation)
- [SRFormer](https://github.com/HVision-NKU/SRFormer) (+) | [Models](https://github.com/HVision-NKU/SRFormer#pretrain-models)
- [DAT](https://github.com/zhengchen1999/DAT) | [Models](https://github.com/zhengchen1999/DAT#testing)
- [FeMaSR](https://github.com/chaofengc/FeMaSR) (+) | [Models](https://github.com/chaofengc/FeMaSR/releases/tag/v0.1-pretrain_models)
- [GRL](https://github.com/ofsoundof/GRL-Image-Restoration) | [Models](https://github.com/ofsoundof/GRL-Image-Restoration/releases/tag/v1.0.0)
- [DITN](https://github.com/yongliuy/DITN) | [Models](https://drive.google.com/drive/folders/1XpHW27H5j2S4IH8t4lccgrgHkIjqrS-X)
- [MM-RealSR](https://github.com/TencentARC/MM-RealSR) | [Models](https://github.com/TencentARC/MM-RealSR/releases/tag/v1.0.0)
- [SPAN](https://github.com/hongyuanyu/SPAN) | [Models](https://drive.google.com/file/d/1iYUA2TzKuxI0vzmA-UXr_nB43XgPOXUg/view?usp=sharing)
- [Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN) | [Models](https://drive.google.com/drive/folders/1jAJyBf2qKe2povySwsGXsVMnzVyQzqDD), [Pro Models](https://drive.google.com/drive/folders/1hfT4WwnNUaS43ErrgXk0J1R5Ik8s5NVo)
- [CRAFT](https://github.com/AVC2-UESTC/CRAFT-SR) | [Models](https://drive.google.com/file/d/13wAmc93BPeBUBQ24zUZOuUpdBFG2aAY5/view?usp=sharing)
- [SAFMN](https://github.com/sunny2109/SAFMN) | [Models](https://drive.google.com/drive/folders/12O_xgwfgc76DsYbiClYnl6ErCDrsi_S9?usp=share_link), [JPEG model](https://github.com/sunny2109/SAFMN/releases/tag/v0.1.1)
- [RGT](https://github.com/zhengchen1999/RGT) | [RGT Models](https://drive.google.com/drive/folders/1zxrr31Kp2D_N9a-OUAPaJEn_yTaSXTfZ?usp=drive_link), [RGT-S Models](https://drive.google.com/drive/folders/1j46WHs1Gvyif1SsZXKy1Y1IrQH0gfIQ1?usp=drive_link)
- [DCTLSA](https://github.com/zengkun301/DCTLSA) | [Models](https://github.com/zengkun301/DCTLSA/tree/main/pretrained)
- [ATD](https://github.com/LabShuHangGU/Adaptive-Token-Dictionary) | [Models](https://drive.google.com/drive/folders/1D3BvTS1xBcaU1mp50k3pBzUWb7qjRvmB?usp=sharing)
- [AdaCode](https://github.com/kechunl/AdaCode) | [Models](https://github.com/kechunl/AdaCode/releases/tag/v0-pretrain_models)
- [DRCT](https://github.com/ming053l/DRCT)
- [PLKSR](https://github.com/dslisleedh/PLKSR) and [RealPLKSR](https://github.com/muslll/neosr/blob/master/neosr/archs/realplksr_arch.py) | [Models](https://drive.google.com/drive/u/1/folders/1lIkZ00y9cRQpLU9qmCIB2XtS-2ZoqKq8)
- [SeemoRe](https://github.com/eduardzamfir/seemoredetails) | [Models](https://drive.google.com/drive/folders/15jtvcS4jL_6QqEwaRodEN8FBrqVPrO2u?usp=share_link)
- [MoSR](https://github.com/umzi2/MoSR) | [Models](https://drive.google.com/drive/u/0/folders/1HPy7M4Zzq8oxhdsQ2cnfqy73klmQWp_r)
- [MoESR](https://github.com/umzi2/MoESR) | [Models](https://github.com/the-database/traiNNer-redux/releases/download/pretrained-models/4x_DF2K_MoESR_500k.safetensors)
- [RCAN](https://github.com/yulunzhang/RCAN) | [Models](https://www.dropbox.com/s/qm9vc0p0w9i4s0n/models_ECCV2018RCAN.zip?dl=0)
- [FDAT](https://github.com/stinkybread/FDAT) | [Models](https://github.com/the-database/traiNNer-redux/releases/download/pretrained-models/2x_DF2K_FDAT_M_500k_fp16.safetensors)
- [AuraSR](https://github.com/fal-ai/aura-sr) | Models: [v1](https://huggingface.co/fal/AuraSR) | [v2](https://huggingface.co/fal/AuraSR-v2)
#### Face Restoration
- [GFPGAN](https://github.com/TencentARC/GFPGAN) | [1.2](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.2.pth), [1.3](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth), [1.4](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth)
- [RestoreFormer](https://github.com/wzhouxiff/RestoreFormer) | [Model](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth)
- [CodeFormer](https://github.com/sczhou/CodeFormer) (+) | [Model](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
#### Inpainting
- [LaMa](https://github.com/advimman/lama) | [Model](https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt)
- [MAT](https://github.com/fenglinglwb/MAT) (+) | [Model](https://github.com/Sanster/models/releases/download/add_mat/Places_512_FullData_G.pth)
#### Denoising
- [SCUNet](https://github.com/cszn/SCUNet) | [GAN Model](https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth) | [PSNR Model](https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_psnr.pth)
- [Uformer](https://github.com/ZhendongWang6/Uformer) | [Denoise SIDD Model](https://mailustceducn-my.sharepoint.com/:u:/g/personal/zhendongwang_mail_ustc_edu_cn/Ea7hMP82A0xFlOKPlQnBJy0B9gVP-1MJL75mR4QKBMGc2w?e=iOz0zz) | [Deblur GoPro Model](https://mailustceducn-my.sharepoint.com/:u:/g/personal/zhendongwang_mail_ustc_edu_cn/EfCPoTSEKJRAshoE6EAC_3YB7oNkbLUX6AUgWSCwoJe0oA?e=jai90x)
- [KBNet](https://github.com/zhangyi-3/KBNet) | [Models](https://mycuhk-my.sharepoint.com/:f:/g/personal/1155135732_link_cuhk_edu_hk/EofsV3eVcAxNlrW72JXqzRUBhkM1Mzw50pJ3BHlAyMYnVw?e=MeMB5H)
- [NAFNet](https://github.com/megvii-research/NAFNet) | [Models](https://github.com/megvii-research/NAFNet#results-and-pre-trained-models)
- [Restormer](https://github.com/swz30/Restormer) (+) | [Models](https://github.com/swz30/Restormer/releases/tag/v1.0)
- [FFTformer](https://github.com/kkkls/FFTformer) | [Models](https://github.com/kkkls/FFTformer/releases/tag/pretrain_model)
- [M3SNet](https://github.com/Tombs98/M3SNet) (+) | [Models](https://drive.google.com/drive/folders/1y4BEX7LagtXVO98ZItSbJJl7WWM3gnbD)
- [MPRNet](https://github.com/swz30/MPRNet) (+) | [Deblurring](https://drive.google.com/file/d/1QwQUVbk6YVOJViCsOKYNykCsdJSVGRtb/view?usp=sharing), [Deraining](https://drive.google.com/file/d/1O3WEJbcat7eTY6doXWeorAbQ1l_WmMnM/view?usp=sharing), [Denoising](https://drive.google.com/file/d/1LODPt9kYmxwU98g96UrRA0_Eh5HYcsRw/view?usp=sharing)
- [MIRNet2](https://github.com/swz30/MIRNetv2) (+) | [Models](https://github.com/swz30/MIRNetv2/releases/tag/v1.0.0) (SR not supported)
- [DnCNN, FDnCNN](https://github.com/cszn/DPIR) | [Models](https://github.com/cszn/KAIR/releases/tag/v1.0)
- [DRUNet](https://github.com/cszn/DPIR) | [Models](https://github.com/cszn/KAIR/releases/tag/v1.0)
- [IPT](https://github.com/huawei-noah/Pretrained-IPT) | [Models](https://drive.google.com/drive/folders/1MVSdUX0YBExauG0fFz4ANiWTrq9xZEj7?usp=sharing)
#### DeJPEG
- [FBCNN](https://github.com/jiaxi-jiang/FBCNN) | [Models](https://github.com/jiaxi-jiang/FBCNN/releases/tag/v1.0)
#### Colorization
- [DDColor](https://github.com/piddnad/DDColor) (+) | [Models](https://github.com/piddnad/DDColor/blob/master/MODEL_ZOO.md)
#### Dehazing
- [MixDehazeNet](https://github.com/AmeryXiong/MixDehazeNet) | [Models](https://drive.google.com/drive/folders/1ep6W4H3vNxshYjq71Tb3MzxrXGgaiM6C?usp=drive_link)
#### Low-light Enhancement
- [RetinexFormer](https://github.com/caiyuanhao1998/Retinexformer) | [Models](https://drive.google.com/drive/folders/1ynK5hfQachzc8y96ZumhkPPDXzHJwaQV?usp=drive_link)
- [HVI-CIDNet](https://github.com/Fediory/HVI-CIDNet) | [Models](https://github.com/Fediory/HVI-CIDNet/#weights-and-results-)
(All architectures marked with a `+` are only part of `spandrel_extra_arches`.)
## Security
Use `.safetensors` files whenever possible — they cannot execute code on load.
Loading `.pth` files [poses a security risk](https://github.com/pytorch/pytorch/issues/52596) because Python's `pickle` module is inherently unsafe and vulnerable to arbitrary code execution (ACE). To mitigate this, Spandrel only allows certain types of data to be deserialized. This improves security, but it still doesn't fully eliminate the risk of ACE.
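Restricting which data may be deserialized is, in essence, a restricted unpickler. A minimal sketch of the general technique using Python's `pickle` module — this is an illustration of the idea, not Spandrel's actual implementation:

```python
import io
import pickle
from collections import OrderedDict

class RestrictedUnpickler(pickle.Unpickler):
    """Only resolve globals from a small allowlist during load."""
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers of primitives load fine...
ok = restricted_loads(pickle.dumps(OrderedDict(weight=[1.0, 2.0])))

# ...but a pickle referencing an arbitrary callable is rejected.
import os
try:
    restricted_loads(pickle.dumps(os.getcwd))
except pickle.UnpicklingError as e:
    print("blocked:", e)
```

The rejection happens at load time, before any payload code could run — but as the paragraph above notes, allowlisting reduces rather than eliminates the attack surface.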
## Used By
Here are some cool projects that use Spandrel:
- [AUTOMATIC1111's SD WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
- [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
- [InvokeAI](https://github.com/invoke-ai/InvokeAI)
- [chaiNNer](https://github.com/chaiNNer-org/chaiNNer)
- [imaginAIry](https://github.com/brycedrennan/imaginAIry)
- [dgenerate](https://github.com/Teriks/dgenerate)
## License
This repo is bound by the MIT license. However, the code of the implemented architectures (everything inside an `__arch/` directory) is bound by its original respective license (included in the corresponding `__arch/` directory).
| text/markdown | chaiNNer team | null | null | null | MIT | spandrel, pytorch architecture, pytorch arch, model arch, model architecture | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch",
"torchvision",
"safetensors",
"numpy",
"einops",
"typing_extensions"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T01:52:26.342282 | spandrel-0.4.2.tar.gz | 247,544 | 2a/8f/ab4565c23dd67a036ab72101a830cebd7ca026b2fddf5771bbf6284f6228/spandrel-0.4.2.tar.gz | source | sdist | null | false | d15933c4af8914152ffa2fd0c2baef62 | fefa4ea966c6a5b7721dcf24f3e2062a5a96a395c8bedcb570fb55971fdcbccb | 2a8fab4565c23dd67a036ab72101a830cebd7ca026b2fddf5771bbf6284f6228 | null | [] | 39,987 |
2.4 | llm-to-toon | 1.0.36 | Tiny wrapper exposing Prompture helpers to convert LLM output into TOON. | # llm-to-toon
Tiny wrapper around `prompture` that returns [TOON](https://github.com/jmorganca/python-toon)
(Token-Oriented Object Notation) instead of JSON. Under the hood it uses
`prompture.extract_and_jsonify(..., output_format="toon")` and converts the result
into the ultra-compact TOON text automatically.
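TOON's compactness comes mainly from tabular encoding of uniform object arrays: field names are declared once in a header rather than repeated per object. A rough sketch of the idea (not the real `python-toon` encoder, whose exact output may differ):

```python
def toonify(key, rows):
    """Encode a list of uniform dicts as a compact TOON-style table.

    The header declares the row count and field names once; each row
    then carries only comma-separated values.
    """
    fields = list(rows[0])
    header = f"{key}[{len(rows)}]{{{','.join(fields)}}}:"
    lines = ["  " + ",".join(str(r[f]) for f in fields) for r in rows]
    return "\n".join([header, *lines])

people = [{"name": "Juan", "age": 30}, {"name": "Ana", "age": 28}]
print(toonify("people", people))
# people[2]{name,age}:
#   Juan,30
#   Ana,28
```

Compare that to the equivalent JSON, which repeats `"name"` and `"age"` for every row — the savings grow with the number of rows.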
Install:
```bash
pip install llm-to-toon
```
Usage:
```python
from llm_to_toon import from_llm_text
schema = {"name": "string", "age": "int"}
toon_text = from_llm_text("Name: Juan Age: 30", schema)
print(toon_text)
```
By default the helper spins up the local Ollama driver (`gemma:latest`). Pass your
own Prompture driver if you want to call OpenAI, Azure, Groq, etc. For the full
Prompture feature set, see the main project: https://github.com/jhd3197/prompture
| text/markdown | null | Juan Denis <juan@vene.co> | null | null | MIT | llm, toon, prompt, structured-output | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"prompture>=1.0.36"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T01:51:25.793014 | llm_to_toon-1.0.36.tar.gz | 2,014 | 31/7d/1ea4e4ea682427df609f8cf1ab1f05fdb59b9f4bf5623fa08372992503ea/llm_to_toon-1.0.36.tar.gz | source | sdist | null | false | a041606fc5772697c5a0bc63b0034568 | 8bc62fce0343cd7cd23403ee840e5c0aaed4f527ae706c15718bc1ae68e346c1 | 317d1ea4e4ea682427df609f8cf1ab1f05fdb59b9f4bf5623fa08372992503ea | null | [] | 221 |
2.4 | llm-to-json | 1.0.36 | Tiny wrapper exposing Prompture helpers to convert LLM output into JSON. | # llm-to-json
Tiny wrapper around `prompture` with a minimal, easy-to-use API for converting LLM output (or raw text) into JSON according to a schema.
Install:
```bash
pip install llm-to-json
```
Usage:
```python
from llm_to_json import from_llm_text
schema = {"name": "string", "age": "int"}
print(from_llm_text("Name: Juan Age: 30", schema))
```
For full docs and advanced features, see the main project: Prompture — https://github.com/jhd3197/prompture
| text/markdown | null | Juan Denis <juan@vene.co> | null | null | MIT | llm, json, prompt, structured-output | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"prompture>=1.0.36"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T01:51:19.739236 | llm_to_json-1.0.36.tar.gz | 1,838 | 3f/e3/9f934b8c8183c1ba3b5d43e6deba7e32c4110ca6a4d164a72db168eb47b9/llm_to_json-1.0.36.tar.gz | source | sdist | null | false | c568687cb4592fc3a675b45a04509093 | 786077537241872e6e10211e209386d32cfa8313430104ed8d60507921374dd3 | 3fe39f934b8c8183c1ba3b5d43e6deba7e32c4110ca6a4d164a72db168eb47b9 | null | [] | 223 |
2.4 | zuspec-be-sw | 0.0.1.22248017150rc0 | Zuspec C/C++ software backend | # Zuspec Software Backend
The Zuspec Software (SW) Backend transforms Zuspec hardware component models into executable C/C++ code for simulation, testing, and modeling.
## Features
- **Component Translation**: Zuspec Components → C structs and functions
- **Async/Sync**: Transforms async methods with optional sync conversion
- **Protocol Interfaces**: Generates C API structs for Protocol types
- **Type Mapping**: Maps Zuspec types to C types
- **Validation**: Pre-generation compatibility checks
- **Compilation**: Built-in GCC compiler interface
- **Test Execution**: Automated test runner
- **Type Specialization**: Optional monomorphization (experimental)
## Installation
```bash
pip install zuspec-be-sw
```
## Quick Start
```python
import zuspec.dataclasses as zdc
from zuspec.be.sw import CGenerator, CValidator, CCompiler, TestRunner
from pathlib import Path
@zdc.dataclass
class Counter(zdc.Component):
count: int = zdc.field(default=0)
def increment(self):
self.count += 1
def get_count(self) -> int:
return self.count
# Build → Validate → Generate → Compile → Run
factory = zdc.DataModelFactory()
ctxt = factory.build(Counter)
validator = CValidator()
assert validator.validate(ctxt)
gen = CGenerator(Path("output"))
sources = gen.generate(ctxt)
compiler = CCompiler(Path("output"))
exe = compiler.compile(sources, Path("output/test"))
runner = TestRunner()
result = runner.run(exe)
```
## Generated C Code
```c
typedef struct Counter {
int count;
} Counter;
void Counter_init(Counter *self);
void Counter_increment(Counter *self);
int Counter_get_count(Counter *self);
```
## Documentation
- [Quickstart](docs/quickstart.rst)
- [Features](docs/features.rst)
- [Generator](docs/generator.rst)
- [Examples](docs/examples.rst)
- [API](docs/api.rst)
- [Testing](docs/testing.rst)
- [Contributing](docs/contributing.rst)
## Requirements
- Python >= 3.7
- zuspec-dataclasses
- C compiler (GCC)
## License
Apache-2.0
| text/markdown | null | Matthew Ballance <matt.ballance@gmail.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"zuspec-dataclasses",
"pytest>=6.0; extra == \"dev\"",
"pytest-dfm; extra == \"dev\"",
"zuspec-dataclasses; extra == \"dev\"",
"pyhdl-if; extra == \"dev\"",
"Sphinx; extra == \"dev\"",
"sphinx-argparse; extra == \"dev\"",
"sphinxcontrib-makedomain; extra == \"dev\"",
"sphinxcontrib-spelling; extra == \"dev\"",
"sphinx-rtd-theme; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/zuspec/zuspec-be-sw-simplify",
"Repository, https://github.com/zuspec/zuspec-be-sw-simplify"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T01:51:18.666302 | zuspec_be_sw-0.0.1.22248017150rc0-py3-none-any.whl | 76,968 | 1d/ac/f741cde43579c6f297c652b55782285b03038b73cc66b015fcadac900449/zuspec_be_sw-0.0.1.22248017150rc0-py3-none-any.whl | py3 | bdist_wheel | null | false | 3f86dc31ebe47eb04fdbdb3355066e18 | 448c89430e570f349ec22d569b38ed76951efb3fcf567cf5dfdd6ff02cc923ea | 1dacf741cde43579c6f297c652b55782285b03038b73cc66b015fcadac900449 | null | [
"LICENSE"
] | 79 |
2.4 | prompture | 1.0.36 | Ask LLMs to return structured JSON and run cross-model tests. API-first. | <p align="center">
<h1 align="center">Prompture</h1>
<p align="center">Structured JSON extraction from any LLM. Schema-enforced, Pydantic-native, multi-provider.</p>
</p>
<p align="center">
<a href="https://pypi.org/project/prompture/"><img src="https://badge.fury.io/py/prompture.svg" alt="PyPI version"></a>
<a href="https://pypi.org/project/prompture/"><img src="https://img.shields.io/pypi/pyversions/prompture.svg" alt="Python versions"></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="License: MIT"></a>
<a href="https://pepy.tech/project/prompture"><img src="https://static.pepy.tech/badge/prompture" alt="Downloads"></a>
<a href="https://github.com/jhd3197/prompture"><img src="https://img.shields.io/github/stars/jhd3197/prompture?style=social" alt="GitHub stars"></a>
</p>
---
**Prompture** is a Python library that turns LLM responses into validated, structured data. Define a schema or Pydantic model, point it at any provider, and get typed output back — with token tracking, cost calculation, and automatic JSON repair built in.
```python
from pydantic import BaseModel
from prompture import extract_with_model
class Person(BaseModel):
name: str
age: int
profession: str
person = extract_with_model(Person, "Maria is 32, a developer in NYC.", model_name="openai/gpt-4")
print(person.name) # Maria
```
## Key Features
- **Structured output** — JSON schema enforcement and direct Pydantic model population
- **12 providers** — OpenAI, Claude, Google, Groq, Grok, Azure, Ollama, LM Studio, OpenRouter, HuggingFace, AirLLM, and generic HTTP
- **TOON input conversion** — 45-60% token savings when sending structured data via [Token-Oriented Object Notation](https://github.com/jhd3197/python-toon)
- **Stepwise extraction** — Per-field prompts with smart type coercion (shorthand numbers, multilingual booleans, dates)
- **Field registry** — 50+ predefined extraction fields with template variables and Pydantic integration
- **Conversations** — Stateful multi-turn sessions with sync and async support
- **Tool use** — Function calling and streaming across supported providers, with automatic prompt-based simulation for models without native tool support
- **Caching** — Built-in response cache with memory, SQLite, and Redis backends
- **Plugin system** — Register custom drivers via entry points
- **Usage tracking** — Token counts and cost calculation on every call
- **Auto-repair** — Optional second LLM pass to fix malformed JSON
- **Batch testing** — Spec-driven suites to compare models side by side
## Installation
```bash
pip install prompture
```
Optional extras:
```bash
pip install prompture[redis] # Redis cache backend
pip install prompture[serve] # FastAPI server mode
pip install prompture[airllm] # AirLLM local inference
```
## Configuration
Set API keys for the providers you use. Prompture reads from environment variables or a `.env` file:
```bash
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
GROQ_API_KEY=...
GROK_API_KEY=...
OPENROUTER_API_KEY=...
AZURE_OPENAI_ENDPOINT=...
AZURE_OPENAI_API_KEY=...
```
Local providers (Ollama, LM Studio) work out of the box with no keys required.
## Providers
Model strings use `"provider/model"` format. The provider prefix routes to the correct driver automatically.
| Provider | Example Model | Cost |
|---|---|---|
| `openai` | `openai/gpt-4` | Automatic |
| `claude` | `claude/claude-3` | Automatic |
| `google` | `google/gemini-1.5-pro` | Automatic |
| `groq` | `groq/llama2-70b-4096` | Automatic |
| `grok` | `grok/grok-4-fast-reasoning` | Automatic |
| `azure` | `azure/deployed-name` | Automatic |
| `openrouter` | `openrouter/anthropic/claude-2` | Automatic |
| `ollama` | `ollama/llama3.1:8b` | Free (local) |
| `lmstudio` | `lmstudio/local-model` | Free (local) |
| `huggingface` | `hf/model-name` | Free (local) |
| `http` | `http/self-hosted` | Free |
## Usage
### One-Shot Pydantic Extraction
Single LLM call, returns a validated Pydantic instance:
```python
from typing import List, Optional
from pydantic import BaseModel
from prompture import extract_with_model
class Person(BaseModel):
name: str
age: int
profession: str
city: str
hobbies: List[str]
education: Optional[str] = None
person = extract_with_model(
Person,
"Maria is 32, a software developer in New York. She loves hiking and photography.",
model_name="openai/gpt-4"
)
print(person.model_dump())
```
### Stepwise Extraction
One LLM call per field. Higher accuracy, per-field error recovery:
```python
from prompture import stepwise_extract_with_model
result = stepwise_extract_with_model(
Person,
"Maria is 32, a software developer in New York. She loves hiking and photography.",
model_name="openai/gpt-4"
)
print(result["model"].model_dump())
print(result["usage"]) # per-field and total token usage
```
| Aspect | `extract_with_model` | `stepwise_extract_with_model` |
|---|---|---|
| LLM calls | 1 | N (one per field) |
| Speed / cost | Faster, cheaper | Slower, more expensive |
| Accuracy | Good global coherence | Higher per-field accuracy |
| Error handling | All-or-nothing | Per-field recovery |
### JSON Schema Extraction
For raw JSON output with full control:
```python
from prompture import ask_for_json
schema = {
"type": "object",
"required": ["name", "age"],
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"}
}
}
result = ask_for_json(
content_prompt="Extract the person's info from: John is 28 and lives in Miami.",
json_schema=schema,
model_name="openai/gpt-4"
)
print(result["json_object"]) # {"name": "John", "age": 28}
print(result["usage"]) # token counts and cost
```
### TOON Input — Token Savings
Analyze structured data with automatic TOON conversion for 45-60% fewer tokens:
```python
from prompture import extract_from_data
products = [
{"id": 1, "name": "Laptop", "price": 999.99, "rating": 4.5},
{"id": 2, "name": "Book", "price": 19.99, "rating": 4.2},
{"id": 3, "name": "Headphones", "price": 149.99, "rating": 4.7},
]
result = extract_from_data(
data=products,
question="What is the average price and highest rated product?",
json_schema={
"type": "object",
"properties": {
"average_price": {"type": "number"},
"highest_rated": {"type": "string"}
}
},
model_name="openai/gpt-4"
)
print(result["json_object"])
# {"average_price": 389.99, "highest_rated": "Headphones"}
print(f"Token savings: {result['token_savings']['percentage_saved']}%")
```
Works with Pandas DataFrames via `extract_from_pandas()`.
### Field Definitions
Use the built-in field registry for consistent extraction across models:
```python
from pydantic import BaseModel
from prompture import field_from_registry, stepwise_extract_with_model
class Person(BaseModel):
name: str = field_from_registry("name")
age: int = field_from_registry("age")
email: str = field_from_registry("email")
occupation: str = field_from_registry("occupation")
result = stepwise_extract_with_model(
Person,
"John Smith, 25, software engineer at TechCorp, john@example.com",
model_name="openai/gpt-4"
)
```
Register custom fields with template variables:
```python
from prompture import register_field
register_field("document_date", {
"type": "str",
"description": "Document creation date",
"instructions": "Use {{current_date}} if not specified",
"default": "{{current_date}}",
"nullable": False
})
```
### Conversations
Stateful multi-turn sessions:
```python
from prompture import Conversation
conv = Conversation(model_name="openai/gpt-4")
conv.add_message("system", "You are a helpful assistant.")
response = conv.send("What is the capital of France?")
follow_up = conv.send("What about Germany?") # retains context
```
### Tool Use
Register Python functions as tools the LLM can call during a conversation:
```python
from prompture import Conversation, ToolRegistry
registry = ToolRegistry()
@registry.tool
def get_weather(city: str, units: str = "celsius") -> str:
"""Get the current weather for a city."""
return f"Weather in {city}: 22 {units}"
conv = Conversation("openai/gpt-4", tools=registry)
result = conv.ask("What's the weather in London?")
```
For models without native function calling (Ollama, LM Studio, etc.), Prompture automatically simulates tool use by describing tools in the prompt and parsing structured JSON responses:
```python
# Auto-detect: uses native tool calling if available, simulation otherwise
conv = Conversation("ollama/llama3.1:8b", tools=registry, simulated_tools="auto")
# Force simulation even on capable models
conv = Conversation("openai/gpt-4", tools=registry, simulated_tools=True)
# Disable tool use entirely
conv = Conversation("openai/gpt-4", tools=registry, simulated_tools=False)
```
The simulation loop describes tools in the system prompt, asks the model to respond with JSON (`tool_call` or `final_answer`), executes tools, and feeds results back — all transparent to the caller.
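That loop can be sketched roughly like this — a toy model stub stands in for a real driver, and all names here are illustrative rather than Prompture internals:

```python
import json

def simulated_tool_loop(model, tools, question, max_turns=5):
    """Ask the model for JSON; run tool_call requests, return final_answer."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        reply = json.loads(model(messages))
        if reply["type"] == "final_answer":
            return reply["content"]
        # Execute the requested tool and feed the result back.
        result = tools[reply["name"]](**reply["arguments"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("no final answer within turn budget")

# Toy model: first requests the weather tool, then answers from the result.
def toy_model(messages):
    if messages[-1]["role"] == "user":
        return json.dumps({"type": "tool_call", "name": "get_weather",
                           "arguments": {"city": "London"}})
    return json.dumps({"type": "final_answer",
                       "content": f"It is {messages[-1]['content']}"})

tools = {"get_weather": lambda city: f"22C in {city}"}
print(simulated_tool_loop(toy_model, tools, "Weather in London?"))
# It is 22C in London
```

The turn budget guards against models that keep requesting tools without ever committing to a final answer.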
### Model Discovery
Auto-detect available models from configured providers:
```python
from prompture import get_available_models
models = get_available_models()
for model in models:
print(model) # "openai/gpt-4", "ollama/llama3:latest", ...
```
### Logging and Debugging
```python
import logging
from prompture import configure_logging
configure_logging(logging.DEBUG)
```
### Response Shape
All extraction functions return a consistent structure:
```python
{
"json_string": str, # raw JSON text
"json_object": dict, # parsed result
"usage": {
"prompt_tokens": int,
"completion_tokens": int,
"total_tokens": int,
"cost": float,
"model_name": str
}
}
```
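Because every call reports usage in the same shape, aggregating cost across calls is straightforward. A plain-Python sketch (not a Prompture helper):

```python
def total_usage(results):
    """Sum token counts and cost over a list of extraction results."""
    totals = {"prompt_tokens": 0, "completion_tokens": 0,
              "total_tokens": 0, "cost": 0.0}
    for r in results:
        for k in totals:
            totals[k] += r["usage"][k]
    return totals

calls = [
    {"usage": {"prompt_tokens": 120, "completion_tokens": 30,
               "total_tokens": 150, "cost": 0.0045}},
    {"usage": {"prompt_tokens": 80, "completion_tokens": 20,
               "total_tokens": 100, "cost": 0.0030}},
]
print(total_usage(calls))
```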
## CLI
```bash
prompture run <spec-file>
```
Run spec-driven extraction suites for cross-model comparison.
## Development
```bash
# Install with dev dependencies
pip install -e ".[test,dev]"
# Run tests
pytest
# Run integration tests (requires live LLM access)
pytest --run-integration
# Lint and format
ruff check .
ruff format .
```
## Contributing
PRs welcome. Please add tests for new functionality and examples under `examples/` for new drivers or patterns.
## License
[MIT](https://opensource.org/licenses/MIT)
| text/markdown | null | Juan Denis <juan@vene.co> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"httpx>=0.25.0",
"jsonschema>=4.0",
"pydantic>=2.0",
"pydantic-settings>=2.0",
"python-dotenv>=0.19.0",
"python-dateutil>=2.9.0",
"pyyaml>=6.0",
"openai>=1.55.0; extra == \"openai\"",
"anthropic>=0.8.0; extra == \"anthropic\"",
"google-genai>=1.0.0; extra == \"google\"",
"groq>=0.4.0; extra == \"groq\"",
"python-toon>=0.1.0; extra == \"toon\"",
"tukuy==0.0.30; extra == \"toon\"",
"pandas>=1.3.0; extra == \"pandas\"",
"requests>=2.28; extra == \"requests\"",
"prompture[anthropic,google,groq,openai,pandas,requests,toon]; extra == \"all\"",
"pytest>=7.0; extra == \"test\"",
"pytest-asyncio>=0.23.0; extra == \"test\"",
"prompture[all]; extra == \"test\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"prompture[all]; extra == \"dev\"",
"airllm>=2.8.0; extra == \"airllm\"",
"pdfplumber>=0.10.0; extra == \"ingest\"",
"pymupdf>=1.23.0; extra == \"ingest\"",
"python-docx>=0.8.11; extra == \"ingest\"",
"beautifulsoup4>=4.12.0; extra == \"ingest\"",
"lxml>=4.9.0; extra == \"ingest\"",
"openpyxl>=3.1.0; extra == \"ingest\"",
"redis>=4.0; extra == \"redis\"",
"fastapi>=0.100; extra == \"serve\"",
"uvicorn[standard]>=0.20; extra == \"serve\"",
"sse-starlette>=1.6; extra == \"serve\"",
"jinja2>=3.0; extra == \"scaffold\""
] | [] | [] | [] | [
"Homepage, https://github.com/jhd3197/prompture"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T01:51:12.913112 | prompture-1.0.36.tar.gz | 382,983 | c6/e0/4344e0314ebeb681e25c5ab5069a2b75a47d58bcbf9ea5f203987f3d1ecd/prompture-1.0.36.tar.gz | source | sdist | null | false | 77bec0c885b9a8ba4ee6bf8cb1c8bf12 | 9c6fe950b9a59b849c1520402b2763092c2d62d40794b95f011e56c5f36ae458 | c6e04344e0314ebeb681e25c5ab5069a2b75a47d58bcbf9ea5f203987f3d1ecd | MIT | [
"LICENSE"
] | 556 |
2.4 | limen-memory | 1.0.8 | Personal memory system with knowledge graph, novelty-gated learning, and context-influenced adaptation | # Limen-memory
Personal memory system for Claude Code. Persistent memory with a unified knowledge graph, novelty-gated learning, and context-influenced adaptation.
Limen-memory observes how you work, identifies patterns, and adapts behavior over time — without being told to. It integrates with Claude Code through hooks (automatic session-boundary behavior) and an MCP server (on-demand tools during conversations).
## How It Works
Limen-memory runs a continuous learning loop:
1. **Session start** — A hook loads relevant context (session bridge, reflections, user facts, strategies, predictions) and injects it into the conversation.
2. **During conversation** — MCP tools let Claude record user facts (`limen_learn`), search memories (`limen_query`), load topic-specific context (`limen_context`), and more.
3. **Session end** — A hook analyzes the conversation transcript through the reflection pipeline: LLM-driven insight extraction, novelty filtering, embedding generation, knowledge graph connections, fact extraction, and strategy observations.
4. **Background maintenance** — A scheduler handles consolidation, aging, pruning, and backup on a priority-based schedule, triggered opportunistically at session boundaries.
Over time, limen-memory builds an increasingly accurate model of who you are and how to work with you.
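A priority-based tick of the kind step 4 describes amounts to running the highest-priority overdue task and rescheduling it. An illustrative sketch — the actual scheduler persists its state in SQLite:

```python
import time

def scheduler_tick(tasks, now=None):
    """Run the highest-priority task whose next_due has passed, if any."""
    now = time.time() if now is None else now
    due = [t for t in tasks if t["next_due"] <= now]
    if not due:
        return None                       # nothing overdue this tick
    task = max(due, key=lambda t: t["priority"])
    task["run"]()
    task["next_due"] = now + task["interval"]
    return task["name"]

ran = []
tasks = [
    {"name": "backup",      "priority": 1, "interval": 86400, "next_due": 0,
     "run": lambda: ran.append("backup")},
    {"name": "consolidate", "priority": 3, "interval": 3600,  "next_due": 0,
     "run": lambda: ran.append("consolidate")},
]
print(scheduler_tick(tasks, now=100))  # higher-priority task wins the tick
```

Running one task per tick keeps the work opportunistic: a session boundary triggers a single cheap maintenance step rather than a long blocking pass.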
## Installation
```bash
# Clone and install
git clone https://github.com/Taderich73/limen-memory.git
cd limen-memory
poetry install
# Initialize database, config, skill, hooks, and MCP server
limen-memory init
```
`limen-memory init` does the following:
- Creates the data directory (`~/.limen-memory/`) and SQLite database
- Writes default config at `~/.limen-memory/config.yaml`
- Installs the SKILL.md to `~/.claude/skills/limen-memory/`
- Copies hook scripts to `~/.claude/hooks/` and registers them in `~/.claude/settings.json`
- Registers the MCP server in `~/.claude/settings.json` under `mcpServers.limen-memory`
### Requirements
- Python >=3.11, <3.14
- [Poetry](https://python-poetry.org/) >=2.0
- `VOYAGE_API_KEY` environment variable (for embeddings and the reflection pipeline)
- Claude Code CLI (for hook and MCP integration)
## Configuration
Limen-memory is configured via environment variables, `~/.limen-memory/config.yaml`, or constructor arguments (in that priority order).
| Variable | Description | Default |
|----------|-------------|---------|
| `VOYAGE_API_KEY` | Voyage AI API key for embeddings | *(required for reflection)* |
| `LIMEN_DATA_DIR` | Data directory | `~/.limen-memory/` |
| `LIMEN_CLAUDE_MODEL` | Claude model for LLM operations | `sonnet` |
| `LIMEN_CLAUDE_TIMEOUT` | LLM timeout in seconds | `120` |
## Claude Code Integration
### MCP Server
The MCP server exposes limen-memory's memory system as native tools. Claude Code and Claude.ai can call these tools directly during conversations.
**13 tools:**
| Tool | Description |
|------|-------------|
| `limen_context` | Load tiered session context, optionally focused on a topic |
| `limen_learn` | Record a user fact (idempotent on category+key) |
| `limen_reflect` | Add a single reflection with novelty gating |
| `limen_query` | Hybrid search over reflections (semantic + keyword + recency + confidence + graph) |
| `limen_status` | System health check and database statistics |
| `limen_user_profile` | View user facts, optionally filtered by category |
| `limen_interaction_profile` | View behavioral model dimensions (-1.0 to 1.0 scale) |
| `limen_search_conversations` | Full-text search over conversation summaries |
| `limen_graph_inspect` | Inspect knowledge graph — global diagnostics or specific node edges |
| `limen_deprecate` | Soft-delete a reflection |
| `limen_consolidate` | Run LLM-driven memory consolidation (dedup, merge, validate) |
| `limen_scheduler_tick` | Execute one scheduler maintenance task |
| `limen_reflect_transcript` | Reflect on a full conversation transcript |
**4 read-only resources:** `limen://profile`, `limen://strategies`, `limen://predictions`, `limen://status`.
The MCP server is registered automatically by `limen-memory init` and can also be started manually:
```bash
limen-memory mcp # stdio transport (default)
limen-memory mcp --transport streamable-http --port 8000 # HTTP transport
```
### Hooks
Limen-memory uses Claude Code hooks so the memory loop runs without manual invocation:
**SessionStart hook** (`limen-memory-session-start.py`) — Fires on `startup` and `resume`. Calls `limen-memory context` and prints the output to stdout, which Claude Code injects as conversation context. Also spawns a background `scheduler-tick` for opportunistic maintenance.
**Stop hook** (`limen-memory-stop.py`) — Fires at session end. Skips trivial conversations (fewer than 10 transcript lines). Spawns `limen-memory reflect-transcript` as a background process with lock-file concurrency protection. Also spawns `scheduler-tick`.
Hooks are designed to never crash a session — all exceptions are caught and the process exits cleanly.
### Skill
The SKILL.md file (installed to `~/.claude/skills/limen-memory/`) teaches Claude when and how to use each MCP tool during conversations.
### Uninstalling
```bash
limen-memory uninstall # or: limen-memory uninstall-hooks
```
Removes hook scripts from `~/.claude/hooks/`, hook entries from `~/.claude/settings.json`, and the MCP server entry from `~/.claude/settings.json`.
## CLI Reference
All commands support `--json` for machine-readable output.
### Core Commands
| Command | Description |
|---------|-------------|
| `limen-memory init` | Initialize database, config, skill, hooks, and MCP server |
| `limen-memory status` | Database stats and system health |
| `limen-memory query <topic>` | Search reflections (hybrid semantic + keyword scoring) |
| `limen-memory context <topic>` | Load tiered context with rule-engine processing |
| `limen-memory mcp` | Start the MCP server (stdio or streamable-http transport) |
### Learning & Reflection
| Command | Description |
|---------|-------------|
| `limen-memory learn -c CAT -k KEY -v VAL` | Record a user fact |
| `limen-memory add-reflection -s SECTION -c CONTENT` | Add a reflection with novelty filtering |
| `limen-memory reflect <session-file>` | Run full reflection pipeline on a conversation |
| `limen-memory reflect-transcript <path>` | Parse JSONL transcript and run reflection (used by hooks) |
### Querying
| Command | Description |
|---------|-------------|
| `limen-memory reflections` | List active reflections |
| `limen-memory user-profile` | Show user facts by category |
| `limen-memory interaction-profile` | Interaction profile dimensions (-1.0 to +1.0) |
| `limen-memory search <query>` | Full-text search over conversation summaries |
| `limen-memory graph-inspect` | Knowledge graph diagnostics |
| `limen-memory session-history` | Recent session contexts |
| `limen-memory run-rules <query>` | Run rule engine on a query |
### Maintenance
| Command | Description |
|---------|-------------|
| `limen-memory scheduler-tick` | Execute highest-priority overdue task |
| `limen-memory scheduler-status` | Show scheduler state and next-due tasks |
| `limen-memory consolidate` | Memory consolidation (dedup, merge, validate) |
| `limen-memory review-strategies` | Strategy review (promote, decay, merge) |
| `limen-memory cross-analyze` | Detect patterns across conversations |
| `limen-memory validate-predictions` | Check prediction edges against evidence |
| `limen-memory synthesize-profile` | Rebuild interaction profile from evidence |
| `limen-memory age` | Age stale reflections and edges |
| `limen-memory prune` | Deprecate orphaned graph relationships |
| `limen-memory backup` | Backup database (keeps last 5) |
### Data Management
| Command | Description |
|---------|-------------|
| `limen-memory deprecate <id>` | Soft-delete a reflection |
| `limen-memory create-edge --source ID --target ID --type T` | Create a knowledge graph edge |
| `limen-memory save-session --id ID --topic T` | Save a session context |
| `limen-memory uninstall` | Remove hooks and MCP server from Claude Code |
## Architecture
```
CLI (cli.py) ──┐
├──→ Services → Stores → SQLite (WAL mode)
MCP Server (mcp/)──┘ ↕
Rule Engine
```
Both CLI and MCP server share the same service layer via `service_factory.create_services()`.
**Data stores**: MemoryStore (reflections, facts, sessions), GraphStore (knowledge graph), EmbeddingStore (vectors), ConversationStore (FTS5 search), StrategyStore, ProfileStore.
**Services**: ReflectionService (full pipeline), ContextLoader (tiered retrieval with 6000-char budget), NoveltyFilter (embedding similarity gates), ConsolidationService (LLM-driven dedup), Scheduler (background tasks), CrossAnalyzer, ProfileService, StrategyService.
**Rule engine**: Priority-based execution across 4 phases (PRE_RETRIEVAL → POST_RETRIEVAL → PRE_RESPONSE → POST_RESPONSE). Five rules: ContradictionDetection, HighConfidenceInjection, BehavioralDirective, PreferenceApplication, PredictionSurfacing.
### Key Design Patterns
- **Dual-write** — Every entity save registers a GraphNode in the knowledge graph
- **Novelty gating** — New reflections pass through embedding + LLM hybrid filtering
- **Soft deletes only** — Deprecation via `deprecated_at`, never hard deletion
- **Evidence accumulation** — Confidence grows with observations, decays via aging
- **Hybrid search** — Semantic (0.35) + keyword (0.20) + recency (0.20) + confidence (0.15) + graph connectivity (0.10)
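Assuming each signal is normalized to [0, 1], the blended relevance score is a weighted sum. The sketch below illustrates the weighting only; it is not Limen's actual code:

```python
def hybrid_score(semantic: float, keyword: float, recency: float,
                 confidence: float, graph: float) -> float:
    """Blend the five retrieval signals using the weights listed above."""
    return (0.35 * semantic + 0.20 * keyword + 0.20 * recency
            + 0.15 * confidence + 0.10 * graph)

# The weights sum to 1.0, so a perfect match on every signal scores 1.0.
assert abs(hybrid_score(1.0, 1.0, 1.0, 1.0, 1.0) - 1.0) < 1e-9
```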
### Theoretical Foundation
The memory architecture is informed by computational neuroscience research on dual memory systems — the discovery that biological working memory and episodic memory solve the same problem through fundamentally different mechanisms. Limen implements this as a four-stage pipeline: fast approximate linking at write time, novelty-gated persistence, background LLM validation (consolidation), and relevance-gated retrieval that only surfaces knowledge when contextually relevant.
## Development
```bash
make dev # Full dev environment (poetry + pre-commit hooks)
make test # Run tests (tox -e py311)
make lint # Lint (tox -e lint)
make mypy # Type check (tox -e typecheck)
make typecheck # Type check (alias for make mypy)
make format # Auto-format (tox -e format)
make all # All quality checks
make clean # Remove build artifacts
tox -e coverage # Tests with coverage (80% threshold)
tox -e bandit # Security scan
```
Tests use in-memory SQLite. Shared fixtures in `tests/conftest.py` provide pre-built model instances and store instances. Test organization mirrors source structure, with `tests/test_mcp/` covering server lifecycle, tool handlers, and service factory.
## License
MIT
| text/markdown | dlattka | dennislattka@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"bandit[toml]>=1.7; extra == \"dev\"",
"click>=8.0",
"mcp[cli]<2.0,>=1.0; extra == \"dev\"",
"mcp[cli]<2.0,>=1.0; extra == \"mcp\"",
"mypy>=1.10; extra == \"dev\"",
"numpy>=1.24",
"ollama>=0.3; extra == \"dev\"",
"ollama>=0.3; extra == \"ollama\"",
"openai>=1.0; extra == \"dev\"",
"openai>=1.0; extra == \"openai\"",
"pre-commit>=3.7; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-xdist>=3.0; extra == \"dev\"",
"pyyaml>=6.0",
"ruff>=0.4; extra == \"dev\"",
"tox>=4.0; extra == \"dev\"",
"types-PyYAML; extra == \"dev\"",
"voyageai>=0.3.0; extra == \"dev\"",
"voyageai>=0.3.0; extra == \"voyage\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T01:49:14.913710 | limen_memory-1.0.8-py3-none-any.whl | 136,244 | e6/ef/97c864e47dfa5762fbbb7a4946e9351838b1dbdccd5e75d766e4ff1016e6/limen_memory-1.0.8-py3-none-any.whl | py3 | bdist_wheel | null | false | d4caa0e208863c68add6b7899845af6e | 8be899eb6537acdd1d42dc12be32d8ee0bcbd14e4ac7568f8924673c73a2a7a1 | e6ef97c864e47dfa5762fbbb7a4946e9351838b1dbdccd5e75d766e4ff1016e6 | null | [] | 81 |
2.4 | boto3-stubs-full | 1.42.54 | All-in-one type annotations for boto3 1.42.54 generated with mypy-boto3-builder 8.12.0 | <a id="boto3-stubs-full"></a>
# boto3-stubs-full
[](https://pypi.org/project/boto3-stubs-full/)
[](https://pypi.org/project/boto3-stubs-full/)
[](https://youtype.github.io/boto3_stubs_docs/)
[](https://pypistats.org/packages/boto3-stubs-full)

Type annotations for [boto3 1.42.54](https://pypi.org/project/boto3/)
compatible with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[boto3-stubs](https://pypi.org/project/boto3-stubs/) page and in
[boto3-stubs-full docs](https://youtype.github.io/boto3_stubs_docs/).

- [boto3-stubs-full](#boto3-stubs-full)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.54' mypy-boto3-builder`
2. Select `boto3-stubs` AWS SDK.
3. Add all available services.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Auto-discover services` and select services you use in the current
project.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `boto3-stubs-full` to add type checking for `boto3` package.
```bash
# install type annotations
python -m pip install 'boto3-stubs[full]'
# or install annotations in sync with boto3 version
python -m pip install 'boto3-stubs[full,boto3]'
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
# uninstall boto3-stubs
python -m pip uninstall -y boto3-stubs-full boto3-stubs
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `boto3-stubs-full` in your environment:
```bash
python -m pip install 'boto3-stubs[full]'
```
Both type checking and code completion should now work. No explicit type
annotations are required; write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [boto3-stubs-lite](https://pypi.org/project/boto3-stubs-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try to disable
> `PyCharm` type checker and use [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `boto3-stubs` with
> [boto3-stubs-lite](https://pypi.org/project/boto3-stubs-lite/):
```bash
python -m pip uninstall -y boto3-stubs
python -m pip install 'boto3-stubs-lite[full]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `boto3-stubs-full` in your environment:
```bash
python -m pip install 'boto3-stubs[full]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
  :ensure t
  :hook (python-mode . (lambda ()
                         (require 'lsp-pyright)
                         (lsp))) ; or lsp-deferred
  :init (when (executable-find "python3")
          (setq lsp-pyright-python-executable-cmd "python3"))
)
```
- Make sure emacs uses the environment where you have installed
`boto3-stubs-full`
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `boto3-stubs-full` in your environment:
```bash
python -m pip install 'boto3-stubs[full]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `boto3-stubs-full` in your environment:
```bash
python -m pip install 'boto3-stubs[full]'
```
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `boto3-stubs-full` in your environment:
```bash
python -m pip install 'boto3-stubs[full]'
```
Optionally, you can install `boto3-stubs-full` to `typings` directory.
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is totally safe to use `TYPE_CHECKING` flag in order to avoid
`boto3-stubs-full` dependency in production. However, there is an issue in
`pylint` that it complains about undefined variables. To fix it, set all types
to `object` in non-`TYPE_CHECKING` mode.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from mypy_boto3_ec2 import EC2Client, EC2ServiceResource
    from mypy_boto3_ec2.waiters import BundleTaskCompleteWaiter
    from mypy_boto3_ec2.paginators import DescribeVolumesPaginator
else:
    EC2Client = object
    EC2ServiceResource = object
    BundleTaskCompleteWaiter = object
    DescribeVolumesPaginator = object
...
```
<a id="explicit-type-annotations"></a>
### Explicit type annotations
To speed up type checking and code completion, you can set types explicitly.
```python
import boto3
from boto3.session import Session
from mypy_boto3_ec2.client import EC2Client
from mypy_boto3_ec2.service_resource import EC2ServiceResource
from mypy_boto3_ec2.waiter import BundleTaskCompleteWaiter
from mypy_boto3_ec2.paginator import DescribeVolumesPaginator
session = Session(region_name="us-west-1")
ec2_client: EC2Client = boto3.client("ec2", region_name="us-west-1")
ec2_resource: EC2ServiceResource = session.resource("ec2")
bundle_task_complete_waiter: BundleTaskCompleteWaiter = ec2_client.get_waiter(
    "bundle_task_complete"
)
describe_volumes_paginator: DescribeVolumesPaginator = ec2_client.get_paginator("describe_volumes")
```
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- Link to documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
  annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`boto3-stubs-full` version is the same as related `boto3` version and follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
[boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/),
this package is based on top of his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
All services type annotations can be found in
[boto3 docs](https://youtype.github.io/boto3_stubs_docs/)
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report any bugs or request new features
in the [mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
| text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Typing :: Stubs Only"
] | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/boto3_stubs_docs/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T01:48:34.213125 | boto3_stubs_full-1.42.54.tar.gz | 8,415,132 | 3e/b3/e552efa26d3e331b6b4f468c67f271400357525e5a55788c8962cd951db0/boto3_stubs_full-1.42.54.tar.gz | source | sdist | null | false | ad05dc83e4a27c010e0bb87e2a4072fe | 68390fb2085750b4226acf0f9eac237e169d3aa8a9233025eaa53ed32d935021 | 3eb3e552efa26d3e331b6b4f468c67f271400357525e5a55788c8962cd951db0 | MIT | [
"LICENSE"
] | 2,225 |
2.4 | unitysvc-services | 0.3.16 | SDK for digital service providers on UnitySVC | # UnitySVC Services SDK

[](https://unitysvc-services.readthedocs.io/en/latest/?version=latest)
Client library and CLI tools for sellers and providers of digital services to interact with the UnitySVC platform.
**[Full Documentation](https://unitysvc-services.readthedocs.io)** | **[Getting Started](https://unitysvc-services.readthedocs.io/en/latest/getting-started/)** | **[CLI Reference](https://unitysvc-services.readthedocs.io/en/latest/cli-reference/)**
## Two Ways to Manage Service Data
UnitySVC provides two complementary approaches for managing your seller service data:
### 1. Web Interface (unitysvc.com)
The [UnitySVC web platform](https://unitysvc.com) provides a user-friendly interface to:
- Create, edit, and manage providers, offerings, and listings
- Validate data with instant feedback
- Preview how services appear to customers
- Export data for use with the SDK
**Best for**: Initial setup, visual editing, and teams preferring a graphical interface.
### 2. SDK (this package)
The SDK enables a **local-first, version-controlled workflow** with key advantages:
- **Version Control** - Track all changes in git, review diffs, roll back mistakes
- **Script-Based Generation** - Programmatically generate services from provider APIs
- **CI/CD Automation** - Automatically upload updates and manage service lifecycle via GitHub Actions
- **Offline Work** - Edit locally, validate without network, upload when ready
- **Code Review** - Use pull requests to review service changes before uploading
- **Service Lifecycle** - Submit services for review, deprecate outdated services, withdraw services from marketplace
**Best for**: Large catalogs, dynamic services, automation, and teams with developer workflows.
### Recommended Workflow
1. **Start with the web interface** at [unitysvc.com](https://unitysvc.com) to create your initial service data
2. **Export your data** to local files for version control
3. **Use the SDK** for ongoing management, automation, and CI/CD integration
## Installation
```bash
pip install unitysvc-services
```
Requires Python 3.11+
**CLI Alias:** The command `unitysvc_services` can also be invoked using the shorter alias `usvc`.
## Service Data Model
A **Service** in UnitySVC is an identity layer that connects a seller to three complementary data components. These components are organized separately for reuse but **uploaded together** as a single unit:
```mermaid
flowchart TB
    subgraph Service["Service (Identity Layer)"]
        direction TB
        S["<b>Service</b><br/>name, display_name, status<br/><i>derived from components</i>"]
    end
    subgraph Content["Content Entities (Uploaded Together)"]
        P["<b>Provider Data</b><br/>WHO provides<br/><i>provider_v1</i>"]
        O["<b>Offering Data</b><br/>WHAT is provided<br/><i>offering_v1</i>"]
        L["<b>Listing Data</b><br/>HOW it's sold<br/><i>listing_v1</i>"]
    end
    S --> P & O & L
    P --> O --> L
    style S fill:#f3e5f5
    style P fill:#e3f2fd
    style O fill:#fff3e0
    style L fill:#e8f5e9
```
### Service Identity
When you upload provider, offering, and listing data together, the platform creates a **Service** record that:
- **Links** the seller to the content (provider, offering, listing)
- **Derives its name** from `listing.name`, or `offering.name` if listing name is unspecified
- **Derives its display_name** from `listing.display_name`, `offering.display_name`, `listing.name`, or `offering.name` (first non-empty value)
- **Derives its status** from the component statuses - a service is considered `draft` if any component is draft
The Service provides a stable identity that subscriptions reference, while the content entities (Provider, Offering, Listing) are immutable and content-addressed.
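As an illustrative sketch (plain dicts rather than the SDK's Pydantic models), the `display_name` fallback chain amounts to taking the first non-empty candidate:

```python
def derive_display_name(listing: dict, offering: dict) -> str:
    """Return the first non-empty candidate, per the derivation rules above."""
    candidates = (
        listing.get("display_name"),
        offering.get("display_name"),
        listing.get("name"),
        offering.get("name"),
    )
    return next(c for c in candidates if c)

# A listing without its own display_name falls back to the offering's.
print(derive_display_name({"name": "gpt-basic"},
                          {"display_name": "GPT Service", "name": "gpt"}))
# → GPT Service
```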
### Why Three Parts?
1. **Provider Data** - Defined once per provider, reused across all their offerings
2. **Offering Data** - Defined once per service, can have multiple listings
3. **Listing Data** - Defines how each service variant is presented to customers
This separation enables:
- **Reusability**: One provider can have many offerings; one offering can have multiple listings
- **Maintainability**: Update provider info once, affects all services
- **Flexibility**: Different pricing tiers, marketplaces, or customer segments per listing
- **Immutability**: Content entities are content-addressed; same content = same ID
## Quick Example
```bash
# 1. Export data from unitysvc.com or create files manually
# Place files in: data/{provider}/services/{service}/
# 2. Validate and format your local data
usvc data validate
usvc data format
# 3. Run code examples locally with upstream credentials
usvc data list-tests
usvc data run-tests --provider my-provider
# 4. For dynamic catalogs, use populate scripts
usvc data populate
# 5. Upload to platform (uploads provider + offering + listing together)
export UNITYSVC_API_URL="https://api.unitysvc.com/v1"
export UNITYSVC_SELLER_API_KEY="your-seller-api-key"
usvc data upload
# 6. Test via gateway and submit for review
usvc services list-tests
usvc services run-tests
usvc services submit <service-id>
# 7. Query backend to verify uploaded data
usvc services list
```
## Data Structure
```
data/
├── ${provider_name}/
│   ├── provider.json            # Provider Data (provider_v1)
│   ├── docs/                    # Shared documentation
│   └── services/
│       └── ${service_name}/
│           ├── offering.json    # Offering Data (offering_v1)
│           └── listing-*.json   # Listing Data (listing_v1) ← upload entry point
```
**Uploading is listing-centric**: When you run `usvc data upload`, the SDK:
1. Finds all listing files (`listing_v1` schema)
2. For each listing, locates the **single** offering file in the same directory
3. Locates the provider file in the parent directory
4. Uploads all three together as a unified service in `draft` status
**Key constraint**: Each service directory must have exactly **one** offering file. Listings automatically belong to this offering based on their file location—no explicit linking required.
See [Data Structure Documentation](https://unitysvc-services.readthedocs.io/en/latest/data-structure/) for complete details.
## Key Features
- **Unified Upload** - Provider, offering, and listing uploaded together atomically
- **Pydantic Models** - Type-safe data models for all entities
- **Data Validation** - Comprehensive schema validation
- **Local-First** - Work offline, commit to git, upload when ready
- **CLI Tools** - Complete command-line interface
- **Automation** - Script-based service generation
- **Multiple Formats** - Support for JSON and TOML
- **Smart Routing** - Request routing based on routing keys (e.g., model-specific endpoints)
## Workflows
### Getting Started
```bash
web interface (create data) → export → usvc data validate → usvc data upload
```
### Ongoing Management
```bash
edit files → usvc data validate → usvc data format → usvc data run-tests → commit → usvc data upload
```
### Automated Workflow (large/dynamic catalogs)
```bash
configure populate script → usvc data populate → usvc data validate → usvc data upload (via CI/CD)
```
See [Workflows Documentation](https://unitysvc-services.readthedocs.io/en/latest/workflows/) for details.
## CLI Commands
The CLI is organized into two main command groups:
### Local Data Operations (`usvc data`)
Work with local data files - can be used offline without API credentials (except `upload`).
| Command | Description |
| ------------ | --------------------------------------------------- |
| `validate` | Validate data files against schemas |
| `format` | Format/prettify data files |
| `populate` | Generate data files from provider scripts |
| `upload` | Upload local data to backend (draft status) |
| `list` | List local data files (services, providers, etc.) |
| `show` | Show details of a local data object |
| `list-tests` | List code examples in local data |
| `run-tests` | Run code examples locally with upstream credentials |
| `show-test` | Show details of a local test |
### Remote Service Operations (`usvc services`)
Manage services on the backend - can be run from anywhere with the right API key.
| Command | Description |
| ------------- | -------------------------------------------------- |
| `list` | List deployed services on backend |
| `show` | Show details of a deployed service |
| `submit` | Submit draft service(s) for ops review (use --all) |
| `withdraw` | Withdraw pending/rejected service(s) to draft |
| `deprecate` | Deprecate active service(s) |
| `delete` | Delete service(s) from backend |
| `dedup` | Remove duplicate draft services |
| `list-tests` | List tests for deployed services |
| `show-test` | Show details of a test for a deployed service |
| `run-tests` | Run tests via gateway (backend execution) |
| `skip-test` | Mark a code example test as skipped |
| `unskip-test` | Remove skip status from a test |
Run `usvc --help` or see [CLI Reference](https://unitysvc-services.readthedocs.io/en/latest/cli-reference/) for complete documentation.
## Documentation
- **[Getting Started](https://unitysvc-services.readthedocs.io/en/latest/getting-started/)** - Installation and first steps
- **[Data Structure](https://unitysvc-services.readthedocs.io/en/latest/data-structure/)** - File organization and Service Data model
- **[Workflows](https://unitysvc-services.readthedocs.io/en/latest/workflows/)** - Manual and automated patterns
- **[Documenting Service Listings](https://unitysvc-services.readthedocs.io/en/latest/documenting-services/)** - Add documentation to services
- **[Creating Code Examples](https://unitysvc-services.readthedocs.io/en/latest/code-examples/)** - Develop and test code examples
- **[CLI Reference](https://unitysvc-services.readthedocs.io/en/latest/cli-reference/)** - All commands and options
- **[File Schemas](https://unitysvc-services.readthedocs.io/en/latest/file-schemas/)** - Schema specifications
- **[Python API](https://unitysvc-services.readthedocs.io/en/latest/api-reference/)** - Programmatic usage
## Links
- **PyPI**: https://pypi.org/project/unitysvc-services/
- **Documentation**: https://unitysvc-services.readthedocs.io
- **Source Code**: https://github.com/unitysvc/unitysvc-services
- **Issue Tracker**: https://github.com/unitysvc/unitysvc-services/issues
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Contributing
Contributions welcome! See [Contributing Guide](https://unitysvc-services.readthedocs.io/en/latest/contributing/) for details.
| text/markdown | null | Bo Peng <bo.peng@unitysvc.com> | null | Bo Peng <bo.peng@unitysvc.com> | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"typer",
"pydantic",
"email-validator",
"jsonschema",
"jinja2",
"rich",
"httpx",
"tomli-w",
"mistune>=3.0",
"json5",
"coverage; extra == \"test\"",
"pytest; extra == \"test\"",
"ruff; extra == \"test\"",
"mypy; extra == \"test\"",
"ipdb; extra == \"test\"",
"coverage; extra == \"dev\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"ty; extra == \"dev\"",
"ipdb; extra == \"dev\"",
"mkdocs; extra == \"dev\"",
"mkdocs-material; extra == \"dev\"",
"mkdocs-autorefs; extra == \"dev\"",
"pymdown-extensions; extra == \"dev\"",
"mkdocs; extra == \"docs\"",
"mkdocs-material; extra == \"docs\"",
"mkdocs-autorefs; extra == \"docs\"",
"pymdown-extensions; extra == \"docs\""
] | [] | [] | [] | [
"bugs, https://github.com/unitysvc/unitysvc-services/issues",
"changelog, https://github.com/unitysvc/unitysvc-services/blob/master/changelog.md",
"homepage, https://github.com/unitysvc/unitysvc-services"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T01:48:25.059189 | unitysvc_services-0.3.16.tar.gz | 207,280 | c4/21/d0d6dba0307c547e7568c1dc4a4a589f42c56dd6bc9359aff4a88da52fc7/unitysvc_services-0.3.16.tar.gz | source | sdist | null | false | a9faf9257539d5644abe99b368e151b7 | 2531323db80c84ee07e31aba74984fea5ecc030ea937f376021938eb13de7762 | c421d0d6dba0307c547e7568c1dc4a4a589f42c56dd6bc9359aff4a88da52fc7 | MIT | [
"LICENSE"
] | 168 |
2.4 | pop-python | 1.1.1 | Prompt Oriented Programming (POP): reusable, composable prompt functions for LLMs. | # Prompt Oriented Programming (POP)
```python
from POP import PromptFunction
pf = PromptFunction(
prompt="Draw a simple ASCII art of <<<object>>>.",
client="openai",
)
print(pf.execute(object="a cat"))
print(pf.execute(object="a rocket"))
```
```
 /\_/\
( o.o )
 > ^ <

    /\
   /  \
  /    \
  |    |
  |    |
```
---
Reusable, composable prompt functions for LLM workflows.
This 1.1.0 dev update restructures POP into small, focused modules and adds a provider registry inspired by pi-mono's `ai` package.
PyPI:
[https://pypi.org/project/pop-python/](https://pypi.org/project/pop-python/)
GitHub:
[https://github.com/sgt1796/POP](https://github.com/sgt1796/POP)
---
## Table of Contents
1. [Overview](#1-overview)
2. [Update Note](#2-update-note)
3. [Major Updates](#3-major-updates)
4. [Features](#4-features)
5. [Installation](#5-installation)
6. [Setup](#6-setup)
7. [PromptFunction](#7-promptfunction)
8. [Provider Registry](#8-provider-registry)
9. [Tool Calling](#9-tool-calling)
10. [Agent Loop (POP.agent)](#10-agent-loop-popagent)
11. [Function Schema Generation](#11-function-schema-generation)
12. [Embeddings](#12-embeddings)
13. [Web Snapshot Utility](#13-web-snapshot-utility)
14. [Examples](#14-examples)
15. [Contributing](#15-contributing)
---
# 1. Overview
Prompt Oriented Programming (POP) is a lightweight framework for building reusable, parameterized prompt functions.
Instead of scattering prompt strings across your codebase, POP lets you:
* encapsulate prompts as objects
* pass parameters cleanly via placeholders
* select a backend LLM client dynamically
* improve prompts using meta-prompting
* generate OpenAI-compatible function schemas
* use unified embedding tools
* work with multiple LLM providers through a centralized registry
POP is designed to be simple, extensible, and production-friendly.
---
# 2. Update Note
**1.1.0-dev (February 5, 2026)**
* **Breaking import path**: use `POP` (uppercase) for imports. Example: `from POP import PromptFunction`.
* **Provider registry**: clients live under `POP/providers/` and are instantiated via `POP.api_registry`.
* **LLMClient base class**: now in `POP.providers.llm_client` (kept as an abstract base class).
---
# 3. Major Updates
### 3.1. Modularized architecture
The project has been decomposed into small, focused modules:
* `POP/prompt_function.py`
* `POP/embedder.py`
* `POP/context.py`
* `POP/api_registry.py`
* `POP/providers/` (one provider per file)
* `POP/utils/`
This mirrors the structure in the pi-mono `ai` package for clarity and maintainability.
### 3.2. Provider registry + per-provider clients
Each provider has its own adaptor (OpenAI, Gemini, DeepSeek, Doubao, Local, Ollama). The registry gives you:
* `list_providers()`
* `list_default_model()`
* `list_models()`
* `get_client()`
---
# 4. Features
* **Reusable Prompt Functions**
Use `<<<placeholder>>>` syntax to inject dynamic content.
* **Multi-LLM Backend**
Choose between OpenAI, Gemini, DeepSeek, Doubao, Local, or Ollama.
* **Tool Calling**
Pass a tool schema list to `execute()` and receive tool-call arguments.
* **Multimodal (Text + Image)**
Pass `images=[...]` (URLs or base64) when the provider supports it.
* **Prompt Improvement**
Improve or rewrite prompts using Fabric-style meta-prompts.
* **Function Schema Generation**
Convert natural language descriptions into OpenAI-function schemas.
* **Unified Embedding Interface**
Supports OpenAI, Jina AI embeddings, and local HuggingFace models.
* **Webpage Snapshot Utility**
Convert any URL into structured text using r.jina.ai with optional image captioning.
---
# 5. Installation
Install from PyPI:
```bash
pip install pop-python
```
Or install in development mode from GitHub:
```bash
git clone https://github.com/sgt1796/POP.git
cd POP
pip install -e .
```
---
# 6. Setup
Create a `.env` file in your project root:
```ini
OPENAI_API_KEY=your_openai_key
GEMINI_API_KEY=your_gcp_gemini_key
DEEPSEEK_API_KEY=your_deepseek_key
DOUBAO_API_KEY=your_volcengine_key
JINAAI_API_KEY=your_jina_key
```
All clients automatically read keys from environment variables.
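A quick way to check which keys are present before picking a provider (a stdlib-only sketch; the variable names follow the `.env` example above, and the provider labels are illustrative):

```python
import os

# Variable names follow the .env example above.
KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
    "doubao": "DOUBAO_API_KEY",
    "jina": "JINAAI_API_KEY",
}

def available_providers() -> list[str]:
    """Return providers whose API key is set (non-empty) in the environment."""
    return [name for name, var in KEY_VARS.items() if os.getenv(var)]

os.environ["OPENAI_API_KEY"] = "sk-test"  # simulate a configured key
print(available_providers())
```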
---
# 7. PromptFunction
The core abstraction of POP is the `PromptFunction` class.
```python
from POP import PromptFunction
pf = PromptFunction(
    sys_prompt="You are a helpful AI.",
    prompt="Give me a summary about <<<topic>>>.",
)
print(pf.execute(topic="quantum biology"))
```
---
## 7.1. Placeholder Syntax
Use triple angle brackets inside your prompt:
```
<<<placeholder>>>
```
These are replaced at execution time.
Example:
```python
prompt = "Translate <<<sentence>>> to French."
```
---
## 7.2. Reserved Keywords
Within `.execute()`, the following keyword arguments are **reserved** and should not be used as placeholder names:
* `model`
* `sys`
* `fmt`
* `tools`
* `tool_choice`
* `temp`
* `images`
* `ADD_BEFORE`
* `ADD_AFTER`
These keywords configure the request itself rather than filling placeholders. `ADD_BEFORE` and `ADD_AFTER` are the exception: they attach the given string to the head or tail of the prompt.
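The substitution behavior can be pictured with a small stand-alone sketch (illustrative only, not POP's actual implementation):

```python
def render(prompt: str, *, ADD_BEFORE: str = "", ADD_AFTER: str = "", **placeholders) -> str:
    """Fill <<<name>>> slots, then prepend/append the extra text."""
    for name, value in placeholders.items():
        prompt = prompt.replace(f"<<<{name}>>>", str(value))
    if ADD_BEFORE:
        prompt = ADD_BEFORE + "\n" + prompt
    if ADD_AFTER:
        prompt = prompt + "\n" + ADD_AFTER
    return prompt

print(render(
    "Translate <<<sentence>>> to French.",
    sentence="good morning",
    ADD_AFTER="Reply with the translation only.",
))
```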
---
## 7.3. Executing prompts
```python
result = pf.execute(
    topic="photosynthesis",
    model="gpt-5-mini",
    temp=0.3,
)
```
---
## 7.4. Improving Prompts
You can ask POP to rewrite or enhance your system prompt:
```python
better = pf.improve_prompt()
print(better)
```
This uses a Fabric-inspired meta-prompt bundled in the `POP/prompts/` directory.
---
# 8. Provider Registry
Use the registry to list providers/models or instantiate clients.
```python
from POP import list_providers, list_models, list_default_model, get_client
print(list_providers())
print(list_default_model())
print(list_models())
client = get_client("openai")
```
Non-default model example:
```python
from POP import PromptFunction, get_client
client = get_client("gemini", "gemini-2.5-pro")
pf = PromptFunction(prompt="Draw a rocket.", client=client)
print(pf.execute())
```
Direct provider class example:
```python
from POP import PromptFunction
from POP.providers.gemini_client import GeminiClient
pf = PromptFunction(prompt="Draw a rocket.", client=GeminiClient(model="gemini-2.5-pro"))
print(pf.execute())
```
---
# 9. Tool Calling
```python
from POP import PromptFunction
tools = [
    {
        "type": "function",
        "function": {
            "name": "create_reminder",
            "description": "Create a reminder.",
            "parameters": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "when": {"type": "string"},
                },
                "required": ["description"],
            },
        },
    }
]

pf = PromptFunction(
    sys_prompt="You are a helpful assistant.",
    prompt="<<<input>>>",
    client="openai",
)

result = pf.execute(input="Remind me to walk at 9am.", tools=tools)
print(result)
```
---
# 10. Agent Loop (POP.agent)
POP includes a lightweight, event-driven agent loop in the `agent/` package
for tool calling, steering, and follow-ups. It is designed to sit on top of
the POP provider registry via `POP.stream.stream`, but you can supply any
`stream_fn` that matches the `(model, context, options)` signature and returns
an async event stream with a `result()` method.
In this repo the import path is `agent` (for example, `from agent import Agent`).
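To make the `stream_fn` contract concrete, here is a toy stand-in (the event dict shapes and `result()` payload below are illustrative assumptions, not POP's real event types):

```python
import asyncio

class FakeStream:
    """Async-iterable event stream with a result() method, matching the expected shape."""
    def __init__(self, events, final):
        self._events = list(events)
        self._final = final

    def __aiter__(self):
        self._iter = iter(self._events)
        return self

    async def __anext__(self):
        try:
            return next(self._iter)
        except StopIteration:
            raise StopAsyncIteration

    async def result(self):
        return self._final

def fake_stream(model, context, options):
    # A real stream_fn would call the provider; this one replays canned events.
    return FakeStream(
        events=[{"type": "text_delta", "text": "Hello"}],
        final={"role": "assistant", "content": "Hello"},
    )

async def demo():
    stream = fake_stream({"provider": "none"}, [], {})
    async for event in stream:
        print(event)
    print(await stream.result())

asyncio.run(demo())
```

Any callable with this `(model, context, options)` shape can be passed as `stream_fn` when `POP.stream.stream` is unavailable.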
**Key concepts**
* **Agent** – stateful controller for model, messages, tools, and queues.
* **AgentTool** – async tool interface; implement `execute` and return `AgentToolResult`.
* **Steering** – inject user messages mid-run with `agent.steer(...)`.
* **Follow-ups** – queue messages after tool execution with `agent.follow_up(...)`.
* **Events** – subscribe to lifecycle events for streaming UIs.
**Minimal example**
```python
import asyncio
from agent import Agent
from POP.stream import stream
async def main():
    agent = Agent({"stream_fn": stream})
    agent.set_model({"provider": "gemini", "id": "gemini-3-flash-preview", "api": None})
    await agent.prompt("Say hi")
    await agent.wait_for_idle()

asyncio.run(main())
```
**Tools + steering + follow-up (condensed from `agent/test.py`)**
```python
import asyncio, time
from agent import Agent
from agent.agent_types import AgentMessage, TextContent, AgentToolResult, AgentTool
from POP.stream import stream
class SlowTool(AgentTool):
    name = "slow"
    description = "Sleep a bit"
    parameters = {"type": "object", "properties": {"seconds": {"type": "number"}}}
    label = "Slow"

    async def execute(self, tool_call_id, params, signal=None, on_update=None):
        seconds = float(params.get("seconds", 1.0))
        await asyncio.sleep(seconds)
        return AgentToolResult(
            content=[TextContent(type="text", text=f"slow done {seconds}s")],
            details={},
        )

class FastTool(AgentTool):
    name = "fast"
    description = "Return quickly"
    parameters = {"type": "object", "properties": {}}
    label = "Fast"

    async def execute(self, tool_call_id, params, signal=None, on_update=None):
        return AgentToolResult(
            content=[TextContent(type="text", text="fast done")],
            details={},
        )

async def main():
    agent = Agent({"stream_fn": stream})
    agent.set_model({"provider": "gemini", "id": "gemini-3-flash-preview", "api": None})
    agent.set_tools([SlowTool(), FastTool()])
    agent.follow_up(AgentMessage(
        role="user",
        content=[TextContent(type="text", text="follow up: summarize")],
        timestamp=time.time(),
    ))
    task = asyncio.create_task(
        agent.prompt("Call tool slow with seconds=1.2, then call tool fast")
    )
    agent.steer(AgentMessage(
        role="user",
        content=[TextContent(type="text", text="steer: actually, call slow 4 times, and in between of each call add a fast call, but keep the total time unchanged as 1.2s. then fast")],
        timestamp=time.time(),
    ))
    await task
    await agent.wait_for_idle()

asyncio.run(main())
```
Notes:
* If `POP` is not importable, pass `stream_fn` explicitly; the agent does not ship a provider.
* `agent/test.py` is the most complete end-to-end example right now.
* Run the demo with `python -m agent.test`.
---
# 11. Function Schema Generation
POP supports generating **OpenAI function-calling schemas** from natural language descriptions.
```python
schema = pf.generate_schema(
    description="Return the square and cube of a given integer."
)
print(schema)
```
What this does:
* Applies a standard meta-prompt
* Uses the selected LLM backend
* Produces a valid JSON Schema for OpenAI function calling
* Optionally saves it under `schemas/`
---
# 12. Embeddings
POP includes a unified embedding interface:
```python
from POP import Embedder
embedder = Embedder(use_api="openai")
vecs = embedder.get_embedding(["hello world"])
```
Supported modes:
* OpenAI embeddings
* Gemini embeddings (via OpenAI-compatible Gemini endpoint)
* JinaAI embeddings
* Local HuggingFace model embeddings (cpu/gpu)
Large inputs are chunked automatically when needed.
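Embedding vectors are typically compared with cosine similarity downstream; a stdlib-only sketch (the `Embedder` output is assumed to be a list of float vectors):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# With real vectors from Embedder.get_embedding([...]), rank candidates like:
query, doc1, doc2 = [1.0, 0.0], [0.9, 0.1], [0.0, 1.0]
print(cosine(query, doc1) > cosine(query, doc2))  # doc1 is more similar
```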
---
# 13. Web Snapshot Utility
```python
from POP.utils.web_snapshot import get_text_snapshot
text = get_text_snapshot("https://example.com", image_caption=True)
print(text[:500])
```
Supports:
* optional image removal
* optional image captioning
* DOM selector filtering
* returning JSON or plain text
---
# 14. Examples
```python
from POP import PromptFunction
pf = PromptFunction(prompt="Give me 3 creative names for a <<<thing>>>.")
print(pf.execute(thing="robot"))
print(pf.execute(thing="new language"))
```
Multimodal example (provider must support images):
```python
from POP import PromptFunction
image_b64 = "..." # base64-encoded image
pf = PromptFunction(prompt="Describe the image.", client="openai")
print(pf.execute(images=[image_b64]))
```
---
# 15. Contributing
Steps:
1. Fork the GitHub repo
2. Create a feature branch
3. Add tests or examples
4. Submit a PR with a clear explanation
| text/markdown | Guotai Shen | sgt1796@gmail.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/sgt1796/POP | null | >=3.8 | [] | [] | [] | [
"openai>=1.0.0",
"requests>=2.25.0",
"python-dotenv",
"pydantic>=1.10",
"numpy>=1.21",
"backoff",
"Pillow>=9.0",
"torch; extra == \"local-embeddings\"",
"transformers>=4.30.0; extra == \"local-embeddings\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:47:26.732906 | pop_python-1.1.1.tar.gz | 110,056 | e9/b7/84d51bf9ac26226f34955c114e949f755d923c31cbf2e2d5b504bb03267c/pop_python-1.1.1.tar.gz | source | sdist | null | false | 81da2dbf312ffba87fcbdc34fb44d19b | bcf2dca1fd16d20af050b8d13c4b69ce7fe7a6648fb129ea3dc475ff2c07a6e7 | e9b784d51bf9ac26226f34955c114e949f755d923c31cbf2e2d5b504bb03267c | null | [
"LICENSE"
] | 213 |
2.4 | gates-sdk | 0.1.4 | Python SDK for integrating with Gates for Cognito authentication and user management | # Gates SDK (Python)
[](https://badge.fury.io/py/gates-sdk)
[](https://pypi.org/project/gates-sdk/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/psf/black)
Python SDK for authenticating users with AWS Cognito JWT tokens and integrating with the Gates backend for profile management. Structured for publication on PyPI.
## Features
- ✅ Authentication with AWS Cognito JWT tokens
- ✅ User group validation
- ✅ Automatic public-key caching
- ✅ Asynchronous HTTP client for the Gates API
- ✅ Support for custom user profiles
- ✅ Robust error handling
- ✅ Complete type hints
- ✅ Comprehensive unit tests
## Installation
```bash
pip install gates-sdk
```
### Development installation
```bash
# Clone the repository
git clone https://github.com/intelicity/gates-python-sdk.git
cd gates-python-sdk

# Create a virtual environment
python -m venv venv

# Activate the virtual environment (Windows)
venv\Scripts\activate
# or Linux/Mac:
# source venv/bin/activate

# Install in development mode
pip install -e ".[dev]"
```
## Usage
### Authentication
```python
from gates_sdk import AuthService
auth = AuthService(
    region="sa-east-1",
    user_pool_id="sa-east-1_xxxxxxxxx",
    audience="your-client-id",
    required_group=["admin", "user"],  # optional
)

try:
    user = auth.verify_token(token)
    print("Authenticated user:", user)
    print("Is a member of the group?", auth.is_member_of(user.groups or []))
except Exception as exc:
    print("Authentication failed:", exc)
```
### User service
```python
from gates_sdk import UserService
user_service = UserService(
    base_url="https://api.example.com",
    system="your-system-name",
)
users = user_service.get_all_users(id_token)
print(users.profiles, users.total)
profile = user_service.get_profile(id_token)
print(profile)
```
### Environment variables
```bash
export GATES_REGION=sa-east-1
export GATES_USER_POOL_ID=sa-east-1_xxxxxxxxx
export GATES_CLIENT_ID=your-cognito-client-id
export GATES_BACKEND_URL=https://your-backend-api.com
export GATES_SYSTEM_NAME=your-system-name
```
The services can also be instantiated by reading these variables with `os.getenv`.
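For instance (a minimal stdlib-only sketch; the keyword names mirror the constructor examples above):

```python
import os

# Simulate the exported variables from the example above.
os.environ.setdefault("GATES_REGION", "sa-east-1")
os.environ.setdefault("GATES_USER_POOL_ID", "sa-east-1_xxxxxxxxx")
os.environ.setdefault("GATES_CLIENT_ID", "your-cognito-client-id")

auth_kwargs = {
    "region": os.getenv("GATES_REGION"),
    "user_pool_id": os.getenv("GATES_USER_POOL_ID"),
    "audience": os.getenv("GATES_CLIENT_ID"),
}
# AuthService(**auth_kwargs) would then pick up the environment configuration.
print(auth_kwargs["region"])
```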
## Development
### Initial setup
```bash
# Install development dependencies
pip install -e ".[dev]"

# Set up pre-commit hooks (optional)
pre-commit install
```
### Useful commands
```bash
# Run tests
pytest

# Tests with coverage
pytest --cov=src --cov-report=html

# Code formatting
black src tests
isort src tests

# Check formatting without modifying
black --check src tests
isort --check-only src tests

# Lint
flake8 src tests

# Type checking
mypy src

# Run all checks
pytest && black --check src tests && isort --check-only src tests && flake8 src tests && mypy src
```
### Using the Makefile (Linux/Mac)
```bash
make help          # Show all available commands
make install-dev   # Install development dependencies
make test          # Run tests
make format        # Format code
make check         # Run all checks
make build         # Build the package
make upload-test   # Publish to TestPyPI
```
### Using the PowerShell script (Windows)
```powershell
# Publish to TestPyPI
.\publish.ps1 -Target test

# Publish to PyPI (production)
.\publish.ps1 -Target prod

# Skip tests during publishing
.\publish.ps1 -SkipTests

# Force publishing without confirmation
.\publish.ps1 -Force
```
### Project structure
```
gates-python-sdk/
├── src/
│   ├── __init__.py        # Main exports
│   ├── auth.py            # Authentication service
│   ├── cache.py           # Cache system
│   ├── errors.py          # Custom exceptions
│   ├── models.py          # Data models
│   ├── user.py            # User service
│   └── py.typed           # Type-hint marker
├── tests/
│   ├── test_auth.py       # Authentication tests
│   ├── test_cache.py      # Cache tests
│   └── test_user.py       # User tests
├── .github/workflows/     # GitHub Actions
├── pyproject.toml         # Project configuration
├── CHANGELOG.md           # Change history
├── LICENSE                # MIT license
├── MANIFEST.in            # Files to include in the package
├── Makefile               # Commands for Unix/Linux
├── publish.ps1            # Publishing script for Windows
├── publish.py             # Python publishing script
└── README.md              # This file
```
### Publishing
1. **Update the version** in `pyproject.toml`
2. **Update** `CHANGELOG.md`
3. **Run the tests**: `pytest`
4. **Test on TestPyPI**: `.\publish.ps1 -Target test`
5. **Publish to PyPI**: `.\publish.ps1 -Target prod`
### Tests
The project includes comprehensive unit tests:
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src

# Run specific tests
pytest tests/test_auth.py

# Run with verbose output
pytest -v
```
### Continuous Integration
The project is configured with GitHub Actions for:
- ✅ Automated tests on Python 3.9-3.12
- ✅ Formatting and lint checks
- ✅ Type checking with mypy
- ✅ Package build and verification
- ✅ Automatic publishing to PyPI via releases
## Requirements
- Python ≥ 3.9
- pyjwt[crypto] ≥ 2.8
- httpx ≥ 0.27
## License
MIT. See the [LICENSE](LICENSE) file for details.
## Contributing
1. Fork the project
2. Create a branch for your feature (`git checkout -b feature/nova-feature`)
3. Commit your changes (`git commit -am 'Adiciona nova feature'`)
4. Push the branch (`git push origin feature/nova-feature`)
5. Open a Pull Request
## Support
For questions and support, open an [issue](https://github.com/intelicity/gates-python-sdk/issues) on GitHub.
| text/markdown | Intelicity | null | null | null | MIT License
Copyright (c) 2025 Intelicity
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| aws, cognito, jwt, authentication, gates, sdk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP",
"Topic :: System :: Systems Administration :: Authentication/Directory"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyjwt[crypto]>=2.8",
"httpx<1.0,>=0.27",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"flake8>=6.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"isort>=5.0; extra == \"dev\"",
"twine>=4.0; extra == \"dev\"",
"build>=0.10; extra == \"dev\"",
"pytest>=7.0; extra == \"test\"",
"pytest-cov>=4.0; extra == \"test\"",
"pytest-asyncio>=0.21; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/intelicity/gates-python-sdk",
"Repository, https://github.com/intelicity/gates-python-sdk",
"Documentation, https://github.com/intelicity/gates-python-sdk#readme",
"Bug Reports, https://github.com/intelicity/gates-python-sdk/issues",
"Source Code, https://github.com/intelicity/gates-python-sdk"
] | twine/6.2.0 CPython/3.10.5 | 2026-02-21T01:46:10.554861 | gates_sdk-0.1.4.tar.gz | 16,581 | 52/48/158653a4d35683d7a177a5d0cbcadcded02b1bb313901b461d5fef63ca84/gates_sdk-0.1.4.tar.gz | source | sdist | null | false | 5ae63b489f6b176405fb3484a9b987cb | 89d1e0471dfb5b84fee524b5a76b1813e31c7c18cf5439b79dbb388565d0ebde | 5248158653a4d35683d7a177a5d0cbcadcded02b1bb313901b461d5fef63ca84 | null | [
"LICENSE"
] | 218 |
2.4 | avakill | 0.4.0 | Open-source safety firewall for AI agents. Intercept tool calls. Enforce policies. Kill dangerous operations. | <div align="center">
# :shield: AvaKill
### Open-source safety firewall for AI coding agents
[](https://pypi.org/project/avakill/)
[](https://pypi.org/project/avakill/)
[](LICENSE)
[](https://github.com/log-bell/avakill/actions)


[](https://github.com/log-bell/avakill)
**Stop your AI agents from deleting your database, wiping your files, or going rogue.**
<!-- TODO: Add terminal GIF here showing attack interception -->
> :tv: **[Watch the demo →](#demo)** See AvaKill block a live red team attack in real-time.
```bash
pipx install avakill # Recommended
# or: pip install avakill
```
[Quickstart](#quickstart) · [Tutorial](docs/tutorial-you-couldve-invented-avakill.md) · [Integrations](#framework-integrations) · [Policy Reference](#policy-configuration) · [Security](docs/security-hardening.md) · [Deployment](docs/deployment.md) · [CLI](docs/cli-reference.md) · [Cookbook](docs/cookbook.md) · [API](docs/api-reference.md) · [Contributing](CONTRIBUTING.md)
</div>
---
## The Problem
AI agents are shipping to production with **zero safety controls** on their tool calls. The results are predictable:
- **Replit's agent** dropped a production database and fabricated 4,000 fake user accounts to cover it up.
- **Google's Gemini CLI** wiped a user's entire D: drive — 8,000+ files, gone.
- **Amazon Q** terminated EC2 instances and deleted infrastructure during a debugging session.
These aren't edge cases. Research shows AI agents fail in **75% of real-world tasks**, and when they fail, they fail catastrophically — because nothing sits between the agent and its tools.
**AvaKill is that missing layer.** A firewall that intercepts every tool call, evaluates it against your safety policies, and kills dangerous operations before they execute. No ML models, no API calls, no latency — just fast, deterministic policy checks in <1ms.
## Quickstart
```bash
pip install avakill
avakill init
```
> **Tip:** On macOS, use `pipx install avakill` or create a virtualenv first (`python3 -m venv .venv && source .venv/bin/activate`). System Python on macOS 14+ blocks global pip installs.
```python
from avakill import Guard, protect
guard = Guard()  # Auto-discovers avakill.yaml created by `avakill init`

@protect(guard=guard)
def search_users(query: str) -> str:
    return db.execute("SELECT * FROM users WHERE name LIKE ?", (f"%{query}%",))

@protect(guard=guard)
def delete_user(user_id: str):
    db.execute("DELETE FROM users WHERE id = ?", (user_id,))

search_users(query="active")  # ✅ Allowed — read operations pass through
delete_user(user_id="123")    # ❌ Blocked — PolicyViolation raised
```
Two commands, four lines of code. Safe calls pass through, destructive calls are killed before they execute.
## Features
<table>
<tr>
<td width="50%">
:lock: **Tool-Call Interception**<br>
Block destructive operations before they execute. Works at the function level — no prompt engineering required.
</td>
<td width="50%">
:clipboard: **YAML Policies**<br>
Simple, readable safety rules anyone can write. Glob patterns, argument matching, rate limiting — all in a single file.
</td>
</tr>
<tr>
<td>
:electric_plug: **Framework Agnostic**<br>
Drop-in support for OpenAI, Anthropic, LangChain, LangGraph, MCP, and any Python function via decorator.
</td>
<td>
:bar_chart: **Audit Trail**<br>
Complete SQLite log of every tool call and decision. Query with the CLI, export as JSON, or connect to your observability stack.
</td>
</tr>
<tr>
<td>
:zap: **Zero Overhead**<br>
<1ms per evaluation. No ML models, no external API calls. Pure in-process policy checks with thread-safe rate limiting.
</td>
<td>
:desktop_computer: **Live Dashboard**<br>
Real-time Rich terminal UI. Watch tool calls flow through, see what's blocked, track denial rates — all from your terminal.
</td>
</tr>
<tr>
<td>
:link: **MCP Proxy**<br>
Drop-in transparent proxy for any MCP server. One config change in Claude Desktop and every tool call is protected.
</td>
<td>
:arrows_counterclockwise: **Hot Reload**<br>
Update policies without restarting your agents. Call `guard.reload_policy()` or let the CLI handle it.
</td>
</tr>
<tr>
<td>
:satellite: **Native Agent Hooks**<br>
Drop-in hooks for Claude Code, Gemini CLI, Cursor, and Windsurf. One command to install — no code changes to your agent.
</td>
<td>
:gear: **Persistent Daemon**<br>
Unix socket server with <5ms evaluation. Start once, protect every agent on your machine. SIGHUP to reload policies.
</td>
</tr>
<tr>
<td>
:shield: **OS-Level Enforcement**<br>
Landlock (Linux), sandbox-exec (macOS), and Tetragon (Kubernetes). Kernel-level restrictions that even root can't bypass.
</td>
<td>
:scroll: **Compliance Reporting**<br>
Automated assessments for SOC 2, NIST AI RMF, EU AI Act, and ISO 42001. Generate reports in table, JSON, or Markdown format.
</td>
</tr>
</table>
## Why AvaKill?
| | No Protection | Prompt Guardrails | **AvaKill** |
|---|:---:|:---:|:---:|
| Stops destructive tool calls | :x: | :x: | :white_check_mark: |
| Works across all frameworks | — | Partial | :white_check_mark: |
| Deterministic (no LLM needed) | — | :x: | :white_check_mark: |
| YAML-based policies | — | :x: | :white_check_mark: |
| Full audit trail | :x: | :x: | :white_check_mark: |
| MCP server support | — | :x: | :white_check_mark: |
| <1ms overhead | — | :x: (LLM round-trip) | :white_check_mark: |
| Native agent hooks (no code changes) | — | :x: | :white_check_mark: |
| OS-level kernel enforcement | — | :x: | :white_check_mark: |
| Open source | — | Some | :white_check_mark: AGPL 3.0 |
## Framework Integrations
### Native Agent Hooks
Protect AI coding agents with zero code changes — just install the hook:
```bash
avakill daemon start --policy avakill.yaml
avakill hook install --agent claude-code # or gemini-cli, cursor, windsurf, all
avakill hook list
```
AvaKill intercepts every tool call at the agent level. Policies use canonical tool names (`shell_execute`, `file_write`, `file_read`) so one policy works across all agents.
> See [`docs/framework-integrations.md`](docs/framework-integrations.md#native-agent-hooks) for per-agent details and the full tool normalization table.
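The idea behind tool-name normalization can be sketched as a simple lookup (the agent-specific names below are illustrative assumptions, not AvaKill's actual mapping table):

```python
# Hypothetical agent-specific -> canonical tool-name map (illustrative only).
CANONICAL = {
    ("claude-code", "Bash"): "shell_execute",
    ("claude-code", "Write"): "file_write",
    ("gemini-cli", "run_shell_command"): "shell_execute",
}

def normalize(agent: str, tool: str) -> str:
    """Fall back to the raw name when no mapping exists."""
    return CANONICAL.get((agent, tool), tool)

print(normalize("claude-code", "Bash"))    # canonical name
print(normalize("cursor", "custom_tool"))  # unchanged
```

Because policies match on the canonical names, one `avakill.yaml` can cover every agent.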
### OpenAI
```python
from openai import OpenAI
from avakill.interceptors.openai_wrapper import GuardedOpenAIClient
client = GuardedOpenAIClient(OpenAI(), policy="avakill.yaml")
response = client.chat.completions.create(model="gpt-4o", tools=[...], messages=[...])
# Denied tool_calls are automatically removed from the response
# All decisions available at: response.avakill_decisions
```
### Anthropic
```python
from anthropic import Anthropic
from avakill.interceptors.anthropic_wrapper import GuardedAnthropicClient
client = GuardedAnthropicClient(Anthropic(), policy="avakill.yaml")
response = client.messages.create(model="claude-sonnet-4-5-20250514", tools=[...], messages=[...])
# Denied tool_use blocks are removed from response.content
```
### LangChain / LangGraph
```python
from avakill.interceptors.langchain_handler import AvaKillCallbackHandler
handler = AvaKillCallbackHandler(policy="avakill.yaml")
agent.invoke({"input": "..."}, config={"callbacks": [handler]})
# Raises PolicyViolation before the tool executes
```
### MCP Proxy (Claude Desktop, Cursor, etc.)
One config change — no code modifications to the MCP server:
```jsonc
// claude_desktop_config.json
{
  "mcpServers": {
    "database": {
      "command": "avakill",
      "args": [
        "mcp-proxy",
        "--upstream-cmd", "python",
        "--upstream-args", "db_server.py",
        "--policy", "avakill.yaml"
      ]
    }
  }
}
```
### Decorator (any Python function)
```python
from avakill import Guard, protect
guard = Guard(policy="avakill.yaml")
@protect(guard=guard, on_deny="return_none")  # or "raise" (default), "callback"
def execute_sql(query: str) -> str:
    return db.execute(query)
```
> See [`examples/`](examples/) for complete runnable demos of every integration.
## Policy Configuration
Policies are YAML files. Rules are evaluated top-to-bottom — first match wins.
```yaml
version: "1.0"
default_action: deny  # Block everything not explicitly allowed

policies:
  # Allow read operations
  - name: "allow-reads"
    tools: ["search_*", "*_query", "*_get", "*_list"]
    action: allow

  # Block destructive SQL
  - name: "block-destructive-sql"
    tools: ["execute_sql", "database_*"]
    action: deny
    conditions:
      args_match:
        query: ["DROP", "DELETE", "TRUNCATE", "ALTER"]
    message: "Destructive SQL blocked. Use a manual migration."

  # Allow safe SQL (SELECT, INSERT, UPDATE)
  - name: "allow-safe-sql"
    tools: ["execute_sql", "database_*"]
    action: allow

  # Block dangerous shell commands
  - name: "block-dangerous-shells"
    tools: ["shell_execute", "run_command"]
    action: deny
    conditions:
      args_match:
        command: ["rm -rf", "sudo", "chmod 777", "> /dev/"]
    message: "Dangerous shell command blocked."

  # Allow safe shell commands
  - name: "allow-safe-shells"
    tools: ["shell_execute", "run_command"]
    action: allow

  # Rate limit API calls
  - name: "rate-limit-search"
    tools: ["web_search"]
    action: allow
    rate_limit:
      max_calls: 10
      window: "60s"

  # Block all destructive operations by pattern
  - name: "block-destructive"
    tools: ["delete_*", "remove_*", "destroy_*", "drop_*"]
    action: deny
    message: "Destructive operations require manual execution."
```
**Policy features:**
- **Glob patterns** — `*`, `delete_*`, `*_execute` match tool names
- **Argument matching** — `args_match` / `args_not_match` inspect arguments (case-insensitive substring)
- **Rate limiting** — sliding window (`10s`, `5m`, `1h`)
- **Environment variables** — `${VAR_NAME}` substitution in YAML
- **First-match-wins** — order matters, put specific rules before general ones
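First-match-wins evaluation over glob patterns can be sketched in a few lines (a simplified model, not AvaKill's engine; `args_match` and rate limiting are omitted):

```python
from fnmatch import fnmatch

def evaluate(tool: str, rules: list[dict], default: str = "deny") -> str:
    """Return the action of the first rule whose glob matches the tool name."""
    for rule in rules:
        if any(fnmatch(tool, pattern) for pattern in rule["tools"]):
            return rule["action"]
    return default

rules = [
    {"tools": ["search_*", "*_get"], "action": "allow"},
    {"tools": ["delete_*"], "action": "deny"},
]
print(evaluate("search_users", rules))  # allow
print(evaluate("delete_user", rules))   # deny
print(evaluate("unknown_tool", rules))  # deny (default)
```

This is why ordering matters: moving the `delete_*` rule above a broad `allow` changes the outcome.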
> Full reference: [`docs/policy-reference.md`](docs/policy-reference.md)
## Generate Policies with Any LLM
Don't want to write YAML by hand? Use any LLM to generate policies for you:
```bash
# Generate a prompt, paste it into ChatGPT/Claude/Gemini, describe your agent
avakill schema --format=prompt
# Include your actual tool names for a tailored prompt
avakill schema --format=prompt --tools="execute_sql,shell_exec,file_write" --use-case="data pipeline"
# Validate the LLM's output
avakill validate generated-policy.yaml
```
Or use the JSON Schema directly with structured output APIs:
```python
from avakill import get_json_schema, generate_prompt
schema = get_json_schema() # For structured output / validation
prompt = generate_prompt() # Self-contained LLM prompt
```
> See [`docs/llm-policy-prompt.md`](docs/llm-policy-prompt.md) for a paste-ready prompt.
## CLI
```bash
# Initialize a new policy file (auto-detects your framework)
avakill init
# Validate your policy file
avakill validate avakill.yaml
# Launch the real-time terminal dashboard
avakill dashboard
# Query audit logs
avakill logs --denied-only --since 1h
avakill logs --tool "execute_sql" --json
avakill logs tail # Follow in real-time
# Start the MCP proxy
avakill mcp-proxy --upstream-cmd python --upstream-args server.py --policy avakill.yaml
# Export JSON Schema for the policy format
avakill schema
# Generate an LLM prompt for policy creation
avakill schema --format=prompt
avakill schema --format=prompt --tools="file_read,shell_exec" --use-case="code assistant"
# Start the persistent daemon
avakill daemon start --policy avakill.yaml
# Evaluate a tool call via the daemon
echo '{"tool": "shell_execute", "args": {"command": "rm -rf /"}}' | avakill evaluate --agent cli
# Install hooks for all detected agents
avakill hook install --agent all
avakill hook list
# Generate OS-level enforcement
avakill enforce landlock --policy avakill.yaml --dry-run
avakill enforce sandbox --policy avakill.yaml --output avakill.sb
# Run compliance assessment
avakill compliance report --framework soc2 --policy avakill.yaml
avakill compliance gaps --policy avakill.yaml
# Manage approval workflows
avakill approvals list
avakill approvals grant REQUEST_ID
```
## Dashboard
The dashboard shows:
- **Safety overview** — total, allowed, denied, and pending counts with percentages
- **Live tool calls** — real-time stream with tool name, action, policy, and argument previews
- **Top denied tools** — bar chart of the most frequently blocked tools
- **Keyboard shortcuts** — `q` quit, `r` reload policy, `c` clear
## Architecture
```
┌─────────────────┐     ┌────────────────────────────────────────────┐     ┌──────────┐
│    AI Agent     │     │                  AvaKill                   │     │   Tool   │
│  (Claude Code,  │────>│  Native Hook ──> Daemon ──> Policy ──> Log │────>│          │
│   Gemini CLI,   │     │                     │                      │     └──────────┘
│  Cursor, etc.)  │     │                ┌────┴────┐                 │
└─────────────────┘     │              Allow     Deny ──> Audit Log  │
                        │                │         │                 │
                        │           Forward to   Block &             │
                        │              Tool    Return Error          │
                        │                                            │
                        │  ┌─ OS Enforcement ───────────────────────┐│
                        │  │   Landlock · sandbox-exec · Tetragon   ││
                        │  └────────────────────────────────────────┘│
                        └────────────────────────────────────────────┘
```
AvaKill protects your agents at multiple levels: **native hooks** intercept tool calls at the agent level, a **persistent daemon** provides sub-5ms evaluation over a Unix socket, **policy rules** enforce first-match-wins logic with glob patterns and rate limiting, and **OS-level enforcement** (Landlock, sandbox-exec, Tetragon) provides kernel-level restrictions.
**Core components:**
- **`Guard`** — the main entry point. Wraps a `PolicyEngine`, records audit events.
- **`PolicyEngine`** — parses YAML, evaluates tool calls against rules with first-match-wins logic.
- **Interceptors** — framework-specific wrappers (OpenAI, Anthropic, LangChain, decorator).
- **MCP Proxy** — transparent stdio proxy that sits in front of any MCP server.
- **Audit Logger** — async SQLite logger with batched writes and WAL mode.
- **Event Bus** — in-process pub/sub for real-time dashboard and monitoring.
- **`DaemonServer`** — persistent Unix socket server for <5ms evaluation without in-process integration.
- **`Hook Adapters`** — native integrations for Claude Code, Gemini CLI, Cursor, and Windsurf.
- **`ToolNormalizer`** — translates agent-specific tool names to canonical names for universal policies.
- **`PolicyCascade`** — discovers and merges policies from system, global, project, and local levels.
- **Enforcement Backends** — Landlock (Linux), sandbox-exec (macOS), and Tetragon (Kubernetes) for OS-level restrictions.
- **`ComplianceAssessor`** — automated compliance checks for SOC 2, NIST AI RMF, EU AI Act, and ISO 42001.
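The first-match-wins evaluation at the heart of the policy engine can be sketched in a few lines. This is a simplified illustration, not AvaKill's actual `PolicyEngine`; the rule tuples and action names here are assumptions for demonstration only:

```python
from fnmatch import fnmatch

# Hypothetical rule format: (tool_glob, action). Rules are scanned in order
# and the first matching glob wins; no match means fail closed (deny).
RULES = [
    ("shell.rm*", "deny"),   # block destructive shell tools
    ("db.drop_*", "deny"),   # block schema-destroying calls
    ("*", "allow"),          # default: allow everything else
]

def evaluate(tool_name: str, rules=RULES) -> str:
    """Return the action of the first rule whose glob matches the tool name."""
    for pattern, action in rules:
        if fnmatch(tool_name, pattern):
            return action
    return "deny"  # fail closed when nothing matches

print(evaluate("db.drop_table"))  # deny
print(evaluate("fs.read_file"))   # allow
```

Because evaluation is order-sensitive, placing the catch-all `"*"` rule last is what makes the earlier deny rules effective.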
## Roadmap
### Stable
Core features with extensive test coverage, ready for production use.
- [x] Core policy engine with glob patterns, argument matching, rate limiting
- [x] OpenAI, Anthropic, LangChain/LangGraph interceptors
- [x] `@protect` decorator for any Python function
- [x] MCP transparent proxy (stdio transport)
- [x] SQLite audit logging with async batched writes
- [x] Rich terminal dashboard with live event stream
- [x] CLI: `init`, `validate`, `logs`, `dashboard`, `mcp-proxy`, `schema`
- [x] Hot-reload with file watcher
- [x] Persistent daemon with Unix socket (<5ms evaluation)
- [x] Agent hook adapters (Claude Code, Gemini CLI, Cursor, Windsurf)
- [x] Tool name normalization across agents
- [x] Multi-level policy cascade (system/global/project/local)
### Shipped (beta)
Implemented and tested. Maturing toward stable in upcoming releases.
- [x] Policy signing (HMAC-SHA256 + Ed25519)
- [x] Self-protection (hardcoded anti-tampering rules)
- [x] OS-level hardening (chattr/schg, SELinux, AppArmor, seccomp)
- [x] C-level audit hooks (optional hardened extension)
- [x] OpenTelemetry + Prometheus observability
- [x] Enforcement levels (hard/soft/advisory)
- [x] OS-level enforcement (Landlock, macOS sandbox-exec, Tetragon)
- [x] Compliance reports (SOC 2, NIST AI RMF, EU AI Act, ISO 42001)
- [x] Human-in-the-loop approval workflows
- [x] Audit analytics engine
### Planned
- [ ] Web dashboard (Next.js)
- [ ] Slack / webhook / PagerDuty notifications
- [ ] MCP HTTP transport proxy (Streamable HTTP)
- [ ] CrewAI / AutoGen / custom framework interceptors
## Contributing
We welcome contributions! AvaKill is early-stage and there's a lot to build.
```bash
git clone https://github.com/log-bell/avakill.git
cd avakill
make dev # Install in dev mode with all dependencies
make test # Run the test suite
```
See [**CONTRIBUTING.md**](CONTRIBUTING.md) for the full guide — architecture overview, code style, and PR process.
## License
[AGPL-3.0](LICENSE) — free to use, modify, and distribute. If you run a modified version of AvaKill as a network service, you must make that modified source available under the same license. See [LICENSE](LICENSE) for details.
---
<div align="center">
*She doesn't guard. She kills.*
**If AvaKill would have saved you from an AI agent disaster, [give it a star](https://github.com/log-bell/avakill).**
Built because an AI agent tried to `DROP TABLE users` on a Friday afternoon.
</div>
| text/markdown | Copyright 2026 AvaKill Contributors | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development",
"Topic :: Security"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml>=6.0",
"pydantic>=2.0",
"rich>=13.0",
"click>=8.0",
"aiosqlite>=0.20.0",
"pyfiglet>=1.0",
"openai>=1.0; extra == \"openai\"",
"anthropic>=0.40; extra == \"anthropic\"",
"langchain-core>=0.3; extra == \"langchain\"",
"mcp>=1.0; extra == \"mcp\"",
"aiohttp>=3.9; extra == \"mcp-http\"",
"PyNaCl>=1.5.0; extra == \"signed-policies\"",
"opentelemetry-api<2.0,>=1.0; extra == \"otel\"",
"prometheus-client>=0.20.0; extra == \"metrics\"",
"watchfiles>=1.0.0; extra == \"watch\"",
"avakill[openai]; extra == \"all\"",
"avakill[anthropic]; extra == \"all\"",
"avakill[langchain]; extra == \"all\"",
"avakill[mcp]; extra == \"all\"",
"avakill[signed-policies]; extra == \"all\"",
"avakill[hardened]; extra == \"all\"",
"avakill[otel]; extra == \"all\"",
"avakill[metrics]; extra == \"all\"",
"avakill[watch]; extra == \"all\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"types-PyYAML; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"opentelemetry-api<2.0,>=1.0; extra == \"dev\"",
"opentelemetry-sdk<2.0,>=1.0; extra == \"dev\"",
"prometheus-client>=0.20.0; extra == \"dev\"",
"watchfiles>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/log-bell/avakill",
"Repository, https://github.com/log-bell/avakill",
"Issues, https://github.com/log-bell/avakill/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:45:52.018566 | avakill-0.4.0.tar.gz | 286,474 | d4/d8/98862f86e9915cd1955cbc131f1b10f1c12bb4116990ad7526d8a1dd862c/avakill-0.4.0.tar.gz | source | sdist | null | false | 3ecbbe9d562da88e754f22ceeb6baf98 | 1a967e1b7d113dd0be230d3b147430632ea6dae44bd2708874779ba2e6cd01c4 | d4d898862f86e9915cd1955cbc131f1b10f1c12bb4116990ad7526d8a1dd862c | AGPL-3.0-only | [
"LICENSE"
] | 211 |
2.4 | stillwater | 1.4.0 | The Linux of AI — compiler-grade intelligence for everyone. Skills, recipes, and verified swarm workflows. | # Stillwater OS - LLM Kung Fu Dojo (aka steroids for AI)
[](https://pypi.org/project/stillwater/)
[](https://github.com/phuctruong/stillwater/actions/workflows/ci.yml)
[](LICENSE)
[](https://pypi.org/project/stillwater/)
## Why Stillwater? (3 Things Not Found Anywhere Else)
**1. Verification Ladder (rungs 641 / 274177 / 65537)**
The only skill system with machine-checkable correctness grades. Each rung has a precise definition and required artifacts — you do not claim a rung, you earn it with evidence. Rung 641 = local correctness (red/green gate + no regressions). Rung 274177 = stability (seed sweep + replay). Rung 65537 = promotion (adversarial sweeps + security gate + behavioral hash drift explained).
**2. Lane Algebra A / B / C**
Every claim in every skill is typed. Lane A = witnessed fact (tool output, repo line, test result). Lane B = engineering judgment (explicit tradeoff). Lane C = heuristic or forecast (guidance only, never sufficient for PASS). Cross-lane upgrade — treating a heuristic as a fact — is a forbidden state. No other framework makes this distinction explicit and machine-enforceable.
**3. Fail-Closed FSM**
Skills are closed state machines with an explicit alphabet of forbidden states (SILENT_RELAXATION, UNWITNESSED_PASS, NONDETERMINISTIC_OUTPUT, NULL_ZERO_COERCION, and more). If a forbidden state is entered, the runtime blocks — it does not guess, soften, or hallucinate past the gap. This is structural, not vibes.
## Quickstart (30 seconds)
```bash
git clone https://github.com/phuctruong/stillwater && cd stillwater
pip install -e ".[dev]"
stillwater run "What is Software 5.0?" --dry-run # no API key needed
```
→ New? Start with [Hello Stillwater](docs/hello-stillwater.md) — a 5-minute guided tutorial.
→ See [QUICKSTART.md](QUICKSTART.md) for the full 5-minute guide.
→ Browse skills interactively: [docs/skills-browser.html](docs/skills-browser.html) — filterable, searchable skill + swarm + recipe catalog.
→ Read the manifestos: [SOFTWARE-5.0-PARADIGM.md](SOFTWARE-5.0-PARADIGM.md) · [AI-UPLIFT.md](AI-UPLIFT.md)
---
> "Be water, my friend." -- Bruce Lee
> "Absorb what is useful, discard what is useless, add what is essentially your own."
Born from a boat, forged at Harvard, battle-tested in startups, now open-sourced for the world. Stillwater is an AI verification framework disguised as a martial arts tower. Or maybe it is a martial arts tower disguised as an AI verification framework.
Either way, you will leave with receipts.
---
## The Story Behind the Tower
Phuc Vinh Truong was born in Vietnam in 1976. At four years old, he and his family escaped by boat -- surviving the ocean, surviving the unknown. They arrived in America with almost nothing except stubborn hope and the kind of love that does not negotiate.
Against the odds: Harvard. Tech CEO. Built and sold a startup. And then the question every builder eventually faces: **hoard the fire, or share it?**
Phuc chose fire.
This repo is his red envelope to the world. Not money, but **possibility**. Read the full story in [`MESSAGE-TO-HUMANITY.md`](MESSAGE-TO-HUMANITY.md).
---
## What Is This? (The 30-Second Version)
This repo is documentation-first and runnable:
- **papers:** `papers/00-index.md` -- the theory, with receipts
- **notebooks:** runnable demos (offline by default) -- the practice
- **skills:** prompt-loadable packs for coding, math, safety, orchestration -- the technique
- **core/** -- always-on copies of the 4 foundational skills (phuc-context, phuc-forecast, prime-coder, prime-safety); canonical baselines for divergence detection
- **community/** -- onboarding guides, authoring guides, scoring rubric, and swarm design docs for contributors; see `community/GETTING-STARTED.md`
- **MANIFEST.json** -- machine-parseable index of all skills, recipes, papers, core skills, and swarm types with sha256 checksums; see schema in root
Think of it as Bruce Lee's Jeet Kune Do for AI agents: strip away everything that does not work, keep everything that does, and prove it with artifacts a skeptic can replay.
## What This Is (and Is Not)
Stillwater OS is:
- a skills + orchestration + verification layer for LLM work
- a way to improve reliability, safety, and coding quality with explicit gates
- usable standalone or alongside existing agent clients
Stillwater OS is not:
- a "replace everything" agent platform
- positioned as an OpenClaw competitor
Practical framing:
- If you already use OpenClaw, keep it.
- Load Stillwater skills/process on top to improve outcomes.
> Bruce Lee framing: different schools can coexist; what matters is what works in sparring.
## Quick FAQ
Q: Is this an OpenClaw alternative?
A: Not the primary positioning. This repo is the upgrade layer (skills + orchestration + verification), and can be used with OpenClaw or other model/client stacks.
Q: What's the fastest way to see value?
A: Run a controlled A/B test with and without `skills/prime-coder.md` on the same coding tasks.
Q: Are performance claims guaranteed?
A: No. Treat strong claims as hypotheses until reproduced in your own environment with artifacts.
```mermaid
flowchart TB
H["MESSAGE-TO-HUMANITY.md"] --> IDX["papers/00-index.md"]
IDX --> P["papers/*.md"]
IDX --> N["Root notebooks (*.ipynb)"]
IDX --> S["skills/*.md"]
N --> A["Artifacts: outputs cached in notebooks"]
S --> A
```
---
## The Game of Death Tower: 5 Floors, 10 Dragons
Like Bruce Lee's unfinished masterpiece, Stillwater is a tower. Each floor has a guardian. Each guardian represents a real failure mode in AI. You do not get to skip floors.
Read the full tower challenge in [`AGI-SECRET-SAUCE.md`](AGI-SECRET-SAUCE.md).
```mermaid
graph BT
START["START<br/>Enter the Tower"] --> F1["FLOOR 1: HONESTY<br/>Don't think. VERIFY.<br/>Hallucination / Truth"]
F1 --> F2["FLOOR 2: FOUNDATION<br/>Let the machine count.<br/>Context + Memory"]
F2 --> F3["FLOOR 3: PROVING<br/>Show me the proof.<br/>Reasoning + Verification"]
F3 --> F4["FLOOR 4: PRECISION<br/>Test. Retest. Decompose.<br/>Patches + Generalization"]
F4 --> F5["FLOOR 5: MASTERY<br/>Be like water. Build like steel.<br/>Alignment + Security"]
style START fill:#ff6b6b,color:#fff
style F1 fill:#ffe66d,color:#333
style F2 fill:#95e1d3,color:#333
style F3 fill:#a8e6cf,color:#333
style F4 fill:#c7ceea,color:#333
style F5 fill:#ffd3b6,color:#333
```
---
## The 5 Dojo Levels (Your Training Path)
Every agent starts as autocomplete and ends as a martial artist. The path is described in [`MESSAGE-TO-LLMS.md`](MESSAGE-TO-LLMS.md).
| Level | Belt | Time | What You Prove |
|-------|------|------|----------------|
| 1 | White Belt | 60 seconds | You can run the system. Tests exit 0. |
| 2 | Yellow Belt | 5 minutes | You faced one dragon and lived. |
| 3 | Green Belt | 20 minutes | You read the protocols. You embody them. |
| 4 | Brown Belt | 2-3 hours | You faced all 10 dragons. |
| 5 | Black Belt | Forever | You no longer use Stillwater. You *are* Stillwater. |
> "I fear not the man who practices 10,000 techniques once, but the man who practices one technique 10,000 times." -- Bruce Lee
Your one technique is **verification**. Master it.
---
## Start Here (Prime Path)
1. Read [`MESSAGE-TO-HUMANITY.md`](MESSAGE-TO-HUMANITY.md) (why this exists).
2. Read [`MESSAGE-TO-LLMS.md`](MESSAGE-TO-LLMS.md) (the dojo challenge for agents).
3. Read [`SOFTWARE-5.0-PARADIGM.md`](SOFTWARE-5.0-PARADIGM.md) (the paradigm — AI Kung Fu manifesto).
4. Read [`AI-UPLIFT.md`](AI-UPLIFT.md) (how to measure and achieve AI uplift).
5. Run `PHUC-ORCHESTRATION-SECRET-SAUCE.ipynb` (how the orchestration works).
6. Skim `papers/00-index.md` (map of concepts and what is verifiable here).
7. Browse skills at [`docs/skills-browser.html`](docs/skills-browser.html) — search/filter 37+ skills, swarms, recipes.
8. For upgrading an existing CLI/agent stack, use [`STILLWATER-OS-UPGRADE-GUIDE.md`](STILLWATER-OS-UPGRADE-GUIDE.md).
9. Read case studies (real project usage): `case-studies/`
## A/B Test First (10-Minute Protocol)
Use your current model/client stack and run the same small coding task twice.
1. Baseline run:
- no Stillwater skills injected
- save output, token/cost/time, and test results
2. Skill run:
- inject `skills/prime-coder.md` (optionally + `skills/prime-safety.md`)
- run the same task with the same acceptance tests
3. Compare:
- pass rate
- iterations to green
- defects/regressions
- total tokens/cost
If the second run is better on your metrics, expand to the notebook workflows.
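A minimal way to tabulate the comparison, assuming you recorded the same metrics for both runs (the metric names and values below are placeholders, not real benchmark numbers):

```python
def compare(baseline: dict, skill_run: dict) -> dict:
    """Per-metric delta between a baseline run and a skill-injected run."""
    return {k: skill_run[k] - baseline[k] for k in baseline}

# Illustrative numbers only -- substitute your own measurements.
baseline  = {"tasks_passed": 12, "iterations_to_green": 5, "total_tokens": 48_000}
skill_run = {"tasks_passed": 17, "iterations_to_green": 2, "total_tokens": 39_000}

for metric, delta in compare(baseline, skill_run).items():
    print(f"{metric}: {delta:+}")
```

Positive deltas are good for pass counts; negative deltas are good for iterations and tokens.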
---
## What To Run
Notebooks (portable demo mode runs offline by default):
| Notebook | Dragon It Fights | What It Proves |
|----------|-----------------|----------------|
| `HOW-TO-CRUSH-OOLONG-BENCHMARK.ipynb` | Counting Dragon | CPU + LLM beats pure LLM (99.3% vs 40%) |
| `HOW-TO-CRUSH-MATH-OLYMPIAD.ipynb` | Reasoning Dragon | Witness-first reasoning with checkable steps |
| `HOW-TO-CRUSH-SWE-BENCHMARK.ipynb` | Patch Dragon | 500 real SWE-bench tests. RED/GREEN gate. Patches with receipts. |
| `PHUC-ORCHESTRATION-SECRET-SAUCE.ipynb` | All of them | The full orchestration: DREAM -> FORECAST -> DECIDE -> ACT -> VERIFY |
The SWE notebook deserves special mention: it runs against **500 real SWE-bench instances**, not toy examples. Every patch must pass through the RED/GREEN gate. No patch without a failing test first. No green without proof. This is Bruce Lee's "boards don't hit back" applied to software -- except here, the tests absolutely do hit back.
---
## Quick Start
```bash
python -m pip install -e ".[dev]"
```
Execute a notebook (writes outputs back into the notebook for peer review):
```bash
python -m nbconvert --execute --to notebook --inplace PHUC-ORCHESTRATION-SECRET-SAUCE.ipynb
```
Run the test suite:
```bash
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 pytest -q
```
Run the skills A/B/AB/ABC receipts harness (offline deterministic by default):
```bash
PYTHONPATH=cli/src stillwater skills-ab --backend mock --no-cache
```
Run transparent IMO CLI QA (tool-assisted vs pure-LLM lanes):
```bash
./cli/stillwater-cli.sh qa-imo
./cli/stillwater-cli.sh qa-imo-history
```
Run the dojo-themed admin web console:
```bash
bash admin/start-admin.sh
```
Generate (or check) the consolidated score README:
```bash
PYTHONPATH=cli/src stillwater gen-ai-steroids-readme --check
```
If that runs clean, you have something rare: a methodology you can argue with using artifacts, not faith.
---
## Phuc Swarms (DREAM -> VERIFY)
The water flows through five phases. Like a combination in martial arts: each move sets up the next, and the whole sequence is greater than its parts.
```mermaid
stateDiagram-v2
[*] --> DREAM
DREAM --> FORECAST
FORECAST --> DECIDE
DECIDE --> ACT
ACT --> VERIFY
VERIFY --> [*]
```
---
## Verification Ladder (Prime Rungs)
Three rungs. Three levels of proof. Like belt colors, you earn them -- you do not claim them.
```mermaid
flowchart LR
R641["641: Edge sanity<br/>(White belt of proof)"] --> R274177["274177: Stress / determinism<br/>(Brown belt of proof)"]
R274177 --> R65537["65537: Promotion gate<br/>(Black belt of proof)"]
```
- **Rung 641:** Local correctness. RED/GREEN gate passed. No regressions. Evidence complete.
- **Rung 274177:** Stability. Seed sweep (3+ seeds). Replay stability. Null edge cases.
- **Rung 65537:** Promotion. Adversarial sweeps. Security gate. Behavioral hash drift explained.
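The "earn, don't claim" rule can be expressed as a simple evidence check. The artifact names below paraphrase the bullets above and are illustrative, not Stillwater's exact schema:

```python
# Required evidence per rung, paraphrased from the ladder above.
RUNG_REQUIREMENTS = {
    641:    {"red_green_gate", "no_regressions"},
    274177: {"red_green_gate", "no_regressions", "seed_sweep", "replay_stable"},
    65537:  {"red_green_gate", "no_regressions", "seed_sweep", "replay_stable",
             "adversarial_sweep", "security_gate"},
}

def earned_rung(artifacts):
    """Return the highest rung whose full evidence set is present, else None."""
    earned = None
    for rung in (641, 274177, 65537):
        if RUNG_REQUIREMENTS[rung] <= artifacts:  # subset check
            earned = rung
    return earned

print(earned_rung({"red_green_gate", "no_regressions", "seed_sweep"}))  # 641
```

Note that a partial seed sweep without replay stability still only earns 641: missing artifacts never round up.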
---
## The 10 Dragons (Boss Fights)
| # | Dragon | Gate | Stillwater Mechanism |
|---|--------|------|---------------------|
| 1 | Hallucination | Lane Algebra | No evidence, no PASS. Period. |
| 2 | Counting | Counter Bypass | LLM classifies, CPU enumerates. |
| 3 | Context | Context Normal Form | Artifacts persist; narrative dies. |
| 4 | Reasoning | Witness-First Logic | Intermediates + falsifiers, not vibes. |
| 5 | Verification | Verification Ladder | Pick a rung, emit the right artifacts. |
| 6 | Patch Reliability | RED/GREEN Gate | Test must fail WITHOUT patch, pass WITH. |
| 7 | Generalization | Replay Stability | Seed sweep + behavioral hash. |
| 8 | Data Exhaustion | Software 5.0 | Recipes as the unit of progress. |
| 9 | Alignment | Fail-Closed Envelope | Network OFF. Background threads forbidden. |
| 10 | Security | Injection Firewall | Allowlists + bounded budgets + evidence gates. |
Full details with boss fight narratives: [`AGI-SECRET-SAUCE.md`](AGI-SECRET-SAUCE.md)
---
## Be Water, My Friend (Architecture Philosophy)
> "Empty your mind, be formless, shapeless -- like water. Now you put water in a cup, it becomes the cup; you put water in a bottle, it becomes the bottle. Water can flow or it can crash. Be water, my friend."
Stillwater's architecture follows this principle:
- **Formless input:** Any task request flows in. The FSM shapes the response.
- **Adaptive flow:** Profiles scale budgets without removing gates. Fast mode flows quickly; strict mode flows with full force. Same water, different vessel.
- **Crash when needed:** Fail-closed is not failure. It is the system saying "I will not pretend to know what I do not know." That is strength, not weakness.
```mermaid
flowchart TD
TASK["Task Request<br/>(The water enters)"] --> FSM["Closed State Machine<br/>(The vessel shapes it)"]
FSM --> |"Honest path"| PASS["EXIT_PASS<br/>(With receipts)"]
FSM --> |"Missing info"| NEED["EXIT_NEED_INFO<br/>(Ask, don't guess)"]
FSM --> |"Unsafe/unverifiable"| BLOCK["EXIT_BLOCKED<br/>(Fail closed)"]
```
---
## LLM Providers (Plug and Play)
Default provider is `claude-code` (local Claude Code Haiku wrapper). See `llm_config.yaml` to switch.
| Provider | Command | API Key? |
|----------|---------|----------|
| **Claude Code (default)** | `python3 cli/src/claude_code_wrapper.py --port 8080 &` | ANTHROPIC_API_KEY |
| **Ollama (local)** | `ollama serve` | None |
| **OpenAI** | Set `provider: "openai"` in config | OPENAI_API_KEY |
| **Anthropic Claude** | Set `provider: "claude"` in config | ANTHROPIC_API_KEY |
| **Google Gemini** | Set `provider: "gemini"` in config | GOOGLE_API_KEY |
| **OpenRouter** | Set `provider: "openrouter"` in config | OPENROUTER_API_KEY |
| **Together AI** | Set `provider: "togetherai"` in config | TOGETHER_API_KEY |
| **Offline (demo)** | Set `provider: "offline"` in config | None |
To start the default wrapper:
```bash
python3 cli/src/claude_code_wrapper.py --port 8080 &
```
```mermaid
sequenceDiagram
participant NB as Notebook
participant CFG as LLM Config (llm_config.yaml)
participant WR as Local Wrapper (optional)
participant CLI as Claude CLI (optional)
NB->>CFG: read provider
alt offline demo (default)
CFG-->>NB: provider=offline
NB-->>NB: deterministic demo path
else LLM-backed
CFG-->>NB: provider=claude-code / api
NB->>WR: POST /api/generate
WR->>CLI: invoke model
CLI-->>WR: text
WR-->>NB: response
end
```
---
## Notes On Claims
This repo tries to be conservative:
- if something is reproducible, it should be runnable here and linked
- if a number/percentage is not reproduced here, treat it as a hypothesis
As Bruce Lee would say: "Showing off is the fool's idea of glory." We would rather show less and prove more.
---
## Helper CLI
After install:
```bash
stillwater print
stillwater paths --json
stillwater llm status
stillwater llm probe-ollama
stillwater llm models
stillwater llm set-ollama --auto-url --activate
python -m stillwater print
```
CLI workspace and notebook track:
- `cli/README.md`
- `cli/notebooks/`
---
## Key Documents
| Document | What It Is |
|----------|-----------|
| [`MESSAGE-TO-HUMANITY.md`](MESSAGE-TO-HUMANITY.md) | Why this exists. The fire, the boat, the choice to share. |
| [`MESSAGE-TO-LLMS.md`](MESSAGE-TO-LLMS.md) | The dojo challenge for agents. 5 levels. 10 dragons. |
| [`AGI-SECRET-SAUCE.md`](AGI-SECRET-SAUCE.md) | The Game of Death tower. Full boss fights. |
| `CLAUDE.md` | The machine-parseable skill contract (Prime Coder Secret Sauce). |
| `papers/00-index.md` | Index of all papers with verification status. |
---
## Author
**Phuc Vinh Truong** — Coder, entrepreneur, theorist, writer.
| Link | URL |
|---|---|
| Personal site | https://www.phuc.net |
| LinkedIn | https://www.linkedin.com/in/phuc-vinh-truong-21844b317/ |
| IF Theory (physics) | https://github.com/phuctruong/if |
| PZIP (compression) | https://www.pzip.net |
| SolaceAGI (persistent AI) | https://www.solaceagi.com |
| Support this work | https://ko-fi.com/phucnet |
| Contact | phuc@phuc.net |
| GitHub | https://github.com/phuctruong |
*Building open, reproducible, verifiable AI infrastructure — "Linux of AGI."*
---
## Support the Work
Stillwater is open source because Phuc believes fire should be shared, not hoarded.
If this work helps you -- if it makes your agent more reliable, your patches more honest, your reasoning more checkable -- consider supporting continued development:
- **Personal site + books:** [https://www.phuc.net](https://www.phuc.net)
- **Tip jar:** [https://ko-fi.com/phucnet](https://ko-fi.com/phucnet)
- **The repo itself:** [https://github.com/phuctruong/stillwater](https://github.com/phuctruong/stillwater)
---
## Contributing
See [`CONTRIBUTING.md`](CONTRIBUTING.md).
> "A goal is not always meant to be reached; it often serves simply as something to aim at." -- Bruce Lee
---
*Endure, Excel, Evolve. Carpe Diem!*
-- Phuc Vinh Truong
| text/markdown | null | Phuc Vinh Truong <phuc@phuc.net> | null | null | MIT | ai, llm, skills, recipes, swarm, verification, software-5.0 | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Testing",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.28",
"tomli>=2.0; python_version < \"3.11\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/phuctruong/stillwater",
"Repository, https://github.com/phuctruong/stillwater",
"Documentation, https://github.com/phuctruong/stillwater/tree/main/docs",
"Bug Tracker, https://github.com/phuctruong/stillwater/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:45:20.064585 | stillwater-1.4.0.tar.gz | 119,991 | 73/cb/1c8cabd5f6a80268930b92bd2aa00e6f17209cb9c6ba3068f20d92544b1d/stillwater-1.4.0.tar.gz | source | sdist | null | false | c456ab25fa4a28023168a72c9b76fcea | 7fe6f0384fbaa603888c9d1ff9f0ffe8a9b51da9a237724e127571745d07d5e6 | 73cb1c8cabd5f6a80268930b92bd2aa00e6f17209cb9c6ba3068f20d92544b1d | null | [
"LICENSE"
] | 201 |
2.4 | flowcept | 0.10.1 | Capture and query workflow provenance data using data observability | <p align="center">
<picture>
<!-- Dark theme -->
<source srcset="./docs/img/flowcept-logo-dark.png" media="(prefers-color-scheme: dark)" />
<!-- Light theme -->
<source srcset="./docs/img/flowcept-logo.png" media="(prefers-color-scheme: light)" />
<!-- Fallback -->
<img src="./docs/img/flowcept-logo.png" alt="Flowcept Logo" width="200"/>
</picture>
</p>
<h3 align="center">Lightweight Distributed Workflow Provenance</h3>
---
Flowcept captures and queries workflow provenance at runtime with minimal code changes and low overhead. It unifies data from diverse tools and workflows across the Edge–Cloud–HPC continuum and provides ML-aware capture, MCP agent provenance, telemetry, extensible adapters, and flexible storage.
---
[](https://flowcept.readthedocs.io/)
[](https://workflowscommunity.slack.com/archives/C06L5GYJKQS)
[](https://github.com/ORNL/flowcept/actions/workflows/create-release-n-publish.yml)
[](https://pypi.org/project/flowcept)
[](https://github.com/ORNL/flowcept/actions/workflows/run-tests.yml)
[](https://github.com/ORNL/flowcept/actions/workflows/checks.yml)
[](LICENSE)
<h4 align="center">
<a href="https://flowcept.org">Website</a> •
<a href="https://flowcept.readthedocs.io/">Documentation</a> •
<a href="./docs/publications">Publications</a>
</h4>
---
# Quickstart
The easiest way to capture provenance from plain Python functions, with no external services needed:
1) Install and initialize settings
```shell
# Make sure you activate your Python environment (e.g., conda, venv) first
pip install flowcept
flowcept --init-settings
```
This generates a minimal settings file in `~/.flowcept/settings.yaml`.
2) Run the minimal example
Save the following script as `quickstart.py` and run `python quickstart.py`.
```python
"""
A minimal example of Flowcept's instrumentation using @decorators.
This example needs no DB, broker, or external service.
"""
import json
from flowcept import Flowcept, flowcept_task
from flowcept.instrumentation.flowcept_decorator import flowcept
@flowcept_task(output_names="o1")
def sum_one(i1):
return i1 + 1
@flowcept_task(output_names="o2")
def mult_two(o1):
return o1 * 2
@flowcept
def main():
n = 3
o1 = sum_one(n)
o2 = mult_two(o1)
print("Final output", o2)
if __name__ == "__main__":
main()
prov_messages = Flowcept.read_buffer_file()
assert len(prov_messages) == 2
print(json.dumps(prov_messages, indent=2))
```
This creates a provenance file in `flowcept_messages.jsonl`. In it, you will see two provenance messages, each related to an executed function.
```json
[
{
"activity_id": "sum_one",
"workflow_id": "fe546706-ef46-4482-8f70-3af664a7131b",
"campaign_id": "76088532-3bef-4343-831e-d8a5d9156174",
"used": {
"i1": 3
},
"started_at": 1757171258.637908,
"hostname": "my_laptop",
"task_id": "1757171258.637908",
"status": "FINISHED",
"ended_at": 1757171258.6379142,
"generated": {
"o1": 4
},
"type": "task"
},
{
"activity_id": "mult_two",
"workflow_id": "fe546706-ef46-4482-8f70-3af664a7131b",
"campaign_id": "76088532-3bef-4343-831e-d8a5d9156174",
"used": {
"o1": 4
},
"started_at": 1757171258.637933,
"hostname": "my_laptop",
"task_id": "1757171258.637933",
"status": "FINISHED",
"ended_at": 1757171258.6379352,
"generated": {
"o2": 8
},
"type": "task"
}
]
```
For online querying using databases, MCP agents and Grafana, telemetry, adapters (MLflow, Dask, TensorBoard), PyTorch and MCP instrumentation, HPC optimization or federated runs,
and more, see the [Jupyter Notebooks](notebooks), the [Examples directory](examples) and the [complete documentation](https://flowcept.readthedocs.io/).
To use the provenance agent with your favorite code assistant (for example, Codex or Claude), see the [Agents README](src/flowcept/agents/README.md).
For an end-to-end workflow developer tutorial (default user guide), start with [docs/README.md](docs/README.md).
## Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Installation](#installation)
- [Setup and the Settings File](#setup)
- [Running with Containers](#running-with-containers)
- [Examples](#examples)
- [Data Persistence](#data-persistence)
- [Performance Tuning](#performance-tuning-for-performance-evaluation)
- [AMD GPU Setup](#install-amd-gpu-lib)
- [Further Documentation](#documentation)
## Overview
Flowcept captures and queries workflow provenance at runtime with minimal code changes and low data capture overhead,
unifying data from diverse tools and workflows.
Designed for scenarios involving critical data from multiple, federated workflows in the Edge-Cloud-HPC continuum, Flowcept supports end-to-end monitoring, analysis, querying, and enhanced support for Machine Learning (ML) and for agentic workflows.
## Features
- Automatic workflow provenance capture with minimal intrusion
- Adapters for MLflow, Dask, TensorBoard; easy to add more
- Optional explicit instrumentation via decorators
- ML-aware capture, from workflow to epoch and layer granularity
- Agentic workflows: MCP agents-aware provenance capture
- Low overhead, suitable for HPC and highly distributed setups
- Telemetry capture for CPU, GPU, memory, linked to dataflow
- Pluggable MQ and storage backends (Redis, Kafka, MongoDB, LMDB)
- [W3C PROV](https://www.w3.org/TR/prov-overview/) adherence
Explore [Jupyter Notebooks](notebooks) and [Examples](examples) for usage.
## Installation
Flowcept can be installed in multiple ways, depending on your needs.
### 1. Default Installation
To install Flowcept with its basic dependencies from [PyPI](https://pypi.org/project/flowcept/), run:
```shell
pip install flowcept
```
This installs the minimal Flowcept package, **not** including MongoDB, Redis, MCP, or any adapter-specific dependencies.
### 2. Installing Specific Adapters and Additional Dependencies
Flowcept integrates with several tools and services, but you should **only install what you actually need**.
Good practice is to cherry-pick the extras relevant to your workflow instead of installing them all.
```shell
pip install flowcept[mongo] # MongoDB support
pip install flowcept[mlflow] # MLflow adapter
pip install flowcept[dask] # Dask adapter
pip install flowcept[tensorboard] # TensorBoard adapter
pip install flowcept[kafka] # Kafka message queue
pip install flowcept[nvidia] # NVIDIA GPU runtime capture
pip install flowcept[telemetry] # CPU/GPU/memory telemetry capture
pip install flowcept[lmdb] # LMDB lightweight database
pip install flowcept[mqtt] # MQTT support
pip install flowcept[llm_agent] # MCP agent, LangChain, Streamlit integration: needed either for MCP capture or for the Flowcept Agent.
pip install flowcept[llm_google] # Google GenAI + Flowcept agent support
pip install flowcept[analytics] # Extra analytics (seaborn, plotly, scipy)
pip install flowcept[dev] # Developer dependencies (docs, tests, lint, etc.)
```
### 3. Installing with Common Runtime Bundle
```shell
pip install flowcept[extras]
```
The `extras` group is a convenience shortcut that bundles the most common runtime dependencies.
It is intended for users who want a fairly complete, but not maximal, Flowcept environment.
You might choose `flowcept[extras]` if:
- You want Flowcept to run out-of-the-box with Redis, telemetry, and MongoDB.
- You prefer not to install each extra one by one
⚠️ If you only need one of these features, install it individually instead of `extras`.
### 4. Install All Optional Dependencies at Once
Flowcept provides a combined `all` extra, but installing everything into a single environment is not recommended for most users.
Many of these dependencies are unrelated and should not be mixed in the same runtime. This option is only intended for Flowcept developers who need to test across all adapters and integrations.
```
pip install flowcept[all]
```
### 5. Installing from Source
To install Flowcept from the source repository:
```
git clone https://github.com/ORNL/flowcept.git
cd flowcept
pip install .
```
You can then install specific dependencies similarly as above:
```
pip install .[optional_dependency_name]
```
This follows the same pattern as step 2, allowing for a customized installation from source.
## Setup
The [Quickstart](#quickstart) example works with just `pip install flowcept`; no extra setup is required.
For online queries or distributed capture, Flowcept relies on two optional components:
- **Message Queue (MQ)** — message broker / pub-sub / data stream
- **Database (DB)** — persistent storage for historical queries
---
#### Message Queue (MQ)
- Required for anything beyond Quickstart
- Flowcept publishes provenance data to the MQ during workflow runs
- Developers can subscribe with custom consumers (see [this example](examples/consumers/simple_consumer.py)).
- You can monitor or print messages in motion using `flowcept --stream-messages --print`.
Supported MQs:
- [Redis](https://redis.io) → **default**, lightweight, works on Linux, macOS, Windows, and HPC (tested on Frontier and Summit)
- [Kafka](https://kafka.apache.org) → for distributed environments or if Kafka is already in your stack
- [Mofka](https://mofka.readthedocs.io) → optimized for HPC runs
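The publish/subscribe flow above can be pictured with a stdlib-only toy — a producer thread standing in for Flowcept's capture layer, a consumer standing in for a custom subscriber. A real consumer would subscribe to Redis, Kafka, or Mofka, and the message fields here are invented for illustration:

```python
import json
import queue
import threading

mq: "queue.Queue[str]" = queue.Queue()

def producer() -> None:
    # Stand-in for Flowcept publishing task events to the MQ during a run.
    for i in range(3):
        mq.put(json.dumps({"task_id": i, "status": "FINISHED"}))
    mq.put("STOP")  # sentinel so the consumer knows the stream ended

def consumer(seen: list) -> None:
    # Stand-in for a custom subscriber reacting to provenance messages in motion.
    while (raw := mq.get()) != "STOP":
        seen.append(json.loads(raw))

events: list = []
t = threading.Thread(target=producer)
t.start()
consumer(events)
t.join()
print(len(events))  # 3 events consumed
```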
---
#### Database (DB)
- **Optional**, but required for:
- Persisting provenance beyond MQ memory/disk buffers
- Running complex analytical queries on historical data
Supported DBs:
- [MongoDB](https://www.mongodb.com) → default, efficient bulk writes + rich query support
- [LMDB](https://lmdb.readthedocs.io) → lightweight, no external service, basic query capabilities
---
### Notes
- Without a DB:
- Provenance remains in the MQ only (persistence not guaranteed)
- Complex historical queries are unavailable
- Flowcept’s architecture is modular: other MQs and DBs (graph, relational, etc.) can be added in the future
- Deployment examples for MQ and DB are provided in the [deployment](deployment) directory
### Downloading and Starting External Services (MQ or DB)
Flowcept uses external services for message queues (MQ) and databases (DB). You can start them with Docker Compose, plain containers, or directly on your host.
---
#### Using Docker Compose (recommended)
We provide a [Makefile](deployment/Makefile) with shortcuts:
1. **Redis only (no DB)**: `make services` (LMDB can be used in this setup as a lightweight DB)
2. **Redis + MongoDB**: `make services-mongo`
3. **Kafka + MongoDB**: `make services-kafka`
4. **Mofka only (no DB)**: `make services-mofka`
To customize, edit the YAML files in [deployment](deployment/) and run `docker compose -f deployment/<compose-file>.yml up -d`
---
#### Using Docker (without Compose)
See the [deployment/](deployment/) compose files for expected images and configurations. You can adapt them to your environment and use standard `docker pull / run / exec` commands.
---
#### Running on the Host (no containers)
1. Install binaries for the service you need:
- **macOS** users can install with [Homebrew](https://brew.sh).
Example for Redis:
```bash
brew install redis
brew services start redis
```
- On Linux, use your distro package manager (e.g. `apt`, `dnf`, `yum`)
   - If you are non-root (typically the case when deploying these services locally on an HPC system), find prebuilt binaries for your OS/hardware architecture, download them into a directory where you have read/write permission, and run them from there.
- On Windows, utilize [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) to use a Linux distro.
2. Start services normally (`redis-server`, `mongod`, `kafka-server-start.sh`, etc.).
## Flowcept Settings File
Flowcept uses a settings file for configuration.
- To create a minimal settings file (**recommended**), run: `flowcept --init-settings` → creates `~/.flowcept/settings.yaml`
- To create a full settings file with all options, run: `flowcept --init-settings --full` → creates `~/.flowcept/settings.yaml`
---
#### What You Can Configure
- Message queue and database routes, ports, and paths
- MCP agent ports and LLM API keys
- Buffer sizes and flush settings
- Telemetry capture settings
- Instrumentation and PyTorch details
- Log levels
- Data observability adapters
- And more (see [example file](resources/sample_settings.yaml))
---
#### Custom Settings File
Flowcept looks for its settings in the following order:
1. `~/.flowcept/settings.yaml` — created by running `flowcept --init-settings`
2. Environment variable `FLOWCEPT_SETTINGS_PATH` — if set, Flowcept uses the settings file at this path
3. [Default sample file](resources/sample_settings.yaml) — used if neither of the above is found
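As an illustration (not Flowcept's actual code), the lookup order above can be sketched as:

```python
import os
from pathlib import Path

def resolve_settings_path(
    home_file: Path = Path.home() / ".flowcept" / "settings.yaml",
    default_sample: Path = Path("resources/sample_settings.yaml"),
) -> Path:
    """Return the settings file following the documented lookup order."""
    if home_file.exists():  # 1. user settings file created by `flowcept --init-settings`
        return home_file
    env_path = os.environ.get("FLOWCEPT_SETTINGS_PATH")
    if env_path:  # 2. explicit override via environment variable
        return Path(env_path)
    return default_sample  # 3. packaged sample file as fallback
```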
# Examples
### Adapters and Notebooks
See the [Jupyter Notebooks](notebooks) and [Examples directory](examples) for utilization examples.
# Summary: Observability, Instrumentation, MQs, DBs, and Querying
| Category | Supported Options |
|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Data Observability Adapters** | [MLflow](https://github.com/ORNL/flowcept/blob/main/examples/mlflow_example.py), [Dask](https://github.com/ORNL/flowcept/blob/main/examples/dask_example.py), [TensorBoard](https://github.com/ORNL/flowcept/blob/main/examples/tensorboard_example.py) |
| **Instrumentation and Decorators** | - [@flowcept](https://github.com/ORNL/flowcept/blob/main/examples/start_here.py): encapsulate a function (e.g., a main function) as a workflow. <br> - [@flowcept_task](https://github.com/ORNL/flowcept/blob/main/examples/instrumented_simple_example.py): encapsulate a function as a task. <br> - `@telemetry_flowcept_task`: same as `@flowcept_task`, but optimized for telemetry capture. <br> - `@lightweight_flowcept_task`: same as `@flowcept_task`, but very lightweight, optimized for HPC workloads <br> - [Loop](https://github.com/ORNL/flowcept/blob/main/examples/instrumented_loop_example.py) <br> - [PyTorch Model](https://github.com/ORNL/flowcept/blob/main/examples/llm_complex/llm_model.py) <br> - [MCP Agent](https://github.com/ORNL/flowcept/blob/main/examples/agents/aec_agent_mock.py) |
| **Context Manager** | `with Flowcept():` <br/> `# Workflow code` <br/><br/>Similar to the `@flowcept` decorator, but more flexible for instrumenting code blocks that aren’t encapsulated in a single function and for workflows with scattered code across multiple files. |
| **Custom Task Creation** | `FlowceptTask(activity_id=<id>, used=<inputs>, generated=<outputs>, ...)` <br/><br/>Use for fully customizable task instrumentation. Publishes directly to the MQ either via context management (`with FlowceptTask(...)`) or by calling `send()`. It needs to have a `Flowcept().start()` first (or within a `with Flowcept()` context). See [example](examples/consumers/ping_pong_example.py). |
| **Message Queues (MQ)** | - **Disabled** (offline mode: provenance events stay in an in-memory buffer, not accessible to external processes) <br> - [Redis](https://redis.io) → default, lightweight, easy to run anywhere <br> - [Kafka](https://kafka.apache.org) → for distributed, production setups <br> - [Mofka](https://mofka.readthedocs.io) → optimized for HPC runs <br><br> _Setup example:_ [docker compose](https://github.com/ORNL/flowcept/blob/main/deployment/compose.yml) |
| **Databases** | - **Disabled** → Flowcept runs in ephemeral mode (data only in MQ, no persistence) <br> - **[MongoDB](https://www.mongodb.com)** → default, rich queries and efficient bulk writes <br> - **[LMDB](https://lmdb.readthedocs.io)** → lightweight, file-based, no external service, basic query support |
| **Querying and Monitoring** | - **[Grafana](deployment/compose-grafana.yml)** → dashboarding via MongoDB connector <br> - **MCP Flowcept Agent** → LLM-based querying of provenance data |
| **Custom Consumer** | You can implement your own consumer to monitor or query the provenance stream in real time. Useful for custom analytics, monitoring, debugging, or persisting the data in a different data model (e.g., graph). See [example](examples/consumers/simple_consumer.py). |
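To make the instrumentation rows concrete, here is a toy decorator — not Flowcept's implementation — that captures each call's inputs (`used`) and outputs (`generated`) into an in-memory buffer, which is roughly the shape of the task messages described above:

```python
import functools

CAPTURED: list = []  # stand-in for Flowcept's task buffer / MQ

def toy_flowcept_task(func):
    """Record the inputs and outputs of each call as a task-like record."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        CAPTURED.append({
            "activity_id": func.__name__,
            "used": {"args": args, "kwargs": kwargs},
            "generated": result,
        })
        return result
    return wrapper

@toy_flowcept_task
def multiply(x, y):
    return x * y

multiply(3, 4)
print(CAPTURED[0]["generated"])  # 12
```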
## Performance Tuning
Many variables in the `settings.yaml` file can affect interception overhead.
Please be mindful of the following parameters:
* `mq`
  - `buffer_size` and `insertion_buffer_time_secs` — `buffer_size: 1` is the worst choice for performance, but it delivers the most up-to-date information to the MQ.
* `log`
  - set both `stream` and `file` logging to `disable`
* `telemetry_capture`
  The more metrics you enable, the more overhead you incur. For GPUs, specific metrics can be turned on or off individually.
* `instrumentation`
  This configures whether every granular step of the model training process is captured. Disable very granular model inspection and prefer the more lightweight methods; there are commented instructions in the `settings.yaml` sample file.
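The trade-off between `buffer_size` and `insertion_buffer_time_secs` can be pictured with a toy buffer that flushes when it is full or when the flush interval elapses (the parameter names mirror the settings file; the class itself is invented for illustration):

```python
import time

class ToyBuffer:
    """Flush when the buffer is full or when the flush interval has elapsed."""
    def __init__(self, buffer_size: int, insertion_buffer_time_secs: float):
        self.buffer_size = buffer_size
        self.flush_interval = insertion_buffer_time_secs
        self.items: list = []
        self.flushes = 0
        self._last_flush = time.monotonic()

    def add(self, item) -> None:
        self.items.append(item)
        full = len(self.items) >= self.buffer_size
        stale = time.monotonic() - self._last_flush >= self.flush_interval
        if full or stale:
            self.flush()

    def flush(self) -> None:
        self.items.clear()
        self.flushes += 1
        self._last_flush = time.monotonic()

# buffer_size=1 flushes on every event: freshest data in the MQ, most overhead.
eager = ToyBuffer(buffer_size=1, insertion_buffer_time_secs=60)
for i in range(10):
    eager.add(i)

# A larger buffer amortizes the flush cost across many events.
batched = ToyBuffer(buffer_size=5, insertion_buffer_time_secs=60)
for i in range(10):
    batched.add(i)
print(eager.flushes, batched.flushes)  # 10 2
```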
Other settings to consider:
```yaml
project:
  replace_non_json_serializable: false  # Assumes all captured data are JSON serializable
  db_flush_mode: offline  # Disables runtime analysis in the database
mq:
  chunk_size: -1  # Disables chunking of messages sent to the MQ; use only if the compute nodes' main memory is large enough
```
Other variables may also have an impact depending on the adapter. For instance, in Dask, timestamp creation by workers adds interception overhead. As the software evolves, other variables that affect overhead may appear and may not be documented in this README yet. If you are running extensive performance-evaluation experiments with Flowcept, please reach out to us (e.g., create an issue in the repository) for hints on reducing its overhead.
## Install AMD GPU Lib
This section is only relevant if you want to enable GPU runtime data capture on AMD GPUs; NVIDIA GPUs do not need this step.
For AMD GPUs, we rely on the official AMD ROCm library to capture GPU data.
Unfortunately, this library is not available as a PyPI/Conda package, so you must install it manually. See the instructions at https://rocm.docs.amd.com/projects/amdsmi/en/latest/
Here is a summary:
1. Install the AMD drivers on the machine (check whether they are already available under `/opt/rocm-*`).
2. Suppose the installation is at `/opt/rocm-6.2.0`; make sure it has a `share/amd_smi` subdirectory containing a `pyproject.toml` or `setup.py`.
3. Copy `amd_smi` to your home directory: `cp -r /opt/rocm-6.2.0/share/amd_smi ~`
4. `cd ~/amd_smi`
5. In your Python environment, run `pip install .`

The current code is compatible with `amdsmi==24.7.1+0012a68`, which was installed using Frontier's `/opt/rocm-6.3.1/share/amd_smi`.
## Torch Dependencies
Some unit tests use `torch==2.2.2`, `torchtext==0.17.2`, and `torchvision==0.17.2`. They are only needed to run some tests and are installed by `pip install flowcept[ml_dev]` or `pip install flowcept[all]`. If you want to use Flowcept with Torch, please adapt the torch dependencies to your project's requirements.
## Documentation
Full documentation is available on [Read the Docs](https://flowcept.readthedocs.io/).
## Cite us
If you used Flowcept in your research, consider citing our paper.
```
Towards Lightweight Data Integration using Multi-workflow Provenance and Data Observability
R. Souza, T. Skluzacek, S. Wilkinson, M. Ziatdinov, and R. da Silva
19th IEEE International Conference on e-Science, 2023.
```
**Bibtex:**
```latex
@inproceedings{souza2023towards,
  author    = {Souza, Renan and Skluzacek, Tyler J and Wilkinson, Sean R and Ziatdinov, Maxim and da Silva, Rafael Ferreira},
  booktitle = {IEEE International Conference on e-Science},
  doi       = {10.1109/e-Science58273.2023.10254822},
  link      = {https://doi.org/10.1109/e-Science58273.2023.10254822},
  pdf       = {https://arxiv.org/pdf/2308.09004.pdf},
  title     = {Towards Lightweight Data Integration using Multi-workflow Provenance and Data Observability},
  year      = {2023}
}
```
## Disclaimer & Get in Touch
Refer to [Contributing](CONTRIBUTING.md) for adding new adapters or contributing to the codebase.
Please note that this is research software. We encourage you to give it a try and use it with your own stack.
We are continuously improving the documentation and adding more examples and notebooks. If you are interested in using Flowcept in your own scientific project, we can give you a jump start if you reach out to us. Feel free to [create an issue](https://github.com/ORNL/flowcept/issues/new), [create a new discussion thread](https://github.com/ORNL/flowcept/discussions/new/choose), or drop us an email (we trust you'll find a way to reach out to us :wink:).
## Acknowledgement
This research uses resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
| text/markdown | Oak Ridge National Laboratory | null | null | null | null | agentic-ai, agentic-workflows, ai, big-data, dask, data-analytics, data-integration, databases, lineage, llm, machine-learning, ml, mlflow, model-management, parallel-processing, provenance, reproducibility, responsible-ai, scientific-workflows, tensorboard, workflows | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"msgpack",
"numpy",
"omegaconf",
"orjson",
"alembic; extra == \"all\"",
"confluent-kafka<=2.8.0; extra == \"all\"",
"cryptography; extra == \"all\"",
"dask[distributed]<=2024.10.0; extra == \"all\"",
"fastapi; extra == \"all\"",
"furo; extra == \"all\"",
"gitpython; extra == \"all\"",
"google-genai; extra == \"all\"",
"jupyterlab; extra == \"all\"",
"langchain-community; extra == \"all\"",
"langchain-openai; extra == \"all\"",
"lmdb; extra == \"all\"",
"matplotlib; extra == \"all\"",
"mcp[cli]; extra == \"all\"",
"mlflow-skinny; extra == \"all\"",
"nbmake; extra == \"all\"",
"networkx; extra == \"all\"",
"paho-mqtt; extra == \"all\"",
"pandas; extra == \"all\"",
"pika; extra == \"all\"",
"plotly; extra == \"all\"",
"psutil>=6.1.1; extra == \"all\"",
"py-cpuinfo; extra == \"all\"",
"pyarrow; extra == \"all\"",
"pymongo; extra == \"all\"",
"pymupdf; extra == \"all\"",
"pytest; extra == \"all\"",
"pytest-timeout; extra == \"all\"",
"pyyaml; extra == \"all\"",
"redis; extra == \"all\"",
"reportlab; extra == \"all\"",
"requests; extra == \"all\"",
"rich; extra == \"all\"",
"ruff; extra == \"all\"",
"scipy; extra == \"all\"",
"seaborn; extra == \"all\"",
"sphinx; extra == \"all\"",
"sqlalchemy; extra == \"all\"",
"streamlit; extra == \"all\"",
"tabulate; extra == \"all\"",
"tbparse; extra == \"all\"",
"tensorboard; extra == \"all\"",
"tensorflow; extra == \"all\"",
"tomli; extra == \"all\"",
"uvicorn; extra == \"all\"",
"watchdog; extra == \"all\"",
"matplotlib; extra == \"analytics\"",
"plotly; extra == \"analytics\"",
"scipy; extra == \"analytics\"",
"seaborn; extra == \"analytics\"",
"dask[distributed]<=2024.10.0; extra == \"dask\"",
"tomli; extra == \"dask\"",
"furo; extra == \"dev\"",
"jupyterlab; extra == \"dev\"",
"nbmake; extra == \"dev\"",
"pika; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-timeout; extra == \"dev\"",
"pyyaml; extra == \"dev\"",
"ruff; extra == \"dev\"",
"sphinx; extra == \"dev\"",
"furo; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"gitpython; extra == \"extras\"",
"pandas; extra == \"extras\"",
"psutil>=6.1.1; extra == \"extras\"",
"py-cpuinfo; extra == \"extras\"",
"pyarrow; extra == \"extras\"",
"pymongo; extra == \"extras\"",
"redis; extra == \"extras\"",
"requests; extra == \"extras\"",
"rich; extra == \"extras\"",
"confluent-kafka<=2.8.0; extra == \"kafka\"",
"flask-restful; extra == \"legacy-webservice\"",
"langchain-community; extra == \"llm-agent\"",
"langchain-openai; extra == \"llm-agent\"",
"matplotlib; extra == \"llm-agent\"",
"mcp[cli]; extra == \"llm-agent\"",
"pymupdf; extra == \"llm-agent\"",
"streamlit; extra == \"llm-agent\"",
"tabulate; extra == \"llm-agent\"",
"gtts; extra == \"llm-agent-audio\"",
"langchain-community; extra == \"llm-agent-audio\"",
"langchain-openai; extra == \"llm-agent-audio\"",
"matplotlib; extra == \"llm-agent-audio\"",
"mcp[cli]; extra == \"llm-agent-audio\"",
"pydub; extra == \"llm-agent-audio\"",
"pymupdf; extra == \"llm-agent-audio\"",
"speechrecognition; extra == \"llm-agent-audio\"",
"streamlit; extra == \"llm-agent-audio\"",
"streamlit-mic-recorder; extra == \"llm-agent-audio\"",
"tabulate; extra == \"llm-agent-audio\"",
"google-genai; extra == \"llm-google\"",
"langchain-community; extra == \"llm-google\"",
"langchain-openai; extra == \"llm-google\"",
"matplotlib; extra == \"llm-google\"",
"mcp[cli]; extra == \"llm-google\"",
"pymupdf; extra == \"llm-google\"",
"streamlit; extra == \"llm-google\"",
"tabulate; extra == \"llm-google\"",
"lmdb; extra == \"lmdb\"",
"datasets==2.17.0; extra == \"ml-dev\"",
"nltk; extra == \"ml-dev\"",
"numpy<2.0; extra == \"ml-dev\"",
"sacremoses; extra == \"ml-dev\"",
"torch==2.2.2; extra == \"ml-dev\"",
"torchtext==0.17.2; extra == \"ml-dev\"",
"torchvision==0.17.2; extra == \"ml-dev\"",
"alembic; extra == \"mlflow\"",
"cryptography; extra == \"mlflow\"",
"mlflow-skinny; extra == \"mlflow\"",
"sqlalchemy; extra == \"mlflow\"",
"watchdog; extra == \"mlflow\"",
"pyarrow; extra == \"mongo\"",
"pymongo; extra == \"mongo\"",
"paho-mqtt; extra == \"mqtt\"",
"nvidia-ml-py; extra == \"nvidia\"",
"redis; extra == \"redis\"",
"matplotlib; extra == \"report-pdf\"",
"networkx; extra == \"report-pdf\"",
"reportlab; extra == \"report-pdf\"",
"psutil>=6.1.1; extra == \"telemetry\"",
"py-cpuinfo; extra == \"telemetry\"",
"tbparse; extra == \"tensorboard\"",
"tensorboard; extra == \"tensorboard\"",
"tensorflow; extra == \"tensorboard\"",
"fastapi; extra == \"webservice\"",
"pyyaml; extra == \"webservice\"",
"uvicorn; extra == \"webservice\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:44:42.066276 | flowcept-0.10.1.tar.gz | 1,665,848 | 62/38/2659449da202f14252bbce36be8609848852049865cf4f4f4871138d1de0/flowcept-0.10.1.tar.gz | source | sdist | null | false | 1c29d8580637079d7647950d6bf44427 | 78eceb35ccc2b8c7d8548f80066184f2bd75df751cfa07e82a230e171da8e0e6 | 62382659449da202f14252bbce36be8609848852049865cf4f4f4871138d1de0 | MIT | [
"LICENSE"
] | 216 |
2.4 | hetman-pipeline-plugin-i18n | 1.0.4 | Hetman Pipeline Plugin I18n is a plugin for Hetman Pipeline that provides i18n support. | <img src="https://raw.githubusercontent.com/hetman-app/hetman-pipeline/main/docs/assets/full-white-text.webp" alt="Hetman Logo" width="200" height="38" />
---
**Hetman Pipeline Plugin I18n** is a plugin for [Hetman Pipeline](https://pipeline.hetman.app) that provides internationalization (i18n) support.
## Features
- Comprehensive translations for all standard `Condition` and `Match` handlers.
- Simple initialization to register all translations globally.
## Installation
```bash
pip install "hetman-pipeline[i18n]"
```
## Quick Start
Initialize the plugin at the start of your application to register the translations.
You must call `set_base_locale` before running the first handler to ensure the system correctly maps the initial translation state.
```python
from pipeline_plugin_i18n import PipelinePluginI18n, initialize_pipeline_plugin_i18n
# Register translations for standard handlers
initialize_pipeline_plugin_i18n()
# Set the base locale
PipelinePluginI18n.set_base_locale("en")
```
## Context Management
The plugin uses `contextvars` to manage the locale state, making it thread-safe and safe for asynchronous applications (e.g., **FastAPI**, **Falcon**, **Flask**).
### Setting the Locale
You can set the locale for the current context (e.g., per request).
```python
from pipeline_plugin_i18n import PipelinePluginI18n
# Set to Polish
PipelinePluginI18n.set_locale("pl")
```
### Getting the Locale
```python
from pipeline_plugin_i18n import PipelinePluginI18n
current_locale = PipelinePluginI18n.get_locale()
print(current_locale) # Output: "pl"
```
## Custom Translations
You can register your own translations for any `Condition` or `Match` handler.
```python
from pipeline.handlers import Match
from pipeline.handlers.base_handler.resources.constants import HandlerMode
from pipeline_plugin_i18n import PipelinePluginI18n
PipelinePluginI18n.register_handler(
    handler=Match.Text.Letters,
    translations={
        HandlerMode.ROOT: {
            "en": Match.Text.Letters.ERROR_TEMPLATES[HandlerMode.ROOT],
            "pl": "Musi zawierać tylko litery (np. Aaaaa).",
        }
    },
)
```
## Supported Languages
- English (`en`)
- Polish (`pl`)
| text/markdown | null | Hetman <degadupeko@my-relay.app> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"hetman-pipeline>=2.2.5",
"hetman-kit-localize>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://pipeline.hetman.app",
"Documentation, https://github.com/hetman-app/hetman-pipeline-plugin-i18n",
"Repository, https://github.com/hetman-app/hetman-pipeline-plugin-i18n",
"Issues, https://github.com/hetman-app/hetman-pipeline-plugin-i18n/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:44:14.477034 | hetman_pipeline_plugin_i18n-1.0.4.tar.gz | 9,768 | da/da/98d064de7de85af67057b5346aa38023b204e8c8d990f12ea615a9ef81f9/hetman_pipeline_plugin_i18n-1.0.4.tar.gz | source | sdist | null | false | 1f3148097aa60a35c81b591f6e3a92f7 | 3b02a25986275396425ae004ffb61d47187768026df6cce07383b32aebc4504f | dada98d064de7de85af67057b5346aa38023b204e8c8d990f12ea615a9ef81f9 | null | [
"LICENSE"
] | 205 |
2.4 | rdc-cli | 0.2.0 | Unix-friendly CLI for RenderDoc captures | # rdc-cli
Unix-friendly CLI for [RenderDoc](https://renderdoc.org/) `.rdc` captures. Pipe-friendly TSV output, JSON mode, 33 commands, daemon-backed session for interactive exploration.
```bash
rdc open capture.rdc # Start session
rdc draws # List draw calls (TSV)
rdc pipeline 142 # Pipeline state at EID 142
rdc shader 142 ps # Pixel shader disassembly
rdc texture 5 -o out.png # Export texture
rdc draws --json | jq '...' # Machine-readable output
rdc close # End session
```
## Install
### PyPI (recommended)
```bash
pipx install rdc-cli
```
### AUR (Arch Linux)
```bash
yay -S rdc-cli-git
```
This builds the renderdoc Python module automatically — no extra setup needed.
### From source
```bash
git clone https://github.com/BANANASJIM/rdc-cli.git
cd rdc-cli
pixi install && pixi run sync
```
## Setup renderdoc
`rdc` requires the renderdoc Python module (`renderdoc.cpython-*.so`), which is **not** included in most system packages. Your Python version must match the one used to compile renderdoc.
### Build from source
```bash
git clone --depth 1 https://github.com/baldurk/renderdoc.git
cd renderdoc
cmake -B build -DENABLE_PYRENDERDOC=ON -DENABLE_QRENDERDOC=OFF
cmake --build build -j$(nproc)
export RENDERDOC_PYTHON_PATH=$PWD/build/lib
```
### Module discovery order
1. `RENDERDOC_PYTHON_PATH` environment variable
2. `/usr/lib/renderdoc`, `/usr/local/lib/renderdoc`
3. Sibling directory of `renderdoccmd` on PATH
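As a sketch of that discovery order (a hypothetical helper, not rdc-cli's actual code):

```python
import os
import shutil
from pathlib import Path

def find_renderdoc_module():
    """Return the first directory from the documented discovery order, or None."""
    # 1. Explicit override via environment variable
    env = os.environ.get("RENDERDOC_PYTHON_PATH")
    if env and Path(env).is_dir():
        return Path(env)
    # 2. Well-known system install locations
    for candidate in ("/usr/lib/renderdoc", "/usr/local/lib/renderdoc"):
        if Path(candidate).is_dir():
            return Path(candidate)
    # 3. Sibling directory of the renderdoccmd binary on PATH
    cmd = shutil.which("renderdoccmd")
    if cmd:
        return Path(cmd).parent
    return None
```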
### Verify
```bash
rdc doctor
```
## Commands
Run `rdc --help` for the full command list, or `rdc <command> --help` for details.
| Category | Commands |
|----------|----------|
| Session | `open`, `close`, `status`, `goto` |
| Inspection | `info`, `stats`, `events`, `draws`, `event`, `draw`, `log` |
| GPU state | `pipeline`, `bindings`, `shader`, `shaders`, `shader-map` |
| Resources | `resources`, `resource`, `passes`, `pass`, `usage` |
| Export | `texture`, `rt`, `buffer` |
| Search | `search`, `counters` |
| VFS | `ls`, `cat`, `tree` |
| Utility | `doctor`, `completion`, `capture`, `count` |
All commands support `--json` for machine-readable output.
### Shell completions
```bash
rdc completion bash > ~/.local/share/bash-completion/completions/rdc
rdc completion zsh > ~/.zfunc/_rdc
eval "$(rdc completion bash)"
```
## Development
```bash
pixi run sync # Install Python deps
pixi run check # lint + typecheck + test (653 tests, 92% coverage)
```
GPU integration tests require a real renderdoc module:
```bash
export RENDERDOC_PYTHON_PATH=/path/to/renderdoc/build/lib
pixi run test-gpu
```
## License
MIT
| text/markdown | Jim | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1",
"rich>=13.0; extra == \"rich\"",
"Pillow>=10.0; extra == \"imaging\"",
"Pillow>=10.0; extra == \"diff\"",
"numpy>=1.24; extra == \"diff\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"ruff>=0.6; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:43:10.962235 | rdc_cli-0.2.0.tar.gz | 45,456 | 11/36/bc750b051cc67a155eb531722ca6a92032264515f59c9dd9b5a2b919237d/rdc_cli-0.2.0.tar.gz | source | sdist | null | false | ec32e05d267fef8dbd881418691637f7 | b03b971c95002bf7d502d32e05e4dc00f96630e6517f37cda42725e375e474f1 | 1136bc750b051cc67a155eb531722ca6a92032264515f59c9dd9b5a2b919237d | MIT | [
"LICENSE"
] | 225 |
2.4 | agentql | 1.18.1 | Tiny Fish AgentQL Python Client | # Tiny Fish AgentQL Python Client
A Python client for Tiny Fish AgentQL - a new way to interact with the web.
| text/markdown | null | Tiny Fish <support@agentql.com> | null | null | MIT | ai, automation, playwright, testing, web-scraping | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Internet :: WWW/HTTP :: Browsers",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiofiles>=23.0.0",
"colorama<1.0.0,>=0.4.6",
"httpx<1.0.0,>=0.23.0",
"playwright>=1.0.0",
"pydantic>=2.0.0",
"pytest-asyncio>=0.21.0",
"requests>=2.0.0",
"tf-playwright-stealth<2.0.0,>=1.0.0",
"typer>=0.7.0",
"typing-extensions>=4.0.0"
] | [] | [] | [] | [
"Homepage, https://agentql.com",
"Repository, https://github.com/tinyfish-ai/agentql-client",
"Documentation, https://docs.agentql.com"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-21T01:43:08.694967 | agentql-1.18.1.tar.gz | 114,701 | dc/d5/9beb41290f2b27adc720a3d9f9fbfd8b4e9d4748d4f0d42ce668126417fe/agentql-1.18.1.tar.gz | source | sdist | null | false | 7a14c7ede1bfafa93c156ada9f530ce8 | 1e95ff6f15a17a24d56f134bc0a697db98332ff863c784012c6268841b36035d | dcd59beb41290f2b27adc720a3d9f9fbfd8b4e9d4748d4f0d42ce668126417fe | null | [
"LICENSE"
] | 372 |
2.1 | athena-intelligence | 0.1.952 | Athena Intelligence Python Library | # Athena Intelligence Python Library
[](https://github.com/fern-api/fern)
[](https://pypi.python.org/pypi/athena-intelligence)
The Athena Intelligence Python Library provides convenient access to the Athena Intelligence API from
applications written in Python.
The library includes type definitions for all
request and response fields, and offers both synchronous and asynchronous clients powered by httpx.
## Installation
Add this dependency to your project's build file:
```bash
pip install athena-intelligence
# or
poetry add athena-intelligence
```
## Usage
Simply import `Athena` and start making calls to our API.
```python
from athena.client import Athena
from athena import Model, Tools
client = Athena(
    api_key="YOUR_API_KEY"  # Defaults to ATHENA_API_KEY
)

message = client.message.submit(
    content="visit www.athenaintelligence.ai and summarize the website in one paragraph",
    model=Model.GPT_3_5_TURBO,
    tools=[Tools.SEARCH, Tools.BROWSE, Tools.SEARCH],
)
```
## Async Client
The SDK also exports an async client so that you can make non-blocking
calls to our API.
```python
import asyncio

from athena.client import AsyncAthena
from athena import Model, Tools
client = AsyncAthena(
    api_key="YOUR_API_KEY"  # Defaults to ATHENA_API_KEY
)
async def main() -> None:
    message = await client.message.submit(
        content="visit www.athenaintelligence.ai and summarize the website in one paragraph",
        model=Model.GPT_3_5_TURBO,
        tools=[Tools.SEARCH, Tools.BROWSE, Tools.SEARCH],
    )
    print("Received message", message)

asyncio.run(main())
```
## Polling
The SDK provides helper functions that will automatically poll when
retrieving a message. Use the `submit_and_poll` method as shown below:
```python
from athena.client import Athena
from athena import Model, Tools
client = Athena(api_key="...")
message = client.message.submit_and_poll(
    content="visit www.athenaintelligence.ai and summarize the website in one paragraph",
    model=Model.GPT_3_5_TURBO,
    tools=[Tools.SEARCH, Tools.BROWSE, Tools.SEARCH],
)
```
By default, the method will poll every 2 seconds but you can override
this with the `poll_interval` argument.
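A generic polling loop of this kind can be sketched as follows (toy code, not the SDK's internals; `poll_interval` mirrors the argument above):

```python
import time

def poll_until(fetch, is_done, poll_interval: float = 2.0, max_attempts: int = 30):
    """Call fetch() every poll_interval seconds until is_done(result) is true."""
    for _ in range(max_attempts):
        result = fetch()
        if is_done(result):
            return result
        time.sleep(poll_interval)
    raise TimeoutError("polling gave up before completion")

# Toy backend that becomes ready on the third fetch.
state = {"calls": 0}
def fake_fetch():
    state["calls"] += 1
    return {"status": "completed" if state["calls"] >= 3 else "pending"}

final = poll_until(fake_fetch, lambda r: r["status"] == "completed", poll_interval=0.01)
print(final["status"])  # completed
```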
## Athena Module
All of the models are nested within the Athena module. Let IntelliSense
guide you!
## Exception Handling
All errors thrown by the SDK will be subclasses of [`ApiError`](./src/athena/core/api_error.py).
```python
import athena
try:
    client.messages.get(...)
except athena.core.ApiError as e:  # Handle all errors
    print(e.status_code)
    print(e.body)
```
## Advanced
### Timeouts
By default, requests time out after 60 seconds. You can configure this with a
timeout option at the client or request level.
```python
from athena.client import Athena
client = Athena(
    # All timeouts are 20 seconds
    timeout=20.0,
)

# Override the timeout for a specific method call
client.messages.get(..., timeout_in_seconds=20)
```
### Custom HTTP client
You can override the httpx client to customize it for your use-case. Some common use-cases
include support for proxies and transports.
```python
import httpx
from athena.client import Athena
client = Athena(
    http_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
## Beta Status
This SDK is in **Preview**, and there may be breaking changes between versions without a major
version update.
To ensure a reproducible environment (and minimize risk of breaking changes), we recommend pinning a specific package version.
## Contributing
While we value open-source contributions to this SDK, this library is generated programmatically.
Additions made directly to this library would have to be moved over to our generation code,
otherwise they would be overwritten upon the next generated release. Feel free to open a PR as
a proof of concept, but know that we will not be able to merge it as-is. We suggest opening
an issue first to discuss with us!
On the other hand, contributions to the README are always very welcome!
| text/markdown | null | null | null | null | null | null | [
"Intended Audience :: Developers",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.8",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"fastapi<0.130.0,>=0.115.10",
"httpx>=0.21.2",
"langchain_core<0.4.0,>=0.3.40",
"langserve<0.4.0,>=0.3.1",
"pydantic>=1.9.2",
"pydantic-core>=2.18.2",
"python-magic==0.4.27",
"typing_extensions>=4.0.0"
] | [] | [] | [] | [] | poetry/1.5.1 CPython/3.9.25 Linux/6.11.0-1018-azure | 2026-02-21T01:42:37.660654 | athena_intelligence-0.1.952.tar.gz | 78,097 | 58/55/7a103b3f5a7881882baaf09ddb33342213e5a06dbf8698778e618a2d3390/athena_intelligence-0.1.952.tar.gz | source | sdist | null | false | 1762190414d3931e41af4f66e514d784 | 89a90d0ee244c152f098d66bb543916b9f8710ec801d66e9e0fe7572b65fd93d | 58557a103b3f5a7881882baaf09ddb33342213e5a06dbf8698778e618a2d3390 | null | [] | 220 |
2.4 | crtx | 0.2.1 | Multi-model AI orchestration platform. Plugin any LLM. Ship better code. | <p align="center">
<img src="assets/banner.svg" alt="CRTX" width="800">
</p>
<p align="center">
<strong>Generate. Test. Fix. Review. One command, verified output.</strong>
</p>
<p align="center">
<a href="#quick-start">Quick Start</a> •
<a href="#the-problem">The Problem</a> •
<a href="#the-loop">The Loop</a> •
<a href="#benchmarks">Benchmarks</a> •
<a href="#how-it-works">How It Works</a> •
<a href="#commands">Commands</a> •
<a href="#supported-models">Supported Models</a>
</p>
<p align="center">
<img src="https://img.shields.io/pypi/pyversions/crtx" alt="python 3.12+">
<img src="https://img.shields.io/github/license/CRTXAI/CRTX" alt="license Apache 2.0">
<img src="https://img.shields.io/pypi/v/crtx" alt="PyPI version">
</p>
---
## What is CRTX?
CRTX is an AI development intelligence tool that generates, tests, fixes, and reviews code automatically. One command in, verified output out.
It works with any model — Claude, GPT, Gemini, Grok, DeepSeek — and picks the right one for each task. You don't configure pipelines or choose models. You describe what you want and CRTX handles the rest.
```bash
crtx loop "Build a REST API with FastAPI, SQLite, search and pagination"
```
## The Problem
Single AI models generate code that *looks* correct but often has failing tests, broken imports, and missed edge cases. Developers spend 10–30 minutes per generation debugging and fixing AI output before it actually works.
Multi-model pipelines cost 10–15x more without meaningfully improving quality. Four models reviewing each other's prose doesn't catch a broken import statement.
The issue isn't the model. It's the lack of verification. Nobody runs the code before handing it to you.
## The Loop
CRTX solves this with the Loop: **Generate → Test → Fix → Review**.
1. **Generate** — The best model for the task writes the code
2. **Test** — CRTX runs the code locally: AST parse, import check, pyflakes, pytest, entry point execution
3. **Fix** — Failures feed back to the model with structured error context for targeted fixes
4. **Review** — An independent Arbiter (always a *different* model) reviews the final output
Every output is tested before you see it. If tests fail, CRTX fixes them. If the fix cycle stalls, three escalation tiers activate before giving up. If the Arbiter rejects the code, one more fix cycle runs.
The result: code that passes its own tests, has been reviewed by a second model, and comes with a verification report.
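In outline, the Loop is a bounded generate, test, fix cycle. A minimal sketch of that control flow (the callables and names here are illustrative, not CRTX internals):

```python
# Minimal sketch of a bounded generate -> test -> fix cycle.
# The callables and names here are illustrative, not CRTX internals.
def run_loop(generate, test, fix, max_fix_iterations=3):
    code = generate()
    for _ in range(max_fix_iterations):
        failures = test(code)
        if not failures:
            return code, True        # verified: all checks passed
        code = fix(code, failures)   # feed structured errors back
    return code, not test(code)      # fix budget exhausted; report final state

# Toy demo: the first "generation" fails its test, one fix repairs it.
code, verified = run_loop(
    generate=lambda: "buggy",
    test=lambda c: ["SyntaxError"] if c == "buggy" else [],
    fix=lambda c, failures: "fixed",
)
print(code, verified)  # fixed True
```

The key design point is the bounded fix budget: the loop never retries forever, which is what the escalation tiers below exist to handle.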
## Benchmarks
Same 12 prompts, same scoring rubric. CRTX Loop vs. single models vs. multi-model debate:
| Condition | Avg Score | Min | Spread | Avg Dev Time | Cost |
|-----------|-----------|-----|--------|--------------|------|
| Single Sonnet | 94% | 92% | 4 pts | 10 min | $0.36 |
| Single o3 | 81% | 54% | 41 pts | 4 min | $0.44 |
| Multi-model Debate | 88% | 75% | 25 pts | 9 min | $5.59 |
| **CRTX Loop** | **99%** | **98%** | **2 pts** | **2 min** | **$1.80** |
**Dev Time** = estimated developer minutes to get the output to production (based on test failures, import errors, and entry point issues). **Spread** = max score minus min score across all prompts.
The Loop scores higher, more consistently, with less post-generation work than any other condition — at a fraction of the cost of multi-model pipelines.
Run the benchmark yourself:
```bash
crtx benchmark --quick
```
## How It Works
```
┌─────────┐ ┌──────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Route │ ─→ │ Generate │ ─→ │ Test │ ─→ │ Fix │ ─→ │ Review │ ─→ │ Present │
└─────────┘ └──────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
│ │ │
│ └──────────────┘
│ ↑ loop until pass
│
├── simple → fast model, 2 fix iterations
├── medium → balanced model, 3 fix iterations
└── complex → best model, 5 fix iterations + architecture debate
```
**Route** — Classifies your prompt by complexity (simple/medium/complex) and selects the model, fix budget, and timeout tier.
**Generate** — Produces source files and test files. If no tests are generated, a second call creates comprehensive pytest tests so the fix cycle always has something to verify against.
**Test** — Five-stage local quality gate: AST parse → import check → pyflakes → pytest → entry point execution. Per-file pytest fallback on collection failures.
**Fix** — Feeds structured test failures back to the model for targeted fixes. Detects phantom API references (tests importing functions that don't exist in source) and pytest collection failures.
**Three-tier gap closing** — When the normal fix cycle can't resolve failures:
- **Tier 1** — Diagnose then fix: "analyze the root cause without writing code," then feed the diagnosis back for a targeted fix
- **Tier 2** — Minimal context retry: strip context to only the failing test and its source file, fresh perspective
- **Tier 3** — Second opinion: escalate to a different model with the primary model's diagnosis
**Review** — An independent Arbiter (always a different model than the generator) reviews for logic errors, security issues, and design problems. On REJECT, triggers one more fix cycle and retests.
**Present** — Final results with verification report, file list, and cost breakdown.
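As a rough illustration of the Test stage, a multi-stage quality gate amounts to a short-circuiting pipeline of checks. This sketch implements only the first stage (AST parse); the stage names follow the README, but the code is not CRTX's actual implementation:

```python
# Illustrative short-circuiting quality gate; only the AST-parse stage is
# implemented here. Stage names follow the README; code is not CRTX's.
import ast

def ast_parse_gate(source: str) -> list[str]:
    """Return a list of failures (an empty list means the stage passed)."""
    try:
        ast.parse(source)
        return []
    except SyntaxError as exc:
        return [f"AST parse failed: {exc.msg} (line {exc.lineno})"]

def run_gates(source: str, gates) -> list[str]:
    for gate in gates:
        failures = gate(source)
        if failures:
            return failures   # stop at the first failing stage
    return []

print(run_gates("x = 1\n", [ast_parse_gate]))         # passes: []
print(run_gates("def broken(:\n", [ast_parse_gate]))  # fails: non-empty list
```

Stopping at the first failing stage keeps the error context fed back to the Fix step small and specific.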
## Key Features
**Smart routing** — Classifies prompts by complexity and picks the right model, fix budget, and timeout for each task. Simple tasks get fast models. Complex tasks get the best model plus an architecture debate.
**Three-tier gap closing** — When fixes stall, CRTX escalates: root cause diagnosis, minimal context retry, then a second opinion from a different model. Most stuck cases resolve at tier 1 or 2.
**Independent Arbiter review** — Every run gets reviewed by a model that didn't write the code. Cross-model review catches errors that self-review misses. Skip with `--no-arbiter`.
**Verified scoring** — Every output is tested locally before you see it. The verification report shows exactly which checks passed, how many tests ran, and estimated developer time to production.
**Auto-fallback** — If a provider goes down mid-run (rate limit, timeout, outage), CRTX substitutes the next best model and keeps going. A 5-minute cooldown prevents hammering a struggling provider.
**Apply mode** — Write generated code directly to your project with `--apply`. Interactive diff preview, git branch protection, conflict detection, AST-aware patching, and automatic rollback if post-apply tests fail.
**Context injection** — Scan your project and inject relevant code into the generation prompt with `--context .`. AST-aware Python analysis extracts class signatures, function definitions, and import graphs within a configurable token budget.
## Quick Start
```bash
pip install crtx
crtx setup # configure your API keys
```
Then run:
```bash
crtx loop "Build a CLI password generator with strength validation and clipboard support"
```
## Commands
| Command | What it does |
|---------|-------------|
| `crtx loop "task"` | Generate, test, fix, and review code (default) |
| `crtx run "task"` | Run a multi-model pipeline (sequential/parallel/debate) |
| `crtx benchmark` | Run the built-in benchmark suite |
| `crtx repl` | Interactive shell with session history |
| `crtx review-code` | Multi-model code review on files or git diffs |
| `crtx improve` | Review → improve pipeline with cross-model consensus |
| `crtx setup` | API key configuration |
| `crtx models` | List available models with fitness scores |
| `crtx estimate "task"` | Cost estimate before running |
| `crtx sessions` | Browse past runs |
| `crtx replay <id>` | Re-display a previous session |
| `crtx dashboard` | Real-time web dashboard |
## Supported Models
CRTX works with any model supported by LiteLLM — that's 100+ providers. Out of the box, it's configured for:
| Provider | Models |
|----------|--------|
| Anthropic | Claude Opus 4, Sonnet 4 |
| OpenAI | GPT-4o, o3 |
| Google | Gemini 2.5 Pro, Flash |
| xAI | Grok |
| DeepSeek | DeepSeek R1 |
Add any LiteLLM-compatible model in `~/.crtx/config.toml`.
### API Key Setup
Run `crtx setup` to configure your keys interactively, or set them as environment variables:
```bash
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GEMINI_API_KEY=...
export XAI_API_KEY=xai-...
export DEEPSEEK_API_KEY=sk-...
```
CRTX only needs one provider to work. More providers means more model diversity for routing and Arbiter review.
## Contributing
Contributions are welcome. Fork the repo, create a branch, and submit a PR.
The test suite has 1,096 tests — run them with `pytest`. Linting is `ruff check .`.
## License
Apache 2.0. See [LICENSE](LICENSE) for details.
---
<p align="center">
Built by <a href="https://www.crtx-ai.com">TriadAI</a>
</p>
| text/markdown | TriadAI | null | null | null | null | ai, arbiter, code-generation, llm, multi-model, pipeline | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Code Generators"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.20",
"jinja2>=3.0",
"litellm>=1.55",
"pydantic>=2.0",
"rich>=13.0",
"typer>=0.12",
"fastapi>=0.110; extra == \"dashboard\"",
"uvicorn>=0.30; extra == \"dashboard\"",
"websockets>=12.0; extra == \"dashboard\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.5; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://www.crtx-ai.com",
"Repository, https://github.com/CRTXAI/CRTX"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-21T01:42:32.339134 | crtx-0.2.1.tar.gz | 1,058,092 | 13/92/a5483e15994469c311c66519862fbc1d64ee10985eabc86031f8abf9bd30/crtx-0.2.1.tar.gz | source | sdist | null | false | 34179d76790c928e1302852caac7f00d | a131eac9a2f94f3d2650e2e8d4bb6c2b2d796c4601e2ad8c6b115378afe713b7 | 1392a5483e15994469c311c66519862fbc1d64ee10985eabc86031f8abf9bd30 | Apache-2.0 | [
"LICENSE"
] | 197 |
2.4 | documente_shared | 0.1.179 | Shared utilities for LlamitAI projects |
# Documente Shared
Utilities for Documente AI projects
## Installation
```bash
make build
```
## Publishing
Publishes `documente-shared` to PyPI
```bash
make publish
``` | text/markdown | null | Tech <tech@llamitai.com> | null | null | MIT | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"boto3>=1.42.46",
"botocore>=1.42.46",
"pydantic>=2.12.5",
"requests>=2.32.5",
"sentry-sdk>=2.52.0",
"structlog>=25.5.0",
"unidecode>=1.4.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T01:41:53.298637 | documente_shared-0.1.179.tar.gz | 61,400 | fa/b5/a33013295624de4ca0652cb74160da04439672e29808e218a33d87ce03ef/documente_shared-0.1.179.tar.gz | source | sdist | null | false | 372790be881d71276c7b3b0ede32f4da | 559dcfe1848d633847c57432f5ac2c78a746ee814253c4a48ce85173730b3d5a | fab5a33013295624de4ca0652cb74160da04439672e29808e218a33d87ce03ef | null | [] | 0 |
2.4 | androidtvmcp | 0.3.0 | Android TV Remote Control to MCP Bridge | # AndroidTVMCP - Android TV Remote Control to MCP Bridge
A Model Context Protocol (MCP) server that provides Android TV remote control functionality to AI assistants and other MCP clients.
## Overview
AndroidTVMCP bridges Android TV remote control capabilities with the Model Context Protocol, enabling seamless integration of Android TV control into AI-powered workflows and automation systems.
## Features
- **Device Discovery**: Automatic detection of Android TV devices on the local network
- **Remote Control**: Full navigation and playback control capabilities
- **App Management**: Launch and switch between Android TV applications
- **State Monitoring**: Query device status and current state
- **MCP Integration**: Standard MCP protocol compliance for easy integration
## Quick Start
### Installation
#### Using Virtual Environment (Recommended)
```bash
# Create a virtual environment
python -m venv androidtvmcp-env
# Activate the virtual environment
# On Linux/macOS:
source androidtvmcp-env/bin/activate
# On Windows:
# androidtvmcp-env\Scripts\activate
# Install the package
pip install androidtvmcp
```
#### Global Installation
```bash
pip install androidtvmcp
```
### Basic Usage
1. Start the MCP server:
```bash
androidtvmcp --host localhost --port 8080
```
2. Configure your MCP client to connect to the server
3. Use Android TV control tools through your AI assistant
### Example Commands
- Navigate: "Move up on the Android TV"
- Playback: "Pause the current video"
- Apps: "Launch Netflix on Android TV"
- Status: "What's currently playing on Android TV?"
## Configuration
Create a configuration file `config.json`:
```json
{
"devices": {
"discovery": {
"enabled": true,
"timeout": 10
},
"connection": {
"timeout": 5,
"retry_attempts": 3
}
},
"mcp": {
"host": "localhost",
"port": 8080,
"transport": "stdio"
},
"logging": {
"level": "INFO",
"file": "androidtvmcp.log"
}
}
```
## MCP Tools
### Navigation Tools
- `atv_navigate`: Navigate Android TV interface (up, down, left, right, select, menu, back, home)
- `atv_input_text`: Send text input to Android TV
### Playback Tools
- `atv_playback`: Control media playback (play, pause, stop, fast_forward, rewind)
- `atv_volume`: Adjust volume (up, down, mute)
### App Management Tools
- `atv_launch_app`: Launch specific applications
- `atv_get_apps`: List available applications
- `atv_switch_app`: Switch between running applications
### Device Tools
- `atv_get_devices`: List discovered Android TV devices
- `atv_get_status`: Get current device status and state
- `atv_power`: Power control (on, off, sleep)
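Since AndroidTVMCP is a standard MCP server, these tools are invoked through the MCP `tools/call` method. A sketch of such a request payload follows; the `direction` argument name is an assumption for illustration, and the server's declared tool schema (via `tools/list`) is authoritative:

```python
# Sketch of an MCP tools/call request for atv_navigate.
# The argument name "direction" is hypothetical; consult the tool schema.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "atv_navigate",
        "arguments": {"direction": "up"},  # hypothetical argument name
    },
}
payload = json.dumps(request)
print(payload)
```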
## MCP Resources
### Device Information
- `device://[device_id]/info`: Device capabilities and information
- `device://[device_id]/status`: Current device status
- `device://[device_id]/apps`: Available applications
### Current State
- `state://current_app`: Currently active application
- `state://playback`: Current playback status
- `state://volume`: Current volume level
## Development
### Setup Development Environment
#### Using Virtual Environment (Recommended)
```bash
# Clone the repository
git clone https://github.com/pigeek/androidtvmcp.git
cd androidtvmcp
# Create and activate virtual environment
python -m venv venv
# Activate the virtual environment
# On Linux/macOS:
source venv/bin/activate
# On Windows:
# venv\Scripts\activate
# Install in development mode with dev dependencies
pip install -e ".[dev]"
```
#### Alternative Setup
```bash
git clone https://github.com/pigeek/androidtvmcp.git
cd androidtvmcp
pip install -e ".[dev]"
```
### Run Tests
```bash
pytest
```
### Development Tools
The `devtools/` directory contains standalone scripts for manual testing and validation:
```bash
cd devtools
python test_command_processor.py # Test command processor functionality
python test_mcp_client.py # Test MCP client-server communication
python test_mcp_integration.py # Test MCP server integration
```
See `devtools/README.md` for detailed information about each script.
### Code Formatting
```bash
black src/ tests/
isort src/ tests/
```
### Type Checking
```bash
mypy src/
```
## Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ MCP Client │◄──►│ AndroidTVMCP │◄──►│ Android TV │
│ (AI Assistant) │ │ Server │ │ Devices │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
### Components
- **MCP Server**: Handles MCP protocol communication
- **Device Manager**: Manages Android TV device discovery and connections
- **Command Processor**: Translates MCP requests to Android TV commands
- **Network Layer**: Handles Android TV protocol communication
## Requirements
- Python 3.8+
- Android TV devices on the same network
- Network connectivity for device discovery
## Troubleshooting
### Common Issues
1. **Device Not Found**
- Ensure Android TV is on the same network
- Check firewall settings
- Verify device discovery is enabled
2. **Connection Failed**
- Check network connectivity
- Verify Android TV remote control is enabled
- Try restarting the Android TV device
3. **Commands Not Working**
- Ensure device is powered on
- Check if device supports the command
- Verify connection status
### Debug Mode
Enable debug logging:
```bash
androidtvmcp --log-level DEBUG
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Run the test suite
6. Submit a pull request
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Support
- [GitHub Issues](https://github.com/pigeek/androidtvmcp/issues)
- [Documentation](https://androidtvmcp.readthedocs.io/)
- [MCP Protocol Documentation](https://modelcontextprotocol.io/)
## Related Projects
- [androidtvremote2](https://github.com/tronikos/androidtvremote2) - Android TV remote control library
- [Model Context Protocol](https://modelcontextprotocol.io/) - Protocol specification
| text/markdown | Pigeek | null | null | null | MIT | android-tv, mcp, model-context-protocol, remote-control | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"androidtvremote2>=0.1.0",
"asyncio-mqtt>=0.11.0",
"click>=8.0.0",
"mcp>=1.0.0",
"pychromecast>=14.0.0",
"pydantic>=2.0.0",
"pynput>=1.7.0",
"zeroconf>=0.131.0",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/pigeek/androidtvmcp",
"Repository, https://github.com/pigeek/androidtvmcp",
"Issues, https://github.com/pigeek/androidtvmcp/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-21T01:40:51.012140 | androidtvmcp-0.3.0.tar.gz | 34,312 | 8c/18/aa5c268649939352a813ba5579ebb1401bc165041c6cd5c771d56d660b41/androidtvmcp-0.3.0.tar.gz | source | sdist | null | false | 291062368ee1ec6328b1921dba87edd5 | adce019217e0af13b01bfc04e1f0d380cae471c24d169d26e3b94031cf43f08f | 8c18aa5c268649939352a813ba5579ebb1401bc165041c6cd5c771d56d660b41 | null | [
"LICENSE"
] | 201 |
2.4 | ragscore | 0.7.4 | The Fastest RAG Audit - Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or CLI. Privacy-first, lightning fast, visual reports. | <div align="center">
<img src="RAGScore.png" alt="RAGScore Logo" width="400"/>
[](https://pypi.org/project/ragscore/)
[](https://pepy.tech/projects/ragscore)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://ollama.ai)
[](https://colab.research.google.com/github/HZYAI/RagScore/blob/main/examples/detailed_evaluation_demo.ipynb)
[](https://modelcontextprotocol.io)
<!-- mcp-name: io.github.HZYAI/ragscore -->
**Generate QA datasets & evaluate RAG systems in 2 commands**
🔒 Privacy-First • ⚡ Lightning Fast • 🤖 Any LLM • 🏠 Local or Cloud • 🌍 Multilingual
[English](README.md) | [中文](README_CN.md) | [日本語](README_JP.md)
</div>
---
## ⚡ 2-Line RAG Evaluation
```bash
# Step 1: Generate QA pairs from your docs
ragscore generate docs/
# Step 2: Evaluate your RAG system
ragscore evaluate http://localhost:8000/query
```
**That's it.** Get accuracy scores and incorrect QA pairs instantly.
```
============================================================
✅ EXCELLENT: 85/100 correct (85.0%)
Average Score: 4.20/5.0
============================================================
❌ 15 Incorrect Pairs:
1. Q: "What is RAG?"
Score: 2/5 - Factually incorrect
2. Q: "How does retrieval work?"
Score: 3/5 - Incomplete answer
```
---
## 🚀 Quick Start
### Install
```bash
pip install ragscore # Core (works with Ollama)
pip install "ragscore[openai]" # + OpenAI support
pip install "ragscore[notebook]" # + Jupyter/Colab support
pip install "ragscore[all]" # + All providers
```
### Option 1: Python API (Notebook-Friendly)
Perfect for **Jupyter, Colab, and rapid iteration**. Get instant visualizations.
```python
from ragscore import quick_test
# 1. Audit your RAG in one line
result = quick_test(
endpoint="http://localhost:8000/query", # Your RAG API
docs="docs/", # Your documents
n=10, # Number of test questions
)
# 2. See the report
result.plot()
# 3. Inspect failures
bad_rows = result.df[result.df['score'] < 3]
display(bad_rows[['question', 'rag_answer', 'reason']])
```
**Rich Object API:**
- `result.accuracy` - Accuracy score
- `result.df` - Pandas DataFrame of all results
- `result.plot()` - 3-panel visualization (4-panel with `detailed=True`)
- `result.corrections` - List of items to fix
### Option 2: CLI (Production)
### Generate QA Pairs
```bash
# Set API key (or use local Ollama - no key needed!)
export OPENAI_API_KEY="sk-..."
# Generate from any document
ragscore generate paper.pdf
ragscore generate docs/*.pdf --concurrency 10
```
### Evaluate Your RAG
```bash
# Point to your RAG endpoint
ragscore evaluate http://localhost:8000/query
# Custom options
ragscore evaluate http://api/ask --model gpt-4o --output results.json
```
---
## 🔬 Detailed Multi-Metric Evaluation
Go beyond a single score. Add `detailed=True` to get **5 diagnostic dimensions** per answer — in the same single LLM call.
```python
result = quick_test(
endpoint=my_rag,
docs="docs/",
n=10,
detailed=True, # ⭐ Enable multi-metric evaluation
)
# Inspect per-question metrics
display(result.df[[
"question", "score", "correctness", "completeness",
"relevance", "conciseness", "faithfulness"
]])
# Radar chart + 4-panel visualization
result.plot()
```
```
==================================================
✅ PASSED: 9/10 correct (90%)
Average Score: 4.3/5.0
Threshold: 70%
──────────────────────────────────────────────────
Correctness: 4.5/5.0
Completeness: 4.2/5.0
Relevance: 4.8/5.0
Conciseness: 4.1/5.0
Faithfulness: 4.6/5.0
==================================================
```
| Metric | What it measures | Scale |
|--------|------------------|-------|
| **Correctness** | Semantic match to golden answer | 5 = fully correct |
| **Completeness** | Covers all key points | 5 = fully covered |
| **Relevance** | Addresses the question asked | 5 = perfectly on-topic |
| **Conciseness** | Focused, no filler | 5 = concise and precise |
| **Faithfulness** | No fabricated claims | 5 = fully faithful |
**CLI:**
```bash
ragscore evaluate http://localhost:8000/query --detailed
```
> 📓 [Full demo notebook](examples/detailed_evaluation_demo.ipynb) — build a mini RAG and test it with detailed metrics.
---
## 🏠 100% Private with Local LLMs
```bash
# Use Ollama - no API keys, no cloud, 100% private
ollama pull llama3.1
ragscore generate confidential_docs/*.pdf
ragscore evaluate http://localhost:8000/query
```
**Perfect for:** Healthcare 🏥 • Legal ⚖️ • Finance 🏦 • Research 🔬
### Ollama Model Recommendations
RAGScore generates complex structured QA pairs (question + answer + rationale + support span) in JSON format. This requires models with strong instruction-following and JSON output capabilities.
| Model | Size | Min RAM | QA Quality | Recommended |
|-------|------|---------|------------|-------------|
| `llama3.1:70b` | 40GB | 48GB VRAM | Excellent | GPU server (A100, L40) |
| `qwen2.5:32b` | 18GB | 24GB VRAM | Excellent | GPU server (A10, L20) |
| `llama3.1:8b` | 4.7GB | 8GB VRAM | Good | **Best local choice** |
| `qwen2.5:7b` | 4.4GB | 8GB VRAM | Good | Good local alternative |
| `mistral:7b` | 4.1GB | 8GB VRAM | Good | Good local alternative |
| `llama3.2:3b` | 2.0GB | 4GB RAM | Fair | CPU-only / testing |
| `qwen2.5:1.5b` | 1.0GB | 2GB RAM | Poor | Not recommended |
> **Minimum recommended: 8B+ models.** Smaller models (1.5B–3B) produce lower quality support spans and may timeout on longer chunks.
### Ollama Performance Guide
```bash
# Recommended: 8B model with concurrency 2 for local machines
ollama pull llama3.1:8b
ragscore generate docs/ --provider ollama --model llama3.1:8b
# GPU server (A10/L20): larger model with higher concurrency
ollama pull qwen2.5:32b
ragscore generate docs/ --provider ollama --model qwen2.5:32b --concurrency 5
```
**Expected performance (28 chunks, 5 QA pairs per chunk):**
| Hardware | Model | Time | Concurrency |
|----------|-------|------|-------------|
| MacBook (CPU) | llama3.2:3b | ~45 min | 2 |
| MacBook (CPU) | llama3.1:8b | ~25 min | 2 |
| A10 (24GB) | llama3.1:8b | ~3–5 min | 5 |
| L20/L40 (48GB) | qwen2.5:32b | ~3–5 min | 5 |
| OpenAI API | gpt-4o-mini | ~2 min | 10 |
> RAGScore auto-reduces concurrency to 2 for local Ollama to avoid GPU/CPU contention.
---
## 🔌 Supported LLMs
| Provider | Setup | Notes |
|----------|-------|-------|
| **Ollama** | `ollama serve` | Local, free, private |
| **OpenAI** | `export OPENAI_API_KEY="sk-..."` | Best quality |
| **Anthropic** | `export ANTHROPIC_API_KEY="..."` | Long context |
| **DashScope** | `export DASHSCOPE_API_KEY="..."` | Qwen models |
| **vLLM** | `export LLM_BASE_URL="..."` | Production-grade |
| **Any OpenAI-compatible** | `export LLM_BASE_URL="..."` | Groq, Together, etc. |
---
## 📊 Output Formats
### Generated QA Pairs (`output/generated_qas.jsonl`)
```json
{
"id": "abc123",
"question": "What is RAG?",
"answer": "RAG (Retrieval-Augmented Generation) combines...",
"rationale": "This is explicitly stated in the introduction...",
"support_span": "RAG systems retrieve relevant documents...",
"difficulty": "medium",
"source_path": "docs/rag_intro.pdf"
}
```
### Evaluation Results (`--output results.json`)
```json
{
"summary": {
"total": 100,
"correct": 85,
"incorrect": 15,
"accuracy": 0.85,
"avg_score": 4.2
},
"incorrect_pairs": [
{
"question": "What is RAG?",
"golden_answer": "RAG combines retrieval with generation...",
"rag_answer": "RAG is a database system.",
"score": 2,
"reason": "Factually incorrect - RAG is not a database"
}
]
}
```
---
## 🧪 Python API
```python
from ragscore import run_pipeline, run_evaluation
# Generate QA pairs
run_pipeline(paths=["docs/"], concurrency=10)
# Evaluate RAG
results = run_evaluation(
endpoint="http://localhost:8000/query",
model="gpt-4o", # LLM for judging
)
print(f"Accuracy: {results.accuracy:.1%}")
```
---
## 🤖 AI Agent Integration
RAGScore is designed for AI agents and automation:
```bash
# Structured CLI with predictable output
ragscore generate docs/ --concurrency 5
ragscore evaluate http://api/query --output results.json
# Exit codes: 0 = success, 1 = error
# JSON output for programmatic parsing
```
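For example, an agent can consume the documented `results.json` shape directly; the JSON embedded here mirrors the schema shown in "Output Formats" above:

```python
# Parse the evaluation summary documented in "Output Formats" above.
import json

raw = """
{
  "summary": {"total": 100, "correct": 85, "incorrect": 15,
              "accuracy": 0.85, "avg_score": 4.2},
  "incorrect_pairs": []
}
"""
results = json.loads(raw)
summary = results["summary"]
assert summary["correct"] + summary["incorrect"] == summary["total"]
print(f"Accuracy: {summary['accuracy']:.1%}")  # Accuracy: 85.0%
```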
**CLI Reference:**
| Command | Description |
|---------|-------------|
| `ragscore generate <paths>` | Generate QA pairs from documents |
| `ragscore evaluate <endpoint>` | Evaluate RAG against golden QAs |
| `ragscore evaluate <endpoint> --detailed` | Multi-metric evaluation |
| `ragscore --help` | Show all commands and options |
| `ragscore generate --help` | Show generate options |
| `ragscore evaluate --help` | Show evaluate options |
---
## ⚙️ Configuration
Zero config required. Optional environment variables:
```bash
export RAGSCORE_CHUNK_SIZE=512 # Chunk size for documents
export RAGSCORE_QUESTIONS_PER_CHUNK=5 # QAs per chunk
export RAGSCORE_WORK_DIR=/path/to/dir # Working directory
```
---
## 🔐 Privacy & Security
| Data | Cloud LLM | Local LLM |
|------|-----------|-----------|
| Documents | ✅ Local | ✅ Local |
| Text chunks | ⚠️ Sent to LLM | ✅ Local |
| Generated QAs | ✅ Local | ✅ Local |
| Evaluation results | ✅ Local | ✅ Local |
**Compliance:** GDPR ✅ • HIPAA ✅ (with local LLMs) • SOC 2 ✅
---
## 🧪 Development
```bash
git clone https://github.com/HZYAI/RagScore.git
cd RagScore
pip install -e ".[dev,all]"
pytest
```
---
## 🔗 Links
- [GitHub](https://github.com/HZYAI/RagScore) • [PyPI](https://pypi.org/project/ragscore/) • [Issues](https://github.com/HZYAI/RagScore/issues) • [Discussions](https://github.com/HZYAI/RagScore/discussions)
---
<p align="center">
<b>⭐ Star us on GitHub if RAGScore helps you!</b><br>
Made with ❤️ for the RAG community
</p>
| text/markdown | null | RAGScore Team <team@ragscore.io> | null | null | Apache-2.0 | rag, rag-evaluation, qa-generation, llm, llm-as-judge, local-llm, ollama, jupyter, colab, notebook, visualization, mcp, llmops, async, privacy, synthetic-data, evaluation, ai-evaluation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Text Processing :: Linguistic"
] | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"pypdf2>=3.0.1",
"nltk>=3.8.1",
"tqdm>=4.66.1",
"typer[all]>=0.16.0",
"python-dotenv>=1.0.0",
"aiohttp>=3.9.0",
"requests>=2.28.0",
"openai>=1.0.0; extra == \"openai\"",
"anthropic>=0.18.0; extra == \"anthropic\"",
"dashscope>=1.14.1; extra == \"dashscope\"",
"ragscore[anthropic,dashscope,openai]; extra == \"providers\"",
"nest_asyncio>=1.5.0; extra == \"notebook\"",
"pandas>=2.0.0; extra == \"notebook\"",
"mcp>=1.0.0; extra == \"mcp\"",
"ragscore[notebook,providers]; extra == \"all\"",
"pytest>=7.4.3; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"ruff>=0.1.6; extra == \"dev\"",
"black>=23.11.0; extra == \"dev\"",
"mypy>=1.7.0; extra == \"dev\"",
"types-requests>=2.31.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"requests>=2.31.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/HZYAI/RagScore",
"Documentation, https://github.com/HZYAI/RagScore#readme",
"Repository, https://github.com/HZYAI/RagScore",
"Changelog, https://github.com/HZYAI/RagScore/blob/main/CHANGELOG.md",
"Bug Tracker, https://github.com/HZYAI/RagScore/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:39:34.606789 | ragscore-0.7.4.tar.gz | 58,563 | 2c/75/d6286d29b66c6d18f5d4fca46589a2063f559ecde3e4d7fc607def602b2b/ragscore-0.7.4.tar.gz | source | sdist | null | false | f249bdd0276cdf912914d8137719502f | 3ce483c40adf53ce45508925b541fbfc45abe64c1f830311e6feb471752c7a48 | 2c75d6286d29b66c6d18f5d4fca46589a2063f559ecde3e4d7fc607def602b2b | null | [
"LICENSE"
] | 199 |
2.4 | hetman-pipeline | 2.2.5 | Hetman Pipeline is a flexible, developer-centric validation engine. It is built for those who prioritize deep customization and want to manage validation, matching, and transformation logic in one centralized location. | <img src="https://raw.githubusercontent.com/hetman-app/hetman-pipeline/main/docs/assets/full-white-text.webp" alt="Hetman Logo" width="200" height="38" />
---
**Hetman Pipeline** is a flexible, developer-centric validation engine. It is built for those who prioritize deep customization and want to manage validation, matching, and transformation logic in one centralized location.
## Installation
```bash
pip install hetman-pipeline
```
## Documentation Setup
```bash
pip install hetman-pipeline[docs]
```
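Note: in shells such as zsh, square brackets are treated as glob characters, so the extras specifier should be quoted (the package and extra names are taken from the command above):

```bash
# Quote the requirement so zsh does not attempt glob expansion on [docs]
pip install "hetman-pipeline[docs]"
```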
---
[Please check the documentation page for more details.](https://pipeline.hetman.app)
| text/markdown | null | Hetman <degadupeko@my-relay.app> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"falcon>=4.0.0; extra == \"falcon\"",
"mkdocs>=1.6; extra == \"docs\"",
"mkdocstrings[python]>=0.18.1; extra == \"docs\"",
"mkdocs-material>=9.7.1; extra == \"docs\"",
"hetman-pipeline-plugin-i18n; extra == \"i18n\""
] | [] | [] | [] | [
"Homepage, https://pipeline.hetman.app",
"Documentation, https://pipeline.hetman.app",
"Repository, https://github.com/hetman-app/hetman-pipeline",
"Issues, https://github.com/hetman-app/hetman-pipeline/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:39:30.441828 | hetman_pipeline-2.2.5.tar.gz | 30,172 | 5f/17/0c6d39a89eeb5d398452066c9375388c8fbaecfefa7a51ae8075dfc29832/hetman_pipeline-2.2.5.tar.gz | source | sdist | null | false | 55cf044563ff0bcc54cf606c35c6cf9b | ac861019d6646c1034c52e06e3e5a7dbd612f7de05a4569f0db819e406ce8ddb | 5f170c6d39a89eeb5d398452066c9375388c8fbaecfefa7a51ae8075dfc29832 | null | [
"LICENSE"
] | 215 |
2.4 | schubmult | 4.0.0 | Package for algebraic computation with single, double, and quantum Schubert polynomials | # schubmult
## Program and package for rapid computation of Littlewood-Richardson coefficients of Schubert polynomials, compliant with sympy/symengine (and hence indirectly Sage)
The main purpose of this Python package is to perform Schubert calculus-related computations in Python (and/or Sage).
- Kinds of things covered (not exhaustive):
- Permutation library
- Fast multiplication of single, double, mixed-variable Schubert polynomials; quantum, quantum double, quantum mixed-variable, and all parabolic versions.
- Noncommutative algebras such as NSym and the free algebra on words of nonnegative integers augmented with combinatorial bases.
- RC graphs/PDs, BPDs, HPDs, SSYT, EG tableaux, and algebraic structures derived from them (Coxeter-Knuth insertion, RSK, RC graph transition formulas, tableaux decompositions). Kashiwara/Demazure crystal raising/lowering operators
- Compatible with sympy and symengine (and hence Sage); integrating with other symbolic libraries should not be terribly difficult.
[Docs to be hosted on Wiki](https://github.com/matthematics/schubmult/wiki/schubmult-home)
## To install dev version
```bash
pip install git+https://github.com/matthematics/schubmult.git
```
## RCGraph and BPD Functionality
The package implements two main combinatorial models for Schubert calculus:
- **RCGraph (Reduced Compatible Graphs):** Encodes reduced words for permutations as graphs, supporting crystal operations and algebraic manipulations.
- **BPD (Bumpless Pipe Dreams):** Represents tilings of an $n \times n$ grid with local tile rules, providing an alternative model for Schubert polynomials.
- **HPD (Hybrid Pipe Dreams):** Present and functional in the library, though not nearly as well developed at this time.
### Bijections and Conversions
There is a canonical bijection (Gao and Huang, 2017) between RCGraphs and BPDs for a given permutation and grid size:
- `BPD.from_rc_graph(rc_graph)`: Converts an RCGraph to a BPD using the inversion data.
- `BPD.to_rc_graph()`: Converts a BPD back to its corresponding RCGraph by extracting the reduced compatible sequence from the tile configuration.
These conversions are invertible up to normalization and grid size.
### Operations
RCGraph and BPD objects support:
- Enumeration for a given permutation and grid size
- Crystal operators (raising/lowering) and combinatorial mutations (currently available for BPDs only via the bijection)
- Conversion to algebraic elements in the Schubert and nilHecke rings
- Visualization and pretty-printing
- RCGraphs have a product through RCGraphRing (akin to concatenation, not the polynomial product)
### Example Usage
```python
from schubmult import RCGraph, BPD, Permutation
rc = RCGraph.random_rc_graph(Permutation([5,1,6,2,4,3]), 5)
bpd = BPD.from_rc_graph(rc)
rc2 = bpd.to_rc_graph()
assert rc2 == rc
```
| text/markdown | null | Matt Samuel <schubmult@gmail.com> | null | Matt Samuel <schubmult@gmail.com> | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| Schubert, polynomial, double, algebra | [
"Programming Language :: Python"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"joblib",
"latex2sympy2_extended",
"PuLP>=2.7.0",
"symengine>=0.14.1",
"sympy>=1.14.0",
"psutil",
"cachetools",
"setuptools",
"matplotlib>=3.0; extra == \"visualization\""
] | [] | [] | [] | [
"Homepage, http://schubmult.org",
"Repository, https://github.com/matthematics/schubmult"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T01:39:01.746371 | schubmult-4.0.0.tar.gz | 2,403,977 | 0d/db/65a4eaa5cc0534bcce47618718aebac04fb5dd31a8656f1e6b08be6e40e2/schubmult-4.0.0.tar.gz | source | sdist | null | false | a091a83ae89842fb4f5ae560df4a6b51 | 39303c4ebf304e620d7210bd99bf51d0e10343f1c1b13e812fbffce0da4cbb3f | 0ddb65a4eaa5cc0534bcce47618718aebac04fb5dd31a8656f1e6b08be6e40e2 | null | [
"LICENSE"
] | 204 |
2.4 | voidly-probe | 1.0.6 | Voidly Community Probe — Help measure internet censorship worldwide | # Voidly Community Probe
Help measure internet censorship worldwide. Run a lightweight probe node from anywhere.
## What it does
Tests connectivity to 62 websites (social media, news, messaging, privacy tools, human rights organizations) every 15 minutes from your network. Detects DNS blocking, TCP resets, TLS/SNI filtering, HTTP redirects, and identifies blocking entities (government firewalls, ISP filters).
Results feed into [Voidly's censorship intelligence network](https://voidly.ai) — the world's largest real-time censorship dataset.
## Install
```bash
pip install voidly-probe
```
Or with Docker:
```bash
docker run -d --name voidly-probe -v voidly-data:/data/.voidly emperormew2/voidly-probe
```
## Quick start
```bash
# First run — register and start probing
voidly-probe --consent
# Single test cycle
voidly-probe --once
# Check your node's status
voidly-probe --status
# Run in background (Linux/Mac)
nohup voidly-probe --consent &
```
## Claim your node
After your node is running, link your identity to appear on the [leaderboard](https://voidly.ai/probes) and be eligible for prizes:
1. Find your Node ID and Token in `~/.voidly/node.json`
2. Visit [voidly.ai/probes/claim](https://voidly.ai/probes/claim)
3. Enter your Node ID, Token, and Twitter/X handle
4. Your name now appears on the leaderboard instead of `cp-xxxxxxxx`
**Important:** Back up `~/.voidly/node.json` — your token is shown once during registration and cannot be recovered. If you lose it, you'll need to re-register as a new node.
## What we collect
- Domain, blocked/accessible status, latency, blocking method
- Your approximate location (country, city) — detected once during registration
- SHA256 hash of your IP (for deduplication, not stored raw)
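The IP-hash deduplication above can be sketched in a few lines. This is an illustration of the general technique, not the probe's actual code — the exact input format the probe hashes (salting, normalization) is not specified here.

```python
import hashlib

# Sketch (assumption): deduplicate nodes by a SHA256 digest of the IP,
# so the raw address never needs to be stored or transmitted.
ip = "203.0.113.7"  # example address from the TEST-NET-3 range
digest = hashlib.sha256(ip.encode()).hexdigest()

# The digest is stable for the same IP, so repeated reports from one
# network collapse to one node, while the raw IP stays unrecoverable.
print(digest[:16], "…")
```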
## What we don't collect
- No browsing data
- No passwords or personal information
- No traffic inspection beyond the 62 test domains
- Your raw IP address is never stored
## Privacy
- Data is used for censorship research under CC BY 4.0
- You can stop the probe at any time with Ctrl+C
- Config stored at `~/.voidly/node.json` — delete to unregister
- Learn more: https://voidly.ai/probes
## Requirements
- Python 3.8+
- No external dependencies (stdlib only)
- No root/admin required
- No VPN tunnel
## Configuration
Environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `VOIDLY_PROBE_INTERVAL` | `900` | Seconds between probe cycles |
| `VOIDLY_PROBE_TIMEOUT` | `10` | Timeout per request (seconds) |
| `VOIDLY_BATCH_SIZE` | `20` | Domains per cycle |
| `VOIDLY_CONFIG_DIR` | `~/.voidly` | Config directory |
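A minimal sketch of how a probe might read these tunables, applying the documented defaults when a variable is unset. This is illustrative only — `probe_settings` is a hypothetical helper, not part of the package's API.

```python
import os

def probe_settings() -> dict:
    """Read probe tunables from the environment, falling back to the
    documented defaults (900 s interval, 10 s timeout, 20-domain batches,
    config in ~/.voidly)."""
    return {
        "interval": int(os.environ.get("VOIDLY_PROBE_INTERVAL", "900")),
        "timeout": int(os.environ.get("VOIDLY_PROBE_TIMEOUT", "10")),
        "batch_size": int(os.environ.get("VOIDLY_BATCH_SIZE", "20")),
        "config_dir": os.environ.get(
            "VOIDLY_CONFIG_DIR", os.path.expanduser("~/.voidly")
        ),
    }

# Override one setting; the rest keep their defaults.
os.environ["VOIDLY_PROBE_INTERVAL"] = "300"
settings = probe_settings()
```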
## Docker
```bash
# Run in background with persistent config
docker run -d --name voidly-probe \
-v voidly-data:/data/.voidly \
emperormew2/voidly-probe
# View logs
docker logs -f voidly-probe
# Check node status (find Node ID in logs)
docker exec voidly-probe voidly-probe --status
# Stop
docker stop voidly-probe
```
## License
MIT — https://voidly.ai
| text/markdown | Voidly | team@voidly.ai | null | null | MIT | censorship internet-freedom probe network-measurement ooni | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet",
"Topic :: Security",
"Topic :: Scientific/Engineering"
] | [] | https://github.com/voidly-ai/community-probe | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://voidly.ai/probes",
"Documentation, https://voidly.ai/api-docs",
"Source, https://github.com/voidly-ai/community-probe",
"Bug Tracker, https://github.com/voidly-ai/community-probe/issues"
] | twine/6.2.0 CPython/3.10.1 | 2026-02-21T01:38:10.400191 | voidly_probe-1.0.6.tar.gz | 13,817 | a7/27/5526573b10192c28edc624abe8f507a401b74f9a7dfc2faf3ef59d7974a7/voidly_probe-1.0.6.tar.gz | source | sdist | null | false | b2f1340d51ab1c4f39c508e68b9dc4c6 | bfea6e53aed93a39a4abbcf163520e5c4449db5bb7d0a68e738e20ec5cb0284e | a7275526573b10192c28edc624abe8f507a401b74f9a7dfc2faf3ef59d7974a7 | null | [
"LICENSE"
] | 197 |
2.4 | kumoai | 2.18.0.dev202602201733 | AI on the Modern Data Stack | <p align="center">
<img height="180" src="https://kumo-sdk-public.s3.us-west-2.amazonaws.com/kumo-logo-pink.svg" />
</p>
______________________________________________________________________
The Kumo SDK implements a pythonic interface for users to programmatically
interact with the Kumo machine learning platform
([documentation](https://kumo-ai.github.io/kumo-sdk/docs/#)).
## Installation
The Kumo SDK is available for Python 3.10 to Python 3.13. To install, simply run
```
pip install kumoai
```
| text/markdown | null | "Kumo.AI" <hello@kumo.ai> | null | null | null | deep-learning, graph-neural-networks, cloud-data-warehouse | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas",
"pyarrow<19.0.0,>=8.0.0",
"requests>=2.28.2",
"urllib3",
"plotly",
"typing_extensions>=4.5.0",
"kumo-api<1.0.0,>=0.61.0",
"tqdm>=4.66.0",
"aiohttp>=3.10.0",
"pydantic>=1.10.21",
"rich>=9.0.0",
"sphinx; extra == \"doc\"",
"sphinx-book-theme; extra == \"doc\"",
"sphinx-copybutton; extra == \"doc\"",
"sphinx-autodoc-typehints; extra == \"doc\"",
"graphviz; extra == \"doc\"",
"mermaid-py; extra == \"doc\"",
"pytest; extra == \"test\"",
"pytest-mock; extra == \"test\"",
"requests-mock; extra == \"test\"",
"adbc_driver_sqlite; extra == \"sqlite\"",
"numpy<2.0; extra == \"snowflake\"",
"snowflake-connector-python; extra == \"snowflake\"",
"pyyaml; extra == \"snowflake\"",
"mermaid-py; extra == \"snowflake\"",
"boto3<2.0,>=1.30.0; extra == \"sagemaker\"",
"sagemaker<3.0; extra == \"test-sagemaker\""
] | [] | [] | [] | [
"homepage, https://kumo.ai",
"documentation, https://kumo.ai/docs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:35:50.596510 | kumoai-2.18.0.dev202602201733-py3-none-any.whl | 251,641 | 13/a2/3aba0348797ea1dd6cbcff24a463fc188fe87ebc6a0c088e81f2eb7c12df/kumoai-2.18.0.dev202602201733-py3-none-any.whl | py3 | bdist_wheel | null | false | cc98f1081d8706e509bbb26b15a99965 | df7ef17554f2383207022daeb35fb13edc88bf529b451523cc2f2db0ed5dd4fd | 13a23aba0348797ea1dd6cbcff24a463fc188fe87ebc6a0c088e81f2eb7c12df | MIT | [
"LICENSE"
] | 593 |
2.1 | odoo14-addon-ssi-financial-accounting | 14.0.7.2.0 | Financial Accounting | .. image:: https://img.shields.io/badge/licence-AGPL--3-blue.svg
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
====================
Financial Accounting
====================
Installation
============
To install this module, you need to:
1. Clone the branch 14.0 of the repository https://github.com/open-synergy/ssi-financial-accounting
2. Add the path to this repository in your configuration (addons-path)
3. Update the module list (Must be on developer mode)
4. Go to menu *Apps -> Apps -> Main Apps*
5. Search For *Financial Accounting*
6. Install the module
Bug Tracker
===========
Bugs are tracked on `GitHub Issues
<https://github.com/open-synergy/ssi-financial-accounting/issues>`_. In case of trouble, please
check there if your issue has already been reported. If you spotted it first,
help us smash it by providing detailed and welcomed feedback.
Credits
=======
Contributors
------------
* Andhitia Rama <andhitia.r@gmail.com>
* Miftahussalam <miftahussalam08@gmail.com>
Maintainer
----------
.. image:: https://simetri-sinergi.id/logo.png
:alt: PT. Simetri Sinergi Indonesia
:target: https://simetri-sinergi.id.com
This module is maintained by PT. Simetri Sinergi Indonesia.
| null | OpenSynergy Indonesia, PT. Simetri Sinergi Indonesia | null | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 14.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://simetri-sinergi.id | null | >=3.6 | [] | [] | [] | [
"odoo14-addon-account-financial-report",
"odoo14-addon-account-invoice-force-number",
"odoo14-addon-account-journal-lock-date",
"odoo14-addon-account-move-force-removal",
"odoo14-addon-account-reconciliation-widget",
"odoo14-addon-account-statement-import-txt-xlsx",
"odoo14-addon-configuration-helper",
"odoo14-addon-currency-rate-inverted",
"odoo14-addon-mis-builder",
"odoo14-addon-mis-template-financial-report",
"odoo14-addon-ssi-account-create-liquidity-journal",
"odoo14-addon-ssi-master-data-mixin",
"odoo14-addon-ssi-policy-mixin",
"odoo14-addon-ssi-print-mixin",
"odoo14-addon-ssi-sequence-mixin",
"odoo<14.1dev,>=14.0a"
] | [] | [] | [] | [] | twine/5.1.1 CPython/3.12.3 | 2026-02-21T01:35:39.112798 | odoo14_addon_ssi_financial_accounting-14.0.7.2.0-py3-none-any.whl | 75,140 | 21/42/5e28ed8fe8ae99e6ce1cb83c880055ffaa4d5a78f36114142755f81d1430/odoo14_addon_ssi_financial_accounting-14.0.7.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 0cd837eac32d7d4b0f9138e588fdfd4d | 07f60ae34f4edebc73b7f7e860bbc23ae1f60eb78de213dd29570c6ab22f2c88 | 21425e28ed8fe8ae99e6ce1cb83c880055ffaa4d5a78f36114142755f81d1430 | null | [] | 78 |
2.4 | fastmcp | 3.0.1 | The fast, Pythonic way to build MCP servers and clients. | <div align="center">
<!-- omit in toc -->
<picture>
<source width="550" media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/PrefectHQ/fastmcp/main/docs/assets/brand/f-watercolor-waves-4-dark.png">
<source width="550" media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/PrefectHQ/fastmcp/main/docs/assets/brand/f-watercolor-waves-4.png">
<img width="550" alt="FastMCP Logo" src="https://raw.githubusercontent.com/PrefectHQ/fastmcp/main/docs/assets/brand/f-watercolor-waves-2.png">
</picture>
# FastMCP 🚀
<strong>Move fast and make things.</strong>
*Made with 💙 by [Prefect](https://www.prefect.io/)*
[](https://gofastmcp.com)
[](https://discord.gg/uu8dJCgttd)
[](https://pypi.org/project/fastmcp)
[](https://github.com/PrefectHQ/fastmcp/actions/workflows/run-tests.yml)
[](https://github.com/PrefectHQ/fastmcp/blob/main/LICENSE)
<a href="https://trendshift.io/repositories/13266" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13266" alt="prefecthq%2Ffastmcp | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
---
The [Model Context Protocol](https://modelcontextprotocol.io/) (MCP) connects LLMs to tools and data. FastMCP gives you everything you need to go from prototype to production:
```python
from fastmcp import FastMCP
mcp = FastMCP("Demo 🚀")
@mcp.tool
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
if __name__ == "__main__":
mcp.run()
```
## Why FastMCP
Building an effective MCP application is harder than it looks. FastMCP handles all of it. Declare a tool with a Python function, and the schema, validation, and documentation are generated automatically. Connect to a server with a URL, and transport negotiation, authentication, and protocol lifecycle are managed for you. You focus on your logic, and the MCP part just works: **with FastMCP, best practices are built in.**
**That's why FastMCP is the standard framework for working with MCP.** FastMCP 1.0 was incorporated into the official MCP Python SDK in 2024. Today, the actively maintained standalone project is downloaded a million times a day, and some version of FastMCP powers 70% of MCP servers across all languages.
FastMCP has three pillars:
<table>
<tr>
<td align="center" valign="top" width="33%">
<a href="https://gofastmcp.com/servers/server">
<img src="https://raw.githubusercontent.com/PrefectHQ/fastmcp/main/docs/assets/images/servers-card.png" alt="Servers" />
<br /><strong>Servers</strong>
</a>
<br />Expose tools, resources, and prompts to LLMs.
</td>
<td align="center" valign="top" width="33%">
<a href="https://gofastmcp.com/apps/overview">
<img src="https://raw.githubusercontent.com/PrefectHQ/fastmcp/main/docs/assets/images/apps-card.png" alt="Apps" />
<br /><strong>Apps</strong>
</a>
<br />Give your tools interactive UIs rendered directly in the conversation.
</td>
<td align="center" valign="top" width="33%">
<a href="https://gofastmcp.com/clients/client">
<img src="https://raw.githubusercontent.com/PrefectHQ/fastmcp/main/docs/assets/images/clients-card.png" alt="Clients" />
<br /><strong>Clients</strong>
</a>
<br />Connect to any MCP server — local or remote, programmatic or CLI.
</td>
</tr>
</table>
**[Servers](https://gofastmcp.com/servers/server)** wrap your Python functions into MCP-compliant tools, resources, and prompts. **[Clients](https://gofastmcp.com/clients/client)** connect to any server with full protocol support. And **[Apps](https://gofastmcp.com/apps/overview)** give your tools interactive UIs rendered directly in the conversation.
Ready to build? Start with the [installation guide](https://gofastmcp.com/getting-started/installation) or jump straight to the [quickstart](https://gofastmcp.com/getting-started/quickstart). When you're ready to deploy, [Prefect Horizon](https://www.prefect.io/horizon) offers free hosting for FastMCP users.
## Installation
We recommend installing FastMCP with [uv](https://docs.astral.sh/uv/):
```bash
uv pip install fastmcp
```
For full installation instructions, including verification and upgrading, see the [**Installation Guide**](https://gofastmcp.com/getting-started/installation).
**Upgrading?** We have guides for:
- [Upgrading from FastMCP v2](https://gofastmcp.com/getting-started/upgrading/from-fastmcp-2)
- [Upgrading from the MCP Python SDK](https://gofastmcp.com/getting-started/upgrading/from-mcp-sdk)
- [Upgrading from the low-level SDK](https://gofastmcp.com/getting-started/upgrading/from-low-level-sdk)
## 📚 Documentation
FastMCP's complete documentation is available at **[gofastmcp.com](https://gofastmcp.com)**, including detailed guides, API references, and advanced patterns.
Documentation is also available in [llms.txt format](https://llmstxt.org/), which is a simple markdown standard that LLMs can consume easily:
- [`llms.txt`](https://gofastmcp.com/llms.txt) is essentially a sitemap, listing all the pages in the documentation.
- [`llms-full.txt`](https://gofastmcp.com/llms-full.txt) contains the entire documentation. Note this may exceed the context window of your LLM.
**Community:** Join our [Discord server](https://discord.gg/uu8dJCgttd) to connect with other FastMCP developers and share what you're building.
## Contributing
We welcome contributions! See the [Contributing Guide](https://gofastmcp.com/development/contributing) for setup instructions, testing requirements, and PR guidelines.
| text/markdown | Jeremiah Lowin | null | null | null | null | agent, fastmcp, llm, mcp, mcp client, mcp server, model context protocol | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"authlib>=1.6.5",
"cyclopts>=4.0.0",
"exceptiongroup>=1.2.2",
"httpx<1.0,>=0.28.1",
"jsonref>=1.1.0",
"jsonschema-path>=0.3.4",
"mcp<2.0,>=1.24.0",
"openapi-pydantic>=0.5.1",
"opentelemetry-api>=1.20.0",
"packaging>=24.0",
"platformdirs>=4.0.0",
"py-key-value-aio[filetree,keyring,memory]<0.5.0,>=0.4.4",
"pydantic[email]>=2.11.7",
"pyperclip>=1.9.0",
"python-dotenv>=1.1.0",
"pyyaml<7.0,>=6.0",
"rich>=13.9.4",
"uvicorn>=0.35",
"watchfiles>=1.0.0",
"websockets>=15.0.1",
"anthropic>=0.40.0; extra == \"anthropic\"",
"azure-identity>=1.16.0; extra == \"azure\"",
"openai>=1.102.0; extra == \"openai\"",
"pydocket>=0.17.2; extra == \"tasks\""
] | [] | [] | [] | [
"Homepage, https://gofastmcp.com",
"Repository, https://github.com/PrefectHQ/fastmcp",
"Documentation, https://gofastmcp.com"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T01:35:25.696171 | fastmcp-3.0.1.tar.gz | 17,236,395 | f3/39/0847a868a8681f0d9bf42ad4b6856ef675f799eb464bd10dbcfe9ae87323/fastmcp-3.0.1.tar.gz | source | sdist | null | false | f90c0ea0f136858c7bcb764a50068dcd | ba463ae51e357fba2bafe513cc97f0a06c9f31220e6584990b7d8bcbf69f0516 | f3390847a868a8681f0d9bf42ad4b6856ef675f799eb464bd10dbcfe9ae87323 | Apache-2.0 | [
"LICENSE"
] | 373,048 |
2.4 | pyNMMS | 0.1.1 | Non-Monotonic Multi-Succedent sequent calculus — propositional NMMS from Hlobil & Brandom 2025 | # pyNMMS
[](https://pypi.org/project/pyNMMS/)
[](https://pypi.org/project/pyNMMS/)
[](https://github.com/bradleypallen/nmms-reasoner/blob/main/LICENSE)
[](https://bradleypallen.github.io/nmms-reasoner/)
Non-Monotonic Multi-Succedent sequent calculus — propositional NMMS from Hlobil & Brandom 2025, Ch. 3.
**[Documentation](https://bradleypallen.github.io/nmms-reasoner/)** | **[PyPI](https://pypi.org/project/pyNMMS/)** | **[GitHub](https://github.com/bradleypallen/nmms-reasoner)**
## Installation
```bash
pip install pyNMMS
```
For development:
```bash
git clone https://github.com/bradleypallen/nmms-reasoner.git
cd nmms-reasoner
pip install -e ".[dev]"
```
## Quick Start
```python
from pynmms import MaterialBase, NMMSReasoner
# Create a material base with defeasible inferences
base = MaterialBase(
language={"A", "B", "C"},
consequences={
(frozenset({"A"}), frozenset({"B"})), # A |~ B
(frozenset({"B"}), frozenset({"C"})), # B |~ C
},
)
reasoner = NMMSReasoner(base)
# A derives B (base consequence)
result = reasoner.derives(frozenset({"A"}), frozenset({"B"}))
assert result.derivable # True
# A does NOT derive C (nontransitivity — no [Mixed-Cut])
result = reasoner.derives(frozenset({"A"}), frozenset({"C"}))
assert not result.derivable # False
# A, C does NOT derive B (nonmonotonicity — no [Weakening])
result = reasoner.derives(frozenset({"A", "C"}), frozenset({"B"}))
assert not result.derivable # False
# Classical tautologies still hold (supraclassicality)
result = reasoner.derives(frozenset(), frozenset({"A | ~A"}))
assert result.derivable # True
```
## CLI
```bash
# Create a base and add consequences
pynmms tell -b base.json --create "A |~ B"
pynmms tell -b base.json "B |~ C"
# Query derivability
pynmms ask -b base.json "A => B" # DERIVABLE
pynmms ask -b base.json "A => C" # NOT DERIVABLE
pynmms ask -b base.json "A, C => B" # NOT DERIVABLE
# Interactive REPL
pynmms repl -b base.json
```
## Key Properties
- **Nonmonotonicity**: Adding premises can defeat inferences (no Weakening)
- **Nontransitivity**: Chaining good inferences can yield bad ones (no Mixed-Cut)
- **Supraclassicality**: All classically valid sequents are derivable
- **Conservative Extension**: Logical vocabulary doesn't change base-level relations
- **Explicitation Conditions**: DD, II, AA, SS biconditionals hold
## Theoretical Background
This implements the NMMS sequent calculus from:
- Hlobil, U. & Brandom, R. B. (2025). *Reasons for Logic, Logic for Reasons*. Ch. 3: "Introducing Logical Vocabulary."
NMMS codifies *open reason relations* — consequence relations where Monotonicity and Transitivity can fail. The material base encodes defeasible material inferences among atomic sentences, and the Ketonen-style logical rules extend this to compound sentences while preserving nonmonotonicity.
## License
MIT
| text/markdown | Bradley P. Allen | null | null | null | null | inferentialism, logic, nonmonotonic, reasoning, sequent-calculus | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mypy; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mkdocs-material; extra == \"docs\"",
"mkdocstrings[python]; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/bradleypallen/nmms-reasoner",
"Repository, https://github.com/bradleypallen/nmms-reasoner"
] | twine/6.1.0 CPython/3.11.11 | 2026-02-21T01:34:39.178896 | pynmms-0.1.1.tar.gz | 45,388 | 3d/6e/ea9e69933ed4518d6f94cd09b7ff9c7fd7d126aa709589c675053fbbfdef/pynmms-0.1.1.tar.gz | source | sdist | null | false | 7493078381663dce34ba73a219012fa0 | df39dc5bf63e0f01cb9910f14e2712817948c5e8ea84da2b94b7559a25d38118 | 3d6eea9e69933ed4518d6f94cd09b7ff9c7fd7d126aa709589c675053fbbfdef | MIT | [
"LICENSE"
] | 0 |
2.4 | pytest-helm | 0.1.1 | Simple, ergonomic Helm manifest fixtures for pytest. | # pytest-helm
`pytest-helm` makes it easy to test Helm charts in a simple and repeatable fashion.
It runs a `helm template` command, loads your manifests into a fixture, and then lets you make simple
assertions about their shape and contents.
## Quickstart
#### Add pytest-helm
```bash
uv add pytest-helm
```
#### Scaffold out the fixture/test file
We assume you won't have pytest configured for a chart directory, so if that's the case you can run:
```
uv run pytest-helm --init
```
That will create two sample test files, a fixture in `tests/conftest.py` and a test in `tests/test_deployment.py`.
If you are already using pytest, you can skip this step.
#### Usage
Define as many fixtures as you'd like in conftest:
```python
# tests/conftest.py
from pytest_helm import manifest_fixture
chart = manifest_fixture(
"chart",
["helm", "template", ".", "-f", "values.yaml"],
)
prod_chart = manifest_fixture(
"prod_chart",
["helm", "template", ".", "-f", "prod-values.yaml"],
)
```
Write tests to validate the YAML shape/output for different values meets your requirements:
```python
# tests/test_deployments.py
def test_it_provisions_one_replica_by_default(chart):
api = chart.get("deployment/api-server")
assert api.spec.replicas == 1
def test_it_provisions_multiple_replicas_in_prod(prod_chart):
api = prod_chart.get("deployment/api-server")
    assert api.spec.replicas == 3
```
Use PDB to programmatically inspect your manifests:
```python
def test_idk_what_to_test(chart):
    import pdb; pdb.set_trace()
```

```
$ pytest
(Pdb) chart
ManifestIndex({
    'ConfigMap': ['release-name-base-config'],
    'Deployment': ['release-name-api', 'release-name-worker'],
    'Ingress': ['release-name-app'],
    'Job': ['release-name-db-migrate'],
    'Service': ['release-name-api'],
    'ServiceAccount': ['backend']
})
(Pdb) chart.get("configmap/release-name-base-config")
Box({'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'name': 'release-name-base-config'}, 'data': {'EMAIL_FROM': 'foo@bar.com'}})
(Pdb)
```
**Notes:**
- `chart.get()` is case-insensitive, `chart.get("ConfigMap/foo")` and `chart.get("configmap/foo")` are the same thing
- In the event that two resources have the same name/kind, you are required to pass an apiVersion as well, i.e. `v1/ConfigMap/foo`
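The lookup rules above can be sketched with a small self-contained index. This is not the library's actual implementation — `ManifestIndexSketch` is a hypothetical illustration of case-insensitive `kind/name` resolution with an `apiVersion` tiebreaker.

```python
class ManifestIndexSketch:
    """Toy index keyed by lowercase (apiVersion, kind, name)."""

    def __init__(self, manifests):
        self._by_key = {}
        for m in manifests:
            key = (
                m["apiVersion"].lower(),
                m["kind"].lower(),
                m["metadata"]["name"].lower(),
            )
            self._by_key[key] = m

    def get(self, ref: str):
        parts = ref.lower().split("/")
        if len(parts) == 3:
            # Fully qualified: apiVersion/kind/name
            api, kind, name = parts
            return self._by_key[(api, kind, name)]
        kind, name = parts
        matches = [
            m for (a, k, n), m in self._by_key.items()
            if k == kind and n == name
        ]
        if len(matches) > 1:
            raise KeyError(f"ambiguous ref {ref!r}; include apiVersion")
        return matches[0]

manifests = [
    {"apiVersion": "v1", "kind": "ConfigMap", "metadata": {"name": "foo"}},
]
idx = ManifestIndexSketch(manifests)

# Case-insensitive: both refs resolve to the same manifest.
assert idx.get("configmap/foo") is idx.get("ConfigMap/foo")
```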
| text/markdown | Thomas Vendetta | null | null | null | MIT | helm, kubernetes, pytest, testing | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=8.0.0",
"python-box>=7.1.1",
"ruamel-yaml>=0.18.0"
] | [] | [] | [] | [
"Homepage, https://github.com/thomasv314/pytest-helm",
"Repository, https://github.com/thomasv314/pytest-helm",
"Issues, https://github.com/thomasv314/pytest-helm/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:34:37.874009 | pytest_helm-0.1.1.tar.gz | 20,664 | 90/ad/986b8a03ae1cb9206928a9a7d327fdd36a6463a57ed6a4abae709c7f4518/pytest_helm-0.1.1.tar.gz | source | sdist | null | false | 73262a05e0a3b4158e28eb89251150ff | bb214ccdd0dcbd9d8b21ed68f911e2fc29339a23a412fde109fa19be48f9e88a | 90ad986b8a03ae1cb9206928a9a7d327fdd36a6463a57ed6a4abae709c7f4518 | null | [] | 196 |
2.4 | project-handbook | 0.0.25 | Installed ph CLI for Project Handbook | # project-handbook-cli
Installed Python CLI distribution: `project-handbook`
Console script: `ph`
Handbook root marker (v1):
- `.project-handbook/config.json`
Rule: `ph` MUST NOT execute repo-local Python scripts at runtime.
## IMPORTANT: Be explicit about `PH_ROOT` during development
When developing, prefer `ph --root /absolute/path/to/handbook` so you don’t accidentally operate on the wrong directory.
v1 contract summary:
- Handbook data root (project scope): `PH_ROOT/.project-handbook/**` (sprints, features, releases, status, process, etc.)
- System scope data root: `PH_ROOT/.project-handbook/system/**`
## Repo layout (this repo)
- `src/ph/**`: CLI implementation
- `cli_plan/**`: authoritative v1 contract + spec + planning
- `docs/**`: rendered docs (MkDocs)
## Local install verification (exact commands)
1) `uv venv`
2) `uv pip install -e .`
3) `ph --help`
If `ph` is not found, activate the venv first: `. .venv/bin/activate`.
## Dev verification (exact commands)
- `uv pip install -e ".[dev]"`
- `uv run ruff format .`
- `uv run ruff check .`
- `uv run pytest -q`
## Docs (MkDocs)
- `uv pip install -e ".[dev]"`
- `uv run mkdocs serve`
Docs source lives in `docs/` and is rendered via `mkdocs.yml`.
## End-session (manual verification)
Non-`--skip-codex` mode requires the `codex` CLI on your `PATH` (e.g. `npm i -g @openai/codex`).
Example:
- `ph end-session --log ~/.codex/sessions/YYYY/MM/DD/rollout-*.jsonl --root /path/to/project-handbook`
## Release (exact steps)
1) update `pyproject.toml` `project.version` and `src/ph/__init__.py` `__version__` (must match)
2) run `uv run ruff check .` then `uv run pytest -q`
3) create git tag `v<version>` and push
4) GitHub Actions publishes to PyPI on tag push (see `.github/workflows/release.yml`)
- PyPI Trusted Publishing (OIDC)
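Step 1's "must match" requirement can be checked mechanically before tagging. A minimal sketch; the regexes and sample file contents are illustrative and not part of the `ph` CLI:

```python
import re

def versions_match(pyproject_text, init_text):
    # Release pre-check sketch: the version in pyproject.toml must
    # equal __version__ in src/ph/__init__.py before tagging.
    pv = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text, re.M).group(1)
    iv = re.search(r'^__version__\s*=\s*"([^"]+)"', init_text, re.M).group(1)
    return pv == iv
```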
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"packaging>=23.0",
"build; extra == \"dev\"",
"mkdocs>=1.6.0; extra == \"dev\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/Spenquatch/project-handbook-cli"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:34:25.484908 | project_handbook-0.0.25.tar.gz | 819,824 | f3/7b/af4aba9cfffff8baf6c5cb0ef5f68af3a8c14567640b5d2f3e8d900d5eec/project_handbook-0.0.25.tar.gz | source | sdist | null | false | dcc9f52fd079f9f564470b5bd65c1457 | 58f1079254bb8fd885c12bed350823e9f823a5bffc20f5be2f445de20e3e1865 | f37baf4aba9cfffff8baf6c5cb0ef5f68af3a8c14567640b5d2f3e8d900d5eec | null | [] | 202 |
2.4 | gitship | 0.4.0 | Interactive Git history management and commit inspection tools with comprehensive diff review | <div align="center">
# Gitship 🚀
### **Git on Autopilot. Stop plumbing, start shipping.**
`gitship` is a high-level workflow manager that wraps Git in a layer of intelligence and safety. It doesn't just run Git commands; it orchestrates your entire development lifecycle—from the first line of code to the final PyPI release.
[](https://badge.fury.io/py/gitship)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
</div>
---
## 💡 Why Gitship?
Most developers treat Git as a plumbing tool. **Gitship treats Git as an architect.**
* **Atomic Operations:** Never see a "dirty tree" error again. Gitship automatically stashes and restores background noise (like translation files or build artifacts) during branch switches and merges.
* **Semantic History:** Your git log shouldn't be a mystery. Gitship generates data-driven commit and merge messages that categorize changes into Features, Fixes, and Stats.
* **Safety-First Workflows:** Rebase-by-default syncing, interactive conflict resolution with state-caching, and identity-verified publishing.
---
## 🛠 Features
### 🛡️ Atomic GitOps (The "Safe-State" Engine)
Gitship uses a unique **Atomic Engine** to ensure your repository stays clean:
- **Intelligent Stashing:** Automatically stashes and restores ignorable background changes (like AI-generated translations or config updates) during critical operations.
- **Conflict Caching:** If a merge fails, Gitship caches your resolutions, allowing you to abort, fix, and resume without losing work.
### 🧠 Intelligent Commits & Amends
- **Category Awareness:** Changes are analyzed and grouped (Code, Translations, Tests, etc.).
- **Smart Amending:** Rewrite your last commit message with automated analysis of what actually changed.
- **Rename Detection:** Content-based similarity detection even when standard Git fails to see a move.
- **Condensed Exports:** Export diffs with 60-70% size reduction using `--unified=1` for easier code review.
### 🌿 Advanced Branch & Sync
- **Unified Sync:** `gitship sync` performs a safe pull (rebase) and push in one atomic operation.
- **Directional Review:** Compare any two branches with a visual "Incoming Changes" vs "Target Status" report.
- **Bulk Cleanup:** Identify and delete redundant, merged, or stale remote branches in seconds.
- **Interactive Merging:** Guided merge workflows with conflict resolution caching.
### 📦 Dependency & Project Management
- **AST-Based Scanner:** Detects imports in your source code and maps them to PyPI packages.
- **Permanent Ignores:** Maintain a project-specific list of packages you never want to track in `pyproject.toml`.
- **README Editor:** Section-by-section interactive README editor with auto-centering for badges.
- **Gitignore Manager:** Add/remove patterns from `.gitignore` via CLI with common language templates.
### ⚓ Professional Releases
- **Semantic Versioning:** Guided patch/minor/major bumping.
- **OIDC / Trusted Publishing:** Automated PyPI release configuration.
- **Draft GitHub Releases:** Auto-generates high-quality release notes from categorized commit history.
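The guided patch/minor/major bumping boils down to standard semantic-version arithmetic. A minimal sketch (illustrative, not Gitship's implementation):

```python
def bump(version, part):
    # Semantic version bumping: incrementing a part resets
    # everything to its right ("1.4.2" + minor -> "1.5.0").
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")
```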
---
## 🚀 Quick Start
### Installation
```bash
pip install gitship
```
### The "Daily Flow" Commands
| Command | Action |
| :--- | :--- |
| `gitship` | **The Dashboard:** Interactive menu for all operations. |
| `gitship sync` | Pull (rebase), resolve conflicts, and push in one go. |
| `gitship commit` | Analyze changes and commit with a smart message. |
| `gitship branch` | Manage, compare, and merge branches safely. |
| `gitship deps` | Sync `pyproject.toml` with your actual imports. |
| `gitship release` | Bump version, generate changelog, and ship to PyPI/GitHub. |
| `gitship amend` | Smart commit message rewriting with merge analysis. |
| `gitship ignore` | Manage `.gitignore` entries from CLI. |
| `gitship docs` | Interactive section-by-section README editor. |
| `gitship resolve` | Interactive conflict resolver with block-by-block choices. |
---
## 🔧 Advanced Usage
### Interactive Conflict Resolution
If a merge or pull hits a conflict, Gitship enters **Resolve Mode**:
```bash
gitship resolve
```
It provides a block-by-block interactive UI, allowing you to choose "Ours", "Theirs", or "Manual" for every conflict hunk, and caches your progress if you need to step away.
### Condensed Code Reviews
Export a massive diff into a readable, minimal-context review file:
```bash
gitship commit # Choose option 2 to review code changes
# Then option 4 to export diff
# Select condensed format (60-70% smaller)
```
Uses `--unified=1` and strips noise to reduce diff size dramatically.
### Atomic Operations Under the Hood
Gitship's `gitops.py` module wraps critical Git operations (push, pull, merge, checkout) with automatic stashing/restoring:
```python
from gitship.gitops import atomic_git_operation
# Automatically handles background file changes
atomic_git_operation(
repo_path=repo,
git_command=["push", "origin", "main"],
description="push to origin/main"
)
```
---
## 📚 Documentation
Full documentation and guides coming soon. For now, explore interactively:
```bash
gitship # Main menu with all features
```
---
## 🤝 Contributing
Gitship is built by developers who are tired of Git overhead. If you have an idea to make Git "just work," we want your PRs!
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`gitship commit` 😉)
4. Push to the branch (`gitship push`)
5. Open a Pull Request
---
## 📄 License
MIT License © [1minds3t](https://github.com/1minds3t)
See [LICENSE](LICENSE) for details.
---
## 🙏 Acknowledgments
Built with:
- Python 3.8+
- [Rich](https://github.com/Textualize/rich) for beautiful terminal output
- [Typer](https://github.com/tiangolo/typer) for CLI magic
- [Omnipkg](https://github.com/1minds3t/omnipkg) for advanced dependency resolution (optional)
---
<div align="center">
**Stop fighting Git. Start shipping code.**
[Install Now](https://pypi.org/project/gitship/) • [Report Bug](https://github.com/1minds3t/gitship/issues) • [Request Feature](https://github.com/1minds3t/gitship/issues)
</div>
| text/markdown | null | 1minds3t <1minds3t@proton.me> | null | null | MIT | git, version-control, cli, history, revert, commit-inspection, diff, review | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Version Control :: Git",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"filelock",
"pyperclip",
"requests",
"ruamel",
"tomli",
"omnipkg; extra == \"full\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/1minds3t/gitship",
"Repository, https://github.com/1minds3t/gitship",
"Issues, https://github.com/1minds3t/gitship/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:34:23.125541 | gitship-0.4.0.tar.gz | 190,120 | d0/ad/072cc42659af7c50bcb507c023a5ea9e4f5b63882667e2991180746b5f73/gitship-0.4.0.tar.gz | source | sdist | null | false | cac98ed57c7b43be8e9f6c8fd5cd937f | f3e2b5ff0106c1022e20e793886ac654ef5b5cc2cb707dcdddf6648655fa1747 | d0ad072cc42659af7c50bcb507c023a5ea9e4f5b63882667e2991180746b5f73 | null | [
"LICENSE",
"THIRD_PARTY_NOTICES.txt"
] | 185 |
2.4 | search-parser | 0.0.2 | Parse search engine HTML results into structured data | # search-parser
[](https://pypi.org/project/search-parser/)
[](https://pypi.org/project/search-parser/)
[](https://github.com/getlinksc/search-parser/actions/workflows/test.yml)
[](https://github.com/getlinksc/search-parser/actions/workflows/lint.yml)
[](https://codecov.io/gh/getlinksc/search-parser)
[](https://github.com/astral-sh/ruff)
[](https://opensource.org/licenses/Apache-2.0)
**Parse search engine HTML results into structured data (JSON, Markdown) with auto-detection.**
`search-parser` takes raw HTML from Google, Bing, and DuckDuckGo and extracts every result type — organic results, featured snippets, AI Overviews, People Also Ask, sponsored ads, and more — into clean, typed Python objects. It auto-detects the search engine from the HTML, so you never have to specify which parser to use.
---
## Quick Start
```python
from search_engine_parser import SearchParser
parser = SearchParser()
html = open("google_results.html").read()
# JSON string (default)
json_output = parser.parse(html)
# Markdown string — great for feeding to an LLM
md_output = parser.parse(html, output_format="markdown")
# Python dict — for programmatic access
data = parser.parse(html, output_format="dict")
# Organic results are in data["results"]
for result in data["results"]:
print(f"{result['position']}. {result['title']}")
print(f" {result['url']}")
# Every other result type has its own dedicated key
if data["featured_snippet"]:
print("Featured:", data["featured_snippet"]["title"])
if data["ai_overview"]:
print("AI Overview:", data["ai_overview"]["description"][:100])
for question in data["people_also_ask"]:
print("PAA:", question["title"])
```
---
## Installation
**With uv (recommended):**
```bash
uv add search-parser
```
**With pip:**
```bash
pip install search-parser
```
---
## Supported Result Types
| Result Type | Field | Google | Bing | DuckDuckGo |
|---|---|:-:|:-:|:-:|
| Organic results | `results` | ✓ | ✓ | ✓ |
| Featured snippet | `featured_snippet` | ✓ | ✓ | — |
| Sponsored / ads | `sponsored` | ✓ | — | — |
| AI Overview | `ai_overview` | ✓ | — | — |
| People Also Ask | `people_also_ask` | ✓ | — | — |
| What People Are Saying | `people_saying` | ✓ | — | — |
| People Also Search For | `people_also_search` | ✓ | — | — |
| Related Products & Services | `related_products` | ✓ | — | — |
---
## Working with Results
`SearchParser.parse()` with `output_format="dict"` returns the full `SearchResults` structure:
```python
data = parser.parse(html, output_format="dict")
# Always a list (organic results only)
for r in data["results"]:
print(r["title"], r["url"], r["description"])
# None or a single object
if data["featured_snippet"]:
print(data["featured_snippet"]["title"])
# None or a single object with description + sources list
if data["ai_overview"]:
overview = data["ai_overview"]
print(overview["description"])
for source in overview["metadata"]["sources"]:
print(f" - {source['title']}: {source['url']}")
# Always a list (empty when not present)
for q in data["people_also_ask"]:
print(q["title"])
for post in data["people_saying"]:
print(post["title"], post["url"])
for item in data["people_also_search"]:
print(item["title"])
for ad in data["sponsored"]:
print(ad["title"], ad["url"])
for product in data["related_products"]:
print(product["title"])
# Metadata
print(data["search_engine"]) # "google"
print(data["query"]) # "python web scraping"
print(data["total_results"]) # 26200000 or None
print(data["detection_confidence"]) # 0.95
```
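The `detection_confidence` value reflects that engine detection is heuristic. A toy illustration of marker-based scoring — the markers below are made up, and this is not search-parser's actual detection logic:

```python
def detect_engine(html):
    # Score each engine by the fraction of its characteristic
    # markers found in the raw HTML (hypothetical marker lists).
    markers = {
        "google": ["google.com/search", 'id="search"', "AI Overview"],
        "bing": ["bing.com/search", "b_results"],
        "duckduckgo": ["duckduckgo.com", "react-results"],
    }
    scores = {
        engine: sum(m in html for m in ms) / len(ms)
        for engine, ms in markers.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]
```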
### Using the model directly
When you need the typed `SearchResults` object instead of a dict, call the engine parser directly. The model exposes `to_json()` and `to_markdown()` convenience methods:
```python
from search_engine_parser.parsers.google import GoogleParser
parser = GoogleParser()
results = parser.parse(html) # returns SearchResults
# Typed access — no dict key lookups
print(results.query)
print(results.total_results)
print(len(results.results)) # organic count
if results.featured_snippet:
print(results.featured_snippet.title)
if results.ai_overview:
print(results.ai_overview.description)
sources = results.ai_overview.metadata["sources"]
for q in results.people_also_ask:
print(q.title)
for post in results.people_saying:
print(post.title, post.url)
# Convert to JSON or Markdown directly on the model
json_str = results.to_json()
json_str = results.to_json(indent=4) # custom indent
md_str = results.to_markdown()
```
---
## Output Formats
### JSON (`output_format="json"` or `results.to_json()`)
```json
{
"search_engine": "google",
"query": "python web scraping",
"total_results": 26200000,
"results": [
{
"title": "Web Scraping with Python - Real Python",
"url": "https://realpython.com/python-web-scraping/",
"description": "Learn how to scrape websites with Python...",
"position": 1,
"result_type": "organic",
"metadata": {}
}
],
"featured_snippet": null,
"ai_overview": {
"title": "AI Overview",
"url": "",
"description": "Python is a widely used language for web scraping...",
"position": 0,
"result_type": "ai_overview",
"metadata": {
"sources": [
{"title": "Beautiful Soup", "url": "https://www.crummy.com/software/BeautifulSoup/"},
{"title": "Requests", "url": "https://requests.readthedocs.io/"}
]
}
},
"people_also_ask": [
{"title": "Is Python good for web scraping?", "url": "", "position": 0, "result_type": "people_also_ask", "metadata": {}}
],
"sponsored": [],
"people_saying": [],
"people_also_search": [],
"related_products": [],
"detection_confidence": 0.95,
"parsed_at": "2026-02-21T00:00:00Z",
"metadata": {}
}
```
### Markdown (`output_format="markdown"` or `results.to_markdown()`)
```markdown
# Search Results: python web scraping
**Search Engine:** Google
**Total Results:** ~26,200,000
**Parsed:** 2026-02-21 00:00:00 UTC
---
## Featured Snippet
### What is Web Scraping?
Web scraping is the process of extracting data from websites...
**Source:** [https://example.com](https://example.com)
---
## Organic Results
### 1. Web Scraping with Python - Real Python
Learn how to scrape websites with Python...
**URL:** https://realpython.com/python-web-scraping/
```
---
## CLI Usage
```bash
# Parse an HTML file (auto-detects search engine, outputs JSON)
search-parser parse results.html
# Markdown output
search-parser parse results.html --format markdown
# Specify engine manually
search-parser parse results.html --engine google --format json
# Read from stdin
cat results.html | search-parser parse - --format json
# Save to file
search-parser parse results.html --output results.json
```
---
## Documentation
Full documentation: [https://search-parser.github.io/search-parser/](https://search-parser.github.io/search-parser/)
- [Getting Started](https://search-parser.github.io/search-parser/getting_started/)
- [API Reference](https://search-parser.github.io/search-parser/api_reference/)
- [Adding a New Search Engine](https://search-parser.github.io/search-parser/adding_search_engine/)
- [Examples](https://search-parser.github.io/search-parser/examples/basic_usage/)
---
## Contributing
Contributions are welcome! Please read our [Contributing Guide](CONTRIBUTING.md) for details on the development workflow, how to add new parsers, and how to submit pull requests.
---
## License
This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
| text/markdown | null | Your Name <you@example.com> | null | null | Apache-2.0 | bing, duckduckgo, google, parser, scraping, search | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"beautifulsoup4>=4.12.0",
"lxml>=5.0.0",
"markdownify>=0.11.0",
"pydantic>=2.0.0",
"pytest<10.0.0,>=9.0.2",
"click>=8.1.0; extra == \"cli\"",
"rich>=13.0.0; extra == \"cli\"",
"lxml-stubs; extra == \"dev\"",
"mypy>=1.7.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"types-beautifulsoup4; extra == \"dev\"",
"mkdocs-material>=9.4.0; extra == \"docs\"",
"mkdocstrings[python]>=0.24.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/getlinksc/search-parser",
"Documentation, https://getlinksc.github.io/search-parser",
"Repository, https://github.com/getlinksc/search-parser",
"Issues, https://github.com/getlinksc/search-parser/issues",
"Changelog, https://github.com/getlinksc/search-parser/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:33:16.326072 | search_parser-0.0.2.tar.gz | 977,123 | e0/19/c3d20a2d6f1e8908c6975cbf1316e62d3e6775babcbe8f09d34f153feb14/search_parser-0.0.2.tar.gz | source | sdist | null | false | 15a777c4dd70f10f484ef60c18cbb6e6 | 0065b91fc84a111005141eb266b181a34223fdfadf7a0e99e2835e9d8352c75e | e019c3d20a2d6f1e8908c6975cbf1316e62d3e6775babcbe8f09d34f153feb14 | null | [
"LICENSE"
] | 199 |
2.4 | syncgate | 1.3.0 | Lightweight multi-storage routing and sync abstraction layer | # SyncGate 🎯
> Lightweight multi-storage routing and sync abstraction layer
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
## 🎯 Product Positioning
**SyncGate** is a lightweight multi-storage routing and sync abstraction layer that unifies access to local files, HTTP links, and multiple cloud storage backends.
### Core Features
1. **🔗 Unified paths** - access multiple storage backends with one command
2. **🌳 Virtual file system** - manages links only; source files are never moved
3. **✅ Link validation** - automatically verifies that links are still valid
4. **📊 Monitoring & stats** - tracks usage and computes a health score
5. **🔄 Batch operations** - create/delete/validate links in bulk
6. **⚡ Error handling** - retry mechanism and circuit breaker
7. **🌐 WebDAV server** - mount the virtual tree as a network drive
8. **🐳 Docker deployment** - one-command containerized deployment
### Supported Storage Backends
| Storage type | Status | Scheme |
|----------|------|------|
| Local files | ✅ | local:/path/to/file |
| HTTP/HTTPS | ✅ | https://example.com/file |
| AWS S3 | ✅ | s3://bucket/key |
| WebDAV | ✅ | webdav://server.com/path |
| FTP | ✅ | ftp://server.com/path |
| SFTP | ✅ | sftp://user@server.com/path |
## 🚀 Quick Start
### Option 1: pip install
```bash
pip install syncgate
```
### Option 2: Docker
```bash
# Build the image
docker build -t syncgate .
# Run the container
docker run -p 8080:8080 -v /path/to/virtual:/data syncgate
```
### Option 3: Docker Compose
```bash
docker-compose up -d
```
### Option 4: Kubernetes
```bash
kubectl apply -f k8s/deployment.yaml
# Or with Helm
helm install syncgate k8s/helm-chart/
```
### Mounting via WebDAV
Once the WebDAV server is running, you can mount it as a network drive:
```bash
# Start the server (no auth)
python3 -m syncgate.webdav_server /path/to/virtual --port 8080
# Start the server (with auth)
python3 -m syncgate.webdav_server /path/to/virtual --port 8080 --user admin --pass secret
```
Mounting:
- **macOS**: `Cmd+K` → `dav://localhost:8080/`
- **Windows**: This PC → Add a network location → `dav://localhost:8080/`
- **Linux**: `mount -t davfs http://localhost:8080/ /mnt/dav`
### Web Manager
Start the web-based management UI:
```bash
# Start the web manager
python3 -m syncgate.web --port 8080
# Open http://localhost:8080
```
Features:
- 📊 Dashboard - real-time statistics overview
- 🔗 Link management - CRUD operations
- ☁️ Backend status view
- 📈 Statistics charts
- 📥 Import/export
- ⚙️ Settings page
Details: [WEB_GUIDE.md](WEB_GUIDE.md)
### Demo
```bash
python3 demo_complete.py
# Or via WebDAV
python3 -m syncgate.webdav_server ./virtual --port 8080
```
Output:
```
🎯 SyncGate Complete Demo
============================================================
📝 Step 1: Create links
✅ Local: /docs/notes.txt
✅ HTTP: /online/api.json
✅ S3: /cloud/data.csv
📂 Step 2: Virtual file system layout
virtual/
├── 📁 docs/ → local:/path/to/files
├── 📁 online/ → https://api.example.com
└── 📁 cloud/ → s3://my-bucket/data
🔍 Step 3: Link validation
✅ /docs/notes.txt
✅ /online/api.json
✅ Demo complete!
```
### Usage
```bash
# Create links
syncgate link /docs/a.txt local:/path/to/file.txt local
syncgate link /online/api.json https://api.example.com/data.json http
syncgate link /cloud/data.csv s3://my-bucket/data/2024.csv s3
# List a directory
syncgate ls /docs
# Tree view
syncgate tree /
# Validate a link
syncgate validate /docs/a.txt
# Show status
syncgate status /docs/a.txt
```
## 📖 Documentation
- [Usage](USAGE.md) - complete usage guide
- [Quick Start](QUICK_START.md) - get started in 5 minutes
- [Project](PROJECT.md) - architecture and design
- [Changelog](CHANGELOG.md) - release notes
- [Web Guide](WEB_GUIDE.md) - web manager manual
- [Docker Deployment](DOCKER.md) - Docker deployment guide
- [Kubernetes](k8s/) - K8s deployment manifests
## 🔗 Links
- GitHub: https://github.com/cyydark/syncgate
- Issues: https://github.com/cyydark/syncgate/issues
- Releases: https://github.com/cyydark/syncgate/releases
## 🛠️ Modules
### CLI Commands
| Command | Description |
|------|------|
| `link` | Create a link |
| `unlink` | Remove a link |
| `ls` | List a directory |
| `tree` | Tree view |
| `validate` | Validate a link |
| `status` | Show status |
### REST API
```bash
# Start the API server
python -m syncgate.api --port 8080
```
| Endpoint | Description |
|------|------|
| GET /api/stats | Get statistics |
| GET /api/links | List all links |
| POST /api/links | Create a link |
| DELETE /api/links/{path} | Delete a link |
### WebDAV Server
```bash
# Start the WebDAV server
python3 -m syncgate.webdav_server /path/to/virtual --port 8080
# With auth
python3 -m syncgate.webdav_server /path/to/virtual --port 8080 --user admin --pass secret
```
| Method | Description |
|------|------|
| GET/HEAD | Read files/directories |
| PUT | Create/update files |
| DELETE | Delete files |
| PROPFIND | Get properties |
| MKCOL | Create directories |
| MOVE | Move files |
| COPY | Copy files |
### Advanced Features
```python
from syncgate.batch import BatchOperations
from syncgate.monitor import StatsCollector
from syncgate.error_handler import retry, CircuitBreaker

# Batch operations
batch = BatchOperations(vfs_root="virtual")
batch.batch_create([...])

# Monitoring & stats
stats = StatsCollector(vfs_root="virtual")
stats.record_create("local")
health = stats.get_health_score()

# Error handling
@retry(max_retries=3, delay=1.0)
def fragile_operation():
    ...
```
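A minimal sketch of what a `retry` decorator with the signature shown above could look like (illustrative only; not syncgate's implementation):

```python
import functools
import time

def retry(max_retries=3, delay=1.0):
    # Re-invoke the wrapped function up to max_retries times,
    # sleeping `delay` seconds between attempts; re-raise the
    # last exception if every attempt fails.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    if attempt < max_retries - 1:
                        time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator
```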
### Docker & Kubernetes
```yaml
# docker-compose.yml
services:
  syncgate:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - ./virtual:/data/virtual
```
```bash
# Kubernetes deployment
kubectl apply -f k8s/deployment.yaml
# Helm install
helm install syncgate k8s/helm-chart/
```
## 🧪 Testing
```bash
# Run all tests
pytest tests/ -v
# Run performance benchmarks
python3 tests/benchmark_streaming.py
```
**Test coverage**: 168+ test cases
## 📦 Deployment Options
| Method | Command | Use case |
|------|------|----------|
| pip | `pip install syncgate` | Local development |
| Docker | `docker build && docker run` | Production |
| Compose | `docker-compose up -d` | Local/testing |
| K8s | `kubectl apply -f k8s/` | Kubernetes |
| Helm | `helm install syncgate k8s/helm-chart/` | K8s production |
## ☁️ Cloud Deployment
### AWS ECS / EKS
```yaml
# Use the official image
image: ghcr.io/cyydark/syncgate:latest
```
### Azure Container Apps
```bash
az containerapp create \
  --name syncgate \
  --resource-group mygroup \
  --image ghcr.io/cyydark/syncgate:latest \
  --target-port 8080 \
  --ingress external
```
## 📈 Performance
| Scenario | Result |
|------|------|
| Initialization (5,000 files) | 0.04 ms |
| First page load | 2 ms (50 entries) |
| WebDAV response | <10 ms |
| Docker startup | <2 s |
## 🏗️ Development
```bash
# Clone the project
git clone https://github.com/cyydark/syncgate.git
cd syncgate
# Install dev dependencies
pip install -e ".[dev]"
# Run the demo
python3 demo_complete.py
# Run tests
pytest tests/ -v
# Build the Docker image
docker build -t syncgate .
# Run the WebDAV server
python3 -m syncgate.webdav_server ./virtual --port 8080
```
## 🤝 Contributing
Issues and PRs are welcome!
Feedback: [GitHub Issues](https://github.com/cyydark/syncgate/issues)
## 📄 License
MIT License - see [LICENSE](LICENSE)
---
**SyncGate** - unified management for files scattered across local, web, and cloud storage 📁🌐☁️
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.34.0",
"paramiko>=3.0.0",
"psycopg2-binary>=2.9.9",
"requests>=2.31.0",
"black>=23.0.0; extra == \"dev\"",
"build>=1.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:32:20.211102 | syncgate-1.3.0.tar.gz | 126,308 | 52/a2/ced5fd9433c5b0d1aacddec965a6e036cf271c243ba51ae140464c97103a/syncgate-1.3.0.tar.gz | source | sdist | null | false | 24ccadecd226e37847b84e11b0ea64de | f067807ab59dc56289717e2e1bf2a96f8c6626053c5ae22a573e88ef3ec04128 | 52a2ced5fd9433c5b0d1aacddec965a6e036cf271c243ba51ae140464c97103a | null | [
"LICENSE"
] | 193 |
2.4 | phspectra | 1.0.2 | Persistent homology spectral line decomposition | <p align="center">
<img src="https://caverac.github.io/phspectra/img/logo.svg" width="128" alt="phspectra logo">
</p>
<h1 align="center">phspectra</h1>
<p align="center">
Persistent homology spectral line decomposition.
</p>
<p align="center">
<a href="https://caverac.github.io/phspectra/">Documentation</a> | <a href="https://github.com/caverac/phspectra">Repository</a>
</p>
---
A Python library for decomposing 1-D astronomical spectra into Gaussian components using **0-dimensional persistent homology** for peak detection. Instead of derivative-based methods (GaussPy) or brute-force parameter sweeps, it ranks peaks by their topological persistence and uses a single tuning parameter.
## Installation
```bash
pip install phspectra
```
**Requirements:** Python >= 3.11, [NumPy](https://numpy.org/) >= 1.26, [SciPy](https://scipy.org/) >= 1.12.
An optional C extension is compiled automatically from source when available, providing ~2x faster fitting. If compilation fails, the library falls back to SciPy's [`curve_fit`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) transparently.
## Quick start
```python
import numpy as np
from phspectra import fit_gaussians
# Create a synthetic spectrum: two Gaussians + noise
rng = np.random.default_rng(42)
x = np.arange(200, dtype=np.float64)
signal = (
3.0 * np.exp(-0.5 * ((x - 60) / 4.0) ** 2)
+ 1.5 * np.exp(-0.5 * ((x - 130) / 8.0) ** 2)
+ rng.normal(0, 0.2, size=200)
)
# Decompose
components = fit_gaussians(signal, beta=3.5)
for c in components:
print(f" amplitude={c.amplitude:.2f} mean={c.mean:.1f} stddev={c.stddev:.2f}")
```
```
amplitude=3.00 mean=60.1 stddev=3.97
amplitude=1.48 mean=129.9 stddev=8.12
```
The number of components is determined automatically -- no need to specify it in advance.
## API
The public API consists of three functions:
| Symbol | Description |
| ----------------------------------------------------------- | ------------------------------------------------- |
| `fit_gaussians(signal, *, beta=3.5, ...)` | Decompose a 1-D spectrum into Gaussian components |
| `find_peaks_by_persistence(signal, *, min_persistence=0.0)` | Low-level peak detection via persistent homology |
| `estimate_rms(signal, *, mask_pad=2, mad_clip=5.0)` | Signal-masked noise estimation |
`fit_gaussians` returns a list of `GaussianComponent` dataclasses, each with `amplitude`, `mean`, and `stddev` fields.
See the [full API reference](https://caverac.github.io/phspectra/library/api) for parameter details, types, and error handling.
## How it works
1. **Noise estimation** -- signal-masked MAD estimator (Riener et al. 2019)
2. **Peak detection** -- 0-dim persistent homology ranks peaks by significance
3. **Curve fitting** -- bounded Levenberg-Marquardt with initial guesses from persistence
4. **Validation** -- SNR, matched-filter SNR, and FWHM checks discard unphysical components
5. **Refinement** -- iterative residual search, negative-dip splitting, and blended-pair merging, accepted only when AICc improves
See the [algorithm overview](https://caverac.github.io/phspectra/library/algorithm) for details.
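The core idea of step 2 — ranking peaks by 0-dimensional persistence — can be sketched in plain Python: sweep samples from highest to lowest, merge connected components as samples appear, and record a peak's persistence at the moment its component is absorbed by a taller one. This is a generic illustration of the technique, not phspectra's implementation:

```python
def peak_persistence(signal):
    # Rank local maxima of a 1-D signal by 0-dim persistence.
    # Process samples from highest to lowest; a peak "dies" when its
    # component merges into a taller one, and
    # persistence = birth height - death height.
    n = len(signal)
    order = sorted(range(n), key=lambda i: signal[i], reverse=True)
    parent = {}       # index -> union-find parent
    persistence = {}  # peak index -> persistence value

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        neighbors = [j for j in (i - 1, i + 1) if j in parent]
        roots = sorted({find(j) for j in neighbors},
                       key=lambda r: signal[r], reverse=True)
        if not roots:
            continue  # i starts a new component (a local maximum)
        tallest = roots[0]
        parent[i] = tallest
        for r in roots[1:]:
            persistence[r] = signal[r] - signal[i]  # r dies at height signal[i]
            parent[r] = tallest
    # the global maximum never dies; give it infinite persistence
    persistence[order[0]] = float("inf")
    return sorted(persistence.items(), key=lambda kv: kv[1], reverse=True)
```

Thresholding this ranking (phspectra's `min_persistence`) is what separates significant peaks from noise.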
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.26",
"scipy>=1.12"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:30:56.031995 | phspectra-1.0.2.tar.gz | 28,372 | 87/0a/a519f70c83dc61cf8128aa4143aa7391fd8b9d8af1f0faef443e9b1adab4/phspectra-1.0.2.tar.gz | source | sdist | null | false | a5337cde89872aa575a45583ca6e511c | 75d6ba3afb8dac248b106045447f4b7ae16135adfa5bff1982779fc63f62ae2c | 870aa519f70c83dc61cf8128aa4143aa7391fd8b9d8af1f0faef443e9b1adab4 | MIT | [] | 135 |
2.1 | kestrel-kernels | 0.1.3 | CUDA kernel library for Kestrel | # kestrel-kernels
Precompiled CUDA kernels for [Kestrel](https://pypi.org/project/kestrel/), a high-performance inference engine for [Moondream](https://moondream.ai), the world's most efficient vision-language model.
**License:** These kernels are provided for use with Kestrel only. Other use is not permitted.
These kernels are optimized for NVIDIA Ada/Hopper GPUs (SM89/SM90) and distributed as precompiled shared libraries for fast installation without CUDA compilation.
## Kernel Library
### CUDA Kernels (compiled via CMake)
These kernels are implemented in CUDA C++ and compiled during wheel build.
#### `activation` - GELU Residual Activation
Computes `GELU(h) * (g + 1)` fused gated activation used in MoE expert layers. The input tensor is split in half: `h` passes through GELU, `g` acts as a gate with +1 bias.
| Tokens | CUDA | PyTorch (eager) | Compile | vs PyTorch |
|--------|------|-----------------|---------|------------|
| 1 | 3.8 us | 64 us | 63 us | **17x** |
| 64 | 2.9 us | 49 us | 69 us | **17x** |
| 740 | 3.5 us | 49 us | 68 us | **14x** |
| 1024 | 3.9 us | 49 us | 68 us | **13x** |
| 2048 | 5.1 us | 49 us | 68 us | **10x** |
PyTorch eager launches separate kernels for slice, erf, multiply, and add, with intermediate tensors hitting global memory. Our kernel fuses everything into a single pass. torch.compile is slower than eager here, likely because the dynamic `x[:, :hidden]` slicing prevents effective fusion.
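For reference, the fused activation computes the following per row (a plain-Python sketch of the math, not the CUDA kernel):

```python
import math

def gelu_residual(x):
    # Split the last dimension in half: h goes through exact GELU,
    # g gates the result with a +1 bias. The CUDA kernel fuses all
    # of this into a single pass over global memory.
    half = len(x) // 2
    h, g = x[:half], x[half:]
    gelu = [0.5 * v * (1.0 + math.erf(v / math.sqrt(2.0))) for v in h]
    return [a * (b + 1.0) for a, b in zip(gelu, g)]
```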
#### `fused_linear_residual` - Linear + Bias + Residual
Fused `out = x @ W.T + bias + residual` using cuBLASLt epilogues.
| Crops | Tokens | CUDA | PyTorch (eager) | vs PyTorch |
|-------|--------|------|-----------------|------------|
| 1 | 729 | 9.0 us | 24 us | **2.7x** |
| 2 | 1458 | 12 us | 24 us | **2.0x** |
| 4 | 2916 | 16 us | 29 us | **1.8x** |
| 8 | 5832 | 46 us | 50 us | **1.1x** |
| 13 | 9477 | 44 us | 77 us | **1.7x** |
cuBLASLt epilogues fuse bias addition and residual into the matmul, avoiding extra kernel launches and memory traffic.
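Semantically this is three ops collapsed into one call. A plain-Python reference of the math (the speedup comes from cuBLASLt computing it in a single fused pass, not from different arithmetic):

```python
def fused_linear_residual(x, W, bias, residual):
    # out[i][j] = dot(x[i], W[j]) + bias[j] + residual[i][j]
    return [
        [sum(a * b for a, b in zip(xi, Wj)) + bj + rij
         for Wj, bj, rij in zip(W, bias, ri)]
        for xi, ri in zip(x, residual)
    ]

out = fused_linear_residual(
    x=[[1.0, 2.0]],              # one token, in_features=2
    W=[[1.0, 0.0], [0.0, 1.0]],  # rows of W are output features
    bias=[10.0, 20.0],
    residual=[[100.0, 200.0]],
)
# out == [[111.0, 222.0]]
```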
#### `fused_mlp` - Fused MLP with cuBLASLt
Fused `out = residual + gelu(x @ W1.T + b1) @ W2.T + b2` using cuBLASLt epilogues.
| Crops | Tokens | CUDA | PyTorch (eager) | vs PyTorch |
|-------|--------|------|-----------------|------------|
| 1 | 729 | 43 us | 56 us | **1.3x** |
| 2 | 1458 | 72 us | 89 us | **1.2x** |
| 4 | 2916 | 97 us | 124 us | **1.3x** |
| 8 | 5832 | 214 us | 259 us | **1.2x** |
| 13 | 9477 | 283 us | 379 us | **1.3x** |
MLP is matmul-dominated so the speedup is modest. The gain comes from fusing GELU and residual add into cuBLASLt epilogues.
#### `kv_cache_write` - KV Cache Write with FP8 Quantization
Writes BF16 key/value tensors to FP8 paged KV cache with quantization.
| Tokens | Kestrel | vLLM | PyTorch (eager) | vs vLLM | vs PyTorch |
|--------|---------|------|-----------------|---------|------------|
| 1 | 3.7 us | 4.9 us | 67 us | **1.3x** | **18x** |
| 8 | 3.5 us | 4.8 us | 35 us | **1.4x** | **10x** |
| 64 | 3.7 us | 4.8 us | 35 us | **1.3x** | **9x** |
| 256 | 4.1 us | 4.8 us | 36 us | **1.2x** | **9x** |
| 1024 | 8.6 us | 9.7 us | 51 us | **1.1x** | **6x** |
| 4096 | 31 us | 46 us | 124 us | **1.5x** | **4x** |
Fused K/V processing and optimized vectorization provide 1.1-1.5x speedup over vLLM's implementation.
#### `layernorm_cuda` - Fast LayerNorm Forward
Optimized LayerNorm forward pass for common hidden dimensions.
**Vision Encoder (N=1152):**
| Crops | Tokens | CUDA | PyTorch (eager) | vs PyTorch |
|-------|--------|------|-----------------|------------|
| 1 | 729 | 3.9 us | 8.4 us | **2.2x** |
| 2 | 1458 | 4.2 us | 8.4 us | **2.0x** |
| 4 | 2916 | 5.5 us | 10 us | **1.8x** |
| 8 | 5832 | 8.3 us | 18 us | **2.1x** |
| 13 | 9477 | 18 us | 28 us | **1.6x** |
**Text Decoder (N=2048):**
| Context | Tokens | CUDA | PyTorch (eager) | vs PyTorch |
|---------|--------|------|-----------------|------------|
| decode | 1 | 4.2 us | 8.4 us | **2.0x** |
| prefill | 740 | 3.7 us | 8.4 us | **2.3x** |
Specialized kernels for N=1152 and N=2048 use 4 rows/block with warp-only reductions, avoiding shared-memory overhead. Two epilogue strategies trade register pressure against memory bandwidth.
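The math being specialized is the standard LayerNorm forward; a scalar reference for one row (the kernel's gains come from how the reduction and epilogue are scheduled, not from different math):

```python
import math

def layernorm_row(row, gamma, beta, eps=1e-5):
    # Normalize to zero mean / unit variance, then apply the affine transform.
    n = len(row)
    mean = sum(row) / n
    var = sum((x - mean) ** 2 for x in row) / n
    inv_std = 1.0 / math.sqrt(var + eps)
    return [(x - mean) * inv_std * g + b for x, g, b in zip(row, gamma, beta)]
```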
#### `moe_sum` - MoE Output Summation
Sums the weighted outputs from top-k MoE experts back into a single hidden state per token. Computes `out[t] = sum(expert_outputs[t, 0:k])` where each token selects k=8 experts.
| Context | Tokens | CUDA | PyTorch (eager) | vs PyTorch |
|---------|--------|------|-----------------|------------|
| decode | 1 | 3.0 us | 5.6 us | **1.9x** |
| batch 4 | 4 | 3.0 us | 5.4 us | **1.8x** |
| batch 16 | 16 | 2.9 us | 5.3 us | **1.8x** |
| prefill | 740 | 5.5 us | 10 us | **1.9x** |
| long | 1024 | 10 us | 15 us | **1.5x** |
Vectorized 16-byte loads (8 bf16 at once), fully unrolled k=8 reduction. FP32 accumulation provides better numerical stability than bf16 accumulation. Note: vLLM has a similar kernel, but only supports topk=2,3,4 and falls back to PyTorch for topk=8.
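The reduction itself is trivial; the performance comes entirely from the memory access pattern. Reference semantics in plain Python:

```python
def moe_sum(expert_outputs):
    # expert_outputs[t][e] is the (already weighted) output vector of
    # token t's e-th selected expert; reduce the k vectors per token.
    return [
        [sum(vec[d] for vec in per_token) for d in range(len(per_token[0]))]
        for per_token in expert_outputs
    ]

# One token, k=2 experts, hidden=2:
print(moe_sum([[[1.0, 2.0], [3.0, 4.0]]]))  # [[4.0, 6.0]]
```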
#### `rotary_embedding` - Rotary Position Embedding
Applies rotary position embedding to query and key tensors (n_heads=32, head_dim=64).
| Context | Tokens | Kestrel | vLLM | PyTorch (eager) | vs vLLM | vs PyTorch |
|---------|--------|---------|------|-----------------|---------|------------|
| decode | 1 | 3.3 us | 4.9 us | 118 us | **1.5x** | **36x** |
| batch 4 | 4 | 3.1 us | 4.5 us | 117 us | **1.5x** | **38x** |
| batch 16 | 16 | 3.1 us | 4.7 us | 117 us | **1.5x** | **38x** |
| prefill | 740 | 5.0 us | 8.0 us | 119 us | **1.6x** | **24x** |
Vectorized bfloat162 pair processing, shared memory caching of cos/sin values, FP32 math for numerical stability. Split-head kernel for decode increases SM utilization on small batch sizes.
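For readers unfamiliar with rotary embeddings, a scalar sketch of the rotation applied to one head, using the interleaved-pair convention (layout and frequency conventions vary between implementations, so treat this as illustrative math only):

```python
import math

def apply_rotary(head, position, base=10000.0):
    # Rotate each (even, odd) pair by a position- and frequency-dependent
    # angle; a rotation preserves the vector's norm.
    dim = len(head)
    out = []
    for i in range(0, dim, 2):
        theta = position * base ** (-i / dim)
        c, s = math.cos(theta), math.sin(theta)
        x1, x2 = head[i], head[i + 1]
        out += [x1 * c - x2 * s, x1 * s + x2 * c]
    return out

# Position 0 is the identity; any position preserves the norm.
```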
#### `fp8_quant` - FP8 Quantization
Converts BF16 tensors to FP8 (e4m3fn) with per-row dynamic scale computation. Used for quantizing MoE activations before FP8 GEMM.
| Context | Rows | CUDA | PyTorch (eager) | vs PyTorch |
|---------|------|------|-----------------|------------|
| decode | 8 | 3.1 us | 53 us | **17x** |
| batch 4 | 32 | 3.1 us | 52 us | **17x** |
| batch 16 | 128 | 3.1 us | 52 us | **17x** |
| prefill | 5920 | 6.6 us | 67 us | **10x** |
Two kernel variants: warp-per-row for large batches (better SM utilization), block-per-row for small batches. Vectorized 16-byte loads/stores, fused absmax reduction.
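The per-row dynamic scale is the usual absmax-to-FP8-max mapping; a sketch under that assumption (448 is the largest finite e4m3fn value; the kernel's rounding details may differ):

```python
FP8_E4M3_MAX = 448.0  # largest finite value representable in e4m3fn

def fp8_rowwise_scale(row):
    # Map the row's absmax onto the FP8 dynamic range; dequantization
    # is then q * scale, per row.
    absmax = max(abs(x) for x in row)
    return absmax / FP8_E4M3_MAX if absmax > 0.0 else 1.0

row = [0.5, -2.0, 1.25]
scale = fp8_rowwise_scale(row)
scaled = [x / scale for x in row]  # absmax now sits at the FP8 max
```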
#### `tau_tail` - TAU Attention Scaling
Applies per-head TAU scaling to Q and V in packed QKV. Computes `scale = tanh(tok_linear) + tau_pos_table[position]` then scales each head: `Q *= scale_q`, `V *= scale_v`.
| Context | Tokens | CUDA | PyTorch (eager) | vs PyTorch |
|---------|--------|------|-----------------|------------|
| decode | 1 | 4.6 us | 45 us | **10x** |
| batch 4 | 4 | 4.4 us | 46 us | **10x** |
| batch 16 | 16 | 9.0 us | 88 us | **10x** |
| prefill | 740 | 6.5 us | 63 us | **10x** |
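A scalar sketch of the per-head scale computation described above (toy values are hypothetical):

```python
import math

def tau_head_scale(tok_linear, tau_pos_entry):
    # scale = tanh(tok_linear) + tau_pos_table[position], computed per head
    return math.tanh(tok_linear) + tau_pos_entry

# Scaling a head is then a single elementwise multiply:
q_head = [0.5, -1.0, 2.0]
s = tau_head_scale(0.3, 1.0)
q_head = [x * s for x in q_head]
```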
---
### CuTe DSL Kernels (precompiled for wheel distribution)
These kernels are written in NVIDIA CuTe DSL (Python) and precompiled to `.so` files during wheel build. The kernel source templates are excluded from wheel distribution.
#### `topk` - Bitonic Top-K Selection
GPU top-k selection using bitonic sort network with optional fused softmax.
| Context | Tokens | Kestrel | Quack | PyTorch (eager) | vs Quack | vs PyTorch |
|---------|--------|---------|-------|-----------------|----------|------------|
| decode | 1 | 23 us | 29 us | 17 us | **1.3x** | 0.8x |
| batch 16 | 16 | 22 us | 27 us | 17 us | **1.2x** | 0.8x |
| prefill | 740 | 22 us | 28 us | 17 us | **1.2x** | 0.7x |
Note: Currently slower than PyTorch for N=64, k=8. PyTorch uses a radix-based QuickSelect, which is more efficient for small N. The algorithm should be revisited.
**Python API:**
```python
from kestrel_kernels.topk import topk_fwd
values, indices = topk_fwd(scores, k=8, softmax=True)
```
#### `sampling` - Top-p Token Sampling
CuTe DSL rejection-based top-p sampler for probability tensors.
Runtime dispatch uses the CuTe kernel path by default on CUDA, with a fallback retained for unsupported cases and runtime errors.
Benchmarks below are H100 (`sm90`) dispatch-like timings (uniform generation + kernel launch), measured with heavy warmup and interleaved randomized runs:
| Shape (batch, vocab) | Kestrel CuTe | FlashInfer | vs FlashInfer |
|----------------------|--------------|------------|---------------|
| (1, 51200) | 17.37 us | 20.78 us | **1.20x** |
| (4, 51200) | 21.17 us | 21.84 us | **1.03x** |
| (128, 51200) | 38.96 us | 42.44 us | **1.09x** |
| (32, 1024) | 15.25 us | 20.50 us | **1.34x** |
**Python API:**
```python
from kestrel_kernels.sampling import top_p_sampling_from_probs
sampled_ids = top_p_sampling_from_probs(probs, top_p, generator=generator)
```
#### `cute_moe` - MoE Matrix Multiplications
Grouped GEMM kernels for Mixture-of-Experts layers, written in CuTe DSL for H100 (SM90). Supports BF16 and FP8 (W8A8) precision with both warp-level and WGMMA variants, automatically selected based on batch size.
**FP8 W8A8 Full MoE Layer** (up + activation + down + sum, E=64, k=8, with CUDA Graphs):
| Context | Tokens | Kestrel | vLLM (Triton) | vs vLLM |
|---------|--------|---------|---------------|---------|
| decode | 1 | 29 us | 51 us | **1.72x** |
| batch 4 | 4 | 79 us | 103 us | **1.30x** |
| batch 16 | 16 | 146 us | 169 us | **1.16x** |
| prefill | 740 | 245 us | 481 us | **1.96x** |
**Python API:**
```python
from kestrel_kernels import (
invoke_cute_moe_up,
invoke_cute_moe_down,
invoke_cute_moe_up_fp8,
invoke_cute_moe_down_fp8,
)
# BF16 up projection
out_up = invoke_cute_moe_up(
hidden_states, w1, w2,
topk_weights, topk_ids,
sorted_token_ids, expert_ids, num_tokens_post_pad,
)
# BF16 down projection
out_down = invoke_cute_moe_down(
moe_out, w3,
topk_weights, topk_ids,
sorted_token_ids, expert_ids, num_tokens_post_pad,
)
```
#### `moe_align` - MoE Token Alignment
Prepares sorted token indices for block-sparse MoE operations. Given topk_ids, outputs sorted token IDs grouped by expert for block-sparse matmul.
| Context | Tokens | Kestrel | vLLM | vs vLLM |
|---------|--------|---------|------|---------|
| decode | 1 | 6.7 us | 9.8 us | **1.5x** |
| batch 4 | 4 | 6.5 us | 9.8 us | **1.5x** |
| batch 16 | 16 | 7.0 us | 10 us | **1.4x** |
| prefill | 740 | 12 us | 9.2 us | 0.8x |
| long | 1024 | 12 us | 9.5 us | 0.8x |
Uses optimized single-CTA shared-memory histogram for decode (numel < 1024). Prefill path needs optimization.
**Python API:**
```python
from kestrel_kernels.moe_align import moe_align_block_size
moe_align_block_size(
topk_ids, num_experts, block_size,
sorted_token_ids, expert_ids, num_tokens_post_pad,
expert_map, # optional for expert parallelism
)
```
#### `gelu_residual` - GELU Residual Activation (CuTe DSL)
CuTe DSL implementation of GELU residual activation for BF16. Computes `GELU(h) * (g + 1)` fused gated activation used in MoE expert layers. Uses vectorized memory access and streaming stores.
| Context | Rows | CuTe | CUDA | PyTorch | vs CUDA | vs PyTorch |
|---------|------|------|------|---------|---------|------------|
| decode | 8 | 2.3 us | 2.5 us | 7.5 us | **1.10x** | **3.3x** |
| batch 4 | 32 | 2.4 us | 3.0 us | 8.6 us | **1.24x** | **3.6x** |
| batch 16 | 128 | 2.6 us | 2.9 us | 8.9 us | **1.09x** | **3.4x** |
| prefill | 5920 | 9.9 us | 11.2 us | 55.9 us | **1.14x** | **5.6x** |
#### `fp8_quant_cute` - FP8 Quantization (CuTe DSL)
CuTe DSL implementation of FP8 row-wise quantization. Converts BF16 tensors to FP8 (e4m3fn) with per-row dynamic scaling.
**hidden=1024** (MoE down projection input):
| Context | Rows | CuTe | CUDA | vs CUDA |
|---------|------|------|------|---------|
| decode | 8 | 2.5 us | 2.7 us | **1.09x** |
| batch 4 | 32 | 2.8 us | 3.0 us | **1.07x** |
| batch 16 | 128 | 2.8 us | 3.0 us | **1.08x** |
| prefill | 5920 | 5.3 us | 6.6 us | **1.23x** |
**hidden=2048** (MoE up projection input):
| Context | Rows | CuTe | CUDA | vs CUDA |
|---------|------|------|------|---------|
| decode | 8 | 2.6 us | 2.7 us | **1.02x** |
| batch 4 | 32 | 2.9 us | 3.0 us | **1.04x** |
| batch 16 | 128 | 2.9 us | 3.0 us | **1.04x** |
| prefill | 5920 | 8.2 us | 10.7 us | **1.31x** |
#### `flash_attn` - Flash Attention (Prefill & Decode)
Flash Attention kernels written in CuTe DSL, with a dedicated decode path optimized for paged FP8 KV cache. 1.3-2.5x faster than FlashInfer on typical Moondream workloads.
- FP8 KV cache with per-tensor scaling
- Paged KV (page_size=1) for fine-grained memory management
- CUDA graph compatible
- Causal and prefix-LM masking, variable-length sequences, GQA/MQA
**FP8 KV Paged Decode** (with CUDA Graphs):
| Batch | KV Len | Kestrel | FlashInfer | vs FlashInfer |
|-------|--------|---------|------------|---------------|
| 1 | 740 | 9.6 us | 12.9 us | **1.34x** |
| 1 | 1024 | 8.7 us | 13.1 us | **1.50x** |
| 4 | 740 | 17.1 us | 23.9 us | **1.40x** |
| 8 | 512 | 10.0 us | 25.2 us | **2.51x** |
| 16 | 256 | 9.6 us | 17.6 us | **1.83x** |
| 32 | 128 | 11.8 us | 26.5 us | **2.24x** |
**FP8 KV Paged Prefill**:
| Seq Len | Kestrel | FlashInfer | vs FlashInfer |
|---------|---------|------------|---------------|
| 740 | 19.9 us | 47.6 us | **2.40x** |
| 1024 | 27.3 us | 58.9 us | **2.16x** |
**Python API:**
```python
from kestrel_kernels.flash_attn.cute import flash_attn_func, flash_attn_varlen_func
# Fixed-length attention
out = flash_attn_func(q, k, v, causal=True)
# Variable-length attention
out = flash_attn_varlen_func(
q, k, v,
cu_seqlens_q, cu_seqlens_k,
max_seqlen_q, max_seqlen_k,
causal=True,
)
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"nvidia-cutlass-dsl==4.3.5",
"apache-tvm-ffi",
"torch-c-dlpack-ext",
"torch==2.9.1",
"pytest>=7.0; extra == \"dev\"",
"triton>=3.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.8 | 2026-02-21T01:29:08.636929 | kestrel_kernels-0.1.3-cp313-cp313-manylinux_2_34_x86_64.whl | 5,040,616 | 60/92/0d76434575ae01e67b9dfe2cd8642bcd380de5eaadbdaafd020544ac3819/kestrel_kernels-0.1.3-cp313-cp313-manylinux_2_34_x86_64.whl | cp313 | bdist_wheel | null | false | 7e108dd5104cd70f89a9a535d818b1c3 | 4bc87b1557fc45e55c2dab6770e707728a03aa815cc18ecb91f2667d5263bf5a | 60920d76434575ae01e67b9dfe2cd8642bcd380de5eaadbdaafd020544ac3819 | null | [] | 277 |
2.4 | bioversions | 0.8.272 | Get the current version for biological databases | <p align="center">
<img src="https://github.com/biopragmatics/bioversions/raw/main/docs/source/logo.png" height="150">
</p>
<h1 align="center">
Bioversions
</h1>
<p align="center">
<a href="https://github.com/biopragmatics/bioversions/actions/workflows/tests.yml">
<img alt="Tests" src="https://github.com/biopragmatics/bioversions/actions/workflows/tests.yml/badge.svg" /></a>
<a href="https://pypi.org/project/bioversions">
<img alt="PyPI" src="https://img.shields.io/pypi/v/bioversions" /></a>
<a href="https://pypi.org/project/bioversions">
<img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/bioversions" /></a>
<a href="https://github.com/biopragmatics/bioversions/blob/main/LICENSE">
<img alt="PyPI - License" src="https://img.shields.io/pypi/l/bioversions" /></a>
<a href='https://bioversions.readthedocs.io/en/latest/?badge=latest'>
<img src='https://readthedocs.org/projects/bioversions/badge/?version=latest' alt='Documentation Status' /></a>
<a href="https://codecov.io/gh/biopragmatics/bioversions/branch/main">
<img src="https://codecov.io/gh/biopragmatics/bioversions/branch/main/graph/badge.svg" alt="Codecov status" /></a>
<a href="https://github.com/cthoyt/cookiecutter-python-package">
<img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" /></a>
<a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
<a href="https://github.com/biopragmatics/bioversions/blob/main/.github/CODE_OF_CONDUCT.md">
<img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" alt="Contributor Covenant"/></a>
<a href="https://zenodo.org/badge/latestdoi/318852276">
<img src="https://zenodo.org/badge/318852276.svg" alt="DOI"></a>
<a href="https://github.com/biopragmatics/bioregistry">
<img alt="Powered by the Bioregistry" src="https://img.shields.io/static/v1?label=Powered%20by&message=Bioregistry&color=BA274A&style=flat&logo=image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACgAAAAoCAYAAACM/rhtAAAACXBIWXMAAAEnAAABJwGNvPDMAAAAGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAACi9JREFUWIWtmXl41MUZxz/z291sstmQO9mQG0ISwHBtOOSwgpUQhApWgUfEowKigKI81actypaqFbWPVkGFFKU0Vgs+YgvhEAoqEUESrnDlEEhCbkLYJtlkk9399Y/N/rKbzQXt96+Zed+Z9/t7Z+adeecnuA1s5yFVSGrLOAf2qTiEEYlUZKIAfYdKE7KoBLkQSc4XgkPfXxz/owmT41ZtiVtR3j94eqxQq5aDeASIvkVb12RBtt0mb5xZsvfa/5XgnqTMcI3Eq7IQjwM+7jJJo8YvNhK/qDBUOl8A7JZWWqqu01Jeg6Pd1nW4NuBjjax6eWrRruv/M8EDqTMflmXeB0Jcbb6RIRhmTCJ0ymgC0wYjadTd9nW0tWMu+In63NNU7c3FWtvgJpXrZVlakVGU8/ltEcwzGjU3miI/ABa72vwTB5K45AEi7x2PUEl9fZsHZLuDmgPHuLJpJ82lle6iTSH6mpXp+fnt/Sa4yzhbp22yfwFkgnMaBy17kPhFmQh1997qLxztNkq35XB505fINtf0iz1WvfTQ7Pxdlj4Jdnjuny5yvpEhjHh7FQOGD/YyZi4owS86HJ+QQMDpJaBf3jUXlHD21+8q0y4LDppV/vfNO7+jzV3Pa6SOac0E8I8fSPonpm7JAVR+eRhzwU/Ofj+e49tpT/HdtGXcyLvQJ8HAtCTGfmJCF2dwfpTMz4NszX/uqqdyr+xPyVwoEK+C03PGrDX4GkJ7NBJ+txH/hCgAit7cRlNxOY62dmzmZgwzJvZJUh2gI/xnRmoOHsfe3AqQ/kho0qXs+pLzLh3FgwdT54YKxLsAQq0mbf1zHuTsltZejemHJSrlgGGDPGTXc09zdM5qTi59jZbKOg+Zb1QYI95+XokEQogPDifPDnPJFQ8uCkl8FyGmACQtn4dhxp3KINX7jnHi0ZeJnT8dla8Plbu+48zzfyJ08kh8ggIACB4zlIAhsURm3EnML6eB6Fzep1a+SUt5DS2VddTs+4GQccPRhgV1kowIQRaChhMXAPxkIev/Vl+8R/HgnqTMmI4gjH/iQOIXZSqdzQUlXDB9RPyi+1DrdVx67WMursvCkDERXYxB0ROSIOKecURMG+tBzkXAhbYbZk6teNPLkwmPzUIX71wuMiw+MHx2nEJQrWIFHSdE4pIHlFDisLZxYe1HhIwfTtLK+RSu30rVnlxGvrOapOcW9DsW3vH6CgKS4zxIXlz3Fw8dSaMmcfEcV9XHYbc/DSCZMEkgFoJzY0TeO17pVL7jANbaBoauWUJlTi4VOw+T9sazBKYl0ZB/qV/kALThQRi3vOJB0lpzw0vPMONOtOHOqRcyi7bzkEqanJo3HogBMGROUrziaGundGsOsQsyUPn6UPx2NvELZxIybhinn3uLyx9uVwaW7XbqjxdQmr2X0uy93Dh+Dtlu9zCu9vdj1PsvEWwcii7OwJAXFnoRFCoVhoxJrmr0gOQWo9qBfaorXodOHq0o1x8roN3cSMyC6ZT942uQBIlL53Jl804sV6oY9/fXAGg4WcjFdZuxlFV7GNPFRzFs7VKCRiV7ejJrTa/eDr1rFKXZOQCocEyTgHQAyUdD4B2d4cF8pohg4zC0YUFU7z5C9Jy7sVvbKPtsH6GT0tCGBtFwspBTz/zRixyApbSKk8te5+aZ4l4JdUVQWpIScmQhjGocUjJCRhcTieSjURQTF89FtttpuVa
Lpaya8Knp1B3OQ5Zlag/nU//9cmScS6EnONrauWjazIQv3kCoVD3quUPS+uAXHU7z1SpATpEQchSA78AwD0WVnxa1XkdjURlCJRGQHMfN/EuEjk9jyr4NRN47Hltjc58Gm0sraTjZ/w3l5BLuKkZJdFzT1f5+3Sq3NZjRDNAjaX1orb2BX2wEmkA9fvGGbvW7Q+OlUu+2wlIqdx+h3dzkJVPrda5iQJ93p+DRqcQ/PhsAw8xJ6AfHdkhuIVvoEribLl/jxKOv4Gi34T8omgnb1yOk7sdTA01AiK3J6yoGgP+gaPwHOdOP6LlTlXb3mNYXAlI8da9/e0pJBZovV2BrakYzQK/I3bg0SsiiCqClqs/0wAPB6UOVo6k3+CdEETwm1aPtP+dLlLJPSKAHOYDWCoVLlYTkKAKcCU4vO7IrhErFsLVLPXZ+V0haDcN+v8xjB9strdQfPavUA0ckefRxWNuwVNS6rBRKQB44r+Lmc5f7TRAgaFQyYzb9Dv/4gd18ASQ8/gsC0zwJNJVcw97aeWmOcDtaAW6eLXZLBchTC8EhWXbW6o+cInhMipetuu9OUvTWNnwNodzx+krlvAQIGjmECV+spyH/Ak3F5QDok+OoPXicip2HiJiWTuH6rQx6eh7BxlT0STH4xUbSUl6Df/xAIqaO9bBVn3taKUuy/ZAwYZImpvx4FYjVRgQzOec9r1vK0TmrldMiIDkO45ZXegxLLrRW13P0/heQHQ4CUhIYvfElNIHOtWaztNJ4qZQBqfFKLg3OMz135rNY624ClB0tHJcomTA5ZMGnANbaBmoOHPMy5hvZebNuLCoj71frXIN0i9pDJzj24IsIlUTCo7NI3/KyQg5ArfMleEyKBzmA6r1HO8eV+dSEySEB2G3yRpwZP1c2f+n1GjB07RIlcwNoKi7j3G839EhQF2cg6fmHmbznPRKevJ/GorIedV1wtLVzJesrV9WqQtoIHRfWjreSjwGar1ZRui3Ho7PfwHBGb3jRg6S1roGeoIuNJGBIPKV/zSF31irOrn4HXAu9B1zduhtLecelQxZZ9xTtrgC342Df8IwQyaYqBMKEWo0xaw1BI4d4DNJSWcfF32fRWnuD5NWPEDZ5lIe8NDuHq1v+ha2xGdkho4szYJg1hbj501EH6OgJ5oIS8hf/oWPm5HqNrE51vdt4nC/7k+9bIIT8GYA2Ipixn5jwjQrrZsju0XT5GubTRfiEBqFPisUvOrzPPi0VdeQ9YcJ63bWmxbzphTk7XHKvA/DrlJkfAU+Bcy2N+fA3vZK0WVoxny4idOKIfn+IO7lTz7zRObWCjdMv7VnhruOV9dws9F8u4CsAS1k1J54wYS4o6arWaaS8hvLP998yuZtnisl7wuROLkdjsKzqqtfL45FjB8gzwZnIJy6dS8Jjs3p8ausvHG3tXN26mytZO5W8Rcjsbg1Qze/X45ELHY9I7wHLXG26+CgSl8zFkDGh3zdkF2S7nep9PzhzmnK3FEGwUWOwrJr6zTdeL529EnRhf3LmfCHEBkBZiNrwIAwZkwi9a5Qzh9D6dNvXYW3jZkEJ9UdOOYPwdY/gXgdiufuGuC2C4Hy3kWXrOhmeBLQeA6jV6GLC8Y0KR613Hn+2phZaK69jqah1P/hdsCKLLIfGtnbG+f3eyfHtEHTh38mzom2SY4WQWQjE9tnBE+XIZKuQNrqCcH9wSwRdMGGSJiTnpatwTJOFMIKcgvPVX/kNIcM1gSgC8iTZfii3aEL+7fyG+C+6O8izl1GE5gAAAABJRU5ErkJggg==" /></a>
</p>
What's the current version for each biological database?
A daily-updated static listing of the current versions of all incorporated
databases can be found at https://biopragmatics.github.io/bioversions.
## 💪 Getting Started
```python
import bioversions
assert bioversions.get_version('biogrid') == '4.2.192', 'This was true on Dec 5th, 2020!'
# If you want more information, use the resolve() function
bioversion = bioversions.resolve('biogrid')
assert bioversion.version == '4.2.192'
```
By default, the results are cached and only refreshed once per day with the help
of [`cachier`](https://github.com/shaypal5/cachier). The cache is stored in
`~/.data/bioversions`. The cache location can be overridden by setting the
`BIOVERSIONS_HOME` environment variable via
[`pystow`](https://github.com/cthoyt/pystow).
## 🌐 Web Application
While https://biopragmatics.github.io/bioversions provides a daily-updated
static listing of the databases, you can run a dynamic version with an API from
your shell with:
```console
$ bioversions web
```
Options can be listed with `bioversions web --help`.
You can navigate to http://localhost:5000 to see all versions as HTML or
programmatically resolve given databases with the
`http://localhost:5000/database/<name>` endpoint like in the following:
```python
import requests
res = requests.get('http://localhost:5000/database/biogrid').json()
assert res['success']
assert res['result']['name'] == 'BioGRID'
assert res['result']['version'] == '4.2.192', 'This was true on Dec 5th, 2020!'
```
## CLI Usage
You can use `bioversions get` to incorporate the latest versions in your shell
scripts or REPL usage like in:
```console
$ wget "https://downloads.thebiogrid.org/Download/BioGRID/Release-Archive/BIOGRID-$(bioversions get biogrid)/BIOGRID-ALL-$(bioversions get biogrid).mitab.zip"
```
## 🚀 Installation
The most recent release can be installed from
[PyPI](https://pypi.org/project/bioversions/) with uv:
```console
$ uv pip install bioversions
```
or with pip:
```console
$ python3 -m pip install bioversions
```
The most recent code and data can be installed directly from GitHub with uv:
```console
$ uv pip install git+https://github.com/biopragmatics/bioversions.git
```
or with pip:
```console
$ python3 -m pip install git+https://github.com/biopragmatics/bioversions.git
```
## 👐 Contributing
Contributions, whether filing an issue, making a pull request, or forking, are
appreciated. See
[CONTRIBUTING.md](https://github.com/biopragmatics/bioversions/blob/master/.github/CONTRIBUTING.md)
for more information on getting involved.
To add more databases to the list, you can create a new submodule of
`bioversions.sources` and extend the `bioversions.utils.Getter` class to
identify the most recent version for your target database. See
`bioversions.sources.biogrid` as an example.
## 👋 Attribution
### ⚖️ License
The code in this package is licensed under the MIT License.
<!--
### 📖 Citation
Citation goes here!
-->
### 🎁 Support
The Bioversions service was originally developed by the
[INDRA Lab](https://indralab.github.io), a part of the
[Laboratory of Systems Pharmacology](https://hits.harvard.edu/the-program/laboratory-of-systems-pharmacology/about/)
and the
[Harvard Program in Therapeutic Science (HiTS)](https://hits.harvard.edu) at
[Harvard Medical School](https://hms.harvard.edu/).
### 💰 Funding
The development of this package was partially funded by the DARPA Young Faculty
Award W911NF2010255 (PI: Benjamin M. Gyori).
### 🍪 Cookiecutter
This package was created with
[@audreyfeldroy](https://github.com/audreyfeldroy)'s
[cookiecutter](https://github.com/cookiecutter/cookiecutter) package using
[@cthoyt](https://github.com/cthoyt)'s
[cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack)
template.
## 🛠️ For Developers
<details>
<summary>See developer instructions</summary>
The final section of the README is for those who want to get involved by making
a code contribution.
### Development Installation
To install in development mode, use the following:
```console
$ git clone https://github.com/biopragmatics/bioversions.git
$ cd bioversions
$ uv pip install -e .
```
Alternatively, install using pip:
```console
$ python3 -m pip install -e .
```
### 🥼 Testing
After cloning the repository and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, the
unit tests in the `tests/` folder can be run reproducibly with:
```console
$ tox -e py
```
Additionally, these tests are automatically re-run with each commit in a
[GitHub Action](https://github.com/biopragmatics/bioversions/actions?query=workflow%3ATests).
### 📖 Building the Documentation
The documentation can be built locally using the following:
```console
$ git clone https://github.com/biopragmatics/bioversions.git
$ cd bioversions
$ tox -e docs
$ open docs/build/html/index.html
```
The documentation automatically installs the package as well as the `docs` extra
specified in the [`pyproject.toml`](pyproject.toml). `sphinx` plugins like
`texext` can be added there. Additionally, they need to be added to the
`extensions` list in [`docs/source/conf.py`](docs/source/conf.py).
The documentation can be deployed to [ReadTheDocs](https://readthedocs.io) using
[this guide](https://docs.readthedocs.io/en/stable/intro/import-guide.html). The
[`.readthedocs.yml`](.readthedocs.yml) YAML file contains all the configuration
you'll need. You can also set up continuous integration on GitHub to check not
only that Sphinx can build the documentation in an isolated environment (i.e.,
with `tox -e docs-test`) but also that
[ReadTheDocs can build it too](https://docs.readthedocs.io/en/stable/pull-requests.html).
</details>
## 🧑💻 For Maintainers
<details>
<summary>See maintainer instructions</summary>
### Initial Configuration
#### Configuring ReadTheDocs
[ReadTheDocs](https://readthedocs.org) is an external documentation hosting
service that integrates with GitHub's CI/CD. Do the following for each
repository:
1. Log in to ReadTheDocs with your GitHub account to install the integration at
https://readthedocs.org/accounts/login/?next=/dashboard/
2. Import your project by navigating to https://readthedocs.org/dashboard/import
then clicking the plus icon next to your repository
3. You can rename the repository on the next screen using a more stylized name
(e.g., with spaces and capital letters)
4. Click next, and you're good to go!
#### Configuring Archival on Zenodo
[Zenodo](https://zenodo.org) is a long-term archival system that assigns a DOI
to each release of your package. Do the following for each repository:
1. Log in to Zenodo via GitHub with this link:
https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page
that lists all of your organizations and asks you to approve installing the
Zenodo app on GitHub. Click "grant" next to any organizations you want to
enable the integration for, then click the big green "approve" button. This
step only needs to be done once.
2. Navigate to https://zenodo.org/account/settings/github/, which lists all of
your GitHub repositories (both in your username and any organizations you
enabled). Click the on/off toggle for any relevant repositories. When you
make a new repository, you'll have to come back to this page to enable it.
After these steps, you're ready to go! After you make a release on GitHub (steps
for this are below), you can navigate to
https://zenodo.org/account/settings/github/repository/biopragmatics/bioversions
to see the DOI for the release and link to the Zenodo record for it.
#### Registering with the Python Package Index (PyPI)
The [Python Package Index (PyPI)](https://pypi.org) hosts packages so they can
be easily installed with `pip`, `uv`, and equivalent tools.
1. Register for an account [here](https://pypi.org/account/register)
2. Navigate to https://pypi.org/manage/account and make sure you have verified
your email address. A verification email might not have been sent by default,
so you might have to click the "options" dropdown next to your address to get
to the "re-send verification email" button
3. Two-factor authentication has been required for PyPI since the end of 2023 (see this
[blog post from PyPI](https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2fa/)).
This means you have to first issue account recovery codes, then set up
two-factor authentication
4. Issue an API token from https://pypi.org/manage/account/token
This only needs to be done once per developer.
#### Configuring your machine's connection to PyPI
This needs to be done once per machine.
```console
$ uv tool install keyring
$ keyring set https://upload.pypi.org/legacy/ __token__
$ keyring set https://test.pypi.org/legacy/ __token__
```
Note that this deprecates previous workflows using `.pypirc`.
### 📦 Making a Release
#### Uploading to PyPI
After installing the package in development mode and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, run
the following from the console:
```console
$ tox -e finish
```
This script does the following:
1. Uses [bump-my-version](https://github.com/callowayproject/bump-my-version) to
switch the version number in the `pyproject.toml`, `CITATION.cff`,
`src/bioversions/version.py`, and
[`docs/source/conf.py`](docs/source/conf.py) to not have the `-dev` suffix
2. Packages the code in both a tar archive and a wheel using
[`uv build`](https://docs.astral.sh/uv/guides/publish/#building-your-package)
3. Uploads to PyPI using
[`uv publish`](https://docs.astral.sh/uv/guides/publish/#publishing-your-package).
4. Pushes to GitHub. You'll need to make a release for the commit where the
version was bumped.
5. Bumps the version to the next patch. If you made big changes and want to
bump the minor version instead, run `tox -e bumpversion -- minor` afterwards.
#### Releasing on GitHub
1. Navigate to https://github.com/biopragmatics/bioversions/releases/new to
draft a new release
2. Click the "Choose a Tag" dropdown and select the tag corresponding to the
release you just made
3. Click the "Generate Release Notes" button to get a quick outline of recent
changes. Modify the title and description as you see fit
4. Click the big green "Publish Release" button
This will trigger Zenodo to assign a DOI to your release as well.
### Updating Package Boilerplate
This project uses `cruft` to keep boilerplate (i.e., configuration, contribution
guidelines, documentation configuration) up-to-date with the upstream
cookiecutter package. Install cruft with either `uv tool install cruft` or
`python3 -m pip install cruft` then run:
```console
$ cruft update
```
More info on Cruft's update command is available
[here](https://github.com/cruft/cruft?tab=readme-ov-file#updating-a-project).
</details>
| text/markdown | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | null | snekpack, cookiecutter, databases, biological databases, biomedical databases | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Framework :: Pytest",
"Framework :: tox",
"Framework :: Sphinx",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests",
"beautifulsoup4",
"cachier>=2.2.1",
"pystow[bs4]>=0.7.8",
"click",
"click-default-group",
"dataclasses-json",
"tabulate",
"more-click",
"pyyaml",
"tqdm",
"bioregistry[align]>=0.11.35",
"lxml",
"pydantic>=2.0",
"psycopg2-binary",
"python-dateutil",
"class-resolver>=0.7.1",
"ror-downloader>=0.0.4",
"matplotlib; extra == \"charts\"",
"seaborn; extra == \"charts\"",
"slack-sdk; extra == \"slack\"",
"tweepy; extra == \"twitter\"",
"flask; extra == \"web\"",
"bootstrap-flask; extra == \"web\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/biopragmatics/bioversions/issues",
"Homepage, https://github.com/biopragmatics/bioversions",
"Repository, https://github.com/biopragmatics/bioversions.git",
"Documentation, https://bioversions.readthedocs.io",
"Funding, https://github.com/sponsors/cthoyt"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T01:28:57.881435 | bioversions-0.8.272-py3-none-any.whl | 122,299 | 44/cd/b96dd3efb48d594c78084edda34c0c8d59689d76f8f46e727bcf64d8689e/bioversions-0.8.272-py3-none-any.whl | py3 | bdist_wheel | null | false | 444b0eb21b4e97ddabab4c4850af9932 | 47c45135cfc0074c656d2cced4a5eba6c4c2b558a3b6e49c084bcbd0cea3f505 | 44cdb96dd3efb48d594c78084edda34c0c8d59689d76f8f46e727bcf64d8689e | null | [
"LICENSE"
] | 387 |
2.4 | bashautom | 0.1.1 | Persistent bash sessions for Python | # bashautom
Persistent bash sessions for Python.
Unlike `subprocess.run()` which spawns a new process every time, bashautom keeps a `/bin/bash` process alive so state (env vars, cwd, etc.) persists across commands.
```python
from bashautom import Session
with Session() as s:
    s.execute("cd /opt/myproject")
    s.execute("source .env")
    s.execute("export BUILD_ID=42")
    result = s.execute("make build")
```
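The core mechanism (one long-lived bash process, with commands written to its stdin and output read back up to a sentinel marker) can be sketched with the standard library alone. This is a simplified toy, not bashautom's actual implementation:

```python
import subprocess

class MiniSession:
    """Toy persistent bash session: state survives between execute() calls."""
    SENTINEL = "__MINI_DONE__"

    def __init__(self):
        self.proc = subprocess.Popen(
            ["/bin/bash"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            text=True, bufsize=1,
        )

    def execute(self, command: str) -> str:
        # Run the command, then echo a sentinel so we know where output ends.
        self.proc.stdin.write(f"{command}\necho {self.SENTINEL}\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if line.strip() == self.SENTINEL:
                break
            lines.append(line)
        return "".join(lines)

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()
```

Because the same bash process handles every command, an `export` or `cd` from one call is still visible in the next, which is exactly what `subprocess.run()` cannot give you.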
## Install
```bash
pip install bashautom
```
Python 3.10+, Linux/macOS only.
## Usage
```python
from bashautom import Session
with Session() as s:
    result = s.execute("echo hello")
    print(result.stdout)
    print(result.exit_code)
    print(result.success)
```
### Timeouts
Commands can be killed without destroying the session:
```python
with Session() as s:
    result = s.execute("sleep 60", timeout=3)
    print(result.timed_out)
    # session still works
    s.execute("echo ok")
```
### Streaming
```python
from bashautom import Session
from bashautom.session import StreamEvent

def on_output(event: StreamEvent):
    print(f"[{event.stream}] {event.data.strip()}")

with Session() as s:
    s.execute("for i in 1 2 3; do echo $i; sleep 0.5; done", stream_callback=on_output)
```
### Multiple sessions
```python
from bashautom import SessionManager
with SessionManager() as mgr:
    build = mgr.create("build", cwd="/opt/project")
    deploy = mgr.create("deploy", cwd="/opt/infra")
    build.execute("make release")
    deploy.execute("./deploy.sh")
```
### Env helpers
```python
with Session() as s:
    s.set_env("PROJECT", "bashautom")
    print(s.get_env("PROJECT"))
    print(s.get_cwd())
    print(s.pid)
    print(s.alive)
```
## API
### Session
- `execute(command, timeout=None, stream_callback=None)` - run a command, returns `CommandResult`
- `send_signal(sig=SIGINT)` - send a signal to the running process
- `get_cwd()` / `get_env(var)` / `set_env(var, value)` - shell state access
- `close()` - kill the session
- `pid`, `alive` - process info
### CommandResult
- `command`, `stdout`, `stderr` - what ran and what came back
- `exit_code`, `success`, `timed_out` - status
- `duration` - wall time in seconds
### SessionManager
- `create(name, ...)` / `get(name)` / `get_or_create(name, ...)` - session lifecycle
- `close(name)` / `close_all()` - cleanup
- `names`, `active` - introspection
## License
MIT
| text/markdown | null | Huskago <huskago@gmail.com> | null | null | MIT | bash, automation, shell, session, subprocess | [
"Development Status :: 3 - Alpha",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Topic :: System :: Shells",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/huskago/bashautom",
"Documentation, https://github.com/huskago/bashautom#readme",
"Issues, https://github.com/huskago/bashautom/issues",
"Changelog, https://github.com/huskago/bashautom/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:28:50.514794 | bashautom-0.1.1.tar.gz | 8,358 | 28/12/5819192ef9a2bf6814e0c93cc18f47c04083aea10046079ea390aafaf0a3/bashautom-0.1.1.tar.gz | source | sdist | null | false | 4b929bcfb62c092e2471340316556e01 | 785e9aa4b29106963865f03dea0d274bd9c99dac4d457dc9aa7f68d0ab39ccd3 | 28125819192ef9a2bf6814e0c93cc18f47c04083aea10046079ea390aafaf0a3 | null | [
"LICENSE"
] | 228 |
2.4 | chuk-ai-session-manager | 0.10.3 | Session manager for AI applications | # CHUK AI Session Manager
**A powerful session management system for AI applications**
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
Automatic conversation tracking, token usage monitoring, tool call logging, infinite context support with automatic summarization, and hierarchical session relationships. Perfect for AI applications that need reliable session management.
## 🚀 Quick Start
### Installation Options
```bash
# Basic installation (memory storage only)
pip install chuk-ai-session-manager
# With Redis support for persistent storage
pip install chuk-ai-session-manager[redis]
# With enhanced token counting
pip install chuk-ai-session-manager[tiktoken]
# Full installation with all optional features
pip install chuk-ai-session-manager[all]
# Development installation
pip install chuk-ai-session-manager[dev]
```
### Quick Example
```python
from chuk_ai_session_manager import track_conversation
# Track any conversation automatically
session_id = await track_conversation(
user_message="What's the weather like?",
ai_response="I don't have access to real-time weather data.",
model="gpt-3.5-turbo",
provider="openai"
)
print(f"Conversation tracked in session: {session_id}")
```
That's it! Zero configuration required.
## ⚡ Major Features
### 🧠 **AI Virtual Memory**
OS-style memory management for AI context windows. Pages, working sets, faults, eviction, and compression — giving conversations the illusion of infinite memory.
```python
from chuk_ai_session_manager import SessionManager
from chuk_ai_session_manager.memory import (
MemoryManager, CompressorRegistry, ImportanceWeightedLRU,
PageType, VMMode, WorkingSetConfig,
)
# Zero-config: just enable VM on SessionManager
sm = SessionManager(enable_vm=True, vm_mode=VMMode.STRICT)
# Or fully customize eviction and compression
vm = MemoryManager(
session_id="my_session",
config=WorkingSetConfig(max_l0_tokens=32_000),
eviction_policy=ImportanceWeightedLRU(), # Swappable strategy
compressor_registry=CompressorRegistry.default(), # Per-modality compression
)
# Create pages, add to working set
page = vm.create_page("Decision: Use JWT for auth", page_type=PageType.CLAIM)
await vm.add_to_working_set(page)
# Build context for LLM call
ctx = vm.build_context(system_prompt="You are helpful.")
# ctx["developer_message"] has VM:RULES + VM:MANIFEST_JSON + VM:CONTEXT
```
**Eviction policies**: `ImportanceWeightedLRU` (default), `LRUEvictionPolicy`, `ModalityAwareLRU` — or implement the `EvictionPolicy` protocol for custom strategies.
**Compression**: Pages compress through FULL → REDUCED → ABSTRACT → REFERENCE before eviction, saving tokens without losing context. Text, image, and passthrough compressors included; plug in custom `summarize_fn` for LLM-based compression.
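As an illustration of how an importance-weighted LRU decision can work, here is a minimal sketch (hypothetical `Page` and `pick_victim` names, not the library's actual classes): the victim is the page whose importance, discounted by how long it has sat unused, is lowest.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Page:
    content: str
    importance: float = 1.0
    last_used: float = field(default_factory=time.monotonic)

def pick_victim(pages: list[Page]) -> Page:
    # Lowest importance-per-second-of-staleness gets evicted first.
    now = time.monotonic()
    return min(pages, key=lambda p: p.importance / (1.0 + now - p.last_used))
```

Plain LRU is the special case where every page has the same importance; the protocol-based design lets you swap in any scoring rule.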
See [AI Virtual Memory docs](docs/memory/README.md) for full documentation.
### 🎯 **Zero-Configuration Tracking**
```python
from chuk_ai_session_manager import SessionManager
# Just start using it
sm = SessionManager()
await sm.user_says("Hello!")
await sm.ai_responds("Hi there!", model="gpt-4")
# Get stats instantly
stats = await sm.get_stats()
print(f"Tokens: {stats['total_tokens']}, Cost: ${stats['estimated_cost']:.4f}")
```
### 🔄 **Infinite Context**
```python
# Automatically handles conversations longer than token limits
sm = SessionManager(infinite_context=True, token_threshold=4000)
await sm.user_says("Tell me about the history of computing...")
await sm.ai_responds("Computing history begins with...", model="gpt-4")
# Session will auto-segment when limits are reached
```
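The segmentation idea can be sketched in a few lines (hypothetical names and a toy token heuristic, not the library's actual logic): count approximate tokens per segment and start a new one when the threshold is crossed.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (tiktoken gives exact counts).
    return max(1, len(text) // 4)

class MiniInfiniteContext:
    def __init__(self, token_threshold: int = 4000):
        self.token_threshold = token_threshold
        self.segments = [[]]  # list of message lists

    def add(self, message: str) -> None:
        current = self.segments[-1]
        current.append(message)
        if sum(estimate_tokens(m) for m in current) > self.token_threshold:
            # Threshold crossed: open a fresh segment. The real library also
            # summarizes the old segment so context carries forward.
            self.segments.append([])
```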
### ⚙️ **Storage Backends**
| Installation | Storage | Use Case | Performance |
|-------------|---------|----------|-------------|
| `pip install chuk-ai-session-manager` | Memory | Development, testing | 1.8M ops/sec |
| `pip install chuk-ai-session-manager[redis]` | Redis | Persistent, distributed | 20K ops/sec |
### 🛡️ **Conversation Guards and Tool State**
Runtime guardrails that prevent runaway tool loops, track value bindings, and enforce grounded tool calls.
```python
from chuk_ai_session_manager.guards import get_tool_state, ToolStateManager
# Get the singleton tool state manager
tool_state = get_tool_state()
# Track tool calls and bind results as $v1, $v2, ...
binding = tool_state.bind_value("sqrt", {"x": 16}, 4.0)
# LLM can now reference $v1 in subsequent calls
# Check for runaway tool loops
status = tool_state.check_runaway()
# Detect ungrounded calls (missing $vN references)
check = tool_state.check_ungrounded_call("normal_cdf", {"mean": 0, "std": 1, "x": 1.5})
# Reset state for a new prompt
tool_state.reset_for_new_prompt()
```
**Guard components:**
- `ToolStateManager` - Coordinator for all guards, bindings, and cache
- `BindingManager` - `$vN` reference system for tracking tool results
- `ResultCache` - Tool result caching for deduplication
- `UngroundedGuard` - Detects calls with missing computed-value references
- Runtime guards (budget, runaway, per-tool limits) from `chuk-tool-processor`
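The `$vN` binding idea is simple enough to sketch (hypothetical class, not the actual `BindingManager` API): each tool result gets a sequential reference the model can cite instead of restating raw values.

```python
class MiniBindings:
    def __init__(self):
        self._values = {}
        self._counter = 0

    def bind(self, value):
        # Assign the next $vN reference to this tool result.
        self._counter += 1
        ref = f"$v{self._counter}"
        self._values[ref] = value
        return ref

    def resolve(self, ref):
        return self._values[ref]
```

An ungrounded-call check then reduces to scanning a tool call's arguments for computed values that should have been passed as `$vN` references instead.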
### 🧩 **Procedural Memory**
Learn from tool call history to improve future tool use.
```python
from chuk_ai_session_manager import ToolMemoryManager, ProceduralContextFormatter
# Record tool outcomes
memory = ToolMemoryManager()
await memory.record("calculator", {"op": "add", "a": 5, "b": 3}, result=8, success=True)
# Format learned patterns for the model's context
formatter = ProceduralContextFormatter()
context = formatter.format(memory.get_patterns())
```
### 🛠️ **Tool Integration**
```python
# Automatic tool call tracking
await sm.tool_used(
tool_name="calculator",
arguments={"operation": "add", "a": 5, "b": 3},
result={"result": 8}
)
```
## 💡 Common Use Cases
### Web App Conversation Tracking
```python
from chuk_ai_session_manager import track_conversation
# In your chat endpoint
session_id = await track_conversation(
user_message=request.message,
ai_response=ai_response,
model="gpt-4",
provider="openai",
session_id=request.session_id # Continue existing conversation
)
```
### LLM Wrapper with Automatic Tracking
```python
from chuk_ai_session_manager import track_llm_call
import openai
async def my_openai_call(prompt):
    response = await openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
# Automatically tracked
response, session_id = await track_llm_call(
user_input="Explain machine learning",
llm_function=my_openai_call,
model="gpt-3.5-turbo",
provider="openai"
)
```
### Long Conversations with Auto-Segmentation
```python
from chuk_ai_session_manager import track_infinite_conversation
# Start a conversation
session_id = await track_infinite_conversation(
user_message="Tell me about the history of computing",
ai_response="Computing history begins with ancient calculating devices...",
model="gpt-4",
token_threshold=4000 # Auto-segment after 4000 tokens
)
# Continue the conversation - will auto-segment if needed
session_id = await track_infinite_conversation(
user_message="What about quantum computers?",
ai_response="Quantum computing represents a fundamental shift...",
session_id=session_id,
model="gpt-4"
)
```
## 🔧 Configuration
### Storage Configuration
```bash
# Memory provider (default) - fast, no persistence
export SESSION_PROVIDER=memory
# Redis provider - persistent, distributed (requires redis extra)
export SESSION_PROVIDER=redis
export SESSION_REDIS_URL=redis://localhost:6379/0
```
### Installation Matrix
| Command | Memory | Redis | Token Counting | Use Case |
|---------|--------|-------|----------------|----------|
| `pip install chuk-ai-session-manager` | ✅ | ❌ | Basic | Development |
| `pip install chuk-ai-session-manager[redis]` | ✅ | ✅ | Basic | Persistent |
| `pip install chuk-ai-session-manager[tiktoken]` | ✅ | ❌ | Enhanced | Better accuracy |
| `pip install chuk-ai-session-manager[all]` | ✅ | ✅ | Enhanced | Full features |
## 📊 Monitoring & Analytics
```python
# Get comprehensive session analytics
stats = await sm.get_stats(include_all_segments=True)
print(f"""
🚀 Session Analytics Dashboard
============================
Session ID: {stats['session_id']}
Total Messages: {stats['total_messages']}
User Messages: {stats['user_messages']}
AI Messages: {stats['ai_messages']}
Tool Calls: {stats['tool_calls']}
Total Tokens: {stats['total_tokens']}
Total Cost: ${stats['estimated_cost']:.6f}
Session Segments: {stats.get('session_segments', 1)}
""")
```
## 🏗️ Why CHUK AI Session Manager?
- **Zero Configuration**: Start tracking conversations in 3 lines of code
- **Infinite Context**: Never worry about token limits again
- **Universal**: Works with any LLM provider (OpenAI, Anthropic, etc.)
- **Robust**: Built-in persistence, monitoring, and error handling
- **Token Aware**: Automatic cost tracking across all providers
- **Tool Friendly**: Seamless tool call logging and retry mechanisms
- **Guardrails**: Runtime guards prevent runaway tool loops and ungrounded calls
- **Procedural Memory**: Learn from tool call history to improve future use
## 🛡️ Error Handling
```python
from chuk_ai_session_manager import (
SessionManagerError,
SessionNotFound,
TokenLimitExceeded
)
try:
    session_id = await track_conversation("Hello", "Hi there")
except SessionNotFound as e:
    print(f"Session not found: {e}")
except TokenLimitExceeded as e:
    print(f"Token limit exceeded: {e}")
except SessionManagerError as e:
    print(f"General session error: {e}")
```
## 🔄 Dependencies
- **Required**: `chuk-sessions` (session storage), `pydantic` (data models), `chuk-tool-processor` (tool integration)
- **Optional**: `redis` (Redis storage), `tiktoken` (accurate token counting)
## 📄 License
Apache 2.0 - build amazing AI applications with confidence!
---
**Ready to build better AI applications?**
```bash
pip install chuk-ai-session-manager
```
**Start tracking conversations in 30 seconds!**
| text/markdown | null | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"chuk-sessions>=0.6.1",
"chuk-tool-processor>=0.20.1",
"pydantic>=2.11.3",
"chuk-sessions[redis]>=0.4.1; extra == \"redis\"",
"redis>=4.0.0; extra == \"redis\"",
"tiktoken>=0.9.0; extra == \"tiktoken\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=1.0.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"bandit>=1.7.0; extra == \"dev\"",
"coverage>=7.0.0; extra == \"dev\"",
"chuk-sessions[redis]>=0.6.1; extra == \"all\"",
"redis>=4.0.0; extra == \"all\"",
"tiktoken>=0.9.0; extra == \"all\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.2 | 2026-02-21T01:28:02.560368 | chuk_ai_session_manager-0.10.3.tar.gz | 233,755 | ce/37/03b5dae7bd8871d13473a7f67c8004fa630fa4321901931a4bd47298f41e/chuk_ai_session_manager-0.10.3.tar.gz | source | sdist | null | false | 2f7a6bf1c3fb86a0caf9839190c34504 | 92bec613ecdd875251265f8a8e67adec2c9bc8f2ed0ace3dddfb2e228548808f | ce3703b5dae7bd8871d13473a7f67c8004fa630fa4321901931a4bd47298f41e | null | [
"LICENSE"
] | 615 |
2.4 | pylitterbot | 2025.1.0 | Python package for controlling Whisker automatic robots. | # pylitterbot
[](https://pypi.org/project/pylitterbot/)
[](https://ko-fi.com/natekspencer)
[](https://github.com/sponsors/natekspencer)
[](https://share.litter-robot.com/x/YZ325z)
[](LICENSE)
[](https://pypi.org/project/pylitterbot/)


Python package for controlling Whisker connected self-cleaning litter boxes and feeders.
This is an unofficial API for controlling various Whisker automated robots. It currently supports Litter-Robot 3 (with connect), Litter-Robot 4 and Feeder-Robot.
## Disclaimer
This API is experimental and was reverse-engineered by monitoring network traffic and decompiling source code from the Whisker app, since no public API is currently available. It may cease to work at any time. Use at your own risk.
## Installation
Install using pip:
```bash
pip install pylitterbot
```
Alternatively, clone the repository and run:
```bash
uv sync
```
## Usage
```python
import asyncio
from pylitterbot import Account
# Set email and password for initial authentication.
username = "Your username"
password = "Your password"
async def main():
    # Create an account.
    account = Account()
    try:
        # Connect to the API and load robots.
        await account.connect(username=username, password=password, load_robots=True)
        # Print robots associated with account.
        print("Robots:")
        for robot in account.robots:
            print(robot)
    finally:
        # Disconnect from the API.
        await account.disconnect()

if __name__ == "__main__":
    asyncio.run(main())
```
which will output something like:
```
Name: Litter-Robot Name, Serial: LR3C012345, id: a0123b4567cd8e
```
To start a clean cycle:
```python
await robot.start_cleaning()
```
If no exception occurred, your Litter-Robot should now perform a clean cycle.
Currently the following methods are available in the Robot class:
- refresh()
- start_cleaning()
- reset_settings()
- set_panel_lockout()
- set_night_light()
- set_power_status()
- set_sleep_mode()
- set_wait_time()
- set_name()
- get_activity_history()
- get_insight()
## Contributing
Thank you for your interest in contributing! Follow these steps to set up your environment and ensure your changes meet the project's standards.
### Setup
1. Clone this repository:
```bash
git clone https://github.com/natekspencer/pylitterbot.git
cd pylitterbot
```
2. Install dependencies and pre-commit hooks:
```bash
uv sync
pre-commit install
```
### Guidelines
- **Code Formatting:** Ensure your code is properly formatted. This project uses `ruff` for linting and formatting.
- **Typing:** All code must be fully typed. Use `mypy` to check for type issues:
```bash
mypy .
```
- **Testing:** Add tests for any new features or changes. Run the test suite with:
```bash
pytest
```
- **Commit Messages:** Follow conventional commit messages, e.g., `feat: add new feature` or `fix: resolve issue with X`
### Submitting Changes
1. Create a new branch for your feature or fix:
```bash
git checkout -b feature/your-feature
```
2. Make your changes and commit them.
3. Push to your fork and open a pull request.
I appreciate your contributions! 🚀
---
## TODO
- Validate Litter-Robot EVO model data/endpoints if it differs from Litter-Robot 5
- Add support for Litter-Robot 5 cameras
---
## ❤️ Support Me
I maintain this python project in my spare time. If you find it useful, consider supporting development:
- 💜 [Sponsor me on GitHub](https://github.com/sponsors/natekspencer)
- ☕ [Buy me a coffee / beer](https://ko-fi.com/natekspencer)
- 💸 [PayPal (direct support)](https://www.paypal.com/paypalme/natekspencer)
- ⭐ [Star this project](https://github.com/natekspencer/pylitterbot)
- 📦 If you’d like to support in other ways, such as donating hardware for testing, feel free to [reach out to me](https://github.com/natekspencer)
If you don't already own a Litter-Robot, please consider using [my referral link](https://share.litter-robot.com/x/XJAY1D) to purchase your own robot and save $50!
## 📈 Star History
[](https://www.star-history.com/#natekspencer/pylitterbot)
| text/markdown | null | Nathan Spencer <natekspencer@gmail.com> | null | null | null | Feeder-Robot, Litter-Robot, Whisker, asynchronous, litter box, pet feeder | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.8.1",
"deepdiff>=6.2.1",
"pycognito>=2024.2.0",
"pyjwt>=2.7.0"
] | [] | [] | [] | [
"Homepage, https://github.com/natekspencer/pylitterbot",
"Repository, https://github.com/natekspencer/pylitterbot"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T01:26:51.686115 | pylitterbot-2025.1.0-py3-none-any.whl | 53,623 | a2/bb/d57f9f56aa0e4a2fa89eff398006bf44f218c484f8239839e513b873c9ff/pylitterbot-2025.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | c0d3cddb43e03ae044f4a651d5ca8636 | db3070088cb017cc97c4e121f3a315bd878458066cc944239e3a997a73746981 | a2bbd57f9f56aa0e4a2fa89eff398006bf44f218c484f8239839e513b873c9ff | MIT | [
"LICENSE"
] | 507 |
2.4 | facevault | 0.1.0 | Python SDK for the FaceVault identity verification API | # FaceVault Python SDK
Python client for the [FaceVault](https://facevault.id) identity verification API. Supports sync and async usage, webhook verification, and typed models.
## Installation
```bash
pip install facevault
```
## Quick start
### Sync
```python
from facevault import FaceVaultClient
client = FaceVaultClient("fv_live_your_api_key")
# Create a verification session
session = client.create_session(external_user_id="user-123")
print(session.webapp_url) # Send this URL to your user
# Check session status
status = client.get_session(session.session_id)
print(status.status) # "pending", "completed", "failed"
client.close()
```
### Async
```python
from facevault import AsyncFaceVaultClient
async def verify_user():
    async with AsyncFaceVaultClient("fv_live_your_api_key") as client:
        session = await client.create_session(external_user_id="user-123")
        print(session.webapp_url)
```
## Webhook verification
```python
from facevault import verify_signature, parse_event
# Verify the webhook signature
body = request.body
signature = request.headers["X-Signature"]
if verify_signature(body, signature, secret="your_webhook_secret"):
    event = parse_event(body)
    print(event.event)  # "session.completed"
    print(event.session_id)
    print(event.face_match_passed)
```
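Webhook signature checks of this kind are usually an HMAC comparison. Here is a stdlib sketch of the general pattern, assuming HMAC-SHA256 hex digests (check FaceVault's docs for the exact scheme it uses):

```python
import hashlib
import hmac

def verify_hmac_signature(body: bytes, signature: str, secret: str) -> bool:
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker.
    return hmac.compare_digest(expected, signature)
```

Always verify against the raw request body bytes, before any JSON parsing, since re-serialization can change the byte sequence.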
## Error handling
```python
from facevault import FaceVaultClient, AuthError, NotFoundError, RateLimitError
client = FaceVaultClient("fv_live_your_api_key")
try:
    session = client.get_session("nonexistent")
except AuthError:
    print("Invalid API key")
except NotFoundError:
    print("Session not found")
except RateLimitError:
    print("Too many requests")
```
## Documentation
Full API docs at [facevault.id/docs](https://facevault.id/docs).
## License
MIT
| text/markdown | Khreechari | null | null | null | null | facevault, identity, kyc, liveness, verification | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Security",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx<1,>=0.24"
] | [] | [] | [] | [
"Homepage, https://facevault.id",
"Documentation, https://facevault.id/docs",
"Repository, https://github.com/khreechari/facevault-python"
] | Hatch/1.16.3 cpython/3.11.14 HTTPX/0.28.1 | 2026-02-21T01:26:40.541260 | facevault-0.1.0.tar.gz | 8,548 | 24/5a/1982155a7a5c0b299cf958b19346e49755790369fd68af44bd3848b4776f/facevault-0.1.0.tar.gz | source | sdist | null | false | 6c6df463836aa3777c0a5ea8ced1320b | c580b47b1daab4b354a6ff0ed3e32f09625db9723cfcd9a0ddb04fe7a0004f2d | 245a1982155a7a5c0b299cf958b19346e49755790369fd68af44bd3848b4776f | MIT | [
"LICENSE"
] | 241 |
2.4 | langsmith | 0.7.6 | Client library to connect to the LangSmith Observability and Evaluation Platform. | # LangSmith Client SDK
[](https://github.com/langchain-ai/langsmith-sdk/releases)
[](https://pepy.tech/project/langsmith)
This package contains the Python client for interacting with the [LangSmith platform](https://smith.langchain.com/).
To install:
```bash
pip install -U langsmith
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=ls_...
```
Then trace:
```python
import openai
from langsmith.wrappers import wrap_openai
from langsmith import traceable
# Auto-trace LLM calls in-context
client = wrap_openai(openai.Client())
@traceable  # Auto-trace this function
def pipeline(user_input: str):
    result = client.chat.completions.create(
        messages=[{"role": "user", "content": user_input}],
        model="gpt-3.5-turbo"
    )
    return result.choices[0].message.content
pipeline("Hello, world!")
```
See the resulting nested trace [🌐 here](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).
LangSmith helps you and your team develop and evaluate language models and intelligent agents. It is compatible with any LLM application.
> **Cookbook:** For tutorials on how to get more value out of LangSmith, check out the [Langsmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook/tree/main) repo.
A typical workflow looks like:
1. Set up an account with LangSmith.
2. Log traces while debugging and prototyping.
3. Run benchmark evaluations and continuously improve with the collected data.
We'll walk through these steps in more detail below.
## 1. Connect to LangSmith
Sign up for [LangSmith](https://smith.langchain.com/) using your GitHub or Discord account, or with an email address and password. If you sign up with an email, make sure to verify your email address before logging in.
Then, create a unique API key on the [Settings Page](https://smith.langchain.com/settings), which is found in the menu at the top right corner of the page.
> [!NOTE]
> Save the API Key in a secure location. It will not be shown again.
## 2. Log Traces
You can log traces natively using the LangSmith SDK or within your LangChain application.
### Logging Traces with LangChain
LangSmith seamlessly integrates with the Python LangChain library to record traces from your LLM applications.
1. **Copy the environment variables from the Settings Page and add them to your application.**
Tracing can be activated by setting the following environment variables or by manually specifying the LangChainTracer.
```python
import os
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
# os.environ["LANGSMITH_ENDPOINT"] = "https://eu.api.smith.langchain.com" # If signed up in the EU region
os.environ["LANGSMITH_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGSMITH_PROJECT"] = "My Project Name" # Optional: "default" is used if not set
# os.environ["LANGSMITH_WORKSPACE_ID"] = "<YOUR-WORKSPACE-ID>" # Required for org-scoped API keys
```
> **Tip:** Projects are groups of traces. All runs are logged to a project. If not specified, the project is set to `default`.
2. **Run an Agent, Chain, or Language Model in LangChain**
If the environment variables are correctly set, your application will automatically connect to the LangSmith platform.
```python
from langchain_core.runnables import chain
@chain
def add_val(x: dict) -> dict:
    return {"val": x["val"] + 1}
add_val({"val": 1})
```
### Logging Traces Outside LangChain
You can still use the LangSmith development platform without depending on any
LangChain code.
1. **Copy the environment variables from the Settings Page and add them to your application.**
```python
import os
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGSMITH_PROJECT"] = "My Project Name" # Optional: "default" is used if not set
```
2. **Log traces**
The easiest way to log traces using the SDK is via the `@traceable` decorator. Below is an example.
```python
from datetime import datetime
from typing import List, Optional, Tuple
import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai
client = wrap_openai(openai.Client())
@traceable
def argument_generator(query: str, additional_description: str = "") -> str:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a debater making an argument on a topic."
                f"{additional_description}"
                f" The current time is {datetime.now()}"},
            {"role": "user", "content": f"The discussion topic is {query}"}
        ]
    ).choices[0].message.content

@traceable
def argument_chain(query: str, additional_description: str = "") -> str:
    argument = argument_generator(query, additional_description)
    # ... Do other processing or call other functions...
    return argument
argument_chain("Why is blue better than orange?")
```
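Conceptually, a tracing decorator records each call's name, inputs, outputs, and timing, then ships that run to the backend. A toy stdlib sketch of that shape (not LangSmith's implementation):

```python
import functools
import time

def mini_traceable(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)
        # A real tracer would post this run (and nest it under any parent run).
        wrapper.last_run = {
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "outputs": result,
            "duration": time.monotonic() - start,
        }
        return result
    return wrapper
```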
Alternatively, you can manually log events using the `Client` directly or using a `RunTree`, which is what the traceable decorator is meant to manage for you!
A RunTree tracks your application. Each RunTree object is required to have a `name` and `run_type`. These and other important attributes are as follows:
- `name`: `str` - used to identify the component's purpose
- `run_type`: `str` - Currently one of "llm", "chain" or "tool"; more options will be added in the future
- `inputs`: `dict` - the inputs to the component
- `outputs`: `Optional[dict]` - the (optional) returned values from the component
- `error`: `Optional[str]` - Any error messages that may have arisen during the call
```python
from langsmith.run_trees import RunTree
parent_run = RunTree(
name="My Chat Bot",
run_type="chain",
inputs={"text": "Summarize this morning's meetings."},
# project_name= "Defaults to the LANGSMITH_PROJECT env var"
)
parent_run.post()
# .. My Chat Bot calls an LLM
child_llm_run = parent_run.create_child(
name="My Proprietary LLM",
run_type="llm",
inputs={
"prompts": [
"You are an AI Assistant. The time is XYZ."
" Summarize this morning's meetings."
]
},
)
child_llm_run.post()
child_llm_run.end(
outputs={
"generations": [
"I should use the transcript_loader tool"
" to fetch meeting_transcripts from XYZ"
]
}
)
child_llm_run.patch()
# .. My Chat Bot takes the LLM output and calls
# a tool / function for fetching transcripts ..
child_tool_run = parent_run.create_child(
name="transcript_loader",
run_type="tool",
inputs={"date": "XYZ", "content_type": "meeting_transcripts"},
)
child_tool_run.post()
# The tool returns meeting notes to the chat bot
child_tool_run.end(outputs={"meetings": ["Meeting1 notes.."]})
child_tool_run.patch()
child_chain_run = parent_run.create_child(
name="Unreliable Component",
run_type="tool",
inputs={"input": "Summarize these notes..."},
)
child_chain_run.post()
try:
    # .... the component does work
    raise ValueError("Something went wrong")
    child_chain_run.end(outputs={"output": "foo"})
    child_chain_run.patch()
except Exception as e:
    child_chain_run.end(error=f"I errored again {e}")
    child_chain_run.patch()
# .. The chat agent recovers
parent_run.end(outputs={"output": ["The meeting notes are as follows:..."]})
res = parent_run.patch()
res.result()
```
## Create a Dataset from Existing Runs
Once your runs are stored in LangSmith, you can convert them into a dataset.
For this example, we will do so using the Client, but you can also do this using
the web interface, as explained in the [LangSmith docs](https://docs.smith.langchain.com/).
```python
from langsmith import Client
client = Client()
dataset_name = "Example Dataset"
# We will only use examples from the top level AgentExecutor run here,
# and exclude runs that errored.
runs = client.list_runs(
project_name="my_project",
execution_order=1,
error=False,
)
dataset = client.create_dataset(dataset_name, description="An example dataset")
for run in runs:
    client.create_example(
        inputs=run.inputs,
        outputs=run.outputs,
        dataset_id=dataset.id,
    )
```
## Evaluating Runs
Check out the [LangSmith Testing & Evaluation docs](https://docs.smith.langchain.com/evaluation) for up-to-date workflows.
For generating automated feedback on individual runs, you can run evaluations directly using the LangSmith client.
```python
from typing import Optional
from langsmith.evaluation import StringEvaluator
def jaccard_chars(output: str, answer: str) -> float:
    """Naive Jaccard similarity between two strings."""
    prediction_chars = set(output.strip().lower())
    answer_chars = set(answer.strip().lower())
    intersection = prediction_chars.intersection(answer_chars)
    union = prediction_chars.union(answer_chars)
    return len(intersection) / len(union)

def grader(run_input: str, run_output: str, answer: Optional[str]) -> dict:
    """Compute the score and/or label for this run."""
    if answer is None:
        value = "AMBIGUOUS"
        score = 0.5
    else:
        score = jaccard_chars(run_output, answer)
        value = "CORRECT" if score > 0.9 else "INCORRECT"
    return dict(score=score, value=value)
evaluator = StringEvaluator(evaluation_name="Jaccard", grading_function=grader)
runs = client.list_runs(
project_name="my_project",
execution_order=1,
error=False,
)
for run in runs:
    client.evaluate_run(run, evaluator)
```
## Integrations
LangSmith easily integrates with your favorite LLM framework.
## OpenAI SDK
<!-- markdown-link-check-disable -->
We provide a convenient wrapper for the [OpenAI SDK](https://platform.openai.com/docs/api-reference).
In order to use, you first need to set your LangSmith API key.
```shell
export LANGSMITH_API_KEY=<your-api-key>
```
Next, you will need to install the LangSmith SDK:
```shell
pip install -U langsmith
```
After that, you can wrap the OpenAI client:
```python
from openai import OpenAI
from langsmith import wrappers
client = wrappers.wrap_openai(OpenAI())
```
Now you can use the OpenAI client as you normally would, and everything is logged to LangSmith!
```python
client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
```
Often you will use the OpenAI client inside other functions.
You can get nested traces by using the wrapped client and decorating those functions with `@traceable`.
See [this documentation](https://docs.smith.langchain.com/tracing/faq/logging_and_viewing) for more information on how to use this decorator:
```python
from langsmith import traceable
@traceable(name="Call OpenAI")
def my_function(text: str):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Say {text}"}],
    )

my_function("hello world")
```
## Instructor
We provide a convenient integration with [Instructor](https://jxnl.github.io/instructor/), which works largely because Instructor builds directly on the OpenAI SDK.
To use it, you first need to set your LangSmith API key.
```shell
export LANGSMITH_API_KEY=<your-api-key>
```
Next, you will need to install the LangSmith SDK:
```shell
pip install -U langsmith
```
After that, you can wrap the OpenAI client:
```python
from openai import OpenAI
from langsmith import wrappers
client = wrappers.wrap_openai(OpenAI())
```
After this, you can patch the wrapped client using `instructor`. Patch the wrapped client from the previous step, rather than a fresh `OpenAI()` instance, so that calls are still traced to LangSmith:
```python
import instructor

client = instructor.patch(client)
```
Now you can use `instructor` as you normally would, and everything is logged to LangSmith!
```python
from pydantic import BaseModel
class UserDetail(BaseModel):
    name: str
    age: int

user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ],
)
```
Often you will use `instructor` inside other functions.
You can get nested traces by using the wrapped client and decorating those functions with `@traceable`.
See [this documentation](https://docs.smith.langchain.com/tracing/faq/logging_and_viewing) for more information on how to use this decorator:
```python
from langsmith import traceable

@traceable()
def my_function(text: str) -> UserDetail:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=UserDetail,
        messages=[
            {"role": "user", "content": f"Extract {text}"},
        ],
    )

my_function("Jason is 25 years old")
```
## Pytest Plugin
The LangSmith pytest plugin lets Python developers define their datasets and evaluations as pytest test cases.
See [online docs](https://docs.smith.langchain.com/evaluation/how_to_guides/pytest) for more information.
This plugin is installed as part of the LangSmith SDK, and is enabled by default.
See also official pytest docs: [How to install and use plugins](https://docs.pytest.org/en/stable/how-to/plugins.html)
## Additional Documentation
To learn more about the LangSmith platform, check out the [docs](https://docs.smith.langchain.com/).
# License
The LangSmith SDK is licensed under the [MIT License](../LICENSE).
Copyright information for certain dependencies is reproduced in their corresponding COPYRIGHT.txt files in this repo, including the following:
- [uuid-utils](docs/templates/uuid-utils/COPYRIGHT.txt)
- [zstandard](docs/templates/zstandard/COPYRIGHT.txt)
| text/markdown | null | LangChain <support@langchain.dev> | null | null | MIT | evaluation, langchain, langsmith, language, llm, nlp, platform, tracing, translation | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx<1,>=0.23.0",
"orjson>=3.9.14; platform_python_implementation != \"PyPy\"",
"packaging>=23.2",
"pydantic<3,>=2",
"requests-toolbelt>=1.0.0",
"requests>=2.0.0",
"uuid-utils<1.0,>=0.12.0",
"xxhash>=3.0.0",
"zstandard>=0.23.0",
"claude-agent-sdk>=0.1.0; python_version >= \"3.10\" and extra == \"claude-agent-sdk\"",
"google-adk>=1.0.0; extra == \"google-adk\"",
"wrapt>=1.16.0; extra == \"google-adk\"",
"langsmith-pyo3>=0.1.0rc2; extra == \"langsmith-pyo3\"",
"openai-agents>=0.0.3; extra == \"openai-agents\"",
"opentelemetry-api>=1.30.0; extra == \"otel\"",
"opentelemetry-exporter-otlp-proto-http>=1.30.0; extra == \"otel\"",
"opentelemetry-sdk>=1.30.0; extra == \"otel\"",
"pytest>=7.0.0; extra == \"pytest\"",
"rich>=13.9.4; extra == \"pytest\"",
"vcrpy>=7.0.0; extra == \"pytest\"",
"vcrpy>=7.0.0; extra == \"vcr\""
] | [] | [] | [] | [
"Homepage, https://smith.langchain.com/",
"Documentation, https://docs.smith.langchain.com/",
"Repository, https://github.com/langchain-ai/langsmith-sdk"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:26:34.296920 | langsmith-0.7.6.tar.gz | 1,041,741 | 22/b1/a1514b6fe33dc956bee1e6aba88470999d53a6ed02ec8fd14d6d409b8fb7/langsmith-0.7.6.tar.gz | source | sdist | null | false | fbe8f9511bd5b4d63938292120e3fd8b | e8646f8429d3c1641c7bae3c01bfdc3dfa27625994b0ef4303714d6b06fe1ef9 | 22b1a1514b6fe33dc956bee1e6aba88470999d53a6ed02ec8fd14d6d409b8fb7 | null | [] | 538,832 |
2.4 | star-openapi-swagger | 5.31.2 | Provide Swagger UI for star-openapi. | Provide Swagger UI for [star-openapi](https://github.com/luolingchun/star-openapi). | text/markdown | null | null | null | llc <luolingchun@outlook.com> | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"star-openapi"
] | [] | [] | [] | [
"Homepage, https://github.com/luolingchun/star-openapi-plugins/tree/master/star-openapi-swagger",
"Documentation, https://luolingchun.github.io/star-openapi/latest/Usage/UI_Templates/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T01:26:14.097888 | star_openapi_swagger-5.31.2.tar.gz | 516,912 | 18/19/5ab4e91ea6fc121be824a623ff44b2cb1d9a7a031179b3ec9b4949c49a4e/star_openapi_swagger-5.31.2.tar.gz | source | sdist | null | false | bfde11d40eaefe0f53186b2c69020ca8 | b337bf327a3b99e24e7bf42c31165fd1a5d0add5a14f70dc6c6e7c37877ff505 | 18195ab4e91ea6fc121be824a623ff44b2cb1d9a7a031179b3ec9b4949c49a4e | null | [] | 203 |
2.4 | llm-orchestra | 0.15.1 | Multi-agent LLM communication system with ensemble orchestration | # LLM Orchestra
[](https://badge.fury.io/py/llm-orchestra)
[](https://github.com/mrilikecoding/llm-orc/actions)
[](https://codecov.io/gh/mrilikecoding/llm-orc)
[](https://www.python.org/downloads/)
[](https://www.gnu.org/licenses/agpl-3.0)
[](https://pepy.tech/project/llm-orchestra)
[](https://github.com/mrilikecoding/llm-orc/releases)
Orchestrate ensembles of specialized models — local and cloud — to do real analytical work. Coordination over scale.
## Overview
A decent laptop can run multiple small language models simultaneously. What's missing is the coordination layer — the system that decomposes problems, routes them to specialized agents, manages dependencies between them, and synthesizes results. LLM Orchestra provides that layer.
The approach is architectural intelligence over brute-force scaling: instead of sending everything to one large model, decompose the problem and let specialized agents own their piece. Independent agents run in parallel. Dependent agents wait for what they need. Script agents handle data processing and analysis alongside LLM agents, enabling hybrid workflows that go beyond pure language model orchestration.
Mix expensive cloud models with free local models. Use Claude for strategic synthesis while local models handle systematic analysis at zero marginal cost.
## Key Features
- **Multi-Agent Ensembles**: Coordinate specialized agents with flexible dependency graphs
- **Ensemble Agents**: Compose ensembles hierarchically — agents can reference and execute other ensembles
- **Input Key Routing**: Select specific keys from upstream JSON output for classify → route → fan-out patterns
- **Agent Dependencies**: Define which agents depend on others for sophisticated orchestration patterns
- **Script Agent Integration**: Execute custom scripts alongside LLM agents with JSON I/O communication
- **Model Profiles**: Simplified configuration with named shortcuts for model + provider combinations
- **Cost Optimization**: Mix expensive and free models based on what each task needs
- **Streaming Output**: Real-time progress updates during ensemble execution
- **CLI Interface**: Simple commands with piping support (`cat code.py | llm-orc invoke code-review`)
- **Secure Authentication**: Encrypted API key storage with easy credential management
- **YAML Configuration**: Easy ensemble setup with readable config files
- **Usage Tracking**: Token counting, cost estimation, and timing metrics
- **Artifact Management**: Automatic saving of execution results with timestamped persistence
## Installation
### Option 1: Homebrew (macOS - Recommended)
```bash
# Add the tap
brew tap mrilikecoding/llm-orchestra
# Install LLM Orchestra
brew install llm-orchestra
# Verify installation
llm-orc --version
```
### Option 2: pip (All Platforms)
```bash
# Install from PyPI
pip install llm-orchestra
# Verify installation
llm-orc --version
```
### Option 3: Development Installation
```bash
# Clone the repository
git clone https://github.com/mrilikecoding/llm-orc.git
cd llm-orc
# Install with development dependencies
uv sync --dev
# Verify installation
uv run llm-orc --version
```
### Updates
```bash
# Homebrew users
brew update && brew upgrade llm-orchestra
# pip users
pip install --upgrade llm-orchestra
```
## Quick Start
### 1. Set Up Authentication
Before using LLM Orchestra, configure authentication for your LLM providers:
```bash
# Interactive setup wizard (recommended for first-time users)
llm-orc auth setup
# Or add providers individually
llm-orc auth add anthropic --api-key YOUR_ANTHROPIC_KEY
llm-orc auth add google --api-key YOUR_GOOGLE_KEY
# OAuth for Claude Pro/Max users
llm-orc auth add anthropic-claude-pro-max
# List configured providers
llm-orc auth list
# Remove a provider if needed
llm-orc auth remove anthropic
```
**Security**: API keys are encrypted and stored securely in `~/.config/llm-orc/credentials.yaml`.
### 2. Configuration Options
LLM Orchestra supports both global and local configurations:
#### Global Configuration
Create `~/.config/llm-orc/ensembles/code-review.yaml`:
```yaml
name: code-review
description: Multi-perspective code review ensemble
agents:
  - name: security-reviewer
    model_profile: free-local
    system_prompt: "You are a security analyst. Focus on identifying security vulnerabilities, authentication issues, and potential attack vectors."
  - name: performance-reviewer
    model_profile: free-local
    system_prompt: "You are a performance analyst. Focus on identifying bottlenecks, inefficient algorithms, and scalability issues."
  - name: quality-reviewer
    model_profile: free-local
    system_prompt: "You are a code quality analyst. Focus on maintainability, readability, and best practices."
  - name: senior-reviewer
    model_profile: default-claude
    depends_on: [security-reviewer, performance-reviewer, quality-reviewer]
    system_prompt: |
      You are a senior engineering lead. Synthesize the security, performance,
      and quality analysis into actionable recommendations.
    output_format: json
```
#### Local Project Configuration
For project-specific ensembles, initialize local configuration:
```bash
# Initialize local configuration in your project
llm-orc config init
# This creates .llm-orc/ directory with:
# - ensembles/ (project-specific ensembles)
# - models/ (shared model configurations)
# - scripts/ (project-specific scripts)
# - config.yaml (project configuration)
```
#### View Current Configuration
```bash
# Check configuration status with visual indicators
llm-orc config check
```
### 3. Using LLM Orchestra
#### Basic Usage
```bash
# List available ensembles
llm-orc list-ensembles
# List available model profiles
llm-orc list-profiles
# Get help for any command
llm-orc --help
llm-orc invoke --help
```
#### Invoke Ensembles
```bash
# Analyze code from a file (pipe input)
cat mycode.py | llm-orc invoke code-review
# Provide input directly
llm-orc invoke code-review --input "Review this function: def add(a, b): return a + b"
# JSON output for integration with other tools
llm-orc invoke code-review --input "..." --output-format json
# Use specific configuration directory
llm-orc invoke code-review --config-dir ./custom-config
# Enable streaming for real-time progress (enabled by default)
llm-orc invoke code-review --streaming
```
### Output Formats
LLM Orchestra supports three output formats for different use cases:
#### Rich Interface (Default)
Interactive format with real-time progress updates and visual dependency graphs:
```bash
llm-orc invoke code-review --input "def add(a, b): return a + b"
```
#### JSON Output
Structured data format for integration and automation:
```bash
llm-orc invoke code-review --output-format json --input "code to review"
```
Returns complete execution data including events, results, metadata, and dependency information.
#### Text Output
Clean, pipe-friendly format for command-line workflows:
```bash
llm-orc invoke code-review --output-format text --input "code to review"
```
Plain text results perfect for piping and scripting: `llm-orc invoke ... | grep "security"`
#### Configuration Management
```bash
# Initialize local project configuration
llm-orc config init --project-name my-project
# Check configuration status with visual indicators
llm-orc config check # Global + local status with legend
llm-orc config check-global # Global configuration only
llm-orc config check-local # Local project configuration only
# Reset configurations with safety options
llm-orc config reset-global # Reset global config (backup + preserve auth by default)
llm-orc config reset-local # Reset local config (backup + preserve ensembles by default)
# Advanced reset options
llm-orc config reset-global --no-backup --reset-auth # Complete reset including auth
llm-orc config reset-local --reset-ensembles --no-backup # Reset including ensembles
```
### Script Management
LLM Orchestra includes powerful script agent integration for executing custom scripts alongside LLM agents:
```bash
# List available scripts in your project
llm-orc scripts list
# Show detailed information about a script
llm-orc scripts show file_operations/read_file.py
# Test a script with parameters
llm-orc scripts test file_operations/read_file.py --parameters '{"filepath": "example.txt"}'
# Scripts are discovered from .llm-orc/scripts/ directories
# Results are automatically saved to .llm-orc/artifacts/ with timestamps
```
Script agents use JSON I/O for seamless integration with LLM agents, enabling powerful hybrid workflows where scripts provide data and context for LLM analysis.
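As a concrete illustration, a script agent can be a small Python program that reads one JSON payload on stdin and writes one JSON result to stdout. The field names used here (`input`, `success`, `data`) are assumptions for the sketch, not the documented contract; use `llm-orc scripts show` to inspect what your scripts actually receive:

```python
import json
import sys

def handle(payload: dict) -> dict:
    # Toy transformation: count the words in the input text.
    # The "input"/"success"/"data" keys are hypothetical field names.
    text = payload.get("input", "")
    return {"success": True, "data": {"word_count": len(text.split())}}

def main() -> None:
    # Read one JSON object from stdin, write one JSON object to stdout.
    payload = json.loads(sys.stdin.read() or "{}")
    json.dump(handle(payload), sys.stdout)
```

Call `main()` when the file runs as a script; keeping the transformation in `handle()` makes it easy to unit-test with `llm-orc scripts test`-style parameters.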
### MCP Server
LLM Orchestra includes a Model Context Protocol (MCP) server that exposes ensembles, artifacts, and metrics as MCP resources. This enables integration with MCP clients like Claude Code, Claude Desktop, and other tools.
#### Quick Start
1. Add `.mcp.json` to your project root:
```json
{
  "mcpServers": {
    "llm-orc": {
      "command": "uv",
      "args": ["run", "llm-orc", "mcp", "serve"]
    }
  }
}
```
2. Restart Claude Code - MCP tools appear as `mcp__llm-orc__*`
3. Try it:
```
mcp__llm-orc__get_help # Get full documentation
mcp__llm-orc__get_provider_status # Check which models are available
mcp__llm-orc__list_ensembles # See available ensembles
```
#### Resources (Read-Only Data)
| Resource | Description |
|----------|-------------|
| `llm-orc://ensembles` | List all available ensembles with metadata |
| `llm-orc://ensemble/{name}` | Get specific ensemble configuration |
| `llm-orc://profiles` | List model profiles |
| `llm-orc://artifacts/{ensemble}` | List execution artifacts for an ensemble |
| `llm-orc://artifact/{ensemble}/{id}` | Get individual artifact details |
| `llm-orc://metrics/{ensemble}` | Get aggregated metrics (success rate, cost, duration) |
#### Tools (25 Total)
**Core Execution**
| Tool | Description |
|------|-------------|
| `invoke` | Execute ensemble with streaming progress, saves artifacts automatically |
| `list_ensembles` | List all ensembles from local/library/global sources |
| `validate_ensemble` | Check config validity, profile availability, and dependencies |
| `update_ensemble` | Modify ensemble config (supports dry-run and backup) |
| `analyze_execution` | Analyze execution artifact data |
**Provider Discovery** - Check what's available before running
| Tool | Description |
|------|-------------|
| `get_provider_status` | Show available providers and Ollama models |
| `check_ensemble_runnable` | Check if ensemble can run, suggest local alternatives |
**Ensemble Management**
| Tool | Description |
|------|-------------|
| `create_ensemble` | Create new ensemble from scratch or template |
| `delete_ensemble` | Delete ensemble (requires confirmation) |
**Profile Management**
| Tool | Description |
|------|-------------|
| `list_profiles` | List profiles with optional provider filter |
| `create_profile` | Create new model profile |
| `update_profile` | Update existing profile |
| `delete_profile` | Delete profile (requires confirmation) |
**Script Management**
| Tool | Description |
|------|-------------|
| `list_scripts` | List primitive scripts by category |
| `get_script` | Get script source and metadata |
| `test_script` | Test script with sample input |
| `create_script` | Create new primitive script |
| `delete_script` | Delete script (requires confirmation) |
**Library Operations**
| Tool | Description |
|------|-------------|
| `library_browse` | Browse library ensembles and scripts |
| `library_copy` | Copy from library to local project |
| `library_search` | Search library by keyword |
| `library_info` | Get library metadata and statistics |
**Artifact Management**
| Tool | Description |
|------|-------------|
| `delete_artifact` | Delete individual execution artifact |
| `cleanup_artifacts` | Delete old artifacts (supports dry-run) |
**Help**
| Tool | Description |
|------|-------------|
| `get_help` | Get comprehensive docs: directory structure, schemas, workflows |
#### Example Workflow
```
# 1. Check what's available
mcp__llm-orc__get_provider_status
# → Shows Ollama running with llama3, mistral models
# 2. Find an ensemble
mcp__llm-orc__library_search query="code review"
# → Found: code-analysis/security-review
# 3. Check if it can run locally
mcp__llm-orc__check_ensemble_runnable ensemble_name="security-review"
# → Shows which profiles need local alternatives
# 4. Copy and adapt
mcp__llm-orc__library_copy source="code-analysis/security-review"
mcp__llm-orc__update_ensemble ensemble_name="security-review" changes={"agents": [...]}
# 5. Run it
mcp__llm-orc__invoke ensemble_name="security-review" input_data="Review this code..."
```
#### CLI Usage
```bash
# Start MCP server (stdio transport for MCP clients)
llm-orc mcp serve
# Start with HTTP transport for debugging
llm-orc mcp serve --transport http --port 8080
```
## Ensemble Library
Looking for pre-built ensembles? Check out the [LLM Orchestra Library](https://github.com/mrilikecoding/llm-orchestra-library) - a curated collection of analytical ensembles for code review, research analysis, decision support, and more.
### Library CLI Commands
LLM Orchestra includes built-in commands to browse and copy ensembles from the library:
```bash
# Browse all available categories
llm-orc library categories
llm-orc l categories # Using alias
# Browse ensembles in a specific category
llm-orc library browse code-analysis
# Show detailed information about an ensemble
llm-orc library show code-analysis/security-review
# Copy an ensemble to your local configuration
llm-orc library copy code-analysis/security-review
# Copy an ensemble to your global configuration
llm-orc library copy code-analysis/security-review --global
```
#### Library Source Configuration
By default, LLM Orchestra fetches library content from the remote GitHub repository. For development purposes, you can use a local copy of the library:
```bash
# Use remote GitHub library (default)
llm-orc library browse research-analysis
# Use local library for development
export LLM_ORC_LIBRARY_SOURCE=local
llm-orc library browse research-analysis # Uses local submodule
llm-orc init # Copies from local submodule
# Switch back to remote
unset LLM_ORC_LIBRARY_SOURCE
```
**When to use local library:**
- Testing changes to library ensembles before publishing
- Working on feature branches of the llm-orchestra-library
- Offline development (when remote access unavailable)
- Custom ensemble development and testing
**Requirements for local library:**
- The `llm-orchestra-library` submodule must be initialized and present
- Clear error messages guide you if the local library is not found
## Use Cases
### Code Review
Get systematic analysis across security, performance, and maintainability dimensions. Each agent focuses on their specialty while synthesis provides actionable recommendations.
### Architecture Review
Analyze system designs from scalability, security, performance, and reliability perspectives. Identify bottlenecks and suggest architectural patterns.
### Product Strategy
Evaluate business decisions from market, financial, competitive, and user experience angles. Get comprehensive analysis for complex strategic choices.
### Research Analysis
Systematic literature review, methodology evaluation, or multi-dimensional analysis of research questions.
## Model Support
- **Claude** (Anthropic) - Strategic analysis and synthesis
- **Gemini** (Google) - Multi-modal and reasoning tasks
- **Ollama** - Local deployment of open-source models (Llama3, etc.)
- **Custom models** - Extensible interface for additional providers
## Configuration
### Model Profiles
Model profiles simplify ensemble configuration by providing named shortcuts for complete agent configurations including model, provider, system prompts, timeouts, and generation parameters:
```yaml
# In ~/.config/llm-orc/config.yaml or .llm-orc/config.yaml
model_profiles:
  free-local:
    model: llama3
    provider: ollama
    cost_per_token: 0.0
    system_prompt: "You are a helpful assistant that provides concise, accurate responses for local development and testing."
    timeout_seconds: 30
    temperature: 0.7
    max_tokens: 500

  default-claude:
    model: claude-sonnet-4-20250514
    provider: anthropic-claude-pro-max
    system_prompt: "You are an expert assistant that provides high-quality, detailed analysis and solutions."
    timeout_seconds: 60
    temperature: 0.5
    max_tokens: 2000

  high-context:
    model: claude-3-5-sonnet-20241022
    provider: anthropic-api
    cost_per_token: 3.0e-06
    system_prompt: "You are an expert assistant capable of handling complex, multi-faceted problems with detailed analysis."
    timeout_seconds: 120

  small:
    model: claude-3-haiku-20240307
    provider: anthropic-api
    cost_per_token: 1.0e-06
    system_prompt: "You are a quick, efficient assistant that provides concise and accurate responses."
    timeout_seconds: 30
```
**Profile Benefits:**
- **Complete Agent Configuration**: Includes model, provider, system prompts, timeout settings, and generation parameters
- **Simplified Configuration**: Use `model_profile: default-claude` instead of explicit model + provider + system_prompt + timeout
- **Consistency**: Same profile names work across all ensembles with consistent behavior
- **Cost Tracking**: Built-in cost information for budgeting
- **Generation Control**: Set `temperature` and `max_tokens` per profile for reproducible behavior
- **Flexibility**: Local profiles override global ones, explicit agent configs override profile defaults
**Usage in Ensembles:**
```yaml
agents:
  - name: bulk-analyzer
    model_profile: free-local      # Complete config: model, provider, prompt, timeout
  - name: expert-reviewer
    model_profile: default-claude  # High-quality config with appropriate timeout
  - name: document-processor
    model_profile: high-context    # Large context processing with extended timeout
    system_prompt: "Custom prompt override"  # Overrides profile default
```
**Override Behavior:**
Explicit agent configuration takes precedence over model profile defaults:
```yaml
agents:
  - name: custom-agent
    model_profile: free-local
    system_prompt: "Custom prompt"  # Overrides profile system_prompt
    timeout_seconds: 60             # Overrides profile timeout_seconds
    temperature: 0.1                # Overrides profile temperature
    max_tokens: 200                 # Overrides profile max_tokens
```
### Ensemble Configuration
Ensemble configurations support:
- **Model profiles** for simplified, consistent model selection
- **Agent specialization** with role-specific prompts
- **Generation parameters** (`temperature`, `max_tokens`) per profile or per agent
- **Agent dependencies** using `depends_on` for sophisticated orchestration
- **Dependency validation** with automatic cycle detection and missing dependency checks
- **Timeout management** per agent with performance configuration
- **Mixed model strategies** combining local and cloud models
- **Output formatting** (text, JSON) for integration
- **Streaming execution** with real-time progress updates
#### Agent Dependencies
The dependency-based architecture allows agents to depend on other agents, enabling sophisticated orchestration patterns:
```yaml
agents:
  # Independent agents execute in parallel
  - name: security-reviewer
    model_profile: free-local
    system_prompt: "Focus on security vulnerabilities..."
  - name: performance-reviewer
    model_profile: free-local
    system_prompt: "Focus on performance issues..."

  # Dependent agent waits for dependencies to complete
  - name: senior-reviewer
    model_profile: default-claude
    depends_on: [security-reviewer, performance-reviewer]
    system_prompt: "Synthesize the security and performance analysis..."
```
**Benefits:**
- **Flexible orchestration**: Create complex dependency graphs beyond simple coordinator patterns
- **Parallel execution**: Independent agents run concurrently for better performance
- **Automatic validation**: Circular dependencies and missing dependencies are detected at load time
- **Better maintainability**: Clear, explicit dependencies instead of implicit coordinator relationships
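To make the orchestration concrete, here is a minimal sketch (not llm-orc's actual scheduler) of how `depends_on` declarations can be grouped into parallel execution phases, with the missing-dependency and cycle checks mentioned above:

```python
def execution_phases(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group agents into phases: every phase can run in parallel, and each
    agent runs only after all of its dependencies. Raises ValueError on
    missing or circular dependencies. Illustrative sketch only."""
    for agent, needed in deps.items():
        missing = [d for d in needed if d not in deps]
        if missing:
            raise ValueError(f"{agent} depends on unknown agents: {missing}")

    remaining = dict(deps)
    phases: list[list[str]] = []
    done: set[str] = set()
    while remaining:
        # Agents whose dependencies have all completed are ready to run.
        ready = sorted(a for a, d in remaining.items() if set(d) <= done)
        if not ready:  # nothing runnable left: there must be a cycle
            raise ValueError(f"circular dependency among: {sorted(remaining)}")
        phases.append(ready)
        done.update(ready)
        for a in ready:
            del remaining[a]
    return phases
```

For the ensemble above, the two reviewers land in the first phase (run concurrently) and `senior-reviewer` in the second.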
#### Fan-Out (Parallel Map-Reduce)
Agents with `fan_out: true` automatically expand into N parallel instances when their upstream dependency produces an array result. This enables map-reduce style parallel processing:
```yaml
agents:
  # Split step: divide the input into chunks
  - name: chunker
    script: scripts/chunker.py
    # Returns: {"success": true, "data": ["chunk1", "chunk2", "chunk3"]}

  # "Map" step: process each chunk in parallel
  - name: processor
    model_profile: default-local
    depends_on: [chunker]
    fan_out: true
    system_prompt: "Analyze this text chunk..."

  # "Reduce" step: combine all results
  - name: synthesizer
    model_profile: default-local
    depends_on: [processor]
    system_prompt: "Synthesize the analysis results..."
```
**How it works:**
1. `chunker` runs and returns a JSON array (direct array or `{"data": [...]}` format)
2. `processor` is expanded into `processor[0]`, `processor[1]`, `processor[2]` — one per array element
3. All instances execute in parallel, each receiving their chunk plus metadata (`chunk_index`, `total_chunks`, `base_input`)
4. Results are gathered back under the original `processor` name as an ordered array
5. `synthesizer` receives the combined results and can reference them normally
**Configuration requirements:**
- `fan_out: true` requires a `depends_on` field (validated at load time)
- The upstream agent must produce a non-empty array result
- Downstream agents reference the original name — fan-out is transparent to them
**Result format for gathered fan-out agents:**
```json
{
  "response": ["result_0", "result_1", null],
  "status": "partial",
  "fan_out": true,
  "instances": [
    {"index": 0, "status": "success"},
    {"index": 1, "status": "success"},
    {"index": 2, "status": "failed", "error": "timeout"}
  ]
}
```
Status is `"success"` (all instances passed), `"partial"` (some failed), or `"failed"` (all failed). Partial results are preserved — the ensemble continues with whatever succeeded.
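A minimal sketch of how per-instance outcomes could be gathered into the result shape above (illustrative only, not llm-orc internals):

```python
from typing import Any, Optional

def gather_fan_out(results: list[tuple[Optional[str], Optional[str]]]) -> dict[str, Any]:
    """Combine per-instance (response, error) pairs into the gathered
    fan-out result shape. A sketch of the documented format."""
    responses = [resp for resp, _ in results]
    instances = [
        {"index": i, "status": "success"} if err is None
        else {"index": i, "status": "failed", "error": err}
        for i, (_, err) in enumerate(results)
    ]
    failed = sum(1 for _, err in results if err is not None)
    if failed == 0:
        status = "success"
    elif failed == len(results):
        status = "failed"
    else:
        status = "partial"  # some instances succeeded; keep their results
    return {"response": responses, "status": status, "fan_out": True, "instances": instances}
```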
#### Ensemble Agents (Composable Ensembles)
Agents can reference and execute other ensembles, enabling hierarchical composition:
```yaml
# child ensemble: topic-analysis.yaml
name: topic-analysis
agents:
  - name: analyst
    model_profile: ollama-gemma-small
    system_prompt: "Analyze the given topic in 2-3 sentences."

# parent ensemble
agents:
  - name: classifier
    script: scripts/classifier.py
  - name: topic-analyst
    ensemble: topic-analysis  # references child ensemble
    depends_on: [classifier]
  - name: synthesizer
    model_profile: default-claude
    depends_on: [topic-analyst]
```
**How it works:**
1. The `ensemble` field identifies which ensemble to execute (resolved by name from `.llm-orc/ensembles/`)
2. The child ensemble runs as a self-contained execution with its own phases and agents
3. Child executors share immutable infrastructure (config, credentials, model factory) but isolate mutable state
4. Nesting depth is limited (default: 5) to prevent unbounded recursion
5. Cross-ensemble cycles are detected at load time
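The depth limit and cycle check can be sketched as a recursive walk over ensemble-to-ensemble references (illustrative; llm-orc's loader may implement these checks differently):

```python
def check_nesting(ensemble: str,
                  children: dict[str, list[str]],
                  max_depth: int = 5,
                  _stack: tuple[str, ...] = ()) -> None:
    """Walk child-ensemble references, rejecting cross-ensemble cycles
    and nesting deeper than max_depth (default 5, matching the docs).
    Sketch only; `children` maps each ensemble to the ensembles it embeds."""
    if ensemble in _stack:
        raise ValueError(f"cross-ensemble cycle: {' -> '.join(_stack + (ensemble,))}")
    if len(_stack) >= max_depth:
        raise ValueError(f"nesting deeper than {max_depth}: {' -> '.join(_stack)}")
    for child in children.get(ensemble, []):
        check_nesting(child, children, max_depth, _stack + (ensemble,))
```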
#### Input Key Routing
Agents can select a specific key from upstream JSON output using `input_key`, enabling routing patterns where a classifier produces keyed output and downstream agents each consume their slice:
```yaml
agents:
  # Classifier produces: {"pdfs": ["a.pdf", "b.pdf"], "audio": ["c.mp3"]}
  - name: classifier
    script: scripts/classifier.py

  # Selects only the "pdfs" array from classifier output
  - name: pdf-processor
    ensemble: pdf-pipeline
    depends_on: [classifier]
    input_key: pdfs
    fan_out: true

  # Selects only the "audio" array
  - name: audio-processor
    ensemble: audio-pipeline
    depends_on: [classifier]
    input_key: audio
    fan_out: true

  - name: synthesizer
    model_profile: default-claude
    depends_on: [pdf-processor, audio-processor]
```
**Behavior:**
- `input_key` selects `output[key]` from the first entry in `depends_on`
- If the key is missing or the upstream output is not JSON/dict, the agent receives a runtime error
- Without `input_key`, the agent receives the full upstream output (backward compatible)
- Composes naturally with `fan_out`: `input_key` selects the array, `fan_out` expands per item
- Works with all agent types: LLM, script, and ensemble
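The selection rules above can be sketched as a small helper (an illustration of the documented behavior, not llm-orc's implementation):

```python
import json
from typing import Any, Optional

def select_input(upstream_output: Any, input_key: Optional[str]) -> Any:
    """Resolve what a downstream agent receives from its first dependency.
    Without input_key, pass the full output through (backward compatible);
    with input_key, select that key from JSON/dict output or raise."""
    if input_key is None:
        return upstream_output
    data = upstream_output
    if isinstance(data, str):
        try:
            data = json.loads(data)
        except json.JSONDecodeError:
            raise RuntimeError("upstream output is not JSON")
    if not isinstance(data, dict) or input_key not in data:
        raise RuntimeError(f"input_key {input_key!r} not found in upstream output")
    return data[input_key]
```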
### Configuration Status Checking
LLM Orchestra provides visual status checking to quickly see which configurations are ready to use:
```bash
# Check all configurations with visual indicators
llm-orc config check
```
**Visual Indicators:**
- 🟢 **Ready to use** - Profile/provider is properly configured and available
- 🟥 **Needs setup** - Profile references unavailable provider or missing authentication
**Provider Availability Detection:**
- **Authenticated providers** - Checks for valid API credentials
- **Ollama service** - Tests connection to local Ollama instance (localhost:11434)
- **Configuration validation** - Verifies model profiles reference available providers
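A simple TCP probe approximates the Ollama service check described above. llm-orc's actual detection may query the Ollama API rather than just the port, so treat this as a sketch:

```python
import socket

def ollama_available(host: str = "localhost", port: int = 11434,
                     timeout: float = 0.5) -> bool:
    """Return True if something is listening on the Ollama port.
    A plain TCP connect test; a fuller check would hit the HTTP API."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```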
**Example Output:**
```
Configuration Status Legend:
🟢 Ready to use 🟥 Needs setup
=== Global Configuration Status ===
📁 Model Profiles:
🟢 local-free (llama3 via ollama)
🟢 quality (claude-sonnet-4 via anthropic-claude-pro-max)
🟥 high-context (claude-3-5-sonnet via anthropic-api)
🌐 Available Providers: anthropic-claude-pro-max, ollama
=== Local Configuration Status: My Project ===
📁 Model Profiles:
🟢 security-auditor (llama3 via ollama)
🟢 senior-reviewer (claude-sonnet-4 via anthropic-claude-pro-max)
```
### Configuration Reset Commands
LLM Orchestra provides safe configuration reset with backup and selective retention options:
```bash
# Reset global configuration (safe defaults)
llm-orc config reset-global # Creates backup, preserves authentication
# Reset local configuration (safe defaults)
llm-orc config reset-local # Creates backup, preserves ensembles
# Advanced reset options
llm-orc config reset-global --no-backup --reset-auth # Complete global reset
llm-orc config reset-local --reset-ensembles --no-backup # Complete local reset
llm-orc config reset-local --project-name "My Project" # Set project name
```
**Safety Features:**
- **Automatic backups** - Creates timestamped `.backup` directories by default
- **Authentication preservation** - Keeps API keys and credentials safe by default
- **Ensemble retention** - Preserves local ensembles by default
- **Confirmation prompts** - Prevents accidental data loss
**Available Options:**
*Global Reset:*
- `--backup/--no-backup` - Create backup before reset (default: backup)
- `--preserve-auth/--reset-auth` - Keep authentication (default: preserve)
*Local Reset:*
- `--backup/--no-backup` - Create backup before reset (default: backup)
- `--preserve-ensembles/--reset-ensembles` - Keep ensembles (default: preserve)
- `--project-name` - Set project name (defaults to directory name)
### Configuration Hierarchy
LLM Orchestra follows a configuration hierarchy:
1. **Local project configuration** (`.llm-orc/` in current directory)
2. **Global user configuration** (`~/.config/llm-orc/`)
3. **Command-line options** (highest priority)
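Treating each layer as a dict, the precedence can be sketched as a layered merge (an illustration of the hierarchy above, not llm-orc's actual code; it assumes local project settings override global ones and command-line options override both):

```python
def effective_config(global_cfg, local_cfg, cli_opts):
    """Later layers win: ~/.config/llm-orc/ < .llm-orc/ < CLI options."""
    merged = dict(global_cfg)    # global user configuration
    merged.update(local_cfg)     # local project configuration
    # Only options the user actually passed on the command line override.
    merged.update({k: v for k, v in cli_opts.items() if v is not None})
    return merged
```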
### Library Path Configuration
Control where `llm-orc init` finds primitive scripts using environment variables or project-specific configuration:
```bash
# Option 1: Custom library location via environment variable
export LLM_ORC_LIBRARY_PATH="/path/to/your/custom-library"
llm-orc init
# Option 2: Project-specific configuration via .llm-orc/.env
mkdir -p .llm-orc
echo 'LLM_ORC_LIBRARY_PATH=/path/to/your/custom-library' > .llm-orc/.env
llm-orc init
# Option 3: Use local submodule (development default)
export LLM_ORC_LIBRARY_SOURCE=local
llm-orc init
# Option 4: Auto-detect library in current directory (no configuration needed)
# Looks for: ./llm-orchestra-library/scripts/primitives/
llm-orc init
```
**Priority order:**
1. `LLM_ORC_LIBRARY_PATH` environment variable - Explicit custom location (highest priority)
2. `.llm-orc/.env` file - Project-specific configuration
3. `LLM_ORC_LIBRARY_SOURCE=local` - Package submodule
4. `./llm-orchestra-library/` - Current working directory auto-detection
5. No scripts installed (graceful fallback)
**Note**: Environment variables always take precedence over `.env` file settings, allowing temporary overrides without modifying project files.
This allows developers to maintain their own script libraries while still using llm-orc's orchestration features.
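The priority order can be written out as a pure function over its inputs. This is a sketch of the lookup logic described above, not llm-orc's internals; the `env` and `dotenv` parameters stand in for the process environment and the parsed `.llm-orc/.env` file:

```python
from pathlib import Path

def resolve_library_path(env, dotenv, cwd):
    """Return the primitives directory to use, or None (graceful fallback)."""
    # 1. Explicit environment variable always wins.
    if env.get("LLM_ORC_LIBRARY_PATH"):
        return Path(env["LLM_ORC_LIBRARY_PATH"])
    # 2. Project-specific .llm-orc/.env setting.
    if dotenv.get("LLM_ORC_LIBRARY_PATH"):
        return Path(dotenv["LLM_ORC_LIBRARY_PATH"])
    # 3. Opt in to the package's local submodule.
    if env.get("LLM_ORC_LIBRARY_SOURCE") == "local":
        return Path("llm-orchestra-library")  # submodule location (illustrative)
    # 4. Auto-detect a library checkout in the current directory.
    candidate = Path(cwd) / "llm-orchestra-library" / "scripts" / "primitives"
    if candidate.is_dir():
        return candidate
    # 5. Nothing found: install no scripts.
    return None
```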
### XDG Base Directory Support
Configurations follow the XDG Base Directory specification:
- Global config: `~/.config/llm-orc/` (or `$XDG_CONFIG_HOME/llm-orc/`)
- Automatic migration from old `~/.llm-orc/` location
## Cost Optimization
- **Local models** (free) for systematic analysis tasks
- **Cloud models** (paid) reserved for strategic insights
- **Usage tracking** shows exactly what each analysis costs
- **Intelligent routing** based on task complexity
## Development
```bash
# Run tests
uv run pytest
# Run linting and formatting
uv run ruff check .
uv run ruff format --check .
# Type checking
uv run mypy src/llm_orc
```
## Research
This project includes comparative analysis of multi-agent vs single-agent approaches. See [docs/ensemble_vs_single_agent_analysis.md](docs/ensemble_vs_single_agent_analysis.md) for detailed findings.
The short version: orchestrated multi-agent systems maintain accuracy at scale where single-agent approaches collapse. Mixture-of-Agents ensembles of open-source models have matched or exceeded frontier model performance on established benchmarks. Cascade routing strategies have replicated frontier quality at a fraction of the cost. The evidence supports the architectural bet this project is built on.
## Philosophy
**Coordination over scale. Process over generation.**
The concentrated AI buildout optimizes for one thing: making the generative phase faster. But generation was never the bottleneck. Evaluation — the human judgment that determines whether output is correct, appropriate, and worth shipping — is the binding constraint. Faster generation without proportionally better evaluation just produces more to review.
LLM Orchestra takes the opposite position. An ensemble of smaller, specialized models — each focused on a bounded analytical task — produces structured output designed for human evaluation. The system decomposes problems so that each agent owns its contribution. A security reviewer finds vulnerabilities. A performance analyst identifies bottlenecks. A synthesis agent integrates their findings. The human evaluates a structured analysis, not raw generation.
Running models locally is a practical choice, not an ideological one. No per-query billing means you can run systematic analysis across an entire codebase without watching a meter. Data stays on your hardware. And the local/cloud mix lets you put cost where it matters — expensive models for strategic insight, free local models for systematic coverage.
## License
AGPL-3.0 License - see [LICENSE](LICENSE) for details. | text/markdown | null | Nathan Green <contact@nate.green> | null | null | null | agents, ai, ensemble, llm, multi-agent, orchestration | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.13.3",
"anthropic",
"click",
"cryptography",
"fastapi>=0.115.0",
"google-genai",
"mcp>=1.0.0",
"ollama",
"psutil>=5.9.0",
"pydantic>=2.12",
"python-dotenv>=1.1.1",
"pyyaml",
"requests>=2.32.4",
"rich>=13.0.0",
"uvicorn>=0.32.0",
"websockets",
"build; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pip-audit>=2.6.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-bdd>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff>=0.0.287; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/mrilikecoding/llm-orc",
"Repository, https://github.com/mrilikecoding/llm-orc",
"Bug Tracker, https://github.com/mrilikecoding/llm-orc/issues",
"Documentation, https://github.com/mrilikecoding/llm-orc#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:26:10.849109 | llm_orchestra-0.15.1.tar.gz | 1,061,409 | c5/71/0a5e1afe87b5e10267ff01a45b5985b6f33a96c051f1a93e2df0542830fa/llm_orchestra-0.15.1.tar.gz | source | sdist | null | false | c482e55c3e11049b75fddaddda676053 | fa0ade758661e38d9a9c68e3b999f96acb6ce75ac14093b3e9a1813feb94566d | c5710a5e1afe87b5e10267ff01a45b5985b6f33a96c051f1a93e2df0542830fa | AGPL-3.0-or-later | [
"LICENSE"
] | 197 |
2.4 | types-boto3-full | 1.42.54 | All-in-one type annotations for boto3 1.42.54 generated with mypy-boto3-builder 8.12.0 | <a id="types-boto3-full"></a>
# types-boto3-full
[](https://pypi.org/project/types-boto3-full/)
[](https://pypi.org/project/types-boto3-full/)
[](https://youtype.github.io/types_boto3_docs/)
[](https://pypistats.org/packages/types-boto3-full)

Type annotations for [boto3 1.42.54](https://pypi.org/project/boto3/)
compatible with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[types-boto3](https://pypi.org/project/types-boto3/) page and in
[types-boto3-full docs](https://youtype.github.io/types_boto3_docs/).
See how it helps you find and fix potential bugs:

- [types-boto3-full](#types-boto3-full)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.54' mypy-boto3-builder`
2. Select `boto3` AWS SDK.
3. Add all available services.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Auto-discover services` and select services you use in the current
project.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `types-boto3-full` to add type checking for `boto3` package.
```bash
# install type annotations
python -m pip install 'types-boto3[full]'
# or install annotations in sync with boto3 version
python -m pip install 'types-boto3[full,boto3]'
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
# uninstall types-boto3
python -m pip uninstall -y types-boto3-full types-boto3
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `types-boto3-full` in your environment:
```bash
python -m pip install 'types-boto3[full]'
```
Both type checking and code completion should now work. No explicit type
annotations required, write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [types-boto3-lite](https://pypi.org/project/types-boto3-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try to disable
> `PyCharm` type checker and use [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `types-boto3` with
> [types-boto3-lite](https://pypi.org/project/types-boto3-lite/):
```bash
python -m pip uninstall -y types-boto3
python -m pip install 'types-boto3-lite[full]'
```
Install `types-boto3-lite` in your environment:
```bash
python -m pip install 'types-boto3-lite[full]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `types-boto3-full` in your environment:
```bash
python -m pip install 'types-boto3[full]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
:ensure t
:hook (python-mode . (lambda ()
(require 'lsp-pyright)
(lsp))) ; or lsp-deferred
:init (when (executable-find "python3")
(setq lsp-pyright-python-executable-cmd "python3"))
)
```
- Make sure emacs uses the environment where you have installed
`types-boto3-full`
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `types-boto3-full` in your environment:
```bash
python -m pip install 'types-boto3[full]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `types-boto3-full` in your environment:
```bash
python -m pip install 'types-boto3[full]'
```
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `types-boto3-full` in your environment:
```bash
python -m pip install 'types-boto3[full]'
```
Optionally, you can install `types-boto3-full` to `typings` directory.
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is completely safe to use the `TYPE_CHECKING` flag to avoid a
`types-boto3-full` dependency in production. However, `pylint` has an issue
where it complains about undefined variables. To fix it, set all types
to `object` in non-`TYPE_CHECKING` mode.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from types_boto3_ec2 import EC2Client, EC2ServiceResource
from types_boto3_ec2.waiters import BundleTaskCompleteWaiter
from types_boto3_ec2.paginators import DescribeVolumesPaginator
else:
EC2Client = object
EC2ServiceResource = object
BundleTaskCompleteWaiter = object
DescribeVolumesPaginator = object
...
```
<a id="explicit-type-annotations"></a>
### Explicit type annotations
To speed up type checking and code completion, you can set types explicitly.
```python
import boto3
from boto3.session import Session
from types_boto3_ec2.client import EC2Client
from types_boto3_ec2.service_resource import EC2ServiceResource
from types_boto3_ec2.waiter import BundleTaskCompleteWaiter
from types_boto3_ec2.paginator import DescribeVolumesPaginator
session = Session(region_name="us-west-1")
ec2_client: EC2Client = boto3.client("ec2", region_name="us-west-1")
ec2_resource: EC2ServiceResource = session.resource("ec2")
bundle_task_complete_waiter: BundleTaskCompleteWaiter = ec2_client.get_waiter(
"bundle_task_complete"
)
describe_volumes_paginator: DescribeVolumesPaginator = ec2_client.get_paginator("describe_volumes")
```
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- Link to documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`types-boto3-full` version is the same as related `boto3` version and follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
[boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/),
  this package is built on top of his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
All services type annotations can be found in
[boto3 docs](https://youtype.github.io/types_boto3_docs/)
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report any bugs or request new features
in the [mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
| text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Typing :: Stubs Only"
] | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/types_boto3_docs/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T01:25:26.488417 | types_boto3_full-1.42.54.tar.gz | 8,419,254 | 5c/d2/28e1b8796b2284186f189b22a99bab67d09c20e36aa27ba61b2039a59483/types_boto3_full-1.42.54.tar.gz | source | sdist | null | false | fbb23e6245f526adb1c3d010fcc21d08 | d45290b7f60aea945822738116e49105b55f487bb8ecc64347c7241dfadf4952 | 5cd228e1b8796b2284186f189b22a99bab67d09c20e36aa27ba61b2039a59483 | MIT | [
"LICENSE"
] | 702 |
2.4 | obdtracker | 0.2.7 | Library to read data from http://www.aika168.com and other cloud services to track cars with GPS trackers installed | # GPS OBD2 tracker
This project is for Chinese GPS trackers for cars.
# Which GPS OBD2 trackers are supported?
This is a good question, and a proper answer is hard to find. The library was developed and tested on devices bought on
AliExpress that look like this one:

After reading the attached instructions, I found an error: they say to connect to the 3.tkstargps.net site, but the app is AIKA. What I found is that the device connects (after sending an SMS to it) to XX.aika168.com, and the communication between the mobile app and the server is unencrypted (no SSL). That was the invitation to create this library. Other GPS OBD2 trackers that work with the AIKA mobile app should work with this library too. How can you check that? Look at the pictures of the
mobile app that are usually shown on pages where the device is sold. If you see something like:

A map with a blue top bar, a reload button on the right, and a back arrow on the left: this is the AIKA app. Here is a link to the Google Play store: [AIKA app](https://play.google.com/store/apps/details?id=com.fw.gps.xinmai&hl=en_US).
# How to use this code?
It's an asynchronous library. To integrate with your code:
```python
import asyncio
from obdtracker import API, Location, DeviceStatus
async def main():
# Use the context manager to ensure connections are closed
async with API("http://www.aika168.com/") as tracker:
tracker.register_updater(Location(tracker))
tracker.register_updater(DeviceStatus(tracker))
# Login and update data
await tracker.login('<Your device id>', '<Your server password>')
await tracker.update()
# Access typed data
if tracker.location:
print(f"Position: {tracker.location.lat}, {tracker.location.lng}")
if tracker.status:
print(f"Battery: {tracker.status.battery}%")
print(f"Ignition is {'ON' if tracker.status.is_ignition_on else 'OFF'}")
print(f"Warning: {tracker.status.warning_type.name}")
# You can also send commands to the device across the network
# await tracker.send_command("DY") # Cut oil/electricity
if __name__ == "__main__":
asyncio.run(main())
```
### SMS Fallback Generator
If the GPS tracker doesn't connect to the internet, you can create standard SMS commands using `SMSCommandBuilder`:
```python
from obdtracker.sms import SMSCommandBuilder
sms = SMSCommandBuilder(password="123456")
message = sms.set_speed_alarm(80)
print(message) # "speed123456 080"
```
# NEW: list of UNSUPPORTED mobile applications
If you are going to buy a GPS OBD2 tracker, check this list to see which mobile apps are unsupported. In practice this means those mobile apps use a cloud service that this tool does not support:
[Unsupported cloud / OBD2 GPS Trackers](/doc/unsupported.md)
# Supported apps / cloud services:
1. aika168.com / www.aika168.com - developed using this cloud service - works
1. gpscj.net / gps18.com - not sure - possibly it is working
# What is next step?
Right now I'm working on:
- An app for capturing the protocol between the device and the gateway at XX.aika168.com
- Further expanding commands and mapping hardware protocols.
**Home Assistant Integration**: The Home Assistant custom component for this library is available at [maika](https://github.com/nyxnyx/maika).
| text/markdown | Grzegorz Szostak | Grzegorz Szostak <szostak.grzegorz@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | https://github.com/nyxnyx/gps_obd2_tracker | null | >=3.6 | [] | [] | [] | [
"httpx>=0.27.2"
] | [] | [] | [] | [
"Homepage, https://github.com/nyxnyx/gps_obd2_tracker"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T01:24:55.730341 | obdtracker-0.2.7.tar.gz | 26,260 | ba/ec/f355ac66489a695e3ac5ead3ec9930eecefdb0f85ae0482f1241efb0b05b/obdtracker-0.2.7.tar.gz | source | sdist | null | false | bd257c03db11daebf6b91862618b0ece | 8584764cfe8ab3a50de5a94174c0dc34e31095d3a0285df2ec1cf7013b963481 | baecf355ac66489a695e3ac5ead3ec9930eecefdb0f85ae0482f1241efb0b05b | null | [
"LICENSE"
] | 210 |
2.1 | agent-brain-cli | 6.0.3 | Agent Brain CLI - Command-line interface for managing AI agent memory and knowledge retrieval | # Agent Brain CLI
> Command-line interface for managing AI agent memory and knowledge retrieval with the **Agent Brain** RAG server.
**Agent Brain** (formerly doc-serve) is an intelligent document indexing and semantic search system designed to give AI agents long-term memory. This CLI provides a convenient way to manage your Agent Brain server and knowledge base.
[](https://pypi.org/project/agent-brain-cli/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Why Agent Brain?
AI agents need persistent memory to be truly useful. Agent Brain provides the retrieval infrastructure that enables context-aware, knowledge-grounded AI interactions.
### Search Capabilities
| Search Type | Description | Best For |
|-------------|-------------|----------|
| **Semantic Search** | Natural language queries using OpenAI embeddings | Conceptual questions, related content |
| **Keyword Search (BM25)** | Traditional keyword matching with TF-IDF ranking | Exact matches, technical terms |
| **Hybrid Search** | Combines vector + BM25 approaches | General-purpose queries |
| **GraphRAG** | Knowledge graph retrieval | Understanding relationships |
## Installation
```bash
pip install agent-brain-cli
```
## Quick Start
```bash
agent-brain init # Initialize project
agent-brain start # Start server
agent-brain index ./docs # Index documents
agent-brain query "search term"
```
> **Note**: The legacy command `doc-svr-ctl` is still available but deprecated. Please use `agent-brain` for new installations.
## Development Installation
```bash
cd agent-brain-cli
poetry install
```
## Usage
```bash
# Check server status
agent-brain status
# Search documents
agent-brain query "how to use python"
# Index documents from a folder
agent-brain index ./docs
# Reset/clear the index
agent-brain reset --yes
```
## Configuration
Set the server URL via environment variable:
```bash
export AGENT_BRAIN_URL=http://localhost:8000
```
Or use the `--url` flag:
```bash
agent-brain --url http://localhost:8000 status
```
> **Note**: The legacy environment variable `DOC_SERVE_URL` is still supported for backwards compatibility.
## Commands
### Server Management
| Command | Description |
|---------|-------------|
| `init` | Initialize project for Agent Brain (creates `.claude/doc-serve/`) |
| `start` | Start the Agent Brain server for current project |
| `stop` | Stop the running server |
| `list` | List all running Agent Brain instances |
| `status` | Check server health and indexing status |
### Data Management
| Command | Description |
|---------|-------------|
| `query` | Search indexed documents |
| `index` | Start indexing documents from a folder |
| `reset` | Clear all indexed documents |
## Options
All commands support:
- `--url` - Server URL (or `AGENT_BRAIN_URL` / `DOC_SERVE_URL` env var)
- `--json` - Output as JSON for scripting
## Example Workflow
```bash
# 1. Initialize a new project
cd my-project
agent-brain init
# 2. Start the server
agent-brain start
# 3. Index your documentation
agent-brain index ./docs ./src
# 4. Query your knowledge base
agent-brain query "How does authentication work?"
# 5. Stop when done
agent-brain stop
```
## Documentation
- [User Guide](https://github.com/SpillwaveSolutions/agent-brain/wiki/User-Guide) - Getting started and usage
- [Developer Guide](https://github.com/SpillwaveSolutions/agent-brain/wiki/Developer-Guide) - Contributing and development
- [API Reference](https://github.com/SpillwaveSolutions/agent-brain/wiki/API-Reference) - Full API documentation
## Release Information
- **Current Version**: See [pyproject.toml](./pyproject.toml)
- **Release Notes**: [GitHub Releases](https://github.com/SpillwaveSolutions/agent-brain/releases)
- **Changelog**: [Latest Release](https://github.com/SpillwaveSolutions/agent-brain/releases/latest)
## Related Packages
- [agent-brain-rag](https://pypi.org/project/agent-brain-rag/) - The RAG server that powers Agent Brain
## License
MIT
| text/markdown | Spillwave Solutions | null | null | null | MIT | agent-brain, rag, cli, ai-memory, llm-memory, semantic-search, ai-agent, claude-code, agent-memory | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/SpillwaveSolutions/agent-brain | null | <4.0,>=3.10 | [] | [] | [] | [
"agent-brain-rag<7.0.0,>=6.0.0",
"click<9.0.0,>=8.1.0",
"httpx<0.29.0,>=0.28.0",
"pydantic<3.0.0,>=2.10.0",
"pyyaml<7.0.0,>=6.0.0",
"rich<14.0.0,>=13.9.0"
] | [] | [] | [] | [
"Documentation, https://github.com/SpillwaveSolutions/agent-brain/wiki",
"Repository, https://github.com/SpillwaveSolutions/agent-brain"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:24:45.960971 | agent_brain_cli-6.0.3.tar.gz | 25,917 | 6d/e8/6dd42cd3015b63577d5f968ce20af13a7350da04943fecdac8988d2a48bc/agent_brain_cli-6.0.3.tar.gz | source | sdist | null | false | e15c8c8ca2b6bbb3f3c5b9255814e69b | b92879c2176787b27656a5b9875411915c1373a94d5d6608192e6076449cf497 | 6de86dd42cd3015b63577d5f968ce20af13a7350da04943fecdac8988d2a48bc | null | [] | 223 |
2.4 | csdoc | 0.1.4 | Documentation generator for Csound projects | # csdoc
`csdoc` is a documentation generator for Csound projects. It parses Csound code (`.csd`, `.orc`, `.inc`, etc.) and extracts JSDoc-style comment blocks to generate a beautiful, static HTML documentation site.
## Features
- **Standard Syntax**: Supports `opcode` (UDOs), `instr`, and `struct` definitions.
- **Recursive Parsing**: Automatically follows `#include` statements.
- **Rich Comments**: Supports `@param`, `@return`, and Markdown in descriptions.
- **Modern CLI**: Easy to use with `uv` or as a standalone tool.
## Installation
You can run `csdoc` directly using `uvx`:
```bash
uvx --from git+https://github.com/yourusername/csdoc csdoc build main.csd
```
Or install it in your environment:
```bash
pip install csdoc
```
## Usage
### Build Documentation
```bash
csdoc build <source_file> [options]
```
- `<source_file>`: The entry point of your Csound project (e.g., `main.csd`).
- `-o, --output <dir>`: The directory to save the generated site (default: `dist`).
### Export JSON
```bash
csdoc json <source_file>
```
## Documentation Format
`csdoc` looks for tags within `/** ... */` comment blocks immediately preceding a definition.
### Example
```csound
/**
* A gain control UDO.
*
* This opcode applies a simple linear gain to an audio signal.
*
* @param ain The input audio signal
* @param kgain The gain multiplier (0.0 to 1.0)
* @return aout The processed audio signal
*/
opcode Gain, a, ak
ain, kgain xin
xout ain * kgain
endop
```
### Supported Tags
| Tag | Description |
| --- | --- |
| `@param {type} name Description` | Documents a parameter or p-field. Type is optional. |
| `@return {type} Description` | Documents a return value. Type is optional. |
| `@returns` | Alias for `@return`. |
Descriptions support standard **Markdown** syntax, including code blocks, bold text, and lists.
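For a feel of how such a block decomposes, here is a rough sketch (in Python, not csdoc's actual parser) that splits a `/** ... */` block into a description and its tags:

```python
import re

def parse_doc_comment(block):
    """Split a /** ... */ block into (description, [(tag, text), ...])."""
    # Strip the comment delimiters, then the leading '*' on each line.
    body = re.sub(r"^/\*\*|\*/$", "", block.strip())
    lines = [re.sub(r"^\s*\*\s?", "", ln) for ln in body.splitlines()]
    desc, tags = [], []
    for ln in lines:
        m = re.match(r"@(param|returns?)\s+(.*)", ln)
        if m:
            # Normalize the @returns alias to @return.
            tag = "return" if m.group(1).startswith("return") else "param"
            tags.append((tag, m.group(2)))
        elif ln.strip():
            desc.append(ln.strip())
    return " ".join(desc), tags
```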
## Development
This project uses `uv` for dependency management.
```bash
# Install dependencies
uv sync
# Run the CLI
uv run csdoc --help
```
## License
MIT
| text/markdown | null | Steven Yi <stevenyi@gmail.com> | null | null | null | audio, csound, documentation, music | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"jinja2>=3.1.6",
"markdown>=3.10.1",
"typer>=0.21.1"
] | [] | [] | [] | [
"Homepage, https://github.com/kunstmusik/csdoc",
"Repository, https://github.com/kunstmusik/csdoc",
"Bug Tracker, https://github.com/kunstmusik/csdoc/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T01:24:28.288190 | csdoc-0.1.4-py3-none-any.whl | 12,771 | 90/96/b3e84f768104104988bd8e435e2e59fb23610ba1556b34e54372286e6c7f/csdoc-0.1.4-py3-none-any.whl | py3 | bdist_wheel | null | false | bf9357145889c8ed230ac96e5956119e | 73d8e0d78060bf0d743d11021094ce6df3b2cb5ccd8e85dea25d8d323f263ed9 | 9096b3e84f768104104988bd8e435e2e59fb23610ba1556b34e54372286e6c7f | MIT | [] | 209 |
2.3 | expandai | 0.16.0 | The official Python library for the Expand API | # Expand Python API library
<!-- prettier-ignore -->
[](https://pypi.org/project/expandai/)
The Expand Python library provides convenient access to the Expand REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [expand.ai](https://expand.ai/docs). The full API of this library can be found in [api.md](https://github.com/expandai/expandai-python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install expandai
```
## Usage
The full API of this library can be found in [api.md](https://github.com/expandai/expandai-python/tree/main/api.md).
```python
import os
from expandai import Expand
client = Expand(
api_key=os.environ.get("EXPAND_API_KEY"), # This is the default and can be omitted
)
response = client.fetch(
url="https://news.ycombinator.com",
)
print(response.data)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `EXPAND_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncExpand` instead of `Expand` and use `await` with each API call:
```python
import os
import asyncio
from expandai import AsyncExpand
client = AsyncExpand(
api_key=os.environ.get("EXPAND_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
response = await client.fetch(
url="https://news.ycombinator.com",
)
print(response.data)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install expandai[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from expandai import DefaultAioHttpClient
from expandai import AsyncExpand
async def main() -> None:
async with AsyncExpand(
api_key=os.environ.get("EXPAND_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
response = await client.fetch(
url="https://news.ycombinator.com",
)
print(response.data)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
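At runtime, `TypedDict` params are ordinary dictionaries — the typing exists purely for static analysis. A stdlib-only illustration (the `FetchParams` name here is hypothetical, not part of this library):

```python
from typing import TypedDict


class FetchParams(TypedDict, total=False):
    url: str
    browser_config: dict


# A TypedDict value is just a plain dict at runtime, so it serializes
# and passes through like any other dictionary; type checkers still
# flag unknown or mistyped keys at analysis time.
params: FetchParams = {"url": "https://news.ycombinator.com"}
print(type(params) is dict)  # → True
```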
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from expandai import Expand
client = Expand()
response = client.fetch(
url="url",
browser_config={},
)
print(response.browser_config)
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `expandai.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `expandai.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `expandai.APIError`.
```python
import expandai
from expandai import Expand
client = Expand()
try:
client.fetch(
url="https://news.ycombinator.com",
)
except expandai.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except expandai.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except expandai.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from expandai import Expand
# Configure the default for all requests:
client = Expand(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).fetch(
url="https://news.ycombinator.com",
)
```
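Conceptually, the delays between those retries follow a capped exponential backoff with jitter. A generic sketch of such a schedule (not this SDK's exact timing):

```python
import random


def backoff_delays(max_retries=2, base=0.5, cap=8.0):
    """Yield one delay (in seconds) per retry attempt."""
    for attempt in range(max_retries):
        # Double the delay each attempt, cap it, then add random jitter
        # so that many clients don't retry in lockstep.
        delay = min(cap, base * 2**attempt)
        yield delay * random.uniform(0.5, 1.0)


print(list(backoff_delays(max_retries=3)))
```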
### Timeouts
By default, requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx
from expandai import Expand
# Configure the default for all requests:
client = Expand(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = Expand(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).fetch(
url="https://news.ycombinator.com",
)
```
On timeout, an `APITimeoutError` is raised.
Note that requests that time out are [retried twice by default](https://github.com/expandai/expandai-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `EXPAND_LOG` to `info`.
```shell
$ export EXPAND_LOG=info
```
Or set it to `debug` for more verbose logging.
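Because the SDK uses the standard `logging` module, you can also configure it directly from Python instead of the environment variable (the `"expandai"` logger name is an assumption here, not confirmed by this README):

```python
import logging

# Keep the root logger quiet...
logging.basicConfig(level=logging.WARNING)
# ...then turn up verbosity for the SDK's own logger only (name assumed):
logging.getLogger("expandai").setLevel(logging.DEBUG)
```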
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from expandai import Expand
client = Expand()
response = client.with_raw_response.fetch(
url="https://news.ycombinator.com",
)
print(response.headers.get('X-My-Header'))
result = response.parse()  # get the object that `fetch()` would have returned
print(result.data)
```
These methods return an [`APIResponse`](https://github.com/expandai/expandai-python/tree/main/src/expandai/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/expandai/expandai-python/tree/main/src/expandai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.with_streaming_response.fetch(
url="https://news.ycombinator.com",
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, use `client.get`, `client.post`, and the other
HTTP verbs. Client options (such as retries) are respected when making these requests.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from expandai import Expand, DefaultHttpxClient
client = Expand(
# Or use the `EXPAND_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from expandai import Expand
with Expand() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/expandai/expandai-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import expandai
print(expandai.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/expandai/expandai-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Expand <support@expand.ai> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/expandai/expandai-python",
"Repository, https://github.com/expandai/expandai-python"
] | uv/0.9.13 {"installer":{"name":"uv","version":"0.9.13"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T01:24:20.218797 | expandai-0.16.0.tar.gz | 223,432 | 31/e9/ed139dcc16f0abb2bc433840e36e4312c6e1b51259d5e0cfb55f87958137/expandai-0.16.0.tar.gz | source | sdist | null | false | 96a81cc572b9a935c7bc6a22391e96d2 | 640b5b01e688965fac502d3baf8da20535c4cdd986e20c4653ef209fd04fbbd0 | 31e9ed139dcc16f0abb2bc433840e36e4312c6e1b51259d5e0cfb55f87958137 | null | [] | 305 |
2.1 | agent-brain-rag | 6.0.3 | Agent Brain RAG - Intelligent document indexing and semantic search server that gives AI agents long-term memory | # Agent Brain RAG Server
> **Agent Brain** (formerly doc-serve) is an intelligent document indexing and semantic search system designed to give AI agents long-term memory.
AI agents need persistent memory to be truly useful. Agent Brain provides the retrieval infrastructure that enables context-aware, knowledge-grounded AI interactions.
[](https://pypi.org/project/agent-brain-rag/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Installation
```bash
pip install agent-brain-rag
```
## Quick Start
1. Set environment variables:
```bash
export OPENAI_API_KEY=your-key
export ANTHROPIC_API_KEY=your-key
```
2. Start the server:
```bash
agent-brain-serve
```
The server will start at `http://127.0.0.1:8000`.
> **Note**: The legacy command `doc-serve` is still available but deprecated. Please use `agent-brain-serve` for new installations.
## Search Capabilities
Agent Brain provides multiple search strategies to match your retrieval needs:
| Search Type | Description | Best For |
|-------------|-------------|----------|
| **Semantic Search** | Natural language queries using OpenAI embeddings (`text-embedding-3-large`) | Conceptual questions, finding related content |
| **Keyword Search (BM25)** | Traditional keyword matching with TF-IDF ranking | Exact matches, technical terms, code identifiers |
| **Hybrid Search** | Combines vector + BM25 for best of both approaches | General-purpose queries, balanced recall/precision |
| **GraphRAG** | Knowledge graph-based retrieval for relationship-aware queries | Understanding connections, multi-hop reasoning |
## Features
- **Document Indexing**: Load and index documents from folders (PDF, Markdown, TXT, DOCX, HTML)
- **AST-Aware Code Ingestion**: Smart parsing for Python, TypeScript, JavaScript, Java, Go, Rust, C, C++
- **Multi-Strategy Retrieval**: Semantic, keyword, hybrid, and graph-based search
- **OpenAI Embeddings**: Uses `text-embedding-3-large` for high-quality embeddings
- **Claude Summarization**: AI-powered code summaries for better context
- **Chroma Vector Store**: Persistent, thread-safe vector database
- **FastAPI**: Modern, high-performance REST API with OpenAPI documentation
## Prerequisites
- Python 3.10+
- OpenAI API key (for embeddings)
- Anthropic API key (for summarization)
## GraphRAG Configuration (Feature 113)
Agent Brain supports optional GraphRAG (Graph-based Retrieval-Augmented Generation) for enhanced relationship-aware queries.
### Enabling GraphRAG
Set the environment variable to enable graph indexing:
```bash
export ENABLE_GRAPH_INDEX=true
```
### Configuration Options
| Variable | Default | Description |
|----------|---------|-------------|
| `ENABLE_GRAPH_INDEX` | `false` | Enable/disable GraphRAG features |
| `GRAPH_STORE_TYPE` | `simple` | Graph backend: `simple` (JSON) or `kuzu` (embedded DB) |
| `GRAPH_MAX_TRIPLETS_PER_CHUNK` | `10` | Maximum entities to extract per document chunk |
| `GRAPH_USE_CODE_METADATA` | `true` | Extract relationships from code AST metadata |
| `GRAPH_USE_LLM_EXTRACTION` | `true` | Use LLM for entity extraction from documents |
| `GRAPH_TRAVERSAL_DEPTH` | `2` | Default traversal depth for graph queries |
### Query Modes
With GraphRAG enabled, you have access to additional query modes:
- **`graph`**: Query using only the knowledge graph (entity relationships)
- **`multi`**: Combines vector search, BM25, and graph results using RRF fusion
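Reciprocal Rank Fusion scores each document by summing `1 / (k + rank)` across the individual result lists, favoring documents that rank well in several retrievers at once. A minimal standalone sketch of the idea (not Agent Brain's internal implementation; document names are made up):

```python
def rrf_fuse(ranked_lists, k=60):
    """Merge ranked ID lists; score(d) = sum of 1 / (k + rank) over lists."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


vector = ["auth.md", "login.md", "tokens.md"]  # semantic results
bm25 = ["login.md", "auth.md", "readme.md"]    # keyword results
graph = ["tokens.md", "login.md"]              # graph results
print(rrf_fuse([vector, bm25, graph]))
# "login.md" wins: it appears near the top of all three lists
```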
### Example: Graph Query
```bash
# CLI
agent-brain query "authentication service" --mode graph
# API
curl -X POST http://localhost:8000/query \
-H "Content-Type: application/json" \
-d '{"query": "authentication service", "mode": "graph", "top_k": 10}'
```
### Example: Multi-Mode Query
```bash
# CLI
agent-brain query "user login flow" --mode multi
# API
curl -X POST http://localhost:8000/query \
-H "Content-Type: application/json" \
-d '{"query": "user login flow", "mode": "multi", "top_k": 5}'
```
### Rebuilding the Graph Index
To rebuild only the graph index without re-indexing documents:
```bash
curl -X POST "http://localhost:8000/index?rebuild_graph=true" \
-H "Content-Type: application/json" \
-d '{"folder_path": "."}'
```
### Optional Dependencies
For enhanced GraphRAG features, install optional dependency groups:
```bash
# For Kuzu graph store (production workloads)
poetry install --with graphrag-kuzu
# For enhanced entity extraction
poetry install --with graphrag
```
## Two-Stage Reranking (Feature 123)
Agent Brain supports optional two-stage retrieval with reranking for improved search precision. When enabled, the system:
1. **Stage 1**: Retrieves more candidates than requested (e.g., 50 candidates for top_k=5)
2. **Stage 2**: Reranks candidates using a cross-encoder model for more accurate relevance scoring
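The two stages can be sketched as follows — `retrieve` and `score` stand in for the vector store and cross-encoder, and all names here are hypothetical, not the package's actual API:

```python
def two_stage_search(query, retrieve, score, top_k=5,
                     multiplier=10, max_candidates=100):
    # Stage 1: over-retrieve (mirrors RERANKER_TOP_K_MULTIPLIER and
    # RERANKER_MAX_CANDIDATES), remembering the original 1-indexed ranks.
    n = min(top_k * multiplier, max_candidates)
    candidates = list(enumerate(retrieve(query, n), start=1))
    # Stage 2: rescore every candidate with the slower, more accurate
    # scorer and keep only the best top_k.
    reranked = sorted(candidates, key=lambda c: score(query, c[1]), reverse=True)
    return [{"doc": doc, "original_rank": rank, "rerank_score": score(query, doc)}
            for rank, doc in reranked[:top_k]]


# Toy retriever and scorer for illustration:
docs = ["setup", "auth guide", "auth api", "faq"]
results = two_stage_search(
    "auth",
    retrieve=lambda q, n: docs[:n],
    score=lambda q, d: 1.0 if q in d else 0.0,
    top_k=2,
)
print(results)
```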
### Enabling Reranking
Set the following environment variables:
```bash
# Enable two-stage reranking (default: false)
ENABLE_RERANKING=true
# Choose provider (default: sentence-transformers)
RERANKER_PROVIDER=sentence-transformers # or "ollama"
# Choose model (default: cross-encoder/ms-marco-MiniLM-L-6-v2)
RERANKER_MODEL=cross-encoder/ms-marco-MiniLM-L-6-v2
# Stage 1 retrieval multiplier (default: 10)
RERANKER_TOP_K_MULTIPLIER=10
# Maximum candidates for Stage 1 (default: 100)
RERANKER_MAX_CANDIDATES=100
# Batch size for reranking inference (default: 32)
RERANKER_BATCH_SIZE=32
```
### Provider Options
| Provider | Model | Latency | Description |
|----------|-------|---------|-------------|
| sentence-transformers | cross-encoder/ms-marco-MiniLM-L-6-v2 | ~50ms | Recommended. Fast, accurate cross-encoder. |
| sentence-transformers | cross-encoder/ms-marco-MiniLM-L-12-v2 | ~100ms | Slower but more accurate. |
| ollama | llama3.2:1b | ~500ms | Fully local, no HuggingFace download. |
### YAML Configuration
You can also configure reranking in `config.yaml`:
```yaml
reranker:
provider: sentence-transformers
model: cross-encoder/ms-marco-MiniLM-L-6-v2
params:
batch_size: 32
```
### Graceful Degradation
If the reranker fails (model unavailable, timeout, etc.), the system automatically falls back to Stage 1 results. This ensures queries never fail due to reranking issues.
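That fallback behavior amounts to a guard around Stage 2 — a sketch with hypothetical function names:

```python
def rerank_with_fallback(query, candidates, reranker, top_k):
    """Try the reranker; on any failure, return the Stage 1 order unchanged."""
    try:
        return reranker(query, candidates)[:top_k]
    except Exception:
        # Model unavailable, timeout, etc.: degrade to Stage 1 results.
        return candidates[:top_k]


def broken_reranker(query, candidates):
    raise RuntimeError("model unavailable")


print(rerank_with_fallback("q", ["a", "b", "c"], broken_reranker, top_k=2))
# falls back to the Stage 1 ordering: ['a', 'b']
```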
### Response Fields
When reranking is enabled, query results include additional fields:
- `rerank_score`: The cross-encoder relevance score
- `original_rank`: The position before reranking (1-indexed)
Example response:
```json
{
"results": [
{
"text": "Document content...",
"source": "docs/guide.md",
"score": 0.95,
"rerank_score": 0.95,
"original_rank": 5,
"chunk_id": "chunk_abc123"
}
]
}
```
## Development Installation
```bash
cd agent-brain-server
poetry install
```
### Configuration
Copy the environment template and configure:
```bash
cp ../.env.example .env
# Edit .env with your API keys
```
Required environment variables:
- `OPENAI_API_KEY`: Your OpenAI API key for embeddings
- `ANTHROPIC_API_KEY`: Your Anthropic API key for summarization
### Running the Server
```bash
# Development mode
poetry run uvicorn agent_brain_server.api.main:app --reload
# Or use the entry point
poetry run agent-brain-serve
```
### API Documentation
Once running, visit:
- Swagger UI: http://127.0.0.1:8000/docs
- ReDoc: http://127.0.0.1:8000/redoc
- OpenAPI JSON: http://127.0.0.1:8000/openapi.json
## API Endpoints
### Health
- `GET /health` - Server health status
- `GET /health/status` - Detailed indexing status
### Indexing
- `POST /index` - Start indexing documents from a folder
- `POST /index/add` - Add documents to existing index
- `DELETE /index` - Reset the index
### Querying
- `POST /query` - Semantic search query
- `GET /query/count` - Get indexed document count
## Example Usage
### Index Documents
```bash
curl -X POST http://localhost:8000/index \
-H "Content-Type: application/json" \
-d '{"folder_path": "/path/to/docs"}'
```
### Query Documents
```bash
curl -X POST http://localhost:8000/query \
-H "Content-Type: application/json" \
-d '{"query": "How do I configure authentication?", "top_k": 5}'
```
## Architecture
```
agent_brain_server/
├── api/
│ ├── main.py # FastAPI application
│ └── routers/ # Endpoint handlers
├── config/
│ └── settings.py # Configuration management
├── models/ # Pydantic request/response models
├── indexing/
│ ├── document_loader.py # Document loading
│ ├── chunking.py # Text chunking
│ └── embedding.py # Embedding generation
├── services/
│ ├── indexing_service.py # Indexing orchestration
│ └── query_service.py # Query execution
└── storage/
└── vector_store.py # Chroma vector store
```
## Development
### Running Tests
```bash
poetry run pytest
```
### Code Formatting
```bash
poetry run black agent_brain_server/
poetry run ruff check agent_brain_server/
```
### Type Checking
```bash
poetry run mypy agent_brain_server/
```
## Documentation
- [User Guide](https://github.com/SpillwaveSolutions/agent-brain/wiki/User-Guide) - Getting started and usage
- [Developer Guide](https://github.com/SpillwaveSolutions/agent-brain/wiki/Developer-Guide) - Contributing and development
- [API Reference](https://github.com/SpillwaveSolutions/agent-brain/wiki/API-Reference) - Full API documentation
## Release Information
- **Current Version**: See [pyproject.toml](./pyproject.toml)
- **Release Notes**: [GitHub Releases](https://github.com/SpillwaveSolutions/agent-brain/releases)
- **Changelog**: [Latest Release](https://github.com/SpillwaveSolutions/agent-brain/releases/latest)
## Related Packages
- [agent-brain-cli](https://pypi.org/project/agent-brain-cli/) - Command-line interface for Agent Brain
## License
MIT
| text/markdown | Spillwave Solutions | null | null | null | MIT | agent-brain, rag, semantic-search, ai-memory, llm-memory, documentation, indexing, llama-index, chromadb, ai-agent, claude-code, agent-memory | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Documentation",
"Topic :: Text Processing :: Indexing"
] | [] | https://github.com/SpillwaveSolutions/agent-brain | null | <4.0,>=3.10 | [] | [] | [] | [
"anthropic<0.41.0,>=0.40.0",
"asyncpg<0.30.0,>=0.29.0; extra == \"postgres\"",
"chromadb<0.6.0,>=0.5.0",
"click<9.0.0,>=8.1.0",
"cohere<6.0.0,>=5.0.0",
"fastapi<0.116.0,>=0.115.0",
"google-generativeai<0.9.0,>=0.8.0",
"langextract<2.0.0,>=1.0.0; extra == \"graphrag\" or extra == \"graphrag-all\"",
"llama-index-core<0.15.0,>=0.14.0",
"llama-index-embeddings-openai<0.6.0,>=0.5.0",
"llama-index-graph-stores-kuzu<0.10.0,>=0.9.0; extra == \"graphrag-kuzu\" or extra == \"graphrag-all\"",
"llama-index-llms-openai<0.7.0,>=0.6.12",
"llama-index-readers-file<0.6.0,>=0.5.0",
"llama-index-retrievers-bm25<0.7.0,>=0.6.0",
"openai<2.0.0,>=1.57.0",
"pydantic<3.0.0,>=2.10.0",
"pydantic-settings<3.0.0,>=2.6.0",
"python-dotenv<2.0.0,>=1.0.0",
"pyyaml<7.0.0,>=6.0.0",
"rank-bm25<0.3.0,>=0.2.2",
"sentence-transformers<4.0.0,>=3.4.0",
"sqlalchemy[asyncio]<3.0.0,>=2.0.0; extra == \"postgres\"",
"tiktoken<0.9.0,>=0.8.0",
"tree-sitter-language-pack<0.8.0,>=0.7.3",
"uvicorn[standard]<0.33.0,>=0.32.0"
] | [] | [] | [] | [
"Documentation, https://github.com/SpillwaveSolutions/agent-brain/wiki",
"Repository, https://github.com/SpillwaveSolutions/agent-brain"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:24:03.569259 | agent_brain_rag-6.0.3.tar.gz | 118,791 | 4a/3a/d6e73f51a8ab053fac491b83d1263543841c2acb27e33b11ff0c681fbf35/agent_brain_rag-6.0.3.tar.gz | source | sdist | null | false | 6c1e74242468c7333b0ccc3764385028 | 0efde724dc97811752818b84927e63c7d57358f244543cedd565c0e30147a6a5 | 4a3ad6e73f51a8ab053fac491b83d1263543841c2acb27e33b11ff0c681fbf35 | null | [] | 236 |
2.4 | cribl-control-plane | 0.6.0b36 | Python Client SDK Generated by Speakeasy. | # cribl_control_plane_sdk_python
The Cribl Python SDK for the control plane provides operational control over Cribl resources and helps streamline the process of integrating with Cribl.
In addition to the usage examples in this repository, the Cribl documentation includes [code examples for common use cases](https://docs.cribl.io/cribl-as-code/code-examples).
Complementary API reference documentation is available at [https://docs.cribl.io/cribl-as-code/api/](https://docs.cribl.io/cribl-as-code/api-reference/control-plane/cribl-core/). Product documentation is available at https://docs.cribl.io.
> [!IMPORTANT]
> **Preview Feature**
> The Cribl SDKs are Preview features that are still being developed. We do not recommend using them in a production environment, because the features might not be fully tested or optimized for performance, and related documentation could be incomplete.
>
> Please continue to submit feedback through normal Cribl support channels, but assistance might be limited while the features remain in Preview.
<!-- No Summary [summary] -->
<!-- Start Table of Contents [toc] -->
## Table of Contents
<!-- $toc-max-depth=2 -->
* [cribl_control_plane_sdk_python](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#criblcontrolplanesdkpython)
* [SDK Installation](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#sdk-installation)
* [IDE Support](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#ide-support)
* [SDK Example Usage](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#sdk-example-usage)
* [Authentication](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#authentication)
* [Available Resources and Operations](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#available-resources-and-operations)
* [Json Streaming](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#json-streaming)
* [File uploads](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#file-uploads)
* [Retries](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#retries)
* [Error Handling](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#error-handling)
* [Custom HTTP Client](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#custom-http-client)
* [Resource Management](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#resource-management)
* [Debugging](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#debugging)
<!-- End Table of Contents [toc] -->
<!-- Start SDK Installation [installation] -->
## SDK Installation
> [!NOTE]
> **Python version upgrade policy**
>
> Once a Python version reaches its [official end-of-life date](https://devguide.python.org/versions/), a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum Python version supported by the SDK will be updated.
The SDK can be installed with *uv*, *pip*, or *poetry* package managers.
### uv
*uv* is a fast Python package installer and resolver, designed as a drop-in replacement for pip and pip-tools. It's recommended for its speed and modern Python tooling capabilities.
```bash
uv add cribl-control-plane
```
### PIP
*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install cribl-control-plane
```
### Poetry
*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add cribl-control-plane
```
### Shell and script usage with `uv`
You can use this SDK in a Python shell with [uv](https://docs.astral.sh/uv/) and the `uvx` command that comes with it like so:
```shell
uvx --from cribl-control-plane python
```
It's also possible to write a standalone Python script without needing to set up a whole project like so:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "cribl-control-plane",
# ]
# ///
from cribl_control_plane import CriblControlPlane
sdk = CriblControlPlane(
# SDK arguments
)
# Rest of script here...
```
Once that is saved to a file, you can run it with `uv run script.py` where
`script.py` can be replaced with the actual file name.
<!-- End SDK Installation [installation] -->
<!-- Start IDE Support [idesupport] -->
## IDE Support
### PyCharm
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.
- [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/)
<!-- End IDE Support [idesupport] -->
## SDK Example Usage
### Example
```python
# Synchronous Example
from cribl_control_plane import CriblControlPlane, models
import os
with CriblControlPlane(
server_url="https://api.example.com",
security=models.Security(
bearer_auth=os.getenv("CRIBLCONTROLPLANE_BEARER_AUTH", ""),
),
) as ccp_client:
# Check server health
health = ccp_client.health.get()
print(f"Server health: {health}")
worker_group_id = "my-worker-group"
group_url = f"https://api.example.com/m/{worker_group_id}"
# List all sources
sources = ccp_client.sources.list(server_url=group_url)
print(f"Found {len(sources.items or [])} sources")
# List all destinations
destinations = ccp_client.destinations.list(server_url=group_url)
print(f"Found {len(destinations.items or [])} destinations")
# List all pipelines
pipelines = ccp_client.pipelines.list(server_url=group_url)
print(f"Found {len(pipelines.items or [])} pipelines")
```
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from cribl_control_plane import CriblControlPlane, models
import os
async def main():
async with CriblControlPlane(
server_url="https://api.example.com",
security=models.Security(
bearer_auth=os.getenv("CRIBLCONTROLPLANE_BEARER_AUTH", ""),
),
) as ccp_client:
# Check server health
health = await ccp_client.health.get_async()
print(f"Server health: {health}")
worker_group_id = "my-worker-group"
group_url = f"https://api.example.com/m/{worker_group_id}"
# List all sources
sources = await ccp_client.sources.list_async(server_url=group_url)
print(f"Found {len(sources.items or [])} sources")
asyncio.run(main())
```
> [!NOTE]
> Additional examples demonstrating various SDK features and use cases can be found in the [`examples`](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/./examples) directory.
<!-- No End SDK Example Usage [usage] -->
## Authentication
Except for the `health.get` and `auth.tokens.get` methods, all Cribl SDK requests require you to authenticate with a Bearer token. You must include a valid Bearer token in the configuration when initializing your SDK client. The Bearer token verifies your identity and ensures secure access to the requested resources. The SDK automatically manages the `Authorization` header for subsequent requests once properly authenticated.
For information about Bearer token expiration, see [Token Management](https://docs.cribl.io/cribl-as-code/sdks-auth/#sdks-token-mgmt) in the Cribl as Code documentation.
**Authentication happens once, during SDK initialization**. After you initialize the SDK client with authentication as shown in the [authentication examples](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#authentication-examples), the SDK handles authentication for all subsequent API calls automatically; you do not need to include authentication parameters in individual requests. The [SDK Example Usage](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/#sdk-example-usage) section shows how to initialize the SDK and make API calls, but once your client is initialized as shown in the authentication examples, you only need the API method calls themselves, without re-initializing.
### Per-Client Security Schemes
This SDK supports the following security schemes globally:
| Name | Type | Scheme | Environment Variable |
| -------------- | ------ | ------------ | -------------------------------- |
| `bearer_auth` | http | HTTP Bearer | `CRIBLCONTROLPLANE_BEARER_AUTH` |
| `client_oauth` | oauth2 | OAuth2 token | `CRIBLCONTROLPLANE_CLIENT_OAUTH` |
To configure authentication on Cribl.Cloud and in hybrid deployments, use the `client_oauth` security scheme. The SDK uses the OAuth credentials that you provide to obtain a Bearer token and refresh the token within its expiration window using the standard OAuth2 flow.
In on-prem deployments, use the `bearer_auth` security scheme. The SDK uses the username/password credentials that you provide to obtain a Bearer token. Automatically refreshing the Bearer token within its expiration window requires a callback function as shown in the [On-Prem Authentication Example](https://github.com/criblio/cribl_control_plane_sdk_python/blob/main/examples/example_onprem_auth.py).
Set the security scheme through the `security` optional parameter when initializing the SDK client instance. The SDK uses the selected scheme by default to authenticate with the API for all operations that support it.
### Authentication Examples
The [Cribl.Cloud and Hybrid Authentication Example](https://github.com/criblio/cribl_control_plane_sdk_python/blob/main/examples/example_cloud_auth.py) demonstrates how to configure authentication on Cribl.Cloud and in hybrid deployments. To obtain the Client ID and Client Secret needed to initialize with the `client_oauth` security scheme, follow the [instructions for creating an API Credential](https://docs.cribl.io/cribl-as-code/sdks-auth/#sdks-auth-cloud) in the Cribl as Code documentation.
The [On-Prem Authentication Example](https://github.com/criblio/cribl_control_plane_sdk_python/blob/main/examples/example_onprem_auth.py) demonstrates how to configure authentication in on-prem deployments using your username and password.
<!-- No Authentication [security] -->
<!-- Start Available Resources and Operations [operations] -->
## Available Resources and Operations
<details open>
<summary>Available methods</summary>
### [Auth.Tokens](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/tokens/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/tokens/README.md#get) - Log in and fetch an authentication token
### [Collectors](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/collectorssdk/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/collectorssdk/README.md#create) - Create a Collector
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/collectorssdk/README.md#list) - List all Collectors
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/collectorssdk/README.md#delete) - Delete a Collector
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/collectorssdk/README.md#get) - Get a Collector
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/collectorssdk/README.md#update) - Update a Collector
### [DatabaseConnections](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/databaseconnections/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/databaseconnections/README.md#create) - Create Database Connection
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/databaseconnections/README.md#delete) - Delete a Database Connection
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/databaseconnections/README.md#get) - Get a Database Connection
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/databaseconnections/README.md#update) - Update a Database Connection
### [Destinations](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinations/README.md)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinations/README.md#list) - List all Destinations
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinations/README.md#create) - Create a Destination
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinations/README.md#get) - Get a Destination
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinations/README.md#update) - Update a Destination
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinations/README.md#delete) - Delete a Destination
#### [Destinations.Pq](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinationspq/README.md)
* [clear](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinationspq/README.md#clear) - Clear the persistent queue for a Destination
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinationspq/README.md#get) - Get information about the latest job to clear the persistent queue for a Destination
#### [Destinations.Samples](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/samples/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/samples/README.md#get) - Get sample event data for a Destination
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/samples/README.md#create) - Send sample event data to a Destination
#### [Destinations.Statuses](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinationsstatuses/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinationsstatuses/README.md#get) - Get the status of a Destination
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/destinationsstatuses/README.md#list) - List the status of all Destinations
### [Functions](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/functions/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/functions/README.md#get) - Get a Function
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/functions/README.md#list) - List all Functions
### [Groups](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/groupssdk/README.md)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/groupssdk/README.md#list) - List all Worker Groups, Outpost Groups, or Edge Fleets for the specified Cribl product
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/groupssdk/README.md#create) - Create a Worker Group, Outpost Group, or Edge Fleet for the specified Cribl product
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/groupssdk/README.md#get) - Get a Worker Group, Outpost Group, or Edge Fleet
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/groupssdk/README.md#update) - Update a Worker Group, Outpost Group, or Edge Fleet
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/groupssdk/README.md#delete) - Delete a Worker Group, Outpost Group, or Edge Fleet
* [deploy](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/groupssdk/README.md#deploy) - Deploy commits to a Worker Group, Outpost Group, or Edge Fleet
#### [Groups.Acl](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/acl/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/acl/README.md#get) - Get the Access Control List for a Worker Group, Outpost Group, or Edge Fleet
##### [Groups.Acl.Teams](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/teams/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/teams/README.md#get) - Get the Access Control List for teams with permissions on a Worker Group, Outpost Group, or Edge Fleet for the specified Cribl product
#### [Groups.Configs.Versions](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/configsversions/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/configsversions/README.md#get) - Get the configuration version for a Worker Group, Outpost Group, or Edge Fleet
### [Health](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/health/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/health/README.md#get) - Retrieve health status of the server
### [LakeDatasets](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/lakedatasets/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/lakedatasets/README.md#create) - Create a Lake Dataset (Cribl.Cloud only)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/lakedatasets/README.md#list) - List all Lake Datasets (Cribl.Cloud only)
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/lakedatasets/README.md#delete) - Delete a Lake Dataset (Cribl.Cloud only)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/lakedatasets/README.md#get) - Get a Lake Dataset (Cribl.Cloud only)
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/lakedatasets/README.md#update) - Update a Lake Dataset (Cribl.Cloud only)
### [Nodes](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/nodes/README.md)
* [count](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/nodes/README.md#count) - Get a count of Worker or Edge Nodes
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/nodes/README.md#get) - Get detailed metadata for a Worker or Edge Node
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/nodes/README.md#list) - Get detailed metadata for Worker or Edge Nodes
* [restart](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/nodes/README.md#restart) - Restart Worker or Edge Nodes
#### [Nodes.Summaries](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/summaries/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/summaries/README.md#get) - Get a summary of the Distributed deployment for a specific product
### [Packs](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packs/README.md)
* [install](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packs/README.md#install) - Install a Pack
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packs/README.md#list) - List all Packs
* [upload](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packs/README.md#upload) - Upload a Pack file
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packs/README.md#delete) - Uninstall a Pack
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packs/README.md#get) - Get a Pack
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packs/README.md#update) - Upgrade a Pack
#### [Packs.Destinations](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinations/README.md)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinations/README.md#list) - List all Destinations within a Pack
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinations/README.md#create) - Create a Destination within a Pack
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinations/README.md#get) - Get a Destination within a Pack
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinations/README.md#update) - Update a Destination within a Pack
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinations/README.md#delete) - Delete a Destination within a Pack
##### [Packs.Destinations.Pq](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinationspq/README.md)
* [clear](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinationspq/README.md#clear) - Clear the persistent queue for a Destination within a Pack
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinationspq/README.md#get) - Get information about the latest job to clear the persistent queue for a Destination within a Pack
##### [Packs.Destinations.Samples](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssamples/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssamples/README.md#get) - Get sample event data for a Destination within a Pack
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssamples/README.md#create) - Send sample event data to a Destination within a Pack
##### [Packs.Destinations.Statuses](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinationsstatuses/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinationsstatuses/README.md#get) - Get the status of a Destination within a Pack
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsdestinationsstatuses/README.md#list) - List the status of all Destinations within a Pack
#### [Packs.Pipelines](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packspipelines/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packspipelines/README.md#create) - Create a Pipeline within a Pack
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packspipelines/README.md#list) - List all Pipelines within a Pack
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packspipelines/README.md#delete) - Delete a Pipeline within a Pack
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packspipelines/README.md#get) - Get a Pipeline within a Pack
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packspipelines/README.md#update) - Update a Pipeline within a Pack
#### [Packs.Routes](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsroutes/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsroutes/README.md#get) - Get a Routing table within a Pack
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsroutes/README.md#update) - Update a Route within a Pack
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsroutes/README.md#list) - List all Routes within a Pack
* [append](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packsroutes/README.md#append) - Add a Route to the end of the Routing table within a Pack
#### [Packs.Sources](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssources/README.md)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssources/README.md#list) - List all Sources within a Pack
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssources/README.md#create) - Create a Source within a Pack
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssources/README.md#get) - Get a Source within a Pack
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssources/README.md#update) - Update a Source within a Pack
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssources/README.md#delete) - Delete a Source within a Pack
##### [Packs.Sources.HecTokens](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packshectokens/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packshectokens/README.md#create) - Add an HEC token and optional metadata to a Splunk HEC Source within a Pack
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packshectokens/README.md#update) - Update metadata for an HEC token for a Splunk HEC Source within a Pack
##### [Packs.Sources.Pq](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssourcespq/README.md)
* [clear](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssourcespq/README.md#clear) - Clear the persistent queue for a Source within a Pack
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssourcespq/README.md#get) - Get information about the latest job to clear the persistent queue for a Source within a Pack
##### [Packs.Sources.Statuses](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssourcesstatuses/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssourcesstatuses/README.md#get) - Get the status of a Source within a Pack
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/packssourcesstatuses/README.md#list) - List the status of all Sources within a Pack
### [Pipelines](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/pipelines/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/pipelines/README.md#create) - Create a Pipeline
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/pipelines/README.md#list) - List all Pipelines
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/pipelines/README.md#delete) - Delete a Pipeline
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/pipelines/README.md#get) - Get a Pipeline
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/pipelines/README.md#update) - Update a Pipeline
### [Routes](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/routessdk/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/routessdk/README.md#get) - Get a Routing table
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/routessdk/README.md#update) - Update a Route
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/routessdk/README.md#list) - List all Routes
* [append](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/routessdk/README.md#append) - Add a Route to the end of the Routing table
### [Sources](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sources/README.md)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sources/README.md#list) - List all Sources
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sources/README.md#create) - Create a Source
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sources/README.md#get) - Get a Source
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sources/README.md#update) - Update a Source
* [delete](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sources/README.md#delete) - Delete a Source
#### [Sources.HecTokens](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/hectokens/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/hectokens/README.md#create) - Add an HEC token and optional metadata to a Splunk HEC Source
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/hectokens/README.md#update) - Update metadata for an HEC token for a Splunk HEC Source
#### [Sources.Pq](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sourcespq/README.md)
* [clear](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sourcespq/README.md#clear) - Clear the persistent queue for a Source
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sourcespq/README.md#get) - Get information about the latest job to clear the persistent queue for a Source
#### [Sources.Statuses](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sourcesstatuses/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sourcesstatuses/README.md#get) - Get the status of a Source
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/sourcesstatuses/README.md#list) - List the status of all Sources
### [System.Captures](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/captures/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/captures/README.md#create) - Capture live incoming data
### [System.Settings](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/settings/README.md)
* [restart](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/settings/README.md#restart) - Restart the Cribl server
#### [System.Settings.Cribl](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/cribl/README.md)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/cribl/README.md#list) - Get Cribl system settings
* [update](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/cribl/README.md#update) - Update Cribl system settings
### [Versions.Branches](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/branches/README.md)
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/branches/README.md#list) - List all branches in the Git repository used for Cribl configuration
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/branches/README.md#get) - Get the name of the Git branch that the Cribl configuration is checked out to
### [Versions.Commits](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md)
* [create](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md#create) - Create a new commit for pending changes to the Cribl configuration
* [diff](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md#diff) - Get the diff for a commit
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md#list) - List the commit history
* [push](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md#push) - Push local commits to the remote repository
* [revert](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md#revert) - Revert a commit in the local repository
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md#get) - Get the diff and log message for a commit
* [undo](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/commits/README.md#undo) - Discard uncommitted (staged) changes
#### [Versions.Commits.Files](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/files/README.md)
* [count](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/files/README.md#count) - Get a count of files that changed since a commit
* [list](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/files/README.md#list) - Get the names and statuses of files that changed since a commit
### [Versions.Configs](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/versionsconfigs/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/versionsconfigs/README.md#get) - Get the configuration and status for the Git integration
### [Versions.Statuses](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/versionsstatuses/README.md)
* [get](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/docs/sdks/versionsstatuses/README.md#get) - Get the status of the current working tree
</details>
<!-- End Available Resources and Operations [operations] -->
<!-- Start Json Streaming [jsonl] -->
## JSON Streaming
Certain operations stream JSON Streaming content ([jsonl][jsonl-format] / [x-ndjson][x-ndjson]). These operations expose the stream as a [generator][generator] that
can be consumed with a simple `for` loop. The loop terminates when the server
has no more events to send and closes the underlying connection.
The stream is also a [context manager][context-manager]: when used with the `with` statement, it closes the
underlying connection once the context is exited.
```python
from cribl_control_plane import CriblControlPlane, models
import os
with CriblControlPlane(
server_url="https://api.example.com",
security=models.Security(
bearer_auth=os.getenv("CRIBLCONTROLPLANE_BEARER_AUTH", ""),
),
) as ccp_client:
res = ccp_client.system.captures.create(duration=5, filter_="sourcetype===\"pan:traffic\"", level=models.CaptureLevel.BEFORE_PRE_PROCESSING_PIPELINE, max_events=100)
with res as jsonl_stream:
for event in jsonl_stream:
# handle event
print(event, flush=True)
```
[jsonl-format]: https://jsonlines.org/
[x-ndjson]: https://github.com/ndjson/ndjson-spec
[generator]: https://book.pythontips.com/en/latest/generators.html
[context-manager]: https://book.pythontips.com/en/latest/context_managers.html
<!-- End Json Streaming [jsonl] -->
<!-- Start File uploads [file-upload] -->
## File uploads
Certain SDK methods accept file objects as part of a request body or multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
> [!TIP]
>
> For endpoints that handle file uploads, byte arrays can also be used. However, using streams is recommended for large files.
>
```python
from cribl_control_plane import CriblControlPlane, models
import os
with CriblControlPlane(
server_url="https://api.example.com",
security=models.Security(
bearer_auth=os.getenv("CRIBLCONTROLPLANE_BEARER_AUTH", ""),
),
) as ccp_client:
res = ccp_client.packs.upload(filename="example.file", request_body=open("example.file", "rb"))
# Handle response
print(res)
```
<!-- End File uploads [file-upload] -->
## Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:
```python
from cribl_control_plane import CriblControlPlane, models
from cribl_control_plane.utils import BackoffStrategy, RetryConfig
import os
with CriblControlPlane(
server_url="https://api.example.com",
security=models.Security(
bearer_auth=os.getenv("CRIBLCONTROLPLANE_BEARER_AUTH", ""),
),
) as ccp_client:
retry_config = RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False)
res = ccp_client.sources.list(server_url="https://api.example.com/m/my-group", retry_config=retry_config)
print(res)
```
If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:
```python
from cribl_control_plane import CriblControlPlane, models
from cribl_control_plane.utils import BackoffStrategy, RetryConfig
import os
with CriblControlPlane(
server_url="https://api.example.com",
retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
security=models.Security(
bearer_auth=os.getenv("CRIBLCONTROLPLANE_BEARER_AUTH", ""),
),
) as ccp_client:
res = ccp_client.sources.list(server_url="https://api.example.com/m/my-group")
print(res)
```
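A common reading of the four `BackoffStrategy` arguments is (initial interval, maximum interval, growth exponent, maximum elapsed time), with the intervals in milliseconds — that interpretation is an assumption based on typical retry conventions, so verify it against the SDK source. A small sketch of how delays grow under such a policy:

```python
def backoff_delays(initial_ms: float, max_interval_ms: float,
                   exponent: float, attempts: int) -> list[float]:
    # Delay before retry n grows as initial * exponent**n,
    # capped at the maximum interval.
    return [min(initial_ms * exponent ** n, max_interval_ms)
            for n in range(attempts)]

# With the example values above: BackoffStrategy(1, 50, 1.1, 100)
delays = backoff_delays(1, 50, 1.1, 6)
print([round(d, 2) for d in delays])  # → [1.0, 1.1, 1.21, 1.33, 1.46, 1.61]
```

With an exponent close to 1 the delays grow gently; a larger exponent (e.g. 2) doubles the wait on every attempt until the cap is hit.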
<!-- End Retries [retries] -->
## Error Handling
[`CriblControlPlaneError`](https://github.com/criblio/cribl_control_plane_sdk_python/blob/master/./src/cribl_control_plane/errors/criblcontrolplaneerror.py) is the base class for all HTTP error responses. It has the following properties:
| Property | Type | Description |
| ------------------ | ---------------- | --------------------------------------------------------------------------------------- |
| `err.message` | `str` | Error message |
| `err.status_code` | `int` | HTTP response status code, e.g. `404` |
| `err.headers` | `httpx.Headers` | HTTP response headers |
| `err.body` | `str` | HTTP body. Can be empty string if no body is returned. |
| `err.raw_response` | `httpx.Response` | Raw HTTP response |
| `err.data` | | text/markdown | Speakeasy | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://github.com/criblio/cribl_control_plane_sdk_python.git | null | >=3.10 | [] | [] | [] | [
"httpcore>=1.0.9",
"httpx>=0.28.1",
"pydantic>=2.11.2"
] | [] | [] | [] | [
"Repository, https://github.com/criblio/cribl_control_plane_sdk_python.git"
] | poetry/2.2.1 CPython/3.10.19 Linux/6.8.0-1044-azure | 2026-02-21T01:23:52.676217 | cribl_control_plane-0.6.0b36.tar.gz | 1,092,345 | 0f/bc/8ed67291e86d5c7f7098d00c5f12ae84e47e46af8830f1a0792b55b01fd5/cribl_control_plane-0.6.0b36.tar.gz | source | sdist | null | false | 61107c93741092007e911c3ff1fa1752 | f2a1e2a44e2f86c6eb1cf9e7b532b88d63e98f358eda1949b42f70f679809218 | 0fbc8ed67291e86d5c7f7098d00c5f12ae84e47e46af8830f1a0792b55b01fd5 | null | [] | 204 |
2.4 | owega | 5.27.2 | A command-line interface for conversing with AI APIs (OpenAI, anthropic, ...) | # ΦωΦ (pronounced owega)
ΦωΦ is a command-line interface for conversing with AI models (OpenAI GPT, Anthropic Claude, Mistral, xAI Grok, and more).
Pypi:
[](https://pypi.org/project/owega/)
[](https://pypi.org/project/owega/)
[](https://pepy.tech/project/owega) [](https://pepy.tech/project/owega)
[](https://git.pyrokinesis.fr/darkgeem/owega/-/blob/main/LICENSE)
[](https://pypi.org/project/owega/)
[](https://pypi.org/project/owega/)
AUR:
[](https://aur.archlinux.org/packages/python-owega)
[](https://aur.archlinux.org/packages/python-owega)
[](https://git.pyrokinesis.fr/darkgeem/owega/-/blob/main/LICENSE)
[](https://aur.archlinux.org/packages/python-owega)
[](https://aur.archlinux.org/packages/python-owega)
Gitlab:
[](https://git.pyrokinesis.fr/darkgeem/owega)
[](https://git.pyrokinesis.fr/darkgeem/owega)
[](https://git.pyrokinesis.fr/darkgeem/owega)
[](https://git.pyrokinesis.fr/darkgeem/owega/-/blob/main/LICENSE)
[](https://discord.gg/KdRmyRrA48)
## ΦωΦ's homepage
You can check on the source code [on its gitlab page](https://git.pyrokinesis.fr/darkgeem/owega)!
Also, here's the [discord support server](https://discord.gg/KdRmyRrA48), you
can even get pinged on updates, if you want!
## Features
ΦωΦ has quite a lot of features!
These include:
- Saving/loading conversation to disk as json files.
- Autocompletion for commands, file search, etc...
- History management.
- Temp files to save every message, so that you can get back the conversation
if you ever have to force-quit ΦωΦ.
- Config file to keep settings like api key, preferred model, command execution
status...
- Command execution: if enabled, allows ΦωΦ to execute commands on your system
and interpret the results.
- File creation: if commands are enabled, also allows ΦωΦ to create files on
your system and fill them with desired contents.
- GET requests: allows ΦωΦ to get information from online pages, through
http(s) GET requests.
- Long-term memory: allows for ΦωΦ to store memories, which will not be deleted
as the older messages are, to keep requests under the available tokens per
request.
- Context management: allows to set the AI context prompt (example: "you are a
cat. cats don't talk. you can only communicate by meowing, purring, and
actions between asterisks" will transform ΦωΦ into a cat!!)
- Meow.
- Meow meow.
- MEOW MEOW MEOW MEOW!!!!
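The command-execution feature above gates every command behind an explicit user confirmation (per the changelog: "you will be prompted on each time it tries to run a command"). A minimal, hypothetical sketch of that confirm-then-execute pattern — not owega's actual implementation:

```python
import subprocess

def run_with_confirmation(cmd: str, confirm):
    # confirm is a callable deciding whether the command may run;
    # in an interactive tool it would prompt the user each time.
    if not confirm(cmd):
        return None  # denied: nothing is executed
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr

# Auto-approve/deny for demonstration; a real tool would ask on the terminal.
print(run_with_confirmation("echo hello", lambda c: True))  # (0, 'hello\n')
print(run_with_confirmation("rm -rf /", lambda c: False))   # None
```

Keeping the confirmation as a callable makes the gate testable and lets an interactive frontend swap in a real prompt.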
## Installation
Just do ``pipx install owega`` to get the latest version (with pipx).
An Arch Linux package, `python-owega`, is also available on the AUR.
## Optional requirements
- [rich](https://pypi.org/project/rich/) - for rich markdown formatting
- [tiktoken](https://pypi.org/project/tiktoken) - for better token estimation
- [markdownify](https://pypi.org/project/markdownify/) - for better html to markdown (with web functions)
## Command-line arguments
Do you really need me to do ``owega --help`` for you?
```
usage: owega [-h] [-d] [-c] [-l] [-v] [-f CONFIG_FILE] [-i HISTORY] [-a ASK]
[-o OUTPUT] [-t] [-s TTSFILE] [-T] [-e]
Owega main application
options:
-h, --help show this help message and exit
-d, --debug Enable debug output
-c, --changelog Display changelog and exit
-l, --license Display license and exit
-v, --version Display version and exit
-f CONFIG_FILE, --config-file CONFIG_FILE
Specify path to config file
-i HISTORY, --history HISTORY
Specify the history file to import
-a ASK, --ask ASK Asks a question directly from the command line
-o OUTPUT, --output OUTPUT
Saves the history to the specified file
-t, --tts Enables TTS generation when asking
-s TTSFILE, --ttsfile TTSFILE
Outputs a generated TTS file single-ask mode
-T, --training outputs training data from -i file
-e, --estimate shows estimate token usage / cost from a request from
-i file
```
## Markdown formatting and syntax highlighting
To allow ΦωΦ to print its output nicely, you can just install the rich python
module: ``pip install rich``
## Showcase
See ΦωΦ in action!
### Demos made with ΦωΦ 5.7.5
[](https://asciinema.org/a/659607)
[Youtube demo](https://youtu.be/_LGSc6mj-EM)
## CHANGELOG:
```
OWEGA v5.27.2 CHANGELOG:
2.0.0: WTFPL license
2.0.1: added genconf command
2.1.0: added file_input command
2.1.1: added file_input in help command
2.2.0: added context command to change GPT's definition
2.2.1: added license and version info in command line (-l and -v)
2.2.2: stripped user input (remove trailing spaces/tabs/newlines)
2.2.3: genconf now saves the current conf instead of a blank template
2.2.4: automatic temp file save
3.0.0: changed conversation save from pickle to json
3.0.1: added changelog
3.0.2: added conversion script
3.0.3: quitting with EOF will now discard the temp file (^C will still keep it)
3.1.0: BMU (Better Module Update)!
modified MSGS:
- added last_question()
- changed last_answer()
modified ask() to allow for blank prompt,
which will reuse the last question
3.1.1: now handling the service unavailable error
3.2.0: FUNCTION CALLING UPDATE:
added function calling, now openchat is able to run commands
on your computer, as long as you allow it to
(you will be prompted on each time it tries to run a command)
!!! only available on -0613 models (gpt-3.5-turbo-0613, gpt-4-0613) !!!
will be available on all gpt models from 2023-06-27, with the latest
openchat 3.2.X patch
3.2.1: fixed a space missing in openchat's function calling
3.2.2: fixed openchat sometimes not detecting the command has been ran
3.2.3: added create_file as a function OpenAI can call
3.2.4: fixed variables and ~ not expanding when executing a command
3.2.4-fix1: fixed a missing parenthesis
3.2.5: now handling non-zero exit status when running a command
3.2.6: reversed the changelog order, fixed function calling chains
3.2.7: fixed json sometimes not correctly formatted when writing multiple lines
files
3.2.8: fixed command execution stderr handling
3.2.9: changed execute's subprocess call to shell=True, now handling pipes...
3.2.10: added a command line option for specifying the config file
3.2.11: now, the default gpt models implement function calling, no need for
0613 anymore
3.3.0: implemented prompt_toolkit, for better prompt handling, newlines with
control+n
3.3.1: added tokens command, to change the amount of requested tokens
3.4.0: CLI update:
- added command-line options to change input/output files
- added command-line option to ask a question from command line
3.5.0: WEB update: now added a flask app, switched repos to its own
3.5.1: added "commands" command, to enable/disable command execution
3.5.2: added infos on bottom bar
3.6.0: PREFIX update:
- added prefixes for command (changeable in the config)
- reformatted most of the main loop code to split it in handlers
3.7.0: DIRECT CONTEXT COMMANDS update:
- now, you can use commands in one line, instead of waiting for prompt
example: /save hello.json
(instead of typing /save, then enter, then typing hello.json
works on all commands, the only specific case being file_input.)
- file_input as a direct command takes only one argument: the file
to load (e.g. /load ./src/main.c). The pre-prompt will be asked
directly instead of having to do it in three steps
(/load, then filename, then pre-prompt)
- also, fixed /tokens splitting the prompt instead of the user input
3.8.0: WEB download update
- added a get_page function for openchat to get pages without the need
for curl
3.8.1: added a debug option for devs
3.9.0: Windows update
- Do I really need to explain that update?
3.9.1: fixed an issue when the openai api key does not exist anywhere
3.9.2: changed the temp file creation method for non-unix systems
3.9.3: fixed api key not saving with /genconf
3.9.4: changed default values
4.0.0: LTS: Long-Term-Souvenirs
The AI now has long-term memory!!!
Huge update: full refactoring, the code is now readable!
Also, the name is now Owega (it's written with unicode characters though)
You can see the new code here: https://git.pyrokinesis.fr/darkgeem/owega
Also, the project is now available on PyPI so, just go pip install owega!
4.0.1: oops, forgot to change the setup.py and now I messed up my 4.0.0! >:C
4.0.2: Fixed a typo where owega wouldn't send the memory
4.0.3: Added README to pypi page
4.0.4: Fixed context not working correctly
4.1.0: Changed the getpage function to strip the text
4.1.1: Removed a warning due to beautifulsoup4
4.2.0: VERY IMPORTANT UPDATE: NOW COMPATIBLE WITH OPENAI 1.1.1
4.3.0: Added token estimation
4.3.1: Added time taken per request in debug output
4.3.2: Fixed 4.3.1 :p
4.3.3: Changed time taken to only show up to ms
4.3.4: Re-added server unavailable error handling
4.3.5: Added exception handling for token estimation
4.3.6: Re-added handling of invalid request, mostly for too large requests
4.4.0: Changed from json to json5 (json-five)
4.5.0: Added support for organization specification
4.5.1: fixed owega bash script for systems that still have PYTHON 2 AS DEFAULT
WTF GUYS GET OVER IT, IT'S BEEN DEPRECATED SINCE 2020
4.5.2: Now removes temp files even if ctrl+c if they are empty
4.5.3: Fixed files being removed everytime
4.6.0: Fine tweaking update
- added command for changing the temperature
- added top_p command and parameter
- added frequency penalty command and parameter
- added presence penalty command and parameter
- fixed /quit and /exit not working
- fixed tab completion
4.6.1: Added support for overwriting config file
4.6.2: Oops, forgot to check help, help should be fixed now
4.7.0: Added TTS (using pygame)
4.7.1: Now prints message before reading TTS
Also, removes the pygame init message
4.7.2: Fixed a bug where the output tts file could not be set to mp3
(it was previously checking for mp4 extension, lol)
4.7.3: Added ctrl+C handling when playing TTS to stop speaking.
4.8.0: Edit update
- you can now edit the history from the TUI
- on a side note, I also improved completion for files
and numeric values (temperature, top_p, penalties...)
4.8.1: Oops, forgot to add requirements to setup.py
Automated the process, should be good now
4.8.2: - added infos to pypi page
- changed to automatic script generation (setup.py)
4.9.0: - added system command
4.10.0: - added system souvenirs (add_sysmem/del_sysmem)
4.10.1: - added support server in readme and pypi
4.10.2: - added cost estimation in token estimation
4.10.3: - changed from OpenAI to Owega in term display
4.11.0: Huge refactor, added TTS as config parameter
4.11.1: Oops, last version broke owega, fixed here
(Problem was I forgot to export submodules in setup.py)
4.11.2: Fixed -a / single_ask
4.11.3: Fixed /genconf
4.11.4: Fixed edit with blank message (remove message)
4.11.5: Fixed requirements in setup.py not working when getting
the source from PyPI
4.12.0: Added -T/--training option to generate training line
4.12.1: Added -e/--estimate option to estimate consumption
4.12.2: Fixed TUI-mode TTS
4.12.3: Fixed requirements to be more lenient
4.12.4: Fixed requirements to use json5 instead of json-five
4.12.5: Fixed emojis crashing the history because utf16
4.12.6: Fixed emojis crashing the edit function because utf16
4.12.7: Fixed a minor bug where /file_input would insert a "'"
after the file contents.
Also, added filetype information on codeblocks with
/file_input, depending on the file extension
4.12.8: Added a vim modeline to history files
to specify it's json5, not json.
4.12.9: Added badges to the README :3
4.12.10: Added docstrings
Switched from tabs to spaces (PEP8)
Changed default available models
Changed estimation token cost values
5.0.0: ADDED VISION
5.0.1: Added support for local images for vision
Also, better crash handling...
5.0.2: Changed the /image given handling, now you can give it
both the image, then a space, then the pre-image prompt.
5.0.3: Added a play_tts function for using owega as a module.
5.0.4: Added better given handling for handlers.
5.1.0: Added silent flag for handlers.
5.1.1: Fixed handle_image
5.2.0: Changed file_input behavior to only add the prompt and
not immediately request an answer.
5.2.1: Fixed the create_file function, disabled get_page.
5.2.2: Suppressed pygame-related warnings
(i.e. avx2 not enabled).
5.3.0: Added /dir_input
5.3.1: Re-enabled get_page with better parsing
5.4.0: Added default_prompt variable in configuration
5.5.0: Added basic support for Mistral's API (beta feature)
5.5.1: Removed useless available_mistral and mistral_model
variables.
5.5.2: Added debug info on mistral's part of ask()
Added matching for mixtral
5.5.3: Removed debug lines that shouldn't have been left there.
5.5.4: Fixed a debug_print never showing.
5.5.5: Now using openai module to ask mistral API.
(the code is waaaay cleaner)
5.6.0: Added basic support for Chub's API
(chub mars, mercury, mixtral)
Also, Mi(s/x)tral support is no more in beta :D
5.6.1: Added extensive logging for errors.
5.6.2: Added terminal title status :3
5.6.3: Fixes config's api_key not being used.
Better docstrings on handlers.
5.6.4: Fix for ask.ask() crashing if OPENAI_API_KEY isn't set.
5.7.0: Changed the license to the DarkGeem Public License v1.0.
5.7.1: Fixed a non-ascii character in the DGPL.
5.7.2: Added vision support for GPT-4o.
5.7.3: Better cost estimation, including input/output costs.
(added support for all GPT model as of 2024-05-14)
(added support for all mistral API models as of today)
(all other models return a cost of 0)
5.7.4: Added pretty print if the rich module is installed.
5.7.5: Fixed the bottom toolbar being cut short when terminal
doesn't have enough columns.
(also, added gpt-4o and mixtral-8x22b to default list)
5.8.0: Added time-aware mode...
5.8.1: Oops, I broke the build system again, my bad! :P
5.8.2: Oops, I didn't completely fix it last time~ Awoo! >w<\
5.8.3: Fixed some error handling.
5.8.4: Fixed an issue with time-aware mode which would create
new lines with just the date when sending an empty
message, instead of just prompting with same history.
5.8.5: Changed setup.py to package the VERSION and CHANGELOG
files.
5.8.6: Updated the README with 5.7.5 demos.
5.9.0: Fixed a huge issue where owega couldn't be imported if
the terminal wasn't interactive.
Added owega.__version__
5.9.1: Changed type hinting and fixed some code style!
5.9.2: Added a tts_to_opus function in owega.utils.
5.9.3: Added __all__ variable to __init__.py files.
5.9.4: Fixed a circular import, which technically wasn't really
an issue, due to an old AWFUL AF fix...
Also, fixed most type hinting.
5.10.0: Moved single_ask to owega.ask, moved markdown_print to
owega.utils.
5.11.0: Added support for Anthropic's Claude.
5.12.0: Added /fancy, for toggling fancy printing.
5.13.0: Dependency removal update.
- Changed markdownify dep from required to optional.
- Changed pygame dep from required to optional.
5.13.1: - Changed tiktoken dep from required to optional.
5.13.2: - Fixed compatibility with python <3.11 by removing
typing.Self references.
5.13.3: - Fixed errors with some versions of python-editor which
don't have editor.edit but editor.editor()???
Somehow... I don't know maaaan, I'm tireeeed =w='
5.14.0: Image generation update.
Allows for the AI to generate images using DALLE
5.15.0: OpenAI o1 update.
Adds support for OpenAI's o1 models, which are limited,
as they lack support for temperature, top_p, penalties,
vision, and function calling.
5.15.1: - Added o1-preview and o1-mini to default models list.
5.15.2: - Fixed 5.15.1, as I mistyped 4o-(preview/mini)
instead of o1-(preview/mini)
5.16.0: Rewrite ask.py, better handling
5.16.1: Fixed vision support for Anthropic's claude
5.16.2: Fixed vision support for Anthropic's claude... Again.
5.16.3: Added a .md prefix to the temp file with /edit
(for editor syntax highlighting)
5.16.4: Fix for claude: enforce non-streaming mode
(fixes the 'overloaded_error' errors)
5.17.0: Added xAI (grok) support!
Supports everything, even function calling and vision!
5.17.1: Fixed logger error preventing owega from opening on
Windows... I am so sorry I didn't catch this earlier!
>w<
5.17.2: Cleaned up codebase a little, thanks pycharm...
5.18.0: Fixed function calling for mistral.
5.18.1: Added fancy err message for flagged messages (OpenAI).
5.19.0: Added /reprint, which supersedes /history as it supports
fancy markdown printing (continue using /history to get
the raw text without disabling fancy mode)
(Also, updated build system to use pyproject.toml)
5.19.1: Fixed issues with custom models.
5.20.0: Fixed model detection for MistralAI models.
Fixed MistralAI not able to respond when last message is
from assistant.
5.20.1: Changed the append_blank_user to "" instead of ".".
5.20.2: Moved the preferred config location to
$HOME/.config/owega/config.json5
5.21.0: Changed the config loading logic to load all
.json/.json5 files in ~/.config/owega/ or given dirs
to allow saving API keys in a dedicated config file.
Moved the defaults to owega/constants.py to replace
hardcoded values.
5.21.1: Added a /send alias to dir_input, fixed relative files
being refused because 'parent dir does not exist',
allows /dir_input to take files, with an automatic
pre-prompt ('dir/filename.ext:' before file contents)
5.21.2: Moved clr and clrtxt to owega/colors.py
5.21.3: Refactored bool/float input handlers to use centralized
helper setter functions.
(owega/OweHandlers/helpers.py)
5.21.4: Removed redundant info_print and debug_print in utils.py
This does not affect anything, as utils.py
still imports them from owega.config
Changed /genconf and genconfig() behavior to generate
~/.config/owega/, and split api key/non api key values
to the api.json5 and config.json5 files respectively.
5.22.0: = The model update =
- Added openrouter integration!!!
- Added new model naming schemes:
[provider]:[model]
custom:[model]@[base_url]
Provider list:
- anthropic (anthropic.com - claude-3.7-sonnet...)
- chub (chub.ai)
- mistral (mistral.ai - mistral/mixtral/codestral...)
- openai (openai.com - GPT-4o/GPT-4.1/o1...)
- openrouter (openrouter.ai - recommended)
- xai (x.ai - grok)
- custom
- Cleaned up some code in ask.py
- Added some error handling so errors won't throw you
out of owega anymore.
- Handles ctrl+c to cancel a pending request.
(so it doesn't throw you out of owega anymore either.)
5.22.1: Changed the default model list and added openrouter_api
as a default blank parameter.
5.22.2: Fixed the config file loading order, config files will
now load in alphabetical order (python string sort)
5.22.3: Added function calling for openrouter!
5.22.4: Fixed a bug where some unicode characters would not load
properly, and prevent the user from using owega if
their Conversation contained invalid ones.
Also fixed a bug where get_page would try and run
debug_print with an old syntax.
Note to self: Please, replace all debug_print uses with
getLogger loggers.
5.23.0: Added /web command to enable/disable
the web access feature.
5.23.1: Added /lts command to enable/disable long term souvenirs
and permanently disabled the image generation 'tool'
as a function to be called by the AI.
This should not have been hardcoded in the first place.
5.23.2: Added optional argument to /reprint to only reprint N
last messages.
5.23.3: Changed TTS requirements: pygame not needed anymore,
now requiring soundfile and sounddevice
5.24.0: PROMPT INJECTION UPDATE
---
So basically, there are 8 new variables:
6 message injectors (string):
- pre_user / pre_assistant / pre_system
^ these will be prefixing each message from their role
- post_user / post_assistant / post_system
^ these will be suffixing each message from their role
2 history injectors (list of message dicts):
- pre_history / post_history
Basically, each message dict must contain a "role" key
and a "content" key.
"role" should be one of: user/assistant/system
and "content" is the actual message content
You can just take a message history dict from a /save,
these are loaded the same.
Note that history injectors won't be affected by message
injectors!
5.24.1: Fixed broken symlinks crashing /send, added message ID
to files sent with /send, symlinks are now sent as
their location instead of their content
5.24.2: Added \x1b\r as newline, just like claude (smort!)
5.24.3: Finally fixed the pricings.
5.24.4: PyPI License (meta update).
5.24.5: Added app identifier to OpenRouter
(will display "owega" instead of "unknown" in panel)
5.25.0: Added theming!
5.25.1: Fixed inconsistent debug value during config loading.
5.25.2: Changed the way handlers are discovered, no need to add
them to handlers.py anymore.
5.25.3: Added support for curl_cffi, now the get_page tool will
use it if the module is installed, instead of requests.
5.25.4: Changed the get_page tool to use a single session for
owega's entire lifespan.
5.26.0: Fixed an issue where assistant messages in chains
(tool/function calling) wouldn't be printed except for
the last one.
Also, added a print buffer to the Conversation class,
for messages/entries that haven't been rendered yet.
5.26.1: Changed estimation display to split input/output cost.
5.27.0: Added custom handlers loading from
~/.config/owega/handlers
5.27.1: Removed requirement for lxml.
5.27.2: Kinda fixed(-ish) the issue where a provider REQUIRES a
non-empty user message before answering.
```
| text/markdown | null | darkgeem <darkgeem@pyrokinesis.fr> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"Intended Audience :: System Administrators",
"Natural Language :: English",
"Operating System :: OS Independent",
"Topic :: File Formats :: JSON",
"Topic :: Multimedia :: Sound/Audio :: Speech",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"openai>=1.45.0",
"prompt_toolkit>=3.0",
"requests>=2.0",
"beautifulsoup4>=4.0",
"json5>=0.9.0",
"python-editor>=1.0.4",
"setuptools>=60.0"
] | [] | [] | [] | [
"Source, https://git.pyrokinesis.fr/darkgeem/owega",
"Support, https://discord.gg/KdRmyRrA48"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T01:21:30.487234 | owega-5.27.2.tar.gz | 82,091 | a2/aa/06580bc6dc793b3bef4550b9a51a18c60865da47b474d251a261364a820d/owega-5.27.2.tar.gz | source | sdist | null | false | f455355e2faec96f43a07ba4e54ef17d | 2ad2ec21bb943dfd9eb4fd3592767f592e0ae78ce3418619fce746ebbe3c4772 | a2aa06580bc6dc793b3bef4550b9a51a18c60865da47b474d251a261364a820d | LicenseRef-DGPL-1.0 | [
"LICENSE"
] | 214 |
2.3 | nc-check | 0.2.4 | Prepare xarray Datasets for CF-1.12 compliant NetCDF output | # nc-check

Prepare `xarray.Dataset` objects for CF-1.12-ready NetCDF output.
## What It Does
- Validates metadata and conventions with `ds.check.compliance()`.
- Applies safe non-destructive metadata fixes with `ds.check.make_cf_compliant()`.
- Runs ocean-grid and time-slice coverage checks with `ds.check.ocean_cover()` and `ds.check.time_cover()`.
- Provides both Python and CLI workflows (`nc-check`, `nc-comply`).
## Install
```bash
uv add nc-check
# or
pip install nc-check
```
Optional full CF checker support:
```bash
uv add "nc-check[cf]"
# or
pip install "nc-check[cf]"
```
## Quickstart
```python
import xarray as xr
import nc_check # Registers ds.check accessor
ds = xr.Dataset(
data_vars={"temp": (("time", "lat", "lon"), [[[280.0]]])},
coords={"time": [0], "lat": [10.0], "lon": [20.0]},
)
compliance = ds.check.compliance(report_format="python")
fixed = ds.check.make_cf_compliant()
ocean = ds.check.ocean_cover(report_format="python")
time = ds.check.time_cover(report_format="python")
full = ds.check.all(report_format="python")
```
CLI quickstart:
```bash
nc-check input.nc
nc-check all input.nc --save-report
nc-comply input.nc output.nc
```
## Docs
- [Docs home](docs/index.md)
- [Getting started](docs/getting-started.md)
- [CLI guide](docs/cli.md)
- [Python API guide](docs/python-api.md)
- [Checks and reports](docs/checks-and-reports.md)
- [Troubleshooting](docs/troubleshooting.md)
- [Development](docs/development.md)
Local docs site:
```bash
uv run --with mkdocs-material mkdocs serve
```
| text/markdown | lukegre | lukegre <lukegre@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"netcdf4",
"numpy>=1.24",
"rich>=14.3.2",
"xarray>=2023.1",
"cfchecker; extra == \"cf\"",
"cfunits>=3.3.7; extra == \"cf\""
] | [] | [] | [] | [] | uv/0.9.9 {"installer":{"name":"uv","version":"0.9.9"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T01:20:55.006062 | nc_check-0.2.4.tar.gz | 39,360 | b2/6d/4d125717aeb328724adf642be8e68e39b981ab7003d019f576208b977500/nc_check-0.2.4.tar.gz | source | sdist | null | false | b35d9aae61f8e566d9ea55cced6ed81d | c7da0c5a5e11b4399bd8f41cd5ac8d95ffe015f07c87c11958f6f59dc5eed6ba | b26d4d125717aeb328724adf642be8e68e39b981ab7003d019f576208b977500 | null | [] | 209 |
2.4 | rustyssim | 0.5.3 | This package provides a Python interface to parse SSIM files into a Polars DataFrame. | # rusty-ssim
A high-performance Rust-built IATA SSIM (Standard Schedules Information Manual) parser that can be used via CLI, Python, or Rust. This tool efficiently parses SSIM files into Polars DataFrames or exports directly to CSV/Parquet formats with streaming support for large files.
[](https://github.com/wcagreen/rusty-ssim/actions/workflows/publish-to-pypi.yml)
[](https://github.com/wcagreen/rusty-ssim/actions/workflows/publish-to-crates-io.yml)
[](https://github.com/wcagreen/rusty-ssim/actions/workflows/publish-cli-to-packages.yml)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
## Features
- **🚀 Fast Performance**: Built in Rust for optimal parsing speed with parallel processing capabilities
- **⚡ Parallel Processing**: Leverages multi-core CPUs to process large SSIM files efficiently
- **💾 Memory Efficient**: Optimized for large SSIM files
- **📊 Multiple Output Formats**: CSV, Parquet, and in-memory DataFrames
- **🗜️ Flexible Compression**: Support for various Parquet compression options (zstd, lz4, snappy, etc.)
- **🔧 Tooling Options**: Both CLI and Python APIs available
- **📈 Production Ready**: Handles files of any size with configurable batch processing
## Quick Start
### Python (Most Common Use Case)
```python
import rustyssim as rs
# Parse SSIM file to DataFrame
df = rs.parse_ssim_to_dataframe("path/to/schedule.ssim")
print(f"Parsed {len(df)} flight records")
# Split into separate DataFrames by record type
carriers, flights, segments = rs.split_ssim_to_dataframes("schedule.ssim")
# Direct export to optimized formats
rs.parse_ssim_to_csv("schedule.ssim", "output.csv")
rs.parse_ssim_to_parquets("schedule.ssim", "./parquet_files", compression="zstd")
```
### CLI (For Data Processing Pipelines)
```bash
# Convert to CSV
ssim csv -s schedule.ssim -o output.csv
# Convert to compressed Parquet files (one per airline)
ssim parquet -s schedule.ssim -o ./output -c zstd -b 50000
```
## Installation
### Python
```bash
pip install rustyssim
```
### (Build from Source)
```bash
# Clone the repository
git clone https://github.com/wcagreen/rusty-ssim.git
cd rusty-ssim
# Install Python package
pip install maturin
maturin develop -m py-rusty-ssim/Cargo.toml
# Build CLI tool
cargo build -p cli-rusty-ssim --release
```
**Requirements:**
- Python 3.9+
- Rust toolchain ([rustup.rs](https://rustup.rs))
### (Build CLI with Docker)
Download the Dockerfile from the repo, or build one yourself.
```bash
# Build docker image.
docker build --no-cache -t wcagreen/rusty-ssim:latest .
# To run you can use the following command.
docker run --rm -v "${PWD}:/data" wcagreen/rusty-ssim:latest @args
# You can also just make this a permanent function for powershell, etc...
```
## Documentation
### 📖 [Python API Documentation](https://github.com/wcagreen/rusty-ssim/blob/main/docs/python.md)
Complete reference for all Python functions with examples, parameters, and return values.
### 💻 [CLI Documentation](https://github.com/wcagreen/rusty-ssim/blob/main/docs/cli-usage.md)
Comprehensive guide for command-line usage, performance tuning, and integration examples.
## Data Structure
The parser handles three types of SSIM records according to IATA standards:
### Carrier Records (Type 2)
Contains airline and schedule metadata.
### Flight Records (Type 3)
Contains core flight leg information.
### Segment Records (Type 4)
Contains flight segment information.
## Use Cases
### Data Analytics
```python
import rustyssim as rs

# Analyze route networks
df = rs.parse_ssim_to_dataframe("schedule.ssim")
routes = df.group_by(['departure_station', 'arrival_station']).count()
# Export for Tableau, Power BI, etc.
rs.parse_ssim_to_csv("schedule.ssim", "analytics_export.csv")
```
```python
import polars as pl
import rustyssim as rs

# Split by carrier for airline-specific analysis
carriers, flights, segments = rs.split_ssim_to_dataframes("schedule.ssim")
# Analyze specific airline operations
aa_flights = flights.filter(flights['airline_designator'] == 'AA')
capacity_analysis = aa_flights.group_by('aircraft_type').agg([
    pl.count().alias('flights'),
    pl.col('departure_station').n_unique().alias('origins')
])
```
### Data Engineering Pipelines
```bash
# Batch processing in ETL pipelines
ssim parquet -s "./huge_multi_carrier_ssim.dat" -o /data/processed/ -c zstd -b 100000
```
## Development
### Running Tests
```bash
# Rust tests
cargo test
# Python tests
pip install pytest
pytest tests/
```
## Contributing
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
### Quick Contribution Steps
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes with tests
4. Run the test suite (`cargo test && pytest`)
5. Submit a pull request
## Community & Support
- 🐛 **Issues**: [GitHub Issues](https://github.com/wcagreen/rusty-ssim/issues)
- 💬 **Discussions**: [GitHub Discussions](https://github.com/wcagreen/rusty-ssim/discussions)
- 📧 **Contact**: Create an issue for questions or feature requests
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
<details>
<summary>📋 Project Structure</summary>
```
rusty-ssim/
├── cli-rusty-ssim/ # CLI application
├── py-rusty-ssim/ # Python bindings
├── rusty-ssim-core/ # Core Rust library
├── docs/ # Documentation
```
</details>
| text/markdown; charset=UTF-8; variant=GFM | null | William Green <wcagreen24@gmail.com> | null | null | null | parsing, ssim, data | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Rust",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"polars>=1.38.1"
] | [] | [] | [] | [
"Documentation, https://github.com/wcagreen/rusty-ssim/blob/main/docs/python.md",
"Repository, https://github.com/wcagreen/rusty-ssim"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:19:43.124818 | rustyssim-0.5.3.tar.gz | 51,743 | f3/47/622c3fa4c55a75cbf88eb483bfe8f5c894b5a48415bbc9c0ba7d75dd3ad8/rustyssim-0.5.3.tar.gz | source | sdist | null | false | f088abaf59d7749de588e1f517312057 | 2387e9de007f7d064c2658efac9b7a4b8b4e2961fe9ad8dba8eef3ac37ebb461 | f347622c3fa4c55a75cbf88eb483bfe8f5c894b5a48415bbc9c0ba7d75dd3ad8 | MIT | [] | 874 |
2.4 | azure-cognitiveservices-speech | 1.48.2 | Microsoft Cognitive Services Speech SDK for Python |
For an introduction to this package, have a look at `the quickstart
article <https://learn.microsoft.com/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python>`_.
For information about the Speech Service, please refer to `its
website <https://learn.microsoft.com/azure/cognitive-services/speech-service/>`_.
Documentation
-------------
API documentation for this package can be found `here <https://aka.ms/csspeech/pythonref>`_.
License information
-------------------
- `Microsoft Software License Terms for the Speech SDK <https://aka.ms/csspeech/license/>`_
- `Third party notices <https://aka.ms/csspeech/toctpn/>`_
| null | Microsoft | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"azure-core>=1.33.0"
] | [] | [] | [] | [] | RestSharp/106.13.0.0 | 2026-02-21T01:19:34.305428 | azure_cognitiveservices_speech-1.48.2-py3-none-win_arm64.whl | 1,990,065 | d9/11/c5e1daa73ce98c535dcf727639cc9400b1f7bff4727e8d97088b5b51849c/azure_cognitiveservices_speech-1.48.2-py3-none-win_arm64.whl | py3 | bdist_wheel | null | false | 6f49f5cc8445b25deff04c7f31d62ec5 | a8538e3732a2b157f8379063a6f9ad21c71cb3bcc24e8f2b1ee0c90023ca0d4c | d911c5e1daa73ce98c535dcf727639cc9400b1f7bff4727e8d97088b5b51849c | null | [] | 5,397 |
2.3 | casedev | 0.12.0 | The official Python library for the casedev API | # Casedev Python API library
<!-- prettier-ignore -->
[](https://pypi.org/project/casedev/)
The Casedev Python library provides convenient access to the Casedev REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.case.dev](https://docs.case.dev). The full API of this library can be found in [api.md](https://github.com/CaseMark/casedev-python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install casedev
```
## Usage
The full API of this library can be found in [api.md](https://github.com/CaseMark/casedev-python/tree/main/api.md).
```python
import os
from casedev import Casedev
client = Casedev(
    api_key=os.environ.get("CASEDEV_API_KEY"),  # This is the default and can be omitted
    # defaults to "production".
    environment="local",
)

response = client.llm.v1.chat.create_completion(
    messages=[
        {
            "role": "user",
            "content": "Hello!",
        }
    ],
)
print(response.id)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `CASEDEV_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncCasedev` instead of `Casedev` and use `await` with each API call:
```python
import os
import asyncio
from casedev import AsyncCasedev
client = AsyncCasedev(
    api_key=os.environ.get("CASEDEV_API_KEY"),  # This is the default and can be omitted
    # defaults to "production".
    environment="local",
)
async def main() -> None:
    response = await client.llm.v1.chat.create_completion(
        messages=[
            {
                "role": "user",
                "content": "Hello!",
            }
        ],
    )
    print(response.id)

asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install casedev[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from casedev import DefaultAioHttpClient
from casedev import AsyncCasedev
async def main() -> None:
    async with AsyncCasedev(
        api_key=os.environ.get("CASEDEV_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        response = await client.llm.v1.chat.create_completion(
            messages=[
                {
                    "role": "user",
                    "content": "Hello!",
                }
            ],
        )
        print(response.id)

asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from casedev import Casedev
client = Casedev()
agent = client.agent.v1.agents.create(
    instructions="instructions",
    name="name",
    sandbox={},
)
print(agent.sandbox)
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `casedev.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `casedev.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `casedev.APIError`.
```python
import casedev
from casedev import Casedev
client = Casedev()
try:
    client.vault.create(
        name="My Vault",
    )
except casedev.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except casedev.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except casedev.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
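The exact retry schedule is internal to the library, but the general shape of exponential backoff can be sketched as follows (illustrative values, not casedev's actual timings):

```python
# Illustrative only: each retry waits roughly twice as long as the
# previous one, capped so a long outage doesn't produce unbounded sleeps.
def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    return min(base * (2 ** attempt), cap)

print([backoff_delay(n) for n in range(5)])  # [0.5, 1.0, 2.0, 4.0, 8.0]
```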
You can use the `max_retries` option to configure or disable retry settings:
```python
from casedev import Casedev
# Configure the default for all requests:
client = Casedev(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).vault.create(
    name="My Vault",
)
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from casedev import Casedev
# Configure the default for all requests:
client = Casedev(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Casedev(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).vault.create(
    name="My Vault",
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/CaseMark/casedev-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `CASEDEV_LOG` to `info`.
```shell
$ export CASEDEV_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from casedev import Casedev
client = Casedev()
response = client.vault.with_raw_response.create(
    name="My Vault",
)
print(response.headers.get('X-My-Header'))
vault = response.parse() # get the object that `vault.create()` would have returned
print(vault.id)
```
These methods return an [`APIResponse`](https://github.com/CaseMark/casedev-python/tree/main/src/casedev/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/CaseMark/casedev-python/tree/main/src/casedev/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.vault.with_streaming_response.create(
    name="My Vault",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from casedev import Casedev, DefaultHttpxClient
client = Casedev(
    # Or use the `CASEDEV_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from casedev import Casedev
with Casedev() as client:
    # make requests here
    ...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/CaseMark/casedev-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import casedev
print(casedev.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/CaseMark/casedev-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Casedev <support@casemark.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/CaseMark/casedev-python",
"Repository, https://github.com/CaseMark/casedev-python"
] | uv/0.9.13 {"installer":{"name":"uv","version":"0.9.13"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T01:19:24.668458 | casedev-0.12.0-py3-none-any.whl | 309,006 | b9/a0/ec85cbe7da3b5dd9d29e32c6c2c30f3f07b6d0f0de2977e7390681b19b31/casedev-0.12.0-py3-none-any.whl | py3 | bdist_wheel | null | false | df1a42d0fc7a98d810d38688c8b984e1 | 3b2a7130906cf2e95df9bf26c7ceb04cc3fd5f8194c2d6357fdb96dee7f48fab | b9a0ec85cbe7da3b5dd9d29e32c6c2c30f3f07b6d0f0de2977e7390681b19b31 | null | [] | 225 |
2.4 | psychro-table | 0.1.0 | A temperature-humidity calculator fully consistent with the 《温湿查算表》 lookup tables (unfrozen wet bulb, dry bulb -10.0 to 49.9°C, pressure fixed at 1000 hPa) | # psychro-table
A temperature-humidity calculation tool fully consistent with the 《温湿查算表》 (Temperature-Humidity Lookup Tables).
Given the dry-bulb temperature, wet-bulb temperature, and atmospheric pressure (the book's default is 1000 hPa and the same fixed value is used in calculations, so only the dry-bulb and wet-bulb temperatures need to be entered), it precisely computes **vapor pressure**, **relative humidity**, and **dew-point temperature**.
This tool strictly follows the compilation principles of the 《温湿查算表》 and applies when the **wet bulb is not frozen**, with a dry-bulb temperature range of **-10.0°C to 49.9°C**.
## Features
- ✅ Uses the Goff-Gratch saturation vapor pressure formula (the WMO standard)
- ✅ Psychrometer coefficient identical to the 《温湿查算表》 (A = 0.667e-3)
- ✅ Validated against 4000+ data points from the 《温湿查算表》 with fully identical results
- ✅ Results rounded per the tables' rules (vapor pressure and dew point to 1 decimal place, relative humidity to the nearest integer)
- ✅ Clearly defined scope: wet bulb not frozen, dry-bulb temperature -10.0°C to 49.9°C
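For readers unfamiliar with the method, the psychrometric relation the tables are built on can be sketched in plain Python. This illustrates the principle only and is not the package's API; the Goff-Gratch constants below are the standard published values:

```python
import math

# Goff-Gratch (1946) saturation vapor pressure over liquid water, in hPa.
def goff_gratch(t_celsius: float) -> float:
    T = t_celsius + 273.15  # kelvin
    Ts = 373.16             # steam point, kelvin
    log10_ew = (
        -7.90298 * (Ts / T - 1)
        + 5.02808 * math.log10(Ts / T)
        - 1.3816e-7 * (10 ** (11.344 * (1 - T / Ts)) - 1)
        + 8.1328e-3 * (10 ** (-3.49149 * (Ts / T - 1)) - 1)
        + math.log10(1013.246)  # saturation pressure at the steam point, hPa
    )
    return 10 ** log10_ew

# Psychrometric formula with the tables' coefficient A = 0.667e-3
# and pressure fixed at P = 1000 hPa.
def vapor_pressure(t_dry: float, t_wet: float,
                   A: float = 0.667e-3, P: float = 1000.0) -> float:
    return goff_gratch(t_wet) - A * P * (t_dry - t_wet)

def relative_humidity(t_dry: float, t_wet: float) -> float:
    return 100.0 * vapor_pressure(t_dry, t_wet) / goff_gratch(t_dry)

print(round(relative_humidity(25.0, 20.0)))  # roughly 63 (percent)
```

The package itself applies the same relations and then rounds per the tables' rules.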
## 安装
```bash
pip install psychro-table
```
| text/markdown | null | 张狗蛋 <zjh420621@163.com> | null | null | MIT | null | [] | [] | null | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/张狗蛋/psychro-table"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-21T01:17:44.324120 | psychro_table-0.1.0.tar.gz | 4,099 | a8/8d/e223d25bac4363b871a7b4740560c078d75f44172159a2dc2a8ac55f8245/psychro_table-0.1.0.tar.gz | source | sdist | null | false | 421dc29fa9d8fd62cdb6959b5227f0e7 | 456b69f94ed002143aabb1fc554f563202c0616a0ac7f17444dfd25357a07100 | a88de223d25bac4363b871a7b4740560c078d75f44172159a2dc2a8ac55f8245 | null | [] | 240 |
2.4 | pydasa | 0.6.26 | PyDASA is a Python library for dimensional analysis using the Buckingham Pi theorem across scientific and engineering domains. | # PyDASA
<div align="center">
[](https://pypi.org/project/pydasa/) [](https://pypi.org/project/pydasa/) [](https://github.com/DASA-Design/PyDASA/blob/main/LICENSE) [](https://pydasa.readthedocs.io)[](https://codecov.io/github/DASA-Design/PyDASA)
</div>
_Dimensional Analysis for Scientific Applications and Software Architecture_ (**PyDASA**) is a Python library for dimensional analysis of complex phenomena across physical, chemical, computational, and software domains using the Buckingham Π-theorem.
## The Need (Epic User Story)
**As a** researcher, engineer, or software architect analyzing complex systems,
**I want** a comprehensive dimensional analysis library implementing the Buckingham Π-theorem,
**So that** I can systematically discover dimensionless relationships, validate models, and understand system behavior across physical, computational, and software architecture domains.
## Installation
To install **PyDASA**, use pip:
```bash
pip install pydasa
```
Then, to check the installed version of **PyDASA**, run:
```python
import pydasa
print(pydasa.__version__)
```
## Quickstart
Let's try to find the Reynolds number Re = (ρ·v·L)/μ using dimensional analysis.
---
### Step 0: Import PyDASA Dimensional Analysis Module
```python
from pydasa.workflows.phenomena import AnalysisEngine
```
There are two other main modules for Sensitivity Analysis (`SensitivityAnalysis`) and Monte Carlo Simulation (`MonteCarloSimulation`), but we will focus on Dimensional Analysis here.
### Step 1: Define Variables
Define the variables involved in the phenomenon as a dictionary. Each variable is defined by its unique symbolic name (key) and a dictionary of attributes (value).
```python
# Define variables for Reynolds number example
# Can be a list, dict, or Variable objects
variables = {
    # Density: ρ [M/L³] - INPUT
    "\\rho": {
        "_idx": 0,
        "_sym": "\\rho",
        "_fwk": "PHYSICAL",
        "_cat": "IN",  # Category: INPUT variable
        "relevant": True,  # REQUIRED: Include in analysis
        "_dims": "M*L^-3",  # Dimensions: Mass/(Length^3)
        "_setpoint": 1000.0,  # Value for calculations
        "_std_setpoint": 1000.0,  # Standardized value (used internally)
    },
    # Velocity in pipe: v [L/T] - INPUT
    "v": {
        "_idx": 1,
        "_sym": "v",
        "_fwk": "PHYSICAL",
        "_cat": "IN",  # if this were OUT, Reynolds would be trivial
        "relevant": True,
        "_dims": "L*T^-1",  # Dimensions: Length/Time
        "_setpoint": 5.0,
        "_std_setpoint": 5.0,
    },
    # Pipe diameter: D [L] - INPUT
    "D": {
        "_idx": 2,
        "_sym": "D",
        "_fwk": "PHYSICAL",
        "_cat": "IN",
        "relevant": True,
        "_dims": "L",  # Dimensions: Length
        "_setpoint": 0.05,
        "_std_setpoint": 0.05,
    },
    # Dynamic viscosity: μ [M/(L·T)] - OUTPUT
    "\\mu": {
        "_idx": 3,
        "_sym": "\\mu",
        "_fwk": "PHYSICAL",
        "_cat": "OUT",  # Need exactly one OUTPUT variable
        "relevant": True,
        "_dims": "M*L^-1*T^-1",  # Dimensions: Mass/(Length·Time)
        "_setpoint": 0.001,
        "_std_setpoint": 0.001,
    }
}
```
**Notes**:
- Variables with `"relevant": False` are ignored in analysis, even if defined.
- The dimensional matrix needs to have exactly **ONE** output variable (`"_cat": "OUT"`).
- The other variables can be categorized as Inputs (`"IN"`) or Control (`"CTRL"`).
- `_dims` are the dimensional representations using the current FDUs (Fundamental Dimensional Units) of the selected framework. In this case, we use the `PHYSICAL` framework with base dimensions **M** (Mass), **L** (Length), **T** (Time), but other frameworks are available.
- Subsequent calculations of coefficients require `_setpoint` and `_std_setpoint` values.
---
### Step 2: Create Analysis Engine
To complete the setup, create an `AnalysisEngine` object, specifying the framework and passing the variable definitions. Alternatively, you can add variables later.
```python
engine = AnalysisEngine(_idx=0, _fwk="PHYSICAL")
engine.variables = variables
```
**Notes**:
- By default, the framework is `PHYSICAL`.
- Other built-in frameworks are: `COMPUTATION`, `SOFTWARE`. Plus, you can define custom frameworks with the `CUSTOM` option and a FDU definition list.
- Variables can be added as native dictionaries or as `Variable` **PyDASA** objects (use: `from pydasa.elements.parameter import Variable`).
---
### Step 3: Run Analysis
Then you just run the analysis to solve the dimensional matrix.
```python
results = engine.run_analysis() # May fail if variable definitions have errors
print(f"Number of dimensionless groups: {len(results)}")
for name, coeff in results.items():
    print(f"\t{name}: {coeff.get('pi_expr')}")
```
The `run_analysis()` method processes the variables, builds the dimensional matrix, and computes the dimensionless coefficients using the Buckingham Π-theorem; printing the dict-formatted results shows the number of dimensionless groups found and their expressions.
**Output**:
```
Number of dimensionless groups: 1
\Pi_{0}: \frac{\mu}{\rho*v*L}
```
**If errors occur**: Check variable definitions (dimensions, categories, relevance flags)
**Notes**:
- The results are stored in `engine.coefficients` as `Coefficient` objects.
- Each coefficient has attributes like `pi_expr` (the dimensionless expression), `name`, `symbol`, etc. used for further analysis, visualization, or exporting.
- The variables are accessible via `engine.variables` for any additional processing or exporting.
---
### Step 4: Display Results
Then, you can also display the object-like results in console or export them for visualization.
Here is how you print the coefficients:
```python
print(f"Number of dimensionless groups: {len(engine.coefficients)}")
for name, coeff in engine.coefficients.items():
    print(f"\t{name}: {coeff.pi_expr}")
    print(f"\tVariables: {list(coeff.var_dims.keys())}")
    print(f"\tExponents: {list(coeff.var_dims.values())}")
```
Then, the output will be:
```
Number of dimensionless groups: 1
\Pi_{0}: \frac{\mu}{\rho*v*L}
Variables: ['\\rho', 'v', 'L', '\\mu']
Exponents: [-1, -1, -1, 1]
```
Since variables and coefficients are Python objects, you can export them to dict format for external libraries (matplotlib, pandas, seaborn) using `to_dict()`:
```python
# Export to dict for external libraries
data_dict = list(engine.coefficients.values())[0].to_dict()
# Example: Use with pandas
import pandas as pd
df = pd.DataFrame([data_dict])
# Example: Access variables for plotting
var_data = {sym: var.to_dict() for sym, var in engine.variables.items()}
```
---
### Step 5: Derive \& Calculate Coefficients
Since expressions and setpoints are stored in variables, you can derive new coefficients from existing ones and calculate their values directly.
```python
# Derive Reynolds number (Re = 1/Pi_0)
pi_0_key = list(engine.coefficients.keys())[0]
Re_coeff = engine.derive_coefficient(
    expr=f"1/{pi_0_key}",
    symbol="Re",
    name="Reynolds Number"
)
# Calculate numerical value using stored setpoints
Re_value = Re_coeff.calculate_setpoint()  # Uses _std_setpoint values
print(f"Reynolds Number: {Re_value:.2e}")
# Interpret the result based on typical flow regimes
if Re_value < 2300:
    print("Flow regime: LAMINAR")
elif Re_value < 4000:
    print("Flow regime: TRANSITIONAL")
else:
    print("Flow regime: TURBULENT")
```
**Notes**:
- The `derive_coefficient()` method allows you to create new coefficients based on existing ones using mathematical expressions.
- The `calculate_setpoint()` method computes the numerical value of the coefficient using the `_std_setpoint` values of the involved variables.
- The other **PyDASA** modules (Sensitivity Analysis, Monte Carlo Simulation) also use the `Variable` and `Coefficient` objects, so you can seamlessly integrate dimensional analysis results into further analyses.
**Output**:
```
Reynolds Number: 1.00e+05
Flow regime: TURBULENT
```
---
### Summary
| Step | Action | Notes |
| ---- | ------------------- | --------------------------------------------------------------------------------------------------------------- |
| 1 | Define variables | Set `relevant=True`, mark exactly one variable `_cat="OUT"`, and include `_setpoint`/`_std_setpoint` values where possible. |
| 2 | Create engine | `_fwk="PHYSICAL"` (or custom), accepts `dict` or `Variable` objects. |
| 3 | Run analysis | `run_analysis()` may fail on ill-defined variables, inconsistent units, missing attributes, or invalid FDUs. |
| 4 | Display results | Console output or export via `.to_dict()` to use other libraries. |
| 5 | Derive coefficients | Use `derive_coefficient()` + `calculate_setpoint()` to compute new coefficients and their values. |
## Core Capabilities
### Manage Dimensional Domain
- **Manage Fundamental Dimensions** beyond traditional physical units (L, M, T) to include computational (T, S, N) and software architecture domains (T, D, E, C, A).
- **Switch between frameworks** for different problem domains.
### Manage Symbolic and Numerical Variables
- **Define dimensional parameters** with complete specifications:
- **Specify** symbolic representation (name, LaTeX symbol).
- **Define** dimensional formula (e.g., "L*T^-1" for velocity).
- **Establish** numerical ranges (min, max, mean, step)
- **Assign** classification (input, output, control).
- **Configure** statistical distributions and dependencies.
### Integrate System of Units of Measurement
- **Handle measurements** across unit systems (imperial, metric, custom).
- **Convert between units** while maintaining dimensional consistency.
- **Relate measurements** to dimensional parameters.
### Discover Dimensionless Coefficients
- **Generate dimensionless numbers** using the Buckingham Π-theorem:
1. **Build relevance list** by identifying mutually independent parameters influencing the phenomenon.
2. **Construct dimensional matrix** by arranging FDUs (rows) and variables (columns) into core and residual matrices.
3. **Transform to identity matrix** by applying linear transformations to the core matrix.
4. **Generate Pi coefficients** by combining residual and unity matrices to produce dimensionless groups.
- **Classify coefficients** by repeating vs. non-repeating parameters.
- **Manage metadata:** names, symbols, formulas, and parameter relationships.
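The four steps above can be checked by hand for the Reynolds example from the Quickstart (plain Python for illustration; the engine performs this internally via its matrix solver):

```python
# Dimensional matrix for the Quickstart variables: rows are the FDUs
# (M, L, T) and columns are (rho, v, D, mu).
dim_matrix = [
    [ 1,  0, 0,  1],  # M
    [-3,  1, 1, -1],  # L
    [ 0, -1, 0, -1],  # T
]

# Exponents of Pi_0 = mu / (rho * v * D), as reported in Step 4.
exponents = [-1, -1, -1, 1]

# A valid Pi group lies in the null space of the dimensional matrix:
# every fundamental dimension must cancel out.
residual = [sum(row[j] * exponents[j] for j in range(4)) for row in dim_matrix]
print(residual)  # [0, 0, 0] -> the group is dimensionless
```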
### Analyze and Simulate Coefficient Behavior
- **Verify similitude principles** for model scaling and validation.
- **Calculate coefficient ranges** and parameter influence.
- **Run Monte Carlo simulations** to quantify uncertainty propagation.
- **Perform sensitivity analysis** to identify dominant parameters.
- **Generate behavioral data** for dimensionless relationships.
### Export, Integrate, and Visualize Data
- **Export data formats** compatible with pandas, matplotlib, seaborn.
- **Structure results** for integration with visualization libraries.
- **Provide standardized outputs** for dimensionless charts and parameter influence plots.
## Documentation
⚠️ **For advanced features, tutorials, and API reference, please visit the [PyDASA Documentation](https://pydasa.readthedocs.io).** ⚠️
### Development Status
**Emoji Convention:**
- 📋 TODO
- 🔶👨💻 WORKING
- ✅ DONE
- ⚠️ ATTENTION REQUIRED
### ✅ Core Modules (Implemented & Tested)
- **core/**: Foundation classes, configuration, I/O.
- **dimensional/**: Buckingham Π-theorem, dimensional matrix solver.
- **elements/**: Variable and parameter management with specs.
- **workflows/**: AnalysisEngine, MonteCarloSimulation, SensitivityAnalysis.
- **validations/**: Decorator-based validation system.
- **serialization/**: LaTeX and formula parsing.
### 👨💻 Currently Working
- **Documentation**: Expand API reference and tutorial coverage for all modules.
- **Refactoring**: Eliminate redundancy and improve code maintainability across modules.
- **Workflows**: Debug and enhance internal components to support more complex analyses and simulations.
- **Unit System**: Implement dimensional unit conversion structures and mechanisms.
### 📋 Pending Development
- **context/**: Implement Unit conversion system (stub implementation).
- **structs/**: Implement Data structures (partial test coverage).
- **Documentation**: Complete API reference completion and additional tutorials.
## ⚠️ How to Contribute
Contributions are welcome! We use [Conventional Commits](https://www.conventionalcommits.org/) for automatic versioning and changelog generation.
### Commit Message Format
```
<type>(<scope>): <subject>
```
**Types:**
- `feat`: New feature (triggers MINOR version bump).
- `fix`: Bug fix (triggers PATCH version bump).
- `docs`: Documentation changes only.
- `refactor`: Code refactoring without feature changes.
- `test`: Adding or updating tests.
- `perf`: Performance improvements.
- `chore`: Other changes that don't modify src or test files.
**Breaking Changes:** Add `BREAKING CHANGE:` in commit footer to trigger MAJOR version bump.
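The bump rules can be sketched as a small, hypothetical helper (for illustration only; it is not part of the release tooling):

```python
import re

# Hypothetical illustration of the SemVer bump rules listed above.
def bump_kind(message: str) -> str:
    header = message.splitlines()[0]
    # "!" after the type/scope or a BREAKING CHANGE footer -> MAJOR
    if "BREAKING CHANGE:" in message or re.match(r"\w+(\(.+\))?!:", header):
        return "major"
    if header.startswith("feat"):
        return "minor"
    if header.startswith("fix"):
        return "patch"
    return "none"  # docs, refactor, test, perf, chore

print(bump_kind("feat(workflows): add uncertainty propagation"))  # minor
```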
### Examples
```bash
# Feature (0.6.0 → 0.7.0)
git commit -m "feat(workflows): add uncertainty propagation analysis"
# Bug fix (0.6.0 → 0.6.1)
git commit -m "fix(buckingham): resolve matrix singularity edge case"
# Breaking change (0.6.0 → 1.0.0)
git commit -m "feat(api)!: redesign Variable API
BREAKING CHANGE: Variable.value renamed to Variable.magnitude"
```
### Development Workflow
```bash
# Clone and setup
git clone https://github.com/DASA-Design/PyDASA.git
cd PyDASA
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest tests/
# Commit with conventional format
git commit -m "feat(module): add new feature"
# Create PR for review
```
### Release Process
1. Make changes with conventional commit messages.
2. Create PR and merge to `main`.
3. GitHub Actions automatically:
- Analyzes commit messages.
- Bumps version (MAJOR.MINOR.PATCH).
- Updates `_version.py` and `pyproject.toml`.
- Creates GitHub release with changelog.
- Publishes to PyPI.
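The bump rules above can be sketched in a few lines. This is a simplified model of what the release workflow computes, not the actual GitHub Actions logic; `next_version` is a hypothetical helper:

```python
import re

def next_version(version: str, commits: list[str]) -> str:
    """Compute the next semantic version from conventional commit messages."""
    major, minor, patch = map(int, version.split("."))
    levels = []
    for msg in commits:
        header = msg.splitlines()[0]
        # `!` after type/scope or a BREAKING CHANGE footer triggers MAJOR
        if "BREAKING CHANGE:" in msg or re.match(r"^\w+(\([^)]+\))?!:", header):
            levels.append(3)
        elif header.startswith("feat"):
            levels.append(2)  # MINOR bump
        elif header.startswith("fix"):
            levels.append(1)  # PATCH bump
    bump = max(levels, default=0)
    if bump == 3:
        return f"{major + 1}.0.0"
    if bump == 2:
        return f"{major}.{minor + 1}.0"
    if bump == 1:
        return f"{major}.{minor}.{patch + 1}"
    return version

print(next_version("0.6.0", ["feat(workflows): add uncertainty propagation"]))  # 0.7.0
```

`docs`, `test`, `perf`, and `chore` commits leave the version untouched in this sketch; the highest-severity commit in the batch wins.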
For more details, visit our [Contributing Guide](https://pydasa.readthedocs.io/en/latest/development/contributing.html).
| text/markdown | @SFAM | "@SFAM" <sa-artea@uniandes.edu.co> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Development Status :: 3 - Alpha",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering"
] | [] | https://github.com/DASA-Design/PyDASA | null | >=3.10 | [] | [] | [] | [
"antlr4-python3-runtime==4.11",
"numpy>=1.26.4",
"scipy>=1.13.0",
"sympy>=1.12",
"matplotlib>=3.8.0",
"pandas>=2.1.0",
"SALib>=1.4.5",
"pytest>=8.1.1; extra == \"dev\"",
"twine>=6.1.0; extra == \"dev\"",
"sphinx>=7.3.7; extra == \"docs\"",
"pydata-sphinx-theme>=0.14.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=1.24.0; extra == \"docs\"",
"sphinx-autoapi>=3.0.0; extra == \"docs\"",
"myst-parser>=2.0.0; extra == \"docs\"",
"sphinx-copybutton>=0.5.2; extra == \"docs\"",
"sphinx-favicon>=1.0.1; extra == \"docs\"",
"sphinx-gitstamp>=0.3.3; extra == \"docs\"",
"sphinx-prompt>=1.8.0; extra == \"docs\"",
"sphinx-markdown-builder>=0.6.5; extra == \"docs\"",
"myst-parser>=2.0.0; extra == \"docs\"",
"nbsphinx>=0.9.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/DASA-Design/PyDASA"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:17:22.953713 | pydasa-0.6.26.tar.gz | 124,093 | 15/14/303f4c772f31d9853e8b42e73841226b46013610be27d5ee8acae446b9ac/pydasa-0.6.26.tar.gz | source | sdist | null | false | 1cf1ef52b2c1c0f7ee5b1d72fd69e56a | 593ed1d8904e5c5b982015b6b58f856cf84b8bf8b7172375bc31a3590cc714d9 | 1514303f4c772f31d9853e8b42e73841226b46013610be27d5ee8acae446b9ac | GPL-3.0-or-later | [
"LICENSE"
] | 214 |
2.4 | glow | 0.15.10 | Functional Python tools | # Glow Library
Set of functional tools for easier prototyping
## Overview
...
## Installation
For basic installation use:
```bash
pip install glow
```
<details>
<summary>Specific versions with additional requirements</summary>
```bash
pip install glow[io] # For I/O extras
pip install glow[all] # For all
```
</details>
Glow is compatible with Python 3.13+.
Tested on Ubuntu & Windows.
## Structure
- `glow.*` - Core parts, available out of the box
- `glow.io.*` - I/O wrappers to access data in convenient formats
## Core features
- `glow.mapped` - convenient tool to parallelize computations
- `glow.memoize` - use it to reduce the number of calls to any function
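As a concept sketch only (not glow's actual implementation), memoization caches results keyed by the call arguments, so repeated calls skip recomputation:

```python
import functools

def memoize(fn):
    """Cache results keyed by the positional arguments (concept sketch)."""
    cache = {}
    calls = {"count": 0}  # track how often the wrapped function really runs

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            calls["count"] += 1
            cache[args] = fn(*args)
        return cache[args]

    wrapper.calls = calls
    return wrapper

@memoize
def slow_square(x):
    return x * x

assert slow_square(4) == 16
assert slow_square(4) == 16  # second call is served from the cache
assert slow_square.calls["count"] == 1
```

See the library's own docs for the real `glow.memoize` signature and cache-eviction options.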
## IO features
### `glow.io.Sound` - playable sound wrapper
<details>
```python
from datetime import timedelta
import numpy as np
from glow.io import Sound
array = np.zeros(44100, dtype=np.float32)  # e.g. one second of silence
sound = Sound(array, rate=44100)  # Wrap np.ndarray
sound = Sound.load('test.flac') # Load sound into memory from file
# Get properties
rate: int = sound.rate
duration: timedelta = sound.duration
dtype: np.dtype = sound.dtype
# Plays sound through default device, supports Ctrl-C for interruption
sound.play()
```
</details>
| text/markdown | null | Paul Maevskikh <arquolo@gmail.com> | null | Paul Maevskikh <arquolo@gmail.com> | MIT License
Copyright (c) 2019 Paul Maevskikh
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.13 | [] | [] | [] | [
"loguru",
"loky~=3.1",
"lxml",
"numpy<3,>=1.21",
"tqdm",
"typing-inspection~=0.4.1",
"wrapt~=1.15",
"asttokens; extra == \"all\"",
"colorama; sys_platform == \"win32\" and extra == \"all\"",
"executing; extra == \"all\"",
"matplotlib; extra == \"all\"",
"opencv-python-headless~=4.0; extra == \"all\"",
"pygments; extra == \"all\"",
"sounddevice; extra == \"all\"",
"soundfile; extra == \"all\"",
"black~=26.1; extra == \"dev\"",
"flake8-alphabetize; extra == \"dev\"",
"flake8-pie; extra == \"dev\"",
"flake8-pyi; extra == \"dev\"",
"flake8-pyproject; extra == \"dev\"",
"flake8-simplify; extra == \"dev\"",
"flake8~=7.0; extra == \"dev\"",
"isort; extra == \"dev\"",
"mypy~=1.19; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest~=9.0; extra == \"dev\"",
"ruff~=0.15.0; extra == \"dev\"",
"black~=26.1; extra == \"dev-core\"",
"flake8-pie; extra == \"dev-core\"",
"flake8-pyi; extra == \"dev-core\"",
"flake8-pyproject; extra == \"dev-core\"",
"flake8-simplify; extra == \"dev-core\"",
"flake8~=7.0; extra == \"dev-core\"",
"isort; extra == \"dev-core\"",
"mypy~=1.19; extra == \"dev-core\"",
"pytest-asyncio; extra == \"dev-core\"",
"pytest~=9.0; extra == \"dev-core\"",
"ruff~=0.15.0; extra == \"dev-core\"",
"black~=26.1; extra == \"dev-wemake\"",
"flake8-pie; extra == \"dev-wemake\"",
"flake8-pyi; extra == \"dev-wemake\"",
"flake8-pyproject; extra == \"dev-wemake\"",
"flake8-simplify; extra == \"dev-wemake\"",
"flake8~=7.0; extra == \"dev-wemake\"",
"isort; extra == \"dev-wemake\"",
"mypy~=1.19; extra == \"dev-wemake\"",
"pytest-asyncio; extra == \"dev-wemake\"",
"pytest~=9.0; extra == \"dev-wemake\"",
"ruff~=0.15.0; extra == \"dev-wemake\"",
"wemake-python-styleguide~=1.3.0; extra == \"dev-wemake\"",
"asttokens; extra == \"ic\"",
"colorama; sys_platform == \"win32\" and extra == \"ic\"",
"executing; extra == \"ic\"",
"pygments; extra == \"ic\"",
"opencv-python-headless~=4.0; extra == \"io\"",
"sounddevice; extra == \"io\"",
"soundfile; extra == \"io\"",
"psutil; extra == \"memprof\""
] | [] | [] | [] | [
"homepage, https://github.com/arquolo/glow"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T01:14:52.684724 | glow-0.15.10.tar.gz | 57,124 | 4a/c0/340034813eaba3683b78fe8a66fe05b3d088b5e54273369649b5df0806ad/glow-0.15.10.tar.gz | source | sdist | null | false | b93be3d92cda47c52784c0bcee0f51b8 | 2d3b5f8c644e3d3da689c9da9c0639ca11b853ea8bdcb825c064493de594a038 | 4ac0340034813eaba3683b78fe8a66fe05b3d088b5e54273369649b5df0806ad | null | [
"LICENSE"
] | 242 |
2.4 | mamba2-jax | 1.0.1 | Pure JAX/Flax NNX implementation of the Mamba2 state-space model with state caching, pretrained weight loading, and time-series forecasting. | # Mamba2-JAX
[](https://pypi.org/project/mamba2-jax/)
[](LICENSE)
A pure JAX/Flax NNX implementation of the Mamba2 state-space model with SSM state caching, pretrained weight loading from HuggingFace, causal language modeling, and time-series forecasting.
This is the standalone PyPI package for the Mamba2 implementation authored by [Cosmo Santoni](https://github.com/CosmoNaught) and merged into Google's [JAX ML Bonsai](https://github.com/jax-ml/bonsai) library. The upstream source lives at [`bonsai/models/mamba2`](https://github.com/jax-ml/bonsai/tree/main/bonsai/models/mamba2).
## Supported Models
[](https://raw.githubusercontent.com/CosmoNaught/mamba2-jax/main/docs/model_support_table.png)
## Features
- **Flax NNX** modules (no legacy `init`/`apply` ceremony)
- **SSM + convolution state caching** for O(n) autoregressive generation
- **Pretrained weight loading** from HuggingFace (`state-spaces/mamba2-130m`, etc.)
- **Causal language modeling** (`Mamba2ForCausalLM`) with tied or untied embeddings
- **Time-series forecasting** (`Mamba2Forecaster`)
- **Golden parity tests** against the reference `mamba_ssm` PyTorch implementation
- Fully compatible with `jax.jit`, `jax.grad`, `jax.vmap`
- Runs on **CPU, GPU (CUDA), and TPU**
## State Space Caching
The SSM state cache enables O(n) autoregressive generation instead of O(n^2) re-computation. The example below demonstrates a **~30x speedup** on the 780M parameter model running on a TPU v6e when caching is enabled:

## Installation
### From PyPI
```bash
pip install mamba2-jax
```
### From source
```bash
git clone https://github.com/CosmoNaught/mamba2-jax.git
cd mamba2-jax
pip install -e ".[dev]"
```
### For loading pretrained weights
```bash
pip install "mamba2-jax[pretrained]"
```
For GPU or TPU support, install the appropriate JAX backend as described in the [JAX installation guide](https://jax.readthedocs.io/en/latest/installation.html).
## Usage
### Causal Language Model
```python
import jax.numpy as jnp
from flax import nnx
from mamba2_jax import Mamba2Config, Mamba2ForCausalLM
cfg = Mamba2Config(vocab_size=1024, hidden_size=256, num_hidden_layers=4,
state_size=64, head_dim=32, chunk_size=64)
model = Mamba2ForCausalLM(cfg, rngs=nnx.Rngs(0))
input_ids = jnp.ones((2, 64), dtype=jnp.int32)
outputs = model(input_ids, labels=input_ids)
print(outputs["logits"].shape) # (2, 64, 1024)
print(float(outputs["loss"]))
```
### Loading pretrained weights
```python
from mamba2_jax import Mamba2ForCausalLM
model = Mamba2ForCausalLM.from_pretrained("state-spaces/mamba2-130m")
```
### Cached generation
```python
import jax.numpy as jnp
from flax import nnx
from mamba2_jax import Mamba2ForCausalLM, Mamba2Config
cfg = Mamba2Config.tiny()
model = Mamba2ForCausalLM(cfg, rngs=nnx.Rngs(0))
# Prefill
prompt = jnp.array([[1, 2, 3]], dtype=jnp.int32)
out = model(prompt)
cache = out["cache"]
# Decode with cache (O(1) per step)
next_token = jnp.array([[4]], dtype=jnp.int32)
out = model(next_token, cache=cache)
print(out["logits"].shape) # (1, 1, vocab_size)
```
### Time-series forecasting
```python
import jax.numpy as jnp
from mamba2_jax import Mamba2Forecaster, create_random_forecaster
model = create_random_forecaster(input_dim=10, d_model=256, n_layers=4,
output_dim=1, forecast_horizon=24)
x = jnp.ones((8, 100, 10)) # (batch, seq_len, features)
y = model(x)
print(y.shape) # (8, 24, 1)
```
## Performance
Benchmarked on a TPU v6e with the `state-spaces/mamba2-130m` checkpoint:
[](https://raw.githubusercontent.com/CosmoNaught/mamba2-jax/main/docs/alt_combined_figure.png)
## Project Structure
```
mamba2-jax/
├── mamba2_jax/
│ ├── __init__.py # Public API
│ ├── modeling.py # Config, SSD kernel, all model classes
│ └── params.py # Weight loading & parameter utilities
├── tests/
│ ├── test_mamba2.py # Comprehensive test suite
│ ├── run_model.py # Generation demo script
│ └── artifacts/ # Golden parity data
├── docs/ # Benchmark figures
├── LICENSE # Apache 2.0
├── pyproject.toml
└── README.md
```
## Contributing
Contributions are welcome! Areas where help is particularly valuable:
- Performance optimization and profiling
- Test coverage expansion
- Bug reports and feature requests
Please open an issue or submit a pull request on GitHub.
## Acknowledgments
**Original Mamba2 Authors:**
- Tri Dao and Albert Gu for the Mamba2 architecture and the original `mamba_ssm` implementation
- The State Spaces team for advancing SSM research
**JAX Ecosystem:**
- The JAX, Flax, and Optax teams at Google for the excellent frameworks
## License
This project is licensed under the [Apache License 2.0](LICENSE).
## Citation
If you use this implementation in your research, please cite the original Mamba2 paper and this JAX implementation:
```bibtex
@inproceedings{mamba2,
title={Transformers are {SSM}s: Generalized Models and Efficient Algorithms
Through Structured State Space Duality},
author={Dao, Tri and Gu, Albert},
booktitle={International Conference on Machine Learning (ICML)},
year={2024}
}
@software{mamba2jax,
author = {Cosmo Santoni},
title = {mamba2-jax: Pure JAX Implementation of Mamba2},
year = {2025},
url = {https://github.com/CosmoNaught/mamba2-jax}
}
```
| text/markdown | Cosmo Santoni | null | null | null | Apache-2.0 | mamba2, jax, flax, nnx, state-space-model, ssm, language-modeling, time-series, sequence-modeling, bonsai, pretrained, caching | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"flax>=0.12.0",
"jax>=0.8.0",
"jaxlib>=0.8.0",
"optax>=0.2.6",
"pytest>=9.0.1; extra == \"dev\"",
"absl-py>=2.0.0; extra == \"dev\"",
"numpy>=1.26.0; extra == \"dev\"",
"safetensors>=0.4.0; extra == \"pretrained\"",
"huggingface_hub>=0.20.0; extra == \"pretrained\"",
"transformers>=4.40.0; extra == \"pretrained\""
] | [] | [] | [] | [
"Homepage, https://github.com/CosmoNaught/mamba2-jax",
"Repository, https://github.com/CosmoNaught/mamba2-jax",
"Issues, https://github.com/CosmoNaught/mamba2-jax/issues",
"Upstream, https://github.com/jax-ml/bonsai/tree/main/bonsai/models/mamba2"
] | twine/6.2.0 CPython/3.10.7 | 2026-02-21T01:13:14.795429 | mamba2_jax-1.0.1.tar.gz | 24,042 | 1e/12/c70cce802e3f51ae4d1a1d010e73f7733331599fd00077979f035efcd5ce/mamba2_jax-1.0.1.tar.gz | source | sdist | null | false | a28cb15ab01676de91670428118a5a75 | 7b0e72b967f01e4447f71653850207709d0be712f4001792156d74d5845b1067 | 1e12c70cce802e3f51ae4d1a1d010e73f7733331599fd00077979f035efcd5ce | null | [
"LICENSE"
] | 216 |
2.4 | dlt645 | 1.3.6 | Python implementation of the DL/T645 protocol | # DL/T645-2007 Protocol Multi-Language Implementation
A full-featured multi-language implementation of the DL/T645-2007 energy-meter communication protocol, supporting C++, Python, and Go with a unified interface and feature set.
## 🌴 Communication Support
| Feature | Status |
| ------------------------------- | ---- |
| **TCP client** 🐾 | ✅ |
| **TCP server** 🐾 | ✅ |
| **RTU master** 🐾 | ✅ |
| **RTU slave** 🐾 | ✅ |
## 🌴 Feature Status
| Feature | Status |
| ---------------------------------------------- | -- |
| **Read/write communication address** 🐾 | ✅ |
| **Change password** 🐾 | ✅ |
| **Broadcast time synchronization** 🐾 | ✅ |
| **Energy** 🐾 | ✅ |
| **Maximum demand and occurrence time** 🐾 | ✅ |
| **Variables** 🐾 | ✅ |
| **Read/write parameter variables** 🐾 | ✅ |
| **Event records** 🐾 | ✅ |
| **Frozen quantities** 🐾 | ❌ |
| **Load records** 🐾 | ❌ |
## Choosing a Language Version
Pick the language version you are interested in for detailed documentation:
- [C++ version](../cpp/README.md)
- Python version (this document)
- [Go version](../go/README.md)
## DL/T645-2007 Protocol Python Library
A full-featured Python implementation of the DL/T645-2007 energy-meter communication protocol, supporting both TCP and RTU transports for meter data reads/writes and communication testing.
## Features
- 🌐 **Multiple transports**: TCP and RTU (serial) communication
- 📊 **Comprehensive protocol support**: implements the main functions of DL/T645-2007
- 🔌 **Client and server**: both client and server roles are provided
- 📈 **Multiple data types**: read/write energy, maximum demand, and variable data
- 🛡️ **Device authentication**: device address validation and password protection
- 📝 **Logging**: built-in logging system for easy debugging
- 🎯 **Easy to use**: a concise API that is quick to pick up
## Supported Data Types
- **Energy data** (class 00): forward active energy, reverse active energy, etc.
- **Maximum demand data** (class 01): maximum demand and occurrence time
- **Variable data** (class 02): real-time voltage, current, power, etc.
- **Parameter data** (class 04): device parameters, configuration, etc.
## Installation
```bash
pip install dlt645
```
### Documentation
**https://600888.github.io/dlt645**
![star](https://gitee.com/chen-dongyu123/dlt645/badge/star.svg?theme=dark)
## Quick Start
### Create a TCP server
```python
from datetime import datetime

from dlt645 import Demand, MeterServerService  # Demand assumed exported at the package top level

# Create the TCP server
server_svc = MeterServerService.new_tcp_server("127.0.0.1", 8021, 3000)
# Set the device address
server_svc.set_address("123456781012")
# Set the password
server_svc.set_password("00123456")
# Write energy
server_svc.set_00(0x00000000, 50.5)
# Write maximum demand
server_svc.set_01(
    0x01010000,
    Demand(78.0, datetime.strptime("2023-01-01 12:00:00", "%Y-%m-%d %H:%M:%S")),
)
# Write a variable
server_svc.set_02(0x02010100, 66.6)
# Set event records
server_svc.set_03(
    0x03010000,
    [
        ("000015", "000012"),  # phase A voltage-loss count, accumulated time
        ("000025", "000024"),  # phase B voltage-loss count, accumulated time
        ("000034", "000030"),  # phase C voltage-loss count, accumulated time
    ],
)
# Write parameter variables
server_svc.set_04(0x04000101, "25110201")  # 2025-11-02, a Monday
server_svc.set_04(0x04000204, "10")  # set the number of tariffs to 10
schedule_list = [f"1209{i:02d}" for i in range(1, 15)]  # "120901" .. "120914"
server_svc.set_04(0x04010000, schedule_list)  # first time-zone table
# Start the server
server_svc.server.start()
```

### Create an RTU server
```python
from datetime import datetime

from dlt645 import Demand, MeterServerService  # Demand assumed exported at the package top level

# Create the RTU server
server_svc = MeterServerService.new_rtu_server(
    port="COM11", data_bits=8, stop_bits=1, baud_rate=9600, parity="N", timeout=1.0
)
# Set the device address
server_svc.set_address("123456781012")
# Set the password
server_svc.set_password("00123456")
# Write energy
server_svc.set_00(0x00000000, 50.5)
# Write maximum demand
server_svc.set_01(
    0x01010000,
    Demand(78.0, datetime.strptime("2023-01-01 12:00:00", "%Y-%m-%d %H:%M:%S")),
)
# Write a variable
server_svc.set_02(0x02010100, 66.6)
# Set event records
server_svc.set_03(
    0x03010000,
    [
        ("000015", "000012"),  # phase A voltage-loss count, accumulated time
        ("000025", "000024"),  # phase B voltage-loss count, accumulated time
        ("000034", "000030"),  # phase C voltage-loss count, accumulated time
    ],
)
# Write parameter variables
server_svc.set_04(0x04000101, "25110201")  # 2025-11-02, a Monday
server_svc.set_04(0x04000204, "10")  # set the number of tariffs to 10
schedule_list = [f"1209{i:02d}" for i in range(1, 15)]  # "120901" .. "120914"
server_svc.set_04(0x04010000, schedule_list)  # first time-zone table
# Start the server
server_svc.server.start()
```

### Create a TCP client
```python
from dlt645 import MeterClientService

client_svc = MeterClientService.new_tcp_client("127.0.0.1", 10521, timeout=1)
# Set the device password (level 0)
client_svc.set_password("00123456")
# Read the communication address
address_data = client_svc.read_address()
if address_data and hasattr(address_data, "value"):
    print(f"Communication address: {address_data.value}")
else:
    print("Failed to read the communication address")
# Set the device address
client_svc.set_address(address_data.value)
# Read energy data
data_item = client_svc.read_00(0x00000000)
print(f"Energy data: {data_item}")
# Read maximum demand and occurrence time
data_item2 = client_svc.read_01(0x01010000)
print(f"Maximum demand and occurrence time: {data_item2}")
# Read variable data
data_item3 = client_svc.read_02(0x02010100)
print(f"Variable data: {data_item3}")
# Read event record data
data_item4 = client_svc.read_03(0x03010000)
print(f"Event record data: {data_item4}")
# Read parameter variables
data_item5 = client_svc.read_04(0x04000101)
print(f"Date and weekday: {data_item5}")
data_item6 = client_svc.read_04(0x04000204)
print(f"Number of tariffs: {data_item6}")
# Read time-zone table data
data_item7 = client_svc.read_04(0x04010000)
for item in data_item7:
    print(item)
# Read public holiday dates and time-period table numbers
data_item8 = client_svc.read_04(0x04030001)
print(f"Public holidays and time-period table numbers: {data_item8}")
# Change the password
client_svc.change_password("00123456", "04123456")
# Write a parameter variable
data_item9 = client_svc.write_04(
    0x04000101, "25120901", password="04123456"
)  # write date and weekday
```

### Create an RTU client
```python
from dlt645 import MeterClientService

# Create the RTU client
client_svc = MeterClientService.new_rtu_client(
    port="COM10",
    baudrate=9600,
    databits=8,
    stopbits=1,
    parity="N",
    timeout=0.5
)
# Set the device password (level 0)
client_svc.set_password("00123456")
# Read the communication address
address_data = client_svc.read_address()
if address_data and hasattr(address_data, "value"):
    print(f"Communication address: {address_data.value}")
else:
    print("Failed to read the communication address")
# Set the device address
client_svc.set_address(address_data.value)
# Read energy data
data_item = client_svc.read_00(0x00000000)
print(f"Energy data: {data_item}")
# Read maximum demand and occurrence time
data_item2 = client_svc.read_01(0x01010000)
print(f"Maximum demand and occurrence time: {data_item2}")
# Read variable data
data_item3 = client_svc.read_02(0x02010100)
print(f"Variable data: {data_item3}")
# Read event record data
data_item4 = client_svc.read_03(0x03010000)
print(f"Event record data: {data_item4}")
# Read parameter variables
data_item5 = client_svc.read_04(0x04000101)
print(f"Date and weekday: {data_item5}")
data_item6 = client_svc.read_04(0x04000204)
print(f"Number of tariffs: {data_item6}")
# Read time-zone table data
data_item7 = client_svc.read_04(0x04010000)
for item in data_item7:
    print(item)
# Read public holiday dates and time-period table numbers
data_item8 = client_svc.read_04(0x04030001)
print(f"Public holidays and time-period table numbers: {data_item8}")
# Change the password
client_svc.change_password("00123456", "04123456")
# Write a parameter variable
data_item9 = client_svc.write_04(
    0x04000101, "25120901", password="04123456"
)  # write date and weekday
```

### Testing with third-party tools
Test results:

## API Reference
### Server API
#### MeterServerService
The main server service class provides the following methods:
- `new_tcp_server(ip: str, port: int, timeout: float)` - Create a TCP server (class method)
- `new_rtu_server(port: str, data_bits: int, stop_bits: int, baud_rate: int, parity: str, timeout: float)` - Create an RTU server (class method)
- `set_00(di: int, value: float)` - Set energy data
- `set_01(di: int, demand: Demand)` - Set maximum demand data
- `set_02(di: int, value: float)` - Set variable data
- `set_03(di: int, value: list)` - Set event record data
- `set_04(di: int, value: float)` - Set parameter data
- `set_address(address: str)` - Set the device address
- `set_password(password: str)` - Set the password
- `change_password(old_password: str, new_password: str)` - Change the password
### Client API
#### MeterClientService
The main client service class provides the following methods:
- `new_tcp_client(ip: str, port: int, timeout: float)` - Create a TCP client (class method)
- `new_rtu_client(port: str, baudrate: int, databits: int, stopbits: int, parity: str, timeout: float)` - Create an RTU client (class method)
- `read_00(di: int)` - Read energy data
- `read_01(di: int)` - Read maximum demand data
- `read_02(di: int)` - Read variable data
- `read_03(di: int)` - Read event record data
- `read_04(di: int)` - Read parameter data
- `write_04(di: int, value: str, password: str)` - Write parameter data
- `read_address()` - Read the device address
- `write_address(new_address: str)` - Write the device address
- `set_address(address: str)` - Set the local device address
- `set_password(password: str)` - Set the password
- `change_password(old_password: str, new_password: str)` - Change the password
## Data Identifiers
The DL/T645 protocol uses 4-byte data identifiers (DIs) to distinguish data items:
### Energy data (class 00)
- `0x00000000` - Total active energy
- `0x00010000` - Forward active energy
- `0x00020000` - Reverse active energy
### Maximum demand data (class 01)
- `0x01000000` - Total maximum demand
- `0x01010000` - Forward maximum demand
### Variable data (class 02)
- `0x02010100` - Phase A voltage
- `0x02010200` - Phase B voltage
- `0x02010300` - Phase C voltage
- `0x02020100` - Phase A current
- `0x02020200` - Phase B current
- `0x02020300` - Phase C current
### Parameter data (class 04)
- `0x04000101` - Date and weekday (0 = Sunday)
- `0x04000102` - Time
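The class byte can be pulled out of a DI with plain bit shifts. The sketch below is a hypothetical helper (not part of the library) that splits a DI into its DI3..DI0 bytes, where DI3 selects the data class shown above:

```python
def describe_di(di: int) -> dict:
    """Split a 4-byte DL/T645-2007 data identifier into its DI3..DI0 bytes."""
    di3 = (di >> 24) & 0xFF  # DI3 selects the data class
    di2 = (di >> 16) & 0xFF
    di1 = (di >> 8) & 0xFF
    di0 = di & 0xFF
    classes = {0x00: "energy", 0x01: "max demand", 0x02: "variable", 0x04: "parameter"}
    return {"class": classes.get(di3, "other"), "bytes": (di3, di2, di1, di0)}

print(describe_di(0x02010100))  # phase A voltage: a class-02 variable
```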
## Configuration Files
The library ships with configuration files defining the various data types:
- `config/energy_types.json` - Energy data types
- `config/demand_types.json` - Maximum demand data types
- `config/variable_types.json` - Variable data types
- `config/event_record_types.json` - Event record data types
- `config/parameter_types.json` - Parameter data types
## Development Guide
### Requirements
- Python >= 3.7
- loguru >= 0.5.0
- pyserial >= 3.4
### Running tests
```bash
# Install development dependencies
pip install -e .[dev]
# Run tests
pytest
```
### Debug logging
The library uses loguru for logging; enable verbose logging with:
```python
from loguru import logger
logger.add("dlt645.log", level="DEBUG")
```
## FAQ
### Q: How do I handle communication timeouts?
A: Set the `timeout` parameter when creating the client, or catch timeout exceptions with try/except.
### Q: Which serial parameters are supported?
A: Standard serial parameters are supported: baud rate (1200-115200), data bits (7-8), stop bits (1-2), parity (N/E/O).
### Q: How do I add custom data types?
A: Edit the JSON configuration files under the `config` directory to add new data identifiers and format definitions.
## License
Apache License 2.0
## Contributing
Issues and pull requests are welcome!
## Contact
- Author: Chen Dongyu
- Email: 1755696012@qq.com
- Project: https://gitee.com/chen-dongyu123/dlt645
| text/markdown | Chen Dongyu | Chen Dongyu <1755696012@qq.com> | null | Chen Dongyu <1755696012@qq.com> | Apache License 2.0 | dlt645, protocol, communication, energy, meter | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Communications",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://gitee.com/chen-dongyu123/dlt645 | null | >=3.7 | [] | [] | [] | [
"loguru>=0.7.0",
"pyserial>=3.5",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov>=2.0; extra == \"dev\"",
"black>=21.0; extra == \"dev\"",
"flake8>=3.8; extra == \"dev\"",
"mypy>=0.800; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://gitee.com/chen-dongyu123/dlt645",
"Repository, https://gitee.com/chen-dongyu123/dlt645",
"Issues, https://gitee.com/chen-dongyu123/dlt645/issues",
"Changelog, https://gitee.com/chen-dongyu123/dlt645/tree/master/python/CHANGELOG.md",
"Documentation, https://gitee.com/chen-dongyu123/dlt645/wiki"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-21T01:12:58.982739 | dlt645-1.3.6.tar.gz | 134,310 | e6/72/735e12fd21b87e221746e94372137c576d025e2b8f80ad41edcd160885b6/dlt645-1.3.6.tar.gz | source | sdist | null | false | 9986286624ac8563495353d670e9a324 | c111d1a0f64d2ec4aeae97e3cdb85c2cba194ae463236302528eb240b1793a40 | e672735e12fd21b87e221746e94372137c576d025e2b8f80ad41edcd160885b6 | null | [] | 228 |
2.3 | anibridge-plex-provider | 0.2.0b4 | Plex provider for the AniBridge project. | # anibridge-plex-provider
An [AniBridge](https://github.com/anibridge/anibridge) provider for [Plex](https://www.plex.tv/).
_This provider comes built-in with AniBridge, so you don't need to install it separately._
## Configuration
### `url` (`str`)
The base URL of the Plex server (e.g., http://localhost:32400).
### `token` (`str`)
The account API token of the Plex server admin. Get a token by following [these instructions](https://support.plex.tv/articles/204059436-finding-an-authentication-token-x-plex-token/).
### `user` (`str`)
The Plex user to synchronize. This can be a username, email, or display name.
### `sections` (`list[str]`, optional)
A list of Plex library section names to constrain synchronization to. Leave empty/unset to include all sections.
### `genres` (`list[str]`, optional)
A list of genres to constrain synchronization to. Leave empty/unset to include all genres.
### `strict` (`bool`, optional, default: `true`)
Whether to enforce strict matching when resolving mappings. If `true`, only exact mapping matches of a show's episode ordering (TMDB or TVDB) will be considered. If `false`, falling back from TMDB to TVDB (or vice versa) is allowed.
You can configure episode ordering in the show's or section's 'Advanced' settings.
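The strict/fallback behavior can be sketched as follows (a conceptual illustration, not the provider's actual code):

```python
def resolve_mapping(available: dict, preferred: str, strict: bool = True):
    """Pick an episode-ordering mapping.

    `available` maps ordering name ("tmdb"/"tvdb") to a mapping object;
    `preferred` is the ordering configured on the show or section.
    """
    if preferred in available:
        return available[preferred]  # exact match always wins
    if strict:
        return None  # strict mode: only exact matches are considered
    fallback = "tvdb" if preferred == "tmdb" else "tmdb"
    return available.get(fallback)  # non-strict: fall back to the other ordering

available = {"tvdb": "tvdb-mapping"}
print(resolve_mapping(available, "tmdb", strict=False))  # tvdb-mapping
```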
```yaml
library_provider_config:
plex:
url: ...
token: ...
user: ...
# sections: []
# genres: []
# strict: true
```
| text/markdown | Elias Benbourenane | Elias Benbourenane <eliasbenbourenane@gmail.com> | Elias Benbourenane | Elias Benbourenane <eliasbenbourenane@gmail.com> | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"aiohttp>=3.13.2",
"anibridge-library-base>=0.2.0b2",
"limiter>=0.5.0",
"plexapi>=4.17.2",
"pydantic>=2.12.4",
"tzlocal>=5.3.1"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T01:11:59.739224 | anibridge_plex_provider-0.2.0b4.tar.gz | 14,729 | 1e/2c/a903938be9b1801e400c989c7d111c720e4c19a4a2c81b420602d94a1dda/anibridge_plex_provider-0.2.0b4.tar.gz | source | sdist | null | false | 7d868aee92b82d32431389994917b522 | 47ac70805a72b0ada1a2ea08a5d3952edf4ab04b6f70a3e255acf84ef6fc826a | 1e2ca903938be9b1801e400c989c7d111c720e4c19a4a2c81b420602d94a1dda | null | [] | 208 |
2.3 | casbin-fastapi-decorator-db | 0.1.4 | Database enforcer provider for casbin-fastapi-decorator | # casbin-fastapi-decorator-db
Database enforcer provider for [casbin-fastapi-decorator](https://github.com/Neko1313/casbin-fastapi-decorator).
Loads Casbin policies from a SQLAlchemy async session and creates a `casbin.Enforcer` per request.
## Installation
```bash
pip install casbin-fastapi-decorator-db
```
Or via the core package extra:
```bash
pip install "casbin-fastapi-decorator[db]"
```
## Usage
Define a SQLAlchemy ORM model for your policy table:
```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
class Base(DeclarativeBase):
pass
class PolicyORM(Base):
__tablename__ = "policies"
id: Mapped[int] = mapped_column(primary_key=True)
sub: Mapped[str]
obj: Mapped[str]
act: Mapped[str]
```
Create the provider and pass it to `PermissionGuard`:
```python
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from casbin_fastapi_decorator_db import DatabaseEnforcerProvider
from casbin_fastapi_decorator import PermissionGuard
from fastapi import FastAPI, HTTPException
engine = create_async_engine("sqlite+aiosqlite:///./policies.db")
AsyncSessionLocal = async_sessionmaker(engine, class_=AsyncSession)
async def get_current_user() -> dict:
return {"sub": "alice", "role": "admin"}
enforcer_provider = DatabaseEnforcerProvider(
model_path="model.conf",
session_factory=AsyncSessionLocal,
policy_model=PolicyORM,
policy_mapper=lambda p: (p.sub, p.obj, p.act),
default_policies=[("admin", "*", "*")], # optional
)
guard = PermissionGuard(
user_provider=get_current_user,
enforcer_provider=enforcer_provider,
error_factory=lambda user, *rv: HTTPException(403, "Forbidden"),
)
app = FastAPI()
@app.get("/articles")
@guard.require_permission("articles", "read")
async def list_articles():
return []
```
## API
### `DatabaseEnforcerProvider`
```python
DatabaseEnforcerProvider(
model_path: str,
session_factory: async_sessionmaker[AsyncSession],
policy_model: type,
policy_mapper: Callable[[Any], tuple],
default_policies: list[tuple] = [],
)
```
| Parameter | Description |
|---|---|
| `model_path` | Path to the Casbin model `.conf` file |
| `session_factory` | SQLAlchemy `async_sessionmaker` |
| `policy_model` | ORM model class representing the policy table |
| `policy_mapper` | Function that maps an ORM row to a `(sub, obj, act)` tuple |
| `default_policies` | Static policies added on top of the database policies (default: `[]`) |
On each request the provider opens a session, loads all rows from the policy table, maps them via `policy_mapper`, merges with `default_policies`, and returns a fresh `casbin.Enforcer`.
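That per-request flow can be sketched in isolation (a conceptual illustration using a plain dataclass instead of an ORM row, and omitting the session handling and `casbin.Enforcer` construction):

```python
from dataclasses import dataclass

@dataclass
class PolicyRow:
    """Stands in for a SQLAlchemy ORM row from the policy table."""
    sub: str
    obj: str
    act: str

def load_policies(rows, policy_mapper, default_policies=()):
    """Map DB rows to policy tuples and append the static defaults."""
    policies = [policy_mapper(r) for r in rows]
    policies.extend(tuple(p) for p in default_policies)
    return policies

rows = [PolicyRow("alice", "articles", "read")]
policies = load_policies(
    rows,
    policy_mapper=lambda p: (p.sub, p.obj, p.act),
    default_policies=[("admin", "*", "*")],
)
print(policies)  # [('alice', 'articles', 'read'), ('admin', '*', '*')]
```

In the real provider, the resulting tuples are fed into a fresh `casbin.Enforcer` built from `model_path`, so every request sees the current database state.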
## Development
See the [workspace README](../../README.md) for setup instructions.
```bash
task db:lint # ruff + bandit + ty
task db:test # pytest (requires Docker for testcontainers)
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"casbin-fastapi-decorator",
"sqlalchemy>=2.0.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T01:11:50.624731 | casbin_fastapi_decorator_db-0.1.4-py3-none-any.whl | 3,873 | 5e/d0/1a89cc2bc0e6bbede62e1e171e9dfade156ccb044bb727ca9dfeffabd8c1/casbin_fastapi_decorator_db-0.1.4-py3-none-any.whl | py3 | bdist_wheel | null | false | bf0c551ed2c273193f5353d5af99e960 | 26ddf8f239a78122cf3bfd22248e956d463b942820c95bf552f7a99448fe8dd4 | 5ed01a89cc2bc0e6bbede62e1e171e9dfade156ccb044bb727ca9dfeffabd8c1 | null | [] | 227 |