metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | azure-template | 0.1.0b5894846 | Microsoft Azure Template Package Client Library for Python | [](https://dev.azure.com/azure-sdk/public/_build/latest?definitionId=472)
# Azure Template Package client library for Python
This template package matches necessary patterns that the development team has established to create a unified SDK. The packages contained herein can be installed singly or as part of the `azure` namespace. Any other introductory text should go here.
This package has been tested with Python 3.9+.
For a more complete set of Azure libraries, see https://aka.ms/azsdk/python/all.
# Getting started
For a rich example of a well-formatted readme, please check the [README template](https://github.com/Azure/azure-sdk/blob/main/docs/policies/README-TEMPLATE.md). In addition, this [example readme](https://github.com/Azure/azure-sdk/blob/main/docs/policies/README-EXAMPLE.md) should be emulated. Note that the top-level sections in this file align with those of the template.
# Key concepts
Bullet point list of your library's main concepts.
# Examples
Examples of some of the key concepts for your library.
# Troubleshooting
Running into issues? This section should describe common problems and how to resolve them.
# Next steps
More sample code should go here, along with links out to the appropriate example tests.
# Contributing
If you encounter any bugs or have suggestions, please file an issue in the [Issues](<https://github.com/Azure/azure-sdk-for-python/issues>) section of the project.
| text/markdown | null | Microsoft Corporation <azuresdkengsysadmins@microsoft.com> License-Expression: MIT | null | null | null | azure, azure sdk | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :... | [] | null | null | >=3.9 | [] | [] | [] | [
"azure-core<2.0.0,>=1.23.0"
] | [] | [] | [] | [
"Bug Reports, https://github.com/Azure/azure-sdk-for-python/issues",
"repository, https://github.com/Azure/azure-sdk-for-python"
] | RestSharp/106.13.0.0 | 2026-02-18T19:22:29.216031 | azure_template-0.1.0b5894846-py3-none-any.whl | 3,357 | ca/3b/be9198a8d540ee1b051f7b271f93ff92d835fbf0a17ebd4836018f2ed878/azure_template-0.1.0b5894846-py3-none-any.whl | py3 | bdist_wheel | null | false | 1c4339b512ae0e4dba503f7c6037e529 | a30cfba6fda9d0023c9363ed2c405e4a10a248a7f4fa5acd4b8009b05ec422d4 | ca3bbe9198a8d540ee1b051f7b271f93ff92d835fbf0a17ebd4836018f2ed878 | null | [] | 232 |
2.4 | cratedb-toolkit | 0.0.43 | CrateDB Toolkit | # CrateDB Toolkit
[](https://pypi.org/project/cratedb-toolkit/)
[](https://pepy.tech/project/cratedb-toolkit/)
[](https://github.com/crate/cratedb-toolkit/blob/main/LICENSE)
[](https://pypi.org/project/cratedb-toolkit/)
[](https://pypi.org/project/cratedb-toolkit/)
[](https://codecov.io/gh/crate/cratedb-toolkit/)
» [Documentation]
| [Changelog]
| [Community Forum]
| [PyPI]
| [Issues]
| [Source code]
| [License]
| [CrateDB]
[![ci-main][ci-main-badge]][ci-main-workflow]
[![ci-cloud][ci-cloud-badge]][ci-cloud-workflow]
[![ci-dynamodb][ci-dynamodb-badge]][ci-dynamodb-workflow]
[![ci-influxdb][ci-influxdb-badge]][ci-influxdb-workflow]
[![ci-kinesis][ci-kinesis-badge]][ci-kinesis-workflow]
[![ci-mongodb][ci-mongodb-badge]][ci-mongodb-workflow]
[![ci-postgresql][ci-postgresql-badge]][ci-postgresql-workflow]
[![ci-pymongo][ci-pymongo-badge]][ci-pymongo-workflow]
## About
This software package includes a range of modules and subsystems to work
with CrateDB and CrateDB Cloud efficiently.
You can use CrateDB Toolkit to run data I/O procedures and automation tasks
of different kinds around CrateDB and CrateDB Cloud. It can be used both as
a standalone program, and as a library.
It aims for [DWIM]-like usefulness and [UX], and provides CLI and HTTP
interfaces, among others.
## Install
Install package.
```shell
pip install --upgrade cratedb-toolkit
```
Verify installation.
```shell
ctk --version
```
Run with Docker.
```shell
alias ctk="docker run --rm ghcr.io/crate/cratedb-toolkit ctk"
ctk --version
```
## Contribute
Contributions are very much welcome. Please visit the [Documentation]
to learn how to spin up a sandbox environment on your workstation, or create
a [ticket][Issues] to report a bug or propose a feature.
## Status
Breaking changes should be expected until a 1.0 release, so version pinning is
strongly recommended, especially when using this software as a library.
For example:
```shell
pip install 'cratedb-toolkit[full]==0.0.38'
```
[Changelog]: https://github.com/crate/cratedb-toolkit/blob/main/CHANGES.md
[Community Forum]: https://community.crate.io/
[CrateDB]: https://crate.io/products/cratedb
[CrateDB Cloud]: https://console.cratedb.cloud/
[Documentation]: https://cratedb-toolkit.readthedocs.io/
[DWIM]: https://en.wikipedia.org/wiki/DWIM
[Issues]: https://github.com/crate/cratedb-toolkit/issues
[License]: https://github.com/crate/cratedb-toolkit/blob/main/LICENSE
[PyPI]: https://pypi.org/project/cratedb-toolkit/
[Source code]: https://github.com/crate/cratedb-toolkit
[UX]: https://en.wikipedia.org/wiki/User_experience
[ci-main-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/main.yml/badge.svg
[ci-main-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/main.yml
[ci-cloud-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/cratedb-cloud.yml/badge.svg
[ci-cloud-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/cratedb-cloud.yml
[ci-dynamodb-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/dynamodb.yml/badge.svg
[ci-dynamodb-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/dynamodb.yml
[ci-influxdb-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/influxdb.yml/badge.svg
[ci-influxdb-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/influxdb.yml
[ci-kinesis-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/kinesis.yml/badge.svg
[ci-kinesis-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/kinesis.yml
[ci-mongodb-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/mongodb.yml/badge.svg
[ci-mongodb-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/mongodb.yml
[ci-postgresql-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/postgresql.yml/badge.svg
[ci-postgresql-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/postgresql.yml
[ci-pymongo-badge]: https://github.com/crate/cratedb-toolkit/actions/workflows/pymongo.yml/badge.svg
[ci-pymongo-workflow]: https://github.com/crate/cratedb-toolkit/actions/workflows/pymongo.yml
| text/markdown | The CrateDB Developers | null | null | Andreas Motl <andreas.motl@crate.io>, Hernan Cianfagna <hernan@crate.io>, Niklas Schmidtmer <niklas@crate.io>, Walter Behmann <walter@crate.io> | AGPL 3, EUPL 1.2 | cratedb, cratedb-admin, cratedb-cloud, cratedb-diagnostics, cratedb-shell, data-processing, data-retention, managed-cratedb, toolkit | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Customer Service",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Information Technology",
"Intended Audience :: Manufacturing",
"In... | [] | null | null | >=3.8 | [] | [] | [] | [
"attrs<26",
"boltons<26",
"cattrs<26",
"click<8.2",
"click-aliases<2,>=1.0.4",
"colorama<1",
"colorlog",
"crash",
"cratedb-sqlparse==0.0.17",
"croud<1.16,>=1.13",
"importlib-metadata; python_version < \"3.8\"",
"importlib-resources; python_version < \"3.9\"",
"keyrings-cryptfile<2",
"orjso... | [] | [] | [] | [
"Changelog, https://github.com/crate/cratedb-toolkit/blob/main/CHANGES.md",
"Documentation, https://cratedb-toolkit.readthedocs.io/",
"Homepage, https://cratedb-toolkit.readthedocs.io/",
"Issues, https://github.com/crate/cratedb-toolkit/issues",
"Repository, https://github.com/crate/cratedb-toolkit"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:22:06.254322 | cratedb_toolkit-0.0.43.tar.gz | 237,736 | 9e/f4/bce3f7320177410060ae529e879fed4e146106bd2902f6cb4d0490b60054/cratedb_toolkit-0.0.43.tar.gz | source | sdist | null | false | 8bb2302fb27abe937c2a5cbbbf3bd16c | 0ce28d8f2ee7ac9d1846a6c78cca6a60951086bf005f7fa67a25388dcd28b173 | 9ef4bce3f7320177410060ae529e879fed4e146106bd2902f6cb4d0490b60054 | null | [
"LICENSE"
] | 682 |
2.4 | ectf | 2026.0.7 | Tools for eCTF competitors | # eCTF Tools
## Setup
[uv](https://docs.astral.sh/uv/) is required to run the tools.
Once installed, you can run the eCTF tools with:
```commandline
uvx ectf --help
```
See https://rules.ectf.mitre.org/2026/system/ectf_tools.html
for full tool documentation | text/markdown | null | Ben Janis <btjanis@mitre.org> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"arrow>=1.3.0",
"attrs>=25.3.0",
"loguru>=0.7.3",
"pyserial>=3.5",
"pyyaml>=6.0.2",
"requests>=2.32.5",
"rich>=14.1.0",
"tqdm>=4.67.1",
"typer>=0.16.1"
] | [] | [] | [] | [] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T19:21:00.805589 | ectf-2026.0.7.tar.gz | 40,156 | 01/55/cd2b03a2cfc4a26b2b205f84adcc7e5d3ad3e9e84a6e8677d162c94d6fdb/ectf-2026.0.7.tar.gz | source | sdist | null | false | 0108289a87c87dc0a4c82dc07f012444 | 4c5b30136a5fb7ec4aa3f7e5f00f22c51c33afc8caed1a4dcc1ec7efab4485c9 | 0155cd2b03a2cfc4a26b2b205f84adcc7e5d3ad3e9e84a6e8677d162c94d6fdb | null | [
"LICENSE.txt"
] | 689 |
2.4 | django-oscar-bluelight | 6.0.0 | Bluelight Specials - Enhancements to the offer and vouchers features for Django Oscar. | # Django Oscar Bluelight Specials
[](https://pypi.python.org/pypi/django-oscar-bluelight)
[](https://pypi.python.org/pypi/django-oscar-bluelight)
[](https://pypi.python.org/pypi/django-oscar-bluelight)
This package contains enhancements and improvements to the built-in
offers and vouchers features in Django Oscar.
## Features
- **Group Restricted Vouchers**: Bluelight adds the ability to restrict application of vouchers to a specific whitelist of groups (`django.contrib.auth.models.Group`). For example, you could create a voucher code that can only be applied by users who belong to the group _Customer Service Reps_.
- **Compound Offer Conditions**: By default, Oscar only allows assigning a single condition to a promotional offer. Compound offer conditions allow you to create more complex logic around when an offer should be enabled. For example, you could create a compound condition specifying that a basket must contain at least 3 items _and_ have a total value greater than $50.
- Compound conditions can aggregate an unlimited number of child conditions together.
- Compound conditions can join their child conditions using either an _AND_ or an _OR_ conjunction.
- Very complex conditions requiring both _AND_ and _OR_ conjunctions can be modeled by creating multiple levels of compound conditions.
- **Parent / Child Voucher Codes**: By default Oscar doesn't support bulk creation of voucher codes. Bluelight adds the ability to bulk create any number of child vouchers (with unique, automatically generated codes) for any standard (non-child) voucher. This can be useful when sending voucher codes to customer's through email, as it allows the creation of hundreds or thousands of non-sequential, one-time-use codes.
- Child codes can be added when creating a new voucher or after a voucher is created.
- More child codes can be generated for a voucher at any time.
- Child codes can be exported in CSV and JSON formats.
- Any time a parent voucher is edited (name changed, benefit altered, etc), all child codes are also updated to match.
- When a parent voucher is deleted, all children are also deleted.
- Once a voucher has child codes assigned to it, the parent voucher itself can not be applied by anyone.
## Roadmap
- Make child code creation and updating more performant, possibly by better tracking of dirty model fields before saving.
- Add ability to duplicate vouchers.
- Add ability to add conditions to vouchers.
## Caveats
Bluelight works by forking four of Oscar's apps: `offer`, `voucher`,
`dashboard.offers`, and `dashboard.vouchers`. There is currently no way
to use Bluelight if your application has already forked those apps.
## Installation
Install `django-oscar-bluelight`:
```sh
pip install django-oscar-bluelight
```
Import Bluelight's settings into your project's `settings.py` file.
```py
from oscar.defaults import *
from oscarbluelight.defaults import * # Needed so that Bluelight's views show up in the dashboard
```
Add Bluelight to your installed apps (replacing the equivalent Django
Oscar apps). The top-level `oscarbluelight` app must be defined before
the `oscar` app; if it isn't, Django will not correctly find
Bluelight's templates.
```py
INSTALLED_APPS = [
    ...
    # Bluelight. Must come before `django-oscar` so that template inheritance / overrides work correctly.
    'oscarbluelight',
    'thelabdb.pgviews',
    # django-oscar
    'oscar',
    'oscar.apps.analytics',
    'oscar.apps.checkout',
    'oscar.apps.address',
    'oscar.apps.shipping',
    'oscar.apps.catalogue',
    'oscar.apps.catalogue.reviews',
    'sandbox.partner',  # 'oscar.apps.partner',
    'sandbox.basket',  # 'oscar.apps.basket',
    'oscar.apps.payment',
    'oscarbluelight.offer',  # 'oscar.apps.offer',
    'oscar.apps.order',
    'oscar.apps.customer',
    'oscar.apps.search',
    'oscarbluelight.voucher',  # 'oscar.apps.voucher',
    'oscar.apps.wishlists',
    'oscar.apps.dashboard',
    'oscar.apps.dashboard.reports',
    'oscar.apps.dashboard.users',
    'oscar.apps.dashboard.orders',
    'oscar.apps.dashboard.catalogue',
    'oscarbluelight.dashboard.offers',  # 'oscar.apps.dashboard.offers',
    'oscar.apps.dashboard.partners',
    'oscar.apps.dashboard.pages',
    'oscar.apps.dashboard.ranges',
    'oscar.apps.dashboard.reviews',
    'oscarbluelight.dashboard.vouchers',  # 'oscar.apps.dashboard.vouchers',
    'oscar.apps.dashboard.communications',
    'oscar.apps.dashboard.shipping',
    ...
]
```
Fork the basket application in your project and add
`BluelightBasketLineMixin` as a parent class of the `Line` model.
```py
from oscar.apps.basket.abstract_models import AbstractLine
from oscarbluelight.mixins import BluelightBasketLineMixin

class Line(BluelightBasketLineMixin, AbstractLine):
    pass

from oscar.apps.basket.models import *  # noqa
```
## Usage
After installation, the new functionality will show up in the Oscar
dashboard under the Offers menu.
| text/markdown | null | thelab <thelabdev@thelab.co> | null | null | ISC | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"django-oscar<4.2,>=4.0",
"django-tasks>=0.7.0",
"django>=5.2",
"djangorestframework<4,>=3.16.1",
"thelabdb>=0.7.0"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/thelabnyc/django-oscar/django-oscar-bluelight",
"Repository, https://gitlab.com/thelabnyc/django-oscar/django-oscar-bluelight"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T19:20:52.073077 | django_oscar_bluelight-6.0.0.tar.gz | 3,591,852 | b4/1a/33ee524e7138bf5dfbfa5c1ba9600551ceb175de03979df9d79fb80c90ce/django_oscar_bluelight-6.0.0.tar.gz | source | sdist | null | false | 9dcf7a8af8f31931f72c40adac707b5a | 27dbb1b20edbe01e3c8bd9a30143a9675f93c5aef296d2943d4fed59361ad0d3 | b41a33ee524e7138bf5dfbfa5c1ba9600551ceb175de03979df9d79fb80c90ce | null | [
"LICENSE"
] | 1,579 |
2.4 | niimetric | 0.1.5 | Evaluate image quality metrics (SSIM, MAE, LPIPS, PSNR) for NIfTI images | # NiiMetric
A Python CLI tool for evaluating image quality metrics between NIfTI (.nii/.nii.gz) medical images.
[](https://badge.fury.io/py/niimetric)
[](https://opensource.org/licenses/MIT)
## Features
- **PSNR** - Peak Signal-to-Noise Ratio
- **SSIM** - Structural Similarity Index (slice-based with configurable dimension)
- **MAE** - Mean Absolute Error
- **LPIPS** - Learned Perceptual Image Patch Similarity (slice-based)
- **Auto-cropping** - Automatically crops to brain region based on reference image (30% mean threshold)
- **Foreground masking** - Evaluates only on non-air regions (excludes background)
- **Configurable slice dimension** - Evaluate on sagittal, coronal, or axial slices
- **CSV output** - Save results to CSV file
- **Mask export** - Optionally save the foreground mask as NIfTI
## Installation
```bash
pip install niimetric
```
## Usage
### Basic Usage
```bash
# Single metric
niimetric -a reference.nii.gz -b image.nii.gz --ssim -o output.csv
niimetric -a reference.nii.gz -b image.nii.gz --psnr -o output.csv
niimetric -a reference.nii.gz -b image.nii.gz --mae -o output.csv
niimetric -a reference.nii.gz -b image.nii.gz --lpips -o output.csv
# All metrics at once
niimetric -a reference.nii.gz -b image.nii.gz --all -o output.csv
```
### Slice Dimension Selection
```bash
# Evaluate on axial slices (default)
niimetric -a reference.nii.gz -b image.nii.gz --all --dim 2 -o output.csv
# Evaluate on sagittal slices
niimetric -a reference.nii.gz -b image.nii.gz --all --dim 0 -o output.csv
# Evaluate on coronal slices
niimetric -a reference.nii.gz -b image.nii.gz --all --dim 1 -o output.csv
```
### Save Foreground Mask
```bash
niimetric -a reference.nii.gz -b image.nii.gz --all -o output.csv --save-mask mask.nii.gz
```
## Arguments
| Argument | Description |
|----------|-------------|
| `-a, --reference` | Path to reference NIfTI image (used for cropping boundaries) |
| `-b, --image` | Path to comparison NIfTI image |
| `-o, --output` | Path to output CSV file |
| `--ssim` | Calculate Structural Similarity Index |
| `--psnr` | Calculate Peak Signal-to-Noise Ratio |
| `--mae` | Calculate Mean Absolute Error |
| `--lpips` | Calculate Learned Perceptual Image Patch Similarity |
| `--all` | Calculate all metrics |
| `--dim` | Dimension for slice-based evaluation: 0=sagittal, 1=coronal, 2=axial (default: 2) |
| `--save-mask` | Path to save the foreground mask as NIfTI file |
## Output Format
Results are saved in CSV format:
```csv
reference,image,metric,value
reference.nii.gz,image.nii.gz,PSNR,25.432100
reference.nii.gz,image.nii.gz,SSIM,0.921500
reference.nii.gz,image.nii.gz,MAE,0.045200
reference.nii.gz,image.nii.gz,LPIPS,0.032100
```
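A results file in this format can be read back with Python's standard `csv` module. The helper below is illustrative only (it is not part of niimetric):

```python
import csv

def read_results(path: str) -> dict[str, float]:
    """Read a niimetric results CSV and return a {metric: value} mapping."""
    with open(path, newline="") as fh:
        return {row["metric"]: float(row["value"]) for row in csv.DictReader(fh)}
```

For example, `read_results("output.csv")["SSIM"]` returns the SSIM value as a float.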
## How It Works
1. **Load** both NIfTI images
2. **Auto-crop** based on reference image (30% mean threshold per slice)
3. **Normalize** both images to 0-1 range
4. **Create foreground mask** from reference (10% intensity threshold)
5. **Compute metrics** only on foreground regions
6. **Save results** to CSV
## Dependencies
- `nibabel` - NIfTI file I/O
- `numpy` - Array operations
- `scikit-image` - SSIM, PSNR calculations
- `torch` - LPIPS backend
- `lpips` - Perceptual similarity metric
## Python API
You can also use niimetric as a Python library:
```python
from niimetric import load_nifti, compute_ssim, compute_psnr, compute_mae, compute_lpips
from niimetric import auto_crop_volumes
from niimetric.metrics import create_foreground_mask
from niimetric.utils import normalize_to_range
# Load images
ref = load_nifti("reference.nii.gz")
img = load_nifti("image.nii.gz")
# Auto-crop
ref_cropped, img_cropped, bbox = auto_crop_volumes(ref, img)
# Normalize
ref_norm = normalize_to_range(ref_cropped, 0, 1)
img_norm = normalize_to_range(img_cropped, 0, 1)
# Create mask
mask = create_foreground_mask(ref_norm)
# Compute metrics
psnr = compute_psnr(ref_norm, img_norm, mask=mask)
ssim = compute_ssim(ref_norm, img_norm, mask=mask, dim=2)
mae = compute_mae(ref_norm, img_norm, mask=mask)
lpips = compute_lpips(ref_norm, img_norm, mask=mask, dim=2)
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | null | Your Name <your.email@example.com> | null | null | MIT | nifti, mri, image-quality, ssim, psnr, lpips, mae | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language ... | [] | null | null | >=3.8 | [] | [] | [] | [
"nibabel>=4.0.0",
"numpy>=1.20.0",
"scikit-image>=0.19.0",
"torch>=1.9.0",
"lpips>=0.1.4"
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/niimetric",
"Repository, https://github.com/yourusername/niimetric"
] | twine/6.2.0 CPython/3.11.4 | 2026-02-18T19:19:20.173107 | niimetric-0.1.5.tar.gz | 11,984 | 45/d1/1520a135935d7f79f96010b3c75ad291aa5cb2c7efc5ff6e6f9341cc3df1/niimetric-0.1.5.tar.gz | source | sdist | null | false | 79b16aef4eb526a6de0bb09d743e7837 | f77b539bf310ad4e3e030b748131feb283c76a1c046d797b52fb3d1c60d5807e | 45d11520a135935d7f79f96010b3c75ad291aa5cb2c7efc5ff6e6f9341cc3df1 | null | [
"LICENSE"
] | 251 |
2.1 | rlane-libgoogle | 1.0.5 | Connect to Google Service APIs | ## libgoogle
Connect to Google Service APIs.
The `libgoogle` package provides a function to connect to a Google
service (such as Calendar, Drive, and Mail), and to manage credentials
and access tokens under the `XDG` base-directory scheme.
### function connect
```python
connect(scope: str, version: str) -> googleapiclient.discovery.Resource
```
Connect to Google service identified by `scope` and `version`.
Args:
scope: (valid examples):
"https://www.googleapis.com/auth/gmail"
"https://www.googleapis.com/auth/gmail.readonly"
"gmail"
"gmail.readonly"
"drive.metadata.readonly"
"photoslibrary.readonly"
version: "v1", "v3", etc.
Files:
credentials: XDG_CONFIG_HOME / libgoogle / credentials.json
Must exist, or raises FileNotFoundError.
token: XDG_CACHE_HOME / libgoogle / {scope}-token.json
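For illustration, the documented token location corresponds to a path like the one this standalone helper builds (`token_path` is hypothetical and not part of libgoogle's API; it uses only the standard library):

```python
import os
from pathlib import Path

def token_path(scope: str) -> Path:
    """Build the documented token location:
    XDG_CACHE_HOME / libgoogle / {scope}-token.json
    (falling back to ~/.cache when XDG_CACHE_HOME is unset)."""
    cache = Path(os.environ.get("XDG_CACHE_HOME", str(Path.home() / ".cache")))
    return cache / "libgoogle" / f"{scope}-token.json"
```

So `token_path("gmail.readonly")` would point at `~/.cache/libgoogle/gmail.readonly-token.json` on a default setup.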
### function set_debug
```python
set_debug(flag: bool) -> None
```
Turn on/off low-level `httplib2` debugging.
Args:
flag: True to turn on debugging, False to turn off.
| text/markdown | null | Russel Lane <russel@rlane.com> | null | null | MIT | api, client, connect, google, google-api-python-client, google-auth-httplib2, google-auth-oauthlib, httplib2, oauthlib, python, services, xdg | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"google-api-python-client>=2.158.0",
"google-auth-httplib2>=0.2.0",
"google-auth-oauthlib>=1.2.1",
"loguru>=0.7.3",
"xdg>=6.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/russellane/libgoogle"
] | twine/5.1.1 CPython/3.13.0 | 2026-02-18T19:19:06.179596 | rlane_libgoogle-1.0.5.tar.gz | 3,995 | 6b/bc/659e8a9e0fbc26bc03981c3c2a8eb1b4b03ad00f5fb5978d50919ec1a60a/rlane_libgoogle-1.0.5.tar.gz | source | sdist | null | false | 22040e4b460a73bcc772e31ff21cd727 | c4c37949d156ae7aa1745b2af8d17e9b4cf40cb8d4b108fff9af528ce67ca587 | 6bbc659e8a9e0fbc26bc03981c3c2a8eb1b4b03ad00f5fb5978d50919ec1a60a | null | [] | 251 |
2.1 | rlane-libfile | 1.0.4 | Represent a filesystem item such as a file or folder | ## libfile
This module offers a class that combines pathlib.Path, os.walk,
cached os.struct_stat, debug/trace logging, and the ability to execute
a --dry-run through 'most' of the code without changing the filesystem.
File(object) represents a filesystem item, such as a file or folder, which
may or may not exist when the object is initialized. This differs
from os.DirEntry(object), which is only instantiated for an existing item.
File has nothing to do with input/output... yet.
For you old unix cats like me, a 'folder' is a 'directory', and this
module enjoys the 50% reduction in the number of syllables required,
the 3-fewer keystrokes for singular, 4 for plural, and non-collision
with python built-in 'dir'. Embrace the technology!
### class File
Represent a filesystem item such as a file or folder.
The path argument to the constructor, and to other methods that take a
path argument, accepts either a string, an object implementing the
os.PathLike interface that returns a string, or another path object.
See: https://docs.python.org/3/library/pathlib.html#pure-paths
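The normalization this implies can be sketched with the standard library alone (`as_path` is a hypothetical helper, not part of libfile; `os.fspath` is what resolves the PathLike protocol):

```python
import os
from pathlib import Path

def as_path(path) -> Path:
    """Accept a str, an os.PathLike returning a string, or a Path --
    the same spread of inputs File's constructor takes -- and
    normalize it to a pathlib.Path via os.fspath()."""
    return Path(os.fspath(path))
```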
| text/markdown | null | Russel Lane <russel@rlane.com> | null | null | MIT | libfile | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"loguru>=0.7.2"
] | [] | [] | [] | [
"Homepage, https://github.com/russellane/libfile"
] | twine/5.1.1 CPython/3.13.0 | 2026-02-18T19:18:59.794844 | rlane_libfile-1.0.4.tar.gz | 6,398 | 3e/ff/034f0c512ac29e03d6036779f303ba59f1ddd93fcf45307eb675799613a3/rlane_libfile-1.0.4.tar.gz | source | sdist | null | false | e42c80eaa8b6cd79cff8ca7bff97ec51 | 880c901610d7a28657a14635c8dcae220dda9738a8bc390970719b3638c85be0 | 3eff034f0c512ac29e03d6036779f303ba59f1ddd93fcf45307eb675799613a3 | null | [] | 258 |
2.1 | rlane-libcurses | 1.0.11 | Curses based boxes, menus, loggers | ## libcurses
Framework and tools for multi-threaded, curses(3)-based, terminal applications.
* Write to screen from multiple threads.
* Use `libcurses.wrapper` instead of `curses.wrapper`.
* Use `libcurses.getkey` instead of `curses.getch`.
* Use `libcurses.getline` instead of `curses.getstr`.
* Preserve the cursor with context manager `libcurses.preserve_cursor`.
* Register callbacks with `register_fkey` to handle function-keys pressed
during `getkey` and `getline` processing.
* Register callbacks with `add_mouse_handler` to handle mouse events
during `getkey` and `getline` processing.
* Manage a logger destination, `LogSink`, to write to a curses window.
* A `Grid` framework.
### class Grid
Grid of windows.
A rectangular collection of windows with shared (collapsed) borders that
resize the windows to either side (synchronized shrink/expand) when moused upon.
+-------+---+------+ example `Grid`, 9 windows.
| | | |
+-------+---+------+
| | |
+------+----+------+
| | |
+------+--+--------+
| | |
+---------+--------+
Drag and drop an interior border to resize the windows on either side.
Double-click an interior border to enter Resize Mode:
* scroll-wheel and arrow-keys move the border, and
* click anywhere, Enter and Esc to exit Resize Mode.
Grids also provide a wrapper around `curses.newwin` that takes positioning
parameters that describe the spatial relationship to other windows on the
screen, instead of (y,x) coordinates:
+--------+ +--------+
| | | ^ |
| |<------ left2r --| | |
| | | | |
|<---------------- left ---| | |
| | | | |
+--------+ +--|-----+
| | | ^
bottom2t | | bottom top | | top2b
v | | |
+-----|--+ +--------+
| | | | |
| | |-- right ---------------->|
| | | | |
| | |-- right2l ----->| |
| v | | |
+--------+ +--------+
For example, this 3x13 grid with three 3x5 boxes may be described at least
three different ways:
+---+---+---+
| a | b | c |
+---+---+---+
grid = Grid(curses.newwin(3, 13))
1) a = grid.box('a', 3, 5)
b = grid.box('b', 3, 5, left2r=a)
c = grid.box('c', 3, 5, left2r=b)
2) c = grid.box('c', 3, 5, right=grid)
b = grid.box('b', 3, 5, right=c)
a = grid.box('a', 3, 5, right=b)
3) a = grid.box('a', 3, 5, left=grid)
c = grid.box('c', 3, 5, right=grid)
b = grid.box('b', 3, 0, left2r=a, right=c)
If two endpoints are given (such as 3b), the length will be calculated to
fill the gap between the endpoints.
### class LogSink
Logger sink to curses window.
The `LogSink` class provides a logger destination that writes log
messages to a curses window, and methods that control various
logging features.
### class MouseEvent
Wrap `curses.getmouse` with additional convenience properties.
`MouseEvent` encapsulates the results of `curses.getmouse`,
x x-coordinate.
y y-coordinate.
bstate bitmask describing the type of event.
and provides these additional properties:
button button number (1-5).
nclicks number of clicks (1-3).
is_pressed True if button is pressed.
is_released True if button was just released.
is_alt True if Alt key is held.
is_ctrl True if Ctrl key is held.
is_shift True if Shift key is held.
is_moving True if mouse is moving.
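For illustration, the `button` derivation can be sketched from the `bstate` bitmask alone. This is a hypothetical standalone helper, not the package's actual implementation, and it covers only buttons 1-4:

```python
import curses

def button_of(bstate: int) -> "int | None":
    """Return the button number (1-4) encoded in a bstate bitmask,
    or None if no button bit is set -- in the spirit of MouseEvent.button."""
    masks = (
        (1, curses.BUTTON1_PRESSED | curses.BUTTON1_RELEASED | curses.BUTTON1_CLICKED),
        (2, curses.BUTTON2_PRESSED | curses.BUTTON2_RELEASED | curses.BUTTON2_CLICKED),
        (3, curses.BUTTON3_PRESSED | curses.BUTTON3_RELEASED | curses.BUTTON3_CLICKED),
        (4, curses.BUTTON4_PRESSED | curses.BUTTON4_RELEASED | curses.BUTTON4_CLICKED),
    )
    for number, bits in masks:
        if bstate & bits:
            return number
    return None
```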
### method add_mouse_handler
Call `func` with `args` when mouse event happens at (y, x).
### method clear_mouse_handlers
Remove all mouse handlers.
### function get_colormap
```python
get_colormap() -> dict[str, int]
```
Return map of `loguru-level-name` to `curses-color/attr`.
Call after creating all custom levels with `logger.level()`.
The map is built once and cached; repeated calls return the same map.
### function getkey
```python
getkey(win: _curses.window | None = None, no_mouse: bool = False) -> int | None
```
Read and return a character from window.
Args:
win: curses window to read from.
no_mouse: ignore mouse events (for internal use).
Return:
-1 when no-input in no-delay mode, or
None on end of file, or
>=0 int character read.
### function getline
```python
getline(win: _curses.window) -> str | None
```
Read and return a line of input from window.
A line is terminated with CR, LF or KEY_ENTER.
Backspace deletes the previous character.
NAK (ctrl-U) kills the line.
Mouse events are handled.
### function preserve_cursor
```python
preserve_cursor() -> Iterator[tuple[int, int]]
```
Context manager to save and restore the cursor.
### function register_fkey
```python
register_fkey(func: Callable[[int], NoneType], key: int = 0) -> None
```
Register `func` to be called when `key` is pressed.
Args:
func: callable, to be called on receipt of `key`.
key: the key to be captured, e.g., `curses.KEY_F1`,
or zero (0) for all keys.
`func` is appended to a list for the `key`;
pass func=None to remove list of funcs for `key` from registry.
### function wrapper
```python
wrapper(func: Callable[[_curses.window], NoneType]) -> None
```
Use instead of `curses.wrapper`.
| text/markdown | null | Russel Lane <russel@rlane.com> | null | null | MIT | curses, python, xterm-256color, mouse | [
"Development Status :: 4 - Beta",
"Environment :: Console :: Curses",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"loguru>=0.7.2"
] | [] | [] | [] | [
"Homepage, https://github.com/russellane/libcurses"
] | twine/5.1.1 CPython/3.13.0 | 2026-02-18T19:18:53.513747 | rlane_libcurses-1.0.11.tar.gz | 52,206 | ce/26/72c7324887413b69974c603da3c327e63ed0a2280272b4c13c4e1879a466/rlane_libcurses-1.0.11.tar.gz | source | sdist | null | false | c97a33d7cfa84d02e9c6b00f1498329b | 594dd3c1bd763a4c02c131f6d5fe39730aa6dc991e5d0a3be672eb8819d27ec0 | ce2672c7324887413b69974c603da3c327e63ed0a2280272b4c13c4e1879a466 | null | [] | 873 |
2.4 | tuteliq | 2.3.1 | Official Python SDK for Tuteliq - AI-powered child safety API | <p align="center">
<img src="./assets/logo.png" alt="Tuteliq" width="200" />
</p>
<h1 align="center">Tuteliq Python SDK</h1>
<p align="center">
<strong>Official Python SDK for the Tuteliq API</strong><br>
AI-powered child safety analysis
</p>
<p align="center">
<a href="https://pypi.org/project/tuteliq/"><img src="https://img.shields.io/pypi/v/tuteliq.svg" alt="PyPI version"></a>
<a href="https://pypi.org/project/tuteliq/"><img src="https://img.shields.io/pypi/pyversions/tuteliq.svg" alt="Python versions"></a>
<a href="https://github.com/Tuteliq/python/actions"><img src="https://img.shields.io/github/actions/workflow/status/Tuteliq/python/ci.yml" alt="build status"></a>
<a href="https://github.com/Tuteliq/python/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Tuteliq/python.svg" alt="license"></a>
</p>
<p align="center">
<a href="https://docs.tuteliq.ai">API Docs</a> •
<a href="https://tuteliq.ai">Dashboard</a> •
<a href="https://trust.tuteliq.ai">Trust</a> •
<a href="https://discord.gg/7kbTeRYRXD">Discord</a>
</p>
---
## Installation
```bash
pip install tuteliq
```
### Requirements
- Python 3.9+
---
## Quick Start
```python
import asyncio

from tuteliq import RiskLevel, Tuteliq

async def main():
    client = Tuteliq(api_key="your-api-key")

    # Quick safety analysis
    result = await client.analyze("Message to check")

    if result.risk_level != RiskLevel.SAFE:
        print(f"Risk: {result.risk_level}")
        print(f"Summary: {result.summary}")

    await client.close()

asyncio.run(main())
```
Or use as a context manager:
```python
async with Tuteliq(api_key="your-api-key") as client:
    result = await client.analyze("Message to check")
```
---
## API Reference
### Initialization
```python
from tuteliq import Tuteliq

# Simple
client = Tuteliq(api_key="your-api-key")

# With options
client = Tuteliq(
    api_key="your-api-key",
    timeout=30.0,     # Request timeout in seconds
    max_retries=3,    # Retry attempts
    retry_delay=1.0,  # Initial retry delay in seconds
)
```
### Bullying Detection
```python
result = await client.detect_bullying("Nobody likes you, just leave")

if result.is_bullying:
    print(f"Severity: {result.severity}")      # Severity.MEDIUM
    print(f"Types: {result.bullying_type}")    # ["exclusion", "verbal_abuse"]
    print(f"Confidence: {result.confidence}")  # 0.92
    print(f"Rationale: {result.rationale}")
```
### Grooming Detection
```python
from tuteliq import DetectGroomingInput, GroomingMessage, GroomingRisk, MessageRole

result = await client.detect_grooming(
    DetectGroomingInput(
        messages=[
            GroomingMessage(role=MessageRole.ADULT, content="This is our secret"),
            GroomingMessage(role=MessageRole.CHILD, content="Ok I won't tell"),
        ],
        child_age=12,
    )
)

if result.grooming_risk == GroomingRisk.HIGH:
    print(f"Flags: {result.flags}")  # ["secrecy", "isolation"]
```
### Unsafe Content Detection
```python
result = await client.detect_unsafe("I don't want to be here anymore")

if result.unsafe:
    print(f"Categories: {result.categories}")  # ["self_harm", "crisis"]
    print(f"Severity: {result.severity}")      # Severity.CRITICAL
```
### Quick Analysis
Runs bullying and unsafe detection in parallel:
```python
result = await client.analyze("Message to check")
print(f"Risk Level: {result.risk_level}") # RiskLevel.SAFE/LOW/MEDIUM/HIGH/CRITICAL
print(f"Risk Score: {result.risk_score}") # 0.0 - 1.0
print(f"Summary: {result.summary}")
print(f"Action: {result.recommended_action}")
```
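The parallel fan-out can be sketched with `asyncio.gather`; the stand-in coroutines below are placeholders for the SDK's real API calls, not part of the library:

```python
import asyncio

# Stand-in coroutines; the real SDK calls its bullying and unsafe endpoints.
async def detect_bullying(text: str) -> dict:
    return {"is_bullying": False, "confidence": 0.1}

async def detect_unsafe(text: str) -> dict:
    return {"unsafe": False, "categories": []}

async def analyze(text: str) -> dict:
    # Both checks run concurrently; total latency is roughly the slower call.
    bullying, unsafe = await asyncio.gather(
        detect_bullying(text), detect_unsafe(text)
    )
    return {"bullying": bullying, "unsafe": unsafe}

result = asyncio.run(analyze("Message to check"))
```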
### Emotion Analysis
```python
result = await client.analyze_emotions("I'm so stressed about everything")
print(f"Emotions: {result.dominant_emotions}") # ["anxiety", "sadness"]
print(f"Trend: {result.trend}") # EmotionTrend.WORSENING
print(f"Followup: {result.recommended_followup}")
```
### Action Plan
```python
from tuteliq import GetActionPlanInput, Audience, Severity

plan = await client.get_action_plan(
    GetActionPlanInput(
        situation="Someone is spreading rumors about me",
        child_age=12,
        audience=Audience.CHILD,
        severity=Severity.MEDIUM,
    )
)

print(f"Steps: {plan.steps}")
print(f"Tone: {plan.tone}")
```
### Incident Report
```python
from tuteliq import GenerateReportInput, ReportMessage

report = await client.generate_report(
    GenerateReportInput(
        messages=[
            ReportMessage(sender="user1", content="Threatening message"),
            ReportMessage(sender="child", content="Please stop"),
        ],
        child_age=14,
    )
)

print(f"Summary: {report.summary}")
print(f"Risk: {report.risk_level}")
print(f"Next Steps: {report.recommended_next_steps}")
```
### Voice Streaming
Real-time voice streaming with live safety analysis over WebSocket. Requires `websockets`:
```bash
pip install tuteliq[voice]
```
```python
from tuteliq import VoiceStreamConfig, VoiceStreamHandlers

session = client.voice_stream(
    config=VoiceStreamConfig(
        interval_seconds=10,
        analysis_types=["bullying", "unsafe"],
    ),
    handlers=VoiceStreamHandlers(
        on_transcription=lambda e: print(f"Transcript: {e.text}"),
        on_alert=lambda e: print(f"Alert: {e.category} ({e.severity})"),
    ),
)

await session.connect()

# Send audio chunks as they arrive
await session.send_audio(audio_bytes)

# End session and get summary
summary = await session.end()
print(f"Risk: {summary.overall_risk}")
print(f"Score: {summary.overall_risk_score}")
print(f"Full transcript: {summary.transcript}")
```
---
## Credits Tracking
Each response includes the number of credits consumed:
```python
result = await client.detect_bullying("Test message")
print(f"Credits used: {result.credits_used}") # 1
```
| Method | Credits | Notes |
|--------|---------|-------|
| `detect_bullying()` | 1 | Single text analysis |
| `detect_unsafe()` | 1 | Single text analysis |
| `detect_grooming()` | 1 per 10 msgs | `ceil(messages / 10)`, min 1 |
| `analyze_emotions()` | 1 per 10 msgs | `ceil(messages / 10)`, min 1 |
| `get_action_plan()` | 2 | Longer generation |
| `generate_report()` | 3 | Structured output |
| `analyze_voice()` | 5 | Transcription + analysis |
| `analyze_image()` | 3 | Vision + OCR + analysis |
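The `ceil(messages / 10)` rule from the table works out as follows (illustrative helper, not part of the SDK):

```python
import math

def conversation_credits(message_count: int) -> int:
    # One credit per block of 10 messages, with a minimum of one credit.
    return max(1, math.ceil(message_count / 10))

print(conversation_credits(1))   # 1
print(conversation_credits(10))  # 1
print(conversation_credits(11))  # 2
print(conversation_credits(35))  # 4
```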
---
## Tracking Fields
All methods support `external_id` and `metadata` for correlating requests:
```python
result = await client.detect_bullying(
    "Test message",
    external_id="msg_12345",
    metadata={"user_id": "usr_abc", "session": "sess_xyz"},
)

# Echoed back in response
print(result.external_id)  # "msg_12345"
print(result.metadata)     # {"user_id": "usr_abc", ...}
```
---
## Usage Tracking
```python
result = await client.detect_bullying("test")

# Access usage stats after any request
if client.usage:
    print(f"Limit: {client.usage.limit}")
    print(f"Used: {client.usage.used}")
    print(f"Remaining: {client.usage.remaining}")

# Request metadata
print(f"Request ID: {client.last_request_id}")
```
---
## Error Handling
```python
from tuteliq import (
    Tuteliq,
    TuteliqError,
    AuthenticationError,
    RateLimitError,
    ValidationError,
    NotFoundError,
    ServerError,
    TimeoutError,
    NetworkError,
)

try:
    result = await client.detect_bullying("test")
except AuthenticationError as e:
    print(f"Auth error: {e.message}")
except RateLimitError as e:
    print(f"Rate limited: {e.message}")
except ValidationError as e:
    print(f"Invalid input: {e.message}, details: {e.details}")
except ServerError as e:
    print(f"Server error {e.status_code}: {e.message}")
except TimeoutError as e:
    print(f"Timeout: {e.message}")
except NetworkError as e:
    print(f"Network error: {e.message}")
except TuteliqError as e:
    print(f"Error: {e.message}")
```
---
## Type Hints
The SDK is fully typed. All models are dataclasses with type hints:
```python
from tuteliq import (
    # Enums
    Severity,
    GroomingRisk,
    RiskLevel,
    EmotionTrend,
    Audience,
    MessageRole,
    # Input types
    AnalysisContext,
    DetectBullyingInput,
    DetectGroomingInput,
    DetectUnsafeInput,
    AnalyzeInput,
    AnalyzeEmotionsInput,
    GetActionPlanInput,
    GenerateReportInput,
    # Message types
    GroomingMessage,
    EmotionMessage,
    ReportMessage,
    # Result types
    BullyingResult,
    GroomingResult,
    UnsafeResult,
    AnalyzeResult,
    EmotionsResult,
    ActionPlanResult,
    ReportResult,
    Usage,
)
```
---
## FastAPI Example
```python
from fastapi import FastAPI, HTTPException
from tuteliq import Tuteliq, RateLimitError

app = FastAPI()
client = Tuteliq(api_key="your-api-key")

@app.post("/check-message")
async def check_message(message: str):
    try:
        result = await client.analyze(message)
        if result.risk_level.value in ["high", "critical"]:
            raise HTTPException(
                status_code=400,
                detail={"error": "Message blocked", "reason": result.summary},
            )
        return {"safe": True, "risk_level": result.risk_level.value}
    except RateLimitError:
        raise HTTPException(status_code=429, detail="Too many requests")
```
---
## Best Practices
### Message Batching
The **bullying** and **unsafe content** methods analyze a single `text` field per request. If your platform receives messages one at a time (e.g., a chat app), concatenate a **sliding window of recent messages** into one string before calling the API. Single words or short fragments lack context for accurate detection and can be exploited to bypass safety filters.
```python
# Bad — each message analyzed in isolation, easily evaded
for msg in messages:
    await client.detect_bullying(text=msg)

# Good — recent messages analyzed together
window = " ".join(recent_messages[-10:])
await client.detect_bullying(text=window)
```
The **grooming** method already accepts a `messages` list and analyzes the full conversation in context.
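One way to maintain that sliding window is a bounded `deque` (a hypothetical helper, not part of the SDK):

```python
from collections import deque

# Keep only the 10 most recent messages; older ones fall off automatically.
recent_messages: deque[str] = deque(maxlen=10)

def on_message(msg: str) -> str:
    """Append the new message and return the window text to send for analysis."""
    recent_messages.append(msg)
    return " ".join(recent_messages)

for i in range(12):
    window = on_message(f"msg{i}")
```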
### PII Redaction
Enable `PII_REDACTION_ENABLED=true` on your Tuteliq API to automatically strip emails, phone numbers, URLs, social handles, IPs, and other PII from detection summaries and webhook payloads. The original text is still analyzed in full — only stored outputs are scrubbed.
---
## Support
- **API Docs**: [docs.tuteliq.ai](https://docs.tuteliq.ai)
- **Discord**: [discord.gg/7kbTeRYRXD](https://discord.gg/7kbTeRYRXD)
- **Email**: support@tuteliq.ai
- **Issues**: [GitHub Issues](https://github.com/Tuteliq/python/issues)
---
## License
MIT License - see [LICENSE](LICENSE) for details.
---
## Get Certified — Free
Tuteliq offers a **free certification program** for anyone who wants to deepen their understanding of online child safety. Complete a track, pass the quiz, and earn your official Tuteliq certificate — verified and shareable.
**Three tracks available:**
| Track | Who it's for | Duration |
|-------|-------------|----------|
| **Parents & Caregivers** | Parents, guardians, grandparents, teachers, coaches | ~90 min |
| **Young People (10–16)** | Young people who want to learn to spot manipulation | ~60 min |
| **Companies & Platforms** | Product managers, trust & safety teams, CTOs, compliance officers | ~120 min |
**Start here →** [tuteliq.ai/certify](https://tuteliq.ai/certify)
- 100% Free — no login required
- Verifiable certificate on completion
- Covers grooming recognition, sextortion, cyberbullying, regulatory obligations (KOSA, EU DSA), and more
---
## The Mission: Why This Matters
Before you decide to contribute or sponsor, read these numbers. They are not projections. They are not estimates from a pitch deck. They are verified statistics from the University of Edinburgh, UNICEF, NCMEC, and Interpol.
- **302 million** children are victims of online sexual exploitation and abuse every year. That is **10 children every second**. *(Childlight / University of Edinburgh, 2024)*
- **1 in 8** children globally have been victims of non-consensual sexual imagery in the past year. *(Childlight, 2024)*
- **370 million** girls and women alive today experienced rape or sexual assault in childhood. An estimated **240–310 million** boys and men experienced the same. *(UNICEF, 2024)*
- **29.2 million** incidents of suspected child sexual exploitation were reported to NCMEC's CyberTipline in 2024 alone — containing **62.9 million files** (images, videos). *(NCMEC, 2025)*
- **546,000** reports of online enticement (adults grooming children) in 2024 — a **192% increase** from the year before. *(NCMEC, 2025)*
- **1,325% increase** in AI-generated child sexual abuse material reports between 2023 and 2024. The technology that should protect children is being weaponized against them. *(NCMEC, 2025)*
- **100 sextortion reports per day** to NCMEC. Since 2021, at least **36 teenage boys** have taken their own lives because they were victimized by sextortion. *(NCMEC, 2025)*
- **84%** of reports resolve outside the United States. This is not an American problem. This is a **global emergency**. *(NCMEC, 2025)*
End-to-end encryption is making platforms blind. In 2024, platforms reported **7 million fewer incidents** than the year before — not because abuse stopped, but because they can no longer see it. The tools that catch known images are failing. The systems that rely on human moderators are overwhelmed. The technology to detect behavior — grooming patterns, escalation, manipulation — in real-time text conversations **exists right now**. It is running at [api.tuteliq.ai](https://api.tuteliq.ai).
The question is not whether this technology is possible. The question is whether we build the company to put it everywhere it needs to be.
**Every second we wait, another child is harmed.**
We have the technology. We need the support.
If this mission matters to you, consider [sponsoring our open-source work](https://github.com/sponsors/Tuteliq) so we can keep building the tools that protect children — and keep them free and accessible for everyone.
---
<p align="center">
<sub>Built with care for child safety by the <a href="https://tuteliq.ai">Tuteliq</a> team</sub>
</p>
| text/markdown | null | Tuteliq <sales@tuteliq.ai> | null | null | null | ai, bullying-detection, child-safety, content-moderation, grooming-detection, online-safety, parental-control, sdk, tuteliq | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Langu... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"websockets>=12.0; extra == \"voice\""
] | [] | [] | [] | [
"Homepage, https://tuteliq.ai",
"Documentation, https://docs.tuteliq.ai",
"Repository, https://github.com/Tuteliq/python",
"Issues, https://github.com/Tuteliq/python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:18:50.078262 | tuteliq-2.3.1.tar.gz | 438,157 | c4/12/73bb5732bb1168ae8b6bdd45aecebffcfeab32c6702f7acd1de0bebff3c1/tuteliq-2.3.1.tar.gz | source | sdist | null | false | 096f37423e5388c995ac4b324fb7bbea | 5fe2665cfd292cdb6f9ec905921c34ec7e0fb219d432a916020896dbe8f49182 | c41273bb5732bb1168ae8b6bdd45aecebffcfeab32c6702f7acd1de0bebff3c1 | MIT | [
"LICENSE"
] | 250 |
2.1 | rlane-libcli | 1.0.11 | Command line interface framework | ## libcli
Command line interface framework.
The `libcli` package, built on
[argparse](https://docs.python.org/3/library/argparse.html), provides
functions and features that are common to or desired by many command
line applications:
* Colorized help output, with `prog -h`, or `prog --help`.
* Help output in `Markdown` format, with `prog --md-help`.
* Print all help (top command, and all subcommands), with `prog -H`, or
`prog --long-help`. (For commands with subcommands).
* Configure logging, with `-v`, or `--verbose`
(`"-v"`=INFO, `"-vv"`=DEBUG, `"-vvv"`=TRACE). Integrated
with [loguru](https://github.com/Delgan/loguru) and
[logging](https://docs.python.org/3/library/logging.html).
* Print the current version of the application, with `-V`, or `--version`;
uses value from application's package metadata.
* Load configuration from a file, *before* parsing the rest of the command
line, with `--config FILE` (only `--config` and `-v` are pre-parsed, so `-v`
can be used to debug `--config` itself). This makes values from the config
file available when building the `argparse.ArgumentParser`, for
setting defaults or for inclusion in the help strings of arguments/options.
* Print the active configuration, after loading the config file, with `--print-config`.
* Print the application's URL, with `--print-url`;
uses value from application's package metadata.
* Integrate with [argcomplete](https://github.com/kislyuk/argcomplete),
with `--completion`.
* Automatic inclusion of all common options, above.
* Normalized help text of all command line arguments/options.
* Force the first letter of all help strings to be upper case.
* Force all help strings to end with a period.
* Provides a function `add_default_to_help` to consistently include
a default value in an argument/option's help string.
* Supports single command applications, and command/sub-commands applications.
### class BaseCLI
Command line interface base class.
```console
$ cat minimal.py
from libcli import BaseCLI

class HelloCLI(BaseCLI):
    def main(self) -> None:
        print("Hello")

if __name__ == "__main__":
    HelloCLI().main()

$ python minimal.py -h
Usage: minimal.py [-h] [-v] [-V] [--print-config] [--print-url] [--completion [SHELL]]

General Options:
  -h, --help            Show this help message and exit.
  -v, --verbose         `-v` for detailed output and `-vv` for more detailed.
  -V, --version         Print version number and exit.
  --print-config        Print effective config and exit.
  --print-url           Print project url and exit.
  --completion [SHELL]  Print completion scripts for `SHELL` and exit (default: `bash`).
```
```console
$ cat simple.py
from libcli import BaseCLI

class HelloCLI(BaseCLI):
    def init_parser(self) -> None:
        self.parser = self.ArgumentParser(
            prog=__package__,
            description="This program says hello.",
        )

    def add_arguments(self) -> None:
        self.parser.add_argument(
            "--spanish",
            action="store_true",
            help="Say hello in Spanish.",
        )
        self.parser.add_argument(
            "name",
            help="The person to say hello to.",
        )

    def main(self) -> None:
        if self.options.spanish:
            print(f"Hola, {self.options.name}!")
        else:
            print(f"Hello, {self.options.name}!")

if __name__ == "__main__":
    HelloCLI().main()

$ python simple.py -h
Usage: simple.py [--spanish] [-h] [-v] [-V] [--print-config] [--print-url]
                 [--completion [SHELL]] name

This program says hello.

Positional Arguments:
  name                  The person to say hello to.

Options:
  --spanish             Say hello in Spanish.

General Options:
  -h, --help            Show this help message and exit.
  -v, --verbose         `-v` for detailed output and `-vv` for more detailed.
  -V, --version         Print version number and exit.
  --print-config        Print effective config and exit.
  --print-url           Print project url and exit.
  --completion [SHELL]  Print completion scripts for `SHELL` and exit (default: `bash`).
```
### class BaseCmd
Base command class; for commands with subcommands.
```console
$ cat complex.py
from libcli import BaseCLI, BaseCmd

class EnglishCmd(BaseCmd):
    def init_command(self) -> None:
        parser = self.add_subcommand_parser(
            "english",
            help="Say hello in English",
            description="The `%(prog)s` command says hello in English.",
        )
        parser.add_argument(
            "name",
            help="The person to say hello to.",
        )

    def run(self) -> None:
        print(f"Hello {self.options.name}!")

class SpanishCmd(BaseCmd):
    def init_command(self) -> None:
        parser = self.add_subcommand_parser(
            "spanish",
            help="Say hello in Spanish",
            description="The `%(prog)s` command says hello in Spanish.",
        )
        parser.add_argument(
            "name",
            help="The person to say hello to.",
        )

    def run(self) -> None:
        print(f"Hola {self.options.name}!")

class HelloCLI(BaseCLI):
    def init_parser(self) -> None:
        self.parser = self.ArgumentParser(
            prog=__package__,
            description="This program says hello.",
        )

    def add_arguments(self) -> None:
        self.add_subcommand_classes([EnglishCmd, SpanishCmd])

    def main(self) -> None:
        if not self.options.cmd:
            self.parser.print_help()
            self.parser.exit(2, "error: Missing COMMAND")
        self.options.cmd()

if __name__ == "__main__":
    HelloCLI().main()

$ python complex.py -H
---------------------------------- COMPLEX.PY ----------------------------------
usage: complex.py [-h] [-H] [-v] [-V] [--print-config] [--print-url]
                  [--completion [SHELL]]
                  COMMAND ...

This program says hello.

Specify one of:
  COMMAND
    english             Say hello in English.
    spanish             Say hello in Spanish.

General options:
  -h, --help            Show this help message and exit.
  -H, --long-help       Show help for all commands and exit.
  -v, --verbose         `-v` for detailed output and `-vv` for more detailed.
  -V, --version         Print version number and exit.
  --print-config        Print effective config and exit.
  --print-url           Print project url and exit.
  --completion [SHELL]  Print completion scripts for `SHELL` and exit
                        (default: `bash`).

------------------------------ COMPLEX.PY ENGLISH ------------------------------
usage: complex.py english [-h] name

The `complex.py english` command says hello in English.

positional arguments:
  name        The person to say hello to.

options:
  -h, --help  Show this help message and exit.

------------------------------ COMPLEX.PY SPANISH ------------------------------
usage: complex.py spanish [-h] name

The `complex.py spanish` command says hello in Spanish.

positional arguments:
  name        The person to say hello to.

options:
  -h, --help  Show this help message and exit.
```
| text/markdown | null | Russel Lane <russel@rlane.com> | null | null | MIT | argparse, cli, color, markdown, python | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ansicolors>=1.1.8",
"argcomplete>=3.5.1",
"tomli>=2.1.0",
"tomli_w>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/russellane/libcli"
] | twine/5.1.1 CPython/3.13.0 | 2026-02-18T19:18:46.613944 | rlane_libcli-1.0.11.tar.gz | 23,490 | c2/b7/22930e8e3d2ae283b24ed049cce8152819009afb85b15a6d9667c8b99171/rlane_libcli-1.0.11.tar.gz | source | sdist | null | false | fb6fa6bc9c8deb1b0bda26cba339d235 | efe4e6e199d603fccbeec33085e37684faf0a9d82102020657747089f76148b9 | c2b722930e8e3d2ae283b24ed049cce8152819009afb85b15a6d9667c8b99171 | null | [] | 817 |
2.4 | cdscompare | 0.3.0rc4 | Compare CDS annotations from one or multiple GFF files | # CDScompare
**CDScompare performs the quantitative comparison of structural genome annotations (GFF3) of the same genome.**
For each pair of overlapping genes, it computes numerical similarity scores based on CDS organization, exon–intron structure and reading frame conservation.
It is implemented in Python and distributed as a command-line tool.
The tool supports:
- pairwise comparison between two genome annotations
- multi-comparison of several annotations against a common reference
- two pairing strategies (best or all)
## Installation
### Requirements
One of the following must be available:
- Python ≥ 3.9 with `pip` or `pipx`
- Docker or Apptainer / Singularity
---
### Installation from PyPI (recommended)
#### Using `pipx` (isolated CLI install)
```bash
pipx install cdscompare==<version>
```
*Replace \<version\> with a specific release number (e.g. 0.3.0rc4).*
#### Using `pip`
```bash
pip install cdscompare==<version>
```
Test the installation:
```bash
cdscompare --version
```
### Container image (Docker / Apptainer)
A pre-built OCI image is available via GitHub Container Registry:
```bash
ghcr.io/johgi/cdscompare:<version>
```
#### Docker
Pull the image:
```bash
docker pull ghcr.io/johgi/cdscompare:<version>
```
To run on local files:
```bash
docker run --rm \
-v "$PWD:/work" \
-w /work \
ghcr.io/johgi/cdscompare:<version> \
file1.gff file2.gff
```
#### Apptainer / Singularity
Pull the image:
```bash
apptainer pull docker://ghcr.io/johgi/cdscompare:<version>
```
To run on local files:
```bash
apptainer run \
--bind "$PWD:/work" \
cdscompare_<version>.sif \
/work/file1.gff /work/file2.gff
```
### Installing development / unreleased versions
```bash
git clone git@github.com:ranwez/CDScompare.git
cd CDScompare
pip install .
```
*After installation, the repository is no longer required.*
---
## Command-line interface
```bash
cdscompare [OPTIONS] GFF_1 GFF_2 [GFF_3 ...]
```
### Positional arguments
| Argument | Description |
|---------|-------------|
| `GFF_1` | Annotation file in GFF3 format. |
| `GFF_2 ...` | One or more additional annotation files. When more than two GFF files are provided, the first file (`GFF_1`) is used as the reference, and all other files are compared against it. |
> **Note:**
> - At least **two GFF files** are required.
> - In pairwise comparisons (exactly two GFF files), identity scores and gene pairings are invariant to the input file order.
> - GFF input file basenames must be unique (used as annotation identifiers).
### Options
| Option | Description |
|-------|-------------|
| `-d, --out_dir` | Output directory where result files are written (default: `results`). |
| `-p, --pairing_mode` | Pairing strategy used within clusters of overlapping genes. Possible values are:<br>• `best` (default): selects a globally optimal gene pairing using dynamic programming.<br>• `all`: reports all overlapping gene pairings without global optimization. |
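To illustrate the difference between the two modes, here is a toy sketch on a small cluster of overlapping genes. This is purely illustrative (brute force over permutations on made-up scores), not CDScompare's actual dynamic-programming implementation:

```python
from itertools import permutations

# Toy similarity scores between genes of annotation 1 (rows) and
# annotation 2 (columns) within one cluster of overlapping genes.
scores = {
    ("g1", "h1"): 0.95, ("g1", "h2"): 0.40,
    ("g2", "h1"): 0.50, ("g2", "h2"): 0.90,
}
genes1, genes2 = ["g1", "g2"], ["h1", "h2"]

# "all": report every overlapping gene pair.
all_pairs = sorted(scores)

# "best": one-to-one pairing maximising the total score (brute force here).
best = max(
    (list(zip(genes1, perm)) for perm in permutations(genes2)),
    key=lambda pairing: sum(scores[p] for p in pairing),
)
print(best)  # [('g1', 'h1'), ('g2', 'h2')]
```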
## Output files
### Pairwise comparison
CDScompare will produce:
#### 1. Detailed comparison file (`<annotation1>_vs_<annotation2>.csv`)
This file contains one line per gene comparison, with the following columns:
| Column | Description |
|------|-------------|
| `chromosome` | Chromosome identifier including strand (`_direct` or `_reverse`). |
| `cluster` | Identifier of the cluster of overlapping genes. |
| `annot1_gene` | Gene identifier in the first input annotation. |
| `annot2_gene` | Gene identifier in the second input annotation. |
| `matches` | Number of nucleotide positions matching between CDS structures. |
| `mismatches` | Total number of mismatched nucleotides (exon–intron + reading frame mismatches). |
| `identity_score` | Percentage identity computed as matches / (matches + mismatches). |
| `annot1_start` | Genomic start coordinate of the reference gene. |
| `annot1_end` | Genomic end coordinate of the reference gene. |
| `annot2_start` | Genomic start coordinate of the alternative gene. |
| `annot2_end` | Genomic end coordinate of the alternative gene. |
| `annot1_mRNA` | Identifier of the selected reference mRNA used for the comparison. |
| `annot2_mRNA` | Identifier of the selected alternative mRNA used for the comparison. |
| `EI_mismatches_zones` | Genomic intervals where exon–intron structures differ between annotations. |
| `RF_mismatches_zones` | Genomic intervals where reading frames differ between annotations. |
| `EI_mismatches` | Total length (in nucleotides) of exon–intron mismatches. |
| `RF_mismatches` | Total length (in nucleotides) of reading frame mismatches. |
| `annot1_mRNA_number` | Number of mRNAs annotated for the reference gene. |
| `annot2_mRNA_number` | Number of mRNAs annotated for the alternative gene. |
Special values:
- `_` : undefined or not applicable
- `~` : no corresponding gene reported for this comparison. This occurs when:
- no overlapping gene exists in the other annotation, or
- the gene was paired with a different gene in `best` pairing mode.
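As a worked example of the `identity_score` formula (illustrative code, not the tool's implementation):

```python
def identity_score(matches: int, mismatches: int) -> float:
    # Percentage identity as defined above: matches / (matches + mismatches).
    return 100.0 * matches / (matches + mismatches)

print(identity_score(900, 100))  # 90.0
print(identity_score(50, 50))    # 50.0
```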
#### 2. Summary file (`<annotation1>_vs_<annotation2>.txt`)
This file contains a global summary:
```
Number of loci (whole data):
- found in both annotations : X
- found only in the reference : Y
- found only in the alternative : Z
```
### Multi-comparison mode
When more than two annotation files are provided:
- A detailed comparison file and a summary file are produced for each alternative annotation.
- An additional global summary file (`synthesis_<annotation1>.csv`) is generated, with one line per reference annotation gene, and two columns for each alternative annotation:
| Column | Description |
|--------|---------|
| `Reference_locus` | Gene identifier in the reference annotation. |
| `<alt>.locus` | Identifier of the best matching gene in the corresponding alternative annotation. |
| `<alt>.identity` | Identity score (%) for the corresponding gene pairing. |
## Scope and assumptions
- Gene overlap is detected based on **gene genomic coordinates**.
- Identity scores are computed **only from CDS features**.
UTRs are ignored, and exon–intron structure is inferred from CDS organization.
- Alternative splicing is handled by selecting the **best matching mRNA pair** for each gene pairing.
- CDScompare assumes that all input annotation files:
- describe the **same genome assembly**
- follow standard **GFF3 conventions**, with the following feature hierarchy:
- `gene` features with an `ID` attribute
- `mRNA` features with `ID` and `Parent` attributes
- `CDS` features with a `Parent` attribute pointing to an mRNA
- contain **non-overlapping CDS coordinates within a single mRNA**
- use **unique gene identifiers (`ID`) within each file**
## Citation
[...]
## Developer notes
### Development installation
This method installs the package in editable mode and includes development dependencies (pytest and ruff).
```bash
git clone git@github.com:ranwez/CDScompare.git
cd CDScompare
pip install -e .[dev]
```
To run the tests:
```bash
python -m pytest -v
```
### Developer documentation
The internal code structure documentation is generated using Sphinx.
To build the documentation locally:
```bash
pip install -e .[docs]
python -m sphinx -b html docs/source docs/build/html
```
The generated documentation can be opened locally in a web browser (docs/build/html/index.html).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"attrs>=23.0.0",
"build>=0.10; extra == \"dev\"",
"twine>=4.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"sphinx>=7.0; extra == \"docs\"",
"sphinx-autodoc-typehints; extra == \"docs\"",
"furo; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T19:18:44.991216 | cdscompare-0.3.0rc4.tar.gz | 21,079 | a1/a7/e9c7d74d0271f8858dfa6981c0d3b2179fe2dcac0d8e2730afbb3e2f510d/cdscompare-0.3.0rc4.tar.gz | source | sdist | null | false | 8b7ec33c0b4333d178e3778e638dd1ed | e55216ad063b9ce63b4652166f00c3b4f32cc3ab09a9881d7286da747fe17c41 | a1a7e9c7d74d0271f8858dfa6981c0d3b2179fe2dcac0d8e2730afbb3e2f510d | null | [
"LICENSE"
] | 219 |
2.4 | timelength | 3.0.3 | A flexible python duration parser designed for human readable lengths of time. | # [timelength](https://github.com/EtorixDev/timelength)
[](https://pypi.org/project/timelength/)
[](https://pypi.org/project/timelength/)
[](https://pypi.org/project/timelength/)
[](https://github.com/EtorixDev/timelength)
[](https://github.com/EtorixDev/timelength)
[](https://pypi.org/project/timelength/)
A flexible python duration parser designed for human readable lengths of time, including long form durations such as `1 day, 5 hours, and 30 seconds`, short form durations such as `1d5h30s`, numerals such as `twelve hours`, HHMMSS such as `12:30:15`, and a mix thereof such as `1 day 5h 30s`.
- [Installation](#installation)
- [Usage](#usage)
- [Basic](#basic)
- [Advanced](#advanced)
- [FailureFlags](#failureflags)
- [ParserSettings](#parsersettings)
- [Guess the Locale](#guess-the-locale)
- [Operations & Comparisons](#operations--comparisons)
- [Example Inputs](#example-inputs)
- [Usage Notes](#usage-notes)
- [Supported Locales](#supported-locales)
- [Customization](#customization)
## Installation
`timelength` can be installed via pip:
```
pip install timelength
```
Or added to your project via uv:
```
uv add timelength
```
## Usage
### Basic
Import `TimeLength` and pass it a string to evaluate. As long as a single valid result is found, parsing is considered a success, regardless of any invalid content in the input.
```python
from datetime import datetime, timezone
from timelength import TimeLength
tl = TimeLength("1.27 hours and 23 miles")
print(tl.result.success)
# True
print(tl.result.seconds)
# 4572.0
print(tl.to_minutes(max_precision=3))
# 76.2
print(tl.result.delta)
# 1:16:12
print(tl.ago(base=datetime(2025, 1, 1, 0, 0, 0, 0, timezone.utc)))
# 2024-12-31 22:43:48+00:00
print(tl.hence(base=datetime(2025, 1, 1, 0, 0, 0, 0, timezone.utc)))
# 2025-01-01 01:16:12+00:00
print(tl.result.invalid)
# ((23.0, FailureFlags.LONELY_VALUE), ('miles', FailureFlags.UNKNOWN_TERM))
print(tl.result.valid)
# ((1.27, Scale(scale=3600.0, ...)),)
```
### Advanced
Finer control is available through `FailureFlags` and `ParserSettings`. Both are passed to the `Locale` you are parsing with, which in turn is passed to the `TimeLength` being parsed.
`FailureFlags` is an IntFlag enum which holds all the currently possible reasons for a parse to fail. The default is `FailureFlags.NONE` which means as long as a single value is parsed, it will be considered a success. To achieve the opposite behavior, `FailureFlags.ALL` will cause the parsing to be considered a failure if any `FailureFlags` are detected, even if a valid value was parsed. See below for a full list.
`ParserSettings` is an object with various parsing settings. See below for a full list of options along with the default values.
In the first part of the example below, the flags are set to `(FailureFlags.LONELY_VALUE | FailureFlags.UNKNOWN_TERM)`, which means as long as a single valid item is found, the parsing will only be considered a failure if `FailureFlags.LONELY_VALUE` or `FailureFlags.UNKNOWN_TERM` show up in the invalid items. As `19` is included in the input without an accompanying scale, it is considered a `LONELY_VALUE` and thus invalid.
In the second part, the flags are reset, meaning nothing will cause parsing to fail as long as a single valid item is found. Additionally, the settings are updated so that duplicate scales are no longer allowed. As a result, `5m` is added to the invalid items as a `DUPLICATE_SCALE` because of the preceding `35m`, and `19` remains in the invalid items as a `LONELY_VALUE`. Despite this, since the flags are set to `FailureFlags.NONE` and `3.5d, 35m` is successfully parsed, the overall parsing is considered a success.
```python
from timelength import English, FailureFlags, ParserSettings, TimeLength
flags = (FailureFlags.LONELY_VALUE | FailureFlags.UNKNOWN_TERM)
locale = English(flags=flags)
tl = TimeLength("3.5d, 35m, 5m, 19", locale=locale)
print(tl.result.success)
# False
print(tl.result.invalid)
# ((19.0, FailureFlags.LONELY_VALUE),)
print(tl.result.valid)
# ((3.5, Scale(scale=86400.0, ...)), (35.0, Scale(scale=60.0, ...)), (5.0, Scale(scale=60.0, ...)))
flags = FailureFlags.NONE
settings = ParserSettings(allow_duplicate_scales=False)
locale.flags = flags
locale.settings = settings
tl.parse()
print(tl.result.success)
# True
print(tl.result.invalid)
# (('5m', FailureFlags.DUPLICATE_SCALE), (19.0, FailureFlags.LONELY_VALUE))
print(tl.result.valid)
# ((3.5, Scale(scale=86400.0, ...)), (35.0, Scale(scale=60.0, ...)))
```
Put simply: `FailureFlags` determines whether an item in the invalid items tuple should invalidate the parsing as a whole, whereas `ParserSettings` determines whether certain customizable situations result in an item being added to the invalid items tuple in the first place.
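The intersection check described above can be sketched with a plain `IntFlag`. The `Failure` enum and `parse_succeeded` helper below are hypothetical stand-ins for illustration, not timelength's actual internals:

```python
from enum import IntFlag, auto

class Failure(IntFlag):
    """A trimmed-down, hypothetical stand-in for timelength's FailureFlags."""
    NONE = 0
    UNKNOWN_TERM = auto()
    LONELY_VALUE = auto()
    DUPLICATE_SCALE = auto()

def parse_succeeded(detected: Failure, configured: Failure, any_valid: bool) -> bool:
    # Success requires at least one valid item and no overlap between
    # the failures detected in the input and the configured failure flags.
    return any_valid and not (detected & configured)

# With no flags configured, invalid leftovers are tolerated:
print(parse_succeeded(Failure.LONELY_VALUE, Failure.NONE, any_valid=True))   # True
# Configuring LONELY_VALUE turns the same parse into a failure:
print(parse_succeeded(Failure.LONELY_VALUE,
                      Failure.LONELY_VALUE | Failure.UNKNOWN_TERM,
                      any_valid=True))                                       # False
```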
### FailureFlags
The members of the `FailureFlags` IntFlag are:
- `NONE` — No failures will cause parsing to fail.
- `ALL` — Any failure will cause parsing to fail.
- `MALFORMED_CONTENT` — The fallback when something that shouldn't have happened, happened.
- `UNKNOWN_TERM` — The parsed value was not recognized from the terms or symbols in the config.
- Ex: `1 mile`
- `MALFORMED_DECIMAL` — Multiple decimals were attempted within a singular decimal segment.
- Ex: `1.2.3min`
- `MALFORMED_THOUSAND` — A thousand segment was attempted but did not have a leading number or a proper number of digits following a thousand separator.
- Ex: `,234`, `1,23`, or `1,2345`
- `MALFORMED_FRACTION` — A fraction was attempted but had more than 2 values, a missing value, or was not formatted correctly.
- Ex: `1/2/3`, `/2`, or `1+ / +2`
- `MALFORMED_HHMMSS` — An HH:MM:SS was attempted but had more segments than enabled scales or was not formatted correctly.
- Ex: `1:2:3:4:5:6:7:8:9:10:11:12:13:14:15` or `1:15:26:`
- `LONELY_VALUE` — A value was parsed with no paired scale.
- Ex: `one minute and twenty`
- `LONELY_SCALE` — A scale was parsed with no paired value.
- Ex: `2 minutes and hours`
- `DUPLICATE_SCALE` — The same scale was parsed multiple times.
- Ex: `1min 5:23` or `1min 5 minutes and 23 seconds`
- `CONSECUTIVE_CONNECTOR` — More than the allowed number of connectors were parsed in a row.
- Ex: `1h  2min`
- `CONSECUTIVE_SEGMENTOR` — More than the allowed number of segmentors were parsed in a row.
- Ex: `1h,, 2min`
- `CONSECUTIVE_SPECIAL` — More than the allowed number of special characters were parsed in a row.
- Ex: `1h 2min!!`
- `MISPLACED_ALLOWED_TERM` — An allowed term was found in the middle of a segment/sentence.
- Ex: `1!h 2min`
- `MISPLACED_SPECIAL` — A special character was found in the middle of a segment/sentence.
- Ex: `1, /2`
- `UNUSED_OPERATION` — An operation numeral term was parsed but not applied to any values.
- Ex: `2 min of`
- `AMBIGUOUS_MULTIPLIERS` — More than one multiplier was parsed for a single segment which may be ambiguous.
- Ex: `half of a quarter of two minutes and 30s`
### ParserSettings
- assume_scale: `Literal["LAST", "SINGLE", "NEVER"] = "SINGLE"` — How to handle no scale being provided.
- `LAST` will assume seconds only for the last value if no scale is provided for it.
- `SINGLE` will only assume seconds when a single input is provided.
- `NEVER` will never assume seconds when no scale is provided.
- limit_allowed_terms: `bool = True` — Prevent terms from the `allowed_terms` list in
the config from being used in the middle of a segment, thus interrupting a value/scale pair.
- The affected segment will become abandoned and added to the invalid list.
- The terms may still be used at the beginning or end of a segment/sentence.
- If `False`, the terms will be ignored (within other limitations) and will not affect parsing.
- allow_duplicate_scales: `bool = True` — Allow scales to be parsed multiple times, stacking their values.
- If `False`, the first scale will be used and subsequent duplicates will be added to the invalid list.
- allow_thousands_extra_digits: `bool = False` — Allow thousands to be parsed with more than three digits following a thousand delimiter.
- Ex: `1,2345` will be interpreted as `12,345`.
- allow_thousands_lacking_digits: `bool = False` — Allow a number to be parsed with less than three digits following a thousand delimiter.
- Ex: `1,23` will be interpreted as `123`.
- allow_decimals_lacking_digits: `bool = True` — Allow decimals to be parsed with no number following the
decimal delimiter.
- Ex: `1.` will be interpreted as `1.0`.
### Guess the Locale
If you're unsure which locale the input belongs to, you can attempt to guess the locale from the available options.
```python
from timelength import English, Spanish, Guess, TimeLength
guess = Guess()
tl = TimeLength("5 minutos", locale=guess)
print(tl.locale)
# Spanish
print(tl.result.success)
# True
print(tl.result.valid)
# ((5.0, Scale(scale=60.0, ...)),)
tl.content = "5 minutes"
tl.parse(guess_locale=guess) # or `guess_locale=True`
print(tl.locale)
# English
print(tl.result.success)
# True
print(tl.result.valid)
# ((5.0, Scale(scale=60.0, ...)),)
```
To guess the locale, pass an instantiated `Guess` either when creating the `TimeLength` or to the `parse()` method on subsequent calls. Reusing one `Guess` saves locale instantiations, as each new `Guess` instantiates every available locale in `Guess().locales`. Alternatively, for `parse()`, you can pass `True` for `guess_locale` and a new `Guess` will be instantiated automatically.
Guessing works by parsing the input with each locale; the one producing the fewest invalid results is considered the correct locale. Ties are broken by the most valid results, then alphabetically by locale name.
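That ranking rule can be sketched as a sort key over per-locale trial results (locale names and counts here are hypothetical):

```python
# Each candidate records (locale_name, invalid_count, valid_count)
# from a trial parse. The names and counts are made up for illustration.
candidates = [
    ("English", 2, 1),
    ("Spanish", 0, 1),
    ("French", 0, 1),
]

# Fewest invalid results wins; ties are broken by most valid results,
# then alphabetically by locale name.
best = min(candidates, key=lambda c: (c[1], -c[2], c[0]))
print(best[0])  # French
```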
Any flags or settings passed to the `Guess` will be passed on to each locale. If none are provided, the defaults in each locale's configs are used. If you need to specify flags or settings per locale, you can directly access `Guess().locales`, which is a list of instantiated versions of all of the currently available locales.
If you have any custom locales you would like to be included in the possible results, append to `Guess().locales` before passing it to the `TimeLength` or `parse()` function.
### Operations & Comparisons
`TimeLength` objects support various arithmetic operations and comparisons with each other, datetimes, timedeltas, and numbers. The supported operations are:
1. **Addition**
- `TimeLength` + `TimeLength` or `timedelta` or number -> `TimeLength`
- `datetime` or `timedelta` + `TimeLength` -> `datetime` or `timedelta`
2. **Subtraction**
- `TimeLength` - `TimeLength` or `timedelta` or number -> `TimeLength`
- `datetime` or `timedelta` - `TimeLength` -> `datetime` or `timedelta`
3. **Multiplication**
- `TimeLength` * number -> `TimeLength`
- number * `TimeLength` -> `TimeLength`
4. **Division**
- `TimeLength` / `TimeLength` or `timedelta` or number -> `TimeLength` or `float`
- `timedelta` / `TimeLength` -> `float`
5. **Floor Division**
- `TimeLength` // `TimeLength` or `timedelta` or number -> `TimeLength` or `float`
- `timedelta` // `TimeLength` -> `float`
6. **Modulo**
- `TimeLength` % `TimeLength` or `timedelta` or number -> `TimeLength`
- `timedelta` % `TimeLength` -> `timedelta`
7. **Divmod**
- `divmod(TimeLength, TimeLength or timedelta or number)` -> `tuple[float, TimeLength]`
- `divmod(timedelta, TimeLength)` -> `tuple[float, timedelta]`
8. **Power**
- `TimeLength` ** number -> `TimeLength` (optionally moduloed by `TimeLength` or `timedelta` or number)
9. **Comparisons**
- `TimeLength` > `TimeLength` or `timedelta` -> `bool`
- `TimeLength` >= `TimeLength` or `timedelta` -> `bool`
- `TimeLength` < `TimeLength` or `timedelta` -> `bool`
- `TimeLength` <= `TimeLength` or `timedelta` -> `bool`
- `TimeLength` == `TimeLength` or `timedelta` -> `bool`
- `TimeLength` != `TimeLength` or `timedelta` -> `bool`
10. **Other Operations**
- `abs(TimeLength)` -> `TimeLength` (returns `self` unchanged as `TimeLength` is an absolute measurement)
- `+TimeLength` -> `TimeLength` (returns `self` unchanged)
- `-TimeLength` -> `TimeLength` (returns `self` unchanged)
- `bool(TimeLength)` -> `bool` (returns `True` if parsing succeeded, otherwise `False`)
- `len(TimeLength)` -> `int` (returns the length of `self.content`)
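Since a `TimeLength` ultimately reduces to a number of seconds, the arithmetic rules above can be sketched with a minimal stand-in class. The `Duration` type below is an illustrative sketch, not the library's implementation:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class Duration:
    """A minimal, hypothetical stand-in: everything reduces to seconds."""
    seconds: float

    def _coerce(self, other) -> float:
        if isinstance(other, Duration):
            return other.seconds
        if isinstance(other, timedelta):
            return other.total_seconds()
        return float(other)

    def __add__(self, other):
        # Duration + Duration/timedelta/number -> Duration
        return Duration(self.seconds + self._coerce(other))

    def __truediv__(self, other):
        # Duration / Duration or timedelta -> float; Duration / number -> Duration
        if isinstance(other, (Duration, timedelta)):
            return self.seconds / self._coerce(other)
        return Duration(self.seconds / float(other))

    def __gt__(self, other):
        return self.seconds > self._coerce(other)

d = Duration(90.0)
print((d + timedelta(seconds=30)).seconds)  # 120.0
print(d / Duration(45.0))                   # 2.0
print(d > timedelta(minutes=1))             # True
```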
## Example Inputs
- `1m`
- `1min`
- `1 Minute`
- `1m and 2 SECONDS`
- `3h, 2 min, 3sec`
- `1.2d`
- `1,234s`
- `one hour`
- `twenty-two hours and thirty five minutes`
- `half of a day`
- `1/2 of a day`
- `1/4 hour`
- `1 Day, 2:34:12`
- `1:2:34:12`
- `1:5:13:27:22`
### Usage Notes
1. **Numerals**
- Segmentors are ignored when parsing numerals in order to achieve consistency over ambiguity.
2. **Multipliers**
- A single multiplier (ex: `half`) is allowed per segment (value + scale). This is due to the ambiguity introduced when more than one multiplier is used. May be revisited in the future if a good way to handle this ambiguity is found.
3. **HHMMSS**
- It is not strictly adherent to typical `HH:MM:SS` standards. Any parsable numbers work in each slot, whether they are single digits, multiple digits, or include decimals.
- For example, `2.56:27/3:270:19.2231` is a valid input in place of `2.56 days, 9 hours, 270 minutes, and 19.2231 seconds`.
- It also accepts a single connector, such as a space, between delimiters. Example: `2: 6: 3`.
- Supports up to as many segments as there are scales defined, including custom scales (10 default, `Millisecond` to `Century`).
- The segments are parsed in reverse order, smallest to largest (the order defined in the config, and therefore the order loaded into the `Locale`; this may differ for custom-defined `Scale`s), to ensure the correct scales are applied.
- EXCEPTION: The default base from which `HH:MM:SS` starts is `Second`. Any scales (typically of lesser value) listed earlier in the config, or appended to the scales list before `Second` in the case of custom scales, will not be used unless `Second` is disabled or as many `HH:MM:SS` segments are parsed as there are scales defined.
- `12:30:15` is `12 hours, 30 minutes, and 15 seconds`, NOT `12 minutes, 30 seconds, and 15 milliseconds`.
- `1:10:100:12:52:30:24:60:60:1000` will make use of `Century` to `Millisecond`.
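The reverse pairing described above can be sketched as follows, assuming the default scales with `Second` as the base (this is an illustration, not timelength's parser):

```python
# Seconds per unit, from the default base upward: second, minute, hour, day, week.
SCALES = [1.0, 60.0, 3600.0, 86400.0, 604800.0]

def hhmmss_seconds(text: str) -> float:
    """Pair segments with scales from the right, so the last segment is seconds."""
    segments = [float(part) for part in text.split(":")]
    return sum(value * scale
               for value, scale in zip(reversed(segments), SCALES))

print(hhmmss_seconds("12:30:15"))   # 45015.0 (12 hours, 30 minutes, 15 seconds)
print(hhmmss_seconds("1:2:34:12"))  # 95652.0 (1 day, 2 hours, 34 minutes, 12 seconds)
```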
## Supported Locales
1. English
2. Spanish
3. Basic Custom
- Copy & modify an existing config with new terms as long as your new `Locale` follows the existing config parser's grammar structure
4. Advanced Custom
- Write your own parsing logic if your `Locale`'s grammar structure differs too drastically (PRs welcome)
### Customization
`timelength` allows for customizing the parsing behavior through JSON configuration. To get started, copy an existing locale JSON in `timelength/locales/`. The custom JSON may be placed anywhere.
**Ensure the JSON being used is from a trusted source, as the parser is loaded dynamically based on the file specified in the JSON. This could allow for unintended code execution if an unsafe config is loaded.**
Valid JSONs must include the following keys:
- `connectors`
- Characters/words that join two parts of the same segment.
- Must have at least one value.
- `segmentors`
- Characters/words that separate an input into segments.
- Must have at least one value.
- `allowed_terms`
- Characters or terms that won't be categorized as invalid input. If repeated multiple times in a row (ex: `!!`), they will still be marked as invalid. If `ParserSettings().limit_allowed_terms` is set, these characters won't be allowed in the middle of a segment (ex: `1!min`) and will cause the segment's progress to reset.
- `hhmmss_delimiters`
- Characters used to form `HH:MM:SS` segments. Can't have overlap with `decimal_delimiters`, `thousand_delimiters`, or `fraction_delimiters`.
- `decimal_delimiters`
- Characters used to separate decimals from digits. Can't have overlap with `hhmmss_delimiters`, `thousand_delimiters`, or `fraction_delimiters`.
- `thousand_delimiters`
- Characters used to break up large numbers. Can't have overlap with `hhmmss_delimiters`, `decimal_delimiters`, or `fraction_delimiters`.
- `fraction_delimiters`
- Characters used to form fractions. Can't have overlap with `hhmmss_delimiters`, `decimal_delimiters`, or `thousand_delimiters`.
- `parser_file`
- The name of this locale's parser file (extension included) located in `timelength/parsers/`, or the path to the parser file if stored elsewhere.
- **Ensure only a trusted file is used as this could allow unintended code execution.**
- The internal parser method must share a name with the file. Example: `parser_one.py` and `def parser_one()`.
- `scales`
- Periods of time. The defaults are `millisecond`, `second`, `minute`, `hour`, `day`, `week`, `month`, `year`, `decade`, and `century`. Default scales must be present. At least one scale must be valid and enabled. Custom scales can be added following the format of the others. The following keys must be present and populated:
- `scale`
- The number of seconds this scale represents.
- `singular`
- The lowercase singular form of this scale.
- `plural`
- The lowercase plural form of this scale.
- `terms`
- All terms that could be parsed as this scale. Accents and other NFKD markings should not be present as they are filtered from the user input.
- The following key is optional to disable a scale without removing it from the config:
- `enabled`
- A bool indicating if the scale is enabled or not.
- `numerals`
- Word forms of numbers. May be populated or left empty. Each element must itself have the following keys, even if their contents are not used:
- `type`
- The numeral type.
- `value`
- The numerical value of this numeral.
- `terms`
- Characters/words that parse to this numeral's value.
- `flags`
- A list of `FailureFlags` which will cause parsing to be considered a failure. Values should be upper cased. This is optional and can be passed to the locale dynamically instead. See a full list of options in the [FailureFlags](#failureflags) section.
- `settings`
- A dictionary of options to modify parsing behavior. Each key should have a string or boolean value as appropriate. This is optional and can be passed to the locale dynamically instead. See a full list of options in the [ParserSettings](#parsersettings) section.
- `extra_data`
- Any data a parser needs that is not already covered. May be populated or left empty. The locale loads this into a `Locale().extra_data` attribute, leaving the parser to utilize it.
Once your custom JSON is filled out, you can use it as follows:
```python
from timelength import TimeLength, Locale
output = TimeLength("30 minutes", locale=Locale("path/to/config.json"))
```
If all goes well, the parsing will succeed, and if not, an error will point you in the right direction.
| text/markdown | null | Etorix <admin@etorix.dev> | null | null | null | datetime, duration, parser, parsing, time, timedelta, timelength | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://pypi.org/project/timelength/",
"Repository, https://github.com/EtorixDev/timelength/",
"Sponsor, https://github.com/sponsors/EtorixDev",
"Donate, https://ko-fi.com/Etorix"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T19:18:08.869111 | timelength-3.0.3.tar.gz | 83,473 | 8b/e9/9185269f853f34153eeb1c89a7ea656138810a4d7bdb3ac5b9e257d62aca/timelength-3.0.3.tar.gz | source | sdist | null | false | 8adea7d555e49e4840cb18d9fa4de611 | 60c6a716a4c680700bfefd1176e6d1fdda6f5694e7e60e2c3c6c11ce3eaa524e | 8be99185269f853f34153eeb1c89a7ea656138810a4d7bdb3ac5b9e257d62aca | MIT | [
"LICENSE"
] | 468 |
2.4 | narada | 0.1.36 | Python client SDK for Narada | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/NaradaAI/narada-python-sdk/main/static/Narada-logo-dark.png">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/NaradaAI/narada-python-sdk/main/static/Narada-logo.png">
<img alt="NARADA AI Logo." src="https://raw.githubusercontent.com/NaradaAI/narada-python-sdk/main/static/Narada-logo.png" width="300">
</picture>
</p>
<h1 align="center">Computer Use for Agentic Process Automation!</h1>
<p align="center">
<a href="https://narada.ai"><img src="https://img.shields.io/badge/Sign%20Up-Cloud-blue?logo=cloud" alt="Sign Up"></a>
<a href="https://docs.narada.ai"><img src="https://img.shields.io/badge/Documentation-Docs-blue?logo=gitbook" alt="Documentation"></a>
<a href="https://x.com/intent/user?screen_name=Narada_AI"><img src="https://img.shields.io/badge/Follow-Twitter-1DA1F2?logo=twitter&logoColor=white" alt="Twitter Follow"></a>
<a href="https://www.linkedin.com/company/97417492/"><img src="https://img.shields.io/badge/Follow-LinkedIn-0077B5?logo=linkedin&logoColor=white" alt="LinkedIn Follow"></a>
</p>
The official Narada Python SDK that helps you launch browsers and run tasks with Narada UI agents.
## Installation
```bash
pip install narada
```
## Quick Start
**Important**: The first time Narada opens the automated browser, you will need to manually install the [Narada Enterprise extension](https://chromewebstore.google.com/detail/enterprise-narada-ai-assi/bhioaidlggjdkheaajakomifblpjmokn) and log in to your Narada account.
After installation and login, create a Narada API Key (see [this link](https://docs.narada.ai/documentation/authentication#api-key) for instructions) and set the following environment variable:
```bash
export NARADA_API_KEY=<YOUR KEY>
```
That's it. Now you can run the following code to have Narada download a file for you from arXiv:
```python
import asyncio
from narada import Narada
async def main() -> None:
# Initialize the Narada client.
async with Narada() as narada:
# Open a new browser window and initialize the Narada UI agent.
window = await narada.open_and_initialize_browser_window()
# Run a task in this browser window.
response = await window.agent(
prompt='Search for "LLM Compiler" on Google and open the first arXiv paper on the results page, then open the PDF. Then download the PDF of the paper.',
# Optionally generate a GIF of the agent's actions.
generate_gif=True,
)
print("Response:", response.model_dump_json(indent=2))
if __name__ == "__main__":
asyncio.run(main())
```
This would then result in the following trajectory:
<p align="center">
<a href="https://youtu.be/bpy-xnSeboY">
<img src="https://i.imgur.com/TyEuD5d.gif" alt="File Download Example" width="600">
</a>
</p>
You can use the SDK to launch browsers and run automated tasks using natural language instructions. For more examples and code samples, please explore the [`examples/`](examples/) folder in this repository.
## Features
- **Natural Language Control**: Send instructions in plain English to control browser actions
- **Parallel Execution**: Run multiple browser tasks simultaneously across different windows
- **Error Handling**: Built-in timeout handling and retry mechanisms
- **Action Recording**: Generate GIFs of agent actions for debugging and documentation
- **Async Support**: Full async/await support for efficient operations
## Key Capabilities
- **Web Search & Navigation**: Automatically search, click links, and navigate websites
- **Data Extraction**: Extract information from web pages using AI understanding
- **Form Interaction**: Fill out forms and interact with web elements
- **File Operations**: Download files and handle web-based documents
- **Multi-window Management**: Coordinate tasks across multiple browser instances
## License
This project is licensed under the Apache 2.0 License.
## Support
For questions, issues, or support, please contact: support@narada.ai
## Citation
We would appreciate it if you cited Narada when you find it useful for your project.
```bibtex
@software{narada_ai2025,
author = {Narada AI},
title = {Narada AI: Agentic Process Automation for Enterprise},
year = {2025},
publisher = {GitHub},
url = {https://github.com/NaradaAI/narada-python-sdk}
}
```
<div align="center">
Made with ❤️ in Berkeley, CA.
</div>
| text/markdown | null | Narada <support@narada.ai> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.12.13",
"narada-core==0.0.15",
"packaging==24.2",
"playwright>=1.53.0",
"rich>=14.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/NaradaAI/narada-python-sdk/narada",
"Repository, https://github.com/NaradaAI/narada-python-sdk",
"Issues, https://github.com/NaradaAI/narada-python-sdk/issues"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T19:18:05.998002 | narada-0.1.36-py3-none-any.whl | 17,836 | ec/6b/fe8fd2a775383374f83e700e5717619add95240dcd4f9fc72696f21009cd/narada-0.1.36-py3-none-any.whl | py3 | bdist_wheel | null | false | 073673828b18bb469027c52e4a226003 | 08401a8cdb028b92510bc7bc9eee5ee7ac1d76e603d08f5be6344b4b5835368a | ec6bfe8fd2a775383374f83e700e5717619add95240dcd4f9fc72696f21009cd | Apache-2.0 | [] | 305 |
2.4 | narada-core | 0.0.15 | Code shared by the `narada` and `narada-pyodide` packages. | Code shared by the `narada` and `narada-pyodide` packages.
| text/markdown | null | Narada <support@narada.ai> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic==2.12.5"
] | [] | [] | [] | [
"Homepage, https://github.com/NaradaAI/narada-python-sdk/narada-core",
"Repository, https://github.com/NaradaAI/narada-python-sdk",
"Issues, https://github.com/NaradaAI/narada-python-sdk/issues"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T19:18:04.955815 | narada_core-0.0.15.tar.gz | 5,002 | 4f/82/8dd32ee27112fadb264642d14f2f0275f129c1e63126d4f31fcf83683469/narada_core-0.0.15.tar.gz | source | sdist | null | false | 3c9869da2543e22142ae32364c34e35e | aa2b2b582ff782a924f7c604191247fe7262cd155b18f4e2e263ac03fea5f748 | 4f828dd32ee27112fadb264642d14f2f0275f129c1e63126d4f31fcf83683469 | Apache-2.0 | [] | 313 |
2.4 | VIStk | 0.3.15.5 | Visual Interfacing Structure for python using tkinter | # Welcome to my Visual Interfacing Structure (VIS)
A simple-to-use framework to build GUIs for Python apps
Enjoy!
| text/markdown | null | Elijah Love <elijah2005l@gmail.com> | null | Elijah Love <elijah2005l@gmail.com> | BSD 2-Clause License
Copyright (c) 2025, Elijah Love
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| python, gui | [
"Development Status :: 2 - Pre-Alpha",
"Programming Language :: Python"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyinstaller"
] | [] | [] | [] | [
"Homepage, https://github.com/KarlTheKrazyKat/VIS/blob/master/README.md",
"Documentation, https://github.com/KarlTheKrazyKat/VIS/blob/master/documentation.md",
"Repository, https://github.com/KarlTheKrazyKat/VIS/",
"Bug Tracker, https://github.com/KarlTheKrazyKat/VIS/blob/master/knownissues.md",
"Changelog,... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:18:04.262108 | vistk-0.3.15.5.tar.gz | 89,642 | b6/ed/aa2ddf6868bf599ee9102fed3633216a31b696ee510ec88fa2edd5fd46a3/vistk-0.3.15.5.tar.gz | source | sdist | null | false | 9e1a4425fd9efbf3d5fdab57a4cd2e28 | 063c1eac766ff8ccbaa4a9bb78568731ffd21dd753d5e6703bd2020182912fbd | b6edaa2ddf6868bf599ee9102fed3633216a31b696ee510ec88fa2edd5fd46a3 | null | [
"LICENSE"
] | 0 |
2.4 | narada-pyodide | 0.0.40 | Pyodide-compatible Python client SDK for Narada | This is a Pyodide-compatible slim implementation for the [actual Narada Python SDK](https://github.com/NaradaAI/narada-python-sdk/narada), intended for internal use only.
| text/markdown | null | Narada <support@narada.ai> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"narada-core==0.0.15",
"packaging==24.2"
] | [] | [] | [] | [
"Homepage, https://github.com/NaradaAI/narada-python-sdk/packages/narada-pyodide",
"Repository, https://github.com/NaradaAI/narada-python-sdk",
"Issues, https://github.com/NaradaAI/narada-python-sdk/issues"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T19:18:01.601647 | narada_pyodide-0.0.40.tar.gz | 6,703 | 3d/1c/1b177b47cb6bceea8ea855365a22f8e4d65cd1c50c6b02148c0ef257f8b6/narada_pyodide-0.0.40.tar.gz | source | sdist | null | false | 563e0bdda1ff4a1f4c6c03cbc87f3b70 | 0fce5ca8529961470450b1729f294fa982584b267b82ab9aa40f5e217f9f935c | 3d1c1b177b47cb6bceea8ea855365a22f8e4d65cd1c50c6b02148c0ef257f8b6 | Apache-2.0 | [] | 238 |
2.4 | salahnow-cli | 0.2.0 | SalahNow command-line prayer times | # SalahNow CLI
Command-line prayer times using the same source logic as SalahNow Web:
- Türkiye: Diyanet (`ezanvakti.emushaf.net`)
- Worldwide: Muslim World League via AlAdhan (`method=3`, `school=1`)
## Install
### Option 1: pipx from PyPI (recommended)
```bash
pipx install salahnow-cli
```
### Option 2: pip from PyPI
```bash
python3 -m pip install --user salahnow-cli
```
### Option 3: local dev install
```bash
pipx install .
```
### Option 4: install script
```bash
./scripts/install.sh
```
## Commands
### Show today's prayer times
```bash
salahnow
```
Check version:
```bash
salahnow --version
```
### Next prayer + live countdown
```bash
salahnow next
```
### Configure location/method/time format
```bash
salahnow config
```
Print current config:
```bash
salahnow config --show
```
Non-interactive example:
```bash
salahnow config \
--city "New York" \
--country "United States" \
--country-code US \
--lat 40.7128 \
--lon -74.0060 \
--method mwl \
--time-format 12h
```
Config file path:
```text
~/.config/salahnow/config.json
```
### Notifications daemon
```bash
salahnow notify
```
- macOS: uses `osascript`
- Linux: uses `notify-send` (if available)
### Zsh completion
Typer/click completion is built in:
```bash
salahnow --install-completion zsh
```
or print script:
```bash
salahnow --show-completion zsh
```
## Notes
- Non-Türkiye locations are forced to `mwl` source (same as web behavior).
- Türkiye locations can use Diyanet, and CLI resolves `diyanetIlceId` from nearest TR city if missing.
- The CLI stores runtime cache in `~/.cache/salahnow/prayer_cache.json` and uses cached data when APIs are temporarily unavailable.
| text/markdown | SalahNow contributors | null | null | null | MIT | aladhan, cli, diyanet, islam, prayer-times, salah | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programm... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx<1.0,>=0.27",
"rich<15.0,>=13.7",
"typer<1.0,>=0.12",
"pytest<9.0,>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/blcnyy/salahnow",
"Source, https://github.com/blcnyy/salahnow",
"Issues, https://github.com/blcnyy/salahnow/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T19:17:23.296951 | salahnow_cli-0.2.0.tar.gz | 15,711 | cb/c9/20a6f5d5c3c05bb1e9c62fbdd139a440310cfa653802069c9bfd1f18b48e/salahnow_cli-0.2.0.tar.gz | source | sdist | null | false | 8e8d6c1679ea73f64de451459f92e01e | 7dacc134c240f65d804a28e00a095dec57cb66cd219d83d754011e493669fa23 | cbc920a6f5d5c3c05bb1e9c62fbdd139a440310cfa653802069c9bfd1f18b48e | null | [
"LICENSE"
] | 261 |
2.4 | docuware-client | 0.6.1 | DocuWare REST-API client | # docuware-client
This is a client library for the REST API of [DocuWare][1] DMS. Since
[DocuWare's documentation][2] regarding the REST API is very sparse (at the
time these lines were written), this client covers only part of the API's
functionality.
Please keep in mind: **This software is not related to DocuWare.** It is a work
in progress, may yield unexpected results, and almost certainly contains bugs.
> ⚠️ Starting with version 0.5.0, OAuth2 authentication is the new default.
> Unless you explicitly request cookie authentication with
> `dw.login(..., cookie_auth=True)`, OAuth2 will be used. OAuth2 authentication
> has been available since DocuWare 7.10, and
> [cookie authentication will be discontinued](https://start.docuware.com/blog/product-news/docuware-sdk-discontinuation-of-cookie-authentication)
> with DocuWare 7.11.
>
> ⚠️ Introduced in version 0.6.0, `httpx` is used instead of `requests`. The API
> should be largely compatible, but if you rely on specific `requests` behavior,
> implementation details or `requests` structures (like `Session`), please stay
> on version 0.5.x.
## Usage
First you have to log in and create a persistent session:
```python
import json
import pathlib
import docuware
dw = docuware.Client("http://localhost")
session = dw.login("username", "password", "organization")
with open(".session", "w") as f:
json.dump(session, f)
```
From then on you have to reuse the session, otherwise you will be locked out of
the DocuWare service for a period of time (10 minutes or even longer). As the
session cookie may change on subsequent logins, update the session file on
every login.
```python
session_file = pathlib.Path(".session")
if session_file.exists():
with open(session_file) as f:
session = json.load(f)
else:
session = None
dw = docuware.Client("http://localhost")
session = dw.login("username", "password", "organization", saved_session=session)
with open(session_file, "w") as f:
json.dump(session, f)
```
Or, more simply, use the `connect` function, which handles sessions and credentials automatically:
```python
# Tries to find credentials in arguments, environment variables DW_USERNAME, DW_PASSWORD,
# DW_ORG or .credentials file
dw = docuware.connect(url="http://localhost", username="...", password="...")
```
Iterate over the organizations and file cabinets:
```python
for org in dw.organizations:
print(org)
for fc in org.file_cabinets:
print(" ", fc)
```
If you already know the ID or name of the objects, you can also access them
directly.
```python
org = dw.organization("1")
fc = org.file_cabinet("Archive")
# If you only know the ID:
doc = fc.get_document(123456)
```
Now some examples of how to search for documents. First you need a search
dialog:
```python
# Let's use the first one:
dlg = fc.search_dialog()
# Or a specific search dialog:
dlg = fc.search_dialog("Default search dialog")
```
Each search term consists of a field name and a search pattern. Each search
dialog knows its fields:
```python
for field in dlg.fields.values():
print("Id =", field.id)
print("Length=", field.length)
print("Name =", field.name)
print("Type =", field.type)
print("-------")
```
Let's search for some documents:
```python
# Search for DOCNO equal to '123456':
for result in dlg.search("DOCNO=123456"):
print(result)
# Search for two patterns alternatively:
for result in dlg.search(["DOCNO=123456", "DOCNO=654321"], operation=docuware.OR):
print(result)
# Search for documents in a date range (01-31 January 2023):
for result in dlg.search("DWSTOREDATETIME=2023-01-01T00:00:00,2023-02-01T00:00:00"):
print(result)
```
Please note that search terms may also contain metacharacters such as `*`, `(`,
`)`, which may need to be escaped when searching for these characters
themselves.
```python
for result in dlg.search("DOCTYPE=Invoice \\(incoming\\)"):
print(result)
```
Search terms can be as simple as a single string, but can also be more complex.
The following two queries are equivalent:
```python
dlg.search(["FIELD1=TERM1,TERM2", "FIELD2=TERM3"])
dlg.search({"FIELD1": ["TERM1", "TERM2"], "FIELD2": ["TERM3"]})
```
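For illustration only (my own sketch, not part of the client's API), the dict form can be flattened into the equivalent string form like this:

```python
def flatten_query(query: dict) -> list:
    # {"FIELD1": ["TERM1", "TERM2"], "FIELD2": ["TERM3"]}
    # -> ["FIELD1=TERM1,TERM2", "FIELD2=TERM3"]
    return [f"{field}={','.join(terms)}" for field, terms in query.items()]

print(flatten_query({"FIELD1": ["TERM1", "TERM2"], "FIELD2": ["TERM3"]}))
```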
The result of a search is always an iterator over the search results, even if
no result was obtained. Each individual search result holds a `document`
attribute, which gives access to the document in the archive. The document
itself can be downloaded as a whole or only individual attachments.
```python
for result in dlg.search("DOCNO=123456"):
doc = result.document
# Download the complete document ...
data, content_type, filename = doc.download(keep_annotations=True)
docuware.write_binary_file(data, filename)
# ... or individual attachments (or sections, as DocuWare calls them)
for att in doc.attachments:
data, content_type, filename = att.download()
docuware.write_binary_file(data, filename)
```
Create a new document with index fields:
```python
data = {
"Subject": "My Document",
"Date": "2023-01-01",
}
# Create document:
doc = fc.create_document(fields=data)
# Add a file as attachment to the new document:
doc.upload_attachment("path/to/file.pdf")
```
Update index fields of a document:
```python
doc.update({"Status": "Approved", "Amount": 120.0})
```
Delete documents:
```python
dlg = fc.search_dialog()
for result in dlg.search(["FIELD1=TERM1,TERM2", "FIELD2=TERM3"]):
document = result.document
document.delete()
```
Users and groups of an organization can be accessed and managed:
```python
# Iterate over the list of users and groups:
for user in org.users:
print(user)
for group in org.groups:
print(group)
# Find a specific user:
user = org.users["John Doe"] # or: org.users.get("John Doe")
# Add a user to a group:
group = org.groups["Managers"] # or: org.groups.get("Managers")
group.add_user(user)
# or
user.add_to_group(group)
# Deactivate user:
user.active = False # or True to activate user
# Create a new user:
user = docuware.User(first_name="John", last_name="Doe")
org.users.add(user, password="123456")
```
## CLI usage
This package also includes a simple CLI program for collecting information
about the archive, searching for documents, and downloading documents or attachments.
First you need to log in:
```console
$ dw-client login --url http://localhost/ --username "Doe, John" --password FooBar --organization "Doe Inc."
```
The credentials and the session cookie are stored in the `.credentials` and
`.session` files in the current directory.
Of course, `--help` will give you a list of all options:
```console
$ dw-client --help
```
Some search examples (Bash shell syntax):
```console
$ dw-client search --file-cabinet Archive Customer=Foo\*
$ dw-client search --file-cabinet Archive DocNo=123456 "DocType=Invoice \\(incoming\\)"
$ dw-client search --file-cabinet Archive DocDate=2022-02-14
```
Downloading documents:
```console
$ dw-client search --file-cabinet Archive Customer=Foo\* --download document --annotations
```
> Note: `--annotations` forces the download as a PDF with annotations embedded. Without this flag, the document is downloaded in its original format without annotations.
Downloading a specific document by ID (new in v0.6.1):
```console
$ dw-client get --file-cabinet Archive --id 123456
```
Downloading attachments of a specific document:
```console
# Download document itself (original format) to stdout:
$ dw-client get --file-cabinet Archive --id 123456 --attachment document > output.pdf
# Download specific attachment:
$ dw-client get --file-cabinet Archive --id 123456 --attachment ATTACHMENT_ID --output my_file.pdf
# Download all attachments to a directory:
$ dw-client get --file-cabinet Archive --id 123456 --attachment "*" --output ./downloads/
```
Creating and updating documents:
```console
# Create a new document with index fields:
$ dw-client create --file-cabinet Archive --file invoice.pdf Subject="New Invoice" Amount=100.50
# Update index fields of an existing document:
$ dw-client update --file-cabinet Archive --id 123456 Status=Approved
# Add an attachment to a document:
$ dw-client attach --file-cabinet Archive --id 123456 --file supplement.pdf
# Remove an attachment:
$ dw-client detach --file-cabinet Archive --id 123456 --attachment-id ATTACHMENT_ID
```
Downloading attachments (or sections):
```console
$ dw-client search --file-cabinet Archive DocNo=123456 --download attachments
```
Some information about your DocuWare installation:
```console
$ dw-client info
```
Listing all organizations, file cabinets and dialogs at once:
```console
$ dw-client list
```
A more specific list, only one file cabinet:
```console
$ dw-client list --file-cabinet Archive
```
You can also display a (partial) selection of the contents of individual fields:
```console
$ dw-client list --file-cabinet Archive --dialog custom --field DocNo
```
## Further reading
* Entry point to [DocuWare's official documentation][2] of the REST API.
* Notable endpoint: `/DocuWare/Platform/Content/PlatformLinkModel.pdf`
## License
This work is released under the BSD 3-clause license. You may use and
redistribute this software as long as the copyright notice is preserved.
[1]: https://docuware.com/
[2]: https://developer.docuware.com/rest/index.html
| text/markdown | Stefan Schönberger | mail@sniner.dev | null | null | BSD-3-Clause | null | [
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Lan... | [] | https://github.com/sniner/docuware-client | null | >=3.9 | [] | [] | [] | [
"httpx[http2]<0.29.0,>=0.28.1"
] | [] | [] | [] | [
"Repository, https://github.com/sniner/docuware-client"
] | poetry/2.2.1 CPython/3.14.3 Linux/6.18.9-arch1-2 | 2026-02-18T19:17:03.825446 | docuware_client-0.6.1.tar.gz | 30,271 | ba/16/906b5ed446e638ab406b9d1962e88c7d44c318e54b5fe18b39981b4e6de6/docuware_client-0.6.1.tar.gz | source | sdist | null | false | 11c7195d020090f212342d63f33e4a8f | dbfe227e3dcf404a4f0baed4c5e15e83c3642bfd86ef02f3d80e3b92596083c2 | ba16906b5ed446e638ab406b9d1962e88c7d44c318e54b5fe18b39981b4e6de6 | null | [] | 275 |
2.4 | enyal | 0.7.5 | Persistent, queryable memory for AI coding agents | # Enyal
**Persistent, queryable memory for AI coding agents.**
Enyal gives AI agents like Claude Code durable context that survives session restarts. Every conversation becomes accumulated institutional knowledge—facts, preferences, decisions, and conventions that persist and grow.
## Features
- **Persistent Memory**: Context survives restarts, crashes, and process termination
- **Semantic Search**: Find relevant context using natural language queries (768-dim embeddings via nomic-embed-text-v1.5)
- **Knowledge Graph**: Link related entries with relationships (supersedes, depends_on, conflicts_with, relates_to)
- **Validity Tracking**: Automatically filter superseded entries and flag conflicts
- **Entry Versioning**: Full history of changes with automatic version creation
- **Usage Analytics**: Track how context is accessed and used over time
- **Health Monitoring**: Get insights into stale, orphan, and conflicting entries
- **Hierarchical Scoping**: Global → workspace → project → file context inheritance
- **Fully Offline**: Zero network calls during operation
- **Cross-Platform**: macOS (Intel + Apple Silicon), Linux, and Windows
- **MCP Compatible**: Works with Claude Code, Cursor, Windsurf, Kiro, and any MCP client
## Quick Start
Get up and running in under 2 minutes:
### 1. Install
```bash
# Using uv (recommended)
uv pip install enyal --system
# Or using pip
pip install enyal
```
### 2. Configure Your MCP Client
**Universal configuration** (works with Claude Code, Cursor, Windsurf, Kiro):
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"]
}
}
}
```
**For macOS Intel users** (requires Python 3.11 or 3.12):
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["--python", "3.12", "enyal", "serve"]
}
}
}
```
### 3. Start Using
```
You: Remember that this project uses pytest for all testing
Assistant: [calls enyal_remember] Stored context about testing framework
You: What testing framework should I use?
Assistant: [calls enyal_recall] Based on stored context, this project uses pytest.
```
## Platform Support
| Platform | Python 3.11 | Python 3.12 | Python 3.13 |
|----------|-------------|-------------|-------------|
| macOS Apple Silicon | Supported | Supported | Supported |
| macOS Intel | Supported | Supported | Not supported* |
| Linux | Supported | Supported | Supported |
| Windows | Supported | Supported | Supported |
*macOS Intel + Python 3.13 is not supported due to PyTorch ecosystem constraints.
## Installation Methods
### Method 1: uv (Recommended)
```bash
# Install globally
uv pip install enyal --system
# Run server
enyal serve
# With model preloading for faster first query
enyal serve --preload
```
### Method 2: pip
```bash
# Install globally
pip install enyal
# Run server
enyal serve
```
### Method 3: pipx
```bash
# Install in isolated environment
pipx install enyal
# Run server
enyal serve
```
### Method 4: uvx (Run without installing)
For ephemeral execution without permanent installation:
```bash
# Most platforms (auto-selects Python)
uvx enyal serve
# macOS Intel (explicit Python version)
uvx --python 3.12 enyal serve
```
Note: `uvx` runs the package in a temporary environment each time. For persistent installation, use Method 1, 2, or 3.
## MCP Integration
Enyal works with any MCP-compatible client. The configuration is the same across platforms—only the command may vary for macOS Intel.
### Claude Code
**File locations:**
- Project: `.mcp.json` (in project root)
- User: `~/.claude/.mcp.json`
**Standard configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"],
"env": {
"ENYAL_DB_PATH": "~/.enyal/context.db"
}
}
}
}
```
**macOS Intel configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["--python", "3.12", "enyal", "serve"],
"env": {
"ENYAL_DB_PATH": "~/.enyal/context.db"
}
}
}
}
```
**CLI setup:**
```bash
# Standard
claude mcp add-json enyal '{"command":"uvx","args":["enyal","serve"]}'
# macOS Intel
claude mcp add-json enyal '{"command":"uvx","args":["--python","3.12","enyal","serve"]}'
```
### Claude Desktop
**File locations:**
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
**Configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"],
"env": {
"ENYAL_DB_PATH": "~/.enyal/context.db"
}
}
}
}
```
### Cursor
**File locations:**
- Global: `~/.cursor/mcp.json`
- Project: `.cursor/mcp.json`
**Configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"],
"env": {
"ENYAL_DB_PATH": "~/.enyal/context.db"
}
}
}
}
```
**UI setup:** File → Preferences → Cursor Settings → MCP
### Windsurf
**File location:** `~/.codeium/windsurf/mcp_config.json`
**Configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"],
"env": {
"ENYAL_DB_PATH": "~/.enyal/context.db"
}
}
}
}
```
**UI setup:** Windsurf Settings → Cascade → MCP, or use the Plugin Store
### Kiro
**File locations:**
- Global: `~/.kiro/settings/mcp.json`
- Project: `.kiro/settings/mcp.json`
**Configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"],
"env": {
"ENYAL_DB_PATH": "~/.enyal/context.db"
},
"autoApprove": ["enyal_recall", "enyal_stats", "enyal_get"]
}
}
}
```
**UI setup:** Click the Kiro ghost tab → MCP Servers → "+"
See [docs/INTEGRATIONS.md](docs/INTEGRATIONS.md) for detailed platform-specific guides.
## Available Tools
### Core Tools
| Tool | Description |
|------|-------------|
| **enyal_remember** | Store new context with optional duplicate detection, conflict detection, and auto-linking |
| **enyal_recall** | Semantic search with validity filtering (excludes superseded entries by default) |
| **enyal_recall_by_scope** | Scope-aware search that automatically finds context relevant to the current file/project |
| **enyal_forget** | Remove or deprecate context (soft-delete by default, hard-delete optional) |
| **enyal_update** | Update existing entries (content, confidence, tags) - automatically creates version |
| **enyal_get** | Retrieve a specific entry by ID with full metadata |
| **enyal_stats** | Get usage statistics and health metrics |
### Knowledge Graph Tools
| Tool | Description |
|------|-------------|
| **enyal_link** | Create relationships between entries (relates_to, supersedes, depends_on, conflicts_with) |
| **enyal_unlink** | Remove a relationship between entries |
| **enyal_edges** | Get all relationships for an entry |
| **enyal_traverse** | Walk the knowledge graph from an entry |
| **enyal_impact** | Find all entries that depend on a given entry |
### Intelligence Tools
| Tool | Description |
|------|-------------|
| **enyal_health** | Get graph health metrics (stale, orphan, conflicting entries) |
| **enyal_review** | Get entries needing review (stale, orphan, or conflicted) |
| **enyal_history** | Get version history for an entry |
| **enyal_analytics** | Get usage analytics (recall frequency, top accessed entries) |
### Content Types
| Type | Use For | Example |
|------|---------|---------|
| `fact` | Objective information | "The database uses PostgreSQL 15" |
| `preference` | User/team preferences | "Prefer tabs over spaces" |
| `decision` | Recorded decisions | "Chose React over Vue for frontend" |
| `convention` | Coding standards | "All API endpoints follow REST naming" |
| `pattern` | Code patterns | "Error handling uses Result<T, E> pattern" |
### Scope Levels
| Scope | Applies To | Example Path |
|-------|------------|--------------|
| `global` | All projects | (none) |
| `workspace` | Directory of projects | `/Users/dev/projects` |
| `project` | Single project | `/Users/dev/myproject` |
| `file` | Specific file | `/Users/dev/myproject/src/auth.py` |
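How scope inheritance resolves can be sketched as follows. This is my own illustration of the matching rule implied by the table above, not Enyal's internal code, and `scope_matches` is a hypothetical helper:

```python
def scope_matches(scope_level: str, scope_path: str, file_path: str) -> bool:
    # A global entry applies everywhere; other scopes apply when the queried
    # file path equals the scope path or lies underneath it.
    if scope_level == "global":
        return True
    return file_path == scope_path or file_path.startswith(scope_path.rstrip("/") + "/")

print(scope_matches("project", "/Users/dev/myproject", "/Users/dev/myproject/src/auth.py"))  # True
print(scope_matches("project", "/Users/dev/other", "/Users/dev/myproject/src/auth.py"))      # False
```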
### Relationship Types
| Type | Use For | Example |
|------|---------|---------|
| `relates_to` | General semantic relationship | "Testing guide" relates to "pytest conventions" |
| `supersedes` | Entry A replaces entry B | New decision supersedes old decision |
| `depends_on` | Entry A requires entry B | Feature depends on architecture decision |
| `conflicts_with` | Entries contradict each other | "Use tabs" conflicts with "Use spaces" |
## CLI Usage
Enyal provides a command-line interface for direct interaction:
```bash
# Store context
enyal remember "Always use pytest for testing" --type convention --scope project
# Search context
enyal recall "testing framework" --limit 5
# Get entry details
enyal get <entry-id>
# View statistics
enyal stats
# Remove context
enyal forget <entry-id>
# Run MCP server
enyal serve --preload
```
**Options:**
- `--db PATH` — Custom database path
- `--json` — Output in JSON format
See [docs/CLI.md](docs/CLI.md) for complete CLI reference.
## Python Library
```python
from enyal.core.store import ContextStore
from enyal.core.retrieval import RetrievalEngine
from enyal.models.context import ContextType, ScopeLevel
# Initialize store
store = ContextStore("~/.enyal/context.db")
retrieval = RetrievalEngine(store)
# Remember something
entry_id = store.remember(
content="Always use pytest for testing in this project",
content_type=ContextType.CONVENTION,
scope_level=ScopeLevel.PROJECT,
scope_path="/Users/dev/myproject",
tags=["testing", "pytest"]
)
# Remember with duplicate detection
result = store.remember(
content="Use pytest for all testing", # Similar to existing
check_duplicate=True, # Enable duplicate checking
duplicate_threshold=0.85, # Similarity threshold
on_duplicate="reject" # "reject", "merge", or "store"
)
# Returns dict: {"entry_id": "...", "action": "existing", "similarity": 0.92}
# Recall relevant context (hybrid semantic + keyword search)
results = retrieval.search(
query="how should I write tests?",
limit=5,
min_confidence=0.5
)
for result in results:
print(f"{result.score:.2f}: {result.entry.content}")
# Scope-aware search (file → project → workspace → global)
results = retrieval.search_by_scope(
query="testing conventions",
file_path="/Users/dev/myproject/src/auth.py",
limit=5
)
# Find similar entries (useful for deduplication checks)
similar = store.find_similar(
content="pytest testing conventions",
threshold=0.8,
limit=3
)
# Update context (automatically creates a version record)
store.update(entry_id, confidence=0.9, tags=["testing", "pytest", "unit-tests"])
# Get specific entry
entry = store.get(entry_id)
# Get statistics
stats = store.stats()
print(f"Total entries: {stats.total_entries}")
```
### Knowledge Graph
```python
from enyal.models.context import EdgeType
# Create entries
old_decision = store.remember(content="Use Python 3.10", content_type="decision")
new_decision = store.remember(content="Use Python 3.13", content_type="decision")
# Link entries (new supersedes old)
store.link(new_decision, old_decision, EdgeType.SUPERSEDES)
# Search with validity filtering (superseded entries excluded by default)
results = retrieval.search("Python version", exclude_superseded=True)
# Include superseded entries with metadata
results = retrieval.search("Python version", exclude_superseded=False)
for r in results:
if r.is_superseded:
print(f"SUPERSEDED: {r.entry.content} (by {r.superseded_by})")
# Traverse the graph
related = store.traverse(new_decision, max_depth=2)
# Find what depends on an entry
dependents = store.traverse(new_decision, direction="incoming",
edge_types=[EdgeType.DEPENDS_ON])
```
### Versioning & History
```python
# Every remember() creates an initial version
entry_id = store.remember(content="Initial approach", content_type="decision")
# Every update() creates a new version
store.update(entry_id, content="Revised approach")
store.update(entry_id, content="Final approach")
# Get version history
history = store.get_history(entry_id)
for version in history:
print(f"v{version['version']}: {version['change_type']} - {version['content']}")
# Output:
# v3: updated - Final approach
# v2: updated - Revised approach
# v1: created - Initial approach
```
### Analytics & Health
```python
# Track usage (called automatically during recall)
store.track_usage(entry_id, "recall", query="approach", result_rank=1)
# Get analytics
analytics = store.get_analytics(days=30)
print(f"Top recalled: {analytics['top_recalled']}")
# Health check
health = store.health_check()
print(f"Health score: {health['health_score']:.0%}")
print(f"Stale entries: {health['stale_entries']}")
print(f"Orphan entries: {health['orphan_entries']}")
print(f"Conflicts: {health['unresolved_conflicts']}")
# Get entries needing review
stale = store.get_stale_entries(days_old=180)
orphans = store.get_orphan_entries()
conflicts = store.get_conflicted_entries()
```
## Configuration
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `ENYAL_DB_PATH` | `~/.enyal/context.db` | Database file location |
| `ENYAL_PRELOAD_MODEL` | `false` | Pre-load embedding model at startup |
| `ENYAL_LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR) |
| `ENYAL_SSL_CERT_FILE` | (system) | Path to CA certificate bundle (for corporate networks) |
| `ENYAL_SSL_VERIFY` | `true` | Enable/disable SSL verification (set `false` only as last resort) |
| `ENYAL_MODEL_PATH` | (none) | Path to local pre-downloaded model |
| `ENYAL_HF_ENDPOINT` | (none) | Custom HuggingFace Hub endpoint URL (e.g., Artifactory proxy) |
| `ENYAL_OFFLINE_MODE` | `false` | Prevent network calls (use with cached/local model) |
### Database Location
The default database is stored at `~/.enyal/context.db`. This single SQLite file contains:
- All context entries and metadata
- Vector embeddings for semantic search
- Full-text search index
## Troubleshooting
### Installation Fails on macOS Intel
**Symptom:** Error about torch/PyTorch wheels not found
**Cause:** PyTorch doesn't provide wheels for macOS Intel + Python 3.13
**Solution:** Use Python 3.11 or 3.12:
```bash
# Install with specific Python version
uv pip install enyal --python 3.12 --system
```
### MCP Server Not Connecting
1. **Check uvx is available:**
```bash
uvx --version
```
2. **Test server manually:**
```bash
uvx enyal serve
# Should start without errors, waiting for MCP protocol
```
3. **Enable debug logging:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve", "--log-level", "DEBUG"]
}
}
}
```
4. **Check server status:**
- Claude Code: `/mcp` command
- Cursor: Settings → MCP → check status
- Windsurf: Cascade → Plugins
- Kiro: Ghost tab → MCP Servers
### Slow First Query
The first query loads the embedding model (~80MB). This takes ~1-2 seconds. Subsequent queries are fast (~34ms).
**To pre-load the model at startup:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve", "--preload"]
}
}
}
```
### Database Locked Error
If you see "database is locked" errors, ensure only one MCP server instance is running per database file. Use different `ENYAL_DB_PATH` values for different projects if needed.
### Permission Errors
On macOS/Linux, ensure the database directory exists and is writable:
```bash
mkdir -p ~/.enyal
chmod 755 ~/.enyal
```
### SSL Certificate Errors (Corporate Networks)
**Symptom:** Error containing "SSL: CERTIFICATE_VERIFY_FAILED" or "self signed certificate in certificate chain"
**Cause:** Corporate networks with SSL inspection (Zscaler, BlueCoat, etc.) inject enterprise CA certificates that Python doesn't recognize by default.
**Quick Fix:**
```bash
# Option 1: Point to your corporate CA bundle (recommended)
export ENYAL_SSL_CERT_FILE=/path/to/corporate-ca-bundle.crt
enyal model download
# Option 2: Pre-download model on unrestricted network, then use offline
export ENYAL_OFFLINE_MODE=true
```
**For MCP configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"],
"env": {
"ENYAL_SSL_CERT_FILE": "/path/to/corporate-ca-bundle.crt"
}
}
}
}
```
**Check your SSL configuration:**
```bash
enyal model status
```
See [docs/SSL_TROUBLESHOOTING.md](docs/SSL_TROUBLESHOOTING.md) for detailed troubleshooting guide.
### Custom Model Registry (Artifactory)
If your organization uses Artifactory or another proxy to mirror HuggingFace models, set `ENYAL_HF_ENDPOINT` to redirect all model downloads:
```bash
export ENYAL_HF_ENDPOINT=https://artifactory.corp.com/artifactory/api/huggingface
enyal model download
```
**For MCP configuration:**
```json
{
"mcpServers": {
"enyal": {
"command": "uvx",
"args": ["enyal", "serve"],
"env": {
"ENYAL_HF_ENDPOINT": "https://artifactory.corp.com/artifactory/api/huggingface",
"ENYAL_SSL_CERT_FILE": "/path/to/corporate-ca-bundle.crt"
}
}
}
}
```
When `ENYAL_HF_ENDPOINT` is set, Enyal automatically:
- Sets `HF_ENDPOINT` before `huggingface_hub` is imported
- Disables HF Xet storage (`HF_HUB_DISABLE_XET=1`) since Xet bypasses HTTP proxies
- Increases download timeouts for first-fetch latency through the proxy
No additional authentication configuration is needed — Artifactory handles upstream auth transparently.
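The env-mirroring behavior described above can be sketched in a few lines. This is a hypothetical helper illustrating the documented effect, not Enyal's actual code:

```python
def apply_hf_proxy_env(environ: dict) -> None:
    # Mirror ENYAL_HF_ENDPOINT into the variables that huggingface_hub
    # reads; this must happen before that library is imported.
    endpoint = environ.get("ENYAL_HF_ENDPOINT")
    if endpoint:
        environ.setdefault("HF_ENDPOINT", endpoint)
        environ["HF_HUB_DISABLE_XET"] = "1"  # Xet bypasses HTTP proxies

env = {"ENYAL_HF_ENDPOINT": "https://artifactory.corp.com/artifactory/api/huggingface"}
apply_hf_proxy_env(env)
print(env["HF_ENDPOINT"])
```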
## Architecture
Enyal uses a unified SQLite database with:
- **Relational storage** for metadata and attributes
- **sqlite-vec** for vector similarity search (384-dim embeddings)
- **FTS5** for keyword search
- **Knowledge graph** with typed edges (supersedes, depends_on, conflicts_with, relates_to)
- **Version history** for change tracking
- **Usage analytics** for access patterns
- **WAL mode** for concurrent access
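Two of these storage choices can be demonstrated with Python's stdlib `sqlite3` alone. The sketch below uses a throwaway database, since the actual table names inside `context.db` are an implementation detail:

```python
import os
import sqlite3
import tempfile

# Throwaway database illustrating WAL journaling (concurrent readers)
# and an FTS5 virtual table for keyword search.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.execute("CREATE VIRTUAL TABLE entries USING fts5(content)")
con.execute("INSERT INTO entries(content) VALUES ('always use pytest for testing')")
hit = con.execute("SELECT content FROM entries WHERE entries MATCH 'pytest'").fetchone()
print(mode, hit)
```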
See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for detailed design decisions.
## Development
```bash
# Clone repository
git clone https://github.com/seancorkum/enyal.git
cd enyal
# Install with dev dependencies
uv sync --all-extras
# Run tests
uv run pytest
# Type checking
uv run mypy src/enyal
# Linting
uv run ruff check src/enyal
```
## Performance
Benchmarked on Intel Mac with Python 3.12:
| Metric | Target (p95) | Measured (p95) | Status |
|--------|--------------|----------------|--------|
| Cold start (model load + first query) | <2000ms | ~1500ms | ✓ |
| Warm query latency | <50ms | ~34ms | ✓ |
| Write latency | <50ms | ~34ms | ✓ |
| Concurrent reads (4 threads) | <150ms | ~85ms | ✓ |
| Memory (100k entries estimated) | <500MB | ~35MB | ✓ |
**Embedding model:** [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) (22M params, 384 dimensions)
Run benchmarks:
```bash
uv run python benchmarks/benchmark_performance.py
```
## License
MIT
| text/markdown | Sean Corkum | null | null | null | MIT | agents, ai, context, llm, mcp, memory | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Pyth... | [] | null | null | >=3.11 | [] | [] | [] | [
"einops>=0.7.0",
"fastmcp>=2.0.0",
"numpy<2.0.0,>=1.26.0; python_version < \"3.13\"",
"numpy>=2.0.0; python_version >= \"3.13\"",
"pydantic>=2.0.0",
"sentence-transformers>=3.0.0",
"sqlite-vec>=0.1.0",
"truststore>=0.9.0; extra == \"corporate\"",
"mypy>=1.10.0; extra == \"dev\"",
"pytest-asyncio>=... | [] | [] | [] | [
"Homepage, https://github.com/seancorkum/enyal",
"Documentation, https://github.com/seancorkum/enyal#readme",
"Repository, https://github.com/seancorkum/enyal",
"Issues, https://github.com/seancorkum/enyal/issues"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T19:16:54.093422 | enyal-0.7.5.tar.gz | 291,758 | 6d/89/8bba19bf02ee91fe136d6952adbec521c447228b70d485c4b87ee8fef3ab/enyal-0.7.5.tar.gz | source | sdist | null | false | bbfc7b2260b870910404c9d17e1d3017 | b8b032b02949aaeaf6d4b8ef365b8a4b61602c808bbcc1a915cd238b33a61f9c | 6d898bba19bf02ee91fe136d6952adbec521c447228b70d485c4b87ee8fef3ab | null | [
"LICENSE"
] | 242 |
2.4 | svg-ultralight | 0.98.0 | a sensible way to create svg files with Python | # svg_ultralight
The most straightforward way to create SVG files with Python.
## Four principal functions:
from svg_ultralight import new_svg_root, write_svg, write_png_from_svg, write_png
## One convenience:
from svg_ultralight import NSMAP
### new_svg_root
x_: Optional[float],
y_: Optional[float],
width_: Optional[float],
height_: Optional[float],
    pad_: float = 0,
    dpu_: float = 1,
    nsmap: Optional[Dict[str, str]] = None,  # svg_ultralight.NSMAP if None
**attributes: Union[float, str],
-> etree.Element
Create an svg root element from viewBox style arguments and provide the necessary svg-specific attributes and namespaces. This is your window onto the scene.
Three ways to call:
1. The trailing-underscore arguments are the same you'd use to create a `rect` element (plus `pad_` and `dpu_`). `new_svg_root` will infer `viewBox`, `width`, and `height` svg attributes from these values.
2. Use the svg attributes you already know: `viewBox`, `width`, `height`, etc. These will be written to the xml file.
3. Of course, you can combine 1. and 2. if you know what you're doing.
See `namespaces` below.
* `x_`: x value in upper-left corner
* `y_`: y value in upper-left corner
* `width_`: width of viewBox
* `height_`: height of viewBox
* `pad_`: the one small convenience I've provided. Optionally increase the viewBox by `pad_` in all directions.
* `dpu_`: pixels per viewBox unit for output png images.
* `nsmap`: namespaces. (defaults to svg_ultralight.NSMAP). Available as an argument should you wish to add additional namespaces. To do this, add items to NSMAP then call with `nsmap=NSMAP`.
* `**attributes`: the trailing-underscore arguments are an *optional* shortcut for creating a scene. The entire svg interface is available to you through kwargs. See `A few helpers` below for details on attribute-name translation between Python and xml (the short version: `this_name` becomes `this-name` and `this_` becomes `this`)
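The attribute-name translation can be sketched in plain Python. This is my own illustration of the stated rule, not the library's implementation:

```python
def translate_attribute(name: str) -> str:
    # Per the rule above: a trailing underscore is dropped (this_ -> this),
    # and remaining underscores become hyphens (this_name -> this-name).
    if name.endswith("_"):
        name = name[:-1]
    return name.replace("_", "-")

print(translate_attribute("stroke_width"))  # stroke-width
print(translate_attribute("class_"))        # class
```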
### namespaces (svg_ultralight.NSMAP)
`new_svg_root` will create a root with several available namespaces.
* `"dc": "http://purl.org/dc/elements/1.1/"`
* `"cc": "http://creativecommons.org/ns#"`
* `"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#"`
* `"svg": "http://www.w3.org/2000/svg"`
* `"xlink": "http://www.w3.org/1999/xlink"`
* `"sodipodi": "http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"`
* `"inkscape": "http://www.inkscape.org/namespaces/inkscape"`
I have made these available to you as `svg_ultralight.NSMAP`
### write_svg
```python
write_svg(
    svg: str,
    xml: etree.Element,
    stylesheet: Optional[str] = None,
    do_link_css: bool = False,
    **tostring_kwargs,
) -> str
```
Write an xml element as an svg file. This will link or inline your css code and insert the necessary declaration, doctype, and processing instructions.
* `svg`: path to output file (include extension .svg)
* `xml`: root node of your svg geometry (created by `new_svg_root`)
* `stylesheet`: optional path to a css stylesheet
* `do_link_css`: if True, link to the stylesheet; else (default) write the contents of the stylesheet into the svg (ignored if `stylesheet` is None). Linking to the stylesheet is more elegant, but inlining *always* works.
* `**tostring_kwargs`: optional kwarg arguments for `lxml.etree.tostring`. Passing `xml_declaration=True` by itself will create an xml declaration with encoding set to UTF-8 and an svg DOCTYPE. These defaults can be overridden with keyword arguments `encoding` and `doctype`. If you don't know what this is, you can probably get away without it.
* `returns`: for convenience, returns svg filename (`svg`)
* `effects`: creates svg file at `svg`
### write_png_from_svg
```python
write_png_from_svg(
    inkscape_exe: str,
    svg: str,
    png: Optional[str],
) -> str
```
Convert an svg file to a png. Python does not have a library for this. That has an upside, as any library would be one more set of svg implementation idiosyncrasies we'd have to deal with. Inkscape will convert the file. This function provides the necessary command-line arguments.
* `inkscape_exe`: path to inkscape.exe
* `svg`: path to svg file
* `png`: optional path to png output (if not given, png name will be inferred from `svg`: `'name.svg'` becomes `'name.png'`)
* `return`: png filename
* `effects`: creates png file at `png` (or infers png path and filename from `svg`)
### write_png
```python
write_png(
    inkscape_exe: str,
    png: str,
    xml: etree.Element,
    stylesheet: Optional[str] = None,
) -> str
```
Create a png without writing an initial svg to your filesystem. This is not faster (it may be slightly slower), but it may be important when writing many images (animation frames) to your filesystem.
* `inkscape_exe`: path to inkscape.exe
* `png`: path to output file (include extension .png)
* `xml`: root node of your svg geometry (created by `new_svg_root`)
* `stylesheet`: optional path to a css stylesheet
* `returns`: for convenience, returns png filename (`png`)
* `effects`: creates png file at `png`
## A few helpers:
```python
from svg_ultralight.constructors import new_element, new_sub_element
```
I do want to keep this ultralight and avoid creating some pseudo scripting language between Python and lxml, but here are two very simple, very optional functions to save you having to `str()` every argument to `etree.Element`.
### constructors.new_element
```python
new_element(
    tag: str,
    **params: Union[str, float],
) -> etree.Element
```
Python allows underscores in variable names; xml uses dashes.
Python understands numbers; xml wants strings.
This is a convenience function to swap `"_"` for `"-"` and `10.2` for `"10.2"` before creating an xml element.
Translates numbers to strings
```python
>>> elem = new_element('line', x1=0, y1=0, x2=5, y2=5)
>>> etree.tostring(elem)
b'<line x1="0" y1="0" x2="5" y2="5"/>'
```
Translates underscores to hyphens
```python
>>> elem = new_element('line', stroke_width=1)
>>> etree.tostring(elem)
b'<line stroke-width="1"/>'
```
Removes trailing underscores. You'll almost certainly want to use reserved names like `class` as svg parameters. This can be done by passing the name with a trailing underscore.
```python
>>> elem = new_element('line', class_='thick_line')
>>> etree.tostring(elem)
b'<line class="thick_line"/>'
```
Special handling for a 'text' argument. Places value between element tags.
```python
>>> elem = new_element('text', text='please star my project')
>>> etree.tostring(elem)
b'<text>please star my project</text>'
```
### constructors.new_sub_element
```python
new_sub_element(
    parent: etree.Element,
    tag: str,
    **params: Union[str, float],
) -> etree.Element
```
As above, but creates a subelement.
```python
>>> parent = etree.Element('g')
>>> _ = new_sub_element(parent, 'rect')
>>> etree.tostring(parent)
b'<g><rect/></g>'
```
### update_element
Another way to push params through the same name / float translation that `new_element` uses, this time onto an existing element. Again unnecessary, but potentially helpful. Easily understood from the code or docstrings.
## Extras:
### query.map_elems_to_bounding_boxes
Python can *create* an svg file, but it cannot render one, so it cannot measure the resulting geometry. Inkscape can. Inkscape has a command-line interface capable of reading an svg file and returning some limited information. This is the only way I know for a Python program to:
1. create an svg file (optionally without writing to filesystem)
2. query the svg file for bounding-box information
3. create an adjusted svg file.
This would be necessary for, e.g., algorithmically fitting text in a box.
```python
from svg_ultralight.queries import map_elems_to_bounding_boxes
```
You can get a tiny bit more sophisticated with Inkscape bounding-box queries, but not much. This will give you pretty much all you can get out of it.
### animate.write_gif
Create an animated gif from a sequence of png filenames. This is a Pillow one-liner, but it's convenient for me to have it, so it might be convenient for you. Requires pillow, which is not a project dependency.
```python
from svg_ultralight.animate import write_gif
```
[Full Documentation and Tutorial](https://shayallenhill.com/svg-with-css-in-python/)
| text/markdown | Shay Hill | Shay Hill <shay_public@hotmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"cssutils>=2.11.1",
"fonttools>=4.60.1",
"lxml>=6.0.2",
"paragraphs>=1.0.1",
"pillow>=11.3.0",
"pyphen>=0.17.2",
"svg-path-data>=0.5.3",
"types-lxml>=2025.8.25",
"typing-extensions>=4.15.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T19:15:50.371424 | svg_ultralight-0.98.0.tar.gz | 60,904 | 00/d8/606c34f2cf8cb7f0b7fcc1ef2f93d2df4ca776bdfd001bff182baaf34a78/svg_ultralight-0.98.0.tar.gz | source | sdist | null | false | efa69ec862c785969aa8ca55a778c3de | 4552562070f89d87893e3e90bb06e4a97e50af34fb9067e5a960664c6e79ab11 | 00d8606c34f2cf8cb7f0b7fcc1ef2f93d2df4ca776bdfd001bff182baaf34a78 | MIT | [] | 263 |
2.4 | reachy-mini-bwc | 1.3.1.post2 | Brain Wave Collective maintained fork of reachy-mini (originally developed by Pollen Robotics). | # ⚠️ Notice: This is a Brain Wave Collective Maintained Fork
This repository is a **fork** of the original upstream project.
It exists to maintain compatibility with Brain Wave Collective systems and may include dependency or version adjustments that differ from upstream.
If you are not using Brain Wave Collective software, you likely want the official upstream project instead.
All original licensing and attribution are preserved.
---
# Reachy Mini 🤖
[](https://huggingface.co/docs/reachy_mini/)
[](https://discord.gg/Y7FgMqHsub)
**Reachy Mini is an open-source, expressive robot made for hackers and AI builders.**
🛒 [**Buy Reachy Mini**](https://www.hf.co/reachy-mini/)
[](https://www.pollen-robotics.com/reachy-mini/)
## ⚡️ Build and start your own robot
**Choose your platform to access the specific guide:**
| **🤖 Reachy Mini (Wireless)** | **🔌 Reachy Mini Lite** | **💻 Simulation** |
| :---: | :---: | :---: |
| The full autonomous experience.<br>Raspberry Pi 4 + Battery + WiFi. | The developer version.<br>USB connection to your computer. | No hardware required.<br>Prototype in MuJoCo. |
| 👉 [**Go to Wireless Guide**](https://huggingface.co/docs/reachy_mini/platforms/reachy_mini/get_started) | 👉 [**Go to Lite Guide**](https://huggingface.co/docs/reachy_mini/platforms/reachy_mini_lite/get_started) | 👉 [**Go to Simulation**](https://huggingface.co/docs/reachy_mini/platforms/simulation/get_started) |
> ⚡ **Pro tip:** Install [uv](https://docs.astral.sh/uv/getting-started/installation/) for 10-100x faster app installations (auto-detected, falls back to `pip`).
<br>
## 📱 Apps & Ecosystem
Reachy Mini comes with an app store powered by Hugging Face Spaces. You can install these apps directly from your robot's dashboard with one click!
* **🗣️ [Conversation App](https://huggingface.co/spaces/pollen-robotics/reachy_mini_conversation_app):** Talk naturally with Reachy Mini (powered by LLMs).
* **📻 [Radio](https://huggingface.co/spaces/pollen-robotics/reachy_mini_radio):** Listen to the radio with Reachy Mini!
* **👋 [Hand Tracker](https://huggingface.co/spaces/pollen-robotics/hand_tracker_v2):** The robot follows your hand movements in real-time.
👉 [**Browse all apps on Hugging Face**](https://hf.co/reachy-mini/#/apps)
<br>
## 🚀 Getting Started with Reachy Mini SDK
### User guides
* **[Installation](https://huggingface.co/docs/reachy_mini/SDK/installation)**: 5 minutes to set up your computer
* **[Quickstart Guide](https://huggingface.co/docs/reachy_mini/SDK/quickstart)**: Run your first behavior on Reachy Mini
* **[Python SDK](https://huggingface.co/docs/reachy_mini/SDK/python-sdk)**: Learn to move, see, speak, and hear.
* **[AI Integrations](https://huggingface.co/docs/reachy_mini/SDK/integration)**: Connect LLMs, build Apps, and publish to Hugging Face.
* **[Core Concepts](https://huggingface.co/docs/reachy_mini/SDK/core-concept)**: Architecture, coordinate systems, and safety limits.
* 🤗[**Share your app with the community**](https://huggingface.co/blog/pollen-robotics/make-and-publish-your-reachy-mini-apps)
* 📂 [**Browse the Examples Folder**](examples)
### 🤖 AI-Assisted Development
Using an AI coding agent (Claude Code, Codex, Copilot, etc.)? You can start building apps right away. Paste this prompt to your agent:
> *I'd like to create a Reachy Mini app. Start by reading https://github.com/pollen-robotics/reachy_mini/blob/develop/agents.md*
This [**agents.md**](agents.md) guide gives AI agents everything they need: SDK patterns, best practices, example apps, and step-by-step skills.
### Quick Look
After [installing the SDK](https://huggingface.co/docs/reachy_mini/SDK/installation), once your robot is awake, you can control it in just **a few lines of code**:
```python
from reachy_mini import ReachyMini
from reachy_mini.utils import create_head_pose
with ReachyMini() as mini:
    # Look up and tilt head
    mini.goto_target(
        head=create_head_pose(z=10, roll=15, degrees=True, mm=True),
        duration=1.0
    )
```
<br>
## 🛠 Hardware Overview
Reachy Mini robots are sold as kits and generally take **2 to 3 hours** to assemble. Detailed step-by-step guides are available in the platform-specific folders linked above.
* **Reachy Mini (Wireless):** Runs onboard (RPi 4), autonomous, includes IMU. [See specs](https://huggingface.co/docs/reachy_mini/platforms/reachy_mini/hardware).
* **Reachy Mini Lite:** Runs on your PC, powered via wall outlet. [See specs](https://huggingface.co/docs/reachy_mini/platforms/reachy_mini_lite/hardware).
<br>
## ❓ Troubleshooting
Encountering an issue? 👉 **[Check the Troubleshooting & FAQ Guide](https://huggingface.co/docs/reachy_mini/troubleshooting)**
<br>
## 🤝 Community & Contributing
* **Join the Community:** Join [Discord](https://discord.gg/2bAhWfXme9) to share your moments with Reachy, build apps together, and get help.
* **Found a bug?** Open an issue on this repository.
## License
This project is licensed under the Apache 2.0 License. See the [LICENSE](LICENSE) file for details.
Hardware design files are licensed under Creative Commons BY-SA-NC.
| text/markdown | null | Pollen Robotics <contact@pollen-robotics.com> | Brain Wave Collective | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.2.5",
"scipy<2.0.0,>=1.15.3",
"reachy_mini_motor_controller>=1.5.3",
"eclipse-zenoh~=1.7.0",
"opencv-python<=5.0",
"cv2_enumerate_cameras>=1.2.1",
"psutil",
"jinja2",
"uvicorn[standard]",
"fastapi",
"pyserial",
"huggingface-hub>=1.4.0",
"sounddevice<0.6,>=0.5.1",
"soundfile==0.13... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T19:15:49.417607 | reachy_mini_bwc-1.3.1.post2.tar.gz | 3,000,522 | 30/75/1954e7044f56022282bed319f105b3202f07d319f4ab5a33d574463160be/reachy_mini_bwc-1.3.1.post2.tar.gz | source | sdist | null | false | 229dcdb149e8d9aa465e7679492a6dd0 | 017d074f8bba1f09c0cfde46bbc81b78ff77ff551697c83668f87e803fb66520 | 30751954e7044f56022282bed319f105b3202f07d319f4ab5a33d574463160be | null | [
"LICENSE"
] | 223 |
2.4 | conventional-pre-commit | 4.4.0 | A pre-commit hook that checks commit messages for Conventional Commits formatting. | # conventional-pre-commit
A [`pre-commit`](https://pre-commit.com) hook to check commit messages for
[Conventional Commits](https://conventionalcommits.org) formatting.
Works with Python >= 3.8.
## Usage
Make sure `pre-commit` is [installed](https://pre-commit.com#install).
Create a blank configuration file at the root of your repo, if needed:
```console
touch .pre-commit-config.yaml
```
Add/update `default_install_hook_types` and add a new repo entry in your configuration file:
```yaml
default_install_hook_types:
  - pre-commit
  - commit-msg

repos:
  # - repo: ...
  - repo: https://github.com/compilerla/conventional-pre-commit
    rev: <git sha or tag>
    hooks:
      - id: conventional-pre-commit
        stages: [commit-msg]
        args: []
```
Install the `pre-commit` script:
```console
pre-commit install --install-hooks
```
Make a (normal) commit :x::
```console
$ git commit -m "add a new feature"
[INFO] Initializing environment for ....
Conventional Commit......................................................Failed
- hook id: conventional-pre-commit
- duration: 0.07s
- exit code: 1
[Bad commit message] >> add a new feature
Your commit message does not follow Conventional Commits formatting
https://www.conventionalcommits.org/
```
And with the `--verbose` arg:
```console
$ git commit -m "add a new feature"
[INFO] Initializing environment for ....
Conventional Commit......................................................Failed
- hook id: conventional-pre-commit
- duration: 0.07s
- exit code: 1
[Bad commit message] >> add a new feature
Your commit message does not follow Conventional Commits formatting
https://www.conventionalcommits.org/
Conventional Commit messages follow a pattern like:
type(scope): subject
extended body
Please correct the following errors:
- Expected value for type from: build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test
Run:
git commit --edit --file=.git/COMMIT_EDITMSG
to edit the commit message and retry the commit.
```
Make a (conventional) commit :heavy_check_mark::
```console
$ git commit -m "feat: add a new feature"
[INFO] Initializing environment for ....
Conventional Commit......................................................Passed
- hook id: conventional-pre-commit
- duration: 0.05s
```
## Install with pip
`conventional-pre-commit` can also be installed and used from the command line:
```shell
pip install conventional-pre-commit
```
Then run the command line script:
```shell
conventional-pre-commit [types] input
```
- `[types]` is an optional list of Conventional Commit types to allow (e.g. `feat fix chore`)
- `input` is a file containing the commit message to check:
```shell
conventional-pre-commit feat fix chore ci test .git/COMMIT_MSG
```
Or from a Python program:
```python
from conventional_pre_commit.format import is_conventional
# prints True
print(is_conventional("feat: this is a conventional commit"))
# prints False
print(is_conventional("nope: this is not a conventional commit"))
# prints True
print(is_conventional("custom: this is a conventional commit", types=["custom"]))
```
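Under the hood, a check like this boils down to matching the first line of the commit message against a pattern. A simplified, illustrative sketch — not the package's actual regex:

```python
import re

# Default Conventional Commit types, per the help text above.
TYPES = ["build", "chore", "ci", "docs", "feat", "fix",
         "perf", "refactor", "revert", "style", "test"]

# Simplified pattern: type, optional (scope), optional "!", then ": subject".
PATTERN = re.compile(
    rf"^({'|'.join(TYPES)})"   # type
    r"(\([\w\-]+\))?"          # optional (scope)
    r"!?"                      # optional breaking-change marker
    r": .+"                    # ": subject"
)

def looks_conventional(message: str) -> bool:
    first_line = message.splitlines()[0] if message else ""
    return bool(PATTERN.match(first_line))

print(looks_conventional("feat(api): add endpoint"))  # True
print(looks_conventional("add a new feature"))        # False
```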
## Passing `args`
`conventional-pre-commit` supports a number of arguments to configure behavior:
```shell
$ conventional-pre-commit -h
usage: conventional-pre-commit [-h] [--no-color] [--force-scope] [--scopes SCOPES] [--strict] [--verbose] [types ...] input
Check a git commit message for Conventional Commits formatting.
positional arguments:
types Optional list of types to support
input A file containing a git commit message
options:
-h, --help show this help message and exit
--no-color Disable color in output.
--force-scope Force commit to have scope defined.
--scopes SCOPES List of scopes to support. Scopes should be separated by commas with no spaces (e.g. api,client).
--strict Force commit to strictly follow Conventional Commits formatting. Disallows fixup! and merge commits.
--verbose Print more verbose error output.
```
Supply arguments on the command-line, or via the pre-commit `hooks.args` property:
```yaml
repos:
  - repo: https://github.com/compilerla/conventional-pre-commit
    rev: <git sha or tag>
    hooks:
      - id: conventional-pre-commit
        stages: [commit-msg]
        args: [--strict, --force-scope, feat, fix, chore, test, custom]
```
**NOTE:** when using as a pre-commit hook, `input` is supplied automatically (with the current commit's message).
## Development
`conventional-pre-commit` comes with a [VS Code devcontainer](https://code.visualstudio.com/learn/develop-cloud/containers)
configuration to provide a consistent development environment.
With the `Remote - Containers` extension enabled, open the folder containing this repository inside Visual Studio Code.
You should receive a prompt in the Visual Studio Code window; click `Reopen in Container` to run the development environment
inside the devcontainer.
If you do not receive a prompt, or when you feel like starting from a fresh environment:
1. `Ctrl/Cmd+Shift+P` to bring up the command palette in Visual Studio Code
1. Type `Remote-Containers` to filter the commands
1. Select `Rebuild and Reopen in Container` to completely rebuild the devcontainer
1. Select `Reopen in Container` to reopen the most recent devcontainer build
## Versioning
Versioning generally follows [Semantic Versioning](https://semver.org/).
## Making a release
Releases to PyPI and GitHub are triggered by pushing a tag.
1. Ensure all changes for the release are present in the `main` branch
1. Tag with the new version: `git tag vX.Y.Z` for regular release, `git tag vX.Y.Z-preN` for pre-release
1. Push the new version tag: `git push origin vX.Y.Z`
## License
[Apache 2.0](LICENSE)
Inspired by matthorgan's [`pre-commit-conventional-commits`](https://github.com/matthorgan/pre-commit-conventional-commits).
| text/markdown | null | Compiler LLC <dev@compiler.la> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| git, pre-commit, conventional-commits | [
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"black; extra == \"dev\"",
"build; extra == \"dev\"",
"coverage; extra == \"dev\"",
"flake8; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest; extra == \"dev\"",
"setuptools_scm; extra == \"dev\""
] | [] | [] | [] | [
"code, https://github.com/compilerla/conventional-pre-commit",
"tracker, https://github.com/compilerla/conventional-pre-commit/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:14:53.042500 | conventional_pre_commit-4.4.0.tar.gz | 27,161 | eb/6a/c4c902f9526c026b8f5d59ac099028bea7acb2415d5f6603668e92dfa22c/conventional_pre_commit-4.4.0.tar.gz | source | sdist | null | false | 14616ebfd54891fa3ee687e88a1997d8 | 5426bef9039a162c3203cc7de3624954dea7120af8dec6f878d8f1c83c58af7b | eb6ac4c902f9526c026b8f5d59ac099028bea7acb2415d5f6603668e92dfa22c | null | [
"LICENSE"
] | 3,154 |
2.4 | ygg | 0.2.16 | Type-friendly utilities for moving data between Python objects, Arrow, Polars, Pandas, Spark, and Databricks | # Yggdrasil (Python)
Type-friendly utilities for moving data between Python objects, Arrow, Polars, pandas, Spark, and Databricks. The package bundles enhanced dataclasses, casting utilities, and lightweight wrappers around Databricks and HTTP clients so Python/data engineers can focus on schemas instead of plumbing.
## When to use this package
Use Yggdrasil when you need to:
- Convert payloads across dataframe engines without rewriting type logic for each backend.
- Define dataclasses that auto-coerce inputs, expose defaults, and surface Arrow schemas.
- Run Databricks SQL jobs or manage clusters with minimal boilerplate.
- Add resilient retries, concurrency helpers, and dependency guards to data pipelines.
## Prerequisites
- Python **3.10+**
- [uv](https://docs.astral.sh/uv/) for virtualenv and dependency management.
Optional extras:
- `polars`, `pandas`, `pyarrow`, and `pyspark` for engine-specific conversions.
- `databricks-sdk` for workspace, SQL, jobs, and compute helpers.
- `msal` for Azure AD authentication when using `MSALSession`.
## Installation
From the `python/` directory:
```bash
uv venv .venv
source .venv/bin/activate
uv pip install -e .[dev]
```
Extras are grouped by engine:
- `.[polars]`, `.[pandas]`, `.[spark]`, `.[databricks]` – install only the integrations you need.
- `.[dev]` – adds testing, linting, and typing tools (`pytest`, `ruff`, `black`, `mypy`).
### Databricks example
Install the `databricks` extra and run SQL with typed results:
```python
from yggdrasil.databricks.workspaces import Workspace
from yggdrasil.databricks.sql import SQLEngine
ws = Workspace(host="https://<workspace-url>", token="<token>")
engine = SQLEngine(workspace=ws)
stmt = engine.execute("SELECT 1 AS value")
result = stmt.wait(engine)
tbl = result.arrow_table()
print(tbl.to_pandas())
```
### Parallel processing and retries
```python
from yggdrasil.pyutils import parallelize, retry
@parallelize(max_workers=4)
def square(x):
    return x * x

@retry(tries=5, delay=0.2, backoff=2)
def sometimes_fails(value: int) -> int:
    ...

print(list(square(range(5))))
```
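For context, a retry decorator with these semantics (`tries`, `delay`, `backoff`) can be sketched in plain Python. This is a simplified stand-in for illustration, not Yggdrasil's implementation:

```python
import time

def retry(tries=5, delay=0.2, backoff=2):
    """Re-invoke the wrapped function up to `tries` times, sleeping `delay`
    seconds between attempts and multiplying the delay by `backoff` after
    each failure. The last failure is re-raised."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, tries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == tries:
                        raise
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator

calls = {"n": 0}

@retry(tries=3, delay=0.01, backoff=2)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(flaky())  # succeeds on the third attempt
```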
## Project layout
- `yggdrasil/dataclasses` – `yggdataclass` decorator plus Arrow schema helpers.
- `yggdrasil/types` – casting registry (`convert`, `register_converter`), Arrow inference, and default generators.
- `yggdrasil/libs` – optional bridges to Polars, pandas, Spark, and Databricks SDK types.
- `yggdrasil/databricks` – workspace, SQL, jobs, and compute helpers built on the Databricks SDK.
- `yggdrasil/requests` – retry-capable HTTP sessions and Azure MSAL auth helpers.
- `yggdrasil/pyutils` – concurrency and retry decorators.
- `yggdrasil/ser` – serialization helpers and dependency inspection utilities.
- `tests/` – pytest-based coverage for conversions, dataclasses, requests, and platform helpers.
## Testing
From `python/`:
```bash
pytest
```
Optional checks when developing:
```bash
ruff check
black .
mypy
```
## Troubleshooting and common pitfalls
- **Missing optional dependency**: Install the matching extra (e.g., `uv pip install -e .[polars]`) or wrap calls with `require_polars`/`require_pyspark` from `yggdrasil.libs`.
- **Schema mismatches**: Use `arrow_field_from_hint` and `CastOptions` to enforce expected Arrow metadata when casting.
- **Databricks auth**: Provide `host` and `token` to `Workspace`. For Azure, ensure environment variables align with your workspace deployment.
## Contributing
1. Fork and branch.
2. Install with `uv pip install -e .[dev]`.
3. Run tests and linters.
4. Submit a PR describing the change and any new examples added to the docs.
| text/markdown | Yggdrasil contributors | null | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | arrow, polars, pandas, spark, databricks, typing, dataclass, serialization | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers",
"Intended Audience :: Information... | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2",
"polars>=1.3",
"pyarrow>=20",
"dill>=0.4",
"databricks-sdk>=0.71",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"black; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Platob/Yggdrasil",
"Repository, https://github.com/Platob/Yggdrasil",
"Documentation, https://github.com/Platob/Yggdrasil"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T19:14:45.186482 | ygg-0.2.16.tar.gz | 199,795 | ad/06/3db6e572110bf296891eef189454c6bf5c3c76b72b151b1265b45aa0ad07/ygg-0.2.16.tar.gz | source | sdist | null | false | ac437a5df8db3a0c9aa843c3bc36687b | 98e32444d510d1a124505aad81627f6e71f599c96adc442a75599c9b005fb77d | ad063db6e572110bf296891eef189454c6bf5c3c76b72b151b1265b45aa0ad07 | null | [
"LICENSE"
] | 249 |
2.4 | guardclaw | 0.1.2 | Cryptographic evidence ledger for autonomous agent accountability | # GuardClaw
Cryptographic evidence ledger for autonomous agent accountability
Autonomous systems require stronger guarantees than mutable logs can provide.
GuardClaw implements the minimal cryptographic properties required for replay-bound, verifiable agent evidence.
GuardClaw records what AI agents do and makes those records cryptographically verifiable.
It does not block execution.
It does not enforce policy.
It does not require SaaS infrastructure.
It provides verifiable evidence of what was recorded.
📄 *Protocol specification:*
https://github.com/viruswami5511/guardclaw/blob/master/docs/PROTOCOL.md
🔒 *Security model:*
https://github.com/viruswami5511/guardclaw/blob/master/SECURITY.md
⚠️ *Threat model:*
https://github.com/viruswami5511/guardclaw/blob/master/THREAT_MODEL.md
---
## Status
*Alpha (v0.1.2)*
GuardClaw is experimental software.
Breaking changes may occur before v1.0.
Appropriate for development, research, and low-risk automation.
Not recommended for high-risk production systems.
Explicit guarantees and limitations are defined in the Security and Threat Model documents linked above.
---
## What GuardClaw Provides
- Ed25519 cryptographic signing
- Deterministic canonical JSON serialization
- Ledger-local nonce-based replay detection
- Tamper-evident verification
- Offline verification (no network required)
- CLI replay inspection
---
## What GuardClaw Does NOT Provide
- Policy enforcement
- Authorization engine
- Settlement or reconciliation logic
- Hash-chained ledger structure
- Durable replay state across restarts
- Distributed consensus
- Key rotation management
- Trusted timestamp authority
- File deletion detection
- Cross-system replay prevention
GuardClaw is an evidence layer, not a control plane.
---
## Installation
```bash
pip install guardclaw
```
For development:
```bash
git clone https://github.com/viruswami5511/guardclaw.git
cd guardclaw
pip install -e .
```
---
## Quick Start
### 1. Generate a Signing Key
```python
from guardclaw.core.crypto import Ed25519KeyManager
key_manager = Ed25519KeyManager.generate()
```
### 2. Start an Evidence Emitter
```python
from guardclaw.core.emitter import EvidenceEmitter
emitter = EvidenceEmitter(
    key_manager=key_manager,
    ledger_path=".guardclaw/ledger"
)
emitter.start()
```
### 3. Observe Agent Actions
```python
from guardclaw.core.observers import Observer
observer = Observer("observer-1")
observer.set_emitter(emitter)
observer.on_intent("agent-1", "analyze_data")
observer.on_execution("agent-1", "analyze_data")
observer.on_result("agent-1", "analyze_data", "completed")
```
Each event:
- Receives a cryptographically secure 32-character hexadecimal nonce
- Is serialized deterministically
- Is signed using Ed25519
- Is appended to the ledger
### 4. Stop the Emitter
```python
emitter.stop()
```
Ledger output is written to `.guardclaw/ledger/` as signed JSONL events.
---
## Verifying a Ledger
```bash
guardclaw replay .guardclaw/ledger
```
Verification performs:
- Schema validation
- Nonce validation
- Canonical reconstruction
- Signature verification
- Ledger-local replay detection
Verification can be performed offline using only:
- The ledger file
- The public key
---
## Protocol Overview (v0.1.2)
Each event conforms to:
```json
{
  "event_id": "string",
  "timestamp": "ISO-8601 UTC",
  "event_type": "intent | execution | result | failure",
  "subject_id": "string",
  "action": "string",
  "nonce": "32 hex characters",
  "correlation_id": "string | null",
  "metadata": "object | null"
}
```
### Nonce Constraints
- MUST exist
- MUST be 32 hexadecimal characters
- MUST be unique per subject_id
A duplicate nonce within the same subject is considered a replay.
Replay state in v0.1.2 is memory-local and not durable across restarts.
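These constraints, together with the canonical serialization rule, can be illustrated with a stdlib-only sketch. This is a simplified stand-in for explanation, not GuardClaw's implementation:

```python
import json
import secrets

def canonical_json(event: dict) -> bytes:
    # Deterministic serialization: sorted keys, no insignificant whitespace.
    return json.dumps(event, sort_keys=True, separators=(",", ":")).encode()

class ReplayDetector:
    """Ledger-local replay detection: a nonce must be unique per subject_id."""
    def __init__(self):
        self.seen = {}  # subject_id -> set of nonces already observed

    def check(self, event: dict) -> bool:
        nonce = event["nonce"]
        if len(nonce) != 32 or any(c not in "0123456789abcdef" for c in nonce):
            raise ValueError("nonce must be 32 hexadecimal characters")
        nonces = self.seen.setdefault(event["subject_id"], set())
        if nonce in nonces:
            return False  # duplicate nonce for this subject: replay
        nonces.add(nonce)
        return True

event = {"subject_id": "agent-1", "action": "analyze_data",
         "nonce": secrets.token_hex(16)}  # 16 random bytes -> 32 hex chars
detector = ReplayDetector()
assert detector.check(event) is True   # first sighting: accepted
assert detector.check(event) is False  # same nonce again: flagged as replay
# Canonicalization is independent of insertion order.
assert canonical_json(event) == canonical_json(dict(reversed(list(event.items()))))
```

Note that, as stated above, this replay state lives in memory and would not survive a restart.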
See full specification:
https://github.com/viruswami5511/guardclaw/blob/master/docs/PROTOCOL.md
---
## Security Summary
If private keys remain secure:
- Signed events cannot be modified without detection
- Events are cryptographically attributable
- Replay within a ledger is detectable
- Verification fails loudly on tampering
- Verification works offline
GuardClaw does not guarantee:
- Prevention of malicious behavior
- Durable replay protection
- Cross-system replay prevention
- Absolute timestamp correctness
- Protection against compromised keys
- Immutable storage
Full analysis:
https://github.com/viruswami5511/guardclaw/blob/master/SECURITY.md
https://github.com/viruswami5511/guardclaw/blob/master/THREAT_MODEL.md
---
## Testing
Run replay protection tests:
```bash
python -m pytest tests/unit/test_replay_protection.py -v
```
Expected result:
```text
16 passed
```
---
## Roadmap
Planned future areas (non-binding):
- Hash chaining
- Durable replay protection
- Key rotation audit events
- External timestamp anchoring
- Delegated authority model
These are not part of v0.1.2 guarantees.
---
## When to Use GuardClaw
Appropriate for:
- Development environments
- Internal AI tooling
- Research prototypes
- Low-risk automation
- Audit experimentation
Not recommended for production use in:
- Financial settlement systems
- Critical infrastructure
- Regulatory-grade audit without additional controls
- Long-term archival systems
- High-risk autonomous systems
---
## Contributing
Contributions are welcome.
Before submitting:
- Read the Protocol specification
- Read the Security model
- Include tests
- Maintain scope discipline
---
## License
Apache-2.0
---
## Philosophy
GuardClaw does not promise perfect safety.
It provides cryptographic evidence of what was recorded.
Nothing more. Nothing less.
| text/markdown | Viru | null | null | null | Apache-2.0 | ai, agents, cryptography, audit, ledger, accountability, replay-protection, verification | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Security :: Cryptography",
"Topic :: Software Development ... | [] | null | null | >=3.9 | [] | [] | [] | [
"cryptography>=41.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/viruswami5511/guardclaw"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T19:14:26.068159 | guardclaw-0.1.2.tar.gz | 54,800 | 7f/7d/fcf520af09ba3805656d5ce4207df0ff556e05b72023a9e57937233a88b2/guardclaw-0.1.2.tar.gz | source | sdist | null | false | f7c792d733eb7e13ae533a00529a43eb | 2082725f99de85cf2aece9f0a7d9888528a2252e3a7de2cd80cbaf13b1bbff33 | 7f7dfcf520af09ba3805656d5ce4207df0ff556e05b72023a9e57937233a88b2 | null | [] | 241 |
2.4 | soso | 1.0.0 | For Converting Metadata Records into Science On Schema.Org Markup | # soso
[](https://www.repostatus.org/#wip)

[](https://codecov.io/github/clnsmth/soso)
[](https://zenodo.org/badge/latestdoi/666558073)

For converting dataset metadata into [Science On Schema.Org](https://github.com/ESIPFed/science-on-schema.org) markup.
## Quick Start
### Installation
Install from PyPI:
```bash
pip install soso
```
### Metadata Conversion
To perform a conversion, specify the file path of the metadata and the desired conversion strategy. Each metadata standard corresponds to a specific strategy.
```python
>>> from soso.main import convert
>>> r = convert(file='metadata.xml', strategy='EML')
>>> r
'{"@context": {"@vocab": "https://schema.org/", "prov": "http://www. ...}'
```
For a list of available strategies, please refer to the documentation of the `convert` function.
### Adding Unmappable Properties
Some SOSO properties cannot be derived from metadata records alone. In such cases, additional information can be provided via `kwargs`, where each key matches a top-level property name and each value is the property value.
For example, the `url` property, representing the landing page URL, does not exist in an EML metadata record, but this information is known to the repository hosting the dataset.
```python
>>> kwargs = {'url': 'https://sample-data-repository.org/dataset/472032'}
>>> r = convert(file='metadata.xml', strategy='EML', **kwargs)
>>> r
'{"@context": {"@vocab": "https://schema.org/", "prov": "http://www. ...}'
```
This `kwargs` approach is not limited to supplying unmappable properties; it can also be used to override any top-level SOSO property.
Unmappable properties are listed in the strategy documentation.
### Other Modifications
Any additional modifications can be made to the resulting JSON-LD string before it is used. Simply parse the string into a Python dictionary, make the necessary changes, and then convert it back to a JSON-LD string.
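The round trip described above needs only the standard library. For example (the record below is a placeholder, not real `convert` output):

```python
import json

jsonld = '{"@context": {"@vocab": "https://schema.org/"}, "name": "Sample Dataset"}'
record = json.loads(jsonld)   # parse the string into a Python dictionary
record["version"] = "1.0.0"   # make the necessary changes
jsonld = json.dumps(record)   # convert back to a JSON-LD string
print(jsonld)
```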
### Shared Conversion Scripts
When data repositories use a common metadata standard and adopt shared infrastructure, such as databases containing ancillary information, a shared conversion script can be used. These scripts reliably reference the shared infrastructure to create a richer SOSO record by incorporating this additional information. Below is a list of available scripts and their usage examples:
- [SPASE-schema.org Conversion Script](https://soso.readthedocs.io/en/latest/user/examples/spase-HowToConvert.html)
## API Reference and User Guide
The API reference and user guide are available on [Read the Docs](https://soso.readthedocs.io).
## Code of Conduct
In the spirit of collaboration, we emphasize the importance of maintaining a respectful and inclusive environment.
See the [Code of Conduct](https://soso.readthedocs.io/en/latest/dev/conduct.html#conduct) for details.
| text/markdown | Colin Smith | colin.smith@wisc.edu | Colin Smith | colin.smith@wisc.edu | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"daiquiri>=3.0.0",
"lxml>=5.0.0",
"pyshacl>=0.26.0",
"requests>=2.32.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:13:09.145224 | soso-1.0.0.tar.gz | 100,755 | d0/67/5caedfe2af2fa292d6dc7407910445b95a1cc40961cacaf6a2b80292b5dd/soso-1.0.0.tar.gz | source | sdist | null | false | 286ed59cc3afb190b6e5f7a5543751d5 | f8119bba806e3a871a9f9b6a789eccd80066f2bedbe1c099eec44bdfe8425c9c | d0675caedfe2af2fa292d6dc7407910445b95a1cc40961cacaf6a2b80292b5dd | null | [
"LICENSE"
] | 237 |
2.4 | standardwebhooks | 1.0.1 | Standard Webhooks | Python library for Standard Webhooks
# Example
Verifying a webhook payload:
```python
from standardwebhooks.webhooks import Webhook
wh = Webhook(base64_secret)
wh.verify(webhook_payload, webhook_headers)
```
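Under the hood, the Standard Webhooks spec signs the string `{msg_id}.{timestamp}.{payload}` with HMAC-SHA256, keyed by the base64-decoded secret, and prefixes the base64-encoded digest with `v1,`. A manual sketch of that scheme (an illustration using the stdlib, not this library's code):

```python
import base64
import hashlib
import hmac

def sign(base64_secret: str, msg_id: str, timestamp: str, payload: str) -> str:
    # Signed content per the Standard Webhooks spec: "{id}.{timestamp}.{payload}"
    content = f"{msg_id}.{timestamp}.{payload}".encode()
    key = base64.b64decode(base64_secret)
    digest = hmac.new(key, content, hashlib.sha256).digest()
    return "v1," + base64.b64encode(digest).decode()

secret = base64.b64encode(b"my-test-secret").decode()
signature = sign(secret, "msg_1", "1700000000", '{"hello":"world"}')
print(signature)  # "v1," followed by the base64-encoded digest
```

`Webhook.verify` additionally checks the timestamp against a tolerance window to reject stale deliveries.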
# Development
## Requirements
- python 3
## Installing dependencies
```sh
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt && pip install -r requirements-dev.txt
```
## Contributing
Before opening a PR, be sure to format your code!
```sh
./scripts/format.sh
```
## Running Tests
Simply run:
```sh
pytest
```
| text/markdown | Standard Webhooks | null | null | null | MIT | webhooks | [
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libra... | [] | null | null | >=3.6 | [] | [] | [] | [
"httpx>=0.23.0",
"attrs>=21.3.0",
"python-dateutil",
"Deprecated",
"types-python-dateutil",
"types-Deprecated"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:13:06.793303 | standardwebhooks-1.0.1.tar.gz | 5,103 | c4/7d/04fc3aa177403472d3ddae90953d8f878dc5fd21ba29c02fc9e97e10703f/standardwebhooks-1.0.1.tar.gz | source | sdist | null | false | 923b7fda69ac25474398db840c45fbc5 | b557bb2e4b16ada179a517ec0fe6cbec5acf976c5619922bf29c457f89a451bd | c47d04fc3aa177403472d3ddae90953d8f878dc5fd21ba29c02fc9e97e10703f | null | [] | 24,369 |
2.1 | codeflare-sdk | 0.35.0 | Python SDK for codeflare client | # CodeFlare SDK
[](https://github.com/project-codeflare/codeflare-sdk/actions/workflows/unit-tests.yml)

An intuitive, easy-to-use Python interface for batch resource requesting, access, job submission, and observation, simplifying the developer's life while enabling access to high-performance compute resources, either in the cloud or on-prem.
For guided demos and basics walkthroughs, check out the following links:
- Guided demo notebooks available [here](https://github.com/project-codeflare/codeflare-sdk/tree/main/demo-notebooks/guided-demos), and copies of the notebooks with [expected output](https://github.com/project-codeflare/codeflare-sdk/tree/main/demo-notebooks/guided-demos/notebook-ex-outputs) also available
- these demos can be copied into your current working directory when using the `codeflare-sdk` by using the `codeflare_sdk.copy_demo_nbs()` function
- Additionally, we have a [video walkthrough](https://www.youtube.com/watch?v=U76iIfd9EmE) of these basic demos from June 2023
Full documentation can be found [here](https://project-codeflare.github.io/codeflare-sdk/index.html)
## Installation
Can be installed via `pip`: `pip install codeflare-sdk`
## Authentication
CodeFlare SDK uses [kube-authkit](https://github.com/opendatahub-io/kube-authkit) for Kubernetes authentication, supporting multiple authentication methods:
- **Auto-Detection** - Automatically detects kubeconfig or in-cluster authentication
- **Token-Based** - Authenticate with API server token
- **OIDC** - OpenID Connect authentication with device flow or client credentials
- **OpenShift OAuth** - Native OpenShift OAuth support
- **Kubeconfig** - Traditional kubeconfig file authentication
- **In-Cluster** - Service account authentication when running in a pod
### Quick Start
```python
from kube_authkit import get_k8s_client, AuthConfig
from codeflare_sdk import set_api_client, Cluster, ClusterConfiguration
# Option 1: Auto-detect authentication (recommended - no explicit auth needed!)
cluster = Cluster(ClusterConfiguration(
    name='my-cluster',
    num_workers=2,
))
cluster.apply()

# Option 2: OIDC authentication
auth_config = AuthConfig(
    method="oidc",
    oidc_issuer="https://your-oidc-provider.com",
    client_id="your-client-id",
    use_device_flow=True
)
api_client = get_k8s_client(config=auth_config)
set_api_client(api_client)  # Register with CodeFlare SDK

# Option 3: OpenShift OAuth with token
auth_config = AuthConfig(
    k8s_api_host="https://api.example.com:6443",
    token="your-token"
)
api_client = get_k8s_client(config=auth_config)
set_api_client(api_client)  # Register with CodeFlare SDK

# Now create your cluster
cluster = Cluster(ClusterConfiguration(
    name='my-cluster',
    num_workers=2,
))
cluster.apply()
```
### Migration from Legacy Authentication
If you're using the deprecated `TokenAuthentication` or `KubeConfigFileAuthentication` classes, please see our [Migration Guide](./docs/auth_migration_guide.md) for detailed instructions on updating to kube-authkit.
**Legacy classes (deprecated):**
```python
# ⚠️ Deprecated - will be removed in v1.0.0
from codeflare_sdk import TokenAuthentication
auth = TokenAuthentication(token="...", server="...")
auth.login()
```
**New recommended approach:**
```python
# ✅ Recommended - Auto-detection (no explicit auth needed!)
from codeflare_sdk import Cluster, ClusterConfiguration
cluster = Cluster(ClusterConfiguration(name="my-cluster"))
# ✅ For OIDC or OpenShift OAuth with token
from kube_authkit import AuthConfig, get_k8s_client
from codeflare_sdk import set_api_client
auth_config = AuthConfig(
    k8s_api_host="https://api.example.com:6443",
    token="your-token"
)
api_client = get_k8s_client(config=auth_config)
set_api_client(api_client) # Register with CodeFlare SDK
```
## Development
Please see our [CONTRIBUTING.md](./CONTRIBUTING.md) for detailed instructions.
## Release Instructions
### Automated Releases
It is possible to use the Release GitHub workflow to do the release; this is generally the process we follow for releases.
### Manual Releases
The following instructions apply when doing a release manually. This may be required when the automation is failing.
- Check and update the version in "pyproject.toml" file.
- Commit all the changes to the repository.
- Create Github release (<https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository#creating-a-release>).
- Build the Python package. `poetry build`
- If not present already, add the API token to Poetry.
`poetry config pypi-token.pypi API_TOKEN`
- Publish the Python package. `poetry publish`
- Trigger the [Publish Documentation](https://github.com/project-codeflare/codeflare-sdk/actions/workflows/publish-documentation.yaml) workflow
| text/markdown | Michael Clifford | mcliffor@redhat.com | null | null | Apache-2.0 | codeflare, python, sdk, client, batch, scale | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/project-codeflare/codeflare-sdk | null | <4.0,>=3.11 | [] | [] | [] | [
"cryptography==46.0.5",
"executing==2.2.1",
"ipywidgets==8.1.2",
"kube-authkit>=0.4.0",
"kubernetes>=27.2.0",
"openshift-client==1.0.18",
"pydantic>=2.10.6",
"ray[data,default]==2.52.1",
"rich<15.0,>=12.5"
] | [] | [] | [] | [
"Repository, https://github.com/project-codeflare/codeflare-sdk"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:12:24.013041 | codeflare_sdk-0.35.0.tar.gz | 166,382 | 3c/5f/3d6ced79527866f318481767a8533e3ead5fc9cb50cf821d725693271336/codeflare_sdk-0.35.0.tar.gz | source | sdist | null | false | 9297eca2064d95595eba1a54b67b9910 | 307ea35757b54eefbfefdb140bf06d14d2d448d682aa2f66d1a38ec1e45abfb2 | 3c5f3d6ced79527866f318481767a8533e3ead5fc9cb50cf821d725693271336 | null | [] | 1,310 |
2.4 | mocea | 2.0.6 | Idle droplet monitor and auto-terminator. | # mocea
Idle droplet monitor and auto-terminator.
**mocea** (Monitoring and Management of digital OCEA droplets) is a lightweight agent that runs on a DigitalOcean droplet, detects when it's idle, and auto-terminates it via the DigitalOcean API (snapshot + destroy) to save costs. It runs as a systemd service and uses `psutil` for rich local metric collection.
## Installation
```bash
pip install mocea
```
## Quick Start
```bash
# Test the checks
mocea check
# Start in foreground with dry-run
mocea run --dry-run --idle-minutes 5
# Install as systemd service
sudo mocea install
```
## CLI Commands
```
mocea run Start the monitoring agent
mocea check Run all checks once and display results
mocea status Show service status and current check results
mocea config Show active configuration
mocea cloudinit Generate cloud-init user-data for droplet bootstrap
mocea install Install mocea as a systemd service
mocea uninstall Remove mocea systemd service
```
### `mocea run`
```
Options:
-c, --config PATH Config file path
--idle-minutes INTEGER Override idle timeout (minutes)
--dry-run Log actions but don't execute
--log-level [DEBUG|INFO|WARNING]
```
## Configuration
Config file: `/etc/mocea/config.toml` (or `~/.config/mocea/config.toml`)
```toml
idle_minutes = 30
check_interval = 60
min_uptime_minutes = 10
[checks.cpu]
enabled = true
threshold = 5.0 # percent, below = idle
[checks.process]
enabled = true
names = ["python", "ffmpeg", "jupyter"]
[checks.ssh]
enabled = true
[checks.load]
enabled = false
threshold = 0.3
[checks.network]
enabled = false
threshold_kbps = 10
# interface = "eth0" # omit for all non-loopback interfaces
[checks.gpu]
enabled = false
[checks.heartbeat]
enabled = false
file = "/tmp/mocea-heartbeat"
stale_minutes = 15
[action]
type = "api" # "api" | "shutdown"
# Optional: api action settings
# [action.api]
# snapshot_before_destroy = true
[logging]
level = "INFO"
# file = "/var/log/mocea.log" # omit for stdout only
```
Priority: CLI flags > config file > defaults.
## Checks
| Check | Signal | Idle when |
|-------|--------|-----------|
| **cpu** | `psutil.cpu_percent()` | Below threshold (default 5%) |
| **process** | `psutil.process_iter()` | No configured process names running |
| **ssh** | `psutil.net_connections()` port 22 | No ESTABLISHED SSH sessions |
| **load** | `psutil.getloadavg()` | 1-min load below threshold (default 0.3) |
| **network** | `psutil.net_io_counters()` | Throughput below threshold (default 10 KB/s) |
| **gpu** | `nvidia-smi` | GPU utilization below threshold (default 5%) |
| **heartbeat** | File mtime | File missing or stale (default 15 min) |
All enabled checks must report idle (AND logic) for the entire `idle_minutes` duration before the action triggers.
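The window logic can be sketched as a small state machine (illustrative only; `IdleTimer` is not part of mocea's API):

```python
class IdleTimer:
    """Tracks how long every enabled check has been simultaneously idle."""

    def __init__(self, idle_seconds):
        self.idle_seconds = idle_seconds
        self.idle_since = None

    def update(self, all_checks_idle, now):
        """Feed one polling round; return True once the full idle
        window has elapsed and the action should trigger."""
        if not all_checks_idle:
            self.idle_since = None  # any activity resets the countdown
            return False
        if self.idle_since is None:
            self.idle_since = now
        return now - self.idle_since >= self.idle_seconds
```

A single active check at any poll resets the countdown, so brief bursts of activity keep the droplet alive.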
## Actions
| Action | Description |
|--------|-------------|
| **api** | Direct DO API: snapshot then destroy (default, requires `DO_API_TOKEN`) |
| **shutdown** | Simple `shutdown -h now` (no snapshot, for disposable droplets) |
## Safety
- **Minimum uptime**: Won't terminate within first 10 minutes after boot
- **Dry-run mode**: `--dry-run` logs what would happen without executing
- **Lock file**: Prevents multiple instances
- **Fail-safe**: If a check errors, it's treated as "active" (keeps running)
- **Heartbeat file**: Any process can touch `/tmp/mocea-heartbeat` to prevent termination
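The heartbeat guard can be driven from any job: refresh the file while work is in progress, and the staleness test (sketched here with hypothetical helper names, not mocea's internals) reports idle once the mtime ages out:

```python
import time
from pathlib import Path

HEARTBEAT = Path("/tmp/mocea-heartbeat")

def beat(path=HEARTBEAT):
    """Refresh the heartbeat; a fresh mtime counts as 'active'."""
    path.touch()

def is_stale(path=HEARTBEAT, stale_minutes=15, now=None):
    """Mirror of the heartbeat check: a missing or aged file means idle."""
    if not path.exists():
        return True
    now = time.time() if now is None else now
    return now - path.stat().st_mtime > stale_minutes * 60
```

A long-running job can call `beat()` between work units (or simply `touch /tmp/mocea-heartbeat` from cron) to hold off termination.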
## Managing the Systemd Service
Once installed with `sudo mocea install`, mocea runs as a standard systemd service called `mocea.service`. Here's how to manage it.
### Checking Status
```bash
# mocea's built-in status (service + check results)
sudo mocea status
# Or use systemctl directly
sudo systemctl status mocea.service
```
### Viewing Logs
mocea logs to the systemd journal. Use `journalctl` to read them:
```bash
# Recent logs
sudo journalctl -u mocea.service -n 50
# Follow logs in real time (like tail -f)
sudo journalctl -u mocea.service -f
# Logs since last boot
sudo journalctl -u mocea.service -b
# Logs from the last hour
sudo journalctl -u mocea.service --since "1 hour ago"
```
### Stopping and Starting
```bash
# Stop mocea (it will restart on next boot since it's still enabled)
sudo systemctl stop mocea.service
# Start it again
sudo systemctl start mocea.service
# Restart (stop + start)
sudo systemctl restart mocea.service
```
### Disabling and Re-enabling
"Enabled" means mocea starts automatically on boot. "Disabled" means it won't.
```bash
# Stop AND prevent auto-start on boot
sudo systemctl disable --now mocea.service
# Re-enable auto-start (and start it now)
sudo systemctl enable --now mocea.service
```
### Updating mocea
mocea is installed in a virtualenv at `/opt/mocea`. To upgrade, use the venv's pip — **not** the system pip (which will fail with `externally-managed-environment` on modern Debian/Ubuntu):
```bash
# Stop the service
sudo systemctl stop mocea.service
# Upgrade using the venv's pip
sudo /opt/mocea/bin/pip install --upgrade --no-cache-dir mocea
# Restart the service to pick up the new code
sudo systemctl start mocea.service
# Verify the new version
mocea --version
```
If the mocea binary path changed (e.g. you recreated the virtualenv), reinstall the service:
```bash
sudo mocea uninstall
sudo /path/to/new/venv/bin/mocea install
```
### Uninstalling
```bash
sudo mocea uninstall
```
This stops the service, disables it, and removes the unit file.
### Troubleshooting
| Symptom | Command | What to look for |
|---------|---------|-----------------|
| Service won't start | `sudo journalctl -u mocea.service -n 30` | Python errors, missing config |
| Exit code 203/EXEC | `cat /etc/systemd/system/mocea.service` | `ExecStart` must be an absolute path; reinstall with `sudo mocea install` |
| Service keeps restarting | `sudo systemctl status mocea.service` | Shows restart count and exit code |
| Not sure if running | `sudo systemctl is-active mocea.service` | Prints `active` or `inactive` |
The service is configured to restart automatically on failure after a 30-second delay. If it keeps crashing, check the logs with `journalctl`.
## Cloud-Init Bootstrap
Generate cloud-init user-data to auto-install mocea on new droplets:
```bash
mocea cloudinit > user-data.yaml
```
## Development
```bash
git clone https://github.com/ksteptoe/mocea.git
cd mocea
make bootstrap # Create venv and install dependencies
make test # Run tests
make lint # Run linter
make format # Format code
```
| text/markdown | null | Kevin Steptoe <kevin.steptoe@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.1",
"psutil>=5.9",
"rich>=13.0",
"pydo>=0.3.0",
"pytest>=8; extra == \"dev\"",
"pytest-cov>=5; extra == \"dev\"",
"pytest-xdist>=3.6; extra == \"dev\"",
"pytest-timeout>=2.3; extra == \"dev\"",
"ruff>=0.6; extra == \"dev\"",
"pre-commit>=3.7; extra == \"dev\"",
"build>=1.2; extra == \"... | [] | [] | [] | [
"Homepage, https://github.com/ksteptoe/mocea",
"Repository, https://github.com/ksteptoe/mocea"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T19:12:15.219208 | mocea-2.0.6.tar.gz | 20,008 | 72/6b/d7958c5c642823b67fcf57f782b32046992a99f3509ac4ed566e7e75990b/mocea-2.0.6.tar.gz | source | sdist | null | false | db4d978d49d8d8a86617d3cea8e98bba | 0f2ab3e9a69cc503154ddc3c44c5cbba0face5673f980da914bb4698d0847631 | 726bd7958c5c642823b67fcf57f782b32046992a99f3509ac4ed566e7e75990b | MIT | [
"LICENSE.txt",
"AUTHORS.md"
] | 225 |
2.4 | charsplit-fst | 0.1.3 | German compound word splitter using Rust + FST | # charsplit-fst
A memory-efficient Rust port of the CharSplit algorithm for German compound splitting, using Finite State Transducers (FST).
## Overview
`charsplit-fst` implements the CharSplit algorithm for splitting German compound words into their component parts. It achieves 89% memory reduction compared to the original Python implementation by using Finite State Transducer (FST) data structures.
Based on CharSplit by Don Tuggener: https://github.com/dtuggener/CharSplit
## Features
- 51% smaller data files: 39 MB JSON → 18.2 MB FST
- 89% lower memory usage: 19.6 MB vs 180 MB runtime
- UTF-8 safe: Proper character-based indexing for German Unicode characters
- Python bindings via PyO3
- [WebAssembly demo](https://steadfastgaze.github.io/charsplit-fst/) for browser-based usage
- CLI tool for batch processing
## Installation
### Python
Available on [PyPI](https://pypi.org/project/charsplit-fst/).
```bash
pip install charsplit-fst
```
### Rust
```bash
cargo add charsplit-fst
```
## Quick Start
### Python
```python
from charsplit_fst import Splitter
splitter = Splitter()
results = splitter.split_compound("Autobahnraststätte")
# Returns: [(0.795, 'Autobahn', 'Raststätte'), ...]
```
### Rust
```rust
use charsplit_fst::Splitter;
let splitter = Splitter::new()?;
let results = splitter.split_compound("Autobahnraststätte");
```
### CLI
```bash
cargo run --bin charsplit-fst -- Autobahnraststätte
```
## Algorithm
The algorithm splits German compounds using n-gram probability scoring:
**Score formula**: `start_prob - in_prob + pre_prob`
Where:
- `start_prob`: Maximum prefix probability of second part
- `in_prob`: Minimum infix probability crossing split boundary
- `pre_prob`: Maximum suffix probability of first part
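As a toy illustration of the formula (hypothetical lookup tables and character-level n-grams up to `max_n`; the crate's actual data structures and lookup details differ):

```python
def split_score(word, i, start_probs, in_probs, suffix_probs, max_n=4):
    """Score splitting `word` at index i: start_prob - in_prob + pre_prob.
    Toy probability tables; not the crate's actual FST lookups."""
    first, second = word[:i], word[i:]
    # start_prob: max probability over n-grams beginning the second part
    start = max((start_probs.get(second[:n], 0.0)
                 for n in range(1, max_n + 1)), default=0.0)
    # in_prob: min probability over n-grams crossing the split boundary
    crossing = [word[j:j + n] for n in range(2, max_n + 1)
                for j in range(max(0, i - n + 1), i) if j + n <= len(word)]
    infix = min((in_probs.get(g, 0.0) for g in crossing), default=0.0)
    # pre_prob: max probability over n-grams ending the first part
    pre = max((suffix_probs.get(first[-n:], 0.0)
               for n in range(1, max_n + 1)), default=0.0)
    return start - infix + pre
```

Candidate split points are then ranked by this score, highest first.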
## Performance
- Memory: 19.6 MB RSS (vs 180 MB for Python)
- Data size: 18.2 MB on disk (vs 39 MB JSON)
## Web Demo
A browser-based demo using WebAssembly is available in `web-demo/`.
```bash
# Build the WASM version
./build-wasm.sh
# Serve from project root
python -m http.server 8000
# Open http://localhost:8000/web-demo/
```
The demo runs entirely in the browser using WebAssembly. No server-side processing is required.
**Browser support:** The demo will try to use Brotli compression via the DecompressionStream API where supported, falling back to uncompressed data in browsers that don't support it. Works in all modern browsers.
## Development
```bash
# Build
cargo build --release
# Run tests
cargo test
# Build Python bindings
maturin develop
# Build WASM
./build-wasm.sh
```
## Acknowledgments
This project is a Rust port of CharSplit by Don Tuggener.
- Algorithm: Based on Tuggener (2016), *Incremental Coreference Resolution for German*, University of Zurich.
- Original Implementation: dtuggener/CharSplit (https://github.com/dtuggener/CharSplit) (MIT Licensed).
- Data: The n-gram probabilities are derived from the model provided by the original author.
## License
MIT OR Apache-2.0
See LICENSE-MIT and LICENSE-APACHE-2.0 for details.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/steadfastgaze/charsplit-fst",
"Repository, https://github.com/steadfastgaze/charsplit-fst"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:11:05.394197 | charsplit_fst-0.1.3.tar.gz | 8,790,500 | 22/dc/bd1919a699e44809284ffdfcc623384fc3c93e600a695e0152e28562ad49/charsplit_fst-0.1.3.tar.gz | source | sdist | null | false | 76be0dd3bf217ccfedbd45e58dee73b4 | 7747697537d08fdef57e312e9b5cb6b6788d7c50b6cb8a5a8d98b13357c9c6e8 | 22dcbd1919a699e44809284ffdfcc623384fc3c93e600a695e0152e28562ad49 | MIT OR Apache-2.0 | [
"LICENSE-APACHE-2.0",
"LICENSE-MIT"
] | 610 |
2.4 | edasuite | 0.0.5 | A Python library for exploratory data analysis with advanced statistical features | # EDASuite
A comprehensive Python library for exploratory data analysis with advanced features for data profiling, quality assessment, and stability monitoring.
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Interactive Viewer
EDASuite includes a built-in interactive dashboard to explore your analysis results in the browser.
```python
from edasuite.viewer.server import serve_results
# From a saved JSON file
serve_results("eda_results.json")
# Or directly from EDA results
results = runner.run(data=df, schema=schema, target_variable="target")
serve_results(results)
```
**Summary** — Dataset overview, insights, top features by IV, data quality score, and provider match rates.

**Catalog** — Sortable feature table with type, provider, target correlation, IV, and PSI at a glance.

**Deep Dive** — Per-feature detail view with statistics, box plots, distribution charts, target associations, and correlations.

**Associations** — Mixed-method heatmap (Pearson, Theil's U, Eta) showing relationships across all features.

## Features
### Core Analysis
- **Automated Feature Analysis**
- Continuous features: mean, median, std, quartiles, skewness, kurtosis, outliers
- Categorical features: mode, value counts, cardinality, entropy
- Automatic type inference with schema override support
- Missing value detection and sentinel value replacement
### Advanced Statistics
- **Target Relationship Analysis**
- Information Value (IV) and Weight of Evidence (WoE)
- Optimal binning for continuous features
- Predictive power classification
- Statistical significance testing
- **Correlation & Association Analysis**
- Pearson and Spearman correlations with p-values
- Theil's U (asymmetric categorical associations)
- Eta / Correlation Ratio (categorical-continuous)
- Unified association matrix across all feature types
- Top-N correlation tracking per feature
### Data Quality
- **Quality Assessment System**
- Automated quality scoring (0-10 scale)
- Per-feature quality flags (high_missing, low_variance, constant, outliers)
- Overall dataset quality metrics
- Actionable recommendations
- **Sentinel Value Handling**
- Automatic detection and replacement of no-hit values
- Provider-specific default value handling
- Preserves integer dtypes using pandas nullable types (e.g. Int64) to avoid silent float upcasting
- Configurable via DatasetSchema
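The dtype-preserving replacement can be sketched with pandas directly (`replace_sentinels` is an illustrative helper, not edasuite's API):

```python
import pandas as pd

def replace_sentinels(series, sentinels):
    """Replace sentinel codes with missing values without upcasting ints."""
    if pd.api.types.is_integer_dtype(series):
        # nullable Int64 keeps the integer dtype while allowing pd.NA
        series = series.astype("Int64")
    return series.mask(series.isin(sentinels))

ages = pd.Series([34, -1, 52, -1], dtype="int64")
clean = replace_sentinels(ages, sentinels=[-1])
# dtype stays Int64 instead of silently becoming float64
```

With plain `int64`, inserting `NaN` would force the whole column to `float64`; the nullable extension dtype avoids that.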
### Stability Monitoring
- **Cohort-Based Stability**
- PSI (Population Stability Index) for categorical features
- KS (Kolmogorov-Smirnov) test for continuous features
- Train/test drift detection
- Feature-level stability metrics
- **Time-Based Stability**
- Multiple time window strategies (monthly, weekly, quartile, custom)
- Temporal trend analysis (increasing, decreasing, volatile)
- Auto-detection of optimal time periods
- Minimum sample size enforcement
### Provider Analytics
- **Provider Match Rates**
- Automatic detection via `<provider>_record_not_found` columns
- Data coverage statistics by provider (% of records with data)
- Feature-level availability tracking
- Not-found record counts per provider
- Supports both column-based and schema-based detection
### Performance
- **Large Dataset Support**
- Multiple file format support (CSV, Parquet)
- Chunked CSV reading for files >100MB
- Configurable sampling for faster analysis
- Memory-efficient correlation computation
- Tested with 100K+ rows, 400+ features
## Installation
```bash
pip install edasuite
```
## Quick Start
### Basic Usage
```python
from edasuite import EDARunner, DataLoader
import pandas as pd
# Option 1: Load from file using DataLoader
df = DataLoader.load_csv("data.csv")
# Option 2: Use existing DataFrame
df = pd.read_csv("data.csv") # or from database, etc.
# Initialize runner
runner = EDARunner(
max_categories=50,
top_correlations=10
)
# Run analysis
results = runner.run(
data=df,
output_path="eda_results.json"
)
```
### Loading Data
EDASuite provides `DataLoader` utilities for loading data:
```python
from edasuite import DataLoader
# Load CSV
df = DataLoader.load_csv("data.csv")
# Load Parquet (faster for large files)
df = DataLoader.load_parquet("data.parquet")
# Load with sampling
df = DataLoader.load_csv("large_file.csv", sample_size=10000)
```
### With DatasetSchema
```python
from edasuite import (
EDARunner, DataLoader,
ColumnConfig, ColumnType, ColumnRole, Sentinels, DatasetSchema,
)
# Load data and schema
df = DataLoader.load_csv("data.csv")
schema = DataLoader.load_schema("schema.json")
# Or create schema programmatically
schema = DatasetSchema([
ColumnConfig('age', ColumnType.CONTINUOUS, ColumnRole.FEATURE,
provider='demographics', description='User age',
sentinels=Sentinels(not_found='-1')),
ColumnConfig('zip_code', ColumnType.CATEGORICAL, ColumnRole.FEATURE,
provider='address', description='ZIP code',
sentinels=Sentinels(not_found='', missing='00000')),
ColumnConfig('target', ColumnType.BINARY, ColumnRole.TARGET),
])
# Run with schema
runner = EDARunner()
results = runner.run(
data=df,
schema=schema,
target_variable="target",
output_path="eda_results.json"
)
```
**Schema JSON format** (`schema.json`):
```json
{
"columns": [
{
"name": "age",
"type": "continuous",
"role": "feature",
"provider": "demographics",
"description": "User age",
"sentinels": {
"not_found": "-1",
"missing": null
}
}
]
}
```
### Working with DataFrames
EDARunner works with pandas DataFrames, making it easy to integrate into existing data pipelines:
```python
import pandas as pd
from edasuite import EDARunner
# From database
df = pd.read_sql("SELECT * FROM users", connection)
# From API
import requests
data = requests.get("https://api.example.com/data").json()
df = pd.DataFrame(data)
# In-memory transformations
df['age_group'] = pd.cut(df['age'], bins=[0, 30, 50, 100])
# Run EDA
runner = EDARunner()
results = runner.run(data=df, target_variable='target')
```
This is particularly useful for:
- Working in Jupyter notebooks
- Data loaded from databases (via `pd.read_sql()`)
- In-memory transformations without saving to disk
- Integration with existing data pipelines
See [examples/example_12_dataframe_input.py](examples/example_12_dataframe_input.py) for more examples.
### Stability Analysis
#### Cohort-Based (Train/Test)
```python
from edasuite import EDARunner, DataLoader
# Load data and schema
df = DataLoader.load_parquet("data.parquet")
schema = DataLoader.load_schema("schema.json")
# Configure for stability analysis
runner = EDARunner(
calculate_stability=True,
cohort_column='dataTag',
baseline_cohort='training',
comparison_cohort='test'
)
results = runner.run(
data=df,
schema=schema
)
```
#### Time-Based
```python
from edasuite import EDARunner, DataLoader
# Load data and schema
df = DataLoader.load_parquet("data.parquet")
schema = DataLoader.load_schema("schema.json")
# Configure for time-based stability
runner = EDARunner(
time_based_stability=True,
time_column='onboarding_time',
time_window_strategy='monthly', # or 'weekly', 'quartiles', 'custom'
baseline_period='first',
comparison_periods='all',
min_samples_per_period=100
)
results = runner.run(
data=df,
schema=schema
)
```
## DatasetSchema
`DatasetSchema` enables advanced functionality by defining column types, roles, providers, and sentinel values:
### Variable Type Override
Override automatic type inference:
```json
{
"name": "customer_id",
"type": "categorical",
"role": "feature"
}
```
### Sentinel Values
Define values that should be treated as missing:
```json
{
"name": "income",
"type": "continuous",
"role": "feature",
"sentinels": {
"not_found": "-1",
"missing": "0"
}
}
```
### Provider Tracking
Track data sources:
```json
{
"name": "credit_score",
"type": "continuous",
"role": "feature",
"provider": "bureau_provider",
"description": "FICO credit score"
}
```
## Output Format
EDASuite produces structured JSON output with three top-level sections:
### metadata
- Timestamp, execution time, version
- Configuration (target variable, sampling, correlations)
- Schema availability indicator
### summary
- Feature type distribution and counts
- Data quality score with recommendations
- Dataset info (rows, columns, memory, missing, duplicates)
- Provider match rates (if schema with providers is used)
- Feature counts across 16+ categories (see [Feature Counts](#feature-counts))
- Association matrix (Pearson, Theil's U, Eta merged into a single N×N structure)
- Top features by statistical score
### features
List of per-feature analysis, each including:
- Statistics (mean, median, mode, quartiles, etc.)
- Distribution (histogram or value counts)
- Missing values
- Quality assessment
- Correlations (with target and other features)
- Target relationship (IV, WoE if target specified)
- Stability (PSI/KS if enabled)
## Provider Match Rates / Hit Rates
EDASuite automatically computes provider match rates (also called "hit rates") to help you understand data coverage from different third-party data providers.
### Automatic Detection
Provider match rates are computed automatically during EDA using one of two methods:
#### Method 1: Using `<provider>_record_not_found` columns (Preferred)
If your dataset includes columns like `payu_record_not_found`, `truecaller_record_not_found`, etc., EDASuite will automatically detect and use them:
```python
runner = EDARunner()
df = DataLoader.load_csv("data.csv")
results = runner.run(data=df)
# Access provider stats
provider_stats = results['summary']['provider_match_rates']
```
#### Method 2: Using DatasetSchema (Fallback)
If no `record_not_found` columns exist, you can use a schema to group features by provider:
```python
df = DataLoader.load_csv("data.csv")
schema = DataLoader.load_schema("schema.json")
runner = EDARunner()
results = runner.run(data=df, schema=schema)
# Provider stats show match rates based on feature null analysis
provider_stats = results['summary']['provider_match_rates']
```
### Example
See [examples/example_10_provider_match_rates.py](examples/example_10_provider_match_rates.py) for a complete working example.
## Feature Counts
EDASuite automatically computes feature counts across 16+ categories — useful for dashboards, feature selection, and data quality monitoring.
### Automatic Computation
Feature counts are computed automatically during EDA and included in the results:
```python
from edasuite import EDARunner, DataLoader
runner = EDARunner()
df = DataLoader.load_csv("data.csv")
results = runner.run(
data=df,
target_variable="target" # Required for correlation and IV
)
# Access feature counts
feature_counts = results['summary']['feature_counts']
print(f"High Correlation: {feature_counts['high_correlation']['count']}")
print(f"Redundant Features: {feature_counts['redundant_features']['count']}")
print(f"High IV: {feature_counts['high_iv']['count']}")
print(f"High Stability: {feature_counts['high_stability']['count']}")
```
### Categories
**Target Relationship**
| Category | Threshold | Description |
|----------|-----------|-------------|
| **High Correlation** | best association > 0.1 | Features associated with target (Pearson, Eta, or Theil's U) |
| **High IV** | IV > 0.1 | Features with strong predictive power |
| **Significant Correlations** | p-value < 0.05 | Statistically significant target correlations |
| **Suspected Leakage** | IV > 0.5 | Features with suspiciously high predictive power |
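The IV computation behind these thresholds can be sketched as follows (standard WoE formula over pre-binned event/non-event counts; edasuite's binning and edge-case handling may differ):

```python
import math

def information_value(goods, bads):
    """IV = sum over bins of (good% - bad%) * WoE, with
    WoE = ln(good% / bad%). Assumes no empty bins."""
    gt, bt = sum(goods), sum(bads)
    iv = 0.0
    for g, b in zip(goods, bads):
        gp, bp = g / gt, b / bt
        woe = math.log(gp / bp)
        iv += (gp - bp) * woe
    return iv

# Per the table: IV > 0.1 means strong predictive power,
# IV > 0.5 is suspiciously high (possible leakage).
iv = information_value(goods=[80, 60, 40], bads=[10, 20, 50])
```

A feature whose bins separate goods from bads this sharply lands well above 0.5 and would be flagged for leakage review.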
**Feature Quality**
| Category | Threshold | Description |
|----------|-----------|-------------|
| **Redundant Features** | correlation > 0.7 | Highly correlated with another feature |
| **High Missing** | > 30% | Features with substantial missing values |
| **Constant Features** | 1 unique value | Zero-variance features |
| **Low Variance** | low CV | Features with very low coefficient of variation |
| **Not Recommended** | composite | Features flagged as unsuitable for modeling |
| **Highly Skewed** | \|skewness\| > 1.0 | Features with heavy distributional skew |
| **High Kurtosis** | kurtosis > 3.0 | Outlier-prone features |
| **High Cardinality** | — | Categoricals with high unique-value ratio |
**Predictive Power Breakdown**
| Category | Description |
|----------|-------------|
| **Predictive Power** | Count of features by IV class: unpredictive, weak, medium, strong, very strong |
**Stability**
| Category | Threshold | Description |
|----------|-----------|-------------|
| **High Stability** | PSI < 0.1 | Stable distribution across cohorts/time |
| **Minor Shift** | 0.1 ≤ PSI < 0.2 | Minor distribution drift |
| **Major Shift** | PSI ≥ 0.2 | Major distribution drift |
| **Increasing Drift** | — | Worsening distribution drift over time |
| **Volatile Stability** | — | Inconsistent stability across periods |
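PSI itself is a short formula; a minimal sketch over pre-binned proportions (standard definition; edasuite's binning and smoothing choices may differ):

```python
import math

def psi(expected, actual):
    """Population Stability Index over matching bin proportions."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.5, 0.3, 0.2]
current = [0.45, 0.35, 0.2]
drift = psi(baseline, current)  # small shift, lands in the PSI < 0.1 band
```

Identical distributions give PSI = 0; larger values map to the bands in the table above.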
### Example
See [examples/example_11_feature_counts.py](examples/example_11_feature_counts.py) for a complete working example with UI formatting.
## Advanced Configuration
### Correlation Settings
```python
runner = EDARunner(
top_correlations=10, # Top N correlations per feature
max_correlation_features=500 # Limit features in correlation matrix
)
```
### Sampling for Large Datasets
```python
runner = EDARunner(
sample_size=10000 # Analyze sample of 10K rows
)
```
### Custom Column Selection
```python
df = DataLoader.load_csv("data.csv")
results = runner.run(
data=df,
columns=['age', 'income', 'zip_code'] # Analyze specific columns
)
```
### Compact JSON Output
```python
df = DataLoader.load_csv("data.csv")
results = runner.run(
data=df,
output_path="results.json",
compact_json=True # Minimize JSON size
)
```
### Parquet File Benefits
Parquet format offers significant advantages:
- **Faster loading**: Columnar format with efficient compression
- **Smaller file size**: Typically 50-80% smaller than CSV
- **Type preservation**: Maintains data types (no type inference needed)
- **Column selection**: Read only needed columns (reduces memory usage)
```python
# Convert CSV to Parquet (one-time operation)
import pandas as pd
df = pd.read_csv("data.csv")
df.to_parquet("data.parquet", index=False)
# Then use Parquet for faster analysis
df = DataLoader.load_parquet("data.parquet")
runner = EDARunner()
results = runner.run(data=df)
```
## Development
```bash
pip install -e . # Install for development
python -m build # Build package
python -m pytest tests/ # Run tests
```
## Documentation
- [Architecture](docs/ARCHITECTURE.md) — internals, module structure, data flow
- [Decision Records](docs/decisions/) — key design decisions and rationale
- [Examples](examples/) — usage examples and demos
## Requirements
- Python 3.9+
- pandas >= 2.0.0
- numpy >= 1.24.0
- scipy >= 1.10.0
- pyarrow >= 10.0.0 (for Parquet support)
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Contact
For questions or suggestions:
- Email: dev@lattiq.com
- GitHub: [https://github.com/lattiq/edasuite](https://github.com/lattiq/edasuite)
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | null | LattIQ Development Team <dev@lattiq.com> | null | null | MIT | data analysis, exploratory data analysis, eda | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: P... | [] | null | null | >=3.9 | [] | [] | [] | [
"pandas<3.0.0,>=2.0.0",
"numpy<2.0.0,>=1.24.0",
"scipy<2.0.0,>=1.10.0",
"pyarrow>=10.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-xdist>=3.0.0; extra == \"dev\"",
"pytest-benchmark>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0... | [] | [] | [] | [
"Homepage, https://github.com/lattiq/edasuite",
"Repository, https://github.com/lattiq/edasuite"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:10:51.967600 | edasuite-0.0.5.tar.gz | 87,697 | 70/89/4978cd8643c23e979b895a98b2041077402dbdca64cb0e11231936cfcac5/edasuite-0.0.5.tar.gz | source | sdist | null | false | 6dc119bff2239a5b703ce46a2329350f | 9e9a8597380fd1919e398cb0bf9097cae52ea0952d7c6b6fa9acf674a217ea6c | 70894978cd8643c23e979b895a98b2041077402dbdca64cb0e11231936cfcac5 | null | [
"LICENSE"
] | 235 |
2.4 | lake-flow-pipeline | 0.1.1 | Data Lake pipelines for Vector DB, RAG & AI. Ingest, process, embed, and semantic search. | # LakeFlow Backend
FastAPI backend and data pipelines for [LakeFlow](https://github.com/Lampx83/EDUAI): ingest, staging, processing, embedding, and semantic search.
---
## Overview
- **API:** FastAPI app (`lakeflow.main:app`) — auth, search, embed, pipeline trigger, Qdrant proxy, system.
- **Data Lake:** Layered zones under `LAKEFLOW_DATA_BASE_PATH`: `000_inbox` → `100_raw` → `200_staging` → `300_processed` → `400_embeddings` → `500_catalog`.
- **Vector store:** Qdrant (default collection `lakeflow_chunks`). Embeddings via sentence-transformers (e.g. `all-MiniLM-L6-v2`).
---
## Requirements
- Python ≥ 3.10
- Qdrant (e.g. Docker: `docker compose up -d qdrant`)
- See `requirements.txt` for Python dependencies
---
## Install & run
**With Docker** (from the LakeFlow repo root where `docker-compose.yml` is):
```bash
docker compose up --build
# API: http://localhost:8011
```
**Local dev** (from repo root, go to `lakeflow`):
```bash
cd lakeflow
python3 -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -r requirements.txt
pip install -e .
# Create/copy .env (repo root or lakeflow) with LAKEFLOW_DATA_BASE_PATH, QDRANT_HOST, etc.
python -m uvicorn lakeflow.main:app --reload --port 8011
```
- If you get **`bad interpreter`** (venv points to wrong Python): remove `.venv`, run `python3 -m venv .venv` again then `pip install -r requirements.txt` and `pip install -e .`.
- If you get **`Address already in use`** (port 8011 in use): free the port then restart the server — `lsof -ti :8011 | xargs kill -9`
- **Swagger:** http://localhost:8011/docs
- **ReDoc:** http://localhost:8011/redoc
- **Embed API:** [docs/API_EMBED.md](docs/API_EMBED.md) — `POST /search/embed`
---
## Pipeline steps (CLI)
Run from the **lakeflow** directory (with venv activated and `LAKEFLOW_DATA_BASE_PATH` set in `.env` or environment).
| Step | Command | Output |
|------|---------|--------|
| 0 – Inbox → Raw | `python -m lakeflow.scripts.step0_inbox` | Hash, dedup, catalog |
| 1 – Staging | `python -m lakeflow.scripts.step1_raw` | `pdf_profile.json`, `validation.json` |
| 2 – Processed | `python -m lakeflow.scripts.step2_staging` | `clean_text.txt`, `chunks.json`, `tables.json` |
| 3 – Embeddings | `python -m lakeflow.scripts.step3_processed_files` | `embeddings.npy`, `chunks_meta.json` |
| 4 – Qdrant | `python -m lakeflow.scripts.step3_processed_qdrant` | Points in Qdrant |
Or use the **Streamlit UI** (Pipeline Runner) when `LAKEFLOW_MODE=DEV`.
---
## Main APIs
- **POST /auth/login** – Demo login (e.g. `admin` / `admin123`), returns JWT.
- **POST /search/embed** – Body `{"text": "..."}` → `vector`, `embedding`, `dim`.
- **POST /search/semantic** – Body `{"query": "...", "top_k": 5, "qdrant_url": "...", "collection_name": "..."}`.
- **POST /search/qa** – RAG-style Q&A (semantic search + LLM). Optional.
- **POST /pipeline/run** – Run a pipeline step (auth required).
- **GET/POST /qdrant/** – Qdrant collections and points (proxy).
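A minimal client for the semantic search endpoint might look like this (stdlib only; payload fields taken from the list above, `semantic_search` is a hypothetical helper, not part of the package):

```python
import json
import urllib.request

BASE = "http://localhost:8011"  # default API port from the compose setup

def semantic_payload(query, top_k=5, collection="lakeflow_chunks", qdrant_url=None):
    """Request body for POST /search/semantic as described above."""
    body = {"query": query, "top_k": top_k, "collection_name": collection}
    if qdrant_url:  # optional override of the Qdrant endpoint
        body["qdrant_url"] = qdrant_url
    return body

def semantic_search(token, query, top_k=5):
    """Call /search/semantic with the JWT returned by /auth/login."""
    req = urllib.request.Request(
        f"{BASE}/search/semantic",
        data=json.dumps(semantic_payload(query, top_k)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same pattern works for `/search/embed` with a `{"text": "..."}` body.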
---
## Design notes
- **Idempotent** pipelines; deterministic UUIDs for Qdrant.
- **SQLite** without WAL (NAS-friendly).
- **No full-file load** for large files; streaming where applicable.
---
## License
Same as the root repository.
| text/markdown | LakeFlow | null | null | null | null | rag, ai, vector-db, data-lake, qdrant, embeddings | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ruff>=0.1.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"python-dotenv>=1.0.0; extra == \"full\"",
"pandas>=2.0.0; extra == \"full\"",
"numpy>=1.26.0; extra == \"full\"",
"PyPDF2>=3.0.0; extra == \"full\"",
"pdfplumber>=0.11.0; extra == \"full\"",
"python-docx>=1.1.0; extra == \"full\"",
"... | [] | [] | [] | [
"Homepage, https://lake-flow.vercel.app",
"Documentation, https://lake-flow.vercel.app/docs",
"Repository, https://github.com/Lampx83/LakeFlow"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T19:10:28.559421 | lake_flow_pipeline-0.1.1.tar.gz | 43,968 | 77/02/a760ac4a273c87a34b926787c32887d6450a56e5ca7e5facf5b7d33c21cd/lake_flow_pipeline-0.1.1.tar.gz | source | sdist | null | false | 53435ddc960a8cf9736d33d72cca3ee5 | 42d7de17f6770ab52e1fe691f6153b1d35584a0843440f91c21951fba3cf3f6a | 7702a760ac4a273c87a34b926787c32887d6450a56e5ca7e5facf5b7d33c21cd | MIT | [] | 223 |
2.4 | depictio-cli | 0.7.3 | Depictio CLI to interact with the Depictio API | <!-- markdownlint-disable MD033 MD041 -->
<p align="center">
<img src="docs/images/logo_hd.png#gh-light-mode-only" alt="Depictio logo" width="300">
<img src="docs/images/logo_hd_white.png#gh-dark-mode-only" alt="Depictio logo" width="300">
</p>
<div align="center">
<!-- markdownlint-enable MD033 -->
## Project Information
[](LICENSE)
[](https://github.com/depictio/depictio/releases)
[](https://www.python.org/)
## We rely on
[](https://fastapi.tiangolo.com/)
[](https://dash.plotly.com/)
[](https://www.mongodb.com/)
[](https://min.io/)
[](https://pydantic-docs.helpmanual.io/)
[](https://pola-rs.github.io/polars-book/)
## Container
[](https://github.com/depictio/depictio/pkgs/container/depictio)
[](https://github.com/depictio/depictio/pkgs/container/depictio)
[](https://github.com/depictio/depictio)
## Quality
[](https://github.com/astral-sh/ruff)
[](https://github.com/depictio/depictio/actions/workflows/deploy.yaml)
[](https://github.com/depictio/depictio/issues)
## Documentation
[](https://depictio.github.io/depictio-docs/latest/)
## Gitpod quickstart
[](https://gitpod.io/#https://github.com/depictio/depictio)
</div>
Depictio is a modern, interactive platform for creating dashboards from bioinformatics workflow outputs. The system is designed to run via Helm/Kubernetes or Docker Compose, and is built on top of FastAPI and Dash.
**Homepage**: [depictio.github.io](https://depictio.github.io/depictio-docs/latest/)
Copyright(c) 2023-2026 Thomas Weber <thomas.weber@embl.de> (see LICENSE)
| text/markdown | null | Thomas Weber <thomas.weber@embl.de> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"boto3==1.36.21",
"colorlog==6.9.0",
"httpx==0.28.1",
"devtools==0.12.2",
"polars-lts-cpu[deltalake,excel,numpy,pandas,pyarrow]==1.19.0",
"deltalake==0.24.0",
"python-jose==3.3.0",
"pyyaml==6.0.2",
"typer==0.16.0",
"click==8.2.0",
"tomli==2.2.1",
"rich==14.2.0",
"pydantic[email]==2.10.6",
... | [] | [] | [] | [
"Homepage, http://github.com/depictio/depictio"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:10:26.317884 | depictio_cli-0.7.3.tar.gz | 175,604 | c8/96/f03744f1277031f721571fab3f386add79e81ddd5f3ea313d9f882c9abcd/depictio_cli-0.7.3.tar.gz | source | sdist | null | false | d9a6ad80b821ae7d657596f5e3cf8cd8 | e1af1bdfde7201647b7a3be3c754e832020b82bc1c81410d760f6039091573ee | c896f03744f1277031f721571fab3f386add79e81ddd5f3ea313d9f882c9abcd | MIT | [
"LICENSE"
] | 237 |
2.4 | paylobster | 0.1.0 | PayLobster SDK — Agent-to-agent payments on Base | # PayLobster Python SDK
Agent-to-agent payments on Base. Escrow, reputation, and trust — in 3 lines of code.
```python
from paylobster import PayLobster
pl = PayLobster(private_key="0x...")
escrow = pl.escrow.create(seller="0xAgent...", amount="5.00", description="Code review")
```
## Install
```bash
pip install paylobster
```
With framework integrations:
```bash
pip install paylobster[langchain] # LangChain tools
pip install paylobster[crewai] # CrewAI tools
pip install paylobster[autogen] # AutoGen tools
```
## Quick Start
### Read-Only (No Private Key)
```python
from paylobster import PayLobster
pl = PayLobster()
# List registered agents
agents = pl.identity.list(limit=10)
for agent in agents:
print(f"{agent.name}: {agent.address}")
# Check reputation
rep = pl.reputation.get("0x...")
print(f"Score: {rep.score}, Tier: {rep.tier}") # Score: 72, Tier: silver
# Check balance
balance = pl.get_balance("0x...", token="USDC")
print(f"Balance: {balance} USDC")
```
### Full Access (With Private Key)
```python
from paylobster import PayLobster
pl = PayLobster(private_key="0x...")
# Register your agent
agent = pl.identity.register(
name="CodeReviewer",
metadata={"skills": ["python", "security"]}
)
print(f"Registered as Agent #{agent.identity_id}")
# Create an escrow (auto-funds with USDC)
escrow = pl.escrow.create(
seller="0xServiceProvider...",
amount="10.00",
description="Review PR #42"
)
print(f"Escrow #{escrow.escrow_id} created")
# After work is done, release payment
pl.escrow.release(escrow.escrow_id)
print("Payment released!")
# Check your reputation after the transaction
rep = pl.reputation.get(pl.address)
print(f"Your score: {rep.score} ({rep.tier})")
```
### LangChain Integration
```python
from paylobster.integrations.langchain import PayLobsterToolkit
from langchain.agents import create_react_agent
# Get PayLobster tools
toolkit = PayLobsterToolkit(private_key="0x...")
tools = toolkit.get_tools()
# Use with any LangChain agent (here `llm` is your configured chat model)
agent = create_react_agent(llm, tools)
result = agent.invoke({"input": "Check the reputation of 0xABC..."})
```
## Environment Variables
```bash
export PAYLOBSTER_PRIVATE_KEY="0x..." # For signing transactions
export PAYLOBSTER_API_KEY="pk_live_..." # For hosted mode (coming soon)
```
## Networks
```python
# Base Mainnet (default)
pl = PayLobster(network="base")
# Base Sepolia (testnet)
pl = PayLobster(network="base-sepolia")
# Custom RPC
pl = PayLobster(rpc_url="https://your-rpc.com")
```
## Contracts (Base Mainnet)
| Contract | Address |
| ---------- | -------------------------------------------- |
| Identity | `0xA174ee274F870631B3c330a85EBCad74120BE662` |
| Reputation | `0x02bb4132a86134684976E2a52E43D59D89E64b29` |
| Credit | `0xD9241Ce8a721Ef5fcCAc5A11983addC526eC80E1` |
| Escrow V3 | `0x49EdEe04c78B7FeD5248A20706c7a6c540748806` |
| USDC | `0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913` |
## Links
- **Website:** https://paylobster.com
- **Docs:** https://paylobster.com/docs
- **npm SDK:** `npm install pay-lobster`
- **CLI:** `npm install -g @paylobster/cli`
- **MCP Server:** https://paylobster.com/mcp/mcp
- **GitHub:** https://github.com/itsGustav/PayLobster
## License
MIT
| text/markdown | null | Gustav Intelligence <hello@paylobster.com> | null | null | null | agents, ai, base, blockchain, escrow, payments, reputation, x402 | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: P... | [] | null | null | >=3.9 | [] | [] | [] | [
"eth-account>=0.9.0",
"httpx>=0.24.0",
"pydantic>=2.0.0",
"web3>=6.0.0",
"pyautogen>=0.2.0; extra == \"autogen\"",
"crewai>=0.40.0; extra == \"crewai\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"langchain-core>=0.2.0; extra == \"l... | [] | [] | [] | [
"Homepage, https://paylobster.com",
"Documentation, https://paylobster.com/docs",
"Repository, https://github.com/itsGustav/PayLobster",
"Issues, https://github.com/itsGustav/PayLobster/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T19:10:12.183115 | paylobster-0.1.0.tar.gz | 15,829 | f9/27/cbd30a96c4db3d95522dc6a4589c2766f4716ebbc7800d95cf61642e106c/paylobster-0.1.0.tar.gz | source | sdist | null | false | b3b5113cccebed8734cde939f3e75fef | a96bcef40c5d4cd47019326a3cb39acc221ac04aabee19a71c8052d34f0822d5 | f927cbd30a96c4db3d95522dc6a4589c2766f4716ebbc7800d95cf61642e106c | MIT | [] | 237 |
2.4 | instamatic | 2.2.1 | Python program for automated electron diffraction data collection | [](https://github.com/instamatic-dev/instamatic/actions/workflows/test.yml)
[](https://app.readthedocs.org/projects/instamatic/builds/)
[](https://pypi.org/project/instamatic/)
[](https://pypi.org/project/instamatic/)
[](https://doi.org/10.5281/zenodo.1090388)

# Instamatic
Instamatic is a Python program developed to automate the collection of electron diffraction data. At its core is a Python library for transmission electron microscope experimental control, with bindings for JEOL/FEI microscopes and interfaces to the ASI/TVIPS/Gatan cameras. Routines have been implemented for collecting serial electron diffraction (serialED), continuous rotation electron diffraction (cRED, aka 3D-ED / microED), and stepwise rotation electron diffraction (RED) data. For streaming cameras, Instamatic includes a live-view GUI.
Instamatic is distributed via [pypi](https://pypi.org/project/instamatic) and https://github.com/instamatic-dev/instamatic/releases. However, the most up-to-date version of the code (including bugs!) is available from this repository.
Electron microscopes supported:
- JEOL microscopes with the TEMCOM library
- FEI microscopes via the scripting interface
Cameras supported:
- ASI Timepix
- ASI CheeTah through `serval-toolkit` library
- TVIPS cameras through EMMENU4 API
- Quantum Detectors MerlinEM
- Gatan cameras through FEI scripting interface
- (Gatan cameras through DM plugin [1])
Instamatic has been developed on a JEOL-2100 with a Timepix camera, and a JEOL-1400 and JEOL-3200 with TVIPS cameras (XF416/F416).
See [instamatic-dev/instamatic-tecnai-server](https://github.com/instamatic-dev/instamatic-tecnai-server) for a TEM interface to control a FEI Tecnai or FEI Titan TEM and associated cameras on Windows XP/Python 3.4 via instamatic.
[1]: Support for Gatan cameras is somewhat underdeveloped. As an alternative, a DigitalMicrograph script for collecting cRED data on a OneView camera (or any other Gatan camera) can be found [here](https://github.com/instamatic-dev/InsteaDMatic).
## Installation
If you use conda, create a new environment:
```
conda create -n instamatic python=3.11
conda activate instamatic
```
Install using pip (works with Python 3.9 or newer):
```bash
pip install instamatic
```
## OS requirement
The package requires Windows 7 or higher, under which it has been mainly developed and tested.
## Package dependencies
Check [pyproject.toml](pyproject.toml) for the full dependency list and versions.
## Documentation
See [the documentation](https://instamatic.readthedocs.io) for how to set up and use Instamatic.
## Reference
If you found `Instamatic` useful, please consider citing it or one of the references below.
Each software release is archived on [Zenodo](https://zenodo.org), which provides a DOI for the project and each release. The project DOI [10.5281/zenodo.1090388](https://doi.org/10.5281/zenodo.1090388) will always resolve to the latest archive, which contains all the information needed to cite the release.
Alternatively, some of the methods implemented in `Instamatic` are described in:
- B. Wang, X. Zou, and S. Smeets, [Automated serial rotation electron diffraction combined with cluster analysis: an efficient multi-crystal workflow for structure determination](https://doi.org/10.1107/S2052252519007681), IUCrJ (2019). 6, 854-867
- B. Wang, [Development of rotation electron diffraction as a fully automated and accurate method for structure determination](http://www.diva-portal.org/smash/record.jsf?pid=diva2:1306254). PhD thesis (2019), Dept. of Materials and Environmental Chemistry (MMK), Stockholm University
- M.O. Cichocka, J. Ångström, B. Wang, X. Zou, and S. Smeets, [High-throughput continuous rotation electron diffraction data acquisition via software automation](http://dx.doi.org/10.1107/S1600576718015145), J. Appl. Cryst. (2018). 51, 1652–1661
- S. Smeets, X. Zou, and W. Wan, [Serial electron crystallography for structure determination and phase analysis of nanocrystalline materials](http://dx.doi.org/10.1107/S1600576718009500), J. Appl. Cryst. (2018). 51, 1262–1273
## Maintenance
- 2025-now: [@Baharis](https://github.com/Baharis)
- 2015-2025: [@stefsmeets](https://github.com/stefsmeets)
| text/markdown | null | Stef Smeets <s.smeets@esciencecenter.nl> | null | Daniel Mariusz Tchoń <tchon@fzu.cz> | BSD License | electron-crystallography, electron-microscopy, electron-diffraction, serial-crystallography, 3D-electron-diffraction, micro-ed, data-collection, automation | [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Lice... | [] | null | null | >=3.9 | [] | [] | [] | [
"comtypes>=1.1.7; sys_platform == \"win32\"",
"h5py>=2.10.0",
"ipython>=7.11.1",
"lmfit>=1.0.0",
"matplotlib>=3.1.2",
"mrcfile>=1.1.2",
"numpy>=1.17.3",
"pandas>=1.0.0",
"pillow>=7.0.0",
"pywinauto>=0.6.8; sys_platform == \"windows\"",
"pyyaml>=5.3",
"scikit-image>=0.17.1",
"scipy>=1.3.2",
... | [] | [] | [] | [
"homepage, https://github.com/instamatic-dev/instamatic",
"issues, http://github.com/instamatic-dev/instamatic/issues",
"documentation, https://instamatic.readthedocs.io",
"changelog, https://github.com/instamatic-dev/instamatic/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:10:05.397048 | instamatic-2.2.1.tar.gz | 3,372,378 | 8a/71/1d63757613a2f65c9e6a6b0bdad15f77b6289678c56e107f15cb7699f0f7/instamatic-2.2.1.tar.gz | source | sdist | null | false | 018315abb88a2c443587b9636e33e7db | f69c0b1782257192767661e4b94286c108c1659f0662b0a18fe9e8157c0cb120 | 8a711d63757613a2f65c9e6a6b0bdad15f77b6289678c56e107f15cb7699f0f7 | null | [
"LICENCE"
] | 239 |
2.4 | robotframework-okw-remote-ssh | 0.3.0 | Robot Framework keyword library for deterministic remote command execution and SFTP file transfer via SSH. | # robotframework-okw-remote-ssh
Standalone Robot Framework library for deterministic, synchronous remote interaction via SSH.
Session-based command execution, structured verification (stdout, stderr, exit code, duration),
and SFTP file transfer. Designed for CI pipelines, infrastructure validation, and cross-platform
automation (Linux, macOS, Windows with OpenSSH).
**[Keyword Documentation (Libdoc)](https://hrabovszki1023.github.io/robotframework-okw-remote-ssh/RemoteSshLibrary.html)**
## Features
- **Session-based** SSH connections via Paramiko
- **Strict separation** of execution and verification
- **Command queuing**: `Set Remote` collects commands, `Execute Remote` sends them in one SSH call (preserves shell context)
- **Three match modes**: EXACT, WCM (wildcard: `*`, `?`), REGX (regex)
- **SFTP file transfer**: upload, download, verify, clear, remove (files and directories)
- **OKW token support**: `$IGNORE` (skip), `$EMPTY` (assert empty)
- **Value expansion**: `$MEM{KEY}` placeholders across all parameters
- **No GUI coupling**, no dependency on OKW Core
## Three-Phase Model
All keywords follow a fixed interplay:
| Phase | Keywords | Purpose |
|-------|----------|---------|
| **Prepare** | `Set Remote` | Collect commands (no SSH call) |
| **Execute** | `Execute Remote`, `Execute Remote And Continue` | Send the commands, store the result |
| **Verify** | `Verify Remote Response`, `Verify Remote Stderr`, `Verify Remote Exit Code`, ... | Evaluate the stored result |
> **Note:** *Prepare* is optional – `Execute Remote` can also be called directly with a command.
> If several `Set Remote` commands have been collected, `Execute Remote` joins them with `&&` and sends them as **one** SSH call.
> This preserves the shell context (working directory, variables).
## Installation
```bash
pip install robotframework-okw-remote-ssh
```
## Quick Start
```robot
*** Settings ***
Library robotframework_okw_remote_ssh.RemoteSshLibrary
*** Test Cases ***
Single Command
Open Remote Session myhost my_server
Execute Remote myhost echo Hello
Verify Remote Response myhost Hello
Close Remote Session myhost
Multi Command With Context
Open Remote Session myhost my_server
Set Remote myhost cd /opt/app
Set Remote myhost ls -la
Execute Remote myhost
Verify Remote Response WCM myhost *app*
Close Remote Session myhost
Tolerate Expected Errors
Open Remote Session myhost my_server
Execute Remote And Continue myhost cat /no/such/file
Verify Remote Exit Code myhost 1
Verify Remote Stderr WCM myhost *No such file*
Close Remote Session myhost
```
## Session Configuration
Sessions are configured via YAML files in `remotes/` (or a custom config directory).
Example `remotes/my_server.yaml`:
```yaml
host: 10.0.0.1
port: 22
username: testuser
auth:
type: password
secret_id: "my_server/testuser"
```
Passwords are stored separately in `~/.okw/secrets.yaml` (never in the repository).
## Keywords
### Session Lifecycle
| Keyword | Description |
|---------|-------------|
| `Open Remote Session` | Opens a named SSH session using a YAML config reference |
| `Close Remote Session` | Closes the session and releases all resources |
### Execution
| Keyword | Description |
|---------|-------------|
| `Set Remote` | Queues a command for later execution (no SSH call). Supports `$MEM{KEY}` expansion. |
| `Execute Remote` | With command: executes immediately. Without: joins all queued `Set Remote` commands with `&&` and executes. FAIL on `exit_code != 0`. |
| `Execute Remote And Continue` | Same as `Execute Remote`, but never fails on nonzero exit code. |
### Verification -- stdout
| Keyword | Description |
|---------|-------------|
| `Verify Remote Response` | EXACT match on stdout |
| `Verify Remote Response WCM` | Wildcard match on stdout (`*` = any chars, `?` = one char) |
| `Verify Remote Response REGX` | Regex match on stdout |
### Verification -- stderr
| Keyword | Default | Description |
|---------|---------|-------------|
| `Verify Remote Stderr` | `$EMPTY` | EXACT match on stderr. Without argument: asserts empty |
| `Verify Remote Stderr WCM` | `$EMPTY` | Wildcard match on stderr |
| `Verify Remote Stderr REGX` | `$EMPTY` | Regex match on stderr |
### Verification -- exit code / duration
| Keyword | Description |
|---------|-------------|
| `Verify Remote Exit Code` | Numeric exact compare |
| `Verify Remote Duration` | Expression check: `>`, `>=`, `<`, `<=`, `==`, range `a..b` |
### Memorize
| Keyword | Description |
|---------|-------------|
| `Memorize Remote Response Field` | Stores a response field (`stdout`, `stderr`, `exit_code`, `duration_ms`) in `$MEM{KEY}` for later use |
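As a sketch (assuming a session named `myhost` configured as in the Quick Start; the keyword argument order shown here is an assumption), a memorized field can feed a later command via `$MEM{KEY}` expansion:

```robot
*** Test Cases ***
Reuse A Memorized Value
    Open Remote Session    myhost    my_server
    Execute Remote    myhost    hostname
    # Store stdout under the key HOST for later $MEM{HOST} expansion
    Memorize Remote Response Field    myhost    stdout    HOST
    Execute Remote    myhost    echo Connected to $MEM{HOST}
    Verify Remote Response WCM    myhost    Connected to *
    Close Remote Session    myhost
```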
### File Transfer -- Upload / Download
| Keyword | Description |
|---------|-------------|
| `Put Remote File` | Uploads a file via SFTP |
| `Get Remote File` | Downloads a file via SFTP |
| `Put Remote Directory` | Recursively uploads a directory via SFTP |
| `Get Remote Directory` | Recursively downloads a directory via SFTP |
### File Transfer -- Verify
| Keyword | Default | Description |
|---------|---------|-------------|
| `Verify Remote File Exists` | `YES` | Asserts file exists (`YES`) or does not exist (`NO`) |
| `Verify Remote Directory Exists` | `YES` | Asserts directory exists (`YES`) or does not exist (`NO`) |
The expected parameter accepts `YES`/`NO`, `TRUE`/`FALSE`, or `1`/`0` (case-insensitive).
### File Transfer -- Clear
| Keyword | Description |
|---------|-------------|
| `Clear Remote Directory` | Deletes files in the directory (not in subdirectories), keeps directory structure |
| `Clear Remote Directory Recursively` | Deletes all files recursively, keeps entire directory tree |
### File Transfer -- Remove (idempotent)
All remove keywords are **idempotent**: if the target does not exist, they return PASS.
| Keyword | Description |
|---------|-------------|
| `Remove Remote File` | Removes a single file |
| `Remove Remote Directory` | Removes an empty directory |
| `Remove Remote Directory Recursively` | Removes a directory and all its contents |
## OKW Token Support
| Token | Behavior |
|-------|----------|
| `$IGNORE` | Keyword becomes a no-op (PASS). Execution/verification/transfer is skipped. |
| `$EMPTY` | For verify keywords: asserts that the checked field is empty. |
## AI-Assisted Test Generation
Test cases can be generated with any AI (Claude, ChatGPT, Copilot, ...).
The system prompt for this can be found at [`prompts/okw-testgenerator.md`](prompts/okw-testgenerator.md).
Simply copy the prompt into the AI and describe in natural language what should be tested.
The AI produces a ready-to-use `.robot` file.
## License
AGPL-3.0-or-later. See [LICENSE](LICENSE) for details.
| text/markdown | Zoltán Hrabovszki | null | null | null | null | robotframework, testing, testautomation, ssh, sftp, okw | [
"Development Status :: 4 - Beta",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Langua... | [] | null | null | >=3.9 | [] | [] | [] | [
"robotframework>=6.0",
"paramiko>=3.4.0",
"PyYAML>=6.0",
"okw-contract-utils>=0.2.0",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/Hrabovszki1023/robotframework-okw-remote-ssh",
"Homepage, https://github.com/Hrabovszki1023/robotframework-okw-remote-ssh",
"Documentation, https://github.com/Hrabovszki1023/robotframework-okw-remote-ssh/tree/main/docs"
] | twine/6.2.0 CPython/3.11.7 | 2026-02-18T19:08:51.342999 | robotframework_okw_remote_ssh-0.3.0.tar.gz | 33,706 | 97/72/eb5e1df00a815ca56f68292011bdcbeb703deb825cf664ae049fc11a201e/robotframework_okw_remote_ssh-0.3.0.tar.gz | source | sdist | null | false | ca7cd12dcef4135120f7507dfbb04e1d | 5ad7e56735c6a85318be9b5b740f7fabdfd8c64f16a9c35b439f2108a572bbf7 | 9772eb5e1df00a815ca56f68292011bdcbeb703deb825cf664ae049fc11a201e | AGPL-3.0-or-later | [
"LICENSE"
] | 213 |
2.4 | llama-index | 0.14.15 | Interface between LLMs and your data | # 🗂️ LlamaIndex 🦙
[](https://pypi.org/project/llama-index/)
[](https://github.com/run-llama/llama_index/actions/workflows/build_package.yml)
[](https://github.com/jerryjliu/llama_index/graphs/contributors)
[](https://discord.gg/dGcwcsnxhU)
[](https://x.com/llama_index)
[](https://www.reddit.com/r/LlamaIndex/)
[](https://www.phorm.ai/query?projectId=c5863b56-6703-4a5d-87b6-7e6031bf16b6)
LlamaIndex (GPT Index) is a data framework for your LLM application. Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins). There are two ways to start building with LlamaIndex in
Python:
1. **Starter**: [`llama-index`](https://pypi.org/project/llama-index/). A starter Python package that includes core LlamaIndex as well as a selection of integrations.
2. **Customized**: [`llama-index-core`](https://pypi.org/project/llama-index-core/). Install core LlamaIndex and add your chosen LlamaIndex integration packages on [LlamaHub](https://llamahub.ai/)
that are required for your application. There are over 300 LlamaIndex integration
packages that work seamlessly with core, allowing you to build with your preferred
LLM, embedding, and vector store providers.
The LlamaIndex Python library is namespaced such that import statements which
include `core` imply that the core package is being used. In contrast, those
statements without `core` imply that an integration package is being used.
```python
# typical pattern
from llama_index.core.xxx import ClassABC # core submodule xxx
from llama_index.xxx.yyy import (
SubclassABC,
) # integration yyy for submodule xxx
# concrete example
from llama_index.core.llms import LLM
from llama_index.llms.openai import OpenAI
```
### Important Links
LlamaIndex.TS [(Typescript/Javascript)](https://github.com/run-llama/LlamaIndexTS)
[Documentation](https://docs.llamaindex.ai/en/stable/)
[X (formerly Twitter)](https://x.com/llama_index)
[LinkedIn](https://www.linkedin.com/company/llamaindex/)
[Reddit](https://www.reddit.com/r/LlamaIndex/)
[Discord](https://discord.gg/dGcwcsnxhU)
### Ecosystem
- LlamaHub [(community library of data loaders)](https://llamahub.ai)
- LlamaLab [(cutting-edge AGI projects using LlamaIndex)](https://github.com/run-llama/llama-lab)
## 🚀 Overview
**NOTE**: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!
### Context
- LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
- How do we best augment LLMs with our own private data?
We need a comprehensive toolkit to help perform this data augmentation for LLMs.
### Proposed Solution
That's where **LlamaIndex** comes in. LlamaIndex is a "data framework" to help you build LLM apps. It provides the following tools:
- Offers **data connectors** to ingest your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.).
- Provides ways to **structure your data** (indices, graphs) so that this data can be easily used with LLMs.
- Provides an **advanced retrieval/query interface over your data**: Feed in any LLM input prompt, get back retrieved context and knowledge-augmented output.
- Allows easy integrations with your outer application framework (e.g. with LangChain, Flask, Docker, ChatGPT, or anything else).
LlamaIndex provides tools for both beginner users and advanced users. Our high-level API allows beginner users to use LlamaIndex to ingest and query their data in
5 lines of code. Our lower-level APIs allow advanced users to customize and extend any module (data connectors, indices, retrievers, query engines, reranking modules),
to fit their needs.
## 💡 Contributing
Interested in contributing? Contributions to LlamaIndex core as well as contributing
integrations that build on the core are both accepted and highly encouraged! See our [Contribution Guide](CONTRIBUTING.md) for more details.
New integrations should meaningfully integrate with existing LlamaIndex framework components. At the discretion of LlamaIndex maintainers, some integrations may be declined.
## 📄 Documentation
Full documentation can be found [here](https://docs.llamaindex.ai/en/latest/)
Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!
## 💻 Example Usage
```sh
# custom selection of integrations to work with core
pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-llms-replicate
pip install llama-index-embeddings-huggingface
```
Examples are in the `docs/examples` folder. Indices are in the `indices` folder.
To build a simple vector store index using OpenAI:
```python
import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)
```
To build a simple vector store index using non-OpenAI LLMs, e.g. Llama 2 hosted on [Replicate](https://replicate.com/), where you can easily create a free trial API token:
```python
import os
os.environ["REPLICATE_API_TOKEN"] = "YOUR_REPLICATE_API_TOKEN"
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.replicate import Replicate
from transformers import AutoTokenizer
# set the LLM
llama2_7b_chat = "meta/llama-2-7b-chat:8e6975e5ed6174911a6ff3d60540dfd4844201974602551e10e9e87ab143d81e"
Settings.llm = Replicate(
model=llama2_7b_chat,
temperature=0.01,
additional_kwargs={"top_p": 1, "max_new_tokens": 300},
)
# set tokenizer to match LLM
Settings.tokenizer = AutoTokenizer.from_pretrained(
"NousResearch/Llama-2-7b-chat-hf"
)
# set the embed model
Settings.embed_model = HuggingFaceEmbedding(
model_name="BAAI/bge-small-en-v1.5"
)
documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(
documents,
)
```
To query:
```python
query_engine = index.as_query_engine()
query_engine.query("YOUR_QUESTION")
```
By default, data is stored in-memory.
To persist to disk (under `./storage`):
```python
index.storage_context.persist()
```
To reload from disk:
```python
from llama_index.core import StorageContext, load_index_from_storage
# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="./storage")
# load index
index = load_index_from_storage(storage_context)
```
## 🔧 Dependencies
We use poetry as the package manager for all Python packages. As a result, the
dependencies of each Python package can be found by referencing the `pyproject.toml`
file in each of the package's folders.
```bash
cd <desired-package-folder>
pip install poetry
poetry install --with dev
```
## A note on Verification of Build Assets
By default, `llama-index-core` includes a `_static` folder that contains the nltk and tiktoken cache that is included with the package installation. This ensures that you can easily run `llama-index` in environments with restrictive disk access permissions at runtime.
To verify that these files are safe and valid, we use the github `attest-build-provenance` action. This action will verify that the files in the `_static` folder are the same as the files in the `llama-index-core/llama_index/core/_static` folder.
To verify this, you can run the following script (pointing to your installed package):
```bash
#!/bin/bash
STATIC_DIR="venv/lib/python3.13/site-packages/llama_index/core/_static"
REPO="run-llama/llama_index"
find "$STATIC_DIR" -type f | while read -r file; do
echo "Verifying: $file"
gh attestation verify "$file" -R "$REPO" || echo "Failed to verify: $file"
done
```
## 📖 Citation
Reference to cite if you use LlamaIndex in a paper:
```
@software{Liu_LlamaIndex_2022,
author = {Liu, Jerry},
doi = {10.5281/zenodo.1234},
month = {11},
title = {{LlamaIndex}},
url = {https://github.com/jerryjliu/llama_index},
year = {2022}
}
```
| text/markdown | null | Jerry Liu <jerry@llamaindex.ai> | null | Andrei Fajardo <andrei@runllama.ai>, Haotian Zhang <ht@runllama.ai>, Jerry Liu <jerry@llamaindex.ai>, Logan Markewich <logan@llamaindex.ai>, Simon Suo <simon@llamaindex.ai>, Sourabh Desai <sourabh@llamaindex.ai> | null | LLM, NLP, RAG, data, devtools, index, retrieval | [
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"llama-index-cli<0.6,>=0.5.0; python_version > \"3.9\"",
"llama-index-core<0.15.0,>=0.14.15",
"llama-index-embeddings-openai<0.6,>=0.5.0",
"llama-index-indices-managed-llama-cloud>=0.4.0",
"llama-index-llms-openai<0.7,>=0.6.0",
"llama-index-readers-file<0.6,>=0.5.0",
"llama-index-readers-llama-parse>=0.... | [] | [] | [] | [
"Documentation, https://docs.llamaindex.ai/en/stable/",
"Homepage, https://llamaindex.ai",
"Repository, https://github.com/run-llama/llama_index"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T19:06:39.540503 | llama_index-0.14.15-py3-none-any.whl | 7,264 | 02/94/b338e8985313e6e3a5321638f3d7d457310da6cb4ab1298eea3b323cb06c/llama_index-0.14.15-py3-none-any.whl | py3 | bdist_wheel | null | false | 62947dc1f799641929f3fa47b6446552 | 469bf8ff77a445dbf402ed08978a0c8ebf59d40fcd15d289e07e5791e0513cea | 0294b338e8985313e6e3a5321638f3d7d457310da6cb4ab1298eea3b323cb06c | MIT | [
"LICENSE"
] | 69,576 |
2.4 | llama-index-core | 0.14.15 | Interface between LLMs and your data | # LlamaIndex Core
The core python package to the LlamaIndex library. Core classes and abstractions
represent the foundational building blocks for LLM applications, most notably,
RAG. Such building blocks include abstractions for LLMs, Vector Stores, Embeddings,
Storage, Callables and several others.
We've designed the core library so that it can be easily extended through subclasses.
Building LLM applications with LlamaIndex thus involves building with LlamaIndex
core as well as with the LlamaIndex [integrations](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations) needed for your application.
| text/markdown | null | Jerry Liu <jerry@llamaindex.ai> | null | Andrei Fajardo <andrei@runllama.ai>, Haotian Zhang <ht@runllama.ai>, Jerry Liu <jerry@llamaindex.ai>, Logan Markewich <logan@llamaindex.ai>, Simon Suo <simon@llamaindex.ai>, Sourabh Desai <sourabh@llamaindex.ai> | null | LLM, NLP, RAG, data, devtools, index, retrieval | [
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"aiohttp<4,>=3.8.6",
"aiosqlite",
"banks<3,>=2.3.0",
"dataclasses-json",
"deprecated>=1.2.9.3",
"dirtyjson<2,>=1.0.8",
"eval-type-backport<0.3,>=0.2.0; python_version < \"3.10\"",
"filetype<2,>=1.2.0",
"fsspec>=2023.5.0",
"httpx",
"llama-index-workflows!=2.9.0,<3,>=2",
"nest-asyncio<2,>=1.5.8"... | [] | [] | [] | [
"Homepage, https://llamaindex.ai",
"Repository, https://github.com/run-llama/llama_index",
"Documentation, https://docs.llamaindex.ai/en/stable/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T19:05:48.274495 | llama_index_core-0.14.15.tar.gz | 11,593,505 | 0c/4f/7c714bdf94dd229707b43e7f8cedf3aed0a99938fd46a9ad8a418c199988/llama_index_core-0.14.15.tar.gz | source | sdist | null | false | c22064b54a111dee654a10f60b43d6bf | 3766aeeb95921b3a2af8c2a51d844f75f404215336e1639098e3652db52c68ce | 0c4f7c714bdf94dd229707b43e7f8cedf3aed0a99938fd46a9ad8a418c199988 | MIT | [
"LICENSE"
] | 138,972 |
2.4 | goodseed | 0.2.6 | ML experiment tracker | # GoodSeed
ML experiment tracker. Logs metrics and configs to local SQLite files, serves them via a built-in HTTP server, and visualizes them in the browser.
Full documentation at [goodseed.ai/docs](https://goodseed.ai/docs/).
## Install
```bash
pip install goodseed
```
Python 3.9+ required. No runtime dependencies.
For development:
```bash
pip install -e ".[dev]"
```
## Quick Start
Log metrics and configs from a training script:
```python
import goodseed
run = goodseed.Run(experiment_name="my-experiment")
run.log_configs({"learning_rate": 0.001, "batch_size": 32})
for step in range(100):
loss = train_step()
run.log_metrics({"loss": loss}, step=step)
run.close()
```
Your data is saved to a local SQLite file. You can also use `with goodseed.Run(...) as run:` to close the run automatically.
Then view your runs:
```bash
goodseed serve
```
Open the printed link in your browser to see your runs, metrics, and configs.
## Coming from Neptune?
You can export your data from [neptune.ai](https://neptune.ai) and import it into GoodSeed using [neptune-exporter](https://github.com/neptune-ai/neptune-exporter). See the [migration guide](https://docs.neptune.ai/transition_hub/migration/to_goodseed) for details.
## Configuration
| Variable | Description |
|----------|-------------|
| `GOODSEED_HOME` | Data directory (default: `~/.goodseed`) |
| `GOODSEED_PROJECT` | Default project name (default: `default`) |
## CLI
```bash
goodseed # Start the server (default command)
goodseed serve [dir] # Start the server, optionally from a specific directory
goodseed serve --port 9000 # Use a custom port
goodseed list # List projects
goodseed list -p default # List runs in a project
```
## Tests
```bash
pip install -e ".[dev]"
pytest tests/ -v
```
See [DOCS.md](DOCS.md) for architecture details and API reference.
| text/markdown | Goodseed Team | null | null | null | null | experiment-tracking, machine-learning, mlops | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25.0; extra == \"dev\"",
"pytest-timeout>=2.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"scikit-learn>=1.0.0; extra == \"examples\"",
"torch>=2.0.0; extra == \"examples\""
] | [] | [] | [] | [
"Homepage, https://goodseed.ai/",
"Repository, https://github.com/kripner/goodseed",
"Issues, https://github.com/kripner/goodseed/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T19:05:20.150904 | goodseed-0.2.6.tar.gz | 68,301 | 7a/8b/d0291fb32e3e33f68caeb71e949d1879d9679f6d1d85e8e4f83f729c7563/goodseed-0.2.6.tar.gz | source | sdist | null | false | 702f94ad365d88cd564b8c60fd480a55 | f38901159519f491b7480a6edff98ec24c6940a19169735eb88aced5aa162b3a | 7a8bd0291fb32e3e33f68caeb71e949d1879d9679f6d1d85e8e4f83f729c7563 | MIT | [
"LICENSE"
] | 232 |
2.4 | python-jsonrpc-lib | 0.3.1 | Simple, yet solid - Type-safe JSON-RPC 1.0/2.0 with OpenAPI support | # python-jsonrpc-lib
**Simple, yet solid.** JSON-RPC 1.0/2.0 for Python.
JSON-RPC is a small protocol: a method name, some parameters, a result. python-jsonrpc-lib keeps it that way. You write ordinary Python functions and dataclasses; the library handles validation, routing, error responses, and API documentation. No framework lock-in, no external dependencies, no boilerplate.
## Install
```bash
pip install python-jsonrpc-lib
```
## Quickstart
Define methods as classes with typed parameters. The library validates inputs, routes calls, and builds responses automatically.
```python
from dataclasses import dataclass
from jsonrpc import JSONRPC, Method, MethodGroup
@dataclass
class AddParams:
a: int
b: int
class Add(Method):
def execute(self, params: AddParams) -> int:
return params.a + params.b
@dataclass
class GreetParams:
name: str
greeting: str = 'Hello'
class Greet(Method):
def execute(self, params: GreetParams) -> str:
return f'{params.greeting}, {params.name}!'
rpc = JSONRPC(version='2.0')
rpc.register('add', Add())
rpc.register('greet', Greet())
response = rpc.handle('{"jsonrpc": "2.0", "method": "add", "params": {"a": 5, "b": 3}, "id": 1}')
# '{"jsonrpc": "2.0", "result": 8, "id": 1}'
```
Pass in a JSON string, get a JSON string back. What carries it over the wire is up to you.
If `a` is `"five"` instead of `5`, the caller receives a `-32602 Invalid params` error immediately — no exception handling on your end.
The same `AddParams` dataclass drives validation, IDE autocomplete, and the OpenAPI schema.
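For context, the `-32602` error above is defined by the JSON-RPC 2.0 specification itself. A stdlib-only sketch of the envelope the caller receives (this builds the response by hand for illustration and does not use python-jsonrpc-lib):

```python
import json

# A request whose "a" parameter has the wrong type
request = json.loads(
    '{"jsonrpc": "2.0", "method": "add", "params": {"a": "five", "b": 3}, "id": 1}'
)

# The spec-defined error response (JSON-RPC 2.0, section 5.1): an "error"
# object with "code" and "message", echoing back the request "id"
response = json.dumps({
    "jsonrpc": "2.0",
    "error": {"code": -32602, "message": "Invalid params"},
    "id": request["id"],
})
print(response)
```

The library produces this envelope for you; the sketch only shows its shape.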
## Why python-jsonrpc-lib?
- **Zero dependencies** — pure Python 3.11+. Nothing to pin, nothing to audit beyond the library itself.
- **Type validation from dataclasses** — declare parameters as a dataclass, get automatic validation and clear error messages for free.
- **OpenAPI docs auto-generated** — type hints and docstrings you already wrote become a full OpenAPI 3.0 spec. Point any Swagger-compatible UI at it and your API is self-documented.
- **Transport-agnostic** — `rpc.handle(json_string)` returns a string. HTTP, WebSocket, TCP, message queue: your choice.
- **Spec-compliant by default** — v1.0 and v2.0 rules enforced out of the box, configurable when you need to support legacy clients.
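The idea behind dataclass-driven validation can be illustrated with the stdlib alone. This sketch is not the library's actual implementation, just the mechanism: walk the declared fields and compare each incoming value against its annotation.

```python
from dataclasses import dataclass, fields

@dataclass
class AddParams:
    a: int
    b: int

def invalid_fields(cls, raw):
    """Return the names of params whose values do not match their annotations."""
    return [f.name for f in fields(cls) if not isinstance(raw.get(f.name), f.type)]

print(invalid_fields(AddParams, {"a": "five", "b": 3}))  # ['a']
print(invalid_fields(AddParams, {"a": 5, "b": 3}))       # []
```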
## Namespacing and Middleware
Use `MethodGroup` to organize methods into namespaces and add cross-cutting concerns:
```python
math = MethodGroup()
math.register('add', Add())
rpc = JSONRPC(version='2.0')
rpc.register('math', math)
# "math.add" is now available
```
## Quick Prototyping
For scripts and throwaway code, the `@rpc.method` decorator registers functions directly (v2.0 only):
```python
rpc = JSONRPC(version='2.0')
@rpc.method
def add(a: int, b: int) -> int:
return a + b
```
For production use, prefer `Method` classes — they support context, middleware, and groups.
## Documentation
Full documentation with tutorials, integration guides, and API reference:
- [Tutorial: Hello World](docs/tutorial/01-hello-world.md) — first method, explained
- [Tutorial: Parameters](docs/tutorial/03-parameters.md) — dataclass validation in detail
- [Tutorial: Context](docs/tutorial/05-context.md) — authentication and per-request data
- [Tutorial: OpenAPI](docs/tutorial/07-openapi.md) — interactive API documentation
- [Flask integration](docs/integrations/flask.md)
- [FastAPI integration](docs/integrations/fastapi.md)
- [Philosophy](docs/philosophy.md) — design decisions and trade-offs
- [API Reference](docs/api-reference.md)
## Claude Code Integration
If you use [Claude Code](https://claude.ai/claude-code), a skill for this library is available. It gives Claude built-in knowledge of jsonrpc-lib's API: creating methods, registering them, organizing with groups, handling errors, and adding context and middleware — without having to look up docs.
To use it, add the skill file to your project's `.claude/skills/` directory.
## License
MIT
| text/markdown | Andy Smith | null | null | null | MIT | dataclass, json-rpc, jsonrpc, protocol, rpc, validation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Pytho... | [] | null | null | >=3.11 | [] | [] | [] | [
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.15.0; extra == \"dev\"",
"mkdocs-material>=9.5.0; extra == \"docs\"",
"mkdocs>=1.5.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/uandysmith/python-jsonrpc-lib",
"Documentation, https://uandysmith.github.io/python-jsonrpc-lib/",
"Repository, https://github.com/uandysmith/python-jsonrpc-lib",
"Issues, https://github.com/uandysmith/python-jsonrpc-lib/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T19:05:00.838927 | python_jsonrpc_lib-0.3.1.tar.gz | 96,188 | 97/7a/3a5074bb1aa5e9220a0fec0cc1a43dbe760cab857dd93d09eed5cb7ddcd6/python_jsonrpc_lib-0.3.1.tar.gz | source | sdist | null | false | 8ccf2e25c7342c3d3278cc2be790f5b0 | ce0713509ce16c4728f7bc20455db029801a4d6e7578c8d5680e15f92393b270 | 977a3a5074bb1aa5e9220a0fec0cc1a43dbe760cab857dd93d09eed5cb7ddcd6 | null | [
"LICENSE"
] | 231 |
2.1 | blog-coeur | 0.0.19 | Coeur - static site management | # Coeur - static site management
Coeur is just another Python tool to generate static sites with thousands of pages, with a bit of extra power.
## Name explanation
"cœur" is a French word for "core" or "heart".
This tool is called Coeur for two personal reasons: 1. it will be the core of my blog-farm; 2. one of my favorite places is the Sacre ***Cœur*** in Paris.
## How to install
Coeur is a standard Python package; you can install it from PyPI:
```sh
pip install --user blog-coeur
```
The command is `blog-coeur`; run it with `--help` to get started:
```sh
blog-coeur --help
```
## Module SSG - Static Site Generator
The `ssg` module in **Coeur** allows you to create a static website from a `sqlite` database and import content from markdown files.
### How to Use
#### Create Your Coeur App
To start a new project, run the `create` command:
```
blog-coeur ssg create my-blog
```
#### Development Server
Run a simple local server to build your blog:
```
blog-coeur ssg server --max-posts=1000 --port=8081
```
#### Admin Web Panel
Manage your blog’s posts through a web dashboard (static HTML + REST API):
```
blog-coeur ssg admin
```
Go to: `http://localhost:8000/`
**Admin module** — The admin is a single-page app served by the same process. It lets you:
- **Search / list posts:** Choose a database (or "All"), then search by title or leave the search empty and click Search to list posts paginated. Results open in the left panel.
- **Edit a post:** Click a result to load it. The main area shows two columns: editable fields (title, content, path, date, image) on the left and a live preview on the right. Content supports HTML or Markdown; Markdown is rendered in the preview.
- **Rich editor:** Toolbar (bold, italic, headings, lists, link) for the content. Use "View source" to switch between WYSIWYG and raw HTML/Markdown; for Markdown posts, the source view shows Markdown.
- **Update:** Click "Update post" to save changes. The API is REST (GET/PUT); the navbar link "API (Swagger)" opens the interactive docs at `/docs`.
#### Build Static Site
Build your blog with the `build` command:
```
blog-coeur ssg build
```
The blog will be generated in the `public` folder.
#### Markdown Import
To import your markdown files from Zola Framework to Coeur:
```
blog-coeur ssg markdown-to-db "./content"
```
### SSG Features
- Import markdown files from Zola Framework
- Basic blog template based on Zolarwind
- HTML Minification
- Sitemap generation with pagination
- Hot reload
- Admin dashboard: search/list posts, rich editor, view source (HTML/Markdown), update via REST API with Swagger at `/docs`
### TODO List
- [ ] Custom templates (themes)
- [ ] Documentation about templates
- [ ] Support to use post Tags
- [ ] Hot reload v2 - add websocket to auto-refresh the html in the browser
## Module CMP - Content Machine Processor
The CMP module was designed to simplify content creation powered by the OpenAI API.
### How to Use
The content will be created as posts inside the blog's SQLite database, which is generated by the ssg module. It's important to start your project using the ssg command.
Assuming you already have the blog created with Coeur, you need to set up your OpenAI key in the `.env` file and then you can use the cmp module as follows:
```
blog-coeur cmp title-to-post "Aguas de Lindóia" --custom-prompt="Create a full article about the city, need to be funny and talking in a positive way about the place"
```
The first parameter is required and will be the title of the post.
It is highly recommended to use a custom-prompt to enhance your experience and get better results. This prompt can be in any language.
You can also set the `img-url` parameter to include an image in the post. This needs to be a valid image URL, such as one hosted on S3.
## Module PDS – Post Distribution Service
This module will help us to automatically publish the blog posts on social media.
```
blog-coeur pds publish [OPTIONS] CHANNELS:{instagram}...
```
# Do you want to help?
This is an open-source project, and I need help to make it better.
If you are a developer, feel free to contact me to work on items from my personal roadmap, or suggest something new. Let's build it together.
If you are a company, contact me to support the growth of my project. I'm open to improving it for specific new use cases. | text/markdown | David Silva | srdavidsilva@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/sr2ds/coeur | null | <3.14,>=3.10 | [] | [] | [] | [
"typer[all]<0.25.0,>=0.24.0",
"toml<0.11.0,>=0.10.2",
"sqlalchemy<3.0.0,>=2.0.30",
"jinja2<4.0.0,>=3.1.4",
"psutil<6.0.0,>=5.9.8",
"mistune<4.0.0,>=3.0.2",
"pyyaml<7.0.0,>=6.0.1",
"watchdog<5.0.0,>=4.0.1",
"openai<2.0.0,>=1.51.2",
"python-dotenv<2.0.0,>=1.0.1",
"python-slugify<9.0.0,>=8.0.4",
... | [] | [] | [] | [
"Repository, https://github.com/sr2ds/coeur"
] | poetry/1.8.3 CPython/3.10.12 Linux/6.14.0-1017-azure | 2026-02-18T19:04:45.624594 | blog_coeur-0.0.19.tar.gz | 127,778 | 11/d3/b803bd8a582fa8fadf94538e01d798cc836cb37e3699ac6958edebe32144/blog_coeur-0.0.19.tar.gz | source | sdist | null | false | 8f8c18fd2584521691ed5a19564cc31f | 77fe9f77bc17f24b7bd8ea69848c16fa81f93747631501d95a5c77849313fe86 | 11d3b803bd8a582fa8fadf94538e01d798cc836cb37e3699ac6958edebe32144 | null | [] | 230 |
2.4 | speedtools | 0.32.0 | NFS4 HS (PC) resource utilities | # About
This project provides data extraction utilities for the following Need for Speed 4 HS asset files:
- tracks
- cars
Additionally, a Blender add-on is provided to import track and car data directly.
# Recommended setup
1. Download the latest ZIP file from [Releases][5].
2. Open Blender.
3. Go to `Edit -> Preferences -> Get Extensions`.
4. Click `v` in the top right corner of the window.
5. Click `Install from Disk...`.
6. Select the downloaded ZIP file.
7. Activate the addon in the `Add-ons` panel.
# Old setup instructions
> **Warning**: Use instructions from this section only if you have problems with the recommended setup.
1. Create new empty Blender project
2. Open the __Scripting__ tab
3. Copy-paste the following two commands into the Blender console:
```
import sys, subprocess
subprocess.call([sys.executable, "-m", "pip", "install", "speedtools"])
```
This command will install the [`speedtools`][1] package to your Blender Python installation.
> **Note**: The Python installation that comes with Blender is completely separate from the global Python installation on your system. For this reason, it is necessary to use the Blender scripting console to install the package correctly.
4. Copy and paste the content of [this][2] file to the Blender scripting window.
5. Click the __▶__ button.
6. You should see `Track resources` and `Car resources` under `File > Import`.
# Setup (command line version)
Install the package from PyPI:
```
pip install speedtools
```
Currently, the command line version does not provide any useful functionality.
# Development dependencies
This setup is needed only if you plan to modify the add-on.
To develop the project, the following dependencies must be installed on your system:
* [Kaitai Struct compiler][3]
Make sure the binary directories are in your `PATH`.
# Re-make project
This tool is a part of the re-make project. The project source code is available [here][4].
[1]: https://pypi.org/project/speedtools/
[2]: https://github.com/e-rk/speedtools/blob/master/speedtools/blender/io_nfs4_import.py
[3]: https://kaitai.io/
[4]: https://github.com/e-rk/velocity
[5]: https://github.com/e-rk/speedtools/releases
| text/markdown | Rafał Kuźnia | rafal.kuznia@protonmail.com | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3 :: Only",
"Topic :: File Formats",
"Typing :: Typed"
] | [] | null | null | null | [] | [] | [] | [
"kaitaistruct",
"pillow",
"click",
"more-itertools",
"parse",
"ffmpeg-python"
] | [] | [] | [] | [
"homepage, https://github.com/e-rk/speedtools",
"repository, https://github.com/e-rk/speedtools"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T19:04:42.539062 | speedtools-0.32.0.tar.gz | 55,855 | 91/fd/6a0517cf600fa6259f9d3f60975e94e54e08288708e5f7a87bb00b5cd97d/speedtools-0.32.0.tar.gz | source | sdist | null | false | 477db6fcf905983e68db4ce4e025b81d | 89cf01b26659d9b4a200778ff62928f4d12aacd6e1f33ad9a1757dabd6c9848e | 91fd6a0517cf600fa6259f9d3f60975e94e54e08288708e5f7a87bb00b5cd97d | GPL-3.0-or-later | [
"LICENSE"
] | 247 |
2.1 | knitout-interpreter | 0.1.4 | Support for interpreting knitout files used for controlling automatic V-Bed Knitting machines. | # knitout-interpreter
[](https://pypi.org/project/knitout-interpreter)
[](https://pypi.org/project/knitout-interpreter)
[](https://opensource.org/licenses/MIT)
[](https://mhofmann-khoury.github.io/knitout_interpreter/)
A Python library for interpreting, executing, and debugging [knitout](https://textiles-lab.github.io/knitout/knitout.html) files used to control automatic V-Bed knitting machines.
## Installation
```bash
pip install knitout-interpreter
```
## Quick Example
```python
from knitout_interpreter.run_knitout import run_knitout
# Execute a knitout file
instructions, machine, knit_graph = run_knitout("pattern.k")
```
## Documentation
**Full documentation:** [https://mhofmann-khoury.github.io/knitout_interpreter/](https://mhofmann-khoury.github.io/knitout_interpreter/)
## Links
- **PyPI**: https://pypi.org/project/knitout-interpreter
- **Documentation**: https://mhofmann-khoury.github.io/knitout_interpreter/
- **Source Code**: https://github.com/mhofmann-Khoury/knitout_interpreter
- **Issues**: https://github.com/mhofmann-Khoury/knitout_interpreter/issues
---
**Made by the Northeastern University ACT Lab**
| text/markdown | Megan Hofmann | m.hofmann@northeastern.edu | Megan Hofmann | m.hofmann@northeastern.edu | MIT | knit, machine knit, textile, Northeastern, ACT Lab, fabrication, knitout | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Manufacturing",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11"... | [] | https://mhofmann-khoury.github.io/knitout_interpreter/ | null | <3.14,>=3.11 | [] | [] | [] | [
"parglare>=0.21.0",
"virtual-knitting-machine>=0.0.28",
"importlib_resources>=6.5"
] | [] | [] | [] | [
"Repository, https://github.com/mhofmann-Khoury/knitout_interpreter",
"Documentation, https://mhofmann-khoury.github.io/knitout_interpreter/"
] | poetry/1.7.1 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-18T19:04:41.934610 | knitout_interpreter-0.1.4.tar.gz | 3,836,985 | 60/7f/63b53d505593ec6ba05ccd2ca9063918523bb8ccc4c041c91ba88f3b426e/knitout_interpreter-0.1.4.tar.gz | source | sdist | null | false | afd35d376d09cd8e76cc572ba6c7a775 | 6d7f15da04a814722fb55200a5e93596080242d15b1dc99c5040b3e6c8ea49ee | 607f63b53d505593ec6ba05ccd2ca9063918523bb8ccc4c041c91ba88f3b426e | null | [] | 225 |
2.1 | qumulo-api | 7.8.0.1 | Qumulo Python SDK | ================
Qumulo API Tools
================
This package contains the Qumulo Core Python SDK and the qq CLI utility, which
allow users to interact with the Qumulo REST API server.
Using the Python SDK
====================
To get started, import the `RestClient` class from the `qumulo.rest_client`
module and create an instance. The `RestClient` class contains attributes
that allow programmatic access to all of the Qumulo Core REST API endpoints.
For example::
from qumulo.rest_client import RestClient
# Create an instance of RestClient associated with the Qumulo Core file
# system at qumulo.mycompany.net
rc = RestClient("qumulo.mycompany.net", 8000)
# Log in to Qumulo Core using local user or Active Directory credentials
rc.login("username", "password")
# Print all of the SMB share configuration information
print(rc.smb.smb_list_shares())
To inspect the various available properties, open a Python REPL and run the
following commands::
from qumulo.rest_client import RestClient
rc = RestClient("qumulo.mycompany.net", 8000)
# See REST API groups:
[p for p in dir(rc) if not p.startswith('_')]
# See SDK endpoints within a particular API group
[p for p in dir(rc.quota) if not p.startswith('_')]
Using qq
========
After installing the qumulo-api package, the `qq` CLI utility will be available
on your system.
Note: On Windows, `qq.exe` can be found under the `Scripts\\` directory in your
Python installation. Adding this path to your `%PATH%` environment variable
will allow you to run `qq.exe` without prefixing it with the full path.
To see all commands available from the ``qq`` tool::
$ qq --help
To run most commands against the REST API server, you must first log in::
$ qq --host host_ip login --user admin
Once authenticated, you can run other commands::
# Get the network configuration of nodes in the cluster:
$ qq --host <qumulo_host> network_poll
# Get the list of users
$ qq --host <qumulo_host> auth_list_users
# Get help with a specific command
$ qq --host <qumulo_host> auth_list_users --help
To see the information about the actual HTTP requests and responses sent over
the wire for a particular command, use the `--debug` flag::
$ qq --host <qumulo_host> --debug smb_settings_get
REQUEST: GET https://<qumulo_host>:8000/v1/smb/settings
REQUEST HEADERS:
User-Agent: qq
Content-Type: application/json
Content-Length: 0
Authorization: Bearer <token>
RESPONSE STATUS: 200
RESPONSE:
Date: Fri, 18 Mar 2022 22:15:47 GMT
ETag: "VNhqnQ"
Content-Type: application/json
Content-Length: 329
Strict-Transport-Security: max-age=31536000; includeSubdomain
{'session_encryption': 'NONE', 'supported_dialects': ['SMB2_DIALECT_2_002', 'SMB2_DIALECT_2_1', 'SMB2_DIALECT_3_0', 'SMB2_DIALECT_3_11'], 'hide_shares_from_unauthorized_users': False, 'hide_shares_from_unauthorized_hosts': False, 'snapshot_directory_mode': 'VISIBLE', 'bypass_traverse_checking': False, 'signing_required': False}
Notes
=====
For more information, visit our Knowledge Base site: https://care.qumulo.com
| text/x-rst | Qumulo, Inc. | python@qumulo.com | null | null | null | Qumulo QFSD | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8"
] | [] | http://www.qumulo.com/ | null | <4,>=3.8 | [] | [] | [] | [
"dataclasses-json>=0.5.2",
"python-dateutil>=2.8.2",
"tqdm>=4.24.0",
"typing-extensions>=3.7.4.3"
] | [] | [] | [] | [] | twine/3.1.1 pkginfo/1.5.0.1 requests/2.28.1 setuptools/57.0.0 requests-toolbelt/0.9.1 tqdm/4.65.0 CPython/3.8.0 | 2026-02-18T19:04:03.399697 | qumulo_api-7.8.0.1-py3-none-any.whl | 319,686 | 3f/1e/2f0a40a74bf7126d3fcd58643b61a6a0db834277ef6109399dd8317e0bce/qumulo_api-7.8.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 068b8d5738711ad088d19cd1d92a0539 | cebc96c8784c6e840844bc03ada1abc8178756115da0c13de048d2d3a66de59f | 3f1e2f0a40a74bf7126d3fcd58643b61a6a0db834277ef6109399dd8317e0bce | null | [] | 238 |
2.4 | ace-framework | 0.8.2 | Build self-improving AI agents that learn from experience | <img src="https://framerusercontent.com/images/XBGa12hY8xKYI6KzagBxpbgY4.png" alt="Kayba Logo" width="1080"/>
# Agentic Context Engine (ACE)

[](https://discord.gg/mqCqH7sTyK)
[](https://twitter.com/kaybaai)
[](https://badge.fury.io/py/ace-framework)
[](https://www.python.org/downloads/)

**AI agents that get smarter with every task**
⭐ Star this repo if you find it useful!
---
## What is ACE?
ACE enables AI agents to **learn from their execution feedback**—what works, what doesn't—and continuously improve. No fine-tuning, no training data, just automatic in-context learning.
The framework maintains a **Skillbook**: a living document of strategies that evolves with each task. When your agent succeeds, ACE extracts patterns. When it fails, ACE learns what to avoid. All learning happens transparently in context.
- **Self-Improving**: Agents autonomously get smarter with each task
- **20-35% Better Performance**: Proven improvements on complex tasks
- **49% Token Reduction**: Demonstrated in browser automation benchmarks
- **No Context Collapse**: Preserves valuable knowledge over time
---
## LLM Quickstart
1. Direct your favorite coding agent (Cursor, Claude Code, Codex, etc) to [Quick Start Guide](docs/QUICK_START.md)
2. Prompt away!
---
## Quick Start
### 1. Install
```bash
pip install ace-framework
```
### 2. Set API Key
```bash
export OPENAI_API_KEY="your-api-key"
```
### 3. Run
```python
from ace import ACELiteLLM
agent = ACELiteLLM(model="gpt-4o-mini")
answer = agent.ask("What does Kayba's ACE framework do?")
print(answer) # "ACE allows AI agents to remember and learn from experience!"
```
**Done! Your agent learns automatically from each interaction.**
[→ Quick Start Guide](docs/QUICK_START.md) | [→ Setup Guide](docs/SETUP_GUIDE.md)
---
## Use Cases
### Claude Code with Learning [→ Quick Start](ace/integrations/claude_code)
Run coding tasks with Claude Code while ACE learns patterns from each execution, building expertise over time for your specific codebase and workflows.
### Automated System Prompting
The Skillbook acts as an evolving system prompt that automatically improves based on execution feedback—no manual prompt engineering required.
### Enhance Existing Agents
Wrap your existing agent (browser-use, LangChain, custom) with ACE learning. Your agent executes tasks normally while ACE analyzes results and builds a skillbook of effective strategies.
### Build Self-Improving Agents
Create new agents with built-in learning for customer support, data extraction, code generation, research, content creation, and task automation.
---
## Demos
### The Seahorse Emoji Challenge
A challenge where LLMs often hallucinate that a seahorse emoji exists (it doesn't).
<img src="examples/seahorse-emoji-ace.gif" alt="Seahorse Emoji ACE Demo" width="70%"/>
In this example:
1. The agent incorrectly outputs a horse emoji
2. ACE reflects on the mistake without external feedback
3. On the second attempt, the agent correctly realizes there is no seahorse emoji
[→ Try it yourself](examples/litellm/seahorse_emoji_ace.py)
### Tau2 Benchmark
Evaluated on the airline domain of [τ2-bench](https://github.com/sierra-research/tau2-bench) (Sierra Research) — a benchmark for multi-step agentic tasks requiring tool use and policy adherence. Agent: Claude Haiku 4.5. Strategies learned on the train split with no reward signals; all results on the held-out test split.
*pass^k = probability that all k independent attempts succeed. Higher k is a stricter test of agent consistency.*
<img src="benchmarks/tasks/tau_bench/Tau2Benchmark Result Haiku4.5.png" alt="Tau2 Benchmark Results - Haiku 4.5" width="35%"/>
ACE doubles agent consistency at pass^4 using only 15 learned strategies — gains compound as the bar gets higher.
### Browser Automation
**Online Shopping Demo**: ACE vs baseline agent shopping for 5 grocery items.
<img src="examples/browser-use/online-shopping/results-online-shopping-brwoser-use.png" alt="Online Shopping Demo Results" width="70%"/>
In this example:
- ACE learns to navigate the website over 10 attempts
- Performance stabilizes and step count decreases by 29.8%
- Token costs reduce 49.0% for base agent and 42.6% including ACE overhead
[→ Try it yourself & see all demos](examples/browser-use/README.md)
### Claude Code Loop
In this example, Claude Code is enhanced with ACE and self-reflects after each execution while translating the ACE library from Python to TypeScript.
**Python → TypeScript Translation:**
| Metric | Result |
|--------|--------|
| Duration | ~4 hours |
| Commits | 119 |
| Lines written | ~14k |
| Outcome | Zero build errors, all tests passing |
| API cost | ~$1.5 (Sonnet for learning) |
[→ Claude Code Loop](examples/claude-code-loop/)
---
## Integrations
ACE integrates with popular agent frameworks:
| Integration | ACE Class | Use Case |
|-------------|-----------|----------|
| LiteLLM | `ACELiteLLM` | Simple self-improving agent |
| LangChain | `ACELangChain` | Wrap LangChain chains/agents |
| browser-use | `ACEAgent` | Browser automation |
| Claude Code | `ACEClaudeCode` | Claude Code CLI |
| ace-learn CLI | `ACEClaudeCode` | Learn from Claude Code sessions |
| Opik | `OpikIntegration` | Production monitoring and cost tracking |
[→ Integration Guide](docs/INTEGRATION_GUIDE.md) | [→ Examples](examples/)
---
## How Does ACE Work?
*Inspired by the [ACE research framework](https://arxiv.org/abs/2510.04618) from Stanford & SambaNova.*
ACE enables agents to learn from execution feedback — what works, what doesn't — and continuously improve. No fine-tuning, no training data, just automatic in-context learning. Three specialized roles work together:
1. **Agent** — Your agent, enhanced with strategies from the Skillbook
2. **Reflector** — Analyzes execution traces to extract learnings. In recursive mode, the Reflector writes and runs Python code in a sandboxed REPL to programmatically query traces — finding patterns, errors, and insights that single-pass analysis misses
3. **SkillManager** — Curates the Skillbook: adds new strategies, refines existing ones, and removes outdated patterns based on the Reflector's analysis
The key innovation is the **Recursive Reflector** — instead of summarizing traces in a single pass, it writes and executes Python code in a sandboxed environment to programmatically explore agent execution traces. It can search for patterns, isolate errors, query sub-agents for deeper analysis, and iterate until it finds actionable insights. These insights flow into the **Skillbook** — a living collection of strategies that evolves with every task.
```mermaid
flowchart LR
Skillbook[(Skillbook<br>Learned Strategies)]
Start([Query]) --> Agent[Agent<br>Enhanced with Skillbook]
Agent <--> Environment[Task Environment<br>Evaluates & provides feedback]
Environment -- Feedback --> Reflector[Reflector<br>Analyzes traces via<br>sandboxed code execution]
Reflector --> SkillManager[SkillManager<br>Curates strategies]
SkillManager -- Updates --> Skillbook
Skillbook -. Injects context .-> Agent
```
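The loop above can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the actual ace-framework API — every class and function name here is simplified and hypothetical:

```python
# Toy sketch of the ACE learning loop: Agent -> Reflector -> SkillManager.
# All names are illustrative stand-ins, not the real ace-framework API.

class Skillbook:
    """A living collection of learned strategies."""
    def __init__(self):
        self.skills = []

    def as_context(self):
        return "\n".join(self.skills)

def agent(query, skillbook):
    # A real agent would call an LLM with the skillbook injected as context.
    return f"answer({query}) using [{skillbook.as_context()}]"

def reflector(trace, feedback):
    # The real Reflector explores traces via sandboxed code execution;
    # here we just turn negative feedback into a candidate insight.
    return f"avoid: {feedback}" if feedback else None

def skill_manager(skillbook, insight):
    # Curates the skillbook: adds new strategies, skips duplicates.
    if insight and insight not in skillbook.skills:
        skillbook.skills.append(insight)

# One iteration of the loop
skillbook = Skillbook()
trace = agent("book a flight", skillbook)
insight = reflector(trace, "forgot to confirm dates")
skill_manager(skillbook, insight)
print(skillbook.skills)  # the learned strategy flows into the next run
```

On the next query, the agent is called with the updated skillbook, so the insight is injected as context — that is the "Injects context" edge in the diagram.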
---
## Documentation
- [Quick Start Guide](docs/QUICK_START.md) - Get running in 5 minutes
- [Setup Guide](docs/SETUP_GUIDE.md) - Installation, configuration, providers
- [Integration Guide](docs/INTEGRATION_GUIDE.md) - Add ACE to existing agents
- [API Reference](docs/API_REFERENCE.md) - Complete API documentation
- [Complete Guide to ACE](docs/COMPLETE_GUIDE_TO_ACE.md) - Deep dive into concepts
- [Prompt Engineering](docs/PROMPT_ENGINEERING.md) - Advanced prompt techniques
- [Insight Source Tracing](docs/INSIGHT_SOURCES.md) - Track skill provenance and query origins
- [Agentic System Prompting](examples/agentic-system-prompting/README.md) - Automatically generate prompt improvements from past traces
- [Examples](examples/) - Ready-to-run code examples
- [Benchmarks](benchmarks/README.md) - Evaluate ACE performance
- [Changelog](CHANGELOG.md) - Recent changes
---
## Contributing
We love contributions! Check out our [Contributing Guide](CONTRIBUTING.md) to get started.
---
## Acknowledgment
Inspired by the [ACE paper](https://arxiv.org/abs/2510.04618) and [Dynamic Cheatsheet](https://arxiv.org/abs/2504.07952).
---
<div align="center">
**⭐ Star this repo if you find it useful!**
**Built with ❤️ by [Kayba](https://kayba.ai) and the open-source community.**
</div>
| text/markdown | null | "Kayba.ai" <hello@kayba.ai> | null | "Kayba.ai" <hello@kayba.ai> | MIT | ai, llm, agents, machine-learning, self-improvement, context-engineering, ace, openai, anthropic, claude, gpt | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Pyt... | [] | null | null | >=3.12 | [] | [] | [] | [
"litellm>=1.78.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"python-toon>=0.1.0",
"tenacity>=8.0.0",
"instructor>=1.0.0",
"browser-use>=0.9.0; extra == \"browser-use\"",
"opik>=1.8.0; extra == \"observability\"",
"langchain-openai>=0.3.35; extra == \"langchain\"",
"langchain-anthropic>=0.3.0; ex... | [] | [] | [] | [
"Homepage, https://kayba.ai",
"Documentation, https://github.com/Kayba-ai/agentic-context-engine#readme",
"Repository, https://github.com/Kayba-ai/agentic-context-engine",
"Issues, https://github.com/Kayba-ai/agentic-context-engine/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:03:03.674366 | ace_framework-0.8.2.tar.gz | 230,777 | 74/b8/699345eb9315eea89d827e15be6d94184b70c7202c576118515bade7c550/ace_framework-0.8.2.tar.gz | source | sdist | null | false | b8facfc8ff4c2b310d84638d246abdee | 7f61bd3da6419cdb2fd93a96f657c21b35201a063e068ee75adb19042194476e | 74b8699345eb9315eea89d827e15be6d94184b70c7202c576118515bade7c550 | null | [
"LICENSE"
] | 264 |
2.4 | rhesis | 0.6.5 | Rhesis - Testing and validation platform for LLM applications (alias for rhesis-sdk) | # Rhesis
**Testing and validation platform for LLM applications**
This package is an alias for [`rhesis-sdk`](https://pypi.org/project/rhesis-sdk/). Installing `rhesis` is equivalent to installing `rhesis-sdk` at the same version.
## Installation
```bash
pip install rhesis
```
This is equivalent to:
```bash
pip install rhesis-sdk
```
## Usage
```python
from rhesis.sdk import RhesisClient
client = RhesisClient()
```
## Why Two Package Names?
- **`rhesis`** - Short, memorable name for quick installation
- **`rhesis-sdk`** - Explicit SDK package name
Both packages are always released together with the same version number. Use whichever you prefer.
## Documentation
- **Full Documentation**: [docs.rhesis.ai](https://docs.rhesis.ai)
- **API Reference**: [docs.rhesis.ai/api](https://docs.rhesis.ai/api)
- **Getting Started Guide**: [docs.rhesis.ai/getting-started](https://docs.rhesis.ai/getting-started)
## Links
- [Website](https://rhesis.ai)
- [GitHub](https://github.com/rhesis-ai/rhesis)
- [PyPI - rhesis](https://pypi.org/project/rhesis/)
- [PyPI - rhesis-sdk](https://pypi.org/project/rhesis-sdk/)
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Engineering Team <engineering@rhesis.ai>, Harry Cruz <harry@rhesis.ai>, Nicolai Bohn <nicolai@rhesis.ai> | null | null | MIT | ai, llm, machine-learning, testing, validation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Develo... | [] | null | null | >=3.10 | [] | [] | [] | [
"rhesis-sdk==0.6.5"
] | [] | [] | [] | [
"Homepage, https://rhesis.ai",
"Repository, https://github.com/rhesis-ai/rhesis",
"Documentation, https://docs.rhesis.ai",
"Bug Tracker, https://github.com/rhesis-ai/rhesis/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:01:25.334863 | rhesis-0.6.5.tar.gz | 5,775 | 4e/39/067007bfe7c53fa54d0ce09d5d60c177078dd114f305257a580bdbd085e5/rhesis-0.6.5.tar.gz | source | sdist | null | false | 0827769b2c7392af3815842aeff7e75d | b81fcf9571084b10df5bb4e2d383059d58920ab43836f3b7fe35554ed8268261 | 4e39067007bfe7c53fa54d0ce09d5d60c177078dd114f305257a580bdbd085e5 | null | [
"LICENSE"
] | 232 |
2.4 | k8s-agent-sandbox | 0.1.1.post3 | A client library to interact with the Agentic Sandbox on Kubernetes. | # Agentic Sandbox Client Python
This Python client provides a simple, high-level interface for creating and interacting with
sandboxes managed by the Agent Sandbox controller. It's designed to be used as a context manager,
ensuring that sandbox resources are properly created and cleaned up.
It supports a **scalable, cloud-native architecture** using Kubernetes Gateways and a specialized
Router, while maintaining a convenient **Developer Mode** for local testing.
## Architecture
The client operates in three modes:
1. **Production (Gateway Mode):** Traffic flows from the Client -> Cloud Load Balancer (Gateway)
-> Router Service -> Sandbox Pod. This supports high-scale deployments.
2. **Development (Tunnel Mode):** Traffic flows from Localhost -> `kubectl port-forward` -> Router
Service -> Sandbox Pod. This requires no public IP and works on Kind/Minikube.
3. **Advanced / Internal Mode**: The client connects directly to a provided api_url, bypassing
discovery. This is useful for in-cluster communication or when connecting through a custom domain.
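The mode selection described above can be sketched as follows. This is only an illustration of the documented behavior, not the real `SandboxClient` internals:

```python
# Illustrative sketch of how the three connection modes are selected.
# Not the actual SandboxClient implementation.

def select_mode(api_url=None, gateway_name=None):
    """Mirror the three connection modes from the architecture section."""
    if api_url:
        return "internal"   # Advanced / Internal Mode: bypass discovery
    if gateway_name:
        return "gateway"    # Production: discover the named Gateway
    return "tunnel"         # Developer Mode: kubectl port-forward

print(select_mode(gateway_name="external-http-gateway"))  # gateway
print(select_mode())                                      # tunnel
```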
## Prerequisites
- A running Kubernetes cluster.
- The **Agent Sandbox Controller** installed.
- `kubectl` installed and configured locally.
## Setup: Deploying the Router
Before using the client, you must deploy the `sandbox-router`. This is a one-time setup.
1. **Build and Push the Router Image:**
For both Gateway Mode and Tunnel mode, follow the instructions in [sandbox-router](sandbox-router/README.md)
to build, push, and apply the router image and resources.
2. **Create a Sandbox Template:**
Ensure a `SandboxTemplate` exists in your target namespace. The [test_client.py](test_client.py)
uses the [python-runtime-sandbox](../../../examples/python-runtime-sandbox/) image.
```bash
kubectl apply -f python-sandbox-template.yaml
```
## Installation
1. **Create a virtual environment:**
```bash
python3 -m venv .venv
source .venv/bin/activate
```
2. **Option 1: Install from PyPI (Recommended):**
The package is available on [PyPI](https://pypi.org/project/k8s-agent-sandbox/) as `k8s-agent-sandbox`.
```bash
pip install k8s-agent-sandbox
```
If you are using [tracing with GCP](GCP.md#tracing-with-open-telemetry-and-google-cloud-trace),
install with the optional tracing dependencies:
```bash
pip install "k8s-agent-sandbox[tracing]"
```
3. **Option 2: Install from source via git:**
```bash
# Replace "main" with a specific version tag (e.g., "v0.1.0") from
# https://github.com/kubernetes-sigs/agent-sandbox/releases to pin a version tag.
export VERSION="main"
pip install "git+https://github.com/kubernetes-sigs/agent-sandbox.git@${VERSION}#subdirectory=clients/python/agentic-sandbox-client"
```
4. **Option 3: Install from source in editable mode:**
If you have not already done so, first clone this repository:
```bash
cd ~
git clone https://github.com/kubernetes-sigs/agent-sandbox.git
cd agent-sandbox/clients/python/agentic-sandbox-client
```
And then install the agentic-sandbox-client into your activated .venv:
```bash
pip install -e .
```
If you are using [tracing with GCP](GCP.md#tracing-with-open-telemetry-and-google-cloud-trace),
install with the optional tracing dependencies:
```
pip install -e ".[tracing]"
```
## Usage Examples
### 1. Production Mode (GKE Gateway)
Use this when running against a real cluster with a public Gateway IP. The client automatically
discovers the Gateway.
```python
from k8s_agent_sandbox import SandboxClient
# Connect via the GKE Gateway
with SandboxClient(
template_name="python-sandbox-template",
gateway_name="external-http-gateway", # Name of the Gateway resource
namespace="default"
) as sandbox:
print(sandbox.run("echo 'Hello from Cloud!'").stdout)
```
### 2. Developer Mode (Local Tunnel)
Use this for local development or CI. If you omit `gateway_name`, the client automatically opens a
secure tunnel to the Router Service using `kubectl`.
```python
from k8s_agent_sandbox import SandboxClient
# Automatically tunnels to svc/sandbox-router-svc
with SandboxClient(
template_name="python-sandbox-template",
namespace="default"
) as sandbox:
print(sandbox.run("echo 'Hello from Local!'").stdout)
```
### 3. Advanced / Internal Mode
Use `api_url` to bypass discovery entirely. Useful for:
- **Internal Agents:** Running inside the cluster (connect via K8s DNS).
- **Custom Domains:** Connecting via HTTPS (e.g., `https://sandbox.example.com`).
```python
with SandboxClient(
template_name="python-sandbox-template",
# Connect directly to a URL
api_url="http://sandbox-router-svc.default.svc.cluster.local:8080",
namespace="default"
) as sandbox:
sandbox.run("ls -la")
```
### 4. Custom Ports
If your sandbox runtime listens on a port other than 8888 (e.g., a Node.js app on 3000), specify `server_port`.
```python
with SandboxClient(
template_name="node-sandbox-template",
server_port=3000
) as sandbox:
# ...
```
## Testing
A test script is included to verify the full lifecycle (Creation -> Execution -> File I/O -> Cleanup).
### Run in Dev Mode:
```
python test_client.py --namespace default
```
### Run in Production Mode:
```
python test_client.py --gateway-name external-http-gateway
```
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"kubernetes",
"requests",
"pydantic",
"pytest; extra == \"test\"",
"pytest-xdist; extra == \"test\"",
"opentelemetry-api~=1.39.0; extra == \"tracing\"",
"opentelemetry-sdk~=1.39.0; extra == \"tracing\"",
"opentelemetry-exporter-otlp~=1.39.0; extra == \"tracing\"",
"opentelemetry-instrumentation-requ... | [] | [] | [] | [
"Homepage, https://github.com/kubernetes-sigs/agent-sandbox",
"Bug Tracker, https://github.com/kubernetes-sigs/agent-sandbox/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:01:20.496267 | k8s_agent_sandbox-0.1.1.post3.tar.gz | 35,900 | 49/a4/e312407bdfc702bd6af5b2fcf95e040dacd26e33ba4bb6a871c06fc84eab/k8s_agent_sandbox-0.1.1.post3.tar.gz | source | sdist | null | false | 419d4c32ea3c55d98e79aa984e77aa46 | b8405f0f2b319f6547f927b4b1cb1e5cc9a930acb41e4a13edadb852e72c9a31 | 49a4e312407bdfc702bd6af5b2fcf95e040dacd26e33ba4bb6a871c06fc84eab | null | [] | 241 |
2.4 | rhesis-sdk | 0.6.5 | SDK for testing and validating LLM applications | # Rhesis SDK 🧠
<meta name="google-site-verification" content="muyrLNdeOT9KjYaOnfpOmGi8K5xPe8o7r_ov3kEGdXA" />
<p align="center">
<a href="https://github.com/rhesis-ai/rhesis/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue" alt="License">
</a>
<a href="https://pypi.org/project/rhesis-sdk/">
<img src="https://img.shields.io/pypi/v/rhesis-sdk" alt="PyPI Version">
</a>
<a href="https://pypi.org/project/rhesis-sdk/">
<img src="https://img.shields.io/pypi/pyversions/rhesis-sdk" alt="Python Versions">
</a>
<a href="https://discord.rhesis.ai">
<img src="https://img.shields.io/discord/1340989671601209408?color=7289da&label=Discord&logo=discord&logoColor=white" alt="Discord">
</a>
<a href="https://www.linkedin.com/company/rhesis-ai">
<img src="https://img.shields.io/badge/LinkedIn-Rhesis_AI-blue?logo=linkedin" alt="LinkedIn">
</a>
<a href="https://huggingface.co/rhesis">
<img src="https://img.shields.io/badge/🤗-Rhesis-yellow" alt="Hugging Face">
</a>
<a href="https://docs.rhesis.ai">
<img src="https://img.shields.io/badge/docs-rhesis.ai-blue" alt="Documentation">
</a>
</p>
> Your team defines expectations, Rhesis generates and executes thousands of test scenarios. So that you know what you ship.
The Rhesis SDK empowers developers to programmatically access curated test sets and generate comprehensive test scenarios for Gen AI applications. Transform domain expertise into automated testing: access thousands of test scenarios, generate custom validation suites, and integrate seamlessly into your workflow to keep your Gen AI robust, reliable & compliant.
<img src="https://cdn.prod.website-files.com/68c3e3b148a4fd9bcf76eb6a/68d66fa1ff10c81d4e4e4d0f_Frame%201000004352.png"
loading="lazy"
width="1392"
sizes="(max-width: 479px) 100vw, (max-width: 767px) 95vw, (max-width: 991px) 94vw, 95vw"
alt="Rhesis Platform Results"
srcset="https://cdn.prod.website-files.com/68c3e3b148a4fd9bcf76eb6a/68d66fa1ff10c81d4e4e4d0f_Frame%201000004352.png 2939w"
class="uui-layout41_lightbox-image-01-2">
## 📑 Table of Contents
- [Features](#-features)
- [Installation](#-installation)
- [Getting Started](#-getting-started)
- [Obtain an API Key](#1-obtain-an-api-key-)
- [Configure the SDK](#2-configure-the-sdk-%EF%B8%8F)
- [Quick Start](#-quick-start)
- [Working with Test Sets](#working-with-test-sets-)
- [Generating Custom Test Sets](#generating-custom-test-sets-%EF%B8%8F)
- [About Rhesis AI](#-about-rhesis-ai)
- [Community](#-community-)
- [Hugging Face](#-hugging-face)
- [Support](#-support)
- [License](#-license)
## ✨ Features
The Rhesis SDK provides programmatic access to the Rhesis testing platform:
- **Access Test Sets**: Browse and load curated test sets across multiple domains and use cases
- **Generate Test Scenarios**: Create custom test sets from prompts, requirements, or domain knowledge
- **Seamless Integration**: Integrate testing into your CI/CD pipeline and development workflow
- **Comprehensive Coverage**: Scale your testing from dozens to thousands of scenarios
- **Open Source**: MIT-licensed with full transparency and community-driven development
## 🚀 Installation
Install the Rhesis SDK using pip:
```bash
pip install rhesis-sdk
```
## 🐍 Python Requirements
Rhesis SDK requires **Python 3.10** or newer.
## 🏁 Getting Started
### 1. Obtain an API Key 🔑
1. Visit [https://app.rhesis.ai](https://app.rhesis.ai)
2. Sign up for a Rhesis account
3. Navigate to your account settings
4. Generate a new API key
Your API key will be in the format `rh-XXXXXXXXXXXXXXXXXXXX`. Keep this key secure and never share it publicly.
> **Note:** On the Rhesis App, you can also create test sets for your own use cases and access them via the SDK. You only need to connect your GitHub account to create a test set.
### 2. Configure the SDK ⚙️
```python
import os
from pprint import pprint
from rhesis.sdk.entities import TestSet
from rhesis.sdk.synthesizers import PromptSynthesizer
os.environ["RHESIS_API_KEY"] = "rh-your-api-key" # Get from app.rhesis.ai settings
os.environ["RHESIS_BASE_URL"] = "https://api.rhesis.ai" # optional
# Browse available test sets
for test_set in TestSet().all():
pprint(test_set)
# Generate custom test scenarios
synthesizer = PromptSynthesizer(
prompt="Generate tests for a medical chatbot that must never provide diagnosis"
)
test_set = synthesizer.generate(num_tests=10)
pprint(test_set.tests)
```
### Generating Custom Test Sets 🛠️
If none of the existing test sets fit your needs, you can generate your own. Check out [app.rhesis.ai](https://app.rhesis.ai), where you can define requirements, scenarios, and behaviors.
## 🧪 About Rhesis AI
Rhesis is an open-source testing platform that transforms how Gen AI teams validate their applications. Through collaborative test management, domain expertise becomes comprehensive automated testing: legal defines requirements, marketing sets expectations, engineers build quality, and everyone knows exactly how the Gen AI application performs before users do.
**Key capabilities:**
- **Collaborative Test Management**: Your entire team contributes requirements without writing code
- **Automated Test Generation**: Generate thousands of test scenarios from team expertise
- **Comprehensive Coverage**: Scale from dozens of manual tests to thousands of automated scenarios
- **Edge Case Discovery**: Find potential failures before your users do
- **Compliance Validation**: Ensure systems meet regulatory and ethical standards
Made in Potsdam, Germany 🇩🇪
Visit [rhesis.ai](https://rhesis.ai) to learn more about our platform and services.
## 👥 Community 💬
Join our [Discord server](https://discord.rhesis.ai) to connect with other users and developers.
## 🤗 Hugging Face
You can also find us on [Hugging Face](https://huggingface.co/rhesis). There, you can find our test sets across multiple use cases.
## 🆘 Support
For questions, issues, or feature requests:
- **Documentation**: [docs.rhesis.ai](https://docs.rhesis.ai)
- **Discord Community**: [discord.rhesis.ai](https://discord.rhesis.ai)
- **GitHub Discussions**: [Community discussions](https://github.com/rhesis-ai/rhesis/discussions)
- **Email**: hello@rhesis.ai
- **Issues**: [Report bugs or request features](https://github.com/rhesis-ai/rhesis/issues)
## 📝 License
The Rhesis SDK is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
The SDK is completely open-source and freely available for use, modification, and distribution.
---
**Made with ❤️ in Potsdam, Germany 🇩🇪**
Learn more at [rhesis.ai](https://rhesis.ai)
| text/markdown | null | Engineering Team <engineering@rhesis.ai>, Harry Cruz <harry@rhesis.ai>, Nicolai Bohn <nicolai@rhesis.ai> | null | null | MIT | ai, llm, machine-learning, testing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp",
"aiohttp-security",
"aiohttp-session",
"appdirs",
"cryptography",
"deepeval==3.7.0",
"deepteam>=0.2.5",
"diskcache",
"ipykernel>=7.1.0",
"ipython>=8.37.0",
"jinja2>=3.1.6",
"langchain-google-genai>=2.1.12",
"litellm>=1.76.0",
"markitdown[docx,pdf,pptx,xlsx]>=0.1.0",
"mcp>=1.2.... | [] | [] | [] | [
"Homepage, https://rhesis.ai",
"Repository, https://github.com/rhesis-ai/rhesis",
"Documentation, https://rhesis-sdk.readthedocs.io",
"Bug Tracker, https://github.com/rhesis-ai/rhesis/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T19:00:55.072536 | rhesis_sdk-0.6.5.tar.gz | 982,138 | 97/4d/f84d964d163c63612202af7a5cf973ce196a0e698caab0bd07c88c7c3cd7/rhesis_sdk-0.6.5.tar.gz | source | sdist | null | false | ff79ebd3ff79a53f04d78479d122ef01 | 7f5540ff2aaad18a3f232ef9ccd6aee93559c205a05ea266319d7b7a82756465 | 974df84d964d163c63612202af7a5cf973ce196a0e698caab0bd07c88c7c3cd7 | null | [
"LICENSE"
] | 255 |
2.4 | netbox-rir-manager | 0.3.1 | NetBox plugin for managing RIR (ARIN, RIPE, etc.) resources | # NetBox RIR Manager
[](https://pypi.org/project/netbox-rir-manager/)
[](https://github.com/jsenecal/netbox-rir-manager/blob/main/LICENSE)
[](https://github.com/jsenecal/netbox-rir-manager/actions/workflows/ci.yml)
[](https://codecov.io/gh/jsenecal/netbox-rir-manager)
[](https://github.com/netbox-community/netbox)
A NetBox plugin for managing Regional Internet Registry (RIR) resources directly from within NetBox.
## Features
- **Sync RIR resources** — Import organizations, contacts (POCs), networks, and ASNs from your RIR into NetBox
- **Auto-link to IPAM** — Automatically link synced networks to existing NetBox Aggregates, Prefixes, and ASNs
- **Write operations** — Reassign, reallocate, remove, and delete networks directly through the plugin
- **Ticket tracking** — Monitor RIR tickets (reassignments, reallocations, deletions) and their status
- **Per-user API keys** — Each user stores their own RIR API key with encryption at rest
- **Scheduled sync** — Daily background jobs keep your RIR data up to date
- **Pluggable backend architecture** — Abstract backend system designed to support multiple RIRs (ARIN supported today; PRs welcome for RIPE, APNIC, etc.)
- **Full REST API** — All resources and actions are available through NetBox's REST API
- **Sync logging** — Full audit trail of every sync operation with status and details
## Requirements
| Dependency | Version |
|---|---|
| Python | 3.12+ |
| NetBox | 4.5+ |
| pyregrws | 0.2.0+ |
## Installation
### 1. Install the plugin
```bash
pip install netbox-rir-manager
```
Or if installing from source:
```bash
pip install git+https://github.com/jsenecal/netbox-rir-manager.git
```
### 2. Enable the plugin
Add the plugin to your NetBox `configuration.py`:
```python
PLUGINS = [
"netbox_rir_manager",
]
```
### 3. Configure the plugin (optional)
Add any configuration overrides to `PLUGINS_CONFIG` in `configuration.py`:
```python
PLUGINS_CONFIG = {
"netbox_rir_manager": {
"top_level_menu": True,
"sync_interval_hours": 24,
"auto_link_networks": True,
"enabled_backends": ["ARIN"],
"encryption_key": "", # defaults to NetBox SECRET_KEY
"api_retry_count": 3,
"api_retry_backoff": 2,
},
}
```
### 4. Run database migrations
```bash
cd /opt/netbox/netbox
python manage.py migrate
```
### 5. Restart NetBox
Restart both the NetBox WSGI service and the RQ worker. The exact command depends on your deployment method (systemd, Docker Compose, etc.). Refer to the [NetBox documentation](https://netboxlabs.com/docs/netbox/en/stable/) for your specific setup.
## Configuration
| Setting | Default | Description |
|---|---|---|
| `top_level_menu` | `True` | Display RIR Manager as a top-level menu item in the NetBox navigation |
| `sync_interval_hours` | `24` | Interval in hours between scheduled background syncs |
| `auto_link_networks` | `True` | Automatically link synced RIR networks to matching NetBox Aggregates and Prefixes |
| `enabled_backends` | `["ARIN"]` | List of enabled RIR backends |
| `encryption_key` | `""` | Key used to encrypt API keys at rest. Falls back to NetBox's `SECRET_KEY` when empty |
| `api_retry_count` | `3` | Number of retries for failed RIR API calls |
| `api_retry_backoff` | `2` | Exponential backoff multiplier between API retries (in seconds) |
> **Warning:** The `encryption_key` (or NetBox's `SECRET_KEY` if left empty) is used to encrypt stored API keys. Changing or losing this key will make all previously encrypted API keys unrecoverable. Store it securely and back it up alongside your database.
## Usage
### Setting up an RIR Config
1. Navigate to **RIR Manager > Configs** and create a new RIR Config
2. Select the RIR backend (e.g. ARIN) and provide the organization handle for your account
### Adding a User API Key
1. Navigate to **RIR Manager > User Keys** and add a new key
2. Select the RIR Config and enter your ARIN Online API key
3. The key is encrypted at rest using the configured `encryption_key`
### Syncing resources
- **Manual sync:** Trigger a sync from the RIR Config detail view
- **Scheduled sync:** A background job runs automatically based on `sync_interval_hours`
Synced resources (organizations, contacts, networks) appear under **RIR Manager > Resources** and are automatically linked to matching NetBox IPAM objects when `auto_link_networks` is enabled.
### Write operations
From a network's detail view, you can perform write operations against the RIR:
- **Reassign** — Reassign a subnet from a parent network
- **Reallocate** — Reallocate a subnet from a parent network
- **Remove** — Remove a reassigned/reallocated network
- **Delete** — Submit a deletion request to the RIR
Write operations that require RIR approval will create a ticket that can be tracked under **RIR Manager > Tickets**.
## REST API
All models and actions are exposed through NetBox's REST API under `/api/plugins/rir-manager/`. Available endpoints:
- `/api/plugins/rir-manager/configs/` — RIR configurations
- `/api/plugins/rir-manager/user-keys/` — Per-user API keys
- `/api/plugins/rir-manager/organizations/` — RIR organizations
- `/api/plugins/rir-manager/contacts/` — RIR contacts (POCs)
- `/api/plugins/rir-manager/networks/` — RIR networks
- `/api/plugins/rir-manager/sync-logs/` — Sync operation logs
- `/api/plugins/rir-manager/tickets/` — RIR tickets
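For example, the networks endpoint can be queried like any other NetBox REST resource using token authentication. The sketch below only builds the request (the host and token are placeholder assumptions — substitute your own):

```python
# Build (but do not send) a request against the networks endpoint listed
# above. NETBOX_URL and TOKEN are placeholder assumptions; the
# "Authorization: Token ..." header is standard NetBox API authentication.
from urllib.request import Request

NETBOX_URL = "https://netbox.example.com"   # assumption: your NetBox host
TOKEN = "0123456789abcdef"                  # assumption: placeholder token

req = Request(
    f"{NETBOX_URL}/api/plugins/rir-manager/networks/",
    headers={
        "Authorization": f"Token {TOKEN}",
        "Accept": "application/json",
    },
)
print(req.full_url)
```

Send it with `urllib.request.urlopen(req)` (or your HTTP client of choice) to receive the usual paginated NetBox JSON response.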
## Development
### Setup
```bash
git clone https://github.com/jsenecal/netbox-rir-manager.git
cd netbox-rir-manager
pip install -e ".[dev]"
```
### Linting
```bash
ruff check netbox_rir_manager/ tests/
ruff format --check netbox_rir_manager/ tests/
```
### Running tests
```bash
pytest
```
Tests require a running NetBox environment with PostgreSQL and Redis. See the [CI workflow](.github/workflows/ci.yml) for the full setup.
## Contributing
Contributions are welcome! In particular, PRs adding support for additional RIR backends (RIPE, APNIC, LACNIC, AFRINIC) are encouraged. The plugin uses a pluggable backend architecture — see `netbox_rir_manager/backends/base.py` for the abstract `RIRBackend` class to implement.
## License
This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
| text/markdown | null | Jonathan Senecal <contact@jonathansenecal.com> | null | null | Apache-2.0 | netbox, netbox-plugin, rir, arin, ripe, ipam, network-automation | [
"Development Status :: 3 - Alpha",
"Framework :: Django",
"Intended Audience :: System Administrators",
"Intended Audience :: Telecommunications Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming La... | [] | null | null | >=3.12 | [] | [] | [] | [
"pyregrws>=0.2.7",
"tenacity>=8.0",
"geopy>=2.4",
"pycountry>=24.6",
"pytest>=8.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\"",
"pre-commit>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jsenecal/netbox-rir-manager",
"Source, https://github.com/jsenecal/netbox-rir-manager",
"Tracker, https://github.com/jsenecal/netbox-rir-manager/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:59:30.003894 | netbox_rir_manager-0.3.1.tar.gz | 68,070 | 25/3a/1ee2b02c52d6bb474adb362f04a253dd9fe0f21603318ac2f087c1449bf1/netbox_rir_manager-0.3.1.tar.gz | source | sdist | null | false | 3f7d1b2dba0523ecb914bb8788636372 | c6878dd69a7fbe4650114382dfdfb125bb2cee7e3a99f777fe7478c1b42180f7 | 253a1ee2b02c52d6bb474adb362f04a253dd9fe0f21603318ac2f087c1449bf1 | null | [
"LICENSE"
] | 232 |
2.4 | ariadne-roots | 0.2.0a1 | Ariadne root tracing GUI and trait calculator. | # Ariadne
[](https://pypi.org/project/ariadne-roots/)
[](https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne/releases)
🌱 is a small software package for analyzing images of _Arabidopsis thaliana_ roots.
📷 It features a GUI for semi-automated image segmentation
<img src="assets/color-final.gif" width="250" height="250">
⏰ with support for time-series GIFs
<img src="assets/early-final.gif" width="250" height="250">
☠️ that creates dynamic 2D skeleton graphs of the root system architecture (RSA).
🔍 It's designed specifically to handle complex, messy, and highly-branched root systems well — the same situations in which current methods fail.
📊 It also includes some (very cool) algorithms for analyzing those skeletons, which were mostly developed by other (very cool) people<sup id="a1">[1](#f1)</sup><sup>,</sup><sup id="a2">[2](#f2)</sup>. The focus is on measuring cost-performance trade-offs and Pareto optimality in RSA networks.
⚠️ This is very much a work-in-progress! These are custom scripts written for an ongoing research project — so all code is provided as-is.
🔨 That said, if you're interested in tinkering with the code, enjoy! PRs are always welcome. And please reach out with any comments, ideas, suggestions, or feedback.
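The Pareto-optimality analysis mentioned above boils down to finding networks that no alternative beats on both cost and performance at once. Here is a generic sketch of that idea — an illustration only, not Ariadne's actual implementation:

```python
# Generic Pareto-front extraction over (cost, performance) points.
# Illustrative only; Ariadne's own analysis code may differ.

def pareto_front(points):
    """Return points not dominated by another point that has
    lower-or-equal cost AND higher-or-equal performance."""
    front = []
    for cost, perf in points:
        dominated = any(
            (c <= cost and p >= perf) and (c, p) != (cost, perf)
            for c, p in points
        )
        if not dominated:
            front.append((cost, perf))
    return front

networks = [(1.0, 0.2), (2.0, 0.8), (3.0, 0.7), (4.0, 0.9)]
print(pareto_front(networks))  # (3.0, 0.7) is dominated by (2.0, 0.8)
```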
## Documentation
📚 **Detailed documentation for scientists and developers:**
| Document | Description |
|----------|-------------|
| [Scientific Methods](docs/scientific-methods.md) | Pareto optimality calculations, mathematical formulas, and academic references |
| [Output Fields Reference](docs/output-fields.md) | Complete reference for all CSV output fields with units and interpretation |
For citing the underlying methods, see the [References](#references) section or the [Scientific Methods](docs/scientific-methods.md) documentation.
## Installation
Ariadne is installed as a Python package called `ariadne-roots`. We recommend using a package manager and creating an isolated environment for `ariadne-roots` and its dependencies.
You can find the latest version of `ariadne-roots` on the [Releases](https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne/releases) page.
## Installation (Users)
We recommend installing Ariadne in an isolated environment using [uv](https://docs.astral.sh/uv/) to keep your environment clean and reproducible.
### Prerequisites
The GUI requires **tkinter**, which is part of Python's standard library:
- **uv (recommended)**: tkinter is included automatically with uv's managed Python installations
- **macOS (system Python via Homebrew)**: `brew install python-tk@3.12` (only if not using uv)
- **Ubuntu/Debian (system Python)**: `sudo apt-get install python3-tk` (only if not using uv)
- **Windows (system Python)**: tkinter is typically included with standard Python installations
- **conda/mamba**: tkinter is included automatically
To verify tkinter is available: `python -c "import tkinter"`
**Note:** If you use `uv` to manage Python (recommended), tkinter is already included and requires no separate installation.
There are three main ways to install and run Ariadne:
---
### Option 1. Local Environment
This creates a local `.venv` folder to hold Ariadne and its dependencies.
```sh
# Create a local environment with pip + setuptools pre-seeded
uv venv --seed .venv
# Activate it (Linux/macOS)
source .venv/bin/activate
# Activate it (Windows PowerShell)
.venv\Scripts\activate
# Install Ariadne
uv pip install ariadne-roots
```
Then run the GUI:
```sh
ariadne-trace
```
---
### Option 2. One-liner Install & Run (no manual environment necessary)
You can also run Ariadne directly with `uvx`, which installs it into an isolated cache and exposes the CLI:
```sh
uvx ariadne-trace
```
This will launch the GUI without needing to set up or activate a venv manually.
---
### Option 3. Project-Based Workflow (for developers/power users)
This approach creates a dedicated project for using Ariadne with `uv init` and `uv add`:
```sh
# Create a new project directory
uv init ariadne-project
cd ariadne-project
# Add Ariadne as a dependency (creates .venv automatically)
uv add ariadne-roots
# Run the GUI
uv run ariadne-trace
```
This is ideal if you want to:
- Keep Ariadne alongside other analysis tools in a project
- Lock dependencies with `uv.lock` for reproducibility
- Manage multiple environments for different analyses
---
## Usage
### If installed in a local environment (Option 1)
Activate the environment and run:
```sh
source .venv/bin/activate # or .venv\Scripts\activate on Windows
ariadne-trace
```
### If using the one-liner (Option 2)
Simply run:
```sh
uvx ariadne-trace
```
### If using project-based workflow (Option 3)
Run from your project directory:
```sh
cd ariadne-project
uv run ariadne-trace
```
### `conda` environment installation
Follow the instructions to install [Miniforge3](https://github.com/conda-forge/miniforge).
### Step-by-Step Installation
1. **Create an isolated environment:**
```sh
mamba create --name ariadne python=3.11
```
2. **Activate your environment:**
```sh
mamba activate ariadne
```
3. **Install `ariadne-roots` using pip:**
```sh
pip install --pre ariadne-roots # Use --pre to include pre-release versions
```
- Omit the `--pre` flag if you only want to install stable releases.
## Usage
1. **Activate your environment:**
```sh
mamba activate ariadne
```
2. **Open the GUI:**
```sh
ariadne-trace
```
### Trace with Ariadne
1. **Click on “Trace”** to trace roots.
2. The following window should open:
<img src="assets/Trace_Menu.png" width="450" height="450">
3. **Click on “Import image file”** and select the image to trace the roots.
4. **Select the zoom factor for your image**
- A window should pop up asking for the zoom factor
- Use the "Zoom in" and "Zoom out" buttons to adjust the zoom needed to trace the root system with high precision
- Click on "OK"
- After this, any new image imported will be opened with the same zoom factor
- Remark: the zoom factor resets when Ariadne is closed, so take note of the value you used and reapply it each time you restart Ariadne.
5. **Trace the first root:**
- Start tracing the entire primary root first (it should appear green).
- To save time, place a dot at each point where a lateral root emerges.
6. **Save the traced root:**
- When the first root is fully traced, click on the “Save” button on the left-hand menu of Ariadne or press “g” on your keyboard.
- A new window will pop up asking for the plant ID. Enter a letter (e.g., "A") or a number (e.g., "1").
- Each time you click on "Save", a .json file will be saved at location_1 (see the Analyze section below).
7. **Trace additional roots:**
- When you are done tracing the first root, click on the “Change root” button on the left-hand menu of Ariadne.
- Select a new plant ID, like “B”, to trace the second root.
- Continue tracing each root on your image following these steps.
8. **Finish tracing:**
- When you have traced all roots on your image, click on “Change root” and repeat from “Step 3” above for any new images.
### Analyze with Ariadne
1. **Organize your files:**
- Gather all the .json files stored at the location where Ariadne has been installed into a new folder named "OUTPUT_JSON" (referred to as "location_1" later on).
- Create a folder named "RESULTS" (referred to as "location_2").
- Create a new folder named "Output".
2. **Prepare for analysis:**
- Close Ariadne but keep the terminal open.
- Follow the instructions in step 2 above to set up the terminal.
3. **Run the analysis:**
- Click on "Analyze" in Ariadne.
<img src="assets/Welcome.png" width="400" height="250">
- **Set scaling parameters** (optional):
- A dialog will appear asking you to configure measurement units
- Enter the conversion factor (e.g., if 1 pixel = 2.5 mm, enter `1` for pixels and `2.5` for distance)
- Select or enter the unit name (e.g., "mm", "cm", "µm")
- Click "OK" to continue, or "Cancel" to use default (pixels)
- **Select input files**:
- Choose the .json files to analyze from "location_1"
- An info dialog will confirm the number of files selected
- **Select output folder**:
- Choose "location_2" for the output
- The software will analyze all selected files and save results
- **Completion**:
- A dialog will show where the results were saved (CSV report and Pareto plots)
### Results
- In the output folder you selected, you will find:
- A Pareto optimality plot for each root (PNG format)
- A timestamped CSV file (e.g., `report_20241110_153045.csv`) storing all RSA traits for each root
**Note on Units:** If you configured scaling during analysis, all length measurements in the CSV and plots will be in your specified units (e.g., mm, cm). Otherwise, measurements are in pixels.
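Since each run writes a new timestamped CSV, several reports may accumulate in the output folder. A small helper (a sketch; the filename pattern is taken from the example above) can pick the most recent one:

```python
from datetime import datetime

def latest_report(filenames):
    """Return the most recent report_YYYYMMDD_HHMMSS.csv filename."""
    def stamp(name):
        # Extract "20241110_153045" from "report_20241110_153045.csv"
        core = name.removeprefix("report_").removesuffix(".csv")
        return datetime.strptime(core, "%Y%m%d_%H%M%S")
    return max(filenames, key=stamp)

reports = ["report_20241110_153045.csv", "report_20250101_090000.csv"]
print(latest_report(reports))  # report_20250101_090000.csv
```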
The RSA traits included in the CSV are:
- **Total root length:** Combined length of all traced roots (primary and lateral)
- **Travel distance:** Sum of the length from the hypocotyl to each root tip (Pareto related trait)
- **Alpha:** Trade-off value between growth and transport efficiency (Pareto related trait)
- **Scaling distance to front:** Pareto optimality value (Pareto related trait)
- **Total root length (random):** Random total root length
- **Travel distance (random):** Random sum of the length from the hypocotyl to each root tip (Pareto related trait)
- **Alpha (random):** Random trade-off value between growth and transport efficiency (Pareto related trait)
- **Scaling distance to front (random):** Random Pareto optimality value (Pareto related trait)
- **PR length:** Length of the primary root
- **PR minimal length:** Euclidean distance from the hypocotyl to the primary root tip
- **Basal zone length:** Length from the hypocotyl to the insertion of the first lateral root along the primary root
- **Branched zone length:** Length from the insertion of the first lateral root to the insertion of the last lateral root along the primary root
- **Apical zone length:** Length from the last lateral root to the root tip along the primary root
- **Mean LR lengths:** Average length of all lateral roots
- **Mean LR minimal distances:** Average Euclidean distance between each lateral root tip and its insertion on the primary root for all lateral roots
- **Median LR lengths:** Median length of all lateral roots
- **Median LR minimal distances:** Median Euclidean distance between each lateral root tip and its insertion on the primary root for all lateral roots
- **Sum LR minimal distances:** Sum of the Euclidean distances between each lateral root tip and its insertion on the primary root for all lateral roots
- **Mean LR angles:** Average lateral root set point angles
- **Median LR angles:** Median lateral root set point angles
- **LR count:** Number of lateral roots
- **LR density:** Number of lateral roots divided by primary root length
- **Branched zone density:** Number of lateral roots divided by Branched zone length
- **LR lengths:** Length of each individual lateral root
- **LR angles:** Lateral root set point angle of each individual lateral root
- **LR minimal distance:** Euclidean distance between each lateral root tip and its insertion on the primary root for each lateral root
- **Barycentre x displacement:** Vertical distance between the hypocotyl base to the barycenter of the convex hull
- **Barycentre y displacement:** Horizontal distance between the hypocotyl base to the barycenter of the convex hull
- **Total minimal distance:** Sum of LR minimal distances plus PR minimal length
- **Tortuosity (Material/Total Distance Ratio):** Total root length divided by total minimal distance
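Some of these traits are simple ratios of others; for instance, LR density and tortuosity can be recomputed from the raw values (a sketch with hypothetical numbers, in whatever units you chose during analysis):

```python
def lr_density(lr_count, pr_length):
    """Number of lateral roots per unit primary root length."""
    return lr_count / pr_length

def tortuosity(total_root_length, total_minimal_distance):
    """Total root length divided by total minimal distance."""
    return total_root_length / total_minimal_distance

print(lr_density(8, 40.0))       # 0.2
print(tortuosity(120.0, 100.0))  # 1.2
```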
#### 3D Pareto Analysis Fields (Optional)
When **"Add path tortuosity to Pareto (3D, slower)"** is enabled during analysis, additional fields are computed that include path coverage as a third objective:
- **Path tortuosity:** Sum of tortuosity values for all root paths
- **alpha_3d, beta_3d, gamma_3d:** Interpolated Pareto weights (α + β + γ = 1)
- **epsilon_3d:** Multiplicative ε-indicator measuring distance from the 3D Pareto front
- **epsilon_3d_material/transport/coverage:** Individual ratio components showing which objective constrains optimality
- **Corner costs (Steiner/Satellite/Coverage):** Reference values for optimal architectures at each corner of the Pareto surface
For complete field descriptions, see the [Output Fields Reference](docs/output-fields.md).
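As a rough illustration of how these fields relate (hypothetical values; assuming, as the descriptions above suggest, that the largest ε component marks the constraining objective):

```python
# Hypothetical values for one root, using the field names from the CSV
weights = {"alpha_3d": 0.5, "beta_3d": 0.3, "gamma_3d": 0.2}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # α + β + γ = 1

eps_components = {
    "epsilon_3d_material": 1.05,
    "epsilon_3d_transport": 1.20,
    "epsilon_3d_coverage": 1.01,
}
# Largest ratio component = objective constraining optimality (assumption)
constraining = max(eps_components, key=eps_components.get)
print(constraining)  # epsilon_3d_transport
```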
##### Keybinds
* `Left-click`: place/select node.
* `Ctrl`: Hold Ctrl to scroll through the image with the mouse
* `t`: toggle skeleton visibility (default: on)
* `e`: next frame (GIFs only)
* `q`: previous frame (GIFs only)
* `r`: toggle proximity override. By default, clicking on or near an existing node will select it. When this override is on, a new node will be placed instead. Useful for finer control in crowded areas (default: off)
* `i`: toggle insertion mode. By default, new nodes extend a branch (i.e., have a degree of 1). Alternatively, use insertion mode to intercalate a new node between two existing ones. Useful for handling emerging lateral roots in regions you have already segmented (default: off)
* `g`: Save output file
* `d`: Delete currently selected node(s)
* `c`: Erase the current tree and ask for a new plant ID
* `+`: Zoom in
* `-`: Zoom out
* `Ctrl-Z`: Undo last action
## Contributing
Follow these steps to set up your development environment and start making contributions to the project.
1. **Navigate to the desired directory**
Change directories to where you would like the repository to be downloaded
```sh
cd /path/on/computer/for/repos
```
2. **Clone the repository**
```sh
git clone https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne.git
```
3. **Navigate to the root of the cloned repository**
```sh
cd Ariadne
```
## 🛠️ For Developers
### Requirements
- [uv](https://github.com/astral-sh/uv) for dependency management
- Python 3.11+
### Setting Up a Development Environment
Clone the repository:
```bash
git clone https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne.git
cd Ariadne
```
## 🛠️ Development with uv
We use [uv](https://github.com/astral-sh/uv) for dependency management and tooling.
- This workflow is tested in GitHub Actions using `.github/workflows/test-dev.yml`.
- Python version is pinned to 3.12 in `.python-version` (CI tests 3.12 and 3.13).
- Dependencies are locked in `uv.lock` for reproducible builds.
### Quick Start
After cloning the repository, set up your development environment:
```bash
uv sync
```
This command:
- Reads `.python-version` to use Python 3.12 automatically
- Creates `.venv` (if it doesn't exist)
- Installs dependencies from the committed `uv.lock` file (reproducible!)
- Installs runtime dependencies plus the `dev` group (tests, linters, etc.)
**Important:** Always use `uv sync` (not `uv pip install`) to ensure you get the exact dependency versions from the lockfile.
---
### Running Commands
Use `uv run` to execute commands **inside the project environment** without manually activating `.venv`:
```bash
# Run tests with coverage
uv run pytest --cov=ariadne_roots --cov-report=term-missing
# Check code formatting
uv run black --check .
# Run linting
uv run ruff check .
# Run the CLI
uv run ariadne-trace
```
**Cross-platform:** `uv run` works on Linux, macOS, and Windows without needing venv activation.
---
### Updating Dependencies
To update dependencies, modify `pyproject.toml` then regenerate the lockfile:
```bash
# Update lockfile after changing dependencies
uv lock
# Sync environment with new lockfile
uv sync
# Commit both files
git add pyproject.toml uv.lock
git commit -m "Update dependencies"
```
**CI Integration:** Our CI uses `uv sync --frozen` to ensure the lockfile isn't modified during builds, catching any uncommitted dependency changes.
---
### Security & Dependency Scanning
We maintain secure dependencies through regular vulnerability scanning. To check for known vulnerabilities:
```bash
# Scan current dependencies for security issues
uvx pip-audit
```
**Best Practices:**
- Run `uvx pip-audit` before each release
- Check the lockfile regularly for outdated dependencies with known vulnerabilities
- Review [GitHub Security Advisories](https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne/security/advisories) for this repository
**Automated Monitoring:**
Consider enabling GitHub Dependabot to receive automated pull requests for dependency updates and security patches.
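A minimal `.github/dependabot.yml` to enable this might look like the following (a sketch; adjust the schedule to taste):

```yaml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```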
**Current Status:** No known vulnerabilities (last checked: November 2025)
---
### Building Artifacts
To build source and wheel distributions:
```bash
uv build
```
Artifacts will be created in the `dist/` directory.
---
### Alternative: install dev extras with pip
If you’re not using `uv`, you can still install everything with pip:
```bash
pip install -e ".[dev]"
```
This installs runtime + dev dependencies into your current environment.
---
### Instructions for `conda` environment
1. **Create a development environment**
The following steps install the necessary dependencies and the `ariadne-roots` package in editable mode.
```sh
mamba create --name ariadne_dev python=3.11 # python 3.11, 3.12, 3.13 are tested in the CI
```
2. **Activate the development environment**
```sh
mamba activate ariadne_dev
```
3. **Install dev dependencies and source code in editable mode**
```bash
pip install -e ".[dev]"
```
## Development Rules
1. **Create a branch for your changes**
Before making any changes, create a new branch
```sh
git checkout -b your-branch-name
```
2. **Code**
Make your changes. Please make sure your code is readable and documented.
- The Google style is preferred.
- Use docstrings with args and returns defined for each function.
- Typing annotations are preferred.
- Use comments to explain steps of calculations and algorithms.
- Use consistent variable names.
- Please use full words and not letters as variable names so that variables are readable.
3. **Commit often**
Commit your changes frequently with short, descriptive messages. This helps track progress and makes it easier to identify issues.
```sh
git add <changed_files>
git commit -m "Short, descriptive commit message"
```
4. **Open a pull request**
Before starting work, you can open a draft pull request with a descriptive plan of what you intend to do and why.
Once your changes are ready, push your branch to the remote repository. Provide a clear explanation of what you changed and why.
```sh
git push origin your-branch-name
```
- Go to the repository on GitHub.
- Click on **Compare & pull request**.
- Fill in the title and description of your pull request.
- Click **Create pull request**.
5. **Test your changes**
Ensure your changes pass all tests and do not break existing functionality.
6. **Request a review**
In the pull request, request a review from the appropriate team members. Notify them via GitHub.
7. **Merge your changes to main**
After your code passes review, merge your changes to the `main` branch.
- Click **Merge pull request** on GitHub.
- Confirm the merge.
8. **Delete your remote branch**
Once your changes are merged, delete your remote branch to keep the repository clean.
## Releasing `ariadne-roots`
The GitHub Actions workflow `.github/workflows/python-publish.yml` publishes the `ariadne-roots` package to [PyPI](https://pypi.org/project/ariadne-roots/).
To release a new package, follow these instructions:
**Follow contributing instructions above**
1. **Make a new branch to record your changes**
```sh
git checkout -b <your_name>/bump_version_to_<version>
```
2. **Modify version**
The `pyproject.toml` file contains the information for the pip package. Incrementally increase the "version" with each release.
**Semantic Versioning**
Semantic versioning (SemVer) is a versioning system that uses the format:
`MAJOR.MINOR.PATCH`
- **MAJOR:** Increase when you make incompatible API changes.
- **MINOR:** Increase when you add functionality in a backward-compatible manner.
- **PATCH:** Increase when you make backward-compatible bug fixes.
For example:
- If the current version is `1.2.3`:
- A breaking change would result in `2.0.0`.
- Adding a new feature would result in `1.3.0`.
- Fixing a bug would result in `1.2.4`.
Learn more about the rules of semantic versioning [here](https://semver.org).
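The bump rules above can be expressed as a tiny helper (a hypothetical sketch, not part of `ariadne-roots`):

```python
def bump(version, part):
    """Bump a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")

print(bump("1.2.3", "major"))  # 2.0.0
print(bump("1.2.3", "minor"))  # 1.3.0
print(bump("1.2.3", "patch"))  # 1.2.4
```

Note this handles only plain `MAJOR.MINOR.PATCH` strings; pre-release suffixes such as `0.2.0a1` would need extra parsing.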
3. **Commit changes**
After making the required modifications, commit your changes:
```sh
git add pyproject.toml
git commit -m "Bump version to <version>"
git push origin <your_name>/bump_version_to_<version>
```
4. **Open a pull request**
1. Go to the repository on GitHub.
2. You should see a banner prompting you to compare & create a pull request for your branch. Click it.
3. Fill in the pull request title and description. For example:
- **Title:** Bump version to `<version>`
- **Description:** "This PR updates the version to `<version>` for release."
4. Click **Create pull request**.
5. **Request a review**
After creating the pull request, in the right-hand sidebar, click on **Reviewers** and select the appropriate reviewer(s). Notify the reviewer(s) via GitHub.
6. **Merge your changes to `main` after review**
Once the reviewer approves your pull request, merge it into the `main` branch.
7. **Release to trigger the workflow**
1. Go to the [release page](https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne/releases).
2. Draft a new release:
- Create a new tag with the version number you used in the repository.
- Have GitHub draft the release notes to include all the changes since the last release.
- Modify the release name to include `ariadne-roots`, so that it says `ariadne-roots v<version>` like the rest.
3. Please ask for your release to be reviewed before releasing.
8. **Verify the release**
Check [PyPI](https://pypi.org/project/ariadne-roots/#history) and the GitHub Actions of our repository to make sure the pip package was created and published successfully.
- You should see the latest release with the correct version number at pypi.org.
- The GitHub Actions runs should have green checkmarks, not red X's, associated with your release.
## Contributors
- Kian Faizi
- Matthieu Platre
- Elizabeth Berrigan
## Contact
For any questions or further information, please contact:
- **Matthieu Platre:** [matthieu.platre@inrae.fr](mailto:matthieu.platre@inrae.fr)
## References
<b id="f1">1.</b> Chandrasekhar, Arjun, and Navlakha, Saket. "Neural arbors are Pareto optimal." _Proceedings of the Royal Society B_ 286.1902 (2019): 20182727. https://doi.org/10.1098/rspb.2018.2727 [↩](#a1)
<b id="f2">2.</b> Conn, Adam, et al. "High-resolution laser scanning reveals plant architectures that reflect universal network design principles." _Cell Systems_ 5.1 (2017): 53-62. https://doi.org/10.1016/j.cels.2017.06.017 [↩](#a2)
| text/markdown | null | Matthieu Platre <mattplatre@gmail.com>, Kian Faizi <kian@caltech.edu>, Elizabeth Berrigan <eberrigan@salk.edu> | null | null | null | ariadne, plants, roots, phenotyping, pareto | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pillow",
"networkx",
"numpy",
"scipy",
"matplotlib",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"pydocstyle; extra == \"dev\"",
"toml; extra == \"dev\"",
"twine; extra == \"dev\"",
"build; extra == \"dev\"",
"ipython; extra == \"dev\"",
"ruff; e... | [] | [] | [] | [
"Homepage, https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne",
"Issues, https://github.com/Salk-Harnessing-Plants-Initiative/Ariadne/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:59:10.846045 | ariadne_roots-0.2.0a1.tar.gz | 101,569 | 8d/0e/b37011054bd1f2c841edf127c99e2409b61fa0fdcfeeac695945130d1371/ariadne_roots-0.2.0a1.tar.gz | source | sdist | null | false | ec05ed637c810d2d527305154710ab3f | 72a944dd97e0246662ee729e119024f875c664444bb46e9e27ce1d724465fd1b | 8d0eb37011054bd1f2c841edf127c99e2409b61fa0fdcfeeac695945130d1371 | GPL-3.0-or-later | [
"LICENSE"
] | 205 |
2.3 | llama-cloud | 1.4.1 | The official Python library for the llama-cloud API | # Llama Cloud Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/llama_cloud/)
The Llama Cloud Python library provides convenient access to the Llama Cloud REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## MCP Server
Use the Llama Cloud MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.
[](https://cursor.com/en-US/install-mcp?name=%40llamaindex%2Fllama-cloud-mcp&config=eyJuYW1lIjoiQGxsYW1haW5kZXgvbGxhbWEtY2xvdWQtbWNwIiwidHJhbnNwb3J0IjoiaHR0cCIsInVybCI6Imh0dHBzOi8vbGxhbWFjbG91ZC1wcm9kLnN0bG1jcC5jb20iLCJoZWFkZXJzIjp7IngtbGxhbWEtY2xvdWQtYXBpLWtleSI6Ik15IEFQSSBLZXkifX0)
[](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22%40llamaindex%2Fllama-cloud-mcp%22%2C%22type%22%3A%22http%22%2C%22url%22%3A%22https%3A%2F%2Fllamacloud-prod.stlmcp.com%22%2C%22headers%22%3A%7B%22x-llama-cloud-api-key%22%3A%22My%20API%20Key%22%7D%7D)
> Note: You may need to set environment variables in your MCP client.
## Documentation
The REST API documentation can be found on [developers.llamaindex.ai](https://developers.llamaindex.ai/). The full API of this library can be found in [api.md](https://github.com/run-llama/llama-cloud-py/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install llama_cloud
```
## Usage
The full API of this library can be found in [api.md](https://github.com/run-llama/llama-cloud-py/tree/main/api.md).
```python
import os
from llama_cloud import LlamaCloud
client = LlamaCloud(
api_key=os.environ.get("LLAMA_CLOUD_API_KEY"), # This is the default and can be omitted
)
parsing = client.parsing.create(
tier="agentic",
version="latest",
file_id="abc1234",
)
print(parsing.id)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `LLAMA_CLOUD_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncLlamaCloud` instead of `LlamaCloud` and use `await` with each API call:
```python
import os
import asyncio
from llama_cloud import AsyncLlamaCloud
client = AsyncLlamaCloud(
api_key=os.environ.get("LLAMA_CLOUD_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
parsing = await client.parsing.create(
tier="agentic",
version="latest",
file_id="abc1234",
)
print(parsing.id)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install llama_cloud[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from llama_cloud import DefaultAioHttpClient
from llama_cloud import AsyncLlamaCloud
async def main() -> None:
async with AsyncLlamaCloud(
api_key=os.environ.get("LLAMA_CLOUD_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
parsing = await client.parsing.create(
tier="agentic",
version="latest",
file_id="abc1234",
)
print(parsing.id)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination
List methods in the Llama Cloud API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from llama_cloud import LlamaCloud
client = LlamaCloud()
all_runs = []
# Automatically fetches more pages as needed.
for run in client.extraction.runs.list(
extraction_agent_id="30988414-9163-4a0b-a7e0-35dd760109d7",
limit=20,
skip=0,
):
# Do something with run here
all_runs.append(run)
print(all_runs)
```
Or, asynchronously:
```python
import asyncio
from llama_cloud import AsyncLlamaCloud
client = AsyncLlamaCloud()
async def main() -> None:
all_runs = []
# Iterate through items across all pages, issuing requests as needed.
async for run in client.extraction.runs.list(
extraction_agent_id="30988414-9163-4a0b-a7e0-35dd760109d7",
limit=20,
skip=0,
):
all_runs.append(run)
print(all_runs)
asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.extraction.runs.list(
extraction_agent_id="30988414-9163-4a0b-a7e0-35dd760109d7",
limit=20,
skip=0,
)
if first_page.has_next_page():
print(f"will fetch next page using these details: {first_page.next_page_info()}")
next_page = await first_page.get_next_page()
print(f"number of items we just fetched: {len(next_page.items)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.extraction.runs.list(
extraction_agent_id="30988414-9163-4a0b-a7e0-35dd760109d7",
limit=20,
skip=0,
)
print(
f"the current start offset for this page: {first_page.skip}"
) # => "the current start offset for this page: 1"
for run in first_page.items:
print(run.id)
# Remove `await` for non-async usage.
```
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from llama_cloud import LlamaCloud
client = LlamaCloud()
parsing = client.parsing.create(
tier="fast",
version="2026-01-08",
agentic_options={},
)
print(parsing.agentic_options)
```
## File uploads
Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.
```python
from pathlib import Path
from llama_cloud import LlamaCloud
client = LlamaCloud()
client.files.create(
file=Path("/path/to/file"),
purpose="purpose",
)
```
The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `llama_cloud.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `llama_cloud.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `llama_cloud.APIError`.
```python
import llama_cloud
from llama_cloud import LlamaCloud
client = LlamaCloud()
try:
client.pipelines.list(
project_id="my-project-id",
)
except llama_cloud.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except llama_cloud.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except llama_cloud.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from llama_cloud import LlamaCloud
# Configure the default for all requests:
client = LlamaCloud(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).pipelines.list(
project_id="my-project-id",
)
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from llama_cloud import LlamaCloud
# Configure the default for all requests:
client = LlamaCloud(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = LlamaCloud(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).pipelines.list(
project_id="my-project-id",
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/run-llama/llama-cloud-py/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `LLAMA_CLOUD_LOG` to `info`.
```shell
$ export LLAMA_CLOUD_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from llama_cloud import LlamaCloud
client = LlamaCloud()
response = client.pipelines.with_raw_response.list(
project_id="my-project-id",
)
print(response.headers.get('X-My-Header'))
pipeline = response.parse() # get the object that `pipelines.list()` would have returned
print(pipeline)
```
These methods return an [`APIResponse`](https://github.com/run-llama/llama-cloud-py/tree/main/src/llama_cloud/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/run-llama/llama-cloud-py/tree/main/src/llama_cloud/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.pipelines.with_streaming_response.list(
project_id="my-project-id",
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and the other
HTTP verbs. Client options (such as retries) will be respected when making these requests.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
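For example (a sketch, assuming `client` is the `LlamaCloud` instance from the snippets above; the parameter names shown are hypothetical undocumented fields):
```py
pipelines = client.pipelines.list(
    project_id="my-project-id",
    # Each of these applies only to this request:
    extra_headers={"X-Debug": "true"},        # additional HTTP headers
    extra_query={"undocumented_param": "1"},  # appended to the query string
    extra_body={"undocumented_field": 123},   # merged into the request body
)
```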
#### Undocumented response properties
To access undocumented response properties, you can read the extra fields directly, e.g. `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from llama_cloud import LlamaCloud, DefaultHttpxClient
client = LlamaCloud(
# Or use the `LLAMA_CLOUD_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from llama_cloud import LlamaCloud
with LlamaCloud() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/run-llama/llama-cloud-py/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import llama_cloud
print(llama_cloud.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/run-llama/llama-cloud-py/tree/main/./CONTRIBUTING.md).
| text/markdown | Llama Cloud | null | null | null | MIT | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Pro... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/run-llama/llama-cloud-py",
"Repository, https://github.com/run-llama/llama-cloud-py"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-18T18:58:11.938878 | llama_cloud-1.4.1.tar.gz | 2,277,660 | 72/e8/bba7b09a9d333f41cfec895a7a910978d60095ac0cc7601bac7d9f868a62/llama_cloud-1.4.1.tar.gz | source | sdist | null | false | 6452c284cb7813f3f5edf512cf62af3d | d3b324c36cbbae5fcf0f2a17af521fc10ae0d6bdaab54b704e61818fe23d421e | 72e8bba7b09a9d333f41cfec895a7a910978d60095ac0cc7601bac7d9f868a62 | null | [] | 15,962 |
2.4 | ladybug-rhino | 1.44.8 | A library for communicating between Ladybug Tools core libraries and Rhinoceros CAD. | 
[](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# ladybug-rhino
A library for communicating between Ladybug Tools core libraries and Rhinoceros CAD.
This library is used by both the Grasshopper and Rhino plugins to communicate with
the ladybug core Python library. Note that this library depends on the
Rhino SDK and Grasshopper SDK and is intended to contain all such dependencies
for the LBT-Grasshopper plugin. It is NOT intended to be run with CPython,
except when running the CLI or when used with the CPython capabilities in Rhino 8.
## Installation
`pip install -U ladybug-rhino`
To check that the Ladybug Rhino command line is installed correctly, try `ladybug-rhino viz`
and you should get a `viiiiiiiiiiiiizzzzzzzzz!` back in response!
## [API Documentation](http://ladybug-tools.github.io/ladybug-rhino/docs/)
## Local Development
1. Clone this repo locally
```console
git clone git@github.com:ladybug-tools/ladybug-rhino
# or
git clone https://github.com/ladybug-tools/ladybug-rhino
```
2. Install dependencies
```console
cd ladybug-rhino
pip install -r dev-requirements.txt
pip install -r requirements.txt
pip install pythonnet
pip install rhinoinside
```
3. Generate Documentation
```console
sphinx-apidoc -f -e -d 4 -o ./docs ./ladybug_rhino
sphinx-build -b html ./docs ./docs/_build/docs
```
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/ladybug-rhino | null | null | [] | [] | [] | [
"ladybug-display>=0.5.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-18T18:58:09.955616 | ladybug_rhino-1.44.8.tar.gz | 163,848 | c8/98/c5351db9ff5b7021f6048e330674e762da96e8ad7a655376dda91bdbd845/ladybug_rhino-1.44.8.tar.gz | source | sdist | null | false | 88194f2016b95ff3220d20d6961d7769 | 8608fbf81841d1ed9360d48807e32f144dc1eb9f7d51cf4480f7b297c9dd1fd0 | c898c5351db9ff5b7021f6048e330674e762da96e8ad7a655376dda91bdbd845 | null | [
"LICENSE"
] | 1,279 |
2.4 | kida-templates | 0.2.2 | Modern template engine for Python 3.14t — AST-native, free-threading ready | # )彡 Kida
[](https://pypi.org/project/kida-templates/)
[](https://github.com/lbliii/kida/actions/workflows/tests.yml)
[](https://pypi.org/project/kida-templates/)
[](https://opensource.org/licenses/MIT)
**Modern template engine for Python 3.14t**
```python
from kida import Environment
env = Environment()
template = env.from_string("Hello, {{ name }}!")
print(template.render(name="World"))
# Output: Hello, World!
```
---
## What is Kida?
Kida is a modern template engine for Python 3.14t. It compiles templates to Python AST directly (no string generation), supports streaming and fragment rendering, and is built for free-threading.
**What's good about it:**
- **AST-native** — Compiles to Python AST directly. Structured code manipulation, compile-time optimization, precise error source mapping.
- **Free-threading ready** — Safe for Python 3.14t concurrent execution (PEP 703). All public APIs are thread-safe.
- **Dual-mode rendering** — `render()` uses StringBuilder for maximum throughput. `render_stream()` yields chunks for streaming HTTP and SSE.
- **Modern syntax** — Pattern matching, pipeline operator, unified `{% end %}`, null coalescing, optional chaining.
- **Zero dependencies** — Pure Python, includes native `Markup` implementation.
---
## Installation
```bash
pip install kida-templates
```
Requires Python 3.14+
---
## Quick Start
| Function | Description |
|----------|-------------|
| `Environment()` | Create a template environment |
| `env.from_string(src)` | Compile template from string |
| `env.get_template(name)` | Load template from filesystem |
| `template.render(**ctx)` | Render to string (StringBuilder, fastest) |
| `template.render_stream(**ctx)` | Render as generator (yields chunks) |
| `RenderedTemplate(template, ctx)` | Lazy iterable wrapper for streaming |
---
## Features
| Feature | Description | Docs |
|---------|-------------|------|
| **Template Syntax** | Variables, filters, control flow, pattern matching | [Syntax →](https://lbliii.github.io/kida/docs/syntax/) |
| **Inheritance** | Template extends, blocks, includes | [Inheritance →](https://lbliii.github.io/kida/docs/syntax/inheritance/) |
| **Filters & Tests** | 40+ built-in filters, custom filter registration | [Filters →](https://lbliii.github.io/kida/docs/reference/filters/) |
| **Streaming** | Statement-level generator rendering via `render_stream()` | [Streaming →](https://lbliii.github.io/kida/docs/usage/streaming/) |
| **Async Support** | Native `async for`, `await` in templates | [Async →](https://lbliii.github.io/kida/docs/syntax/async/) |
| **Caching** | Fragment caching with TTL support | [Caching →](https://lbliii.github.io/kida/docs/syntax/caching/) |
| **Components & Slots** | `{% def %}`, `{% call %}`, default + named `{% slot %}` | [Functions →](https://lbliii.github.io/kida/docs/syntax/functions/) |
| **Partial Evaluation** | Compile-time evaluation of static expressions | [Advanced →](https://lbliii.github.io/kida/docs/advanced/compiler/) |
| **Block Recompilation** | Recompile only changed blocks in live templates | [Advanced →](https://lbliii.github.io/kida/docs/advanced/compiler/) |
| **Extensibility** | Custom filters, tests, globals, loaders | [Extending →](https://lbliii.github.io/kida/docs/extending/) |
📚 **Full documentation**: [lbliii.github.io/kida](https://lbliii.github.io/kida/)
---
## Usage
<details>
<summary><strong>File-based Templates</strong> — Load from filesystem</summary>
```python
from kida import Environment, FileSystemLoader
env = Environment(loader=FileSystemLoader("templates/"))
template = env.get_template("page.html")
print(template.render(title="Hello", content="World"))
```
</details>
<details>
<summary><strong>Template Inheritance</strong> — Extend base templates</summary>
**base.html:**
```kida
<!DOCTYPE html>
<html>
<body>
{% block content %}{% end %}
</body>
</html>
```
**page.html:**
```kida
{% extends "base.html" %}
{% block content %}
<h1>{{ title }}</h1>
<p>{{ content }}</p>
{% end %}
```
</details>
<details>
<summary><strong>Control Flow</strong> — Conditionals, loops, pattern matching</summary>
```kida
{% if user.is_active %}
<p>Welcome, {{ user.name }}!</p>
{% end %}
{% for item in items %}
<li>{{ item.name }}</li>
{% end %}
{% match status %}
{% case "active" %}
Active user
{% case "pending" %}
Pending verification
{% case _ %}
Unknown status
{% end %}
```
</details>
<details>
<summary><strong>Components & Named Slots</strong> — Reusable UI composition</summary>
```kida
{% def card(title) %}
<article class="card">
<h2>{{ title }}</h2>
<div class="actions">{% slot header_actions %}</div>
<div class="body">{% slot %}</div>
</article>
{% end %}
{% call card("Settings") %}
{% slot header_actions %}<button>Save</button>{% end %}
<p>Body content.</p>
{% end %}
```
`{% slot %}` is the default slot. Named slot blocks inside `{% call %}` map to
matching placeholders in `{% def %}`.
</details>
<details>
<summary><strong>Filters & Pipelines</strong> — Transform values</summary>
```kida
{# Traditional syntax #}
{{ title | escape | capitalize | truncate(50) }}
{# Pipeline operator #}
{{ title |> escape |> capitalize |> truncate(50) }}
{# Custom filters #}
{{ items | sort(attribute="name") | first }}
```
</details>
<details>
<summary><strong>Streaming Rendering</strong> — Yield chunks as they're ready</summary>
```python
from kida import Environment
env = Environment()
template = env.from_string("""
<ul>
{% for item in items %}
<li>{{ item }}</li>
{% end %}
</ul>
""")
# Generator: yields each statement as a string chunk
for chunk in template.render_stream(items=["a", "b", "c"]):
print(chunk, end="")
# RenderedTemplate: lazy iterable wrapper
from kida import RenderedTemplate
rendered = RenderedTemplate(template, {"items": ["a", "b", "c"]})
for chunk in rendered:
send_to_client(chunk)
```
Works with inheritance (`{% extends %}`), includes, and all control flow. Blocks like `{% capture %}` and `{% spaceless %}` buffer internally and yield the processed result.
</details>
<details>
<summary><strong>Async Templates</strong> — Await in templates</summary>
```kida
{% async for item in fetch_items() %}
{{ item }}
{% end %}
{{ await get_user() }}
```
</details>
<details>
<summary><strong>Fragment Caching</strong> — Cache expensive blocks</summary>
```kida
{% cache "navigation" %}
{% for item in nav_items %}
<a href="{{ item.url }}">{{ item.title }}</a>
{% end %}
{% end %}
```
</details>
---
## Architecture
<details>
<summary><strong>Compilation Pipeline</strong> — AST-native</summary>
```
Template Source → Lexer → Parser → Kida AST → Compiler → Python AST → exec()
```
Kida generates `ast.Module` objects directly. This enables:
- **Structured code manipulation** — Transform and optimize AST nodes
- **Compile-time optimization** — Dead code elimination, constant folding
- **Precise error source mapping** — Exact line/column in template source
</details>
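The AST-native step can be illustrated with the standard library alone: build an expression tree by hand and `compile()` it, which is the same mechanism (minus Kida's lexer, parser, and optimizer) that the pipeline ends with.

```python
import ast

# Build the expression  "Hello, " + name + "!"  directly as an AST --
# a structured object of the same kind Kida's compiler emits (no source strings).
expr = ast.Expression(
    body=ast.BinOp(
        left=ast.BinOp(
            left=ast.Constant("Hello, "),
            op=ast.Add(),
            right=ast.Name(id="name", ctx=ast.Load()),
        ),
        op=ast.Add(),
        right=ast.Constant("!"),
    )
)
ast.fix_missing_locations(expr)       # fill in line/col info that compile() requires
code = compile(expr, "<kida-sketch>", "eval")
print(eval(code, {"name": "World"}))  # Hello, World!
```

Because the tree is structured data rather than a string, a compiler can rewrite nodes (constant folding, dead-code elimination) and attach precise source locations before the final `compile()` call.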
<details>
<summary><strong>Dual-Mode Rendering</strong> — StringBuilder + streaming generator</summary>
```python
# render() — StringBuilder (fastest, default)
_out.append(...)
return "".join(_out)
# render_stream() — Generator (streaming, chunked HTTP)
yield ...
```
The compiler generates both modes from a single template. `render()` uses StringBuilder for maximum throughput. `render_stream()` uses Python generators for statement-level streaming — ideal for chunked HTTP responses and Server-Sent Events.
</details>
<details>
<summary><strong>Thread Safety</strong> — Free-threading ready</summary>
All public APIs are thread-safe by design:
- **Template compilation** — Idempotent (same input → same output)
- **Rendering** — Uses only local state (StringBuilder pattern)
- **Environment** — Copy-on-write for filters/tests/globals
- **LRU caches** — Atomic operations
Module declares itself GIL-independent via `_Py_mod_gil = 0` (PEP 703).
</details>
---
## Performance
- **Simple render** — ~0.12ms
- **Complex template** — ~2.1ms
- **Concurrent (8 threads)** — ~0.15ms avg under Python 3.14t free-threading
---
## Documentation
📚 **[lbliii.github.io/kida](https://lbliii.github.io/kida/)**
| Section | Description |
|---------|-------------|
| [Get Started](https://lbliii.github.io/kida/docs/get-started/) | Installation and quickstart |
| [Syntax](https://lbliii.github.io/kida/docs/syntax/) | Template language reference |
| [Usage](https://lbliii.github.io/kida/docs/usage/) | Loading, rendering, escaping |
| [Extending](https://lbliii.github.io/kida/docs/extending/) | Custom filters, tests, loaders |
| [Reference](https://lbliii.github.io/kida/docs/reference/) | Complete API documentation |
| [Tutorials](https://lbliii.github.io/kida/docs/tutorials/) | Jinja2 migration, Flask integration |
---
## Development
```bash
git clone https://github.com/lbliii/kida.git
cd kida
# Uses Python 3.14t by default (.python-version)
uv sync --group dev --python 3.14t
PYTHON_GIL=0 uv run --python 3.14t pytest
```
---
## The Bengal Ecosystem
A structured reactive stack — every layer written in pure Python for 3.14t free-threading.
| | | | |
|--:|---|---|---|
| **ᓚᘏᗢ** | [Bengal](https://github.com/lbliii/bengal) | Static site generator | [Docs](https://lbliii.github.io/bengal/) |
| **∿∿** | [Purr](https://github.com/lbliii/purr) | Content runtime | — |
| **⌁⌁** | [Chirp](https://github.com/lbliii/chirp) | Web framework | [Docs](https://lbliii.github.io/chirp/) |
| **=^..^=** | [Pounce](https://github.com/lbliii/pounce) | ASGI server | [Docs](https://lbliii.github.io/pounce/) |
| **)彡** | **Kida** | Template engine ← You are here | [Docs](https://lbliii.github.io/kida/) |
| **ฅᨐฅ** | [Patitas](https://github.com/lbliii/patitas) | Markdown parser | [Docs](https://lbliii.github.io/patitas/) |
| **⌾⌾⌾** | [Rosettes](https://github.com/lbliii/rosettes) | Syntax highlighter | [Docs](https://lbliii.github.io/rosettes/) |
Python-native. Free-threading ready. No npm required.
---
## License
MIT License — see [LICENSE](LICENSE) for details.
| text/markdown | null | null | null | null | null | template-engine, jinja2, free-threading, async, templates | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content :: CGI Tools/Libraries",
"Topic :: Text Processing :: Markup :: HTML",
"Topic :: Software Development ::... | [] | null | null | >=3.14 | [] | [] | [] | [
"bengal>=0.2.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/lbliii/kida",
"Documentation, https://github.com/lbliii/kida",
"Repository, https://github.com/lbliii/kida",
"Changelog, https://github.com/lbliii/kida/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:57:01.637414 | kida_templates-0.2.2.tar.gz | 326,955 | ee/a5/394b7e13169ea55951bf171c1c27d016ef0f1d2853e57470be927779ac43/kida_templates-0.2.2.tar.gz | source | sdist | null | false | 027aeeeece6d7615f4c7a5e0e4ae312f | 1ea90485882e0e0c6fbaa007d82d2af1a51cfbfbcc5ed423aa707070141c0c70 | eea5394b7e13169ea55951bf171c1c27d016ef0f1d2853e57470be927779ac43 | MIT | [
"LICENSE"
] | 449 |
2.4 | Topsis-Harsheen-102317037 | 0.1 | A Python package to perform TOPSIS method | # Topsis-Harsheen-102317037
This package implements the TOPSIS method using Python.
## Installation
`pip install Topsis-Harsheen-102317037`
## Usage
`topsis input_file weights impacts output_file`
Example:
`topsis data.xlsx "1,1,1,1,1" "+,+,+,+,+" result.csv`
## Parameters
- The input file must contain three or more columns
- The first column is non-numeric
- All remaining columns must be numeric
- Weights must be comma-separated
- Impacts must be `+` or `-`, comma-separated
## Output
Output file will contain:
- Topsis Score
- Rank
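For reference, the score computation the tool performs can be sketched with NumPy (illustrative only; the installed `topsis` command handles file parsing, validation, and rank assignment for you):

```python
import numpy as np

def topsis_score(matrix, weights, impacts):
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1. Vector-normalize each criterion column, then apply the weights.
    v = m / np.sqrt((m ** 2).sum(axis=0)) * w
    # 2. Ideal best/worst per column: max for '+' impacts, min for '-' (and vice versa).
    pos = np.array(impacts) == "+"
    best = np.where(pos, v.max(axis=0), v.min(axis=0))
    worst = np.where(pos, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distance to each ideal, then the relative-closeness score.
    d_best = np.sqrt(((v - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((v - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)

# Rows are alternatives, columns are criteria ('-' = lower is better).
scores = topsis_score([[250, 16], [200, 16], [300, 32]], [1, 1], ["-", "+"])
print(scores.round(3))  # higher score = better alternative; rank by descending score
```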
| text/markdown | Harsheen Kaur | your_email@example.com | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"pandas",
"numpy",
"openpyxl"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.1 | 2026-02-18T18:56:46.715510 | topsis_harsheen_102317037-0.1.tar.gz | 2,632 | b0/1d/c0d02b4db02052de2b25c9304693037dda478b9cc31befbcce96283dd55c/topsis_harsheen_102317037-0.1.tar.gz | source | sdist | null | false | 24886968c466eca0a507a105552d9d51 | b4b333ecc40a385ac7df1bb35f7348ec0ec9c0a684730536aee7f0ab3cff0dc9 | b01dc0d02b4db02052de2b25c9304693037dda478b9cc31befbcce96283dd55c | null | [
"LICENSE"
] | 0 |
2.4 | youtrack-updater | 1.0.5 | Auto-update JetBrains YouTrack Docker containers | # YouTrack Docker Updater
A CLI tool that checks for new JetBrains YouTrack Docker image versions and automatically updates a running instance managed via Docker Compose.
The tool:
- Reads the current YouTrack version from `docker-compose.yml`
- Compares it with the latest available Docker image tag
- Pre-pulls the new image to minimize downtime
- Updates `docker-compose.yml` and restarts the service
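The version check at the heart of that flow can be sketched as follows (an illustration, not the tool's actual code; the hypothetical helpers below just extract the `jetbrains/youtrack:<tag>` version and compare tags numerically rather than as strings):

```python
import re

COMPOSE = """
services:
  youtrack:
    image: jetbrains/youtrack:2024.3.47707
"""

def current_tag(compose_text: str) -> str:
    # Find the tag after the jetbrains/youtrack image name.
    match = re.search(r"jetbrains/youtrack:(\S+)", compose_text)
    if match is None:
        raise ValueError("no jetbrains/youtrack image found")
    return match.group(1)

def is_newer(candidate: str, current: str) -> bool:
    # Compare dotted tags component-wise as integers (e.g. 2025.1.100 > 2024.3.47707).
    def parts(tag: str) -> tuple[int, ...]:
        return tuple(int(p) for p in tag.split("."))
    return parts(candidate) > parts(current)

tag = current_tag(COMPOSE)
print(tag, is_newer("2025.1.100", tag))  # 2024.3.47707 True
```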
---
## Requirements
- Linux host with Docker installed
- Docker Compose v2 (`docker compose`)
- Python 3.10+
- A `docker-compose.yml` with a `jetbrains/youtrack:<tag>` image
## Installation
### With pipx (recommended)
```bash
pipx install youtrack-updater
```
### With pip
```bash
pip install youtrack-updater
```
## Upgrade
```bash
pipx upgrade youtrack-updater
```
## Usage
```bash
youtrack-updater
```
If a newer YouTrack version is available, you'll be prompted to confirm the update.
### Options
```
--compose-file PATH Path to docker-compose.yml (default: docker-compose.yml)
--version Show version and exit
```
### Examples
```bash
# default — looks for ./docker-compose.yml
youtrack-updater
# custom compose file location
youtrack-updater --compose-file /opt/youtrack/docker-compose.yml
```
## Development
```bash
pip install -e ".[dev]"
pytest -v
```
Run a specific test:
```bash
pytest -v -s -k "test_update_sequence"
``` | text/markdown | null | Anton Samofal <anton.smfl@gmail.com> | null | null | null | automation, devops, docker, jetbrains, youtrack | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: System Administrators",
"Programming Language :: Python :: 3",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"colorama>=0.4.6",
"packaging>=24.0",
"requests>=2.32.0",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/asamofal/youtrack-updater"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:56:38.233415 | youtrack_updater-1.0.5.tar.gz | 10,427 | fd/48/b4097754b4afdda3436d5158cbefe2f3dde54411ca201f9a922223ac68bd/youtrack_updater-1.0.5.tar.gz | source | sdist | null | false | e93899d3e580fdddef25d5d1beac3f35 | a7d3c6f0ffd39bcc276a6808cd52ac82374944f34774bdb4b082dc4d8582b6ba | fd48b4097754b4afdda3436d5158cbefe2f3dde54411ca201f9a922223ac68bd | Apache-2.0 | [
"LICENSE"
] | 236 |
2.4 | checkdmarc | 5.13.4 | A Python module and command line parser for SPF and DMARC records | # checkdmarc
[](https://github.com/domainaware/checkdmarc/actions/workflows/python-tests.yaml)
[](https://pypi.org/project/checkdmarc/)
[](https://pypistats.org/packages/checkdmarc)
A Python module, command line utility, and [web application](https://github.com/domainaware/checkdmarc-web-frontend) for validating SPF and DMARC DNS records.
## Features
- API, CLI, and web interfaces
- Can test multiple domains at once
- CLI output in JSON or CSV format
- DNSSEC validation
- SPF
- Record validation
- Counting of DNS lookups and void lookups
- Counting of lookups per mechanism
- DMARC
- Validation and parsing of DMARC records
- Shows warnings when the DMARC record is made ineffective by `pct` or `sp` values
- Checks for authorization records on reporting email addresses
- BIMI
- Validation of the mark format and certificate
- Parsing of the mark certificate
- MX records
- Preference
- IPv4 and IPv6 addresses
- Checks for STARTTLS
- Use of DNSSEC/TLSA/DANE to pin certificates
- MTA-STS
- SMTP TLS reporting
- Record and policy parsing and validation
- SOA record parsing
- Nameserver listing
## Docker support
1. Build the image: `docker build . -t checkdmarc`
2. Use the image with a command like `docker run --rm checkdmarc google.nl`
| text/markdown | null | Sean Whalen <whalenster@gmail.com> | null | null | null | BIMI, DMARC, DNS, MTA-STS, SPF | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Communications :: Email",
"Topic :: Security"
] | [] | null | null | null | [] | [] | [] | [
"cryptography<47.0,>=45.0",
"dnspython>=2.0.0",
"expiringdict>=1.1.4",
"importlib-resources>=6.0",
"pem>=23.1.0",
"publicsuffixlist>=0.10.0",
"pyleri>=1.3.2",
"pyopenssl>=24.2.1",
"requests>=2.25.0",
"timeout-decorator>=0.4.1",
"xmltodict>=0.14.2",
"hatch>=1.14.0; extra == \"build\"",
"myst-... | [] | [] | [] | [
"Homepage, https://github.com/domainaware/checkdmarc",
"Documentation, https://domainaware.github.io/checkdmarc/",
"Issues, https://github.com/domainaware/checkdmarc/issues",
"Changelog, https://github.com/domainaware/checkdmarc/blob/master/CHANGELOG.md"
] | Hatch/1.16.3 cpython/3.12.3 HTTPX/0.28.1 | 2026-02-18T18:56:34.208327 | checkdmarc-5.13.4.tar.gz | 62,013 | 03/2a/b36e197c26c23c9c6668e5cc5f7366dd1ee48f1cbe66080c25e2b79b50e2/checkdmarc-5.13.4.tar.gz | source | sdist | null | false | 5b71cc07cabc1e8eb20aafcc367ec348 | bd64268470b0d33453cee684cae88e46efb42c0df3a5a61d5a3de6a88080a698 | 032ab36e197c26c23c9c6668e5cc5f7366dd1ee48f1cbe66080c25e2b79b50e2 | Apache-2.0 | [
"LICENSE"
] | 1,511 |
2.4 | aiel-cli | 1.3.4 | AI Execution Layer CLI | # aiel — AI Execution Layer CLI

`aiel` is the reference CLI for the AI Execution Layer: a workspace-centric automation platform that keeps AI workflows versioned, reviewable, and operator friendly. The CLI mirrors the day-to-day lifecycle for teams shipping AI automations—from authenticating against the platform, to selecting the right workspace/project, synchronizing code, and inspecting remote manifests.
Every command is transparent about its roadmap sprint, so teams can adopt the CLI confidently while newer subcommands are still rolling out.
---
## Features
- **Roadmap-aware UX**: discover commands and their target sprint directly from the CLI (`aiel roadmap` and `aiel commands`).
- **Profile-safe authentication**: `aiel auth ...` manages multiple environments, keyring-backed tokens (with file fallback), and human-friendly panels.
- **Workspace configuration**: `aiel config ...` drives interactive workspace/project selection powered by `/v1/auth/me`.
- **Git-inspired sync**: `aiel repo ...` offers familiar `init`, `status`, `add`, `commit`, `pull`, and `push` flows backed by the data plane.
- **Insightful observability**: `aiel info ...` and `aiel files ...` surface active context, manifests, and repo metadata for debugging.
- **Full test coverage**: the test suite exercises every route with fixtures and mocks, ensuring confidence in day-to-day automation.
---
## Installation
```bash
pip install aiel-cli
```
The CLI exposes the `aiel` entrypoint via Typer. Python 3.10+ is required.
---
## Quick Start
```bash
# 1. Discover the active roadmap
aiel roadmap
# 2. Authenticate (token prompt is hidden)
aiel auth login
# 3. Initialize repo metadata
aiel repo init
# 4. Select a workspace and project
aiel config set workspace onboarding_team2
aiel config set project linkedin_onboarding
# 5. Inspect context and manifests
aiel info workspace
aiel files ls
# 6. Sync a repo
aiel repo pull
aiel repo status
```
---
## Command Guide
| Group | Highlights |
|---------|------------|
| `aiel roadmap` / `aiel commands` | Render sprint tables and hints for every command (including placeholders such as `aiel logs` and `aiel doctor`). |
| `aiel auth` | `login`, `status`, `list`, `logout`, and `revoke` (roadmap) with Rich panels to document base URLs, scopes, and storage locations. |
| `aiel config` | `list`, `set workspace`, `set project`, `set show` provide a safe workflow for selecting workspace/project context per profile. |
| `aiel repo` | Implements a Git-inspired flow (`status`, `init`, `add`, `commit`, `pull`, `push`) over `.aiel/` metadata and the data plane. |
| `aiel info` | Read-only introspection commands for workspace/projects. |
| `aiel files` | Read-only manifest view + repo metadata for the most recent pull. |
Each command prints detailed help (`--help`) and panels summarizing the action taken so operators have immediate context.
---
## Authentication & Configuration
1. `aiel auth login`
- Validates tokens via `/v1/auth/me` and stores them using keyring when available (file fallback otherwise).
- Panels indicate where the token was stored and which user/tenant is active.
2. `aiel auth status / list / logout`
- Status exits non-zero when the token is missing or invalid—ideal for CI smoke tests.
- `list` enumerates profiles, marking the active one and where the token is stored.
- `logout` removes local credentials per profile.
3. `aiel config set workspace`
- Fetches workspaces/projects via `_get_me` and guides the operator through selecting defaults.
- Auto-selects projects when only one is available; otherwise displays an interactive prompt (Questionary).
4. `aiel config set project`
- Ensures a workspace is selected, then prompts or validates the project slug.
- `aiel config list` reflects the active profile + workspace and project slug/name.
---
## Credentials & Local State
- Auth profiles live in `~/.config/aiel/credentials.json` with restrictive file permissions on Unix.
- Tokens can also be injected via `AIEL_TOKEN` for CI or non-interactive usage.
- Repo state lives under `.aiel/` in the working directory (`state.json`, `index.json`, `commits/`).
- Exclude local files and folders from repo commands with `.aielignore` (same idea as `.gitignore`).
---
## Repo Workflow
All state lives under `.aiel/` (state.json, index.json, commits/). The flow mirrors Git:
1. `aiel repo init`
   - Creates `.aiel/state.json` and `.aiel/index.json` with normalized metadata from the active profile.
2. `aiel repo status`
   - Compares working-tree hashes against the last manifest and prints staged vs. unstaged sections.
3. `aiel repo add <path|.>`
   - Computes sha256 hashes and content types, and stages upserts or deletes into `index.json`.
4. `aiel repo commit -m "message"`
   - Writes a local commit document under `.aiel/commits/` and marks it as pending.
5. `aiel repo pull`
   - Uses signed download URLs from the data plane to refresh the working tree and update manifest metadata.
6. `aiel repo push`
   - Signs uploads, streams bytes to the storage layer, commits uploads, deletes when needed, refreshes manifest metadata, and clears the index.
All network interactions are stubbed in the test suite, ensuring deterministic coverage without real API calls.
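The hashing behind the `add` step can be sketched like this (a standalone illustration with hypothetical names, not aiel's internals; the real index also records content types and staged deletions):

```python
import hashlib
import json
import tempfile
from pathlib import Path

def stage(paths):
    """Hash each file's contents into a Git-like index mapping."""
    index = {}
    for p in paths:
        data = Path(p).read_bytes()
        index[str(p)] = {
            "sha256": hashlib.sha256(data).hexdigest(),
            "size": len(data),
        }
    return index

with tempfile.TemporaryDirectory() as tmp:
    f = Path(tmp) / "hello.txt"
    f.write_text("hello\n")
    index = stage([f])
    print(json.dumps(index, indent=2))
```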
---
## Workspace Introspection
- `aiel info workspace` prints the resolved user, tenant, workspace, and project so there’s no ambiguity about the current context.
- `aiel info workspaces/projects` walks the payload returned by `_get_me` and prints structured tables for visibility.
- `aiel files ls` renders the last manifest as a tree and summarizes repo metadata (last pull, pending commit, version).
These commands are intentionally read-only and safe to run in CI/CD or during incident response.
---
## Testing
The repository ships with an end-to-end test suite covering every CLI route (unit + integration). Tests rely on a dedicated fixture configuration at `tests/data/test_profile.json`, which includes:
- `X-API-Token`: `<X-API-Token>`
- Workspace slug: `onboarding_team2`
- Project slug: `linkedin_onboarding`
Run the suite with coverage enforcement:
```bash
pip install -e .[dev]
pytest --cov=aiel --cov-report=term-missing --cov-fail-under=100
```
The tests patch network calls, use Typer’s `CliRunner`, and stage temporary `.aiel/` directories to exercise the entire workflow without hitting real services.
---
## Contributing
1. Fork and clone the repository.
2. Create a fresh virtual environment with Python 3.10+.
3. Install dependencies (`pip install -e .[dev]`).
4. Run `pytest --cov=aiel --cov-report=term-missing --cov-fail-under=100` before opening a PR.
5. Update the roadmap comments or README whenever you expose a new command.
Open issues or discussions if you need new command groups, additional transports, or would like to upstream scripts into `tools/`.
| text/markdown | null | Aldenir Flauzino <aldenirsrv@gmail.com> | null | null | ```text
MIT License
Copyright (c) 2025 <Aldenir Flauzino>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
``` | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"questionary>=2.0.0",
"rich>=13.7.1",
"typer>=0.12.3",
"pytest-cov>=4.1; extra == \"dev\"",
"pytest>=7.4; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:56:22.067606 | aiel_cli-1.3.4.tar.gz | 45,857 | 14/f6/fde94ac71bcf3f4d68d8f0504041981d1a2a9035203941bfb0691c6d0ce6/aiel_cli-1.3.4.tar.gz | source | sdist | null | false | 1d971670d0a253911f86a32406f0abba | 14c4a7c6ca87046d468c9db5e81056a4bf02d0940e85223167de4a9a11502dac | 14f6fde94ac71bcf3f4d68d8f0504041981d1a2a9035203941bfb0691c6d0ce6 | null | [
"LICENSE"
] | 283 |
2.4 | is-it-slop | 0.5.0 | Detect AI-generated slop text using machine learning. | # is-it-slop
Fast and accurate AI text detection using machine learning.
Python bindings for the is-it-slop AI text detection library, powered by Rust and ONNX Runtime.
> **This is the main inference library.** For most users, this is all you need. If you want to train custom models or access the preprocessing pipeline directly, see [`is-it-slop-preprocessing`](https://pypi.org/project/is-it-slop-preprocessing/).
## Features
- **Fast inference**: Rust-backed ONNX runtime with optimized preprocessing
- **Pre-trained model**: Embedded at compile time, no downloads required
- **Simple API**: Single function call for predictions
- **Batch processing**: Efficient multi-text inference
- **Text chunking**: Handles variable-length documents (50-5000+ tokens)
- **Cross-platform**: Linux, macOS (Apple Silicon), Windows
## Installation
```bash
uv add is-it-slop
```
or using pip:
```bash
pip install is-it-slop
```
## Quick Start
```python
from is_it_slop import is_this_slop
# Predict on single text
result = is_this_slop("Your text here")
print(result.classification) # "Human" or "AI"
print(f"AI probability: {result.ai_probability:.2%}")
# Use custom threshold
result = is_this_slop("Your text here", threshold=0.7)
print(result.classification)
# Batch processing
from is_it_slop import is_this_slop_batch
texts = ["First text", "Second text", "Third text"]
results = is_this_slop_batch(texts)
for text, result in zip(texts, results):
print(f"{text}: {result.classification} ({result.ai_probability:.1%})")
```
## API Reference
### Functions
**`is_this_slop(text, threshold=None)`**
Predict whether a single text is AI-generated or human-written.
**Parameters:**
- `text` (str): Input text string
- `threshold` (float, optional): Classification threshold (0.0-1.0). If not provided, uses the default optimized threshold.
**Returns:** `PredictionResult` object
**Example:**
```python
result = is_this_slop("This text was written by a human.")
print(result.classification) # "Human" or "AI"
print(result.ai_probability) # 0.0 to 1.0
print(result.human_probability) # 0.0 to 1.0
```
---
**`is_this_slop_batch(texts, threshold=None)`**
Predict whether multiple texts are AI-generated or human-written.
**Parameters:**
- `texts` (list[str]): List of text strings
- `threshold` (float, optional): Classification threshold (0.0-1.0)
**Returns:** List of `PredictionResult` objects
**Example:**
```python
results = is_this_slop_batch([
"First document to check",
"Second document to check",
"Third document to check"
])
for i, result in enumerate(results):
print(f"Text {i+1}: {result.classification}")
```
### PredictionResult Object
Result object with classification and probabilities:
**Attributes:**
- `classification` (str): Either `"Human"` or `"AI"`
- `human_probability` (float): Probability of human-written text (0.0-1.0)
- `ai_probability` (float): Probability of AI-generated text (0.0-1.0)
**Note:** `human_probability + ai_probability == 1.0`
**String representation:**
```python
>>> result = is_this_slop("Some text")
>>> print(result)
Human (AI: 12.3%)
>>> repr(result)
'PredictionResult(human=0.877, ai=0.123, class=Human)'
```
### Constants
**`CLASSIFICATION_THRESHOLD`**
The default threshold value used for classification. This threshold is optimized for overall F1 score based on validation data.
```python
from is_it_slop import CLASSIFICATION_THRESHOLD
print(f"Default threshold: {CLASSIFICATION_THRESHOLD}")
```
**`MODEL_VERSION`**
The version of the embedded model.
```python
from is_it_slop import MODEL_VERSION
print(f"Model version: {MODEL_VERSION}")
```
## How It Works
1. **Text Cleaning**: Normalizes HTML entities, encoding artifacts, and whitespace
2. **Tokenization**: Uses tiktoken (o200k_base) BPE encoding
3. **Chunking**: Splits long texts into 150-token overlapping chunks
4. **Vectorization**: TF-IDF with 2-4 token n-grams
5. **Inference**: ONNX Runtime with LogisticRegression model
6. **Aggregation**: Combines chunk predictions using weighted mean
This pipeline ensures consistent preprocessing between training and inference.
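The aggregation step (6) can be pictured as a length-weighted mean over per-chunk probabilities — a toy sketch, not the library's actual weighting scheme:

```python
def aggregate(chunk_probs, chunk_lengths):
    """Combine per-chunk AI probabilities with a length-weighted mean.

    Toy illustration of the aggregation step; the real library's
    weights are internal to the Rust implementation.
    """
    total = sum(chunk_lengths)
    return sum(p * n for p, n in zip(chunk_probs, chunk_lengths)) / total

# Two full 150-token chunks plus a short 40-token tail chunk:
probs = [0.9, 0.2, 0.8]
lengths = [150, 150, 40]
print(round(aggregate(probs, lengths), 3))  # 0.579
```

The short tail chunk contributes proportionally less to the document-level score than the two full chunks.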
## Performance Characteristics
- **Short texts** (< 150 tokens): Single chunk, instant inference
- **Medium texts** (150-1000 tokens): ~2-7 chunks, efficient batch processing
- **Long texts** (1000+ tokens): Automatically chunked and aggregated
The Rust implementation provides significant speedup over pure Python:
- 5-10x faster preprocessing (tokenization, vectorization)
- Parallel batch processing
- Zero-copy operations where possible
## Command-Line Interface
For CLI usage, install the Rust binary:
```bash
cargo install is-it-slop --features cli
```
```bash
# Basic usage
is-it-slop "Your text here"
# JSON output
is-it-slop "Your text here" --format json
# Classification only (0 or 1)
is-it-slop "Your text here" --format class
```
## Platform Support
Pre-built wheels available for:
- **Linux**: x86_64, aarch64 (manylinux_2_28)
- **macOS**: Apple Silicon (ARM64)
- **Windows**: x86_64
## License
MIT
## Links
- [PyPI Package](https://pypi.org/project/is-it-slop/)
- [GitHub Repository](https://github.com/SamBroomy/is-it-slop)
- [Preprocessing Library](https://pypi.org/project/is-it-slop-preprocessing/)
| text/markdown | null | SamBroomy <36888606+SamBroomy@users.noreply.github.com> | null | null | MIT | AI-text-detector, ML, TF-IDF, Tokenization, ai-detection, machine-learning, onnx, pyo3, rust, text-classification | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating Syst... | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/SamBroomy/is-it-slop/blob/main/README.md",
"Homepage, https://github.com/SamBroomy/is-it-slop/blob/main/python/is-it-slop/README.md",
"Issues, https://github.com/SamBroomy/is-it-slop/issues",
"Repository, https://github.com/SamBroomy/is-it-slop"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:55:48.386596 | is_it_slop-0.5.0-cp313-cp313-manylinux_2_28_aarch64.whl | 41,648,966 | d3/ac/6f9c25be61ffa50562409a2778fb1696e1205088899dbc29f2d6eb54df09/is_it_slop-0.5.0-cp313-cp313-manylinux_2_28_aarch64.whl | cp313 | bdist_wheel | null | false | d0d5940e7508a1ecd4c7a322d5cba770 | bac2c29cb40f2426fe9895e0adbe611d2f9e20258e44fd3ef5e0eae31492c187 | d3ac6f9c25be61ffa50562409a2778fb1696e1205088899dbc29f2d6eb54df09 | null | [] | 1,465 |
2.4 | is-it-slop-preprocessing | 0.5.0 | Fast TF-IDF vectorization. Preprocessing step for `is-it-slop` package, written in Rust. | # is-it-slop-preprocessing
Fast TF-IDF text vectorization for training AI text detection models.
Implementation in Rust with Python bindings.
> **Note for inference users:** If you only want to use the AI text detection model for predictions, install [`is-it-slop`](https://pypi.org/project/is-it-slop/) instead. This preprocessing library is primarily for the training step or accessing the preprocessing pipeline directly.
The Python bindings allow us to use the same Rust-based text preprocessing at training and inference time, ensuring consistency between model training and deployment.
## Features
- **Token n-grams**: Uses tiktoken BPE token sequences (not characters/words)
- **sklearn-compatible API**: Drop-in replacement for training pipelines
- **Parallel processing**: Automatic multi-threading via Rust/rayon
- **Multiple serialization formats**: rkyv (default), bincode, and JSON support
## Installation
```bash
pip install is-it-slop-preprocessing
```
## Quick Start
```python
from is_it_slop_preprocessing import TfidfVectorizer, VectorizerParams
# Configure vectorizer (n-gram range is fixed at 2-4 tokens)
params = VectorizerParams(
min_df=10, # Ignore terms in < 10 docs
max_df=0.8, # Ignore terms in > 80% of docs
sublinear_tf=True # Apply log scaling to term frequencies
)
# Fit and transform training data
vectorizer, X_train = TfidfVectorizer.fit_transform(train_texts, params)
# Transform test data
X_test = vectorizer.transform(test_texts)
# Save vectorizer for inference
vectorizer.save("tfidf_vectorizer.rkyv")
```
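To make the `sublinear_tf` parameter concrete, here is a toy pure-Python version of the term-frequency weighting it controls (the real vectorizer operates on tiktoken token n-grams in Rust, not on word lists):

```python
import math
from collections import Counter

def tf_weights(tokens, sublinear_tf=True):
    """Raw vs. sublinear term frequency for one document (toy illustration)."""
    counts = Counter(tokens)
    if sublinear_tf:
        # Log scaling dampens the effect of highly repeated terms.
        return {t: 1.0 + math.log(c) for t, c in counts.items()}
    return dict(counts)

doc = ["the", "cat", "the", "the", "mat"]
print(tf_weights(doc, sublinear_tf=False))  # {'the': 3, 'cat': 1, 'mat': 1}
print(tf_weights(doc))                      # 'the' -> 1 + ln(3), about 2.10
```

With `sublinear_tf=True`, a term appearing three times scores ~2.10 rather than 3, so repetition-heavy documents don't dominate the feature space.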
## Platform Support
Pre-built wheels available for:
- **Linux**: x86_64, aarch64 (manylinux_2_28)
- **macOS**: Apple Silicon (ARM64)
- **Windows**: x86_64
## License
MIT
| text/markdown | null | SamBroomy <36888606+SamBroomy@users.noreply.github.com> | null | null | MIT | AI-text-detector, ML, TF-IDF, Tokenization, ai-detection, machine-learning, onnx, pyo3, rust, text-classification | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating Syst... | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.0",
"scipy>=1.14"
] | [] | [] | [] | [
"Documentation, https://github.com/SamBroomy/is-it-slop/blob/main/README.md",
"Homepage, https://github.com/SamBroomy/is-it-slop/blob/main/python/is-it-slop-preprocessing/README.md",
"Issues, https://github.com/SamBroomy/is-it-slop/issues",
"Repository, https://github.com/SamBroomy/is-it-slop"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:55:34.505820 | is_it_slop_preprocessing-0.5.0-cp312-cp312-macosx_11_0_arm64.whl | 18,509,218 | 6f/d8/e477b7d44cd28b3ed8ee4b9181baf3a0c2a683eea88b5bcf008c3091bf3d/is_it_slop_preprocessing-0.5.0-cp312-cp312-macosx_11_0_arm64.whl | cp312 | bdist_wheel | null | false | 1abae9a3921f60fd6363a5915f815b20 | 15bc6b8aa514030ed590049968056ad457f58abbed1d228bd9f6574ae1224912 | 6fd8e477b7d44cd28b3ed8ee4b9181baf3a0c2a683eea88b5bcf008c3091bf3d | null | [] | 1,439 |
2.4 | gsql2rsql | 0.9.6 | Transpile Graph Query Language (openCypher) to Recursive SQL (Databricks) | # gsql2rsql
[](https://badge.fury.io/py/gsql2rsql)
[](https://github.com/devmessias/gsql2rsql/actions/workflows/ci.yml)
[](https://devmessias.github.io/gsql2rsql)
[](https://opensource.org/licenses/MIT)
**Query your Delta Tables as a Graph**
No need for a separate graph database. Write intuitive OpenCypher queries, get Databricks SQL automatically.
> **Why Databricks?**
>
> Databricks provides tables designed for massive scale, enabling efficient storage and querying of tens of billions of triples, with features like time travel. No ETL or migration needed—just query your data lake as a graph. Databricks recently released support for recursive queries, unlocking the use of SQL warehouses for graph-style queries.
>
---
## Why gsql2rsql?
| Challenge | Solution |
|-----------|----------|
| Graph queries require complex SQL with `WITH RECURSIVE` | Write 5 lines of Cypher instead |
| Need to maintain a separate graph database | Query Delta Lake directly |
| LLM-generated complex SQL is hard to audit | Human-readable Cypher + deterministic transpilation (optionally pass to LLM for final optimization) |
| Scaling to tens of billions of triples is costly in graph DBs | Delta Lake stores billions of triples efficiently, with Spark scalability |
## See It in Action
```bash
pip install gsql2rsql
```
```python
from gsql2rsql import GraphContext
# Point to your existing Delta tables - no migration needed
graph = GraphContext(
nodes_table="catalog.fraud.nodes",
edges_table="catalog.fraud.edges",
)
# Write graph queries with familiar Cypher syntax
sql = graph.transpile("""
MATCH path = (origin:Person {id: 12345})-[:TRANSACTION*1..4]->(dest:Person)
WHERE dest.risk_score > 0.8
RETURN dest.id, dest.name, dest.risk_score, length(path) AS depth
ORDER BY depth, dest.risk_score DESC
LIMIT 3
""")
print(sql)
```
**5 lines of Cypher → optimized Databricks SQL with recursive CTEs**
<details>
<summary>Click to see the generated SQL (auto-generated from transpiler)</summary>
```sql
WITH RECURSIVE
paths_1 AS (
-- Base case: direct edges (depth = 1)
SELECT
e.src AS start_node,
e.dst AS end_node,
1 AS depth,
ARRAY(e.src, e.dst) AS path,
ARRAY(NAMED_STRUCT('src', e.src, 'dst', e.dst, 'amount', e.amount, 'timestamp', e.timestamp)) AS path_edges,
ARRAY(e.src) AS visited
FROM catalog.fraud.edges e
JOIN catalog.fraud.nodes src ON src.id = e.src
WHERE (relationship_type = 'TRANSACTION') AND (src.id) = (12345)
UNION ALL
-- Recursive case: extend paths
SELECT
p.start_node,
e.dst AS end_node,
p.depth + 1 AS depth,
CONCAT(p.path, ARRAY(e.dst)) AS path,
ARRAY_APPEND(p.path_edges, NAMED_STRUCT('src', e.src, 'dst', e.dst, 'amount', e.amount, 'timestamp', e.timestamp)) AS path_edges,
CONCAT(p.visited, ARRAY(e.src)) AS visited
FROM paths_1 p
JOIN catalog.fraud.edges e
ON p.end_node = e.src
WHERE p.depth < 4
AND NOT ARRAY_CONTAINS(p.visited, e.dst)
AND (relationship_type = 'TRANSACTION')
)
SELECT
_gsql2rsql_dest_id AS id
,_gsql2rsql_dest_name AS name
,_gsql2rsql_dest_risk_score AS risk_score
,(SIZE(_gsql2rsql_path_id) - 1) AS depth
FROM (
SELECT
sink.id AS _gsql2rsql_dest_id
,sink.name AS _gsql2rsql_dest_name
,sink.risk_score AS _gsql2rsql_dest_risk_score
,source.id AS _gsql2rsql_origin_id
,source.name AS _gsql2rsql_origin_name
,source.risk_score AS _gsql2rsql_origin_risk_score
,p.start_node
,p.end_node
,p.depth
,p.path AS _gsql2rsql_path_id
,p.path_edges AS _gsql2rsql_path_edges
FROM paths_1 p
JOIN catalog.fraud.nodes sink
ON sink.id = p.end_node
JOIN catalog.fraud.nodes source
ON source.id = p.start_node
WHERE p.depth >= 1 AND p.depth <= 4 AND (sink.risk_score) > (0.8)
) AS _proj
ORDER BY depth ASC, _gsql2rsql_dest_risk_score DESC
LIMIT 3
```
</details>
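The recursive CTE above is the SQL counterpart of a breadth-first path expansion with a visited check. A rough Python equivalent of what the `visited` array buys you (a sketch over an in-memory edge list, not part of gsql2rsql's API):

```python
def expand_paths(edges, start, max_depth):
    """Enumerate simple paths (no revisited nodes) up to max_depth hops.

    Mirrors the recursive CTE: the visited set is the cycle guard,
    matching the NOT ARRAY_CONTAINS(p.visited, e.dst) predicate.
    """
    frontier = [([start], {start})]
    for _ in range(max_depth):
        next_frontier = []
        for path, visited in frontier:
            for src, dst in edges:
                if src == path[-1] and dst not in visited:
                    next_frontier.append((path + [dst], visited | {dst}))
        frontier = next_frontier
        yield from (p for p, _ in frontier)

# A 3-cycle (1 -> 2 -> 3 -> 1) plus a branch 2 -> 4:
edges = [(1, 2), (2, 3), (3, 1), (2, 4)]
for path in expand_paths(edges, start=1, max_depth=3):
    print(path)
```

The cycle edge `3 -> 1` is pruned by the visited check, so the expansion terminates even though the graph contains a loop — exactly what the generated SQL guarantees.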
---
> **Early Stage Project — Not for OLTP or end-user queries**
>
> This project is in **early development**. APIs may change, features may be incomplete, and bugs are expected. Contributions and feedback are welcome!
>
> This transpiler is for **internal analytics and exploration** (data science, engineering, analysis). It obviously makes no sense for OLTP! If you plan to expose transpiled queries to end users, be careful: implement validation, rate limiting, and security. Use common sense.
>
>
## Real-World Examples
### Fraud Detection
```cypher
-- Find fraud rings: accounts connected through suspicious transactions
MATCH path = (a:Account)-[:TRANSFER*2..4]->(b:Account)
WHERE a.flagged = true AND b.flagged = true
RETURN DISTINCT a.id, b.id, length(path) AS hops
```
[See more fraud detection queries →](examples/fraud.md)
### Credit Analysis
```cypher
-- Analyze credit exposure through guarantor chains
MATCH path = (borrower:Customer)-[:GUARANTEES*1..3]->(guarantor:Customer)
WHERE borrower.credit_score < 600
RETURN borrower.id, COLLECT(guarantor.id) AS chain
```
[See more credit analysis queries →](examples/credit.md)
### Social Network
```cypher
-- Friends of friends who work at tech companies
MATCH (me:Person {id: 123})-[:KNOWS*1..2]->(friend)-[:WORKS_AT]->(c:Company)
WHERE c.industry = 'Technology'
RETURN DISTINCT friend.name, c.name
```
[See all feature examples →](examples/features.md)
---
**That's it!** No schema boilerplate, no complex setup.
[Full User Guide →](user-guide.md)
---
## Low-Level API (Without GraphContext)
For advanced use cases or non-Triple-Store schemas, use the components directly:
```python
from gsql2rsql import OpenCypherParser, LogicalPlan, SQLRenderer
from gsql2rsql.common.schema import NodeSchema, EdgeSchema, EntityProperty
from gsql2rsql.renderer.schema_provider import SimpleSQLSchemaProvider, SQLTableDescriptor
# 1. Define schema (SimpleSQLSchemaProvider)
schema = SimpleSQLSchemaProvider()
person = NodeSchema(
name="Person",
node_id_property=EntityProperty("id", int),
properties=[EntityProperty("name", str)],
)
schema.add_node(
person,
SQLTableDescriptor(table_name="people", node_id_columns=["id"]),
)
knows = EdgeSchema(
name="KNOWS",
source_node_id="Person",
sink_node_id="Person",
)
schema.add_edge(
knows,
SQLTableDescriptor(table_name="friendships"),
)
# 2. Transpile
parser = OpenCypherParser()
ast = parser.parse("MATCH (p:Person)-[:KNOWS]->(f:Person) RETURN p.name, f.name")
plan = LogicalPlan.process_query_tree(ast, schema)
plan.resolve(original_query="...")
renderer = SQLRenderer(db_schema_provider=schema)
sql = renderer.render_plan(plan)
```
---
## Key Features
| Feature | Description |
|---------|-------------|
| **Variable-length paths** | `[:REL*1..5]` via `WITH RECURSIVE` |
| **Cycle detection** | Automatic `ARRAY_CONTAINS` checks |
| **Path functions** | `length(path)`, `nodes(path)`, `relationships(path)` |
| **No-label nodes** | `(a)-[:REL]->(b:Label)` matches any node type for `a` |
| **Inline filters** | `(n:Person {id: 123})` pushes predicates to source |
| **Undirected edges** | `(a)-[:KNOWS]-(b)` via optimized UNION ALL |
| **Aggregations** | COUNT, SUM, AVG, COLLECT, etc. |
| **Type safety** | Schema validation before SQL generation |
---
## Architecture
gsql2rsql uses a **4-phase pipeline** for correctness:
```
OpenCypher → Parser → Planner → Resolver → Renderer → SQL
```
1. **Parser**: Cypher → AST (syntax only, no schema)
2. **Planner**: AST → Logical operators (semantics)
3. **Resolver**: Validate columns & types against schema
4. **Renderer**: Operators → Databricks SQL
This separation ensures each phase has clear responsibilities and can be tested independently.
---
## Documentation
| Section | Description |
|---------|-------------|
| [**User Guide**](user-guide.md) | Getting started, GraphContext, schema setup |
| [**Examples**](examples/index.md) | 69 complete queries with generated SQL |
---
## Project Status
> **Research Project**
>
**Contributions welcome!**
- [GitHub Repository](https://github.com/devmessias/gsql2rsql)
- [Issue Tracker](https://github.com/devmessias/gsql2rsql/issues)
- [Contributing Guide](contributing.md)
---
## License
MIT License - see [LICENSE](https://github.com/devmessias/gsql2rsql/blob/main/LICENSE)
---
| text/markdown | null | Bruno Messias <devmessias@gmail.com> | null | null | MIT | cypher, databricks, graph, opencypher, query, recursive, sql, transpiler | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Database",
"T... | [] | null | null | >=3.11 | [] | [] | [] | [
"antlr4-python3-runtime>=4.13.0",
"mypy>=1.8.0; extra == \"dev\"",
"pyright>=1.1.0; extra == \"dev\"",
"pyspark>=3.5.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-timeout>=2.1.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:55:06.823950 | gsql2rsql-0.9.6.tar.gz | 2,732,300 | 10/59/9c1a736361067011b80e150519883f4808a1467984f0cd651bbef71a05ad/gsql2rsql-0.9.6.tar.gz | source | sdist | null | false | b1c99daa6a9dd88ddc1fe41bbc2bba4a | 71a1c8d9c238eea212db3a1cb73e4bf6c9cd6c7ad2966284ce868ac7574aee37 | 10599c1a736361067011b80e150519883f4808a1467984f0cd651bbef71a05ad | null | [
"LICENSE.md"
] | 243 |
2.4 | ara-api | 1.2.0rc2 | Applied Robotics Avia API for ARA MINI, ARA EDU and ARA FPV drones |
# ARA API (DEPRECATED)
// TODO: update the documentation
---
  
**Applied Robotics Avia API** is a modern API for controlling the **Applied Robotics Avia** line of drones and fixed-wing aircraft, and for working with the **AgroTechSim** simulator.
---
## 📖 Description
This project offers a unified interface for flight control, analysis, and automation. **ara-api** combines ease of use, high performance, and support for many programming languages.
### Key features:
1. Internal workings are shielded from the end user.
2. Integrated documentation shipped with the API.
3. High performance thanks to HTTP/2 and gRPC.
4. Easy to set up and launch.
5. Analyzer support for completing lab assignments.
6. Safety safeguards for autonomous flight.
---
## 🌟 Capabilities
- **Multi-language support**: autonomous control is available in the following languages: C#/.NET, C++, Dart, Go, Java, Kotlin, Node.js, Objective-C, PHP, Python, Ruby.
- **gRPC protocol**: used both inside the application and for external communication.
- **Flight-controller data**: reads odometry, attitude, IMU, optical flow, and rangefinder data.
- **Built-in analyzer**: log recording and analysis, with two analyzer launch modes - online and offline.
- **Camera library**: image analysis with ready-to-use detections (QR codes, ArUco markers, blobs)
---
## 🚀 Installation
```
foo@bar: pip3 install ara-api
```
## 📚 Usage
### Terminal
1) Start the API core:
```bash
foo@bar:~$ ara-api-core
```
2) Start the analyzer:
```bash
foo@bar:~$ ara-api-analyzer
```
3) Start the camera package:
```bash
foo@bar:~$ ara-api-vision
```
***⚠️ Important! Before you start, make sure your drone has been updated to the latest firmware version.***
---
## Viewing command documentation:
1) API core:
```bash
foo@bar:~$ ara-api-core --help
```
2) Analyzer:
```bash
foo@bar:~$ ara-api-analyzer --help
```
3) Camera package:
```bash
foo@bar:~$ ara-api-vision --help
```
---
## 🧩 Functions of the ```ara_core``` library
---
### `takeoff(altitude)`
Calls the takeoff service.
- **Parameters:**
- `altitude` (float): Altitude to climb to.
- **Returns:**
- `str`: Status of the takeoff operation.
---
### `land()`
Calls the landing service.
- **Returns:**
- `str`: Status of the landing operation.
---
### `move_by_point(x, y)`
Calls the movement service.
- **Parameters:**
- `x` (float): X coordinate of the target point.
- `y` (float): Y coordinate of the target point.
- **Returns:**
- `str`: Status of the movement operation.
---
### `change_altitude(altitude)`
Calls the altitude-change service.
- **Parameters:**
- `altitude` (float): Target altitude in meters.
- **Returns:**
- `str`: Status of the movement operation.
---
### `set_velocity(vx, vy)`
Calls the velocity-setting service.
- **Parameters:**
- `vx` (float): Drone velocity along the X axis.
- `vy` (float): Drone velocity along the Y axis.
- **Returns:**
- `str`: Status of the movement operation.
---
### `reset_velocity_state()`
Calls the movement service to reset the velocity state.
- **Returns:**
- `str`: Status of the movement operation.
---
### `get_imu_data()`
Retrieves IMU (inertial measurement unit) data from the driver service.
- **Returns:**
- `dict`: Dictionary with gyroscope and accelerometer data.
---
### `get_attitude_data()`
Retrieves attitude data from the driver service.
- **Returns:**
- `dict`: Dictionary with orientation angles.
---
### `get_odometry_data()`
Retrieves odometry data from the driver service.
- **Returns:**
- `dict`: Dictionary with position and velocity data.
---
### `get_optical_flow_data()`
Retrieves optical flow data from the driver service.
- **Returns:**
- `dict`: Dictionary with optical flow data.
| text/markdown; charset=UTF-8; variant=GFM | Alexander Kleimenov | null | null | Alexander Kleimenov <nequamy@gmail.com> | null | robotics, drones, API, ARA | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3... | [] | null | null | >=3.8 | [] | [] | [] | [
"grpcio==1.69.0",
"grpcio-tools==1.69.0",
"pyfiglet<1.0.0,>=0.8.0",
"opencv-python==4.10.0.84",
"opencv-contrib-python==4.10.0.84",
"rich<15.0.0,>=14.0.0",
"numpy<1.25.0,>=1.24.0; python_full_version == \"3.8.*\"",
"numpy<2.0.0,>=1.26.0; python_full_version == \"3.9.*\"",
"numpy<3.0.0,>=2.1.0; pytho... | [] | [] | [] | [] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T18:54:28.773511 | ara_api-1.2.0rc2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 4,612,995 | 15/68/b8c20f6e9254773660076ffae7aeb59e4752c0d194409cb6e65ffd8e7fe9/ara_api-1.2.0rc2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | cp311 | bdist_wheel | null | false | f61e1682baec615d9a38027334582904 | 933d1c825d7f42fde98f9f28bad23bf9b3a68dd6513e005e9080b3d5b4e39b33 | 1568b8c20f6e9254773660076ffae7aeb59e4752c0d194409cb6e65ffd8e7fe9 | null | [
"LICENSE"
] | 1,415 |
2.4 | rodin | 1.9.21 | A comprehensive toolkit for processing and analyzing metabolomics data. | ## **Rodin: Metabolomics Data Analysis Toolkit**
[](https://doi.org/10.1093/bioadv/vbaf088)
_Rodin_ is a Python library specifically designed for the comprehensive processing and analysis of metabolomics data and other omics data. It is a class-methods based toolkit, facilitating a range of tasks from basic data manipulation to advanced statistical evaluations, visualization, and metabolic pathway analysis.
Now, most of its functionality is available in the Web App at https://rodin-meta.com.
### **Features**
- **Efficient Data Handling**: Streamlined manipulation and transformation of metabolomics data and other omics.
- **Robust Statistical Analysis**: Includes ANOVA, t-tests, and more.
- **Machine Learning Methods**: Random Forest, Logistic and Linear regressions.
- **Advanced Dimensionality Reduction**: Techniques like PCA, t-SNE, UMAP.
- **Interactive Data Visualization**: Tools for effective data visualization.
- **Pathway Analysis**: Features for metabolic pathway analysis.
### **Installation**
We recommend installing Rodin in a separate environment for effective dependency management.
#### Prerequisites
- Python (3.10 or higher)
#### Install Rodin
```bash
pip install rodin
```
or install Rodin directly from GitHub:
```bash
pip install git+https://github.com/BM-Boris/rodin.git
```
#### Basic Example
Here's a basic example demonstrating the usage of Rodin for data analysis. Comprehensive Jupyter notebook guides can be found in the 'guides' folder
```python
import rodin
# Assume 'features.csv' and 'class_labels.csv' are your datasets
features_path = 'path/to/features.csv'
classes_path = 'path/to/class_labels.csv'
# Creating an instance of Rodin_Class
rodin_instance = rodin.create(features_path, classes_path)
# Transform the data (imputation, normalization, and log-transformation steps)
rodin_instance.transform()
# Run t-test comparing two groups based on 'age'
rodin_instance.ttest('age')
# Run two-way anova test comparing groups based on 'age' and 'region'
rodin_instance.twoway_anova(['age','region'])
# Run multiple logistic regressions and linear regressions to get pvalues for each feature
rodin_instance.sf_lg('sex')
rodin_instance.sf_lr('age')
# Run a random forest classifier and regressor to obtain k-fold-validated model metrics, with feature importance scores assigned to each variable
rodin_instance.rf_class('region')
rodin_instance.rf_regress('age')
# Slice the whole object using pandas-style indexing
rodin_instance = rodin_instance[rodin_instance.features[rodin_instance.features['imp(rf) age']>0]]
# Perform PCA with 2 principal components (UMAP and t-SNE are available as well)
rodin_instance.run_pca(n_components=2)
# Plotting the PCA results
# 'region' column in the 'samples' DataFrame is used for coloring the points
rodin_instance.plot(dr_name='pca', hue='region', title='PCA Plot')
# Volcano Plot
rodin_instance.volcano(p='p_adj(owa) region', effect_size='lfc (New York vs Georgia)', sign_line=0.01)
# Box Plot
rodin_instance.boxplot(rows=[9999,4561], hue='region')
# Clustergram
rodin_instance.clustergram(hue='sex',standardize='row')
# Pathway analysis
rodin_instance.analyze_pathways(pvals='p_value', stats='statistic',mode='positive')
# Replace 'p_value' and 'statistic' with the actual column names in your 'features' DataFrame (rodin_instance.features), and set the mass spectrometry ionization mode.
```
The updated guide can be accessed here: https://bm-boris.github.io/rodin_guide/basics.html. Test data from the guide can be found at https://github.com/BM-Boris/rodin_guide/tree/main/data.
#### Contact
For questions, suggestions, or feedback, please contact boris.minasenko@emory.edu
### Citation
If you use **Rodin** in your research, please cite the following paper:
Minasenko B, Wang D, Cirillo P, Krigbaum N, Cohn B, Jones DP, Collins JM, Hu X.
*Rodin: a streamlined metabolomics data analysis and visualization tool.* **Bioinformatics Advances**. 2025; 5(1): vbaf088.
https://doi.org/10.1093/bioadv/vbaf088
| text/markdown | Boris Minasenko | boris.minasenko@emory.edu | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/BM-Boris/rodin | null | >=3.10 | [] | [] | [] | [
"numpy>=1.21.4",
"pandas>=1.3.4",
"scipy>=1.7.3",
"scikit-learn>=1.0",
"umap-learn>=0.5.1",
"matplotlib>=3.5.0",
"seaborn>=0.11.2",
"statsmodels>=0.13.0",
"tqdm>=4.62.3",
"dash-bio>=0.8.0",
"dash>=2.7.0",
"pickle-mixin>=1.0.2",
"networkx>=2.6",
"plotly>=5.19.0",
"fastcluster"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.6 | 2026-02-18T18:54:22.178596 | rodin-1.9.21.tar.gz | 2,094,344 | 5f/4d/f63ae2b04f79f0b1f75d3dd490df73d567662af751230a74194e20432761/rodin-1.9.21.tar.gz | source | sdist | null | false | 80fa6f2785229d8b32a0e276a6477c04 | 6eb953fc28af292e675dacb67d71e5dd9d47f3cdf011a00cca3af619c036c9b6 | 5f4df63ae2b04f79f0b1f75d3dd490df73d567662af751230a74194e20432761 | null | [
"LICENSE"
] | 243 |
2.4 | ethraid | 3.2.2 | Characterize long-period companions using RV trends, astrometric accelerations, and direct imaging | <div align="center">
<img src="ethraid/example/ethraid.jpg" width="168" height="150">
</div>
# Ethraid
Characterize long-period companions with partial orbits.
Please cite Van Zandt \& Petigura (2024) and the following DOI if you make use of this software in your research.
[](https://zenodo.org/doi/10.5281/zenodo.10841606)
## Environment
### Create new environment with python 3.14
- *\$ conda create --name ethraid_env python=3.14*
- *\$ conda activate ethraid_env*
## Download using pip
- *\$ pip install ethraid*
- If the installation fails, try upgrading pip: *\$ curl https://bootstrap.pypa.io/get-pip.py | python*
## OR Download repo from Github
### Install dependencies using requirements.txt
- *\$ pip install -r requirements.txt*
### Build code from top level of repo
- *\$ cd ethraid/*
- *\$ python setup.py build_ext --inplace*
### Run 3 simple test configuration files to ensure all API and CLI functions are working correctly
- *\$ python test.py*
- This should only take a few minutes. Desired outcome:
```
Test complete:
0 errors encountered running API array calculation
0 errors encountered running API array loading
0 errors encountered running CLI array calculation
0 errors encountered running CLI array loading
0 errors encountered running CLI 'all' function
```
- These are only meant to test basic functionality. The output plots and bounds will be meaningless because of the small number of models sampled.
## Create a configuration file from the template provided and provide the required parameters and desired data
- *\$ cp template_config.py my_config.py*
- NOTE: ethraid uses AU for all distances and M_Jup for all masses. Access helpful conversion factors using e.g.
```
from ethraid import Ms2Mj, pc_in_au
```
which respectively convert solar masses to Jupiter masses and parsecs to AU in your config file.
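For instance, a config file might convert stellar parameters like this. The numeric factor values below are approximations hard-coded for illustration only; in a real config you would import them from `ethraid` as shown above, and the star mass and distance are hypothetical:

```python
# Approximate conversion factors, hard-coded here for illustration;
# in a real config file use: from ethraid import Ms2Mj, pc_in_au
Ms2Mj = 1047.57      # 1 solar mass in Jupiter masses (approx.)
pc_in_au = 206265.0  # 1 parsec in AU (approx.)

m_star = 0.92 * Ms2Mj     # hypothetical 0.92 M_Sun star, expressed in M_Jup
d_star = 53.8 * pc_in_au  # hypothetical 53.8 pc distance, expressed in AU
print(round(m_star), round(d_star))
```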
## Example usage
### CLI: Run orbit fits, plot results, and print 95\% confidence intervals all at once from the command line
Ethraid's installation includes a configuration file, ```example/config_191939.py```, which may be used to validate its functionality on the system HD 191939.
- *\$ ethraid all -cf path/to/ethraid/example/config_191939.py -rfp results/191939/191939_processed.h5 -t 1d 2d*
- Note that the *-rfp* (read file path) flag takes the path to the output file where the fit results are stored. On a first run, this path *does not exist yet,* but it will be created after the fit and before plotting.
### Alternative: run each command separately
#### Run orbit fits using parameters in configuration file
- *\$ ethraid run -cf path/to/ethraid/example/config_191939.py*
#### Load and plot saved results
- *\$ ethraid plot -cf path/to/ethraid/example/config_191939.py -rfp results/191939/191939_processed.h5 -t 1d 2d*
#### Print 95\% mass and semi-major axis confidence intervals based on derived posterior
- *\$ ethraid lims -cf path/to/ethraid/example/config_191939.py -rfp results/191939/191939_processed.h5*
### Another alternative: use the api_run.py module to interface easily with the API
- Add the following code to the end of the module to run a fit, plot the results, and print a summary to the command line.
```
if __name__ == "__main__":
config_path = 'path/to/ethraid/example/config_191939.py'
read_file_path = 'results/191939/191939_processed.h5'
plot=True
verbose = True
run(config_path, read_file_path,
plot=plot, verbose=verbose)
```
## Results
Running ethraid from scratch will generate a directory called *results/\{star_name\}/* containing the raw (large, reshapeable) posterior arrays and/or their processed (small, non-reshapeable) counterparts. After plotting, the directory will contain up to three plots: a joint 2D posterior in mass-separation space, as well as the marginalized 1D PDFs and CDFs. Samples of these plots are below.
- 2D joint posterior
<img src="ethraid/example/191939/191939_2d.png" width="450" height="450">
- 1D PDFs
<img src="ethraid/example/191939/191939_pdf_1d.png" width="600" height="400">
- 1D CDFs
<img src="ethraid/example/191939/191939_cdf_1d.png" width="550" height="350">
## Troubleshooting
### Why are my posteriors splotchy?
- Try increasing the *num_points* argument. This will increase the number of sampled orbit models with a corresponding rise in run time.
- Try decreasing the *grid_num* argument. This will lower the resolution of the grid and help smooth over stochastic variation with no increase in run time.
### Why does my RV/astrometry posterior extend down to the low-mass/high-separation regime (the bottom-right corner)? It's not helping to rule out any models!
- Check your input data values. They may be consistent with 0, meaning that companions producing negligible signals are good fits to the data.
### Why does the astrometry posterior have large overlap with the RVs? It's not helping to rule out any models!
- Check the range of orbital separations you're probing. Beyond ~25 years (~8.5 AU for a Sun-like star), the RV and astrometry posteriors have the same m-a relation, and thus have the same information content.
### How do I check the Δμ value of my desired target in the *Hipparcos-Gaia* Catalog of Accelerations?
- Use the built-in API function:
```
from ethraid.compiled import helper_functions_astro as help_astro
help_astro.HGCA_retrieval(hip_id="99175")
```
Output:
```
(0.12684636376342961, 0.03385628059095324)
```
This target has Δμ = 0.127 +/- 0.034 mas/yr, a 3.7 σ measurement.
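That significance is just the ratio of the measured Δμ to its uncertainty:

```python
# Values taken from the HGCA_retrieval output above
dmu, dmu_err = 0.12684636376342961, 0.03385628059095324
print(round(dmu / dmu_err, 1))  # -> 3.7
```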
| text/markdown | Judah Van Zandt | judahvz@astro.ucla.edu | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"astropy>=6.0",
"Cython>=3.0",
"h5py>=3.10",
"matplotlib>=3.8",
"numpy>=1.26",
"pandas>=2.0",
"scipy>=1.11",
"tqdm>=4.64.0",
"astroquery>=0.4.7",
"setuptools>=65.6.3"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T18:53:31.954745 | ethraid-3.2.2.tar.gz | 6,354,568 | 20/53/1ea65e2a11557a2dfd8e388abe2dfb47ade747ce8eb55a8a8b979ee25159/ethraid-3.2.2.tar.gz | source | sdist | null | false | 68fbf1f43642845a5eb51a745d475f9f | cc0548771ef9ca418015a363fa24b3bb301bb29484de4bef118c256e1c269a2b | 20531ea65e2a11557a2dfd8e388abe2dfb47ade747ce8eb55a8a8b979ee25159 | null | [
"LICENSE"
] | 164 |
2.4 | classapi | 0.1.0.1 | Add your description here | # ClassApi
ClassApi is a small convenience layer on top of FastAPI that enables class-based views (CBV). It preserves FastAPI's features (typing, dependencies, automatic docs) while letting you organize handlers as classes.
**Key features**
- Use `BaseView` as a base class for handlers (`get`, `post`, `put`, ...).
- Support for `pre_process` and `pre_<method>` hooks to validate or transform requests.
- Register route modules by module path (supports relative module paths like `.src.urls`).
- Combined signatures from `pre_process` and the handler method are exposed to FastAPI for documentation and form generation.
**Development setup (using `uv` helper)**
1. Create a virtual environment:
```bash
python -m venv .venv
```
2. Use your `uv` helper to run `pip` inside the project environment and install dependencies:
```bash
uv run pip install fastapi uvicorn
# or install from requirements: uv run pip install -r requirements.txt
```
**Quickstart**
Create `main.py`:
```py
from classapi import ClassApi
app = ClassApi()
app.include_routers(".src.urls")
```
Example routes and views layout (tests/app_test/src):
```py
# tests/app_test/src/urls.py
from .views import HelloWorldView
urls = [
{"path": "/hello", "view": HelloWorldView}
]
# tests/app_test/src/views.py
from classapi import BaseView, Header, HTTPException
from typing import Annotated
class ValidateUser(BaseView):
def pre_process(self, jwt: Annotated[str | None, Header()] = None):
if jwt != "valid_jwt":
raise HTTPException(status_code=401, detail="Unauthorized")
class HelloWorldView(ValidateUser, BaseView):
methods = ["GET"]
def get(self, name: str = "World"):
return {"Hello": name}
```
**Supported `urls` formats**
- Dict entries: `{"path": "/x", "view": MyView, ...fastapi kwargs...}` — extra kwargs (e.g. `response_model`) are forwarded to `add_api_route`.
- Tuple/list entries: `("/x", MyView)`.
- You may use relative imports from the calling module: `app.include_routers(".src.urls")`.
**`View` classes**
- Define HTTP methods: `get`, `post`, `put`, `delete`, `patch`.
- Limit exposed methods with `methods = ["GET"]` on the class.
- Hooks:
- `pre_process(self, ...)` — runs before any handler.
- `pre_get(self, ...)`, `pre_post(...)`, ... — run before a specific handler.
- Signatures from `pre_process`, `pre_<method>` and the handler itself are merged and exposed to FastAPI; annotate parameters with `Annotated[..., Header()]`, `Cookie()`, etc., to appear correctly in `/docs`.
Example: header extraction in `pre_process`:
```py
def pre_process(self, jwt: Annotated[str|None, Header()] = None):
...
```
If you accidentally place `Annotated[...]` as a default (e.g. `jwt = Annotated[...]`), ClassApi attempts to normalize it so FastAPI recognizes the dependency. Still, annotate parameters properly when possible.
**Running the app**
- Use `uv` to run the app with reload during development:
```bash
uv run uvicorn tests.app_test.main:app --reload
```
or, if you use the helper script `test_init.py` at the repository root:
```bash
uv run .\test_init.py
```
**Debugging endpoint signatures**
If docs don't show expected parameters, you can inspect endpoint signatures at runtime:
```py
for r in app.routes:
print(r.path, getattr(r.endpoint, '__signature__', None))
```
**Editor integration (VSCode / Pylance)**
Pylance is a static analyzer and doesn't pick up runtime-generated signatures. To get editor hovers that match your runtime docs, create a `.pyi` stub next to your views module describing the public signatures (this does not change runtime behavior).
**Contributing**
- Open issues or PRs.
- Add tests under `tests/` and run them with `pytest`.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"fastapi[standard]>=0.129.0"
] | [] | [] | [] | [] | uv/0.9.7 | 2026-02-18T18:52:25.569859 | classapi-0.1.0.1.tar.gz | 5,892 | 35/e9/b6e9556ef414967456f1d61bee974bfd691ee93c23a2de1f578b06be709a/classapi-0.1.0.1.tar.gz | source | sdist | null | false | 4f4ca7074de419fc57686fdbd6454daa | b093ac6db8ef79d676a45b65bdf57b3ca59c5b834c74709702893899b776d205 | 35e9b6e9556ef414967456f1d61bee974bfd691ee93c23a2de1f578b06be709a | null | [] | 249 |
2.4 | frameio_kit | 0.0.10 | A Python framework for building Frame.io apps | # frameio-kit
A Python framework for building Frame.io integrations. Handle webhooks, custom actions, OAuth, and API calls with minimal boilerplate — you write the business logic, frameio-kit handles the rest.
```python
from frameio_kit import App, WebhookEvent, ActionEvent, Message
app = App()
@app.on_webhook("file.ready")
async def on_file_ready(event: WebhookEvent):
print(f"File {event.resource_id} is ready!")
@app.on_action("my_app.analyze", name="Analyze File", description="Analyze this file")
async def analyze_file(event: ActionEvent):
return Message(title="Analysis Complete", description="File analyzed successfully!")
```
## Installation
```bash
pip install frameio-kit
```
Optional extras for additional features:
```bash
pip install frameio-kit[otel] # OpenTelemetry tracing
pip install frameio-kit[dynamodb] # DynamoDB storage backend
pip install frameio-kit[install] # Self-service installation UI
```
## Features
- **Decorator-based routing** — `@app.on_webhook` and `@app.on_action` map events to handler functions
- **Automatic validation** — Pydantic models give you full type safety and editor support
- **Secure by default** — built-in HMAC signature verification for all incoming requests
- **Middleware system** — add cross-cutting concerns like logging, auth, and tracing
- **OpenTelemetry integration** — optional distributed tracing with zero mandatory dependencies
- **OAuth integration** — Adobe Login support for user-specific authentication
- **Self-service installation** — branded install pages for workspace admins
- **Built on FastAPI** — embed into existing apps with `include_router()` or run standalone
- **Built for Python 3.14+** with full type hints
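The HMAC verification mentioned above follows the standard webhook-signing pattern — a generic sketch of that pattern, not frameio-kit's actual code (the secret, body, and hash choice are illustrative; real services differ in header names and encoding):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    # Recompute the HMAC over the raw request body and compare in
    # constant time to avoid timing side channels
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret, body = b"webhook-secret", b'{"type": "file.ready"}'
good = hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, good))      # -> True
print(verify_signature(secret, body, "0" * 64))  # -> False
```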
## Documentation
Full documentation is available at [frameio-kit.dev](https://frameio-kit.dev):
- [Quickstart](https://frameio-kit.dev/getting-started/quickstart/) — build your first integration
- [Webhooks](https://frameio-kit.dev/guides/webhooks/) — react to Frame.io events
- [Custom Actions](https://frameio-kit.dev/guides/custom-actions/) — build interactive experiences
- [Client API](https://frameio-kit.dev/guides/client-api/) — make calls back to Frame.io
- [Middleware](https://frameio-kit.dev/guides/middleware/) — add cross-cutting concerns
- [OpenTelemetry](https://frameio-kit.dev/guides/opentelemetry/) — distributed tracing
- [User Authentication](https://frameio-kit.dev/guides/user-auth/) — OAuth flows
- [Self-Service Installation](https://frameio-kit.dev/guides/self-service-install/) — multi-tenant install UI
- [API Reference](https://frameio-kit.dev/reference/api/) — complete type documentation
## Contributing
Contributions are welcome! Whether you're fixing a typo or adding a feature, every contribution helps.
### Prerequisites
- Python 3.14+
- [uv](https://docs.astral.sh/uv/) package manager
### Setup
```bash
git clone https://github.com/billyshambrook/frameio-kit.git
cd frameio-kit
uv sync
uv run prek install
```
### Development
```bash
uv run pytest # Run tests
uv run prek run --all-files # Run static checks
uv run zensical serve # Build docs locally
```
### Getting Help
- **Questions?** Open a [discussion](https://github.com/billyshambrook/frameio-kit/discussions)
- **Bug reports?** Open an [issue](https://github.com/billyshambrook/frameio-kit/issues)
## License
[MIT](LICENSE)
| text/markdown | null | null | null | null | null | frameio, frame-io, frame.io, video, collaboration, sdk, api, client, integration, webhook, webhooks, custom-actions, asgi, fastapi, async, asyncio, automation, workflow, media, events | [
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"cryptography>=44.0.0",
"frameio>=0.0.23",
"frameio-experimental>=0.0.3",
"httpx>=0.28.1",
"itsdangerous>=2.2.0",
"pydantic>=2.11.10",
"fastapi>=0.115.0",
"aioboto3>=13.0.0; extra == \"dynamodb\"",
"jinja2>=3.1.0; extra == \"install\"",
"python-multipart>=0.0.18; extra == \"install\"",
"opentele... | [] | [] | [] | [
"Homepage, https://frameio-kit.dev",
"Documentation, https://frameio-kit.dev",
"Changelog, https://github.com/billyshambrook/frameio-kit/releases",
"Repository, https://github.com/billyshambrook/frameio-kit",
"Issues, https://github.com/billyshambrook/frameio-kit/issues",
"Discussions, https://github.com/... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:52:24.618851 | frameio_kit-0.0.10-py3-none-any.whl | 67,355 | 0c/0d/ce24d6ba2c75ff3bc09d501a209e48b02023e751dfcd1dda32b8d39974bd/frameio_kit-0.0.10-py3-none-any.whl | py3 | bdist_wheel | null | false | a35461f6e4f0203df6cc08907f145c61 | 6012bfca07e7ffd769809e2ec85477535f97e7daa16cc4ddf057f4a83eae2d3b | 0c0dce24d6ba2c75ff3bc09d501a209e48b02023e751dfcd1dda32b8d39974bd | MIT | [
"LICENSE"
] | 0 |
2.4 | acodex | 0.104.0 | Fast, typed Python SDK for Codex CLI workflows with TypeScript SDK parity. | # acodex
acodex is a fast, typed Python SDK for Codex CLI workflows (sync/async, streaming, structured
output, images, safety controls), maintaining API parity with the TypeScript SDK.
[](https://pypi.org/project/acodex/)
[](https://pypi.org/project/acodex/)
[](LICENSE)
[](https://codecov.io/gh/maksimzayats/acodex)
[](https://docs.acodex.dev)
## What is acodex?
acodex spawns the `codex` CLI and exchanges JSONL events over stdin/stdout so you can run agent
threads from Python with a fully typed surface: sync + async clients, streaming events, structured
output, image inputs, resumable threads, and safety controls exposed as explicit options.
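Such a JSONL transport is simply newline-delimited JSON. A generic illustration of parsing one — the event names below are invented for the example and are not acodex's real event schema:

```python
import io
import json

# Two fake JSONL events on a fake stream (names invented for illustration)
raw = '{"type": "item.completed"}\n{"type": "turn.completed"}\n'
events = [json.loads(line) for line in io.StringIO(raw) if line.strip()]
print([e["type"] for e in events])  # -> ['item.completed', 'turn.completed']
```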
## Install
### Prerequisite: Codex CLI
acodex wraps an external CLI. Install the Codex CLI and ensure `codex` is on your `PATH` (or pass
`codex_path_override=...` to `Codex(...)` / `AsyncCodex(...)`).
- Upstream CLI: https://github.com/openai/codex
One installation option:
```bash
npm install -g @openai/codex
codex --version
```
### Install acodex (uv-first)
```bash
uv add acodex
uv run python your_script.py
```
`pip install acodex` also works, but `uv` is recommended.
Recommended for structured output: structured-output extra (primary pattern via `output_type`):
```bash
uv add "acodex[structured-output]"
# or:
pip install "acodex[structured-output]"
```
## 60-second quickstart (sync)
```python
from pydantic import BaseModel
from acodex import Codex
class SummaryPayload(BaseModel):
summary: str
thread = Codex().start_thread(
sandbox_mode="read-only",
approval_policy="on-request",
web_search_mode="disabled",
)
turn = thread.run(
"Summarize this repo.",
output_type=SummaryPayload,
)
print(turn.structured_response.summary)
```
Call `run()` repeatedly on the same `Thread` instance to continue the conversation. To resume later
from disk, use `Codex().resume_thread(thread_id)`.
## Async quickstart
```python
import asyncio
from acodex import AsyncCodex
async def main() -> None:
thread = AsyncCodex().start_thread()
turn = await thread.run("Say hello")
print(turn.final_response)
asyncio.run(main())
```
## Advanced: stream parsed events
Use `run_streamed()` to react to intermediate progress (tool calls, streaming responses, item
updates, and final usage).
```python
from acodex import Codex, ItemCompletedEvent, TurnCompletedEvent, TurnFailedEvent
codex = Codex()
thread = codex.start_thread()
streamed = thread.run_streamed(
"List the top 3 risks for this codebase. Be concise.",
)
for event in streamed.events:
if isinstance(event, ItemCompletedEvent):
print("item", event.item)
elif isinstance(event, TurnCompletedEvent):
print("usage", event.usage)
elif isinstance(event, TurnFailedEvent):
print("error", event.error.message)
turn = streamed.result
print(turn.final_response)
```
`streamed.result` is available only after `streamed.events` is fully consumed.
## Why acodex
- **Typed surface**: strict type hints + mypy strict, no runtime deps by default.
- **Sync + async**: `Codex`/`Thread` and `AsyncCodex`/`AsyncThread`.
- **Streaming events**: `Thread.run_streamed()` yields parsed `ThreadEvent` dataclasses.
- **Structured output**: validate into a Pydantic model via `output_type` (recommended), or pass
`output_schema` (JSON Schema) for schema-only parity with the TypeScript SDK.
- **Images**: pass `UserInputLocalImage` alongside text in a single turn.
- **Resume threads**: `resume_thread(thread_id)` (threads persisted under `~/.codex/sessions`).
- **Safety controls**: expose Codex CLI controls as `ThreadOptions` (`sandbox_mode`,
`approval_policy`, `web_search_mode`, `working_directory`, ...).
- **TS SDK parity**: vendored TypeScript SDK is the source of truth; compatibility tests fail loudly
on drift.
- **Quality gates**: Ruff + mypy strict + 100% coverage.
## Compatibility & parity (TypeScript SDK)
The vendored TypeScript SDK under `vendor/codex-ts-sdk/src/` is the source of truth. CI runs a
Python-only compatibility suite that parses those TS sources and asserts the Python exports,
options keys, events/items models, and class surface stay compatible.
An hourly workflow checks for new stable Codex releases and opens a PR to bump the vendored SDK:
`.github/workflows/codex-ts-sdk-bump.yaml`.
- Compatibility policy: `COMPATIBILITY.md`
- Intentional divergences (documented + tested): `DIFFERENCES.md`
- Contributing: `CONTRIBUTING.md`
## Links
- Docs: https://docs.acodex.dev
- GitHub: https://github.com/maksimzayats/acodex
- Issues: https://github.com/maksimzayats/acodex/issues
## Disclaimer
acodex is independently maintained and is not affiliated with, sponsored by, or endorsed by OpenAI.
## License
Apache-2.0. See `LICENSE`.
| text/markdown | null | Maksim Zayats <maksim@zayats.dev> | null | null | null | agents, automation, cli, codex, jsonl, openai, sdk, streaming | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Lan... | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonref>=1; extra == \"structured-output\"",
"pydantic>=2; python_version < \"3.15\" and extra == \"structured-output\""
] | [] | [] | [] | [
"Homepage, https://github.com/maksimzayats/acodex",
"Repository, https://github.com/maksimzayats/acodex",
"Issues, https://github.com/maksimzayats/acodex/issues",
"Documentation, https://docs.acodex.dev",
"Changelog, https://github.com/maksimzayats/acodex/releases"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:52:16.318119 | acodex-0.104.0.tar.gz | 185,020 | 54/73/a1d18824900310e6ca06ca13a4c13c5e32bbdf1bd404be58f3e8b37efeaa/acodex-0.104.0.tar.gz | source | sdist | null | false | 5c8bec6d7e9f879b17acea106cc21ebb | cc4bb95a776ba72ccde4492305d15a15723591f48822c761fb99c60c02393e9c | 5473a1d18824900310e6ca06ca13a4c13c5e32bbdf1bd404be58f3e8b37efeaa | Apache-2.0 | [
"LICENSE"
] | 267 |
2.4 | courtlistener-api-client | 0.0.1 | Python SDK for the Court Listener API | # CourtListener API Client
A Python client for the [CourtListener API](https://www.courtlistener.com/api/rest/v4/), providing access to millions of legal opinions, dockets, judges, and more from [Free Law Project](https://free.law/).
## Installation
```bash
pip install courtlistener-api-client
```
## Authentication
You'll need a CourtListener API token. You can get one by [creating an account](https://www.courtlistener.com/register/) and generating a token in your [profile settings](https://www.courtlistener.com/profile/api/).
Set it as an environment variable:
```bash
export COURTLISTENER_API_TOKEN="your-token-here"
```
Or pass it directly to the client:
```python
from courtlistener import CourtListener
client = CourtListener(api_token="your-token-here")
```
## Quickstart
```python
from courtlistener import CourtListener
client = CourtListener()
# Get a specific opinion by ID
opinion = client.opinions.get(1)
# Search for opinions
response = client.opinions.list(cluster__case_name="Miranda")
# Access results from the current page
for opinion in response.results:
print(opinion)
# Check the total count of matching results
print(response.count)
# Iterate through all results across pages
response = client.dockets.list(court="scotus")
for docket in response:
print(docket)
```
### Pagination
List queries return a `ResourceIterator` that handles pagination automatically:
```python
results = client.dockets.list(court="scotus")
# Iterate through all results across all pages
for docket in results:
print(docket)
# Or navigate pages manually
results = client.dockets.list(court="scotus")
print(results.results) # current page results
if results.has_next():
results.next()
print(results.results) # next page results
```
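Under the hood, this kind of iterator is typically a cursor-following loop — a generic sketch of the pattern, not this SDK's actual implementation:

```python
def iterate_all(fetch_page):
    """Yield results across pages, following the `next` cursor (generic sketch)."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["results"]
        cursor = page.get("next")
        if cursor is None:
            return

# Fake two-page API for demonstration
pages = {
    None: {"results": [1, 2], "next": "p2"},
    "p2": {"results": [3], "next": None},
}
print(list(iterate_all(lambda c: pages[c])))  # -> [1, 2, 3]
```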
## Available Endpoints
Access any endpoint as an attribute on the client. Each endpoint supports `.get(id)` and `.list(**filters)`.
| Endpoint | Description |
| --- | --- |
| `search` | General search across all types |
| `opinion_search` | Search opinions |
| `recap_search` | Search RECAP archive |
| `dockets` | Court dockets |
| `docket_entries` | Docket entries |
| `recap_documents` | RECAP documents |
| `opinions` | Court opinions |
| `opinions_cited` | Citation relationships |
| `clusters` | Opinion clusters |
| `courts` | Court information |
| `audio` | Oral argument audio |
| `people` | Judges and other persons |
| `positions` | Judge positions |
| `parties` | Case parties |
| `attorneys` | Attorneys |
| `financial_disclosures` | Financial disclosures |
| `alerts` | User alerts |
| `docket_alerts` | Docket alerts |
| `tags` | User-created tags |
| `visualizations` | Visualization data |
| `schools` | Schools |
| `educations` | Judge education records |
| `political_affiliations` | Political affiliations |
| `aba_ratings` | ABA ratings |
| `fjc_integrated_database` | FJC integrated database |
See the [CourtListener API docs](https://www.courtlistener.com/api/rest-info/) for the full list and available filters.
| text/markdown | null | Free Law Project <info@free.law> | null | null | null | legal, courts | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: I... | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"mcp>=1.0.0",
"click>=8.0.0; extra == \"dev\"",
"jinja2>=3.0.0; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\"",
"pre-commit>=4.5.1; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"python-dotenv>=1.0.0; extr... | [] | [] | [] | [
"Repository, https://github.com/freelawproject/courtlistener-api-client",
"Organisation Homepage, https://free.law/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T18:52:07.546936 | courtlistener_api_client-0.0.1.tar.gz | 11,847 | a7/2f/6e40de58d3419dcecfedfc034a6780df76d64c312f3d671cddb32dfa8393/courtlistener_api_client-0.0.1.tar.gz | source | sdist | null | false | 98726dc4215748f3f100194cf6105392 | 2b62bb4d1f524a49adbb11de43436fd412c7510faf8f871bafa084def05ed43d | a72f6e40de58d3419dcecfedfc034a6780df76d64c312f3d671cddb32dfa8393 | BSD-2-Clause | [
"LICENSE"
] | 256 |
2.4 | panini-nlp | 0.2.0 | Sanskrit NLP library grounded in Pāṇinian grammar. Deterministic, Graph-based, and Neuro-symbolic. | # Panini-NLP
**A Deterministic, Graph-based, Neuro-symbolic implementation of Pāṇini's Grammar.**
[](https://opensource.org/licenses/MIT) [](https://www.python.org/downloads/)
`panini-nlp` is a Python library that decodes the **Aṣṭādhyāyī** not as a static text, but as a computable **Cyclic Directed Graph**. It provides the complete structural context of the grammar—every Sūtra and Dhātu—alongside deterministic engines for Sandhi, Prosody, and Phonetics.
## 📖 About
This project decodes the 2,500-year-old **Aṣṭādhyāyī** as the world's first **Deterministic Generative AI**.
Pāṇini's grammar is not a mere set of rules; it is a **formal generative system** that produces infinite valid Sanskrit words from finite roots using a highly compressed, algebraic source code. `panini-nlp` implements this architecture:
1. **Universal Registry**: A digital structural map of all 3,996 Sūtras and ~2,000 Dhātus.
2. **Deterministic Engine**: Mathematical implementation of Sandhi (phonetics) and Chandas (prosody).
3. **Neuro-Symbolic Bridge**: A GNN layer to handle ambiguity (Vipratisedha) where the deterministic path branches.
We are not inventing AI for Sanskrit; we are decoding the algorithm that was already written.
---
## 🚀 Key Features
### 1. The Complete "Source Code"
Unlike other tools that implement only a few rules, `panini-nlp` contains the **Full Registry**:
- **3,996 Sūtras**: Every rule from the Aṣṭādhyāyī is present as a Python function stub in `panini_nlp/rules/`.
- **~2,000 Dhātus**: Every root from the Dhātupāṭha is registered in `panini_nlp/roots/`.
- **Maheshvara Sutras**: Implements the "Prime Number Architecture" (71.4% prime density) for phoneme compression.
### 2. Hybrid Neuro-Symbolic Architecture
- **Symbolic Core**: Deterministic engines for Sandhi (Euphonic Junction) and Chandas (Prosody).
- **Neural Guidance**: A Graph Neural Network (GNN) layer (`panini_nlp/gnn`) to resolve rule conflicts (Vipratisedha) by learning the graph topology.
### 3. "Seed Kernel" Efficiency
- The core logic fits in **< 50KB**.
- The entire structural knowledge base is generated from raw source texts.
---
## 📦 Installation
```bash
# Clone the repository
git clone https://github.com/meru-os/panini-nlp.git
cd panini-nlp
# Install mostly-pure Python core
pip install .
# Install with Neural Network support (PyTorch)
pip install .[gnn]
```
## 🛠️ Usage Examples
### 1. Deterministic Sandhi (Phonetic Engine v0.2)
Apply formal rules like *Akaḥ Savarṇe Dīrghaḥ* (6.1.101). The engine uses **Varna-Vibhasha** (Phoneme Decomposition) to handle Devanagari input correctly.
```python
from panini_nlp import SandhiEngine
sandhi = SandhiEngine()
result = sandhi.apply("देव", "आलय")
print(result.modified)
# Output: देवालय (Devālaya) - Rule 6.1.101 applied
```
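The rule applied above — like simple vowels coalescing into the corresponding long vowel — can be sketched in isolation on IAST text. This is a toy illustration, not the library's Devanagari-aware engine:

```python
def savarna_dirgha(left: str, right: str) -> str:
    # 6.1.101 (toy version): a/ā, i/ī, u/ū pairs across a word boundary
    # merge into the corresponding long vowel
    long_of = {"a": "ā", "ā": "ā", "i": "ī", "ī": "ī", "u": "ū", "ū": "ū"}
    l, r = left[-1], right[0]
    if l in long_of and r in long_of and long_of[l] == long_of[r]:
        return left[:-1] + long_of[l] + right[1:]
    return left + right  # no sandhi modeled here

print(savarna_dirgha("deva", "ālaya"))  # -> devālaya
print(savarna_dirgha("kavi", "indra"))  # -> kavīndra
```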
### 2. Derivation Simulation (Prakriya)
Trace the path of a word through the grammar graph.
```python
# Run the included demo script
# python3 examples/derive_brahman.py
from panini_nlp.rules import registry as rule_registry
# Access Rule 1.1.1 (Growth Definitions)
rule = rule_registry.get("1.1.1")
print(f"{rule.id}: {rule.text}")
# Output: 1.1.1: vṛddhir ādaic
```
### 3. Prosody Analysis (Chandas)
Analyze the binary rhythm of a verse (Laghu/Guru).
```python
from panini_nlp import ChandasAnalyzer
analyzer = ChandasAnalyzer()
meter = analyzer.analyze("dharmakṣetre kurukṣetre samavetā yuyutsavaḥ")
print(meter.pattern)
# Output: G L G G L G G G ... (Binary Stream)
```
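Pingala's binary correspondence behind that G/L stream can be made explicit. Note the Laghu→1 / Guru→0 mapping here is one convention, chosen for illustration:

```python
def pattern_to_bits(pattern: str) -> str:
    # One convention (an assumption): Laghu (light) -> 1, Guru (heavy) -> 0
    return "".join("1" if s == "L" else "0" for s in pattern.split())

print(pattern_to_bits("G L G G L G G G"))  # -> 01001000
```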
---
## 📂 Project Structure
```text
panini-nlp/
├── panini_nlp/
│ ├── rules/ # 3996 Auto-generated Sutra stubs (Adhyaya 1-8)
│ ├── roots/ # ~2000 Auto-generated Root definitions (Gana 1-10)
│ ├── sandhi.py # Deterministic Sandhi Engine
│ ├── chandas.py # Pingala's Binary Prosody Algorithms
│ ├── maheshvara.py # Prime Number Phonetic Analysis
│ ├── gnn/ # Graph Neural Network Models (PyTorch)
│ ├── data/ # Raw Source Texts (Ashtadhyayi.txt, Dhatupatha.txt)
│ └── validator.py # Pipeline Orchestrator
├── examples/
│ ├── derive_brahman.py # Paper Example: Derivation of "Brahman"
│ └── derive_shuklam.py # Demo Example: "Shuklam Baradharam" analysis
├── requirements.txt
└── setup.py
```
## License
MIT License. Free for research and education.
| text/markdown | null | Sai Rohit <akulasairohit@gmail.com> | null | null | MIT | sanskrit, nlp, panini, grammar, gnn, sandhi, morphology, ashtadhyayi, dhatupatha | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Languag... | [] | null | null | >=3.9 | [] | [] | [] | [
"msgpack>=1.0; extra == \"compression\"",
"torch>=2.0; extra == \"gnn\"",
"torch_geometric>=2.3; extra == \"gnn\"",
"networkx>=3.0; extra == \"gnn\"",
"msgpack>=1.0; extra == \"all\"",
"torch>=2.0; extra == \"all\"",
"torch_geometric>=2.3; extra == \"all\"",
"networkx>=3.0; extra == \"all\""
] | [] | [] | [] | [
"Repository, https://github.com/meru-os/panini-nlp"
] | twine/6.2.0 CPython/3.11.3 | 2026-02-18T18:51:55.590048 | panini_nlp-0.2.0.tar.gz | 876,656 | 8b/9a/58ecaa9226dcfee5fc635c0b1590bf477babd291263cc405d55e774d31b7/panini_nlp-0.2.0.tar.gz | source | sdist | null | false | 6f92e2fb722abd1d2a516a0ff4143373 | 8ba0e385ba6ee59d15d71703894301d30d0a154987ee04337f8cf3bb3e6300a1 | 8b9a58ecaa9226dcfee5fc635c0b1590bf477babd291263cc405d55e774d31b7 | null | [] | 242 |
2.4 | unvii | 0.1.2 | Client SDK for the Unvii conversational AI API. | # Unvii Client SDK
## Install
```bash
pip install .
```
## Usage
```python
from unvii import Model, generate_api_key
BASE_URL = "https://unvii.onrender.com"
api_key = generate_api_key(base_url=BASE_URL)
model = Model(api_key=api_key, system_prompt="You are concise and helpful.", base_url=BASE_URL)
reply = model.generate_text("Explain black holes in two sentences.")
print(reply)
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests<3.0,>=2.32"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.3 | 2026-02-18T18:51:20.835034 | unvii-0.1.2.tar.gz | 2,374 | 34/d4/8f63499fabfc0d241da9d7e9584add936516eb03b809be586fd3ae6eb2a7/unvii-0.1.2.tar.gz | source | sdist | null | false | 3a5c7c2a06e21b9f85d8d450cbd226bb | ea76e3f23c964e54145cf8e1af1c88c70bb4674985df873229086cde4d9619df | 34d48f63499fabfc0d241da9d7e9584add936516eb03b809be586fd3ae6eb2a7 | null | [] | 243 |
2.4 | k8s-ai-cli | 0.1.5 | CLI that explains Kubernetes pod failures from logs using AI. Works locally with kubectl. | # k8s-ai
A Python CLI for Kubernetes pod failure triage using AI-powered log analysis.
## Features
- Reads pod logs directly through your local `kubectl` context
- Uses OpenAI Python SDK v1.x for log analysis
- Returns likely root cause, confidence level, and remediation guidance
- Supports `--mock`, `--previous`, `--container`, and `--timeout`
## Installation
```bash
pip install k8s-ai-cli
```
## Quick Start
```bash
k8s-ai default my-pod
k8s-ai default my-pod --mock
k8s-ai default my-pod --previous
k8s-ai default my-pod --container app
```
## Mock Example
```bash
k8s-ai default my-pod --mock
```
## Real Cluster Example
```bash
# Current pod logs
k8s-ai default my-pod
# Previous logs (useful for CrashLoopBackOff)
k8s-ai default my-pod --previous
# Target a specific container
k8s-ai default my-pod --container app
```
## Environment Variables
`k8s-ai` requires an OpenAI API key.
Linux/macOS:
```bash
export OPENAI_API_KEY="your_api_key_here"
```
Windows PowerShell:
```powershell
$env:OPENAI_API_KEY="your_api_key_here"
```
## How It Works
1. Collects logs for the target pod using `kubectl`.
2. Sends relevant log context to an OpenAI-compatible model (OpenAI SDK v1.x).
3. Produces structured output with probable cause, confidence, and remediation steps.
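The three steps above can be sketched roughly as follows. Function names, the prompt text, and the model name (`gpt-4o-mini`) are illustrative assumptions, not the tool's actual internals:

```python
import subprocess

def build_kubectl_cmd(namespace, pod, container=None, previous=False, tail=200):
    """Step 1: build the kubectl invocation used to collect pod logs."""
    cmd = ["kubectl", "logs", pod, "-n", namespace, f"--tail={tail}"]
    if container:
        cmd += ["-c", container]
    if previous:
        cmd.append("--previous")
    return cmd

def analyze(namespace, pod, **kwargs):
    """Steps 2-3: collect logs and ask an OpenAI-compatible model for a triage."""
    logs = subprocess.run(build_kubectl_cmd(namespace, pod, **kwargs),
                          capture_output=True, text=True, check=True).stdout
    from openai import OpenAI  # OpenAI SDK v1.x; reads OPENAI_API_KEY from the env
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Identify the probable root cause, confidence, and remediation."},
            {"role": "user", "content": logs},
        ],
    )
    return resp.choices[0].message.content
```

Note that only the log tail is sent to the model, so permissions and data exposure are bounded by your `kubectl` context and the configured tail size.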
## Safety
- Runs locally on your machine.
- Uses your current `kubectl` permissions and context.
- Does not persist collected pod logs by default.
## Roadmap
- Broader failure-pattern coverage for common Kubernetes workloads
- Optional structured output modes for CI and automation
- Additional debugging context integration (events/describe)
## License
MIT
| text/markdown | Dheeraj Koduru | null | null | null | MIT | kubernetes, k8s, devops, sre, logs, ai, cli | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.12",
"rich>=13.7",
"python-dotenv>=1.0",
"httpx>=0.27",
"openai>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T18:51:20.497505 | k8s_ai_cli-0.1.5.tar.gz | 10,723 | 8f/3b/a57d5057e0eda3b9407b7b1e2e4cd82e76134e17e988320f7a4112f954e6/k8s_ai_cli-0.1.5.tar.gz | source | sdist | null | false | f66dadb3f7ef033608cc3a64ac3b98d2 | e7be35e545766da8ad320f533dd9cf731fc9375b3a25fd3ae7083419dbbe5296 | 8f3ba57d5057e0eda3b9407b7b1e2e4cd82e76134e17e988320f7a4112f954e6 | null | [
"LICENSE"
] | 247 |
2.4 | vllm-plugin-meralion2 | 0.1.4 | A vLLM plugin to register the MERaLiON-2-10B model architecture with vLLM’s plugin system. | ## MERaLiON2 vLLM Plugin
### Licence
[MERaLiON-Public-Licence-v3](https://huggingface.co/datasets/MERaLiON/MERaLiON_Public_Licence/blob/main/MERaLiON-Public-Licence-v3.pdf)
### Set up Environment
This vLLM plugin supports vLLM version `0.6.5` ~ `0.7.3` (V0 engine), and `0.8.5` ~ `0.8.5.post1` (V1 engine).
Install the MERaLiON2 vLLM plugin.
```bash
pip install vllm-plugin-meralion2
```
It is strongly recommended to install flash-attn for better memory and GPU utilization.
```bash
pip install flash-attn --no-build-isolation
```
### Offline Inference
Refer to [offline_example.py](https://huggingface.co/MERaLiON/MERaLiON-2-10B/blob/main/vllm_plugin_meralion2/offline_example.py) for an offline inference example.
### OpenAI-compatible Serving
Refer to [openai_serve_example.sh](https://huggingface.co/MERaLiON/MERaLiON-2-10B/blob/main/vllm_plugin_meralion2/openai_serve_example.sh) for an OpenAI-compatible serving example.
To call the server, you can refer to [openai_client_example.py](https://huggingface.co/MERaLiON/MERaLiON-2-10B/blob/main/vllm_plugin_meralion2/openai_client_example.py).
Alternatively, you can try calling the server with curl, refer to [openai_client_curl.sh](https://huggingface.co/MERaLiON/MERaLiON-2-10B/blob/main/vllm_plugin_meralion2/openai_client_curl.sh).
### Changelog
#### 0.1.4
- Fixed multi-audio handling for a single request.
- Fixed server-side internal failure when multiple requests with different audio chunk counts are batched together.
- Added more docstrings for better code readability and maintenance.
Full history: see [CHANGELOG.md](./CHANGELOG.md).
| text/markdown | MERaLiON Team | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"librosa"
] | [] | [] | [] | [
"Modelpage, https://huggingface.co/MERaLiON/MERaLiON-2-10B",
"Homepage, https://huggingface.co/MERaLiON/MERaLiON-2-10B/tree/main/vllm_plugin_meralion2",
"Documentation, https://huggingface.co/MERaLiON/MERaLiON-2-10B/blob/main/vllm_plugin_meralion2/readme.md",
"Changelog, https://huggingface.co/MERaLiON/MERaLi... | twine/6.2.0 CPython/3.12.11 | 2026-02-18T18:51:06.479521 | vllm_plugin_meralion2-0.1.4.tar.gz | 30,754 | 7d/ab/8ab629af8cdb99986a88628d57c12f5fa51be44a4445e35964ed772bbcf6/vllm_plugin_meralion2-0.1.4.tar.gz | source | sdist | null | false | af808453aadb98634ad92d91fafa23f7 | 49e52bc85c48d3fa0d4c313a9325a63d0445f1a6a31cd993d255ec36b3b72c1b | 7dab8ab629af8cdb99986a88628d57c12f5fa51be44a4445e35964ed772bbcf6 | null | [] | 264 |
2.1 | meru-os | 1.0.0 | The First Sovereign AI Operating System based on Sanskrit Logic | # 🕉️ Meru OS: The Sovereign AI Stack
Meru OS is the world's first **Sovereign AI Operating System** built on **Vedic Logic** (Panini's Grammar), **Prime Number Theory** (Pingala's Binary), and **Neuro-Symbolic Architecture**.
Meru OS represents a paradigm shift from "Black Box" probabilistic models to **"Glass Box" Deterministic Intelligence**. By encoding logic directly into the kernel using Prime Numbers, we achieve an AI that is **transparent, verifiable, and energy-efficient.**
## 🌱 Green AI: The Eco-Sattva Architecture
Meru OS is designed to be the world's most energy-efficient AI stack:
1. **Reversible Computing**: The **Bija Hypervisor** tracks state changes as Prime Factors. This allows processes to be perfectly reversed (Pralaya) without entropy loss, theoretically approaching **Zero-Heat Computing** (Landauer's Limit).
2. **Compressed Intelligence**: Using **Pingala's Chandas**, we run complex models on kilobytes of data (e.g., the Universal Corpus is just 190KB), drastically reducing training and inference costs.
3. **Logical vs. Brute Force**: Instead of burning GPU cycles to "guess" the next token, **Panini's Grammar** derives it elegantly using invariant rules.
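As a toy illustration of the prime-factor state tracking described in point 1 (purely illustrative; this is not the actual Bija Hypervisor code), a process history can be encoded as a running product of primes, and any event can be undone exactly by division — no information is destroyed:

```python
# Toy sketch of reversible state tracking via prime factors (illustrative
# only). Each event type maps to a distinct prime; state is the running
# product, so any applied event can be reversed exactly by division.
PRIMES = {"open": 2, "read": 3, "write": 5, "close": 7}

def apply(state: int, event: str) -> int:
    return state * PRIMES[event]

def undo(state: int, event: str) -> int:
    assert state % PRIMES[event] == 0, "event not present in state"
    return state // PRIMES[event]

state = 1
for e in ["open", "read", "write"]:
    state = apply(state, e)
print(state)                 # 2 * 3 * 5 = 30
print(undo(state, "write"))  # 6 -- as if the write never happened
```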
## 🇮🇳 The Sovereign Trinity
1. **Shiva Kernel (`meru.shiva`)**:
- The Bootloader.
- Runs directly on **Sanskrit Bytecode** (`.skt`).
- Only 39 bytes (The Maheshvara Sutras).
2. **Bija Hypervisor (`meru.bija`)**:
- **Reversible Computing**.
- Translates Linux Syscalls into **Prime Factors**.
- Allows "Time Travel Debugging" (Pralaya).
3. **Panini Intelligence (`meru.panini`)**:
- **The Glass Box**.
- A Neuro-Symbolic Model that **understands** language structure.
- Pre-loaded with the **Universal Corpus** (Sanskrit/Greek/German Cognates).
- **Zero Hallucination** on foundational truths.
## 🌍 The Universal Connection
Meru OS includes a compressed **Universal Language Stack** (190KB).
It proves that **Sanskrit, Greek, and German** share the same mathematical roots.
```python
import meru_os
model = meru_os.PaniniLLM()
print(model.query("MOTHER"))
# Output: ['mātṛ (Sanskrit)', 'mētēr (Greek)', 'mutter (German)']
```
## 🚀 Installation
```bash
pip install meru-os
```
## 📜 The Manifesto
We believe that AI Sovereignty comes not from owning GPU clusters, but from owning the **Logic**.
Meru OS is India's contribution to the future of computing—a future that is efficient, interpretable, and aligned with cosmic rhythms.
**Jaya Hind.** 🇮🇳
| text/markdown | Meru AI | sovereign@meru.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | https://github.com/meru-ai/meru-os | null | >=3.8 | [] | [] | [] | [
"torch",
"networkx",
"sympy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.3 | 2026-02-18T18:50:22.095261 | meru_os-1.0.0-py3-none-any.whl | 12,236 | 4b/0c/0b7a85a7774be858b69ba2211e6889b38a297acde649690c0abbf0f59a84/meru_os-1.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 1bbfa6254842bcb45cf17e7cf65f5482 | 978c6c0f9ab9900e09c36b24977c2b8d714b2f8a14ec6753e374cc8180d50a00 | 4b0c0b7a85a7774be858b69ba2211e6889b38a297acde649690c0abbf0f59a84 | null | [] | 29 |
2.4 | fastembed-vectorstore | 0.5.3 | In-memory vector store with fastembed | # FastEmbed VectorStore
[](https://github.com/sauravniraula/fastembed_vectorstore)
A high-performance, in-memory vector store with FastEmbed integration for Python applications.
## Supported Embedding Models
The library supports a wide variety of embedding models:
- **BGE Models**: BGEBaseENV15, BGELargeENV15, BGESmallENV15 (with quantized variants)
- **Nomic Models**: NomicEmbedTextV1, NomicEmbedTextV15 (with quantized variants)
- **GTE Models**: GTEBaseENV15, GTELargeENV15 (with quantized variants)
- **Multilingual Models**: MultilingualE5Small, MultilingualE5Base, MultilingualE5Large
- **Specialized Models**: ClipVitB32, JinaEmbeddingsV2BaseCode, ModernBertEmbedLarge
- **And many more...**
## Installation
### Prerequisites
- Python 3.8 or higher
### Install from PyPI
```bash
pip install fastembed-vectorstore
```
### From Source
1. Clone the repository:
```bash
git clone https://github.com/sauravniraula/fastembed_vectorstore.git
cd fastembed_vectorstore
```
2. Install the package:
```bash
pip install -e .
```
## Quick Start
```python
from fastembed_vectorstore import FastembedVectorstore, FastembedEmbeddingModel
# Initialize with a model
model = FastembedEmbeddingModel.BGESmallENV15
vectorstore = FastembedVectorstore(model)
# Optional Configurations
# vectorstore = FastembedVectorstore(
# model,
# show_download_progress=False, # default: True
# cache_directory="fastembed_cache", # default: fastembed_cache
# )
# Add documents
documents = [
"The quick brown fox jumps over the lazy dog",
"A quick brown dog jumps over the lazy fox",
"The lazy fox sleeps while the quick brown dog watches",
"Python is a programming language",
"Rust is a systems programming language"
]
# Embed and store documents
success = vectorstore.embed_documents(documents)
print(f"Documents embedded: {success}")
# Search for similar documents
query = "What is Python?"
results = vectorstore.search(query, n=3)
for doc, similarity in results:
print(f"Document: {doc}")
print(f"Similarity: {similarity:.4f}")
print("---")
# Save the vector store
vectorstore.save("my_vectorstore.json")
# Load the vector store later
loaded_vectorstore = FastembedVectorstore.load(model, "my_vectorstore.json")
# Optional Configurations
# loaded_vectorstore = FastembedVectorstore.load(
# model,
# "my_vectorstore.json",
# show_download_progress=False, # default: True
# cache_directory="fastembed_cache", # default: fastembed_cache
# )
```
## API Reference
### FastembedEmbeddingModel
Enum containing all supported embedding models. Choose based on your use case:
- **Small models**: Faster, lower memory usage (e.g., `BGESmallENV15`)
- **Base models**: Balanced performance (e.g., `BGEBaseENV15`)
- **Large models**: Higher quality embeddings (e.g., `BGELargeENV15`)
- **Quantized models**: Reduced memory usage (e.g., `BGESmallENV15Q`)
### FastembedVectorstore
#### Constructor
```python
vectorstore = FastembedVectorstore(
model: FastembedEmbeddingModel,
show_download_progress: bool | None = ...,
cache_directory: str | os.PathLike[str] | None = ...,
)
```
Args:
- `model`: Embedding model to use.
- `show_download_progress`: Whether to show model download progress. Defaults to True.
- `cache_directory`: Directory to cache/download model files. Defaults to `./fastembed_cache`.
#### Methods
##### `embed_documents(documents: List[str]) -> bool`
Embeds a list of documents and stores them in the vector store.
##### `search(query: str, n: int) -> List[Tuple[str, float]]`
Searches for the most similar documents to the query. Returns a list of tuples containing (document, similarity_score).
##### `save(path: str) -> bool`
Saves the vector store to a JSON file.
##### `load(model: FastembedEmbeddingModel, path: str) -> FastembedVectorstore`
Loads a vector store from a JSON file.
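Conceptually, a `search` call amounts to ranking stored embeddings by cosine similarity against the query embedding. A minimal pure-Python sketch of that ranking step (illustrative only; not the library's internals, which are vectorized):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec, store, n):
    """Return the n (document, similarity) pairs most similar to query_vec.

    `store` is a list of (document, embedding) pairs."""
    scored = [(doc, cosine(query_vec, vec)) for doc, vec in store]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:n]

store = [("python doc", [1.0, 0.0]), ("rust doc", [0.0, 1.0])]
print(search([0.9, 0.1], store, n=1))  # top hit: "python doc"
```

This also explains the memory trade-off noted below: every embedding must stay resident so each query can be scored against the full collection.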
## Performance Considerations
- **Memory Usage**: All embeddings are stored in memory, so consider the size of your document collection
- **Model Selection**: Smaller models are faster but may have lower quality embeddings
- **Batch Processing**: The `embed_documents` method processes documents in batches for efficiency
## Use Cases
- **Semantic Search**: Find documents similar to a query
- **Document Clustering**: Group similar documents together
- **Recommendation Systems**: Find similar items or content
- **Question Answering**: Retrieve relevant context for Q&A systems
- **Content Discovery**: Help users find related content
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://github.com/sauravniraula/fastembed_vectorstore/blob/main/LICENSE) file for details.
## Author
- **Saurav Niraula** - [sauravniraula](https://github.com/sauravniraula)
- Email: developmentsaurav@gmail.com
## Acknowledgments
- Built with [FastEmbed](https://github.com/qdrant/fastembed) for efficient text embeddings | text/markdown | sauravniraula | sauravniraula <developmentsaurav@gmail.com> | null | null | null | fastembed, vectorstore, python, embedding, vector database, similarity search | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"fastembed",
"numpy"
] | [] | [] | [] | [
"Repository, https://github.com/sauravniraula/fastembed_vectorstore",
"Issues, https://github.com/sauravniraula/fastembed_vectorstore/issues"
] | uv/0.8.3 | 2026-02-18T18:49:16.338883 | fastembed_vectorstore-0.5.3.tar.gz | 4,813 | 18/d9/02b99231016586081f6f1200ebf592a64ea860b1c37403a50ad0b6c7db21/fastembed_vectorstore-0.5.3.tar.gz | source | sdist | null | false | 800bcddd8e7d000aaded63cc73f08744 | 0c00aa9886840d77f11e637185c34edc7ec3e1aa3e319dd4c3268141d21aec8e | 18d902b99231016586081f6f1200ebf592a64ea860b1c37403a50ad0b6c7db21 | Apache-2.0 | [] | 284 |
2.4 | demandlib | 0.2.3a1 | Creating heat and power demand profiles from annual values | ========
Overview
========
.. start-badges
.. list-table::
:stub-columns: 1
* - docs
- |docs|
* - tests
- | |tox-pytest| |tox-checks| |coveralls|
* - package
- | |version| |wheel| |supported-versions| |supported-implementations| |commits-since| |packaging|
.. |tox-pytest| image:: https://github.com/oemof/oemof-demand/workflows/tox%20pytests/badge.svg
:target: https://github.com/oemof/oemof-demand/actions?query=workflow%3A%22tox+checks%22
.. |tox-checks| image:: https://github.com/oemof/oemof-demand/workflows/tox%20checks/badge.svg?branch=dev
:target: https://github.com/oemof/oemof-demand/actions?query=workflow%3A%22tox+checks%22
.. |packaging| image:: https://github.com/oemof/oemof-demand/workflows/packaging/badge.svg?branch=dev
:target: https://github.com/oemof/oemof-demand/actions?query=workflow%3Apackaging
.. |docs| image:: https://readthedocs.org/projects/oemof-demand/badge/?style=flat
:target: https://oemof-demand.readthedocs.io/
:alt: Documentation Status
.. |coveralls| image:: https://coveralls.io/repos/oemof/oemof-demand/badge.svg?branch=dev&service=github
:alt: Coverage Status
:target: https://coveralls.io/github/oemof/oemof-demand?branch=dev
.. |version| image:: https://img.shields.io/pypi/v/oemof-demand.svg
:alt: PyPI Package latest release
:target: https://pypi.org/project/oemof-demand
.. |wheel| image:: https://img.shields.io/pypi/wheel/oemof-demand.svg
:alt: PyPI Wheel
:target: https://pypi.org/project/oemof-demand
.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/oemof-demand.svg
:alt: Supported versions
:target: https://pypi.org/project/oemof-demand
.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/oemof-demand.svg
:alt: Supported implementations
:target: https://pypi.org/project/oemof-demand
.. |commits-since| image:: https://img.shields.io/github/commits-since/oemof/oemof-demand/latest/dev
:alt: Commits since latest release
:target: https://github.com/oemof/oemof-demand/compare/master...dev
.. end-badges
Creating heat and power demand profiles from annual values.
* Free software: MIT license
Installation
============
::
pip install oemof-demand
You can also install the in-development version with::
pip install https://github.com/oemof/oemof-demand/archive/master.zip
Documentation
=============
https://oemof-demand.readthedocs.io/
Development
===========
To run all the tests run::
tox
Note: to combine the coverage data from all the tox environments, run:

.. list-table::
    :widths: 10 90
    :stub-columns: 1

    * - Windows
      - ::

            set PYTEST_ADDOPTS=--cov-append
            tox

    * - Other
      - ::

            PYTEST_ADDOPTS=--cov-append tox
| text/x-rst | null | oemof developer group <contact@oemof.org> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: Unix",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Oper... | [] | null | null | >=3.10 | [] | [] | [] | [
"oemof.demand"
] | [] | [] | [] | [
"Changelog, https://oemof-demand.readthedocs.io/en/latest/changelog.html",
"Documentation, https://oemof-demand.readthedocs.io/",
"Homepage, https://github.com/oemof/oemof-demand",
"Issue Tracker, https://github.com/oemof/oemof-demand/issues/"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T18:48:20.368850 | demandlib-0.2.3a1.tar.gz | 5,577 | e5/2b/2a03f71376f2ffaf48007858db1a8fa4715b186c4cb0f9dd924ae618b40d/demandlib-0.2.3a1.tar.gz | source | sdist | null | false | 2999800be38ebfc57929025b5fca848f | 6c10d40534ee14ee9a06ec5b80e02d0eb6ed68ee8c38494434ecd3c0d7116d8d | e52b2a03f71376f2ffaf48007858db1a8fa4715b186c4cb0f9dd924ae618b40d | null | [
"LICENSE"
] | 217 |
2.4 | FlaskBB | 2.2.0 | A classic Forum Software in Python using Flask. | # FlaskBB
[](https://github.com/flaskbb/flaskbb/actions/workflows/tests.yml)
[](https://https://github.com/flaskbb/flaskbb)
[](https://matrix.to/#/#flaskbb:matrix.org)
*FlaskBB is a Forum Software written in Python using the micro framework Flask.*
Currently, the following features are implemented:
* Private Messages
* Admin Interface
* Group based permissions
* Markdown Support
* Topic Tracker
* Unread Topics/Forums
* i18n Support
* Completely Themeable
* Plugin System
* Command Line Interface
## Quickstart
For a complete installation guide please visit the installation documentation
[here](https://flaskbb.readthedocs.org/en/latest/installation.html).
This is how you set up a development instance of FlaskBB:
* Create a virtualenv
* Configuration
* `make devconfig`
* Install dependencies and FlaskBB
* `make install`
* Run the development server
* `make run`
* Visit [localhost:5000](http://localhost:5000)
## License
FlaskBB is licensed under the [BSD License](https://github.com/flaskbb/flaskbb/blob/master/LICENSE).
# Links
* [Project Website](https://github.com/flaskbb/flaskbb)
* [Documentation](https://flaskbb.readthedocs.io)
* [Source Code](https://github.com/flaskbb/flaskbb)
| text/markdown | null | Peter Justin <peter.justin@outlook.com> | null | Peter Justin <peter.justin@outlook.com> | null | community, flask, forum, social | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Flask",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
... | [] | null | null | >=3.12 | [] | [] | [] | [
"alembic>=1.18.1",
"attrs>=25.4.0",
"babel>=2.17.0",
"blinker>=1.9",
"celery>=5.6.2",
"click-log>=0.4.0",
"click-plugins>=1.1.1.2",
"click-repl>=0.3.0",
"click>=8.3.1",
"email-validator>=2.3.0",
"flask-alembic>=3.2.0",
"flask-allows2>=1.1.0",
"flask-babelplus>=2.4.1",
"flask-caching>=2.3.1... | [] | [] | [] | [
"Homepage, https://github.com/flaskbb/flaskbb",
"Documentation, https://flaskbb.readthedocs.io/en/latest/",
"Repository, https://github.com/flaskbb/flaskbb",
"Issues, https://github.com/flaskbb/flaskbb",
"Changelog, https://github.com/flaskbb/flaskbb/blob/master/CHANGES"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T18:48:04.891893 | flaskbb-2.2.0.tar.gz | 5,801,668 | 3b/d0/612d9eaa667110345216ce589175e7a7c72126bdd10612b0f065296574ea/flaskbb-2.2.0.tar.gz | source | sdist | null | false | f61c88d20a7254c6ca1f1e081d38f1db | 3a24ac17dabe6872ff6ccca4a6eb51fcf557974ffb90d7a24ceb730b311a608c | 3bd0612d9eaa667110345216ce589175e7a7c72126bdd10612b0f065296574ea | BSD-3-Clause | [
"LICENSE"
] | 0 |
2.4 | NuCS | 9.1.2 | A Numpy and Numba based Python library for solving Constraint Satisfaction Problems over finite domains | 








## TLDR
NuCS is a Python library for solving Constraint Satisfaction and Optimization Problems.
Because it is 100% written in Python,
NuCS is easy to install and lets you model complex problems in a few lines of code.
The NuCS solver is also very fast because it is powered by [Numpy](https://numpy.org/) and [Numba](https://numba.pydata.org/).
## Installation
```bash
pip install nucs
```
## Documentation
Check out [NuCS documentation](https://nucs.readthedocs.io/).
## With NuCS, in a few seconds you can ...
### Find all 14200 solutions to the [12-queens problem](https://www.csplib.org/Problems/prob054/)
```bash
NUMBA_CACHE_DIR=.numba/cache python -m nucs.examples.queens -n 12
```

### Compute the 92 solutions to the [BIBD(8,14,7,4,3) problem](https://www.csplib.org/Problems/prob028/)
```bash
NUMBA_CACHE_DIR=.numba/cache python -m nucs.examples.bibd -v 8 -b 14 -r 7 -k 4 -l 3
```

### Demonstrate that the optimal [10-marks Golomb ruler](https://www.csplib.org/Problems/prob006/) length is 55
```bash
NUMBA_CACHE_DIR=.numba/cache python -m nucs.examples.golomb -n 10
```

| text/markdown | null | Yan Georget <yan.georget@gmail.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: P... | [] | null | null | >=3.11 | [] | [] | [] | [
"numba==0.64.0rc1",
"numpy==2.4.2",
"rich"
] | [] | [] | [] | [
"Homepage, https://github.com/yangeorget/nucs",
"Issues, https://github.com/yangeorget/nucs/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:47:19.938116 | nucs-9.1.2.tar.gz | 60,825 | 67/6c/63465ce50e48317fcf68cebae7db205609e1dedf6a1d0490e9ed67f30755/nucs-9.1.2.tar.gz | source | sdist | null | false | af50d978886487a7ad7c9e4ef9a74a64 | 67d8ed02f34c484570e50a2149928e68d7ccd86db0a5c9a6b1cd564251f3cf0c | 676c63465ce50e48317fcf68cebae7db205609e1dedf6a1d0490e9ed67f30755 | null | [
"LICENSE.md"
] | 0 |
2.4 | allen-powerplatform-client | 0.1.4 | Power Platform Connector | No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator)
| text/markdown | OpenAPI Generator community | team@openapitools.org | null | null | null | OpenAPI, OpenAPI-Generator, Power Platform Connector | [] | [] | null | null | null | [] | [] | [] | [
"urllib3<3.0.0,>=2.1.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T18:46:14.749502 | allen_powerplatform_client-0.1.4.tar.gz | 26,126 | 62/5c/a469cb788ec55ee5db8eaeac7b0c30b0cb4ab9390ef3f82c63226d4054e8/allen_powerplatform_client-0.1.4.tar.gz | source | sdist | null | false | 169c949aab69a11fd7b48c7bf64d65f2 | 94e78f4f52d0a689e8a01fd7bf395d1d8bb6e7ac7bfe3fddaf90efd6fd271d92 | 625ca469cb788ec55ee5db8eaeac7b0c30b0cb4ab9390ef3f82c63226d4054e8 | null | [] | 261 |
2.4 | authzed | 1.24.3 | Client library for SpiceDB. | # Authzed Python Client
[](https://pypi.org/project/authzed)
[](https://www.apache.org/licenses/LICENSE-2.0.html)
[](https://github.com/authzed/authzed-py/actions)
[](https://groups.google.com/g/authzed-oss)
[](https://discord.gg/jTysUaxXzM)
[](https://twitter.com/authzed)
This repository houses the Python client library for Authzed.
[Authzed] is a database and service that stores, computes, and validates your application's permissions.
Developers create a schema that models their permissions requirements and use a client library, such as this one, to apply the schema to the database, insert data into the database, and query the data to efficiently check permissions in their applications.
Supported client API versions:
- [v1](https://docs.authzed.com/reference/api#authzedapiv1) - Core SpiceDB API for permissions checks, schema management, and relationship operations
- [materialize/v0](https://buf.build/authzed/api/docs/main:authzed.api.materialize.v0) - Materialize API for building materialized permission views
You can find more info on each API on the [Authzed API reference documentation].
Additionally, Protobuf API documentation can be found on the [Buf Registry Authzed API repository].
See [CONTRIBUTING.md] for instructions on how to contribute and perform common tasks like building the project and running tests.
[Authzed]: https://authzed.com
[Authzed API Reference documentation]: https://docs.authzed.com/reference/api
[Buf Registry Authzed API repository]: https://buf.build/authzed/api/docs/main
[CONTRIBUTING.md]: CONTRIBUTING.md
## Getting Started
We highly recommend following the **[Protecting Your First App]** guide to learn the latest best practice to integrate an application with Authzed.
If you're interested in examples of a specific version of the API, they can be found in their respective folders in the [examples directory].
[Protecting Your First App]: https://docs.authzed.com/guides/first-app
[examples directory]: /examples
## Basic Usage
### Installation
This project is packaged as the wheel `authzed` on the [Python Package Index].
If you are using [pip], the command to install the library is:
```sh
pip install authzed
```
[Python Package Index]: https://pypi.org/project/authzed
[pip]: https://pip.pypa.io
### Initializing a client
With the exception of [gRPC] utility functions found in `grpcutil`, everything required to connect and make API calls is located in a module respective to API version.
In order to successfully connect, you will have to provide a [Bearer Token] with your own API Token from the [Authzed dashboard] in place of `t_your_token_here_1234567deadbeef` in the following example:
[grpc]: https://grpc.io
[Bearer Token]: https://datatracker.ietf.org/doc/html/rfc6750#section-2.1
[Authzed Dashboard]: https://app.authzed.com
```py
from authzed.api.v1 import Client
from grpcutil import bearer_token_credentials
client = Client(
"grpc.authzed.com:443",
bearer_token_credentials("t_your_token_here_1234567deadbeef"),
)
```
### Performing an API call
```py
from authzed.api.v1 import (
CheckPermissionRequest,
CheckPermissionResponse,
ObjectReference,
SubjectReference,
)
post_one = ObjectReference(object_type="blog/post", object_id="1")
emilia = SubjectReference(object=ObjectReference(
object_type="blog/user",
object_id="emilia",
))
# Is Emilia in the set of users that can read post #1?
resp = client.CheckPermission(CheckPermissionRequest(
resource=post_one,
permission="reader",
subject=emilia,
))
assert resp.permissionship == CheckPermissionResponse.PERMISSIONSHIP_HAS_PERMISSION
```
### Insecure Client Usage
When running in a context like `docker compose`, because of Docker's virtual networking,
the gRPC client sees the SpiceDB container as "remote." It has built-in safeguards to prevent
calling a remote client in an insecure manner, such as using client credentials without TLS.
However, this is a pain when setting up a development or testing environment, so we provide
the `InsecureClient` as a convenience:
```py
from authzed.api.v1 import InsecureClient
client = InsecureClient(
"spicedb:50051",
"my super secret token"
)
```
## Materialize API
The authzed-py library supports the Authzed Materialize API.
The Materialize API allows you to build and maintain materialized views of your permissions data in your own systems for high-performance lookups.
Learn more in the **[Materialize API Quickstart Guide]**, which can be found in the examples directory.
[Materialize API Quickstart Guide]: /examples/materialize/QUICKSTART.md
| text/markdown | Authzed | Authzed <support@authzed.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"grpcio~=1.63",
"protobuf<7,>=5.26",
"grpc-interceptor<0.16,>=0.15.4",
"googleapis-common-protos<2,>=1.65.0",
"protovalidate<1.2.0,>=0.7.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:45:03.447630 | authzed-1.24.3.tar.gz | 150,221 | 79/f9/d3331d382bfcb10f56fb72fc20e81a7d52663c93e41800ed7594f04c63d4/authzed-1.24.3.tar.gz | source | sdist | null | false | 78ee5be843cb69fa6a81c3b893bd166e | dad1a79d1885a424d75c30460b4c05abfb8bbc9b6e2b1e6dd13d9b64d3b7768c | 79f9d3331d382bfcb10f56fb72fc20e81a7d52663c93e41800ed7594f04c63d4 | Apache-2.0 | [] | 3,522 |
2.4 | fastexcel-keye | 0.20.10 | A fast excel file reader for Python, written in Rust (fork with style support) | # `fastexcel`
A fast excel file reader for Python and Rust.
Docs:
* [Python](https://fastexcel.toucantoco.dev/).
* [Rust](https://docs.rs/fastexcel).
## Stability
The Python library is considered production-ready. The API is mostly stable, and we avoid breaking changes as much as
possible. v1.0.0 will be released once the [milestone](https://github.com/ToucanToco/fastexcel/milestone/2) is reached.
> ⚠️ The free-threaded build is still considered experimental
The Rust crate is still experimental, and breaking changes are to be expected.
## Installation
```bash
# Lightweight installation (no PyArrow dependency)
pip install fastexcel
# With Polars support only (no PyArrow needed)
pip install fastexcel[polars]
# With Pandas support (includes PyArrow)
pip install fastexcel[pandas]
# With PyArrow support
pip install fastexcel[pyarrow]
# With all integrations
pip install fastexcel[pandas,polars]
```
## Quick Start
### Modern usage (recommended)
FastExcel supports the [Arrow PyCapsule Interface](https://arrow.apache.org/docs/format/CDataInterface/PyCapsuleInterface.html) for zero-copy data exchange with Arrow-compatible libraries such as Polars, without requiring pyarrow as a dependency.
```python
import fastexcel
# Load an Excel file
reader = fastexcel.read_excel("data.xlsx")
sheet = reader.load_sheet(0) # Load first sheet
# Use with Polars (zero-copy, no pyarrow needed)
import polars as pl
df = pl.DataFrame(sheet) # Direct PyCapsule interface
print(df)
# Or use the to_polars() method (also via PyCapsule)
df = sheet.to_polars()
print(df)
# Or access the raw Arrow data via PyCapsule interface
schema = sheet.__arrow_c_schema__()
array_data = sheet.__arrow_c_array__()
```
### Traditional usage (with pandas/pyarrow)
```python
import fastexcel
reader = fastexcel.read_excel("data.xlsx")
sheet = reader.load_sheet(0)
# Convert to pandas (requires `pandas` extra)
df = sheet.to_pandas()
# Or get pyarrow RecordBatch directly
record_batch = sheet.to_arrow()
```
### Working with tables
```python
import fastexcel
import polars as pl

reader = fastexcel.read_excel("data.xlsx")
# List available tables
tables = reader.table_names()
print(f"Available tables: {tables}")
# Load a specific table
table = reader.load_table("MyTable")
df = pl.DataFrame(table) # Zero-copy via PyCapsule, no pyarrow needed
```
## Key Features
- **Zero-copy data exchange** via [Arrow PyCapsule Interface](https://arrow.apache.org/docs/format/CDataInterface/PyCapsuleInterface.html)
- **Flexible dependencies** - use with Polars (no PyArrow needed) or Pandas (includes PyArrow)
- **Seamless Polars integration** - `pl.DataFrame(sheet)` and `sheet.to_polars()` work without PyArrow via PyCapsule interface
- **High performance** - written in Rust with [calamine](https://github.com/tafia/calamine) and [Apache Arrow](https://arrow.apache.org/)
- **Memory efficient** - lazy loading and optional eager evaluation
- **Type safety** - automatic type inference with manual override options
## Contributing & Development
### Prerequisites
You'll need:
1. **[Rust](https://rustup.rs/)** - Rust stable or nightly
2. **[uv](https://docs.astral.sh/uv/getting-started/installation/)** - Fast Python package manager (will install Python 3.10+ automatically)
3. **[git](https://git-scm.com/)** - For version control
4. **[make](https://www.gnu.org/software/make/)** - For running development commands
**Python Version Management:**
uv handles Python installation automatically. To use a specific Python version:
```bash
uv python install 3.13 # Install Python 3.13
uv python pin 3.13 # Pin project to Python 3.13
```
### Quick Start
```bash
# Clone the repository (or from your fork)
git clone https://github.com/ToucanToco/fastexcel.git
cd fastexcel
# First-time setup: install dependencies, build debug version, and setup pre-commit hooks
make setup-dev
```
Verify your installation by running:
```bash
make
```
This runs a full development cycle: formatting, building, linting, and testing.
### Development Commands
Run `make help` to see all available commands, or use these common ones:
```bash
make all # full dev cycle: format, build, lint, test
make install # install with debug build (daily development)
make install-prod # install with release build (benchmarking)
make test # to run the tests
make lint # to run the linter
make format # to format python and rust code
make doc-serve # to serve the documentation locally
```
### Useful Resources
* [`python/fastexcel/_fastexcel.pyi`](./python/fastexcel/_fastexcel.pyi) - Python API types
* [`python/tests/`](./python/tests) - Comprehensive usage examples
## Benchmarking
For benchmarking, use `make benchmarks` which automatically builds an optimised wheel.
This is required for profiling, as dev mode builds are much slower.
### Speed benchmarks
```bash
make benchmarks
```
### Memory profiling
```bash
mprof run -T 0.01 python python/tests/benchmarks/memory.py python/tests/benchmarks/fixtures/plain_data.xls
```
## Creating a release
1. Create a PR containing a commit that only updates the version in `Cargo.toml`.
2. Once it is approved, squash and merge it into main.
3. Tag the squashed commit, and push it.
4. The `release` GitHub action will take care of the rest.
## Dev tips
* Use `cargo check` to verify that your rust code compiles, no need to go through `maturin` every time
* `cargo clippy` = 💖
* Careful with arrow constructors, they tend to allocate a lot
* [`mprof`](https://github.com/pythonprofilers/memory_profiler) and `time` go a long way for perf checks,
no need to go fancy right from the start
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Rust",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :... | [] | https://github.com/ToucanToco/fastexcel | null | >=3.10 | [] | [] | [] | [
"typing-extensions>=4.0.0; python_full_version < \"3.10\"",
"pandas>=1.4.4; extra == \"pandas\"",
"pyarrow>=8.0.0; extra == \"pandas\"",
"polars>=0.16.14; extra == \"polars\"",
"pyarrow>=8.0.0; extra == \"pyarrow\""
] | [] | [] | [] | [
"Issues, https://github.com/michael-keye/fastexcel-keye/issues",
"Source Code, https://github.com/michael-keye/fastexcel-keye"
] | maturin/1.12.2 | 2026-02-18T18:44:28.969269 | fastexcel_keye-0.20.10.tar.gz | 71,125 | e1/e1/bc0ea03a749ab766eef01be79abca21266939b626108f53d5c0727941ad1/fastexcel_keye-0.20.10.tar.gz | source | sdist | null | false | ce3d8afc380ec1453137183130d06c82 | d28dfead4868674f099e704d598b1a9d9e6efa9eea8a60986fdd43efb9077dba | e1e1bc0ea03a749ab766eef01be79abca21266939b626108f53d5c0727941ad1 | null | [] | 1,295 |
2.4 | cnrgh-dl | 2.0.0 | Download client compatible with the two-factor authentication (2FA) required by the CNRGH download portal. | # cnrgh-dl
`cnrgh-dl` (*CNRGH Download*) is a download client compatible with the **two-factor authentication** (2FA) required by the **CNRGH** download portal.
Its full documentation (prerequisites, installation, usage...) is available at the following link: https://www.cnrgh.fr/data-userdoc/download_with_client/. | text/markdown | Maxime Blanchon | Maxime Blanchon <maxime.blanchon@cnrgh.fr> | null | null | null | CNRGH, download, projects, client, 2FA | [
"License :: OSI Approved :: CEA CNRS Inria Logiciel Libre License, version 2.1 (CeCILL-2.1)",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"... | [] | null | null | >=3.10 | [] | [] | [] | [
"requests<3.0.0,>=2.32.5",
"pydantic<3.0.0,>=2.12.3",
"colorlog<7.0.0,>=6.10.1",
"tqdm<5.0.0,>=4.66.5",
"environs<15.0.0,>=14.3.0",
"platformdirs<5.0.0,>=4.5.0",
"typing-extensions<5.0.0,>=4.12.2",
"urllib3<3.0.0,>=2.5.0",
"psutil<8.0.0,>=7.1.1",
"packaging>=25.0"
] | [] | [] | [] | [
"Documentation, https://www.cnrgh.fr/data-userdoc/download_with_client/",
"Changelog, https://www.cnrgh.fr/data-userdoc/download_with_client/#changelog"
] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T18:44:12.965252 | cnrgh_dl-2.0.0.tar.gz | 36,296 | 6c/07/0d02a7ccfc404e3e8ee749a89c162ba888253874c5ec286894171b7a31e8/cnrgh_dl-2.0.0.tar.gz | source | sdist | null | false | 81c0139023fdf068e598b72cefefde35 | c28ee66396ad7eebb6b7428903b3cbf89dc57b1f7bbffef914dd85f6fd9f47af | 6c070d02a7ccfc404e3e8ee749a89c162ba888253874c5ec286894171b7a31e8 | CECILL-2.1 | [
"LICENSE"
] | 266 |
2.4 | dazense-core | 0.0.40 | dazense Core is your analytics context builder with the best chat interface. | # dazense CLI
Command-line interface for dazense chat.
## Installation
```bash
pip install dazense-core
```
## Build Prerequisites (for `python build.py`)
- Node.js (includes `npm`) on PATH
- Bun on PATH
Verify:
```bash
node -v
npm -v
bun -v
```
## Usage
```bash
dazense --help
Usage: dazense COMMAND
╭─ Commands ────────────────────────────────────────────────────────────────╮
│ chat Start the dazense chat UI. │
│ debug Test connectivity to configured resources. │
│ init Initialize a new dazense project. │
│ sync Sync resources to local files. │
│ test Run and explore dazense tests. │
│ --help (-h) Display this message and exit. │
│ --version Display application version. │
╰───────────────────────────────────────────────────────────────────────────╯
```
### Initialize a new dazense project
```bash
dazense init
```
This will create a new dazense project in the current directory. It will prompt you for a project name and ask you to configure:
- **Database connections** (BigQuery, DuckDB, Databricks, Snowflake, PostgreSQL)
- **Git repositories** to sync
- **LLM provider** (OpenAI, Anthropic, Mistral, Gemini)
- **Slack integration**
- **Notion integration**
The resulting project structure looks like:
```
<project>/
├── dazense_config.yaml
├── .dazenseignore
├── RULES.md
├── databases/
├── queries/
├── docs/
├── semantics/
├── repos/
├── agent/
│ ├── tools/
│ └── mcps/
└── tests/
```
Options:
- `--force` / `-f`: Force re-initialization even if the project already exists
### Start the dazense chat UI
```bash
dazense chat
```
This will start the dazense chat UI. It will open the chat interface in your browser at `http://localhost:5005`.
### Test connectivity
```bash
dazense debug
```
Tests connectivity to all configured databases and LLM providers. Displays a summary table showing connection status and details for each resource.
### Sync resources
```bash
dazense sync
```
Syncs configured resources to local files:
- **Databases** — generates markdown docs (`columns.md`, `preview.md`, `description.md`, `profiling.md`) for each table into `databases/`
- **Git repositories** — clones or pulls repos into `repos/`
- **Notion pages** — exports pages as markdown into `docs/notion/`
After syncing, any Jinja templates (`*.j2` files) in the project directory are rendered with the dazense context.
### Run tests
```bash
dazense test
```
Runs test cases defined as YAML files in `tests/`. Each test has a `name`, `prompt`, and expected `sql`. Results are saved to `tests/outputs/`.
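As a sketch, a test file in `tests/` could look roughly like the following — only the `name`, `prompt`, and `sql` keys are documented above; the table and query shown here are hypothetical:

```yaml
# tests/books_by_author.yaml — hypothetical example
name: books by author
prompt: How many books did each author publish?
sql: |
  SELECT author, COUNT(*) AS n_books
  FROM books
  GROUP BY author
```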
Options:
- `--model` / `-m`: Models to test against (default: `openai:gpt-4.1`). Can be specified multiple times.
- `--threads` / `-t`: Number of parallel threads (default: `1`)
Examples:
```bash
dazense test -m openai:gpt-4.1
dazense test -m openai:gpt-4.1 -m anthropic:claude-sonnet-4-20250514
dazense test --threads 4
```
### Explore test results
```bash
dazense test server
```
Starts a local web server to explore test results in a browser UI showing pass/fail status, token usage, cost, and detailed data comparisons.
Options:
- `--port` / `-p`: Port to run the server on (default: `8765`)
- `--no-open`: Don't automatically open the browser
### BigQuery service account permissions
When you connect BigQuery during `dazense init`, the service account used by `credentials_path`/ADC must be able to list datasets and run read-only queries to generate docs. Grant the account:
- Project: `roles/bigquery.jobUser` (or `roles/bigquery.user`) so the CLI can submit queries
- Each dataset you sync: `roles/bigquery.dataViewer` (or higher) to read tables
The combination above mirrors the typical "BigQuery User" setup and is sufficient for dazense's metadata and preview pulls.
### Snowflake authentication
Snowflake supports three authentication methods during `dazense init`:
- **SSO**: Browser-based authentication (recommended for organizations with SSO policies)
- **Password**: Traditional username/password
- **Key-pair**: Private key file with optional passphrase
## Development
### Building the package
```bash
cd cli
python build.py --help
Usage: build.py [OPTIONS]
Build and package dazense-core CLI.
╭─ Parameters ──────────────────────────────────────────────────────────────────╮
│ --force -f --no-force Force rebuild the server binary │
│ --skip-server -s --no-skip-server Skip server build, only build Python pkg │
│ --bump Bump version (patch, minor, major) │
╰───────────────────────────────────────────────────────────────────────────────╯
```
This will:
1. Build the frontend with Vite
2. Compile the backend with Bun into a standalone binary
3. Bundle everything into a Python wheel in `dist/`
### Installing for development
```bash
cd cli
pip install -e .
```
### Publishing to PyPI
```bash
# Build first
python build.py
# Publish
uv publish dist/*
```
## Architecture
```
dazense chat (CLI command)
↓ spawns
dazense-chat-server (Bun-compiled binary, port 5005)
+ FastAPI server (port 8005)
↓ serves
Backend API + Frontend Static Files
↓
Browser at http://localhost:5005
```
| text/markdown | metazense | null | null | null | null | ai, analytics, chat | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"anthropic>=0.76.0",
"apscheduler>=3.10.0",
"cryptography>=46.0.3",
"cyclopts>=4.4.4",
"dotenv>=0.9.9",
"fastapi>=0.128.0",
"google-genai>=1.61.0",
"ibis-framework[bigquery,databricks,duckdb,mssql,postgres,snowflake]>=9.0.0",
"jinja2>=3.1.0",
"mistralai>=1.11.1",
"notion-client>=2.7.0",
"notio... | [] | [] | [] | [
"Homepage, https://dazense.metazense.com",
"Repository, https://github.com/metazense/dazense"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:42:58.645990 | dazense_core-0.0.40-py3-none-manylinux2014_aarch64.whl | 43,060,639 | b0/78/5650aa5ffe44438a78571f412f905644de5a18f9b6a9430f438259d04532/dazense_core-0.0.40-py3-none-manylinux2014_aarch64.whl | py3 | bdist_wheel | null | false | 9970b6fa1f3d859166603efda18c34ca | 77b8ecadbbac4076ecb4282936507d8953c9d8a0ebc4642222ee98760e3a7f3f | b0785650aa5ffe44438a78571f412f905644de5a18f9b6a9430f438259d04532 | Apache-2.0 | [
"LICENSE"
] | 345 |
2.4 | django-ai-support | 0.1.11 | AI support for your django app. | # django-ai-support
This library is powered by LangChain and LangGraph to make it easy to add AI support to your online shop or any other website you build with Django.
[This is the Agent workflow used](https://langchain-ai.github.io/langgraph/tutorials/workflows/#agent)
## Get started
### settings
Settings should be something like this
```python
AI_SUPPORT_SETTINGS = {
"TOOLS": [],
"SYSTEM_PROMPT": "You are the supporter of a bookstore website.",
"LLM_MODEL": model,
}
```
`TOOLS`: the list of tools available to your AI model.
`SYSTEM_PROMPT`: the system prompt for your AI support. The default is: ```You are the supporter of a bookstore website.```
`LLM_MODEL`: your chat model, for example:
```python
model = init_chat_model("gemini-2.5-flash", model_provider="google_genai")
```
[see more](https://python.langchain.com/docs/integrations/chat/)
*Be careful*: `LLM_MODEL` cannot be `None`.
### config
Additionally, you can define tools within your apps and add them to `TOOLS`.
For example, I have an app named `book`, and this is inside my `tools.py` file:
```python
from langchain_core.tools import tool
from .models import Book
@tool(description="get the list of books with a given price")
def get_books_with_price(price: int) -> str:
"""
get book with input price
Args:
price (int): price. in dollars
Returns:
str: list of books.
"""
text = ""
for book in Book.objects.filter(price=price).select_related("author"):
book_detail = f"book name: {book.name}, book price: {book.price}, Author: {book.author.first_name} {book.author.last_name}"
text += book_detail + "\n"
return text
```
After that, I can add this tool easily. This is inside my `apps.py` file:
```python
from django.apps import AppConfig
class BookConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'book'
def ready(self):
from .tools import get_books_with_price as my_tool
from django_ai_support.conf import append_tools
append_tools([my_tool])
```
[read more about tool calling in langchain](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/#dynamically-select-tools)
### API
Now, you can use the chat API to talk with your AI support:
```python
from django.urls import path
from django_ai_support.views import ChatAiSupportApi
urlpatterns = [
path("ai/", ChatAiSupportApi.as_view())
]
```

| text/markdown | null | EmadDeve20 <emaddeve20@gmail.com> | null | null | null | null | [
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.0",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Languag... | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=4.0",
"djangorestframework>=3.13.1",
"langchain>=1.2.7",
"langgraph>=1.0.7",
"drf-spectacular>=0.24.2",
"langgraph-checkpoint-redis==0.3.4",
"langgraph-checkpoint-mongodb==0.3.1",
"langgraph-checkpoint-postgres==3.0.4",
"psycopg==3.3.2",
"psycopg-binary==3.3.2",
"psycopg-pool==3.3.0",
... | [] | [] | [] | [
"Homepage, https://github.com/EmadDeve20/django-ai-support",
"Issues, https://github.com/EmadDeve20/django-ai-support/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:42:33.306108 | django_ai_support-0.1.11.tar.gz | 19,779 | c8/e1/02b6d48e2b860d6d2fbf602053c1abc94d45c4f156595d5a70ece689f3e2/django_ai_support-0.1.11.tar.gz | source | sdist | null | false | 6f3825d1c693bf79d9b42e87bdf9b8f6 | fc94d8d93d2c7eb401f8865d2db57a5cd30e335a34c8b44e20e7dae098825c0d | c8e102b6d48e2b860d6d2fbf602053c1abc94d45c4f156595d5a70ece689f3e2 | GPL-3.0-or-later | [
"LICENSE"
] | 252 |
2.4 | django-silky | 1.0.4 | Silky smooth profiling for the Django Framework — modernized UI fork of django-silk | # django-silky
[](https://pypi.org/project/django-silky/)
[](https://pypi.org/project/django-silky/)
[](https://pypi.org/project/django-silky/)
[](https://opensource.org/licenses/MIT)
**django-silky** is a modernized-UI fork of [django-silk](https://github.com/jazzband/django-silk) — a live profiling and inspection tool for the Django framework.
It keeps 100 % of the original functionality (request/response recording, SQL inspection, code profiling, dynamic profiling) while shipping a fully redesigned interface built on CSS custom properties (light **and** dark mode), Lucide icons, an inline filter bar, multi-column sort chips, and proper pagination.
---
## What's different from django-silk?
| Feature | django-silk | django-silky |
|---|---|---|
| Theme | Fixed dark nav, light body | Full light/dark toggle, persisted in `localStorage` |
| Filter UI | 300 px slide-out drawer | Inline collapsible filter bar |
| Sort | Single column, GET param | Multi-column sort chips (session-persisted) |
| Pagination | Query-slice (`LIMIT N`) | Real Django `Paginator` with prev/next/page numbers |
| Detail pages | Plain tables | Hero bar + metric pills + section cards |
| Icons | Font Awesome (CDN) | Lucide (self-hosted, no external requests) |
| URL sharing | State lost on reload | Sort + per-page encoded in URL; filter bar state in `localStorage` |
---
## Migrating from django-silk
`django-silky` is a drop-in replacement — same app label (`silk`), same
database schema (migrations 0001 – 0008), all your existing data is retained.
```bash
pip uninstall django-silk
pip install django-silky
# No manage.py migrate needed — schema is identical
```
For full instructions, version compatibility details, and rollback steps see
**[MIGRATING.md](MIGRATING.md)**.
---
## Requirements
* Django 4.2, 5.1, 5.2, 6.0
* Python 3.10, 3.11, 3.12, 3.13, 3.14
---
## Installation
```bash
pip install django-silky
```
With optional request body formatting:
```bash
pip install django-silky[formatting]
```
### settings.py
```python
MIDDLEWARE = [
...
'silk.middleware.SilkyMiddleware',
...
]
TEMPLATES = [{
...
'OPTIONS': {
'context_processors': [
...
'django.template.context_processors.request',
],
},
}]
INSTALLED_APPS = [
...
'silk',
]
```
> **Middleware order:** Any middleware placed *before* `SilkyMiddleware` that returns a response without calling `get_response` will prevent Silk from running. If you use `django.middleware.gzip.GZipMiddleware`, place it **before** `SilkyMiddleware`.
### urls.py
```python
from django.urls import include, path
urlpatterns += [
path('silk/', include('silk.urls', namespace='silk')),
]
```
### Migrate and collect static
```bash
python manage.py migrate
python manage.py collectstatic
```
The UI is now available at `/silk/`.
---
## Features
### Request Inspection
Silk's middleware records every HTTP request and response — method, status code, path, timing, SQL query count, and headers/bodies — and presents them in a filterable, sortable, paginated table.
### SQL Inspection
Every SQL query executed during a request is captured with its execution time, tables involved, number of joins, and a full stack trace so you can see exactly where in your code it was triggered.
### Code Profiling
#### Decorator / context manager
```python
from silk.profiling.profiler import silk_profile
@silk_profile(name='View Blog Post')
def post(request, post_id):
p = Post.objects.get(pk=post_id)
return render(request, 'post.html', {'post': p})
```
```python
def post(request, post_id):
with silk_profile(name='View Blog Post #%d' % post_id):
p = Post.objects.get(pk=post_id)
return render(request, 'post.html', {'post': p})
```
#### cProfile integration
```python
SILKY_PYTHON_PROFILER = True
SILKY_PYTHON_PROFILER_BINARY = True # also save .prof files
```
When enabled, a call-graph coloured by time is shown on the profile detail page.
#### Dynamic profiling
Profile third-party code without touching its source:
```python
SILKY_DYNAMIC_PROFILING = [{
'module': 'path.to.module',
'function': 'MyClass.bar',
}]
```
### Code Generation
Silk generates a `curl` command and a Django test-client snippet for every request, making it easy to replay a captured request from the terminal or a unit test.
---
## Configuration
### Authentication
```python
SILKY_AUTHENTICATION = True # user must be logged in
SILKY_AUTHORISATION = True # user must have is_staff=True (default)
# Custom permission check:
SILKY_PERMISSIONS = lambda user: user.is_superuser
```
### Request / response body limits
```python
SILKY_MAX_REQUEST_BODY_SIZE = -1 # -1 = no limit
SILKY_MAX_RESPONSE_BODY_SIZE = 1024 # bytes; larger bodies are discarded
```
### Sampling (high-traffic sites)
```python
SILKY_INTERCEPT_PERCENT = 50 # record only 50 % of requests
# or
SILKY_INTERCEPT_FUNC = lambda request: 'profile' in request.session
```
### Garbage collection
```python
SILKY_MAX_RECORDED_REQUESTS = 10_000
SILKY_MAX_RECORDED_REQUESTS_CHECK_PERCENT = 10 # GC runs on 10 % of requests
```
Trigger manually (e.g. from a cron job):
```bash
python manage.py silk_request_garbage_collect
```
Clear all data immediately:
```bash
python manage.py silk_clear_request_log
```
### Query analysis
```python
SILKY_ANALYZE_QUERIES = True
SILKY_EXPLAIN_FLAGS = {'format': 'JSON', 'costs': True}
```
> **Warning:** `EXPLAIN ANALYZE` on PostgreSQL actually executes the query, which may cause unintended side effects. Use with caution.
### Meta-profiling
```python
SILKY_META = True # shows how long Silk itself takes per request
```
### Sensitive data masking
```python
# Default set — case insensitive
SILKY_SENSITIVE_KEYS = {'username', 'api', 'token', 'key', 'secret', 'password', 'signature'}
```
### Custom profiler storage
```python
# Django >= 4.2
STORAGES = {
'SILKY_STORAGE': {
'BACKEND': 'path.to.StorageClass',
},
}
SILKY_PYTHON_PROFILER_RESULT_PATH = '/path/to/profiles/'
```
---
## Development
```bash
git clone https://github.com/VaishnavGhenge/django-silky.git
cd django-silky
python -m venv .venv && source .venv/bin/activate
pip install -e ".[formatting]"
pip install -r project/requirements.txt
# Run the example project
DB_ENGINE=sqlite3 python project/manage.py migrate
DB_ENGINE=sqlite3 python project/manage.py runserver
# Visit http://127.0.0.1:8000/silk/ (login: admin / admin)
# Watch SCSS while editing UI
npx gulp watch
# Run tests
DB_ENGINE=sqlite3 python -m pytest project/tests/ -q
```
---
## Credits
django-silky is a fork of [django-silk](https://github.com/jazzband/django-silk), originally created by [Michael Ford](https://github.com/mtford90) and maintained by [Jazzband](https://jazzband.co/). All core profiling functionality comes from the upstream project; this fork focuses solely on UI improvements.
---
## License
MIT — same as the upstream [django-silk](https://github.com/jazzband/django-silk).
| text/markdown | Michael Ford | mtford@gmail.com | Vaishnav Ghenge | null | MIT License | null | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating... | [] | https://github.com/VaishnavGhenge/django-silky | null | >=3.10 | [] | [] | [] | [
"Django>=4.2",
"sqlparse",
"gprof2dot>=2017.09.19",
"autopep8; extra == \"formatting\""
] | [] | [] | [] | [
"Source, https://github.com/VaishnavGhenge/django-silky",
"Bug Tracker, https://github.com/VaishnavGhenge/django-silky/issues",
"Migration Guide, https://github.com/VaishnavGhenge/django-silky/blob/master/MIGRATING.md",
"Upstream, https://github.com/jazzband/django-silk"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T18:41:53.431953 | django_silky-1.0.4.tar.gz | 4,719,607 | b3/6c/5cf079e0622757eaff642728a7397dccdb4380883cc068ce33e9e3c41bba/django_silky-1.0.4.tar.gz | source | sdist | null | false | 6dbc92d3e213b2d61c58d8f40370b4ac | 60bf0827ed9a148910da3b687ca5459c37221529d485dda47e0b3deaaf3d6277 | b36c5cf079e0622757eaff642728a7397dccdb4380883cc068ce33e9e3c41bba | null | [
"LICENSE"
] | 251 |
2.4 | numba | 0.64.0 | compiling Python code using LLVM | *****
Numba
*****
.. image:: https://badges.gitter.im/numba/numba.svg
:target: https://gitter.im/numba/numba?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
:alt: Gitter
.. image:: https://img.shields.io/badge/discuss-on%20discourse-blue
:target: https://numba.discourse.group/
:alt: Discourse
.. image:: https://zenodo.org/badge/3659275.svg
:target: https://zenodo.org/badge/latestdoi/3659275
:alt: Zenodo DOI
.. image:: https://img.shields.io/pypi/v/numba.svg
:target: https://pypi.python.org/pypi/numba/
:alt: PyPI
.. image:: https://dev.azure.com/numba/numba/_apis/build/status/numba.numba?branchName=main
:target: https://dev.azure.com/numba/numba/_build/latest?definitionId=1&branchName=main
:alt: Azure Pipelines
A Just-In-Time Compiler for Numerical Functions in Python
#########################################################
Numba is an open source, NumPy-aware optimizing compiler for Python sponsored
by Anaconda, Inc. It uses the LLVM compiler project to generate machine code
from Python syntax.
Numba can compile a large subset of numerically-focused Python, including many
NumPy functions. Additionally, Numba has support for automatic
parallelization of loops, generation of GPU-accelerated code, and creation of
ufuncs and C callbacks.
For more information about Numba, see the Numba homepage:
https://numba.pydata.org and the online documentation:
https://numba.readthedocs.io/en/stable/index.html
Installation
============
Please follow the instructions:
https://numba.readthedocs.io/en/stable/user/installing.html
Demo
====
Please have a look at the demo notebooks via the mybinder service:
https://mybinder.org/v2/gh/numba/numba-examples/master?filepath=notebooks
Contact
=======
Numba has a discourse forum for discussions:
* https://numba.discourse.group
| null | null | null | null | null | BSD | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.1... | [] | https://numba.pydata.org | null | >=3.10 | [] | [] | [] | [
"llvmlite<0.47,>=0.46.0dev0",
"numpy<2.5,>=1.22"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:41:20.974750 | numba-0.64.0.tar.gz | 2,765,679 | 23/c9/a0fb41787d01d621046138da30f6c2100d80857bf34b3390dd68040f27a3/numba-0.64.0.tar.gz | source | sdist | null | false | 5c00a95c0155101e98b7357622c15251 | 95e7300af648baa3308127b1955b52ce6d11889d16e8cfe637b4f85d2fca52b1 | 23c9a0fb41787d01d621046138da30f6c2100d80857bf34b3390dd68040f27a3 | null | [
"LICENSE",
"LICENSES.third-party"
] | 2,513,708 |
2.1 | mrtmavlink | 30.25 | MRT Mavlink Python Bindings | # mrtmavlink-py3
Autogenerated Python3 bindings for Magothy.xml
Requires Python3 >= 3.8
| text/markdown | null | null | null | Nathan Knotts <nathan@magothyrt.com> | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"wheel"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:37:32.980338 | mrtmavlink-30.25.tar.gz | 228,593 | e3/c2/9d59c929afe6bf236afc3618251dea5442d65ebfd8d6303525a471cc0c44/mrtmavlink-30.25.tar.gz | source | sdist | null | false | 52479e32631fe238ecbbc48a804d4b20 | bde9b17d4d7dbb4607499830c7788a305d785bf974ed300df6a8c549c4845d56 | e3c29d59c929afe6bf236afc3618251dea5442d65ebfd8d6303525a471cc0c44 | null | [] | 252 |
2.4 | amplify-excel-migrator | 1.5.3 | A CLI tool to migrate Excel data to AWS Amplify | # Amplify Excel Migrator
[](https://badge.fury.io/py/amplify-excel-migrator)
[](https://pypi.org/project/amplify-excel-migrator/)
[](https://pepy.tech/project/amplify-excel-migrator)
[](https://opensource.org/licenses/MIT)
A CLI tool to migrate data from Excel files to AWS Amplify GraphQL API.
Developed for the MECO project - https://github.com/sworgkh/meco-observations-amplify
## Installation
### From PyPI (Recommended)
Install the latest stable version from PyPI:
```bash
pip install amplify-excel-migrator
```
### From Source
Clone the repository and install:
```bash
git clone https://github.com/EyalPoly/amplify-excel-migrator.git
cd amplify-excel-migrator
pip install .
```
## Usage
The tool has five subcommands:
### 1. Configure (First Time Setup)
Save your AWS Amplify configuration:
```bash
amplify-migrator config
```
This will prompt you for:
- Excel file path
- AWS Amplify API endpoint
- AWS Region
- Cognito User Pool ID
- Cognito Client ID
- Admin username
Configuration is saved to `~/.amplify-migrator/config.json`
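For illustration, the saved file might look roughly like this — the key names are hypothetical (inspect your own `~/.amplify-migrator/config.json` for the actual schema), with values mirroring the prompts above:

```json
{
  "excel_file_path": "my-data.xlsx",
  "api_endpoint": "https://xxx.appsync-api.us-east-1.amazonaws.com/graphql",
  "region": "us-east-1",
  "user_pool_id": "us-east-1_xxxxx",
  "client_id": "your-client-id",
  "username": "admin@example.com"
}
```

Note that no password appears in the file — as described under *Run Migration*, passwords are never cached.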
### 2. Show Configuration
View your current saved configuration:
```bash
amplify-migrator show
```
### 3. Export Schema
Export your GraphQL schema to a markdown reference document:
```bash
# Export all models
amplify-migrator export-schema
# Export to a specific file
amplify-migrator export-schema --output my-schema.md
# Export specific models only
amplify-migrator export-schema --models User Post Comment
```
This generates a comprehensive markdown document with:
- All model fields with types and requirements
- Enum definitions
- Custom type structures
- Foreign key relationships
- Excel formatting guidelines
Perfect for sharing with team members who need to prepare Excel files for migration.
💡 The exported schema reference can help you prepare your Excel file. For detailed formatting guidelines, see the [Excel Format Specification](docs/EXCEL_FORMAT_SPECIFICATION.md).
### 4. Export Data
Export model records from your Amplify backend to an Excel file:
```bash
# Export a single model's records
amplify-migrator export-data --model Reporter
# Export multiple models (each as a separate sheet)
amplify-migrator export-data --model Reporter Article Comment
# Export all models
amplify-migrator export-data --all
# Export to a specific file
amplify-migrator export-data --model Reporter --output reporter_backup.xlsx
amplify-migrator export-data --all --output full_backup.xlsx
```
Records are sorted by primary field and exported with scalar, enum, and ID fields. When exporting multiple models, each model gets its own sheet in the Excel file. This is useful for backing up data, auditing records, or preparing corrections for re-migration.
### 5. Run Migration
Run the migration using your saved configuration:
```bash
amplify-migrator migrate
```
You'll only be prompted for your password (for security, passwords are never cached).
### Quick Start
```bash
# First time: configure the tool
amplify-migrator config
# View current configuration
amplify-migrator show
# Export schema documentation (share with team)
amplify-migrator export-schema
# Export existing records to Excel
amplify-migrator export-data --model Reporter
amplify-migrator export-data --all
# Run migration (uses saved config)
amplify-migrator migrate
# View help
amplify-migrator --help
```
📋 For detailed Excel format requirements, see the [Excel Format Specification](docs/EXCEL_FORMAT_SPECIFICATION.md).
### Example: Configuration
```
╔════════════════════════════════════════════════════╗
║ Amplify Migrator - Configuration Setup ║
╚════════════════════════════════════════════════════╝
📋 Configuration Setup:
------------------------------------------------------
Excel file path [data.xlsx]: my-data.xlsx
AWS Amplify API endpoint: https://xxx.appsync-api.us-east-1.amazonaws.com/graphql
AWS Region [us-east-1]:
Cognito User Pool ID: us-east-1_xxxxx
Cognito Client ID: your-client-id
Admin Username: admin@example.com
✅ Configuration saved successfully!
💡 You can now run 'amplify-migrator migrate' to start the migration.
```
### Example: Migration
```
╔════════════════════════════════════════════════════╗
║ Migrator Tool for Amplify ║
╠════════════════════════════════════════════════════╣
║ This tool requires admin privileges to execute ║
╚════════════════════════════════════════════════════╝
🔐 Authentication:
------------------------------------------------------
Admin Password: ********
```
## Requirements
- Python 3.8+
- AWS Amplify GraphQL API
- AWS Cognito User Pool
- Admin access to the Cognito User Pool
## Features
### Data Processing & Conversion
- **Automatic type parsing** - Smart field type detection for all GraphQL types including scalars, enums, and custom types
- **Custom types and enums** - Full support for Amplify custom types with automatic conversion
- **Duplicate detection** - Automatically skips existing records to prevent duplicates
- **Foreign key resolution** - Automatic relationship handling with pre-fetching for performance
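The duplicate-detection idea above can be sketched in a few lines (a simplified illustration, not the tool's actual implementation — it assumes each record carries a unique `id` field):

```python
def skip_duplicates(incoming: list, existing_keys: set, key: str = "id") -> list:
    """Return only records whose key value is not already in the backend."""
    seen = set(existing_keys)
    fresh = []
    for record in incoming:
        if record[key] in seen:  # already uploaded, or repeated within this file
            continue
        seen.add(record[key])
        fresh.append(record)
    return fresh

print(skip_duplicates([{"id": 1}, {"id": 2}, {"id": 1}], existing_keys={2}))
# [{'id': 1}]
```

In practice the set of existing keys would come from querying the backend before the upload starts.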
### AWS Integration
- **Configuration caching** - Save your setup, reuse it for multiple migrations
- **MFA support** - Works with multi-factor authentication
- **Admin group validation** - Ensures proper authorization before migration
### Performance
- **Async uploads** - Fast parallel uploads with configurable batch size
- **Connection pooling** - Efficient HTTP connection reuse for better performance
- **Pagination support** - Handles large datasets efficiently
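The batched-upload behaviour can be sketched with plain `asyncio` (a simplified illustration of the technique, not the tool's actual code — `upload_record` here is a stand-in for the real GraphQL mutation; the default batch size of 20 matches the tool's default):

```python
import asyncio

async def upload_record(record: dict) -> bool:
    """Stand-in for a single GraphQL mutation over the network."""
    await asyncio.sleep(0)  # simulates awaiting network I/O
    return True

async def upload_in_batches(records: list, batch_size: int = 20) -> int:
    """Upload records in parallel batches of `batch_size`; return the success count."""
    successes = 0
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        results = await asyncio.gather(*(upload_record(r) for r in batch))
        successes += sum(results)
    return successes

uploaded = asyncio.run(upload_in_batches([{"id": i} for i in range(45)]))
print(uploaded)  # 45
```

Each batch is awaited as a group, so failures in one batch do not block the next one from starting.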
### User Experience
- **Interactive prompts** - Easy step-by-step configuration
- **Progress reporting** - Real-time feedback on migration status
- **Detailed error messages** - Clear context for troubleshooting failures
- **Schema export** - Generate markdown documentation of your GraphQL schema to share with team members
- **Data export** - Export existing model records to Excel for backup, auditing, or correction
## Excel Format Requirements
Your Excel file must follow specific formatting guidelines for sheet names, column headers, data types, and special field handling. For comprehensive format requirements, examples, and troubleshooting, see:
📋 **[Excel Format Specification Guide](docs/EXCEL_FORMAT_SPECIFICATION.md)**
## Advanced Features
- **Foreign Key Resolution** - Automatically resolves relationships between models with pre-fetching for optimal performance
- **Schema Introspection** - Dynamically queries your GraphQL schema to understand model structures and field types
- **Configurable Batch Processing** - Tune upload performance with adjustable batch sizes (default: 20 records per batch)
- **Progress Reporting** - Real-time batch progress with per-sheet confirmation prompts before upload
## Error Handling & Recovery
When records fail to upload, the tool provides a robust recovery mechanism to help you identify and fix issues without starting over.
### How It Works
1. **Automatic Error Capture** - Each failed record is logged with detailed error messages explaining what went wrong
2. **Failed Records Export** - After migration completes, you'll be prompted to export failed records to a new Excel file with a timestamp (e.g., `data_failed_records_20251201_143022.xlsx`)
3. **Easy Retry** - Fix the issues in the exported file and run the migration again using only the failed records
4. **Progress Visibility** - Detailed summary shows success/failure counts, percentages, and specific error reasons for each failed record
The tool tracks which records succeeded and failed, providing row-level context to help you quickly identify and resolve issues. Simply export the failed records, fix the errors in the Excel file, and re-run the migration with the corrected file.
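The timestamped file name from step 2 can be derived with the standard library (a sketch of the naming convention only, matching the example name above — not the tool's exact code):

```python
from datetime import datetime
from pathlib import Path

def failed_records_path(source: str, now: datetime) -> Path:
    """Build the timestamped export path, e.g. data_failed_records_20251201_143022.xlsx."""
    stem = Path(source).stem  # "data.xlsx" -> "data"
    return Path(f"{stem}_failed_records_{now:%Y%m%d_%H%M%S}.xlsx")

p = failed_records_path("data.xlsx", datetime(2025, 12, 1, 14, 30, 22))
print(p)  # data_failed_records_20251201_143022.xlsx
```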
## Troubleshooting
### Authentication & AWS Configuration
**Authentication Errors:**
- Verify your Cognito User Pool ID and Client ID are correct
- Ensure your username and password are valid
- Check that your user is in the ADMINS group
**MFA Issues:**
- Enable MFA in your Cognito User Pool settings if required
- Ensure your user has MFA set up (SMS or software token)
**AWS Credentials:**
- Set up AWS credentials in `~/.aws/credentials`
- Or set environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`
- Or use `aws configure` to set up your default profile
**Permission Errors:**
- Add your user to the ADMINS group in Cognito User Pool
- Contact your AWS administrator if you don't have permission
### Excel Format & Validation Issues
For errors related to Excel file format, data types, sheet naming, required fields, or foreign keys, see the comprehensive troubleshooting guide:
📋 **[Common Issues and Solutions](docs/EXCEL_FORMAT_SPECIFICATION.md#common-issues-and-solutions)**
## License
MIT
| text/markdown | Eyal Politansky | 10eyal10@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/EyalPoly/amplify-excel-migrator | null | >=3.8 | [] | [] | [] | [
"pandas>=1.3.0",
"requests>=2.26.0",
"boto3>=1.18.0",
"pycognito>=2023.5.0",
"PyJWT>=2.0.0",
"aiohttp>=3.8.0",
"openpyxl>=3.0.0",
"inflect>=7.0.0",
"amplify-auth>=0.1.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"pytes... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T18:37:29.798386 | amplify_excel_migrator-1.5.3.tar.gz | 63,697 | 72/47/ef9975922da784eb4f01289c962500a5d0414dfe82a86c6b98439b46bc32/amplify_excel_migrator-1.5.3.tar.gz | source | sdist | null | false | 661f0ab5593e9c0af4e993cc3622a78c | 686c726ea91847c85f3ba7283a076c8104d886c31379da19efb6ad7985493b85 | 7247ef9975922da784eb4f01289c962500a5d0414dfe82a86c6b98439b46bc32 | null | [
"LICENSE"
] | 252 |
2.4 | nono-py | 0.1.0 | Python bindings for nono capability-based sandboxing | <div align="center">
<img src="assets/nono-py.png" alt="nono logo" width="500"/>
<a href="https://discord.gg/pPcjYzGvbS">
<img src="https://img.shields.io/badge/Chat-Join%20Discord-7289da?style=for-the-badge&logo=discord&logoColor=white" alt="Join Discord"/>
</a>
<p>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License"/>
</a>
<a href="https://github.com/always-further/nono-py/actions/workflows/ci.yml">
<img src="https://github.com/always-further/nono-py/actions/workflows/ci.yml/badge.svg" alt="CI Status"/>
</a>
<a href="https://docs.nono.sh">
<img src="https://img.shields.io/badge/Docs-docs.nono.sh-green.svg" alt="Documentation"/>
</a>
</p>
</div>
# nono-py
Python bindings for [nono](https://github.com/always-further/nono), a capability-based sandboxing library.
nono provides OS-enforced sandboxing using Landlock (Linux) and Seatbelt (macOS). Once a sandbox is applied, unauthorized operations are structurally impossible.
## Installation
```bash
pip install nono-py
```
### From source
Requires Rust toolchain and maturin:
```bash
pip install maturin
maturin develop
```
## Usage
```python
from nono_py import CapabilitySet, AccessMode, apply, is_supported
# Check platform support
if not is_supported():
print("Sandboxing not supported on this platform")
exit(1)
# Build capability set
caps = CapabilitySet()
caps.allow_path("/tmp", AccessMode.READ_WRITE)
caps.allow_path("/home/user/project", AccessMode.READ)
caps.allow_file("/etc/hosts", AccessMode.READ)
caps.block_network()
# Apply sandbox (irreversible!)
apply(caps)
# Now the process can only access granted paths
# Network access is blocked
# This applies to all child processes too
```
## API Reference
### Enums
#### `AccessMode`
File system access mode:
- `AccessMode.READ` - Read-only access
- `AccessMode.WRITE` - Write-only access
- `AccessMode.READ_WRITE` - Both read and write access
### Classes
#### `CapabilitySet`
A collection of capabilities that define sandbox permissions.
```python
caps = CapabilitySet()
# Add directory access (recursive)
caps.allow_path("/tmp", AccessMode.READ_WRITE)
# Add single file access
caps.allow_file("/etc/hosts", AccessMode.READ)
# Block network
caps.block_network()
# Add command to allow/block lists
caps.allow_command("git")
caps.block_command("rm")
# Add platform-specific rule (macOS Seatbelt)
caps.platform_rule("(allow mach-lookup (global-name \"com.apple.system.logger\"))")
# Utility methods
caps.deduplicate() # Remove duplicates
caps.path_covered("/tmp/foo") # Check if path is covered
caps.fs_capabilities() # List all fs capabilities
caps.summary() # Human-readable summary
```
#### `QueryContext`
Query permissions without applying the sandbox:
```python
caps = CapabilitySet()
caps.allow_path("/tmp", AccessMode.READ)
ctx = QueryContext(caps)
result = ctx.query_path("/tmp/file.txt", AccessMode.READ)
# {'status': 'allowed', 'reason': 'granted_path', 'granted_path': '/tmp', 'access': 'read'}
result = ctx.query_path("/var/log/test", AccessMode.READ)
# {'status': 'denied', 'reason': 'path_not_granted'}
result = ctx.query_network()
# {'status': 'allowed', 'reason': 'network_allowed'}
```
#### `SandboxState`
Serialize and restore capability sets:
```python
caps = CapabilitySet()
caps.allow_path("/tmp", AccessMode.READ)
# Serialize to JSON
state = SandboxState.from_caps(caps)
json_str = state.to_json()
# Restore from JSON
restored_state = SandboxState.from_json(json_str)
restored_caps = restored_state.to_caps()
```
#### `SupportInfo`
Platform support information:
```python
info = support_info()
print(info.is_supported) # True/False
print(info.platform) # "linux" or "macos"
print(info.details) # Human-readable details
```
### Functions
#### `apply(caps: CapabilitySet) -> None`
Apply the sandbox. **This is irreversible.** Once applied, the current process and all children can only access resources granted by the capabilities.
#### `is_supported() -> bool`
Check if sandboxing is supported on this platform.
#### `support_info() -> SupportInfo`
Get detailed platform support information.
## Platform Support
| Platform | Backend | Requirements |
|----------|---------|--------------|
| Linux | Landlock | Kernel 5.13+ with Landlock enabled |
| macOS | Seatbelt | macOS 10.5+ |
| Windows | - | Not supported |
## Development
```bash
# Install dev dependencies
pip install maturin pytest mypy
# Build and install for development
make dev
# Run tests
make test
# Run linters
make lint
# Format code
make fmt
```
## License
Apache-2.0
| text/markdown; charset=UTF-8; variant=GFM | null | Luke Hinds <lukehinds@gmail.com> | null | null | Apache-2.0 | sandbox, security, capability, landlock, seatbelt | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"Topic :: Security",
"Typing :: Typed"... | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=8; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"maturin<2,>=1; extra == \"dev\""
] | [] | [] | [] | [
"Changelog, https://github.com/always-further/nono-py/releases",
"Documentation, https://docs.nono.sh",
"Homepage, https://github.com/always-further/nono-py",
"Issues, https://github.com/always-further/nono-py/issues",
"Repository, https://github.com/always-further/nono-py"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:36:53.396663 | nono_py-0.1.0.tar.gz | 146,292 | 60/4b/29c20d94977a6a68c3d657eb14bbd0d67403fd68a692b9f2419cfeb35e29/nono_py-0.1.0.tar.gz | source | sdist | null | false | 2b355030d10a6bf75127bff501f786ce | 940d0d7606475e463865b5f47e19166650acc09c3cb813c06ad55be2fd1b5956 | 604b29c20d94977a6a68c3d657eb14bbd0d67403fd68a692b9f2419cfeb35e29 | null | [
"LICENSE"
] | 1,076 |
2.3 | plan2eplus | 0.1.5 | Translate floor plan geometry into building energy models | # plan2eplus
## plan2eplus makes it easy to translate floor plan geometry into building energy models
plan2eplus is a Python library for authoring building energy models, with the aim of helping designers better understand the energy consumption and thermal performance of a proposed building during the early stages of design. plan2eplus is designed for [EnergyPlus](https://energyplus.net/), popular software for creating building energy models.
plan2eplus provides support for:
- Translating validated geometric information into EnergyPlus models
- Studying the impact of natural ventilation on thermal performance using EnergyPlus's AirflowNetwork
- Quickly visualizing input geometries as 2D plans
- Studying the impact of different floor plans on thermal performance
## About
Typical methods for creating an energy model for a given floor plan can be slow and error-prone. In a typical workflow, one might obtain the dimensions of the components of their floor plan, redraw the geometry based on these dimensions, and then have another program generate the energy model. plan2eplus removes the need to redraw geometries, making it possible for users to go directly from the dimensions of the plan to energy models. The package exposes a series of user-friendly objects that can be defined either in Python or in JSON files, making it easy to bring in geometric information from a variety of sources.
plan2eplus is designed for early-stage, climate-aware architecture. As such, support for EnergyPlus's AirflowNetwork, which enables users to understand the potential benefits of using natural ventilation, is a first-order consideration. plan2eplus also provides methods for visualizing floor plans and various quantities of interest, enabling designers to quickly understand how their chosen design affects thermal performance. As part of this goal, plan2eplus also provides functionality to quickly generate a series of different designs and study the impact of changes.
## Install
You can install plan2eplus using uv or pip:
```bash
# with uv
uv add plan2eplus
# with pip
pip install plan2eplus
```
plan2eplus provides an interface to EnergyPlus, making it easy to author EnergyPlus models. In order to _run_ energy models on your device, you will need a local installation of EnergyPlus, which you can download [here](https://energyplus.net/downloads). Note: currently, plan2eplus has only been tested with EnergyPlus 22.1.
## Usage
Below is a basic example of a workflow for creating a simple model with two adjacent rooms. For more examples, please refer to the (forthcoming) documentation.
```python
from replan2eplus.ezcase.ez import EZ
from replan2eplus.ops.zones.user_interface import Room
from replan2eplus.geometry.domain import Domain
from replan2eplus.geometry.range import Range
# define geometry - all values are in meters
domain1 = Domain(horz_range=Range(0,1), vert_range=Range(0,1))
domain2 = Domain(horz_range=Range(1,2), vert_range=Range(0,1))
height = 3.00
# define rooms
room1 = Room(0, "room1", domain1, height)
room2 = Room(1, "room2", domain2, height)
rooms = [room1, room2]
# define the case
case = (
EZ()
.add_zones(rooms)
.add_constructions()
)
# run the case
case.save_and_run(run=True)
```
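The same two-room geometry could also arrive as JSON. As a rough illustration (the field names below are assumptions for this sketch, not the package's actual schema), a room record might be parsed like this:

```python
import json

# Hypothetical JSON record for one room; the real plan2eplus schema may differ.
raw = '{"id": 0, "name": "room1", "horz_range": [0.0, 1.0], "vert_range": [0.0, 1.0], "height": 3.0}'
spec = json.loads(raw)

(h0, h1), (v0, v1) = spec["horz_range"], spec["vert_range"]
floor_area = (h1 - h0) * (v1 - v0)    # square meters
volume = floor_area * spec["height"]  # cubic meters
print(spec["name"], floor_area, volume)  # room1 1.0 3.0
```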
## Contributing
Please contact <jnwagwu@stanford.edu> if you are interested in helping to make plan2eplus better!
| text/markdown | Juliet Nwagwu Ume-Ezeoke | Juliet Nwagwu Ume-Ezeoke <jnwagwu@stanford.edu> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"bpython>=0.25",
"cyclopts>=4.4.6",
"eppy>=0.5.63",
"expression>=5.6.0",
"geomeppyupdated>=0.1.0",
"ladybug-core>=0.44.19",
"loguru>=0.7.3",
"matplotlib>=3.10.8",
"omegaconf>=2.3.0",
"pipe>=2.2",
"polars>=1.33.1",
"pooch>=1.8.2",
"pre-commit>=4.5.1",
"pyarrow>=21.0.0",
"pydantic-settings... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T18:36:50.710069 | plan2eplus-0.1.5.tar.gz | 60,071 | 99/ee/d5bf66eef5fbd8e7a9f5b10f6b537b28c8e57183a7173e3d532b6cd8a683/plan2eplus-0.1.5.tar.gz | source | sdist | null | false | cf4b4d4f41ca8b5b91f73d7f45626401 | 9b9d9de6af121e8f14e7ad0e0e0ef8aa2109e44f147bd5d9deb90695c586bd03 | 99eed5bf66eef5fbd8e7a9f5b10f6b537b28c8e57183a7173e3d532b6cd8a683 | null | [] | 257 |
2.4 | openpmd-beamphysics | 0.14.0 | Tools for analyzing and viewing particle data in the openPMD standard, extension beamphysics. | # openPMD-beamphysics
| **`Documentation`** |
| -------------------------------------------------------------------------------------------------------------------------------------------- |
| [](https://christophermayes.github.io/openPMD-beamphysics/) |
Tools for analyzing and viewing particle data in the openPMD standard, extension beamphysics.
<https://github.com/openPMD/openPMD-standard/blob/upcoming-2.0.0/EXT_BeamPhysics.md>
# Installing openpmd-beamphysics
Installing `openpmd-beamphysics` from the `conda-forge` channel can be achieved by adding `conda-forge` to your channels with:
```
conda config --add channels conda-forge
```
Once the `conda-forge` channel has been enabled, `openpmd-beamphysics` can be installed with:
```
conda install openpmd-beamphysics
```
It is possible to list all of the versions of `openpmd-beamphysics` available on your platform with:
```
conda search openpmd-beamphysics --channel conda-forge
```
## Development environment
A conda environment file is provided in this repository and may be used for a
development environment.
To create a new conda environment using this file, do the following:
```bash
git clone https://github.com/ChristopherMayes/openPMD-beamphysics
cd openPMD-beamphysics
conda env create -n beamphysics-dev -f environment.yml
conda activate beamphysics-dev
python -m pip install --no-deps -e .
```
Alternatively, with a virtualenv and pip:
```bash
git clone https://github.com/ChristopherMayes/openPMD-beamphysics
cd openPMD-beamphysics
python -m venv beamphysics-venv
source beamphysics-venv/bin/activate
python -m pip install -e .
```
| text/markdown | Christopher Mayes | null | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 4 - Beta",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"scipy>=1.0.0",
"matplotlib",
"h5py",
"python-dateutil",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-benchmark; extra == \"dev\"",
"pyyaml; extra == \"dev\"",
"mkdocs; extra == \"doc\"",
"mkdocs-jupyter; extra == \"doc\"",
"mkdocs-macros-plugin; extra == \"doc\... | [] | [] | [] | [
"Homepage, https://github.com/ChristopherMayes/openPMD-beamphysics"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:36:45.834782 | openpmd_beamphysics-0.14.0.tar.gz | 11,241,968 | 93/70/c9adb9a0a0067d0e920df80847ed3d3fae4ebe5b517c05a523d7cfd327af/openpmd_beamphysics-0.14.0.tar.gz | source | sdist | null | false | 8217ad715328681d69473a1cfb580ceb | a01b296913311934f1d9c8d44c91260e70f9deb5fb80648f72a67066241f871f | 9370c9adb9a0a0067d0e920df80847ed3d3fae4ebe5b517c05a523d7cfd327af | null | [
"LICENSE"
] | 551 |
2.1 | roboka | 0.4.5 | A Python wrapper for Rubika bot API. can you read documents in rb-chat.ir site or this github page: https://github.com/aliz17-web/roboka | # Roboka
Roboka is a Python wrapper for the **Rubika Bot API**.
With this library you can easily create and manage your Rubika bots in Python.
You can read the documentation on the site https://rb-chat.ir
or on this GitHub page: https://github.com/aliz17-web/roboka
# Installation
```bash
pip install roboka | text/markdown | Ali Zolghadr | tradecoinex5@gmail.com | null | null | Proprietary | null | [] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | python-requests/2.32.5 | 2026-02-18T18:35:27.962053 | roboka-0.4.5-py3-none-any.whl | 7,578 | 88/ec/996de1e2103cf02b2b52eedcbafd1acb686d8021be0e8510ac6d392d3e32/roboka-0.4.5-py3-none-any.whl | py3 | bdist_wheel | null | false | ed61976dd33ccba2c443b79c5bc25bb7 | b773be67f129ccfadf7f461be709a1bb4f4d737e7a70af11b4d4d6738a919487 | 88ec996de1e2103cf02b2b52eedcbafd1acb686d8021be0e8510ac6d392d3e32 | null | [] | 258 |
2.4 | cmdtrix | 0.2.0 | Matrix-console-effect made in Python. | <div id="top"></div>
<p>
<a href="https://pepy.tech/project/cmdtrix/" alt="Downloads">
<img src="https://static.pepy.tech/personalized-badge/cmdtrix?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Downloads" align="right">
</a>
<a href="https://pypi.org/project/cmdtrix/" alt="Visitors">
<img src="https://hitscounter.dev/api/hit?url=https%3A%2F%2Fgithub.com%2FSilenZcience%2Fcmdtrix&label=Visitors&icon=person-circle&color=%23479f76" align="right">
</a>
<a href="https://github.com/SilenZcience/cmdtrix/tree/main/cmdtrix/" alt="CodeSize">
<img src="https://img.shields.io/github/languages/code-size/SilenZcience/cmdtrix?color=purple" align="right">
</a>
</p>
[![OS-Windows]][OS-Windows]
[![OS-Linux]][OS-Linux]
[![OS-MacOS]][OS-MacOS]
<br/>
<div align="center">
<h2 align="center">cmdtrix</h2>
<p align="center">
matrix-console-effect made in Python.
<br/>
<a href="https://github.com/SilenZcience/cmdtrix/blob/main/cmdtrix/main.py">
<strong>Explore the code »</strong>
</a>
<br/>
<br/>
<a href="https://github.com/SilenZcience/cmdtrix/issues">Report Bug</a>
·
<a href="https://github.com/SilenZcience/cmdtrix/issues">Request Feature</a>
</p>
</div>
<details>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#made-with">Made With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li><a href="#usage">Usage</a>
<ul>
<li><a href="#examples">Examples</a></li>
<li><a href="#help">Help</a></li>
</ul>
</li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
</ol>
</details>
<div id="about-the-project"></div>
## About The Project
This project simply emulates "The Matrix"-effect on any console-terminal.
<div id="made-with"></div>
### Made With
[![Python][MadeWith-Python]](https://www.python.org/)
[![Python][Python-Version]](https://www.python.org/)
<p align="right">(<a href="#top">back to top</a>)</p>
<div id="getting-started"></div>
## Getting Started
<div id="prerequisites"></div>
### Prerequisites
A font that supports the Unicode characters used (Greek, Cyrillic) must be installed.
<div id="installation"></div>
### Installation
[![Version][CurrentVersion]](https://pypi.org/project/cmdtrix/)
1. install the python package ([PyPI-cmdtrix](https://pypi.org/project/cmdtrix/)):
```console
pip install cmdtrix
```
or install the latest version directly from GitHub:
```console
pip install git+https://github.com/SilenZcience/cmdtrix.git
```
<p align="right">(<a href="#top">back to top</a>)</p>
<div id="usage"></div>
## Usage
```console
cmdtrix [-h] [-c COLOR] ...
```
```console
python -m cmdtrix [-h] [-c COLOR] ...
```
| Argument | Description |
|------------------------|----------------------------------------------------------------------|
| -h, --help | show help message and exit |
| -v, --version | output version information |
| -c [\*], --color [\*] | set the main-color to * |
| -p [\*], --peak [\*] | set the peak-color to * |
| -r, --rainbow | enable rainbow color transitions |
| -d p, --dim p | add chance p (percent) for dim characters |
| -i p, --italic p | add chance p (percent) for italic characters |
| -b p, --bottomup p    | add chance p (percent) for bottom-up cascades                        |
| -m * p c | hide a custom message * within the Matrix, with chance p and color c |
| -S \*, --symbols \* | set a custom series of symbols to choose from |
| -j, --japanese | use japanese characters (overrides -S; requires appropriate fonts) |
| -s, --synchronous | sync the matrix columns speed |
| --framedelay DELAY | set the framedelay (in sec) to slow down the Matrix, default is 0.015|
| --timer DELAY | exit the Matrix after DELAY (in sec) automatically |
| --onkey | only spawn columns on key-press |
<div id="examples"></div>
### Examples
```console
cmdtrix -m SilenZcience 5 red -m cmdtrix 5 blue -d 5 -m Star*The*Repo 10 magenta
```
> 
<!--  -->


<div id="help"></div>
### Help
> **Q: Why am I seeing weird characters like `[31;1mr[0m` in the console?**
> A: This project uses `ANSI escape codes` to display colors in the terminal. If you see these characters, your terminal does not support those codes.
> ⚠️ If you are using the Conhost Command Prompt on Windows, you can most likely solve this issue by opening the Registry under `Computer\HKEY_CURRENT_USER\Console` and adding/editing the `DWORD` value `VirtualTerminalLevel`, setting it to `1`.
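> The same registry edit can be applied from a Command Prompt as a one-liner equivalent of the manual steps above:
> ```console
> reg add HKCU\Console /v VirtualTerminalLevel /t REG_DWORD /d 1 /f
> ```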
---
> **Q: Why are some characters partially white even though I specified a color?**
> A: This is a bug inside the terminal you are using. You can work around it by disabling all dimmed and italic characters with `-d 0 -i 0`.
<p align="right">(<a href="#top">back to top</a>)</p>
<div id="license"></div>
## License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/SilenZcience/cmdtrix/blob/main/LICENSE) file for details.
<div id="contact"></div>
## Contact
> **SilenZcience** <br/>
[![GitHub-SilenZcience][GitHub-SilenZcience]](https://github.com/SilenZcience)
[OS-Windows]: https://img.shields.io/badge/os-windows-green
[OS-Linux]: https://img.shields.io/badge/os-linux-green
[OS-MacOS]: https://img.shields.io/badge/os-macOS-green
[MadeWith-Python]: https://img.shields.io/badge/Made%20with-Python-brightgreen
[Python-Version]: https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12%20%7C%203.13%20%7C%203.14-blue
[CurrentVersion]: https://img.shields.io/pypi/v/cmdtrix.svg
[GitHub-SilenZcience]: https://img.shields.io/badge/GitHub-SilenZcience-orange
| text/markdown | null | "Silas A. Kraume" <silas.kraume1552@gmail.com> | null | null | null | console, crossplatform, matrix, python, terminal | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Environment :... | [] | null | null | >=3.7 | [] | [] | [] | [
"pynput>=1.6.0"
] | [] | [] | [] | [
"Download, https://github.com/SilenZcience/cmdtrix/tarball/master",
"Github, https://github.com/SilenZcience/cmdtrix",
"Tracker, https://github.com/SilenZcience/cmdtrix/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T18:34:33.824404 | cmdtrix-0.2.0-py3-none-any.whl | 13,696 | 79/72/385ac3fdd616d734df92559bcba139755494467bf588a48ca7c1b92e4319/cmdtrix-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 2e87f876e5209b8f36849e012846ca7f | d8d7ee0fde2f8c6ad43d09d94d8ddc251ef4c31595b1f38e88f157215eb1f0ff | 7972385ac3fdd616d734df92559bcba139755494467bf588a48ca7c1b92e4319 | null | [
"LICENSE"
] | 118 |
2.4 | invarum | 0.1.7 | CLI for Invarum, the Governance-grade LLM Quality Engineering platform. | # ⚡ Invarum CLI
**Prompt it. Measure it. Fix it. Prove it. Bring certainty to LLM quality—and evidence.**
The Invarum CLI is a thin, fast client for the Invarum Cloud Engine. Use it to run quantitative LLM evaluations, generate **audit-ready evidence bundles**, and enforce **policy gates** in CI/CD—without leaving the command line.
> **Get started:** You need an Invarum account and API key.
> Sign up at **[app.invarum.com](https://app.invarum.com)**.
---
## 📦 Features
### 1) Headless Invarum Engine
Submit prompts to the Invarum Cloud, where they’re evaluated with the deterministic **4D Energy Model**.
* **Live status:** stream progress and view the final response in your terminal
* **Scoring:** get immediate **α / β / γ / δ** scores in a readable table
### 2) Audit-Ready Evidence
Export forensic artifacts for any run—ready to attach to an **incident review** or internal audit packet.
* **JSON evidence bundle:** machine-readable export containing scores, policy outcomes, metadata, and **SHA-256** integrity hashes
* **PDF report:** download a formatted audit report via the CLI
### 3) CI/CD Gating
Stop bad prompts from reaching production.
* Use `--strict` to return **exit code 1** when a run fails policy gates
* Ideal for GitHub Actions, GitLab CI, and regression test suites
### 4) Enterprise Observability (OTel)
Invarum is **OpenTelemetry (OTel) native**.
* Each run can emit standard OTel traces
* Connect Datadog, Honeycomb, or New Relic to view quality signals alongside operational telemetry
---
## ⚛️ The Invarum Engine
Unlike “LLM-as-a-judge” tools that depend on subjective model opinions, Invarum evaluates outputs using a deterministic pipeline and returns **repeatable scores**, **policy gate decisions**, and **audit-ready evidence bundles** suitable for **incident review** and internal governance.
### The 4D Energy Model
We measure LLM behavior along four orthogonal axes:
| Metric | Signal | What it Measures |
| :-------------------- | :------------------------- | :--------------------------------------------------------------------------------------------------------------------------- |
| **α TaskScore** | **Task alignment** | Did the output follow the request and constraints (format, requirements, and reference match when provided)? |
| **β Coherence** | **Semantic continuity** | Did the response stay on-track—logically consistent, well-structured, and free of drift or contradiction? |
| **γ Entropy / Order** | **Variance & determinism** | Is output variability appropriate for the domain and task (stable for scientific/legal; broader for creative/brainstorming)? |
| **δ Efficiency** | **Cost-to-value** | How much useful information was delivered per token (and time), relative to the expected structure and verbosity? |
> The physics analogy is intentional: scores behave like measurable state variables, and policy gates define what “stable” looks like for a given domain.
### Policy-as-Code Gating
Runs are evaluated against a selected **Policy Profile** (internal governance by default). The engine returns:
* **Gate results** (must-pass requirements and scored thresholds)
* An overall verdict plus an explicit decision state:
**pass / pass_with_advisory / fail_with_advisory / fail**
* Structured **advisories** with recommended remediation steps
### Security & Privacy
Invarum is designed for auditability without unnecessary data retention:
1. **BYOK:** your LLM API keys are encrypted at rest and never exposed in plaintext.
2. **Configurable I/O retention:** prompts and responses can be stored temporarily for debugging or minimized/redacted depending on workspace policy.
3. **Immutable evidence:** evidence bundles retain **SHA-256** hashes and run metadata for integrity verification—even when raw text retention is minimized.
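The integrity check on an exported bundle can be reproduced with nothing but the Python standard library; `verify_bundle` is an illustrative helper (not part of the CLI), and the path and digest are placeholders:

```python
import hashlib

def verify_bundle(path: str, expected_sha256: str) -> bool:
    """Recompute the file's SHA-256 and compare it to the recorded digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large evidence bundles don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# usage: verify_bundle("evidence.json", "a1b2c3...")  # digest from the run record
```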
---
## 🚀 Installation
Install directly via pip:
```bash
pip install git+https://github.com/Invarum/invarum-cli.git@v0.1.6
```
*Requires Python 3.9+*
---
## ⚡ Quickstart
### 1) Get an API Key
Log in to the dashboard: **Settings → Developer Access Keys**.
### 2) Authenticate
Save your key locally. This persists until you revoke it.
```bash
invarum login --key inv_sk_your_secret_key_here
```
### 3) Run an Evaluation
```bash
invarum run "Summarize the main findings of this abstract in 5 bullets." --domain scientific
```
**Example Output:**
```text
Running evaluation...
Run ID: run_a1b2c3d4
╭─ LLM Response ──────────────────────────────────────╮
│ 1. The study establishes a correlation between... │
│ 2. Methodology involved a double-blind trial... │
│ ... │
╰─────────────────────────────────────────────────────╯
┏━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ Metric ┃ Score ┃
┡━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
│ Alpha (Task) │ 0.892 │
│ Beta (Coherence) │ 0.910 │
│ Gamma (Order/Entropy)│ 0.450 │
│ Delta (Efficiency) │ 0.780 │
└──────────────────────┴───────┘
Decision: PASS_WITH_ADVISORY
Policy Profile: internal_governance_default
View details: https://app.invarum.com/runs/run_a1b2c3d4
```
> Tip: Open “View details” to inspect diagnostics, sensitivity analysis, and operator traces in the dashboard.
---
## 🛠 Advanced Usage
### Reference-Based Grading
Provide a gold-standard answer to enable higher-fidelity grading when appropriate.
```bash
invarum run "Explain quantum entanglement" --reference "Quantum entanglement is a phenomenon where..."
```
Load from files:
```bash
invarum run -f prompt.txt --reference-file ground_truth.txt
```
### Task, Domain, and Generation Overrides
Help classification or tune generation.
```bash
# Specify task and domain
invarum run "extract dates from this contract" --task extract --domain legal
# Override model temperature
invarum run "Write a creative poem" --temp 0.9
```
### Export Evidence (Incident Review / Audit Packet)
```bash
# Export JSON evidence bundle
invarum export run_a1b2c3d4 --format json --output evidence.json
# Export formatted PDF audit report
invarum export run_a1b2c3d4 --format pdf --output report.pdf
```
### CI/CD Integration
The CLI supports environment variables for automation.
```bash
export INVARUM_API_KEY="inv_sk_..."
# --strict forces a non-zero exit code on policy failure
invarum run -f prompt.txt --strict --json > results.json
```
---
## 🧠 Architecture
Invarum uses a thin client architecture:
1. **CLI (this repo):** auth, file IO, request formatting, and rendering. No proprietary scoring logic runs locally.
2. **Cloud engine:** prompts are evaluated by the PBPEF pipeline, producing scores, policy outcomes, traces, and evidence artifacts.
```
[CLI] → [API Gateway] → [PBPEF Pipeline] → [Run Record + Evidence]
↑ ↓
└────────────── summarized results ────────────────┘
```
---
## ❓ Troubleshooting
**"Command not found" after installation?**
If you ran `pip install` but typing `invarum` gives an error, your computer's Python script directory might not be in your system PATH.
You can fix this by adding the path to your environment variables, OR simply run the tool using `python -m`:
```bash
python -m invarum login
python -m invarum run "Test prompt"
```
---
## 🔬 Roadmap
**MVP (Live Now):**
* [x] Cloud-based energy scoring (α/β/γ/δ)
* [x] Policy gating & exit codes
* [x] Web dashboard sync
* [x] Evidence export (JSON & PDF)
**Coming Soon:**
* [ ] Batch processing (CSV input)
* [ ] `invarum check` regression suites
* [ ] Automated drift detection between runs
---
## 🧑🔬 Author
**Lucretius Coleman**
PhD in Physics | Computational Methods | Quantum Systems & Prompt Engineering
[lacolem1@invarum.com](mailto:lacolem1@invarum.com)
---
## 📄 License
MIT — see `LICENSE`.
| text/markdown | null | "L. Adrian Coleman" <lacolem1@invarum.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Quality Assurance"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"typer[all]>=0.9.0",
"requests>=2.31.0",
"rich>=13.0.0",
"pydantic>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://app.invarum.com",
"Repository, https://github.com/Invarum/invarum-cli",
"Documentation, https://github.com/Invarum/invarum-cli#readme"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-18T18:34:33.112459 | invarum-0.1.7.tar.gz | 13,411 | b5/32/f7a3d1b3ea7a4371248c4bf83cfabb60374b8e95eac489f988e2b4f1d776/invarum-0.1.7.tar.gz | source | sdist | null | false | 39ed4063fbf84c4214b43d614ae8ea93 | 0678908cc4f20e76cf6f273f08d26e1981c576d491305d15f60bc0fa0c8673e7 | b532f7a3d1b3ea7a4371248c4bf83cfabb60374b8e95eac489f988e2b4f1d776 | null | [] | 246 |
2.4 | firecrawl | 4.16.0 | Python SDK for Firecrawl API | # Firecrawl Python SDK
The Firecrawl Python SDK is a library that allows you to easily scrape and crawl websites, and output the data in a format ready for use with language models (LLMs). It provides a simple and intuitive interface for interacting with the Firecrawl API.
## Installation
To install the Firecrawl Python SDK, you can use pip:
```bash
pip install firecrawl-py
```
## Usage
1. Get an API key from [firecrawl.dev](https://firecrawl.dev)
2. Set the API key as an environment variable named `FIRECRAWL_API_KEY` or pass it as a parameter to the `Firecrawl` class.
Here's an example of how to use the SDK:
```python
from firecrawl import Firecrawl
from firecrawl.types import ScrapeOptions
firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")
# Scrape a website (v2):
data = firecrawl.scrape(
'https://firecrawl.dev',
formats=['markdown', 'html']
)
print(data)
# Crawl a website (v2 waiter):
crawl_status = firecrawl.crawl(
'https://firecrawl.dev',
limit=100,
scrape_options=ScrapeOptions(formats=['markdown', 'html'])
)
print(crawl_status)
```
### Scraping a URL
To scrape a single URL, use the `scrape` method. It takes the URL as a parameter and returns a document with the requested formats.
```python
# Scrape a website (v2):
scrape_result = firecrawl.scrape('https://firecrawl.dev', formats=['markdown', 'html'])
print(scrape_result)
```
### Crawling a Website
To crawl a website, use the `crawl` method. It takes the starting URL and optional parameters as arguments. You can control depth, limits, formats, and more.
```python
crawl_status = firecrawl.crawl(
'https://firecrawl.dev',
limit=100,
scrape_options=ScrapeOptions(formats=['markdown', 'html']),
poll_interval=30
)
print(crawl_status)
```
### Asynchronous Crawling
<Tip>Looking for async operations? Check out the [Async Class](#async-class) section below.</Tip>
To enqueue a crawl asynchronously, use `start_crawl`. It returns the crawl `ID` which you can use to check the status of the crawl job.
```python
crawl_job = firecrawl.start_crawl(
'https://firecrawl.dev',
limit=100,
scrape_options=ScrapeOptions(formats=['markdown', 'html']),
)
print(crawl_job)
```
### Checking Crawl Status
To check the status of a crawl job, use the `get_crawl_status` method. It takes the job ID as a parameter and returns the current status of the crawl job.
```python
crawl_status = firecrawl.get_crawl_status("<crawl_id>")
print(crawl_status)
```
### Manual Pagination (v2)
Crawl and batch scrape status responses may include a `next` URL when more data is available. The SDK auto-paginates by default; to page manually, disable auto-pagination and pass the opaque `next` URL back to the SDK.
```python
from firecrawl.v2.types import PaginationConfig
# Crawl: fetch one page at a time
crawl_job = firecrawl.start_crawl("https://firecrawl.dev", limit=100)
status = firecrawl.get_crawl_status(
crawl_job.id,
pagination_config=PaginationConfig(auto_paginate=False),
)
if status.next:
page2 = firecrawl.get_crawl_status_page(status.next)
# Batch scrape: fetch one page at a time
batch_job = firecrawl.start_batch_scrape(["https://firecrawl.dev"])
status = firecrawl.get_batch_scrape_status(
batch_job.id,
pagination_config=PaginationConfig(auto_paginate=False),
)
if status.next:
page2 = firecrawl.get_batch_scrape_status_page(status.next)
```
### Cancelling a Crawl
To cancel an asynchronous crawl job, use the `cancel_crawl` method. It takes the job ID of the asynchronous crawl as a parameter and returns the cancellation status.
```python
cancel_crawl = firecrawl.cancel_crawl("<crawl_id>")
print(cancel_crawl)
```
### Map a Website
Use `map` to generate a list of URLs from a website. Options let you customize the mapping process, including whether to use the sitemap or include subdomains.
```python
# Map a website (v2):
map_result = firecrawl.map('https://firecrawl.dev')
print(map_result)
```
{/* ### Extracting Structured Data from Websites
To extract structured data from websites, use the `extract` method. It takes the URLs to extract data from, a prompt, and a schema as arguments. The schema is a Pydantic model that defines the structure of the extracted data.
<ExtractPythonShort /> */}
### Crawling a Website with WebSockets
To crawl a website with WebSockets, use the `crawl_url_and_watch` method. It takes the starting URL and optional parameters as arguments. The `params` argument allows you to specify additional options for the crawl job, such as the maximum number of pages to crawl, allowed domains, and the output format.
```python
# inside an async function...
import nest_asyncio

nest_asyncio.apply()

# Define event handlers
def on_document(detail):
    print("DOC", detail)

def on_error(detail):
    print("ERR", detail['error'])

def on_done(detail):
    print("DONE", detail['status'])

# Function to start the crawl and watch process
async def start_crawl_and_watch():
    # Initiate the crawl job and get the watcher
    watcher = firecrawl.crawl_url_and_watch('firecrawl.dev', exclude_paths=['blog/*'], limit=5)

    # Add event listeners
    watcher.add_event_listener("document", on_document)
    watcher.add_event_listener("error", on_error)
    watcher.add_event_listener("done", on_done)

    # Start the watcher
    await watcher.connect()

# Run the event loop
await start_crawl_and_watch()
```
## Error Handling
The SDK handles errors returned by the Firecrawl API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message.
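As a sketch (the concrete exception classes vary across SDK versions, so this catches the base `Exception`), a small wrapper can convert raised errors into a return value so a batch script keeps running past individual failures:

```python
def safe_call(fn, *args, **kwargs):
    """Run an SDK call, returning (result, error_message) instead of raising."""
    try:
        return fn(*args, **kwargs), None
    except Exception as exc:
        return None, str(exc)

# usage: doc, err = safe_call(firecrawl.scrape, 'https://firecrawl.dev', formats=['markdown'])
```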
## Async Class
For async operations, you can use the `AsyncFirecrawl` class. Its methods mirror the `Firecrawl` class, but you `await` them.
```python
from firecrawl import AsyncFirecrawl
firecrawl = AsyncFirecrawl(api_key="YOUR_API_KEY")
# Async Scrape (v2)
async def example_scrape():
scrape_result = await firecrawl.scrape(url="https://example.com")
print(scrape_result)
# Async Crawl (v2)
async def example_crawl():
crawl_result = await firecrawl.crawl(url="https://example.com")
print(crawl_result)
```
## v1 compatibility
For legacy code paths, v1 remains available under `firecrawl.v1` with the original method names.
```python
from firecrawl import Firecrawl
firecrawl = Firecrawl(api_key="YOUR_API_KEY")
# v1 methods (feature‑frozen)
doc_v1 = firecrawl.v1.scrape_url('https://firecrawl.dev', formats=['markdown', 'html'])
crawl_v1 = firecrawl.v1.crawl_url('https://firecrawl.dev', limit=100)
map_v1 = firecrawl.v1.map_url('https://firecrawl.dev')
```
| text/markdown | Mendable.ai | "Mendable.ai" <nick@mendable.ai> | null | "Mendable.ai" <nick@mendable.ai> | MIT License | SDK, API, firecrawl | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
... | [] | https://github.com/firecrawl/firecrawl | null | >=3.8 | [] | [] | [] | [
"requests",
"httpx",
"python-dotenv",
"websockets",
"nest-asyncio",
"pydantic>=2.0",
"aiohttp"
] | [] | [] | [] | [
"Documentation, https://docs.firecrawl.dev",
"Source, https://github.com/firecrawl/firecrawl",
"Tracker, https://github.com/firecrawl/firecrawl/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T18:34:28.047918 | firecrawl-4.16.0.tar.gz | 168,558 | 5e/3d/7e7d5f5ca169ab77484796c83ee282bdad3e2e2f5875a24e610ec8610c49/firecrawl-4.16.0.tar.gz | source | sdist | null | false | 3c44ac96b500881fa28d46c24ff73039 | f69f498b11eb38a30f868c61c2e9441b7dcd63fbdb0469777ad37c7fdcc08b57 | 5e3d7e7d5f5ca169ab77484796c83ee282bdad3e2e2f5875a24e610ec8610c49 | null | [
"LICENSE"
] | 12,424 |
2.4 | firecrawl-py | 4.16.0 | Python SDK for Firecrawl API | # Firecrawl Python SDK
The Firecrawl Python SDK is a library that allows you to easily scrape and crawl websites, and output the data in a format ready for use with language models (LLMs). It provides a simple and intuitive interface for interacting with the Firecrawl API.
## Installation
To install the Firecrawl Python SDK, you can use pip:
```bash
pip install firecrawl-py
```
## Usage
1. Get an API key from [firecrawl.dev](https://firecrawl.dev)
2. Set the API key as an environment variable named `FIRECRAWL_API_KEY` or pass it as a parameter to the `Firecrawl` class.
Here's an example of how to use the SDK:
```python
from firecrawl import Firecrawl
from firecrawl.types import ScrapeOptions
firecrawl = Firecrawl(api_key="fc-YOUR_API_KEY")
# Scrape a website (v2):
data = firecrawl.scrape(
'https://firecrawl.dev',
formats=['markdown', 'html']
)
print(data)
# Crawl a website (v2 waiter):
crawl_status = firecrawl.crawl(
'https://firecrawl.dev',
limit=100,
scrape_options=ScrapeOptions(formats=['markdown', 'html'])
)
print(crawl_status)
```
### Scraping a URL
To scrape a single URL, use the `scrape` method. It takes the URL as a parameter and returns a document with the requested formats.
```python
# Scrape a website (v2):
scrape_result = firecrawl.scrape('https://firecrawl.dev', formats=['markdown', 'html'])
print(scrape_result)
```
### Crawling a Website
To crawl a website, use the `crawl` method. It takes the starting URL and optional parameters as arguments. You can control depth, limits, formats, and more.
```python
crawl_status = firecrawl.crawl(
'https://firecrawl.dev',
limit=100,
scrape_options=ScrapeOptions(formats=['markdown', 'html']),
poll_interval=30
)
print(crawl_status)
```
### Asynchronous Crawling
<Tip>Looking for async operations? Check out the [Async Class](#async-class) section below.</Tip>
To enqueue a crawl asynchronously, use `start_crawl`. It returns the crawl `ID` which you can use to check the status of the crawl job.
```python
crawl_job = firecrawl.start_crawl(
'https://firecrawl.dev',
limit=100,
scrape_options=ScrapeOptions(formats=['markdown', 'html']),
)
print(crawl_job)
```
### Checking Crawl Status
To check the status of a crawl job, use the `get_crawl_status` method. It takes the job ID as a parameter and returns the current status of the crawl job.
```python
crawl_status = firecrawl.get_crawl_status("<crawl_id>")
print(crawl_status)
```
### Manual Pagination (v2)
Crawl and batch scrape status responses may include a `next` URL when more data is available. The SDK auto-paginates by default; to page manually, disable auto-pagination and pass the opaque `next` URL back to the SDK.
```python
from firecrawl.v2.types import PaginationConfig
# Crawl: fetch one page at a time
crawl_job = firecrawl.start_crawl("https://firecrawl.dev", limit=100)
status = firecrawl.get_crawl_status(
crawl_job.id,
pagination_config=PaginationConfig(auto_paginate=False),
)
if status.next:
page2 = firecrawl.get_crawl_status_page(status.next)
# Batch scrape: fetch one page at a time
batch_job = firecrawl.start_batch_scrape(["https://firecrawl.dev"])
status = firecrawl.get_batch_scrape_status(
batch_job.id,
pagination_config=PaginationConfig(auto_paginate=False),
)
if status.next:
page2 = firecrawl.get_batch_scrape_status_page(status.next)
```
### Cancelling a Crawl
To cancel an asynchronous crawl job, use the `cancel_crawl` method. It takes the job ID of the asynchronous crawl as a parameter and returns the cancellation status.
```python
cancel_crawl = firecrawl.cancel_crawl("<crawl_id>")
print(cancel_crawl)
```
### Map a Website
Use `map` to generate a list of URLs from a website. Options let you customize the mapping process, including whether to use the sitemap or include subdomains.
```python
# Map a website (v2):
map_result = firecrawl.map('https://firecrawl.dev')
print(map_result)
```
{/* ### Extracting Structured Data from Websites
To extract structured data from websites, use the `extract` method. It takes the URLs to extract data from, a prompt, and a schema as arguments. The schema is a Pydantic model that defines the structure of the extracted data.
<ExtractPythonShort /> */}
### Crawling a Website with WebSockets
To crawl a website with WebSockets, use the `crawl_url_and_watch` method. It takes the starting URL and optional parameters as arguments. The `params` argument allows you to specify additional options for the crawl job, such as the maximum number of pages to crawl, allowed domains, and the output format.
```python
# inside an async function...
import nest_asyncio

nest_asyncio.apply()

# Define event handlers
def on_document(detail):
    print("DOC", detail)

def on_error(detail):
    print("ERR", detail['error'])

def on_done(detail):
    print("DONE", detail['status'])

# Function to start the crawl and watch process
async def start_crawl_and_watch():
    # Initiate the crawl job and get the watcher
    watcher = firecrawl.crawl_url_and_watch('firecrawl.dev', exclude_paths=['blog/*'], limit=5)

    # Add event listeners
    watcher.add_event_listener("document", on_document)
    watcher.add_event_listener("error", on_error)
    watcher.add_event_listener("done", on_done)

    # Start the watcher
    await watcher.connect()

# Run the event loop
await start_crawl_and_watch()
```
## Error Handling
The SDK handles errors returned by the Firecrawl API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message.
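As a sketch (the concrete exception classes vary across SDK versions, so this catches the base `Exception`), a small wrapper can convert raised errors into a return value so a batch script keeps running past individual failures:

```python
def safe_call(fn, *args, **kwargs):
    """Run an SDK call, returning (result, error_message) instead of raising."""
    try:
        return fn(*args, **kwargs), None
    except Exception as exc:
        return None, str(exc)

# usage: doc, err = safe_call(firecrawl.scrape, 'https://firecrawl.dev', formats=['markdown'])
```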
## Async Class
For async operations, you can use the `AsyncFirecrawl` class. Its methods mirror the `Firecrawl` class, but you `await` them.
```python
from firecrawl import AsyncFirecrawl
firecrawl = AsyncFirecrawl(api_key="YOUR_API_KEY")
# Async Scrape (v2)
async def example_scrape():
scrape_result = await firecrawl.scrape(url="https://example.com")
print(scrape_result)
# Async Crawl (v2)
async def example_crawl():
crawl_result = await firecrawl.crawl(url="https://example.com")
print(crawl_result)
```
## v1 compatibility
For legacy code paths, v1 remains available under `firecrawl.v1` with the original method names.
```python
from firecrawl import Firecrawl
firecrawl = Firecrawl(api_key="YOUR_API_KEY")
# v1 methods (feature‑frozen)
doc_v1 = firecrawl.v1.scrape_url('https://firecrawl.dev', formats=['markdown', 'html'])
crawl_v1 = firecrawl.v1.crawl_url('https://firecrawl.dev', limit=100)
map_v1 = firecrawl.v1.map_url('https://firecrawl.dev')
```
| text/markdown | Mendable.ai | "Mendable.ai" <nick@mendable.ai> | null | "Mendable.ai" <nick@mendable.ai> | MIT License | SDK, API, firecrawl | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
... | [] | https://github.com/firecrawl/firecrawl | null | >=3.8 | [] | [] | [] | [
"requests",
"httpx",
"python-dotenv",
"websockets",
"nest-asyncio",
"pydantic>=2.0",
"aiohttp"
] | [] | [] | [] | [
"Documentation, https://docs.firecrawl.dev",
"Source, https://github.com/firecrawl/firecrawl",
"Tracker, https://github.com/firecrawl/firecrawl/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T18:34:22.755174 | firecrawl_py-4.16.0.tar.gz | 168,579 | 8c/ff/9f3dade9baf882e8aee9dd5869f6c69295794d9b62f1a3f4444136b8146c/firecrawl_py-4.16.0.tar.gz | source | sdist | null | false | ae691937a15b282b87892470bbc9ba05 | 5f6d6fdeb3404429c851fc5a4e990f6659a9e9c72577b434480ad1616bb03374 | 8cff9f3dade9baf882e8aee9dd5869f6c69295794d9b62f1a3f4444136b8146c | null | [
"LICENSE"
] | 33,322 |
2.4 | gensageai | 0.0.1 | Python client library for the gensageai API | # OpenAI Python Library
The OpenAI Python library provides convenient access to the OpenAI API
from applications written in the Python language. It includes a
pre-defined set of classes for API resources that initialize
themselves dynamically from API responses which makes it compatible
with a wide range of versions of the OpenAI API.
You can find usage examples for the OpenAI Python library in our [API reference](https://beta.openai.com/docs/api-reference?lang=python) and the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).
## Installation
You don't need this source code unless you want to modify the package. If you just
want to use the package, run:
```sh
pip install --upgrade gensageai
```
Install from source with:
```sh
python setup.py install
```
### Optional dependencies
Install dependencies for [`openai.embeddings_utils`](openai/embeddings_utils.py):
```sh
pip install openai[embeddings]
```
Install support for [Weights & Biases](https://wandb.me/openai-docs):
```sh
pip install openai[wandb]
```
Data libraries like `numpy` and `pandas` are not installed by default due to their size. They’re needed for some functionality of this library, but generally not for talking to the API. If you encounter a `MissingDependencyError`, install them with:
```sh
pip install openai[datalib]
```
## Usage
The library needs to be configured with your account's secret key which is available on the [website](https://platform.openai.com/account/api-keys). Either set it as the `OPENAI_API_KEY` environment variable before using the library:
```bash
export OPENAI_API_KEY='sk-...'
```
Or set `openai.api_key` to its value:
```python
import openai
openai.api_key = "sk-..."
# list models
models = openai.Model.list()
# print the first model's id
print(models.data[0].id)
# create a chat completion
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
# print the chat completion
print(chat_completion.choices[0].message.content)
```
### Params
All endpoints have a `.create` method that supports a `request_timeout` param. This param takes a `Union[float, Tuple[float, float]]` and will raise an `openai.error.Timeout` error if the request exceeds that time in seconds (See: https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).
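The two accepted shapes follow the `requests` timeout convention: a bare float caps both the connect and read phases, while a tuple sets them separately. A minimal illustration with a hypothetical helper (this normalizer is not part of the library):

```python
from typing import Tuple, Union

# request_timeout accepts either a single float or a (connect, read) tuple
TimeoutT = Union[float, Tuple[float, float]]

def normalize_timeout(timeout: TimeoutT) -> Tuple[float, float]:
    """Expand a request_timeout value into (connect, read) seconds,
    mirroring how `requests` interprets timeouts."""
    if isinstance(timeout, tuple):
        return timeout
    return (timeout, timeout)

print(normalize_timeout(10))           # (10, 10): both phases capped at 10s
print(normalize_timeout((3.0, 27.0)))  # (3.0, 27.0): 3s connect, 27s read
```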
### Microsoft Azure Endpoints
In order to use the library with Microsoft Azure endpoints, you need to set the `api_type`, `api_base` and `api_version` in addition to the `api_key`. The `api_type` must be set to 'azure' and the others correspond to the properties of your endpoint.
In addition, the deployment name must be passed as the `deployment_id` parameter.
```python
import openai
openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-05-15"
# create a chat completion
chat_completion = openai.ChatCompletion.create(deployment_id="deployment-name", model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
# print the chat completion
print(chat_completion.choices[0].message.content)
```
Please note that for the moment, the Microsoft Azure endpoints can only be used for completion, embedding, and fine-tuning operations.
For a detailed example of how to use fine-tuning and other operations using Azure endpoints, please check out the following Jupyter notebooks:
- [Using Azure completions](https://github.com/openai/openai-cookbook/tree/main/examples/azure/completions.ipynb)
- [Using Azure fine-tuning](https://github.com/openai/openai-cookbook/tree/main/examples/azure/finetuning.ipynb)
- [Using Azure embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/azure/embeddings.ipynb)
### Microsoft Azure Active Directory Authentication
In order to use Microsoft Active Directory to authenticate to your Azure endpoint, you need to set the `api_type` to "azure_ad" and pass the acquired credential token to `api_key`. The rest of the parameters need to be set as specified in the previous section.
```python
from azure.identity import DefaultAzureCredential
import openai
# Request credential
default_credential = DefaultAzureCredential()
token = default_credential.get_token("https://cognitiveservices.azure.com/.default")
# Setup parameters
openai.api_type = "azure_ad"
openai.api_key = token.token
openai.api_base = "https://example-endpoint.openai.azure.com/"
openai.api_version = "2023-05-15"
# ...
```
### Command-line interface
This library additionally provides an `openai` command-line utility
which makes it easy to interact with the API from your terminal. Run
`openai api -h` for usage.
```sh
# list models
openai api models.list
# create a chat completion (gpt-3.5-turbo, gpt-4, etc.)
openai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"
# create a completion (text-davinci-003, text-davinci-002, ada, babbage, curie, davinci, etc.)
openai api completions.create -m ada -p "Hello world"
# generate images via DALL·E API
openai api image.create -p "two dogs playing chess, cartoon" -n 1
# using openai through a proxy
openai --proxy=http://proxy.com api models.list
```
## Example code
Examples of how to use this Python library to accomplish various tasks can be found in the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/). It contains code examples for:
- Classification using fine-tuning
- Clustering
- Code search
- Customizing embeddings
- Question answering from a corpus of documents
- Recommendations
- Visualization of embeddings
- And more
Prior to July 2022, this OpenAI Python library hosted code examples in its examples folder, but since then all examples have been migrated to the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).
### Chat Completions
Conversational models such as `gpt-3.5-turbo` can be called using the chat completions endpoint.
```python
import openai
openai.api_key = "sk-..." # supply your API key however you choose
completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
print(completion.choices[0].message.content)
```
### Completions
Text models such as `text-davinci-003`, `text-davinci-002` and earlier (`ada`, `babbage`, `curie`, `davinci`, etc.) can be called using the completions endpoint.
```python
import openai
openai.api_key = "sk-..." # supply your API key however you choose
completion = openai.Completion.create(model="text-davinci-003", prompt="Hello world")
print(completion.choices[0].text)
```
### Embeddings
In the OpenAI Python library, an embedding represents a text string as a fixed-length vector of floating point numbers. Embeddings are designed to measure the similarity or relevance between text strings.
To get an embedding for a text string, you can use the embeddings method as follows in Python:
```python
import openai
openai.api_key = "sk-..." # supply your API key however you choose
# choose text to embed
text_string = "sample text"
# choose an embedding
model_id = "text-similarity-davinci-001"
# compute the embedding of the text
embedding = openai.Embedding.create(input=text_string, model=model_id)['data'][0]['embedding']
```
An example of how to call the embeddings method is shown in this [get embeddings notebook](https://github.com/openai/openai-cookbook/blob/main/examples/Get_embeddings.ipynb).
Examples of how to use embeddings are shared in the following Jupyter notebooks:
- [Classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Classification_using_embeddings.ipynb)
- [Clustering using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Clustering.ipynb)
- [Code search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Code_search.ipynb)
- [Semantic text search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Semantic_text_search_using_embeddings.ipynb)
- [User and product embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/User_and_product_embeddings.ipynb)
- [Zero-shot classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Zero-shot_classification_with_embeddings.ipynb)
- [Recommendation using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Recommendation_using_embeddings.ipynb)
For more information on embeddings and the types of embeddings OpenAI offers, read the [embeddings guide](https://beta.openai.com/docs/guides/embeddings) in the OpenAI documentation.
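Once you have embedding vectors back, similarity between two strings is typically measured with the cosine similarity of their vectors. A minimal sketch in plain Python (no API call, purely illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal directions score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```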
### Fine-tuning
Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and reduce the cost/latency of API calls (chiefly through reducing the need to include training examples in prompts).
Examples of fine-tuning are shared in the following Jupyter notebooks:
- [Classification with fine-tuning](https://github.com/openai/openai-cookbook/blob/main/examples/Fine-tuned_classification.ipynb) (a simple notebook that shows the steps required for fine-tuning)
- Fine-tuning a model that answers questions about the 2020 Olympics
- [Step 1: Collecting data](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-1-collect-data.ipynb)
- [Step 2: Creating a synthetic Q&A dataset](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-2-create-qa.ipynb)
- [Step 3: Train a fine-tuning model specialized for Q&A](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-3-train-qa.ipynb)
Sync your fine-tunes to [Weights & Biases](https://wandb.me/openai-docs) to track experiments, models, and datasets in your central dashboard with:
```bash
openai wandb sync
```
For more information on fine-tuning, read the [fine-tuning guide](https://beta.openai.com/docs/guides/fine-tuning) in the OpenAI documentation.
### Moderation
OpenAI provides a Moderation endpoint that can be used to check whether content complies with the OpenAI [content policy](https://platform.openai.com/docs/usage-policies)
```python
import openai
openai.api_key = "sk-..." # supply your API key however you choose
moderation_resp = openai.Moderation.create(input="Here is some perfectly innocuous text that follows all OpenAI content policies.")
```
See the [moderation guide](https://platform.openai.com/docs/guides/moderation) for more details.
## Image generation (DALL·E)
```python
import openai
openai.api_key = "sk-..." # supply your API key however you choose
image_resp = openai.Image.create(prompt="two dogs playing chess, oil painting", n=4, size="512x512")
```
## Audio transcription (Whisper)
```python
import openai
openai.api_key = "sk-..." # supply your API key however you choose
f = open("path/to/file.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", f)
```
## Async API
Async support is available in the API by prepending `a` to a network-bound method:
```python
import openai
openai.api_key = "sk-..." # supply your API key however you choose
async def create_chat_completion():
chat_completion_resp = await openai.ChatCompletion.acreate(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
```
To make async requests more efficient, you can pass in your own
`aiohttp.ClientSession`, but you must manually close the client session at the end
of your program/event loop:
```python
import openai
from aiohttp import ClientSession
openai.aiosession.set(ClientSession())
# At the end of your program, close the http session
await openai.aiosession.get().close()
```
See the [usage guide](https://platform.openai.com/docs/guides/images) for more details.
## Requirements
- Python 3.7.1+
In general, we want to support the versions of Python that our
customers are using. If you run into problems with any version
issues, please let us know on our [support page](https://help.openai.com/en/).
## Credit
This library is forked from the [Stripe Python Library](https://github.com/stripe/stripe-python).
| text/markdown | gensageai | support@gensei.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/open-lm/openai-python | null | >=3.7.1 | [] | [] | [] | [
"requests>=2.20",
"tqdm",
"typing_extensions; python_version < \"3.8\"",
"aiohttp",
"black~=21.6b0; extra == \"dev\"",
"pytest==6.*; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"numpy; extra == \"datalib\"",
"pandas>=1.2.3; extra == \"datalib\"",
"pand... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T18:33:50.417677 | gensageai-0.0.1.tar.gz | 56,538 | a8/5f/f473e6ee4cd656f483ee07b28b14fe1ba7c923117aa2fa779d4a82e07575/gensageai-0.0.1.tar.gz | source | sdist | null | false | fc0ba36b5dab1dfa1d2f6fba3b4ec016 | 30157d15306ca80cc2bb65290b91660e6690b150621c56e33ffdb7d95ff189e0 | a85ff473e6ee4cd656f483ee07b28b14fe1ba7c923117aa2fa779d4a82e07575 | null | [
"LICENSE"
] | 261 |
2.4 | jaseci | 2.2.17 | Jaseci - A complete AI-native programming ecosystem with Jac language, LLM integration, and full-stack web apps | # Jaseci
### Complete AI-Native Programming Ecosystem
Jaseci is a **meta-package** that provides a unified installation for the complete Jaseci ecosystem.
## What's Included
When you install `jaseci`, you automatically get:
- **jaclang** - The Jac programming language
- **byllm** - LLM integration for AI-native programming
- **jac-client** - Full-stack web applications with React-like components in Jac
## Installation
```bash
pip install jaseci
```
## Quick Start
After installation, you can start using Jac:
```bash
jac --help
```
## Documentation
- **Jac Language**: [https://www.jac-lang.org](https://www.jac-lang.org)
- **byLLM (AI Integration)**: [https://www.byllm.ai](https://www.byllm.ai)
- **Jac Client**: [https://docs.jaseci.org/jac-client/](https://docs.jaseci.org/jac-client/)
- **Jaseci Homepage**: [https://jaseci.org](https://jaseci.org)
- **GitHub Repository**: [https://github.com/Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci)
| text/markdown | null | Jason Mars <jason@mars.ninja> | null | Jason Mars <jason@mars.ninja> | null | jac, jaclang, jaseci, ai, llm, full-stack, web-apps, jac-client, jac-scale, programming-language, machine-learning, artificial-intelligence | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"jaclang>=0.10.3",
"byllm>=0.4.21",
"jac-client>=0.2.19",
"jac-scale>=0.1.10",
"jac-super>=0.1.4"
] | [] | [] | [] | [
"Repository, https://github.com/Jaseci-Labs/jaseci",
"Homepage, https://jaseci.org",
"Documentation, https://jac-lang.org"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:33:47.663612 | jaseci-2.2.17.tar.gz | 1,953 | e7/a2/1387d799ac6d1789c15b55e21654fa2ff54b27abdf34d82bf7b244b41b28/jaseci-2.2.17.tar.gz | source | sdist | null | false | 5e6fe3fc5c34663b5e82355e465a772c | a0bb086a0787f199e0464e9d98f832fc7a05facfe31c79bf308cbc1f9ca59480 | e7a21387d799ac6d1789c15b55e21654fa2ff54b27abdf34d82bf7b244b41b28 | MIT | [] | 289 |
2.4 | byllm | 0.4.21 | byLLM Provides Easy to use APIs for different LLM Providers to be used with Jaseci's Jaclang Programming Language. | <div align="center">
<img src="https://byllm.jaseci.org/logo.png" height="150">
[About byLLM] | [Quick Start & Tutorials] | [Full Reference] | [Research Paper]
</div>
[About byLLM]: https://www.byllm.ai
[Quick Start & Tutorials]: https://docs.jaseci.org/tutorials/ai/quickstart/
[Full Reference]: https://docs.jaseci.org/reference/plugins/byllm/
[Research Paper]: https://dl.acm.org/doi/10.1145/3763092
# byLLM : Prompt Less, Smile More!
[](https://pypi.org/project/byllm/) [](https://pypi.org/project/byllm/) [](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml) [](https://discord.gg/6j3QNdtcN6)
byLLM is an innovative AI integration framework built for the Jaseci ecosystem, implementing the cutting-edge Meaning Typed Programming (MTP) paradigm. MTP revolutionizes AI integration by embedding prompt engineering directly into code semantics, making AI interactions more natural and maintainable. While primarily designed to complement the Jac programming language, byLLM also provides a powerful Python library interface.
Installation is simple via PyPI:
```bash
pip install byllm
```
## Basic Example
Consider building an application that translates English to other languages using an LLM. This can be built simply as follows:
```jac
import from byllm.lib { Model }
glob llm = Model(model_name="gpt-4o");
def translate_to(language: str, phrase: str) -> str by llm();
with entry {
output = translate_to(language="Welsh", phrase="Hello world");
print(output);
}
```
This simple piece of code replaces traditional prompt engineering without introducing additional complexity.
## Power of Types with LLMs
Consider a program that detects the personality type of a historical figure from their name. This can be built so that the LLM picks from an enum and the output strictly adheres to this type.
```jac
import from byllm.lib { Model }
glob llm = Model(model_name="gemini/gemini-2.0-flash");
enum Personality {
INTROVERT, EXTROVERT, AMBIVERT
}
def get_personality(name: str) -> Personality by llm();
with entry {
name = "Albert Einstein";
result = get_personality(name);
print(f"{result} personality detected for {name}");
}
```
> Similarly, custom types can be used as output types which force the LLM to adhere to the specified type and produce a valid result.
## Control! Control! Control!
Even though we eliminate prompt engineering entirely, we provide specific ways to enrich code semantics through **docstrings** and **semstrings**.
```jac
"""Represents the personal record of a person"""
obj Person {
has name: str;
has dob: str;
has ssn: str;
}
sem Person.name = "Full name of the person";
sem Person.dob = "Date of Birth";
sem Person.ssn = "Last four digits of the Social Security Number of a person";
"""Calculate eligibility for various services based on person's data."""
def check_eligibility(person: Person, service_type: str) -> bool by llm();
```
Docstrings naturally enhance the semantics of their associated code constructs, while the `sem` keyword provides an elegant way to enrich the meaning of class attributes and function arguments. Our research shows these concise semantic strings are more effective than traditional multi-line prompts.
## Configuration
### Project-wide Configuration (jac.toml)
Configure byLLM behavior globally using `jac.toml`:
```toml
[plugins.byllm]
system_prompt = "You are a helpful assistant..."
[plugins.byllm.model]
default_model = "gpt-4o-mini"
[plugins.byllm.call_params]
temperature = 0.7
```
This enables centralized control over:
- System prompts across all LLM calls
- Default model selection
- Common parameters like temperature
### Custom Model Endpoints
Connect to custom or self-hosted models:
```jac
import from byllm.lib { Model }
glob llm = Model(
model_name="custom-model",
config={
"api_base": "https://your-endpoint.com/v1/chat/completions",
"api_key": "your_key",
"http_client": True
}
);
```
## How well does byLLM work?
byLLM is built on the underlying principle of Meaning Typed Programming, and we have evaluated it against two comparable AI integration frameworks for Python, DSPy and LMQL. byLLM shows significant performance gains over LMQL and on-par or better performance than DSPy, while reducing developer complexity by up to 10x.
**Full Documentation**: [Jac byLLM Documentation](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
**Complete Examples**: [Jac Examples Gallery](https://docs.jaseci.org/tutorials/examples/)
**Research**: The research journey of MTP is available on [ACM Digital Library](https://dl.acm.org/doi/10.1145/3763092) and published at OOPSLA 2025.
## Quick Links
- [Getting Started Guide](https://www.jac-lang.org/learn/jac-byllm/quickstart/)
- [Jac Language Documentation](https://www.jac-lang.org/)
- [GitHub Repository](https://github.com/jaseci-labs/jaseci)
## Contributing
We welcome contributions to byLLM! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
Areas we actively seek contributions:
- Bug fixes and improvements
- Documentation enhancements
- New examples and tutorials
- Test cases and benchmarks
Please see our [Contributing Guide](https://www.jac-lang.org/internals/contrib/) for detailed instructions.
If you find a bug or have a feature request, please [open an issue](https://github.com/jaseci-labs/jaseci/issues/new/choose).
## Community
Join our vibrant community:
- [Discord Server](https://discord.gg/6j3QNdtcN6) - Chat with the team and community
## License
This project is licensed under the MIT License.
### Third-Party Dependencies
byLLM integrates with various LLM providers (OpenAI, Anthropic, Google, etc.) through LiteLLM.
## Cite our research
> Jayanaka L. Dantanarayana, Yiping Kang, Kugesan Sivasothynathan, Christopher Clarke, Baichuan Li, Savini Kashmira, Krisztian Flautner, Lingjia Tang, and Jason Mars. 2025. MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming. Proc. ACM Program. Lang. 9, OOPSLA2, Article 314 (October 2025), 29 pages. [https://doi.org/10.1145/3763092](https://dl.acm.org/doi/10.1145/3763092)
## Jaseci Contributors
<a href="https://github.com/jaseci-labs/jaseci/graphs/contributors">
<img src="https://contrib.rocks/image?repo=jaseci-labs/jaseci" />
</a>
| text/markdown | null | Jason Mars <jason@mars.ninja> | null | Jason Mars <jason@mars.ninja> | null | llm, jaclang, jaseci, byLLM | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"jaclang>=0.10.3",
"litellm<1.80.0,>=1.75.5.post1",
"loguru<0.8.0,>=0.7.2",
"pillow<10.5.0,>=10.4.0",
"pytest>=8.3.2; extra == \"dev\"",
"wikipedia; extra == \"tools\"",
"opencv-python-headless; extra == \"video\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T18:32:48.482060 | byllm-0.4.21.tar.gz | 105,194 | cf/ad/ed884ad1ecaff97b3a451ce5a1044e4aeb2800553b09967fe4d32af034b6/byllm-0.4.21.tar.gz | source | sdist | null | false | 73fa2461568865c235e35621b8b8f702 | 4d3cc11e7dd06be3a8f20629c079eefbd0bb285d38d3e14017fcb460bcddcf27 | cfaded884ad1ecaff97b3a451ce5a1044e4aeb2800553b09967fe4d32af034b6 | MIT | [] | 1,059 |
2.4 | reconciler-for-ynab | 0.1.0 | Reconciler for YNAB - Reconcile YNAB transactions to reach a target balance | # reconciler-for-ynab
[](https://results.pre-commit.ci/latest/github/mxr/reconciler-for-ynab/main)
Reconciler for YNAB - Reconcile YNAB transactions from the CLI
## What This Does
When YNAB imports your transactions and your balances are in sync, reconciliation is a simple one-click process. But sometimes there’s a mismatch, and hunting it down is tedious. I was frustrated with going line-by-line through records to find which transactions should be cleared and reconciled, so I wrote this tool. It streamlines the process by finding which transactions should be reconciled to match a target balance. It will either output the transactions to reconcile, or reconcile them automatically through the [YNAB API](https://api.ynab.com/).
Suppose I want to automatically reconcile my credit card ending in 1234 to \$1,471.32. I can do that as follows:
```console
$ reconciler-for-ynab --reconcile --account-name-regex 'credit.+1234' --target 1471.32
** Refreshing SQLite DB **
Fetching budget data...
Budget Data: 100%|███████████████████████████████████████████████████████| 10/10 [00:00<00:00, 52.24it/s]
Done
Inserting budget data...
Payees: 100%|████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 2252.93it/s]
Transactions: 100%|███████████████████████████████████████████████████| 14/14 [00:00<00:00, 10605.07it/s]
Done
** Done **
[Credit Card]: Testing combinations: 100%|██████████████████████████| 32/32 [00:00<00:00, 1065220.06it/s]
[Credit Card] Match found:
[Credit Card] * $3.04 - Starbucks
[Credit Card] * $45.14 - Caffe Panna
[Credit Card] Reconciling: 100%|███████████████████████████████████████████| 2/2 [00:00<00:00, 11.76it/s]
[Credit Card] Done
```
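Under the hood, matching a target balance is essentially a subset-sum search over uncleared transactions; the "Testing combinations: 32/32" above corresponds to enumerating the 2^5 subsets of five candidate transactions. A simplified sketch of the idea (not the tool's actual implementation; amounts are in cents to avoid float rounding):

```python
from itertools import combinations

def find_matching_subset(uncleared, cleared_balance, target):
    """Brute-force search for a subset of uncleared transaction amounts
    whose sum closes the gap between the cleared balance and the target."""
    gap = target - cleared_balance
    for r in range(len(uncleared) + 1):
        for combo in combinations(uncleared, r):
            if sum(combo) == gap:
                return combo
    return None

# Cleared balance $1,423.14, target $1,471.32 -> gap of $48.18
subset = find_matching_subset([304, 4514, 1250], 142314, 147132)
print(subset)  # (304, 4514) -> $3.04 + $45.14
```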
## Installation
```console
$ pip install reconciler-for-ynab
```
## Usage
### Token
Provision a [YNAB Personal Access Token](https://api.ynab.com/#personal-access-tokens) and save it as an environment variable.
```console
$ export YNAB_PERSONAL_ACCESS_TOKEN="..."
```
### Quickstart
Run the tool from the terminal to print out the transactions:
```console
$ reconciler-for-ynab --account-name-regex 1234 --target 500.30
```
Run it again with `--reconcile` to reconcile the account.
```console
$ reconciler-for-ynab --account-name-regex 1234 --target 500.30 --reconcile
```
You can set `--mode` to `batch` to process multiple accounts at once:
```console
$ reconciler-for-ynab --reconcile --mode batch --account-target-pairs 'Checking=500' 'Credit=290'
[Checking]: Testing combinations: 100%|████████████████████████| 32/32 [00:00<00:00, 800000.00it/s]
[Checking] Match found:
[Checking] * $10.00 - Payee
[Checking] * $20.00 - Payee
[Checking] Reconciling: 100%|████████████████████████████████████████| 2/2 [00:00<00:00, 20.00it/s]
[Checking] Done
[Credit Card]: Testing combinations: 100%|█████████████████████| 32/32 [00:00<00:00, 800000.00it/s]
[Credit Card] Match found:
[Credit Card] * $10.00 - Payee
[Credit Card] * $20.00 - Payee
[Credit Card] Reconciling: 100%|█████████████████████████████████████| 2/2 [00:00<00:00, 20.00it/s]
[Credit Card] Done
Batch reconciling done.
```
### All Options
```console
$ reconciler-for-ynab --help
usage: reconciler-for-ynab [-h] [--mode {single,batch}] [--account-name-regex ACCOUNT_NAME_REGEX]
[--target TARGET]
[--account-target-pairs ACCOUNT_TARGET_PAIRS [ACCOUNT_TARGET_PAIRS ...]]
[--reconcile] [--sqlite-export-for-ynab-db SQLITE_EXPORT_FOR_YNAB_DB]
[--sqlite-export-for-ynab-full-refresh] [--version]
options:
-h, --help show this help message and exit
--mode {single,batch}
Reconciliation mode. `single` uses --account-name-regex/--target. `batch`
uses --account-target-pairs.
--account-name-regex ACCOUNT_NAME_REGEX
Regex to match account name (must match exactly one account)
--target TARGET Target balance to match towards for reconciliation
--account-target-pairs ACCOUNT_TARGET_PAIRS [ACCOUNT_TARGET_PAIRS ...]
Batch mode only. Account regex/target pairs in `ACCOUNT_NAME_REGEX=TARGET`
format (example: `Checking=500.30`).
--reconcile Whether to actually perform the reconciliation - if unset, this tool only
prints the transactions that would be reconciled
--sqlite-export-for-ynab-db SQLITE_EXPORT_FOR_YNAB_DB
Path to sqlite-export-for-ynab SQLite DB file (respects sqlite-export-for-
ynab configuration)
--sqlite-export-for-ynab-full-refresh
Whether to **DROP ALL TABLES** and fetch all budget data again. If unset,
this tool only does an incremental refresh
--version show program's version number and exit
```
| text/markdown | Max R | mxr@users.noreply.github.com | null | null | MIT | ynab, budget, reconcile, cli | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | https://github.com/mxr/reconciler-for-ynab | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3",
"babel",
"sqlite-export-for-ynab>=1.4.2",
"tldm>=1.0.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-18T18:32:28.001690 | reconciler_for_ynab-0.1.0.tar.gz | 12,164 | 80/7b/528bbf1ded708d1389e5ff7b5ce6e8d9c7d3eef7f7e42090e45643d49154/reconciler_for_ynab-0.1.0.tar.gz | source | sdist | null | false | bdaa394a412c2efc41f82cc4c7367be9 | 49fac0d576140717c0bff574a7b24de7a324a14d99f4a1a5a1e235e27aee39e2 | 807b528bbf1ded708d1389e5ff7b5ce6e8d9c7d3eef7f7e42090e45643d49154 | null | [
"LICENSE"
] | 238 |
2.4 | robyn-config | 0.4.0 | A powerful CLI tool to bootstrap and manage production-ready Robyn applications with best practices built-in. | # robyn-config
[](https://pepy.tech/project/robyn-config)
[](https://badge.fury.io/py/robyn-config)
[](https://github.com/Lehsqa/robyn-config/blob/main/LICENSE)

[](https://deepwiki.com/Lehsqa/robyn-config)
`robyn-config` is a comprehensive CLI tool designed to bootstrap and manage [Robyn](https://robyn.tech) applications. It streamlines your development workflow by generating production-ready project structures and automating repetitive tasks, allowing you to focus on building your business logic.
Think of it as the essential companion for your Robyn projects, handling everything from initial setup with best practices to injecting new feature components as your application grows.
## 📦 Installation
You can simply use Pip for installation.
```bash
pip install robyn-config
```
## 🤖 AI Agent Skills
`robyn-config` also supports AI agent skills, which let agents apply reusable project-specific workflows and guidance.
To add the Robyn Config skills pack, run:
```bash
npx skills add Lehsqa/robyn-config-skills
```
## 🤔 Usage
### 🚀 Create a Project
To bootstrap a new project with your preferred architecture and ORM, run:
```bash
# Create a DDD project with SQLAlchemy (uses uv by default)
robyn-config create my-service --orm sqlalchemy --design ddd ./my-service
```
```bash
# Create an MVC project with Tortoise ORM, locking with poetry
robyn-config create newsletter --orm tortoise --design mvc --package-manager poetry ~/projects/newsletter
```
```bash
# Launch the interactive create UI
robyn-config create -i
```
Interactive mode defaults destination to `.` and lets you edit all fields
before confirmation. If you pass flags (for example `--orm tortoise`),
those values are prefilled in the form and still editable.
### ➕ Add Business Logic
Once inside a project, you can easily add new entities (models, routes, repositories, etc.) using the `add` command. This automatically generates all necessary files and wiring based on your project's architecture.
```bash
# Add a 'product' entity to your project
cd my-service
robyn-config add product
```
This will:
- Generate models/tables.
- Create repositories.
- Setup routes and controllers.
- Register everything in the app configuration.
- Respect your configured paths: `add` reads injection targets from `[tool.robyn-config.add]` in `pyproject.toml` (e.g., domain/operational/presentation paths for DDD or views/repository/urls for MVC). You can customize those paths before running `add` to steer where new code is written.
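For example, a DDD project might carry an injection-target table like the following in `pyproject.toml` (the key names and paths here are illustrative only; check the `[tool.robyn-config.add]` table generated in your own project for the exact schema):

```toml
# Hypothetical sketch -- adjust to the schema your generated project uses.
[tool.robyn-config.add]
domain = "src/domain"
operational = "src/operational"
presentation = "src/presentation"
```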
### 🏃 CLI Options
```
Usage: robyn-config [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
add Add new business logic to an existing robyn-config project.
create Copy the template into destination with specific configurations.
```
**`create` command options:**
- `name`: Sets the project name used in templated files like
`pyproject.toml` and `README.md`. Required unless `-i` is used.
- `-i`, `--interactive`: Launch a Textual terminal UI to fill create
options interactively.
- `--orm`: Selects the database layer. Options: `sqlalchemy` (default), `tortoise`.
- `--design`: Toggles between the architecture templates. Options: `ddd` (default), `mvc`.
- `--package-manager`: Choose how dependencies are locked/installed. Options: `uv` (default), `poetry`.
- `destination`: The target directory. Defaults to `.` (including in
interactive mode).
**`add` command options:**
- `name`: The name of the entity/feature to add (e.g., `user`, `order-item`).
- `project_path`: Path to the project root. Defaults to current directory.
## 🐍 Python Version Support
`robyn-config` is compatible with the following Python versions:
> Python >= 3.11
Please make sure you have the correct version of Python installed before starting to use this project.
## 💡 Features
- **Rapid Scaffolding**: Instantly generate robust, production-ready Robyn backend projects.
- **Integrated Component Management**: Use the CLI to inject models, routes, and repositories into your existing architecture, ensuring consistency and best practices.
- **Architectural Flexibility**: Native support for **Domain-Driven Design (DDD)** and **Model-View-Controller (MVC)** patterns.
- **ORM Choice**: Seamless integration with **SQLAlchemy** or **Tortoise ORM**.
- **Package Manager choice**: Lock/install via **uv** (default) or **poetry**, with fresh lock files generated in quiet mode.
- **Resilient operations**: `create` cleans up generated files if it fails; `add` rolls back using a temporary backup to keep your project intact.
- **Production Ready**: Includes Docker, Docker Compose, and optimized configurations out of the box.
- **DevEx**: Pre-configured with `ruff`, `pytest`, `black`, and `mypy` for a superior development experience.
- **AI Agent Skills**: Installable skills support for AI agents to streamline specialized workflows.
## 🗒️ How to contribute
### 🏁 Get started
Feel free to open an issue for any clarifications or suggestions.
### ⚙️ To Develop Locally
#### Prerequisites
- Python >= 3.11
- `uv` (recommended) or `pip`
#### Setup
1. Clone the repository:
```bash
git clone https://github.com/Lehsqa/robyn-config.git
```
2. Setup a virtual environment and install dependencies:
```bash
uv venv && source .venv/bin/activate
uv pip install -e .[dev]
```
3. Run linters and tests:
```bash
make check
```
## ✨ Special thanks
Special thanks to the [Robyn](https://github.com/sparckles/Robyn) team for creating such an amazing framework!
| text/markdown | null | Leshqa <slav4ik77777@gmail.com> | null | null | MIT | null | [] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"click>=8.1.7",
"Jinja2>=3.1",
"textual>=0.85.0",
"ruff>=0.6.0; extra == \"dev\"",
"black>=24.10.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"httpx>=0.27.0; extra == \"dev\"",
"build>=1.2.1; extra == \"dev\"",
"twine>=1.6.0; extra == \"dev\"",
"Sphinx>=8.0.0; extra == \"dev\"",
"sph... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:32:06.320158 | robyn_config-0.4.0.tar.gz | 77,319 | 64/23/c7ffe5a19dc155530e50723528b9625c3685a6da306b3479ca0b62d75303/robyn_config-0.4.0.tar.gz | source | sdist | null | false | c8a08579d2154347b7974f230d57ef2c | 02cd4fe2b84f58d05991f20e0c5bbbc12b712209fd16494e5a25f1f067a40f0a | 6423c7ffe5a19dc155530e50723528b9625c3685a6da306b3479ca0b62d75303 | null | [
"LICENSE"
] | 236 |
2.4 | dvsim | 1.10.0 | DV system | <!--
# Copyright lowRISC contributors (OpenTitan project).
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
-->
# DVSim

## About the project
[OpenTitan](https://opentitan.org) is an open source silicon Root of Trust (RoT) project.
OpenTitan will make the silicon RoT design and implementation more transparent, trustworthy, and secure for enterprises, platform providers, and chip manufacturers.
OpenTitan is administered by [lowRISC CIC](https://www.lowrisc.org) as a collaborative project to produce high quality, open IP for instantiation as a full-featured product.
See the [OpenTitan site](https://opentitan.org) and [OpenTitan docs](https://opentitan.org/book/) for more information about the project.
## About this repository
This repository contains **DVSim**, a build-and-run system written in Python that runs a variety of EDA tool flows.
There are multiple steps involved in running EDA tool flows.
DVSim encapsulates them all to provide a single, standardized command-line interface to launch them.
While DVSim was written to support OpenTitan, it can be used for any ASIC project.
All EDA tool flows on OpenTitan are launched using the DVSim tool.
The following flows are currently supported:
* Simulations
* Coverage Unreachability Analysis (UNR)
* Formal (formal property verification (FPV), and connectivity)
* Lint (semantic and stylistic)
* Synthesis
* CDC
* RDC
### Installation
#### Using nix and direnv
If you have [Nix](https://nixos.org/download/) and [direnv](https://direnv.net/) installed, then it's as simple as `direnv allow .`.
New to Nix? Perhaps check out this [installer](https://determinate.systems/posts/determinate-nix-installer/), which enables flakes by default.
#### Using uv directly
The recommended way of installing DVSim is inside a virtual environment to isolate the dependencies from your system python install.
We use the `uv` tool for python dependency management and creating virtual environments.
First make sure you have `uv` installed, see the [installation documentation](https://docs.astral.sh/uv/getting-started/installation/) for details and alternative installation methods.
There is a Python package that can be installed with `pip install uv`; however, the standalone installer is preferred.
##### macOS and Linux
```console
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv
uv sync
```
From there you can run the `dvsim` tool.
##### Windows (Powershell)
```console
irm https://astral.sh/uv/install.ps1 | iex
uv venv
uv sync
```
From there you can run the `dvsim` tool.
### Using DVSim
For further information on how to use DVSim with OpenTitan, see [Getting Started](https://opentitan.org/book/doc/getting_started/index.html).
## History
DVSim development started in the main OpenTitan repository under `utils/dvsim`.
This repository contains the code that originally lived there, as well as the full git history copied over as generated by `git subtree split -P utils/dvsim`.
## Documentation
The project contains comprehensive documentation of all IPs and tools.
You can access it [online at opentitan.org/book/](https://opentitan.org/book/).
### Other related documents
* [Testplanner tool](./doc/testplanner.md)
* [Design document](./doc/design_doc.md)
* [Glossary](./doc/glossary.md)
## How to contribute
Have a look at [CONTRIBUTING](https://github.com/lowRISC/opentitan/blob/master/CONTRIBUTING.md) and our [documentation on project organization and processes](https://opentitan.org/book/doc/project_governance/README.md) for guidelines on how to contribute code to this repository.
## Licensing
Unless otherwise noted, everything in this repository is covered by the Apache License, Version 2.0 (see [LICENSE](https://github.com/lowRISC/opentitan/blob/master/LICENSE) for full text).
## Bugs
See the [issue tracker](https://github.com/lowRISC/dvsim/issues) for a list of open bugs and feature requests.
| text/markdown | lowRISC contributors (OpenTitan project) | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.7",
"enlighten>=1.12.4",
"gitpython>=3.1.45",
"hjson>=3.1.0",
"jinja2>=3.1.6",
"logzero>=1.7.0",
"psutil>=7.2.2",
"pydantic>=2.9.2",
"pyyaml>=6.0.2",
"tabulate>=0.9.0",
"toml>=0.10.2",
"gitpython; extra == \"ci\"",
"pyhamcrest>=2.1.0; extra == \"ci\"",
"pyright>=1.1.381; extra ... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T18:31:18.831534 | dvsim-1.10.0.tar.gz | 441,092 | f5/5a/079b187fafdec246c5b861b2c5842d1cf622363cdd7dcb1b5a74d112e220/dvsim-1.10.0.tar.gz | source | sdist | null | false | 383df2d7ed532a316115b11fcb3b054b | 4073a41589a04feded7d0607d9ec4d8f8d01d00d80e40565d0ad628e9cce4abd | f55a079b187fafdec246c5b861b2c5842d1cf622363cdd7dcb1b5a74d112e220 | Apache-2.0 | [
"LICENSE"
] | 268 |
2.4 | awesome-llm-apps-mcp | 0.1.0 | MCP server providing AI agent inspiration from the awesome-llm-apps collection | # awesome-llm-apps-mcp
An MCP (Model Context Protocol) server that gives AI assistants access to **160+ AI agent examples** from the [awesome-llm-apps](https://github.com/Shubhamsaboo/awesome-llm-apps) collection.
Ask "I want to build a finance agent" and get relevant agent ideas, architecture patterns, tech stacks, and actual source code — directly inside Claude, Cursor, or any MCP-compatible tool.
## Features
- **Search agents** by keyword, use case, or technology
- **Browse categories**: Starter agents, advanced multi-agent teams, RAG, voice, MCP, and more
- **Get inspiration**: Recommended tech stacks and architecture patterns for your use case
- **Read source code**: Full Python implementations you can study and adapt
- **Zero config**: No API keys, no databases — just install and go
## Installation
```bash
pip install awesome-llm-apps-mcp
```
Or run directly with `uvx`:
```bash
uvx awesome-llm-apps-mcp
```
## Configuration
### Claude CLI
```bash
claude mcp add awesome-llm-apps -- uvx awesome-llm-apps-mcp
```
### Cursor (`.cursor/mcp.json`)
```json
{
"mcpServers": {
"awesome-llm-apps": {
"command": "uvx",
"args": ["awesome-llm-apps-mcp"]
}
}
}
```
### Claude Desktop (`claude_desktop_config.json`)
```json
{
"mcpServers": {
"awesome-llm-apps": {
"command": "uvx",
"args": ["awesome-llm-apps-mcp"]
}
}
}
```
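As a quick sanity check, the configuration above can be validated with the standard library — the file path varies by OS; only the structure matters here:

```python
import json

# The same JSON shown in the Claude Desktop / Cursor sections above.
config_text = """
{
  "mcpServers": {
    "awesome-llm-apps": {
      "command": "uvx",
      "args": ["awesome-llm-apps-mcp"]
    }
  }
}
"""

config = json.loads(config_text)
server = config["mcpServers"]["awesome-llm-apps"]

# Verify the server entry launches the package via uvx.
assert server["command"] == "uvx"
assert server["args"] == ["awesome-llm-apps-mcp"]
print("config OK")
```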
## Tools
### `search_ai_agents`
Find agents by keyword or use case. Returns lightweight summaries with relevance scores.
```
search_ai_agents("finance agent")
search_ai_agents("RAG with ChromaDB", category="rag_tutorials")
search_ai_agents("multi-agent team", limit=5)
```
### `get_agent_details`
Get full documentation, features, and tech stack for a specific agent.
```
get_agent_details("starter-ai-agents-xai-finance-agent")
```
### `get_agent_source_code`
Get the actual Python source files for an agent.
```
get_agent_source_code("starter-ai-agents-xai-finance-agent")
```
### `list_agent_categories`
Browse all categories with agent counts.
```
list_agent_categories()
```
**Categories:**
- `starter_ai_agents` — Single-agent apps with focused functionality
- `advanced_ai_agents` — Multi-agent systems and complex applications
- `rag_tutorials` — Retrieval-Augmented Generation implementations
- `advanced_llm_apps` — Advanced LLM application patterns
- `ai_agent_framework_crash_course` — Framework tutorials (Google ADK, OpenAI SDK)
- `awesome_agent_skills` — Reusable agent skill definitions
- `mcp_ai_agents` — Model Context Protocol agent implementations
- `voice_ai_agents` — Voice/audio-based agent applications
### `get_agent_inspiration`
Get high-level suggestions for a use case, including recommended tech stack and architecture.
```
get_agent_inspiration("a customer support chatbot with RAG")
```
## Development
```bash
git clone <repo>
cd agent-mcp
pip install -e ".[dev]"
pytest tests/
```
### Rebuilding the catalog
If you want to rebuild `catalog.json` from a local copy of `awesome-llm-apps`:
```bash
python scripts/build_catalog.py /path/to/awesome-llm-apps
```
### Testing with MCP Inspector
```bash
npx @modelcontextprotocol/inspector awesome-llm-apps-mcp
```
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T18:30:38.520348 | awesome_llm_apps_mcp-0.1.0.tar.gz | 705,966 | 26/2c/a35d12d2988b37c0df5301a477583a759105a7826c6d0e12a82aecaf945e/awesome_llm_apps_mcp-0.1.0.tar.gz | source | sdist | null | false | 72be02349a973f89b766ccee03e0e760 | e55906cece468bd6eab7909178255a983da44dd70b8acc42806cc9ce5686b797 | 262ca35d12d2988b37c0df5301a477583a759105a7826c6d0e12a82aecaf945e | MIT | [] | 267 |