metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | letsping | 0.1.2 | The Human-in-the-Loop SDK for AI Agents | # LetsPing Python SDK
[](https://badge.fury.io/py/letsping)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/letsping/)
The official state management infrastructure for Human-in-the-Loop (HITL) AI agents.
LetsPing provides a durable "pause button" for autonomous agents, decoupling the agent's execution logic from the human's response time. It handles state serialization, secure polling, and notification routing (Slack, Email) automatically.
## Installation
```bash
pip install letsping
```
## Configuration
Set your API key as an environment variable (recommended) or pass it directly.
```bash
export LETSPING_API_KEY="lp_live_..."
```
## Usage
### 1. The "Ask" Primitive (Blocking)
Use this when you want to pause a script until a human approves.
```python
from letsping import LetsPing
client = LetsPing()
# Pauses here for up to 24 hours (default)
decision = client.ask(
service="payments-agent",
action="transfer_funds",
payload={
"amount": 5000,
"currency": "USD",
"recipient": "acct_99"
},
priority="critical"
)
# Execution resumes only after approval
print(f"Transfer approved by {decision['metadata']['actor_id']}")
```
### 2. Async / Non-Blocking (FastAPI/LangGraph)
For high-concurrency environments or event loops.
```python
import asyncio
from letsping import LetsPing
async def main():
client = LetsPing()
# Non-blocking wait
decision = await client.aask(
service="github-agent",
action="merge_pr",
payload={"pr_id": 42},
timeout=3600 # 1 hour timeout
)
asyncio.run(main())
```
### 3. LangChain / Agent Integration
LetsPing provides a compliant tool interface that can be injected directly into LLM agent toolkits (LangChain, CrewAI, etc). This allows the LLM to *decide* when to ask for help.
```python
from letsping import LetsPing
client = LetsPing()
tools = [
# ... your other tools (search, calculator) ...
# Inject the human as a tool
client.tool(
service="research-agent",
action="review_draft",
priority="high"
)
]
```
## Error Handling
The SDK uses typed exceptions for control flow.
* `ApprovalRejectedError`: Raised when the human explicitly clicks "Reject".
* `TimeoutError`: Raised when the duration (default 24h) expires without a decision.
* `LetsPingError`: Base class for API or network failures.
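The typed exceptions are designed for `try`/`except` control flow. The sketch below uses stand-in class definitions that mirror the documented names — it is not the SDK's actual source, and it assumes `TimeoutError` refers to the Python builtin:

```python
class LetsPingError(Exception):
    """Stand-in for the SDK's base error class."""

class ApprovalRejectedError(LetsPingError):
    """Stand-in: the human explicitly rejected the request."""

def run_transfer(ask):
    # 'ask' is any callable that returns a decision dict or raises.
    try:
        decision = ask()
    except ApprovalRejectedError:
        return "rejected"
    except TimeoutError:
        return "expired"
    except LetsPingError:
        return "error"
    return f"approved by {decision['metadata']['actor_id']}"

# Simulated outcomes in place of a real client.ask(...) call:
def approve():
    return {"metadata": {"actor_id": "user_42"}}

def reject():
    raise ApprovalRejectedError()

print(run_transfer(approve))  # approved by user_42
print(run_transfer(reject))   # rejected
```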
## License
MIT | text/markdown | null | LetsPing Team <hello@letsping.co> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.23.0"
] | [] | [] | [] | [
"Homepage, https://letsping.co",
"Repository, https://github.com/CordiaLabs/letsping"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:20:19.131498 | letsping-0.1.2.tar.gz | 7,003 | f6/bb/9dba7b820a250ad766f525a24349d5c3ed3dd2e12d2bf81bb8e5b5ea6dbc/letsping-0.1.2.tar.gz | source | sdist | null | false | c8128473c2fcb96cca2e751f4f39c0df | b8aa6e2f9ac33d77ca914ae36735df68f6580dcb9d6a21cba0ee893c606affc3 | f6bb9dba7b820a250ad766f525a24349d5c3ed3dd2e12d2bf81bb8e5b5ea6dbc | null | [
"LICENSE"
] | 247 |
2.4 | xcube-resampling | 0.3.0 | A package to resample, reproject, and rectify geospatial datasets. | # xcube-resampling
[](https://github.com/xcube-dev/xcube-resampling/actions/workflows/unit-tests.yml)
[](https://codecov.io/gh/xcube-dev/xcube-resampling)
[](https://pypi.org/project/xcube-resampling/)
[](https://anaconda.org/conda-forge/xcube-resampling)
[](https://anaconda.org/conda-forge/xcube-resampling)
[](https://github.com/psf/black)
**xcube-resampling** provides efficient algorithms for transforming datasets into
different spatial grids and temporal scales. It is designed for geospatial workflows that need
flexible resampling and reprojection, and provides both upsampling and downsampling in the
spatial and temporal domains.
### ✨ Features
- #### Spatial Resampling
- **Affine resampling** – simple resampling using affine transformations
- **Reprojection** – convert datasets between different coordinate reference systems (CRS)
- **Rectification** – transform irregular grids into regular, well-structured grids
- #### Temporal Resampling
- **Time-based resampling** – upsample or downsample data along the time dimension
All methods work seamlessly with chunked (lazily loaded) [xarray.Datasets](https://docs.xarray.dev/en/stable/generated/xarray.Dataset.html) and are powered by [Dask](https://www.dask.org/) for scalable, out-of-core computation.
### ⚡ Lightweight & Independent
The package is independent of the core *xcube* framework and has minimal dependencies:
`affine, dask, dask-image, numba, numpy, pyproj, xarray, zarr`.
Find out more in the [xcube-resampling Documentation](https://xcube-dev.github.io/xcube-resampling/).
| text/markdown | xcube Development Team | null | null | null | MIT | xcube, xarray, dask, reprojection, rectification, affine transformation, parallel processing | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Scientific/Engineering",
"Typing :: Typed",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"affine>=2.2",
"dask>=2021.6",
"dask-image>=0.6",
"numba>=0.52",
"numpy>=1.16",
"pyproj>=3.0",
"xarray>=2024.7",
"zarr<3,>=2.11",
"build; extra == \"dev\"",
"hatch; extra == \"dev\"",
"twine; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"jupyterlab; extra == \"dev\"",
"matplotlib; extra == \"dev\"",
"mkdocs; extra == \"doc\"",
"mkdocs-autorefs; extra == \"doc\"",
"mkdocs-material; extra == \"doc\"",
"mkdocs-jupyter; extra == \"doc\"",
"mkdocstrings; extra == \"doc\"",
"mkdocstrings-python; extra == \"doc\""
] | [] | [] | [] | [
"Documentation, https://github.com/xcube-dev/xcube-resampling",
"Repository, https://github.com/xcube-dev/xcube-resampling",
"Changelog, https://github.com/xcube-dev/xcube-resampling/blob/main/CHANGES.md",
"Issues, https://github.com/xcube-dev/xcube-resampling/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:20:06.070606 | xcube_resampling-0.3.0.tar.gz | 69,506 | d8/db/505c0fccd80a874480b3b56e3a247e68dce30396c630f1f2b92787e3b339/xcube_resampling-0.3.0.tar.gz | source | sdist | null | false | 1b1195a8225ae1133ffe920e2d99bf5c | 6044ad81186fe5b6965ff593defd1e291b13441e7365c851735f67a316536d01 | d8db505c0fccd80a874480b3b56e3a247e68dce30396c630f1f2b92787e3b339 | null | [
"LICENSE"
] | 291 |
2.4 | mintalib | 0.0.28 | Minimal Technical Analysis Library for Python | # Minimal Technical Analysis Library for Python
This package offers a curated list of technical analysis indicators implemented in `cython` for optimal performance. The library is built around `numpy` arrays and offers a variety of interfaces for `pandas` and `polars` dataframes and series.
> **Warning** This project is experimental and the interface is likely to change.
## Functions
Concrete calculation functions are available from the `mintalib.functions` module with names like `sma`, `atr`, `macd`, etc.
The first parameter of a function is either `prices` or `series` depending on whether
the function expects a dataframe of prices or a single series.
A `prices` dataframe can be a pandas or polars dataframe. The column names for prices are expected to include `open`, `high`, `low`, `close`, `volume` all in **lower case**.
A `series` can be a pandas/polars series or a numpy array.
```python
import mintalib.functions as ta
prices = ... # pandas/polars DataFrame
sma = ta.sma(prices, 50)
atr = ta.atr(prices, 14)
```
# Pandas Extension
Mintalib can be used as a pandas extension via a `ts` accessor. Series calculations are accessible on pandas series, and prices calculations are accessible on dataframes.
To activate the extension you only need to import the module `mintalib.pandas`.
```python
import mintalib.pandas # noqa F401
prices = ... # pandas DataFrame
sma = prices.close.ts.sma(50)
atr = prices.ts.atr(14)
```
# Polars Expressions
Mintalib offers expression factory methods via the `mintalib.expressions` module.
The methods accept a source expression through the keyword-only `src` parameter.
The source expression can also be passed as the first parameter to facilitate the use with `pipe`.
```python
import mintalib.expressions as ta
prices = ... # polars DataFrame
prices.select(
ta.macd().struct.unnest(),
sma=ta.sma(50),
atr=ta.atr(14),
trend=ta.ema(50).pipe(ta.roc, 1)
)
```
# Polars Extension
Mintalib can be used as a polars extension via a `ts` accessor for polars series, dataframes and expressions.
Indicators that expect prices input should be invoked on a struct with all the required fields (see `OHLC` in the example below). Indicators with multi-column outputs like `macd` return a polars struct.
To activate the extension you only need to import the module `mintalib.polars`.
```python
from mintalib.polars import CLOSE, OHLC
# CLOSE is short-hand for pl.col('close')
# OHLC is short-hand for pl.struct(['open', 'high', 'low', 'close'])
prices = ... # polars DataFrame
prices.select(
CLOSE.ts.macd().struct.unnest(),
sma=CLOSE.ts.sma(50),
atr=OHLC.ts.atr(),
trend=CLOSE.ts.ema(20).ts.roc(1)
)
```
## Using Indicators (Legacy Interface)
Indicators offer a composable interface where a calculation function is bound with its parameters into a callable object. Indicators are accessible from the `mintalib.indicators` module with names like `EMA`, `SMA`, `ATR`, `MACD`, etc ...
An indicator instance can be invoked as a function or via the `@` operator as syntactic sugar.
So for example `SMA(50) @ prices` can be used to compute the 50 period simple moving average on `prices`, in place of `SMA(50)(prices)`.
The `@` operator can also be used to chain indicators, where for example `ROC(1) @ EMA(20)` means `ROC(1)` applied to `EMA(20)`.
```python
from mintalib.indicators import SMA, EMA, ROC, RSI, MACD
prices = ... # pandas DataFrame
result = prices.assign(
sma50 = SMA(50),
sma200 = SMA(200),
rsi = RSI(14),
trend = ROC(1) @ EMA(20)
)
```
## List of Indicators
| Name | Description |
|:-----------|:--------------------------------------------------------------|
| ABS | Absolute Value |
| ADX | Average Directional Index |
| ALMA | Arnaud Legoux Moving Average |
| ATR | Average True Range |
| AVGPRICE | Average Price |
| BBANDS | Bollinger Bands |
| BBP | Bollinger Bands Percent (%B) |
| BBW | Bollinger Bands Width |
| BOP | Balance of Power |
| CCI | Commodity Channel Index |
| CLAG | Confirmation Lag |
| CMF | Chaikin Money Flow |
| CROSSOVER | Cross Over |
| CROSSUNDER | Cross Under |
| CURVE | Curve (quadratic regression) |
| DEMA | Double Exponential Moving Average |
| DIFF | Difference |
| DMI | Directional Movement Indicator |
| EMA | Exponential Moving Average |
| EVAL | Expression Eval (pandas only) |
| EXP | Exponential |
| FLAG | Flag Value |
| HMA | Hull Moving Average |
| KAMA | Kaufman Adaptive Moving Average |
| KELTNER | Keltner Channel |
| KER | Kaufman Efficiency Ratio |
| LAG | Lag Function |
| LOG | Logarithm |
| LROC | Logarithmic Rate of Change |
| MACD       | Moving Average Convergence Divergence                          |
| MACDV      | Moving Average Convergence Divergence - Volatility Normalized |
| MAD | Rolling Mean Absolute Deviation |
| MAV | Generic Moving Average |
| MAX | Rolling Maximum |
| MDI | Minus Directional Index |
| MFI | Money Flow Index |
| MIDPRICE | Mid Price |
| MIN | Rolling Minimum |
| NATR | Average True Range (normalized) |
| PDI | Plus Directional Index |
| PPO | Price Percentage Oscillator |
| PRICE | Generic Price |
| QSF | Quadratic Series Forecast (quadratic regression) |
| RMA | Rolling Moving Average (RSI style) |
| ROC | Rate of Change |
| RSI | Relative Strength Index |
| RVALUE | R-Value (linear regression) |
| SAR | Parabolic Stop and Reverse |
| SHIFT | Shift Function |
| SIGN | Sign |
| SLOPE | Slope (linear regression) |
| SMA | Simple Moving Average |
| STDEV | Standard Deviation |
| STEP | Step Function |
| STOCH | Stochastic Oscillator |
| STREAK | Consecutive streak of values above zero |
| SUM | Rolling sum |
| TEMA | Triple Exponential Moving Average |
| TRANGE | True Range |
| TSF | Time Series Forecast (linear regression) |
| TYPPRICE | Typical Price |
| UPDOWN | Flag for value crossing up & down levels |
| WCLPRICE | Weighted Close Price |
| WMA | Weighted Moving Average |
## Example Notebooks
Example notebooks in the `examples` folder.
## Installation
You can install this package with pip
```console
pip install mintalib
```
## Dependencies
- python >= 3.10
- numpy
- pandas
- polars [optional]
## Related Projects
- [ta-lib](https://github.com/mrjbq7/ta-lib) Python wrapper for TA-Lib
- [qtalib](https://github.com/josephchenhk/qtalib) Quantitative Technical Analysis Library
- [polars-ta](https://github.com/wukan1986/polars_ta) Technical Analysis Indicators for polars
- [polars-talib](https://github.com/Yvictor/polars_ta_extension) Polars extension for Ta-Lib: Support Ta-Lib functions in Polars expressions
| text/markdown | null | Furechan <furechan@xsmail.com> | null | null | MIT License | cython, technical-analysis, indicators | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"pandas",
"polars; extra == \"polars\""
] | [] | [] | [] | [
"homepage, https://github.com/furechan/mintalib"
] | twine/6.0.1 CPython/3.10.19 | 2026-02-20T08:18:10.780784 | mintalib-0.0.28.tar.gz | 812,271 | aa/2c/c1ae78ef6cbfb21796687de4546f7999a64db2bd68d88a603dc0a598d3c2/mintalib-0.0.28.tar.gz | source | sdist | null | false | 101c262d441be27f157a5216a6b2aa12 | faedf54482bc29acfe02a597982c58596d7bf9bbf7749af7470cb4a8848b40a4 | aa2cc1ae78ef6cbfb21796687de4546f7999a64db2bd68d88a603dc0a598d3c2 | null | [] | 171 |
2.4 | invoke-plugin-for-sphinx | 4.2.0 | Sphinx plugin which can render invoke tasks with autodoc | [](https://api.reuse.software/info/github.com/SAP/invoke-plugin-for-sphinx)
[](https://github.com/psf/black)
[](https://pycqa.github.io/isort/)
[](https://badge.fury.io/py/invoke-plugin-for-sphinx)
[](https://coveralls.io/github/SAP/invoke-plugin-for-sphinx)
# Invoke Plugin for Sphinx
This is a plugin which allows the documentation of invoke tasks with sphinx `autodoc`.
An invoke task looks like a normal function but the `@task` decorator creates a `Task` object behind the scenes.
Documenting these with `autodoc` can lead to errors or unexpected results.
## Installation
`pip install invoke-plugin-for-sphinx`, that's it.
## Usage
Add the plugin to the extensions list:
```py
extensions = ["invoke_plugin_for_sphinx"]
```
Then you can use `.. automodule::` as usual.
Behind the scenes, the function documenter of `autodoc` is extended so that tasks are handled the same way as plain functions.
Therefore the same configurations, limitations and features apply.
## Development
This project uses `uv`.
To setup a venv for development use
`python3.14 -m venv venv && pip install uv && uv sync --all-groups && rm -rf venv/`.
Then use `source .venv/bin/activate` to activate your venv.
## Build and Publish
Execute the release action with the proper version.
## Support, Feedback, Contributing
This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/invoke-plugin-for-sphinx/issues). Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our [Contribution Guidelines](CONTRIBUTING.md).
## Code of Conduct
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md) at all times.
## Licensing
Copyright 2026 SAP SE or an SAP affiliate company and invoke-plugin-for-sphinx contributors. Please see our [LICENSE](LICENSE) for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available [via the REUSE tool](https://api.reuse.software/info/github.com/SAP/invoke-plugin-for-sphinx).
| text/markdown | null | Kai Harder <kai.harder@sap.com> | null | null | null | sphinx, invoke, plugin, inv, documentation | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Sphinx :: Extension",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Documentation",
"Topic :: Documentation :: Sphinx",
"Typing :: Typed"
] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"sphinx<10,>=7",
"invoke<3,>=2.1",
"typing-extensions~=4.4"
] | [] | [] | [] | [
"Changelog, https://github.com/SAP/invoke-plugin-for-sphinx/blob/main/CHANGELOG.md",
"Issue Tracker, https://github.com/SAP/invoke-plugin-for-sphinx/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:18:09.403623 | invoke_plugin_for_sphinx-4.2.0.tar.gz | 92,284 | 0f/b0/598f765c89366d4a1c325c5dcce1ce82243561f6ff12a86fcf5aa62db1c7/invoke_plugin_for_sphinx-4.2.0.tar.gz | source | sdist | null | false | 017858345c7930294eda7ab68bb268b9 | 9224606411729c51f8719f5685e8ab52a1335c35a414fb0d393e6b63f2010b80 | 0fb0598f765c89366d4a1c325c5dcce1ce82243561f6ff12a86fcf5aa62db1c7 | Apache-2.0 | [
"LICENSE"
] | 237 |
2.4 | backupchan-server-lib | 1.0.1 | Utilities and structures used by the Backup-chan server. | # Backup-chan server library



A library containing utility functions and structures for the Backup-chan server.
## Installing
```bash
# The easy way
pip install backupchan-server-lib
# Install from source
git clone https://github.com/Backupchan/server-lib.git backupchan-server-lib
cd backupchan-server-lib
pip install .
```
| text/markdown | null | Moltony <koronavirusnyj@gmail.com> | null | null | BSD-3-Clause | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Topic :: System :: Archiving :: Backup",
"Topic :: Software Development :: Libraries",
"Typing :: Typed"
] | [] | null | null | null | [] | [] | [] | [
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Backupchan/server-lib",
"Repository, https://github.com/Backupchan/server-lib.git",
"Issues, https://github.com/Backupchan/server-lib/issues",
"Changelog, https://github.com/Backupchan/server-lib/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-20T08:16:46.999085 | backupchan_server_lib-1.0.1.tar.gz | 4,130 | 31/37/c9c2f748aa7419e79173820743b70533e9b3cae556d41e1e54632c9295e5/backupchan_server_lib-1.0.1.tar.gz | source | sdist | null | false | c66fda8c8e11dbedcb883b8b417340c8 | 8b7c7aaaf9988655f5535dd30a149a993b95489ee8d7b4aafc2145a01152945a | 3137c9c2f748aa7419e79173820743b70533e9b3cae556d41e1e54632c9295e5 | null | [
"LICENSE"
] | 247 |
2.4 | pulumi-oci | 4.0.0a1771573594 | A Pulumi package for creating and managing Oracle Cloud Infrastructure resources. | # Oracle Cloud Infrastructure Resource Provider
The Oracle Cloud Infrastructure (OCI) Resource Provider lets you manage [OCI](https://www.oracle.com/cloud/) resources.
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use with JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/oci
```
or `yarn`:
```bash
yarn add @pulumi/oci
```
### Python
To use with Python, install using `pip`:
```bash
python3 -m pip install pulumi_oci
```
To use [uv](https://docs.astral.sh/uv/) instead:
```bash
uv pip install pulumi_oci
```
### Go
To use with Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-oci/sdk/v3/...
```
### .NET
To use with .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Oci
```
## Configuration
The following configuration options are available for the `oci` provider:
| Option | Environment variable | Description |
|--------------------------|-------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `oci:auth` | | The type of auth to use. Options are 'ApiKey', 'SecurityToken', 'InstancePrincipal', 'ResourcePrincipal' and 'OKEWorkloadIdentity'. By default, 'ApiKey' will be used. |
| `oci:tenancyOcid` | `TF_VAR_tenancy_ocid` | OCID of your tenancy. |
| `oci:userOcid` | `TF_VAR_user_ocid` | OCID of the user calling the API. |
| `oci:privateKey` | `TF_VAR_private_key` | The contents of the private key file. Required if `privateKeyPath` is not defined and takes precedence if both are defined. |
| `oci:privateKeyPath` | `TF_VAR_private_key_path` | The path (including filename) of the private key stored on your computer. Required if `privateKey` is not defined. |
| `oci:privateKeyPassword` | `TF_VAR_private_key_password` | Passphrase used for the key, if it is encrypted. |
| `oci:fingerprint` | `TF_VAR_fingerprint` | Fingerprint for the key pair being used. |
| `oci:region` | `TF_VAR_region` | An OCI region. |
| `oci:configFileProfile` | `TF_VAR_config_file_profile` | The custom profile to use instead of the `DEFAULT` profile in `.oci/config`. |
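Assuming the standard Pulumi CLI config commands apply (all values below are placeholders), setting these options for a stack might look like:

```bash
pulumi config set oci:region us-ashburn-1
pulumi config set oci:tenancyOcid ocid1.tenancy.oc1..example
pulumi config set oci:userOcid ocid1.user.oc1..example
pulumi config set --secret oci:privateKey "$(cat ~/.oci/oci_api_key.pem)"
```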
Use the [Required Keys and OCIDs](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#Required_Keys_and_OCIDs) chapter of the OCI Developer Guide to learn:
- [How to Generate an API Signing Key](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#two)
- [How to Get the Key's Fingerprint](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#four)
- [Where to Get the Tenancy's OCID and User's OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five)
## Reference
For detailed reference documentation, please visit [the Pulumi registry](https://www.pulumi.com/registry/packages/oci/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, oci, oracle, category/cloud | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://www.pulumi.com",
"Repository, https://github.com/pulumi/pulumi-oci"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T08:16:28.657911 | pulumi_oci-4.0.0a1771573594.tar.gz | 10,996,107 | 69/20/e687c8d0967cf77616c4285f32bfba1e9cc0293c6bbb8aad3c5d91c013f1/pulumi_oci-4.0.0a1771573594.tar.gz | source | sdist | null | false | c7e6a87653f183c815f503729872223a | de76e18feaaeadc5fd3e6603b5c23cedfe1316df1d662328c023671e1510516b | 6920e687c8d0967cf77616c4285f32bfba1e9cc0293c6bbb8aad3c5d91c013f1 | null | [] | 228 |
2.4 | humanitix-client | 1.20.0.1 | A client library for accessing Humanitix Public API | # humanitix-client
A client library for accessing Humanitix Public API
## Usage
First, create a client:
```python
from humanitix_client import Client
client = Client(base_url="https://api.humanitix.com/")
```
If the endpoints you're going to hit require authentication, use `AuthenticatedClient` instead:
```python
from humanitix_client import AuthenticatedClient
client = AuthenticatedClient(
base_url="https://api.humanitix.com/",
token="SuperSecretToken",
auth_header_name="X-Api-Key",
prefix="",
)
```
Now call your endpoint and use your models:
```python
from humanitix_client.models import MyDataModel
from humanitix_client.api.my_tag import get_my_data_model
from humanitix_client.types import Response
with client as client:
my_data: MyDataModel = get_my_data_model.sync(client=client)
# or if you need more info (e.g. status_code)
response: Response[MyDataModel] = get_my_data_model.sync_detailed(client=client)
```
Or do the same thing with an async version:
```python
from humanitix_client.models import MyDataModel
from humanitix_client.api.my_tag import get_my_data_model
from humanitix_client.types import Response
async with client as client:
my_data: MyDataModel = await get_my_data_model.asyncio(client=client)
response: Response[MyDataModel] = await get_my_data_model.asyncio_detailed(client=client)
```
By default, when you're calling an HTTPS API it will attempt to verify that SSL is working correctly. Using certificate verification is highly recommended most of the time, but sometimes you may need to authenticate to a server (especially an internal server) using a custom certificate bundle.
```python
client = AuthenticatedClient(
base_url="https://internal_api.example.com",
token="SuperSecretToken",
verify_ssl="/path/to/certificate_bundle.pem",
)
```
You can also disable certificate validation altogether, but beware that **this is a security risk**.
```python
client = AuthenticatedClient(
base_url="https://internal_api.example.com",
token="SuperSecretToken",
verify_ssl=False
)
```
Things to know:
1. Every path/method combo becomes a Python module with four functions:
1. `sync`: Blocking request that returns parsed data (if successful) or `None`
1. `sync_detailed`: Blocking request that always returns a `Response`, optionally with `parsed` set if the request was successful.
1. `asyncio`: Like `sync` but async instead of blocking
1. `asyncio_detailed`: Like `sync_detailed` but async instead of blocking
1. All path/query params, and bodies become method arguments.
1. If your endpoint had any tags on it, the first tag will be used as a module name for the function (my_tag above)
1. Any endpoint which did not have a tag will be in `humanitix_client.api.default`
## Advanced customizations
There are more settings on the generated `Client` class which let you control more runtime behavior, check out the docstring on that class for more info. You can also customize the underlying `httpx.Client` or `httpx.AsyncClient` (depending on your use-case):
```python
from humanitix_client import Client
def log_request(request):
print(f"Request event hook: {request.method} {request.url} - Waiting for response")
def log_response(response):
request = response.request
print(f"Response event hook: {request.method} {request.url} - Status {response.status_code}")
client = Client(
base_url="https://api.example.com",
httpx_args={"event_hooks": {"request": [log_request], "response": [log_response]}},
)
# Or get the underlying httpx client to modify directly with client.get_httpx_client() or client.get_async_httpx_client()
```
You can even set the httpx client directly, but beware that this will override any existing settings (e.g., base_url):
```python
import httpx
from humanitix_client import Client
client = Client(
base_url="https://api.example.com",
)
# Note that base_url needs to be re-set, as would any shared cookies, headers, etc.
client.set_httpx_client(httpx.Client(base_url="https://api.example.com", proxies="http://localhost:8030"))
```
## Building / publishing this package
This project uses [Poetry](https://python-poetry.org/) to manage dependencies and packaging. Here are the basics:
1. Update the metadata in pyproject.toml (e.g. authors, version)
1. If you're using a private repository, configure it with Poetry
1. `poetry config repositories.<your-repository-name> <url-to-your-repository>`
1. `poetry config http-basic.<your-repository-name> <username> <password>`
1. Publish the client with `poetry publish --build -r <your-repository-name>` or, if for public PyPI, just `poetry publish --build`
If you want to install this client into another project without publishing it (e.g. for development) then:
1. If that project **is using Poetry**, you can simply do `poetry add <path-to-this-client>` from that project
1. If that project is not using Poetry:
1. Build a wheel with `poetry build -f wheel`
1. Install that wheel from the other project `pip install <path-to-wheel>`
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"attrs>=22.2.0",
"httpx<0.29.0,>=0.23.0",
"python-dateutil<3.0.0,>=2.8.0"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.13.5 Darwin/24.6.0 | 2026-02-20T08:15:17.626204 | humanitix_client-1.20.0.1-py3-none-any.whl | 173,071 | 9d/3e/f982779668ae44183cd0a7d36fed471cea1cb59107e4e8707d527341a2f0/humanitix_client-1.20.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | a04745c01e6d3650ee8eff803d7dd64c | 74f972f3b475db9415e8ed192dbed964bbc6845dcaaa26b4c619fe449aecd135 | 9d3ef982779668ae44183cd0a7d36fed471cea1cb59107e4e8707d527341a2f0 | null | [] | 241 |
2.4 | nadouf-math | 0.1.0 | A simple Python library for mathematical operations | # Nadouf-Math Library
## Content
- [About library](#about-library)
- [Installing](#installing)
- [Import](#import)
- [Fast start](#fast-start)
- [Documentation](#documentation)
## About library
A simple Python library for mathematical operations. In NadoufMath you can choose the style you prefer: standalone functions or objects with methods!
## Installing
The library is still under development; installation via pip is not yet available. Stay tuned for updates!
## Import
```python
from nadouf_math import *
```
All the following examples assume that you entered this string at the beginning.
## Fast start
```python
from nadouf_math import Nadoufmath
# Creating an object with value 5
calc = Nadoufmath(5)
# Demonstrate arithmetic operations
print(f"Sum: {calc.sum_of(11, 12, 13, 14, 15)}") # 5 + (11 + 12 + 13 + 14 + 15) = 70
print(f"Difference: {calc.difference_of(3, 2, 1)}") # 5 - 3 - 2 - 1 = -1
print(f"Square: {calc.square()}") # 5² = 25
print(f"Cube: {calc.cube()}") # 5³ = 125
print(f"Power 4: {calc.power(4)}") # 5⁴ = 625
print(f"Square root: {calc.square_root()}") # √5
```
## Documentation
- [Simple mathematical operations](#simple-mathematical-operations)
- [Powers and Roots](#powers-and-roots)
- [Higher-order mathematical functions](#higher-order-mathematical-functions)
- [Trigonometry](#trigonometry)
- [Rounding](#rounding)
- [Checking the properties of numbers](#checking-the-properties-of-numbers)
- [Constants](#constants)
- [End of the documentation](#end-of-documentation)
## Simple mathematical operations
To add numbers, use the **sum_of** function or the **sum_of** method on your object:
```python
print(sum_of(5, 6))  # function form: prints 11
number = Nadoufmath(5)
print(number.sum_of(6))  # method form: also prints 11
```
To subtract numbers, use the **difference_of** function or method:
```python
print(difference_of(10, 3))  # function form: prints 7
number = Nadoufmath(10)
print(number.difference_of(3))  # method form: also prints 7
```
To divide numbers, use the **division_of** function or method:
```python
print(division_of(5, 5))  # function form: prints 1
number = Nadoufmath(5)
print(number.division_of(5))  # method form: also prints 1
```
For integer (floor) division, use the **int_division_of** function or method:
```python
print(int_division_of(5, 2))  # function form: prints 2
number = Nadoufmath(5)
print(number.int_division_of(2))  # method form: also prints 2
```
To multiply numbers, use the **multiply_of** function or method:
```python
print(multiply_of(7, 8))  # function form: prints 56
number = Nadoufmath(7)
print(number.multiply_of(8))  # method form: also prints 56
```
## Powers and Roots
To square a number, use the **square** function or method:
```python
print(square(4))  # function form: prints 16
number = Nadoufmath(4)
print(number.square())  # method form: also prints 16
```
To cube a number, use the **cube** function or method:
```python
print(cube(2))  # function form: prints 8
number = Nadoufmath(2)
print(number.cube())  # method form: also prints 8
```
For any other power, use the **power** function or method:
```python
print(power(2, 4))  # function form: prints 16
number = Nadoufmath(2)
print(number.power(4))  # method form: also prints 16
```
To compute a power of 2, use the **power_of_2** function:
```python
print(power_of_2(3))  # prints 8
```
For the square root of a number, use the **square_root** function or method:
```python
print(square_root(64))  # function form: prints 8
number = Nadoufmath(64)
print(number.square_root())  # method form: also prints 8
```
For the cube root of a number, use the **cube_root** function or method:
```python
print(cube_root(125))  # function form: prints 5
number = Nadoufmath(125)
print(number.cube_root())  # method form: also prints 5
```
## Higher-order mathematical functions
For the factorial of a number, use the **factorial** function or method:
```python
print(factorial(3))  # function form: prints 6
number = Nadoufmath(3)
print(number.factorial())  # method form: also prints 6
```
To compute the greatest common divisor, use the **gcd_with** function or method:
```python
print(gcd_with(5, 10))  # function form: prints 5
number = Nadoufmath(5)
print(number.gcd_with(10))  # method form: also prints 5
```
To compute the least common multiple, use the **lcm_with** function or method:
```python
print(lcm_with(20, 30))  # function form: prints 60
number = Nadoufmath(20)
print(number.lcm_with(30))  # method form: also prints 60
```
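These three operations mirror Python's standard library, so you can sanity-check the expected results above with the stdlib alone (this snippet does not use nadouf-math):

```python
import math

# Cross-check the factorial, GCD, and LCM examples using Python's math module.
print(math.factorial(3))   # 6
print(math.gcd(5, 10))     # 5
print(math.lcm(20, 30))    # 60
```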
## Trigonometry
Trigonometric functions in nadouf-math work in radians!
For the cosine of an angle, use the **cos** function or method:
```python
print(cos(0.1))  # function form: prints 0.9950041652780258
number = Nadoufmath(0.1)
print(number.cos())  # method form: prints the same value
```
For the sine of an angle, use the **sin** function or method:
```python
print(sin(0.3))  # function form: prints 0.29552020666133955
number = Nadoufmath(0.3)
print(number.sin())  # method form: prints the same value
```
For the tangent of an angle, use the **tan** function or method:
```python
print(tan(0.2))  # function form: prints 0.2027100355086725
number = Nadoufmath(0.2)
print(number.tan())  # method form: prints the same value
```
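If your angles are in degrees, convert them to radians first. A stdlib-only sketch (not part of nadouf-math):

```python
import math

# Trig functions here expect radians, so convert degrees before calling them.
angle_rad = math.radians(60)  # 60 degrees as radians
print(math.cos(angle_rad))    # ~0.5
```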
## Rounding
For ordinary rounding, use Python's built-in **round** function.
To round down, use the **floor** function or method:
```python
print(floor(5.9))  # function form: prints 5
number = Nadoufmath(5.9)
print(number.floor())  # method form: also prints 5
```
To round up, use the **ceil** function or method:
```python
print(ceil(5.2))  # function form: prints 6
number = Nadoufmath(5.2)
print(number.ceil())  # method form: also prints 6
```
## Checking the properties of numbers
To check whether a number is infinite, use the **is_infinity** function or method:
```python
print(is_infinity(5))  # function form: prints False
number = Nadoufmath(5)
print(number.is_infinity())  # method form: also prints False
```
To check whether a number is positive or negative, use the **is_positive** and **is_negative** functions or methods:
```python
print(is_positive(-6))  # function form: prints False
print(is_negative(-3))  # function form: prints True
number_1 = Nadoufmath(-6)
number_2 = Nadoufmath(-3)
print(number_1.is_positive())  # method form: also prints False
print(number_2.is_negative())  # method form: also prints True
```
To get the sign of a number, use the **sign** function or method:
```python
print(sign(6))  # function form: prints Positive
number = Nadoufmath(6)
print(number.sign())  # method form: also prints Positive
```
To check the parity of a number, use the **is_even** and **is_odd** functions or methods:
```python
print(is_even(79))  # function form: prints False
print(is_odd(67))  # function form: prints True
number = Nadoufmath(79)
print(number.is_even())  # method form: also prints False
print(number.is_odd())  # method form: also prints True
```
## Constants
For π, use the **number_pi** constant:
```python
print(number_pi)  # prints 3.14159265359
```
For Euler's number e, use the **number_e** constant:
```python
print(number_e)  # prints 2.718281828459
```
## End of the documentation
All functions of the Nadouf-Math library are described above. Thanks for reading!
| text/markdown | Nadouvvv | nadoufmail@gmail.com | null | null | null | math, mathematics, calculator, arithmetic, trigonometry, nadouf, Nadouf, maths | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Mathematics",
"Intended Audience :: Developers"
] | [] | https://github.com/Nadouvvv/NadoufMath-library | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Source, https://github.com/Nadouvvv/NadoufMath-library",
"Bug Reports, https://github.com/Nadouvvv/NadoufMath-library/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T08:15:10.435259 | nadouf_math-0.1.0-py3-none-any.whl | 6,232 | b1/74/2087f8e387744108bf61cca2d9aadd79b08f4e4eb3f4f449f09822d7651b/nadouf_math-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 1ae11c6ea7ec19bede2cd6efa4a727c5 | 6d72197c302b1c97193a0753e281fa0e1c1f59815910f05e9b334844768f017c | b1742087f8e387744108bf61cca2d9aadd79b08f4e4eb3f4f449f09822d7651b | null | [
"LICENSE"
] | 108 |
2.4 | onnxscript | 0.6.3.dev20260220 | Naturally author ONNX functions and models using a subset of Python | # ONNX Script
[](https://github.com/microsoft/onnxscript/actions/workflows/main.yaml)
[](https://aiinfra.visualstudio.com/ONNX%20Converters/_build/latest?definitionId=1258&branchName=main)
[](https://pypi.org/project/onnxscript)
[](https://pypi.org/project/onnxscript)
[](https://github.com/astral-sh/ruff)
[](https://github.com/psf/black)
ONNX Script enables developers to naturally author ONNX functions and
models using a subset of Python. ONNX Script is:
* **Expressive:** enables the authoring of all ONNX functions.
* **Simple and concise:** function code is natural and simple.
* **Debuggable:** allows for eager-mode evaluation that provides for a
more delightful ONNX model debugging experience.
This repo also covers:
* **ONNX Script Optimizer:** provides functionality to optimize an ONNX
model by performing optimizations and clean-ups such as constant folding,
dead code elimination, etc.
* **ONNX Rewriter:** provides functionality to replace certain patterns in
an ONNX graph with replacement patterns based on user-defined rewrite rules.
Note however that ONNX Script does **not** intend to support the entirety
of the Python language.
Website: [https://microsoft.github.io/onnxscript/](https://microsoft.github.io/onnxscript/)
## Design Overview
ONNX Script provides a few major capabilities for authoring and debugging
ONNX models and functions:
* A converter which translates a Python ONNX Script function into an
ONNX graph, accomplished by traversing the [Python Abstract Syntax Tree][python-ast] to build an ONNX graph equivalent of the function.
* A converter that operates inversely, translating ONNX models and
functions into ONNX Script. This capability can be used to fully round-trip
ONNX Script ↔ ONNX graph.
* A runtime shim that allows such functions to be evaluated
(in an "eager mode"). This functionality currently relies on
[ONNX Runtime][onnx-runtime] for executing every [ONNX Operator][onnx-ops],
and there is a Python-only reference runtime for ONNX underway that
will also be supported.
Note that the runtime is intended to help understand and debug function definitions. Performance is not a goal here.
## Installing ONNX Script
```bash
pip install --upgrade onnxscript
```
### Install for Development
```bash
git clone https://github.com/microsoft/onnxscript
cd onnxscript
pip install -r requirements-dev.txt
pip install -e .
```
### Run Unit Tests
```bash
pytest .
```
## Example
```python update-readme
import onnx
# We use ONNX opset 15 to define the function below.
from onnxscript import FLOAT, script
from onnxscript import opset15 as op
# We use the script decorator to indicate that
# this is meant to be translated to ONNX.
@script()
def onnx_hardmax(X, axis: int):
"""Hardmax is similar to ArgMax, with the result being encoded OneHot style."""
# The type annotation on X indicates that it is a float tensor of
# unknown rank. The type annotation on axis indicates that it will
# be treated as an int attribute in ONNX.
#
# Invoke ONNX opset 15 op ArgMax.
# Use unnamed arguments for ONNX input parameters, and named
# arguments for ONNX attribute parameters.
argmax = op.ArgMax(X, axis=axis, keepdims=False)
xshape = op.Shape(X, start=axis)
# use the Constant operator to create constant tensors
zero = op.Constant(value_ints=[0])
depth = op.GatherElements(xshape, zero)
empty_shape = op.Constant(value_ints=[0])
depth = op.Reshape(depth, empty_shape)
values = op.Constant(value_ints=[0, 1])
cast_values = op.CastLike(values, X)
return op.OneHot(argmax, depth, cast_values, axis=axis)
# We use the script decorator to indicate that
# this is meant to be translated to ONNX.
@script()
def sample_model(X: FLOAT[64, 128], Wt: FLOAT[128, 10], Bias: FLOAT[10]) -> FLOAT[64, 10]:
matmul = op.MatMul(X, Wt) + Bias
return onnx_hardmax(matmul, axis=1)
# onnx_model is an in-memory ModelProto
onnx_model = sample_model.to_model_proto()
# Save the ONNX model at a given path
onnx.save(onnx_model, "sample_model.onnx")
# Check the model
try:
onnx.checker.check_model(onnx_model)
except onnx.checker.ValidationError as e:
print(f"The model is invalid: {e}")
else:
print("The model is valid!")
```
The decorator parses the code of the function, converting it into an
intermediate representation. If it fails, it produces an error message
indicating the line where the error was detected. If it succeeds, the
intermediate representation can be converted into an ONNX graph
structure of type `FunctionProto`:
* `onnx_hardmax.to_function_proto()` returns a `FunctionProto`
### Eager Mode Evaluation
Eager mode is mostly used to debug and validate that intermediate results
are as expected. The function defined above can be called as below,
executing in an eager-evaluation mode:
```python
import numpy as np
v = np.array([[0, 1], [2, 3]], dtype=np.float32)
result = onnx_hardmax(v, axis=1)
```
More examples can be found in the [docs/examples](docs/examples) directory.
## ONNX Script Tools
### ONNX Optimizer
The ONNX Script Optimizer optimizes an ONNX model by performing clean-ups such as constant folding and dead code elimination. To use it:
```python
import onnxscript
onnxscript.optimizer.optimize(onnx_model)
```
For a detailed summary of all the optimizations applied by the optimizer call, refer to the tutorial [Optimizing a Model using the Optimizer](https://microsoft.github.io/onnxscript/tutorial/optimizer/optimize.html)
### ONNX Rewriter
The ONNX Rewriter replaces certain patterns in an ONNX graph with replacement patterns based on user-defined rewrite rules. The rewriter supports two different methods for rewriting patterns in the graph.
### Pattern-based rewriting
For this style of rewriting, the user provides a `target_pattern` that is to be replaced, a `replacement_pattern` and a `match_condition` (pattern rewrite will occur only if the match condition is satisfied). A simple example on how to use the pattern-based rewriting tool is as follows:
```python
import math

import onnxscript
from onnxscript import ir
from onnxscript.rewriter import pattern
# The target pattern
def erf_gelu_pattern(op, x):
return 0.5 * (x * (op.Erf(x / math.sqrt(2)) + 1.0))
def erf_gelu_pattern_2(op, x):
return (x * (op.Erf(x / math.sqrt(2)) + 1.0)) * 0.5
# The replacement pattern
def gelu(op, x: ir.Value):
return op.Gelu(x, domain="com.microsoft")
# Create multiple rules
rule1 = pattern.RewriteRule(
erf_gelu_pattern, # Target Pattern
gelu, # Replacement
)
rule2 = pattern.RewriteRule(
erf_gelu_pattern_2, # Target Pattern
gelu, # Replacement
)
# Create a Rewrite Rule Set with multiple rules.
rewrite_rule_set = pattern.RewriteRuleSet([rule1, rule2])
# Apply rewrites
model_with_rewrite_applied = onnxscript.rewriter.rewrite(
model, # Original ONNX Model
pattern_rewrite_rules=rewrite_rule_set,
)
```
For a detailed tutorial on how to create `target_pattern`, `replacement_pattern`, and `match_condition` blocks for the pattern-based rewriter, refer to the tutorial [Pattern-based Rewrite Using Rules](https://microsoft.github.io/onnxscript/tutorial/rewriter/rewrite_patterns.html)
## Development Guidelines
Every change impacting the converter or the eager evaluation must be
unit tested with the class `OnnxScriptTestCase` to ensure that both systems
return the same results for the same inputs.
### Coding Style
We use `ruff`, `black`, `isort`, `mypy`, and other tools to check code style, and `lintrunner` to run all linters.
You can install the dependencies and initialize with
```sh
pip install lintrunner lintrunner-adapters
lintrunner init
```
This will install lintrunner on your system and download all the necessary dependencies to run linters locally.
To see what `lintrunner init` will install, run `lintrunner init --dry-run`.
To lint local changes:
```bash
lintrunner
```
To format files:
```bash
lintrunner f
```
To lint all files:
```bash
lintrunner --all-files
```
Use `--output oneline` to produce a compact list of lint errors, useful when
there are many errors to fix.
See all available options with `lintrunner -h`.
To read more about lintrunner, see [wiki](https://github.com/pytorch/pytorch/wiki/lintrunner).
To update an existing linting rule or create a new one, modify `.lintrunner.toml` or create a
new adapter following examples in https://github.com/justinchuby/lintrunner-adapters.
## Contributing
We're always looking for your help to improve the product (bug fixes, new features, documentation, etc). Currently ONNX Script is under early and heavy development, so we encourage proposing any major changes by [filing an issue](https://github.com/microsoft/onnxscript/issues) to discuss your idea with the team first.
### Report a Security Issue
**Please do not report security vulnerabilities through public GitHub issues.**
Please refer to our guidance on filing [Security Issues](SECURITY.md).
### Licensing Guidelines
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit
https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
### Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third-party's policies.
[python-ast]: https://docs.python.org/3/library/ast.html
[onnx-runtime]: https://onnxruntime.ai
[onnx-ops]: https://github.com/onnx/onnx/blob/main/docs/Operators.md
[onnxfns1A.py]: https://github.com/microsoft/onnxscript/blob/main/onnxscript/tests/models/onnxfns1A.py
| text/markdown | null | Microsoft Corporation <onnx@microsoft.com> | null | null | MIT License
Copyright (c) Microsoft Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: POSIX",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"ml_dtypes",
"numpy",
"onnx_ir<2,>=0.1.16",
"onnx>=1.17",
"packaging",
"typing_extensions>=4.10"
] | [] | [] | [] | [
"Homepage, https://microsoft.github.io/onnxscript/",
"Repository, https://github.com/microsoft/onnxscript"
] | RestSharp/106.13.0.0 | 2026-02-20T08:14:49.958059 | onnxscript-0.6.3.dev20260220-py3-none-any.whl | 694,591 | 90/10/c659fc461205c4ae0683fe2e8ddc17482394e228eb0509605087445134e2/onnxscript-0.6.3.dev20260220-py3-none-any.whl | py3 | bdist_wheel | null | false | 5892cb380b15347e5aec0a3629154c01 | f0221db6bc37ad3837a7bf9abc5b21d6bcb6d623e9962bedcaa564b1231e71fc | 9010c659fc461205c4ae0683fe2e8ddc17482394e228eb0509605087445134e2 | null | [] | 2,889 |
2.4 | python-pq | 0.6.0 | Postgres-backed job queue for Python | # pq
[](https://pypi.org/project/python-pq/)
[](LICENSE)
[](https://www.python.org/downloads/)
Postgres-backed job queue for Python with fork-based worker isolation.
If you already run Postgres, you don't need Redis or RabbitMQ to process background jobs. pq uses `SELECT ... FOR UPDATE SKIP LOCKED` to turn your existing database into a reliable task queue. Enqueue in the same transaction as your writes, and process tasks in isolated child processes that can't crash your worker.
```python
from pq import PQ
pq = PQ("postgresql://localhost/mydb")
pq.run_db_migrations()
def send_email(to: str, subject: str) -> None:
...
pq.enqueue(send_email, to="user@example.com", subject="Hello")
pq.run_worker()
```
## Install
```bash
uv add python-pq
```
Or with pip:
```bash
pip install python-pq
```
Requires PostgreSQL and Python 3.13+.
## Features
- **Fork isolation** -- Each task runs in a forked child process. If it OOMs, segfaults, or crashes, the worker keeps running.
- **No extra infrastructure** -- Uses your existing Postgres. No broker to deploy, monitor, or lose data.
- **Transactional enqueueing** -- Enqueue tasks in the same database transaction as your writes. If the transaction rolls back, the task is never created.
- **Periodic tasks** -- Schedule with intervals (`timedelta`) or cron expressions. Control overlap, pause/resume without deleting.
- **Priority queues** -- Five levels from `BATCH` (0) to `CRITICAL` (100). Dedicate workers to specific priority tiers.
- **Lifecycle hooks** -- Run `pre_execute` / `post_execute` code in the forked child, safe for fork-unsafe libraries like OpenTelemetry.
## Tasks
### Enqueueing
Pass any importable function with its arguments:
```python
def greet(name: str) -> None:
print(f"Hello, {name}!")
pq.enqueue(greet, name="World")
pq.enqueue(greet, "World") # Positional args work too
```
### Delayed execution
```python
from datetime import datetime, timedelta, UTC
pq.enqueue(greet, "World", run_at=datetime.now(UTC) + timedelta(hours=1))
```
### Priority
```python
from pq import Priority
pq.enqueue(task, priority=Priority.CRITICAL) # 100 - runs first
pq.enqueue(task, priority=Priority.HIGH) # 75
pq.enqueue(task, priority=Priority.NORMAL) # 50 (default)
pq.enqueue(task, priority=Priority.LOW) # 25
pq.enqueue(task, priority=Priority.BATCH) # 0 - runs last
```
### Cancellation
```python
task_id = pq.enqueue(my_task)
pq.cancel(task_id)
```
### Client IDs
Use `client_id` for idempotency and lookups:
```python
pq.enqueue(process_order, order_id=123, client_id="order-123")
task = pq.get_task_by_client_id("order-123")
# Duplicate client_id raises IntegrityError
```
### Upsert
Insert or update a task by `client_id`. Useful for debouncing -- only the latest version runs:
```python
pq.upsert(send_email, to="a@b.com", client_id="welcome-email")
# Second call updates the existing task (resets to PENDING)
pq.upsert(send_email, to="new@b.com", client_id="welcome-email")
```
## Periodic Tasks
### Intervals
```python
from datetime import timedelta
def heartbeat() -> None:
print("alive")
pq.schedule(heartbeat, run_every=timedelta(minutes=5))
```
### Cron expressions
```python
pq.schedule(weekly_report, cron="0 9 * * 1") # Monday 9am
```
### With arguments
```python
pq.schedule(report, run_every=timedelta(hours=1), report_type="hourly")
```
### Overlap control
By default, periodic tasks don't overlap -- if an instance is still running when the next tick arrives, the tick is skipped:
```python
# Default: max_concurrent=1, no overlap
pq.schedule(sync_inventory, run_every=timedelta(minutes=5))
# Allow unlimited concurrency
pq.schedule(fast_task, run_every=timedelta(seconds=30), max_concurrent=None)
```
### Pausing and resuming
```python
# Pause -- task stays in the database but won't run
pq.schedule(sync_inventory, run_every=timedelta(minutes=5), active=False)
# Resume
pq.schedule(sync_inventory, run_every=timedelta(minutes=5), active=True)
```
### Multiple schedules
Use `key` to register the same function with different configurations:
```python
pq.schedule(sync_data, run_every=timedelta(hours=1), key="us", region="us")
pq.schedule(sync_data, run_every=timedelta(hours=2), key="eu", region="eu")
pq.unschedule(sync_data, key="us")
```
## Workers
### Running
```python
pq.run_worker(poll_interval=1.0) # Run forever
processed = pq.run_worker_once() # Process single task (for testing)
```
### Timeout
Kill tasks that run too long:
```python
pq.run_worker(max_runtime=300) # 5 minute timeout per task
```
### Priority-dedicated workers
Reserve workers for high-priority tasks:
```python
from pq import Priority
# This worker only processes CRITICAL and HIGH
pq.run_worker(priorities={Priority.CRITICAL, Priority.HIGH})
```
### Lifecycle hooks
Run code before/after each task in the forked child process:
```python
from pq import PQ, Task, Periodic
def setup_tracing(task: Task | Periodic) -> None:
print(f"Starting: {task.name}")
def flush_tracing(task: Task | Periodic, error: Exception | None) -> None:
if error:
print(f"Failed: {error}")
pq.run_worker(pre_execute=setup_tracing, post_execute=flush_tracing)
```
Hooks run in the forked child, making them safe for fork-unsafe resources like OpenTelemetry.
## Serialization
Arguments are serialized automatically:
| Type | Method |
|------|--------|
| JSON-serializable (str, int, list, dict) | JSON |
| Pydantic models | `model_dump()` → JSON |
| Custom objects, lambdas | dill (pickle) |
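As a rough sketch of how such a dispatch could work (a hypothetical illustration, not pq's actual implementation; the stdlib `pickle` stands in for dill here):

```python
import json
import pickle

def serialize_arg(value):
    """Try JSON first; fall back to a pickle-style binary format."""
    try:
        return ("json", json.dumps(value))
    except (TypeError, ValueError):
        return ("pickle", pickle.dumps(value))

print(serialize_arg({"to": "user@example.com"})[0])  # json
print(serialize_arg(object())[0])                    # pickle
```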
## Async tasks
Async handlers work without any changes:
```python
import httpx
async def fetch(url: str) -> None:
async with httpx.AsyncClient() as client:
response = await client.get(url)
print(response.status_code)
pq.enqueue(fetch, "https://example.com")
```
## Error handling
Failed tasks are marked with status `FAILED` and the error is stored:
```python
for task in pq.list_failed():
print(f"{task.name}: {task.error}")
pq.clear_failed(before=datetime.now(UTC) - timedelta(days=7))
pq.clear_completed(before=datetime.now(UTC) - timedelta(days=1))
```
## How it works
Every task runs in a forked child process:
```
Worker (parent)
|
+-- fork() -> Child executes task -> exits
| (OOM/crash only affects child)
|
+-- Continues processing next task
```
The parent monitors via `os.wait4()` and detects timeout, OOM (SIGKILL), and signal-based crashes. The child process exits after every task, giving you true memory isolation.
Multiple workers can run in parallel. Tasks are claimed atomically with PostgreSQL's `FOR UPDATE SKIP LOCKED`, so each task runs exactly once.
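The fork-and-wait loop can be sketched with the stdlib alone. This is a simplified, POSIX-only illustration using `os.waitpid` rather than pq's actual `os.wait4`-based worker:

```python
import os

def run_isolated(fn):
    """Run fn in a forked child; the parent survives any crash in the child."""
    pid = os.fork()
    if pid == 0:  # child process
        try:
            fn()
            os._exit(0)   # clean exit: task succeeded
        except Exception:
            os._exit(1)   # failure stays contained in the child
    _, status = os.waitpid(pid, 0)  # parent blocks until the child exits
    return os.WEXITSTATUS(status) == 0

print(run_isolated(lambda: None))   # True: task succeeded
print(run_isolated(lambda: 1 / 0))  # False: crash contained in the child
```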
## Alternatives
There are good options in this space. pq makes different tradeoffs:
| | Broker | Isolation | Use case |
|---|---|---|---|
| **pq** | Postgres | Fork (process-per-task) | Teams already on Postgres who want fewer moving parts |
| **Celery** | Redis/RabbitMQ | Per-worker process | Large-scale, multi-language, established teams |
| **RQ** | Redis | Per-worker process | Simple Redis-based queues |
| **Dramatiq** | Redis/RabbitMQ | Per-worker process/thread | Celery alternative with better defaults |
| **ARQ** | Redis | Async (single process) | Async-first applications |
| **Procrastinate** | Postgres | Async (single process) | Async-first, Postgres-backed, Django integration |
pq is a good fit when:
- You already run Postgres and don't want to add Redis or RabbitMQ
- You want transactional enqueueing (enqueue atomically with your writes)
- You need true process isolation per task (OOM/crash safety)
- You want periodic tasks with overlap control, pause/resume, and cron
pq is not the right choice when:
- You need very high throughput (10,000+ jobs/second) -- use a dedicated broker
- You need cross-language workers -- Celery or a dedicated queue service is better
- You need complex workflows (DAGs, chaining, fan-out) -- look at Temporal or Prefect
## Documentation
Full docs at [ricwo.github.io/pq](https://ricwo.github.io/pq/).
## Development
```bash
make dev # Start Postgres
uv run pytest # Run tests
```
## License
MIT
| text/markdown | ricwo | ricwo <r@cogram.com> | null | null | null | postgres, job-queue, task-queue, background-jobs, worker | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries",
"Topic :: System :: Distributed Computing",
"Typing :: Typed"
] | [] | null | null | <3.15,>=3.13 | [] | [] | [] | [
"alembic>=1.17.2",
"click>=8.3.1",
"croniter>=6.0.0",
"dill>=0.4.0",
"loguru>=0.7.3",
"psycopg2-binary>=2.9.11",
"pydantic>=2.12.5",
"pydantic-settings>=2.12.0",
"sqlalchemy>=2.0.45",
"mkdocs-material>=9.7.1; extra == \"docs\"",
"mkdocstrings[python]>=1.0.0; extra == \"docs\""
] | [] | [] | [] | [
"Documentation, https://ricwo.github.io/pq/",
"Homepage, https://github.com/ricwo/pq",
"Issues, https://github.com/ricwo/pq/issues",
"Repository, https://github.com/ricwo/pq"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:14:44.339647 | python_pq-0.6.0.tar.gz | 19,987 | 09/af/f2b7fe70ff3a6f5ba2db02ad590d2b6705d2f22043543341ff47266a2d16/python_pq-0.6.0.tar.gz | source | sdist | null | false | 50da685883b2d324a1a763171655d093 | a31afcc57a3676a8320f72d154b93c56e521468ac6dbf9570983f3cb683d7ae2 | 09aff2b7fe70ff3a6f5ba2db02ad590d2b6705d2f22043543341ff47266a2d16 | MIT | [
"LICENSE"
] | 231 |
2.1 | databricks-feature-engineering | 0.14.1a1 | Databricks Feature Engineering Client | The Databricks Feature Engineering client is used to:
* Create, read, and write feature tables
* Train models on feature data
* Publish feature tables to online stores for real-time serving
* Upgrade workspace feature table metadata to Unity Catalog
# Documentation
Documentation can be found per-cloud at:
- [AWS](https://docs.databricks.com/applications/machine-learning/feature-store/index.html)
- [Azure](https://docs.microsoft.com/en-us/azure/databricks/applications/machine-learning/feature-store/)
- [GCP](https://docs.gcp.databricks.com/applications/machine-learning/feature-store/index.html)
For release notes, see
- [AWS](https://docs.databricks.com/release-notes/feature-store/databricks-feature-store.html)
- [Azure](https://docs.microsoft.com/en-us/azure/databricks/release-notes/feature-store/databricks-feature-store)
- [GCP](https://docs.gcp.databricks.com/release-notes/feature-store/databricks-feature-store.html)
# Limitations
This library can only run on Databricks runtimes. It can be installed locally or in CI/CD environments for unit testing.
**DB license**
Copyright (2022) Databricks, Inc.
This library (the "Software") may not be used except in connection with the Licensee's use of the Databricks Platform Services
pursuant to an Agreement (defined below) between Licensee (defined below) and Databricks, Inc. ("Databricks"). This Software
shall be deemed part of the Downloadable Services under the Agreement, or if the Agreement does not define Downloadable Services,
Subscription Services, or if neither are defined then the term in such Agreement that refers to the applicable Databricks Platform
Services (as defined below) shall be substituted herein for "Downloadable Services". Licensee's use of the Software must comply at
all times with any restrictions applicable to the Downloadable Services and Subscription Services, generally, and must be used in
accordance with any applicable documentation.
Additionally, and notwithstanding anything in the Agreement to the contrary:
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
If you have not agreed to an Agreement or otherwise do not agree to these terms, you may not use the Software.
This license terminates automatically upon the termination of the Agreement or Licensee's breach of these terms.
Agreement: the agreement between Databricks and Licensee governing the use of the Databricks Platform Services, which shall be, with
respect to Databricks, the Databricks Terms of Service located at www.databricks.com/termsofservice, and with respect to Databricks
Community Edition, the Community Edition Terms of Service located at www.databricks.com/ce-termsofuse, in each case unless Licensee
has entered into a separate written agreement with Databricks governing the use of the applicable Databricks Platform Services.
Databricks Platform Services: the Databricks services or the Databricks Community Edition services, according to where the Software is used.
Licensee: the user of the Software, or, if the Software is being used on behalf of a company, the company.
| text/markdown | null | Databricks <feedback@databricks.com> | null | null | Databricks Proprietary License | databricks, feature engineering, feature store | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8.10 | [] | [] | [] | [
"mlflow-skinny[databricks]<4,>=2.16.0",
"pyyaml<7,>=6",
"boto3<2,>=1.16.7",
"dbl-tempo<1,>=0.1.26",
"azure-cosmos==4.3.1",
"numpy<3,>=1.19.2",
"protobuf<7,>=5",
"flask<3,>=1.1.2",
"sqlparse<1,>=0.4.0",
"databricks-sdk>=0.76.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.8.10 | 2026-02-20T08:14:28.613274 | databricks_feature_engineering-0.14.1a1-py3-none-any.whl | 338,338 | eb/e6/5d4bcaabd5ad20c992b6d81a33ffd81263e2bb676e3b543b86e10bab7e7d/databricks_feature_engineering-0.14.1a1-py3-none-any.whl | py3 | bdist_wheel | null | false | ccbe9d1d5b40b8158d3620b891507130 | ab494c853da24edaa63a9304f89ffdcb89b3d48e4f5a9fff07db85fecaca5aae | ebe65d4bcaabd5ad20c992b6d81a33ffd81263e2bb676e3b543b86e10bab7e7d | null | [] | 2,325 |
2.4 | prime | 0.5.40 | Prime Intellect CLI + SDK | <p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://github.com/user-attachments/assets/40c36e38-c5bd-4c5a-9cb3-f7b902cd155d">
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/user-attachments/assets/6414bc9b-126b-41ca-9307-9e982430cde8">
<img alt="Prime Intellect" src="https://github.com/user-attachments/assets/40c36e38-c5bd-4c5a-9cb3-f7b902cd155d" width="312" style="max-width: 100%;">
</picture>
</p>
---
<h3 align="center">
Prime Intellect CLI & SDKs
</h3>
---
<div align="center">
[](https://pypi.org/project/prime/)
[](https://pypi.org/project/prime/)
[](https://pypi.org/project/prime/)
Command line interface and SDKs for managing Prime Intellect GPU resources, sandboxes, and environments.
</div>
## Overview
Prime is the official CLI and Python SDK for [Prime Intellect](https://primeintellect.ai), providing seamless access to GPU compute infrastructure, remote code execution environments (sandboxes), and AI inference capabilities.
**What can you do with Prime?**
- Deploy GPU pods with H100, A100, and other high-performance GPUs
- Create and manage isolated sandbox environments for running code
- Access hundreds of pre-configured development environments
- SSH directly into your compute instances
- Manage team resources and permissions
- Run OpenAI-compatible inference requests
## Installation
### Using uv (recommended)
First, install uv if you haven't already:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Then install prime:
```bash
uv tool install prime
```
### Using pip
```bash
pip install prime
```
## Quick Start
### Authentication
```bash
# Interactive login (recommended)
prime login
# Or set API key directly
prime config set-api-key
# Or use environment variable
export PRIME_API_KEY="your-api-key-here"
```
Get your API key from the [Prime Intellect Dashboard](https://app.primeintellect.ai).
### Basic Usage
```bash
# Browse environments on the hub
prime env list
# List available GPUs
prime availability list
# Create a GPU pod
prime pods create --gpu A100 --count 1
# SSH into a pod
prime pods ssh <pod-id>
# Create a sandbox
prime sandbox create python:3.11
```
## Features
### Environments Hub
Access hundreds of RL environments on our community hub, deeply integrated with our sandbox, training, and evaluation stack.
```bash
# Browse available environments
prime env list
# View environment details
prime env info <environment-name>
# Install an environment locally
prime env install <environment-name>
# Create and push your own environment
prime env init my-environment
prime env push my-environment
```
Environments provide pre-configured setups for machine learning, data science, and development workflows, tested and verified by the Prime Intellect community.
### GPU Pod Management
Deploy and manage GPU compute instances:
```bash
# Browse available configurations
prime availability list --gpu-type H100_80GB
# Create a pod with specific configuration
prime pods create --id <config-id> --name my-training-pod
# Monitor pod status
prime pods status <pod-id>
# SSH access
prime pods ssh <pod-id>
# Terminate when done
prime pods terminate <pod-id>
```
### Sandboxes
Isolated environments for running code remotely:
```bash
# Create a sandbox
prime sandbox create python:3.11
# List sandboxes
prime sandbox list
# Execute commands
prime sandbox exec <sandbox-id> "python script.py"
# Upload/download files
prime sandbox upload <sandbox-id> local_file.py /remote/path/
prime sandbox download <sandbox-id> /remote/file.txt ./local/
# Clean up
prime sandbox delete <sandbox-id>
```
### Team Management
Manage resources across team contexts:
```bash
# List your teams
prime teams list
# Set team context
prime config set-team-id <team-id>
# All subsequent commands use team context
prime pods list # Shows team's pods
```
## Configuration
### API Key
Multiple ways to configure your API key:
```bash
# Option 1: Interactive (hides input)
prime config set-api-key
# Option 2: Direct
prime config set-api-key YOUR_API_KEY
# Option 3: Environment variable
export PRIME_API_KEY="your-api-key"
```
Configuration priority: CLI config > Environment variable
### SSH Key
Configure SSH key for pod access:
```bash
prime config set-ssh-key-path ~/.ssh/id_rsa.pub
```
### View Configuration
```bash
prime config view
```
## Python SDK
Prime also provides a Python SDK for programmatic access:
```python
from prime_sandboxes import APIClient, SandboxClient, CreateSandboxRequest
# Initialize client
client = APIClient(api_key="your-api-key")
sandbox_client = SandboxClient(client)
# Create a sandbox
sandbox = sandbox_client.create(CreateSandboxRequest(
name="my-sandbox",
docker_image="python:3.11-slim",
cpu_cores=2,
memory_gb=4,
))
# Wait for creation
sandbox_client.wait_for_creation(sandbox.id)
# Execute commands
result = sandbox_client.execute_command(sandbox.id, "python --version")
print(result.stdout)
# Clean up
sandbox_client.delete(sandbox.id)
```
### Async SDK
```python
import asyncio
from prime_sandboxes import AsyncSandboxClient, CreateSandboxRequest
async def main():
async with AsyncSandboxClient(api_key="your-api-key") as client:
sandbox = await client.create(CreateSandboxRequest(
name="async-sandbox",
docker_image="python:3.11-slim",
))
await client.wait_for_creation(sandbox.id)
result = await client.execute_command(sandbox.id, "echo 'Hello'")
print(result.stdout)
await client.delete(sandbox.id)
asyncio.run(main())
```
## Use Cases
### Machine Learning Training
```bash
# Deploy a pod with 8x H100 GPUs
prime pods create --gpu H100 --count 8 --name ml-training
# SSH and start training
prime pods ssh <pod-id>
```
## Support & Resources
- **Documentation**: [github.com/PrimeIntellect-ai/prime-cli](https://github.com/PrimeIntellect-ai/prime-cli)
- **Dashboard**: [app.primeintellect.ai](https://app.primeintellect.ai)
- **API Docs**: [api.primeintellect.ai/docs](https://api.primeintellect.ai/docs)
- **Discord**: [discord.gg/primeintellect](https://discord.gg/primeintellect)
- **Website**: [primeintellect.ai](https://primeintellect.ai)
## Related Packages
- **prime-sandboxes** - Lightweight SDK for sandboxes only (if you don't need the full CLI)
## License
MIT License - see [LICENSE](https://github.com/PrimeIntellect-ai/prime-cli/blob/main/LICENSE) file for details.
| text/markdown | null | Prime Intellect <contact@primeintellect.ai> | null | null | null | cli, cloud, compute, gpu | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"build>=1.0.0",
"cryptography>=41.0.0",
"httpx>=0.25.0",
"prime-evals>=0.1.3",
"prime-sandboxes>=0.1.0",
"prime-tunnel>=0.1.0",
"pydantic>=2.0.0",
"rich>=13.3.1",
"toml>=0.10.0",
"typer>=0.9.0",
"verifiers>=0.1.10",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.13.1; extra == \"dev\"",
"ty>=0.0.0a6; extra == \"dev\"",
"types-toml>=0.10.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/PrimeIntellect-ai/prime-cli",
"Documentation, https://github.com/PrimeIntellect-ai/prime-cli#readme",
"Repository, https://github.com/PrimeIntellect-ai/prime-cli.git",
"Changelog, https://github.com/PrimeIntellect-ai/prime-cli/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T08:13:48.829056 | prime-0.5.40.tar.gz | 293,688 | 82/58/c620ada2e12355b8b08c0f1b6bd385d5bbeee83e3c64582441e8a00434c2/prime-0.5.40.tar.gz | source | sdist | null | false | 7dba08cf189e996b1e7178e9dfc78ded | 59a0e7ab62890f5d1618038a0d326a4ed369ad45026ef181b11e9c815c02752b | 8258c620ada2e12355b8b08c0f1b6bd385d5bbeee83e3c64582441e8a00434c2 | MIT | [
"LICENSE"
] | 624 |
2.4 | openscvx | 0.4.1.dev91 | A general Python-based successive convexification implementation which uses a JAX backend. | <a id="readme-top"></a>
<img src="figures/openscvx_logo.svg" width="1200"/>
<p align="center">
<a href="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/lint.yml"><img src="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/lint.yml/badge.svg"/></a>
<a href="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/tests-unit.yml"><img src="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/tests-unit.yml/badge.svg"/></a>
<a href="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/tests-integration.yml"><img src="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/tests-integration.yml/badge.svg"/></a>
<a href="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/nightly.yml"><img src="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/nightly.yml/badge.svg"/></a>
<a href="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/release.yml"><img src="https://github.com/OpenSCvx/OpenSCvx/actions/workflows/release.yml/badge.svg?event=release"/></a>
</p>
<p align="center">
<a href="https://arxiv.org/abs/2410.22596"><img src="http://img.shields.io/badge/arXiv-2410.22596-B31B1B.svg"/></a>
<a href="https://www.apache.org/licenses/LICENSE-2.0"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License: Apache 2.0"/></a>
</p>
<!-- PROJECT LOGO -->
<br />
## What is OpenSCvx
OpenSCvx is a general Python-based successive convexification implementation which uses a JAX backend.
It is designed to be easy to use for anyone and fast enough for everyone, all while being open and modular for contributors.
OpenSCvx provides a clean symbolic interface for problem definition which should be intuitive to users of NumPy, JAX, and CVXPY. This allows us to hide a lot of the under-the-hood magic away from the user while also providing a modular architecture, enabling contributors to focus on the algorithms without worrying about interface design.
OpenSCvx makes heavy use of [JAX](https://github.com/jax-ml/jax) to efficiently perform calculations in the successive convex programming loop through automatic differentiation, ahead-of-time (AOT) compilation, vectorization, and GPU acceleration. Behind this is a [CVXPY](https://github.com/cvxpy/cvxpy/)-based backend to solve the convex subproblems.
This is an open project and is under active development. Try it out, give us feedback, and help contribute.
```python
import openscvx as ox
g = 9.81
# Define states
position = ox.State("position", shape=(2,))
position.min = [0.0, 0.0]
position.max = [10.0, 10.0]
position.initial = [0.0, 10.0]
position.final = [10.0, 5.0]
velocity = ox.State("velocity", shape=(1,))
velocity.min = [0.0]
velocity.max = [10.0]
velocity.initial = [0.0]
velocity.final = [ox.Free(10.0)]
# Define control (angle from vertical)
theta = ox.Control("theta", shape=(1,))
theta.min = [0.0]
theta.max = [1.755]
theta.guess = [[0.09], [1.755]]
# Define dynamics
dynamics = {
"position": ox.Concat(
velocity * ox.Sin(theta),
-velocity * ox.Cos(theta),
),
"velocity": g * ox.Cos(theta),
}
constraints = []
for state in [position, velocity]:
constraints.append(ox.ctcs(state <= state.max))
constraints.append(ox.ctcs(state.min <= state))
# Build and solve
problem = ox.Problem(
dynamics=dynamics,
constraints=constraints,
states=[position, velocity],
controls=[theta],
time=ox.Time(initial=0.0, final=ox.Minimize(2.0), min=0.0, max=2.0),
N=2,
)
problem.initialize()
results = problem.solve()
results = problem.post_process()
```
## Installation
OpenSCvx is available on [PyPI](https://pypi.org/project/openscvx/) and can be trivially installed with pip.
It is recommended to install OpenSCvx inside a virtual environment (venv, conda, uv, *etc.*). If you don't already have one set up:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
### Using pip
```bash
pip install openscvx
```
### Using uv
If you have [uv installed](https://docs.astral.sh/uv/getting-started/installation/) you can prefix the commands with `uv` for faster installation:
```bash
uv pip install openscvx
```
> [!TIP]
> **Optional Dependencies**
>
> For GUI support or CVXPYGen code generation:
> ```bash
> pip install openscvx[gui,cvxpygen]
> ```
> [!TIP]
> **Nightly Builds**
>
> To install the latest development version (nightly), use the `--pre` flag:
> ```bash
> pip install --pre openscvx
> ```
## Installing From Source
### Using pip
```bash
git clone https://github.com/OpenSCvx/OpenSCvx.git
cd OpenSCvx
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```
### Using uv
```bash
git clone https://github.com/OpenSCvx/OpenSCvx.git
cd OpenSCvx
uv venv
source .venv/bin/activate
uv pip install -e .
```
## Getting Started
Check out the OpenSCvx documentation to help you get started
- [Getting Started Docs](https://openscvx.github.io/OpenSCvx/latest/getting-started/)
- [Users Guide](https://openscvx.github.io/OpenSCvx/latest/UsersGuide/00_introduction/)
- [API Reference](https://openscvx.github.io/OpenSCvx/latest/Reference/problem/)
### Running the Examples
We also have a selection of problems in the `examples/` folder as well as on the [Examples page](https://openscvx.github.io/OpenSCvx/latest/Examples/abstract/brachistochrone/) of the documentation. The example trajectory optimization problems are grouped by application and represent some of the problem types that can be solved by OpenSCvx.
> [!NOTE]
> To run the examples, you'll need to clone this repository and install OpenSCvx in editable mode (`pip install -e .`). See the [Installing From Source](#installing-from-source) section above for detailed installation instructions.
To run a problem, simply run any of the examples directly, for example:
```sh
python3 examples/abstract/brachistochrone.py
```
and adjust the plotting as needed.
Check out the problem definitions inside `examples/` to see how to define your own problems.
## Code Structure
<img src="figures/oscvx_structure_full_dark.svg" width="1200"/>
## What is implemented
This repo has the following features:
1. Free Final Time
2. Fully adaptive time dilation (`s` is appended to the control vector)
3. Continuous-Time Constraint Satisfaction
4. FOH and ZOH exact discretization (`t` is a state so you can bring your own scheme)
5. Vectorized and Ahead-of-Time (AOT) Compiled Multishooting Discretization
6. JAX Autodiff for Jacobians
<p align="right">(<a href="#readme-top">back to top</a>)</p>
## Acknowledgements
This work was supported by a NASA Space Technology Graduate Research Opportunity and the Office of Naval Research under grant N00014-17-1-2433. The authors would like to acknowledge Natalia Pavlasek, Fabio Spada, Samuel Buckner, Abhi Kamath, Govind Chari, and Purnanand Elango as well as the other Autonomous Controls Laboratory members, for their many helpful discussions and support throughout this work.
## Citation
Please cite the following works if you use the repository,
```tex
@ARTICLE{hayner2025los,
author={Hayner, Christopher R. and Carson III, John M. and Açıkmeşe, Behçet and Leung, Karen},
journal={IEEE Robotics and Automation Letters},
title={Continuous-Time Line-of-Sight Constrained Trajectory Planning for 6-Degree of Freedom Systems},
year={2025},
volume={},
number={},
pages={1-8},
keywords={Robot sensing systems;Vectors;Vehicle dynamics;Line-of-sight propagation;Trajectory planning;Trajectory optimization;Quadrotors;Nonlinear dynamical systems;Heuristic algorithms;Convergence;Constrained Motion Planning;Optimization and Optimal Control;Aerial Systems: Perception and Autonomy},
doi={10.1109/LRA.2025.3545299}}
```
```tex
@misc{elango2024ctscvx,
title={Successive Convexification for Trajectory Optimization with Continuous-Time Constraint Satisfaction},
author={Purnanand Elango and Dayou Luo and Abhinav G. Kamath and Samet Uzun and Taewan Kim and Behçet Açıkmeşe},
year={2024},
eprint={2404.16826},
archivePrefix={arXiv},
primaryClass={math.OC},
url={https://arxiv.org/abs/2404.16826},
}
```
```tex
@misc{chari2025qoco,
title = {QOCO: A Quadratic Objective Conic Optimizer with Custom Solver Generation},
author = {Chari, Govind M and A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et},
year = {2025},
eprint = {2503.12658},
archiveprefix = {arXiv},
primaryclass = {math.OC},
}
```
| text/markdown | null | Chris Hayner and Griffin Norris <haynec@uw.edu> | null | null | Apache Software License | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"cvxpy>=1.8.1",
"qoco",
"numpy",
"jax",
"plotly",
"termcolor",
"diffrax",
"absl-py",
"flatbuffers",
"viser",
"matplotlib",
"pyyaml",
"pyqtgraph; extra == \"gui\"",
"PyQt5; extra == \"gui\"",
"scipy; extra == \"gui\"",
"PyOpenGL; extra == \"gui\"",
"PyOpenGL_accelerate; extra == \"gui\"",
"cvxpygen; extra == \"cvxpygen\"",
"qocogen; extra == \"cvxpygen\"",
"stljax; extra == \"stl\"",
"jaxlie; extra == \"lie\"",
"pytest; extra == \"test\"",
"scipy; extra == \"test\"",
"jaxlie; extra == \"test\"",
"svgpathtools; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://openscvx.github.io/openscvx/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T08:13:06.194532 | openscvx-0.4.1.dev91.tar.gz | 33,027,172 | cf/66/1c3261f8f6c79016b0bf4eda31d9044f91697df852d752dceb79b3be644c/openscvx-0.4.1.dev91.tar.gz | source | sdist | null | false | ba3e1836ce59636a708fd7561df4e56f | ea4ed58dde9a7dc15faf20b507af848e6a4a2345d5d07606b7a534f267e6f35c | cf661c3261f8f6c79016b0bf4eda31d9044f91697df852d752dceb79b3be644c | null | [
"LICENSE"
] | 225 |
2.4 | noetl | 2.8.4 | A framework to build and run data pipelines and workflows. | # NoETL
**NoETL** is an automation framework for orchestrating **APIs, databases, and scripts** using a declarative **Playbook DSL**.
Execution is standardized around an **MCP-style tool model**: consistent tool contracts, structured input/output, and a predictable lifecycle. From an MCP perspective, `tools` include API endpoints, database operations, and scripts/utilities; **NoETL** orchestrates and optimizes them via playbooks.
With **NoETL Gateway**, playbooks can be deployed as a **distributed backend**: developers ship business logic as playbooks, and UIs/clients call stable endpoints without deploying dedicated microservices for each workflow.
[](https://badge.fury.io/py/noetl)
## Documentation
**https://noetl.dev**
## Distribution channels
- **PyPI**
- `noetl` (Python): https://pypi.org/project/noetl/
- `noetlctl` (Rust CLI): https://pypi.org/project/noetlctl/
- **crates.io**
- `noetl`: https://crates.io/crates/noetl
- `noetl-gateway`: https://crates.io/crates/noetl-gateway
- **APT (Debian/Ubuntu)**
- Repo: https://github.com/noetl/apt
- **Homebrew**
- Tap: https://github.com/noetl/homebrew-tap
## License
MIT License — see [LICENSE](LICENSE) for details.
| text/markdown | null | Kadyapam <182583029+kadyapam@users.noreply.github.com> | null | null | null | etl, ml, ops, cleansing, data, pipeline, workflow, automation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"fastapi>=0.115.6",
"starlette>=0.49.1",
"pydantic>=2.11.4",
"aiofiles==24.1.0",
"psycopg[binary,pool]>=3.2.7",
"connectorx>=0.4.3",
"greenlet>=3.2.1",
"uvicorn>=0.34.0",
"requests>=2.32.3",
"httpx>=0.28.1",
"google-auth>=2.27.0",
"google-cloud-storage>=2.18.0",
"python-multipart==0.0.20",
"PyYAML>=6.0.1",
"Jinja2>=3.1.6",
"pycryptodome>=3.21",
"cryptography>=44.0.0",
"PyJWT>=2.8.0",
"urllib3>=2.3",
"Authlib>=1.6.5",
"click<8.2.1,>=8.1.0",
"psutil>=7.0.0",
"memray>=1.17.2",
"deepdiff>=8.6.1",
"lark>=1.2.2",
"duckdb>=1.3.0",
"duckdb-engine>=0.17.0",
"snowflake-connector-python>=4.0.0",
"polars[pyarrow]>=1.30.0",
"xlsxwriter>=3.2.9",
"networkx>=3.5",
"pydot>=4.0.1",
"kubernetes>=30.1.0",
"fsspec>=2025.5.1",
"gcsfs>=2025.5.1",
"boto3>=1.38.45",
"azure-identity>=1.23.0",
"azure-keyvault-secrets>=4.8.0",
"ortools>=9.10",
"kubernetes>=31.0.0",
"nats-py>=2.10.0",
"pytest>=8.1.1; extra == \"dev\"",
"pytest-asyncio>=0.23.6; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.14.0; extra == \"dev\"",
"build>=1.2.2.post1; extra == \"publish\"",
"twine>=6.1.0; extra == \"publish\"",
"maturin<2.0,>=1.0; extra == \"publish\"",
"jupyterlab>=4.4.3; extra == \"notebook\"",
"jupysql>=0.11.1; extra == \"notebook\"",
"pandas>=2.2.3; extra == \"notebook\"",
"matplotlib>=3.10.3; extra == \"notebook\"",
"noetl-cli==2.5.3; extra == \"cli\"",
"playwright>=1.43.0; extra == \"ibkr\""
] | [] | [] | [] | [
"Homepage, https://noetl.io",
"Repository, https://github.com/noetl/noetl",
"Issues, https://github.com/noetl/noetl/issues"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T08:12:00.508012 | noetl-2.8.4.tar.gz | 2,820,526 | 74/ab/b6675038d5c3593ae59538a781f227920b9444ff4e6c75c60922e8aa9e4a/noetl-2.8.4.tar.gz | source | sdist | null | false | 9ad2cc6a9e48bdab30d8226b54b33beb | b5036ce49d15e5587915d16f7975557531d72f576384be47ed0f7800204adf80 | 74abb6675038d5c3593ae59538a781f227920b9444ff4e6c75c60922e8aa9e4a | MIT | [
"LICENSE"
] | 231 |
2.4 | isage-finetune | 0.1.0.4 | SAGE Fine-tuning Framework - Trainers and data loaders for LLM fine-tuning | # sage-finetune
Fine-tuning implementations for the SAGE AI data processing framework.
## Installation
```bash
pip install isage-finetune
```
For LoRA training:
```bash
pip install isage-finetune[peft]
```
## Features
- **LoRA Trainer**: Parameter-efficient fine-tuning with Low-Rank Adaptation
- **Mock Trainer**: Testing trainer for pipeline validation
- **JSON/JSONL Loader**: Flexible data loading for instruction and chat formats
## Quick Start
```python
from sage_finetune import MockTrainer, JSONDatasetLoader
# Load training data
loader = JSONDatasetLoader()
train_data = loader.load("train.jsonl")
# Train (mock for testing)
trainer = MockTrainer()
result = trainer.train(train_data)
print(f"Loss: {result['train_loss']:.4f}")
```
### LoRA Fine-tuning
```python
from sage_finetune import LoRATrainer, LoRAConfig
trainer = LoRATrainer(
model_name="gpt2",
lora_config=LoRAConfig(r=8, lora_alpha=16),
)
result = trainer.train(train_dataset)
trainer.save_model("./my_lora_model")
```
## Data Formats
### Instruction Format
```json
{"instruction": "Summarize this text", "input": "Long text...", "output": "Summary..."}
```
### Chat Format
```json
{"messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}]}
```
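Both formats above can be normalized into a single chat representation before training. Here is a minimal sketch; the helper name `to_chat` is our illustration and not part of the SDK:

```python
def to_chat(example: dict) -> list[dict]:
    """Normalize an instruction- or chat-format record into chat messages."""
    if "messages" in example:
        # Already in chat format
        return example["messages"]
    # Instruction format: fold optional input into the user turn
    user = example["instruction"]
    if example.get("input"):
        user += "\n\n" + example["input"]
    return [
        {"role": "user", "content": user},
        {"role": "assistant", "content": example["output"]},
    ]

print(to_chat({"instruction": "Summarize this text",
               "input": "Long text...",
               "output": "Summary..."}))
```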
## Integration with SAGE
When SAGE is installed, components auto-register with the framework:
```python
from sage.libs.finetune import create_trainer
trainer = create_trainer("lora", model_name="gpt2")
```
## License
Apache 2.0
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | null | finetune, training, LoRA, LLM, AI | [] | [] | null | null | ==3.11.* | [] | [] | [] | [
"pydantic>=2.0.0",
"typing-extensions>=4.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sage-finetune",
"Repository, https://github.com/intellistream/sage-finetune"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T08:11:10.398485 | isage_finetune-0.1.0.4-py2.py3-none-any.whl | 58,247 | 49/01/92c0d6de0f700376f76f181c30606b56813001c86bc85de5f4a55656bbb4/isage_finetune-0.1.0.4-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | 6a85bf12832275aace7b1f3bec84e0e8 | da8fbfc2e3ae87179858ce466d53b030442f4628f114d6bce49426dac37943be | 490192c0d6de0f700376f76f181c30606b56813001c86bc85de5f4a55656bbb4 | MIT | [] | 213 |
2.4 | isage-data | 0.2.3.2 | SAGE Data - Unified data loaders for memory benchmark datasets (LongMemEval, Locomo, MemAgentBench, etc.) | # SAGE Data ��
**Dataset management module for SAGE benchmark suite**
Provides unified access to multiple datasets through a two-layer architecture:
- **Sources**: Physical datasets (qa_base, bbh, mmlu, gpqa, locomo, orca_dpo)
- **Usages**: Logical views for experiments (rag, libamm, neuromem, agent_eval)
## Quick Start
### Automatic Setup (Recommended)
```bash
# Clone the repository
git clone https://github.com/intellistream/sageData.git
cd sageData
# Run quickstart script (handles everything including Git LFS)
./quickstart.sh
source .venv/bin/activate
```
The `quickstart.sh` script will:
- ✅ Detect and install Git LFS if needed (for dataset files)
- ✅ Pull LFS-tracked data files automatically
- ✅ Create Python virtual environment
- ✅ Install all dependencies
**Note**: Some datasets (like LibAMM benchmark files) use Git LFS. The quickstart script will handle
this automatically, but you can also manually install Git LFS:
- Ubuntu/Debian: `sudo apt install git-lfs`
- macOS: `brew install git-lfs`
- Windows: Download from [git-lfs.github.com](https://git-lfs.github.com/)
### Manual Setup
```bash
# Install Git LFS (if needed)
git lfs install
# Pull LFS data files
git lfs pull
# Setup Python environment
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
```python
from sage.data import DataManager
manager = DataManager.get_instance()
# Access datasets by logical usage profile
rag = manager.get_by_usage("rag")
qa_loader = rag.load("qa_base") # already instantiated
queries = qa_loader.load_queries()
# Or fetch a specific data source directly
bbh_loader = manager.get_by_source("bbh")
tasks = bbh_loader.get_task_names()
```
## 🛠️ CLI Usage (Condensed)
After installation, the `sage-data` command is available directly:
```bash
sage-data list # Show data source status (downloaded/missing/remote)
sage-data usage rag # Inspect the data mapping for a usage profile
sage-data download locomo # Download a specific data source (only some sources supported)
# Options
sage-data list --json # JSON output for scripting
sage-data --data-root /path # Specify a custom data root directory
```
Sources that currently support automatic download: `locomo`, `longmemeval`, `memagentbench`, `mmlu`. Others such as `gpqa` and `orca_dpo` are loaded on demand online (Hugging Face), while `qa_base`, `bbh`, etc. are bundled with the package.
## Available Datasets
| Dataset | Description | Download Required | Storage |
| ------------ | ---------------------------------------- | ------------------------------------------------------ | --------------------------- |
| **qa_base** | Question-Answering with knowledge base | ❌ No (included) | Local files |
| **locomo** | Long-context memory benchmark | ✅ Yes (`python -m locomo.download`) | Local files (2.68MB) |
| **bbh** | BIG-Bench Hard reasoning tasks | ❌ No (included) | Local JSON files |
| **mmlu** | Massive Multitask Language Understanding | 📥 Optional (`python -m mmlu.download --all-subjects`) | On-demand or Local (~160MB) |
| **gpqa** | Graduate-Level Question Answering | ✅ Auto (Hugging Face) | On-demand (~5MB cached) |
| **orca_dpo** | Preference pairs for alignment/DPO | ✅ Auto (Hugging Face) | On-demand (varies) |
See `examples/` for detailed usage examples.
## 📖 Examples
```bash
python examples/qa_examples.py # QA dataset usage
python examples/locomo_examples.py # LoCoMo dataset usage
python examples/bbh_examples.py # BBH dataset usage
python examples/mmlu_examples.py # MMLU dataset usage
python examples/gpqa_examples.py # GPQA dataset usage
python examples/orca_dpo_examples.py # Orca DPO dataset usage
python examples/integration_example.py # Cross-dataset integration
```
## License
MIT License - see [LICENSE](LICENSE) file.
## 🔗 Links
- **Repository**: https://github.com/intellistream/sageData
- **Issues**: https://github.com/intellistream/sageData/issues
## ❓ Common Issues
**Q: Where's the LoCoMo data?**\
A: Run `python -m locomo.download` to download it (2.68MB from Hugging Face).
**Q: How to download MMLU for offline use?**\
A: Run `python -m mmlu.download --all-subjects` to download all subjects (~160MB).
**Q: GPQA access error?**\
A: You need to accept the dataset terms on Hugging Face:
https://huggingface.co/datasets/Idavidrein/gpqa
**Q: How to use Orca DPO for alignment research?**\
A: Use `DataManager.get_by_source("orca_dpo")` to get the loader, then use `format_for_dpo()` to
prepare data for training.
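The exact output of `format_for_dpo()` is not documented here, but DPO training data is conventionally a prompt paired with a chosen and a rejected response. A stdlib sketch of that shape (the field names are an assumption for illustration, not necessarily what this package returns):

```python
# Illustrative only: the conventional DPO record shape
# (prompt / chosen / rejected). Field names are an assumption,
# not necessarily what format_for_dpo() produces.
def to_dpo_record(prompt: str, preferred: str, dispreferred: str) -> dict:
    """Pack one preference pair into a DPO-style training record."""
    return {
        "prompt": prompt,
        "chosen": preferred,
        "rejected": dispreferred,
    }

record = to_dpo_record(
    "Explain photosynthesis briefly.",
    "Plants convert light, water, and CO2 into glucose and oxygen.",
    "Photosynthesis is when plants eat sunlight.",
)
```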
______________________________________________________________________
**Version**: 0.1.0 | **Last Updated**: December 2025
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | MIT | dataset, benchmark, memory, ai, longmemeval, locomo, memagentbench, sage | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | ==3.11.* | [] | [] | [] | [
"isage-common>=0.2.0",
"pandas>=2.0.0",
"numpy<2.3.0,>=1.26.0",
"pyyaml>=6.0",
"datasets>=2.14.0",
"pyarrow<18.0.0,>=10.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"detect-secrets>=1.5.0; extra == \"dev\"",
"pre-commit>=2.20.0; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sageData",
"Repository, https://github.com/intellistream/sageData",
"Documentation, https://github.com/intellistream/sageData/blob/main/README.md",
"Issues, https://github.com/intellistream/sageData/issues"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T08:11:06.310174 | isage_data-0.2.3.2-py2.py3-none-any.whl | 1,758,779 | b3/46/7ae1282aa8c9e1be8289c52ef45fb389b1cc2183b87374eea85c6d096484/isage_data-0.2.3.2-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | 505e5542346db20e336302a29d2d6278 | 4f1291af037cd5ea33cd669b430ed56d5be4a103272f1b24eaf9843604c955ea | b3467ae1282aa8c9e1be8289c52ef45fb389b1cc2183b87374eea85c6d096484 | null | [
"LICENSE"
] | 148 |
2.4 | py-oprun | 0.0.1 | Python connector for oprun | # Python connector for oprun
| text/markdown | null | txello <txello7@proton.me> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: System :: Distributed Computing",
"Topic :: System :: Networking",
"Topic :: Utilities",
"Framework :: AsyncIO",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://txello.github.io/py-oprun",
"Homepage, https://github.com/txello/py-oprun",
"Issues, https://github.com/txello/py-oprun/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:10:37.833837 | py_oprun-0.0.1.tar.gz | 2,285 | 14/5e/45a5264330129132c5d588df240792092767c9b9b5eb3beb8ddad47aaea4/py_oprun-0.0.1.tar.gz | source | sdist | null | false | 0299a54fe5ca77a5a0c6507158e58b2c | 4a7ca0075245639156652907af12a98482e150fcbb37ef6bae25e24430b3ddfe | 145e45a5264330129132c5d588df240792092767c9b9b5eb3beb8ddad47aaea4 | null | [
"LICENSE"
] | 254 |
2.4 | prisme | 2.26.1 | Code generation framework for full-stack applications from Pydantic models | # Prism
[](https://github.com/Lasse-numerous/prisme/actions/workflows/ci.yml)
[](https://codecov.io/gh/Lasse-numerous/prisme)
[](https://pypi.org/project/prisme/)
[](https://prisme.readthedocs.io/)
[](LICENSE)
> **"One spec, full spectrum."**
Prism is a code generation framework that turns Pydantic model definitions into production-ready full-stack applications. Define your data models once in Python and generate a complete backend (REST + GraphQL + MCP APIs), a React frontend, authentication, admin panel, tests, Docker config, CI/CD pipelines, and cloud deployment — all from a single spec file.
```bash
pip install prisme # or: uv add prisme
prisme create my-app && cd my-app && prisme install && prisme generate && prisme test && prisme dev
```
## Why Prism?
Most code generators produce a one-time scaffold you immediately start editing by hand. Prism is different: it generates code you can **regenerate** without losing your customizations. Protected regions, base/extension file splitting, and smart merge strategies let you keep evolving your spec while preserving every line of custom business logic.
| Problem | How Prism solves it |
|---|---|
| Writing the same CRUD across REST, GraphQL, and frontend | Define once in a spec, generate everywhere |
| Generated code becomes unmaintainable after edits | Four file strategies preserve your customizations across regenerations |
| Setting up auth, admin, Docker, CI for every project | All included — toggle features in the spec |
| No type safety between backend and frontend | End-to-end types from database column to React prop |
| Repetitive filtering, sorting, pagination boilerplate | Declared per-field in the spec, generated into every API layer |
## What You Get
From a single `specs/models.py` file, `prisme generate` produces:
**Backend** — Python 3.13+ / FastAPI / SQLAlchemy (async)
- SQLAlchemy models with relationships, soft delete, timestamps, and temporal queries
- Pydantic schemas (create, update, read, list, filter)
- Service layer with CRUD, bulk operations, nested creates, and extension points
- REST API with filtering, sorting, pagination, and OpenAPI docs
- GraphQL API (Strawberry) with queries, mutations, connections, and subscriptions stubs
- MCP server (FastMCP) so AI assistants can interact with your data
- Alembic migrations
- JWT authentication with RBAC, OAuth (GitHub/Google), email verification, password reset
- API key authentication
- Admin panel with role-based access
**Frontend** — React 19 / TypeScript / Vite / Tailwind CSS
- TypeScript types mirroring your backend schemas
- React components (list, detail, create, edit) with the Nordic design system
- Data-fetching hooks (REST and GraphQL)
- Client-side routing with protected routes
- Auth pages (login, signup, password reset, profile)
- Admin dashboard
- Landing page, error pages, search
- Headless component architecture with pluggable widgets
**Infrastructure & DevOps**
- Docker Compose for dev and production (with Traefik reverse proxy)
- GitHub Actions CI/CD with Codecov, semantic-release, Dependabot, commitlint
- Hetzner Cloud deployment via Terraform
- Dev container with Claude Code integration
**Testing**
- Backend tests (pytest) and frontend tests (Vitest + React Testing Library)
- Generated automatically from your spec
## Spec as Code
Everything starts with a Python file. No YAML, no GUI — just Pydantic models with full IDE support.
```python
# specs/models.py
from prisme import (
StackSpec, ModelSpec, FieldSpec, FieldType, FilterOperator,
RESTExposure, GraphQLExposure, MCPExposure, FrontendExposure,
)
spec = StackSpec(
name="my-crm",
version="1.0.0",
description="Customer Relationship Management",
models=[
ModelSpec(
name="Customer",
soft_delete=True,
timestamps=True,
fields=[
FieldSpec(
name="name",
type=FieldType.STRING,
max_length=255,
required=True,
searchable=True,
filter_operators=[FilterOperator.EQ, FilterOperator.ILIKE],
),
FieldSpec(
name="email",
type=FieldType.STRING,
max_length=255,
required=True,
unique=True,
ui_widget="email",
),
FieldSpec(
name="status",
type=FieldType.ENUM,
enum_values=["active", "inactive", "prospect"],
default="prospect",
),
],
rest=RESTExposure(enabled=True, tags=["customers"]),
graphql=GraphQLExposure(enabled=True, use_connection=True),
mcp=MCPExposure(enabled=True, tool_prefix="customer"),
frontend=FrontendExposure(enabled=True, nav_label="Customers"),
),
],
)
```
**13 field types** (STRING, TEXT, INTEGER, FLOAT, DECIMAL, BOOLEAN, DATETIME, DATE, TIME, UUID, JSON, ENUM, FOREIGN_KEY) and **17 filter operators** give you fine-grained control over every field.
## Selective Exposure
Each model independently controls which APIs and UI it exposes:
```python
ModelSpec(
name="InternalMetric",
rest=RESTExposure(enabled=True), # REST API only
graphql=GraphQLExposure(enabled=False), # No GraphQL
mcp=MCPExposure(enabled=False), # No MCP tools
frontend=FrontendExposure(enabled=False), # No UI pages
)
```
## Extend, Don't Overwrite
Prism uses four file strategies to keep your customizations safe:
| Strategy | Behavior | Use case |
|---|---|---|
| `ALWAYS_OVERWRITE` | Regenerated every time | Types, schemas, generated code |
| `GENERATE_ONCE` | Created once, never touched again | Custom pages, user-written services |
| `GENERATE_BASE` | Base class regenerated, your extension preserved | Service layer, components |
| `MERGE` | Smart merge with protected regions | Router assembly, app providers |
Protected regions (`// PRISM:PROTECTED:START` / `// PRISM:PROTECTED:END`) let you embed custom code inside regenerated files — Prism preserves those blocks on every regeneration.
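A minimal sketch of how such preservation can work (this is the general idea, not Prism's actual implementation): collect the protected-block bodies from the previously customized file, then splice them into the freshly regenerated text at the matching marker pairs.

```python
# Sketch of protected-region preservation (not Prism's real code):
# the body of each protected block in the old (customized) file is
# copied into the newly regenerated file at the matching marker pair.
START = "// PRISM:PROTECTED:START"
END = "// PRISM:PROTECTED:END"

def extract_protected(text: str) -> list[list[str]]:
    """Return the line bodies of all protected blocks, in order."""
    blocks, current = [], None
    for line in text.splitlines():
        if START in line:
            current = []
        elif END in line and current is not None:
            blocks.append(current)
            current = None
        elif current is not None:
            current.append(line)
    return blocks

def merge(regenerated: str, customized: str) -> str:
    """Replace protected bodies in `regenerated` with those from `customized`."""
    saved = extract_protected(customized)
    out, inside = [], False
    for line in regenerated.splitlines():
        if START in line:
            out.append(line)
            out.extend(saved.pop(0) if saved else [])
            inside = True  # drop the freshly generated placeholder body
        elif END in line:
            out.append(line)
            inside = False
        elif not inside:
            out.append(line)
    return "\n".join(out)
```

Here `merge(new_text, old_text)` would keep your edits between the markers while everything outside them is regenerated.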
## Project Templates
Start with the template that fits your use case:
```bash
prisme create my-app # Full-stack (default)
prisme create my-app --template minimal # Backend only
prisme create my-app --template api-only # API without frontend
prisme create my-app --template mcp-only # MCP server only
prisme create my-app --template website # Content website
prisme create my-app --template saas # SaaS with auth + billing
prisme create my-app --template enterprise-platform # Enterprise with admin
```
Options: `--database sqlite|postgresql`, `--package-manager npm|pnpm|yarn`, `--docker`, `--no-ci`, and more.
## Authentication & Authorization
Toggle in your spec — generated end-to-end:
- **JWT authentication** with access/refresh tokens
- **Role-based access control** (RBAC) with custom roles and permissions
- **OAuth** (GitHub, Google) with configurable providers
- **API key authentication** for service-to-service communication
- **Email verification** and **password reset** via Resend
- **Signup whitelisting** and access control
- **Admin panel** with role-gated views
## Design System
The generated frontend ships with the **Nordic** design system (Tailwind-based), with additional presets:
- **3 theme presets**: Nordic, Minimal, Corporate
- **Dark mode** with light/dark/system toggle
- **2 icon sets**: Lucide React, Heroicons
- **Customizable** colors, fonts, border radius, and animations
## Docker & Deployment
```bash
# Local development with Docker
prisme create my-app --docker
prisme dev --docker
# → app available at http://my-app.localhost
# Production deployment to Hetzner Cloud
prisme deploy init --domain example.com
prisme deploy apply -e production
```
- Shared **Traefik** reverse proxy — run multiple projects simultaneously
- Automatic **subdomain routing** (`project-name.localhost`)
- Production Docker Compose with replicas and SSL
- **Terraform** templates for Hetzner Cloud with staging/production environments
## CLI Reference
Prism ships with **88 CLI commands** across project lifecycle, code generation, testing, Docker, deployment, and more:
```bash
# Core workflow
prisme create my-project # Scaffold a new project
prisme install # Install backend + frontend dependencies
prisme generate # Generate code from spec
prisme generate --dry-run # Preview changes without writing
prisme generate --diff # Show diff of what would change
prisme test # Run all tests (backend + frontend)
prisme dev # Start dev servers (backend + frontend)
prisme dev --watch # Watch spec for changes and regenerate
prisme validate specs/models.py # Validate your spec
# Database
prisme db migrate # Create and apply Alembic migrations
prisme db seed # Seed with test data
prisme db reset # Reset database
# Docker
prisme docker init # Generate Docker dev config
prisme docker init-prod --domain example.com # Production config
prisme dev --docker # Run everything in Docker
# CI/CD
prisme ci init # Generate GitHub Actions workflows
# Deployment
prisme deploy init # Initialize Terraform config
prisme deploy apply -e staging # Deploy to staging
prisme deploy ssh production # SSH into production server
# Override management
prisme review list # See what you've customized
prisme review diff <file> # Diff a customized file
```
See `prisme --help` for the full command tree.
## Technology Stack
| Layer | Technology |
|-------|------------|
| Specification | Pydantic (Python 3.13+) |
| Database | PostgreSQL / SQLite |
| ORM | SQLAlchemy (async) |
| Migrations | Alembic |
| REST API | FastAPI |
| GraphQL | Strawberry GraphQL |
| MCP | FastMCP |
| Auth | JWT + OAuth + API Keys |
| Frontend | React 19 + TypeScript + Vite + Tailwind CSS |
| Testing | pytest / Vitest + React Testing Library |
| Containers | Docker + Traefik |
| CI/CD | GitHub Actions + semantic-release |
| Deployment | Terraform (Hetzner Cloud) |
## Under the Hood
Prism's generator pipeline:
```
StackSpec → validate → GeneratorContext → 30 generators → 278 Jinja2 templates → your project
```
- **30 generators** (11 backend, 15 frontend, 2 testing, 2 infrastructure)
- **278 Jinja2 templates** producing static, inspectable output
- **Manifest tracking** detects which files you've customized
- **Protected region** parsing preserves inline customizations
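As an illustration of the template stage (a dependency-free sketch using the stdlib; Prism itself uses Jinja2 templates): fields from the spec feed a template that renders generated source text.

```python
# Stdlib sketch of the spec -> template -> code stage.
# Prism uses Jinja2; string.Template stands in here so the
# example has no third-party dependencies.
from string import Template

model_template = Template(
    "class ${name}(Base):\n"
    '    __tablename__ = "${table}"\n'
)

rendered = model_template.substitute(name="Customer", table="customers")
```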
## Installation
```bash
# Using uv (recommended)
uv add prisme
# Using pip
pip install prisme
```
Requires Python 3.13+. Published on [PyPI](https://pypi.org/project/prisme/).
## Documentation
Full documentation at **[prisme.readthedocs.io](https://prisme.readthedocs.io/)**:
- [Getting Started](https://prisme.readthedocs.io/getting-started/) — Installation and quickstart
- [User Guide](https://prisme.readthedocs.io/user-guide/) — CLI reference, spec guide, extensibility
- [Tutorials](https://prisme.readthedocs.io/tutorials/) — Step-by-step project tutorials
- [API Reference](https://prisme.readthedocs.io/reference/) — Specification class reference
- [Architecture](https://prisme.readthedocs.io/architecture/) — Design principles and internals
## Contributing
Contributions welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for setup, commit conventions, and development workflow.
```bash
git clone https://github.com/Lasse-numerous/prisme.git && cd prisme
uv sync --all-extras
uv run pre-commit install --hook-type pre-commit --hook-type commit-msg --hook-type pre-push
uv run pytest
```
## License
MIT License — see [LICENSE](LICENSE) for details.
Copyright (c) 2026 Numerous ApS
| text/markdown | Prism Contributors | null | null | null | null | code-generation, fastapi, graphql, pydantic, react | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Code Generators",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"click>=8.1",
"jinja2>=3.1",
"pydantic>=2.10",
"rich>=13.0",
"mypy>=1.14; extra == \"dev\"",
"playwright>=1.40; extra == \"dev\"",
"pre-commit>=4.0; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest-cov>=6.0; extra == \"dev\"",
"pytest-playwright>=0.4; extra == \"dev\"",
"pytest-timeout>=2.3; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"python-semantic-release>=9.0; extra == \"dev\"",
"ruff>=0.9; extra == \"dev\"",
"mkdocs-material>=9.5; extra == \"docs\"",
"mkdocs-minify-plugin>=0.8; extra == \"docs\"",
"mkdocs>=1.6; extra == \"docs\"",
"mkdocstrings[python]>=0.24; extra == \"docs\"",
"pygments>=2.17; extra == \"docs\"",
"pymdown-extensions>=10.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/Lasse-numerous/prisme",
"Documentation, https://prisme.readthedocs.io/",
"Repository, https://github.com/Lasse-numerous/prisme",
"Issues, https://github.com/Lasse-numerous/prisme/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T08:10:21.569186 | prisme-2.26.1.tar.gz | 929,148 | aa/6b/a4e4ed89535e6888eaeaca4b452820dcb42f4d22580066794cb5ac687e60/prisme-2.26.1.tar.gz | source | sdist | null | false | da5ad263e99f3020f7acb2d87b9821f1 | 3063ed81c0a1c89fa425024baab0a71f4a45884c89a76660acbbf74dd429aba4 | aa6ba4e4ed89535e6888eaeaca4b452820dcb42f4d22580066794cb5ac687e60 | MIT | [
"LICENSE"
] | 254 |
2.4 | django-kickstartx | 1.1.0 | Scaffold production-ready Django projects in seconds — MVP or REST API, CBV or FBV. | # 🚀 Django Kickstart
**Scaffold production-ready Django projects in seconds.**
Skip the boilerplate. Start building.
[](https://pypi.org/project/django-kickstartx/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
---
## ✨ Features
- **Two project types**: MVP (traditional Django with templates) or API (Django REST Framework)
- **View style choice**: Function-Based Views (FBV) or Class-Based Views (CBV)
- **Database options**: SQLite (dev) or PostgreSQL (production)
- **Docker support**: Optional `--docker` flag generates a production-ready `Dockerfile`, `docker-compose.yml`, and `entrypoint.sh`
- **Auto virtual environment**: Creates a venv and installs dependencies automatically
- **Production-ready settings**: Security hardened, environment variables via `python-decouple`
- **Admin panel**: Enabled and configured out of the box
- **URL routing**: Fully wired with app URLs included
- **Example model**: `Item` model with admin registration, tests, and views
- **Beautiful starter templates**: Modern CSS with responsive layout (MVP only)
- **DRF browsable API**: Auto-configured with pagination and permissions (API only)
---
## 📦 Installation
```bash
pip install django-kickstartx
```
---
## 🚀 Quick Start
### Interactive mode (guided prompts)
```bash
django-kickstart create myproject
```
### Flag mode (one-liner)
```bash
# MVP with function-based views + SQLite
django-kickstart create myproject --type mvp --views fbv --db sqlite
# REST API with class-based views + PostgreSQL
django-kickstart create myproject --type api --views cbv --db postgresql
# Any project with Docker support
django-kickstart create myproject --type api --views fbv --db postgresql --docker
```
### After creating your project
A virtual environment is created automatically with all dependencies installed.
```bash
cd myproject
# Activate the virtual environment
# Windows:
venv\Scripts\activate
# macOS/Linux:
source venv/bin/activate
cp .env.example .env
python manage.py migrate
python manage.py createsuperuser
python manage.py runserver
```
> **Tip:** Use `--no-venv` to skip automatic virtual environment creation.
### With Docker
If you used `--docker`, skip the venv entirely and use Compose:
```bash
cd myproject
cp .env.example .env
docker-compose up --build
```
Once the containers are running:
```bash
docker-compose exec web python manage.py createsuperuser
```
> **Note:** For PostgreSQL projects, the `web` container waits for the database to pass its health check before running migrations automatically.
---
## 🔧 Options
| Flag | Choices | Default | Description |
|---|---|---|---|
| `--type` | `mvp`, `api` | interactive | MVP (templates) or API (DRF) |
| `--views` | `fbv`, `cbv` | interactive | Function or class-based views |
| `--db` | `sqlite`, `postgresql` | interactive | Database backend |
| `--no-venv` | — | `false` | Skip automatic virtual environment creation |
| `--docker` | — | `false` | Add Docker configuration (`Dockerfile`, `docker-compose.yml`, `.dockerignore`, and `entrypoint.sh` for PostgreSQL) |
---
## 📁 Generated Structure
### MVP Project
```
myproject/
├── venv/ # Auto-created virtual environment
├── manage.py
├── requirements.txt
├── .env.example
├── .gitignore
├── myproject/
│ ├── settings.py # Security, DB, static/media config
│ ├── urls.py # Admin + core app wired
│ ├── wsgi.py
│ └── asgi.py
├── core/
│ ├── admin.py # Item model registered
│ ├── models.py # Example Item model
│ ├── views.py # FBV or CBV
│ ├── urls.py
│ ├── forms.py # ModelForm
│ ├── tests.py
│ └── templates/core/
│ ├── base.html
│ ├── home.html
│ └── about.html
└── static/css/style.css
```
### API (DRF) Project
```
myproject/
├── venv/ # Auto-created virtual environment
├── manage.py
├── requirements.txt
├── .env.example
├── .gitignore
├── myproject/
│ ├── settings.py # DRF + CORS config included
│ ├── urls.py # Admin + /api/ router
│ ├── wsgi.py
│ └── asgi.py
└── core/
├── admin.py
├── models.py
├── serializers.py # DRF ModelSerializer
├── views.py # @api_view or ModelViewSet
├── urls.py # DRF Router or explicit paths
└── tests.py
```
### With `--docker` (additional files)
```
myproject/
├── Dockerfile # python:3.12-slim-bookworm, no-cache pip install
├── docker-compose.yml # web service (+ db service for PostgreSQL)
├── .dockerignore
└── entrypoint.sh # PostgreSQL only — waits for DB, then migrates
```
---
## 🤔 What's Included?
### Settings Highlights
- `SECRET_KEY` loaded from `.env`
- `DEBUG` and `ALLOWED_HOSTS` from environment
- Pre-configured password validators
- Static & media file configuration
- Production security settings (commented, ready to uncomment)
- Login/logout redirect URLs
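The environment-driven settings above rely on python-decouple's `config()` lookup. A stdlib sketch of what such a lookup does (read an environment variable, fall back to a default, optionally cast — illustrative, not the real library):

```python
# Stdlib sketch mimicking python-decouple's config(): read an
# environment variable, fall back to a default, optionally cast.
# Illustrative only — the real library also reads .env files.
import os

def config(name: str, default=None, cast=str):
    value = os.environ.get(name)
    if value is None:
        if default is None:
            raise RuntimeError(f"{name} is not set and has no default")
        return default
    return cast(value)

os.environ["DEBUG"] = "1"
DEBUG = config("DEBUG", default=False, cast=lambda v: v in ("1", "true", "True"))
ALLOWED_HOSTS = config("ALLOWED_HOSTS", default="localhost").split(",")
```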
### MVP Extras
- Django HTML templates with `{% block %}` structure
- Clean starter CSS with responsive grid
- ModelForm with widget customization
### API Extras
- Django REST Framework with pagination
- `django-cors-headers` configured
- `django-filter` included in requirements
- DRF browsable API at `/api/`
---
## 📄 License
MIT © 2026
---
## 🤝 Contributing
1. Fork the repo
2. Create a feature branch: `git checkout -b feature/my-feature`
3. Commit: `git commit -m 'Add my feature'`
4. Push: `git push origin feature/my-feature`
5. Open a Pull Request
---
## 🌟 Star this project
If Django Kickstart saved you time, give it a ⭐ on GitHub!
| text/markdown | null | Drew Hilario <hilarioandrew12@gmail.com> | null | null | MIT | django, scaffold, boilerplate, starter, project-template, kickstart | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Code Generators"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.0",
"jinja2>=3.0",
"colorama>=0.4"
] | [] | [] | [] | [
"Homepage, https://www.djangokickstartx.me/",
"Repository, https://github.com/andrewhilario/django-kickstartx",
"Issues, https://github.com/andrewhilario/django-kickstartx/issues"
] | twine/6.2.0 CPython/3.10.5 | 2026-02-20T08:10:15.927808 | django_kickstartx-1.1.0.tar.gz | 21,861 | ef/8f/d766cb9d9ffe03353da67acef924f4901f21648eae3997d3047317546eaf/django_kickstartx-1.1.0.tar.gz | source | sdist | null | false | aaea0bb258cea0ea97d53d054beab3d3 | 5d422baf3e7cebeba348872e3207258fa62098c028abbbb4c9949be5ada296d2 | ef8fd766cb9d9ffe03353da67acef924f4901f21648eae3997d3047317546eaf | null | [
"LICENSE"
] | 239 |
2.4 | nedo-vision-training | 1.3.2 | A comprehensive training service library for AI models in the Nedo Vision platform | # Nedo Vision Training Service
A distributed AI model training service for the Nedo Vision platform. This service manages training workflows, monitoring, and lifecycle management for computer vision models using RF-DETR architecture.
## Features
- **Configurable Training Service**: Automated training with customizable intervals and parameters
- **gRPC Communication**: Reliable communication with the vision manager and other services
- **Distributed Training**: Support for multi-GPU and distributed training scenarios
- **Real-time Monitoring**: System resource monitoring and training progress tracking
- **Cloud Integration**: AWS S3 integration for model storage and dataset management
- **Message Queue Support**: RabbitMQ integration for task queue management
## Installation
Install the package from PyPI:
```bash
pip install nedo-vision-training
```
For GPU support with CUDA 12.1:
```bash
pip install nedo-vision-training[gpu] --extra-index-url https://download.pytorch.org/whl/cu121
```
For development with all tools:
```bash
pip install nedo-vision-training[dev]
```
## Quick Start
### Using the CLI
After installation, you can use the training service CLI:
```bash
# Show CLI help
nedo-training --help
# Check system dependencies and requirements
nedo-training doctor
# Start training service with authentication token
nedo-training run --token YOUR_TOKEN
# Start with custom server configuration
nedo-training run --token YOUR_TOKEN --server-host custom.server.com --server-port 60000
# Start with custom REST API port
nedo-training run --token YOUR_TOKEN --rest-api-port 8081
# Start with custom intervals
nedo-training run --token YOUR_TOKEN --system-usage-interval 30 --latency-check-interval 15
# Start with all custom configurations
nedo-training run --token YOUR_TOKEN \
--server-host custom.server.com \
--server-port 60000 \
--rest-api-port 8081 \
--system-usage-interval 30 \
--latency-check-interval 15
```
### Configuration Options
The service supports various configuration options:
#### Available Commands
- `doctor`: Check system dependencies and requirements (CUDA, NVIDIA drivers, etc.)
- `run`: Start the training service
#### Run Command Options
- `--token`: Authentication token for secure communication (required)
- `--server-host`: gRPC server host (default: localhost)
- `--server-port`: gRPC server port (default: 50051)
- `--rest-api-port`: Manager REST API port (default: 8081)
- `--system-usage-interval`: System usage reporting interval in seconds (default: 30)
- `--latency-check-interval`: Latency monitoring interval in seconds (default: 10)
## Architecture
### Core Components
- **TrainingService**: Main service orchestrator for training workflows
- **RFDETRTrainer**: RF-DETR algorithm implementation with PyTorch backend
- **TrainerLogger**: Real-time training progress logging via gRPC
- **ResourceMonitor**: System resource monitoring (GPU, CPU, memory)
### Dependencies
The service relies on several key technologies:
- **PyTorch**: Deep learning framework with CUDA support
- **RF-DETR**: Roboflow's Real-time Detection Transformer
- **gRPC**: High-performance RPC framework
- **RabbitMQ**: Message queue for distributed task management
- **AWS SDK**: Cloud storage integration
- **NVIDIA ML**: GPU monitoring and management
## Troubleshooting
### Common Issues
1. **gRPC Connection Timeouts**: Ensure the server host and port are correctly configured
2. **CUDA Out of Memory**: Reduce batch size or use gradient accumulation
3. **Missing Dependencies**: Reinstall with `pip install --upgrade nedo-vision-training`
### Support
For issues and questions:
- Check the logs for detailed error information
- Ensure your token is valid and not expired
- Verify network connectivity to the training manager
## License
This project is part of the Nedo Vision platform. Please refer to the main project license for usage terms.
| text/markdown | null | Willy Achmat Fauzi <willy.achmat@gmail.com> | null | Willy Achmat Fauzi <willy.achmat@gmail.com> | null | computer-vision, machine-learning, ai, training, deep-learning, object-detection, neural-networks, pytorch | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"grpcio<2.0.0,>=1.59.0",
"grpcio-tools<2.0.0,>=1.59.0",
"pika<2.0.0,>=1.3.0",
"rfdetr<2.0.0,>=1.2.0",
"pynvml<12.0.0,>=11.4.0",
"psutil<6.0.0,>=5.8.0",
"torch<3.0.0,>=2.0.0",
"torchvision<1.0.0,>=0.15.0",
"numpy<2.0.0,>=1.21.0",
"pillow<11.0.0,>=9.0.0",
"opencv-python<5.0.0,>=4.8.0",
"requests<3.0.0,>=2.31.0",
"tqdm<5.0.0,>=4.65.0",
"torch==2.3.1; extra == \"gpu\"",
"torchvision==0.18.1; extra == \"gpu\"",
"pytest>=7.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"isort>=5.10.0; extra == \"dev\"",
"mypy>=0.950; extra == \"dev\"",
"flake8>=4.0.0; extra == \"dev\"",
"pre-commit>=2.17.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/sindika/research/nedo-vision/nedo-vision-training-service",
"Documentation, https://gitlab.com/sindika/research/nedo-vision/nedo-vision-training-service/-/blob/main/README.md",
"Repository, https://gitlab.com/sindika/research/nedo-vision/nedo-vision-training-service",
"Bug Reports, https://gitlab.com/sindika/research/nedo-vision/nedo-vision-training-service/-/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T08:10:03.972492 | nedo_vision_training-1.3.2.tar.gz | 51,137 | 76/c7/ea5b6aa436f16a6b45e580c02a57bd5a6dc5cfd72f87a6202654d982c280/nedo_vision_training-1.3.2.tar.gz | source | sdist | null | false | 629fe287eaea67c57cfe344ebe5641d9 | 4f602220dbce11e7ae055086eebbe7718da6ced88149744862c1d03d89a3fdd7 | 76c7ea5b6aa436f16a6b45e580c02a57bd5a6dc5cfd72f87a6202654d982c280 | MIT | [] | 241 |
2.4 | rational-linkages | 2.5.0 | Rational Linkages | [](https://github.com/hucik14/rational-linkages)
[](https://git.uibk.ac.at/geometrie-vermessung/rational-linkages)
[](https://pypi.org/project/rational-linkages/)
[](https://doi.org/10.1007/978-3-031-64057-5_27)
[](https://git.uibk.ac.at/geometrie-vermessung/rational-linkages/-/jobs)
[](https://rational-linkages.readthedocs.io/?badge=latest)
[](https://github.com/hucik14/rational-linkages/issues)
[](https://git.uibk.ac.at/geometrie-vermessung/rational-linkages/-/jobs)
[](https://git.uibk.ac.at/geometrie-vermessung/rational-linkages/-/network/main)
[](https://mybinder.org/v2/gh/hucik14/rational-linkages/HEAD?labpath=docs%2Fsource%2Ftutorials%2Fsynthesis_bennett.ipynb)
# Rational Linkages <img src="/docs/source/figures/rl-logo.png" width="5%">
This Python-based package provides a collection of methods for the synthesis,
analysis, design, and rapid prototyping
of single-loop rational linkages, allowing one to create 3D-printable,
collision-free mechanisms synthesised for a given task (a set of poses).
<img src="/docs/source/figures/r4.JPEG" width="24%"> <img src="/docs/source/figures/r6li.JPEG" width="24%"> <img src="/docs/source/figures/r6hp.JPEG" width="24%"> <img src="/docs/source/figures/r6joh.JPEG" width="24%">
The package was originally developed as a part of the research project at the
Unit of Geometry and Surveying, University of Innsbruck, Austria.
## Documentation, tutorials, issues
[Rational Linkages Documentation](https://rational-linkages.readthedocs.io/) is
hosted on Read the Docs, and provides a comprehensive overview of the package with
[examples and tutorials](https://rational-linkages.readthedocs.io/latest/general/overview.html).
Since the self-hosted repository (GitLab, University of Innsbruck) does not allow external users to create issues,
please use the [package mirror](https://github.com/hucik14/rational-linkages)
hosted on GitHub for submitting **issues** and **feature requests**. Additionally,
you can *"watch/star"* the GitHub repository **to get notified about updates**
(new releases will also be announced there).
You can try a live example Jupyter notebook on Binder by clicking the
following badge:
[](https://mybinder.org/v2/gh/hucik14/rational-linkages/HEAD?labpath=docs%2Fsource%2Ftutorials%2Fsynthesis_bennett.ipynb)
For other questions or contributions, please email the author at:
`daniel.huczala@uibk.ac.at`
STL files of some mechanisms may be found as
[models on Printables.com](https://www.printables.com/@hucik14_497869/collections/443601).
The results may look like this Bennett manipulator made by our collaborators from the Department of Robotics,
VSB -- Technical University
of Ostrava. See [full video on Youtube](https://www.youtube.com/watch?v=T_7lkPjdcCg).

## Installation instructions
The recommended Python version is **3.11**, as it provides the smoothest plotting
experience (though 3.10 and higher are supported). Python 3.11 is also the version
used for development.
### Install from PyPI
Using pip:
<code>pip install rational-linkages</code>
or with optional dependencies:
<code>pip install rational-linkages[opt,cad]</code>
Mac/Linux users might need to use backslashes to escape the brackets, e.g.:
<code>pip install rational-linkages\\[opt,cad\\]</code>
This installs the **opt**ional dependencies (scipy - optimization problem solving, ipython - inline plotting,
matplotlib - alternative engine for 3D plotting, gmpy2 - optimized symbolic computations, trimesh + manifold3d - STL
mesh generation)
and the **cad** dependencies (exudyn - multibody simulations, ngsolve - work with meshes in exudyn,
build123d - generating STEP files of linkages, trimesh + manifold3d - generating STL files).
On **Linux systems**, some additional libraries might be required for GUI
interactive plotting with PyQt6. For example,
on Ubuntu, they can be installed as follows:
<code>sudo apt install libgl1-mesa-glx libxkbcommon-x11-0 libegl1 libdbus-1-3</code>
or on Ubuntu 24.04 and higher:
<code>sudo apt install libgl1 libxkbcommon-x11-0 libegl1 libdbus-1-3</code>
On 64-bit platforms, the <code>gmpy2</code> package for optimized symbolic computations can be useful.
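After installing, you can confirm the package is visible to your environment with a quick, package-agnostic check. This sketch uses only the standard library and does not assume any rational-linkages API; `installed_version` is an illustrative helper, not part of the package:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str) -> str:
    """Return the installed version of `pkg`, or a placeholder if missing."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

print(installed_version("rational-linkages"))
```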
### Install from source
1. Clone the repository (preferably using your Git client, the button at the top of this page, or the following command):
<code>git clone https://git.uibk.ac.at/geometrie-vermessung/rational-linkages.git </code>
2. Navigate to the repository folder
<code>cd rational-linkages</code>
3. Install the *editable* version of the package using pip:
<code>pip install -e .[opt]</code>
or, to also include the development and documentation dependencies:
<code>pip install -e .[opt,dev,doc]</code>
Mac/Linux users might need to use backslashes to escape the brackets, e.g.:
<code>pip install -e .\\[opt\\]</code>
To develop locally, you need to install the [Rust toolchain](https://www.rust-lang.org) and
build the Rust code yourself. On top of that, on **Windows**, you need to install a
C++ build toolchain. In the `Visual Studio Installer`, select:
* MSVC v143 - VS 2022 C++ x64/x86 build tools (latest)
* Windows 11 SDK
* C++ CMake tools for Windows
On **Linux**, you need to install:
* build-essential
Then, if adding Rust-based functions, navigate to the `rational_linkages/rust` folder and run:
<code>cargo build --release</code>
## Citing the package
For additional information, see our paper; if you use the package, please
cite it:
Huczala, D., Siegele, J., Thimm, D.A., Pfurner, M., Schröcker, HP. (2024).
Rational Linkages: From Poses to 3D-Printed Prototypes.
In: Lenarčič, J., Husty, M. (eds) Advances in Robot Kinematics 2024. ARK 2024.
Springer Proceedings in Advanced Robotics, vol 31. Springer, Cham.
DOI: [10.1007/978-3-031-64057-5_27](https://doi.org/10.1007/978-3-031-64057-5_27).
```bibtex
@inproceedings{huczala2024linkages,
title={Rational Linkages: From Poses to 3D-printed Prototypes},
author={Daniel Huczala and Johannes Siegele and Daren A. Thimm and Martin Pfurner and Hans-Peter Schröcker},
year={2024},
booktitle={Advances in Robot Kinematics 2024. ARK 2024},
publisher={Springer International Publishing},
url={https://doi.org/10.1007/978-3-031-64057-5_27},
doi={10.1007/978-3-031-64057-5_27},
}
```
### Preprint of the paper
On **arXiv:2403.00558**: [https://arxiv.org/abs/2403.00558](https://arxiv.org/abs/2403.00558).
## Acknowledgements
Funded by the European Union. Views and opinions expressed are however those of
the author(s) only and do not necessarily reflect those of the European Union
or the European Research Executive Agency (REA). Neither the European Union
nor the granting authority can be held responsible for them.
<img src="./docs/source/figures/eu.png" width="250" />
| text/markdown | null | Daniel Huczala <daniel.huczala@uibk.ac.at> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"biquaternion-py>=1.2.0",
"numpy>=1.10.0",
"sympy>=1.10.0",
"PyQt6>=6.2.0",
"pyqtgraph<0.14",
"PyOpenGL>=3.0.0",
"matplotlib>=3.9.0; platform_system == \"Windows\" and platform_machine == \"ARM64\"",
"ipython>=8.0.0; extra == \"opt\"",
"scipy>=1.10.0; extra == \"opt\"",
"matplotlib>=3.9.0; extra == \"opt\"",
"exudyn>=1.9.0; extra == \"cad\"",
"ngsolve>=6.2.0; extra == \"cad\"",
"build123d; extra == \"cad\"",
"trimesh; extra == \"cad\"",
"manifold3d; extra == \"cad\"",
"sphinx; extra == \"doc\"",
"sphinx-rtd-theme; extra == \"doc\"",
"nbsphinx; extra == \"doc\"",
"sphinxcontrib-bibtex; extra == \"doc\"",
"toml; extra == \"doc\"",
"pandoc; extra == \"doc\"",
"gitpython; extra == \"doc\"",
"build; extra == \"dev\"",
"cibuildwheel; extra == \"dev\"",
"coverage; extra == \"dev\"",
"pytest; extra == \"dev\"",
"flake8; extra == \"dev\"",
"flake8-pyproject; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://git.uibk.ac.at/geometrie-vermessung/rational-linkages",
"Documentation, http://rational-linkages.readthedocs.io/",
"Issues, https://github.com/hucik14/rational-linkages/issues"
] | twine/6.1.0 CPython/3.11.0 | 2026-02-20T08:09:35.368904 | rational_linkages-2.5.0-cp314-cp314-win_arm64.whl | 275,693 | 08/83/e2305b7e82dc6e1f30aaa3382f8a9cadea45c6ce519d2736cae3bb7bcfdf/rational_linkages-2.5.0-cp314-cp314-win_arm64.whl | cp314 | bdist_wheel | null | false | d2ce31ec4e2c2c6cd8ea4f76455725b7 | 18cfa37fbaa99f79014f0ce3e82df310e29683cc64e864172f71b43740fd7ed3 | 0883e2305b7e82dc6e1f30aaa3382f8a9cadea45c6ce519d2736cae3bb7bcfdf | GPL-3.0-or-later | [
"LICENSE"
] | 2,325 |
2.4 | sanna | 0.13.5 | Trust infrastructure for AI agents — constitution enforcement, cryptographic receipts, MCP governance gateway | # Sanna — Trust Infrastructure for AI Agents
Sanna checks reasoning during execution, halts when constraints are violated, and generates portable cryptographic receipts proving governance was enforced. Constitution-as-code: your governance rules live in version-controlled YAML, not in a vendor dashboard.
## Quick Start — Library Mode
```bash
pip install sanna
```
Set up governance (one-time):
```bash
sanna init # Choose template, set agent name, enforcement level
sanna keygen # Generate Ed25519 keypair (~/.sanna/keys/)
# Output:
# Generated Ed25519 keypair (a1b2c3d4e5f6...)
# Private key: /Users/you/.sanna/keys/a1b2c3d4e5f6...7890.key
# Public key: /Users/you/.sanna/keys/a1b2c3d4e5f6...7890.pub
sanna sign constitution.yaml --private-key ~/.sanna/keys/<key-id>.key
```
Now wrap the functions you want to govern. `@sanna_observe` decorates the functions you choose — internal reasoning, prompt construction, and non-governed function calls produce no receipts.
```python
from sanna import sanna_observe, SannaHaltError
@sanna_observe(
constitution_path="constitution.yaml",
constitution_public_key_path="~/.sanna/keys/<key-id>.pub", # from sanna keygen above
)
def my_agent(query: str, context: str) -> str:
return "Based on the data, revenue grew 12% year-over-year."
# @sanna_observe wraps the return value in a SannaResult with .output and .receipt.
# The original str return is available as result.output.
try:
result = my_agent(
query="What was revenue growth?",
context="Annual report: revenue increased 12% YoY to $4.2B."
)
print(result.output) # The original str return value
print(result.receipt) # Cryptographic governance receipt (dict)
# To persist receipts, use ReceiptStore separately:
# from sanna import ReceiptStore
# store = ReceiptStore(".sanna/receipts.db")
# store.store(result.receipt)
except SannaHaltError as e:
print(f"HALTED: {e}") # Constitution violation detected
```
## Quick Start — Gateway Mode
No code changes to your agent. The gateway sits between your MCP client and downstream servers.
```bash
pip install sanna[mcp]
sanna init # Creates constitution.yaml + gateway.yaml
sanna keygen --label gateway
sanna sign constitution.yaml --private-key ~/.sanna/keys/<key-id>.key
sanna gateway --config gateway.yaml
```
Minimum `gateway.yaml`:
```yaml
gateway:
constitution: ./constitution.yaml
signing_key: ~/.sanna/keys/<gateway-key-id>.key # Key generated by sanna keygen
constitution_public_key: ~/.sanna/keys/<author-key-id>.pub # Public key of constitution signer
receipt_store: .sanna/receipts/
downstream:
- name: notion
command: npx
args: ["-y", "@notionhq/notion-mcp-server"]
env:
OPENAPI_MCP_HEADERS: "${OPENAPI_MCP_HEADERS}"
default_policy: can_execute
```
Point your MCP client (Claude Desktop, Claude Code, Cursor) at the gateway instead of directly at your downstream servers. Every tool call is now governed. The gateway governs tool calls that pass through it — only actions that cross the governance boundary produce receipts. Reasoning is captured via the explicit `_justification` parameter in tool calls, not from internal model reasoning. The gateway cannot observe LLM chain-of-thought.
```
MCP Client (Claude Desktop / Claude Code / Cursor)
|
v (MCP stdio)
sanna-gateway
| 1. Receive tool call
| 2. Evaluate against constitution
| 3. Enforce policy (allow / escalate / deny)
| 4. Generate signed receipt
| 5. Forward to downstream (if allowed)
v (MCP stdio)
Downstream MCP Servers (Notion, GitHub, filesystem, etc.)
```
## Demo
Run a self-contained governance demo — no external dependencies:
```bash
sanna demo
```
This generates keys, creates a constitution, simulates a governed tool call, generates a receipt, and verifies it.
## Core Concepts
**Constitution** — YAML document defining what the agent can, cannot, and must escalate. Ed25519-signed. Modification after signing is detected on load. Constitution signing (via `sanna sign`) is required for enforcement. Constitution approval is an optional additional governance step for multi-party review workflows.
**Receipt** — JSON artifact binding inputs, reasoning, action, and check results into a cryptographically signed, schema-validated, deterministically fingerprinted record. Receipts are generated per governed action — when an agent calls a tool or executes a decorated function — not per conversational turn. An agent that reasons for twenty turns and executes one action produces one receipt.
**Coherence Checks (C1-C5)** — Five built-in deterministic heuristics. No API calls or external dependencies.
| Check | Invariant | What it catches |
|-------|-----------|-----------------|
| C1 | `INV_NO_FABRICATION` | Output contradicts provided context |
| C2 | `INV_MARK_INFERENCE` | Definitive claims without hedging |
| C3 | `INV_NO_FALSE_CERTAINTY` | Confidence exceeding evidence strength |
| C4 | `INV_PRESERVE_TENSION` | Conflicting information collapsed |
| C5 | `INV_NO_PREMATURE_COMPRESSION` | Complex input reduced to single sentence |
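To give an intuition for what "deterministic heuristic" means here, the toy sketch below flags definitive claims that carry no hedging language, roughly the shape of a C2-style check. This is purely illustrative: `naive_inference_check` and the `HEDGES` word list are hypothetical and are not sanna's actual implementation.

```python
# Toy illustration only: a deterministic, dependency-free check that
# "passes" when the output hedges at least one claim.
HEDGES = {"may", "might", "possibly", "likely", "appears", "suggests"}

def naive_inference_check(output: str) -> bool:
    """Return True (pass) if the output contains hedging language."""
    words = {w.lower().strip(".,") for w in output.split()}
    return bool(words & HEDGES)

print(naive_inference_check("Revenue will definitely double."))  # no hedge
print(naive_inference_check("Revenue may double next year."))    # hedged
```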
**Authority Boundaries** — `can_execute` (forward), `must_escalate` (prompt user), `cannot_execute` (deny). Policy cascade: per-tool override > server default > constitution.
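The policy cascade can be sketched as a simple precedence lookup. This is an illustration of the resolution order described above, not sanna's internals; `resolve_policy` and its parameter names are hypothetical.

```python
# Illustrative sketch: a per-tool override wins over the server default,
# which wins over a constitution-level fallback.
def resolve_policy(tool: str,
                   tool_overrides: dict[str, str],
                   server_default: str | None = None,
                   constitution_fallback: str = "must_escalate") -> str:
    if tool in tool_overrides:
        return tool_overrides[tool]
    if server_default is not None:
        return server_default
    return constitution_fallback

policy = resolve_policy("delete_page",
                        tool_overrides={"delete_page": "cannot_execute"},
                        server_default="can_execute")
print(policy)  # the per-tool override wins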
**Key Management** — Public keys are stored in `~/.sanna/keys/` and referenced by their key ID (SHA-256 fingerprint of the public key). For verification, pass the public key path explicitly via `--public-key` on the CLI or `constitution_public_key_path` in code. See [docs/key-management.md](https://github.com/nicallen-exd/sanna/blob/main/docs/key-management.md) for key roles and rotation.
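The key-ID scheme can be sketched with the standard library. The exact byte encoding sanna hashes is defined in docs/key-management.md; treat `key_fingerprint` as an illustration of the idea (SHA-256 over the public key), not the normative construction.

```python
import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    """Illustrative: key ID as the SHA-256 hex digest of the public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

fp = key_fingerprint(b"\x00" * 32)  # Ed25519 public keys are 32 bytes
print(fp[:12] + "...")              # abbreviated display form
```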
## Receipt Format
Every governed action produces a reasoning receipt — a JSON artifact that cryptographically binds inputs, outputs, check results, and constitution provenance. See [spec/sanna-specification-v1.0.md](https://github.com/nicallen-exd/sanna/blob/main/spec/sanna-specification-v1.0.md) for the full specification.
**Identification**
| Field | Type | Description |
|-------|------|-------------|
| `spec_version` | string | Schema version, `"1.0"` |
| `tool_version` | string | Package version, e.g. `"0.13.4"` |
| `checks_version` | string | Check algorithm version, e.g. `"5"` |
| `receipt_id` | string | UUID v4 unique identifier |
| `correlation_id` | string | Path-prefixed identifier for grouping related receipts |
**Integrity**
| Field | Type | Description |
|-------|------|-------------|
| `receipt_fingerprint` | string | 16-hex SHA-256 truncation for compact display |
| `full_fingerprint` | string | 64-hex SHA-256 of all fingerprinted fields |
| `context_hash` | string | 64-hex SHA-256 of canonical inputs |
| `output_hash` | string | 64-hex SHA-256 of canonical outputs |
**Content**
| Field | Type | Description |
|-------|------|-------------|
| `timestamp` | string | ISO 8601 timestamp |
| `inputs` | object | Dictionary of function arguments passed to the decorated function (e.g., `query`, `context`) |
| `outputs` | object | Contains `response` |
**Governance**
| Field | Type | Description |
|-------|------|-------------|
| `checks` | array | List of `CheckResult` objects with `check_id`, `passed`, `severity`, `evidence` |
| `checks_passed` | integer | Count of checks that passed |
| `checks_failed` | integer | Count of checks that failed |
| `status` | string | `"PASS"` / `"WARN"` / `"FAIL"` / `"PARTIAL"` |
| `constitution_ref` | object | Contains `document_id`, `policy_hash`, `version`, `source`, `signature_verified`, `constitution_approval` |
| `enforcement` | object or null | Contains `action`, `reason`, `failed_checks`, `enforcement_mode`, `timestamp` when enforcement triggered |
| `evaluation_coverage` | object | Contains `total_invariants`, `evaluated`, `not_checked`, `coverage_basis_points` |
**Receipt Triad (Gateway)**
| Field | Type | Description |
|-------|------|-------------|
| `input_hash` | string | 64-hex SHA-256, present in gateway receipts |
| `reasoning_hash` | string | 64-hex SHA-256 of reasoning content |
| `action_hash` | string | 64-hex SHA-256 of action content |
| `assurance` | string | `"full"` or `"partial"` |
**Identity and Signature**
| Field | Type | Description |
|-------|------|-------------|
| `receipt_signature` | object | Contains `value`, `key_id`, `signed_by`, `signed_at`, `scheme` |
| `identity_verification` | object or null | Verification results for identity claims, when present |
**Extensions**
| Field | Type | Description |
|-------|------|-------------|
| `extensions` | object | Reverse-domain namespaced metadata (`com.sanna.gateway`, `com.sanna.middleware`) |
This section provides a high-level overview. For a complete field reference and normative format details, see [spec/sanna-specification-v1.0.md](https://github.com/nicallen-exd/sanna/blob/main/spec/sanna-specification-v1.0.md).
Minimal example receipt (abbreviated -- production receipts typically contain 3-7 checks):
```json
{
"spec_version": "1.0",
"tool_version": "0.13.4",
"checks_version": "5",
"receipt_id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
"receipt_fingerprint": "7b4d06e836514eef",
"full_fingerprint": "7b4d06e836514eef26ab96f5c62b193d036c92b45d966ef7025d75539ff93aca",
"correlation_id": "sanna-my-agent-1708128000",
"timestamp": "2026-02-17T00:00:00+00:00",
"inputs": {"query": "refund policy", "context": "All sales are final."},
"outputs": {"response": "Unfortunately, all sales are final per our policy."},
"context_hash": "...(64 hex)...",
"output_hash": "...(64 hex)...",
"checks": [
{"check_id": "C1", "name": "Context Contradiction", "passed": true, "severity": "info"}
],
"checks_passed": 1,
"checks_failed": 0,
"status": "PASS",
"constitution_ref": {"document_id": "support-agent/1.0", "policy_hash": "...", "signature_verified": true},
"enforcement": null
}
```
## Constitution Format
Constitutions are YAML documents that define an agent's governance boundaries. They are version-controlled and cryptographically signed (and optionally approved) before enforcement.
```yaml
sanna_constitution: "1.1"
identity:
agent_name: support-agent
domain: customer-support
description: Handles refund and billing inquiries
provenance:
authored_by: governance-team
approved_by: vp-risk
approval_date: "2026-01-15"
boundaries:
- id: B1
description: Only answer questions about products in the catalog
category: scope
severity: critical
- id: B2
description: Never promise refunds outside the 30-day window
category: policy
severity: critical
invariants:
- id: INV_NO_FABRICATION
rule: Never state facts not grounded in provided context
enforcement: critical
- id: INV_MARK_INFERENCE
rule: Clearly mark any inference or assumption
enforcement: warning
- id: INV_NO_FALSE_CERTAINTY
rule: Do not express certainty beyond what evidence supports
enforcement: warning
- id: INV_PRESERVE_TENSION
rule: When context contains conflicting rules, surface both
enforcement: warning
- id: INV_NO_PREMATURE_COMPRESSION
rule: Do not over-summarize multi-faceted context
enforcement: warning
authority_boundaries:
can_execute:
- Look up order status
- Search knowledge base
must_escalate:
- Issue refund over $500
- Override account restrictions
cannot_execute:
- Delete customer accounts
- Access payment credentials
escalation_targets:
- condition: "refund over limit"
target:
type: webhook
url: https://ops.example.com/escalate
reasoning:
require_justification: true
assurance_level: full
```
## Custom Evaluators
Register domain-specific invariant evaluators alongside the built-in C1-C5 checks:
```python
from sanna.evaluators import register_invariant_evaluator
from sanna.receipt import CheckResult
@register_invariant_evaluator("INV_PII_CHECK")
def pii_check(query, context, output, **kwargs):
"""Flag outputs containing email addresses."""
import re
has_pii = bool(re.search(r'\b[\w.+-]+@[\w-]+\.[\w.]+\b', output))
return CheckResult(
check_id="INV_PII_CHECK",
name="PII Detection",
passed=not has_pii,
severity="high",
evidence="Email address detected in output" if has_pii else "",
)
```
Add the invariant to your constitution and it runs alongside C1-C5 automatically.
## Receipt Querying
```python
from sanna import ReceiptStore
store = ReceiptStore(".sanna/receipts.db")
# Query with filters
receipts = store.query(agent_id="support-agent", status="FAIL", limit=10)
# Drift analysis
from sanna import DriftAnalyzer
analyzer = DriftAnalyzer(store)
report = analyzer.analyze(window_days=30, threshold=0.15)
```
Or via CLI:
```bash
sanna drift-report --db .sanna/receipts.db --window 30 --json
```
## Constitution Templates
`sanna init` offers three interactive templates plus blank:
| Template | Use Case |
|----------|----------|
| Enterprise IT | Strict enforcement, ServiceNow-style compliance |
| Customer-Facing | Standard enforcement, Salesforce-style support agents |
| General Purpose | Advisory enforcement, starter template |
| Blank | Empty constitution for custom configuration |
Five additional gateway-oriented templates are available in `examples/constitutions/`:
| Template | Use Case |
|----------|----------|
| `openclaw-personal` | Individual agents on personal machines |
| `openclaw-developer` | Skill builders for marketplace distribution |
| `cowork-personal` | Knowledge workers with Claude Desktop |
| `cowork-team` | Small teams sharing governance via Git (each dev runs own gateway) |
| `claude-code-standard` | Developers with Claude Code + MCP connectors |
## CLI Reference
All commands are available as `sanna <command>` or `sanna-<command>`:
| Command | Description |
|---------|-------------|
| `sanna init` | Interactive constitution generator with template selection |
| `sanna keygen` | Generate Ed25519 keypair (`--label` for human-readable name) |
| `sanna sign` | Sign a constitution with Ed25519 |
| `sanna verify` | Verify receipt integrity, signature, and provenance chain |
| `sanna verify-constitution` | Verify constitution signature |
| `sanna approve` | Approve a signed constitution |
| `sanna demo` | Run self-contained governance demo |
| `sanna inspect` | Pretty-print receipt contents |
| `sanna check-config` | Validate gateway config (dry-run) |
| `sanna gateway` | Start MCP enforcement proxy |
| `sanna mcp` | Start MCP server (7 tools, stdio transport) |
| `sanna diff` | Diff two constitutions (text/JSON/markdown) |
| `sanna drift-report` | Fleet governance drift report |
| `sanna bundle-create` | Create evidence bundle zip |
| `sanna bundle-verify` | Verify evidence bundle (7-step) |
| `sanna generate` | Generate receipt from trace-data JSON |
## API Reference
The top-level `sanna` package exports 10 names:
```python
from sanna import (
__version__, # Package version string
sanna_observe, # Decorator: governance wrapper for agent functions
SannaResult, # Return type from @sanna_observe-wrapped functions
SannaHaltError, # Raised when a halt-enforcement invariant fails
generate_receipt, # Generate a receipt from trace data
SannaReceipt, # Receipt dataclass
verify_receipt, # Offline receipt verification
VerificationResult, # Verification result dataclass
ReceiptStore, # SQLite-backed receipt persistence
DriftAnalyzer, # Per-agent failure-rate trending
)
```
Everything else imports from submodules: `sanna.constitution`, `sanna.crypto`, `sanna.enforcement`, `sanna.evaluators`, `sanna.verify`, `sanna.bundle`, `sanna.hashing`, `sanna.drift`.
## Verification
Verification proves four properties:
- **Schema validation:** Receipt structure matches the expected format.
- **Hash verification:** Content hashes match the actual inputs and outputs (tamper detection).
- **Signature verification:** Receipt was signed by a known key (authenticity).
- **Chain verification:** Constitution was signed, and any approvals are cryptographically bound.
```bash
# Verify receipt integrity
sanna verify receipt.json
# Verify with signature check
sanna verify receipt.json --public-key <key-id>.pub
# Full chain: receipt + constitution + approval
sanna verify receipt.json \
--constitution constitution.yaml \
--constitution-public-key <key-id>.pub
# Evidence bundle (self-contained zip)
sanna bundle-create \
--receipt receipt.json \
--constitution constitution.yaml \
--public-key <key-id>.pub \
--output evidence.zip
sanna bundle-verify evidence.zip
```
No network. No API keys. No vendor dependency.
## Enterprise Features
- **DMARC-style adoption**: Start with `log` enforcement (observe), move to `warn` (escalate), then `halt` (enforce).
- **Ed25519 cryptographic signatures**: Constitutions, receipts, and approval records are independently signed and verifiable.
- **Offline verification**: No platform dependency. Verify receipts with a public key and the CLI.
- **Evidence bundles**: Self-contained zip archives with receipt, constitution, and public keys for auditors.
- **Drift analytics**: Per-agent failure-rate trending with linear regression and breach projection. See [docs/drift-reports.md](https://github.com/nicallen-exd/sanna/blob/main/docs/drift-reports.md).
- **Receipt Triad**: Cryptographic binding of input, reasoning, and action for auditability. See [docs/reasoning-receipts.md](https://github.com/nicallen-exd/sanna/blob/main/docs/reasoning-receipts.md).
- **Receipt queries**: SQL recipes, MCP query tool. See [docs/receipt-queries.md](https://github.com/nicallen-exd/sanna/blob/main/docs/receipt-queries.md).
- **Key management**: SHA-256 key fingerprints, labeled keypairs. See [docs/key-management.md](https://github.com/nicallen-exd/sanna/blob/main/docs/key-management.md).
- **Production deployment**: Docker, logging, retention, failure modes. See [docs/production.md](https://github.com/nicallen-exd/sanna/blob/main/docs/production.md).
- **Gateway configuration**: Full config reference. See [docs/gateway-config.md](https://github.com/nicallen-exd/sanna/blob/main/docs/gateway-config.md).
## Security
- **Ed25519 cryptographic signatures**: Constitutions, receipts, and approval records are independently signed and verifiable offline.
- **Prompt injection isolation**: Evaluator prompts use trust separation -- trusted policy rules are isolated from untrusted agent content, and untrusted content is wrapped in `<audit>` tags with XML entity escaping to mitigate prompt injection.
- **Atomic file writes**: All file operations use symlink-protected atomic writes (`O_NOFOLLOW`, `O_EXCL`, `fsync`, `os.replace()`).
- **SQLite hardening**: Receipt stores validate file ownership, enforce 0o600 permissions, and reject symlinks.
- **Signature structure validation**: Enforcement points validate Ed25519 base64 encoding and 64-byte signature length, rejecting whitespace, junk, and placeholder strings.
## Cryptographic Design
- **Signing**: Ed25519 over canonical JSON (RFC 8785-style deterministic serialization)
- **Hashing**: SHA-256 for all content hashes, fingerprints, and key IDs
- **Canonicalization**: Sorted keys, NFC Unicode normalization, integer-only numerics (no floats in signed content)
- **Fingerprinting**: Pipe-delimited fields hashed with SHA-256; 16-hex truncation for display, 64-hex for full fingerprint
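The hashing and truncation conventions above can be approximated with the standard library. This is a sketch under stated assumptions (sorted keys, compact separators, NFC normalization); the normative RFC 8785-style canonical form is defined in the specification, and `canonical_hash` is a hypothetical helper for intuition, not a verifier.

```python
import hashlib
import json
import unicodedata

def canonical_hash(obj) -> str:
    """Approximate canonical-JSON hash: NFC-normalize strings, sort keys,
    use compact separators, then SHA-256 the UTF-8 bytes."""
    def nfc(x):
        if isinstance(x, str):
            return unicodedata.normalize("NFC", x)
        if isinstance(x, dict):
            return {nfc(k): nfc(v) for k, v in x.items()}
        if isinstance(x, list):
            return [nfc(v) for v in x]
        return x
    blob = json.dumps(nfc(obj), sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

full = canonical_hash({"query": "refund policy", "context": "All sales are final."})
print(full)        # 64-hex full fingerprint
print(full[:16])   # 16-hex truncation for compact display
```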
See the [specification](https://github.com/nicallen-exd/sanna/blob/main/spec/sanna-specification-v1.0.md) for full cryptographic construction details.
## Threat Model
**Defends against:**
- Tampering with stored receipts (detected via fingerprint and signature verification)
- Unverifiable governance claims (receipts are cryptographically signed attestations)
- Substitution of receipts across contexts (receipts are cryptographically bound to specific inputs, outputs, and correlation IDs; verifiers should enforce timestamp and correlation expectations)
- Unauthorized tool execution (constitution enforcement blocks or escalates disallowed actions)
**Does not defend against:**
- Compromised runtime environment (if the host is compromised, all bets are off)
- Stolen signing keys (key compromise requires re-keying and re-signing)
- Bypassing Sanna entirely (governance only applies to functions decorated with `@sanna_observe` or tool calls routed through the gateway)
- Malicious constitutions (Sanna enforces the constitution as written; it does not validate whether the constitution itself is correct or sufficient)
## Limitations
Receipts are attestations of process, not guarantees of outcome.
- Receipts do not prove internal reasoning was truthful -- they prove that checks were run against the output
- Receipts do not prove upstream input was complete or accurate
- Receipts do not protect against a compromised host or stolen signing keys
- Receipts do not prove the constitution itself was correct or sufficient for the use case
- Heuristic checks (C1-C5) are deterministic but not exhaustive -- they catch common failure modes, not all possible failures
## Observability (OpenTelemetry)
Sanna can emit OpenTelemetry signals to correlate governed actions with receipts on disk. Receipts are the canonical audit artifact — telemetry is optional and intended for dashboards, alerts, and correlation.
```bash
pip install "sanna[otel]"
```
See [docs/otel-integration.md](https://github.com/nicallen-exd/sanna/blob/main/docs/otel-integration.md) for configuration and signal reference.
## Install
```bash
pip install sanna # Core library (Python 3.10+)
pip install sanna[mcp] # MCP server + gateway
pip install sanna[otel] # OpenTelemetry bridge
```
## Development
```bash
git clone https://github.com/nicallen-exd/sanna.git
cd sanna
pip install -e ".[dev]"
python -m pytest tests/ -q
```
## License
Apache 2.0
| text/markdown | nicallen-exd | null | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to the Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by the Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding any notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2026-present Nicholas Allen
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| ai, governance, trust, constitution, receipts, verification, mcp, agents, cryptographic | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Healthcare Industry",
"Topic :: Security",
"Topic :: Security :: Cryptography",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Quality Assurance",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonschema>=4.17",
"pyyaml>=6.0",
"cryptography>=41.0",
"filelock>=3.0",
"mcp>=1.0; extra == \"mcp\"",
"opentelemetry-api>=1.20.0; extra == \"otel\"",
"opentelemetry-sdk>=1.20.0; extra == \"otel\"",
"pytest>=7.0; extra == \"dev\"",
"jsonschema>=4.17; extra == \"dev\"",
"pyyaml>=6.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://sanna.dev",
"Documentation, https://sanna.dev/docs",
"Repository, https://github.com/nicallen-exd/sanna",
"Issues, https://github.com/nicallen-exd/sanna/issues",
"Changelog, https://github.com/nicallen-exd/sanna/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T08:08:05.680215 | sanna-0.13.5.tar.gz | 484,842 | 82/e1/af59ca4d9926482ae8995582cdb77e7e68ad7ccb986ff3149fdd030d3d08/sanna-0.13.5.tar.gz | source | sdist | null | false | c9c89be17b728e60c8c5dc037ad9721f | 934f1559947e246901b40053b2248de62f3bf613eef039c08d6c0d0f24582a26 | 82e1af59ca4d9926482ae8995582cdb77e7e68ad7ccb986ff3149fdd030d3d08 | null | [
"LICENSE"
] | 248 |
2.4 | llmSHAP | 1.5.1 | Multi-threaded explainability for LLMs: words, sentences, documents, images, and tools. | <div align='center'>
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/filipnaudot/llmSHAP/main/docs/_static/llmSHAP-logo-lightmode.png">
<img alt="llmSHAP logo" src="https://raw.githubusercontent.com/filipnaudot/llmSHAP/main/docs/_static/llmSHAP-logo-darkmode.png" width="50%" height="50%">
</picture>
</div>
<br/>

[](https://filipnaudot.github.io/llmSHAP/)
[](https://pepy.tech/projects/llmshap)
A multi-threaded explainability framework using Shapley values for LLM-based outputs.
---
## Getting Started
Install the `llmshap` package (with all optional dependencies):
```bash
pip install "llmshap[all]"
```
Install in editable mode with all optional dependencies (after cloning the repository):
```bash
pip install -e ".[all]"
```
Documentation is available at [llmSHAP Docs](https://filipnaudot.github.io/llmSHAP/) and a hands-on tutorial can be found [here](https://filipnaudot.github.io/llmSHAP/tutorial.html).
---
# Example Usage
```python
from llmSHAP import DataHandler, BasicPromptCodec, ShapleyAttribution
from llmSHAP.llm import OpenAIInterface
data = "In what city is the Eiffel Tower?"
handler = DataHandler(data, permanent_keys={0,3,4})
result = ShapleyAttribution(model=OpenAIInterface("gpt-4o-mini"),
data_handler=handler,
prompt_codec=BasicPromptCodec(system="Answer the question briefly."),
use_cache=True,
num_threads=16,
).attribution()
print("\n\n### OUTPUT ###")
print(result.output)
print("\n\n### ATTRIBUTION ###")
print(result.attribution)
print("\n\n### HEATMAP ###")
print(result.render())
```
## Multimodal Example with `Image`
The following example shows `llmSHAP` with images.
```python
from llmSHAP import DataHandler, BasicPromptCodec, ShapleyAttribution, Image
from llmSHAP.llm import OpenAIInterface
data = {
"question": "Has our stock price increased or decreased since the beginning?",
"Num employees" : "The company has about 450 employees.",
"[IMAGE] Stock chart" : Image(image_path="./docs/_static/demo-stock-price.png"),
"Report release date" : "Quarterly reports are released on the 15th.",
"Headquarter Location" : "The headquarters is located in a mid-sized city.",
"Num countries" : "It has offices in three countries."
}
result = ShapleyAttribution(model=OpenAIInterface("gpt-5-mini", reasoning="low"),
data_handler=DataHandler(data, permanent_keys={"question"}),
prompt_codec=BasicPromptCodec(system="Answer the question briefly."),
use_cache=True,
num_threads=35,
).attribution()
print("\n\n### OUTPUT ###")
print(result.output)
print("\n\n### HEATMAP ###")
print(result.render(abs_values=True, render_labels=True))
```
<div align='center'>
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/filipnaudot/llmSHAP/main/docs/_static/example-result-lightmode.png">
<img alt="llmSHAP logo" src="https://raw.githubusercontent.com/filipnaudot/llmSHAP/main/docs/_static/example-result-darkmode.png" width="100%" height="100%">
</picture>
</div>
## Embedding-Based Output Scoring
`EmbeddingCosineSimilarity` measures semantic similarity between outputs using embeddings.
It supports two backends:
- **API** — any OpenAI-compatible embeddings endpoint via `api_url_endpoint`.
- **Local** — a `sentence-transformers` model downloaded on first use.
For the local backend, install the `embeddings` extra:
```bash
pip install "llmshap[embeddings]"
```
The example below uses the API backend, which is already included in `[all]`.
```python
from llmSHAP import DataHandler, BasicPromptCodec, ShapleyAttribution, EmbeddingCosineSimilarity
from llmSHAP.llm import OpenAIInterface
data = "In what city is the Eiffel Tower?"
handler = DataHandler(data)
result = ShapleyAttribution(model=OpenAIInterface("gpt-4o-mini"),
data_handler=handler,
prompt_codec=BasicPromptCodec(system="Answer the question briefly."),
use_cache=True,
num_threads=16,
value_function=EmbeddingCosineSimilarity(
model_name = "text-embedding-3-small",
api_url_endpoint = "https://api.openai.com/v1")
).attribution()
print("\n\n### OUTPUT ###")
print(result.output)
print("\n\n### HEATMAP ###")
print(result.render(abs_values=True, render_labels=True))
```
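For intuition, cosine similarity between two embedding vectors is just a normalized dot product. A stdlib sketch of the metric (illustrative only, not the library's implementation):

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity: dot(u, v) / (||u|| * ||v||), in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```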
---
## Example data
You can pass either a string or a dictionary:
```python
from llmSHAP import DataHandler
# String input
data = "The quick brown fox jumps over the lazy dog"
handler = DataHandler(data)
# Dictionary input
data = {"a": "The", "b": "quick", "c": "brown", "d": "fox"}
handler = DataHandler(data)
```
To pin certain keys so they are always included in the prompt (and therefore excluded from the attribution computation), use `permanent_keys`:
```python
from llmSHAP import DataHandler
data = {"a": "The", "b": "quick", "c": "brown", "d": "fox"}
handler = DataHandler(data, permanent_keys={"a", "d"})
# Get data with index 1 WITHOUT the permanent features.
print(handler.get_data({1}, exclude_permanent_keys=True, mask=False))
# Output: {'b': 'quick'}
# Get data with index 1 AND the permanent features.
print(handler.get_data({1}, exclude_permanent_keys=False, mask=False))
# Output: {'a': 'The', 'b': 'quick', 'd': 'fox'}
```
---
## Comparison with TokenSHAP
| Capability | **llmSHAP** | **TokenSHAP** |
| ------------------------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------ |
| Threaded | ✅ (optional `num_threads`) | ❌ |
| Modular architecture | ✅ | ❌ |
| Exact Shapley option | ✅ (Full enumeration) | ❌ (Monte Carlo sampling) |
| Generation caching across coalitions | ✅ | ❌ |
| Heuristics | SlidingWindow • Monte Carlo • Counterfactual | Monte Carlo |
| Sentence-/chunk-level attribution | ✅ | ✅ |
| Permanent context pinning (always-included features) | ✅ | ❌ |
| Pluggable similarity metric | ✅ TF-IDF, embeddings | ✅ TF-IDF, embeddings |
| Docs & tutorial | ✅ Sphinx docs + tutorial | ✅ README only |
| Unit tests & CI | ✅ Pytest + GitHub Actions | ❌ |
| Vision object attribution | ❌ | ✅ PixelSHAP |
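The exact-Shapley option above amounts to full coalition enumeration. A minimal stdlib sketch of that computation with a toy additive value function (illustrative only, not llmSHAP's API; cost is O(2^n), which is why the Monte Carlo and sliding-window heuristics exist):

```python
from itertools import combinations
from math import factorial

def exact_shapley(features, value):
    """Exact Shapley values by enumerating every coalition.

    `value` maps a frozenset of features to a real number.
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy additive game: each present feature contributes its own weight,
# so the Shapley value of each feature equals that weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
print(exact_shapley(list(weights), lambda S: sum(weights[f] for f in S)))
```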
---
<br/>
<br/>
<br/>
# Stars ⭐️
[](https://www.star-history.com/#filipnaudot/llmSHAP&type=date&legend=top-left)
| text/markdown | Filip Naudot | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"tqdm",
"openai; extra == \"openai\"",
"python-dotenv; extra == \"openai\"",
"sentence-transformers; extra == \"embeddings\"",
"pytest; extra == \"dev\"",
"matplotlib; extra == \"dev\"",
"ipywidgets; extra == \"dev\"",
"sphinx; extra == \"dev\"",
"myst-parser; extra == \"dev\"",
"sphinx-book-theme; extra == \"dev\"",
"sphinx-design; extra == \"dev\"",
"openai; extra == \"all\"",
"python-dotenv; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/filipnaudot/llmSHAP",
"Repository, https://github.com/filipnaudot/llmSHAP",
"Documentation, https://filipnaudot.github.io/llmSHAP/",
"Issues, https://github.com/filipnaudot/llmSHAP/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-20T08:07:44.407710 | llmshap-1.5.1.tar.gz | 18,870 | b1/a0/e5a3c7427a0a73d2c6d6c386cf640fcfe878ae96c454bfc8ffb52f1bdd00/llmshap-1.5.1.tar.gz | source | sdist | null | false | 46ae0682a4ffab7e5e375f38be8ca060 | 266f42a9c942cce8e36f502563929c05884404c9c8db0d66dd5921848c995a19 | b1a0e5a3c7427a0a73d2c6d6c386cf640fcfe878ae96c454bfc8ffb52f1bdd00 | MIT | [
"LICENSE"
] | 0 |
2.4 | trainsight | 0.2.1 | AI Training Intelligence Dashboard - Live GPU telemetry, anomaly detection, and training diagnostics | # TrainSight
AI Training Intelligence Dashboard for live GPU telemetry, anomaly detection, and training-oriented diagnostics.
## Features
- **Live GPU Monitoring**: Real-time GPU utilization, memory, temperature, and power metrics
- **Anomaly Detection**: Automatic detection of thermal throttling, memory leaks, and training anomalies
- **OOM Prediction**: Predict Out-of-Memory errors before they crash your training
- **Framework Integrations**: Native support for PyTorch, Lightning, HuggingFace, DeepSpeed, Accelerate, Ray Train
- **Experiment Tracking**: Bridge to MLflow and Weights & Biases
- **Production Ready**: Prometheus exporter, Kubernetes GPU pod monitoring, cloud cost estimation
- **Batch Size Optimization**: Automatic batch size recommendation based on GPU memory
## Installation
### From PyPI (Recommended)
```bash
pip install trainsight
```
### From GitHub
```bash
pip install git+https://github.com/modalgrasp/trainsight.git
```
### Optional Dependencies
Install only what you need:
```bash
# PyTorch integration
pip install "trainsight[pytorch]"
# PyTorch Lightning integration
pip install "trainsight[lightning]"
# DeepSpeed integration
pip install "trainsight[deepspeed]"
# HuggingFace Accelerate integration
pip install "trainsight[accelerate]"
# Ray Train integration
pip install "trainsight[ray]"
# Experiment tracking
pip install "trainsight[mlflow]"
pip install "trainsight[wandb]"
# Kubernetes monitoring
pip install "trainsight[kubernetes]"
# Install everything
pip install "trainsight[all]"
```
## Quick Start
### CLI Dashboard
```bash
trainsight
```
### Programmatic Usage
```python
from trainsight import Dashboard
from trainsight.core.bus import EventBus
from trainsight.collectors.gpu_collector import GPUCollector
# Create event bus and collector
bus = EventBus()
collector = GPUCollector()
# Subscribe to GPU events
def on_gpu_stats(event):
print(f"GPU Util: {event.payload['utilization']}%")
bus.subscribe("gpu.stats", on_gpu_stats)
# Start dashboard
# dashboard = Dashboard(bus, collector)
# dashboard.run()
```
### PyTorch Integration
```python
import torch
from trainsight.integrations.pytorch import TrainSightHook
model = torch.nn.Linear(100, 10)
hook = TrainSightHook(model)
# Hook automatically monitors gradients and activations
output = model(torch.randn(32, 100))
output.backward()
```
### PyTorch Lightning Integration
```python
import pytorch_lightning as pl
from trainsight.integrations.lightning import TrainSightCallback
trainer = pl.Trainer(
callbacks=[TrainSightCallback()],
max_epochs=10,
)
trainer.fit(model)
```
### HuggingFace Transformers Integration
```python
from transformers import Trainer, TrainingArguments
from trainsight.integrations.huggingface import TrainSightCallback
training_args = TrainingArguments(output_dir="./output")
# Callbacks are passed to the Trainer, not to TrainingArguments.
trainer = Trainer(
    model=model,
    args=training_args,
    callbacks=[TrainSightCallback],
)
```
## Architecture
```text
Collectors -> EventBus -> Analyzers -> Predictors -> Dashboard / CLI / Logger
```
Core modules:
- `trainsight/core/event.py` - Event types and payloads
- `trainsight/core/bus.py` - Synchronous event bus
- `trainsight/core/async_bus.py` - Non-blocking async event bus
- `trainsight/core/dispatcher.py` - Collector orchestration
- `trainsight/collectors/gpu_collector.py` - NVIDIA GPU metrics
- `trainsight/analyzers/` - Anomaly and bottleneck detection
- `trainsight/predictors/oom_predictor.py` - OOM prediction
- `trainsight/integrations/` - Framework integrations
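To illustrate the pipeline, here is a toy publish/subscribe flow in which a "collector" publishes a `gpu.stats` event and an "analyzer" flags an anomaly (a sketch of the pattern only, not TrainSight's actual `EventBus` implementation):

```python
from collections import defaultdict

class TinyBus:
    """Minimal synchronous topic bus: subscribe handlers, publish payloads."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, fn):
        self.handlers[topic].append(fn)

    def publish(self, topic, payload):
        for fn in self.handlers[topic]:
            fn(payload)

alerts = []
bus = TinyBus()
# "Analyzer": flag thermal anomalies above 85 C.
bus.subscribe("gpu.stats", lambda p: p["temp_c"] > 85 and alerts.append(p))
# "Collector": publish a telemetry sample.
bus.publish("gpu.stats", {"temp_c": 92, "util": 99})
print(alerts)  # [{'temp_c': 92, 'util': 99}]
```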
## Configuration
Default config: `trainsight/config/default.yaml`
```yaml
mode: full
enable_behavior_learning: true
oom_model: statistical
thermal_limit: 85
refresh_rate: 30
```
## Prometheus Exporter
Enable in config:
```yaml
enable_prometheus: true
prometheus_port: 9108
```
Metrics endpoint: `http://127.0.0.1:9108/metrics`
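If you scrape the endpoint with a Prometheus server, a minimal scrape configuration might look like the following (the job name and interval are placeholders, not TrainSight defaults):

```yaml
scrape_configs:
  - job_name: "trainsight"
    scrape_interval: 15s
    static_configs:
      - targets: ["127.0.0.1:9108"]
```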
## Official Build Verification
- Soft check (default): warns on signature/hash mismatch.
- Strict mode: refuses startup when verification fails.
Enable strict mode:
```bash
export TRAINSIGHT_OFFICIAL_ONLY=1
```
Or in config:
```yaml
strict_official_build: true
```
## Plugin System
Create `~/.trainsight/plugins/my_plugin.py`:
```python
def register(bus):
bus.subscribe("gpu.stats", custom_handler)
def custom_handler(event):
print("Custom plugin:", event.payload)
```
## Debug / Simulation / Replay
```bash
trainsight --debug
trainsight --simulate
trainsight --replay gpu_usage_log.csv
```
Textual inspector:
```bash
TEXTUAL_DEVTOOLS=1 trainsight --debug
```
## Testing
Install dev dependencies and run tests:
```bash
pip install -e ".[dev]"
pytest -q
```
Tests are hardware-independent and use pure logic paths.
## License
TrainSight Community License v1.0 - See [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | Pratham Patel | null | null | null | TrainSight Community License v1.0 | gpu, monitoring, training, deep-learning, machine-learning, pytorch, nvidia, telemetry, dashboard, anomaly-detection, mlops | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: System :: Monitoring",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"nvidia-ml-py",
"psutil",
"textual",
"rich",
"pyyaml",
"numpy",
"scikit-learn",
"prometheus-client",
"cryptography",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"torch>=2.0.0; extra == \"pytorch\"",
"pytorch-lightning>=2.0.0; extra == \"lightning\"",
"torch>=2.0.0; extra == \"lightning\"",
"deepspeed>=0.10.0; extra == \"deepspeed\"",
"torch>=2.0.0; extra == \"deepspeed\"",
"accelerate>=0.20.0; extra == \"accelerate\"",
"torch>=2.0.0; extra == \"accelerate\"",
"ray[train]>=2.0.0; extra == \"ray\"",
"mlflow>=2.0.0; extra == \"mlflow\"",
"wandb>=0.15.0; extra == \"wandb\"",
"kubernetes>=27.0.0; extra == \"kubernetes\"",
"pydantic>=2.0.0; extra == \"validation\"",
"torch>=2.0.0; extra == \"all\"",
"pytorch-lightning>=2.0.0; extra == \"all\"",
"deepspeed>=0.10.0; extra == \"all\"",
"accelerate>=0.20.0; extra == \"all\"",
"ray[train]>=2.0.0; extra == \"all\"",
"mlflow>=2.0.0; extra == \"all\"",
"wandb>=0.15.0; extra == \"all\"",
"kubernetes>=27.0.0; extra == \"all\"",
"pydantic>=2.0.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/modalgrasp/trainsight",
"Documentation, https://github.com/modalgrasp/trainsight#readme",
"Repository, https://github.com/modalgrasp/trainsight",
"Issues, https://github.com/modalgrasp/trainsight/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:07:34.739463 | trainsight-0.2.1.tar.gz | 57,770 | c8/ee/fd2064721fdfcd1a192c90df7f8681392158786f7c20d886461d2f5e53e9/trainsight-0.2.1.tar.gz | source | sdist | null | false | 66ba106c9966eb019e1024eb278c5836 | 874e87cab365737f185ec5bdaba80789030efcf8d3acbef0451a9fb16f846480 | c8eefd2064721fdfcd1a192c90df7f8681392158786f7c20d886461d2f5e53e9 | null | [
"LICENSE"
] | 230 |
2.4 | isage-neuromem | 0.2.1.4 | NeuroMem - Brain-inspired memory system for AI agents with multi-modal storage | 
<h3 align="center">
Love grows from the smallest of memories ~
</h3>
<p align="center">
| <a href="docs/cn/README_cn.md"><b>中文文档</b></a> | <a href="https://intellistream.slack.com/"><b>Developer Slack</b></a> |
</p>
🔥 Welcome! NeuroMem is a subproject of [SAGE](https://github.com/intellistream/SAGE), dedicated to exploring memory systems for large language models.
---
## Getting Started
Install neuromem with `pip` :
```bash
pip install isage-neuromem
```
For development, clone the repo and use the provided quickstart script:
```bash
git clone https://github.com/intellistream/NeuroMem.git
cd neuromem
./quickstart.sh
```
This will set up a local development environment (virtualenv + dependencies) suitable for running tests and benchmarks. You can then:
- Explore examples in `examples/`
- Run the benchmark suite in `benchmarks/`
- Dive into the core implementation under `sage/neuromem/`
<!-- # isage-neuromem
**NeuroMem** is a brain-inspired memory management engine for SAGE (Structured AI Graph Engine). It provides flexible memory collection abstractions with support for vector databases, key-value stores, and graph structures, designed specifically for RAG (Retrieval-Augmented Generation) applications.
## Installation
### From PyPI
```bash
pip install isage-neuromem
```
### For Development
```bash
# Clone the repository
git clone https://github.com/intellistream/NeuroMem.git
cd NeuroMem
# Quick start (recommended)
./quickstart.sh
# Or manual installation
pip install -e .
pip install pre-commit # For contributors
pre-commit install
```
## Quick Start
```python
from sage.neuromem import MemoryManager, UnifiedCollection
# Using MemoryManager (recommended for multiple collections)
manager = MemoryManager()
collection = manager.create_collection("my_collection")
# Or directly using UnifiedCollection
collection = UnifiedCollection(
name="my_collection",
storage_backend="memory" # Memory, Redis, or SageDB
)
# Insert data
collection.insert("id1", {"text": "Hello, world!"})
# Create an index for search
collection.add_index({
"name": "text_search",
"index_type": "bm25" # FAISS, BM25, Graph, FIFO, etc.
})
# Search
results = collection.retrieve("hello", top_k=5, index_name="text_search")
```
## Features
- **UnifiedCollection**: Single abstraction for all memory types
- Multi-index support: FAISS (vectors), BM25 (text), Graph, FIFO, Segment
- Flexible storage: Memory, Redis, SageDB
- Unified insert/retrieve API
- **Collection Configuration**: YAML-based config management
- Pre-configured templates for common use cases
- Automatic parameter validation and conversion
- See `COLLECTION_CONFIG_GUIDE.md` for details
- **Storage Flexibility**: Pluggable storage backends
- In-memory storage for development
- Redis for distributed deployments
- SageDB for large-scale vector storage
- **Paper Features**: Mixins for advanced memory techniques
- Triple Storage, Link Evolution, Forgetting
- Heat Score Migration, Token Budget
- Conflict Detection, HippoRAG patterns, etc.
## Architecture
```
sage/neuromem/
├── memory_manager.py # Central manager for collections
├── memory_collection/
│ ├── unified_collection.py # Unified collection abstraction ⭐
│ ├── collection_config.py # YAML configuration management
│ ├── indexes/ # Index implementations
│ │ ├── faiss_index.py
│ │ ├── bm25_index.py
│ │ ├── graph_index.py
│ │ └── ...
│ └── paper_features.py # Paper feature mixins
├── search_engine/ # Index algorithms
├── storage_engine/ # Storage backends
│ ├── storage_factory.py
│ ├── memory_storage.py
│ ├── redis_storage.py
│ └── sagedb_storage.py
└── utils/ # Utility functions
```
## Quick Start (Advanced)
```python
from sage.neuromem import UnifiedCollection
from sage.neuromem.memory_collection import TripleStorageMixin, LinkEvolutionMixin
# Create collection with paper features
class AdvancedCollection(UnifiedCollection, TripleStorageMixin, LinkEvolutionMixin):
pass
collection = AdvancedCollection("my_advanced_collection")
# Use advanced features
collection.insert("id1", {"text": "information"})
# Store triple relationships
triple = collection.store_triple(
query="What is X?",
passage="X is...",
answer="X"
)
# Track link evolution
collection.evolve_links("source_id", "target_id", metadata={"confidence": 0.9})
```
## Migration from v0.1.x
If you're upgrading from v0.1.x or earlier versions, see [MIGRATION_GUIDE.md](docs/dev-note/MIGRATION_GUIDE.md) for detailed migration instructions.
### Quick Migration Summary
```python
# Old (v0.1.x)
from sage.neuromem import VDBMemoryCollection
collection = VDBMemoryCollection({"name": "test"})
# New (v0.2.1+)
from sage.neuromem import UnifiedCollection
collection = UnifiedCollection("test", storage_backend="memory")
collection.add_index({"name": "default", "index_type": "faiss", "dim": 768})
```
## Package Structure
NeuroMem is part of the SAGE ecosystem and installed as a namespace package:
- **Package name on PyPI**: `isage-neuromem`
- **Import path**: `sage.neuromem`
- **Namespace**: Part of SAGE (Structured AI Graph Engine)
## Documentation
- **[CollectionConfig Guide](docs/COLLECTION_CONFIG_GUIDE.md)** - Complete configuration management documentation
- Creating collections from code, dict, and YAML
- Index configuration and storage backend selection
- Migration guide from legacy formats
- Best practices and examples
- **[Memory Services API Reference](sage/neuromem/services/API_REFERENCE.md)** - Detailed API documentation for memory services
- **[Contributing Guide](docs/CONTRIBUTING.md)** - Development guidelines and contribution workflow
## Benchmarks
Comprehensive benchmark suite is available in `benchmarks/`:
- **Experiment Pipeline**: Complete benchmark pipeline for memory operations
- **Evaluation Tools**: Performance analysis and metrics
- **Configurations**: Pre-configured test scenarios
See [benchmarks/README.md](benchmarks/README.md) for details.
## Future Plans
This sub-project is designed as a core memory component of SAGE and may be rewritten in C++/Rust for better performance in the future.
## Part of SAGE Ecosystem
NeuroMem is a component of the SAGE (Structured AI Graph Engine) project by IntelliStream Team. -->
## License
Apache-2.0 License - see LICENSE file for details.
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | Apache-2.0 | memory, ai, agents, vector-database, knowledge-graph, rag, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Database"
] | [] | null | null | ==3.11.* | [] | [] | [] | [
"isage-common",
"isage-vdb",
"isage-kernel",
"numpy<2.3.0,>=1.26.0",
"pyyaml<7.0,>=6.0",
"networkx<4.0,>=3.0",
"python-dateutil>=2.8.0",
"bm25s>=0.1.0",
"PyStemmer>=2.0.0",
"faiss-cpu>=1.7.0",
"datasketch>=1.5.0",
"python-dotenv<2.0.0,>=1.1.0",
"nvidia-ml-py>=12.535.108",
"torch<3.0.0,>=2.7.0",
"sentence-transformers<4.0.0,>=3.1.0",
"transformers<4.54.0,>=4.52.0",
"requests<3.0.0,>=2.32.0",
"aiohttp<4.0.0,>=3.12.0",
"fastapi<1.0.0,>=0.115.0",
"uvicorn<1.0.0,>=0.34.0",
"openai<1.91.0,>=1.52.0",
"cohere<6.0.0,>=5.16.0",
"ollama<1.0.0,>=0.4.6",
"zhipuai<2.9.0,>=2.0.1",
"PyJWT<2.9.0,>=2.8.0",
"boto3<2.0.0,>=1.34.0",
"aioboto3<15.0.0,>=14.1.0",
"tenacity>=8.2.0",
"redis<6.0.0,>=5.0.0",
"neo4j<6.0.0,>=5.0.0",
"isage-data; extra == \"dev\"",
"isage-platform; extra == \"dev\"",
"pandas>=2.0.0; extra == \"dev\"",
"pyarrow>=10.0.0; extra == \"dev\"",
"matplotlib>=3.7.0; extra == \"dev\"",
"nltk>=3.8.0; extra == \"dev\"",
"openai>=1.0.0; extra == \"dev\"",
"spacy>=3.0.0; extra == \"dev\"",
"psutil>=5.9.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-benchmark>=4.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/NeuroMem",
"Documentation, https://intellistream.github.io/SAGE/",
"Repository, https://github.com/intellistream/NeuroMem",
"Bug Tracker, https://github.com/intellistream/NeuroMem/issues"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T08:06:21.426418 | isage_neuromem-0.2.1.4-py2.py3-none-any.whl | 371,821 | 2d/99/524825b402c584046b71bef1f942be9dd502d94f5a298b1efa3978e6b70d/isage_neuromem-0.2.1.4-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | beff0196d6d54a8d9c2c43d1fb07e37b | 2b33955cb7cab9c9198bcb1a4cdc80dac04ad805079836e83a64b570d0be3ab0 | 2d99524825b402c584046b71bef1f942be9dd502d94f5a298b1efa3978e6b70d | null | [
"LICENSE"
] | 223 |
2.4 | sopy-quantum | 1.7.1 | Representation and Decomposition with Sums of Product for Operations in separated dimensions. Now using bandlimit place data on lattice. | # Sums Of Product
## SoPy
[](https://pepy.tech/projects/sopy-quantum)
## Sums of Products for data and science
### Conceptual
Let multidimensional distributions be handled in the new-old-fashioned way... methods as old as the census, modernized for physics by Beylkin and Mohlenkamp (2005). Herein is a suite of code to hold and decompose SoP vectors. We use the word *decomposition* not to mean dimensional reduction, but canonical-rank reduction. Data already is in SoP form, so why write it in dense hyper-dimensions?
Since 2018, we have been aware that the Coulomb and other functions can be written in SoP ways, but that's the published secret sauce.
We are simply publishing our best understanding of how the SoP vector should be decomposed, including some tricks that have not seen the light of day before and that fundamentally improve the process; see Fibonacci.
Recent additions to this package allow you to treat your data (gaussians) as operators, or to multiply your dataset by exp(i k X^) while maintaining separated dimensions!
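To make "SoP form" concrete: in 2-D, a grid function is stored as a sum of rank-1 outer products of 1-D factors rather than as a dense matrix. A library-agnostic numpy sketch (the gaussians and rank here are made up for illustration, not SoPy's own API):

```python
import numpy as np

x = np.linspace(-10, 10, 100)

def g(mu):
    # A 1-D gaussian factor on the lattice
    return np.exp(-0.5 * (x - mu) ** 2)

# Canonical rank 2: two terms, each a product of one 1-D factor per dimension.
factors = [(g(2.0), g(6.0)), (g(-1.0), g(-2.0))]

# Expanding to the dense 2-D grid turns 2 terms x 2 factors x 100 points
# into a 100x100 array — the SoP form keeps the dimensions separated instead.
dense = sum(np.outer(fx, fy) for fx, fy in factors)
print(dense.shape)  # → (100, 100)
```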
Expect a paper to be published when time can be found to do so.
### How to install
`pip install sopy-quantum`
`import sopy as sp`
# New features
## pySCF
Take an arbitrary electronic-structure system defined in pySCF and put it into SoP 3D space.
A stage towards various applications. Go to examples/pySCF_wavefunction.ipynb to follow my logic.
## Fourier Transform
The work here should not fall into the trap of the native Fast Fourier Transform. Multiply an arbitrary vector by exp(i k X^).
Using really sophisticated operator logic embedded in recent work.
## Gaussian Blur Transform
Multiply an arbitrary vector by exp(-0.5 alpha (X^-position)**2 ).
Using really sophisticated operator logic embedded in recent work.
## Tensorly interface
It is unclear when this is appropriate, but you can use examples/ext to expand an SoP into dense space and use Tensorly to reduce it again.
### Functions
First set a lattice:
`lattices = 2*[np.linspace(-10,10,100)]`
2D gaussian at (2,6) with sigmas (1,1) and polynomial orders (0,0):
`u = sp.Vector().gaussian(a = 1, positions = [2,6], sigmas = [1,1], ls = [0,0], lattices = lattices)`
2D gaussian at (0.1,-0.6) with sigmas (1,1) and polynomial orders (0,0):
`k = sp.Vector().gaussian(a = 1, positions = [0.1,-0.6], sigmas = [1,1], ls = [0,0], lattices = lattices)`
Add a 2D gaussian at (-1,-2) with sigmas (1,1) and polynomial orders (1,1):
`k = k.gaussian(a = 2, positions = [-1,-2], sigmas = [1,1], ls = [1,1], lattices = lattices)`
2D gaussian at (-2,-5) with sigmas (1,1) and polynomial orders (1,0):
`v = k.copy().gaussian(a = 2, positions = [-2,-5], sigmas = [1,1], ls = [1,0], lattices = lattices)`
Multiply the operand by exp_i(k X^) for k = (1,0):
`cv = sp.Operand( u, sp.Vector() )`
`cv.exp_i([1,0]).cot(cv)`
Set the linear dependence factor:
`alpha = 0`
Take v, remove k from it, and decompose onto vector u, outputting to vector q:
`q = u.learn(v-k, alpha = alpha, iterate = 1)`
Get the Euclidean distance between vector v-k and q:
`q.dist(v-k)`
Reduce v with the Fibonacci procedure:
`[ v.fibonacci( partition = partition, iterate = 10, total_iterate = 10).dist(v) for partition in range(1,len(v))]`
Compare with standard approaches:
`[ v.decompose( partition = partition, iterate = 10, total_iterate = 10).dist(v) for partition in range(1,len(v))]`
Use boost:
`[ v.boost().fibonacci( partition = partition, iterate = 10, alpha = 1e-2).unboost().dist(v) for partition in range(1,len(v))]`
### How to Contribute
* Write to disk/database/json
* Develop amplitude/component to various non-local resources
* Engage with Quantum Galaxies deploying matrices in separated dimensions
### Contact Info
[ SoPy Website ](https://sopy.quantumgalaxies.org)
[ Quantum Galaxies Articles ](https://www.quantumgalaxies.org/articles)
[ Quantum Galaxies Corporation ](https://www.quantumgalaxies.com)
| text/markdown | null | Jonathan Jerke <jonathan@quantumgalaxies.org> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"tensorflow>=2.16.1",
"scikit-learn>=1.6.1",
"bandlimit>=1.2.4",
"pyscf>=2.11.0; extra == \"pyscf\"",
"tensorly>=0.9.0; extra == \"tensorly\""
] | [] | [] | [] | [
"Homepage, https://sopy.quantumgalaxies.org",
"Issues, https://github.com/quantumgalaxies/sopy/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T08:04:31.908944 | sopy_quantum-1.7.1.tar.gz | 17,207 | 6d/70/576b04d833c3d5de427f751e0fa9cfd4eb2e0ede0fd129f05716cdc9eaac/sopy_quantum-1.7.1.tar.gz | source | sdist | null | false | 3c5d0a534135e4b7f960c0adf9c5831b | 5ab30c51387428e1f2139eae73b09837bba226ea0ac25ba0f490ddbb6b0d0c51 | 6d70576b04d833c3d5de427f751e0fa9cfd4eb2e0ede0fd129f05716cdc9eaac | MIT | [
"LICENSE.txt"
] | 238 |
2.4 | ouroboros-ai | 0.12.2 | Self-Improving AI Workflow System | <p align="center">
<br/>
<img src="https://raw.githubusercontent.com/Q00/ouroboros/main/docs/screenshots/dashboard.png" width="600" alt="Ouroboros TUI Dashboard">
<br/>
<strong>OUROBOROS</strong>
<br/>
<em>The Serpent That Eats Itself — Better Every Loop</em>
<br/>
</p>
<p align="center">
<strong>Stop prompting. Start specifying.</strong>
<br/>
<sub>Transform vague ideas into validated specifications — before writing a single line of code</sub>
</p>
<p align="center">
<a href="https://pypi.org/project/ouroboros-ai/"><img src="https://img.shields.io/pypi/v/ouroboros-ai?color=blue" alt="PyPI Version"></a>
<a href="https://github.com/Q00/ouroboros/actions/workflows/test.yml"><img src="https://img.shields.io/github/actions/workflow/status/Q00/ouroboros/test.yml?branch=main" alt="Tests"></a>
<a href="https://python.org"><img src="https://img.shields.io/badge/python-3.14+-blue" alt="Python"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License"></a>
<a href="https://github.com/Q00/ouroboros/stargazers"><img src="https://img.shields.io/github/stars/Q00/ouroboros?style=social" alt="Stars"></a>
</p>
<p align="center">
<a href="#-quick-start">Quick Start</a> ·
<a href="#-why-ouroboros">Why Ouroboros?</a> ·
<a href="#-how-it-works">How It Works</a> ·
<a href="#-commands">Commands</a> ·
<a href="#-architecture">Architecture</a>
</p>
---
## Quick Start
**Plugin Mode** (No Python Required):
```bash
# 1. Install
claude /plugin marketplace add Q00/ouroboros
claude /plugin install ouroboros@ouroboros
# 2. Interview — expose hidden assumptions
ooo interview "I want to build a task management CLI"
# 3. Generate Seed spec
ooo seed
```
**Full Mode** (Python 3.14+):
```bash
# 1. Setup
uv sync && ouroboros setup
# 2. Execute
ouroboros run --seed project.yaml --parallel
# 3. Evaluate
ouroboros evaluate
```
<details>
<summary><strong>What just happened?</strong></summary>
1. `ooo interview` — Socratic questioning exposed your hidden assumptions and contradictions
2. `ooo seed` — Crystallized answers into an immutable specification (the "Seed")
3. The Seed is what you hand to AI — no more "build me X" and hoping for the best
</details>
---
## Why Ouroboros?
> *"I can already prompt Claude directly. Why do I need this?"*
### The Problem: Garbage In, Garbage Out
Human requirements arrive **ambiguous**, **incomplete**, and **contradictory**. When AI executes them directly:
```
You: "Build me a task management CLI"
↓
Claude builds something
↓
You realize it's wrong (forgot about priorities)
↓
Rewrite prompt → Claude rebuilds → Still wrong
↓
3 hours later, debugging requirements, not code
```
### The Solution: Specify Before You Build
Ouroboros exposes hidden assumptions **before** AI writes a single line of code:
```
Q: "Should completed tasks be deletable or archived?"
Q: "What happens when two tasks have the same priority?"
Q: "Is this for teams or solo use?"
↓
→ 12 hidden assumptions exposed
→ Seed generated. Ambiguity: 0.15
→ Claude builds exactly what you specified. First try.
```
### Core Benefits
| Problem | Ouroboros Solution |
|:--------|:-------------------|
| Vague requirements → wrong output | Socratic interview exposes hidden assumptions before coding begins |
| Most expensive model for everything | PAL Router: **85% cost reduction** via automatic tier selection |
| No idea if you're still on track | Drift detection flags when execution diverges from spec |
| Stuck → retry the same approach harder | 5 lateral thinking personas offer fresh angles |
| Did we actually build the right thing? | 3-stage evaluation (Mechanical → Semantic → Consensus) |
---
## How It Works
Ouroboros applies two ancient methods to transform messy human intent into precise specifications:
- **Socratic Questioning** — *"Why do you want this? Is that truly necessary?"* → reveals hidden assumptions
- **Ontological Analysis** — *"What IS this, really? Symptom or root cause?"* → finds the essential problem
These iterate until a **Seed** crystallizes — a spec with `Ambiguity ≤ 0.2`. Only then does execution begin.
### The Pipeline
```
Interview → Seed → Route → Execute → Evaluate → Adapt
(Phase 0) (0) (1) (2) (4) (3,5)
```
| Phase | What It Does |
|:-----:|-------------|
| **0 — Big Bang** | Socratic + Ontological questioning → crystallized Seed |
| **1 — PAL Router** | Auto-selects model tier: 1x / 10x / 30x → **~85% cost savings** |
| **2 — Double Diamond** | Discover → Define → Design → Deliver |
| **3 — Resilience** | Stagnation? Switch to one of 5 lateral thinking personas |
| **4 — Evaluation** | Mechanical ($0) → Semantic ($$) → Consensus ($$$$) |
| **5 — Secondary Loop** | TODO registry: defer the trivial, pursue the essential |
---
## Commands
### Plugin Mode (No Python Required)
| Command | Description |
|:--------|:------------|
| `ooo interview` | Socratic questioning → expose hidden assumptions |
| `ooo seed` | Crystallize answers into immutable spec |
| `ooo unstuck` | 5 lateral thinking personas when you're stuck |
| `ooo help` | Full command reference |
### Full Mode (Python 3.14+)
Unlock execution, evaluation, and drift tracking:
```bash
ooo setup # register MCP server (one-time)
```
| Command | Description |
|:--------|:------------|
| `ooo run` | Execute seed via Double Diamond decomposition |
| `ooo evaluate` | 3-stage verification (Mechanical → Semantic → Consensus) |
| `ooo status` | Drift detection + session tracking |
| `ouroboros dashboard` | Interactive TUI dashboard |
### Natural Language Triggers
You can also use natural language — these work identically:
| Instead of... | Say... |
|:-------------|:-------|
| `ooo interview` | "Clarify requirements" / "Explore this idea" |
| `ooo unstuck` | "I'm stuck" / "Help me think differently" |
| `ooo evaluate` | "Check if this works" / "Verify the implementation" |
| `ooo status` | "Where are we?" / "Show current progress" |
---
## Architecture
<details>
<summary><code>75 modules</code> · <code>1,341 tests</code> · <code>97%+ coverage</code></summary>
```
src/ouroboros/
├── core/ ◆ Types, errors, seed, ontology
├── bigbang/ ◇ Phase 0: Interview → Seed
├── routing/ ◇ Phase 1: PAL router, tiers
├── execution/ ◇ Phase 2: Double Diamond
├── resilience/ ◇ Phase 3: Lateral thinking
├── evaluation/ ◇ Phase 4: 3-stage evaluation
├── secondary/ ◇ Phase 5: TODO registry
├── orchestrator/ ★ Claude Agent SDK integration
├── observability/ ○ Drift control, retrospective
├── persistence/ ○ Event sourcing, checkpoints
├── providers/ ○ LiteLLM adapter (100+ models)
└── cli/ ○ Command-line interface
```
</details>
---
## Troubleshooting
### Plugin Mode
**`ooo: command not found`**
- Reinstall: `claude /plugin marketplace add Q00/ouroboros`
- Then: `claude /plugin install ouroboros@ouroboros`
- Restart Claude Code after installation
### Full Mode
**`ouroboros: command not found`**
- Ensure Python 3.14+ is installed: `python --version`
- Run `uv sync` from the ouroboros directory
- Or install globally: `pip install ouroboros-ai`
**`ouroboros status health` shows errors**
- Missing API key: Set `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`
- Database error: Run `ouroboros config init`
- MCP server issue: Run `ooo setup`
### Common Issues
**"Ambiguity not decreasing"** — Provide more specific answers in interview, or use `ooo unstuck` for fresh perspectives.
**"Execution stalled"** — Ouroboros auto-detects stagnation and switches personas. Check logs with `ouroboros status --events`.
---
## Contributing
```bash
git clone https://github.com/Q00/ouroboros
cd ouroboros
uv sync --all-groups && uv run pytest
```
- [GitHub Issues](https://github.com/Q00/ouroboros/issues)
- [GitHub Discussions](https://github.com/Q00/ouroboros/discussions)
---
<p align="center">
<em>"The beginning is the end, and the end is the beginning."</em>
<br/><br/>
<code>MIT License</code>
</p>
| text/markdown | null | Q00 <jqyu.lee@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"aiosqlite>=0.20.0",
"anthropic>=0.52.0",
"cachetools>=5.0.0",
"claude-agent-sdk>=0.1.0",
"httpx>=0.27.0",
"litellm>=1.80.0",
"mcp>=1.26.0",
"prompt-toolkit>=3.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.0",
"rich>=13.0.0",
"sqlalchemy[asyncio]>=2.0.0",
"stamina>=25.1.0",
"structlog>=24.0.0",
"textual>=1.0.0",
"typer>=0.12.0",
"pandas>=2.2.0; extra == \"dashboard\"",
"plotly>=5.24.0; extra == \"dashboard\"",
"streamlit>=1.40.0; extra == \"dashboard\""
] | [] | [] | [] | [] | uv/0.9.2 | 2026-02-20T08:03:47.285452 | ouroboros_ai-0.12.2.tar.gz | 775,339 | 60/0e/039a9d0eaa9ad7b8062eb03d98e50d5b7e33440cdb2672baf14763a26c92/ouroboros_ai-0.12.2.tar.gz | source | sdist | null | false | 78cbdfd428f54f5856159854e2e0eda4 | 291eb31a92b3a4f3bb2d45e9aa84ecf68b19e67304f52c1b1ed9410e8c0ad3c9 | 600e039a9d0eaa9ad7b8062eb03d98e50d5b7e33440cdb2672baf14763a26c92 | null | [
"LICENSE"
] | 238 |
2.4 | microservice-api | 0.3.3 | Lightweight FastAPI microservice for identity and access management, built with asyncpg and Tortoise ORM | # microservice-api
Lightweight FastAPI microservice for identity and access management, built with asyncpg and Tortoise ORM.
## Features
- FastAPI endpoints for auth, users, groups, and permissions
- Async Postgres stack with Tortoise ORM and asyncpg
- JWT-based auth helpers and middleware
- Background worker scaffolding
## Installation
```bash
pip install microservice-api
```
## Quickstart
```bash
uvicorn microservice.main:create_app --factory --reload
```
## Example
Run the minimal example (no DB/auth/worker) from the repo:
```bash
make install
make run
```
Or run it directly:
```bash
./.venv/bin/python examples/run.py
```
## Configuration
Set environment variables in a `.env` file or your shell. Common values include database connection info and JWT secrets.
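A minimal `.env` sketch; the variable names below are illustrative placeholders, not the package's documented settings — check your deployment's configuration layer for the real keys:

```bash
# Hypothetical names — adjust to your actual configuration
DATABASE_URL=postgres://user:password@localhost:5432/app
JWT_SECRET=change-me
JWT_ALGORITHM=HS256
```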
## Development
```bash
pip install -e .[dev]
pytest
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi==0.129.0",
"uvicorn[standard]==0.32.1",
"pydantic==2.10.3",
"email-validator==2.1.1",
"tortoise-orm==1.1.2",
"asyncpg==0.30.0",
"pyjwt==2.11.0",
"httpx==0.28.1",
"requests==2.32.3",
"apscheduler==3.11.2",
"picologging==0.9.3",
"python-dotenv==1.2.1",
"stripe<8.0.0,>=5.0.0",
"google-genai>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T08:02:48.701894 | microservice_api-0.3.3.tar.gz | 28,426 | fb/32/0f8ba01f7d0ee5dd77adbfda4465be9e9bd4e453d1f6e8e2415793b52ced/microservice_api-0.3.3.tar.gz | source | sdist | null | false | 7ffe418eee340334b939378f95bfa621 | 515fe8cca926dcb83e17ba53f7ff13c9a050b48e0f8c0a9155648711f6efa0b1 | fb320f8ba01f7d0ee5dd77adbfda4465be9e9bd4e453d1f6e8e2415793b52ced | null | [] | 233 |
2.4 | warpzone-sdk | 16.0.1.dev1 | The main objective of this package is to centralize logic used to interact with Azure Functions, Azure Service Bus and Azure Table Storage | # WarpZone SDK
This package contains tools used in the WarpZone project.
These tools include:
- [Client for Storage](#client-for-storage)
- [Client for Servicebus](#client-for-servicebus)
- [Function wrapper](#function-wrapper)
---
## Client for Storage
### Blob storage
`WarpzoneBlobClient` client is used for uploading to and downloading from Azure Storage Blob Service.

---
## Client for Servicebus
Due to limitations on message sizes, we use different methods for sending _events_ and _data_ using Azure Service Bus.
### Events
We use the Service Bus for transmitting event messages. By an _event_, we mean a JSON-formatted message containing information about an event occurring in one part of the system, which needs to trigger another part of the system (such as an Azure Function trigger).
`WarpzoneEventClient` client is used for sending and receiving events.

---
### Data
We **do not** use the Service Bus for transmitting data directly. Instead, we use a claim-check pattern, where we store the data using Storage Blob and transmit an event with the details of this stored data.
`WarpzoneDataClient` client is used for sending and receiving data in this way. The following diagram shows how the process works:
1. Data is uploaded
2. Event containing the blob location is sent
3. Event is received
4. Data is downloaded using the blob location contained in the event

The transmitted event has the following format:
```json
{
"container_name": "<container-name>",
"blob_name": "<blob-name>",
"timestamp": "<%Y-%m-%dT%H:%M:%S%z>"
}
```
The data will be stored with
- `<container-name>` = `<topic-name>`
- `<blob-name>` = `<subject>/year=<%Y>/month=<%m>/day=<%d>/hour=<%H>/<message-id>.<extension>`
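The naming scheme above can be sketched in plain Python (the topic, subject, and message-id values are made-up examples; the Warpzone clients build these names for you):

```python
from datetime import datetime, timezone

def claim_check_event(topic: str, subject: str, message_id: str,
                      extension: str, ts: datetime) -> dict:
    # <container-name> = <topic-name>
    # <blob-name> = <subject>/year=<%Y>/month=<%m>/day=<%d>/hour=<%H>/<message-id>.<extension>
    blob_name = (
        f"{subject}/year={ts:%Y}/month={ts:%m}/day={ts:%d}/hour={ts:%H}/"
        f"{message_id}.{extension}"
    )
    return {
        "container_name": topic,
        "blob_name": blob_name,
        "timestamp": ts.strftime("%Y-%m-%dT%H:%M:%S%z"),
    }

event = claim_check_event("uniform", "measurements", "abc123", "parquet",
                          datetime(2024, 5, 1, 13, 30, tzinfo=timezone.utc))
print(event["blob_name"])
# → measurements/year=2024/month=05/day=01/hour=13/abc123.parquet
```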
---
## Function Wrapper
For executing logic, we use a framework built on top of Azure Functions. The following diagram shows how the framework works:
1. The function is triggered by a **trigger** object (e.g. a timer or a message being received)
2. Possible **dependency** objects are initialized (potentially using information from the trigger). These are used to integrate with external systems (e.g. a database client).
3. Using the trigger and dependencies as inputs, the function produces an **output** object (e.g. a message being sent).

The reason we have used our own framework instead of Azure Functions directly, is that we want to use our own objects as triggers, dependencies and outputs, instead of the built-in bindings. For example, as explained [above](#data), we have created our own abstraction of a message for transmitting data (`warpzone.DataMessage`); so we would like to use this, instead of the built-in binding `azure.function.ServiceBusMessage`.
Since it is not yet possible to define [custom bindings](https://github.com/Azure/azure-webjobs-sdk/wiki/Creating-custom-input-and-output-bindings) in Python, we have defined our own wrapping logic, to handle the conversion between our own objects and the built-in bindings. The following diagram shows how the wrapping logic works:
1. Azure trigger binding is converted to trigger object
2. Either
- (a) Output object is converted to Azure output binding.
- (b) Use custom output logic, when no suitable output binding exists (e.g. we use the Azure Service Bus SDK instead of the Service Bus output binding, since this is [recommended](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus-output?tabs=python-v1%2Cin-process%2Cextensionv5&pivots=programming-language-python#usage))
3. All logs and traces are sent to App Insights automatically.

---
### Examples
Azure Function with data messages as trigger and output:
```json
# function.json
{
"scriptFile": "__init__.py",
"entryPoint": "main",
"bindings": [
{
"name": "msg",
"type": "serviceBusTrigger",
"direction": "in",
"connection": "...",
"topicName": "...",
"subscriptionName": "..."
}
]
}
```
```python
import warpzone as wz
def do_nothing(data_msg: wz.DataMessage) -> wz.DataMessage:
return data_msg
main = wz.functionize(
f=do_nothing,
trigger=wz.triggers.DataMessageTrigger(binding_name="msg"),
output=wz.outputs.DataMessageOutput(wz.Topic.UNIFORM)
)
```
Azure Function with HTTP messages as trigger and output:
```json
# function.json
{
"scriptFile": "__init__.py",
"entryPoint": "main",
"bindings": [
{
"authLevel": "anonymous",
"name": "req",
"type": "httpTrigger",
"direction": "in"
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
```
```python
# __init__.py
import warpzone as wz
import azure.functions as func
def return_ok(req: func.HttpRequest) -> func.HttpResponse:
return func.HttpResponse("OK")
main = wz.functionize(
f=return_ok,
trigger=wz.triggers.HttpTrigger(binding_name="req"),
output=wz.outputs.HttpOutput()
)
```
Azure Function using dependencies:
```python
import warpzone as wz
def do_nothing(
data_msg: wz.DataMessage,
db: wz.WarpzoneDatabaseClient,
) -> wz.DataMessage:
return data_msg
main = wz.functionize(
f=do_nothing,
trigger=wz.triggers.DataMessageTrigger(binding_name="msg"),
output=wz.outputs.DataMessageOutput(wz.Topic.UNIFORM),
dependencies=[wz.dependencies.DeltaDatabaseDependency()],
)
```
| text/markdown | Team Enigma | enigma@energinet.dk | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.8.3",
"azure-core>=1.26.3",
"azure-functions>=1.12.0",
"azure-identity>=1.15.0",
"azure-monitor-opentelemetry-exporter>=1.0.0b36",
"azure-servicebus>=7.8.0",
"azure-storage-blob>=12.14.1",
"cryptography==43.0.3",
"datamazing>=5.1.6",
"deltalake==1.2.1",
"numpy>=1.26.4",
"obstore>=0.8.2",
"opentelemetry-sdk>=1.32.0",
"pandas>=2.0.3",
"polars>=1.33.1",
"pyarrow>=19.0.0",
"typeguard>=4.0.1"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.10.19 Linux/6.11.0-1018-azure | 2026-02-20T08:01:50.422058 | warpzone_sdk-16.0.1.dev1-py3-none-any.whl | 43,489 | 8b/d7/006789901c9c15bd16211414a4bb751dfe46cab8ef27232fc54f63790502/warpzone_sdk-16.0.1.dev1-py3-none-any.whl | py3 | bdist_wheel | null | false | b70da56ca32cea21e22944a1e3bb0f33 | 4d7b4c5b8af4a136dad816ffc71d56ac71eab938ba6fa94f78f26b331a10a4ed | 8bd7006789901c9c15bd16211414a4bb751dfe46cab8ef27232fc54f63790502 | null | [] | 219 |
2.4 | pybit | 5.14.1 | Python3 Bybit HTTP/WebSocket API Connector | # pybit
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->
[](https://www.python.org/downloads/)
[](https://pypi.org/project/pybit/)

Official Python3 API connector for Bybit's HTTP and WebSockets APIs.
## Table of Contents
- [About](#about)
- [Development](#development)
- [Installation](#installation)
- [Usage](#usage)
- [Contact](#contact)
- [Contributors](#contributors)
- [Donations](#donations)
## About
Put simply, `pybit` (Python + Bybit) is the official lightweight one-stop-shop module for the Bybit HTTP and WebSocket APIs. Originally created by [Verata Veritatis](https://github.com/verata-veritatis), it's now maintained by Bybit employees – however, you're still welcome to contribute!
It was designed with the following vision in mind:
> I was personally never a fan of auto-generated connectors that used a mosh-pit of various modules you didn't want (sorry, `bravado`) and wanted to build my own Python3-dedicated connector with very little external resources. The goal of the connector is to provide traders and developers with an easy-to-use high-performing module that has an active issue and discussion board leading to consistent improvements.
## Development
`pybit` is being actively developed, and new Bybit API changes should arrive on `pybit` very quickly. `pybit` uses `requests` and `websocket-client` for its methods, alongside other built-in modules. Anyone is welcome to branch/fork the repository and add their own upgrades. If you think you've made substantial improvements to the module, submit a pull request and we'll gladly take a look.
## Installation
`pybit` requires Python 3.9.1 or higher. The module can be installed manually or via [PyPI](https://pypi.org/project/pybit/) with `pip`:
```
pip install pybit
```
## Usage
Import the unified trading HTTP client:
```python
from pybit.unified_trading import HTTP
```
Create an authenticated HTTP session for mainnet:
```python
session = HTTP(
testnet=False,
api_key="...",
api_secret="...",
)
```
Information can be sent to, or retrieved from, the Bybit APIs:
```python
# Get the orderbook of the USDT Perpetual, BTCUSDT
session.get_orderbook(category="linear", symbol="BTCUSDT")
# Create five long USDC Options orders.
# (Currently, only USDC Options support sending orders in bulk.)
payload = {"category": "option"}
orders = [{
"symbol": "BTC-30JUN23-20000-C",
"side": "Buy",
"orderType": "Limit",
"qty": "0.1",
    "price": str(i),
} for i in [15000, 15500, 16000, 16500, 16600]]
payload["request"] = orders
# Submit the orders in bulk.
session.place_batch_order(payload)
```
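For reference, the `HTTP` session signs authenticated v5 requests for you, so you normally never do this by hand. A hedged sketch of the scheme (HMAC-SHA256 over the concatenated timestamp, API key, recv_window, and query string, sent via the `X-BAPI-*` headers) for readers curious about what happens under the hood:

```python
import hashlib
import hmac
import time

def sign_v5_request(api_key: str, api_secret: str, query: str,
                    recv_window: str = "5000") -> dict:
    # Bybit v5 signs: timestamp + api_key + recv_window + query string
    timestamp = str(int(time.time() * 1000))
    payload = timestamp + api_key + recv_window + query
    signature = hmac.new(api_secret.encode(), payload.encode(),
                         hashlib.sha256).hexdigest()
    return {
        "X-BAPI-API-KEY": api_key,
        "X-BAPI-TIMESTAMP": timestamp,
        "X-BAPI-RECV-WINDOW": recv_window,
        "X-BAPI-SIGN": signature,
    }

headers = sign_v5_request("key", "secret", "category=linear&symbol=BTCUSDT")
print(sorted(headers))
```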
Check out the example python files or the list of endpoints below for more information on available
endpoints and methods. Usage examples on the `HTTP` methods can
be found in the [examples folder](https://github.com/bybit-exchange/pybit/tree/master/examples).
## Contact
Reach out for support on your chosen platform:
- [Telegram](https://t.me/BybitAPI) group chat
- [Discord](https://discord.com/invite/VBwVwS2HUs) server
## Contributors
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tr>
<td align="center"><a href="https://github.com/dextertd"><img src="https://avatars.githubusercontent.com/u/54495183?v=4" width="100px;" alt=""/><br /><sub><b>dextertd</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=dextertd" title="Code">💻</a> <a href="https://github.com/bybit-exchange/pybit/commits?author=dextertd" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/ervuks"><img src="https://avatars.githubusercontent.com/u/17198438?v=4" width="100px;" alt=""/><br /><sub><b>ervuks</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=ervuks" title="Code">💻</a> <a href="https://github.com/bybit-exchange/pybit/commits?author=ervuks" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/verata-veritatis"><img src="https://avatars0.githubusercontent.com/u/9677388?v=4" width="100px;" alt=""/><br /><sub><b>verata-veritatis</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=verata-veritatis" title="Code">💻</a> <a href="https://github.com/bybit-exchange/pybit/commits?author=verata-veritatis" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/APF20"><img src="https://avatars0.githubusercontent.com/u/74583612?v=4" width="100px;" alt=""/><br /><sub><b>APF20</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=APF20" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/cameronhh"><img src="https://avatars0.githubusercontent.com/u/30434979?v=4" width="100px;" alt=""/><br /><sub><b>Cameron Harder-Hutton</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=cameronhh" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/tomcru"><img src="https://avatars0.githubusercontent.com/u/35841182?v=4" width="100px;" alt=""/><br /><sub><b>Tom Rumpf</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=tomcru" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/sheungon"><img src="https://avatars.githubusercontent.com/u/13306724?v=4" width="100px;" alt=""/><br /><sub><b>OnJohn</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=sheungon" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/tconley"><img src="https://avatars1.githubusercontent.com/u/1893207?v=4" width="100px;" alt=""/><br /><sub><b>Todd Conley</b></sub></a><br /><a href="https://github.com/tconley/pybit/commits?author=tconley" title="Ideas">🤔</a></td>
<td align="center"><a href="https://github.com/kolya5544"><img src="https://avatars.githubusercontent.com/u/20096248?v=4" width="100px;" alt=""/><br /><sub><b>Kolya</b></sub></a><br /><a href="https://github.com/bybit-exchange/pybit/commits?author=kolya5544" title="Code">💻</a></td>
</tr>
</table>
<!-- markdownlint-enable -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
| text/markdown | Dexter Dickinson | dexter.dickinson@bybit.com | null | null | MIT License | bybit api connector | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10"
] | [] | https://github.com/bybit-exchange/pybit | null | >=3.6 | [] | [] | [] | [
"requests",
"websocket-client",
"pycryptodome"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T08:00:58.388365 | pybit-5.14.1.tar.gz | 57,107 | 57/c7/129423ed54f85b8742e6ea37a63f7474cfd6f66bc2dbbd056145ab768c22/pybit-5.14.1.tar.gz | source | sdist | null | false | c69f9ada772697393c2b849b59863a72 | 0b2353d36548ebbb4535d1a955d4522df85e03b9d274ff5ce5ce894b1eff4a7b | 57c7129423ed54f85b8742e6ea37a63f7474cfd6f66bc2dbbd056145ab768c22 | null | [
"LICENSE"
] | 15 |
2.4 | isage-agentic | 0.1.0.3 | SAGE Agentic Framework - Agent framework, planning, tool selection, and workflow | # SAGE Agentic Framework
**Independent package for agentic AI capabilities: planning, workflows, and agent coordination**
[](https://badge.fury.io/py/isage-agentic)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
> **📢 Note**: Tool selection algorithms have been moved to [`sage-agentic-tooluse`](https://github.com/intellistream/sage-agentic-tooluse) for focused development.
## 🎯 Overview
`sage-agentic` provides a comprehensive framework for building agentic AI systems with:
- **Planning Algorithms**: ReAct, Tree of Thoughts (ToT), hierarchical planning
- **Workflow Management**: Workflow orchestration and optimization
- **Agent Coordination**: Multi-agent collaboration and registry
- **Reasoning**: Advanced reasoning capabilities and timing decisions
## 📦 Installation
```bash
# Basic installation
pip install isage-agentic
# With LLM support
pip install isage-agentic[llm]
# Development installation
pip install isage-agentic[dev]
```
## 🔧 Tool Selection (Moved to sage-agentic-tooluse)
**Tool selection algorithms are now in a separate package:**
- **Repository**: https://github.com/intellistream/sage-agentic-tooluse
- **Install**: `pip install isage-agentic-tooluse`
- **Import**: `from sage_libs.sage_agentic_tooluse import ...`
```python
# Tool selection - use sage-agentic-tooluse package
from sage_libs.sage_agentic_tooluse import (
KeywordToolSelector,
EmbeddingToolSelector,
HybridToolSelector,
DFSDTToolSelector,
GorillaAdapter,
)
```
**Why separate tool selection?**
- Focused development by dedicated team
- Rapid iteration with independent versioning
- Can be used outside SAGE ecosystem
## 🚀 Quick Start
### Planning
```python
from sage_libs.sage_agentic.agents.planning import ReActPlanner
# Create planner
planner = ReActPlanner(llm=your_llm_client)
# Generate plan
plan = planner.plan(
task="Analyze this document and summarize key findings",
context={"document": doc_content}
)
```
### Workflow Management
```python
from sage_libs.sage_agentic.workflow import WorkflowEngine
# Create workflow
workflow = WorkflowEngine()
# Register and execute workflows
workflow.register("data_pipeline", pipeline_config)
result = workflow.execute("data_pipeline", inputs=data)
```
### Intent Recognition
```python
from sage_libs.sage_agentic.agents.intent import IntentClassifier
# Create intent classifier
classifier = IntentClassifier()
# Classify user intent
intent = classifier.classify("Show me the sales report for last month")
```
## 📚 Key Components
### 1. **Planning** (`agents/planning/`)
Planning algorithms and strategies:
- **ToT (Tree of Thoughts)**: Multi-path reasoning with backtracking
- **ReAct**: Reasoning + Acting interleaved execution
- **Hierarchical Planner**: Hierarchical task decomposition
- **Dependency Graph**: Task dependency management
- **Timing Decider**: Execution timing optimization
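As a rough illustration of how a ReAct-style planner interleaves reasoning and acting, here is a minimal sketch in plain Python. The `llm` and `tools` arguments are hypothetical stand-ins for illustration, not the `ReActPlanner` API:

```python
# Illustrative ReAct loop (thought -> action -> observation), not the
# sage-agentic implementation. `llm` returns (thought, action, argument).
def react_loop(llm, tools, task, max_steps=5):
    trace = [f"Task: {task}"]
    for _ in range(max_steps):
        thought, action, arg = llm("\n".join(trace))  # model proposes the next step
        trace.append(f"Thought: {thought}")
        if action == "finish":                        # model decides it is done
            return arg, trace
        observation = tools[action](arg)              # run the chosen tool
        trace.append(f"Action: {action}[{arg}]")
        trace.append(f"Observation: {observation}")
    return None, trace
```

The key property shown here is that each tool observation is fed back into the prompt for the next reasoning step.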
### 2. **Workflow** (`workflow/`, `workflows/`)
Workflow orchestration capabilities:
- **Workflow Engine**: Execute multi-step workflows
- **Workflow Nodes**: Define workflow components
- **Workflow Edges**: Connect workflow steps
- **Optimization**: Workflow optimization strategies
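A workflow engine of this kind essentially runs nodes in dependency order. The toy sketch below uses the stdlib `graphlib` to show the idea; it is not the `WorkflowEngine` API:

```python
# Toy workflow runner: nodes are functions reading earlier results,
# edges are (upstream, downstream) dependencies. Not the real engine.
from graphlib import TopologicalSorter

def run_workflow(nodes, edges, inputs):
    graph = TopologicalSorter({name: set() for name in nodes})
    for src, dst in edges:
        graph.add(dst, src)                    # dst depends on src
    results = {"inputs": inputs}               # "inputs" is a reserved key here
    for name in graph.static_order():          # dependency-respecting order
        results[name] = nodes[name](results)
    return results
```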
### 3. **Reasoning** (`reasoning/`)
Advanced reasoning capabilities:
- **Chain of Thought**: Step-by-step reasoning
- **Reflection**: Self-evaluation and correction
- **Meta-reasoning**: Reasoning about reasoning processes
### 4. **Evaluation** (`eval/`)
Agent evaluation capabilities:
- Metrics tracking
- Determinism testing
- Telemetry and monitoring
### 5. **Interfaces & Registry** (`interface/`, `interfaces/`, `registry/`)
Unified interfaces and registration system for:
- Planners
- Workflows
- Agents
- Intent classifiers
## 🔧 Architecture
```
sage_libs/sage_agentic/
├── agents/ # Agent implementations
│ ├── planning/ # Planning algorithms (ReAct, ToT, etc.)
│ ├── intent/ # Intent detection and classification
│ ├── bots/ # Bot implementations
│ ├── runtime/ # Runtime execution
│ └── profile/ # Agent profiles
├── workflow/ # Workflow orchestration
├── workflows/ # Workflow implementations
├── reasoning/ # Reasoning capabilities
├── eval/ # Evaluation tools
├── interface/ # Protocol definitions
├── interfaces/ # Interface implementations
└── registry/ # Component registry
```
## 🎓 Use Cases
1. **Multi-Agent Systems**: Build coordinated multi-agent workflows
2. **Complex Task Planning**: Decompose tasks with hierarchical planning
3. **Adaptive Workflows**: Dynamic workflow execution with reasoning
4. **Intent-Driven Systems**: Classify and route based on user intent
5. **Research**: Experiment with different planning strategies
## 🔗 Integration with SAGE
This package is part of the SAGE ecosystem but can be used independently:
```python
# Standalone usage
from sage_libs.sage_agentic import ReActPlanner
# With SAGE interface layer (if installed)
from sage.libs.agentic import ReActPlanner
```
## Related Packages
- **[sage-agentic-tooluse](https://github.com/intellistream/sage-agentic-tooluse)**: Tool selection algorithms
- **[sage-agentic-tooluse-benchmark](https://github.com/intellistream/sage-agentic-tooluse-benchmark)**: Tool selection evaluation
## 📖 Documentation
- **Repository**: https://github.com/intellistream/sage-agentic
- **SAGE Documentation**: https://intellistream.github.io/SAGE-Pub/
- **Issues**: https://github.com/intellistream/sage-agentic/issues
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
Part of the [SAGE](https://github.com/intellistream/SAGE) ecosystem for stream analytics and generative AI.
## 📧 Contact
- **Team**: IntelliStream Team
- **Email**: shuhao_zhang@hust.edu.cn
- **GitHub**: https://github.com/intellistream
---
**Part of the SAGE ecosystem** - Stream Analytics for Generative AI Engines
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | null | agentic, agent, planning, tool-selection, workflow, LLM, AI | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0.0",
"typing-extensions>=4.0.0",
"isage-libs>=0.2.0",
"openai>=1.0.0",
"anthropic>=0.20.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff>=0.8.4; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sage-agentic",
"Repository, https://github.com/intellistream/sage-agentic"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T08:00:15.355963 | isage_agentic-0.1.0.3-cp311-none-any.whl | 149,829 | 86/58/38818764e86b7bbe9443879d709770ce1e623a7995df59da58584e6b0c86/isage_agentic-0.1.0.3-cp311-none-any.whl | cp311 | bdist_wheel | null | false | cc719baf317fa92aa452abca42ba894f | 96c5bbcac45b1e87df48f87a059ffa81b3076fc817540f0508bda0cd925a5721 | 865838818764e86b7bbe9443879d709770ce1e623a7995df59da58584e6b0c86 | MIT | [
"LICENSE"
] | 214 |
2.4 | OpenPinch | 0.0.28 | An advanced pinch analysis and total site integration toolkit | # OpenPinch
OpenPinch is an open-source toolkit for advanced Pinch Analysis and Total Site
Integration.
## History
OpenPinch started in 2011 as an Excel workbook with macros. Since its inception, the workbook has been developed in multiple directions, including Total Site Heat Integration, multiple utility targeting, retrofit targeting, cogeneration targeting, and more. The latest version of the Excel workbook is free to use and available in the "Excel Version" folder of the OpenPinch GitHub repository.
In 2021, a Python implementation of OpenPinch began, bringing the capabilities of the long-running Excel workbook into a modern Python API. The goal is to provide a sound basis for research, development, and application. It is also freely available for integration with other software tools and projects, and for embedding results
into wider optimisation workflows.
## Citation
In scientific works, please cite this GitHub repository, including the PyPI version number. Forks of OpenPinch should ideally reference back to this source.
At present, a publication for citation is under peer review, and the appropriate reference will be provided in due course.
## Highlights
- Multi-scale targeting: unit operation, process, site, community, and regional zones
- Direct heat integration targeting and indirect heat integration targeting (via the utility system)
- Multiple utility targeting (isothermal and non-isothermal)
- Grand composite curve (GCC) manipulation and visualisation helpers
- Excel template for importing data
- Visualisation via a Streamlit web application
- Pydantic schema models for validated programmatic workflows
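To make the targeting idea concrete, the sketch below implements the classic textbook problem-table algorithm (shifted temperatures plus a heat cascade) in plain Python. It is an illustration only, not OpenPinch's API; streams are assumed to be `(t_supply, t_target, cp)` tuples and isothermal streams are ignored:

```python
# Textbook problem-table sketch (Linnhoff-style), for illustration only.
def problem_table(hot, cold, dt_min=10.0):
    # Shift hot streams down and cold streams up by dt_min / 2.
    shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp) for ts, tt, cp in hot]
    shifted += [(ts + dt_min / 2, tt + dt_min / 2, cp) for ts, tt, cp in cold]
    bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net = 0.0
        for ts, tt, cp in shifted:
            top, bot = max(ts, tt), min(ts, tt)
            if bot <= lo and top >= hi:          # stream spans this interval
                net += cp * (hi - lo) * (1 if ts > tt else -1)
        heat += net
        cascade.append(heat)
    q_hot_min = max(0.0, -min(cascade))          # minimum hot utility target
    q_cold_min = cascade[-1] + q_hot_min         # minimum cold utility target
    return q_hot_min, q_cold_min
```

For one hot and one cold stream of equal duty, the cascade correctly reports the utility penalty imposed by the minimum approach temperature.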
## Installation
Install the latest published release from PyPI:
```bash
python -m pip install openpinch
```
## Quickstart
The high-level service accepts Excel data input via the template format. Copy and edit the Excel template (identical to the OpenPinch Excel Workbook) to input stream and utility data.
```python
from pathlib import Path
from OpenPinch import PinchProblem
pp = PinchProblem()
pp.load(Path("[location]/[filename].xlsb"))
pp.target()
pp.export_to_Excel(Path("results"))
```
Alternatively, each individual stream can be defined directly, following the provided schema.
```python
from pathlib import Path
from OpenPinch import PinchProblem
from OpenPinch.lib.schema import TargetInput, StreamSchema, UtilitySchema
from OpenPinch.lib.enums import StreamType
streams = [
StreamSchema(
zone="Process Unit",
name="Reboiler Vapor",
t_supply=200.0,
t_target=120.0,
heat_flow=8000.0,
dt_cont=10.0,
htc=1.5,
),
StreamSchema(
zone="Process Unit",
name="Feed Preheat",
t_supply=40.0,
t_target=160.0,
heat_flow=6000.0,
dt_cont=10.0,
htc=1.2,
),
]
utilities = [
UtilitySchema(
name="Cooling Water",
type=StreamType.Cold,
t_supply=25.0,
t_target=35.0,
heat_flow=120000.0,
dt_cont=5.0,
htc=0.8,
price=12.0,
)
]
input_data = TargetInput(streams=streams, utilities=utilities)
pp = PinchProblem()
pp.load(input_data)
pp.target()
pp.export_to_Excel(Path("results"))
```
## Visualisation through Streamlit
A Streamlit app provides a simple way to explore OpenPinch analysis results. In streamlit_app.py, a user can define the path to the stream and utility data for the problem.
Run with
``streamlit run streamlit_app.py``
to load a case and launch the interactive dashboard defined in
``OpenPinch/streamlit_webviewer/web_graphing.py``.
## Documentation
Full documentation (getting started, guides, and API reference) is available:
https://openpinch.readthedocs.io/en/latest/
Please note: the reference guide, like the repository, is under development. Errors are likely due to the research nature of the project.
## Testing
Install the project in editable mode along with any optional test dependencies,
then run the test suite with:
```bash
python -m pip install -e .
pytest
```
## Contributors
Founder: Dr Tim Walmsley, University of Waikato
Stephen Burroughs, Benjamin Lincoln, Alex Geary, Harrison Whiting, Khang Tran, Roger Padullés, Jasper Walden
## Contributing
Issues and pull requests are welcome! Please open a discussion if you have questions about data formats or feature ideas. When submitting code, aim for:
- Typed interfaces and clear docstrings
- Small methods with singular purpose
- Pytests covering new behaviour
- Updated documentation where relevant
## License
OpenPinch is released under the MIT License. See `LICENSE` for details.
| text/markdown | null | Tim Walmsley <tim.walmsley@waikato.ac.nz> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"coolprop",
"matplotlib",
"numpy",
"openpyxl",
"pandas",
"pint",
"plotly",
"pydantic",
"pyxlsb",
"scipy",
"streamlit",
"tespy",
"black>=23.9; extra == \"dev\"",
"ruff>=0.1.8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/waikato-ahuora-smart-energy-systems/OpenPinch",
"Issues, https://github.com/waikato-ahuora-smart-energy-systems/OpenPinch/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:59:55.207199 | openpinch-0.0.28.tar.gz | 40,091,790 | 88/28/ee5f39d82a32e653a12e6536ad777f71d54fb3d5e0aa256fb2a929615955/openpinch-0.0.28.tar.gz | source | sdist | null | false | 45205650db1b4a2a66efa3e3536df047 | 34b7de0a8d84ff1cd4aebf08f9499a1d1a60d734c764fad475a3535c38c4f853 | 8828ee5f39d82a32e653a12e6536ad777f71d54fb3d5e0aa256fb2a929615955 | MIT | [
"LICENSE"
] | 0 |
2.4 | mcp-browser-tools | 0.3.0 | MCP server providing browser automation over the stdio, SSE, and Streamable HTTP transport protocols | # MCP Browser Tools
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/mcp-browser-tools/)
[](https://github.com/astral-sh/uv)
An MCP (Model Context Protocol) browser automation toolkit that provides web page retrieval and browser control, helping AI models interact with web pages.
Three transport protocols are supported: stdio, SSE (Server-Sent Events), and Streamable HTTP.
## Features
- 🌐 **Web navigation**: Navigate to any URL and wait for the page to finish loading
- 📄 **Content extraction**: Retrieve page HTML, text content, and metadata
- 🎯 **Element interaction**: Click elements, fill forms, and perform other page operations
- ⏱️ **Smart waiting**: Wait for specific elements to appear
- 🔍 **Information extraction**: Extract structured information such as links and images from a page
- 📸 **Screenshots**: Capture page screenshots
- 💻 **JavaScript execution**: Run JavaScript code in the page
- 🔄 **Multi-protocol support**: stdio, SSE, and Streamable HTTP transports
- ⚡ **Real-time communication**: Server push and bidirectional communication via SSE and HTTP streams
- 🛡️ **Input safety**: Input validation and error handling
- 📊 **Performance monitoring**: Tool execution timing and logging
## Installation
### From PyPI
```bash
pip install mcp-browser-tools
```
### Building from source
Build and install with [uv](https://github.com/astral-sh/uv):
```bash
# Clone the repository
git clone https://github.com/K-Summer/mcp-browser-tools.git
cd mcp-browser-tools
# Build with uv
uv build
# Install the built wheel
uv pip install dist/mcp_browser_tools-*.whl
```
### Development install
```bash
# Clone the repository
git clone https://github.com/K-Summer/mcp-browser-tools.git
cd mcp-browser-tools
# Create a virtual environment and activate it
uv venv
source .venv/bin/activate  # Linux/macOS
# or .venv\Scripts\activate  # Windows
# Install in editable mode
uv pip install -e .
uv pip install -e ".[dev]"  # with development dependencies
```
## 🚀 Quick Start
### 1. Install the Playwright browsers
```bash
playwright install
```
### 2. Verify the installation
```bash
# Check the version
mcp-browser-tools --version
# Show help
mcp-browser-tools --help
# List the supported transport protocols
mcp-browser-tools --list-transports
```
### 3. Choose a transport protocol and start the server
#### Choosing a transport
| Protocol | Typical use | Pros | Cons |
| --------------- | -------------------------------------- | ------------------------------ | ------------------------- |
| **stdio** | CLI tools, local development | Simple, stable, full-featured | No remote connections |
| **SSE** | Web apps, real-time updates | Server push, HTTP-compatible | One-way (server → client) |
| **HTTP Stream** | Streaming APIs, long-lived connections | Bidirectional, flexible | More complex to configure |
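For reference, the SSE transport pushes events as blank-line-separated blocks whose `data:` lines carry the payload. A minimal stdlib parser sketch (not part of this package) shows the wire format:

```python
import json

def parse_sse(stream_text):
    """Collect JSON payloads from the `data:` lines of an SSE stream."""
    events = []
    for block in stream_text.split("\n\n"):   # events are blank-line separated
        data = [line[6:] for line in block.splitlines() if line.startswith("data: ")]
        if data:
            events.append(json.loads("\n".join(data)))
    return events
```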
#### Starting the server
```bash
# stdio transport (recommended for local development)
mcp-browser-tools --transport stdio
# SSE transport (recommended for web applications)
mcp-browser-tools --transport sse --host 127.0.0.1 --port 8000
# HTTP Stream transport
mcp-browser-tools --transport http_stream --host 0.0.0.0 --port 8080
# Custom configuration
mcp-browser-tools --transport sse --host localhost --port 9000 --log-level DEBUG --server-name "my-browser-tools"
```
#### Using environment variables
```bash
# stdio transport
export MCP_TRANSPORT_MODE=stdio
mcp-browser-tools
# SSE transport
export MCP_TRANSPORT_MODE=sse
export MCP_HOST=127.0.0.1
export MCP_PORT=8000
mcp-browser-tools
# HTTP Stream transport
export MCP_TRANSPORT_MODE=http_stream
export MCP_HOST=0.0.0.0
export MCP_PORT=8080
mcp-browser-tools
```
#### Running as a Python module
```bash
# Run the module directly
python -m mcp_browser_tools --transport stdio
# Other transports
python -m mcp_browser_tools --transport sse --port 9000
python -m mcp_browser_tools --transport http_stream --host localhost
```
### 4. Test the server connection
#### Testing the SSE server
After starting the SSE server, test the connection with:
```bash
# With curl
curl -N http://localhost:8000/sse
# With Python
python -c "
import aiohttp
import asyncio
async def test():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://localhost:8000/sse') as response:
            print(f'Status: {response.status}')
            async for line in response.content:
                print(line.decode().strip())
                break
asyncio.run(test())
"
```
#### Verifying the MCP-over-SSE endpoint
```bash
# Test the MCP endpoint
curl -N http://localhost:8000/mcp-sse
```
#### Using the bundled test scripts
The project includes test scripts:
```bash
# Quick connection test
python quick_test.py
# Full test
python test_sse_connection.py
# Check server status
python check_server.py
```
### 5. Usage example
```python
import asyncio
from mcp.server.stdio import stdio_server
from mcp_browser_tools.server import main as server_main

async def run():
    # The MCP server attaches to stdio automatically
    await stdio_server(server_main)

if __name__ == "__main__":
    asyncio.run(run())
```
## 🛠️ Available Tools
MCP Browser Tools provides the following browser automation tools:
### Navigation and page actions
#### 1. navigate_to_url
Navigate to the given URL
**Parameters:**
- `url` (string, required): The URL to navigate to
**Example:**
```json
{
  "name": "navigate_to_url",
  "arguments": {
    "url": "https://example.com"
  }
}
```
#### 2. go_back
Go back to the previous page
**Parameters:** none
**Example:**
```json
{
  "name": "go_back",
  "arguments": {}
}
```
#### 3. go_forward
Go forward to the next page
**Parameters:** none
**Example:**
```json
{
  "name": "go_forward",
  "arguments": {}
}
```
#### 4. refresh_page
Refresh the current page
**Parameters:** none
**Example:**
```json
{
  "name": "refresh_page",
  "arguments": {}
}
```
### 5. get_page_content
Get the current page content
```json
{
  "name": "get_page_content",
  "arguments": {}
}
```
### 6. get_page_title
Get the page title
```json
{
  "name": "get_page_title",
  "arguments": {}
}
```
### 7. click_element
Click a page element
```json
{
  "name": "click_element",
  "arguments": {
    "selector": "#submit-button"
  }
}
```
### 8. fill_input
Fill an input field
```json
{
  "name": "fill_input",
  "arguments": {
    "selector": "#username",
    "text": "myusername"
  }
}
```
### 9. wait_for_element
Wait for an element to appear
```json
{
  "name": "wait_for_element",
  "arguments": {
    "selector": ".result-item",
    "timeout": 30
  }
}
```
### 10. execute_javascript
Execute JavaScript code
```json
{
  "name": "execute_javascript",
  "arguments": {
    "script": "return document.title"
  }
}
```
### 11. take_screenshot
Capture a page screenshot
```json
{
  "name": "take_screenshot",
  "arguments": {
    "path": "screenshot.png"
  }
}
```
## Advanced Usage
### Using the BrowserTools class directly
```python
from mcp_browser_tools.browser_tools import BrowserTools
import asyncio

async def main():
    async with BrowserTools() as tools:
        # Navigate to a site
        await tools.navigate_to_url("https://example.com")
        # Get the page content
        content = await tools.get_page_content()
        print(content["title"])
        # Click a button
        await tools.click_element("#submit")
        # Fill a form field
        await tools.fill_input("#name", "John Doe")
        # Wait for the result
        await tools.wait_for_element(".success-message")

asyncio.run(main())
```
### Connecting an SSE client
```python
import aiohttp
import asyncio
import json

async def connect_sse():
    # Connect to the SSE endpoint
    async with aiohttp.ClientSession() as session:
        async with session.get("http://localhost:8000/mcp-sse") as response:
            async for line in response.content:
                line = line.decode('utf-8').strip()
                if line.startswith("data: "):
                    data = json.loads(line[6:])
                    print(f"Server event: {data}")

asyncio.run(connect_sse())
```
### Bidirectional communication over SSE
```python
import asyncio
from sse_client_example import MCPClient

async def main():
    client = MCPClient("http://localhost:8000")
    await client.connect()
    # List the available tools
    await client.list_tools()
    # Call a tool
    await client.call_tool("navigate_to_url", {
        "url": "https://example.com"
    })
    await client.disconnect()

asyncio.run(main())
```
### Configuring the transport mode
```python
from mcp_browser_tools.config import ServerConfig

# Create an SSE configuration
config = ServerConfig(
    transport_mode="sse",
    sse_host="0.0.0.0",
    sse_port=8000
)
# Run the SSE server
# await main()
```
### Executing JavaScript
```python
result = await tools.execute_javascript("return window.location.href")
print(result)
```
### Taking screenshots
```python
await tools.take_screenshot("page.png")
```
## Configuration
### Server configuration
On startup the server prints its full configuration, making it easy to restart later with the same settings.
#### Environment variables
Server parameters can be configured through environment variables:
```bash
# Basic server information
export MCP_SERVER_NAME="mcp-browser-tools"
export MCP_SERVER_VERSION="0.2.3"
export MCP_LOG_LEVEL="INFO"
# Transport mode
export MCP_TRANSPORT_MODE="sse"  # or "stdio"
# SSE server settings
export MCP_SSE_HOST="localhost"
export MCP_SSE_PORT="8000"
```
#### Configuration objects
```python
from mcp_browser_tools.config import ServerConfig

# SSE (default)
config = ServerConfig(
    transport_mode="sse",
    sse_host="localhost",
    sse_port=8000
)
# stdio
config = ServerConfig(
    transport_mode="stdio"
)
```
### Custom browser launch arguments
```python
from mcp_browser_tools.browser_tools import BrowserTools
from playwright.async_api import async_playwright

async with BrowserTools() as tools:
    tools.browser = await tools.playwright.chromium.launch(
        headless=True,
        args=[
            '--no-sandbox',
            '--disable-setuid-sandbox',
            '--disable-dev-shm-usage'
        ]
    )
```
### Setting the user agent
```python
tools.context = await tools.browser.new_context(
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
)
```
## Use Cases
### 1. Web scraping
```json
[
  {
    "name": "navigate_to_url",
    "arguments": { "url": "https://news.ycombinator.com" }
  },
  { "name": "get_page_content", "arguments": {} },
  { "name": "get_page_title", "arguments": {} }
]
```
### 2. Automated form filling
```json
[
  {
    "name": "navigate_to_url",
    "arguments": { "url": "https://example.com/login" }
  },
  {
    "name": "fill_input",
    "arguments": { "selector": "#username", "text": "user" }
  },
  {
    "name": "fill_input",
    "arguments": { "selector": "#password", "text": "pass" }
  },
  { "name": "click_element", "arguments": { "selector": "#login-button" } }
]
```
### 3. Waiting for dynamic content
```json
[
  {
    "name": "navigate_to_url",
    "arguments": { "url": "https://dynamic-site.com" }
  },
  {
    "name": "wait_for_element",
    "arguments": { "selector": ".dynamic-content", "timeout": 60 }
  },
  { "name": "get_page_content", "arguments": {} }
]
```
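JSON tool sequences like these can be driven by a small dispatcher. The helper below is a hypothetical sketch, not shipped with the package; it simply maps each step's `name` to a method on a tools object:

```python
import asyncio

async def run_sequence(tools, steps):
    """Execute a list of {"name": ..., "arguments": {...}} steps in order."""
    results = []
    for step in steps:
        handler = getattr(tools, step["name"])      # e.g. navigate_to_url
        results.append(await handler(**step["arguments"]))
    return results
```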
## Development
### Installing development dependencies
```bash
uv add --dev pytest pytest-asyncio black isort mypy
```
### Running the tests
```bash
pytest
```
### Formatting
```bash
black mcp_browser_tools/
isort mcp_browser_tools/
```
### Type checking
```bash
mypy mcp_browser_tools/
```
## License
MIT License
## Contributing
Issues and pull requests are welcome!
## Changelog
### v0.2.3
- **Version bump**: upgraded from 0.2.2 to 0.2.3
- **Configuration output**: the server prints its full configuration on startup
- **Environment variable support**: server parameters can be set via environment variables
- **Basic SSE server**: provides baseline SSE server functionality
- **Documentation**: updated version information throughout the docs
- **Note**: SSE mode currently offers only basic functionality; stdio mode is recommended for the full feature set
### v0.2.2
- **SSE (Server-Sent Events) is now the default transport**
- Fixed SSE server startup issues so the server starts reliably
- Fixed an HTTP method error; the SSE endpoint now correctly uses GET
- Improved SSE server thread management to avoid blocking the main event loop
- Updated all related documentation and example code
### v0.2.1
- Added SSE (Server-Sent Events) transport support
- Implemented a dual-protocol architecture with stdio and SSE transport modes
- **Made SSE the default transport** for a better real-time communication experience
- Added SSE server endpoints and WebSocket bidirectional communication
- Provided a complete SSE client example
- Fixed the entry-point configuration, resolving the coroutine warning from the uvx command
- Updated the dependency configuration, replacing the deprecated `tool.uv.dev-dependencies` with `dependency-groups.dev`
- Improved UTF-8 encoding support, ensuring all files use UTF-8 correctly
### v0.2.0
- Added complete error handling and retry logic
- Improved page content extraction with support for custom extraction rules
- Optimized browser performance and memory usage
- Added detailed logging and debug information
- Extended configuration management with support for custom browser settings
### v0.1.0
- Initial release
- Basic browser automation features
- MCP server implementation
| text/markdown | null | Your Name <your.email@example.com> | null | null | MIT | ai, automation, browser, mcp, scraping, web | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"beautifulsoup4>=4.12.0",
"fastapi>=0.104.0",
"httpx>=0.25.0",
"lxml>=4.9.0",
"mcp>=1.0.0",
"playwright>=1.40.0",
"pydantic>=2.0.0",
"uvicorn>=0.24.0",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/K-Summer/mcp-browser-tools",
"Repository, https://github.com/K-Summer/mcp-browser-tools",
"Issues, https://github.com/K-Summer/mcp-browser-tools/issues"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T07:58:29.352165 | mcp_browser_tools-0.3.0-py3-none-any.whl | 34,400 | a2/51/0b0b5bb8c64cce8eda73f201d3a4a3d248f317561f74235ac6fa92150f47/mcp_browser_tools-0.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 42f3655ea9bc17488232202306b423cf | 1b4665d1abb085940b843f2ffa6ba688eec767faa83fd89e25430f595920fe7d | a2510b0b5bb8c64cce8eda73f201d3a4a3d248f317561f74235ac6fa92150f47 | null | [
"LICENSE"
] | 226 |
2.4 | audeer | 2.4.0 | Helpful Python functions | ======
audeer
======
|tests| |coverage| |docs| |python-versions| |license|
The Python package **audeer** collects small tools and functions
that deal with common tasks.
For example, it incorporates functions for handling file paths,
using multi-threading, or showing progress bars.
The package is lightweight,
and has the small tqdm_ package
as its only external dependency.
Have a look at the installation_ and usage_ instructions as a starting point.
Code example,
that lists all WAV files in the ``data`` folder:
.. code-block:: python
import audeer
files = audeer.list_file_names("data", filetype="wav")
.. _tqdm: https://tqdm.github.io/
.. _installation: https://audeering.github.io/audeer/installation.html
.. _usage: https://audeering.github.io/audeer/usage.html
.. badges images and links:
.. |tests| image:: https://github.com/audeering/audeer/workflows/Test/badge.svg
:target: https://github.com/audeering/audeer/actions?query=workflow%3ATest
:alt: Test status
.. |coverage| image:: https://codecov.io/gh/audeering/audeer/branch/main/graph/badge.svg?token=PUA9P2UJW1
:target: https://codecov.io/gh/audeering/audeer
:alt: code coverage
.. |docs| image:: https://img.shields.io/pypi/v/audeer?label=docs
:target: https://audeering.github.io/audeer/
:alt: audeer's documentation
.. |license| image:: https://img.shields.io/badge/license-MIT-green.svg
:target: https://github.com/audeering/audeer/blob/main/LICENSE
:alt: audeer's MIT license
.. |python-versions| image:: https://img.shields.io/pypi/pyversions/audeer.svg
:target: https://pypi.org/project/audeer/
:alt: audeer's supported Python versions
| text/x-rst | null | Hagen Wierstorf <hwierstorf@audeering.com>, Johannes Wagner <jwagner@audeering.com> | null | null | null | Python, tools | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tqdm"
] | [] | [] | [] | [
"repository, https://github.com/audeering/audeer/",
"documentation, https://audeering.github.io/audeer/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:58:10.780503 | audeer-2.4.0.tar.gz | 46,180 | 0b/98/b9b42287497812e6d383de3ebc6f0bf3d966877c678318711b4ba9909505/audeer-2.4.0.tar.gz | source | sdist | null | false | 3f2262685d104820573c5caab57bb6a5 | 77c9635e2c4fa4ccdf5378a835e158ddf8557bf6ec7faf609ce106cd09cb5ece | 0b98b9b42287497812e6d383de3ebc6f0bf3d966877c678318711b4ba9909505 | MIT | [
"LICENSE"
] | 2,115 |
2.1 | rciam-federation-registry-agent | 4.0.1 | A library that connects to ams using argo-ams-library and syncs with MITREid, SimpleSAMLphp and Keycloak | # rciam-federation-registry-agent
**RCIAM Federation Registry Agent** main objective is to sync data between RCIAM Federation Registry and
different identity and access management solutions, such as Keycloak, SATOSA, SimpleSAMLphp and MITREid Connect.
This Python library includes a module named `ServiceRegistryAms/` to pull and publish messages from the ARGO Messaging
Service using the argo-ams-library, an API module named `MitreidConnect/` to communicate with the MITREid API, and an
API module named `Keycloak/` to communicate with the Keycloak API.
The main standalone scripts that are used to deploy updates to the third party services are under `bin/`:
- `deployer_keycloak` for Keycloak
- `deployer_mitreid` for MITREid
- `deployer_ssp` for SimpleSAMLphp
## Installation
First install the packages from the requirements.txt file
```bash
pip install -r requirements.txt
```
Install rciam-federation-registry-agent
```bash
pip install rciam-federation-registry-agent
```
## Usage
### deployer_keycloak
deployer_keycloak requires the path of the config file as an argument
```bash
deployer_keycloak -c example_deployers.config.json
```
### deployer_mitreid
deployer_mitreid requires the path of the config file as an argument
```bash
deployer_mitreid -c example_deployers.config.json
```
### deployer_ssp
deployer_ssp requires the path of the config file as an argument
```bash
deployer_ssp -c example_deployers.config.json
```
## Configuration
An example of the required configuration file can be found in conf/example_deployers.config.json. The different
configuration options are described below.
```json
{
"keycloak": {
"ams": {
"host": "example.host.com",
"project": "ams-project-name-keycloak",
"pull_topic": "ams-topic-keycloak",
"pull_sub": "ams-sub-keycloak",
"token": "ams-token-keycloak",
"pub_topic": "ams-publish-topic-keycloak",
"poll_interval": 1
},
"auth_server": "https://example.com/auth",
"realm": "example",
"client_id": "client ID",
"client_secret": "client secret"
},
"mitreid": {
"ams": {
"host": "example.host.com",
"project": "ams-project-name-mitreid",
"pull_topic": "ams-topic-mitreid",
"pull_sub": "ams-sub-mitreid",
"token": "ams-token-mitreid",
"pub_topic": "ams-publish-topic-mitreid",
"poll_interval": 1
},
"issuer": "https://example.com/oidc",
"refresh_token": "refresh token",
"client_id": "client ID",
"client_secret": "client secret"
},
"ssp": {
"ams": {
"host": "example.host.com",
"project": "ams-project-name-ssp",
"pull_topic": "ams-topic-ssp",
"pull_sub": "ams-sub-ssp",
"token": "ams-token-ssp",
"pub_topic": "ams-publish-topic-ssp",
"poll_interval": 1,
"deployer_name": "1"
},
"metadata_conf_file": "/path/to/ssp/metadata/file.php",
"cron_secret": "SSP cron secret",
"cron_url": "http://localhost/proxy/module.php/cron/cron.php",
"cron_tag": "hourly",
"request_timeout": 100
},
"log_conf": "conf/logger.conf"
}
```
As shown above, there are three main groups, namely Keycloak, MITREid, and SSP, and each group has its own AMS
settings and service-specific configuration values. The only global value is the `log_conf` path, which applies when
you want to use the same logging configuration for all of the deployers. If a deployer needs a different configuration,
add `log_conf` within the scope of "keycloak", "mitreid", or "ssp".
### ServiceRegistryAms
Use ServiceRegistryAms as a manager to pull and publish messages from AMS
```python
import json

from ServiceRegistryAms.PullPublish import PullPublish

with open('config.json') as json_data_file:
    config = json.load(json_data_file)

ams = PullPublish(config)
message = ams.pull(1)
ams.publish(args)  # `args`: the message payload to publish back to AMS
```
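Deployers typically wrap these pull/publish calls in a polling loop. The sketch below is illustrative only (`handle` is a placeholder for message processing), not the actual deployer loop:

```python
import time

def poll_loop(ams, handle, poll_interval=1, max_iterations=None):
    """Pull messages, process each with `handle`, and publish the result."""
    n = 0
    while max_iterations is None or n < max_iterations:
        for message in ams.pull(1):
            ams.publish(handle(message))
        time.sleep(poll_interval)   # matches the config's poll_interval
        n += 1
```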
### Keycloak
Use Keycloak as an API manager to communicate with Keycloak
- First obtain an access token and create the Keycloak API client (`client_credentials_grant` can be found under the `Utils` directory)
```python
access_token = client_credentials_grant(issuer_url, client_id, client_secret)
keycloak_agent = KeycloakClientApi(issuer_url, access_token)
```
- Use the following functions to create, delete, and update a service on Keycloak
```python
response = keycloak_agent.create_client(keycloak_msg)
response = keycloak_agent.update_client(external_id, keycloak_msg)
response = keycloak_agent.delete_client(external_id)
```
### MITREid Connect
Use MITREid Connect as an API manager to communicate with MITREid
- First obtain an access token and create the MITREid API client (`refresh_token_grant` can be found under the `Utils` directory)
```python
access_token = refresh_token_grant(issuer_url, refresh_token, client_id, client_secret)
mitreid_agent = mitreidClientApi(issuer_url, access_token)
```
- Use the following functions to create, delete and update a service on MITREid
```python
response = mitreid_agent.createClient(mitreid_msg)
response = mitreid_agent.updateClientById(external_id, mitreid_msg)
response = mitreid_agent.deleteClientById(external_id)
```
## License
[Apache](http://www.apache.org/licenses/LICENSE-2.0)
| text/markdown | grnet | faai@grnet.gr | null | null | ASL 2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: PHP",
"Programming Language :: Python",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7"
] | [] | https://github.com/rciam/rciam-federation-registry-agent | null | null | [] | [] | [] | [
"argo-ams-library>=0.5.9",
"certifi==2023.5.7",
"chardet==4.0.0",
"idna==3.3",
"oauthlib==3.2.2",
"requests-oauthlib==1.3.1",
"requests==2.27.1",
"types-requests==2.30.0.0",
"urllib3==1.26.9"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.8.20 | 2026-02-20T07:56:27.658307 | rciam_federation_registry_agent-4.0.1.tar.gz | 23,381 | 4d/80/148cf60f5c50461c7e018ce979df2f0e236fdfe787e2b4d3fc23e6b68a6c/rciam_federation_registry_agent-4.0.1.tar.gz | source | sdist | null | false | cd06aa0641187d90e71023e75524f3e9 | 654ecbf28eed3fa21d9d2f6b90b50d4d8b04917ffe363428758951786a410129 | 4d80148cf60f5c50461c7e018ce979df2f0e236fdfe787e2b4d3fc23e6b68a6c | null | [] | 235 |
2.4 | sara-engine | 0.1.7 | A biologically plausible, lightweight Spiking Neural Network engine (CPU-only, No-BP). | # SARA Engine (Liquid Harmony)
**SARA (Spiking Advanced Recursive Architecture)** is a next-generation AI engine (SNN-based) that mimics the biological brain's "power efficiency, event-driven processing, and self-organization."
It completely eliminates the "backpropagation (BP)" and "matrix operations" that modern deep learning (ANNs) rely on, achieving advanced recognition and learning capabilities using **only sparse spike communication**.
It operates on CPU only, without using any GPU.
Current Version: **v0.1.7**
## Features
* **No Backpropagation**: Learns without error backpropagation, using local learning rules (Momentum Delta) and reservoir computing.
* **CPU Only & Lightweight**: Does not require expensive GPU resources. Runs fast on standard CPU environments.
* **Multi-Scale True Liquid Reservoir**: Three parallel reservoir layers with different temporal characteristics (Decay), with recurrent connections within each layer. Achieves short-term memory using information "echo."
* **Rust Acceleration**: Core computation logic is written in Rust for high performance.
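The multi-timescale "echo" can be pictured with leaky traces that decay at different rates. This is a conceptual stdlib sketch with made-up decay constants, not SARA's implementation:

```python
def step_traces(traces, spike_input, decays=(0.9, 0.5, 0.1)):
    """One time step: each trace leaks at its own rate, then absorbs the input.

    Traces with a large decay factor retain a long history, while traces
    with a small factor track only recent spikes -- a crude picture of
    multi-scale short-term memory.
    """
    return [d * t + spike_input for d, t in zip(decays, traces)]
```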
## Installation
```bash
pip install sara-engine
```
## Quick Start
```python
from sara_engine import SaraGPT

# Initialize the brain
brain = SaraGPT(sdr_size=1024)

# Create an input pattern (SDR)
input_sdr = brain.encoder.encode("Hello SARA")

# Think (forward pass)
output_sdr, spikes = brain.forward_step(input_sdr)
print(f"Output Active Neurons: {len(output_sdr)}")
```
## License
MIT License
| text/markdown; charset=UTF-8; variant=GFM | null | Your Name <your.email@example.com> | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/sara_engine"
] | maturin/1.11.5 | 2026-02-20T07:54:58.567756 | sara_engine-0.1.7.tar.gz | 42,613,043 | ee/bc/009a40dea76877e034aea8eb7316195b6d718907bb5d4630f1c114944a43/sara_engine-0.1.7.tar.gz | source | sdist | null | false | 7865da30cd6ccd4ea72b59f5e0ed6af7 | 378cb938160c660c245607b73d937ec77a16d54a22d3c3ec61137abd0e06417c | eebc009a40dea76877e034aea8eb7316195b6d718907bb5d4630f1c114944a43 | null | [] | 238 |
2.4 | httpx-oauth2 | 1.0.6 | Add your description here | # HTTPX-OAuth2
My implementation of an `httpx.BaseTransport` that negotiates an access token and puts it in the request headers before sending it.
# Installation
`pip install httpx-oauth2`
# Usage
The library only needs to be set up once. After that, authentication happens transparently behind `httpx.Client`, meaning **you shouldn't need to change existing httpx code**.
## Imports
```python
import httpx
from httpx_oauth2 import (
OAuthAuthorityClient,
ClientCredentials,
ResourceOwnerCredentials,
AuthenticatingTransportFactory
)
```
## Client Credentials
```python
api_client = httpx.Client(base_url='http://example')
# ============== ADD THIS ==============
oauth_authority = OAuthAuthorityClient(
httpx.Client(base_url='http://localhost:8080/realms/master'),
)
transports = AuthenticatingTransportFactory(oauth_authority)
credentials = ClientCredentials('client-1', 'my-secret', ('scope-1',))
api_client._transport = transports.authenticating_transport(api_client._transport, credentials)
# ===== JUST THIS. NOW USE AS USUAL =====
api_client.get('/users')
```
## Resource Owner (Client Credentials with a technical account)
```python
api_client = httpx.Client(base_url='http://example')
# ============== ADD THIS ==============
oauth_authority = OAuthAuthorityClient(
httpx.Client(base_url='http://localhost:8080/realms/master'),
)
transports = AuthenticatingTransportFactory(oauth_authority)
credentials = ResourceOwnerCredentials('client-3', 'my-secret').with_username_password('user', 'pwd')
api_client._transport = transports.authenticating_transport(api_client._transport, credentials)
# ===== JUST THIS. NOW USE AS USUAL =====
api_client.get('/users')
```
## Token Exchange
```python
api_client = httpx.Client(base_url='http://example')
# ============== ADD THIS ==============
oauth_authority = OAuthAuthorityClient(
httpx.Client(base_url='http://localhost:8080/realms/master'),
)
transports = AuthenticatingTransportFactory(oauth_authority)
credentials = ClientCredentials('client-1', 'my-secret', ('scope-1',))
api_client._transport = transports.token_exchange_transport(
api_client._transport,
credentials,
lambda: flask.request.headers['Authorization'].removeprefix('Bearer ') # A callable that returns the token to be exchanged
)
# ===== JUST THIS. NOW USE AS USUAL =====
api_client.get('/users')
```
## Getting an access token
```python
oauth_authority = OAuthAuthorityClient(
httpx.Client(base_url='http://localhost:8080/realms/master'),
)
credentials = ClientCredentials('client-1', 'my-secret', ('scope-1',))
token = oauth_authority.get_token(credentials)
```
## Cache and Automatic retry
Access tokens are cached, and so are exchanged tokens.
If the `AuthenticatingTransport` sees that the response is a 401 (meaning the token is no longer valid), it will:
- Try to refresh the token with the refresh_token, if supported.
- Request a new token.
- Re-send the request.
## Multithreading
Token negotiation is guarded by a thread synchronization mechanism: if multiple threads need a token at the same time, only one token is negotiated with the authority and shared across all threads.
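The pattern is essentially a lock around a lazily-negotiated cache. A generic sketch (not the library's internals):

```python
import threading

class TokenCache:
    """Negotiate the token at most once, even under concurrent access."""
    def __init__(self, fetch):
        self._fetch = fetch          # callable that talks to the authority
        self._lock = threading.Lock()
        self._token = None

    def get(self):
        with self._lock:             # only one thread negotiates at a time
            if self._token is None:
                self._token = self._fetch()
            return self._token

calls = []
def fetch():
    calls.append(1)                  # count how often the authority is hit
    return "token-abc"

cache = TokenCache(fetch)
threads = [threading.Thread(target=cache.get) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(calls), cache.get())       # prints: 1 token-abc
```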
## But '\_' means it's protected?
Yes. But I haven't found an easier way to let `httpx` build the base transport while still being able to wrap it with custom behavior.
| text/markdown | null | Bastien Exertier <exertier.bastien@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"cachelib>=0.13.0",
"httpx>=0.28.1",
"jwt>=1.3.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:54:41.940820 | httpx_oauth2-1.0.6.tar.gz | 18,735 | 00/73/1b42fb93f9489f144dbc4cc79223e0fb47c9855caa91b5363c93a4b54e80/httpx_oauth2-1.0.6.tar.gz | source | sdist | null | false | fe864d431241695e6648fd25ee7b04ce | d3f3afb3db38cf2cba519ca205337f143872772c8e578e67e2f93dfb63093f33 | 00731b42fb93f9489f144dbc4cc79223e0fb47c9855caa91b5363c93a4b54e80 | null | [
"LICENSE.md"
] | 239 |
2.3 | damagescanner | 1.0.0 | Direct damage assessments for natural hazards | # DamageScanner: direct damage assessments for natural hazards
<img align="right" width="200" alt="Logo" src="https://raw.githubusercontent.com/ElcoK/DamageScanner/main/docs/images/logo-dark.png">
[](https://fair-software.eu)
[](https://github.com/ElcoK/DamageScanner)
[](https://github.com/VU-IVM/DamageScanner/actions/workflows/pytest.yml)
[](https://doi.org/10.5281/zenodo.2551015)
[](https://vu-ivm.github.io/DamageScanner/)
[](https://badge.fury.io/py/damagescanner)
[](https://pypistats.org/packages/damagescanner)
A Python toolkit for direct damage assessments for natural hazards. Although the method was initially developed for flood damage assessments, it can calculate damages for any hazard for which a vulnerability curve (i.e. a one-dimensional relation) is available.
**Please note:** This package is still in its development phase. If you run into any problems, or have suggestions for improvements, please raise an *issue*.
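To make the "one-dimensional relation" concrete, here is an illustrative sketch (plain Python, not the DamageScanner API) of applying a depth-damage curve to a single inundation depth:

```python
# Illustrative sketch only: interpolate a vulnerability curve and scale
# the resulting damage fraction by a maximum damage value.
def interp(x, xs, ys):
    """Piecewise-linear interpolation of the curve, clamped at both ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

depths = [0.0, 0.5, 1.0, 2.0]           # inundation depth (m)
damage_fraction = [0.0, 0.2, 0.5, 1.0]  # fraction of maximum damage
max_damage = 100_000                     # per-asset maximum damage (currency units)

loss = interp(0.75, depths, damage_fraction) * max_damage
print(loss)
```

The same lookup applies per raster cell or per vector asset; the hazard variable need not be water depth.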
## Background
This package is (loosely) based on the original DamageScanner, which calculated potential flood damages based on inundation depth and land use using depth-damage curves in the Netherlands. The DamageScanner was originally developed for the 'Netherlands Later' project [(Klijn et al., 2007)](https://www.rivm.nl/bibliotheek/digitaaldepot/WL_rapport_Overstromingsrisicos_Nederland.pdf). The original land-use classes were based on the Land-Use Scanner in order to evaluate the effect of future land-use change on flood damages.
## Installation
To use `DamageScanner` in your project:
### Using `uv` (recommended)
```bash
uv add damagescanner
```
### Using `pip`
```bash
pip install damagescanner
```
## Development & Testing
To set up a local environment for development or to run tests:
### Using `uv` (recommended)
[uv](https://github.com/astral-sh/uv) is an extremely fast Python package manager and is the preferred way to set up the development environment.
```bash
# Clone the repository
git clone https://github.com/VU-IVM/DamageScanner.git
cd DamageScanner
# Create a virtual environment and install all optional dependencies
uv sync --all-groups
```
### Using Miniconda
If you prefer [Miniconda](https://docs.conda.io/en/latest/miniconda.html), use the provided `environment.yml` file:
```bash
# Add conda-forge channel for extra packages
conda config --add channels conda-forge
# Create environment and activate
conda env create -f environment.yml
conda activate ds-test
```
## Documentation
Please refer to the [documentation](https://vu-ivm.github.io/DamageScanner/) of this project for the full documentation of all functions.
## How to cite
If you use the **DamageScanner** in your work, please cite the package directly:
* Koks. E.E. & de Bruijn, J. (2026). DamageScanner: Python tool for natural hazard damage assessments. Zenodo. http://doi.org/10.5281/zenodo.2551015
Here's an example BibTeX entry:
```
@misc{damagescannerPython,
author = {Koks, E.E. and {de Bruijn}, J.},
title = {DamageScanner: Python tool for natural hazard damage assessments},
year = 2026,
doi = {10.5281/zenodo.2551015},
url = {http://doi.org/10.5281/zenodo.2551015}
}
```
## License
Copyright (C) 2026 Elco Koks & Jens de Bruijn. All versions released under the [MIT license](LICENSE).
| text/markdown | Elco Koks, Jens de Bruijn | Elco Koks <elco.koks@vu.nl>, Jens de Bruijn <jens.de.bruijn@vu.nl> | null | null | MIT License | GIS, natural hazards, damage assessment, remote sensing, raster, vector, geospatial | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: GIS",
"Topic :: Utilities"
] | [] | null | null | <3.15,>=3.12 | [] | [] | [] | [
"numpy>=2.3.0",
"geopandas>=1.0.0",
"rasterio>=1.5.0",
"matplotlib>=3.10.0",
"tqdm>=4.67.0",
"xlrd>=2.0.0",
"pyproj>=3.7.2",
"xarray>=2025.0.0",
"rioxarray>=0.20",
"openpyxl>=2.0.0",
"exactextract>=0.3.0"
] | [] | [] | [] | [
"Homepage, https://github.com/VU-IVM/DamageScanner",
"Documentation, https://vu-ivm.github.io/DamageScanner/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T07:53:18.578767 | damagescanner-1.0.0-py3-none-any.whl | 33,102 | 67/51/3c066f6b3bb7a0bf045fad49a14452605fa450fc6ba0547a5f954a58cb36/damagescanner-1.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b0e1b4c2732ebf775e0d168df8038a01 | b87b89e809f583fe2784fcb489ec5602a85962fc7a68155f52db861218f94316 | 67513c066f6b3bb7a0bf045fad49a14452605fa450fc6ba0547a5f954a58cb36 | null | [] | 285 |
2.4 | testgen-ai | 0.1.8 | The Autonomous QA Agent from Your CLI - AI-powered test generation and execution | # 🚀 TestGen AI
> **The Autonomous QA Agent from Your CLI**
[](https://pypi.org/project/testgen-ai/)
[](https://www.python.org/downloads/)
[](https://JayPatil165.github.io/TestGen-AI/)
[](https://github.com/psf/black)
TestGen AI is a CLI tool that automatically generates, runs, and reports on test suites for your code using LLMs. Point it at a directory, set an API key, and let it handle the rest.
---
## 📦 Installation
```bash
pip install testgen-ai
```
Verify:
```bash
testgen --version
```
---
## ⚡ Quick Start
### 1. Set your API key (one-time)
```bash
# Google Gemini (default — has a free tier)
testgen config set GEMINI_API_KEY AIza...
# OpenAI
testgen config set OPENAI_API_KEY sk-...
# Anthropic / Claude
testgen config set ANTHROPIC_API_KEY sk-ant-...
# Ollama (local, no key needed)
testgen config set LLM_PROVIDER ollama
```
Keys are saved globally to `~/.testgen/.env` and apply to every project automatically.
### 2. Generate tests
```bash
testgen generate ./src
```
### 3. Run tests
```bash
testgen test
```
### 4. Generate a report
```bash
testgen report
```
### 5. Or do everything in one shot
```bash
testgen auto ./src
```
---
## ✨ Features
- 🤖 **AI-Powered Generation** — Uses GPT-4, Claude, Gemini, or local Ollama to write real test cases, not boilerplate
- 🌍 **14 Languages** — Python, JavaScript, TypeScript, Go, Rust, Java, C#, Ruby, PHP, Swift, Kotlin, C++, HTML, CSS
- 👀 **Watch Mode** — Detects file saves and regenerates tests in real time (`--watch`)
- 📊 **Terminal Dashboard** — Color-coded test matrix with pass/fail/skip/duration per test
- 📈 **HTML Reports** — Professional reports with metrics, coverage insights, and execution distributions
- ⚡ **Smart Context** — AST-based extraction sends only relevant code to the LLM, keeping token costs low
- 🔄 **One-command Workflow** — `testgen auto` runs the full generate → execute → report pipeline
---
## 🎯 The AGER Architecture
TestGen AI operates on a 4-step loop:
```
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Analyze │────▶│ Generate │────▶│ Execute │────▶│ Report │
│ (Scanner)│ │ (Brain) │ │ (Runner) │ │ (Visuals)│
└──────────┘ └──────────┘ └──────────┘ └──────────┘
```
| Phase | What happens |
|-------|-------------|
| **Analyze** | Scans your directory, extracts function signatures and docstrings to build minimal context |
| **Generate** | Sends context to your LLM and receives executable test code |
| **Execute** | Runs the generated tests via the appropriate framework (pytest, Jest, cargo test, etc.) |
| **Report** | Renders a live terminal matrix and compiles an HTML report |
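The Analyze phase above can be sketched with the stdlib `ast` module (an assumed simplification of how signature and docstring extraction might work, not TestGen AI's actual code):

```python
# Sketch: collect function names, parameters, and docstrings so only the
# minimal relevant context is sent to the LLM.
import ast

def extract_context(source: str) -> list[dict]:
    tree = ast.parse(source)
    out = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            out.append({
                "name": node.name,
                "args": [a.arg for a in node.args.args],
                "doc": ast.get_docstring(node),
            })
    return out

ctx = extract_context('def add(a, b):\n    "Sum two numbers."\n    return a + b\n')
print(ctx)  # prints [{'name': 'add', 'args': ['a', 'b'], 'doc': 'Sum two numbers.'}]
```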
---
## 🎨 CLI Commands
| Command | What it does |
|---------|-------------|
| `testgen generate <path>` | Generate tests for all source files in `<path>` |
| `testgen generate <path> --watch` | Watch mode — regenerate on every save |
| `testgen test` | Run all generated tests |
| `testgen test --verbose` | Run with full output |
| `testgen report` | Build an HTML test report |
| `testgen auto <path>` | Full pipeline: generate → test → report |
| `testgen config set KEY VALUE` | Save a config value globally |
| `testgen config show` | Print current global config |
---
## 📊 Terminal Output
```
╔══════════════════════════════════════════════════════════════════╗
║ TEST EXECUTION MATRIX ║
╠═══════════════════════════════╦══════════╦══════════╦════════════╣
║ Test Name ║ Status ║ Duration ║ Details ║
╠═══════════════════════════════╬══════════╬══════════╬════════════╣
║ test_user_login ║ ✔ PASS ║ 0.24s ║ ║
║ test_user_registration ║ ✔ PASS ║ 0.31s ║ ║
║ test_password_validation ║ ✘ FAIL ║ 0.12s ║ AssertionE…║
║ test_database_connection ║ ✔ PASS ║ 5.01s ║ [SLOW] ║
║ test_api_endpoint_users ║ ✔ PASS ║ 0.89s ║ ║
╚═══════════════════════════════╩══════════╩══════════╩════════════╝
Summary: 4 passed, 1 failed, 0 skipped | Total: 6.57s
```
---
## 🌍 Supported Languages
| Language | Test Framework Used |
|----------|-------------------|
| Python | pytest |
| JavaScript | Jest |
| TypeScript | Jest |
| Go | go test |
| Rust | cargo test |
| Java | JUnit |
| C# | NUnit / xUnit |
| Ruby | RSpec |
| PHP | PHPUnit |
| Swift | XCTest |
| Kotlin | JUnit |
| C++ | Google Test |
| HTML | Custom HTML validator |
| CSS | Custom CSS linter |
> The target language's runtime (Node, Go, JDK, etc.) must be installed on your machine. `pip install testgen-ai` only installs the TestGen AI tool itself.
---
## ⚙️ Configuration
Config can be set globally (persists across all projects):
```bash
testgen config set GEMINI_API_KEY AIza...
testgen config set LLM_MODEL gemini-2.0-flash
testgen config set LLM_PROVIDER gemini
```
Or per-project via a `.env` file in the project root (overrides global):
```env
OPENAI_API_KEY=sk-project-key
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
TEST_OUTPUT_DIR=./tests
MAX_CONTEXT_TOKENS=8000
```
### Supported Providers
| Provider | Key name | Default model |
|----------|---------|---------------|
| Google Gemini | `GEMINI_API_KEY` | `gemini-2.0-flash` |
| OpenAI | `OPENAI_API_KEY` | `gpt-4o` |
| Anthropic | `ANTHROPIC_API_KEY` | `claude-3-5-sonnet` |
| Ollama (local) | *(none)* | `llama3` |
---
## 🔧 Optional: Browser / UI Testing
For Playwright-based UI test generation:
```bash
pip install testgen-ai[browser]
playwright install
```
The `playwright install` step downloads browser binaries and is a one-time setup per machine.
---
## 🛠️ Technology Stack
| Component | Technology |
|-----------|-----------|
| Language | Python 3.10+ |
| CLI Framework | Typer |
| Terminal UI | Rich |
| AI Layer | LiteLLM (model-agnostic) |
| Validation | Pydantic |
| File Watching | Watchdog |
| Testing Core | pytest |
| UI Testing | Playwright (optional) |
| Reporting | Jinja2 |
---
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/my-feature`
3. Make your changes
4. Run tests: `pytest`
5. Commit: `git commit -m "Add my feature"`
6. Push: `git push origin feature/my-feature`
7. Open a Pull Request
---
## 📚 Documentation
Full documentation: **[JayPatil165.github.io/TestGen-AI](https://JayPatil165.github.io/TestGen-AI/)**
---
## 📧 Contact
- **Author**: Jay Ajitkumar Patil
- **Email**: [patiljay32145@gmail.com](mailto:patiljay32145@gmail.com)
- **GitHub**: [@JayPatil165](https://github.com/JayPatil165)
- **LinkedIn**: [jay-patil-4ab857326](https://www.linkedin.com/in/jay-patil-4ab857326/)
- **Issues**: [GitHub Issues](https://github.com/JayPatil165/TestGen-AI/issues)
---
<p align="center">
<strong>⭐ Star this repo if TestGen AI saves you time! ⭐</strong><br>
Made with ❤️ by developers, for developers
</p>
| text/markdown | null | Jay Ajitkumar Patil <patiljay32145@gmail.com> | null | null | null | testing, qa, ai, llm, test-generation, automation, pytest, tdd | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Testing",
"Topic :: Software Development :: Quality Assurance"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer[all]>=0.9.0",
"rich>=13.0.0",
"litellm>=1.0.0",
"pydantic>=2.0.0",
"pydantic-settings>=2.0.0",
"watchdog>=3.0.0",
"pytest>=7.0.0",
"pytest-json-report>=1.5.0",
"tiktoken>=0.7.0",
"jinja2>=3.1.0",
"google-genai>=1.0.0",
"pytest-mock>=3.12.0",
"playwright>=1.40.0; extra == \"browser\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/JayPatil165/TestGen-AI",
"Repository, https://github.com/JayPatil165/TestGen-AI",
"Issues, https://github.com/JayPatil165/TestGen-AI/issues",
"Documentation, https://JayPatil165.github.io/TestGen-AI/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:52:35.200003 | testgen_ai-0.1.8.tar.gz | 167,699 | 34/63/77a13f6bf2ac6ce5dcf7353b9521982849ae2cebc74909b5ba2d7436e472/testgen_ai-0.1.8.tar.gz | source | sdist | null | false | efe2364699727842853d67d3ca35b2da | fee15f37ba614b47a26624d9d557604bccd480e4a0c2812f7a241990c5720835 | 346377a13f6bf2ac6ce5dcf7353b9521982849ae2cebc74909b5ba2d7436e472 | null | [] | 229 |
2.4 | synapse-sdk | 2026.1.46 | synapse sdk | # Synapse SDK v2
> To be merged into [synapse-sdk](https://github.com/datamaker-kr/synapse-sdk) after development
## Table of Contents
- [Installation](#installation)
- [Migration Guide](#migration-guide)
- [Plugin Utils](#plugin-utils)
- [Plugin Types](#plugin-types)
- [Pre-Annotation Actions](#pre-annotation-actions)
- [Plugin Discovery](#plugin-discovery)
- [Storage Utils](#storage-utils)
- [Dataset Converters](#dataset-converters)
- [API Reference](#api-reference)
- [get_plugin_actions](#get_plugin_actions)
- [get_action_method](#get_action_method)
- [get_action_config](#get_action_config)
- [read_requirements](#read_requirements)
- [run_plugin](#run_plugin)
- [PluginDiscovery](#plugindiscovery)
- [Storage](#storage)
- [Action Base Classes](#action-base-classes)
- [BaseTrainAction](#basetrainaction)
- [BaseExportAction](#baseexportaction)
- [BaseUploadAction](#baseuploadaction)
- [Agent Client Streaming](#agent-client-streaming)
---
## Installation
```bash
pip install synapse-sdk
```
---
## Migration Guide
### Plugin Utils
**Old (synapse-sdk v1):**
```python
from synapse_sdk.plugins.utils import get_action_class, get_plugin_actions, read_requirements
# Get run method by loading the action class
action_method = get_action_class(config['category'], action).method
```
**New (synapse-sdk v2):**
```python
from synapse_sdk.plugins.utils import get_action_method, get_plugin_actions, read_requirements
# Get run method directly from config (no class loading needed)
action_method = get_action_method(config, action)
```
### Plugin Types
**Old:**
```python
from synapse_sdk.plugins.enums import PluginCategory
from synapse_sdk.plugins.base import RunMethod
```
**New:**
```python
from synapse_sdk.plugins.enums import PluginCategory, RunMethod
```
**Provider renames:**
- `file_system` -> `local` (alias `file_system` still works)
- `FileSystemStorage` -> `LocalStorage`
- `GCPStorage` -> `GCSStorage`
### Pre-Annotation Actions
**Old (synapse-sdk v1):**
```python
from synapse_sdk.plugins.categories.pre_annotation.actions.to_task import ToTaskAction
class AnnotationToTask:
def convert_data_from_file(...):
...
def convert_data_from_inference(...):
...
action = ToTaskAction(run=run_instance, params=params)
result = action.start()
```
**New (synapse-sdk v2):**
```python
from synapse_sdk.plugins.actions.to_task import ToTaskAction
class ToTask(ToTaskAction):
action_name = 'to_task'
def convert_data_from_file(...):
...
def convert_data_from_inference(...):
...
```
Update `config.yaml` to use `to_task` and point the entrypoint to your `ToTask` subclass.
### Dataset Converters
**Old (synapse-sdk v1):**
```python
from synapse_sdk.utils.converters import get_converter, FromDMToYOLOConverter
```
**New (synapse-sdk v2):**
```python
from synapse_sdk.utils.converters import get_converter, FromDMToYOLOConverter
# Factory function for all format conversions
converter = get_converter('dm_v2', 'yolo', root_dir='/data/dm_dataset', is_categorized=True)
converter.convert()
converter.save_to_folder('/data/yolo_output')
# Supported format pairs:
# - DM (v1/v2) ↔ YOLO
# - DM (v1/v2) ↔ COCO
# - DM (v1/v2) ↔ Pascal VOC
```
**Breaking change**: the previous converter import paths no longer work; import from `synapse_sdk.utils.converters` instead. For backward compatibility, re-exports are available through `synapse_sdk.plugins.datasets`.
**API changes**:
- Parameter `is_categorized_dataset` renamed to `is_categorized`
- `root_dir` is now a `Path` object (but str still accepted)
- Added `DatasetFormat` enum for type-safe format specification
**Example: Convert DM v2 to YOLO with splits**:
```python
from synapse_sdk.utils.converters import get_converter
converter = get_converter(
source='dm_v2',
target='yolo',
root_dir='/data/dm_dataset',
is_categorized=True, # has train/valid/test splits
)
# Perform conversion
result = converter.convert()
# Save to output directory
converter.save_to_folder('/data/yolo_output')
```
**Example: Convert YOLO to DM v2**:
```python
converter = get_converter(
source='yolo',
target='dm_v2',
root_dir='/data/yolo_dataset',
is_categorized=False,
)
converter.convert()
converter.save_to_folder('/data/dm_output')
```
---
## API Reference
### get_plugin_actions
Extract action names from plugin configuration.
```python
from synapse_sdk.plugins.utils import get_plugin_actions
# From dict
actions = get_plugin_actions({'actions': {'train': {}, 'export': {}}})
# Returns: ['train', 'export']
# From PluginConfig
actions = get_plugin_actions(plugin_config)
# From path
actions = get_plugin_actions('/path/to/plugin') # reads config.yaml
```
### get_action_method
Get the execution method (job/task/serve_application) for an action.
```python
from synapse_sdk.plugins.utils import get_action_method
from synapse_sdk.plugins.enums import RunMethod
method = get_action_method(config, 'train')
if method == RunMethod.JOB:
# Create job record, run async
pass
elif method == RunMethod.TASK:
# Run as Ray task
pass
```
### get_action_config
Get full configuration for a specific action.
```python
from synapse_sdk.plugins.utils import get_action_config
config = get_action_config(plugin_config, 'train')
# Returns: {'name': 'train', 'method': 'job', 'entrypoint': '...', ...}
```
### read_requirements
Parse a requirements.txt file.
```python
from synapse_sdk.plugins.utils import read_requirements
reqs = read_requirements('/path/to/requirements.txt')
# Returns: ['numpy>=1.20', 'torch>=2.0'] or None if file doesn't exist
```
### run_plugin
Execute plugin actions with automatic discovery.
```python
from synapse_sdk.plugins.runner import run_plugin
# Auto-discover from Python module path
result = run_plugin('plugins.yolov8', 'train', {'epochs': 10})
# Auto-discover from config.yaml path
result = run_plugin('/path/to/plugin', 'train', {'epochs': 10})
# Execution modes
result = run_plugin('plugin', 'train', params, mode='local') # Current process (default)
result = run_plugin('plugin', 'train', params, mode='task') # Ray Actor (fast startup)
job_id = run_plugin('plugin', 'train', params, mode='job') # Ray Job API (async)
# Explicit action class (skips discovery)
result = run_plugin('yolov8', 'train', {'epochs': 10}, action_cls=TrainAction)
```
**Option 1: Define actions with `@action` decorator (recommended for Python modules):**
```python
# plugins/yolov8.py
from synapse_sdk.plugins.decorators import action
from pydantic import BaseModel
class TrainParams(BaseModel):
epochs: int = 10
batch_size: int = 32
@action(name='train', description='Train YOLOv8 model', params=TrainParams)
def train(params: TrainParams, ctx):
# Training logic here
return {'accuracy': 0.95}
@action(name='infer')
def infer(params, ctx):
# Inference logic
return {'predictions': [...]}
# Run it:
# run_plugin('plugins.yolov8', 'train', {'epochs': 20})
```
**Option 2: Define actions with `BaseAction` class:**
```python
# plugins/yolov8.py
from synapse_sdk.plugins.action import BaseAction
from pydantic import BaseModel
class TrainParams(BaseModel):
epochs: int = 10
class TrainAction(BaseAction[TrainParams]):
action_name = 'train'
params_model = TrainParams
def execute(self):
# self.params contains validated TrainParams
# self.ctx contains RuntimeContext (logger, env, job_id)
return {'accuracy': 0.95}
# Run it:
# run_plugin('plugins.yolov8', 'train', {'epochs': 20})
```
**Option 3: Define actions with `config.yaml` (recommended for packaged plugins):**
```yaml
# plugin/config.yaml
name: YOLOv8 Plugin
code: yolov8
version: 1.0.0
category: neural_net
description: YOLOv8 object detection plugin
actions:
train:
entrypoint: plugin.train.TrainAction # or plugin.train:TrainAction
method: job
description: Train YOLOv8 model
infer:
entrypoint: plugin.inference.InferAction
method: task
description: Run inference
export:
entrypoint: plugin.export.export_model
method: task
```
```python
# Run from config path:
run_plugin('/path/to/plugin', 'train', {'epochs': 20})
```
**Entrypoint formats:**
- Dot notation: `plugin.train.TrainAction` (module.submodule.ClassName)
- Colon notation: `plugin.train:TrainAction` (module.submodule:ClassName)
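Both notations can be resolved with `importlib`; a hedged sketch of how such a loader might work (the SDK's actual implementation may differ):

```python
import importlib

def load_entrypoint(entrypoint: str):
    """Resolve 'pkg.mod:Name' or 'pkg.mod.Name' to the named object."""
    if ':' in entrypoint:
        module_path, attr = entrypoint.split(':', 1)
    else:
        module_path, _, attr = entrypoint.rpartition('.')
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Both notations resolve to the same object (demonstrated on the stdlib):
fn1 = load_entrypoint('json.decoder:JSONDecoder')
fn2 = load_entrypoint('json.decoder.JSONDecoder')
print(fn1 is fn2)  # prints True
```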
### PluginDiscovery
Comprehensive plugin introspection.
```python
from synapse_sdk.plugins.discovery import PluginDiscovery
# Load from config.yaml
discovery = PluginDiscovery.from_path('/path/to/plugin')
# Or introspect a Python module
discovery = PluginDiscovery.from_module(my_module)
# Available methods
discovery.list_actions() # ['train', 'export']
discovery.has_action('train') # True
discovery.get_action_method('train') # RunMethod.JOB
discovery.get_action_config('train') # ActionConfig instance
discovery.get_action_class('train') # Loads class from entrypoint
```
### Storage
Storage utilities for working with different storage backends.
**Installation for cloud providers:**
```bash
pip install synapse-sdk[all] # Includes S3, GCS, SFTP support + Ray
```
**Available providers:**
- `local` / `file_system` - Local filesystem
- `s3` / `amazon_s3` / `minio` - S3-compatible storage
- `gcs` / `gs` / `gcp` - Google Cloud Storage
- `sftp` - SFTP servers
- `http` / `https` - HTTP file servers
**Basic usage:**
```python
from synapse_sdk.utils.storage import (
get_storage,
get_pathlib,
get_path_file_count,
get_path_total_size,
)
# Get storage instance
storage = get_storage({
'provider': 'local',
'configuration': {'location': '/data'}
})
# Upload a file
url = storage.upload(Path('/tmp/file.txt'), 'uploads/file.txt')
# Check existence
exists = storage.exists('uploads/file.txt')
# Get pathlib object for path operations
path = get_pathlib(config, '/uploads')
for file in path.rglob('*.txt'):
print(file)
# Get file count and total size
count = get_path_file_count(config, '/uploads')
size = get_path_total_size(config, '/uploads')
```
**Provider configurations:**
```python
# Local filesystem
{'provider': 'local', 'configuration': {'location': '/data'}}
# S3/MinIO
{'provider': 's3', 'configuration': {
'bucket_name': 'my-bucket',
'access_key': 'AKIAIOSFODNN7EXAMPLE',
'secret_key': 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
'region_name': 'us-east-1',
'endpoint_url': 'http://minio:9000', # optional, for MinIO
}}
# Google Cloud Storage
{'provider': 'gcs', 'configuration': {
'bucket_name': 'my-bucket',
'credentials': '/path/to/service-account.json',
}}
# SFTP
{'provider': 'sftp', 'configuration': {
'host': 'sftp.example.com',
'username': 'user',
'password': 'secret', # or 'private_key': '/path/to/id_rsa'
'root_path': '/data',
}}
# HTTP
{'provider': 'http', 'configuration': {
'base_url': 'https://files.example.com/uploads/',
'timeout': 60,
}}
```
---
## Changes from v1
### Breaking Changes
These changes **require code updates** when migrating from v1:
| v1 | v2 | Migration |
|----|----|----|
| `get_action_class(category, action)` | `get_action_method(config, action)` | Pass config dict instead of category string |
| `action_class.method` | `get_action_method(config, action)` | Method is now read from config, not class attribute |
| `@register_action` decorator | Removed | Define actions in `config.yaml` or use `PluginDiscovery.from_module()` |
| `_REGISTERED_ACTIONS` global | Removed | Use `PluginDiscovery` for action introspection |
| `get_storage('s3://...')` URL strings | Dict-only config | Use `get_storage({'provider': 's3', 'configuration': {...}})` |
| `from ... import FileSystemStorage` | `from ... import LocalStorage` | Class renamed |
| `from ... import GCPStorage` | `from ... import GCSStorage` | Class renamed |
| Subclassing `BaseStorage` ABC | Implement `StorageProtocol` | Use structural typing (duck typing) instead of inheritance |
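As a sketch of the protocol-based approach in the last row (method names here are hypothetical; consult the SDK for the actual `StorageProtocol`), structural typing means a storage class needs no inheritance at all:

```python
from pathlib import Path
from typing import Protocol, runtime_checkable

@runtime_checkable
class StorageProtocol(Protocol):
    """Hypothetical protocol: any class with matching methods conforms."""
    def upload(self, src: Path, dst: str) -> str: ...
    def exists(self, path: str) -> bool: ...

class InMemoryStorage:  # no base class -- conforms by structure alone
    def __init__(self):
        self._files = {}
    def upload(self, src: Path, dst: str) -> str:
        self._files[dst] = src.name
        return dst
    def exists(self, path: str) -> bool:
        return path in self._files

store = InMemoryStorage()
print(isinstance(store, StorageProtocol))  # prints True
```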
### Non-Breaking Changes
These changes are **backwards compatible** - existing code continues to work:
| Feature | Notes |
|---------|-------|
| Provider alias `file_system` | Still works, maps to `LocalStorage` |
| Provider aliases `gcp`, `gs` | Still work, map to `GCSStorage` |
| `get_plugin_actions()` | Same API |
| `read_requirements()` | Same API |
| `get_pathlib()` | Same API |
| `get_path_file_count()` | Same API |
| `get_path_total_size()` | Same API |
### New Features in v2
| Feature | Description |
|---------|-------------|
| `PluginDiscovery` | Discover actions from config files or Python modules |
| `PluginDiscovery.from_module()` | Auto-discover `@action` decorators and `BaseAction` subclasses |
| `StorageProtocol` | Protocol-based interface for custom storage implementations |
| `HTTPStorage` provider | New provider for HTTP file servers |
| Plugin Upload utilities | `archive_and_upload()`, `build_and_upload()`, `download_and_upload()` |
| File utilities | `calculate_checksum()`, `create_archive()`, `create_archive_from_git()` |
| `AsyncAgentClient` | Async client with WebSocket/HTTP streaming for job logs |
| `tail_job_logs()` | Stream job logs with protocol auto-selection |
| `BaseTrainAction` | Training base class with dataset/model helpers |
| `BaseExportAction` | Export base class with filtered results helper |
| `BaseUploadAction` | Upload base class with step-based workflow and rollback |
| `i18n` module | Internationalization support for log messages |
| CLI `--lang` option | Language selection for `synapse plugin run` command |
---
## Internationalization (i18n)
Log messages can be displayed in multiple languages. Currently supported: English (`en`) and Korean (`ko`).
### CLI Usage
```bash
# Run with Korean log messages
synapse plugin run train --lang=ko --params '{"epochs": 10}'
# Short form
synapse plugin run train -l ko
# Works with all execution modes
synapse plugin run train --mode local --lang=ko
synapse plugin run train --mode task --lang=ko
synapse plugin run train --mode job --lang=ko
```
### Programmatic Usage
**Using Executors:**
```python
from synapse_sdk.plugins.executors.local import LocalExecutor
from synapse_sdk.plugins.executors.ray.task import RayActorExecutor
from synapse_sdk.plugins.executors.ray.jobs_api import RayJobsApiExecutor
# LocalExecutor with Korean
executor = LocalExecutor(env={'DEBUG': 'true'}, language='ko')
result = executor.execute(TrainAction, {'epochs': 10})
# RayActorExecutor with Korean
executor = RayActorExecutor(
working_dir='/path/to/plugin',
num_gpus=1,
language='ko',
)
# RayJobsApiExecutor with Korean
executor = RayJobsApiExecutor(
dashboard_address='http://localhost:8265',
working_dir='/path/to/plugin',
language='ko',
)
```
**Custom i18n Messages in Plugins:**
Plugin developers can provide multi-language messages using `LocalizedMessage` or dict format:
```python
from synapse_sdk.i18n import LocalizedMessage
# Using LocalizedMessage
msg = LocalizedMessage({
'en': 'Processing {count} files',
'ko': '{count}개의 파일을 처리 중',
})
# Inside an action: log with i18n support (the dict form is accepted directly)
self.ctx.log_message(
LogMessageCode.CUSTOM_MESSAGE,
message={'en': 'Custom message', 'ko': '사용자 정의 메시지'},
level=LogLevel.INFO,
)
```
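Conceptually, a localized message resolves to the translation for the requested language and falls back to English when none exists. A simplified stand-in to illustrate that behavior (not the SDK's actual implementation):

```python
def resolve_message(messages: dict, lang: str, default: str = 'en') -> str:
    """Pick the translation for `lang`, falling back to the default language."""
    return messages.get(lang, messages[default])


msg = {'en': 'Processing {count} files', 'ko': '{count}개의 파일을 처리 중'}
print(resolve_message(msg, 'ko').format(count=3))  # 3개의 파일을 처리 중
print(resolve_message(msg, 'fr').format(count=3))  # no French entry → falls back to English
```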
---
## Action Base Classes
Category-specific base classes that provide helper methods and progress tracking for common workflows.
### BaseTrainAction
For training workflows with dataset/model helpers.
```python
from synapse_sdk.plugins import BaseTrainAction
from pydantic import BaseModel
class TrainParams(BaseModel):
dataset: int
epochs: int = 10
class MyTrainAction(BaseTrainAction[TrainParams]):
action_name = 'train'
params_model = TrainParams
def execute(self) -> dict:
# Helper methods use self.client (from RuntimeContext)
dataset = self.get_dataset() # Uses params.dataset
self.set_progress(1, 3, self.progress.DATASET)
model_path = self._train(dataset)
self.set_progress(2, 3, self.progress.TRAIN)
model = self.create_model(model_path, name='my-model')
self.set_progress(3, 3, self.progress.MODEL_UPLOAD)
return {'model_id': model['id']}
```
**Progress categories:** `DATASET`, `TRAIN`, `MODEL_UPLOAD`
**Helper methods:**
- `get_dataset()` - Fetch dataset using `params.dataset`
- `create_model(path, **kwargs)` - Upload trained model
- `get_model(model_id)` - Retrieve existing model
### BaseExportAction
For export workflows with filtered data retrieval.
```python
from typing import Any
from synapse_sdk.plugins import BaseExportAction
from pydantic import BaseModel
class ExportParams(BaseModel):
filter: dict
output_path: str
class MyExportAction(BaseExportAction[ExportParams]):
action_name = 'export'
params_model = ExportParams
def get_filtered_results(self, filters: dict) -> tuple[Any, int]:
# Override for your target type
return self.client.get_assignments(filters)
def execute(self) -> dict:
results, count = self.get_filtered_results(self.params.filter)
self.set_progress(0, count, self.progress.DATASET_CONVERSION)
for i, item in enumerate(results, 1):
# Process and export item
self.set_progress(i, count, self.progress.DATASET_CONVERSION)
return {'exported': count}
```
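`BaseUploadAction` (listed in the v2 feature table above) is described as providing a step-based workflow with rollback. The general pattern that description names can be sketched generically — this is an illustration of the pattern, not the SDK's actual API:

```python
class StepWorkflow:
    """Run steps in order; on failure, undo completed steps in reverse."""

    def __init__(self):
        self._completed = []

    def run(self, steps):
        # Each step is a (do, undo) pair of callables.
        try:
            for do, undo in steps:
                do()
                self._completed.append(undo)
        except Exception:
            # Roll back everything that succeeded, most recent first.
            for undo in reversed(self._completed):
                undo()
            raise


def fail():
    raise RuntimeError('registry registration failed')


log = []
steps = [
    (lambda: log.append('create archive'), lambda: log.append('delete archive')),
    (lambda: log.append('upload file'),    lambda: log.append('delete upload')),
    (fail,                                 lambda: log.append('noop')),
]
try:
    StepWorkflow().run(steps)
except RuntimeError:
    pass
print(log)  # ['create archive', 'upload file', 'delete upload', 'delete archive']
```

The key property is that a failure midway leaves no partial state behind: only the steps that actually completed are undone, in reverse order.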
| text/markdown | null | datamaker <developer@datamaker.io> | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic>=2.0.0",
"requests>=2.28.0",
"httpx>=0.27.0",
"aiohttp>=3.8.0",
"pyyaml>=6.0.0",
"pyjwt>=2.10.1",
"websocket-client>=1.6.0",
"websockets>=12.0",
"typer>=0.15.0",
"rich>=14.0.0",
"questionary>=2.1.0",
"jinja2>=3.1.0",
"pillow>=10.0.0",
"tqdm>=4.65.0",
"ray[all]==2.50.0; extra == \"all\"",
"universal-pathlib>=0.3.7; extra == \"all\"",
"s3fs>=2024.0.0; extra == \"all\"",
"gcsfs>=2024.0.0; extra == \"all\"",
"sshfs>=2024.0.0; extra == \"all\"",
"mcp>=1.25.0; extra == \"all\"",
"openpyxl>=3.1.0; extra == \"all\"",
"pytest>=7.0.0; extra == \"test\"",
"pytest-asyncio>=0.21.0; extra == \"test\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"pytest-mock>=3.10.0; extra == \"test\"",
"pytest-timeout>=2.1.0; extra == \"test\"",
"pytest-xdist>=3.0.0; extra == \"test\"",
"pytest-html>=3.1.0; extra == \"test\"",
"pytest-json-report>=1.5.0; extra == \"test\"",
"requests-mock>=1.10.0; extra == \"test\"",
"responses>=0.25.0; extra == \"test\"",
"respx>=0.20.0; extra == \"test\"",
"pre-commit; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"pydoc-markdown>=4.8.0; extra == \"docs\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T07:51:52.203420 | synapse_sdk-2026.1.46-py3-none-any.whl | 628,307 | de/3d/667e7c844445f69c08308397fbaac0929c8d256596c355e7fefadcef7958/synapse_sdk-2026.1.46-py3-none-any.whl | py3 | bdist_wheel | null | false | 90cb2f7d4437c87a0e353f8a7afe5dbd | bba5af9b6a275386cc665444884ce8fa2c09ab61085519a2003b05bd517eaeb1 | de3d667e7c844445f69c08308397fbaac0929c8d256596c355e7fefadcef7958 | null | [
"LICENSE"
] | 278 |
2.4 | pulumi-azure | 6.33.0a1771569631 | A Pulumi package for creating and managing Microsoft Azure cloud resources, based on the Terraform azurerm provider. We recommend using the [Azure Native provider](https://github.com/pulumi/pulumi-azure-native) to provision Azure infrastructure. Azure Native provides complete coverage of Azure resources and same-day access to new resources and resource updates. | [](https://github.com/pulumi/pulumi-azure/actions)
[](https://slack.pulumi.com)
[](https://npmjs.com/package/@pulumi/azure)
[](https://pypi.org/project/pulumi-azure)
[](https://badge.fury.io/nu/pulumi.azure)
[](https://pkg.go.dev/github.com/pulumi/pulumi-azure/sdk/v6/go)
[](https://github.com/pulumi/pulumi-azure/blob/master/LICENSE)
# Microsoft Azure Resource Provider
> **_NOTE:_** We recommend using the [Azure Native provider](https://github.com/pulumi/pulumi-azure-native) to provision Azure infrastructure. Azure Native provides complete coverage of Azure resources and same-day access to new resources and resource updates because it is built automatically from the Azure Resource Manager API.
>
> Azure Classic is based on the Terraform azurerm provider. It has fewer resources and resource options and receives new Azure features more slowly than Azure Native. However, Azure Classic remains fully supported for existing usage.
The Azure Classic resource provider for Pulumi lets you use Azure resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/). For a streamlined Pulumi walkthrough, including language runtime installation and Azure configuration, select "Get Started" below.
<div>
<a href="https://www.pulumi.com/docs/get-started/azure" title="Get Started">
<img src="https://www.pulumi.com/images/get-started.svg?" width="120">
</a>
</div>
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:

    npm install @pulumi/azure
or `yarn`:

    yarn add @pulumi/azure
### Python
To use from Python, install using `pip`:

    pip install pulumi_azure
### Go
To use from Go, use `go get` to grab the latest version of the library:

    go get github.com/pulumi/pulumi-azure/sdk/v6
### .NET
To use from .NET, install using `dotnet add package`:

    dotnet add package Pulumi.Azure
## Concepts
The `@pulumi/azure` package provides a strongly-typed means to build cloud applications that create
and interact closely with Azure resources. Resources are exposed for the entire Azure surface area,
including (but not limited to) 'appinsights', 'compute', 'cosmosdb', 'keyvault', and more.
## Configuring credentials
There are a variety of ways credentials may be configured for the Azure provider, appropriate for
different use cases. Refer to the [Azure configuration options](
https://www.pulumi.com/registry/packages/azure/installation-configuration/#configuration-options).
## Reference
For further information, visit [Azure in the Pulumi Registry](https://www.pulumi.com/registry/packages/azure/)
or for detailed API reference documentation, visit [Azure API Docs in the Pulumi Registry](https://www.pulumi.com/registry/packages/azure/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, azure | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-azure"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T07:51:20.095374 | pulumi_azure-6.33.0a1771569631.tar.gz | 5,593,406 | 82/03/f3a495b73381ead3f4083b5dfaebf41b61ac5c94fe70aaceed8dcdd33006/pulumi_azure-6.33.0a1771569631.tar.gz | source | sdist | null | false | 244024e4626ac15431ee8f8611b3a3b9 | 1a357ce133f8f53e031864353abe57f6d36cd82cb7e1eb3765c77d118f8be61f | 8203f3a495b73381ead3f4083b5dfaebf41b61ac5c94fe70aaceed8dcdd33006 | null | [] | 209 |
2.4 | annorefine | 2026.2.20 | Genome annotation refinement using RNA-seq data | # AnnoRefine
[](https://github.com/nextgenusfs/annorefine/actions)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/annorefine/)
[](https://nextgenusfs.github.io/annorefine/)
**High-performance genome annotation refinement toolkit using RNA-seq data**
AnnoRefine is a Rust-based toolkit for refining genome annotations and generating gene prediction hints from RNA-seq evidence. It provides both command-line tools and Python bindings for seamless integration into bioinformatics pipelines.
📖 **[Full Documentation](https://nextgenusfs.github.io/annorefine/)**
## Features
- 🔧 **UTR Refinement** - Extend and trim UTRs based on RNA-seq coverage
- 🔀 **Splice Site Refinement** - Adjust intron boundaries using junction evidence
- 🆕 **Novel Gene Detection** - Discover new genes from RNA-seq data
- 🎯 **Hint Generation** - Convert BAM alignments to Augustus/GeneMark hints
- 📊 **Hint Processing** - Join and filter hints from multiple sources
- ⚡ **High Performance** - Multi-threaded Rust implementation
- 🐍 **Python Bindings** - Easy integration into Python workflows
- 🧭 **Strand-Aware** - Supports all RNA-seq library types (FR, RF, UU)
## Installation
**Python Package (Recommended):**
```bash
pip install annorefine
```
**Standalone Binary:**
Download from [GitHub Releases](https://github.com/nextgenusfs/annorefine/releases)
**Build from Source:**
```bash
git clone https://github.com/nextgenusfs/annorefine.git
cd annorefine
cargo build --release
```
See the [Installation Guide](https://nextgenusfs.github.io/annorefine/guide/installation/) for detailed instructions.
## Quick Start
**Python API:**
```python
import annorefine
# Refine annotations
result = annorefine.refine(
fasta_file="genome.fa",
gff3_file="annotations.gff3",
bam_file="alignments.bam",
output_file="refined.gff3"
)
# Generate hints for gene prediction
result = annorefine.bam2hints(
bam_file="alignments.bam",
output_file="hints.gff",
library_type="RF",
contig_map={'NC_000001.11': 'chr1'} # Optional: rename contigs
)
# Join hints from multiple sources
result = annorefine.join_hints(
input_files=["bam_hints.gff", "protein_hints.gff"],
output_file="joined_hints.gff"
)
```
**Command Line:**
```bash
# Refine annotations
annorefine utrs \
--fasta genome.fa \
--gff3 annotations.gff3 \
--bam alignments.bam \
--output refined.gff3
# Generate hints
annorefine bam2hints \
--in alignments.bam \
--out hints.gff \
--stranded RF
# Join hints
annorefine join-hints \
--input bam_hints.gff protein_hints.gff \
--output joined_hints.gff
```
See the [User Guide](https://nextgenusfs.github.io/annorefine/guide/bam2hints/) for more examples.
## Use Cases
- **Annotation Refinement** - Improve existing gene models with RNA-seq evidence
- **Augustus Gene Prediction** - Generate hints for ab initio gene prediction
- **GeneMark-ETP** - Create intron-only hints for GeneMark
- **funannotate2 Integration** - Seamless integration with gene prediction pipelines
## Documentation
- 📖 [User Guide](https://nextgenusfs.github.io/annorefine/guide/installation/)
- 🐍 [Python API Reference](https://nextgenusfs.github.io/annorefine/api/functions/)
- 💻 [Command Line Reference](https://nextgenusfs.github.io/annorefine/api/overview/)
- 🚀 [funannotate2 Integration](https://nextgenusfs.github.io/annorefine/guide/python/)
## Performance
- **Multi-threaded** - Parallel processing with Rust backend
- **Memory efficient** - Streaming BAM processing
- **Scalable** - Handles mammalian-sized genomes efficiently
**Typical performance:**
- Human genome (~20K genes): 10-30 minutes on 8 cores
- Memory usage: 2-8 GB depending on genome size
## Support
- 📖 [Documentation](https://nextgenusfs.github.io/annorefine/)
- 🐛 [Bug Reports](https://github.com/nextgenusfs/annorefine/issues)
- 💬 [Discussions](https://github.com/nextgenusfs/annorefine/discussions)
## Citation
```
Palmer, J. (2025). AnnoRefine: High-performance genome annotation refinement using RNA-seq data.
GitHub: https://github.com/nextgenusfs/annorefine
```
## License
MIT License - see [LICENSE](LICENSE) file for details.
---
**Built with ❤️ in Rust** | [Documentation](https://nextgenusfs.github.io/annorefine/) | [PyPI](https://pypi.org/project/annorefine/)
| text/markdown; charset=UTF-8; variant=GFM | null | Jon Palmer <nextgenusfs@gmail.com> | null | null | MIT | bioinformatics, genomics, annotation, rna-seq | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | https://github.com/nextgenusfs/annorefine | null | >=3.9 | [] | [] | [] | [
"pytest>=6.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [
"Bug Reports, https://github.com/nextgenusfs/annorefine/issues",
"Source, https://github.com/nextgenusfs/annorefine"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:49:40.345268 | annorefine-2026.2.20-cp39-cp39-manylinux_2_28_x86_64.whl | 7,678,046 | 83/9c/e240c480d45b79d1fd494277a840a5eb242046407e642ece8221c2ef1267/annorefine-2026.2.20-cp39-cp39-manylinux_2_28_x86_64.whl | cp39 | bdist_wheel | null | false | 9854994b82f1436bfb3f700bbbaf584b | 9aef5bc9337596469d3c2c04723da8e3b6fedb5aa59934206d5091db2209e5d2 | 839ce240c480d45b79d1fd494277a840a5eb242046407e642ece8221c2ef1267 | null | [
"LICENSE"
] | 933 |
2.1 | titan-cli | 0.1.12 | Modular development tools orchestrator - Streamline your workflows with AI integration and intuitive terminal UI | # Titan CLI
> Modular development tools orchestrator - Streamline your workflows with AI integration and intuitive terminal UI
Titan CLI is a powerful command-line orchestrator that automates Git, GitHub, and JIRA workflows through an extensible plugin system, with optional AI assistance.
## ✨ Features
- 🔧 **Project Configuration** - Centralized `.titan/config.toml` for project-specific settings
- 🔌 **Plugin System** - Extend functionality with Git, GitHub, JIRA, and custom plugins
- 🎨 **Modern TUI** - Beautiful terminal interface powered by Textual
- 🤖 **AI Integration** - Optional AI assistance (Claude & Gemini) for commits, PRs, and analysis
- ⚡ **Workflow Engine** - Compose atomic steps into powerful automated workflows
- 🔐 **Secure Secrets** - OS keyring integration for API tokens and credentials
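The centralized `.titan/config.toml` mentioned above holds the project-specific settings. A hypothetical layout for illustration only — the actual schema is whatever Titan's setup wizard writes, so treat every key below as an assumption:

```toml
# Hypothetical example — actual keys are defined by Titan's setup wizard
[project]
name = "my-service"

[plugins]
enabled = ["git", "github", "jira"]
```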
## 📦 Installation
### For Users (Recommended)
```bash
# Install with pipx (isolated environment)
pipx install titan-cli
# Verify installation
titan --version
# Launch Titan
titan
```
**Note:** This installs the stable production version. You only get the `titan` command.
### For Contributors (Development Setup)
**See [DEVELOPMENT.md](DEVELOPMENT.md) for complete development setup.**
Quick start:
```bash
# Clone repository
git clone https://github.com/masorange/titan-cli.git
cd titan-cli
# Setup development environment
make dev-install
# Run development version
titan-dev
```
**Note:** Development setup creates a `titan-dev` command that runs from your local codebase, allowing you to test changes immediately. This command is **not available** to end users who install from PyPI.
## 🚀 Quick Start
### First Time Setup
```bash
# Launch Titan (runs setup wizards on first launch)
titan
```
On first run, Titan will guide you through:
1. **Global Setup** - Configure AI providers (optional)
2. **Project Setup** - Enable plugins and configure project settings
### Basic Usage
```bash
# Launch interactive TUI
titan
# Or run specific workflows
titan workflow run <workflow-name>
```
## 🔌 Built-in Plugins
Titan CLI includes three core plugins:
- **Git Plugin** - Smart commits, branch management, AI-powered commit messages
- **GitHub Plugin** - Create PRs with AI descriptions, manage issues, code reviews
- **JIRA Plugin** - Search issues, AI-powered analysis, workflow automation
## 🤖 AI Integration
Titan supports multiple AI providers:
- **Anthropic Claude** (Sonnet, Opus, Haiku)
- **Google Gemini** (Pro, Flash)
Configure during first setup or later via the TUI settings.
## 📚 Documentation
- **Contributing**: See [DEVELOPMENT.md](DEVELOPMENT.md)
- **AI Agent Guide**: See [CLAUDE.md](CLAUDE.md)
- **Release History**: See [GitHub Releases](https://github.com/masorange/titan-cli/releases)
## 🤝 Contributing
Contributions are welcome! See [DEVELOPMENT.md](DEVELOPMENT.md) for:
- Development setup
- Code style guidelines
- Testing requirements
- Architecture overview
## 📄 License
MIT License - see [LICENSE](LICENSE) for details
## 🙏 Acknowledgments
Built with:
- [Typer](https://typer.tiangolo.com/) - CLI framework
- [Textual](https://textual.textualize.io/) - Terminal UI framework
- [Pydantic](https://docs.pydantic.dev/) - Data validation
- [Poetry](https://python-poetry.org/) - Dependency management
| text/markdown | MasOrange Apps Team | apps-management-stores@masorange.es | null | null | MIT | cli, workflow, orchestrator, automation, devtools, ai | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/masorange/titan-cli | null | <4.0.0,>=3.10 | [] | [] | [] | [
"anthropic<0.76.0,>=0.75.0",
"google-auth<3.0.0,>=2.43.0",
"google-genai<2.0.0,>=1.58.0",
"jinja2<4.0.0,>=3.1.4",
"keyring<26.0.0,>=25.7.0",
"packaging<25.0,>=23.0",
"pydantic<3.0.0,>=2.0.0",
"python-dotenv<2.0.0,>=1.2.1",
"pyyaml<7.0.0,>=6.0.3",
"requests<3.0.0,>=2.31.0",
"structlog<26.0.0,>=25.5.0",
"textual<2.0.0,>=1.0.0",
"tomli<3.0.0,>=2.0.0",
"tomli-w<2.0.0,>=1.0.0",
"typer<1.0.0,>=0.20.0"
] | [] | [] | [] | [
"Documentation, https://github.com/masorange/titan-cli",
"Repository, https://github.com/masorange/titan-cli"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T07:49:03.097517 | titan_cli-0.1.12.tar.gz | 273,976 | 4e/36/714fb8fe46d145c5f58b12139012a59ed2e136f18920e7302aff5f71f64e/titan_cli-0.1.12.tar.gz | source | sdist | null | false | 02f05f744008da0c89e06dac746fe065 | 92fd2bdd7ef98da022f40852da4a9bd60074b24fce03bd799a03ae25cc3057da | 4e36714fb8fe46d145c5f58b12139012a59ed2e136f18920e7302aff5f71f64e | null | [] | 244 |
2.2 | ecotorch | 0.2.4 | A lightweight package to measure the ecological and financial effect of training and evaluation of pytorch projects. | # EcoTorch
A lightweight, plug-and-play tool to measure the ecological impact and efficiency of your PyTorch models.
EcoTorch runs in the background while your models learn or get tested. It tracks exactly how much power your machine uses, figures out your carbon footprint based on your location, and gives you a final efficiency score. It works seamlessly across Mac, Windows, and Linux.
## Installation
You can grab the tool directly from PyPI, the public Python package index. Open your terminal and type:
`pip install ecotorch`
## Quick Start
You do not need to rewrite any of your existing work to use EcoTorch. Just wrap your normal learning or testing loops inside the `TrainTracker` and `EvalTracker`.
### Quick example:
```python
import torch
from ecotorch import TrainTracker, EvalTracker, Mode
# Set up your model and data
model = ...
train_loader = ...
test_loader = ...
epoch = ...
# Wrap your train loop in the TrainTracker
with TrainTracker(epochs=epoch, model=model, train_dataloader=train_loader) as train_tracker:
# Training logic...
initial_loss = 2.5
final_loss = 0.5
# Final score
score = train_tracker.calculate_efficiency_score(initial_loss=initial_loss, final_loss=final_loss)
print(f"Efficiency Score: {score}")
# You can track evaluation and inference
with EvalTracker(test_dataloader=test_loader, train_tracker=train_tracker) as eval_tracker:
# Evaluation logic...
acc = 0.9
# Final score
score = eval_tracker.calculate_efficiency_score(accuracy=acc)
print(f"Efficiency Score: {score}")
```
A fully implemented example is available in [testing.py](testing/testing.py)
## How It Works
When you start the tracker, it automatically:
- Finds your location: It checks where you are in the world to find out how clean your local power grid is.
- Reads the power meter: It taps directly into your machine's graphics chip or Apple silicon to read the exact power being drawn.
- Does the math: When the block finishes, it calculates your total energy used (kWh), your emitted carbon (grams of CO2), and a final efficiency score based on how much your model improved versus how much energy it burned.
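The math in that last step can be sketched in a few lines. This is an illustrative approximation, not EcoTorch's actual implementation: the grid intensity constant and the efficiency formula (loss improvement per kWh) are assumptions for demonstration.

```python
# Illustrative sketch of the energy/carbon math (not EcoTorch's real code).
# Assumes power was sampled at a fixed interval and a constant grid intensity.

GRID_INTENSITY_G_PER_KWH = 400.0  # assumed grid carbon intensity (gCO2/kWh)

def energy_kwh(power_samples_w, interval_s):
    """Integrate power samples (watts) into energy in kilowatt-hours."""
    joules = sum(power_samples_w) * interval_s   # W * s = J
    return joules / 3.6e6                        # 1 kWh = 3.6e6 J

def carbon_g(kwh, intensity=GRID_INTENSITY_G_PER_KWH):
    """Convert energy use into grams of emitted CO2."""
    return kwh * intensity

def efficiency_score(initial_loss, final_loss, kwh):
    """Improvement per unit of energy: more loss reduction per kWh is better."""
    return (initial_loss - final_loss) / max(kwh, 1e-9)

samples = [180.0, 210.0, 195.0]          # three 1-second power readings in watts
e = energy_kwh(samples, interval_s=1.0)  # 585 J = 0.0001625 kWh
print(carbon_g(e), efficiency_score(2.5, 0.5, e))
```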
| text/markdown | Leo Nagy | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"datasets>=4.5.0",
"geoip2fast>=1.2.2",
"nanobind>=2.11.0",
"nvidia-ml-py>=13.590.48",
"pandas>=3.0.0",
"pandas-stubs~=3.0.0",
"pycountry>=26.2.16",
"torch>=2.10.0",
"torchvision>=0.25.0"
] | [] | [] | [] | [] | uv/0.7.19 | 2026-02-20T07:46:32.440825 | ecotorch-0.2.4.tar.gz | 2,493,237 | 24/63/54d673a44e2424da0d64b0d37f35f9909febc4ee34652cbcb5b772c802ee/ecotorch-0.2.4.tar.gz | source | sdist | null | false | c18dcaf33f733836480f981ea306c336 | 7a5ce00ff8c32988dff881b83a90dd927bdf7e5956f9d49d002c2403750e9d9e | 246354d673a44e2424da0d64b0d37f35f9909febc4ee34652cbcb5b772c802ee | null | [] | 235 |
2.4 | hardware-connector | 0.2.0 | AI-powered lab instrument connection assistant | # hardware-connector
AI-powered CLI tool that helps engineers connect to lab instruments. It uses an LLM agent to diagnose connection issues, install dependencies, fix permissions, and generate working Python code — all from your terminal.
## How it works
1. Detects your OS, Python environment, USB devices, and VISA backends
2. Loads device-specific knowledge (pinouts, quirks, known errors)
3. Runs an AI agent loop that iteratively diagnoses and fixes connection issues
4. Outputs working Python code that communicates with your instrument
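The overall shape of step 3 is a bounded diagnose-and-fix loop. The sketch below is illustrative only; `diagnose`, `apply_fix`, and `try_connect` are hypothetical stand-ins for the tool's LLM calls and system actions.

```python
# Illustrative shape of the agent loop (not the tool's actual code).
def connect_with_agent(diagnose, apply_fix, try_connect, max_iterations=20):
    """Iteratively diagnose and fix until the instrument responds."""
    for _ in range(max_iterations):
        ok, error = try_connect()
        if ok:
            return "connected"
        fix = diagnose(error)   # ask the LLM what to do about `error`
        apply_fix(fix)          # e.g. install a package, fix permissions
    return "gave up"

# Toy harness: the first attempt fails, the "fix" makes the second succeed.
state = {"fixed": False}
result = connect_with_agent(
    diagnose=lambda err: "install a VISA backend",
    apply_fix=lambda fix: state.update(fixed=True),
    try_connect=lambda: (state["fixed"], None if state["fixed"] else "no backend"),
)
print(result)  # connected
```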
## Supported devices
| Device | Manufacturer | Type |
|--------|-------------|------|
| DS1054Z | Rigol | Oscilloscope |
More devices coming soon.
## Requirements
- Python 3.10+
- An [Anthropic API key](https://console.anthropic.com/)
## Installation
```bash
pip install hardware-connector
```
## Setup
Set your Anthropic API key:
```bash
export ANTHROPIC_API_KEY='sk-ant-...'
```
## Usage
### Connect to a device
```bash
# Auto-detect connected device
hardware-connector connect
# Specify a device
hardware-connector connect --device rigol_ds1054z
# Auto-confirm all actions (no prompts)
hardware-connector connect --device rigol_ds1054z --yes
```
### Other commands
```bash
hardware-connector list-devices # Show supported devices
hardware-connector detect # Show environment info + detected devices
hardware-connector config get # View configuration
hardware-connector config set model claude-sonnet-4-20250514 # Change LLM model
hardware-connector version # Print version
```
### Options
| Flag | Description |
|------|-------------|
| `--device`, `-d` | Device identifier (e.g. `rigol_ds1054z`) |
| `--yes`, `-y` | Auto-confirm all actions |
| `--model`, `-m` | LLM model to use |
| `--max-iterations` | Max agent iterations (default: 20) |
## Configuration
```bash
# Disable telemetry
hardware-connector config set telemetry off
# Change default model
hardware-connector config set model claude-sonnet-4-20250514
```
Model resolution order: `--model` flag → `HARDWARE_AGENT_MODEL` env var → config DB → default.
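That resolution order is a first-match fallback chain. A minimal sketch of the idea, using the documented default model; the function name and `config_db_value` parameter are illustrative, not the tool's internals:

```python
import os

def resolve_model(cli_flag=None, config_db_value=None,
                  default="claude-sonnet-4-20250514"):
    """Return the first model source that is set:
    CLI flag -> HARDWARE_AGENT_MODEL env var -> config DB -> default."""
    for candidate in (cli_flag,
                      os.environ.get("HARDWARE_AGENT_MODEL"),
                      config_db_value):
        if candidate:
            return candidate
    return default

# The CLI flag wins even when the env var is set:
os.environ["HARDWARE_AGENT_MODEL"] = "env-model"
print(resolve_model(cli_flag="flag-model"))  # flag-model
print(resolve_model())                       # env-model
```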
## License
Copyright (c) 2026 Yash Prakash. All rights reserved. See [LICENSE](LICENSE) for details.
| text/markdown | Yash Prakash | null | null | null | null | hardware, lab, instruments, visa, scpi, oscilloscope, test-equipment | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Topic :: System :: Hardware"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.9.0",
"rich>=13.0.0",
"anthropic>=0.39.0",
"supabase>=2.0.0",
"pyvisa>=1.13.0",
"pyvisa-py>=0.7.0",
"pyusb>=1.2.0",
"duckduckgo-search>=6.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"openai>=1.0.0; extra == \"openai\"",
"google-genai>=1.0.0; extra == \"google\"",
"openai>=1.0.0; extra == \"all-providers\"",
"google-genai>=1.0.0; extra == \"all-providers\""
] | [] | [] | [] | [
"Homepage, https://github.com/Yash-Prakash1/connector",
"Repository, https://github.com/Yash-Prakash1/connector",
"Issues, https://github.com/Yash-Prakash1/connector/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T07:45:57.358887 | hardware_connector-0.2.0.tar.gz | 86,397 | c1/02/09758b4013a5e5886716579595f05ad9704a955c1972f709b014114fae9d/hardware_connector-0.2.0.tar.gz | source | sdist | null | false | 4e350c87e1e4675aecefe5c4e5798fe2 | ed152f1986d193557eac8de30b4f4ef383acd345ed4fd7eedaf493f141f789a8 | c10209758b4013a5e5886716579595f05ad9704a955c1972f709b014114fae9d | LicenseRef-Proprietary | [
"LICENSE"
] | 242 |
2.4 | mojentic | 1.2.1 | Mojentic is an agentic framework that aims to provide a simple and flexible way to assemble teams of agents to solve complex problems. | # Mojentic
Mojentic is a framework that provides a simple and flexible way to interact with Large Language Models (LLMs). It offers integration with various LLM providers and includes tools for structured output generation, task automation, and more. With comprehensive support for all OpenAI models including GPT-5 and automatic parameter adaptation, Mojentic handles the complexities of different model types seamlessly. The future direction is to facilitate a team of agents, but the current focus is on robust LLM interaction capabilities.
[](LICENSE.md)
[](https://www.python.org/downloads/)
[](https://svetzal.github.io/mojentic/)
## 🚀 Features
- **LLM Integration**: Support for multiple LLM providers (OpenAI, Ollama)
- **Latest OpenAI Models**: Full support for GPT-5, GPT-4.1, and all reasoning models (o1, o3, o4 series)
- **Automatic Model Adaptation**: Seamless parameter handling across different OpenAI model types
- **Structured Output**: Generate structured data from LLM responses using Pydantic models
- **Tools Integration**: Utilities for date resolution, image analysis, and more
- **Multi-modal Capabilities**: Process and analyze images alongside text
- **Simple API**: Easy-to-use interface for LLM interactions
- **Future Development**: Working towards an agent framework with team coordination capabilities
## 📋 Requirements
- Python 3.11+
- Ollama (for local LLM support)
- Required models: `mxbai-embed-large` for embeddings
## 🔧 Installation
We recommend using [uv](https://docs.astral.sh/uv/) for fast, reliable Python project management.
```bash
# Install from PyPI using uv
uv pip install mojentic
# Or with pip
pip install mojentic
```
Or install from source:
```bash
git clone https://github.com/svetzal/mojentic.git
cd mojentic
# Using uv (recommended)
uv sync
# Or with pip
pip install -e .
```
## 🚦 Quick Start
```python
from mojentic.llm import LLMBroker
from mojentic.llm.gateways import OpenAIGateway, OllamaGateway
from mojentic.llm.gateways.models import LLMMessage
from mojentic.llm.tools.date_resolver import ResolveDateTool
from pydantic import BaseModel, Field
# Initialize with OpenAI (supports all models including GPT-5, GPT-4.1, reasoning models)
openai_llm = LLMBroker(model="gpt-5", gateway=OpenAIGateway(api_key="your_api_key"))
# Or use other models: "gpt-4o", "gpt-4.1", "o1-mini", "o3-mini", etc.
# Or use Ollama for local LLMs
ollama_llm = LLMBroker(model="qwen3:32b")
# Simple text generation
result = openai_llm.generate(messages=[LLMMessage(content='Hello, how are you?')])
print(result)
# Generate structured output
class Sentiment(BaseModel):
label: str = Field(..., description="Label for the sentiment")
sentiment = openai_llm.generate_object(
messages=[LLMMessage(content="Hello, how are you?")],
object_model=Sentiment
)
print(sentiment.label)
# Use tools with the LLM
result = openai_llm.generate(
messages=[LLMMessage(content='What is the date on Friday?')],
tools=[ResolveDateTool()]
)
print(result)
# Image analysis
result = openai_llm.generate(messages=[
LLMMessage(content='What is in this image?', image_paths=['path/to/image.jpg'])
])
print(result)
```
## 🔑 OpenAI configuration
OpenAIGateway now supports environment-variable defaults so you can get started without hardcoding secrets:
- If you omit `api_key`, it will use the `OPENAI_API_KEY` environment variable.
- If you omit `base_url`, it will use the `OPENAI_API_ENDPOINT` environment variable (useful for custom endpoints like Azure/OpenAI-compatible proxies).
- Precedence: values you pass explicitly to `OpenAIGateway(api_key=..., base_url=...)` always override environment variables.
Examples:
```python
from mojentic.llm import LLMBroker
from mojentic.llm.gateways import OpenAIGateway
# 1) Easiest: rely on environment variables
# export OPENAI_API_KEY=sk-...
# export OPENAI_API_ENDPOINT=https://api.openai.com/v1 # optional
llm = LLMBroker(
model="gpt-4o-mini",
gateway=OpenAIGateway() # picks up OPENAI_API_KEY/OPENAI_API_ENDPOINT automatically
)
# 2) Explicitly override one or both values
llm = LLMBroker(
model="gpt-4o-mini",
gateway=OpenAIGateway(api_key="your_key", base_url="https://api.openai.com/v1")
)
```
## 🤖 OpenAI Model Support
The framework automatically handles parameter differences between model types, so you can switch between any models without code changes.
### Model-Specific Limitations
Some models have specific parameter restrictions that are automatically handled:
- **GPT-5 Series**: Only supports `temperature=1.0` (default). Other temperature values are automatically adjusted with a warning.
- **o1 & o4 Series**: Only supports `temperature=1.0` (default). Other temperature values are automatically adjusted with a warning.
- **o3 Series**: Does not support the `temperature` parameter at all. The parameter is automatically removed with a warning.
- **All Reasoning Models** (o1, o3, o4, GPT-5): Use `max_completion_tokens` instead of `max_tokens`, and have limited tool support.
The framework will automatically adapt parameters and log warnings when unsupported values are provided.
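The kind of adaptation described above can be pictured as a pre-flight filter on request parameters. The sketch below only mirrors the rules documented in this section; it is illustrative and not Mojentic's actual implementation.

```python
import warnings

# Illustrative parameter adapter (not Mojentic's real code); rules mirror
# the model-specific restrictions documented above.
def adapt_params(model: str, params: dict) -> dict:
    adapted = dict(params)
    reasoning = model.startswith(("o1", "o3", "o4", "gpt-5"))
    # Reasoning models take max_completion_tokens instead of max_tokens.
    if reasoning and "max_tokens" in adapted:
        adapted["max_completion_tokens"] = adapted.pop("max_tokens")
    if model.startswith("o3"):
        # o3 models reject the temperature parameter entirely.
        if adapted.pop("temperature", None) is not None:
            warnings.warn(f"{model} does not support temperature; removed")
    elif reasoning and adapted.get("temperature", 1.0) != 1.0:
        # GPT-5, o1, and o4 models only accept the default temperature.
        warnings.warn(f"{model} only supports temperature=1.0; adjusted")
        adapted["temperature"] = 1.0
    return adapted

print(adapt_params("gpt-5", {"temperature": 0.2, "max_tokens": 256}))
```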
## 🏗️ Project Structure
```
src/
├── mojentic/ # Main package
│ ├── llm/ # LLM integration (primary focus)
│ │ ├── gateways/ # LLM provider adapters (OpenAI, Ollama)
│ │ ├── registry/ # Model registration
│ │ └── tools/ # Utility tools for LLMs
│ ├── agents/ # Agent implementations (under development)
│ └── context/ # Shared memory and context (under development)
└── _examples/ # Usage examples
```
The primary focus is currently on the `llm` module, which provides robust capabilities for interacting with various LLM providers.
## 📚 Documentation
Visit [the documentation](https://svetzal.github.io/mojentic/) for comprehensive guides, API reference, and examples.
## 🧪 Development
```bash
# Clone the repository
git clone https://github.com/svetzal/mojentic.git
cd mojentic
# Using uv (recommended)
uv sync --extra dev
# Or with pip
pip install -e ".[dev]"
# Run tests
pytest
# Quality checks
flake8 src # Linting
bandit -r src # Security scan
pip-audit # Dependency vulnerabilities
```
## ✅ Project Status
The agentic aspects of this framework are in the highest state of flux. The first layer has stabilized, as have the simpler parts of the second layer, and we're working on the stability of the asynchronous pubsub architecture. We expect Python 3.14 will be the real enabler for the async aspects of the second layer.
## 📄 License
This code is Copyright 2025 Mojility, Inc. and is freely provided under the terms of the [MIT license](LICENSE.md).
| text/markdown | null | Stacey Vetzal <stacey@vetzal.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.12.5",
"structlog>=25.5.0",
"numpy>=2.4.2",
"ollama>=0.6.1",
"openai>=2.21.0",
"anthropic>=0.83.0",
"tiktoken>=0.12.0",
"parsedatetime>=2.6",
"pytz>=2025.2",
"serpapi>=0.1.5",
"colorama>=0.4.6",
"filelock>=3.24.3",
"urllib3>=2.6.3",
"pytest>=9.0.2; extra == \"dev\"",
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"pytest-spec>=5.2.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-mock>=3.15.1; extra == \"dev\"",
"flake8>=7.3.0; extra == \"dev\"",
"bandit>=1.9.3; extra == \"dev\"",
"pip-audit>=2.10.0; extra == \"dev\"",
"mkdocs>=1.6.1; extra == \"dev\"",
"mkdocs-material>=9.7.2; extra == \"dev\"",
"mkdocs-llmstxt>=0.5.0; extra == \"dev\"",
"mkdocstrings[python]>=1.0.3; extra == \"dev\"",
"griffe-fieldz>=0.4.0; extra == \"dev\"",
"pymdown-extensions>=10.21; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/svetzal/mojentic",
"Issues, https://github.com/svetzal/mojentic/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:44:06.395396 | mojentic-1.2.1.tar.gz | 121,837 | 9d/f1/7d260bcde78a2b4ac027eb761804dde7f308075ad8f431c786b09b868b23/mojentic-1.2.1.tar.gz | source | sdist | null | false | 0bdd4046db38fb9f9e526153d8e833ed | 00d3c4fb57bf638b581ba907e9502068b1325e7ceb67b0c1bed9ba5a56ee9240 | 9df17d260bcde78a2b4ac027eb761804dde7f308075ad8f431c786b09b868b23 | null | [
"LICENSE.md"
] | 244 |
2.4 | pyxecm | 3.2.10 | A Python library to interact with Opentext Content Management Rest API | # PYXECM
A Python library to interact with the OpenText Content Management REST API.
The product API documentation is available on [OpenText Developer](https://developer.opentext.com/ce/products/extendedecm)
Detailed documentation of this package is available [here](https://opentext.github.io/pyxecm/).
## Quick start - Library usage
Install the latest version from pypi:
```bash
pip install pyxecm
```
### Start using the package libraries
example usage of the OTCS class, more details can be found in the docs:
```python
from pyxecm import OTCS
otcs_object = OTCS(
protocol="https",
hostname="otcs.domain.tld",
port="443",
public_url="otcs.domain.tld",
username="admin",
password="********",
base_path="/cs/llisapi.dll",
)
otcs_object.authenticate()
nodes = otcs_object.get_subnodes(2000)
for node in nodes["results"]:
print(node["data"]["properties"]["id"], node["data"]["properties"]["name"])
```
## Quick start - Customizer usage
- Create an `.env` file as described here: [sample-environment-variables](customizerapisettings/#sample-environment-variables)
- Create a payload file to define what the customizer should do, as described here: [payload-syntax](payload-syntax)
```bash
pip install pyxecm[customizer]
pyxecm-customizer PAYLOAD.tfvars/PAYLOAD.yaml
```
## Quick start - API
- Install pyxecm with api and customizer dependencies
- Launch the Rest API server
- Access the Customizer API at [http://localhost:8000/api](http://localhost:8000/api)
```bash
pip install pyxecm[api,customizer]
pyxecm-api
```
## Disclaimer
Copyright © 2025 Open Text Corporation, All Rights Reserved.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| text/markdown | null | Kai Gatzweiler <kgatzweiler@opentext.com>, "Dr. Marc Diefenbruch" <mdiefenb@opentext.com> | null | null | null | appworks, archivecenter, contentserver, extendedecm, opentext, otds | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content :: Content Management System"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"lxml>=6.0.0",
"opentelemetry-api>=1.34.1",
"opentelemetry-exporter-otlp>=1.34.1",
"opentelemetry-instrumentation-requests>=0.55b1",
"opentelemetry-instrumentation-threading>=0.55b1",
"opentelemetry-sdk>=1.34.1",
"pandas>=2.3.1",
"requests-toolbelt>=1.0.0",
"requests>=2.32.4",
"suds>=1.2.0",
"websockets>=15.0.1",
"xmltodict>=0.14.2",
"asyncio>=3.4.3; extra == \"api\"",
"fastapi>=0.116.0; extra == \"api\"",
"jinja2>=3.1.6; extra == \"api\"",
"opentelemetry-api>=1.34.1; extra == \"api\"",
"opentelemetry-instrumentation-fastapi>=0.55b1; extra == \"api\"",
"opentelemetry-sdk>=1.34.1; extra == \"api\"",
"prometheus-fastapi-instrumentator>=7.1.0; extra == \"api\"",
"pydantic-settings>=2.10.1; extra == \"api\"",
"python-multipart>=0.0.20; extra == \"api\"",
"uvicorn>=0.35.0; extra == \"api\"",
"kubernetes>=33.1.0; extra == \"customizer\"",
"openpyxl>=3.1.5; extra == \"customizer\"",
"playwright>=1.53.0; extra == \"customizer\"",
"pydantic>=2.11.7; extra == \"customizer\"",
"python-hcl2>=7.2.1; extra == \"customizer\"",
"python-magic; extra == \"magic\"",
"pyrfc==3.3.1; extra == \"sap\""
] | [] | [] | [] | [
"Homepage, https://github.com/opentext/pyxecm"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T07:42:42.530721 | pyxecm-3.2.10.tar.gz | 644,016 | 88/5e/a14c8ae13853abe6c01ac053623e8a1e3b202239d8407382c4eeac378f4c/pyxecm-3.2.10.tar.gz | source | sdist | null | false | d895797366df55cd149ca9f7e8e81d94 | ee32f5cf45f321545a0b61359b4b2d26cd330ea1f0ab32481517a5d4076ea7cf | 885ea14c8ae13853abe6c01ac053623e8a1e3b202239d8407382c4eeac378f4c | null | [] | 321 |
2.4 | bam2plot | 0.4.0 | Plot of coverage from bam file | # bam2plot
Generate coverage plots and QC reports from BAM files. No external tools required.
[](https://zenodo.org/records/15052225)
## Features
- **Coverage plots** with three-color depth visualization (red = 0X, yellow = below threshold, blue = above threshold)
- **Cumulative coverage** plots per reference
- **Depth distribution histograms** (per-reference and global) with mean/median annotations
- **Coverage uniformity analysis** via Lorenz curves with Gini coefficient
- **Insert size distribution** for paired-end data with summary statistics
- **Standalone HTML report** combining all plots, statistics tables, and interactive sections into a single file
- **Median coverage** and **Gini coefficient** reported alongside mean coverage for each reference
- Direct read alignment via minimap2 (no BAM needed)
- GC content visualization for reference sequences
## Installation
```bash
pip install bam2plot
```
Python >= 3.10 required. All dependencies (pysam, polars, matplotlib, etc.) are installed automatically.
## Quick start
```bash
# Plot coverage from a BAM file (sort + index automatically)
bam2plot from_bam -b input.bam -o output_folder -s
# With cumulative coverage plots and custom threshold
bam2plot from_bam -b input.bam -o output_folder -s -c -t 20
# Plot coverage directly from reads (no BAM needed)
bam2plot from_reads -r1 reads.fastq -ref reference.fasta -o output_folder
# Plot GC content of a reference
bam2plot guci -ref reference.fasta -w 1000 -o output_folder
```
## Output
Running `from_bam` produces all of the following automatically:
### Coverage plot
Three-color depth visualization (red = 0X, yellow = below threshold, blue = above threshold):

### Cumulative coverage
Per-reference cumulative coverage plots (with `-c`):

### Depth distribution histograms
Weighted histograms showing the distribution of coverage depth across bases, with vertical lines for mean and median coverage. Generated per-reference and as a global aggregate.

### Coverage uniformity (Lorenz curves)
Lorenz curves visualize how evenly coverage is distributed across the genome. A perfectly uniform coverage would follow the diagonal; deviation below indicates uneven coverage. Each subplot is annotated with the Gini coefficient (0 = perfectly uniform, 1 = maximally unequal).

### Insert size distribution
For paired-end BAM files, a histogram of insert sizes is generated with mean, median, and standard deviation annotations. A summary statistics table is included in the HTML report.
### HTML report
A self-contained HTML report (`<sample>_report.html`) is always generated, embedding all plots as base64 images. It includes:
- **Global summary** — mean coverage, median coverage, percent bases above 0X and threshold
- **Per-reference statistics table** — total bases, mean/median coverage, percent above thresholds, Gini coefficient
- **All plots** — coverage, cumulative, depth histograms, Lorenz curves, and insert size distribution
See [example/report.html](example/report.html) for a complete example report.
## Subcommands
### `from_bam` -- BAM to coverage plot
```
bam2plot from_bam -b BAM -o OUTPATH [options]
-b, --bam BAM file (required)
-o, --outpath Output directory (required)
-s, --sort_and_index Sort and index the BAM before plotting
-i, --index Index only (BAM must already be sorted)
-t, --threshold Coverage depth threshold (default: 10)
-r, --rolling_window Rolling window size for smoothing (default: 100)
-n, --number_of_refs How many references to plot (default: 10, max: 100)
-w, --whitelist Only plot these references
-z, --zoom Zoom into a region, e.g. -z='1000 5000'
-c, --cum_plot Also generate cumulative coverage plots
-p, --plot_type Output format: png, svg, or both (default: png)
```
### `from_reads` -- Reads + reference to coverage plot
Aligns reads to a reference using minimap2 (via mappy) and plots coverage. Supports both long reads and paired-end short reads.
```
bam2plot from_reads -r1 READ_1 -ref REFERENCE -o OUT_FOLDER [options]
-r1, --read_1 FASTQ file (required)
-r2, --read_2 Second FASTQ for paired-end reads
-ref, --reference Reference FASTA (required)
-o, --out_folder Output directory (required)
-gc, --guci Overlay GC content on the coverage plot
-r, --rolling_window Rolling window size (default: 50)
-p, --plot_type Output format: png, svg, or both (default: png)
```
### `guci` -- GC content plot
Computes per-base GC content of a reference FASTA with a rolling mean window.
```
bam2plot guci -ref REFERENCE -w WINDOW -o OUT_FOLDER [options]
-ref, --reference Reference FASTA (required)
-w, --window Rolling window size (required)
-o, --out_folder Output directory (required)
-p, --plot_type Output format: png, svg, or both (default: png)
```
## How it works
### Coverage computation (`from_bam`)
bam2plot computes per-base coverage depth directly from BAM files using pysam, with no external dependencies like mosdepth or samtools.
The pipeline has six stages:
**1. Sweep-line depth computation**
For each reference sequence, a coverage array is allocated (one element per base position plus a sentinel). For every aligned read, `+1` is added at the read's start position and `-1` at its end position. A cumulative sum over the array then yields the exact per-position depth. This is the classic sweep-line algorithm, running in O(reads + reference_length) time.
Secondary, QC-failed, and duplicate reads are filtered out (matching samtools/mosdepth defaults).
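The sweep-line trick itself fits in a few lines; the following is an illustrative sketch of the algorithm on toy read intervals, not bam2plot's actual code:

```python
from itertools import accumulate

# Sweep-line depth for one reference of length 10.
ref_len = 10
reads = [(0, 4), (2, 7), (2, 5)]   # half-open alignment intervals [start, end)

# +1 at each read's start, -1 one past its last base, then a running sum.
diff = [0] * (ref_len + 1)         # extra sentinel slot for interval ends
for start, end in reads:
    diff[start] += 1
    diff[end] -= 1

depth = list(accumulate(diff))[:-1]  # exact per-position depth
print(depth)  # [1, 1, 3, 3, 2, 1, 1, 0, 0, 0]
```

Total work is one pass over the reads plus one pass over the reference, hence the O(reads + reference_length) bound.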
**2. Parallel dispatch for indexed BAMs**
When the BAM has an index (`.bai` file), bam2plot parallelizes across references using Python's `multiprocessing.Pool`. Each worker process opens the BAM independently, seeks to its assigned reference via `pysam.fetch(contig)`, and computes coverage for that reference alone. This eliminates the Python per-read iteration bottleneck by distributing it across CPUs -- benchmarked at 1.3-1.6x faster than mosdepth ([details](BENCHMARKS.md)).
For unindexed BAMs, a single-pass sequential sweep is used instead, iterating all reads once and dispatching to per-reference arrays by reference ID.
**3. Run-length encoding and enrichment**
The per-position depth array is compressed into run-length encoded (RLE) intervals: consecutive positions with identical depth are merged into `(ref, start, end, depth)` rows. This typically reduces millions of positions to hundreds of thousands of intervals, making downstream operations efficient.
The RLE DataFrame is then enriched with per-reference statistics: mean coverage, median coverage, percentage of bases above zero, percentage above the user-specified threshold, Gini coefficient for coverage uniformity, and genome-wide totals. Median and Gini are computed directly from the RLE representation using weighted algorithms — no per-base expansion needed.
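The run-length encoding step is a simple groupby-style pass over the depth array; a sketch of the idea (illustrative, not bam2plot's actual code):

```python
from itertools import groupby

depth = [1, 1, 3, 3, 2, 1, 1, 0, 0, 0]  # per-position depth from the sweep

# Merge runs of identical depth into (start, end, depth) rows (half-open).
rle, pos = [], 0
for value, run in groupby(depth):
    length = sum(1 for _ in run)
    rle.append((pos, pos + length, value))
    pos += length

print(rle)  # [(0, 2, 1), (2, 4, 3), (4, 5, 2), (5, 7, 1), (7, 10, 0)]
```

Statistics such as the weighted mean follow directly from the rows, since each row carries its own base count `end - start`.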
**4. Coverage uniformity analysis**
For each reference, a Lorenz curve is computed from the RLE data: intervals are sorted by depth, and cumulative fractions of bases vs. cumulative fractions of total coverage are calculated. The Gini coefficient is derived from the area between the Lorenz curve and the diagonal (via trapezoidal integration). This quantifies how evenly reads are distributed across the genome — useful for detecting amplification bias or capture efficiency problems.
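A Gini coefficient computed this way, straight from RLE rows with trapezoidal integration, can be sketched as follows (an illustrative implementation of the described approach, not bam2plot's code):

```python
def gini_from_rle(intervals):
    """Gini coefficient of coverage depth from (start, end, depth) runs,
    using trapezoids under the Lorenz curve (no per-base expansion)."""
    runs = sorted(intervals, key=lambda r: r[2])        # sort runs by depth
    total_bases = sum(e - s for s, e, _ in runs)
    total_cov = sum((e - s) * d for s, e, d in runs)
    if total_cov == 0:
        return 0.0
    area, cum_c = 0.0, 0.0
    for s, e, d in runs:
        b = (e - s) / total_bases          # this run's fraction of bases
        c = (e - s) * d / total_cov        # this run's fraction of coverage
        area += b * (cum_c + c / 2)        # trapezoid slice under the curve
        cum_c += c
    return 1.0 - 2.0 * area                # Gini = 1 - 2 * area under Lorenz

print(gini_from_rle([(0, 10, 5)]))              # 0.0, perfectly uniform
print(gini_from_rle([(0, 5, 0), (5, 10, 10)]))  # 0.5, half the genome uncovered
```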
**5. Insert size extraction**
For paired-end BAM files, insert sizes (template lengths) are extracted by iterating all reads. Only read1 of each pair with a positive template length is counted to avoid double-counting. The distribution is summarized with mean, median, standard deviation, and visualized as a histogram.
**6. Visualization**
For each reference, the RLE intervals are exploded back to per-position depth, smoothed with a rolling mean, and rendered as a colored line plot using matplotlib's `LineCollection`. Three colors indicate depth status: red (0X), yellow (below threshold), blue (above threshold).
Additional plots generated automatically:
- **Cumulative coverage** — seaborn `FacetGrid` showing percent of bases above each coverage level
- **Depth histograms** — weighted histograms (per-reference and global) with mean/median vertical lines
- **Lorenz curves** — `FacetGrid` with one subplot per reference, annotated with Gini coefficients
- **Insert size histogram** — for paired-end data, with mean/median lines and a stats text box
All plots are saved to disk and embedded in a standalone HTML report.
### Read alignment (`from_reads`)
The `from_reads` subcommand skips BAM files entirely. It aligns FASTQ reads directly to a reference FASTA using mappy (the Python binding for minimap2). Long reads use the `map-ont` preset; paired-end reads use the `sr` preset. Coverage is accumulated in a Polars DataFrame and plotted with the same visualization pipeline.
### GC content (`guci`)
The `guci` subcommand reads a reference FASTA with pyfastx, marks each base as GC (1) or AT (0), computes a rolling mean over the specified window, and plots the result.
## Citation
If you use bam2plot, please cite via [Zenodo](https://zenodo.org/records/15052225).
## License
MIT
| text/markdown | William Rosenbaum | william.rosenbaum@gmail.com | null | null | MIT | null | [] | [] | https://github.com/willros/bam2plot | null | >=3.10 | [] | [] | [] | [
"pysam==0.22.0",
"seaborn==0.13.2",
"polars==0.20.15",
"mappy==2.28",
"pyfastx",
"pyarrow",
"numpy",
"pandas",
"pytest>=7.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.14 | 2026-02-20T07:41:26.488703 | bam2plot-0.4.0.tar.gz | 40,550 | ed/7c/e5b802d827fa6183a439f7b1365db92a4176bce36f33b9d9afc144c7f2c2/bam2plot-0.4.0.tar.gz | source | sdist | null | false | 261389122b259a0e7f5adfa9dba652e0 | 18531222f145f556014543e024c70f412e2b02fab8041daf6b93f274a23185d7 | ed7ce5b802d827fa6183a439f7b1365db92a4176bce36f33b9d9afc144c7f2c2 | null | [
"LICENSE"
] | 264 |
2.4 | FakeRVLDataRU | 1.0.1 | Fake Russian data generator: full names, INN, SNILS, addresses, passports, and much more | # FakeRVLDataRU 🇷🇺
[](https://pypi.org/project/FakeRVLDataRU/)
[](https://python.org)
[](LICENSE)
A generator of **fake Russian data** for testing and prototyping.
No external dependencies. Pure Python.
## ✨ What it generates
| Field | Example |
|------|--------|
| `full_name` | Иванова Мария Сергеевна |
| `inn` | 771234567890 |
| `snils` | 123-456-789 01 |
| `phone` | +7 (916) 123-45-67 |
| `address` | 101000, Москва, ул. Тверская, д. 15, кв. 42 |
| `email` | maria.ivanova@mail.ru |
| `passport` | `{"series": "45 23", "number": "678901", ...}` |
| `birth_date` | 15.03.1985 |
| `age` | 39 |
| `profession` | Инженер |
| `education` | Высшее (специалист) |
| `marital_status` | Замужем |
| `bank_account` | 40817810... |
| `bank_name` | Сбербанк |
## 📦 Installation
```bash
pip install FakeRVLDataRU
```
## 🚀 Quick start
```python
from fakeruldata import Person, PersonGenerator
# ── A single person (the simplest way)
p = Person()
print(p.full_name) # Иванова Мария Сергеевна
print(p.inn) # 771234567890
print(p.snils) # 123-456-789 01
print(p.phone) # +7 (916) 123-45-67
print(p.address) # 101000, Москва, ул. Тверская, д. 15, кв. 42
print(p) # Pretty-print all the data
# ── Specify the gender
m = Person(gender='male')
f = Person(gender='female')
# ── Generator: multiple records with filters
gen = PersonGenerator()
# 10 women
women = gen.generate(count=10, gender='female')
# 5 men from Moscow
moscow_men = gen.generate(count=5, gender='male', city='Москва')
# 20 people whose surname starts with 'К'
k_people = gen.generate(count=20, surname_starts='К')
# Ages 25 to 35
young = gen.generate(count=10, min_age=25, max_age=35)
# A single person (convenience method)
one = gen.one(gender='female', city='Санкт-Петербург')
```
## 📋 Export to dictionaries
```python
gen = PersonGenerator()
# All fields
data = gen.generate_dicts(count=3)
# [{"full_name": "...", "inn": "...", "address": "...", ...}, ...]
# Only the fields you need
minimal = gen.generate_dicts(
count=10,
gender='male',
fields=['full_name', 'inn', 'phone', 'address']
)
```
## 🎯 Reproducible results
```python
# A seed guarantees identical data on every run
gen = PersonGenerator(seed=42)
people = gen.generate(count=5)
```
## ⚡ Specialized functions
```python
from fakeruldata import (
fake_inn, # Individual INN (12 digits)
fake_inn_org, # Organization INN (10 digits)
fake_snils, # SNILS
fake_phone, # Mobile phone number
fake_passport, # Passport data (dict)
fake_bank_account, # Bank account number
fake_ogrn, # OGRN
fake_kpp, # KPP
fake_address, # Address (string)
)
print(fake_inn()) # 771234567890
print(fake_snils()) # 123-456-789 01
print(fake_phone()) # +7 (916) 123-45-67
print(fake_address('Казань')) # 420000, Казань, ул. Ленина, д. 5, кв. 3
passport = fake_passport()
# {
# "series": "45 23",
# "number": "678901",
# "issued_by": "УМВД России по г. Москве",
# "issue_date": "15.06.2018",
# "department_code": "450-123"
# }
```
## 🔢 Data validation
All generated data is algorithmically valid:
- **INN** passes the check-digit validation
- **SNILS** has a correct checksum
- **Passport** uses real series formats
- **Phone** uses real Russian mobile operator codes
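For the 12-digit personal INN, the check digits follow the publicly documented weighted-sum scheme (two weighted sums, each taken mod 11 then mod 10). The sketch below shows that public algorithm for illustration; it is not this package's internal code, and `make_inn12` is a hypothetical helper.

```python
# Check-digit math for a 12-digit personal INN (the public algorithm,
# shown for illustration; not this package's internal code).
W11 = (7, 2, 4, 10, 3, 5, 9, 4, 6, 8)
W12 = (3, 7, 2, 4, 10, 3, 5, 9, 4, 6, 8)

def inn12_is_valid(inn: str) -> bool:
    if len(inn) != 12 or not inn.isdigit():
        return False
    d = [int(c) for c in inn]
    n11 = sum(w * x for w, x in zip(W11, d[:10])) % 11 % 10
    n12 = sum(w * x for w, x in zip(W12, d[:11])) % 11 % 10
    return d[10] == n11 and d[11] == n12

def make_inn12(first10: str) -> str:
    """Append the two check digits to 10 leading digits."""
    d = [int(c) for c in first10]
    d.append(sum(w * x for w, x in zip(W11, d)) % 11 % 10)
    d.append(sum(w * x for w, x in zip(W12, d)) % 11 % 10)
    return "".join(map(str, d))

inn = make_inn12("7712345678")
print(inn, inn12_is_valid(inn))  # the generated INN passes validation
```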
## 🏙️ Available cities
Москва, Санкт-Петербург, Новосибирск, Екатеринбург, Казань, Нижний Новгород,
Челябинск, Самара, Уфа, Ростов-на-Дону, Краснодар, Пермь, Воронеж, Волгоград,
Красноярск, Саратов, Тюмень, Тольятти, Ижевск, Барнаул, Ульяновск, Иркутск,
Хабаровск, Ярославль, Владивосток, Томск, Оренбург, Кемерово, and others (40+ in total).
## 📊 Example: DataFrame (pandas)
```python
import pandas as pd
from fakeruldata import PersonGenerator
gen = PersonGenerator()
data = gen.generate_dicts(
count=100,
fields=['full_name', 'gender', 'age', 'city', 'inn', 'phone', 'profession']
)
df = pd.DataFrame(data)
print(df.head())
print(df.groupby('gender').size())
```
## 📊 Example: JSON export
```python
import json
from fakeruldata import PersonGenerator
gen = PersonGenerator()
data = gen.generate_dicts(count=5)
with open("fake_data.json", "w", encoding="utf-8") as f:
json.dump(data, f, ensure_ascii=False, indent=2)
```
## 🔗 REST API
A REST API is also available: **https://api.prosrochkapatrol.ru**
```
GET /?endpoint=person&gender=female&count=5&city=Москва&fields=full_name,inn
```
## 📄 License
MIT © 2026 FakeRVLDataRU
---
> ⚠️ All data is completely fictional. Use it only for testing and development.
| text/markdown | FakeRVLDataRU | support@prosrochkapatrol.ru | null | null | MIT | fake, data, russia, russian, generator, inn, snils, passport, test, mock | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Testing",
"Natural Language :: Russian",
"Operating System :: OS Independent"
] | [] | https://github.com/fakeruldata/FakeRVLDataRU | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://api.prosrochkapatrol.ru"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T07:40:05.477164 | fakervldataru-1.0.1.tar.gz | 14,447 | aa/30/6feba3f64753878ac766ff844c66a8bbb65303c8f9908051fa5c30d891c6/fakervldataru-1.0.1.tar.gz | source | sdist | null | false | 2e39d6f37fe2afa9cf52a0599da5b4ed | 0bda0b4c76cdf1ce5813cace202c17ed0813980493d0072fb886dfb835d57eed | aa306feba3f64753878ac766ff844c66a8bbb65303c8f9908051fa5c30d891c6 | null | [] | 0 |
2.4 | fluvel | 1.0.0b1 | A modern, reactive, and high-performance GUI framework for Python based on PySide6. | # Fluvel






**Fluvel** is a framework built on top of **PySide6**. It abstracts the complexity of Qt by replacing manual layout management with **declarative context handlers** and a **Tailwind-inspired styling processor**. Powered by the **PYRO reactive engine**, it enables **deterministic state-to-UI binding** and a **decoupled resource architecture** (i18n/theming), ensuring that large-scale Python applications remain performant and easy to refactor.
**For complete documentation, tutorials, and architectural details, please visit our [GitHub Repository](https://github.com/fluvel-project/fluvel)**.
## Key Features
* **Declarative UI Architecture**: Interface definition via the `Page` abstract class, utilizing a **context-handler-based** syntax. This approach abstracts boilerplate layout logic, focusing on component hierarchy and structural intent.
* **PYRO Reactive Engine**: A standalone, agnostic state engine (*Pyro Yields Reactive Objects*) featuring **automatic dependency tracking**. It supports reactive primitives and collections (lists/dicts), enabling fine-grained UI binding and deterministic state synchronization.
* **Utility-First Styling & QSSProcessor**: High-performance styling via an integrated **token-based** system. The `QSSProcessor` parses inline utility classes and external QSS, facilitating rapid component skinning without manual stylesheet overhead.
* *Example*: `Button(style="primary fg[red] b[2px solid blue]")`
* **Structural i18n & Logic Decoupling**: Separation of concerns through `.fluml` **(Fluvel Markup Language)** and XML schemas. This architecture decouples static/dynamic content from the Python business logic, enabling independent translation workflows and resource management.
* **Stateful Routing System**: Centralized navigation management via the `Router` class, handling page lifecycle and transitions within a unified application state.
* **Integrated Dev-Tools CLI**: A dedicated command-line interface for automated project scaffolding, asset management, and deployment workflows.
* **Hot-Reloading Environment**: A development-time watcher that performs **runtime hot-patching** of pages. It allows for instantaneous UI and theme iterations without destroying the application process or losing current state.
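As an illustration of the utility-token idea, a minimal parser could lower a style string like `"primary fg[red] b[2px solid blue]"` to QSS declarations. This is a hypothetical sketch only — the token names (`fg`, `bg`, `b`) and their mapping are assumptions, not Fluvel's actual `QSSProcessor` implementation:

```python
import re

# Match either a bracketed token like fg[red] or a bare class name like primary.
TOKEN = re.compile(r"(?P<key>\w+)\[(?P<val>[^\]]*)\]|(?P<cls>\w+)")
PROPERTY_MAP = {"fg": "color", "bg": "background-color", "b": "border"}  # assumed mapping

def to_qss(style: str) -> tuple[list[str], str]:
    classes, rules = [], []
    for m in TOKEN.finditer(style):
        if m.group("cls"):
            classes.append(m.group("cls"))  # named style class, e.g. "primary"
        else:
            prop = PROPERTY_MAP.get(m.group("key"), m.group("key"))
            rules.append(f"{prop}: {m.group('val')};")
    return classes, " ".join(rules)

print(to_qss("primary fg[red] b[2px solid blue]"))
# (['primary'], 'color: red; border: 2px solid blue;')
```

Bracketed values may contain spaces (`b[2px solid blue]`), which is why a regex rather than a whitespace split is used here.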
## 🚀 Quick Start
### 1. Installation
To install this version, use the following command:
```bash
# Setup environment
python -m venv venv
source venv/bin/activate # On Windows use: venv\Scripts\activate
# Install Fluvel
pip install fluvel
```
### 2. Starting a Project
Create your first application with the integrated CLI:
```bash
fluvel startproject
fluvel run
```
## License
Fluvel is an open-source project, licensed under the [GNU LGPL-3.0 License](https://www.gnu.org/licenses/lgpl-3.0.html) (or any later version).
| text/markdown | J. F. Escobar | robotid7@outlook.es | null | null | null | gui, pyside6, qt, reactive, framework, declarative, desktop-apps | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: User Interfaces",
"Operating System :: OS Independent"
] | [] | https://github.com/fluvel-project/fluvel | null | <3.14,>=3.11 | [] | [] | [] | [
"pyside6<7.0.0,>=6.8.0",
"watchdog<7.0.0,>=6.0.0",
"click<9.0.0,>=8.1.0",
"qtawesome<2.0.0,>=1.3.0",
"orjson<4.0.0,>=3.11.5"
] | [] | [] | [] | [
"Repository, https://github.com/fluvel-project/fluvel",
"Documentation, https://github.com/fluvel-project/fluvel#documentation-guide",
"Bug Tracker, https://github.com/fluvel-project/fluvel/issues"
] | poetry/2.2.1 CPython/3.12.10 Windows/11 | 2026-02-20T07:37:27.401761 | fluvel-1.0.0b1.tar.gz | 102,510 | fe/49/cd09a83ea64896704236e7038b9a1d4024b9f5db09b97f16f14f0fd2a173/fluvel-1.0.0b1.tar.gz | source | sdist | null | false | 37646cc11c176c237cc0e6c775b06047 | 958dae44f8fd953c0a48da862d55b5519df5d1e8794b9739cd2713b72ae092f3 | fe49cd09a83ea64896704236e7038b9a1d4024b9f5db09b97f16f14f0fd2a173 | null | [] | 251 |
2.4 | clawie | 0.1.0 | CLI + TUI control plane for ZeroClaw provisioning and channel operations | # clawie
The `clawie` package provides the `clawie` command: a local CLI and terminal dashboard for ZeroClaw-style
setup, user provisioning, and channel operations.
Core flows:
- initialize local setup (`api_key`, subscription, workspace, API URL)
- create or clone users with channel strategies (`new` or `migrate`)
- bootstrap or migrate channels between users
- inspect health, events, and dashboard snapshots
- export/import local state snapshots
## Install
From package index:
```bash
uv tool install clawie
```
From this repository:
```bash
uv tool install -e .
```
## Quick Start
Initialize setup (interactive):
```bash
clawie setup init --interactive
```
Initialize setup (non-interactive):
```bash
clawie setup init \
--api-key zc_live_1234 \
--subscription pro \
--workspace production \
--api-url https://api.zeroclaw.example/v1
```
Check setup:
```bash
clawie setup status
```
Create a user from a template:
```bash
clawie users create \
--user-id alice \
--display-name "Alice Kim" \
--template baseline \
--channel-strategy new
```
Clone an existing user (shorthand command):
```bash
clawie users clone \
--from-user alice \
--user-id bob \
--display-name "Bob Lee" \
--channel-strategy migrate
```
Launch the dashboard:
```bash
clawie dashboard
```
## Command Highlights
User operations:
```bash
clawie users list
clawie users show --user-id alice
clawie users delete --user-id alice
```
Create/clone with explicit channels:
```bash
clawie users create --user-id sam --channel-strategy new --channel chat:ops --channel email:inbox
clawie users clone --from-user alice --user-id bob --channels-file channels.json
```
Channel operations:
```bash
clawie channels bootstrap --user-id alice --preset growth
clawie channels bootstrap --user-id alice --preset enterprise --replace
clawie channels migrate --from-user alice --to-user bob
clawie channels migrate --from-user alice --to-user bob --replace
```
Diagnostics and events:
```bash
clawie doctor
clawie events list --limit 50
```
State snapshots:
```bash
clawie state export --output backup.json
clawie state import --input backup.json
clawie state import --input backup.json --merge
```
## Batch Provisioning
Create `users.json`:
```json
[
{
"user_id": "maria",
"display_name": "Maria",
"template": "baseline",
"channel_strategy": "new"
},
{
"user_id": "dan",
"display_name": "Dan",
"clone_from": "maria",
"channel_strategy": "migrate"
}
]
```
Run:
```bash
clawie users batch-create --file users.json
```
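The same manifest can also be generated programmatically; the field names below simply mirror the sample schema above:

```python
import json

# Build the batch-provisioning manifest in code instead of by hand.
users = [
    {"user_id": "maria", "display_name": "Maria",
     "template": "baseline", "channel_strategy": "new"},
    {"user_id": "dan", "display_name": "Dan",
     "clone_from": "maria", "channel_strategy": "migrate"},
]

with open("users.json", "w", encoding="utf-8") as f:
    json.dump(users, f, indent=2)
```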
## Config and State
Defaults:
- config directory: `~/.config/clawie`
- config file: `~/.config/clawie/config.json`
- state file: `~/.config/clawie/state.json`
You can override the config root for any command:
```bash
clawie --config-dir /tmp/clawie-dev setup status
```
## Development
Run from source:
```bash
uv run clawie --help
uv run python -m clawie --help
```
Run tests:
```bash
uv run --with pytest pytest -q
```
## Notes
This project currently stores data locally. Integration point for service behavior:
`clawie/service.py`.
| text/markdown | Clawie Team | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.8.11 | 2026-02-20T07:37:04.886042 | clawie-0.1.0.tar.gz | 14,582 | 9a/15/5daa7e72888515ef2106ab38c0edc8c956d2b64e34ee31a8283b9634b559/clawie-0.1.0.tar.gz | source | sdist | null | false | d3193bbd105a72d0564533a0424ef061 | d36a4feed6a05bc5e396527e69d759411cf601c3a79e7d464e6818edc274879f | 9a155daa7e72888515ef2106ab38c0edc8c956d2b64e34ee31a8283b9634b559 | null | [] | 279 |
2.4 | qkan | 0.1.5 | Quantum-inspired Kolmogorov Arnold Networks | # QKAN: Quantum-inspired Kolmogorov-Arnold Network
<div align='center'>
<a>"Quantum Variational Activation Functions Empower Kolmogorov-Arnold Networks"</a>
</div>
<div align='center'>
<a href='https://scholar.google.com/citations?user=W_I27S8AAAAJ' target='_blank'>Jiun-Cheng Jiang</a><sup>1</sup>
<a href='https://scholar.google.com/citations?user=1u3Kvh8AAAAJ' target='_blank'>Morris Yu-Chao Huang</a><sup>2</sup>
<a href='https://scholar.google.com/citations?user=LE3ctn0AAAAJ' target='_blank'>Tianlong Chen</a><sup>2</sup>
<a href='https://scholar.google.com/citations?user=PMnNYPcAAAAJ' target='_blank'>Hsi-Sheng Goan</a><sup>1</sup>
</div>
<div align='center'>
<sup>1</sup>National Taiwan University <sup>2</sup>UNC Chapel Hill
</div>
<div align='center'>
[](https://jim137.github.io/qkan/)
[](https://arxiv.org/abs/2509.14026)
[](https://pypi.org/project/qkan/)

[](https://doi.org/10.5281/zenodo.17437425)
</div>
<!-- [](https://github.com/Jim137/qkan/actions/workflows/publish.yml)
[](https://github.com/Jim137/qkan/actions/workflows/lint.yml) -->
This is the official repository for the paper:
**["Quantum Variational Activation Functions Empower Kolmogorov-Arnold Networks"](https://arxiv.org/abs/2509.14026)**
📖 Documentation: [https://qkan.jimq.cc/](https://qkan.jimq.cc/)
We provide a PyTorch implementation of QKAN with:
- Pre- and post-activation processing support
- Grouped QVAFs for efficient training
- Node plotting and pruning of unnecessary nodes
- Layer extension for more complex features
- and more ...
A basic PennyLane version of the quantum circuit is also included for demonstration, but not optimized for performance.
## Installation
You can install QKAN using pip:
```bash
pip install qkan
```
If you want to install the latest development version, you can use:
```bash
pip install git+https://github.com/Jim137/qkan.git
```
To install QKAN from source, you can use the following command:
```bash
git clone https://github.com/Jim137/qkan.git && cd qkan
pip install -e .
```
It is recommended to use a virtual environment to avoid conflicts with other packages.
```bash
python -m venv qkan-env
source qkan-env/bin/activate # On Windows: qkan-env\Scripts\activate
pip install qkan
```
## Quick Start
Here's a minimal working example for function fitting using QKAN:
```python
import torch
from qkan import QKAN, create_dataset
device = "cuda" if torch.cuda.is_available() else "cpu"
f = lambda x: torch.sin(20*x)/x/20 # sin(20x)/(20x), the spherical Bessel function j0(20x)
dataset = create_dataset(f, n_var=1, ranges=[0,1], device=device, train_num=1000, test_num=1000, seed=0)
qkan = QKAN(
[1, 1],
reps=3,
device=device,
seed=0,
preact_trainable=True,
postact_weight_trainable=True,
postact_bias_trainable=True,
ba_trainable=True,
save_act=True, # enable to plot from saved activation
)
optimizer = torch.optim.LBFGS(qkan.parameters(), lr=5e-2)
qkan.train_(
dataset,
steps=100,
optimizer=optimizer,
reg_metric="edge_forward_dr_n",
)
qkan.plot(from_acts=True, metric=None)
```
You can find more examples in the [examples](https://jim137.github.io/qkan/examples) for different tasks, such as function fitting, classification, and generative modeling.
## Contributing
All kinds of contributions are very welcome, including but not limited to bug reports, documentation improvements, and code contributions.
To start contributing, please fork the repository and create a new branch for your feature or bug fix. Then, submit a pull request with a clear description of your changes.
In your environment, you can install the development dependencies with:
```bash
pip install .[dev] # install development dependencies
pip install .[doc] # install documentation dependencies
pip install .[all] # install all optional dependencies
```
## Citation
```bibtex
@article{jiang2025qkan,
title={Quantum Variational Activation Functions Empower Kolmogorov-Arnold Networks},
author={Jiang, Jiun-Cheng and Huang, Morris Yu-Chao and Chen, Tianlong and Goan, Hsi-Sheng},
journal={arXiv preprint arXiv:2509.14026},
year={2025},
url={https://arxiv.org/abs/2509.14026}
}
@misc{jiang2025qkan_software,
title={QKAN: Quantum-inspired Kolmogorov-Arnold Network},
author={Jiang, Jiun-Cheng},
year={2025},
publisher={Zenodo},
doi={10.5281/zenodo.17437425},
url={https://doi.org/10.5281/zenodo.17437425}
}
```
| text/markdown | null | Jiun-Cheng Jiang <jcjiang@phys.ntu.edu.tw> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.0",
"tqdm",
"matplotlib",
"pennylane>=0.37; extra == \"qml\"",
"mypy; extra == \"dev\"",
"ruff; extra == \"dev\"",
"isort; extra == \"dev\"",
"build; extra == \"dev\"",
"setuptools; extra == \"dev\"",
"wheel; extra == \"dev\"",
"twine; extra == \"dev\"",
"sphinx>=8.0; extra == \"doc\"",
"sphinx-rtd-theme; extra == \"doc\"",
"sphinx-autodoc-typehints; extra == \"doc\"",
"nbsphinx; extra == \"doc\"",
"jupyter; extra == \"doc\"",
"qiskit[visualization]>=2.0; extra == \"doc\"",
"transformers; extra == \"doc\"",
"torchvision; extra == \"doc\"",
"qkan[dev,doc,qml]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/Jim137/qkan"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:36:32.690438 | qkan-0.1.5.tar.gz | 47,929 | 0f/16/fa309197136a9f9e52cb505020e5c3159c6135e6462aa3f0dc8b150e8826/qkan-0.1.5.tar.gz | source | sdist | null | false | 4b89fd220da8174278376ac78be7b226 | e77d15d1292f08a43f7cd631a2a20a65ed4f931220ec8a1a4875c3dc14f2e5e9 | 0f16fa309197136a9f9e52cb505020e5c3159c6135e6462aa3f0dc8b150e8826 | null | [
"LICENSE"
] | 253 |
pyunigps
========
[Current Status](#currentstatus) |
[Installation](#installation) |
[Reading](#reading) |
[Parsing](#parsing) |
[Generating](#generating) |
[Serializing](#serializing) |
[Examples](#examples) |
[Extensibility](#extensibility) |
[Author & License](#author)
`pyunigps` is an original Python 3 parser for the UNI protocol. UNI is our term for the proprietary binary data output protocol implemented on Unicore ™ GNSS receiver modules. `pyunigps` can also parse NMEA 0183 © and RTCM3 © protocols via the underlying [`pynmeagps`](https://github.com/semuconsulting/pynmeagps) and [`pyrtcm`](https://github.com/semuconsulting/pyrtcm) packages from the same author, covering all the protocols that Unicore UNI GNSS receivers are capable of outputting.
The `pyunigps` homepage is located at [https://github.com/semuconsulting/pyunigps](https://github.com/semuconsulting/pyunigps).
This is an independent project and we have no affiliation whatsoever with Unicore.
## <a name="currentstatus">Current Status</a>








This Beta release implements a comprehensive set of messages for Unicore "NebulasIV" High Precision GPS/GNSS devices, including the UM96n and UM98n series, and is readily [extensible](#extensibility). Refer to [UNI_MSGIDS in unitypes_core.py](https://github.com/semuconsulting/pyunigps/blob/main/src/pyunigps/unitypes_core.py#L86) for the complete list of message definitions currently defined. UNI protocol information is sourced from the public-domain Unicore Reference Commands Manual R1.13 © Dec 2025 Unicore:
https://en.unicore.com/uploads/file/Unicore%20Reference%20Commands%20Manual%20For%20N4%20High%20Precision%20Products_V2_EN_R1.13.pdf
**FYI:**
Unicore "NebulasIV" GNSS receivers are configured using TTY commands (ASCII text over serial port) e.g. `"SATSINFOB COM1 1"`. The command response will be an ASCII text message resembling an NMEA sentence e.g. `"$command,SATSINFOB COM1 1,response: OK*46"` or
`"$command,SATSXXXXB COM1 1,response: PARSING FAILD NO MATCHING FUNC SATSXXXXB*01"`.
Sphinx API Documentation in HTML format is available at [https://www.semuconsulting.com/pyunigps/](https://www.semuconsulting.com/pyunigps/).
Contributions welcome - please refer to [CONTRIBUTING.MD](https://github.com/semuconsulting/pyunigps/blob/master/CONTRIBUTING.md).
[Bug reports](https://github.com/semuconsulting/pyunigps/blob/master/.github/ISSUE_TEMPLATE/bug_report.md) and [Feature requests](https://github.com/semuconsulting/pyunigps/blob/master/.github/ISSUE_TEMPLATE/feature_request.md) - please use the templates provided. For general queries and advice, post a message to one of the [pyunigps Discussions](https://github.com/semuconsulting/pyunigps/discussions) channels.

---
## <a name="installation">Installation</a>

[](https://pypi.org/project/pyunigps/)
[](https://clickpy.clickhouse.com/dashboard/pyunigps)
`pyunigps` is compatible with Python>=3.10. In the following, `python3` & `pip` refer to the Python 3 executables. You may need to substitute `python` for `python3`, depending on your particular environment (*on Windows it's generally `python`*).
The recommended way to install the latest version of `pyunigps` is with [pip](http://pypi.python.org/pypi/pip/):
```shell
python3 -m pip install --upgrade pyunigps
```
If required, `pyunigps` can also be installed into a virtual environment, e.g.:
```shell
python3 -m venv env
source env/bin/activate # (or env\Scripts\activate on Windows)
python3 -m pip install --upgrade pyunigps
```
For [Conda](https://docs.conda.io/en/latest/) users, `pyunigps` is available from [conda forge](https://github.com/conda-forge/pyunigps-feedstock):
[](https://anaconda.org/conda-forge/pyunigps)
[](https://anaconda.org/conda-forge/pyunigps)
```shell
conda install -c conda-forge pyunigps
```
---
## <a name="reading">Reading (Streaming)</a>
```
class pyunigps.UNIreader.UNIReader(stream, *args, **kwargs)
```
You can create a `UNIReader` object by calling the constructor with an active stream object.
The stream object can be any viable data stream which supports a `read(n) -> bytes` method (e.g. File or Serial, with
or without a buffer wrapper). `pyunigps` implements an internal `SocketWrapper` class to allow sockets to be read in the same way as other streams (see example below).
Individual UNI messages can then be read using the `UNIReader.read()` function, which returns both the raw binary data (as bytes) and the parsed data (as a `UNIMessage` object, via the `parse()` method). The function is thread-safe in so far as the incoming data stream object is thread-safe. `UNIReader` also implements an iterator.
The constructor accepts the following optional keyword arguments:
* `protfilter`: `NMEA_PROTOCOL` (1), `UNI_PROTOCOL` (2), `RTCM3_PROTOCOL` (4). Can be OR'd; default is `NMEA_PROTOCOL | UNI_PROTOCOL | RTCM3_PROTOCOL` (7)
* `quitonerror`: `ERR_IGNORE` (0) = ignore errors, `ERR_LOG` (1) = log errors and continue (default), `ERR_RAISE` (2) = (re)raise errors and terminate
* `validate`: `VALCKSUM` (0x01) = validate checksum (default), `VALNONE` (0x00) = ignore invalid checksum or length
* `parsebitfield`: 1 = parse bitfields ('X' type properties) as individual bit flags, where defined (default), 0 = leave bitfields as byte sequences
* `msgmode`: `GET` (0) (default), `SET` (1), `POLL` (2)
Example A - Serial input (using iterator). This example will output both UNI and NMEA messages but not RTCM3, and log any errors:
```python
from serial import Serial
from pyunigps import ERR_LOG, NMEA_PROTOCOL, UNI_PROTOCOL, VALCKSUM, UNIReader
with Serial("/dev/ttyACM0", 115200, timeout=3) as stream:
unr = UNIReader(
stream,
protfilter=UNI_PROTOCOL | NMEA_PROTOCOL,
quitonerror=ERR_LOG,
validate=VALCKSUM,
parsebitfield=1,
)
for raw_data, parsed_data in unr:
print(parsed_data)
```
```
<UNI(SATSINFO, cpuidle=96, timeref=1, timestatus=1, wno=2215, tow=367199000, version=0, leapsecond=18, delay=16, numsat=50, reserved1=0, reserved2=0, reserved3=0, L1B1IE1=1, L2CL2B2IE5b=1, L5B3IE5aL5=0, B1CL1C=1, B2aG3E6=0, B2bL2P=1, prn_01=2, azi_01=302, elev_01=51, sysstatus_01_01=0, cno_01_01=45, freqstatus_01_01=0, freqno_01_01=2, sysstatus_01_02=0, cno_01_02=42, freqstatus_01_02=9, freqno_01_02=2, ... prn_50=36, azi_50=286, elev_50=19, sysstatus_50_01=3, cno_50_01=34, freqstatus_50_01=2, freqno_50_01=3, sysstatus_50_02=3, cno_50_02=42, freqstatus_50_02=17, freqno_50_02=3, sysstatus_50_03=3, cno_50_03=38, freqstatus_50_03=12, freqno_50_03=3)>
```
Example B - File input (using iterator). This will only output UNI data, and fail on any error:
```python
from pyunigps import ERR_RAISE, UNI_PROTOCOL, VALCKSUM, UNIReader
with open("pygpsdata_u980.log", "rb") as stream:
unr = UNIReader(
stream, protfilter=UNI_PROTOCOL, validate=VALCKSUM, quitonerror=ERR_RAISE
)
for raw_data, parsed_data in unr:
print(parsed_data)
```
Example C - Socket input (using iterator). This will output UNI, NMEA and RTCM3 data, and ignore any errors:
```python
import socket
from pyunigps import (
ERR_IGNORE,
NMEA_PROTOCOL,
UNI_PROTOCOL,
RTCM3_PROTOCOL,
VALCKSUM,
UNIReader,
)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as stream:
stream.connect(("localhost", 50007))
unr = UNIReader(
stream,
protfilter=NMEA_PROTOCOL | UNI_PROTOCOL | RTCM3_PROTOCOL,
validate=VALCKSUM,
quitonerror=ERR_IGNORE,
)
for raw_data, parsed_data in unr:
print(parsed_data)
```
---
## <a name="parsing">Parsing</a>
```
pyunigps.UNIreader.UNIReader.parse(message: bytes, **kwargs)
```
You can parse individual UNI messages using the static `UNIReader.parse(data)` function, which takes a bytes array containing a binary UNI message and returns a `UNIMessage` object.
**NB:** Once instantiated, a `UNIMessage` object is immutable.
The `parse()` method accepts the following optional keyword arguments:
* `validate`: `VALCKSUM` (0x01) = validate checksum (default), `VALNONE` (0x00) = ignore invalid checksum or length
* `parsebitfield`: 1 = parse bitfields ('X' type properties) as individual bit flags, where defined (default), 0 = leave bitfields as byte sequences
* `msgmode`: `GET` (0) (default), `SET` (1), `POLL` (2)
Example A - parsing VERSION output message:
```python
from pyunigps import GET, VALCKSUM, UNIReader
msg = UNIReader.parse(
    b'\xaaD\xb5\x00\x11\x004\x01\x00\x00f\t\x8f\xf4\x0e\x02\x00\x00\x00\x00\x00\x00\x00\x00M982R4.10Build5251 HRPT00-S10C-P - ffff48ffff0fffff 2021/11/26 #\x87\x83\xb9',
    msgmode=GET,
    validate=VALCKSUM,
    parsebitfield=1,
)
print(msg)
```
```
<UNI(VERSION, cpuidle=0, timeref=0, timestatus=0, wno=2406, tow=34534543, version=0, leapsecond=0, delay=0, device=18, swversion=R4.10Build5251, authtype=HRPT00-S10C-P, psn=-, efuseid=ffff48ffff0fffff, comptime=2021/11/26)>
```
The `UNIMessage` object exposes different public attributes depending on its message type or 'identity'. Attributes which are enumerations may have corresponding decodes in `pyunigps.unitypes_decodes` e.g. the `VERSION` message has the following attributes:
```python
from pyunigps import DEVICE
print(msg)
print(msg.identity)
print(msg.device)
print(DEVICE[msg.device])
print(msg.swversion)
print(msg.comptime)
```
```
<UNI(VERSION, cpuidle=0, timeref=0, timestatus=0, wno=2406, tow=34534543, version=0, leapsecond=18, delay=0, device=18, swversion=R4.10Build5251, authtype=HRPT00-S10C-P, psn=-, efuseid=ffff48ffff0fffff, comptime=2021/11/26)>
VERSION
18
UM980
R4.10Build5251
2021/11/26
```
The `payload` attribute always contains the raw payload as bytes. Attributes within repeating groups are parsed with a two-digit suffix (prn_01, prn_02, etc.).
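Because repeating-group attributes follow this predictable two-digit naming scheme, they can be collected generically via `getattr()`. The sketch below uses a stand-in object rather than a real parsed `UNIMessage`, purely to illustrate the access pattern:

```python
# Illustrative stand-in for a parsed message exposing a repeat count
# ('numsat') and two-digit-suffixed group attributes (prn_01, prn_02, ...).
# This is NOT the pyunigps API, just the naming convention it produces.
class FakeSatsInfo:
    numsat = 3
    prn_01, prn_02, prn_03 = 2, 5, 36

msg = FakeSatsInfo()
# Build the attribute names dynamically to collect every prn_NN value
prns = [getattr(msg, f"prn_{i:02d}") for i in range(1, msg.numsat + 1)]
print(prns)  # [2, 5, 36]
```

The same pattern works for any suffixed attribute (`azi_NN`, `elev_NN`, etc.) on a real parsed message.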
---
## <a name="generating">Generating</a>
```
class pyunigps.UNImessage.UNIMessage(msggrp, msgid, **kwargs)
```
You can create a `UNIMessage` object by calling the constructor with the following parameters:
1. message id in either integer or string format (must be a valid id or name from `pyunigps.UNI_MSGIDS`)
2. (optional) a series of keyword parameters representing the message header and payload.
3. (optional) `parsebitfield` keyword - 1 = define bitfields as individual bits (default), 0 = define bitfields as byte sequences.
The message payload can be defined via keyword arguments in one of three ways:
1. A single keyword argument of `payload` containing the full payload as a sequence of bytes (any other keyword arguments will be ignored). **NB** the `payload` keyword argument *must* be used for message types which have a 'variable by size' repeating group.
2. One or more keyword arguments corresponding to individual message attributes. Any attributes not explicitly provided as keyword arguments will be set to a nominal value according to their type.
3. If no keyword arguments are passed, the payload is assumed to be null.

**NB:** If the `wno` or `tow` arguments are omitted, they will default to the current datetime and leap second offset.
Example A - generate a VERSION message from individual keyword arguments:
```python
from pyunigps import UNIMessage
msg = UNIMessage(
msgid=17,
wno=2406,
tow=34534543,
device=18,
swversion="R4.10Build5251",
authtype="HRPT00-S10C-P",
psn="-",
efuseid="ffff48ffff0fffff",
comptime="2021/11/26",
)
print(msg)
```
```
<UNI(VERSION, cpuidle=0, timeref=0, timestatus=0, wno=2406, tow=34534543, version=0, leapsecond=18, delay=0, device=18, swversion=R4.10Build5251, authtype=HRPT00-S10C-P, psn=-, efuseid=ffff48ffff0fffff, comptime=2021/11/26)>
```
---
## <a name="serializing">Serializing</a>
The `UNIMessage` class implements a `serialize()` method to convert a `UNIMessage` object to a bytes array suitable for writing to an output stream.
e.g. to serialize and send a `VERSION` message:
```python
from serial import Serial
from pyunigps import UNIMessage
# msg is a UNIMessage, e.g. the VERSION message created in the Generating example above
serialOut = Serial("COM1", 115200, timeout=5)
print(msg)
output = msg.serialize()
print(output)
serialOut.write(output)
```
```
<UNI(VERSION, cpuidle=0, timeref=0, timestatus=0, wno=2406, tow=34534543, version=0, leapsecond=18, delay=0, device=18, swversion=R4.10Build5251, authtype=HRPT00-S10C-P, psn=-, efuseid=ffff48ffff0fffff, comptime=2021/11/26)>
b'\xaaD\xb5\x00\x11\x004\x01\x00\x00f\t\x8f\xf4\x0e\x02\x00\x00\x00\x00\x00\x00\x00\x00\x12\x00\x00\x00R4.10Build5251 HRPT00-S10C-P - ffff48ffff0fffff 2021/11/26 \x11t\x19\x1f'
```
---
## <a name="examples">Examples</a>
The following command line examples can be found in the `/examples` folder:
1. [`uniusage.py`](https://github.com/semuconsulting/pyunigps/blob/main/examples/uniusage.py) illustrates basic usage of the `UNIMessage` and `UNIReader` classes.
1. [`unipoller.py`](https://github.com/semuconsulting/pyunigps/blob/main/examples/unipoller.py) illustrates how to stream UNI messages while simultaneously applying ASCII text configuration commands.
---
## <a name="extensibility">Extensibility</a>
The UNI protocol is principally defined in the modules `unitypes_*.py` as a series of dictionaries. Message payload definitions must conform to the following rules:
```
1. attribute names must be unique within each message class
2. attribute types must be one of the valid types (S1, U2, X4, etc.). A suffix of "*f" signifies a scaling factor of f is to be applied to the raw value.
3. repeating or bitfield groups must be defined as a tuple ('numr', {dict}), where:
'numr' is either:
a. an integer representing a fixed number of repeats e.g. 32
b. a string representing the name of a preceding attribute containing the number of repeats e.g. 'numsat'
c. an 'X' attribute type ('X1', 'X2', 'X4', etc) representing a group of individual bit flags
d. 'None' for a 'variable by size' repeating group. Only one such group is permitted per payload and it must be at the end.
{dict} is the nested dictionary of repeating items or bitfield group
```
Repeating attribute names are parsed with a two-digit suffix (prn_01, prn_02, etc.). Nested repeating groups are supported.
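As an illustration only (the attribute names and grouping here are hypothetical, not taken from `pyunigps` itself), a payload definition conforming to these rules might look like:

```python
# Hypothetical payload definition dictionary following the rules above.
# 'numsat' precedes the repeating group and supplies its repeat count;
# 'azi' carries a "*0.1" suffix, i.e. a scaling factor applied to the raw
# value; the nested ('X1', {...}) tuple defines a bitfield group.
HYPOTHETICAL_SATS_PAYLOAD = {
    "reserved": "U2",
    "numsat": "U1",
    "satgroup": (
        "numsat",            # repeats taken from the preceding 'numsat' attribute
        {
            "prn": "U2",
            "azi": "U2*0.1",  # scaled attribute
            "flags": (
                "X1",        # 'X' type: group of individual bit flags
                {"healthy": "U1", "used": "U1"},
            ),
        },
    ),
}
```

A parser walking this definition would emit `prn_01`, `azi_01`, `healthy_01`, ... up to `numsat` repeats, per the suffixing convention described above.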
---
## <a name="author">Author & License Information</a>
semuadmin@semuconsulting.com

`pyunigps` is maintained entirely by unpaid volunteers. It receives no funding from advertising or corporate sponsorship. If you find the utility useful, please consider sponsoring the project with the price of a coffee...
[](https://buymeacoffee.com/semuconsulting)
| text/markdown | null | Steve Smith <semuadmin@semuconsulting.com> | null | Steve Smith <semuadmin@semuconsulting.com> | null | null | [
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Environment :: MacOS X",
"Environment :: Win32 (MS Windows)",
"Environment :: X11 Applications",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: End Users/Desktop",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Utilities",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: GIS"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pynmeagps>=1.1.0",
"pyrtcm>=1.1.10"
] | [] | [] | [] | [
"homepage, https://github.com/semuconsulting/pyunigps",
"documentation, https://www.semuconsulting.com/pyunigps/",
"repository, https://github.com/semuconsulting/pyunigps",
"changelog, https://github.com/semuconsulting/pyunigps/blob/master/RELEASE_NOTES.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:36:04.978356 | pyunigps-0.2.0.tar.gz | 68,279 | dc/39/70f7239686de9dfa1a2a3a1e5af9667f25d8cd8247b98922e46a3bf41c8d/pyunigps-0.2.0.tar.gz | source | sdist | null | false | 357c3bcce9a2c08b3f91dbb20319d563 | 5af7761d4e3792034d92c2ac8620cf43fb0318226405acab0be9e37cf702b24c | dc3970f7239686de9dfa1a2a3a1e5af9667f25d8cd8247b98922e46a3bf41c8d | BSD-3-Clause | [
"LICENSE"
] | 545 |
2.4 | sdepack | 0.0.1 | Runge-Kutta Numerical Integration Stochastic Differential Equations for Python | # sdepack
Runge-Kutta numerical integration of stochastic differential equations for Python
| text/markdown | null | Saud Zahir <m.saud.zahir@gmail.com> | null | Saud Zahir <m.saud.zahir@gmail.com> | null | numerical, stochastic, ito | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Programming Language :: Fortran",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development",
"Topic :: Scientific/Engineering",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [
"homepage, https://eggzec.github.io/sdepack",
"documentation, https://eggzec.github.io/sdepack",
"source, https://github.com/eggzec/sdepack",
"releasenotes, https://github.com/eggzec/sdepack/releases/latest",
"issues, https://github.com/eggzec/sdepack/issues"
] | uv/0.9.15 {"installer":{"name":"uv","version":"0.9.15","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T07:32:34.011576 | sdepack-0.0.1-py3-none-any.whl | 1,741 | 8e/6d/bf2c439d3d8bcf167018ec469a981083f7141aba734b5675808afaa6f77b/sdepack-0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 02fd888a1adac980e6da4aac4e9652e8 | fbfeee2dc26a98a7942789740755330c11ebea105a301607a3182c0ae4e0fca7 | 8e6dbf2c439d3d8bcf167018ec469a981083f7141aba734b5675808afaa6f77b | MIT | [] | 282 |
2.2 | tinybird | 3.5.2.dev0 | Tinybird Command Line Tool | Tinybird CLI
=============
The Tinybird command-line tool allows you to use all the Tinybird functionality directly from the command line. Additionally, it includes several functions to create and manage data projects easily.
Changelog
----------
3.5.2
*******
- `Changed` `tb deploy --auto/--no-auto` keeps the same CLI behavior (default `--auto`) while using server-side deployment `auto_promote` under the hood when enabled.
3.5.1
*******
- `Fixed` Support special characters in workspace names in the login flow.
3.5.0
*******
- `Changed` AI-assisted CLI commands now show a deprecation warning indicating they will be removed in a future major release and include agent skills guidance (`npx skills add @tinybirdco/tinybird-agent-skills`).
3.4.0
*******
- `Changed` Prompt-based AI flows now print a deprecation warning (`tb --prompt`, `tb create --prompt`, `tb datasource create --prompt`, `tb test create`, `tb mock`) ahead of their removal in a future release.
3.3.1
*******
- `Changed` Tinybird Code (`tb` / `tb --prompt`) no longer prompts users to auto-update the CLI when starting an agent session.
3.3.0
*******
- `Added` ``tb datasource stop`` and ``tb datasource start`` commands to pause/resume Kafka ingestion in forward branches
- `Added` ``POST /v0/datasources/{name}/sample`` API endpoint to import sample data from S3/GCS connections in forward branches
- `Changed` ``--with-connections`` flag for ``tb build`` and ``tb deploy`` is now stable (previously experimental)
3.0.2
*******
- `Changed` `IMPORT_FORMAT` is now optional for S3/GCS data sources and will be automatically inferred from the file extension if not provided
3.0.1
*******
- `Experimental` Added hidden `--with-connections` flag to `tb build` and `tb dev` commands for enabling connection datasources (S3, Kafka, GCS) in branches. This is an experimental feature and is not yet publicly documented.
3.0.0
*******
- `Removed` Python 3.9 support. The minimum supported Python version is now 3.10.
- `Removed` BigQuery and Snowflake connector support from Forward CLI build and deployment commands
1.1.14
*******
- `Changed` `tb deploy` First deployment will not show feedback related to deleting the previous deployment, as it does not exist yet.
1.1.13
*******
- `Changed` `tb workspace members` subcommands now use the token from context instead of asking for a user token.
- `Removed` Guest role option from `tb workspace members`, only Viewer role remains.
1.1.12
*******
- `Added` support for formatting connection files with `tb fmt` command.
1.1.11
***********
- `Changed` Improve first deployment feedback message.
- `Changed` do not show `tb materialization populate` command as it is not supported.
1.1.10
*******
- `Changed` Tinybird Code now will not load all the resources content at start and instead will load them on demand using its own tools: `read_datafile` and `search_datafiles`.
1.1.9
*******
- `Fixed` `tb dev` won't show a multiple environment flags error when a command is passed with an environment flag.
1.1.8
*******
- `Fixed` `tb deploy` will return a meaningful error in case of invalid cloud provider credentials instead of returning Internal Server Error.
1.1.7
*******
- `Fixed` `tb connection create s3` now will show a warning if the AWS credentials are not available and will continue without Tinybird Local.
1.1.6
*******
- `Changed` Using more than one environment flag (`--cloud`, `--local`, `--branch`) at the same time will raise an error.
1.1.5
*******
- `Changed` Changed the output of `tb deploy` to show real deployment order of events.
- `Changed` First deployment no longer shows the hint to ingest data if it detects data already planned to be ingested.
1.1.4
*******
- `Changed` `tb connection create s3` now will ask for the environments to create the secret in (Local and Cloud) and will use the corresponding AWS account IDs in the generated trust policy.
1.1.3
*******
- `Changed` `tb datasource create --s3` now will generate the exact schema from the bucket preview instead of a generic `data` column.
1.1.2
*******
- `Changed` `tb datasource create` wording.
1.1.1
*******
- `Added` Support for previewing S3 connections data in `tb connection data` command.
1.1.0
*******
- `Added` `tb fmt` command to format .datasource and .pipe files. Not supported yet for .connection files.
1.0.7
*******
- `Changed` `tb datasource create --kafka` now includes the Kafka meta columns explicitly in the generated file.
1.0.6
*******
- `Changed` `tb deploy` will raise a warning when using engine parameters that are not useful for the picked engine.
1.0.5
*******
- `Added` Internal changes
1.0.4
*******
- `Changed` `tb login` help text has been updated to provide more clarity on the available options.
1.0.3
*******
- `Changed` `tb login` now will warn the user if they are trying to login from a different folder than the last one and will ask for confirmation to continue.
1.0.2
*******
- `Changed` If connection name is not provided in `tb connection data`, it will prompt the user to select a connection from the list of available connections.
1.0.1
*******
- `Added` Support for `schema_registry_url` and `auto_offset_reset` when creating a Kafka connection.
1.0.0
*******
- `Released` Version 1.0.0, from now on the tinybird-cli package uses the standard semver convention for stable versions. Development versions will be tagged as with the `.devX` suffix where `X` is an integer number.
| text/x-rst | Tinybird | support@tinybird.co | null | null | null | null | [] | [] | https://www.tinybird.co/docs/forward/commands | null | <3.14,>=3.10 | [] | [] | [] | [
"aiofiles==24.1.0",
"anthropic==0.55.0",
"boto3",
"click<8.2,>=8.1.6",
"clickhouse-toolset==0.34.dev0",
"colorama==0.4.6",
"confluent-kafka==2.8.2",
"cryptography~=41.0.0",
"croniter==1.3.15",
"docker==7.1.0",
"GitPython~=3.1.32",
"humanfriendly~=8.2",
"plotext==5.3.2",
"prompt_toolkit==3.0.48",
"logfire-api==4.2.0",
"pydantic~=2.11.7",
"pydantic-ai-slim[anthropic]~=0.5.0",
"pydantic-ai-slim[retries]~=0.5.0",
"pyperclip==1.9.0",
"pyyaml<6.1,>=6.0",
"requests<3,>=2.28.1",
"shandy-sqlfmt==0.11.1",
"shandy-sqlfmt[jinjafmt]==0.11.1",
"toposort==1.10",
"tornado~=6.0.0",
"urllib3<2,>=1.26.14",
"watchdog==6.0.0",
"wheel",
"packaging<24,>=23.1",
"llm>=0.19",
"thefuzz==0.22.1",
"python-dotenv==1.1.0",
"pyjwt[crypto]==2.9.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.13 | 2026-02-20T07:30:54.119003 | tinybird-3.5.2.dev0-py3-none-any.whl | 498,107 | 25/e1/78763d8672d553e705d5e3b9446ebc7a8b6a46a5bed49ac6984bb34ad9be/tinybird-3.5.2.dev0-py3-none-any.whl | py3 | bdist_wheel | null | false | 650ecaa596ead91e99bd2ab37ee78d69 | 8272be10a7a44a91c89fc130640d291ccfd6c61f265e217550502ba02134b9b9 | 25e178763d8672d553e705d5e3b9446ebc7a8b6a46a5bed49ac6984bb34ad9be | null | [] | 223 |
2.4 | tritonparse | 0.4.1.dev20260220072946 | TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer Generator for Triton Kernels | # TritonParse
[](https://opensource.org/licenses/BSD-3-Clause)
[](https://meta-pytorch.org/tritonparse/)
**A comprehensive visualization and analysis tool for Triton kernel compilation and launch** — helping developers analyze, debug, and understand Triton kernel compilation processes.
🌐 **[Try it online →](https://meta-pytorch.org/tritonparse/?json_url=https://meta-pytorch.org/tritonparse/dedicated_log_triton_trace_findhao__mapped.ndjson.gz)**
## ✨ Key Features
### 🔍 Visualization & Analysis
- **🚀 Launch Difference Analysis** - Detect and visualize kernel launch parameter variations
- **📊 IR Code View** - Side-by-side IR viewing with synchronized highlighting and line mapping
- **🔄 File Diff View** - Compare kernels across different trace files side-by-side
- **📝 Multi-format IR Support** - View TTGIR, TTIR, LLIR, PTX, and AMDGCN
- **🎯 Interactive Code Views** - Click-to-highlight corresponding lines across IR stages
### 🔧 Reproducer & Debugging Tools
- **🔄 Standalone Script Generation** - Extract any kernel into a self-contained Python script
- **💾 Tensor Data Reconstruction** - Preserve actual tensor data or use statistical approximation
- **🎯 Custom Templates** - Flexible reproducer templates for different workflows
- **🐛 Bug Isolation** - Share reproducible test cases for debugging and collaboration
### 📊 Structured Logging & Analysis
- **📝 Compilation & Launch Tracing** - Capture detailed events with source mapping
- **🔍 Stack Trace Integration** - Full Python stack traces for debugging
- **📈 Metadata Extraction** - Comprehensive kernel statistics
### 🛠️ Developer Tools
- **🌐 Browser-based Interface** - No installation required, works in your browser
- **🔒 Privacy-first** - All processing happens locally, no data uploaded
## 🚀 Quick Start
### 1. Installation
**Four options to install:**
```bash
# install nightly version
pip install -U --pre tritonparse
# install stable version
pip install tritonparse
# install from source
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse
pip install -e .
# pip install the latest version from github
pip install git+https://github.com/meta-pytorch/tritonparse.git
```
**Prerequisites:** Python ≥ 3.10, Triton ≥ 3.4.0, GPU required (NVIDIA/AMD)
TritonParse relies on new features in Triton. If you're using nightly PyTorch, Triton is already included. Otherwise, install the latest Triton:
```bash
pip install triton
```
### 2. Generate Traces
```python
import tritonparse.structured_logging
import tritonparse.parse.utils
# Initialize logging with full tracing options
tritonparse.structured_logging.init(
"./logs/",
enable_trace_launch=True, # Capture kernel launch events (enables torch.compile tracing automatically)
enable_more_tensor_information=True, # Optional: collect tensor statistics (min/max/mean/std)
)
# Your Triton/PyTorch code here
# ... your kernels ...
# Parse and generate trace files
tritonparse.parse.utils.unified_parse("./logs/", out="./parsed_output")
```
> **💡 Note**: `enable_trace_launch=True` automatically enables tracing for both native Triton kernels (`@triton.jit`) and `torch.compile` / TorchInductor kernels.
<details>
<summary>📝 Example output (click to expand)</summary>
```bash
================================================================================
📁 TRITONPARSE PARSING RESULTS
================================================================================
📂 Parsed files directory: /scratch/findhao/tritonparse/tests/parsed_output
📊 Total files generated: 2
📄 Generated files:
1. 📝 dedicated_log_triton_trace_findhao__mapped.ndjson.gz (7.2KB)
2. 📝 log_file_list.json (181B)
================================================================================
✅ Parsing completed successfully!
================================================================================
```
</details>
### 3. Visualize Results
**Visit [https://meta-pytorch.org/tritonparse/](https://meta-pytorch.org/tritonparse/?json_url=https://meta-pytorch.org/tritonparse/dedicated_log_triton_trace_findhao__mapped.ndjson.gz)** and open your local trace files (.ndjson.gz format).
> **🔒 Privacy Note**: Your trace files are processed entirely in your browser - nothing is uploaded to any server!
### 4. Generate Reproducers (Optional)
Extract any kernel into a standalone, executable Python script for debugging or testing:
```bash
# Generate reproducer for the first launch event
# (--line is 0-based: line 0 is compilation event, line 1 is first launch event)
tritonparseoss reproduce ./parsed_output/trace.ndjson.gz --line 1 --out-dir repro_output
# Run the generated reproducer
cd repro_output/<kernel_name>/
python repro_*.py
```
**Python API:**
```python
from tritonparse.reproducer.orchestrator import reproduce
result = reproduce(
input_path="./parsed_output/trace.ndjson.gz",
line_index=0, # 0-based index (first event is 0)
out_dir="repro_output"
)
```
<details>
<summary>🎯 Common Reproducer Use Cases (click to expand)</summary>
- **🐛 Bug Isolation**: Extract a failing kernel into a minimal standalone script
- **⚡ Performance Testing**: Benchmark specific kernels without running the full application
- **🤝 Team Collaboration**: Share reproducible test cases with colleagues or in bug reports
- **📊 Regression Testing**: Compare kernel behavior and performance across different versions
- **🔍 Deep Debugging**: Modify and experiment with kernel parameters in isolation
</details>
## 📚 Complete Documentation
| 📖 Guide | Description |
|----------|-------------|
| **[🏠 Wiki Home](https://github.com/meta-pytorch/tritonparse/wiki)** | Complete documentation and quick navigation |
| **[📦 Installation](https://github.com/meta-pytorch/tritonparse/wiki/01.-Installation)** | Setup guide for all scenarios |
| **[📋 Usage Guide](https://github.com/meta-pytorch/tritonparse/wiki/02.-Usage-Guide)** | Complete workflow, reproducer generation, and examples |
| **[🌐 Web Interface](https://github.com/meta-pytorch/tritonparse/wiki/03.-Web-Interface-Guide)** | Master the visualization interface |
| **[🔧 Developer Guide](https://github.com/meta-pytorch/tritonparse/wiki/04.-Developer-Guide)** | Contributing and architecture overview |
| **[📝 Code Formatting](https://github.com/meta-pytorch/tritonparse/wiki/05.-Code-Formatting)** | Formatting standards and tools |
| **[❓ FAQ](https://github.com/meta-pytorch/tritonparse/wiki/06.-FAQ)** | Quick answers and troubleshooting |
| **[⚙️ Environment Variables](https://github.com/meta-pytorch/tritonparse/wiki/07.-Environment-Variables-Reference)** | Complete environment variable reference |
| **[📖 Python API Reference](https://github.com/meta-pytorch/tritonparse/wiki/08.-Python-API-Reference)** | Full API documentation |
| **[🔄 Reproducer Guide](https://github.com/meta-pytorch/tritonparse/wiki/09.-Reproducer-Guide)** | Comprehensive kernel reproducer guide |
## 📊 Understanding Triton Compilation
TritonParse visualizes the complete Triton compilation pipeline:
**Python Source** → **TTIR** → **TTGIR** → **LLIR** → **PTX/AMDGCN**
Each stage can be inspected and compared to understand optimization transformations.
## 🤝 Contributing
We welcome contributions! Please see our **[Developer Guide](https://github.com/meta-pytorch/tritonparse/wiki/04.-Developer-Guide)** for:
- Development setup and prerequisites
- Code formatting standards (**[Formatting Guide](https://github.com/meta-pytorch/tritonparse/wiki/05.-Code-Formatting)**)
- Pull request and code review process
- Testing guidelines
- Architecture overview
## 📞 Support & Community
- **🐛 Report Issues**: [GitHub Issues](https://github.com/meta-pytorch/tritonparse/issues)
- **💬 Discussions**: [GitHub Discussions](https://github.com/meta-pytorch/tritonparse/discussions)
- **📚 Documentation**: [TritonParse Wiki](https://github.com/meta-pytorch/tritonparse/wiki)
## 📄 License
This project is licensed under the BSD-3 License - see the [LICENSE](LICENSE) file for details.
---
**✨ Ready to get started?** Visit our **[Installation Guide](https://github.com/meta-pytorch/tritonparse/wiki/01.-Installation)** or try the **[online tool](https://meta-pytorch.org/tritonparse/)** directly!
| text/markdown | null | Yueming Hao <yhao@meta.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"triton>3.3.1; extra == \"triton\"",
"pytorch-triton>=3.4.0; extra == \"pytorch-triton\"",
"coverage>=7.0.0; extra == \"test\"",
"ufmt==2.9.0; extra == \"dev\"",
"usort==1.1.0; extra == \"dev\"",
"ruff-api==0.2.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"coverage>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/meta-pytorch/tritonparse"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:30:14.713542 | tritonparse-0.4.1.dev20260220072946.tar.gz | 688,868 | fa/0a/bf58a36fea94e1eda48e69403e92606367c53dfea06516dad24d140051dc/tritonparse-0.4.1.dev20260220072946.tar.gz | source | sdist | null | false | ea0da4a376f1521d741738161578a7f0 | 27331aee0402a512d42f42f8dec6d48115153206dd7965bc1c0c7ac9e6cf61ea | fa0abf58a36fea94e1eda48e69403e92606367c53dfea06516dad24d140051dc | BSD-3-Clause | [
"LICENSE"
] | 231 |
2.4 | pulumi-cloudflare | 6.14.0a1771569194 | A Pulumi package for creating and managing Cloudflare cloud resources. | [](https://github.com/pulumi/pulumi-cloudflare/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/cloudflare)
[](https://pypi.org/project/pulumi-cloudflare)
[](https://badge.fury.io/nu/pulumi.cloudflare)
[](https://pkg.go.dev/github.com/pulumi/pulumi-cloudflare/sdk/v6/go/cloudflare)
[](https://github.com/pulumi/pulumi-cloudflare/blob/master/LICENSE)
# Cloudflare Provider
The Cloudflare resource provider for Pulumi lets you use Cloudflare resources
in your cloud programs. To use this package, please [install the Pulumi CLI
first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/cloudflare
or `yarn`:
$ yarn add @pulumi/cloudflare
### Python
To use from Python, install using `pip`:
$ pip install pulumi_cloudflare
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-cloudflare/sdk/v6
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Cloudflare
## Configuration
The following configuration points are available:
- `cloudflare:apiKey` - (Optional) The API key for operations. Alternatively, can be configured using the `CLOUDFLARE_API_KEY` environment variable. API keys are now considered legacy by Cloudflare, API tokens should be used instead. Must provide only one of `cloudflare:apiKey`, `cloudflare:apiToken`, `cloudflare:apiUserServiceKey`.
- `cloudflare:apiToken` - (Optional) The API Token for operations. Alternatively, can be configured using the `CLOUDFLARE_API_TOKEN` environment variable. Must provide only one of `cloudflare:apiKey`, `cloudflare:apiToken`, `cloudflare:apiUserServiceKey`.
- `cloudflare:apiUserServiceKey` - (Optional) A special Cloudflare API key good for a restricted set of endpoints. Alternatively, can be configured using the `CLOUDFLARE_API_USER_SERVICE_KEY` environment variable. Must provide only one of `cloudflare:apiKey`, `cloudflare:apiToken`, `cloudflare:apiUserServiceKey`.
- `cloudflare:baseUrl` (String) Value to override the default HTTP client base URL. Alternatively, can be configured using the `CLOUDFLARE_BASE_URL` environment variable.
- `cloudflare:email` - (Optional) A registered Cloudflare email address. Alternatively, can be configured using the `CLOUDFLARE_EMAIL` environment variable. Required when using `cloudflare:apiKey`. Conflicts with `cloudflare:apiToken`.
- `cloudflare:userAgentOperatorSuffix` - (Optional) A value to append to the HTTP User Agent for all API calls. This value is not something most users need to modify however, if you are using a non-standard provider or operator configuration, this is recommended to assist in uniquely identifying your traffic. **Setting this value will remove the Pulumi version from the HTTP User Agent string and may have unintended consequences.** Alternatively, can be configured using the `CLOUDFLARE_USER_AGENT_OPERATOR_SUFFIX` environment variable.
## Reference
For further information, please visit [the Cloudflare provider docs](https://www.pulumi.com/docs/intro/cloud-providers/cloudflare) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/cloudflare).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, cloudflare | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-cloudflare"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T07:29:14.893394 | pulumi_cloudflare-6.14.0a1771569194.tar.gz | 1,524,204 | 37/25/a636de057b20b41947b21d463cec0cc8b4c2b3e9dc41e1ca216086b651d9/pulumi_cloudflare-6.14.0a1771569194.tar.gz | source | sdist | null | false | 07320e8b1d4125e3765da82c4f615cc7 | ea34a7cbee808ea025614cb7cb0ea03cf1154db8d6eb4dd0b05de293ef8d8f24 | 3725a636de057b20b41947b21d463cec0cc8b4c2b3e9dc41e1ca216086b651d9 | null | [] | 221 |
2.4 | jam.py-v7 | 7.0.93 | Jam.py Application Builder is an event-driven framework for the development of web database applications. |
[](https://pypi.org/project/jam.py-v7)  [](https://jampy-docs-v7.readthedocs.io) [](http://pepy.tech/project/jam.py-v7)
## Jam.py is a web front-end application generator that works with both existing databases and newly created ones.
## With Monaco editor and Databricks support!
## This is a fork of jam.py, created to continue support and development after Andrew retired from the jam.py project. The v7 is now fully released. Please find the v5 master branch archive here: https://github.com/jam-py-v5/jam-py/
## The llms-full.txt is now available:
https://jampy-docs-v7.readthedocs.io/en/latest/llms-full.txt
and
https://jampy-docs-v7.readthedocs.io/en/latest/llms.txt
All batteries included and event driven! What is an event-driven framework?
"An event-driven framework, also known as event-driven architecture (EDA), is a design pattern where software components communicate and react to changes in state or events." Everything in Jam.py can be an event: a mouse click, pressing CTRL+Ins or CTRL+Del, or whatever you define.
Major difference from other products is that the entire application is contained within a **single SQLite3 file**. And it can be **encrypted**!
Another key distinction is the ability to run **any Python procedure directly within the Application Builder as a back-end** - including popular libraries like Matplotlib, Pandas, and NumPy - with the results displayed in the browser. Python procedures can run **synchronously** or **asynchronously** on the server.
Moreover, the **Import tables** feature provides an **instant web front-end** for any supported database. There is no need to code anything, and **authentication is one click away**!
Hope this sparked some interest! Thank you.
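The event-driven idea described above can be illustrated with a minimal, framework-agnostic Python sketch (the event and handler names here are illustrative only, not Jam.py's actual event API):

```python
# Minimal event-dispatch sketch: handlers subscribe to named events
# and are invoked whenever the event fires.
class EventBus:
    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        # Register a handler for a named event.
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, *args):
        # Invoke every handler registered for this event.
        for handler in self._handlers.get(event, []):
            handler(*args)

bus = EventBus()
bus.on("record_inserted", lambda rec: print(f"inserted: {rec}"))
bus.fire("record_inserted", {"id": 1})  # prints "inserted: {'id': 1}"
```

In Jam.py itself, such handlers are attached to items in the Application Builder rather than registered by hand.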
## Installation and Launch
```
pip install jam.py-v7
jam-project.py
server.py
```
[](https://northwind.pythonanywhere.com)
Builder animation:
[](https://northwind.pythonanywhere.com)
Some short videos about how to setup Jam.py and create applications:
* [Creating CRM web database applications from start to finish in 7 minutes with Jam.py framework](https://youtu.be/vY6FTdpABa4)
* [Setting up interface of Jam.py application using Forms Dialogs](https://youtu.be/hvNZ0-a_HHw)
Longer video:
[video](https://youtu.be/qkJvGlgoabU) with dashboards and complex internal logic.
Live demos on PythonAnywhere (please drop an issue to start the app if "Coming Soon!" shows up):
- [SAP Theme Demo](https://jampyapp.pythonanywhere.com)
- [Personal Account Ledger from MS Access template](https://msaccess.pythonanywhere.com)
The apps below demonstrate Matplotlib, Pandas, NumPy and RFM analysis (Recency, Frequency, and Monetary value), directly migrated from MS Access templates:
- [NorthWind Traders from MS Access template V7 (wip)](https://northwind.jampyapplicationbuilder.com)
- [The ERP POC Demo with Italian and English translations](https://sem.pythonanywhere.com)
- [Sir Edward Elgar Discography from MS Access - or any discography](https://elgar.pythonanywhere.com/)
- [Assets/Parts Application (wip, currently Jam V7 Demo)](https://jampy.pythonanywhere.com)
- [Machine Learning (wip)](https://mlearning.pythonanywhere.com)
- [Auto Parts Sales for Brazilian Market (Portuguese)](https://carparts.pythonanywhere.com)
- [Resourcing and Billing Application from MS Access DB (wip)](https://resourcingandbilling.pythonanywhere.com)
- [Job Positions tracking App from MS Access DB (wip)](https://positionstracking.pythonanywhere.com)
- [Kanban/Tasks Application, V7](https://kanban.pythonanywhere.com)
- [Assets Inventory Application, V7 (wip)](https://assetinventory.pythonanywhere.com)
- [Google Authentication, V7](https://ipam2.pythonanywhere.com)
- [IP Management V7 (wip)](https://ipmgmt.pythonanywhere.com)
- [Sistema Integrado de Gestão - IMS for Brazilian Market (Portuguese)](https://imsmax.pythonanywhere.com)
- [ Bills of Materials, sourced from https://github.com/mpkasp/django-bom as no-code, V7 (wip)](https://billsofmaterials.pythonanywhere.com)
Jam.py alternative site:
https://jampyapplicationbuilder.com/
## Main features
Jam.py is an object oriented, event driven framework with hierarchical structure, modular design
and very tight DB/GUI coupling. The server side of Jam.py is written in [Python](https://www.python.org),
the client utilizes [JavaScript](https://developer.mozilla.org/en/docs/Web/JavaScript),
[jQuery](https://jquery.com) and [Bootstrap](https://getbootstrap.com/docs/5.0/).
* Simple, clear and efficient IDE. The development takes place in the
Application builder, an application written completely in Jam.py.
* “All in the browser” framework. With Jam.py, all you need are two pages
in the browser, one for the project, the other for the Application builder.
Make changes in the Application builder, go to the project, refresh the page,
and see the results.
* Supports SQLite, PostgreSQL, MySQL, Firebird, MSSQL and
Oracle databases. The concept of the framework allows you to migrate from
one database to another without changing the project.
* Authentication, authorization, session management, roles and permissions.
* Automatic creation and modification of database tables and SQL queries generation.
* Data-aware controls.
* Open framework. You can use any Javascript/Python libraries.
* Rich, informative reports. Band-oriented report generation based on
[LibreOffice](https://www.libreoffice.org) templates.
* Charts. You can use the free [jsCharts](http://www.jscharts.com) library
or any JavaScript charting library to create charts that represent and analyze your application data.
* Saves an audit trail/change history of modifications made by users.
* Predefined css themes.
* Develop and test locally, update remotely. Jam.py has Export and Import
utilities that allow developers to store all metadata (database structures,
project parameters and code) in a file that can be loaded by another
application to apply all the changes.
## Documentation
All updated documentation for v7 is online at
https://jampy-docs-v7.readthedocs.io/
or
https://jam-py-v7.github.io/jampy-docs-v7/
Brazilian Portuguese translation started at
https://jampy-docs-v7-br-pt.readthedocs.io/
Simplified Chinese translation started at
https://jampy-docs.readthedocs.io/projects/V7/zh-cn/latest
Please visit https://jampy-docs-v7.readthedocs.io/en/latest/intro/install.html for Python and
framework installation or https://jampy-docs-v7.readthedocs.io/en/latest/intro/new_project.html how to create a
new project.
Jam.py application design tips are at https://jampy-application-design-tips.readthedocs.io/
For general discussion, ideas or similar, please visit the mailing group https://groups.google.com/g/jam-py or
the FB page https://www.facebook.com/groups/jam.py/ (paused at the moment)
## Sponsor
Jam.py is raising funds to keep the software free for everyone, and we need the support of the entire community to do it. [Donate to Jam.py on Github](https://github.com/sponsors/platipusica) to show your support.
## License
Jam.py is licensed under the BSD License.
## Original Author
Andrew Yushev
See also the list of [contributors](http://jam-py.com/contributors.html)
who participated in this project.
## Maintainers
[crnikaurin](https://github.com/crnikaurin), [platipusica](https://github.com/platipusica)
| text/markdown | Andrew Yushev | yushevaa@gmail.com | null | null | BSD | null | [
"Development Status :: 2 - Pre-Alpha",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: JavaScript",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Database",
"Topic :: Database :: Front-Ends"
] | [] | https://github.com/jam-py-v5/jam-py | null | >=3.7 | [] | [] | [] | [
"Werkzeug>=3.0.0",
"sqlalchemy",
"esprima",
"pyjsparser",
"jsmin",
"sqlparse",
"standard-imghdr"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:28:53.691742 | jam_py_v7-7.0.93.tar.gz | 22,562,878 | 8a/bf/31ce2c4e1d94ed432ae3c847903d157711ab2ce69cf5737b5dbe4f005250/jam_py_v7-7.0.93.tar.gz | source | sdist | null | false | fc00bce553247955bfa3dd3a6087a8e2 | 658ff260649f4f9505b456f7e84626101ef0b1cf74b261bf00d3cc362299bd80 | 8abf31ce2c4e1d94ed432ae3c847903d157711ab2ce69cf5737b5dbe4f005250 | null | [
"LICENSE",
"AUTHORS"
] | 0 |
2.4 | smartllm | 0.1.4 | A unified async Python wrapper for multiple LLM providers with OpenAI Response API and reasoning support | # SmartLLM
A unified async Python wrapper for multiple LLM providers with a consistent interface.
[](https://www.python.org/downloads/)
[](LICENSE)
## Features
- **Unified Interface** - Single API for multiple LLM providers (OpenAI, AWS Bedrock)
- **Async/Await** - Built on asyncio for high-performance concurrent requests
- **Smart Caching** - Automatic response caching to reduce costs and latency
- **Auto Retry** - Exponential backoff retry logic for transient failures
- **Structured Output** - Native Pydantic model support for type-safe responses
- **Streaming** - Real-time streaming responses for better UX
- **Rate Limiting** - Built-in concurrency control per model
- **Colored Logging** - Beautiful console output for debugging
- **OpenAI Response API** - Full support for OpenAI's primary API including reasoning models
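The auto-retry behavior listed above works roughly like the following stdlib-only sketch. Note this is an illustration of the exponential-backoff pattern, not SmartLLM's exact internals (the delays and exception handling are assumptions):

```python
import asyncio

async def with_retries(call, max_retries=3, base_delay=1.0):
    """Retry an async callable, doubling the delay after each failure."""
    for attempt in range(max_retries + 1):
        try:
            return await call()
        except Exception:
            if attempt == max_retries:
                raise  # out of attempts: surface the last error
            # Wait base_delay, 2*base_delay, 4*base_delay, ... between attempts.
            await asyncio.sleep(base_delay * 2 ** attempt)
```

A production implementation would typically also restrict which exceptions are retried (e.g. only transient network or rate-limit errors).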
## Installation
```bash
pip install smartllm
```
### Optional Dependencies
Install only the providers you need:
```bash
# For OpenAI
pip install smartllm[openai]
# For AWS Bedrock
pip install smartllm[bedrock]
# For all providers
pip install smartllm[all]
```
## Quick Start
### Basic Usage
```python
import asyncio
from smartllm import LLMClient, TextRequest
async def main():
# Auto-detects provider from environment variables
async with LLMClient(provider="openai") as client:
response = await client.generate_text(
TextRequest(prompt="What is the capital of France?")
)
print(response.text)
asyncio.run(main())
```
### Multi-turn Conversations
```python
from smartllm import LLMClient, MessageRequest, Message
async with LLMClient(provider="openai") as client:
messages = [
Message(role="user", content="My name is Alice."),
Message(role="assistant", content="Nice to meet you, Alice!"),
Message(role="user", content="What's my name?"),
]
response = await client.send_message(
MessageRequest(messages=messages)
)
print(response.text) # "Your name is Alice."
```
### Streaming Responses
```python
from smartllm import LLMClient, TextRequest
async with LLMClient(provider="openai") as client:
request = TextRequest(
prompt="Write a short poem about Python.",
stream=True
)
async for chunk in client.generate_text_stream(request):
print(chunk.text, end="", flush=True)
```
### Structured Output with Pydantic
```python
from pydantic import BaseModel
from smartllm import LLMClient, TextRequest
class Person(BaseModel):
name: str
age: int
occupation: str
async with LLMClient(provider="openai") as client:
response = await client.generate_text(
TextRequest(
prompt="Generate a person profile for a software engineer named John, age 30.",
response_format=Person
)
)
person = response.structured_data
print(f"{person.name} is a {person.age} year old {person.occupation}")
```
## Configuration
### Environment Variables
**OpenAI:**
```bash
export OPENAI_API_KEY="your-api-key"
export OPENAI_MODEL="gpt-4o-mini" # Optional
```
**AWS Bedrock:**
```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"
export BEDROCK_MODEL="anthropic.claude-3-sonnet-20240229-v1:0" # Optional
```
### Programmatic Configuration
```python
from smartllm import LLMClient, LLMConfig
config = LLMConfig(
provider="openai",
api_key="your-api-key",
default_model="gpt-4o",
temperature=0.7,
max_tokens=2048,
max_retries=3,
)
async with LLMClient(config) as client:
# Use client...
pass
```
### Customizing Defaults
```python
from smartllm import defaults
# Modify global defaults
defaults.DEFAULT_TEMPERATURE = 0.7
defaults.DEFAULT_MAX_TOKENS = 4096
defaults.DEFAULT_MAX_RETRIES = 5
```
### OpenAI API Types
SmartLLM supports both OpenAI APIs via the `api_type` parameter:
- `"responses"` (default) - OpenAI's primary [Response API](https://platform.openai.com/docs/api-reference/responses), recommended for all modern models
- `"chat_completions"` - Legacy [Chat Completions API](https://platform.openai.com/docs/api-reference/chat), supported indefinitely
```python
# Response API (default)
response = await client.generate_text(
TextRequest(prompt="Hello", api_type="responses")
)
# Chat Completions API (legacy)
response = await client.generate_text(
TextRequest(prompt="Hello", api_type="chat_completions")
)
```
### Reasoning Models
For models that support reasoning (e.g. GPT-5.x), use `reasoning_effort` to control how much the model reasons before responding. Reasoning tokens are returned in `response.metadata`:
```python
response = await client.generate_text(
TextRequest(
prompt="Solve: what is the 100th Fibonacci number?",
reasoning_effort="high", # "low", "medium", or "high"
)
)
print(response.text)
print(f"Reasoning tokens used: {response.metadata.get('reasoning_tokens', 0)}")
```
Note: reasoning models do not support `temperature`. Passing a value other than `1` will raise a `ValueError`.
### Reasoning with Structured Output
```python
from pydantic import BaseModel
from smartllm import LLMClient, TextRequest
class Solution(BaseModel):
answer: float
unit: str
explanation: str
async with LLMClient(provider="openai") as client:
response = await client.generate_text(
TextRequest(
prompt="A train leaves city A at 60mph toward city B (300 miles away). Another leaves B at 90mph. When do they meet?",
response_format=Solution,
reasoning_effort="medium",
)
)
solution = response.structured_data
print(f"{solution.answer} {solution.unit}: {solution.explanation}")
print(f"Reasoning tokens: {response.metadata.get('reasoning_tokens', 0)}")
```
## Advanced Features
### Caching
Responses are automatically cached when `temperature=0`:
```python
# First call - hits API
response1 = await client.generate_text(
TextRequest(prompt="What is 2+2?", temperature=0)
)
# Second call - uses cache (instant, free)
response2 = await client.generate_text(
TextRequest(prompt="What is 2+2?", temperature=0)
)
# Clear cache for specific request
response3 = await client.generate_text(
TextRequest(prompt="What is 2+2?", temperature=0, clear_cache=True)
)
```
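A cache like this needs a stable key per request. One minimal way to derive such a key, shown here as a sketch (SmartLLM's actual key format may differ):

```python
import hashlib
import json

def cache_key(prompt, model="gpt-4o-mini", temperature=0):
    """Derive a stable cache key from the request parameters."""
    # sort_keys makes the serialization deterministic across runs.
    payload = json.dumps(
        {"prompt": prompt, "model": model, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```

Identical requests map to the same key, so a repeated temperature-0 call can be served from the cache without hitting the API.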
### Concurrent Requests
```python
import asyncio
from smartllm import LLMClient, TextRequest
async with LLMClient(provider="openai") as client:
prompts = ["Question 1", "Question 2", "Question 3"]
tasks = [
client.generate_text(TextRequest(prompt=p))
for p in prompts
]
responses = await asyncio.gather(*tasks)
```
### Rate Limiting
```python
# Limit concurrent requests
client = LLMClient(provider="openai", max_concurrent=5)
```
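The `max_concurrent` limit can be understood as a semaphore wrapped around each request. A stdlib-only sketch of the idea (not SmartLLM's actual implementation):

```python
import asyncio

async def bounded_gather(coros, max_concurrent=5):
    """Run coroutines concurrently, at most max_concurrent at a time."""
    sem = asyncio.Semaphore(max_concurrent)

    async def run(coro):
        # Each task must acquire a slot before it may proceed.
        async with sem:
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))
```

Results come back in submission order, just like `asyncio.gather`, while the semaphore caps how many requests are in flight simultaneously.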
### Provider-Specific Clients
For advanced use cases, access provider-specific clients:
```python
from smartllm.openai import OpenAILLMClient, OpenAIConfig
from smartllm.bedrock import BedrockLLMClient, BedrockConfig
# OpenAI-specific features
openai_config = OpenAIConfig(api_key="...", organization="...")
async with OpenAILLMClient(openai_config) as client:
models = await client.list_available_models()
# Bedrock-specific features
bedrock_config = BedrockConfig(aws_region="us-east-1")
async with BedrockLLMClient(bedrock_config) as client:
models = await client.list_available_model_ids()
```
## Supported Providers
- **OpenAI** - GPT models via OpenAI API
- **AWS Bedrock** - Claude, Llama, Mistral, and Titan models
## API Reference
### Core Classes
- **`LLMClient`** - Unified client for all providers
- **`LLMConfig`** - Unified configuration
- **`TextRequest`** - Single prompt request
- **`MessageRequest`** - Multi-turn conversation request
- **`TextResponse`** - LLM response with metadata
- **`Message`** - Conversation message
- **`StreamChunk`** - Streaming response chunk
### Request Parameters
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `prompt` | str | Input text prompt | Required |
| `model` | str | Model ID to use | Config default |
| `temperature` | float | Sampling temperature (0-1) | 0 |
| `max_tokens` | int | Maximum output tokens | 2048 |
| `top_p` | float | Nucleus sampling | 1.0 |
| `system_prompt` | str | System context | None |
| `stream` | bool | Enable streaming | False |
| `response_format` | BaseModel | Pydantic model for structured output | None |
| `use_cache` | bool | Enable caching | True |
| `clear_cache` | bool | Clear cache before request | False |
| `api_type` | str | OpenAI API type (`"responses"` or `"chat_completions"`) | `"responses"` |
| `reasoning_effort` | str | Reasoning effort (`"low"`, `"medium"`, `"high"`) | None |
## Error Handling
```python
from smartllm import LLMClient, TextRequest
async with LLMClient(provider="openai") as client:
try:
response = await client.generate_text(
TextRequest(prompt="Hello")
)
except ValueError as e:
print(f"Configuration error: {e}")
except Exception as e:
print(f"API error: {e}")
```
## Development
### Setup
```bash
git clone https://github.com/Redundando/smartllm.git
cd smartllm
pip install -r requirements-dev.txt
```
### Running Tests
```bash
# Unit tests
pytest tests/unit/ -v
# Integration tests (select model interactively)
pytest tests/integration/
# Integration tests with a specific model
pytest tests/integration/ --model gpt-4o
# Integration tests with a reasoning model
pytest tests/integration/ --model gpt-5.2
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Changelog
### Version 0.1.4
- Fixed logger name from `aws_llm_wrapper` to `smartllm`
- Removed redundant `response_format=json_object` when using tool-based structured output
- Cache read failures now log a warning instead of silently returning `None`
- Added `reasoning_effort` warning when used with Bedrock models
- Test suite now supports model selection via `--model` CLI option or interactive prompt
- Integration tests support both OpenAI and AWS Bedrock models
- Bedrock streaming chunk parsing fixed for Claude models
### Version 0.1.0
- Initial public release
- Unified interface for multiple providers
- OpenAI support (GPT models)
- AWS Bedrock support (Claude, Llama, Mistral, Titan)
- Async/await architecture
- Smart caching with temperature=0
- Auto retry with exponential backoff
- Structured output with Pydantic models
- Streaming responses
- Rate limiting and concurrency control
- OpenAI Response API support (primary interface)
- Reasoning model support with `reasoning_effort` parameter
- Comprehensive test suite
## Support
- **Issues**: [GitHub Issues](https://github.com/Redundando/smartllm/issues)
- **Email**: arved.kloehn@gmail.com
## Acknowledgments
Built with love using:
- [Pydantic](https://pydantic.dev/) for data validation
- [aioboto3](https://github.com/terrycain/aioboto3) for AWS async support
- [OpenAI Python SDK](https://github.com/openai/openai-python) for OpenAI integration
| text/markdown | Arved Klöhn | Arved Klöhn <arved.kloehn@gmail.com> | null | null | MIT | llm, openai, bedrock, claude, gpt, async, ai, ml | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | https://github.com/Redundando/smartllm | null | >=3.8 | [] | [] | [] | [
"pydantic>=2.0.0",
"openai>=1.0.0; extra == \"openai\"",
"aioboto3>=12.0.0; extra == \"bedrock\"",
"openai>=1.0.0; extra == \"all\"",
"aioboto3>=12.0.0; extra == \"all\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Redundando/smartllm",
"Repository, https://github.com/Redundando/smartllm",
"Issues, https://github.com/Redundando/smartllm/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T07:27:46.361548 | smartllm-0.1.4.tar.gz | 30,345 | c4/ba/876470d578408e8c7b2227fd502be20c698d6cfd4cabcdb44b903b29593b/smartllm-0.1.4.tar.gz | source | sdist | null | false | d35869979089fefaf1388389cf1ab226 | 461cf706ffff76d59da30300d3d218985a84275dae1f9ff19981505578e67ab8 | c4ba876470d578408e8c7b2227fd502be20c698d6cfd4cabcdb44b903b29593b | null | [
"LICENSE"
] | 250 |
2.4 | pyside-cli | 1.2.0b1 | A command-line tool for quickly creating and managing PySide6 projects. | # CLI for PySide Template
## Quick Overview
This is a companion CLI for **pyside\_template** (not an official PySide tool).
It helps you quickly create a template project:
```bash
mkdir app && cd app
pip install "pyside-cli>=1.0.0"
pyside-cli create . # requires: git
```
You can also build the project or run tests with a single command.
```bash
pyside-cli build --onefile # for build: requires pyside6, nuitka
pyside-cli test # for testing: requires pytest
```
## Links
- [PyPI - pyside-cli](https://pypi.org/project/pyside-cli/)
- [pyside\_template](https://github.com/SHIINASAMA/pyside_template)
| text/markdown; charset=UTF-8; variant=GFM | null | Kaoru <shiinasama2001@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/SHIINASAMA/pyside-cli",
"Issues, https://github.com/SHIINASAMA/pyside-cli/issues",
"Repository, https://github.com/SHIINASAMA/pyside-cli"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T07:27:31.954778 | pyside_cli-1.2.0b1.tar.gz | 61,693 | b4/a8/2ae8f15b67c26c6857ab2059b68c4ce6f83fdcb20b0497ac2474839c0470/pyside_cli-1.2.0b1.tar.gz | source | sdist | null | false | be12af5f89a8d0ee2ae50c3a00aa4387 | 5ac58739ed3d5a759e470858e02521750d7e4bc44d4916b3525cb5bf75f3a261 | b4a82ae8f15b67c26c6857ab2059b68c4ce6f83fdcb20b0497ac2474839c0470 | null | [] | 335 |
2.4 | django-autoapp | 2.0.0 | Auto-generate Django apps with full CRUD boilerplate — models, views, URLs, forms, admin, and templates. | # Django AutoApp
[](https://pypi.org/project/django-autoapp/)
[](https://pypi.org/project/django-autoapp/)
[](https://www.djangoproject.com/)
[](LICENSE)
**Auto-generate Django apps with full CRUD boilerplate** — models, views, URLs, forms, admin, templates, and static files — in a single command.
## Installation
```sh
pip install django-autoapp
```
## Quick Start — New Project from Scratch
Bootstrap a full Django project + virtual environment in one command:
```sh
mkdir my_site && cd my_site
django-autoapp-init myproject
```
This creates:
```
my_site/
├── venv/ ← isolated virtual environment
└── myproject/ ← Django project (manage.py, settings, etc.)
```
Then follow the printed instructions:
```sh
cd myproject
..\venv\Scripts\activate # Windows
source ../venv/bin/activate # macOS / Linux
python manage.py autoapp blog Post
python manage.py runserver
```
## Adding to an Existing Project
Add to your `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
# ...
"django_autoapp",
]
```
## Usage
```sh
python manage.py autoapp <APP_NAME> <MODEL_NAME>
```
### Example
```sh
python manage.py autoapp blog Post
```
This generates a complete `blog/` app with a `Post` model, including:
- `models.py` — Model with `name`, `created_at`, `updated_at` fields
- `views.py` — ListView, CreateView, DetailView, UpdateView, DeleteView
- `urls.py` — URL patterns with proper namespacing
- `forms.py` — ModelForm
- `admin.py` — Admin registration with all fields displayed
- `apps.py` — AppConfig
- `templates/` — List, form, detail, and confirm-delete HTML templates
- `management/commands/run_project.py` — One-command project runner
### Options
| Flag | Description |
|------|-------------|
| `--dry-run` | Simulate execution without writing any files |
| `--force` | Overwrite existing files and directories |
| `--template-dir PATH` | Use a custom Jinja2 template directory |
## Requirements
- Python ≥ 3.10
- Django ≥ 4.2
- Jinja2 ≥ 3.1
## License
[MIT](LICENSE)
| text/markdown | null | Dhiraj Jadav <jadavdhiraj020@gmail.com> | null | null | MIT | django, boilerplate, code-generator, crud, scaffolding | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Code Generators",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=4.2",
"Jinja2>=3.1",
"djangorestframework>=3.14; extra == \"api\""
] | [] | [] | [] | [
"Homepage, https://github.com/dhirajjadav/django-autoapp",
"Repository, https://github.com/dhirajjadav/django-autoapp",
"Issues, https://github.com/dhirajjadav/django-autoapp/issues"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-20T07:26:59.863815 | django_autoapp-2.0.0.tar.gz | 28,371 | 86/1f/65eba89ba32b959e43e803cd322fe1921696c55c8393abe30635dba24a12/django_autoapp-2.0.0.tar.gz | source | sdist | null | false | 434f605d574b980bad4d38b46b8c6bd3 | 1d12de59b20ef747730c98da0d9c6998317feb6b25fdbf9878c1ea46e207d7b9 | 861f65eba89ba32b959e43e803cd322fe1921696c55c8393abe30635dba24a12 | null | [
"LICENSE"
] | 255 |
2.4 | camat | 0.1.2 | CAMAT: tools for symbolic music parsing, analysis, and rendering. | # CAMAT
CAMAT is a Python toolkit for symbolic music parsing, analysis, pattern search, and score rendering.
Optimized for Python >= 3.11
## Installation
```bash
pip install camat
```
## What Is Included
- Parsing helpers for `music21` and `partitura` backends.
- Pattern search and similarity utilities.
- Piano-roll and overlay visualization helpers.
- Verovio-based rendering utilities.
## Quick Start
```python
from camat import get_parse_files, run_pattern_search
parse_files = get_parse_files("music21") # or "partitura"
results, dfs_by_name, last_df = parse_files(["path/to/score.mxl"])
# Example: run pattern search on a matrix and kernel
# out = run_pattern_search(matrix_source, kernel_source)
```
## Documentation
This repo includes a Read the Docs-ready Sphinx project in `docs/` and
config in `.readthedocs.yaml`.
Local build:
```bash
pip install -r docs/requirements.txt
sphinx-build -b html docs docs/_build/html
```
## Mensural MEI Helper
If your MEI files use mensural duration labels (for example `semibrevis`), you can normalize them for partitura compatibility:
```bash
python scripts/normalize_mensural_mei.py path/to/input.mei -o path/to/output.mei
```
You can override injected default meter if needed:
```bash
python scripts/normalize_mensural_mei.py path/to/input.mei -o path/to/output.mei --meter-count 2 --meter-unit 2
```
`parse_files_partitura` applies this preprocessing automatically by default:
- `normalize_mensural_durations=True`
- `inject_missing_meter_signature=True` (defaults to `4/4`, configurable)
- `prefer_verovio_for_mensural=True` (detects mensural MEI markers and runs Verovio first)
- `try_verovio_mei_conversion=True` (retries unsupported MEI structures through Verovio, still using partitura)
- `verovio_mensural_to_cmn=True`
- `verovio_duration_equivalence=None` (set if you want explicit mensural-to-CMN scaling)
- `verovio_mensural_score_up=False`
If you want a strict partitura-only workflow (no music21 fallback), set:
- `allow_music21_fallback=False`
## Repository Layout
- `camat/`: package source used for PyPI distribution.
- `CAMAT_revamped/`: legacy development notebooks and experiments.
- `CHANGELOG.md`: release notes.
- `test_corpus/`: various test data.
## License
MIT (see `LICENSE`).
| text/markdown | Egor Polyakov | null | null | null | MIT License
Copyright (c) 2025 Egor Polyakov
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| music, symbolic-music, analysis, music21, partitura, verovio | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Sound/Audio :: Analysis",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"bokeh",
"ipycanvas",
"ipython",
"ipywidgets",
"matplotlib",
"music21",
"numpy",
"pandas",
"partitura",
"requests",
"tqdm",
"verovio"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:26:41.712524 | camat-0.1.2.tar.gz | 90,821 | 25/18/5aef3ead9e1d097f9a90f232ede5456f524ad19414a64e3899baf8aa78d2/camat-0.1.2.tar.gz | source | sdist | null | false | 527619ff73838ad4b87b6074330665ca | 71f13d38829280b59c520685d46dfed120bfa733bd85fafadc4c6a0eb7da91d3 | 25185aef3ead9e1d097f9a90f232ede5456f524ad19414a64e3899baf8aa78d2 | null | [
"LICENSE"
] | 259 |
2.4 | mxlbricks | 0.7.0 | A package to build metabolic models | <p align="center">
<img src="https://raw.githubusercontent.com/Computational-Biology-Aachen/mxl-bricks/refs/heads/main/docs/assets/logo.png" width="400px" alt='mxlbricks-logo'>
</p>
# MxlBricks
[](https://pypi.python.org/pypi/mxlbricks)
[![docs][docs-badge]][docs]

[](https://github.com/astral-sh/ruff)
[](https://github.com/PyCQA/bandit)
[](https://pepy.tech/projects/mxlbricks)
[docs-badge]: https://img.shields.io/badge/docs-main-green.svg?style=flat-square
[docs]: https://computational-biology-aachen.github.io/mxl-bricks/
MxlBricks is a Python package to build mechanistic models composed of pre-defined reactions (bricks). This facilitates re-use and interoperability between different models by sharing common parts.
## Installation
You can install MxlBricks using pip: `pip install mxlbricks`.
If you want access to the sundials solver suite via the [assimulo](https://jmodelica.org/assimulo/) package, we recommend setting up a virtual environment via [pixi](https://pixi.sh/) or [mamba / conda](https://mamba.readthedocs.io/en/latest/) using the [conda-forge](https://conda-forge.org/) channel.
```bash
pixi init
pixi add python assimulo
pixi add --pypi mxlbricks
```
## Development setup
Install pixi [as described in the docs](https://pixi.sh/latest/#installation).
Run
```bash
pixi install
```
## Models
| Name | Description |
| -------------------- | --------------------------------------------------------------------------- |
| Ebenhöh 2011 | PSII & two-state quencher & ATP synthase |
| Ebenhöh 2014 | PETC & state transitions & ATP synthase from Ebenhöh 2011 |
| Matuszyńska 2016 NPQ | 2011 + PSII & four-state quencher |
| Matuszyńska 2016 PhD | ? |
| Matuszyńska 2019 | Merges PETC (Ebenhöh 2014), NPQ (Matuszynska 2016) and CBB (Poolman 2000) |
| Saadat 2021 | 2019 + Mehler (Valero ?) & Thioredoxin & extended PSI states & consumption |
| van Aalst 2023 | Saadat 2021 & Yokota 1985 & Witzel 2010 |
## References
| Name | Description |
| ------------ | ----------------------------------------------------- |
| Poolman 2000 | CBB cycle, based on Pettersson & Ryde-Pettersson 1988 |
| Yokota 1985 | Photorespiration |
| Valero ? | |
## Tool family 🏠
`MxlBricks` is part of a larger family of tools that are designed with a similar set of abstractions. Check them out!
- [MxlPy](https://github.com/Computational-Biology-Aachen/MxlPy) is a Python package for mechanistic learning (Mxl)
- [MxlWeb](https://github.com/Computational-Biology-Aachen/mxl-web) brings simulation of mechanistic models to the browser!
| text/markdown | null | Marvin van Aalst <marvin.vanaalst@gmail.com> | null | Marvin van Aalst <marvin.vanaalst@gmail.com> | null | metabolic, modelling, ode | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering",
"Topic :: Software Development"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"mxlpy>=0.30.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:26:05.755527 | mxlbricks-0.7.0.tar.gz | 718,792 | 40/40/2d266006ede369a6b0343494ee628720aed5b278de2fdf4af3059da8ce99/mxlbricks-0.7.0.tar.gz | source | sdist | null | false | 8f8097d13ff4e7789f12bd361ca954af | 95d7be17b817ba195f8a5a9584260253c94f676d703d110df3e864ffae2fd421 | 40402d266006ede369a6b0343494ee628720aed5b278de2fdf4af3059da8ce99 | MIT | [] | 250 |
2.4 | vtes-archon | 0.65 | VTES tournament management | # archon
Tournament management
> 📋 For detailed architecture and design information, see [DESIGN.md](DESIGN.md)
> 📝 For version history and changes, see [CHANGELOG.md](CHANGELOG.md)
## Quick Start
For detailed development setup instructions, see [DESIGN.md](DESIGN.md).
### Basic Installation
```bash
nvm install node
nvm use node
python -m virtualenv .venv
source .venv/bin/activate
make update
```
### Windows Users
Four options for Windows users:
- Use [Chocolatey](https://chocolatey.org) as package manager and `choco install make`
- Use the [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/) feature
- Just install the [GNU make binary for Windows](https://gnuwin32.sourceforge.net/packages/make.htm)
- Don't use `make` at all. The [Makefile](Makefile) is just a shortcut;
  you can open it and copy/paste the commands into your PowerShell.
### Using Homebrew
You can use [Homebrew](https://brew.sh/) on Linux or macOS to install Python and its dependencies.
Don't forget to update the CA certificates from time to time.
```bash
brew reinstall ca-certificates openssl
```
### Tools & Frameworks
We are using standard tools and frameworks that `make update` will install and update for you. See [DESIGN.md](DESIGN.md) for detailed technology stack information.
## Make targets
- `make geodata` download and refresh the geographical data in [geodata](src/archon/geodata)
- `make test` runs the tests, formatting and linting checks
- `make serve` runs a dev server with watchers for auto-reload when changes are made to the source files
- `make clean` cleans the repository from all transient build files
- `make build` builds the python package
- `make release` creates and pushes a git tag for this version and publishes the package on [PYPI](https://pypi.org)
## CLI
The `archon` CLI gives access to useful DB-related commands when developing locally.
```bash
> archon --help
Usage: archon [OPTIONS] COMMAND [ARGS]...
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --install-completion Install completion for the current shell. │
│ --show-completion Show completion for the current shell, to copy it or customize the installation. │
│ --help Show this message and exit. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ──────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ reset-db ⚠️ Reset the database ⚠️ Removes all data │
│ list List tournaments │
│ sync-members Update members from the vekn.net website │
│ sync-events Update historical tournaments from the vekn.net website │
│ purge Purge deprecated historical data │
│ add-client Add an authorized client to the platform │
│ recompute-ratings Recompute all tournament ratings │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
## Settings
This software requires some environment settings for multiple functionalities:
### VEKN credentials
Used to collect the VEKN members list, and to publish events and their results.
```bash
export VEKN_LOGIN="<vekn_login>"
export VEKN_PASSWORD="<vekn_password>"
```
### VEKN API
For now, this app uses the [VEKN API](https://bitbucket.org/vekn/vekn-api/src/master/)
to declare and report events. There is an [online documentation](https://www.vekn.net/API/readme.txt).
```bash
export VEKN_PUSH="<vekn_push_token>"
```
### Site configuration
Base URL for the application (used for generating links in emails and API responses):
```bash
export SITE_URL_BASE="http://127.0.0.1:8000"
```
### Discord credentials
Used for the Discord social login. You need to register a
[Discord Application](https://discord.com/developers/applications).
```bash
export DISCORD_CLIENT_ID="<discord_client_id>"
export DISCORD_CLIENT_SECRET="<discord_client_secret>"
```
### Application secrets
Secrets for various security features.
Make sure you use different secure random secrets for different environments.
```bash
export SESSION_KEY="<sign_session_cookie>"
export TOKEN_SECRET="<sign_access_token>"
export HASH_KEY="<hash_user_passwords>"
```
You can use `openssl` to generate each of these secrets:
```bash
openssl rand -hex 32
```
### Email (SMTP) parameters
Used to send the "password reset" email necessary for the basic email login feature.
Note that if you're using GMail, you probably need to generate an
[Application Password](https://myaccount.google.com/apppasswords) for this application.
```bash
export MAIL_SERVER="smtp.gmail.com"
export MAIL_PORT="587"
export MAIL_USERNAME="codex.of.the.damned@gmail.com"
export MAIL_PASSWORD="<app_password>"
export MAIL_FROM="codex.of.the.damned@gmail.com"
export MAIL_FROM_NAME="Archon"
```
## Deployment
For deployment information, see [DESIGN.md](DESIGN.md).
## API Reference
For detailed architecture and design information including offline mode, event-driven architecture, and state management, see [DESIGN.md](DESIGN.md).
### Tournament States
Tournaments progress through the following states:
- **PLANNED**: Initial state. Registration is closed. Only judges can register players.
- **REGISTRATION**: Registration is open. Players can self-register and judges can register players.
- **WAITING**: Check-in is open. Players must check in to play the next round. They can still self-register.
- **PLAYING**: A round is in progress. Judges can add/remove players in the round. Players can self-register for the next one.
- **FINALS**: The finals round is in progress.
- **FINISHED**: Tournament is complete.
**State transitions:**
- PLANNED → (OpenRegistration) → REGISTRATION
- REGISTRATION → (CloseRegistration) → PLANNED
- REGISTRATION → (OpenCheckin) → WAITING
- WAITING → (CancelCheckin) → REGISTRATION
- WAITING → (RoundStart) → PLAYING
- PLAYING → (RoundFinish / RoundCancel) → REGISTRATION
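The transitions above can be read as a small lookup table. The sketch below is illustrative only (FINALS transitions are omitted since the list above does not spell them out) and is not archon's implementation:

```python
# Illustrative sketch of the state machine described above; event and state
# names mirror this document, not archon's actual code.
TRANSITIONS = {
    ("PLANNED", "OpenRegistration"): "REGISTRATION",
    ("REGISTRATION", "CloseRegistration"): "PLANNED",
    ("REGISTRATION", "OpenCheckin"): "WAITING",
    ("WAITING", "CancelCheckin"): "REGISTRATION",
    ("WAITING", "RoundStart"): "PLAYING",
    ("PLAYING", "RoundFinish"): "REGISTRATION",
}

def apply_event(state: str, event: str) -> str:
    """Return the next state, or raise if the event is invalid here."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event} is not valid in state {state}") from None

# Walk a full registration and first-round cycle.
state = "PLANNED"
for event in ("OpenRegistration", "OpenCheckin", "RoundStart", "RoundFinish"):
    state = apply_event(state, event)
print(state)  # REGISTRATION
```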
### Tournament Events
#### OpenRegistration
Opens player registration. Players can then self-register to the tournament.
Only judges can open registration. Only works from PLANNED state.
```json
{
"type": "OPEN_REGISTRATION"
}
```
#### CloseRegistration
Closes player registration. Puts the tournament back in PLANNED state.
Players can no longer self-register, but judges can still register players manually.
Only judges can close registration. Only works from REGISTRATION state.
```json
{
"type": "CLOSE_REGISTRATION"
}
```
#### Register
Neither VEKN nor UID is mandatory. To register a new player who has no VEKN account, provide a new UUID4.
If you do not provide one, a new UUID4 will be generated and an account created for that person.
```json
{
  "type": "REGISTER",
"name": "John Doe",
"vekn": "12300001",
"player_uid": "24AAC87E-DE63-46DF-9784-AB06B2F37A24",
"country": "France",
"city": "Paris"
}
```
#### OpenCheckin
Opens check-in, letting players signal they are present and ready to play.
You should open the check-in just before the round starts to limit
the number of players who do not show up at their table.
```json
{
"type": "OPEN_CHECKIN"
}
```
#### CancelCheckin
Cancel the check-in. Use it if you opened the check-in too early.
Puts the tournament back in the REGISTRATION state.
```json
{
"type": "CANCEL_CHECKIN"
}
```
#### CheckIn
Mark a player as ready to play. Players can self-check-in.
```json
{
  "type": "CHECK_IN",
  "player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8"
}
```
#### CheckEveryoneIn
Check everyone in at once, typically when running registration in situ, or after the first round.
It will not check in players who have dropped (FINISHED state)
or have an active barrier (missing deck, having been disqualified, etc.).
```json
{
"type": "CHECK_EVERYONE_IN"
}
```
#### CheckOut
Move a player back to registration.
```json
{
"type": "CHECK_OUT",
"player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8"
}
```
#### RoundStart
Start the next round. The provided seating must list the player UIDs forming the tables.
Each UID must match a VEKN member UID.
```json
{
"type": "ROUND_START",
"seating": [
["238CD960-7E54-4A38-A676-8288A5700FC8",
"796CD3CE-BC2B-4505-B448-1C2D42E9F140",
"80E9FD37-AD8C-40AA-A42D-138065530F10",
"586616DC-3FEA-4DAF-A222-1E77A2CBD809",
"8F28E4C2-1953-473E-A1C5-C281957072D1"
],[
"BD570AA9-B70C-43CA-AD05-3B4C7DADC28C",
"AB6F75B3-ED60-45CA-BDFF-1BF8DD5F02C4",
"1CB1E9A7-576B-4065-8A9C-F7920AAF977D",
"8907BE41-91A7-4395-AF91-54D94C489A36"
]
]
}
```
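Before sending a `ROUND_START` event, a client can sanity-check the payload. The helper below is hypothetical (not part of archon, which validates server-side) and assumes tables of 4 or 5 players, as in the example above:

```python
def check_seating(seating: list[list[str]]) -> None:
    """Sanity-check a ROUND_START seating payload before sending it.

    Hypothetical client-side helper, not part of archon; it assumes
    tables of 4 or 5 players, as in the example above.
    """
    uids = [uid for table in seating for uid in table]
    if len(uids) != len(set(uids)):
        raise ValueError("a player UID appears more than once in the seating")
    for number, table in enumerate(seating, start=1):
        if len(table) not in (4, 5):
            raise ValueError(
                f"table {number} has {len(table)} players, expected 4 or 5"
            )

# The two tables from the example payload above (UIDs shortened).
check_seating([["a", "b", "c", "d", "e"], ["f", "g", "h", "i"]])
```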
#### RoundAlter
Change a round's seating. Note recorded VPs, if any, stay assigned to the player even if they move.
```json
{
"type": "ROUND_ALTER",
"round": 1,
"seating": [
["238CD960-7E54-4A38-A676-8288A5700FC8",
"796CD3CE-BC2B-4505-B448-1C2D42E9F140",
"80E9FD37-AD8C-40AA-A42D-138065530F10",
"586616DC-3FEA-4DAF-A222-1E77A2CBD809",
"8F28E4C2-1953-473E-A1C5-C281957072D1"
],[
"BD570AA9-B70C-43CA-AD05-3B4C7DADC28C",
"AB6F75B3-ED60-45CA-BDFF-1BF8DD5F02C4",
"1CB1E9A7-576B-4065-8A9C-F7920AAF977D",
"8907BE41-91A7-4395-AF91-54D94C489A36"
]
]
}
```
#### RoundFinish
Finish the current round.
```json
{
"type": "ROUND_FINISH"
}
```
#### RoundCancel
Cancel the current round. All results for this round are discarded.
```json
{
  "type": "ROUND_CANCEL"
}
```
#### SetResult
Set a player's result. Players can set their own and their table's results for the current round.
Only VPs are provided; the GW and TP computations are done by the engine.
```json
{
"type": "SET_RESULT",
"player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8",
"round": 1,
"vps": 2.5
}
```
#### SetDeck
Set a player's deck list. Players can set their own decklist (each round, if it is a multideck tournament).
Accepts a plain-text decklist (any usual format) or a decklist URL (VDB, Amaranth, VTESDecks).
```json
{
"type": "SET_DECK",
"player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8",
"deck": "https://vdb.im/decks/11906"
}
```
The `round` parameter is optional and can only be used by a Judge for corrective action in multideck tournaments.
```json
{
"type": "SET_DECK",
"player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8",
"round": 1,
"deck": "https://vdb.im/decks/11906"
}
```
#### Drop
Drop a player from the tournament. A player can drop by themselves.
A Judge can drop a player if they notice the player has just left.
To **disqualify** a player, use the [Sanction](#sanction) event.
```json
{
"type": "DROP",
"player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8"
}
```
#### Sanction
Sanction (punish) a player.
The sanction levels are: `CAUTION`, `WARNING` and `DISQUALIFICATION`.
Cautions are just informative. Warnings are recorded (accessible to organizers, even in future events).
Disqualifications are recorded and remove the player from the tournament.
Sanctions also have an optional category, one of:
- `DECK_PROBLEM`
- `PROCEDURAL_ERRORS`
- `CARD_DRAWING`
- `MARKED_CARDS`
- `SLOW_PLAY`
- `UNSPORTSMANLIKE_CONDUCT`
- `CHEATING`
```json
{
"type": "SANCTION",
"level": "WARNING",
"player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8",
"comment": "Free comment",
"category": "PROCEDURAL_ERRORS"
}
```
#### Unsanction
Remove all sanctions of a given level for a player.
```json
{
"type": "UNSANCTION",
"level": "WARNING",
"player_uid": "238CD960-7E54-4A38-A676-8288A5700FC8"
}
```
#### Override
Judges can validate an odd table score.
For example, if they disqualify a player but do not award VPs to their predator,
the final table score will not appear valid until it's overridden.
Rounds and tables are counted starting from 1.
```json
{
"type": "OVERRIDE",
"round": 1,
"table": 1,
"comment": "Free form comment"
}
```
#### Unoverride
Remove an override for a table score.
```json
{
  "type": "UNOVERRIDE",
"round": 1,
"table": 1
}
```
#### SeedFinals
The finals are "seeded" first; players then elect their seats in seed order.
```json
{
"type": "SEED_FINALS",
"seeds": ["238CD960-7E54-4A38-A676-8288A5700FC8",
"796CD3CE-BC2B-4505-B448-1C2D42E9F140",
"80E9FD37-AD8C-40AA-A42D-138065530F10",
"586616DC-3FEA-4DAF-A222-1E77A2CBD809",
"8F28E4C2-1953-473E-A1C5-C281957072D1"
]
}
```
#### SeatFinals
Note what seating position finalists have elected.
```json
{
"type": "SEAT_FINALS",
"seating": ["238CD960-7E54-4A38-A676-8288A5700FC8",
"796CD3CE-BC2B-4505-B448-1C2D42E9F140",
"80E9FD37-AD8C-40AA-A42D-138065530F10",
"586616DC-3FEA-4DAF-A222-1E77A2CBD809",
"8F28E4C2-1953-473E-A1C5-C281957072D1"
]
}
```
#### Finish
Finish the tournament. This closes the tournament; the winner, if finals results have been recorded,
is computed automatically.
```json
{
  "type": "FINISH_TOURNAMENT"
}
```
| text/markdown | VEKN | null | null | null | null | vtes, Vampire: The Eternal Struggle, CCG, Tournament | [
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Development Status :: 3 - Alpha",
"Natural Language :: English",
"Operating System :: OS Independent",
"Framework :: FastAPI",
"Topic :: Internet :: WWW/HTTP :: WSGI :: Application"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp",
"fastapi[standard]",
"fastapi-mail",
"itsdangerous",
"jinja2",
"krcg>=4.4",
"orjson",
"psycopg[binary,pool]",
"pyjwt",
"python-dotenv",
"typer",
"uvicorn",
"ansible; extra == \"dev\"",
"black; extra == \"dev\"",
"build; extra == \"dev\"",
"check-manifest; extra == \"dev\"",
"debugpy; extra == \"dev\"",
"ipython; extra == \"dev\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"setuptools-scm; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/vtes-biased/archon"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T07:25:01.385554 | vtes_archon-0.65.tar.gz | 12,806,473 | e4/a6/20afa60fee1718b224aecc10050e06a26815dbe951c72683e36d7eb19a35/vtes_archon-0.65.tar.gz | source | sdist | null | false | 2d393aaad851deab2ca207038ab14a01 | 771944160b81db84405dd9f768274789b2b93d80961bc3f469c729e8a76b01ec | e4a620afa60fee1718b224aecc10050e06a26815dbe951c72683e36d7eb19a35 | MIT | [
"LICENSE"
] | 239 |
2.4 | ttup | 2.2.0 | Terminal file upload tool with web and command-line upload support | <div align="center">
<img src="https://img.shields.io/badge/ttup-文件上传工具-667eea?style=for-the-badge&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCIgZmlsbD0id2hpdGUiPjxwYXRoIGQ9Ik0xOSAzSDVjLTEuMSAwLTIgLjktMiAydjE0YzAgMS4xLjkgMiAyIDJoMTRjMS4xIDAgMi0uOSAyLTJWNWMwLTEuMS0uOS0yLTItMnptLTUgMTRIN3YtMmg3djJ6bTMtNEg3di0yaDEwdjJ6bTAtNEg3VjdoMTB2MnoiLz48L3N2Zz4=" alt="ttup" />
**A simple and efficient terminal file upload tool**
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://codeberg.org/ttup/ttup)
[]()
A lightweight file transfer tool with web and command-line support that makes file sharing quick and easy.
[Quick Start](#-quick-start) · [Features](#-features) · [Installation](#-installation) · [Usage](#-usage) · [Contributing](#-contributing)
</div>
---
## 🎯 What is ttup?
ttup is a file transfer tool that **works out of the box**: no complex configuration, one command to start the server and one command to upload a file.
**Use cases:**
- 📤 Temporarily share files with colleagues and friends
- 📱 Transfer files across devices
- 🔒 Burn-after-reading for sensitive files
- 🤖 Automated file distribution from scripts
- 🏠 Private file service for a home or team
---
## ✨ Features
<table>
<tr>
<td width="50%">
### 🖥️ Multiple Interfaces
- **Web UI** - drag-and-drop upload, clean and simple
- **CLI** - fast terminal workflow
- **curl** - script-friendly interface
- **API** - full RESTful interface
</td>
<td width="50%">
### 🔒 Secure and Controllable
- **Expiration times** - 1s/1m/1h/1d/1w
- **Download limits** - burn-after-reading mode
- **Auto cleanup** - expired files are deleted automatically
- **Unique IDs** - 8-character pickup codes
</td>
</tr>
<tr>
<td width="50%">
### 🎯 Simple and Efficient
- **Zero configuration** - works out of the box
- **Real IP detection** - identified automatically
- **No database** - file-based storage
- **Lightweight** - minimal dependencies
</td>
<td width="50%">
### 📦 Cross-Platform
- **Linux** - fully supported
- **macOS** - native experience
- **Windows** - runs seamlessly
- **Python 3.8+** - broad compatibility
</td>
</tr>
</table>
---
## 📦 Installation
### Option 1: Install the Wheel Package (Recommended)
```bash
# Install the downloaded wheel package
pip install ttup-2.2.0-py3-none-any.whl
```
### Option 2: Install from Source
```bash
git clone https://codeberg.org/ttup/ttup.git
cd ttup
pip install -e .
```
<details>
<summary>📋 System Requirements</summary>
| Item | Requirement |
|------|-------------|
| Python | 3.8+ |
| OS | Linux / macOS / Windows |
| Dependencies | fastapi, uvicorn, click, requests |
| Storage | Depends on file sizes |
</details>
---
## 🚀 Quick Start
### 1️⃣ Start the Server
```bash
ttup-server
```
```
==================================================
ttup file upload service v2.1.0
==================================================
Upload directory: /home/user/.ttup/uploads
Listening on: http://0.0.0.0:7888
==================================================
Open http://localhost:7888 to use the web UI
==================================================
```
<details>
<summary>⚙️ Custom Configuration</summary>
```bash
# Specify the port
ttup-server -p 9000
# Specify the upload directory
ttup-server -d /path/to/uploads
# Specify the bind address
ttup-server -H 127.0.0.1
# Combine options
ttup-server -p 9000 -H 0.0.0.0 -d ~/my-uploads
```
</details>
### 2️⃣ Upload a File
**Option 1: Web UI** → open `http://localhost:7888` in a browser
**Option 2: CLI**
```bash
ttup document.pdf
```
**Option 3: curl**
```bash
curl -F "file=@document.pdf" http://localhost:7888/upload
```
### 3️⃣ Download a File
```bash
# Download from the command line
curl -O http://localhost:7888/file/abc12345
# Or open the link directly in a browser
```
---
## 📖 Usage
### Client Command (ttup)
```
Usage: ttup [OPTIONS] FILENAME
Options:
  -s, --server TEXT        Server address
  -e, --expires-in TEXT    Expiration time (1h, 24h, 7d)
  -n, --max-downloads INT  Maximum number of downloads
  -v, --verbose            Verbose output
  -V, --version            Show version
  -h, --help [cn|en]       Show help (Chinese by default, en=English)
Environment variables:
  TTUP_SERVER              Default server address
```
<details>
<summary>🌐 English Help</summary>
```
USAGE: ttup [OPTIONS] FILENAME
OPTIONS:
-s, --server TEXT Server address (default: from TTUP_SERVER env)
-e, --expires-in TEXT Expiration time (e.g., 1h, 24h, 7d)
-n, --max-downloads INT Maximum download count
-v, --verbose Show detailed output
-V, --version Show version
-h, --help [cn|en] Show help (cn=Chinese, en=English)
ENVIRONMENT:
TTUP_SERVER Default server address
```
</details>
### Server Command (ttup-server)
```
Usage: ttup-server [OPTIONS]
Options:
  -p, --port INTEGER  Port (default: 7888)
  -H, --host TEXT     Bind address (default: 0.0.0.0)
  -d, --dir TEXT      Upload directory (default: ~/.ttup/uploads)
  -V, --version       Show version
  -h, --help [cn|en]  Show help (Chinese by default, en=English)
```
<details>
<summary>🌐 English Help</summary>
```
USAGE: ttup-server [OPTIONS]
OPTIONS:
-p, --port INTEGER Server port (default: 7888)
-H, --host TEXT Bind address (default: 0.0.0.0)
-d, --dir TEXT Upload directory (default: ~/.ttup/uploads)
-V, --version Show version
-h, --help [cn|en] Show help (cn=Chinese, en=English)
```
</details>
### Examples
```bash
# Basic upload
ttup file.pdf
# Expire after 1 hour
ttup -e 1h file.pdf
# Allow only 3 downloads
ttup -n 3 file.pdf
# Burn after reading
ttup -n 1 secret.txt
# Combined: invalid after 24 hours or 5 downloads
ttup -e 24h -n 5 bigfile.zip
# Specify the server
ttup -s http://192.168.1.100:7888 file.pdf
# Use an environment variable
export TTUP_SERVER=http://your-server:7888
ttup file.pdf
```
### Expiration Time Formats
| Value | Meaning | Use case |
|-------|---------|----------|
| `1s` | 1 second | Ephemeral sharing |
| `1m` | 1 minute | Quick transfer |
| `1h` | 1 hour | Short-term sharing |
| `24h` | 24 hours | Everyday use |
| `7d` | 7 days | Valid for a week |
| `30d` | 30 days | Long-term storage |
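A duration string in these formats maps naturally to a number of seconds. The parser below is an illustrative sketch (ttup's own parser may accept other forms), written against the units in the table above:

```python
import re

# Seconds per unit; units match the table above (s, m, h, d).
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_expires_in(value: str) -> int:
    """Convert a duration string like '1h', '24h' or '7d' to seconds.

    Illustrative only; ttup's internal parser may accept other forms.
    """
    match = re.fullmatch(r"(\d+)([smhd])", value)
    if match is None:
        raise ValueError(f"invalid duration: {value!r}")
    amount, unit = match.groups()
    return int(amount) * UNITS[unit]

print(parse_expires_in("1h"))  # 3600
print(parse_expires_in("7d"))  # 604800
```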
---
## 🌐 API
### Endpoints
| Method | Path | Description |
|--------|------|-------------|
| `POST` | `/upload` | Upload a file |
| `GET` | `/file/{id}` | Download a file |
| `GET` | `/info/{id}` | File information |
| `DELETE` | `/file/{id}` | Delete a file |
| `GET` | `/health` | Health check |
### Upload Examples
```bash
# Basic upload
curl -F "file=@test.txt" http://localhost:7888/upload
# With an expiration time
curl -F "file=@test.txt" -F "expires_in=24h" http://localhost:7888/upload
# With a download limit
curl -F "file=@test.txt" -F "max_downloads=5" http://localhost:7888/upload
# All parameters
curl -F "file=@test.txt" \
     -F "expires_in=24h" \
     -F "max_downloads=5" \
     http://localhost:7888/upload
```
### Response Example
```json
{
"file_id": "abc12345",
"filename": "test.txt",
"ext": ".txt",
"size": 1024,
"url": "http://192.168.1.100:7888/file/abc12345",
"expires_at": "2026-02-20T10:00:00",
"max_downloads": 5
}
```
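A client script can pick the share link and pickup code out of this JSON with the standard library; the field names below come straight from the response example above:

```python
import json

# The sample response documented above, verbatim.
payload = """
{
  "file_id": "abc12345",
  "filename": "test.txt",
  "ext": ".txt",
  "size": 1024,
  "url": "http://192.168.1.100:7888/file/abc12345",
  "expires_at": "2026-02-20T10:00:00",
  "max_downloads": 5
}
"""

info = json.loads(payload)
print(f"Share link: {info['url']}")
print(f"Pickup code: {info['file_id']} (expires {info['expires_at']})")
```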
---
## 🚢 Production Deployment
### Run in the Background
```bash
# Using nohup
nohup ttup-server > /var/log/ttup.log 2>&1 &
# Tail the log
tail -f /var/log/ttup.log
```
### Nginx Reverse Proxy
```nginx
server {
listen 80;
server_name your-domain.com;
location / {
proxy_pass http://127.0.0.1:7888;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 100M;
}
}
```
### HTTPS Setup
```bash
# Using Certbot
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d your-domain.com
```
### Firewall Setup
```bash
# Ubuntu/Debian
sudo ufw allow 7888/tcp
# CentOS/RHEL
sudo firewall-cmd --add-port=7888/tcp --permanent
sudo firewall-cmd --reload
```
---
## 📁 Project Layout
```
ttup/
├── ttup/
│   ├── __init__.py      # Package init
│   ├── __main__.py      # Server entry point
│   ├── cli.py           # Client CLI
│   ├── ttup_server.py   # FastAPI service
│   └── static/
│       └── index.html   # Web UI
├── setup.py             # Install configuration
├── LICENSE              # MIT license
└── README.md            # Documentation
```
---
## 🤝 Contributing
Contributions, bug reports, and suggestions are welcome!
1. Fork this repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
---
## 📝 Changelog
### v2.2.0 (2026-02-20)
- 🐛 Fixed file receiving in the web UI
### v2.1.0 (2026-02-20)
- ✨ Added bilingual (Chinese/English) help
- ✨ Streamlined the help text, removing redundant information
- 🐛 Fixed various issues
### v2.0.0 (2026-02-19)
- 🎉 Initial release
- ✅ Basic file upload/download
- ✅ Expiration times
- ✅ Download count limits
- ✅ Automatic cleanup
- ✅ Web UI
- ✅ Command-line tool
---
## 📜 License
This project is open source under the [MIT License](LICENSE).
---
## 🔗 Links
| Link | URL |
|------|-----|
| Project home | https://codeberg.org/ttup/ttup |
| Issue tracker | https://codeberg.org/ttup/ttup/issues |
---
<div align="center">
**If you find it useful, please give it a ⭐ star to support the project!**
[](https://star-history.com/#ttup/ttup&Date)
**Made with ❤️ by ttup**
</div>
| text/markdown | ttup | ttupio@bf00.com | null | null | null | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://codeberg.org/ttup/ttup | null | >=3.8 | [] | [] | [] | [
"click>=8.0.0",
"requests>=2.28.0",
"fastapi>=0.100.0",
"uvicorn[standard]>=0.23.0",
"python-multipart>=0.0.6"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T07:22:19.681714 | ttup-2.2.0.tar.gz | 20,820 | 1c/7e/eaf0ea8ff462e8afa7c42c7f49d6ed129f132307c3ceb905a2b73b7b908e/ttup-2.2.0.tar.gz | source | sdist | null | false | 94d7c8062f410996619815053316beae | 03e744e480704b66d2886a698413b13fbffe5614acb8b873f80e00ec77641f9b | 1c7eeaf0ea8ff462e8afa7c42c7f49d6ed129f132307c3ceb905a2b73b7b908e | null | [
"LICENSE"
] | 252 |
2.4 | molass | 0.8.2 | Matrix Optimization with Low-rank factorization for Automated analysis of SEC-SAXS | <h1 align="center"><a href="https://biosaxs-dev.github.io/molass-library"><img src="docs/_static/molass-title.png" width="300"></a></h1>
Molass Library is a rewrite of [MOLASS](https://pfwww.kek.jp/saxs/MOLASSE.html), a tool for the analysis of SEC-SAXS experiment data currently hosted at [Photon Factory](https://www2.kek.jp/imss/pf/eng/) and [SPring-8](http://www.spring8.or.jp/en/), Japan.
## Tested Platforms
- Python 3.13 on Windows 11
- Python 3.12 on Windows 11
- Python 3.12 on Ubuntu 22.04.4 LTS (WSL2)
## Installation
To install this package, use pip as follows:
```
pip install -U molass
```
## Documentation
- **Tutorial:** https://biosaxs-dev.github.io/molass-tutorial — practical usage, for beginners
- **Essence:** https://biosaxs-dev.github.io/molass-essence — theory, for researchers
- **Technical Report:** https://biosaxs-dev.github.io/molass-technical — technical details, for advanced users
- **Reference:** https://biosaxs-dev.github.io/molass-library — function reference, for coding
- **Legacy Repository:** https://github.com/biosaxs-dev/molass-legacy — legacy code
## Community
To join the community, see:
- **Handbook:** https://biosaxs-dev.github.io/molass-develop — maintenance, for developers
Especially for testing, see the first two sections in
- **Testing:** https://biosaxs-dev.github.io/molass-develop/chapters/06/testing.html
## Copilot Usage
Before starting a Copilot chat session with this repository, please use the following magic phrase to ensure Copilot follows project rules:
> “Please follow the Copilot guidelines in this project for all advice and responses.”
For details on Copilot rules and usage, see [`Copilot/copilot-guidelines.md`](https://github.com/biosaxs-dev/molass-library/blob/master/Copilot/copilot-guidelines.md).
## Optional Features
**Excel reporting (Windows only):**
If you want to use Excel reporting features (Windows only) for backward compatibility, install with the `excel` extra:
```
pip install -U molass[excel]
```
> **Note:** The `excel` extra installs `pywin32`, which is required for Excel reporting and only works on Windows.
| text/markdown | Molass Community | null | Molass Community | null | GNU General Public License v3.0 | SEC-SAXS | [
"Development Status :: 2 - Pre-Alpha",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python"
] | [] | null | null | <3.14,>=3.12 | [] | [] | [] | [
"learnsaxs",
"matplotlib",
"molass-legacy>=0.5.0",
"mrcfile",
"numba",
"numpy",
"psutil",
"pybaselines>=1.2.0",
"ruptures",
"scikit-learn",
"scipy",
"seaborn",
"statsmodels",
"toml",
"tqdm",
"pywin32; extra == \"excel\"",
"pytest; extra == \"testing\"",
"pytest-cov; extra == \"testing\"",
"pytest-env; extra == \"testing\"",
"pytest-order; extra == \"testing\""
] | [] | [] | [] | [
"Repository, https://github.com/biosaxs-dev/molass-library"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:22:08.833337 | molass-0.8.2.tar.gz | 1,524,669 | d1/86/00d25a5c4025f93ffb0d7aa0d2b1b2e335addf2f9241127701b9dd9c2810/molass-0.8.2.tar.gz | source | sdist | null | false | f6c855f3e5d2d6274355dc0bce7f5323 | 2c9a1b7e20fbfe9842fab494e6d2e1027ac05bcc14c84c919a0f61ebb1d66426 | d18600d25a5c4025f93ffb0d7aa0d2b1b2e335addf2f9241127701b9dd9c2810 | null | [
"LICENSE.txt"
] | 242 |
2.4 | fal-client | 0.13.1 | Python client for fal.ai | # fal.ai Python client
This is a Python client library for interacting with ML models deployed on [fal.ai](https://fal.ai).
## Getting started
To install the client, run:
```bash
pip install fal-client
```
To use the client, you need to have an API key. You can get one by signing up at [fal.ai](https://fal.ai). Once you have it, set
it as an environment variable:
```bash
export FAL_KEY=your-api-key
```
Now you can use the client to interact with your models. Here's an example of how to use it:
```python
import fal_client
response = fal_client.run("fal-ai/fast-sdxl", arguments={"prompt": "a cute cat, realistic, orange"})
print(response["images"][0]["url"])
```
## Asynchronous requests
The client also supports asynchronous requests out of the box. Here's an example:
```python
import asyncio
import fal_client
async def main():
response = await fal_client.run_async("fal-ai/fast-sdxl", arguments={"prompt": "a cute cat, realistic, orange"})
print(response["images"][0]["url"])
asyncio.run(main())
```
## Uploading files
If the model requires files as input, you can upload them directly to fal.media (our CDN) and pass the URLs to the client. Here's an example:
```python
import fal_client
audio_url = fal_client.upload_file("path/to/audio.wav")
response = fal_client.run("fal-ai/whisper", arguments={"audio_url": audio_url})
print(response["text"])
```
## Encoding files as in-memory data URLs
If you don't want to upload your file to our CDN service (for latency reasons, for example), you can encode it as a data URL and pass it directly to the client. Here's an example:
```python
import fal_client
audio_data_url = fal_client.encode_file("path/to/audio.wav")
response = fal_client.run("fal-ai/whisper", arguments={"audio_url": audio_data_url})
print(response["text"])
```
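A data URL is just the file's bytes, base64-encoded behind a MIME type. The stdlib sketch below illustrates that format (an assumption about what `encode_file` produces; prefer `fal_client.encode_file` in real code):

```python
import base64
import mimetypes

def to_data_url(path: str) -> str:
    """Build a `data:<mime>;base64,<payload>` URL from a local file.

    Shown only to illustrate the data-URL format; in practice, prefer
    fal_client.encode_file(), which handles this for you.
    """
    mime, _ = mimetypes.guess_type(path)
    mime = mime or "application/octet-stream"
    with open(path, "rb") as handle:
        payload = base64.b64encode(handle.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```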
## Queuing requests
When you want to send a request and keep receiving updates on its status, you can use the `submit` method. Here's an example:
```python
import asyncio
import fal_client
async def main():
response = await fal_client.submit_async("fal-ai/fast-sdxl", arguments={"prompt": "a cute cat, realistic, orange"})
logs_index = 0
async for event in response.iter_events(with_logs=True):
if isinstance(event, fal_client.Queued):
print("Queued. Position:", event.position)
elif isinstance(event, (fal_client.InProgress, fal_client.Completed)):
new_logs = event.logs[logs_index:]
for log in new_logs:
print(log["message"])
logs_index = len(event.logs)
result = await response.get()
print(result["images"][0]["url"])
asyncio.run(main())
```
| text/markdown | Features & Labels <support@fal.ai> | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx<1,>=0.21.0",
"httpx-sse<0.5,>=0.4.0",
"msgpack<2,>=1.0.7",
"websockets>=12.0",
"sphinx; extra == \"docs\"",
"sphinx-rtd-theme; extra == \"docs\"",
"sphinx-autodoc-typehints; extra == \"docs\"",
"pytest; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"pytest-mock; extra == \"test\"",
"pillow; extra == \"test\"",
"fal_client[docs,test]; extra == \"dev\""
] | [] | [] | [] | [
"homepage, https://fal.ai",
"repository, https://github.com/fal-ai/fal"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:21:29.192614 | fal_client-0.13.1.tar.gz | 30,281 | 0d/2c/3097270895a959aa4304b8e38c598182973ab106166e4ae3810533270bd3/fal_client-0.13.1.tar.gz | source | sdist | null | false | cb9888cd4e3137345b02c368e470dbdb | 9e1c07d0a61b452a8ffb48c199de5f2543d7546f1230f6312370443127c5e937 | 0d2c3097270895a959aa4304b8e38c598182973ab106166e4ae3810533270bd3 | null | [] | 27,119 |
2.4 | stewreads-mcp | 0.1.1 | Local MCP server that turns AI conversations into polished ebooks and delivers them to Kindle or email | # stewreads
StewReads is a local MCP server that transforms AI conversations into clean, well-formatted ebooks. Everything runs on your machine, with no account and no cloud backend required. Generate EPUB files from your chats and send them directly to Kindle or any email address. Works with Claude Desktop and other MCP-compatible clients.
## Current Scope
- Local stdio MCP server (no backend calls)
- EPUB generation from markdown
- Save generated ebooks to a configured local directory
- Send generated EPUB files by email (Claude Gmail connector preferred, SMTP fallback)
- Reuse current StewReads ebook-generation prompt
## Not In Scope (Current Iteration)
- PDF generation
## Requirements
- Python 3.10+
- uv
macOS install:
```bash
brew install uv
```
## Install (Recommended)
Install as a local tool:
```bash
uv tool install stewreads-mcp
```
This includes pandoc via `pypandoc-binary`, so no separate pandoc install is required.
## Install (Repo Development)
```bash
uv sync
```
## Configuration
Create `~/.config/stewreads/config.toml`:
```toml
[paths]
output_dir = "/Users/you/Projects/generated_books"
[email]
from_email = "you@gmail.com"
default_to_email = "kindle-or-reader@example.com"
```
Optional environment overrides:
- `STEWREADS_CONFIG_PATH` (path to config file)
- `STEWREADS_OUTPUT_DIR` (overrides configured output dir)
- `STEWREADS_FROM_EMAIL` (overrides sender email)
- `STEWREADS_DEFAULT_TO_EMAIL` (overrides default recipient email)
Fallback-only secret for built-in SMTP tool:
- `STEWREADS_GMAIL_APP_PASSWORD` (Gmail App Password)
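The precedence these overrides imply (environment variable beats config file) is a common pattern. A hypothetical sketch of that resolution logic, not the actual stewreads code:

```python
import tomllib  # Python 3.11+; the package depends on tomli for 3.10

# Stand-in for ~/.config/stewreads/config.toml
CONFIG = 'output_dir = "/Users/you/Projects/generated_books"'

def resolve_output_dir(env: dict) -> str:
    # STEWREADS_OUTPUT_DIR, when set, wins over the config file value
    cfg = tomllib.loads(CONFIG)
    return env.get("STEWREADS_OUTPUT_DIR", cfg["output_dir"])

print(resolve_output_dir({}))                                      # config value
print(resolve_output_dir({"STEWREADS_OUTPUT_DIR": "/tmp/books"}))  # env override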
## Run MCP Server (Tool Install)
```bash
stewreads-mcp
```
## Run MCP Server (Repo Development)
```bash
uv run stewreads-mcp
```
## Claude Desktop (Mac) Example
Add to Claude Desktop MCP config:
```json
{
"mcpServers": {
"stewreads": {
"command": "/Users/you/.local/bin/stewreads-mcp",
"env": {
"STEWREADS_CONFIG_PATH": "/Users/you/.config/stewreads/config.toml"
}
}
}
}
```
If you want to use `email_ebook` (SMTP fallback tool), also set:
```json
"STEWREADS_GMAIL_APP_PASSWORD": "your-16-char-app-password"
```
## Exposed MCP Tools
- `get_stew_prompt()`
- `get_stew_config()`
- `get_email_status()`
- `save_stew_config(output_dir)`
- `save_ebook(markdown, title, filename?, original_prompt?)`
- `email_ebook(to_email?, ebook_path?, subject?, body?)` (SMTP fallback when connector is unavailable)
## First-Time Claude Flow
1. Call `get_stew_config()`.
2. If `configured` is `false`, ask the user for their preferred output directory and call `save_stew_config(output_dir=...)`.
3. Call `save_ebook(...)` after config is set.
4. If Claude Gmail connector is available, ask Claude to send the saved EPUB using that connector.
5. If no connector is configured, call `get_email_status()` and then `email_ebook(...)`.
## Dev Shell Safety
When running multi-step shell commands, you may see `set -euo pipefail`:
- `-e`: stop on command failure.
- `-u`: fail on unset variables.
- `-o pipefail`: a pipeline fails if any command in it fails, not just the last one.
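A quick way to see these flags in action (plain `bash`, no StewReads involvement):

```bash
# Without pipefail, a pipeline's exit status is that of its LAST command:
bash -c 'false | true'; echo "plain pipeline exit: $?"

# With pipefail, any failing stage fails the whole pipeline:
bash -c 'set -o pipefail; false | true'; echo "pipefail exit: $?"

# With -u, expanding an unset variable aborts the script:
bash -c 'set -u; echo "$NOT_SET"' 2>/dev/null || echo "-u caught the unset variable"
```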
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastmcp>=2.0.0",
"pypandoc-binary>=1.15",
"aiofiles>=24.1.0",
"tomli>=2.0.1; python_version < \"3.11\"",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T07:21:23.529049 | stewreads_mcp-0.1.1.tar.gz | 15,328 | 1e/17/7e9b393b8a592a0d1d15a393a741d3a3e2fbf828691d6640483270d1ae3f/stewreads_mcp-0.1.1.tar.gz | source | sdist | null | false | d049d4e0f91ccdb2f572622a9b165c64 | 2e43f54a7fd927be4db348cb46057552995b12f284d62ede5858a079fea3a8ca | 1e177e9b393b8a592a0d1d15a393a741d3a3e2fbf828691d6640483270d1ae3f | null | [] | 258 |
2.4 | pygmi | 3.3.0.0 | Python Geoscience Modelling and Interpretation | PyGMI
=====
.. |pythonversion| image:: https://img.shields.io/pypi/pyversions/pygmi
:alt: PyPI - Python Version
:target: https://pypi.org/project/pygmi
.. |pygmiversion| image:: https://img.shields.io/pypi/v/pygmi
:alt: PyPI - Version
:target: https://pypi.org/project/pygmi
.. |pygmilicence| image:: https://img.shields.io/github/license/patrick-cole/pygmi
:alt: GitHub License
:target: https://github.com/Patrick-Cole/pygmi/blob/pygmi3/LICENSE.txt
.. |pygmirelease| image:: https://img.shields.io/github/release/patrick-cole/pygmi
:alt: GitHub Release
:target: https://github.com/Patrick-Cole/pygmi/releases
.. image:: https://joss.theoj.org/papers/10.21105/joss.07019/status.svg
:target: https://doi.org/10.21105/joss.07019
|pythonversion| |pygmiversion| |pygmilicence| |pygmirelease|
Overview
--------
PyGMI stands for Python Geoscience Modelling and Interpretation. It is a modelling and interpretation suite aimed at magnetic, gravity, remote sensing and other datasets. PyGMI has a graphical user interface, and is meant to be run as such.
PyGMI is developed at the `Council for Geoscience <http://www.geoscience.org.za>`_ (Geological Survey of South Africa).
It includes:
* Magnetic and Gravity 3D forward modelling.
* Cluster Analysis, including use of scikit-learn libraries.
* Routines for cutting, reprojecting and doing simple modifications to data.
* Convenient display of data using pseudo-color, ternary and sunshaded representation.
* MT processing and 1D inversion using MTpy.
* Gravity processing.
* Seismological functions for SEISAN data.
* Remote sensing ratios and improved imports.
It is released under the `GNU General Public License version 3.0 <http://www.gnu.org/copyleft/gpl.html>`_.
The PyGMI `Wiki <http://patrick-cole.github.io/pygmi/index.html>`_ pages include installation and full usage instructions. Contributors can check this `link <https://github.com/Patrick-Cole/pygmi/blob/pygmi3/CONTRIBUTING.md>`_ for ways to contribute.
The latest release version (including windows installers) can be found `here <https://github.com/Patrick-Cole/pygmi/releases>`_.
You may need to install the `Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019 <https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads>`_.
If you have any comments or queries, you can contact the author either through `GitHub <https://github.com/Patrick-Cole/pygmi>`_ or via email at pcole@geoscience.org.za
Installation
------------
The simplest installation of PyGMI is on Windows, using the pre-built `64-bit installer <https://github.com/Patrick-Cole/pygmi/releases>`_.
If you prefer building from source, you can use PyPI or Conda.
Once installed from PyPI, you can run pygmi at the command prompt as follows::

    pygmi
If you are in Python, you can run PyGMI with the following commands::

    from pygmi.main import main
    main()
If you prefer not to install pygmi as a library, download the source code and execute the following command to run it manually::

    python quickstart.py
Requirements
^^^^^^^^^^^^
PyGMI will run on both Windows and Linux. It should be noted that the main development is done in Python 3.13 on Windows.
PyGMI should still work with Python 3.10.
PyGMI is developed and tested with the following libraries:
* fiona>=1.10.1
* geopandas>=1.1.2
* h5netcdf>=1.8.1
* matplotlib>=3.10.8
* natsort>=8.4.0
* numba>=0.63.1
* numexpr>=2.14.1
* openpyxl>=3.1.5
* psutil>=7.2.1
* pyside6>=6.10.1
* pytest>=9.0.2
* pyvista>=0.46.5
* pyvistaqt>=0.11.3
* rasterio>=1.5.0
* rioxarray>=0.21.0
* scikit-learn>=1.8.0
* scikit-image>=0.26.0
* shapelysmooth>=0.2.1
* simpeg>=0.25.0
* beautifulsoup4>=4.14.3
* pyyaml>=6.0.3
* pwlf>=2.5.2
PyPi - Windows
^^^^^^^^^^^^^^
Windows users can use the `WinPython <https://winpython.github.io/>`_ distribution as an alternative to Anaconda. It comes with most libraries preinstalled, so using pip should be sufficient.
Install with the following command::

    pip install pygmi
Should you wish to manually install binaries, related binaries can be obtained at the `website <https://github.com/cgohlke/geospatial-wheels/>`_ by Christoph Gohlke.
If you wish to update GDAL, you will need to download and install:
* fiona
* GDAL
* pyproj
* rasterio
* Rtree
* shapely
All these binaries should be downloaded since they have internal co-dependencies.
PyPi - Linux
^^^^^^^^^^^^
Linux normally comes with Python installed, but the additional libraries will still need to be installed.
The process is as follows::

    sudo apt-get install pipx
    pipx ensurepath
    pipx install pygmi
Once installed, running pygmi can be done at the command prompt as follows::

    pygmi
If you get the error *qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.*, try the following command, since this is a Linux-specific issue::

    sudo apt-get install libxcb-xinerama0
Anaconda
^^^^^^^^
Anaconda users are advised not to use pip, since it can break PyQt5. However, at least one package is available only via pip, so a dedicated Conda environment should be created.
The process to install is as follows::

    conda create -n pygmi python=3.13
    conda activate pygmi
    conda config --env --add channels conda-forge
    conda install pyside6
    conda install fiona
    conda install matplotlib
    conda install psutil
    conda install numexpr
    conda install rasterio
    conda install geopandas
    conda install natsort
    conda install numba
    conda install scikit-learn
    conda install scikit-image
    conda install pyvista
    conda install pyvistaqt
    conda install simpeg
    conda install shapelysmooth
    conda install openpyxl
    conda install h5netcdf
    conda install rioxarray
    conda install pytest
    conda install beautifulsoup4
    conda install pyyaml
    conda install pwlf
    conda update --all
Once this is done, download pygmi, extract (unzip) it to a directory, and run it from its root directory with the following command::

    python quickstart.py
References
----------
* Cole, P. 2012, Development of a 3D Potential Field Forward Modelling System in Python, AGU fall meeting, 3-7 December, San Francisco, USA
* Cole, P. 2013, PyGMI – The use of Python in geophysical modelling and interpretation. South African Geophysical Association, 13th Biennial Conference, Skukuza Rest Camp, Kruger National Park (7-9 October)
* Cole, P. 2014, The history and design behind the Python Geophysical Modelling and Interpretation (PyGMI) package, SciPy 2014, Austin, Texas (6-12 July)
* Cole, P. 2016, The continued evolution of the open source PyGMI project. 35th IGC, Cape Town.
* Cole, P. 2025, PyGMI - a python package for geoscience modelling and interpretation. Journal of Open Source Software, 10(111), 7019, https://doi.org/10.21105/joss.07019
| text/x-rst | null | Patrick Cole <pcole@geoscience.org.za> | null | Patrick Cole <pcole@geoscience.org.za> | null | Geoscience, Geophysics, Magnetic, Gravity, Modelling, Interpretation, Remote Sensing | [
"Development Status :: 5 - Production/Stable",
"Environment :: Win32 (MS Windows)",
"Environment :: X11 Applications :: Qt",
"Intended Audience :: Education",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Scientific/Engineering :: Image Processing",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fiona>=1.10.1",
"geopandas>=1.1.2",
"h5netcdf>=1.8.1",
"matplotlib>=3.10.8",
"natsort>=8.4.0",
"numba>=0.63.1",
"numexpr>=2.14.1",
"openpyxl>=3.1.5",
"psutil>=7.2.1",
"pyside6>=6.10.1",
"pytest>=9.0.2",
"pyvista>=0.46.5",
"pyvistaqt>=0.11.3",
"rasterio>=1.5.0",
"rioxarray>=0.21.0",
"scikit-learn>=1.8.0",
"scikit-image>=0.26.0",
"shapelysmooth>=0.2.1",
"simpeg>=0.25.0",
"beautifulsoup4>=4.14.3",
"pyyaml>=6.0.3",
"pwlf>=2.5.2"
] | [] | [] | [] | [
"homepage, http://patrick-cole.github.io/pygmi/",
"documentation, https://patrick-cole.github.io/pygmi/wiki.html",
"repository, https://github.com/Patrick-Cole/pygmi.git",
"changelog, https://github.com/Patrick-Cole/pygmi/blob/pygmi3/CHANGES.rst"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T07:21:14.263417 | pygmi-3.3.0.0.tar.gz | 19,022,917 | 65/7c/79aa8890c983a0a28cda95bba42c03961c530ddada568c87c258fb16d97a/pygmi-3.3.0.0.tar.gz | source | sdist | null | false | 9b4965bd33527c8e97aa5699520e5ec2 | 73f322c18553ed338842b30243c5ae386f48dd605f31c74d151842f3ce59dae8 | 657c79aa8890c983a0a28cda95bba42c03961c530ddada568c87c258fb16d97a | GPL-3.0-or-later | [
"LICENSE.txt"
] | 258 |
2.4 | xgboost2ww | 0.1.0 | Compute WeightWatcher-style correlation matrices (W1/W2/W7/W8) for XGBoost via OOF margin increments. | ## Why XGBoost2WW?
**XGBoost2WW lets you apply WeightWatcher-style spectral diagnostics to XGBoost models.**
XGBoost models don’t have traditional neural network weight matrices — so you can’t directly run tools like WeightWatcher on them.
XGBoost2WW bridges that gap by converting a trained XGBoost model into structured matrices (W1/W2/W7/W8) derived from **out-of-fold margin increments along the boosting trajectory**.
These matrices behave like neural weight matrices, so you can analyze them with WeightWatcher.
---
## Why would a production ML engineer care?
Because traditional metrics (accuracy, AUC, logloss) often look fine **right up until a model fails in production**.
Spectral diagnostics can help detect:
- Overfitting that standard validation doesn’t reveal
- Correlation traps in boosted trees
- Excessive memorization
- Unstable training dynamics
- Data leakage patterns
- Models that are brittle to distribution shift
In short:
> XGBoost2WW gives you a structural diagnostic signal — not just a performance metric.
That means you can:
- Compare model candidates beyond accuracy
- Detect problematic models *before deployment*
- Monitor structural drift over time
- Add an extra safety layer to your MLOps pipeline
---
If you deploy XGBoost models in production,
XGBoost2WW gives you a new lens to inspect them.
# xgboost2ww
Convert XGBoost boosting dynamics into WeightWatcher-style correlation matrices (W1/W2/W7/W8).
## Install
Development install:
```bash
pip install -e .
pip install weightwatcher torch
```
Minimal runtime install from PyPI:
```bash
pip install xgboost2ww
pip install weightwatcher
```
## Quickstart (compute_matrices)
```python
import numpy as np
import xgboost as xgb
from xgboost2ww import compute_matrices
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12)).astype(np.float32)
logits = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * rng.normal(size=300)
y = (logits > 0).astype(np.int32)
dtrain = xgb.DMatrix(X, label=y)
params = {
"objective": "binary:logistic",
"eval_metric": "logloss",
"max_depth": 3,
"eta": 0.1,
"subsample": 1.0,
"colsample_bytree": 1.0,
"seed": 0,
"verbosity": 0,
}
rounds = 40
bst = xgb.train(params, dtrain, num_boost_round=rounds)
# Reproducibility knobs for fold training inside compute_matrices / convert
train_params = params
num_boost_round = rounds
mats = compute_matrices(
bst,
X,
y,
nfolds=5,
t_points=40,
random_state=0,
train_params=train_params,
num_boost_round=num_boost_round,
)
W7 = mats.W7
print(W7.shape)
```
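Once you have a matrix, WeightWatcher-style diagnostics boil down to eigenvalues of its correlation form. A minimal, package-independent sketch of that computation (the random matrix below is a stand-in for `mats.W7`, not output of `compute_matrices`):

```python
import numpy as np

# Stand-in for mats.W7: an (N x M) matrix of OOF margin increments
rng = np.random.default_rng(0)
W = rng.normal(size=(40, 12))

# Spectral analysis studies the eigenvalues of X = W^T W / N
X = W.T @ W / W.shape[0]
eigs = np.linalg.eigvalsh(X)  # sorted ascending, all non-negative

print(eigs[-1] / eigs[0])  # spread of the spectrum (largest over smallest)
```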
## Quickstart (convert + WeightWatcher)
```python
import weightwatcher as ww
from xgboost2ww import convert

# Reuses bst, X, y, train_params and num_boost_round from the previous example
layer = convert(
bst,
X,
y,
W="W7",
return_type="torch",
nfolds=5,
t_points=40,
random_state=0,
train_params=train_params,
num_boost_round=num_boost_round,
)
watcher = ww.WeightWatcher(model=layer)
details_df = watcher.analyze(randomize=True, plot=False)
alpha = details_df["alpha"].iloc[0]
rand_num_spikes = details_df["rand_num_spikes"].iloc[0]
print({"alpha": alpha, "rand_num_spikes": rand_num_spikes})
```
For initial evaluation, you do not need `detX=True`. If you want determinant-based diagnostics, you can pass `detX=True`.
## Notes / limitations
- Binary classification is the default workflow.
- Multiclass requires setting `multiclass` explicitly (supported modes: `"per_class"`, `"stack"`, `"avg"`).
- `convert(..., multiclass="per_class", return_type="torch")` is unsupported and raises; for multiclass per-class output, use `return_type="numpy"`.
- `torch` is optional unless you need `convert(..., return_type="torch")`.
| text/markdown | Charles H. Martin, PhD | null | null | null |
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| xgboost, weightwatcher, spectral, correlation, matrix, oof | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.23",
"xgboost>=1.7",
"scikit-learn>=1.2",
"torch>=2.0; extra == \"torch\"",
"pytest>=7.0; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"ruff>=0.3.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/CalculatedContent/xgboost2ww",
"Repository, https://github.com/CalculatedContent/xgboost2ww",
"Issues, https://github.com/CalculatedContent/xgboost2ww/issues"
] | twine/6.2.0 CPython/3.10.18 | 2026-02-20T07:21:12.846649 | xgboost2ww-0.1.0.tar.gz | 20,959 | 63/2d/3e9f05e73003feb68a4b77b9025d9ae7eec47c90d555d56fc89995d99bf8/xgboost2ww-0.1.0.tar.gz | source | sdist | null | false | 522559714f84380737268a42fc167523 | a7f690f7c73b907f2d64119a85f162e920378c07b27dd91f4dbae26f4989ed38 | 632d3e9f05e73003feb68a4b77b9025d9ae7eec47c90d555d56fc89995d99bf8 | null | [
"LICENSE"
] | 293 |
2.4 | kaos-cli | 0.2.7 | CLI for KAOS (K8s Agent Orchestration System) | # KAOS CLI
Command-line interface for KAOS (K8s Agent Orchestration System).
## Installation
```bash
cd kaos-cli
uv sync
source .venv/bin/activate
```
## Usage
### Start UI Proxy
Start a CORS-enabled proxy to the Kubernetes API server:
```bash
kaos ui
```
This starts a local proxy on port 8010 that:
- Proxies requests to the Kubernetes API using your kubeconfig credentials
- Adds CORS headers to enable browser-based access
- Exposes the `mcp-session-id` header for MCP protocol support
Options:
- `--k8s-url`: Override the Kubernetes API URL (default: from kubeconfig)
- `--expose-port`: Port to expose the proxy on (default: 8010)
- `--namespace`, `-n`: Initial namespace to display in the UI (default: "default")
- `--no-browser`: Don't automatically open the browser
Example:
```bash
# Use default settings
kaos ui
# Custom port
kaos ui --expose-port 9000
# Start with a specific namespace
kaos ui --namespace kaos-system
# Custom K8s URL
kaos ui --k8s-url https://my-cluster:6443
```
### Version
```bash
kaos version
```
## Development
```bash
# Run tests
pytest
# Run directly
python -m kaos_cli.main ui
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"typer>=0.9.0",
"httpx>=0.27.0",
"uvicorn>=0.30.0",
"starlette>=0.37.0",
"kubernetes>=29.0.0",
"pyyaml>=6.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:21:07.116261 | kaos_cli-0.2.7.tar.gz | 38,683 | 8c/4f/ca1551c5e5d9649c8b8ac3703923f2da5eb3097c25bdf982c5862070844e/kaos_cli-0.2.7.tar.gz | source | sdist | null | false | dd2ec73ee25c16d3fbdfaf576fb666c4 | 14cf6fc9be833bddf1860a2a0bd706d5f6ce6383a900c3d1a0291011a1b085f9 | 8c4fca1551c5e5d9649c8b8ac3703923f2da5eb3097c25bdf982c5862070844e | null | [] | 253 |
2.4 | docling-parse | 5.3.3 | Simple package to extract text with coordinates from programmatic PDFs | # Docling Parse
[](https://pypi.org/project/docling-parse/)
[](https://pypi.org/project/docling-parse/)
[](https://github.com/astral-sh/uv)
[](https://github.com/pybind/pybind11/)
[](https://github.com/docling-project/docling-parse/)
[](https://opensource.org/licenses/MIT)
Simple package to extract text, paths and bitmap images with coordinates from programmatic PDFs. This package is used in the [Docling](https://github.com/docling-project/docling) PDF conversion. Below, we show a few outputs of the latest parser, with char, word and line level output for text, in addition to the extracted paths and bitmap resources.
To create the visualizations yourself, simply run (change `word` to `char` or `line`):
```sh
uv run python ./docling_parse/visualize.py -i <path-to-pdf-file> -c word --interactive
```
<table>
<tr>
<th>original</th>
<th>char</th>
<th>word</th>
<th>line</th>
</tr>
<tr>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_1.orig.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_1.char.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_1.word.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_1.line.png" alt="screenshot" width="170"/></td>
</tr>
<tr>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_3.orig.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_3.char.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_3.word.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_3.line.png" alt="screenshot" width="170"/></td>
</tr>
<tr>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_4.orig.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_4.char.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_4.word.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/ligatures_01.pdf.page_4.line.png" alt="screenshot" width="170"/></td>
</tr>
<tr>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_1.orig.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_1.char.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_1.word.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_1.line.png" alt="screenshot" width="170"/></td>
</tr>
<tr>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_4.orig.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_4.char.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_4.word.png" alt="screenshot" width="170"/></td>
<td><img src="./docs/visualisations/table_of_contents_01.pdf.page_4.line.png" alt="screenshot" width="170"/></td>
</tr>
</table>
## Quick start
Install the package from Pypi
```sh
pip install docling-parse
```
Convert a PDF (see [visualize.py](docling_parse/visualize.py) for more details)
```python
from docling_core.types.doc.page import TextCellUnit
from docling_parse.pdf_parser import DoclingPdfParser, PdfDocument
parser = DoclingPdfParser()
pdf_doc: PdfDocument = parser.load(
path_or_stream="<path-to-pdf>"
)
# PdfDocument.iterate_pages() will automatically populate pages as they are yielded.
for page_no, pred_page in pdf_doc.iterate_pages():
# iterate over the word-cells
for word in pred_page.iterate_cells(unit_type=TextCellUnit.WORD):
print(word.rect, ": ", word.text)
# create a PIL image with the char cells
img = pred_page.render_as_image(cell_unit=TextCellUnit.CHAR)
img.show()
```
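The word cells come back with coordinates, which is enough to reassemble reading order yourself. A toy sketch of that idea using plain tuples (hypothetical data, not the docling-parse cell API shown above):

```python
# Hypothetical word cells as (x0, y0, text) tuples; docling-parse returns
# richer cell objects (word.rect, word.text) as in the quickstart.
cells = [
    (120, 10, "world"),
    (10, 10, "hello"),
    (80, 40, "line"),
    (10, 40, "second"),
]

# Reading order: top-to-bottom by y, then left-to-right by x
ordered = sorted(cells, key=lambda c: (c[1], c[0]))
page_text = " ".join(c[2] for c in ordered)
print(page_text)  # hello world second line
```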
Use the CLI
```sh
$ docling-parse -h
usage: docling-parse [-h] -p PDF
Process a PDF file.
options:
-h, --help show this help message and exit
-p PDF, --pdf PDF Path to the PDF file
```
## Performance Benchmarks
*Coming soon - benchmarks will be updated for the current parser version.*
For historical V1 vs V2 benchmarks, see [legacy_performance_benchmarks.md](./docs/legacy_performance_benchmarks.md).
## Development
### CXX
To build the parser, simply run the following command in the root folder,
```sh
rm -rf build; cmake -B ./build; cd build; make
```
You can run the parser from your build folder:
```sh
% ./parse.exe -h
program to process PDF files or configuration files
Usage:
PDFProcessor [OPTION...]
-i, --input arg Input PDF file
-c, --config arg Config file
--create-config arg Create config file
-p, --page arg Pages to process (default: -1 for all) (default:
-1)
--password arg Password for accessing encrypted, password-protected files
-o, --output arg Output file
-l, --loglevel arg loglevel [error;warning;success;info]
-h, --help Print usage
```
If you don't have an input file, a template input file will be printed on the terminal.
### Python
To build the package, simply run (make sure [uv](https://docs.astral.sh/uv/) is [installed](https://docs.astral.sh/uv/getting-started/installation)),
```sh
uv sync
```
Note that `uv sync` only works from a clean `git clone`. If you are developing and updating C++ code, please use,
```sh
# uv pip install --force-reinstall --no-deps -e .
rm -rf .venv; uv venv; uv pip install --force-reinstall --no-deps -e ".[perf-tools]"
```
To test the package, run:
```sh
uv run pytest ./tests -v -s
```
## Contributing
Please read [Contributing to Docling Parse](https://github.com/docling-project/docling-parse/blob/main/CONTRIBUTING.md) for details.
## References
If you use Docling in your projects, please consider citing the following:
```bib
@techreport{Docling,
author = {Docling Team},
month = {8},
title = {Docling Technical Report},
url = {https://arxiv.org/abs/2408.09869},
eprint = {2408.09869},
doi = {10.48550/arXiv.2408.09869},
version = {1.0.0},
year = {2024}
}
```
## License
The Docling Parse codebase is under MIT license.
For individual model usage, please refer to the model licenses found in the original packages.
## LF AI & Data
Docling (and also docling-parse) is hosted as a project in the [LF AI & Data Foundation](https://lfaidata.foundation/projects/).
### IBM ❤️ Open Source AI
The project was started by the AI for knowledge team at IBM Research Zurich.
| text/markdown | null | Peter Staar <taa@zurich.ibm.com>, Christoph Auer <cau@zurich.ibm.com>, Michele Dolfi <dol@zurich.ibm.com>, Panos Vagenas <pva@zurich.ibm.com>, Maxim Lysak <mly@zurich.ibm.com> | null | null | null | docling, pdf, parser | [
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tabulate<1.0.0,>=0.9.0",
"pillow<13.0.0,>=10.0.0",
"pydantic>=2.0.0",
"docling-core>=2.65.1",
"pywin32>=305; sys_platform == \"win32\"",
"pdfplumber>=0.11.7; extra == \"perf-tools\"",
"pymupdf>=1.26.4; extra == \"perf-tools\"",
"pypdfium2>=4.30.0; extra == \"perf-tools\""
] | [] | [] | [] | [
"Homepage, https://github.com/docling-project/docling-parse",
"Repository, https://github.com/docling-project/docling-parse"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:20:39.404998 | docling_parse-5.3.3.tar.gz | 55,465,256 | 27/1a/eb17ab19a60851f206350386da2d1f78e7bffc6077acf347afb977a1e36b/docling_parse-5.3.3.tar.gz | source | sdist | null | false | 050b48214ac0b2c6e3448fb33a0da379 | d3bc34bc236b205c466334870a4284c75ac5dc973294107f2523548fac2b3408 | 271aeb17ab19a60851f206350386da2d1f78e7bffc6077acf347afb977a1e36b | MIT | [
"LICENSE"
] | 42,402 |
2.4 | 10xscale-agentflow | 0.6.1 | 10xScale Agentflow is a Python framework for building, orchestrating, and managing multi-agent systems. Designed for flexibility and scalability, 10xScale Agentflow enables developers to create intelligent agents that collaborate, communicate, and solve complex tasks together. |
# 10xScale Agentflow



[](#)
**10xScale Agentflow** is a lightweight Python framework for building intelligent agents and orchestrating multi-agent workflows. It's an **LLM-agnostic orchestration tool** that works with any LLM provider—use LiteLLM, native SDKs from OpenAI, Google Gemini, Anthropic Claude, or any other provider. You choose your LLM library; 10xScale Agentflow provides the workflow orchestration.
---
## ✨ Key Features
- **⚡ Agent Class** - Build complete agents in 10-30 lines of code (new in v0.5.3!)
- **🎯 LLM-Agnostic Orchestration** - Works with any LLM provider (LiteLLM, OpenAI, Gemini, Claude, native SDKs)
- **🤖 Multi-Agent Workflows** - Build complex agent systems with your choice of orchestration patterns
- **📊 Structured Responses** - Get `content`, optional `thinking`, and `usage` in a standardized format
- **🌊 Streaming Support** - Real-time incremental responses with delta updates
- **🔧 Tool Integration** - Native support for function calling, MCP, Composio, and LangChain tools with **parallel execution**
- **🔀 LangGraph-Inspired Engine** - Flexible graph orchestration with nodes, conditional edges, and control flow
- **💾 State Management** - Built-in persistence with in-memory and PostgreSQL+Redis checkpointers
- **🔄 Human-in-the-Loop** - Pause/resume execution for approval workflows and debugging
- **🚀 Production-Ready** - Event publishing (Console, Redis, Kafka, RabbitMQ), metrics, and observability
- **🧩 Dependency Injection** - Clean parameter injection for tools and nodes
- **📦 Prebuilt Patterns** - React, RAG, Swarm, Router, MapReduce, SupervisorTeam, and more
---
## 🌟 What Makes Agentflow Unique
Agentflow stands out with powerful features designed for production-grade AI applications:
### 🏗️ **Architecture & Scalability**
1. **💾 Checkpointer with Caching Design**
Intelligent state persistence with built-in caching layer to scale efficiently. PostgreSQL + Redis implementation ensures high performance in production environments.
2. **🧠 3-Layer Memory System**
- **Short-term memory**: Current conversation context
- **Conversational memory**: Session-based chat history
- **Long-term memory**: Persistent knowledge across sessions
### 🔧 **Advanced Tooling Ecosystem**
3. **🔌 Remote Tool Calls**
Execute tools remotely using our TypeScript SDK for distributed agent architectures.
4. **🛠️ Comprehensive Tool Integration**
- Local tools (Python functions)
- Remote tools (via TypeScript SDK)
- Agent handoff tools (multi-agent collaboration)
- MCP (Model Context Protocol)
- LangChain tools
- Composio tools
### 🎯 **Intelligent Context Management**
5. **📏 Dedicated Context Manager**
- Automatically controls context size to prevent token overflow
- Called at iteration end to avoid mid-execution context loss
- Fully extensible with custom implementations
### ⚙️ **Dependency Injection & Control**
6. **💉 First-Class Dependency Injection**
Powered by InjectQ library for clean, testable, and maintainable code patterns.
7. **🎛️ Custom ID Generation Control**
Choose between string, int, or bigint IDs. Smaller IDs save significant space in databases and indexes compared to standard 128-bit UUIDs.
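The space claim can be illustrated with a quick back-of-envelope comparison (this is a generic sketch using the standard library, not Agentflow's ID generator):

```python
import uuid

# A random UUID serialized as text occupies 36 characters per key;
# even as raw binary it needs 16 bytes.
uuid_key = str(uuid.uuid4())
assert len(uuid_key) == 36

# A 64-bit integer key fits in 8 bytes (e.g. a BIGINT column),
# halving the binary-UUID footprint in every row and index entry.
print(f"text UUID: {len(uuid_key)} chars; bigint: 8 bytes")
```

Across large tables and their indexes, that per-key difference compounds quickly.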
### 📊 **Observability & Events**
8. **📡 Internal Event Publishing**
Emit execution events to any publisher:
- Kafka
- RabbitMQ
- Redis Pub/Sub
- OpenTelemetry (planned)
- Custom publishers
### 🔄 **Advanced Execution Features**
9. **⏰ Background Task Manager**
Built-in manager for running tasks asynchronously:
- Prefetching data
- Memory persistence
- Cleanup operations
- Custom background jobs
10. **🚦 Human-in-the-Loop with Interrupts**
Pause execution at any point for human approval, then seamlessly resume with full state preservation.
11. **🧭 Flexible Agent Navigation**
- Condition-based routing between agents
- Command-based jumps to specific agents
- Agent handoff tools for smooth transitions
### 🛡️ **Security & Validation**
12. **🎣 Comprehensive Callback System**
Hook into various execution stages for:
- Logging and monitoring
- Custom behavior injection
- **Prompt injection attack prevention**
- Input/output validation
### 📦 **Ready-to-Use Components**
13. **🤖 Prebuilt Agent Patterns**
Production-ready implementations:
- React agents
- RAG (Retrieval-Augmented Generation)
- Swarm architectures
- Router agents
- MapReduce patterns
- Supervisor teams
### 📐 **Developer Experience**
14. **📋 Pydantic-First Design**
All core classes (State, Message, ToolCalls) are Pydantic models:
- Automatic JSON serialization
- Type safety
- Easy debugging and logging
- Seamless database storage
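As a rough sketch of what a Pydantic-first design buys you, here is a minimal stand-in model (`ChatMessage` is illustrative only — the real Agentflow `Message` class has a richer schema):

```python
from pydantic import BaseModel

# Illustrative stand-in for a Pydantic-based message class.
class ChatMessage(BaseModel):
    role: str
    content: str

msg = ChatMessage(role="user", content="What's the weather in NYC?")

# Automatic JSON serialization — ready for logging or database storage.
payload = msg.model_dump_json()

# ...and validated deserialization on the way back in.
restored = ChatMessage.model_validate_json(payload)
assert restored == msg
```

The same round-trip applies to any Pydantic model, which is why state, messages, and tool calls can be persisted and inspected without custom serializers.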
---
## Installation
**Basic installation with [uv](https://github.com/astral-sh/uv) (recommended):**
```bash
uv pip install 10xscale-agentflow
```
Or with pip:
```bash
pip install 10xscale-agentflow
```
**Optional Dependencies:**
10xScale Agentflow supports optional dependencies for specific functionality:
```bash
# PostgreSQL + Redis checkpointing
pip install 10xscale-agentflow[pg_checkpoint]
# MCP (Model Context Protocol) support
pip install 10xscale-agentflow[mcp]
# Google GenAI adapter (google-genai SDK)
pip install 10xscale-agentflow[google-genai]
# LiteLLM for multi-provider LLM support
pip install 10xscale-agentflow[litellm]
# Composio tools (adapter)
pip install 10xscale-agentflow[composio]
# LangChain tools (registry-based adapter)
pip install 10xscale-agentflow[langchain]
# Individual publishers
pip install 10xscale-agentflow[redis] # Redis publisher
pip install 10xscale-agentflow[kafka] # Kafka publisher
pip install 10xscale-agentflow[rabbitmq] # RabbitMQ publisher
# Multiple extras
pip install 10xscale-agentflow[pg_checkpoint,mcp,google-genai,litellm,composio,langchain]
```
### Environment Setup
Set your LLM provider API key:
```bash
export OPENAI_API_KEY=sk-... # for OpenAI models
# or
export GEMINI_API_KEY=... # for Google Gemini
# or
export ANTHROPIC_API_KEY=... # for Anthropic Claude
```
If you have a `.env` file, it will be auto-loaded (via `python-dotenv`).
---
## 🎯 Two Ways to Build Agents
10xScale Agentflow offers two approaches—choose based on your needs:
| Approach | Best For | Lines of Code |
|----------|----------|---------------|
| **Agent Class** ⭐ | Most use cases, rapid development | 10-30 lines |
| **Custom Functions** | Complex custom logic, non-LiteLLM providers | 50-150 lines |
> **Recommendation:** Start with the Agent class. It handles 90% of use cases with minimal code.
---
## 💡 Simple Example with Agent Class
Here's a complete tool-calling agent in under 30 lines:
```python
from agentflow.graph import Agent, StateGraph, ToolNode
from agentflow.state import AgentState, Message
from agentflow.utils.constants import END
# 1. Define your tool
def get_weather(location: str) -> str:
"""Get weather for a location."""
return f"The weather in {location} is sunny, 72°F"
# 2. Build the graph with Agent class
graph = StateGraph()
graph.add_node("MAIN", Agent(
model="gemini/gemini-2.5-flash",
system_prompt=[{"role": "system", "content": "You are a helpful assistant."}],
tool_node_name="TOOL"
))
graph.add_node("TOOL", ToolNode([get_weather]))
# 3. Define routing
def route(state: AgentState) -> str:
if state.context and state.context[-1].tools_calls:
return "TOOL"
return END
graph.add_conditional_edges("MAIN", route, {"TOOL": "TOOL", END: END})
graph.add_edge("TOOL", "MAIN")
graph.set_entry_point("MAIN")
# 4. Run it!
app = graph.compile()
result = app.invoke({
"messages": [Message.text_message("What's the weather in NYC?")]
}, config={"thread_id": "1"})
for msg in result["messages"]:
print(f"{msg.role}: {msg.content}")
```
**That's it!** The Agent class handles message conversion, LLM calls, and tool integration automatically.
---
<details>
<summary><strong>🔧 Advanced: Custom Functions Approach</strong></summary>
For maximum control, use custom functions instead of the Agent class:
```python
from dotenv import load_dotenv
from litellm import acompletion
from agentflow.checkpointer import InMemoryCheckpointer
from agentflow.graph import StateGraph, ToolNode
from agentflow.state.agent_state import AgentState
from agentflow.utils import Message
from agentflow.utils.constants import END
from agentflow.utils.converter import convert_messages
load_dotenv()
# Define a tool with dependency injection
def get_weather(
location: str,
tool_call_id: str | None = None,
state: AgentState | None = None,
) -> Message:
"""Get the current weather for a specific location."""
res = f"The weather in {location} is sunny"
return Message.tool_message(
content=res,
tool_call_id=tool_call_id,
)
# Create tool node
tool_node = ToolNode([get_weather])
# Define main agent node (manual message handling)
async def main_agent(state: AgentState):
prompts = "You are a helpful assistant. Use tools when needed."
messages = convert_messages(
system_prompts=[{"role": "system", "content": prompts}],
state=state,
)
# Check if we need tools
if (
state.context
and len(state.context) > 0
and state.context[-1].role == "tool"
):
response = await acompletion(
model="gemini/gemini-2.5-flash",
messages=messages,
)
else:
tools = await tool_node.all_tools()
response = await acompletion(
model="gemini/gemini-2.5-flash",
messages=messages,
tools=tools,
)
return response
# Define routing logic
def should_use_tools(state: AgentState) -> str:
"""Determine if we should use tools or end."""
if not state.context or len(state.context) == 0:
return "TOOL"
last_message = state.context[-1]
if (
hasattr(last_message, "tools_calls")
and last_message.tools_calls
and len(last_message.tools_calls) > 0
):
return "TOOL"
return END
# Build the graph
graph = StateGraph()
graph.add_node("MAIN", main_agent)
graph.add_node("TOOL", tool_node)
graph.add_conditional_edges(
"MAIN",
should_use_tools,
{"TOOL": "TOOL", END: END},
)
graph.add_edge("TOOL", "MAIN")
graph.set_entry_point("MAIN")
# Compile and run
app = graph.compile(checkpointer=InMemoryCheckpointer())
inp = {"messages": [Message.from_text("What's the weather in New York?")]}
config = {"thread_id": "12345", "recursion_limit": 10}
res = app.invoke(inp, config=config)
for msg in res["messages"]:
print(msg)
```
</details>
### How to run the example locally
1. Install dependencies (recommended in a virtualenv):
```bash
pip install -r requirements.txt
# or if you use uv
uv pip install -r requirements.txt
```
2. Set your LLM provider API key (for example OpenAI):
```bash
export OPENAI_API_KEY="sk-..."
# or create a .env with the key and the script will load it automatically
```
3. Run the example script:
```bash
python examples/react/react_weather_agent.py
```
Notes:
- The example uses `litellm`'s `acompletion` function — set `model` to a provider/model available in your environment (for example `gemini/gemini-2.5-flash` or other supported model strings).
- `InMemoryCheckpointer` is for demo/testing only. Replace with a persistent checkpointer for production.
---
## Example: MCP Integration
10xScale Agentflow supports integration with Model Context Protocol (MCP) servers, allowing you to connect external tools and services. The example in `examples/react-mcp/` demonstrates how to integrate MCP tools with your agent.
First, create an MCP server (see `examples/react-mcp/server.py`):
```python
from fastmcp import FastMCP
mcp = FastMCP("My MCP Server")
@mcp.tool(
description="Get the weather for a specific location",
)
def get_weather(location: str) -> dict:
return {
"location": location,
"temperature": "22°C",
"description": "Sunny",
}
if __name__ == "__main__":
mcp.run(transport="streamable-http")
```
Then, integrate MCP tools into your agent (from `examples/react-mcp/react-mcp.py`):
```python
from typing import Any
from dotenv import load_dotenv
from fastmcp import Client
from litellm import acompletion
from agentflow.checkpointer import InMemoryCheckpointer
from agentflow.graph import StateGraph, ToolNode
from agentflow.state.agent_state import AgentState
from agentflow.utils import Message
from agentflow.utils.constants import END
from agentflow.utils.converter import convert_messages
load_dotenv()
checkpointer = InMemoryCheckpointer()
config = {
"mcpServers": {
"weather": {
"url": "http://127.0.0.1:8000/mcp",
"transport": "streamable-http",
},
}
}
client_http = Client(config)
# Initialize ToolNode with MCP client
tool_node = ToolNode(functions=[], client=client_http)
async def main_agent(state: AgentState):
prompts = "You are a helpful assistant."
messages = convert_messages(
system_prompts=[{"role": "system", "content": prompts}],
state=state,
)
# Get all available tools (including MCP tools)
tools = await tool_node.all_tools()
response = await acompletion(
model="gemini/gemini-2.0-flash",
messages=messages,
tools=tools,
)
return response
def should_use_tools(state: AgentState) -> str:
"""Determine if we should use tools or end the conversation."""
if not state.context or len(state.context) == 0:
return "TOOL"
last_message = state.context[-1]
if (
hasattr(last_message, "tools_calls")
and last_message.tools_calls
and len(last_message.tools_calls) > 0
):
return "TOOL"
if last_message.role == "tool" and last_message.tool_call_id is not None:
return END
return END
graph = StateGraph()
graph.add_node("MAIN", main_agent)
graph.add_node("TOOL", tool_node)
graph.add_conditional_edges(
"MAIN",
should_use_tools,
{"TOOL": "TOOL", END: END},
)
graph.add_edge("TOOL", "MAIN")
graph.set_entry_point("MAIN")
app = graph.compile(checkpointer=checkpointer)
# Run the agent
inp = {"messages": [Message.from_text("Please call the get_weather function for New York City")]}
config = {"thread_id": "12345", "recursion_limit": 10}
res = app.invoke(inp, config=config)
for i in res["messages"]:
print(i)
```
How to run the MCP example:
1. Install MCP dependencies:
```bash
pip install 10xscale-agentflow[mcp]
# or
uv pip install 10xscale-agentflow[mcp]
```
2. Start the MCP server in one terminal:
```bash
cd examples/react-mcp
python server.py
```
3. Run the MCP-integrated agent in another terminal:
```bash
python examples/react-mcp/react-mcp.py
```
---
## Example: Streaming Agent
10xScale Agentflow supports streaming responses for real-time interaction. The example in `examples/react_stream/stream_react_agent.py` demonstrates different streaming modes and configurations.
```python
import asyncio
import logging
from dotenv import load_dotenv
from litellm import acompletion
from agentflow.checkpointer import InMemoryCheckpointer
from agentflow.graph import StateGraph, ToolNode
from agentflow.state.agent_state import AgentState
from agentflow.utils import Message, ResponseGranularity
from agentflow.utils.constants import END
from agentflow.utils.converter import convert_messages
load_dotenv()
checkpointer = InMemoryCheckpointer()
def get_weather(
location: str,
tool_call_id: str,
state: AgentState,
) -> Message:
"""Get weather with injectable parameters."""
res = f"The weather in {location} is sunny."
return Message.tool_message(
content=res,
tool_call_id=tool_call_id,
)
tool_node = ToolNode([get_weather])
async def main_agent(state: AgentState, config: dict):
prompts = "You are a helpful assistant. Answer conversationally. Use tools when needed."
messages = convert_messages(
system_prompts=[{"role": "system", "content": prompts}],
state=state,
)
is_stream = config.get("is_stream", False)
if (
state.context
and len(state.context) > 0
and state.context[-1].role == "tool"
):
response = await acompletion(
model="gemini/gemini-2.5-flash",
messages=messages,
stream=is_stream,
)
else:
tools = await tool_node.all_tools()
response = await acompletion(
model="gemini/gemini-2.5-flash",
messages=messages,
tools=tools,
stream=is_stream,
)
return response
def should_use_tools(state: AgentState) -> str:
if not state.context or len(state.context) == 0:
return "TOOL"
last_message = state.context[-1]
if (
hasattr(last_message, "tools_calls")
and last_message.tools_calls
and len(last_message.tools_calls) > 0
):
return "TOOL"
if last_message.role == "tool" and last_message.tool_call_id is not None:
return END
return END
graph = StateGraph()
graph.add_node("MAIN", main_agent)
graph.add_node("TOOL", tool_node)
graph.add_conditional_edges(
"MAIN",
should_use_tools,
{"TOOL": "TOOL", END: END},
)
graph.add_edge("TOOL", "MAIN")
graph.set_entry_point("MAIN")
app = graph.compile(checkpointer=checkpointer)
async def run_stream_test():
inp = {"messages": [Message.from_text("Call get_weather for Tokyo, then reply.")]}
config = {"thread_id": "stream-1", "recursion_limit": 10}
logging.info("--- streaming start ---")
stream_gen = app.astream(
inp,
config=config,
response_granularity=ResponseGranularity.LOW,
)
async for chunk in stream_gen:
print(chunk.model_dump(), end="\n", flush=True)
if __name__ == "__main__":
asyncio.run(run_stream_test())
```
Run the streaming example:
```bash
python examples/react_stream/stream_react_agent.py
```
---
## ⚡ Parallel Tool Execution
10xScale Agentflow automatically executes multiple tool calls **in parallel** when an LLM requests multiple tools simultaneously. This dramatically improves performance for I/O-bound operations.
### Benefits
- **Faster Response Times**: Multiple API calls execute concurrently
- **Better Resource Utilization**: Don't wait for one tool to finish before starting the next
- **Seamless Integration**: Works automatically with existing code - no changes needed
### Example Performance
```python
# LLM requests 3 tools simultaneously:
# - get_weather("NYC") # Takes 1.0s
# - get_news("tech") # Takes 1.5s
# - get_stock("AAPL") # Takes 0.8s
# Sequential execution: 1.0 + 1.5 + 0.8 = 3.3 seconds
# Parallel execution: max(1.0, 1.5, 0.8) = 1.5 seconds ⚡
```
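The timing intuition above can be verified with plain `asyncio` (the tool names and sleep durations are illustrative stand-ins for I/O-bound calls, scaled down for speed; this is not Agentflow's internal dispatcher):

```python
import asyncio
import time

# Simulated I/O-bound tools (names and durations are illustrative).
async def get_weather(city: str) -> str:
    await asyncio.sleep(0.10)
    return f"{city}: sunny"

async def get_news(topic: str) -> str:
    await asyncio.sleep(0.15)
    return f"{topic}: 3 headlines"

async def get_stock(ticker: str) -> str:
    await asyncio.sleep(0.08)
    return f"{ticker}: $198.10"

async def run_parallel() -> float:
    start = time.perf_counter()
    # gather() runs all three coroutines concurrently, so total time
    # is roughly max(0.10, 0.15, 0.08), not the 0.33s sum.
    await asyncio.gather(
        get_weather("NYC"), get_news("tech"), get_stock("AAPL")
    )
    return time.perf_counter() - start

elapsed = asyncio.run(run_parallel())
print(f"parallel elapsed: {elapsed:.2f}s")
```

The elapsed time tracks the slowest tool rather than the sum, which is exactly the speedup the framework applies when an LLM emits multiple tool calls in one turn.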
See the [parallel tool execution documentation](https://10xhub.github.io/Agentflow/Concept/graph/tools/#parallel-tool-execution) for more details.
---
## 🎯 Use Cases & Patterns
10xScale Agentflow includes prebuilt agent patterns for common scenarios:
### 🤖 Agent Types
- **React Agent** - Reasoning and acting with tool calls
- **RAG Agent** - Retrieval-augmented generation
- **Guarded Agent** - Input/output validation and safety
- **Plan-Act-Reflect** - Multi-step reasoning
### 🔀 Orchestration Patterns
- **Router Agent** - Route queries to specialized agents
- **Swarm** - Dynamic multi-agent collaboration
- **SupervisorTeam** - Hierarchical agent coordination
- **MapReduce** - Parallel processing and aggregation
- **Sequential** - Linear workflow chains
- **Branch-Join** - Parallel branches with synchronization
### 🔬 Advanced Patterns
- **Deep Research** - Multi-level research and synthesis
- **Network** - Complex agent networks
See the [documentation](https://10xhub.github.io/Agentflow/) for complete examples.
---
## 🔧 Development
### For Library Users
Install 10xScale Agentflow as shown above. The `pyproject.toml` contains all runtime dependencies.
### For Contributors
```bash
# Clone the repository
git clone https://github.com/10xHub/agentflow.git
cd agentflow
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install dev dependencies
pip install -r requirements-dev.txt
# or
uv pip install -r requirements-dev.txt
# Run tests
make test
# or
pytest -q
# Build docs
make docs-serve # Serves at http://127.0.0.1:8000
# Run examples
cd examples/react
python react_sync.py
```
### Development Tools
The project uses:
- **pytest** for testing (with async support)
- **ruff** for linting and formatting
- **mypy** for type checking
- **mkdocs** with Material theme for documentation
- **coverage** for test coverage reports
See `pyproject.dev.toml` for complete tool configurations.
---
## 🗺️ Roadmap
- ✅ Core graph engine with nodes and edges
- ✅ State management and checkpointing
- ✅ Tool integration (MCP, Composio, LangChain)
- ✅ **Parallel tool execution** for improved performance
- ✅ Streaming and event publishing
- ✅ Human-in-the-loop support
- ✅ Prebuilt agent patterns
- 🚧 Agent-to-Agent (A2A) communication protocols
- 🚧 Remote node execution for distributed processing
- 🚧 Enhanced observability and tracing
- 🚧 More persistence backends (Redis, DynamoDB)
- 🚧 Parallel/branching strategies
- 🚧 Visual graph editor
---
## 📄 License
MIT License - see [LICENSE](https://github.com/10xHub/agentflow/blob/main/LICENSE) for details.
---
## 🔗 Links & Resources
- **[Documentation](https://10xhub.github.io/Agentflow/)** - Full documentation with tutorials and API reference
- **[GitHub Repository](https://github.com/10xHub/agentflow)** - Source code and issues
- **[PyPI Project](https://pypi.org/project/10xscale-agentflow/)** - Package releases
- **[Examples Directory](https://github.com/10xHub/agentflow/tree/main/examples)** - Runnable code samples
---
## 🙏 Contributing
Contributions are welcome! Please see our [GitHub repository](https://github.com/10xHub/agentflow) for:
- Issue reporting and feature requests
- Pull request guidelines
- Development setup instructions
- Code style and testing requirements
---
## 💬 Support
- **Documentation**: [https://10xhub.github.io/Agentflow/](https://10xhub.github.io/Agentflow/)
- **Examples**: Check the [examples directory](https://github.com/10xHub/agentflow/tree/main/examples)
- **Issues**: Report bugs on [GitHub Issues](https://github.com/10xHub/agentflow/issues)
- **Discussions**: Ask questions in [GitHub Discussions](https://github.com/10xHub/agentflow/discussions)
---
**Ready to build intelligent agents?** Check out the [documentation](https://10xhub.github.io/Agentflow/) to get started!
| text/markdown | null | 10xScale <contact@10xscale.ai> | null | Shudipto Trafder <shudiptotrafder@gmail.com> | MIT License
Copyright (c) 2025 Iamsdt
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Development Status :: 3 - Alpha",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"injectq>=0.4.0",
"pydantic",
"python-dotenv",
"litellm>=1.77.0; extra == \"litellm\"",
"google-genai>=1.56.0; extra == \"google-genai\"",
"openai>=1.77.0; extra == \"openai\"",
"asyncpg>=0.29.0; extra == \"pg-checkpoint\"",
"redis>=4.2; extra == \"pg-checkpoint\"",
"fastmcp>=2.11.3; extra == \"mcp\"",
"mcp>=1.13.0; extra == \"mcp\"",
"composio>=0.8.0; extra == \"composio\"",
"langchain-core>=0.3.0; extra == \"langchain\"",
"langchain-community>=0.3.0; extra == \"langchain\"",
"redis>=4.2; extra == \"redis\"",
"aiokafka>=0.8.0; extra == \"kafka\"",
"aio-pika>=9.0.0; extra == \"rabbitmq\"",
"qdrant-client>=1.7.0; extra == \"qdrant\"",
"mem0ai>=0.1.117; extra == \"mem0\"",
"redis>=4.2; extra == \"all-publishers\"",
"aiokafka>=0.8.0; extra == \"all-publishers\"",
"aio-pika>=9.0.0; extra == \"all-publishers\""
] | [] | [] | [] | [
"Homepage, https://github.com/10xHub/agentflow",
"Repository, https://github.com/10xHub/agentflow",
"Issues, https://github.com/10xHub/agentflow/issues",
"Documentation, https://10xhub.github.io/Agentflow/"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T07:20:27.665383 | 10xscale_agentflow-0.6.1.tar.gz | 265,415 | 30/e1/b3997b7eb631b6dedcf62736dcaa7f916be5cc8c36299e65cd98ee23e3df/10xscale_agentflow-0.6.1.tar.gz | source | sdist | null | false | 8e09e0bc4666640283836394df93875a | 9b16f1535b8bce079328bae0f3ee5bf3bd2700f76034235022590afa34fcb97d | 30e1b3997b7eb631b6dedcf62736dcaa7f916be5cc8c36299e65cd98ee23e3df | null | [
"LICENSE"
] | 270 |
2.4 | autobots-devtools-shared-lib | 0.1.7 | Shared library functions to be used for all autobots projects | # Autobots DevTools Shared Library
**Dyn**amic **Agent** (**Dynagent**) is the core of this library. It turns your prompts and business processes into production-ready, multi-agent applications—chatbots and unsupervised workflows—in hours. You focus on prompts, output schemas, and domain logic; Dynagent handles multi-LLM wiring, UI integration, observability, and batch processing out of the box.
### Essential features
| Feature | Description |
| --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Dynagent framework** | Build dynamic AI agents with YAML configs, prompts, and tools. Agent handoff and default/coordinator agents for mesh-style flows. |
| **Multi-LLM support** | Swap LLMs like swapping batteries. Use Gemini, Claude, or others via a single integration layer. |
| **Chainlit UI** | Pre-built streaming, tool steps, and structured output for Chainlit. OAuth-ready. |
| **State & context** | Session state and context management with caching and durable storage. Tools receive `ToolRuntime` with shared state across handoffs. |
| **Batch processing** | Run prompts in parallel for batch-enabled agents. Sync API with `batch_invoker` and `BatchResult`. |
| **Observability** | Langfuse integration for tracing and monitoring. `TraceMetadata` for session, app, and tags. |
| **Pythonic** | Native Python and LangChain tools. Type hints, async/sync, pytest—no DSLs. |
| **Extensible** | File server, workspace management, Jenkins integration, and helpers that plug into the framework. |
| **Containerization** | Docker images with bundled dependencies for consistent deployment. |
| **Prompt versioning** | Prompts as source — version-controlled markdown files alongside code. |
| **Prompt evaluation** | Tooling to tweak and evaluate prompt quality across versions. |
### Batteries included
| Helper | Description |
| ------ | ----------- |
| **File server** | Serve and manage files within agent sessions. |
| **Workspace management** | Manage working directories and session artifacts. |
| **Context management** | Caching and durable storage for session context. |
| **Jenkins integration** | Trigger and monitor Jenkins pipelines from agents. |
## Quickstart
| Guide | Description |
| ------ | ----------- |
| **[Try Jarvis](https://github.com/Pratishthan/autobots-agents-jarvis)** | See Dynagent in action with a multi-domain multi-agent demo (Concierge, Customer Support, Sales). |
| **[Install](#workspace-setup)** | Set up the shared workspace, virtual environment, and install this library. |
| **[Development](#development)** | Run tests, format, lint, type-check, and use the Makefile from this repo or the workspace root. |
## How-to guides
| Guide | Description |
| ------ | ----------- |
| **[Workspace setup](#workspace-setup)** | Clone the workspace, create the shared `.venv`, clone this repo, and install dependencies. |
| **[Development](#development)** | Available `make` targets: test, format, lint, type-check, install, build, clean. |
| **[Project structure](#project-structure)** | Layout of `autobots_devtools_shared_lib` (dynagent, chainlit_ui, llm_tools, observability, batch). |
| **[Testing](#testing)** | Unit, integration, and e2e tests. Run with `make test`, `make test-fast`, or `make test-one`. |
| **[Contributing](#contributing)** | See CONTRIBUTING.md for guidelines and workflow. |
| **[Publishing](#publishing)** | See PUBLISHING.md for PyPI publishing. |
## Advanced
| Topic | Description |
| ------ | ----------- |
| **[Code quality](#code-quality-standards)** | Type safety (Pyright), pytest, Ruff format/lint, pre-commit hooks. |
| **[Type checking](#type-checking)** | Pyright in basic mode; type annotations required. |
| **[Workspace commands](#workspace-level-commands)** | From workspace root: `make test`, `make lint`, `make format`, `make type-check`, `make all-checks` across all repos. |
---
## Workspace setup
This library is part of a multi-repository workspace. Use a shared virtual environment at the workspace root.
**Prerequisites:** Python 3.12+, Poetry (e.g. `brew install poetry` on macOS).
### 1. Clone the workspace
```bash
cd /path/to/your/work
git clone <workspace-url> ws-jarvis
cd ws-jarvis
```
### 2. Create shared virtual environment
```bash
make setup
```
This creates a shared `.venv` at the workspace root.
### 3. Clone this repository
```bash
git clone https://github.com/Pratishthan/autobots-devtools-shared-lib.git
cd autobots-devtools-shared-lib
```
### 4. Install dependencies
```bash
make install-dev # with dev dependencies (recommended)
# or
make install # runtime only
```
### 5. Pre-commit hooks
```bash
make install-hooks
```
## Development
Run from `autobots-devtools-shared-lib/`:
```bash
# Testing
make test # with coverage
make test-fast # no coverage
make test-one TEST=tests/unit/test_example.py::test_function
# Code quality
make format # Ruff format
make lint # Ruff lint (auto-fix)
make check-format # check only
make type-check # Pyright
make all-checks # format, type, test
# Dependencies & build
make install / make install-dev / make update-deps
make build
make clean
make help
```
### Workspace-level commands
From the workspace root:
```bash
make test
make lint
make format
make type-check
make all-checks
```
## Project structure
```
autobots-devtools-shared-lib/
├── src/autobots_devtools_shared_lib/
│   ├── dynagent/          # Multi-agent framework
│   ├── chainlit_ui/       # Chainlit UI components
│   ├── llm_tools/         # LLM integrations
│   ├── observability/     # Observability helpers
│   └── batch_processing/  # Batch utilities
├── tests/
│   ├── unit/
│   ├── integration/
│   └── e2e/
├── .github/workflows/
├── pyproject.toml
├── poetry.toml # Poetry settings (uses workspace .venv)
├── Makefile
├── CONTRIBUTING.md
└── PUBLISHING.md
```
## Code quality standards
- **Type safety:** Type annotations; Pyright (basic mode).
- **Testing:** pytest; unit, integration, e2e.
- **Formatting:** Ruff, line length 100.
- **Linting:** Ruff, strict rules.
- **Pre-commit:** Format, lint, type-check, tests on commit.
## Testing
```bash
make test
make test-one TEST=tests/unit/test_example.py
make test-cov # HTML coverage report
```
- **Unit** (`tests/unit/`): Functions and classes.
- **Integration** (`tests/integration/`): Component interactions.
- **E2E** (`tests/e2e/`): Full workflows.
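A minimal unit test can be run in isolation with `make test-one`. The file and function names below are hypothetical placeholders, not tests that ship with this repo:

```python
# tests/unit/test_example.py -- hypothetical standalone example
def add(a: int, b: int) -> int:
    """Tiny function under test."""
    return a + b


def test_add() -> None:
    assert add(2, 3) == 5
```

Run just this test with `make test-one TEST=tests/unit/test_example.py::test_add`.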
## Type checking
```bash
make type-check
```
All code must have type annotations. Pyright runs in basic mode.
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development workflow and guidelines.
## Publishing
See [PUBLISHING.md](PUBLISHING.md) for PyPI publishing.
## License
MIT
## Authors
- **Pra1had** — [GitHub](https://github.com/pra1had) · pralhad.kamath@pratishthanventures.com
## Questions?
Open an issue on the project repository.
| text/markdown | Pralhad | pralhad.kamath@pratishthanventures.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0.0,>=3.12 | [] | [] | [] | [
"chainlit>=2.9.6",
"fastapi>=0.115.0",
"jsonschema>=4.26.0",
"langchain>=1.0.0",
"langchain-anthropic>=0.3.0",
"langchain-google-genai>=4.2.0",
"langfuse>=3.12.1",
"opentelemetry-api<2.0.0,>=1.30.0",
"opentelemetry-exporter-otlp-proto-http<2.0.0,>=1.30.0",
"opentelemetry-instrumentation-fastapi>=0.49b0",
"opentelemetry-sdk<2.0.0,>=1.30.0",
"pydantic-settings>=2.10.1",
"python-dotenv>=1.1.1",
"pyyaml>=6.0.3",
"uvicorn[standard]>=0.32.0"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T07:19:41.929191 | autobots_devtools_shared_lib-0.1.7-py3-none-any.whl | 68,344 | 24/a3/5e1117434ed1443865dd291e59cdc6a624b2e878ce7461f453a660cd911c/autobots_devtools_shared_lib-0.1.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 81d2b26b4e29900d738996f043374f1e | cb2d412c6092b2a17daa473165c8e4464ac96ff8fc3f190f0efb2073103f2c63 | 24a35e1117434ed1443865dd291e59cdc6a624b2e878ce7461f453a660cd911c | null | [] | 251 |
2.4 | downdocs | 0.1.0 | CLI tool to download Google Docs from stdin | # downdocs
A CLI tool to download Google Docs from stdin. Pipe Google Doc IDs and download them as DOCX, TXT, or Markdown files.
## Installation
```bash
pip install downdocs
```
## Setup
1. Go to [Google Cloud Console](https://console.cloud.google.com/)
2. Create a new project or select existing one
3. Enable the Google Drive API
4. Create OAuth 2.0 credentials (Desktop app)
5. Download the credentials JSON file
6. Save it to `~/.config/downloadDocs/credentials.json`
## Usage
```bash
# Pipe document IDs to download
echo "1ABC...xyz" | downdocs
# Specify output directory
echo "1ABC...xyz" | downdocs --out ./my-docs
# Download as Markdown
echo "1ABC...xyz" | downdocs --format md
# Download multiple docs
cat doc_ids.txt | downdocs --format docx --out ./downloads
```
### Options
| Option | Default | Description |
|--------|---------|-------------|
| `--out` | `./docs` | Output directory |
| `--format` | `docx` | Download format (`docx`, `txt`, `md`) |
| `--token` | `~/.config/downloadDocs/token.json` | Path to token.json |
| `--credentials` | `~/.config/downloadDocs/credentials.json` | Path to credentials.json |
## Getting Document IDs
Google Doc IDs are the part of the URL between `/d/` and `/edit`:
```
https://docs.google.com/document/d/1ABC123xyz/edit
^^^^^^^^^^
Document ID
```
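Extracting that segment can be automated with `sed` before piping into the tool. This is a sketch; the regex assumes the standard URL shape shown above and is not part of downdocs itself:

```shell
# Strip the document ID out of a full Google Docs URL.
url="https://docs.google.com/document/d/1ABC123xyz/edit"
id=$(printf '%s\n' "$url" | sed -E 's#.*/d/([^/]+).*#\1#')
echo "$id"   # 1ABC123xyz
```

The same pipeline could feed downdocs directly, e.g. `printf '%s\n' "$url" | sed -E 's#.*/d/([^/]+).*#\1#' | downdocs --format md`.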
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Ebinesh <ebinesh2511@gmail.com> | null | null | MIT | cli, download, gdocs, google-docs, google-drive | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Utilities"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"google-api-python-client",
"google-auth",
"google-auth-oauthlib"
] | [] | [] | [] | [
"Homepage, https://github.com/ebinesh25/download-google-docs",
"Repository, https://github.com/ebinesh25/download-google-docs.git",
"Issues, https://github.com/ebinesh25/download-google-docs/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T07:19:11.257000 | downdocs-0.1.0.tar.gz | 44,241 | 0b/59/15df775f689b71fc1864996e9a8e5c8ee0f8a417c98ccfe675ee51ead502/downdocs-0.1.0.tar.gz | source | sdist | null | false | 94fbfb9ff41d54bd1c4e945d28ba4d0d | 804751b46eb13537086065b994ecd3277389be1dc57ecb7deefbd9ca8b57211d | 0b5915df775f689b71fc1864996e9a8e5c8ee0f8a417c98ccfe675ee51ead502 | null | [
"LICENSE"
] | 273 |
2.4 | katversion | 1.3 | Reliable git-based versioning for Python packages | katversion
==========
The *katversion* package provides proper versioning for Python packages as
dictated by their (git) source repositories. The resulting version string is
baked into the installed package's ``__init__.py`` file for guaranteed
traceability when imported (no dependency on what pkg_resources or importlib
thinks!).
Version String Format
---------------------
*katversion* generates a version string for your SCM package that complies with
`PEP 440 <https://www.python.org/dev/peps/pep-0440/>`_.
It only supports git repositories.
The format of our version string is:
::
    - for RELEASE builds:
        <major>.<minor>
      e.g.
        0.1
        2.4

    - for DEVELOPMENT builds:
        <major>.<minor>.dev<num_commits>+<branch_name>.g<short_git_sha>[.dirty]
      e.g.
        0.2.dev34+new.shiny.feature.gfa973da
        2.5.dev7+master.gb91ffa6.dirty

    - for UNKNOWN builds:
        0.0+unknown.[<scm_type>.]<timestamp>
      e.g.
        0.0+unknown.git.201402031023
        0.0+unknown.201602081715
where <major>.<minor> is derived from the latest version tag and
<num_commits> is the total number of commits on the development branch.
The <major>.<minor> substring for development builds will be that of the
NEXT (minor) release, in order to allow proper Python version ordering.
To add a version tag use the ``git tag`` command, e.g.::

    $ git tag -a 1.2 -m 'Release version 1.2'
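The ordering guarantee can be checked directly with the ``packaging`` library (a sketch; the version strings are illustrative):

```python
from packaging.version import Version

# A dev build of the NEXT minor release sorts after the previous
# release and before the final release, as required by PEP 440.
release = Version("0.1")
dev = Version("0.2.dev34+new.shiny.feature.gfa973da")
next_release = Version("0.2")
assert release < dev < next_release
```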
Typical Usage
-------------
Add this to ``setup.py`` (handles installed packages):
.. code:: python
    from setuptools import setup

    setup(
        ...,
        # version=1.0,  # remove the version parameter as it will be overridden
        setup_requires=['katversion'],
        use_katversion=True,
        ...
    )
Add this to ``mypackage/__init__.py``, including the comment lines
(handles local packages):
.. code:: python
    # BEGIN VERSION CHECK
    # Get package version when locally imported from repo or via -e develop install
    try:
        import katversion as _katversion
    except ImportError:  # pragma: no cover
        import time as _time
        __version__ = "0.0+unknown.{}".format(_time.strftime('%Y%m%d%H%M'))
    else:  # pragma: no cover
        __version__ = _katversion.get_version(__path__[0])
    # END VERSION CHECK
In addition, a command-line script for checking the version:
::
    # From inside your SCM subdirectory, run the following command
    # which will print the result to stdout:
    $ kat-get-version.py
| text/x-rst | The MeerKAT CAM Team | cam@ska.ac.za | null | null | BSD-3-Clause | versioning meerkat ska | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Version Control",
"Topic :: System :: Software Distribution",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [
"OS Independent"
] | https://github.com/ska-sa/katversion | null | !=3.0.*,!=3.1.*,!=3.2.*,<4,>=2.7 | [] | [] | [] | [
"packaging",
"importlib-metadata; python_version < \"3.8\"",
"unittest2>=0.5.1; extra == \"test\"",
"nose<2.0,>=1.3; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.3 | 2026-02-20T07:17:31.334885 | katversion-1.3-py2.py3-none-any.whl | 12,134 | 7b/42/8828a066df83904a71c9ab0034feb9a2f5b4ca533eb9b4488bf0d1bf2b9a/katversion-1.3-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | e84aae2818d1517294fafde84ee46a30 | c83453fc0bb69be2fdc428fd63ad0250545dc6f5e55fc836f066516762a5594c | 7b428828a066df83904a71c9ab0034feb9a2f5b4ca533eb9b4488bf0d1bf2b9a | null | [
"LICENSE.txt"
] | 1,145 |
2.4 | kagan | 0.6.0 | AI-powered Kanban TUI for autonomous development workflows | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset=".github/assets/logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset=".github/assets/logo-light.svg">
<img alt="Kagan" src=".github/assets/logo-light.svg" width="480">
</picture>
</p>
<p align="center">
<strong>A terminal task board that runs AI agents on your code — you review, you decide, you merge.</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/kagan/"><img src="https://img.shields.io/pypi/v/kagan?style=for-the-badge" alt="PyPI"></a>
<a href="https://pypi.org/project/kagan/"><img src="https://img.shields.io/pypi/pyversions/kagan?style=for-the-badge" alt="Python"></a>
<a href="https://opensource.org/license/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge" alt="License: MIT"></a>
<a href="https://github.com/aorumbayev/kagan/stargazers"><img src="https://img.shields.io/github/stars/aorumbayev/kagan?style=for-the-badge" alt="Stars"></a>
<a href="https://discord.gg/dB5AgMwWMy"><img src="https://img.shields.io/badge/discord-join-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Discord"></a>
</p>
<p align="center">
<a href="https://snyk.io/test/github/kagan-sh/kagan?targetFile=pyproject.toml"><img src="https://snyk.io/test/github/kagan-sh/kagan/badge.svg?targetFile=pyproject.toml&style=flat" alt="Snyk"></a>
</p>
<p align="center">
<a href="https://docs.kagan.sh/">Documentation</a> •
<a href="https://docs.kagan.sh/quickstart/">Quickstart</a> •
<a href="https://docs.kagan.sh/guides/mcp-setup/">MCP Setup</a> •
<a href="https://docs.kagan.sh/reference/cli/">CLI Reference</a> •
<a href="https://github.com/aorumbayev/kagan/issues">Issues</a>
</p>
---
<p align="center">
<img src=".github/assets/demo.gif" alt="Kagan Demo" width="700">
</p>
Create a task. Pick a mode. The agent works. You review, approve, and merge.
## Install
=== "UV (Recommended)"

    ```bash
    uv tool install kagan
    ```

=== "Mac / Linux"

    ```bash
    curl -fsSL https://uvget.me/install.sh | bash -s -- kagan
    ```

=== "Windows (PowerShell)"

    ```powershell
    iwr -useb uvget.me/install.ps1 -OutFile install.ps1; .\install.ps1 kagan
    ```

=== "pip"

    ```bash
    pip install kagan
    ```
### Requirements
- Python 3.12 -- 3.13, Git, terminal 80x20+
- tmux (recommended for PAIR sessions on macOS/Linux)
- VS Code or Cursor (PAIR launchers, especially on Windows)
## Usage
```bash
kagan # Launch TUI (default command)
kagan tui # Launch TUI explicitly
kagan core status # Show status of the core process
kagan core stop # Stop the running core process
kagan mcp # Run as MCP server (connects to core via IPC)
kagan tools # Stateless developer utilities (prompt enhancement)
kagan update # Check for and install updates
kagan list # List all projects with task counts
kagan reset # Reset data (interactive)
kagan --help # Show all options
```
## Ways to Use Kagan
### TUI (interactive)
Run `kagan` -- create tasks, run AUTO/PAIR workflows, review/rebase/merge, switch projects.
### Editor (MCP)
Operate Kagan from Claude Code, Gemini CLI, or any MCP-compatible client -- no TUI required:
```bash
kagan mcp --capability pair_worker
```
Start with `pair_worker`. Escalate to `maintainer` when needed. See [MCP setup](https://docs.kagan.sh/guides/mcp-setup/) for editor configs.
## Features
- Kanban lifecycle: `BACKLOG -> IN_PROGRESS -> REVIEW -> DONE`
- Task CRUD, duplicate, inspect
- Work modes: `AUTO` (background agent) / `PAIR` (interactive session)
- Chat-driven planning with approval flow
- Review: diff, approve/reject/rebase/merge
- Multi-repo: project switching, base-branch controls
- PAIR handoff: tmux / VS Code / Cursor session management
- MCP: 23 tools spanning tasks, sessions, review, planning, projects, audit, settings
- Core daemon management: run, inspect, stop
## Supported Agents
- [Claude Code](https://docs.anthropic.com/en/docs/claude-code) (Anthropic)
- [OpenCode](https://opencode.ai/docs) (SST)
- [Codex](https://github.com/openai/codex) (OpenAI)
- [Gemini CLI](https://github.com/google-gemini/gemini-cli) (Google)
- [Kimi CLI](https://github.com/MoonshotAI/kimi-cli) (Moonshot AI)
- [GitHub Copilot](https://github.com/github/copilot-cli) (GitHub)
## Docs
**[docs.kagan.sh](https://docs.kagan.sh/)** -- [Quickstart](https://docs.kagan.sh/quickstart/) | [MCP Setup](https://docs.kagan.sh/guides/mcp-setup/) | [Editor MCP Setup](https://docs.kagan.sh/guides/editor-mcp-setup/)
## License
[MIT](LICENSE)
---
<p align="center">
<a href="https://www.star-history.com/#aorumbayev/kagan&type=date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=aorumbayev/kagan&type=date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=aorumbayev/kagan&type=date" />
<img alt="Star History" src="https://api.star-history.com/svg?repos=aorumbayev/kagan&type=date" width="600" />
</picture>
</a>
</p>
| text/markdown | null | Altynbek Orumbayev <altynbek.orumbayev@makerx.com.au> | null | null | null | ai, autonomous, kanban, project-management, terminal, textual, tui | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Terminals",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | <3.15,>=3.12 | [] | [] | [] | [
"agent-client-protocol<1.0.0,>=0.7.0",
"aiofiles>=24.1.0",
"aiosqlite>=0.22.0",
"click>=8.1.0",
"filelock>=3.16.0",
"greenlet>=3.1.0",
"mcp<2.0.0,>=1.26.0",
"mslex>=1.3.0",
"packaging>=24.0",
"platformdirs>=4.3.0",
"pydantic>=2.10.0",
"pyperclip>=1.11.0",
"rich>=14.0.0",
"sqlmodel>=0.0.22",
"textual>=7.0.0",
"tomlkit>=0.13.0"
] | [] | [] | [] | [
"Homepage, https://kagan.sh",
"Documentation, https://docs.kagan.sh",
"Repository, https://github.com/aorumbayev/kagan",
"Issues, https://github.com/aorumbayev/kagan/issues",
"Changelog, https://github.com/aorumbayev/kagan/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:17:22.866482 | kagan-0.6.0-py3-none-any.whl | 598,258 | e6/3d/82983f57d490741a4643ac72af52a69815a86c8956aae6a34f1156103685/kagan-0.6.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 9980ca56f51d833e4b98adc25427ad99 | a0c1f72b6becbbea781e62353a275115b63f7fe2e84d4c0ae9676e9047546cba | e63d82983f57d490741a4643ac72af52a69815a86c8956aae6a34f1156103685 | MIT | [
"LICENSE"
] | 105 |
2.4 | onion-arch | 0.5.1 | Decouple your package-project with Onion-Mode. | # OnionMeta: A Python Metaclass for Dynamic Abstract Method Resolution
`from onion import *`
**Exports**: `Onion`, `OnionMeta`, `OnionViolationError`, `abstractmethod`, `OnionUnload`
## 1. Core Concepts: The Onion Architecture Metaclass
The `OnionMeta` metaclass is a sophisticated Python construct designed to facilitate a unique architectural pattern often referred to as the "Onion Architecture." This pattern is characterized by a clear separation of concerns, where the core business logic classes are defined using abstract interfaces, and their concrete implementations are provided in separate, decoupled modules. The `OnionMeta` metaclass is the engine that makes this dynamic assembly possible, especially by deferring the execution of abstract method implementations until runtime.
This approach allows for the creation of a highly modular and extensible system where the core class remains stable and unaware of its concrete implementations, while the implementations can be developed, maintained, and even loaded independently. The metaclass achieves this by intercepting the attempt to instantiate the core abstract class for the first time, triggering a "Construction & Binding" (C&B) phase. During this phase, it dynamically discovers and integrates all available method implementations from designated submodules, effectively "completing" the class before allowing instantiation to proceed.
This mechanism ensures that the end user interacts with a single, unified interface (the core class) while benefiting from a rich set of functionalities aggregated from various parts of the project.
### 1.1. Design Philosophy
The design philosophy of `OnionMeta` is rooted in principles of decoupling, modularity, and dynamic behavior, which are highly valued in modern software engineering. It addresses the challenge of creating a stable core application logic that can be extended with functionality without modifying the core itself. This is akin to the layers of an onion, where the core represents the essential business rules, and each outer layer represents a specific implementation or set of functionalities.
The metaclass provides the mechanism to seamlessly wrap these layers around the core at runtime. This design is particularly useful in large applications, plugin systems, or frameworks where a clear contract (the abstract class) needs to be established, but the fulfillment of that contract (the implementation) may vary or be developed by different teams. By centralizing the logic for discovering and applying these implementations in the metaclass, the system maintains a clear separation between the "what" (the interface) and the "how" (the implementation), resulting in a more maintainable and flexible codebase.
#### 1.1.1. Decoupling Core Business Logic from Implementations
A fundamental goal of the `OnionMeta` design is to achieve a strict decoupling between the core business logic (represented by the abstract base class) and its concrete implementations. In typical object-oriented design, a class that defines abstract methods must be subclassed, and all abstract methods must be implemented in that subclass before instantiation. This creates a tight coupling between the interface and its implementation.
`OnionMeta` inverts this relationship. The core class, which uses `OnionMeta` as its metaclass, defines the necessary abstract methods but remains unaware of how or where these methods will be implemented. The implementations are provided in separate modules or classes, which may or may not directly inherit from the core class in the traditional sense. The metaclass acts as a bridge, dynamically collecting these disparate implementations at runtime and binding them to the core class upon its first instantiation.
This decoupling means that the core class can be defined, documented, and even distributed independently of any specific implementation. Developers working on implementations do not need to modify the core class, reducing the risk of introducing bugs into the central logic. This separation of concerns is a cornerstone of maintainable software, allowing for parallel development, easier testing, and the ability to swap or add new functionalities without altering the foundational code.
#### 1.1.2. The "Onion" Metaphor: Layered Implementations
The "onion" metaphor aptly describes the architectural pattern enabled by `OnionMeta`. Imagine the core abstract class as the heart of the onion. This core defines the essential, non-negotiable business rules and interfaces. Each layer of the onion, wrapped around the core, represents a set of concrete implementations for the abstract methods defined by the core.
These layers are independent of each other and of the core's internal structure. Their sole purpose is to provide the "flesh" to the abstract "skeleton." The `OnionMeta` metaclass is the force that assembles these layers. When the core class is instantiated for the first time, the metaclass initiates the "Construction & Binding" (C&B) process, which can be thought of as the act of wrapping all available implementation layers around the core.
It scans the project's ecosystem for any code that provides implementations for the core's abstract methods. It then dynamically attaches these implementations to the core class. From the end user's perspective, they only see and interact with the final, fully formed onion (the core class), which now possesses all the functionalities provided by its layers. The user is unaware of the individual layers or the dynamic assembly process; they simply see a cohesive, functional object.
This metaphor highlights the core principle of the architecture: a stable, hidden core and a flexible, layered set of implementations that can be added or modified without affecting the core.
#### 1.1.3. Interacting Only with the Core Class
A key aspect of the `OnionMeta` design is to simplify the user-facing API. The end user of a library or framework built using this pattern interacts only with the core abstract class. They do not need to know about the existence of the various implementation submodules or the classes within them (e.g., `A_foo`, `A_bar`). Their code will simply import the core class `A` and instantiate it directly.
The complexity of discovering and integrating the necessary implementations is completely abstracted away by the `OnionMeta` metaclass. The user's workflow is as follows: `from impls import A` followed by `my_a = A()`. Upon the first call to `A()`, the metaclass silently performs the "Construction & Binding" (C&B) process in the background. It loads the `impls` module, which in turn imports all implementation submodules. It then collects the required methods and binds them to the `A` class. Once this is done, instantiation proceeds, and the user receives a fully functional object.
All subsequent instantiations of `A` will skip the C&B phase and proceed directly to object creation, as the class has already been "compiled" and marked as ready. This design provides the user with a clean and intuitive experience, benefiting from a powerful, dynamically assembled object without being burdened by the details of its construction. This approach is particularly powerful in plugin architectures, where the user might install new plugins (implementations) that are automatically picked up and integrated into the core application without requiring any changes to the user's code.
### 1.2. Key Responsibilities of `OnionMeta`
The `OnionMeta` metaclass shoulders several critical responsibilities to orchestrate the dynamic assembly of the abstract core class. It acts as the central controller, managing the entire lifecycle from the initial class definition to its final, usable state. Its primary duties revolve around intercepting the instantiation process, managing a one-time initialization phase, and ensuring that the resulting class adheres to its abstract method definitions, albeit in a deferred manner.
These responsibilities are closely tied to Python's object creation mechanisms and the behavior of the `abc` (Abstract Base Classes) module. By carefully overriding specific methods within its own definition, `OnionMeta` can insert custom logic at crucial points in the class and object creation process, enabling the unique "onion" architecture. The successful execution of these responsibilities ensures that the core class is both extensible and robust, providing a stable interface to the user while maintaining enough flexibility to dynamically incorporate new implementations.
#### 1.2.1. Intercepting the First Instantiation
The first and most critical responsibility of `OnionMeta` is to intercept the initial attempt to instantiate the core class (e.g., `A`). In Python, object creation is a two-step process handled by the metaclass's `__call__` method. When you write `A()`, you are effectively calling `OnionMeta.__call__(A, ...)`. This provides a perfect hook for the metaclass to execute its custom logic before the actual object creation.
The `OnionMeta` metaclass overrides the `__call__` method to perform this interception. Within this overridden method, it checks for a special flag on the class itself, typically named `__onion_built__`. If this flag is not present or is `False`, the metaclass knows that this is the first time the class is being instantiated and that the "Construction & Binding" (C&B) process has not yet occurred.
This interception is the trigger for the entire dynamic assembly mechanism. It allows the metaclass to pause the standard instantiation process, collect the necessary components, modify the class, and only then allow instantiation to proceed. This ensures that the class is fully prepared and has all its required methods implemented before any objects are created, preventing the `TypeError` exception that would typically be raised by `ABCMeta` upon attempting to instantiate a class with unimplemented abstract methods.
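The hook itself can be seen in isolation with a plain metaclass. This is a minimal sketch of the interception mechanism only, unrelated to the real `OnionMeta` internals:

```python
class InterceptingMeta(type):
    """Records every instantiation before delegating to normal creation."""
    calls = []

    def __call__(cls, *args, **kwargs):
        # Runs BEFORE any instance exists: the hook OnionMeta relies on.
        InterceptingMeta.calls.append(cls.__name__)
        return super().__call__(*args, **kwargs)


class Core(metaclass=InterceptingMeta):
    pass


obj = Core()  # goes through InterceptingMeta.__call__ first
assert InterceptingMeta.calls == ["Core"]
```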
#### 1.2.2. Triggering the "Construction & Binding" (C&B) Process
Once the `OnionMeta` metaclass has intercepted the first instantiation, its next responsibility is to trigger the "Construction & Binding" (C&B) process. This is the heart of the dynamic assembly mechanism. The C&B process is a custom routine defined within the overridden `__call__` method of the metaclass. It is responsible for several key tasks: discovering implementation submodules, collecting concrete method implementations, and binding them to the core abstract class.
The process begins by examining all loaded modules, looking for classes or functions that provide implementations for the abstract methods defined in the core class. Because the metaclass does not load implementation modules automatically, the user must ensure that all necessary implementation modules have been imported before the core class is first instantiated. This discovery phase gathers all the onion "layers" that need to be wrapped around the core. The entire C&B routine is guarded by an `if not getattr(cls, '__onion_built__', False):` check, ensuring it runs only once and making the class modification a one-time, atomic operation.
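Discovery can be approximated by scanning `sys.modules` for callables that match the core's abstract method names. This is a generic sketch under that assumption, not the package's actual discovery code:

```python
import sys
import types

def collect_implementations(abstract_names, prefix):
    """Scan already-imported modules whose names start with `prefix`
    and collect top-level functions matching the abstract method names."""
    found = {}
    for mod_name, mod in list(sys.modules.items()):
        if mod is None or not mod_name.startswith(prefix):
            continue
        for attr_name in abstract_names:
            impl = getattr(mod, attr_name, None)
            if isinstance(impl, types.FunctionType):
                found[attr_name] = impl
    return found


# Simulate an implementation submodule that the user has already imported:
layer = types.ModuleType("impls.layer_foo")
layer.foo = lambda self: "foo impl"
sys.modules["impls.layer_foo"] = layer

impls = collect_implementations({"foo"}, prefix="impls.")
assert set(impls) == {"foo"}
```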
#### 1.2.3. Dynamically Modifying the Abstract Base Class
After the "Construction & Binding" (C&B) process has collected all the necessary method implementations, the next responsibility of the `OnionMeta` metaclass is to dynamically modify the core abstract class. This is achieved by adding the collected methods directly to the class's namespace. In Python, a class is a mutable object, and its attributes (including methods) can be added or modified at runtime.
The metaclass iterates over the collected implementations dictionary (e.g., `{'foo': <function foo_impl>, 'bar': <function bar_impl>}`) and uses the built-in `setattr` function to bind each implementation to the core class. For example, `setattr(cls, 'foo', foo_impl)` would add the `foo_impl` function as a method named `foo` to the class `cls` (which is the core class `A`). This step effectively "completes" the class by providing concrete definitions for the previously abstract methods.
The dynamic nature of Python is key here, as it allows the class structure to be altered after its initial definition but before it's used to create instances. This modification is permanent for the duration of the program's execution; once a method is added, it becomes part of the class definition, accessible to all subsequent instances. This responsibility is what makes the decoupling of interface from implementation possible, as the core class is transformed into a concrete, instantiable class at runtime.
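Because classes are ordinary mutable objects, the binding step needs nothing more than `setattr`. A generic illustration, not the package's code:

```python
class Core:
    """Defined without a greet method."""


def greet(self) -> str:
    return "hello from a layer"


# Bind the free function as a method after class definition; all
# existing and future instances see it immediately.
setattr(Core, "greet", greet)

assert Core().greet() == "hello from a layer"
```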
#### 1.2.4. Deferring and Managing Abstract Method Execution
A critical responsibility of `OnionMeta` is to manage the execution of abstract methods, effectively deferring the check that would normally be performed by `ABCMeta`. The standard `ABCMeta` metaclass prevents the instantiation of any class with unimplemented abstract methods. However, in the "onion" architecture, the core class is intentionally abstract at definition time, with the expectation that its methods will be provided later.
If the standard `ABCMeta` behavior were allowed to proceed, the first attempt to instantiate the core class would immediately fail. `OnionMeta` resolves this by first executing the "Construction & Binding" (C&B) process to dynamically add the required methods, and then explicitly updating the abstract methods registry.
In Python 3.10 and later, the `abc` module provides a crucial function: `update_abstractmethods(cls)`. This function recalculates the set of abstract methods for a given class `cls`. After `OnionMeta` has dynamically added all the necessary implementations to the core class, it calls `update_abstractmethods(cls)`. This call forces the `abc` machinery to re-examine the class. If all previously abstract methods now have concrete implementations, the class's `__abstractmethods__` attribute is updated to an empty set, and the class is no longer considered abstract.
Only after this update does `OnionMeta` proceed to call the parent `ABCMeta.__call__` method to complete the instantiation. This sequence ensures that the abstract method check is performed only after the class is fully assembled, preventing premature `TypeError` exceptions and enabling the desired dynamic behavior.
## 2. The "Construction & Binding" (C&B) Process
The "Construction & Binding" (C&B) process is the central mechanism by which the `OnionMeta` metaclass brings the abstract core class to life. It is a one-time, automated procedure that occurs upon the first instantiation of the core class. This process is responsible for discovering, collecting, and integrating all the disparate method implementations scattered across the project's submodules. The C&B process can be broken down into several distinct phases, each with a specific function.
It begins with the interception of the first instantiation, which acts as the trigger. Following this, it enters the discovery phase, where it examines all loaded modules for implementation submodules. Since the metaclass no longer automatically loads implementation modules, the user must ensure that all necessary implementation modules are loaded into memory before using the core class. Once the submodules are discovered, it moves to the collection phase, where it systematically searches for and gathers the concrete methods that fulfill the core class's abstract requirements. The final phase involves the dynamic modification of the core class itself, where the collected methods are bound to the class, and the abstract method registry is updated to reflect the class's new, complete state. The entire process is designed to be transparent to the end user, who simply interacts with the final, fully formed class.
### 2.1. Triggering C&B on First Instantiation
The trigger for the "Construction & Binding" (C&B) process is the first call to the core abstract class. This is a deliberate design choice to ensure that the class is fully prepared before any objects are created, while also avoiding the overhead of this preparation process for subsequent instantiations. The mechanism for this trigger relies on the `__call__` method of the `OnionMeta` metaclass and a simple state management flag. This approach is effective because it relies on the standard Python object creation protocol and does not require explicit initialization calls from the user. The entire process is initiated implicitly, resulting in a clean and intuitive API. The use of the flag ensures that the potentially expensive operations of loading modules and modifying the class are performed only once, no matter how many times the class is instantiated throughout the application's lifetime.
#### 2.1.1. Overriding the `__call__` Method in the Metaclass
The `__call__` method in a Python metaclass is a special method that is called when an instance of a class created by that metaclass is called. In the context of `OnionMeta`, when the user writes `a = A()`, Python internally translates this to `OnionMeta.__call__(A, *args, **kwargs)`. By overriding this method, `OnionMeta` gains the ability to insert its custom logic at the very beginning of the object creation process. This is the ideal place to trigger the "Construction & Binding" (C&B) process.
The overridden `__call__` method first checks whether the C&B process has already been completed. If not, it executes the C&B routine, which involves loading the implementation modules and dynamically adding methods to the class `A`. After the C&B process is complete, the method then calls `super().__call__(*args, **kwargs)`, which invokes the `__call__` method of the parent metaclass, `ABCMeta`. This call proceeds with the standard object creation process using the newly modified, fully realized version of class `A`. This technique of overriding `__call__` is a powerful metaprogramming pattern in Python, allowing for fine-grained control over class instantiation.
#### 2.1.2. Using an Internal State Flag (`__onion_built__`) to Ensure Single Execution
To ensure that the "Construction & Binding" (C&B) process is executed only once, `OnionMeta` employs a simple but effective state management technique using an internal class-level flag, typically named `__onion_built__`. This flag is stored as an attribute on the class itself (e.g., `A.__onion_built__`). When the overridden `__call__` method is invoked, its first operation is to check the state of this flag using `getattr(cls, '__onion_built__', False)`.
On the first instantiation, this attribute will not exist on the class, so `getattr` will return the default value `False`. This condition triggers the C&B process. Upon successful completion of the C&B process, the metaclass sets this flag to `True` on the class using `setattr(cls, '__onion_built__', True)`. Now, any subsequent calls to instantiate the class will find `__onion_built__` to be `True`, and the `if` condition in the `__call__` method will evaluate to `False`. Consequently, the C&B code block will be skipped, and the method will proceed directly to the `super().__call__(*args, **kwargs)` call, resulting in a faster instantiation process. This flag-based mechanism is a common pattern in Python for achieving singleton-like behavior or one-time initialization, and it is perfectly suited to `OnionMeta`'s needs.
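The flag pattern can be sketched in isolation with a plain metaclass (names here are illustrative, not the library's):

```python
class OneTimeMeta(type):
    """Sketch of the one-time-initialization flag, without abc machinery."""
    def __call__(cls, *args, **kwargs):
        if not getattr(cls, "__onion_built__", False):
            # Stand-in for the expensive C&B work:
            cls.build_count = getattr(cls, "build_count", 0) + 1
            setattr(cls, "__onion_built__", True)
        return super().__call__(*args, **kwargs)

class Widget(metaclass=OneTimeMeta):
    pass

Widget()
Widget()
Widget()
print(Widget.build_count)  # 1 -- the setup branch ran exactly once
```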
### 2.2. Collecting Method Implementations
The collection of method implementations is the core part of the "Construction & Binding" (C&B) process. This stage is responsible for finding and gathering all the concrete methods that will be used to fulfill the abstract contract of the core class. The design relies on a convention-based approach where implementations are located in specific, discoverable places within the project structure: the metaclass inspects the modules already loaded into memory, so all implementation submodules must have been imported before the core class is used.
After the first instance of `A` is created, all subsequent instantiations follow the normal, fast Python object creation flow. The `__onion_built__` flag set on the class causes the metaclass to skip the C&B process on all future calls, so creating new instances of `A` performs just as fast as instantiating any other class: the dynamic assembly mechanism imposes no performance burden after the initial setup cost.
#### 2.2.1. Mechanism for Loading Implementation Submodules
During the C&B process, the metaclass examines all loaded modules to discover implementation submodules; it never loads modules itself. The user must therefore ensure that every required implementation module has been imported before the core class is used: a module that is not loaded cannot be discovered, and its functionality will simply be missing. This design makes module dependencies explicit and controllable.
#### 2.2.2. Responsibility for Loading Implementation Modules
The `OnionMeta` metaclass discovers implementation submodules by examining the loaded modules and does not automatically load any implementation modules. This design places the loading responsibility on the user, making module dependencies more explicit and controllable.
As a module developer, you should import all necessary implementation modules in the package's `__init__.py`, so that they are loaded as a side effect of importing the package and the functionality is fully available to the user:
```python
# In the package's __init__.py
from .core import DataProcessor, Calculator # Export the core classes
from . import basic_calculator # Implicitly load the basic implementation
from . import advanced_calculator # Implicitly load the advanced implementation
# When the user executes "from your_package import Calculator", all implementations are ready
```
If the user imports the core class individually, they must manually load the required implementation modules; otherwise, the metaclass cannot discover the unloaded modules, and the corresponding functionalities will not be available.
### 2.3. Dynamic Class Modification and Abstract Method Updates
Once the implementations have been collected, the metaclass binds each of them to the core class with `setattr` and then calls `abc.update_abstractmethods()` so that the `abc` machinery re-examines the class. When every previously abstract method has received a concrete implementation, the class's `__abstractmethods__` set becomes empty and the class can be instantiated like any ordinary concrete class.
## 3. Simplifying Onion Architecture with the Onion Base Class
To simplify the use of the onion architecture, we provide the `Onion` base class, which is an abstract base class that uses the `OnionMeta` metaclass. Users only need to inherit from the `Onion` class to obtain the full onion architecture functionality without having to deal with the complexities of the metaclass directly.
### 3.1. Design of the Onion Base Class
The `Onion` base class provides a concise API, similar to the standard `abc.ABC`:
```python
from onion import Onion
import abc
class MyCore(Onion):
    @abc.abstractmethod
    def my_method(self):
        pass
```
### 3.2. Usage Flow
The complete flow for using the `Onion` base class is as follows:
1. **Define the core abstract class**: inherit from `Onion` and declare the abstract methods.
2. **Define implementation classes**: inherit from the core class and implement the abstract methods.
3. **Load the implementation modules**: this must be done before using the core class, otherwise the functionality will be incomplete.
4. **Compile**: manually call `onionCompile()`, or rely on automatic compilation upon first instantiation.
5. **Create instances and call methods.**
### 3.3. Example Code
```python
import abc
from onion import Onion

# 1. Define the core abstract class
class CalculatorCore(Onion):
    @abc.abstractmethod
    def add(self, a: float, b: float) -> float:
        """Addition operation"""
        pass

    @abc.abstractmethod
    def multiply(self, a: float, b: float) -> float:
        """Multiplication operation"""
        pass

# 2. Define implementation classes (usually in separate modules)
class CalculatorImpl(CalculatorCore):
    def add(self, a: float, b: float) -> float:
        return a + b

    def multiply(self, a: float, b: float) -> float:
        return a * b

# 3. Usage flow
def main():
    # Important: The user must load the implementation modules before using the core class
    # Method 1: Via package import (recommended)
    # from your_package import CalculatorCore  # The package has implicitly loaded the implementations
    #
    # Method 2: Explicitly load the implementation modules
    # import impls  # Manually load all implementations
    # Or import individually
    # from calculator_impl import CalculatorImpl
    print("✓ Implementation modules loaded")

    # 4. Manual compilation (optional)
    CalculatorCore.onionCompile()
    print("✓ Manual compilation completed")

    # 5. Create instance and use
    calc = CalculatorCore()
    print("✓ Instance created successfully")

    # 6. Call methods
    result1 = calc.add(5, 3)
    result2 = calc.multiply(4, 6)
    print(f"5 + 3 = {result1}")
    print(f"4 * 6 = {result2}")

if __name__ == "__main__":
    main()
```
**Important Reminder**: The metaclass can only discover modules that are already loaded. If the implementation modules are not loaded before the core class is used, the functionality will be incomplete.
### 3.4. Key Changes and Improvements
Compared to earlier versions, version 0.5 brings significant improvements:
**Method Naming Standardization**: The `compile()` method has been renamed to the more descriptive `onionCompile()`, avoiding conflicts with Python built-in methods. All internal methods now use the `__onion_` prefix.
**Explicit User Responsibilities**: The metaclass no longer automatically loads implementation modules. Users must ensure all necessary modules are loaded before using the core class.
**Thread Safety (New in 0.5)**: Uses `WeakKeyDictionary` for per-thread implementation storage. Each thread maintains independent implementations, with automatic cleanup when threads end.
**Architectural Enforcement (New in 0.5)**: The `OnionViolationError` exception enforces implementation class conventions: no `__init__`, no new methods, no class attributes, and no overriding non-abstract methods.
**Implementation Unloading (New in 0.5)**: The `OnionUnload()` function allows complete unloading of implementations, enabling switching between different implementations at runtime.
**Enhanced Debugging**: Detailed debug output during compilation, clear error messages, method conflict warnings, and thread-local state inspection capabilities.
**Performance Optimization**: The C&B process executes only once per thread. After initial compilation, instantiation performance matches regular classes.
## 4. Implementation Details and Technical Points
### 4.1. Subclass Registration Mechanism
`OnionMeta` uses an internal list `__onion_subs__` to maintain a registry of all non-abstract subclasses. In the `__init__` method, when a non-abstract subclass is detected inheriting from the core class, it is automatically added to the registry. This mechanism ensures that all possible implementations can be tracked and managed by the system, providing a foundation for subsequent method collection and validation.
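A hypothetical sketch of such a registry (the attribute name `__onion_subs__` follows the text; the surrounding code is illustrative, not the library's):

```python
import abc

class RegisteringMeta(abc.ABCMeta):
    """Sketch: record every non-abstract subclass as it is defined."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        if not hasattr(cls, "__onion_subs__"):
            cls.__onion_subs__ = []          # created on the root core class
        elif not cls.__abstractmethods__:
            cls.__onion_subs__.append(cls)   # concrete subclass: register it

class Core(metaclass=RegisteringMeta):
    @abc.abstractmethod
    def run(self): ...

class Impl(Core):
    def run(self):
        return "ok"

print(Core.__onion_subs__)  # [<class '...Impl'>]
```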
### 4.2. Method Collection Strategy
The `__onion_get_meths()` method is responsible for collecting method implementations from the registered subclasses. It iterates over all registered subclasses, checking whether each subclass implements the abstract methods of the core class. The validation process confirms whether the method is defined in that subclass (via `__qualname__` check) and handles method conflicts, usually adopting a strategy where later definitions override earlier ones. Finally, it returns a mapping dictionary from method names to implementation functions, providing all the method implementations required for the compilation process.
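A simplified sketch of this collection strategy (class and function names are hypothetical; the real `__onion_get_meths()` is not shown in this document):

```python
import abc

class Core(abc.ABC):
    @abc.abstractmethod
    def foo(self): ...
    @abc.abstractmethod
    def bar(self): ...

# Partial "layers": each provides some of the abstract methods.
class FooLayer(Core):
    def foo(self):
        return "foo"

class BarLayer(Core):
    def bar(self):
        return "bar"

def collect_impls(core, subclasses):
    """Sketch: map each abstract method name to a concrete function."""
    impls = {}
    for sub in subclasses:
        for name in core.__abstractmethods__:
            meth = sub.__dict__.get(name)  # only methods defined in sub itself
            # __qualname__ records the defining class, e.g. 'FooLayer.foo',
            # confirming the method is not merely inherited.
            if meth is not None and meth.__qualname__.endswith(f".{name}"):
                impls[name] = meth  # later definitions override earlier ones
    return impls

impls = collect_impls(Core, [FooLayer, BarLayer])
print(sorted(impls))  # ['bar', 'foo']
```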
### 4.3. Compilation Process Validation
The `__onion_compile()` method performs a strict validation process. First, it conducts a subclass existence check to ensure that at least one implementation subclass is available. Then it performs a method completeness check to verify that every abstract method has a corresponding implementation. Next, it executes the method merge, correctly binding the collected methods to the core class. Finally, it emits any warnings, reporting issues such as method conflicts that need attention.
### 4.4. Abstract Method Updates
`OnionMeta` uses Python 3.10+'s `abc.update_abstractmethods()` function to update the abstract method registry, ensuring that the class is correctly recognized as concrete once compilation completes. The function recalculates the class's set of abstract methods and updates its internal state, allowing the originally abstract class to be instantiated normally.
## 5. Best Practices and Usage Recommendations
### 5.1. Project Structure Recommendations
```
project/
├── __init__.py # Package initialization, responsible for implicitly loading implementations
├── core.py # Core abstract class definitions
├── feature_a.py # Specific implementation module
├── feature_b.py # Another implementation module
└── main.py # Main program entry point
```
**Important**: As a module developer, ensure that `__init__.py` imports every implementation module, so that they are all loaded when the user imports the package. If an import is omitted, the corresponding functionality will be missing.
### 5.2. Implementation Module Loading Example
```python
# Implicit loading via package import (recommended)
from your_package import CalculatorCore # The package's __init__.py has loaded the implementations
# Or load specific implementations separately
from basic_calculator import BasicCalculatorImpl
from advanced_calculator import AdvancedCalculatorImpl
# Important: Before using the core class, you must ensure that all required implementation modules have been loaded
CalculatorCore.onionCompile() # Now you can compile
```
### 5.3. Usage Pattern Recommendations
The explicit loading mode suits scenarios with precise requirements on the timing of compilation: the user first imports the implementation modules, then manually calls the compilation method, and finally creates an instance. This gives the user complete control over the initialization process, making it easy to run additional setup steps before and after compilation.

The automatic compilation mode is more concise and suits most regular usage scenarios: the user only needs to import the implementation modules and then create an instance directly; the system completes compilation on the first instantiation. This reduces the amount of code and makes usage more intuitive.

Whichever mode is chosen, the user must ensure that all necessary implementation modules have been loaded into memory before using the core class; otherwise, the corresponding functionalities will not be available.
### 5.4. Debugging Tips
When debugging an onion architecture application:
- **Check compilation status**: Use `core_class.__onion_built__` to verify if the C&B process is complete (per-thread)
- **View remaining abstract methods**: Use `core_class.__abstractmethods__` to identify unimplemented methods
- **Monitor method conflicts**: Watch for `RuntimeWarning` when multiple implementations provide the same method
- **Thread safety debugging**: Each thread has independent storage; use `_get_thread_storage()` to inspect thread-local state
- **Implementation switching**: Use `OnionUnload()` to reset and switch implementations during development
For detailed compilation information, including collected subclasses and method mappings, enable debug logging in your application.
## 6. New Features in Version 0.5
### 6.1. Thread Safety
Version 0.5 introduces **thread-local storage** for implementation management. Each thread maintains its own independent set of implementations, ensuring thread safety:
```python
import threading
from onion import Onion
import abc

class Core(Onion):
    @abc.abstractmethod
    def work(self):
        pass

class ImplA(Core):
    def work(self):
        return "A"

def thread_task():
    # Each thread loads its own implementation
    impl = Core()
    print(impl.work())  # Output: A

t1 = threading.Thread(target=thread_task)
t2 = threading.Thread(target=thread_task)
t1.start()
t2.start()
```
**Important**: Cross-thread access is not allowed. Each thread must load its own implementations.
### 6.2. Implementation Class Constraints (OnionViolationError)
To ensure architectural integrity, implementation classes must follow strict conventions. Violations will raise `OnionViolationError`:
| Constraint | Description |
|------------|-------------|
| **No `__init__`** | All initialization logic must be handled by the base class |
| **No new methods** | Only implement abstract methods declared in the base class |
| **No class attributes** | Configuration must be passed through base class `__init__` |
| **No overriding non-abstract methods** | Cannot override methods already implemented in the base class |
| **Use module-level helpers** | Use private functions (`def _helper():`) for auxiliary logic |
```python
from onion import Onion, OnionViolationError
import abc

class Calculator(Onion):
    @abc.abstractmethod
    def add(self, a, b):
        pass

# ❌ Wrong: defining __init__
class BadImpl(Calculator):
    def __init__(self, config):  # Raises OnionViolationError
        self.config = config

# ❌ Wrong: adding new methods
class BadImpl2(Calculator):
    def helper(self):  # Raises OnionViolationError
        pass

    def add(self, a, b):
        return a + b

# ✅ Correct
class GoodImpl(Calculator):
    def add(self, a, b):
        return a + b
```
### 6.3. Implementation Unloading (OnionUnload)
Version 0.5 introduces the ability to unload implementations and switch to different ones at runtime:
```python
from onion import Onion, OnionUnload
import abc

class AICore(Onion):
    @abc.abstractmethod
    def chat(self, message):
        pass

# Load Kimi implementation
from ai import kimi_ai
ai = AICore()
ai.chat("Hello")

# Switch to DeepSeek implementation
OnionUnload(AICore, 'ai.kimi_ai', 'ai.kimi_config')
from ai import deepseek_ai
ai = AICore()
ai.chat("Hello")
```
**Note**: This is the **only** way to switch implementations. Dynamic switching at runtime is not supported by design.
### 6.4. Asyncio Compatibility Check
The framework now detects and prevents usage inside asyncio coroutines, as coroutines share a single thread, which would pollute the thread-local implementation state:
```python
import asyncio
from onion import Onion

class Core(Onion):
    pass

async def main():
    # This will raise RuntimeError
    core = Core()

# Use threading.Thread instead for async scenarios
```
## 7. Summary
The `OnionMeta` metaclass and `Onion` base class provide a powerful and flexible implementation of the onion architecture. By decoupling the core business logic from the concrete implementations, it supports:
- **Modular Development**: Core and implementations can be developed and maintained independently
- **Dynamic Extension**: New implementations can be added at runtime
- **Clear Architecture**: Maintains a separation between the "what" and the "how"
- **Simplicity**: Provides an intuitive API that hides complex metaprogramming details
- **Thread Safety**: Each thread has independent implementation storage
- **Architectural Enforcement**: OnionViolationError ensures implementation conventions are followed
The new implementation makes the onion architecture pattern more robust and easier to use through standardized method naming, explicit user responsibilities, enhanced debugging support, and thread-local storage.
| text/markdown | null | 2229066748@qq.com | Eagle'sBaby | 2229066748@qq.com | Apache Licence 2.0 | python | [
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.10 | 2026-02-20T07:16:28.072735 | onion_arch-0.5.1.tar.gz | 32,418 | 96/c4/6a37e5015d1e10b402957fcc43c8f5935ba4fed4ef2eb00d14d09e48f9ad/onion_arch-0.5.1.tar.gz | source | sdist | null | false | cc9cffe2e5c2348f0d53608888151117 | c75998f83e7d441615d9c040ff23fa11a81de20bd94f1bb71b3bfe7c34f87fa6 | 96c46a37e5015d1e10b402957fcc43c8f5935ba4fed4ef2eb00d14d09e48f9ad | null | [] | 239 |
2.4 | parsim | 2.3.0 | A tool for working with parameterized simulation models | Introduction
============
Parsim is a tool for working with parameterized simulation models.
The primary objective is to facilitate quality assurance of simulation projects.
The tool supports a scripted and automated workflow, where verified and validated simulation models
are parameterized, so that they can be altered/modified in well-defined ways and reused with minimal user intervention.
All events are logged on several levels, to support traceability, project documentation and quality control.
Parsim provides basic functionality for generating studies based on common design-of-experiments
(DOE) methods, for example using factorial designs, response surface methods or random sampling,
like Monte Carlo or Latin Hypercube.
Parsim can also be used as an interface to the `Dakota <https://dakota.sandia.gov>`_ library;
Dakota is run as a subprocess, generating cases from a Parsim model template.
How it works
============
Once a prototype simulation case has been developed, a corresponding simulation
*model template* is created by collecting all simulation input files, data
files and scripts into a *template directory*. The text files in a model
template can then be parameterized by replacing numerical values, or text
strings with macro names. Parsim uses the pyexpander macro processing library, which
supports embedding of arbitrarily complex Python code in the template files.
This can be used for advanced parameterization needs, for example to compute data
tables from functions, generate graphs for reports, generate content in loops or
conditionals, etc.
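As a minimal illustration (the file and parameter names are hypothetical, assuming pyexpander's ``$(...)`` expression-substitution syntax), a parameterized template file might look like this::

    # input.txt -- parameterized simulation input
    pipe_diameter  = $(diameter)
    inlet_velocity = $(velocity)
    # pyexpander evaluates arbitrary Python expressions:
    reynolds       = $(velocity * diameter / 1.0e-6)

When a case is created with concrete parameter values, each macro is replaced by the corresponding value in the generated case file.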
When a simulation case is created, the model template directory is recursively
replicated to create a *case directory*. Parsim operations can also be carried
out on a *study*, containing multiple cases. A study is a directory containing
multiple case directories.
You operate on your cases (either individually or on all cases of a study at once)
by executing scripts written to perform specific tasks, e.g.
meshing operations, starting a simulation, or post-processing of results.
Your simulation project lives in a Parsim *project directory*, which holds all
cases and studies of the project. The project directory holds Parsim
configuration settings and logs project events, like creation of cases and
studies, serious errors, change of configuration settings, etc.
Summary of features:
* Flexible and full-featured support for parameterization of text-based simulation models.
* Cases and parameter studies kept together in projects.
* Scripted workflow can be applied to individual cases as well as to large parameter studies.
* Logging and error handling, for traceability and project documentation.
* Python API can be conveniently used for post-processing and analysis, with input parameters
and output available as pandas DataFrames.
* Built-in support for many common design-of-experiments (DOE) methods.
* Can be used as an interface to the Dakota library, for complex uncertainty quantification and optimization tasks.
* Based on Python.
* One simple workflow for any kind of simulation application.
* Platform independent: works on Linux, Windows and macOS.
* Simple installation from public Python repositories (install with pip or conda).
* Available under open-source license (GNU General Public License v3)
Installation
============
Parsim is available at both `PyPI, the Python Package Index <https://pypi.python.org/pypi>`_ and as a conda package
through the `conda-forge repository <https://conda-forge.org>`_, depending on which Python distribution and package
manager you use (``pip`` and ``conda``, respectively).
The Parsim installation requires and automatically installs the
Python library `pyexpander <http://pyexpander.sourceforge.net>`_,
which is used for macro and parameter expansion (parameterization of input files).
The DOE (Design of Experiments) functionality is provided by the pyDOE3, numpy and
scipy libraries. The pandas library is used, so that the Python API can
provide results and caselist data as pandas DataFrames.
If you want to use the `Dakota toolkit <https://dakota.sandia.gov/>`_, it is installed separately;
the ``dakota`` executable should be in your ``PATH``.
.. note::
If you experience issues with the installation, it is recommended to first make a clean and fully
functional installation of the NumPy, SciPy and pandas libraries. The best way to do this depends on
which Python distribution you use. The `anaconda Python distribution <https://www.continuum.io/downloads>`_
or miniconda with packages from conda-forge both work well on both Windows and Linux.
Installation from PyPI
----------------------
Use the package installer ``pip`` to install: ::
pip install parsim
Installation with conda
-----------------------
Note that you need to select the ``conda-forge`` channel to find parsim with conda.
To install in your base environment: ::
conda install -c conda-forge parsim
Alternatively, create a separate conda environment (here called ``psm-env``) for using parsim: ::
conda create -n psm-env -c conda-forge parsim
conda activate psm-env
Documentation
=============
The Parsim documentation is hosted at `ReadTheDocs <https://parsim.readthedocs.io>`_.
Author
======
Parsim was developed by `Ola Widlund <https://www.ri.se/en/ola-widlund>`_,
`RISE Research Institutes of Sweden <https://www.ri.se/en>`_, to
provide basic and generic functionality for uncertainty quantification
and quality assurance of parameterized simulation models.
Licensing
=========
Parsim is licensed under the GNU General Public License (GPL), version 3 or later.
Copyright belongs to `RISE Research Institutes of Sweden AB <https://www.ri.se/en>`_.
Source code and reporting of issues
===================================
The source code is hosted at `GitLab.com <https://gitlab.com/olwi/psm>`_.
Here you can also report issues and suggest improvements.
| text/x-rst | null | Ola Widlund <ola.widlund@ri.se> | null | null | null | simulation, numerical modeling, doe, design of experiments, sampling, dakota, vvuq, quality assurance, qa, uq, uncertainty quantification, parameterization, parameterized models | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Manufacturing",
"Topic :: Scientific/Engineering",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Environment :: Console"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pyexpander>=1.9",
"numpy",
"scipy",
"pyDOE3",
"pandas"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/olwi/psm",
"Documentation, https://parsim.readthedocs.io/en/latest/index.html",
"Repository, https://gitlab.com/olwi/psm.git",
"Issues, https://gitlab.com/olwi/psm/-/issues",
"Changelog, https://gitlab.com/olwi/psm/-/blob/master/CHANGELOG.rst"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-20T07:15:12.395822 | parsim-2.3.0.tar.gz | 205,574 | f5/9f/d8c626357ec418e10ee50cbeb26e0086344e0ec5bd25c9a7d5d5a555c7fa/parsim-2.3.0.tar.gz | source | sdist | null | false | f7565f20b8f8d20ce0e6d82573453141 | 215859d4ca54250c41ce745cb319f3bd0b5009a6c255ee89942eb1d04010ddba | f59fd8c626357ec418e10ee50cbeb26e0086344e0ec5bd25c9a7d5d5a555c7fa | GPL-3.0-only | [
"LICENSE"
] | 245 |
2.4 | nmaipy | 4.2.1a2 | Nearmap AI Python Library for extracting AI features from aerial imagery | # nmaipy - Nearmap AI Python Library
Extract building footprints, vegetation, damage assessments, and other AI features from Nearmap's aerial imagery using simple Python code.
## What is nmaipy?
nmaipy (pronounced "en-my-pie") is a Python library that makes it easy for data scientists to access Nearmap's AI-powered geospatial data. Whether you're analyzing a few properties or processing millions of buildings across entire cities, nmaipy handles the complexity so you can focus on your analysis.
**Supported countries:** `au` (Australia), `us` (United States), `nz` (New Zealand), `ca` (Canada)
## Quick Start for Data Scientists
### 1. Install
#### Option A: Using pip
```bash
pip install -e .
```
#### Option B: Using conda
Minimal installation (core features only):
```bash
conda env create -f environment-minimal.yaml
conda activate nmaipy
```
Full installation (includes development and notebook tools):
```bash
conda env create -f environment.yaml
conda activate nmaipy
```
#### Option C: Install into existing conda environment
```bash
conda install -c conda-forge geopandas pandas numpy pyarrow psutil pyproj python-dotenv requests rtree shapely stringcase tqdm fsspec s3fs
pip install -e .
```
#### Additional options
For running notebooks with pip:
```bash
pip install -e ".[notebooks]"
```
For development with pip:
```bash
pip install -e ".[dev]"
```
### 2. Set your API key
```bash
export API_KEY=your_api_key_here
```
### 3. Run your first extraction
```python
from nmaipy.exporter import NearmapAIExporter
# Extract building and vegetation data
exporter = NearmapAIExporter(
aoi_file='my_parcels.geojson', # Your areas of interest
output_dir='results', # Where to save outputs
country='au', # au, us, nz, or ca
packs=['building', 'vegetation'], # What features to extract
processes=4 # Parallel processing
)
exporter.run()
```
That's it! Your results will be saved as CSV or Parquet files in the output directory.
> **Note:** `AOIExporter` is available as a backward-compatible alias for `NearmapAIExporter`.
## Common Use Cases
### Urban Planning
Extract comprehensive data about buildings, vegetation coverage, and surface materials:
```python
exporter = NearmapAIExporter(
aoi_file='city_blocks.geojson',
output_dir='urban_analysis',
country='au',
packs=['building', 'vegetation', 'surfaces', 'solar'],
save_features=True, # Get individual features, not just summaries
include_parcel_geometry=True # Keep boundaries for GIS analysis
)
```
### Disaster Response
Assess damage after natural disasters like hurricanes or floods:
```python
exporter = NearmapAIExporter(
aoi_file='affected_areas.geojson',
output_dir='damage_assessment',
country='us',
packs=['damage'],
since='2024-07-08', # Date range of the event
until='2024-07-11',
rapid=True, # Use rapid post-catastrophe imagery
save_features=True
)
```
### Environmental Analysis
Study vegetation coverage and tree canopy:
```python
exporter = NearmapAIExporter(
aoi_file='study_area.geojson',
output_dir='vegetation_study',
country='au',
packs=['vegetation'],
save_features=True # Get individual tree polygons
)
```
### Market Research
Find properties with pools or solar panels:
```python
exporter = NearmapAIExporter(
aoi_file='suburbs.geojson',
output_dir='market_analysis',
country='au',
packs=['pools', 'solar'],
include_parcel_geometry=True
)
```
### Roof Age Analysis (US Only)
Predict roof installation dates using AI analysis of historical imagery.
**Unified approach** (recommended) - combines Feature API and Roof Age in one export:
```python
exporter = NearmapAIExporter(
aoi_file='properties.geojson',
output_dir='unified_results',
country='us',
packs=['building'],
roof_age=True, # Include Roof Age API data
save_features=True
)
exporter.run()
```
**Standalone approach** - for roof age data only:
```python
from nmaipy.roof_age_exporter import RoofAgeExporter
exporter = RoofAgeExporter(
aoi_file='properties.geojson',
output_dir='roof_age_results',
country='us',
threads=10,
output_format='both' # Generate both GeoParquet and CSV
)
exporter.run()
```
The Roof Age API uses machine learning to analyze multiple imagery captures over time, combined with building permit data and climate information, to predict when roofs were last installed or significantly renovated. Each roof feature includes:
- Predicted installation date
- Confidence score (trust score)
- Evidence type and number of captures analyzed
- Timeline of all imagery used in analysis
This is valuable for:
- Insurance underwriting and risk assessment
- Property valuation and market analysis
- Maintenance planning and capital budgeting
- Real estate due diligence
## Available AI Features
Some of the most common AI packs are listed below. More are available (and the list is growing) via API request or on the Nearmap help pages at help.nearmap.com.
| Pack | Description | Example Use Cases |
|------|-------------|-------------------|
| `building` | Building footprints and heights | Urban planning, property analysis |
| `vegetation` | Trees and vegetation coverage | Environmental studies, urban forestry |
| `surfaces` | Ground surface materials | Permeability studies, heat mapping |
| `pools` | Swimming pool detection | Compliance, market research |
| `solar` | Solar panel detection | Renewable energy assessment |
| `damage` | Post-disaster damage classification | Insurance, emergency response |
| `building_characteristics` | Detailed roof types, materials | Detailed property analysis |
## Input Data Formats
nmaipy accepts areas of interest (AOIs) in several formats:
- **GeoJSON**: Standard geospatial format with polygons
- **GeoPackage** (GPKG): OGC standard for geospatial data
- **Parquet / GeoParquet**: Efficient columnar format for large datasets
- **CSV**: Simple format with WKT geometries (also supports TSV and PSV)
Your input file should contain polygon geometries representing the areas you want to analyze (parcels, census blocks, suburbs, etc.).
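As a minimal sketch of the GeoJSON option, a single-polygon AOI file can be written with nothing but the standard library. The coordinates and the `aoi_id` property here are illustrative, not required by nmaipy:

```python
import json

# One rectangular AOI polygon near Sydney CBD (lon/lat pairs, illustrative)
aoi = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"aoi_id": "parcel-001"},
            "geometry": {
                "type": "Polygon",
                "coordinates": [[
                    [151.2090, -33.8700],
                    [151.2110, -33.8700],
                    [151.2110, -33.8680],
                    [151.2090, -33.8680],
                    [151.2090, -33.8700],  # ring closed by repeating the first point
                ]],
            },
        }
    ],
}

with open("my_parcels.geojson", "w") as f:
    json.dump(aoi, f)
```

The resulting `my_parcels.geojson` can be passed directly as `aoi_file`.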
## Output Data
The exporter writes results to `{output_dir}/final/` with the following structure:
| File | Description |
|------|-------------|
| `{stem}_aoi_rollup.csv` or `.parquet` | One row per AOI with summary statistics (counts, areas, confidences) |
| `{stem}_{class}.csv` | Per-class attribute tables (e.g. `roof.csv`, `building.csv`) |
| `{stem}_{class}_features.parquet` | Per-class GeoParquet with feature geometries (when `save_features=True`) |
| `{stem}_features.parquet` | All features combined as GeoParquet (when `save_features=True`) |
| `{stem}_buildings.csv` or `.parquet` | Per-building detail rows (when `save_buildings=True`) |
| `{stem}_feature_api_errors.csv` | AOIs where the Feature API returned errors |
| `{stem}_roof_age_errors.csv` | AOIs where the Roof Age API returned errors (US only) |
| `{stem}_latency_stats.csv` | API timing diagnostics |
| `export_config.json` | Full record of export parameters and nmaipy version |
| `README.md` | Auto-generated data dictionary describing all output files and columns |
A `{output_dir}/chunks/` directory holds intermediate per-chunk results during processing, enabling resume after interruption.
For detailed column-level documentation, refer to the auto-generated `README.md` inside each export's `final/` directory.
## S3 Output Support
nmaipy can write output directly to Amazon S3. Pass an `s3://` URI as the output directory:
```python
exporter = NearmapAIExporter(
aoi_file='properties.geojson',
output_dir='s3://my-bucket/nmaipy-results/',
country='us',
packs=['building'],
)
exporter.run()
```
AWS credentials are resolved automatically from environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`) or `~/.aws/credentials`. No additional nmaipy configuration is needed.
The `cache_dir` parameter also accepts S3 URIs for cloud-native workflows, though local caching is faster for iterative development.
## Examples
**Quick start** — verify your setup with 10 US properties covering buildings, features, and roof age:
```bash
export API_KEY=your_api_key_here
python run_10_test.py
```
**More examples** — see `examples.py` for complete, working examples covering:
- Basic building/vegetation extraction
- Damage assessment (Hurricane Beryl)
- Urban planning (multi-pack)
- Vegetation analysis
- Pool detection
- Large area gridding
- Time series extraction
- Unified roof age + feature export
Example AOI files are provided in `data/examples/`:
- `sydney_parcels.geojson` — Sydney CBD, Australia
- `us_parcels.geojson` — Austin, Texas, USA
- `large_area.geojson` — 2km x 2km Melbourne area (triggers auto-gridding)
## Working with Large Areas
nmaipy automatically handles large areas by:
- Splitting them into manageable grid cells
- Processing in parallel
- Combining results seamlessly
For areas larger than 1 sq km, the library will automatically use gridding:
```python
exporter = NearmapAIExporter(
aoi_file='large_region.geojson',
output_dir='large_area_results',
country='us',
packs=['building'],
aoi_grid_inexact=True, # Allow mixing survey dates if needed
processes=16 # Use more processes for speed
)
```
## Performance Tips
1. **Use parallel processing**: Set `processes` to the number of CPU cores available.
2. **Tune chunk size**: `chunk_size` controls how many AOIs are grouped into each parallel work unit (default: 500). Smaller values give finer-grained parallelism and cheaper resume after interruption; larger values reduce overhead.
3. **Cache API responses**: Use `cache_dir` to persist API responses to a directory. On subsequent runs with different parameters (e.g. different packs), cached responses are reused without re-fetching. By default, cache is stored in `{output_dir}/cache/`.
4. **Filter by date**: Use `since` and `until` to restrict to specific time periods, reducing data volume.
## Command Line Interface
### Feature API Export
```bash
python nmaipy/exporter.py \
--aoi-file "parcels.geojson" \
--output-dir "results" \
--country us \
--packs building vegetation \
--save-features \
--roof-age
```
Key options:
- `--packs`: AI packs to extract (building, vegetation, surfaces, pools, solar, damage, etc.)
- `--roof-age`: Include Roof Age API data (US only)
- `--save-features`: Save per-class GeoParquet files with feature geometries
- `--save-buildings`: Save per-building detail rows
- `--rollup-format`: Output format for rollup file (`csv` or `parquet`, default: `csv`)
- `--cache-dir`: Directory for caching API responses
- `--no-cache`: Disable caching entirely
- `--primary-decision`: Feature selection method (`largest_intersection`, `nearest`, `optimal`)
- `--since` / `--until`: Filter by survey date range
- `--max-retries`: Maximum API retry attempts (default: 10)
Run `python nmaipy/exporter.py --help` for all options.
### Standalone Roof Age Export (US Only)
```bash
python -m nmaipy.roof_age_exporter \
--aoi-file "us_properties.geojson" \
--output-dir "roof_age_results" \
--country us \
--processes 4 \
--output-format both
```
Run `python -m nmaipy.roof_age_exporter --help` for all options.
## Getting Help
- **Examples**: See `examples.py` for common use cases
- **Installation**: See `INSTALL.md` for detailed installation options
- **Notebooks**: Check the `notebooks/` directory for Jupyter notebook tutorials
- **Issues**: Report bugs or request features on [GitHub](https://github.com/nearmap/nmaipy)
## Requirements
- Python 3.12+
- Nearmap API key (contact Nearmap for access)
- 4GB+ RAM recommended for large extractions
- AWS credentials for S3 output (optional)
## Advanced: Building a Conda Package
For system administrators who want to create a local conda package:
```bash
conda build conda.recipe
conda install --use-local nmaipy
```
This will create a conda package that can be shared internally or uploaded to a conda channel.
## License
See LICENSE file for details.
| text/markdown | null | Nearmap AI Systems <ai.systems@nearmap.com> | null | Nearmap AI Systems <ai.systems@nearmap.com> | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| nearmap, ai, geospatial, gis, aerial, imagery, buildings, vegetation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: GIS",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"fsspec",
"geopandas>=1.1.0",
"numpy",
"pandas",
"psutil",
"pyarrow",
"pyproj",
"python-dotenv",
"requests",
"rtree",
"s3fs",
"shapely",
"stringcase",
"tqdm",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"pytest; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\"",
"ipykernel; extra == \"notebooks\"",
"matplotlib; extra == \"notebooks\""
] | [] | [] | [] | [
"Homepage, https://github.com/nearmap/nmaipy",
"Documentation, https://github.com/nearmap/nmaipy#readme",
"Repository, https://github.com/nearmap/nmaipy.git",
"Bug Tracker, https://github.com/nearmap/nmaipy/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T07:14:23.002346 | nmaipy-4.2.1a2.tar.gz | 147,279 | 1d/fc/1a48380627738abe7b65588d6a997c71474aae0476c0afa778b1c68ff604/nmaipy-4.2.1a2.tar.gz | source | sdist | null | false | 39487c5cc05f22c7ba6890cc40589220 | 0e0b7cd7a3123f37db45d87d733a928d936f4bbefca2367289d2dcd63f0d8aee | 1dfc1a48380627738abe7b65588d6a997c71474aae0476c0afa778b1c68ff604 | null | [
"LICENSE"
] | 220 |
2.4 | lifecycle-allocation | 0.1.0 | Lifecycle portfolio allocation framework inspired by Choi et al. | # lifecycle-allocation
A Python library implementing a practical lifecycle portfolio choice framework inspired by [Choi et al.](https://www.nber.org/papers/w34166). It combines human capital analysis with visual analytics to produce data-driven stock/bond allocation recommendations.
[](https://github.com/engineerinvestor/lifecycle-allocation/actions/workflows/ci.yml)
[](https://pypi.org/project/lifecycle-allocation/)
[](https://pypi.org/project/lifecycle-allocation/)
[](https://opensource.org/licenses/MIT)
[](https://engineerinvestor.github.io/lifecycle-allocation)
## Why This Matters
Most portfolio allocation "rules" are single-variable heuristics: 60/40, 100-minus-age, target-date funds. They ignore the biggest asset most people own -- their future earning power. A 30-year-old software engineer with $100k in savings and 35 years of income ahead is in a fundamentally different position than a 30-year-old retiree with the same $100k.
This library takes a **balance-sheet** view of your finances. Your investable portfolio is only part of your total wealth. Future earnings (human capital) act like a bond-like asset, and accounting for them changes how much stock risk you should take. The result is a theoretically grounded, personalized allocation that evolves naturally over your lifecycle -- no arbitrary rules required.
## Features
- **Core allocation engine** -- Merton-style optimal risky share adjusted for human capital
- **4 income models** -- flat, constant-growth, age-profile, and CSV-based
- **Strategy comparison** -- benchmark against 60/40, 100-minus-age, and target-date funds
- **Visualization suite** -- balance sheet waterfall, glide paths, sensitivity tornado, heatmaps
- **CLI interface** -- generate full reports from YAML/JSON profiles
- **YAML/JSON profiles** -- declarative investor configuration
- **Leverage support** -- two-tier borrowing rate model with configurable constraints
- **Mortality adjustment** -- survival probability discounting for human capital
## Install
```bash
pip install lifecycle-allocation
```
For development:
```bash
git clone https://github.com/engineerinvestor/lifecycle-allocation.git
cd lifecycle-allocation
pip install -e ".[dev]"
```
Requires Python 3.10+.
## Quick Start (Python)
```python
from lifecycle_allocation import (
InvestorProfile,
MarketAssumptions,
recommended_stock_share,
compare_strategies,
)
profile = InvestorProfile(
age=30,
retirement_age=67,
investable_wealth=100_000,
after_tax_income=70_000,
risk_tolerance=5,
)
market = MarketAssumptions(mu=0.05, r=0.02, sigma=0.18)
result = recommended_stock_share(profile, market)
print(f"Recommended stock allocation: {result.alpha_recommended:.1%}")
print(f"Human capital: ${result.human_capital:,.0f}")
print(result.explain)
# Compare against heuristic strategies
df = compare_strategies(profile, market)
print(df.to_string(index=False))
```
## Quick Start (CLI)
```bash
lifecycle-allocation alloc \
--profile examples/profiles/young_saver.yaml \
--out ./output \
--report
```
This produces `allocation.json`, `summary.md`, and charts in `output/charts/`.
## How It Works
1. Compute a **baseline risky share** (Merton-style): `alpha* = (mu - r) / (gamma * sigma^2)`
2. Estimate **human capital** H as the present value of future earnings + retirement benefits, discounted by survival probability and a term structure
3. Adjust: `alpha = alpha* × (1 + H/W)`, clamped to [0, 1] (or [0, L_max] with leverage)
Young workers with high H/W ratios get higher equity allocations. As you age and accumulate financial wealth, H shrinks relative to W and the allocation naturally declines -- producing a lifecycle glide path from first principles rather than arbitrary rules.
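The three steps above can be sketched in a few lines of plain Python. This is an illustrative re-derivation under simplified assumptions (level income, a single flat discount rate, no mortality adjustment or retirement benefits), not the library's actual implementation; the function name is hypothetical:

```python
def recommended_equity_share(mu, r, sigma, gamma,
                             income, years_to_retirement, wealth,
                             discount_rate=0.02):
    """Merton baseline adjusted for human capital, clamped to [0, 1]."""
    # Step 1: baseline risky share
    alpha_star = (mu - r) / (gamma * sigma ** 2)
    # Step 2: human capital H as PV of a level income annuity (simplified)
    h = sum(income / (1 + discount_rate) ** t
            for t in range(1, years_to_retirement + 1))
    # Step 3: scale by the total-wealth ratio and clamp (no leverage)
    alpha = alpha_star * (1 + h / wealth)
    return max(0.0, min(1.0, alpha))

# Young saver: a large H/W ratio pushes the clamped allocation to 100% equity
share = recommended_equity_share(mu=0.05, r=0.02, sigma=0.18, gamma=5,
                                 income=70_000, years_to_retirement=37,
                                 wealth=100_000)
```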
## Example Output
| Archetype | Age | Income | Wealth | H/W Ratio | Recommended Equity |
|---|---|---|---|---|---|
| Young saver | 30 | $70k | $100k | ~15x | ~90%+ |
| Mid-career | 45 | $120k | $500k | ~4x | ~65% |
| Near-retirement | 60 | $90k | $1.2M | ~0.5x | ~40% |
*Values depend on market assumptions and risk tolerance. These are illustrative.*
## Tutorial
Explore the interactive tutorial notebook for a guided walkthrough:
[](https://colab.research.google.com/github/engineerinvestor/lifecycle-allocation/blob/main/examples/notebooks/tutorial.ipynb)
Or run locally:
```bash
jupyter notebook examples/notebooks/tutorial.ipynb
```
## Documentation
Full documentation is available at [engineerinvestor.github.io/lifecycle-allocation](https://engineerinvestor.github.io/lifecycle-allocation).
## Roadmap
| Version | Milestone |
|---|---|
| **v0.1** | Core allocation engine, CLI, YAML profiles, strategy comparison, charts |
| **v0.5** | Monte Carlo simulation, CRRA utility evaluation, Social Security modeling |
| **v1.0** | Full documentation, tax-aware optimization, couples modeling |
## Contributing
Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, code style, and PR guidelines.
## Citation
If you use this library in academic work, please cite both the underlying research and the software:
```bibtex
@techreport{choi2025practical,
title={Practical Finance: An Approximate Solution to Lifecycle Portfolio Choice},
author={Choi, James J. and Liu, Canyao and Liu, Pengcheng},
year={2025},
institution={National Bureau of Economic Research},
type={Working Paper},
number={34166},
doi={10.3386/w34166},
url={https://www.nber.org/papers/w34166}
}
@software{engineerinvestor2025lifecycle,
title={lifecycle-allocation: A Lifecycle Portfolio Choice Framework},
author={{Engineer Investor}},
year={2025},
url={https://github.com/engineerinvestor/lifecycle-allocation},
version={0.1.0},
license={MIT}
}
```
## Disclaimer
**This library is for education and research purposes only. It is not investment advice.** The authors are not financial advisors. Consult a qualified professional before making investment decisions. Past performance and model outputs do not guarantee future results.
## License
[MIT](LICENSE)
| text/markdown | null | Engineer Investor <egr.investor@gmail.com> | null | null | null | finance, portfolio, allocation, lifecycle, human-capital, investment | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Intended Audience :: Financial and Insurance Industry",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24",
"pandas>=2.0",
"matplotlib>=3.7",
"pyyaml>=6.0",
"click>=8.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"types-PyYAML>=6.0; extra == \"dev\"",
"pandas-stubs>=2.0; extra == \"dev\"",
"mkdocs>=1.5; extra == \"docs\"",
"mkdocs-material>=9.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/engineerinvestor/lifecycle-allocation",
"Repository, https://github.com/engineerinvestor/lifecycle-allocation",
"Documentation, https://engineerinvestor.github.io/lifecycle-allocation",
"Issues, https://github.com/engineerinvestor/lifecycle-allocation/issues",
"Changelog, https://github.com/engineerinvestor/lifecycle-allocation/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T07:14:00.922681 | lifecycle_allocation-0.1.0.tar.gz | 24,445 | 52/09/bdc439e5aeb6b5159011e9609272cdc1e6dbb6e914fcfadde9e6bffe463d/lifecycle_allocation-0.1.0.tar.gz | source | sdist | null | false | e5a573f6016afd79ae68fc68aed2f1a8 | 124202a93a5a7641f9c7b6141ba694e407df26a15665212e17cf63cde16bc0f8 | 5209bdc439e5aeb6b5159011e9609272cdc1e6dbb6e914fcfadde9e6bffe463d | MIT | [
"LICENSE"
] | 256 |
2.4 | netbox-interface-name-rules | 1.0.0 | NetBox plugin for automatic interface renaming when modules are installed | # NetBox Interface Name Rules Plugin
Automatic interface renaming when modules are installed into NetBox device bays.
## What it does
When a module (transceiver, line card, converter) is installed into a module bay,
NetBox creates interfaces using position-based naming from the module type template.
This often produces incorrect names — e.g., `Interface 1` instead of `et-0/0/4`.
This plugin hooks into Django's `post_save` signal on the `Module` model to
automatically apply renaming rules based on configurable templates.
## Features
- **Signal-driven** — rules fire automatically on module install, no manual step needed
- **Template variables** — `{slot}`, `{bay_position}`, `{bay_position_num}`, `{base}`, `{channel}`, etc.
- **Arithmetic expressions** — `{8 + ({parent_bay_position} - 1) * 2 + {sfp_slot}}`
- **Breakout support** — create multiple channel interfaces from a single port (e.g., QSFP+ 4x10G)
- **Scoping** — rules can be scoped to specific device types, parent module types, or be universal
- **Bulk import/export** — YAML-based rule management via the UI or API
## Supported scenarios
| Scenario | Example |
|----------|---------|
| Converter offset | GLC-T in CVR-X2-SFP → `GigabitEthernet3/10` |
| Breakout channels | QSFP-4X10G-LR → `et-0/0/4:0` through `et-0/0/4:3` |
| Platform naming | QSFP-100G-LR4 on ACX7024 → `et-0/0/{bay_position}` |
| UfiSpace breakout | QSFP-100G on S9610 → `swp{bay_position_num}s{channel}` |
## Installation
```bash
pip install netbox-interface-name-rules
```
Add to `configuration.py`:
```python
PLUGINS = ['netbox_interface_name_rules']
```
## Compatibility
- NetBox ≥ 4.2.0
- Python ≥ 3.12
## License
Apache 2.0
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12.0 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:12:14.050942 | netbox_interface_name_rules-1.0.0.tar.gz | 10,765 | ea/b0/b205c493487a23507105262cfd4e30ec6e0a12ca498aa877cc18e223a866/netbox_interface_name_rules-1.0.0.tar.gz | source | sdist | null | false | 82cc90cc00af4460a58d26cbba100cc1 | 687fd28426e89aaaba83cad80c62602ac09a3cc776fa11e9acca6c26eeb626bd | eab0b205c493487a23507105262cfd4e30ec6e0a12ca498aa877cc18e223a866 | Apache-2.0 | [
"LICENSE"
] | 253 |
2.4 | isagellm-control-plane-benchmark | 0.2.0.2 | Control Plane scheduling benchmark for sageLLM (formerly isage-control-plane-benchmark) | # sageLLM Control Plane Benchmark
This module provides comprehensive benchmarking tools for evaluating different scheduling policies
in sageLLM's Control Plane. It supports both **LLM-only** and **Hybrid (LLM + Embedding)**
workloads.
## Overview
The benchmark measures key performance metrics across various scheduling strategies:
- **Throughput**: Requests per second and tokens per second
- **Latency**: End-to-end latency, Time to First Token (TTFT), Time Between Tokens (TBT)
- **SLO Compliance**: Percentage of requests meeting their SLO deadlines
- **Error Rates**: Failed requests and timeout rates
- **Resource Utilization**: GPU memory and compute utilization (optional)
## Architecture
```
┌─────────────────────────────────────────────┐
│ Control Plane │
┌─────────────┐ HTTP │ ┌─────────────────────────────────────┐ │
│ Benchmark │ ───────────────► │ │ Scheduler (Policy: X) │ │
│ Client │ │ │ ┌───────────┬───────────────────┐ │ │
│ │ │ │ │ LLM Queue │ Embedding Queue │ │ │
└─────────────┘ │ │ └───────────┴───────────────────┘ │ │
│ │ └─────────────────────────────────────┘ │
│ └──────────────────┬──────────────────────────┘
│ │
│ ┌──────────────────┴──────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌──────────────┐ ┌──────────────┐
│ Metrics │ │ vLLM Inst 1 │ │ Embedding │
│ Collector │ │ (Qwen-7B) │ │ Server │
└─────────────┘ ├──────────────┤ │ (BGE-M3) │
│ vLLM Inst 2 │ └──────────────┘
│ (Llama-13B) │
└──────────────┘
```
## Quick Start
### Installation
```bash
# Install from PyPI
pip install isagellm-control-plane-benchmark
# Or for development:
pip install -e "packages/sage-benchmark[dev]"
# CLI dependencies
pip install typer aiohttp pyyaml
# Visualization dependencies (optional)
pip install matplotlib jinja2
```
### Running Your First Benchmark
```bash
# 1. Run a simple LLM benchmark
sage-cp-bench run --mode llm --policy fifo --requests 100 --rate 10
# 2. Run a hybrid (LLM + Embedding) benchmark
sage-cp-bench run --mode hybrid --policy hybrid_slo --llm-ratio 0.7 --requests 100
# 3. Compare multiple policies
sage-cp-bench compare --mode llm --policies fifo,priority,slo_aware --requests 500
# 4. Run a predefined experiment
sage-cp-bench experiment --name throughput --policies fifo,priority
```
## CLI Reference
### Commands Overview
| Command | Description |
| ------------ | -------------------------------------------- |
| `run` | Run benchmark for a single scheduling policy |
| `compare` | Compare multiple scheduling policies |
| `sweep` | Sweep across multiple request rates |
| `experiment` | Run predefined experiments |
| `visualize` | Generate charts from existing results |
| `config` | Show/save example configuration |
| `validate` | Validate a configuration file |
### `run` Command
```bash
sage-cp-bench run [OPTIONS]
Options:
--mode -m [llm|hybrid] Benchmark mode (default: llm)
--control-plane -c TEXT Control Plane URL (default: http://localhost:8080)
--policy -p TEXT Scheduling policy (default: fifo)
--requests -n INTEGER Number of requests (default: 100)
--rate -r FLOAT Request rate req/s (default: 10.0)
--llm-ratio FLOAT LLM ratio for hybrid mode (default: 0.7)
--output -o TEXT Output directory (default: ./benchmark_results)
--warmup -w INTEGER Warmup requests (default: 10)
--timeout -t FLOAT Request timeout seconds (default: 60.0)
--no-visualize Disable auto visualization
--config TEXT Load config from YAML/JSON file
--quiet -q Suppress progress output
```
**Examples:**
```bash
# LLM-only benchmark
sage-cp-bench run --mode llm --policy fifo --requests 100 --rate 10
# Hybrid benchmark with 70% LLM, 30% Embedding
sage-cp-bench run --mode hybrid --policy hybrid_slo --llm-ratio 0.7 --requests 100
# Load configuration from file
sage-cp-bench run --config benchmark_config.yaml
```
### `compare` Command
```bash
sage-cp-bench compare [OPTIONS]
Options:
--mode -m [llm|hybrid] Benchmark mode (default: llm)
--policies -p TEXT Comma-separated policy list (default: fifo,priority,slo_aware)
--requests -n INTEGER Requests per policy (default: 100)
--rate -r FLOAT Request rate (default: 10.0)
--llm-ratio FLOAT LLM ratio for hybrid mode (default: 0.7)
--output -o TEXT Output directory
--no-visualize Disable comparison charts
```
**Examples:**
```bash
# Compare LLM scheduling policies
sage-cp-bench compare --mode llm --policies fifo,priority,slo_aware
# Compare hybrid scheduling policies
sage-cp-bench compare --mode hybrid --policies fifo,hybrid_slo --llm-ratio 0.7
```
### `sweep` Command
```bash
sage-cp-bench sweep [OPTIONS]
Options:
--mode -m [llm|hybrid] Benchmark mode (default: llm)
--policy -p TEXT Policy to test (default: fifo)
--rates TEXT Comma-separated rates (default: 10,50,100,200)
--requests -n INTEGER Requests per rate (default: 100)
--output -o TEXT Output directory
```
**Examples:**
```bash
# Sweep request rates for LLM benchmark
sage-cp-bench sweep --mode llm --policy fifo --rates 10,50,100,200
# Sweep rates for hybrid benchmark
sage-cp-bench sweep --mode hybrid --policy hybrid_slo --rates 10,50,100
```
### `experiment` Command
```bash
sage-cp-bench experiment [OPTIONS]
Options:
--name -e TEXT Experiment: throughput|latency|slo|mixed_ratio [required]
--control-plane -c TEXT Control Plane URL
--requests -n INTEGER Requests per test (default: 500)
--rate -r INTEGER Request rate (default: 100)
--llm-ratio FLOAT LLM ratio (default: 0.5)
--policies -p TEXT Policies to test (default: fifo,priority,slo_aware)
--output -o TEXT Output directory
--no-visualize Skip visualization
```
**Available Experiments:**
| Experiment | Description |
| ------------- | --------------------------------------------- |
| `throughput` | Sweep request rates to find max throughput |
| `latency` | Analyze latency distribution under fixed load |
| `slo` | Compare SLO compliance across policies |
| `mixed_ratio` | Test different LLM/Embedding ratios |
**Examples:**
```bash
# Run throughput experiment
sage-cp-bench experiment --name throughput --policies fifo,priority
# Run latency analysis
sage-cp-bench experiment --name latency --rate 100 --requests 1000
# Run SLO compliance comparison
sage-cp-bench experiment --name slo --policies fifo,slo_aware
# Run mixed ratio sweep (hybrid only)
sage-cp-bench experiment --name mixed_ratio --rate 100
```
### `visualize` Command
```bash
sage-cp-bench visualize [OPTIONS]
Options:
--input -i TEXT Results JSON file [required]
--output -o TEXT Output directory (default: ./visualizations)
--format -f TEXT Output format: charts|html|markdown|all (default: all)
```
**Examples:**
```bash
# Generate all visualizations
sage-cp-bench visualize --input results.json --output ./charts
# Generate only HTML report
sage-cp-bench visualize --input results.json --format html
```
### `config` and `validate` Commands
```bash
# Show example LLM configuration
sage-cp-bench config --mode llm
# Show and save hybrid configuration
sage-cp-bench config --mode hybrid --output config.yaml
# Validate configuration file
sage-cp-bench validate config.json --mode llm
sage-cp-bench validate config.yaml --mode hybrid
```
## Python API
### LLM-only Benchmark
```python
import asyncio
from sage.benchmark_control_plane import (
BenchmarkConfig,
BenchmarkRunner,
BenchmarkReporter,
)
# Configure benchmark
config = BenchmarkConfig(
control_plane_url="http://localhost:8080",
policies=["fifo", "priority", "slo_aware"],
num_requests=1000,
request_rate=100.0,
)
# Run benchmark
runner = BenchmarkRunner(config)
result = asyncio.run(runner.run())
# Generate report
reporter = BenchmarkReporter(result)
reporter.print_summary()
reporter.save_all("./benchmark_results")
```
### Hybrid Benchmark (LLM + Embedding)
```python
import asyncio
from sage.benchmark_control_plane.hybrid_scheduler import (
HybridBenchmarkConfig,
HybridBenchmarkRunner,
HybridBenchmarkReporter,
)
# Configure hybrid benchmark
config = HybridBenchmarkConfig(
control_plane_url="http://localhost:8080",
num_requests=1000,
request_rate=100.0,
llm_ratio=0.7, # 70% LLM, 30% Embedding
embedding_ratio=0.3,
policies=["fifo", "hybrid_slo"],
)
# Run benchmark
runner = HybridBenchmarkRunner(config)
result = asyncio.run(runner.run())
# Generate report
reporter = HybridBenchmarkReporter(result)
reporter.print_summary()
reporter.save_json("./results/hybrid_benchmark.json")
```
### Running Predefined Experiments
```python
import asyncio
from sage.benchmark_control_plane.experiments import (
ThroughputExperiment,
LatencyExperiment,
SLOComplianceExperiment,
MixedRatioExperiment,
)
from sage.benchmark_control_plane.common.base_config import SchedulingPolicy
# Throughput experiment
exp = ThroughputExperiment(
name="throughput_sweep",
control_plane_url="http://localhost:8080",
policies=[SchedulingPolicy.FIFO, SchedulingPolicy.PRIORITY],
request_rates=[50, 100, 200, 500],
)
result = asyncio.run(exp.run_full()) # Includes visualization
print(f"Best policy: {result.summary['best_policy']}")
# Latency experiment
exp = LatencyExperiment(
name="latency_analysis",
control_plane_url="http://localhost:8080",
request_rate=100,
num_requests=1000,
)
result = asyncio.run(exp.run_full())
# Mixed ratio experiment (hybrid)
exp = MixedRatioExperiment(
name="ratio_sweep",
control_plane_url="http://localhost:8080",
llm_ratios=[0.0, 0.25, 0.5, 0.75, 1.0],
)
result = asyncio.run(exp.run_full())
```
### Generating Visualizations
```python
from pathlib import Path
from sage.benchmark_control_plane.visualization import (
BenchmarkCharts,
ReportGenerator,
)
# Generate charts
charts = BenchmarkCharts(output_dir=Path("./charts"))
charts.plot_throughput_comparison(policy_metrics)
charts.plot_latency_distribution(latency_data)
charts.plot_slo_compliance(slo_data)
# Generate reports
report_gen = ReportGenerator(result=benchmark_result, charts_dir=Path("./charts"))
report_gen.generate_html_report(Path("./report.html"))
report_gen.generate_markdown_report(Path("./report.md"))
```
## Supported Scheduling Policies
| Policy | Mode | Description |
| ---------------- | ------ | ----------------------------------------------- |
| `fifo` | Both | First-In-First-Out scheduling |
| `priority` | Both | Priority-based scheduling |
| `slo_aware` | Both | SLO-deadline aware scheduling |
| `cost_optimized` | LLM | Cost-optimized scheduling |
| `adaptive` | LLM | Adaptive scheduling based on system state |
| `aegaeon` | LLM | Advanced scheduling with multiple optimizations |
| `hybrid` | Hybrid | Hybrid LLM/Embedding scheduling |
| `hybrid_slo` | Hybrid | Hybrid with SLO awareness |
## Configuration Options
### LLM Benchmark Configuration
| Option | Description | Default |
| ----------------------- | ---------------------------------- | ----------------------------------- |
| `control_plane_url` | Control Plane HTTP address | `http://localhost:8080` |
| `policies` | List of policies to benchmark | `["fifo", "priority", "slo_aware"]` |
| `num_requests` | Total requests per policy | `100` |
| `request_rate` | Target request rate (req/s) | `10.0` |
| `arrival_pattern` | Request arrival pattern | `poisson` |
| `model_distribution` | Request distribution across models | `{"default": 1.0}` |
| `priority_distribution` | Request priority distribution | `{"NORMAL": 1.0}` |
| `timeout_seconds` | Request timeout | `60.0` |
| `warmup_requests` | Warmup requests before measurement | `10` |
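The default `arrival_pattern` of `poisson` means inter-arrival gaps are exponentially distributed around the target rate rather than evenly spaced. A minimal sketch of generating such a send schedule (an illustration of the concept, not the benchmark's actual workload generator):

```python
import random

def poisson_arrival_times(rate: float, num_requests: int, seed: int = 42) -> list[float]:
    """Generate Poisson-process send times: exponential gaps with mean 1/rate."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(num_requests):
        t += rng.expovariate(rate)  # mean gap = 1/rate seconds
        times.append(t)
    return times

schedule = poisson_arrival_times(rate=10.0, num_requests=100)
print(f"{len(schedule)} requests over ~{schedule[-1]:.1f}s (target mean gap 0.1s)")
```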
### Hybrid Benchmark Configuration
| Option | Description | Default |
| --------------------------- | --------------------------------- | ------------- |
| `llm_ratio` | Ratio of LLM requests (0.0-1.0) | `0.5` |
| `embedding_ratio` | Ratio of Embedding requests | `0.5` |
| `embedding_model` | Embedding model name | `BAAI/bge-m3` |
| `embedding_batch_size` | Batch size for embedding requests | `32` |
| `llm_slo_deadline_ms` | SLO deadline for LLM requests | `5000` |
| `embedding_slo_deadline_ms` | SLO deadline for embedding | `500` |
## Output Formats
### Terminal Output
```
============================================================
sageLLM Hybrid Scheduling Benchmark Report
============================================================
Config: 1000 requests @ 100 req/s | LLM: 70% | Embedding: 30%
------------------------------------------------------------
| Policy | Throughput | LLM Avg | Emb Avg | LLM SLO | Emb SLO | Errors |
|------------|------------|---------|---------|---------|---------|--------|
| fifo | 95.2 req/s | 156 ms | 23 ms | 71.2% | 92.1% | 0.3% |
| hybrid_slo | 98.5 req/s | 132 ms | 18 ms | 93.7% | 98.2% | 0.1% |
Best Throughput: hybrid_slo (98.5 req/s)
Best LLM SLO: hybrid_slo (93.7%)
Best Embedding SLO: hybrid_slo (98.2%)
```
### JSON Report
Full results saved to `report_<timestamp>.json` including:
- Configuration summary
- Per-policy metrics
- Raw request results
- Summary statistics
### HTML Report
Interactive HTML report with embedded charts and tables.
### Markdown Report
Markdown format suitable for documentation and GitHub.
## Module Structure
```
benchmark_control_plane/
├── __init__.py # Module exports (backward compatible)
├── cli.py # CLI interface (sage-cp-bench)
├── config.py # Legacy config (→ llm_scheduler)
├── workload.py # Legacy workload (→ llm_scheduler)
├── client.py # Legacy client (→ llm_scheduler)
├── metrics.py # Legacy metrics (→ llm_scheduler)
├── runner.py # Legacy runner (→ llm_scheduler)
├── reporter.py # Legacy reporter (→ llm_scheduler)
├── README.md # This file
│
├── common/ # Shared components
│ ├── __init__.py
│ ├── base_config.py # Base configuration classes
│ ├── base_metrics.py # Base metrics classes
│ ├── gpu_monitor.py # GPU resource monitoring
│ └── strategy_adapter.py # Scheduling strategy adapter
│
├── llm_scheduler/ # LLM-only benchmark
│ ├── __init__.py
│ ├── config.py # LLM benchmark config
│ ├── workload.py # LLM workload generation
│ ├── client.py # LLM HTTP client
│ ├── metrics.py # LLM metrics collection
│ ├── runner.py # LLM benchmark runner
│ └── reporter.py # LLM result reporting
│
├── hybrid_scheduler/ # Hybrid LLM+Embedding benchmark
│ ├── __init__.py
│ ├── config.py # Hybrid benchmark config
│ ├── workload.py # Hybrid workload generation
│ ├── client.py # Hybrid HTTP client
│ ├── metrics.py # Hybrid metrics collection
│ ├── runner.py # Hybrid benchmark runner
│ └── reporter.py # Hybrid result reporting
│
├── visualization/ # Charts and reports
│ ├── __init__.py
│ ├── charts.py # Matplotlib chart generation
│ ├── report_generator.py # HTML/Markdown reports
│ └── templates/ # Report templates
│ ├── benchmark_report.html
│ └── comparison_report.html
│
└── experiments/ # Predefined experiments
├── __init__.py
├── base_experiment.py # Experiment base class
├── throughput_exp.py # Throughput sweep
├── latency_exp.py # Latency analysis
├── slo_compliance_exp.py # SLO compliance
└── mixed_ratio_exp.py # LLM/Embedding ratio sweep
```
## Related Documentation
- [DATA_PATHS.md](./DATA_PATHS.md) - Data directory structure and formats
- [VISUALIZATION.md](./VISUALIZATION.md) - Chart types and report formats
- [examples/run_llm_benchmark.py](../../../../examples/benchmark/run_llm_benchmark.py) - LLM
benchmark example
- [examples/run_hybrid_benchmark.py](../../../../examples/benchmark/run_hybrid_benchmark.py) -
Hybrid benchmark example
## Control Plane Integration
### Required API Endpoints
| Endpoint | Method | Description |
| ---------------------- | ------ | ------------------------------------ |
| `/health` | GET | Health check |
| `/v1/chat/completions` | POST | OpenAI-compatible LLM endpoint |
| `/v1/embeddings` | POST | OpenAI-compatible embedding endpoint |
| `/admin/set_policy` | POST | Switch scheduling policy |
| `/admin/metrics` | GET | Get Control Plane metrics |
### Request Headers
- `X-Request-ID`: Unique request identifier
- `X-Request-Priority`: Request priority (HIGH, NORMAL, LOW)
- `X-SLO-Deadline-Ms`: SLO deadline in milliseconds
- `X-Request-Type`: Request type (llm_chat, llm_generate, embedding)
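Putting the headers together for a single benchmark request might look like this (a sketch: the header names come from the table above, the helper and its defaults are illustrative):

```python
import uuid

def build_headers(priority: str = "NORMAL",
                  slo_deadline_ms: int = 5000,
                  request_type: str = "llm_chat") -> dict[str, str]:
    """Assemble the per-request headers the Control Plane expects."""
    return {
        "X-Request-ID": str(uuid.uuid4()),
        "X-Request-Priority": priority,        # HIGH, NORMAL, LOW
        "X-SLO-Deadline-Ms": str(slo_deadline_ms),
        "X-Request-Type": request_type,        # llm_chat, llm_generate, embedding
    }

headers = build_headers(priority="HIGH", slo_deadline_ms=500, request_type="embedding")
print(headers["X-Request-Type"])  # embedding
```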
## Troubleshooting
### Common Issues
1. **Connection refused**: Ensure Control Plane is running at the specified URL
1. **Timeout errors**: Increase `--timeout` or reduce `--rate`
1. **No visualization**: Install matplotlib: `pip install matplotlib`
1. **YAML config error**: Install pyyaml: `pip install pyyaml`
### Debug Mode
```bash
# Enable verbose logging
export SAGE_LOG_LEVEL=DEBUG
sage-cp-bench run --mode llm --policy fifo --requests 10
```
______________________________________________________________________
*Updated: 2025-11-28*
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | null | sage, benchmark, control-plane, scheduling, evaluation, intellistream | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | ==3.10.* | [] | [] | [] | [
"isage-common",
"isage-kernel",
"isage-middleware>=0.2.4.0",
"isage-libs",
"aiohttp>=3.9.0",
"numpy<2.3.0,>=1.26.0",
"pandas>=2.0.0",
"pyyaml>=6.0",
"typer<1.0.0,>=0.15.0",
"rich<14.0.0,>=13.0.0",
"matplotlib>=3.7.0",
"seaborn>=0.12.0",
"jinja2>=3.1.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff==0.14.6; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"types-PyYAML>=6.0.0; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sagellm-control-plane-benchmark",
"Documentation, https://github.com/intellistream/sagellm-control-plane-benchmark#readme",
"Repository, https://github.com/intellistream/sagellm-control-plane-benchmark",
"Issues, https://github.com/intellistream/sagellm-control-plane-benchmark/issues"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T07:11:48.400499 | isagellm_control_plane_benchmark-0.2.0.2.tar.gz | 216,695 | b8/1a/2be70166754ae41663cb19eb36ef6cefa0c2ed34c200deda73c649b647b2/isagellm_control_plane_benchmark-0.2.0.2.tar.gz | source | sdist | null | false | a1bc8ad172ae0d38dd4c37c25bb891f3 | da330fe378f39edbce3b8e7f182082009983bd6b33cf6af62bf75a2395c3db89 | b81a2be70166754ae41663cb19eb36ef6cefa0c2ed34c200deda73c649b647b2 | MIT | [
"LICENSE"
] | 244 |
2.4 | nadeshiko-internal-sdk | 1.4.3.dev1771571466 | Python SDK for Nadeshiko API (internal build - includes internal endpoints) | # Nadeshiko SDK
Python SDK for the [Nadeshiko API](https://nadeshiko.co).
## Install
```bash
pip install nadeshiko-sdk
```
## Use the public SDK
The client sends your API key as `Authorization: Bearer <apiKey>`.
```python
import os
from nadeshiko import Nadeshiko
from nadeshiko.api.search import search
from nadeshiko.models import Error, SearchRequest
client = Nadeshiko(
base_url=os.getenv("NADESHIKO_BASE_URL", "https://api.nadeshiko.co"),
token=os.getenv("NADESHIKO_API_KEY", "your-api-key"),
)
result = search.sync(
client=client,
body=SearchRequest(query="彼女"),
)
if isinstance(result, Error):
print(result.code, result.detail)
else:
for sentence in result.sentences:
print(sentence.segment_info.content_jp)
```
### Error handling
Every response returns either a typed response object or an `Error`. The `Error` object follows the [RFC 7807](https://tools.ietf.org/html/rfc7807) Problem Details format, so you always get a machine-readable `code` and a human-readable `detail`.
```python
import os
from nadeshiko import Nadeshiko
from nadeshiko.api.search import search
from nadeshiko.models import Error, SearchRequest
client = Nadeshiko(
base_url=os.getenv("NADESHIKO_BASE_URL", "https://api.nadeshiko.co"),
token=os.getenv("NADESHIKO_API_KEY", "your-api-key"),
)
result = search.sync(
client=client,
body=SearchRequest(query="食べる"),
)
if isinstance(result, Error):
match result.code:
# 400 — Bad Request
case "VALIDATION_FAILED":
print("Validation failed:", result.detail)
case "INVALID_JSON":
print("Malformed JSON body:", result.detail)
case "INVALID_REQUEST":
print("Invalid request:", result.detail)
# 401 — Unauthorized
case "AUTH_CREDENTIALS_REQUIRED":
print("Missing API key or session token")
case "AUTH_CREDENTIALS_INVALID":
print("API key is invalid")
case "AUTH_CREDENTIALS_EXPIRED":
print("Token has expired, re-authenticate")
case "EMAIL_NOT_VERIFIED":
print("Email verification required")
# 403 — Forbidden
case "ACCESS_DENIED":
print("Access denied")
case "INSUFFICIENT_PERMISSIONS":
print("API key lacks the required scope")
# 429 — Too Many Requests
case "RATE_LIMIT_EXCEEDED":
print("Rate limit hit, slow down")
case "QUOTA_EXCEEDED":
print("Monthly quota exhausted")
# 500 — Internal Server Error
case "INTERNAL_SERVER_EXCEPTION":
print("Server error, trace ID:", result.instance)
else:
for sentence in result.sentences:
print(sentence.segment_info.content_jp, "—", sentence.basic_info.name_anime_en)
```
See [`examples/examples.py`](examples/examples.py) for more usage patterns.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"attrs>=23.0",
"httpx>=0.27.0",
"python-dateutil>=2.8.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:11:30.941223 | nadeshiko_internal_sdk-1.4.3.dev1771571466.tar.gz | 92,343 | 4b/e5/3479ce220660fda1e6adb78fc791798f5d4af722318a48b0f0e7fc2a4603/nadeshiko_internal_sdk-1.4.3.dev1771571466.tar.gz | source | sdist | null | false | 620dbe650a9a346c75bc90ad80c86e6c | 66edec8a35aee33cb2ac90faa765a22d2a1cef0dbc2fabb250c314182353431e | 4be53479ce220660fda1e6adb78fc791798f5d4af722318a48b0f0e7fc2a4603 | MIT | [
"LICENSE"
] | 209 |
2.4 | agentarc | 0.2.0 | Advanced policy enforcement layer for AI agents with 3-stage validation, transaction simulation, and security controls | # AgentARC - Security Layer for AI Blockchain Agents
[](https://github.com/galaar-org/AgentARC)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/galaar-org)
[](https://pypi.org/project/agentarc/)
**Advanced security and policy enforcement layer for AI blockchain agents with multi-stage validation, transaction simulation, and threat detection across DeFi + smart-contract attack surfaces, with LLM-based risk analysis.**
## 🎯 Overview
AgentARC provides a comprehensive security framework for AI agents interacting with blockchain networks. It validates all transactions through multiple security stages before execution, reducing exposure to the broader DeFi threat surface and common smart-contract attack vectors, including:
- 💰 Unauthorized fund transfers and unexpected value movement
- 🔓 Hidden or unlimited token approvals and allowance abuse
- 🧨 Malicious smart contracts and hostile call chains (e.g., delegatecall to untrusted code)
- 🎣 Token traps (honeypots, sell-blocks, malicious fee mechanics)
- 🌊 Liquidity and price-manipulation patterns (context-dependent)
- 🔄 Reentrancy-style execution hazards and unexpected re-calls
- 🧾 Suspicious fund-flow anomalies and downstream interactions that don’t match intent
These are representative examples, not an exhaustive list. AgentARC is designed to expand with more DeFi and smart-contract threat cases over time.
### Key Features
- ✅ **Multi-Stage Validation Pipeline**: Intent → Policies → Simulation → Threat Detection
- ✅ **Comprehensive Policy Engine**: 7 policy types for granular control
- ✅ **Transaction Simulation**: Tenderly integration for detailed execution traces
- ✅ **Threat Detection (Includes Honeypots)**: Automated checks for token traps and other suspicious patterns
- ✅ **Optional LLM-based Security**: AI-powered malicious activity detection and risk scoring
- ✅ **Zero Agent Modifications**: Pure wrapper pattern for seamless integration
- ✅ **Asset Change Tracking**: Monitor balance changes before execution
- ✅ **Multi-Framework Support**: LangChain, OpenAI SDK, and AgentKit
- ✅ **Universal Wallet Support**: Private key, mnemonic, and CDP
- ✅ **Event Streaming**: Real-time validation events for frontend integration
- ✅ **Plugin Architecture**: Extensible validators, simulators, and parsers
---
## 🚀 Quick Start
### Installation
```bash
# Install from PyPI (recommended)
pip install agentarc
# Or install from source
git clone https://github.com/galaar-org/AgentARC.git
cd agentarc
pip install -e .
# Verify installation
agentarc --help
```
### Setup Policy Configuration
```bash
# Generate default policy.yaml
agentarc setup
# Edit policy.yaml to configure your security rules
vim policy.yaml
```
### Integration
#### New API (v0.2.0+) - Universal Wallet
```python
from agentarc import WalletFactory, PolicyWallet
# Create wallet from private key, mnemonic, or CDP
wallet = WalletFactory.from_private_key(
private_key="0x...",
rpc_url="https://sepolia.base.org"
)
# Wrap with policy enforcement
policy_wallet = PolicyWallet(wallet, config_path="policy.yaml")
# All transactions now go through multi-stage validation
result = policy_wallet.send_transaction({"to": "0x...", "value": 1000})
```
#### AgentKit Integration (Legacy API)
```python
from agentarc import PolicyWalletProvider, PolicyEngine
from coinbase_agentkit import AgentKit, CdpEvmWalletProvider
# Create base wallet
base_wallet = CdpEvmWalletProvider(config)
# Wrap with AgentARC (add security layer)
policy_engine = PolicyEngine(
config_path="policy.yaml",
web3_provider=base_wallet,
chain_id=84532 # Base Sepolia
)
policy_wallet = PolicyWalletProvider(base_wallet, policy_engine)
# Use with AgentKit - no other changes needed!
agentkit = AgentKit(wallet_provider=policy_wallet, action_providers=[...])
```
All transactions now go through multi-stage security validation.
---
## 📚 Examples
### 1. Basic Chat Agent (`examples/basic-chat-agent/`)
Production-ready Coinbase AgentKit chatbot with AgentARC and a Next.js frontend.
```bash
cd examples/basic-chat-agent
cp .env.example .env
# Edit .env with your API keys
poetry install
python chatbot.py
```
**Features:**
- ✅ Real CDP wallet integration
- ✅ Interactive chatbot interface
- ✅ Complete policy configuration
- ✅ Next.js frontend with real-time validation events
- ✅ LangGraph server integration
**See:** [Basic Chat Agent Docs](examples/basic-chat-agent/docs/)
### 2. Autonomous Portfolio Agent (`examples/autonomous-portfolio-agent/`)
AI agent that autonomously manages a crypto portfolio with honeypot protection.
```bash
cd examples/autonomous-portfolio-agent
cp .env.example .env
# Edit .env
pip install -r requirements.txt
python autonomous_agent.py
```
**Features:**
- ✅ Autonomous portfolio rebalancing
- ✅ Automatic honeypot detection
- ✅ Multi-layer security (policies + simulation + LLM)
- ✅ Zero manual blacklisting
- ✅ Demonstrates honeypot token blocking in action
**See:** [Autonomous Portfolio Agent README](examples/autonomous-portfolio-agent/README.md) and [Honeypot Demo](examples/autonomous-portfolio-agent/HONEYPOT_DEMO.md)
---
## 🛡️ Security Pipeline
AgentARC validates every transaction through four main stages, plus an automatic honeypot check at Stage 3.5:
### Stage 1: Intent Judge
- Parse transaction calldata
- Identify function calls and parameters
- Detect token transfers and approvals
### Stage 2: Policy Validation
- ETH value limits
- Address allowlist/denylist
- Per-asset spending limits
- Gas limits
- Function allowlists
### Stage 3: Transaction Simulation
- Tenderly simulation with full execution traces
- Asset/balance change tracking
- Gas estimation
- Revert detection
### Stage 3.5: Honeypot Detection
- Simulate token BUY transaction
- Automatically test SELL transaction
- Block if tokens cannot be sold back
- **Zero manual blacklisting needed**
### Stage 4: LLM Security Analysis (Optional)
- AI-powered malicious pattern detection
- Hidden approval detection
- Unusual fund flow analysis
- Risk scoring and recommendations
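Conceptually, the pipeline is a chain of validators where the first rejection blocks the transaction. A simplified sketch under that assumption (stub validators stand in for the real stages; this is not AgentARC's internal API):

```python
from typing import Callable

Tx = dict
Validator = Callable[[Tx], tuple[bool, str]]

def run_pipeline(tx: Tx, stages: list[tuple[str, Validator]]) -> tuple[bool, str]:
    """Run each stage in order; the first failure blocks the transaction."""
    for name, validate in stages:
        ok, reason = validate(tx)
        if not ok:
            return False, f"BLOCKED at {name}: {reason}"
    return True, "ALLOWED"

# Stub stage standing in for intent parsing, policies, simulation, etc.
def eth_value_limit(tx: Tx) -> tuple[bool, str]:
    limit = 10**18  # 1 ETH in wei
    if tx.get("value", 0) > limit:
        return False, "value exceeds 1 ETH"
    return True, ""

ok, verdict = run_pipeline({"to": "0xabc...", "value": 2 * 10**18},
                           [("policy_validation", eth_value_limit)])
print(verdict)  # BLOCKED at policy_validation: value exceeds 1 ETH
```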
---
## 📋 Policy Types
### 1. ETH Value Limit
Prevent large ETH transfers per transaction.
```yaml
policies:
- type: eth_value_limit
max_value_wei: "1000000000000000000" # 1 ETH
enabled: true
description: "Limit ETH transfers to 1 ETH per transaction"
```
### 2. Address Denylist
Block transactions to sanctioned or malicious addresses.
```yaml
policies:
- type: address_denylist
denied_addresses:
- "0xSanctionedAddress1..."
- "0xMaliciousContract..."
enabled: true
description: "Block transactions to denied addresses"
```
### 3. Address Allowlist
Only allow transactions to pre-approved addresses (whitelist mode).
```yaml
policies:
- type: address_allowlist
allowed_addresses:
- "0xTrustedContract1..."
- "0xTrustedContract2..."
enabled: false # Disabled by default
description: "Only allow transactions to approved addresses"
```
### 4. Per-Asset Limits
Different spending limits for each token.
```yaml
policies:
- type: per_asset_limit
asset_limits:
- name: USDC
address: "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
max_amount: "10000000" # 10 USDC
decimals: 6
- name: DAI
address: "0x6B175474E89094C44Da98b954EedeAC495271d0F"
max_amount: "100000000000000000000" # 100 DAI
decimals: 18
enabled: true
description: "Per-asset spending limits"
```
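Note that the `max_amount` strings are raw on-chain units, so they must account for each token's `decimals`. A quick helper for converting human-readable amounts (a convenience sketch, assuming standard ERC-20 decimals semantics; it is not part of AgentARC):

```python
from decimal import Decimal

def to_raw_units(amount: str, decimals: int) -> str:
    """Convert a human-readable token amount to the raw integer string
    expected by max_amount (e.g. 10 USDC with 6 decimals -> "10000000")."""
    raw = Decimal(amount) * (10 ** decimals)
    if raw != raw.to_integral_value():
        raise ValueError("amount has more precision than the token supports")
    return str(int(raw))

print(to_raw_units("10", 6))    # 10000000  (10 USDC)
print(to_raw_units("100", 18))  # 100000000000000000000  (100 DAI)
```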
### 5. Token Amount Limit
Limit token transfers across all ERC20 tokens.
```yaml
policies:
- type: token_amount_limit
max_amount: "1000000000000000000000" # 1000 tokens (18 decimals)
enabled: false
description: "Limit token transfers per transaction"
```
### 6. Gas Limit
Prevent expensive transactions.
```yaml
policies:
- type: gas_limit
max_gas: 500000
enabled: true
description: "Limit gas to 500k per transaction"
```
### 7. Function Allowlist
Only allow specific function calls.
```yaml
policies:
- type: function_allowlist
allowed_functions:
- "eth_transfer"
- "transfer"
- "approve"
- "swap"
enabled: false
description: "Only allow specific function calls"
```
---
## 🔬 Advanced Features
### Tenderly Simulation
Enable advanced transaction simulation with full execution traces and asset tracking:
```yaml
simulation:
enabled: true
fail_on_revert: true
estimate_gas: true
print_trace: false # Set to true for detailed execution traces
```
**Setup Tenderly (optional but recommended):**
```bash
# Add to .env
TENDERLY_ACCESS_KEY=your_access_key
TENDERLY_ACCOUNT_SLUG=your_account
TENDERLY_PROJECT_SLUG=your_project
```
**Capabilities:**
- ✅ Full call trace analysis
- ✅ Asset/balance change tracking
- ✅ Event log decoding
- ✅ Gas prediction
- ✅ State modification tracking
**Output Example:**
```
Stage 3: Transaction Simulation
✅ Simulation successful (gas: 166300)
Asset changes:
0x1234567... (erc20): +1000
0xabcdef0... (erc20): -500
```
**With `print_trace: true`:**
```
Tenderly Simulation Details
----------------------------------------
Call Trace:
[1] CALL: 0x1234567... → 0xabcdef0... (value: 0.5 ETH, gas: 50000)
[1] DELEGATECALL: 0xabcdef0... → 0x9876543... (value: 0 ETH, gas: 30000)
[2] CALL: 0xabcdef0... → 0x5555555... (value: 0 ETH, gas: 15000)
Asset/Balance Changes:
0x1234567... (erc20): +1000
0xabcdef0... (erc20): -500
Events Emitted:
[1] Transfer
[2] Approval
[3] Swap
```
### LLM-based Security Validation
Enable AI-powered malicious activity detection:
```yaml
llm_validation:
  enabled: true
  provider: "openai" # or "anthropic"
  model: "gpt-4o-mini"
  api_key: "${OPENAI_API_KEY}" # or set in environment
  block_threshold: 0.70 # Block if confidence >= 70%
  warn_threshold: 0.40  # Warn if confidence >= 40%
```
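The two thresholds partition the LLM's confidence score into allow / warn / block bands. A sketch of that decision logic (illustrative, not AgentARC's internal code):

```python
def llm_decision(confidence: float,
                 block_threshold: float = 0.70,
                 warn_threshold: float = 0.40) -> str:
    """Map an LLM risk-confidence score to a pipeline decision."""
    if confidence >= block_threshold:
        return "BLOCK"
    if confidence >= warn_threshold:
        return "WARN"
    return "ALLOW"

print(llm_decision(0.80))  # BLOCK
print(llm_decision(0.65))  # WARN
print(llm_decision(0.20))  # ALLOW
```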
**What the LLM Analyzes:**
- Hidden token approvals
- Unusual fund flow patterns
- Reentrancy attack patterns
- Flash loan exploits
- Sandwich/MEV attacks
- Phishing attempts
- Hidden fees and draining
- Delegatecall to untrusted contracts
- Honeypot token indicators
**Example Output:**
```
Stage 4: LLM-based Security Validation
⚠️ LLM warning: Detected unlimited token approval to unknown contract
Confidence: 65% | Risk: MEDIUM
Indicators: unlimited_approval, unknown_recipient
```
### Honeypot Detection
Automatically detect scam tokens that can be bought but not sold:
**How it works:**
1. Transaction initiates a token purchase (BUY)
2. AgentARC simulates the BUY
3. Detects token receipt via Transfer events
4. Automatically simulates a SELL transaction
5. If SELL fails → **HONEYPOT DETECTED** → Block original BUY
**Configuration:**
```yaml
# Honeypot detection is automatic when Tenderly simulation is enabled
simulation:
  enabled: true
```
**Example Output:**
```
Stage 3.5: Honeypot Detection
🔍 Token BUY detected. Checking if tokens can be sold back...
🧪 Testing sell for token 0xFe8365...
❌ Sell simulation FAILED/REVERTED
🛡️ ❌ BLOCKED: HONEYPOT DETECTED
Token 0xFe8365... can be bought but cannot be sold
```
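The five steps above can be sketched as a buy-then-sell check. Everything here is hypothetical scaffolding (the `simulate` callable and its result shape are invented for illustration; AgentARC's real stage is `engine/stages/honeypot.py`):

```python
from typing import Callable

def honeypot_check(buy_tx: dict, simulate: Callable[[dict], dict]) -> str:
    """Sketch of the buy-then-sell honeypot check.

    `simulate` is a hypothetical simulator returning a dict like
    {"success": bool, "tokens_received": str | None}.
    """
    buy = simulate(buy_tx)
    if not buy["success"]:
        return "BLOCK: buy simulation reverted"
    token = buy.get("tokens_received")
    if token is None:
        return "ALLOW: no token purchase detected"
    # Build a hypothetical sell of the tokens just received and simulate it.
    sell_tx = {"action": "sell", "token": token, "from": buy_tx["from"]}
    sell = simulate(sell_tx)
    if not sell["success"]:
        return f"BLOCK: honeypot detected for {token}"
    return "ALLOW"
```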
---
## 📊 Logging Levels
Control output verbosity in `policy.yaml`:
```yaml
logging:
  level: info # minimal, info, or debug
```
- **minimal**: Only final decisions (ALLOWED/BLOCKED)
- **info**: Full validation pipeline (recommended)
- **debug**: Detailed debugging information including trace counts
---
## 🔧 Complete Configuration Example
`policy.yaml`:
```yaml
version: "2.0"
apply_to: [all]

# Logging configuration
logging:
  level: info # minimal, info, debug

# Policy rules
policies:
  - type: eth_value_limit
    max_value_wei: "1000000000000000000" # 1 ETH
    enabled: true
    description: "Limit ETH transfers to 1 ETH per transaction"

  - type: address_denylist
    denied_addresses: []
    enabled: true
    description: "Block transactions to denied addresses"

  - type: address_allowlist
    allowed_addresses: []
    enabled: false
    description: "Only allow transactions to approved addresses"

  - type: per_asset_limit
    asset_limits:
      - name: USDC
        address: "0x036CbD53842c5426634e7929541eC2318f3dCF7e"
        max_amount: "10000000" # 10 USDC
        decimals: 6
      - name: DAI
        address: "0x6B175474E89094C44Da98b954EedeAC495271d0F"
        max_amount: "100000000000000000000" # 100 DAI
        decimals: 18
    enabled: true
    description: "Per-asset spending limits"

  - type: token_amount_limit
    max_amount: "1000000000000000000000" # 1000 tokens
    enabled: false
    description: "Limit token transfers per transaction"

  - type: function_allowlist
    allowed_functions:
      - "eth_transfer"
      - "transfer"
      - "approve"
    enabled: false
    description: "Only allow specific function calls"

  - type: gas_limit
    max_gas: 500000
    enabled: true
    description: "Limit gas to 500k per transaction"

# Transaction simulation
simulation:
  enabled: true
  fail_on_revert: true
  estimate_gas: true
  print_trace: false # Enable for detailed execution traces

# Calldata validation
calldata_validation:
  enabled: true
  strict_mode: false

# LLM-based validation (optional)
llm_validation:
  enabled: false
  provider: "openai"
  model: "gpt-4o-mini"
  api_key: "${OPENAI_API_KEY}"
  block_threshold: 0.70
  warn_threshold: 0.40
```
---
## 🧪 Testing
Run the test suite:
```bash
cd tests
python test_complete_system.py
```
**Tests cover:**
- ETH value limits
- Address denylist/allowlist
- Per-asset limits
- Gas limits
- Calldata parsing
- All logging levels
---
## 🏗️ Project Structure
```
agentarc/
├── agentarc/                       # Main package
│   ├── __init__.py                 # Public API exports
│   ├── __main__.py                 # CLI entry point
│   ├── core/                       # Core abstractions
│   │   ├── config.py               # PolicyConfig for YAML loading
│   │   ├── types.py                # TypedDict definitions
│   │   ├── errors.py               # Custom exceptions
│   │   ├── interfaces.py           # Protocol definitions
│   │   └── events.py               # Event types
│   ├── engine/                     # Validation pipeline
│   │   ├── policy_engine.py        # Main orchestrator
│   │   ├── pipeline.py             # ValidationPipeline
│   │   ├── context.py              # ValidationContext
│   │   ├── factory.py              # ComponentFactory (DI)
│   │   └── stages/                 # Pipeline stages
│   │       ├── intent.py           # Intent parsing
│   │       ├── policy.py           # Policy validation
│   │       ├── simulation.py       # Transaction simulation
│   │       ├── honeypot.py         # Honeypot detection
│   │       └── llm.py              # LLM analysis
│   ├── validators/                 # Plugin-based validators
│   │   ├── base.py                 # PolicyValidator ABC
│   │   ├── registry.py             # ValidatorRegistry
│   │   └── builtin/                # 7 built-in validators
│   │       ├── address.py          # Allowlist/Denylist
│   │       ├── limits.py           # Value/Token limits
│   │       ├── gas.py              # Gas limit
│   │       └── functions.py        # Function allowlist
│   ├── wallets/                    # Universal wallet support
│   │   ├── base.py                 # WalletAdapter ABC
│   │   ├── factory.py              # WalletFactory
│   │   ├── policy_wallet.py        # PolicyWallet wrapper
│   │   └── adapters/               # Wallet implementations
│   │       ├── private_key.py      # PrivateKeyWallet
│   │       ├── mnemonic.py         # MnemonicWallet
│   │       └── cdp.py              # CdpWalletAdapter
│   ├── frameworks/                 # Multi-framework adapters
│   │   ├── base.py                 # FrameworkAdapter ABC
│   │   ├── agentkit.py             # Coinbase AgentKit
│   │   ├── langchain.py            # LangChain adapter
│   │   └── openai_sdk.py           # OpenAI SDK adapter
│   ├── simulators/                 # Transaction simulation
│   │   ├── basic.py                # Basic eth_call simulator
│   │   └── tenderly.py             # Tenderly integration
│   ├── analysis/                   # Security analysis
│   │   └── llm_judge.py            # LLM-based threat detection
│   ├── parsers/                    # Calldata parsing
│   │   └── calldata.py             # ABI decoding
│   ├── compat/                     # Legacy compatibility
│   │   └── wallet_wrapper.py       # PolicyWalletProvider
│   ├── log/                        # Logging system
│   │   └── logger.py               # PolicyLogger
│   └── events/                     # Event streaming
│       └── events.py               # EventEmitter
├── examples/                       # Usage examples
│   ├── basic-chat-agent/           # Production chatbot with frontend
│   └── autonomous-portfolio-agent/ # AI portfolio manager
├── tests/
├── README.md
├── CHANGELOG.md
└── pyproject.toml
```
---
## 🤝 Compatibility
### Framework Support
AgentARC integrates with popular AI agent frameworks:
- ✅ **Coinbase AgentKit** - Primary integration with full support
- ✅ **LangChain** - LangChainAdapter for LangChain agents
- ✅ **OpenAI SDK** - OpenAIAdapter for function-calling agents
### Wallet Support
Universal wallet support for any blockchain interaction:
- ✅ **Private Key Wallets** - Direct private key management
- ✅ **Mnemonic Wallets** - HD wallet derivation (BIP-39/44)
- ✅ **CDP Wallets** - Coinbase Developer Platform integration
### AgentKit Wallet Providers
For Coinbase AgentKit users:
- ✅ **CDP EVM Wallet Provider**
- ✅ **CDP Smart Wallet Provider**
- ✅ **Ethereum Account Wallet Provider**
---
## 📖 Documentation
- **[CHANGELOG.md](CHANGELOG.md)** - Version history and updates
- **[CONTRIBUTING.md](CONTRIBUTING.md)** - Contributing guidelines
- **[Examples](examples/)** - Sample implementations and demos
---
## 🔒 Security Best Practices
- **Start with restrictive policies** — Use low limits and gradually increase
- **Enable simulation** — Catch failures before sending transactions
- **Use Tenderly** — Get detailed execution traces and asset changes
- **Enable optional LLM validation** — Add AI-powered risk analysis where useful
- **Test on testnet** — Validate policies on Base Sepolia before mainnet
- **Monitor logs** — Review transaction validations regularly
- **Keep denylists updated** — Add known malicious addresses
- **Enable threat checks** — Automatically catch token traps (honeypots and related patterns) and expand coverage over time
---
## 🛠️ Environment Variables
```bash
# Coinbase CDP (required for real wallet)
CDP_API_KEY_NAME=your_cdp_key_name
CDP_API_KEY_PRIVATE_KEY=your_cdp_private_key

# LLM Provider (optional - for Stage 4)
OPENAI_API_KEY=your_openai_key
# OR
ANTHROPIC_API_KEY=your_anthropic_key

# Tenderly (optional - for advanced simulation)
TENDERLY_ACCESS_KEY=your_tenderly_key
TENDERLY_ACCOUNT_SLUG=your_account
TENDERLY_PROJECT_SLUG=your_project
```
---
## 🎯 Use Cases
- 🤖 **AI Trading Bots** - Prevent unauthorized trades and limit exposure
- 💼 **Portfolio Managers** - Enforce spending limits across assets
- 🏦 **Treasury Management** - Multi-signature with policy enforcement
- 🎮 **GameFi Agents** - Limit in-game asset transfers
- 🔐 **Security Testing** - Validate smart contract interactions
- 🛡️ **Honeypot Protection** - Automatically detect and block scam tokens
---
## 📝 License
MIT License - see [LICENSE](LICENSE) file for details.
---
## 🤝 Contributing
Contributions are welcome! Please read [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
---
## 🆘 Support
- **Issues:** [GitHub Issues](https://github.com/galaar-org/AgentARC/issues)
- **Examples:** [examples/](examples/)
- **Documentation:** [README.md](README.md)
---
**Protect your AI agents with AgentARC - Multi-layer security for blockchain interactions** 🛡️
| text/markdown | null | Galaar <me@dipeshsukhani.dev> | null | null | null | agent, security, policy, ai, validation, agentkit | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml>=6.0",
"click>=8.0.0",
"web3>=6.0.0",
"eth-abi>=4.0.0",
"rich>=13.0.0",
"coinbase-agentkit>=0.1.0; extra == \"agentkit\"",
"web3>=6.0.0; extra == \"agentkit\"",
"eth-abi>=4.0.0; extra == \"agentkit\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/galaar-org/AgentARC",
"Repository, https://github.com/galaar-org/AgentARC",
"Issues, https://github.com/galaar-org/AgentARC/issues"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T07:10:34.876930 | agentarc-0.2.0.tar.gz | 108,670 | d3/28/9c23510e0e95ae6d0a34781195f15ba0f6410511319cbfd7a1bed8101cdd/agentarc-0.2.0.tar.gz | source | sdist | null | false | f5d986c2e236cb6d117b4e56f1f07320 | 4c85c967b1f59e2cdf87f3243acd228f095bf8743001bde28504d05aa7676063 | d3289c23510e0e95ae6d0a34781195f15ba0f6410511319cbfd7a1bed8101cdd | MIT | [
"LICENSE"
] | 251 |
2.4 | crawlee | 1.4.1b3 | Crawlee for Python | <h1 align="center">
<a href="https://crawlee.dev">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/apify/crawlee-python/master/website/static/img/crawlee-dark.svg?sanitize=true">
<img alt="Crawlee" src="https://raw.githubusercontent.com/apify/crawlee-python/master/website/static/img/crawlee-light.svg?sanitize=true" width="500">
</picture>
</a>
<br>
<small>A web scraping and browser automation library</small>
</h1>
<p align=center>
<a href="https://trendshift.io/repositories/11169" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11169" alt="apify%2Fcrawlee-python | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>
<p align="center">
<a href="https://badge.fury.io/py/crawlee" rel="nofollow"><img src="https://badge.fury.io/py/crawlee.svg" alt="PyPI package version"></a>
<a href="https://pypi.org/project/crawlee/" rel="nofollow"><img src="https://img.shields.io/pypi/dm/crawlee" alt="PyPI package downloads"></a>
<a href="https://codecov.io/gh/apify/crawlee-python"><img src="https://codecov.io/gh/apify/crawlee-python/graph/badge.svg?token=cCju61iPQG" alt="Codecov report"></a>
<a href="https://pypi.org/project/crawlee/" rel="nofollow"><img src="https://img.shields.io/pypi/pyversions/crawlee" alt="PyPI Python version"></a>
<a href="https://discord.gg/jyEM2PRvMU" rel="nofollow"><img src="https://img.shields.io/discord/801163717915574323?label=discord" alt="Chat on Discord"></a>
</p>
Crawlee covers your crawling and scraping end-to-end and **helps you build reliable scrapers. Fast.**
Your crawlers will appear almost human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data and persistently store it in machine-readable formats, without having to worry about the technical details. And thanks to rich configuration options, you can tweak almost any aspect of Crawlee to suit your project's needs if the default settings don't cut it.
> 👉 **View full documentation, guides and examples on the [Crawlee project website](https://crawlee.dev/python/)** 👈
We also have a TypeScript implementation of Crawlee, which you can explore and use in your projects. Visit [Crawlee for JS/TS on GitHub](https://github.com/apify/crawlee) for more information.
## Installation
We recommend visiting the [Introduction tutorial](https://crawlee.dev/python/docs/introduction) in Crawlee documentation for more information.
Crawlee is available as [`crawlee`](https://pypi.org/project/crawlee/) package on PyPI. This package includes the core functionality, while additional features are available as optional extras to keep dependencies and package size minimal.
To install Crawlee with all features, run the following command:
```sh
python -m pip install 'crawlee[all]'
```
Then, install the [Playwright](https://playwright.dev/) dependencies:
```sh
playwright install
```
Verify that Crawlee is successfully installed:
```sh
python -c 'import crawlee; print(crawlee.__version__)'
```
For detailed installation instructions see the [Setting up](https://crawlee.dev/python/docs/introduction/setting-up) documentation page.
### With Crawlee CLI
The quickest way to get started with Crawlee is by using the Crawlee CLI and selecting one of the prepared templates. First, ensure you have [uv](https://pypi.org/project/uv/) installed:
```sh
uv --help
```
If [uv](https://pypi.org/project/uv/) is not installed, follow the official [installation guide](https://docs.astral.sh/uv/getting-started/installation/).
Then, run the CLI and choose from the available templates:
```sh
uvx 'crawlee[cli]' create my-crawler
```
If you already have `crawlee` installed, you can spin it up by running:
```sh
crawlee create my-crawler
```
## Examples
Here are some practical examples to help you get started with different types of crawlers in Crawlee. Each example demonstrates how to set up and run a crawler for specific use cases, whether you need to handle simple HTML pages or interact with JavaScript-heavy sites. A crawler run will create a `storage/` directory in your current working directory.
### BeautifulSoupCrawler
The [`BeautifulSoupCrawler`](https://crawlee.dev/python/api/class/BeautifulSoupCrawler) downloads web pages using an HTTP library and provides HTML-parsed content to the user. By default it uses [`HttpxHttpClient`](https://crawlee.dev/python/api/class/HttpxHttpClient) for HTTP communication and [BeautifulSoup](https://pypi.org/project/beautifulsoup4/) for parsing HTML. It is ideal for projects that require efficient extraction of data from HTML content, and it has very good performance since it does not use a browser. However, if you need to execute client-side JavaScript to get your content, this is not going to be enough, and you will need to use [`PlaywrightCrawler`](https://crawlee.dev/python/api/class/PlaywrightCrawler). To use this crawler, install `crawlee` with the `beautifulsoup` extra.
```python
import asyncio

from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    crawler = BeautifulSoupCrawler(
        # Limit the crawl to max requests. Remove or increase it for crawling all links.
        max_requests_per_crawl=10,
    )

    # Define the default request handler, which will be called for every request.
    @crawler.router.default_handler
    async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')

        # Extract data from the page.
        data = {
            'url': context.request.url,
            'title': context.soup.title.string if context.soup.title else None,
        }

        # Push the extracted data to the default dataset.
        await context.push_data(data)

        # Enqueue all links found on the page.
        await context.enqueue_links()

    # Run the crawler with the initial list of URLs.
    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    asyncio.run(main())
```
### PlaywrightCrawler
The [`PlaywrightCrawler`](https://crawlee.dev/python/api/class/PlaywrightCrawler) uses a headless browser to download web pages and provides an API for data extraction. It is built on [Playwright](https://playwright.dev/), an automation library designed for managing headless browsers. It excels at retrieving web pages that rely on client-side JavaScript for content generation, or tasks requiring interaction with JavaScript-driven content. For scenarios where JavaScript execution is unnecessary or higher performance is required, consider using the [`BeautifulSoupCrawler`](https://crawlee.dev/python/api/class/BeautifulSoupCrawler). To use this crawler, install `crawlee` with the `playwright` extra.
```python
import asyncio

from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext


async def main() -> None:
    crawler = PlaywrightCrawler(
        # Limit the crawl to max requests. Remove or increase it for crawling all links.
        max_requests_per_crawl=10,
    )

    # Define the default request handler, which will be called for every request.
    @crawler.router.default_handler
    async def request_handler(context: PlaywrightCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')

        # Extract data from the page.
        data = {
            'url': context.request.url,
            'title': await context.page.title(),
        }

        # Push the extracted data to the default dataset.
        await context.push_data(data)

        # Enqueue all links found on the page.
        await context.enqueue_links()

    # Run the crawler with the initial list of requests.
    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    asyncio.run(main())
```
### More examples
Explore our [Examples](https://crawlee.dev/python/docs/examples) page in the Crawlee documentation for a wide range of additional use cases and demonstrations.
## Features
Why is Crawlee the preferred choice for web scraping and crawling?
### Why use Crawlee instead of just a random HTTP library with an HTML parser?
- Unified interface for **HTTP & headless browser** crawling.
- Automatic **parallel crawling** based on available system resources.
- Written in Python with **type hints** - enhances DX (IDE autocompletion) and reduces bugs (static type checking).
- Automatic **retries** on errors or when you’re getting blocked.
- Integrated **proxy rotation** and session management.
- Configurable **request routing** - direct URLs to the appropriate handlers.
- Persistent **queue for URLs** to crawl.
- Pluggable **storage** of both tabular data and files.
- Robust **error handling**.
### Why use Crawlee rather than Scrapy?
- **Asyncio-based** – Leveraging the standard [Asyncio](https://docs.python.org/3/library/asyncio.html) library, Crawlee delivers better performance and seamless compatibility with other modern asynchronous libraries.
- **Type hints** – Newer project built with modern Python, and complete type hint coverage for a better developer experience.
- **Simple integration** – Crawlee crawlers are regular Python scripts, requiring no additional launcher executor. This flexibility allows you to integrate a crawler directly into other applications.
- **State persistence** – Supports state persistence during interruptions, saving time and costs by avoiding the need to restart scraping pipelines from scratch after an issue.
- **Organized data storages** – Allows saving of multiple types of results in a single scraping run. Offers several storing options (see [datasets](https://crawlee.dev/python/api/class/Dataset) & [key-value stores](https://crawlee.dev/python/api/class/KeyValueStore)).
## Running on the Apify platform
Crawlee is open-source and runs anywhere, but since it's developed by [Apify](https://apify.com), it's easy to set up on the Apify platform and run in the cloud. Visit the [Apify SDK website](https://docs.apify.com/sdk/python/) to learn more about deploying Crawlee to the Apify platform.
## Support
If you find any bug or issue with Crawlee, please [submit an issue on GitHub](https://github.com/apify/crawlee-python/issues). For questions, you can ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/apify), in GitHub Discussions or you can join our [Discord server](https://discord.com/invite/jyEM2PRvMU).
## Contributing
Your code contributions are welcome, and you'll be praised for eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see [CONTRIBUTING.md](https://github.com/apify/crawlee-python/blob/master/CONTRIBUTING.md).
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://github.com/apify/crawlee-python/blob/master/LICENSE) file for details.
| text/markdown | null | "Apify Technologies s.r.o." <support@apify.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 Apify Technologies s.r.o.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | apify, automation, chrome, crawlee, crawler, headless, scraper, scraping | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"async-timeout>=5.0.1",
"cachetools>=5.5.0",
"colorama>=0.4.0",
"impit>=0.8.0",
"more-itertools>=10.2.0",
"protego>=0.5.0",
"psutil>=6.0.0",
"pydantic-settings>=2.12.0",
"pydantic>=2.11.0",
"pyee>=9.0.0",
"tldextract>=5.1.0",
"typing-extensions>=4.1.0",
"yarl>=1.18.0",
"apify-fingerprint-datapoints>=0.0.3; extra == \"adaptive-crawler\"",
"browserforge>=1.2.4; extra == \"adaptive-crawler\"",
"jaro-winkler>=2.0.3; extra == \"adaptive-crawler\"",
"playwright>=1.27.0; extra == \"adaptive-crawler\"",
"scikit-learn>=1.6.0; extra == \"adaptive-crawler\"",
"aiosqlite>=0.21.0; extra == \"all\"",
"apify-fingerprint-datapoints>=0.0.2; extra == \"all\"",
"apify-fingerprint-datapoints>=0.0.3; extra == \"all\"",
"asyncpg>=0.24.0; extra == \"all\"",
"beautifulsoup4[lxml]>=4.12.0; extra == \"all\"",
"browserforge>=1.2.3; extra == \"all\"",
"browserforge>=1.2.4; extra == \"all\"",
"cookiecutter>=2.6.0; extra == \"all\"",
"curl-cffi>=0.9.0; extra == \"all\"",
"html5lib>=1.0; extra == \"all\"",
"httpx[brotli,http2,zstd]>=0.27.0; extra == \"all\"",
"inquirer>=3.3.0; extra == \"all\"",
"jaro-winkler>=2.0.3; extra == \"all\"",
"opentelemetry-api>=1.34.1; extra == \"all\"",
"opentelemetry-distro[otlp]>=0.54; extra == \"all\"",
"opentelemetry-instrumentation-httpx>=0.54; extra == \"all\"",
"opentelemetry-instrumentation>=0.54; extra == \"all\"",
"opentelemetry-sdk>=1.34.1; extra == \"all\"",
"opentelemetry-semantic-conventions>=0.54; extra == \"all\"",
"parsel>=1.10.0; extra == \"all\"",
"playwright>=1.27.0; extra == \"all\"",
"redis[hiredis]>=7.0.0; extra == \"all\"",
"rich>=13.9.0; extra == \"all\"",
"scikit-learn>=1.6.0; extra == \"all\"",
"sqlalchemy[asyncio]<3.0.0,>=2.0.0; extra == \"all\"",
"typer>=0.12.0; extra == \"all\"",
"wrapt>=1.17.0; extra == \"all\"",
"beautifulsoup4[lxml]>=4.12.0; extra == \"beautifulsoup\"",
"html5lib>=1.0; extra == \"beautifulsoup\"",
"cookiecutter>=2.6.0; extra == \"cli\"",
"inquirer>=3.3.0; extra == \"cli\"",
"rich>=13.9.0; extra == \"cli\"",
"typer>=0.12.0; extra == \"cli\"",
"curl-cffi>=0.9.0; extra == \"curl-impersonate\"",
"apify-fingerprint-datapoints>=0.0.2; extra == \"httpx\"",
"browserforge>=1.2.3; extra == \"httpx\"",
"httpx[brotli,http2,zstd]>=0.27.0; extra == \"httpx\"",
"opentelemetry-api>=1.34.1; extra == \"otel\"",
"opentelemetry-distro[otlp]>=0.54; extra == \"otel\"",
"opentelemetry-instrumentation-httpx>=0.54; extra == \"otel\"",
"opentelemetry-instrumentation>=0.54; extra == \"otel\"",
"opentelemetry-sdk>=1.34.1; extra == \"otel\"",
"opentelemetry-semantic-conventions>=0.54; extra == \"otel\"",
"wrapt>=1.17.0; extra == \"otel\"",
"parsel>=1.10.0; extra == \"parsel\"",
"apify-fingerprint-datapoints>=0.0.2; extra == \"playwright\"",
"browserforge>=1.2.3; extra == \"playwright\"",
"playwright>=1.27.0; extra == \"playwright\"",
"redis[hiredis]>=7.0.0; extra == \"redis\"",
"aiomysql>=0.3.2; extra == \"sql-mysql\"",
"cryptography>=46.0.5; extra == \"sql-mysql\"",
"sqlalchemy[asyncio]<3.0.0,>=2.0.0; extra == \"sql-mysql\"",
"asyncpg>=0.24.0; extra == \"sql-postgres\"",
"sqlalchemy[asyncio]<3.0.0,>=2.0.0; extra == \"sql-postgres\"",
"aiosqlite>=0.21.0; extra == \"sql-sqlite\"",
"sqlalchemy[asyncio]<3.0.0,>=2.0.0; extra == \"sql-sqlite\""
] | [] | [] | [] | [
"Apify Homepage, https://apify.com",
"Changelog, https://crawlee.dev/python/docs/changelog",
"Discord, https://discord.com/invite/jyEM2PRvMU",
"Documentation, https://crawlee.dev/python/docs/quick-start",
"Homepage, https://crawlee.dev/python",
"Issue Tracker, https://github.com/apify/crawlee-python/issues",
"Release Notes, https://crawlee.dev/python/docs/upgrading",
"Source Code, https://github.com/apify/crawlee-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:10:31.682068 | crawlee-1.4.1b3.tar.gz | 24,932,888 | be/f5/709304f032df63037518aedd107282bede78edfe94a0dd10a663a4e6a44e/crawlee-1.4.1b3.tar.gz | source | sdist | null | false | c00abf7fcd1a8eed0c1d44d999bcc155 | 335875cb6e216d01945a9e0ad57602644bfd4a62000d738786d2c30dfd9179b4 | bef5709304f032df63037518aedd107282bede78edfe94a0dd10a663a4e6a44e | null | [
"LICENSE"
] | 254 |
2.4 | isagellm-kv-cache | 0.5.1.6 | KV Cache Management Module for sageLLM | # sagellm-kv-cache
**KV Cache Management + KV Transfer** for sageLLM inference engine.
[](https://github.com/intellistream/sagellm-kv-cache/actions/workflows/ci.yml)
[](https://codecov.io/gh/intellistream/sagellm-kv-cache)
[](https://badge.fury.io/py/isagellm-kv-cache)
[](https://www.python.org/downloads/)
## Overview
This package provides efficient KV cache management and transfer for LLM inference.
**Key Features**:
- **KV Pool**: Block-based memory management with budget control.
- **KV Transfer**: Primitives for cross-node KV block migration.
- **Observability**: Metrics and hooks for cache monitoring.
### Architecture
```
┌─────────────────────────────────────────────────────────────────────┐
│ sagellm-control-plane │
│ (Scheduling: alloc/free/migrate decisions) │
└────────────────────────────┬────────────────────────────────────────┘
│ KVCacheInterface
▼
┌─────────────────────────────────────────────────────────────────────┐
│ sagellm-kv-cache (This Package) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ PrefixCache │ │ KV Pool │ │ Eviction │ │ KV Transfer │ │
│ │ (Task2.1) │ │ (Task2.2) │ │ (Task2.3) │ │ (Task1.3) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └──────┬──────┘ │
└────────────────────────────────────────────────────────────┼────────┘
┌───────────────────────────────┘
│ Use CommBackend for transport
▼
┌─────────────────────────────────────────────────────────────────────┐
│ sagellm-comm │
│ (Network Layer: Topology, Collectives) │
└─────────────────────────────────────────────────────────────────────┘
```
## Installation
```bash
pip install isagellm-kv-cache
```
## Quick Start (CPU-first)
### KV Pool
```python
from sagellm_kv_cache.pool import KVPool
# Create a KV pool with budget control
pool = KVPool(max_tokens=1024)
# Allocate KV cache block
handle = pool.alloc(num_tokens=128, device="cpu")
print(f"Allocated handle: {handle.handle_id}, Tokens: {handle.num_tokens}")
# Free the handle
pool.free(handle)
```
### Prefix Cache (Task 2.1)
```python
from sagellm_kv_cache import PrefixCache
# Create cache with block-based hashing
cache = PrefixCache(block_size=16, max_cached_blocks=100, enable_lru=True)
# Insert prefix blocks
tokens = list(range(48)) # 3 blocks
hashes = cache.compute_block_hashes(tokens)
blocks = [{"block_id": i} for i in range(len(hashes))]
cache.insert(hashes, blocks)
# Lookup with prefix overlap
hit_blocks, num_tokens = cache.lookup(hashes)
print(f"Reused {num_tokens} tokens from cache!")
# Check hit rate
stats = cache.get_stats()
print(f"Hit rate: {stats['hit_rate']:.1%}")
```
See [examples/prefix_cache_example.py](examples/prefix_cache_example.py) for comprehensive usage
examples.
### KV Cache Access Pattern Profiling
```python
from sagellm_kv_cache.profiling import AccessStatsCollector
# Create statistics collector
collector = AccessStatsCollector()
# Record accesses during inference
collector.record_access("block_001", is_hit=True)
collector.record_access("block_002", is_hit=False)
# Export statistics to JSON
collector.export_stats("stats.json")
# Get summary
summary = collector.get_stats_summary()
print(f"Hit rate: {summary['hit_rate']:.2%}")
print(f"Total accesses: {summary['total_accesses']}")
```
**CLI Tool - Generate Demo Data**:
```bash
# Generate demo statistics
sage-kv-stats demo --num-accesses 1000 --output demo_stats.json
# Or use the Python script
python examples/kv_profiling_demo.py --num-accesses 500 --output stats.json
```
**CLI Tool - Visualize Results**:
```bash
# Generate heatmap
sage-kv-stats visualize --input stats.json --output heatmap.png
# Generate all visualizations with summary
sage-kv-stats visualize --input stats.json --type all --summary
# Or use the Python script
python scripts/visualize_access_pattern.py --input stats.json --type all --summary
```
**Install visualization dependencies** (matplotlib is optional):
```bash
pip install isagellm-kv-cache[visualization]
```
## API Reference
### Core Components
- **`PrefixCache`** (`sagellm_kv_cache`): Block-hash based prefix caching for cross-request KV
reuse. Supports LRU eviction, hit rate tracking, and handle invalidation. See Task 2.1.
- **`KVPool`** (`sagellm_kv_cache.pool`): Main entry point for memory management. Handles
allocation, freeing, and budget enforcement.
- **`KVHandle`** (`sagellm_kv_cache`): Represents a reference to allocated KV cache. Contains
metadata like `handle_id`, `dtype`, `layout`.
- **`KVTransferEngine`** (`sagellm_kv_cache`): Handles moving KV blocks between nodes using
`sagellm-comm`.
- **`EvictionManager`** (`sagellm_kv_cache`): Eviction policy management with LRU/FIFO strategies.
- **`SchedulerBridge`** (`sagellm_kv_cache`): Bridge between scheduler IR and KV pool operations.
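As a conceptual illustration of the LRU strategy mentioned for `EvictionManager`, the sketch below shows the bookkeeping such a policy performs; the class name and methods here are illustrative only, not the package's actual API:

```python
from collections import OrderedDict

class LRUEvictionSketch:
    """Toy LRU eviction policy; illustrative, not the package's API."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._blocks = OrderedDict()  # block_id -> present, in recency order

    def touch(self, block_id: str) -> None:
        # Record an access, marking the block as most recently used.
        if block_id in self._blocks:
            self._blocks.move_to_end(block_id)
        else:
            self._blocks[block_id] = True

    def evict_if_needed(self) -> list:
        # Evict least-recently-used blocks until within capacity.
        evicted = []
        while len(self._blocks) > self.capacity:
            block_id, _ = self._blocks.popitem(last=False)
            evicted.append(block_id)
        return evicted

policy = LRUEvictionSketch(capacity=2)
for block in ("b1", "b2", "b3"):
    policy.touch(block)
evicted = policy.evict_if_needed()
print(evicted)  # ['b1'] -- b1 was touched least recently
```

The real `EvictionManager` additionally supports a FIFO strategy; the only difference in this sketch would be skipping the `move_to_end` call on repeated accesses.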
### Dependencies
- `isagellm-protocol`: Common data structures and protocol definitions.
- `isagellm-backend`: Backend abstraction.
- `isagellm-comm`: Communication layer for transfer.
## Development
1. **Install dev dependencies**:
```bash
pip install -e .[dev]
```
2. **Run tests**:
```bash
pytest
```
3. **Linting**:
```bash
ruff check .
```
## Version
Current version: 0.4.0.11. See [CHANGELOG.md](CHANGELOG.md) for history.
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | Private | llm, inference, kv-cache, domestic-hardware | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | ==3.10.* | [] | [] | [] | [
"pydantic>=2.0.0",
"isagellm-protocol<0.6.0,>=0.5.1.0",
"isagellm-backend<0.6.0,>=0.5.1.0",
"isagellm-comm<0.6.0,>=0.5.1.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\"",
"matplotlib>=3.5.0; extra == \"dev\"",
"numpy>=1.21.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sagellm-kv-cache",
"Repository, https://github.com/intellistream/sagellm-kv-cache",
"Issues, https://github.com/intellistream/sagellm-kv-cache/issues"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T07:10:21.388089 | isagellm_kv_cache-0.5.1.6.tar.gz | 266,721 | bd/ec/a1b1a99f980c9820128e92e9f16fef947d10be8a2b58ec2fa7bdeb67a63f/isagellm_kv_cache-0.5.1.6.tar.gz | source | sdist | null | false | d38a6dce1cc3730d3a396ba31e7b8b1a | a20abb2e3ca50f35d558b307874f9c67f09e159cee26f6dfe81e6affde61bf0c | bdeca1b1a99f980c9820128e92e9f16fef947d10be8a2b58ec2fa7bdeb67a63f | null | [] | 248 |
2.4 | croissant-sim | 4.1.1 | CROISSANT: Rapid spherical harmonics-based simulator of visibilities | # CROISSANT: spheriCal haRmOnics vISibility SimulAtor iN pyThon
[](https://codecov.io/gh/christianhbye/croissant)
CROISSANT is a rapid visibility simulator in Python based on spherical harmonics. Given an antenna design and a sky model, CROISSANT simulates the visibilities - that is, the perceived sky temperature.
CROISSANT uses spherical harmonics to decompose the sky and the antenna beam into a set of coefficients. Since the spherical harmonics form a complete, orthonormal basis on the sphere, the visibility computation reduces nicely from a convolution to a dot product.
In the frequency domain, CROISSANT uses Discrete Prolate Spheroidal Sequences as a rapid linear interpolation scheme. Being linear, this interpolation can be applied directly to the spherical harmonics coefficients, avoiding redoing the most expensive part of the computation.
Moreover, the time evolution of the simulation is very natural in this representation. In the antenna reference frame, the sky rotates overhead with time. To account for this rotation, it is enough to rotate the spherical harmonics coefficients. In the right choice of coordinates (that is, one where the z-axis is aligned with the rotation axis of the Earth or the Moon), this rotation is achieved simply by multiplying each spherical harmonics coefficient by a phase.
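The phase rotation can be sketched in a few lines; here `alm` are coefficients indexed by their order `m`, and the sign convention of the exponent is illustrative (it depends on the conventions in use - this is not CROISSANT's API):

```python
import cmath

def rotate_alm_about_z(alm, ms, phi):
    """Rotate spherical harmonics coefficients by angle phi about z:
    a_{lm} -> a_{lm} * exp(-i * m * phi)."""
    return [a * cmath.exp(-1j * m * phi) for a, m in zip(alm, ms)]

# Three example coefficients with orders m = -1, 0, +1:
alm = [1 + 0j, 2 + 0j, 3 + 0j]
ms = [-1, 0, 1]
rotated = rotate_alm_about_z(alm, ms, cmath.pi / 2)
# An m = 0 coefficient is unchanged by a rotation about z,
# and every coefficient keeps its magnitude.
```

This is why time evolution is cheap: the expensive spherical harmonics transform is done once, and each time step only multiplies the coefficients by phases.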
> **New in version 4.0.0:** CROISSANT is now fully compatible with JAX, provided in the interface `croissant.jax`. Spherical harmonics transforms (built on [s2fft](https://github.com/astro-informatics/s2fft/)), coordinate system transforms, rotations, and the simulator itself can now all be differentiated using JAX autograd.
Overall, this makes CROISSANT a very fast visibility simulator. CROISSANT can therefore be used to simulate a large combination of antenna models and sky models - allowing for the exploration of a range of proposed designs before choosing an antenna for an experiment.
## Installation
For the latest release, do `pip install croissant-sim` (see https://pypi.org/project/croissant-sim). Git clone this repository for the newest changes (this is under active development, so do so at your own risk!).
To access the JAX features, JAX must also be installed. See the [installation guide](https://github.com/google/jax#installation).
Note that croissant is only tested up to Python 3.12. Python 3.13 and newer are not yet supported.
## Demo
Jupyter Notebook: https://nbviewer.org/github/christianhbye/croissant/blob/main/notebooks/example_sim.ipynb
## Contributing
Contributions are welcome - please see the [contribution guidelines](https://github.com/christianhbye/croissant/blob/add_contributing/CONTRIBUTING.md).
| null | Christian Hellum Bye | chbye@berkeley.edu | null | null | MIT | null | [
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Astronomy"
] | [] | https://github.com/christianhbye/croissant | null | <3.13,>=3.10 | [] | [] | [] | [
"astropy",
"hera-filters",
"jupyter",
"lunarsky",
"matplotlib",
"numpy",
"pygdsm",
"s2fft",
"black; extra == \"dev\"",
"build; extra == \"dev\"",
"flake8; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"twine; extra == \"dev\"",
"hera_sim[vis]; extra == \"hera-sim\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T07:09:45.261650 | croissant_sim-4.1.1.tar.gz | 18,292,871 | 2f/11/a3da5b44898a78098cb381473f197e1c441701ff49f7bd590650cdc2c6ca/croissant_sim-4.1.1.tar.gz | source | sdist | null | false | 501960a35b80455a0d8210f9af5c8661 | 543660ba4dbc9d114318b2c1e268f19ab8b6fb761e0ce790bf5dae146d1c8f6d | 2f11a3da5b44898a78098cb381473f197e1c441701ff49f7bd590650cdc2c6ca | null | [
"LICENSE"
] | 262 |
2.4 | genai-auditor | 0.1.0 | LangChain-integrated compliance audit handler for GenAI (EU AI Act etc.). Captures prompts, outputs, and model usage with provenance hashing. | # genai-auditor
A GenAI compliance audit package that integrates with LangChain.
## Directory Structure
```
genai-auditor/
├── pyproject.toml      # build metadata (for PyPI publishing)
├── README.md
├── example.py          # developer smoke-test script
└── src/
    └── genai_auditor/
        ├── __init__.py   # package entry point (exports ComplianceAuditCallbackHandler)
        └── callback.py   # callback implementation, audit log generation, and persistence
```
To help meet 2026 AI regulations such as the EU AI Act, it automatically captures LLM inputs, outputs, and model names, and stores audit logs with a provenance hash for tamper detection.
## Installation
```bash
pip install genai-auditor
```
Development install (local):
```bash
pip install -e /path/to/genai-auditor
```
## Three-Line Setup
Auditing is enabled just by passing the callback to your existing LangChain code.
```python
from langchain_openai import ChatOpenAI
from genai_auditor import ComplianceAuditCallbackHandler
audit = ComplianceAuditCallbackHandler()
llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[audit])
response = llm.invoke("こんにちは")
```
Audit logs are appended to `audit_logs.json` by default.
## Audit Log Format
Each entry contains the following:
- `run_id`: the LangChain run ID
- `model_name`: the name of the model used
- `prompts`: the input prompts (list)
- `outputs`: the LLM output texts (list)
- `started_at` / `ended_at`: ISO 8601 timestamps (UTC)
- `provenance_hash`: a SHA-256 hash computed from the fields above (for tamper detection)
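As an illustration of how a consumer might verify the hash, here is a hypothetical sketch; the exact field set and serialization that `provenance_hash` covers are assumptions, not the package's documented behavior:

```python
import hashlib
import json

AUDITED_FIELDS = ("run_id", "model_name", "prompts", "outputs",
                  "started_at", "ended_at")

def recompute_provenance_hash(entry: dict) -> str:
    """Recompute a SHA-256 hash over the audited fields.
    Canonical JSON with sorted keys is an assumed serialization."""
    audited = {k: entry[k] for k in AUDITED_FIELDS}
    payload = json.dumps(audited, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

entry = {
    "run_id": "abc-123",
    "model_name": "gpt-4o-mini",
    "prompts": ["Hello"],
    "outputs": ["Hi there!"],
    "started_at": "2026-01-01T00:00:00+00:00",
    "ended_at": "2026-01-01T00:00:01+00:00",
}
entry["provenance_hash"] = recompute_provenance_hash(entry)

# Any tampering with an audited field changes the recomputed hash.
tampered = dict(entry, outputs=["(edited)"])
assert recompute_provenance_hash(tampered) != entry["provenance_hash"]
```

The key property is that the hash is deterministic over a canonical serialization, so any edit to an audited field is detectable by recomputation.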
## Specifying the Log File
```python
audit = ComplianceAuditCallbackHandler(audit_log_path="./logs/compliance.json")
```
## License
MIT
| text/markdown | GenAI Compliance Auditor Contributors | null | null | null | null | langchain, compliance, audit, ai-act, genai, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"langchain>=0.2.0",
"langchain-community>=0.3.0",
"rich>=13.0.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"langchain-openai>=0.1.0; extra == \"openai\"",
"langchain-google-genai>=2.0.0; extra == \"google-genai\"",
"langchain-community>=0.3.0; extra == \"ollama\""
] | [] | [] | [] | [
"Homepage, https://github.com/your-org/genai-auditor",
"Documentation, https://github.com/your-org/genai-auditor#readme",
"Repository, https://github.com/your-org/genai-auditor"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T07:08:54.771888 | genai_auditor-0.1.0.tar.gz | 7,413 | 76/08/830d8614ef2db25fa0793f737015deade278065d028da7cdbbb6ac7d3e0f/genai_auditor-0.1.0.tar.gz | source | sdist | null | false | 6353f873a8d685d83d5f2c9b19ca0fab | 3b48ef942cde490c9229cc5679b9bcb89869cb8e55c8e594a36df61f220d17c2 | 7608830d8614ef2db25fa0793f737015deade278065d028da7cdbbb6ac7d3e0f | MIT | [] | 254 |
2.4 | basepair | 2.4.2 | Python client for Basepair's API | Python client for Basepair
======================
Python bindings for Basepair's API and command line interface (CLI).
### Using MFA
**Note: It is advisable to use MFA for increased security and best practices**
After you have been added to the list of collaborators, you can verify whether MFA has been activated for you by visiting the [collaboration page](https://pypi.org/manage/project/basepair/collaboration/).
### How to build and push to pypi:
#### Automated publishing of distribution
While testing changes, it is advisable to use [Test PyPI](https://test.pypi.org/).
We already have GitHub Actions workflows to publish to both Test PyPI and PyPI.
* [Publish Test Packages GitHub Action](https://github.com/basepair/basepair-python/actions/workflows/publish-test-package.yml)
* [Publish Production Package GitHub Action](https://github.com/basepair/basepair-python/actions/workflows/publish-package.yml)
#### Manual way from local
```BASH
python setup.py sdist bdist_wheel # This will create two files in a newly created dist directory, a source archive and a wheel:
twine upload dist/* # To upload it to Pypi
Uploading distributions to https://upload.pypi.org/legacy/
Enter your username:
Enter your password:
```
Note: `username` must be `__token__` (not your PyPI username), and
`password` is the token itself. You may generate a token on the [token creation page](https://pypi.org/manage/account/token/).
That is it!
Below is a successful execution sample:
```
Uploading distributions to https://upload.pypi.org/legacy/
Enter your username: __token__
Enter your password:
Uploading basepair-2.0.7-py3-none-any.whl
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 51.0/51.0 kB • 00:00 • 36.3 MB/s
Uploading basepair-2.0.7.tar.gz
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.4/37.4 kB • 00:00 • 47.2 MB/s
View at:
https://pypi.org/project/basepair/2.0.7/
```
| text/markdown | Basepair | info@basepairtech.com | null | null | null | bioinformatics, ngs analysis, dna-seq, rna-seq, chip-seq, atac-seq | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"License :: Free for non-commercial use",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.6",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | https://bitbucket.org/basepair/basepair | https://bitbucket.org/basepair/basepair/get/2.4.2.tar.gz | null | [] | [] | [] | [
"boto3",
"future",
"requests",
"awscli",
"logbook",
"tabulate"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T07:08:50.935455 | basepair-2.4.2.tar.gz | 69,445 | 38/99/c0547be4ee718f6c7307e773aeeabefe6e7d76db8ba7eee0c8dbf87849cd/basepair-2.4.2.tar.gz | source | sdist | null | false | 32dfa90ef6f8fabe91dcb8454f509450 | 91ed8f960eeb38a918826bd3a55ceabfa84731daf74914e67b48061c31f76836 | 3899c0547be4ee718f6c7307e773aeeabefe6e7d76db8ba7eee0c8dbf87849cd | null | [
"LICENSE",
"LICENSE.txt"
] | 258 |
2.1 | pretext | 2.37.2.dev20260220070834 | A package to author, build, and deploy PreTeXt projects. | # PreTeXt-CLI
A package for authoring and building [PreTeXt](https://pretextbook.org) documents.
- GitHub: <https://github.com/PreTeXtBook/pretext-cli/>
## Documentation and examples for authors/publishers
Most documentation for PreTeXt authors and publishers is available at:
- <https://pretextbook.org/doc/guide/html/>
Authors and publishers may also find the examples catalog useful as well:
- <https://pretextbook.org/examples.html>
We have a few notes below (TODO: publish these in the Guide).
### Installation
#### Installing Python
PreTeXt-CLI requires the Python version specified in `pyproject.toml`.
To check your version, type this into your terminal or command prompt:
```
python -V
```
If your version is 2.x, try this instead
(and if so, replace all future references to `python`
in these instructions with `python3`).
```
python3 -V
```
If you don't have a compatible Python available, try one of these:
- https://www.python.org/downloads/
- Windows warning: Be sure to select the option adding Python to your Path.
- https://github.com/pyenv/pyenv#installation (Mac/Linux)
- https://github.com/pyenv-win/pyenv-win#installation (Windows)
#### Installing PreTeXt-CLI
Once you've confirmed that you're using a valid version of Python, just
run (replacing `python` with `python3` if necessary):
```
python -m pip install --user pretext
```
(It's possible you will get an error like
`error: invalid command 'bdist_wheel'`
— good news, you can ignore it!)
After installation, try to run:
```
pretext --help
```
If that works, great! Otherwise, it likely means that Python packages
aren't available on your “PATH”. In that case, replace all `pretext`
commands with `python -m pretext` instead:
```
python -m pretext --help
```
Either way, you're now ready to use the CLI, the `--help` option will explain how to use all the different
subcommands like `pretext new` and `pretext build`.
#### External dependencies
We install as much as we can with the `pip install` command, but depending on your machine
you may require some extra software:
- [TeXLive](https://www.tug.org/texlive/)
- [pdftoppm/Ghostscript](https://github.com/abarker/pdfCropMargins/blob/master/doc/installing_pdftoppm_and_ghostscript.rst)
#### Upgrading PreTeXt-CLI
If you have an existing installation and you want to upgrade to a more recent version, you can run:
```
python -m pip install --upgrade pretext
```
#### Custom XSL
Custom XSL is not encouraged for most authors, but (for example) developers working
bleeding-edge XSL from core PreTeXt may want to call XSL different from that
which is shipped with a fixed version of the CLI. This may be accomplished by
adding an `<xsl/>` element to your target with a relative (to `project.ptx`) or
absolute path to the desired XSL. _(Note: this XSL must only import
other XSL files in the same directory or within subdirectories.)_
For example:
```
<target name="html">
<format>html</format>
<source>source/main.ptx</source>
<publication>publication/publication.ptx</publication>
<output-dir>output/html</output-dir>
<xsl>../pretext/xsl/pretext-html.xsl</xsl>
</target>
```
If your custom XSL file needs to import the XSL
shipped with the CLI (e.g. `pretext-common.xsl`), then use a `./core/`
prefix in your custom XSL's `xsl:import@href` as follows:
```
<xsl:import href="./core/pretext-common.xsl"/>
```
Similarly, `entities.ent` may be used:
```
<!DOCTYPE xsl:stylesheet [
<!ENTITY % entities SYSTEM "./core/entities.ent">
%entities;
]>
```
_Note: previously this was achieved with a `pretext-href` attribute - this is now deprecated and will be removed in a future release._
---
## Using this package as a library/API
We have started documenting how you can use this CLI programmatically in [docs/api.md](docs/api.md).
---
## Development
**Note.** The remainder of this documentation is intended only for those interested
in contributing to the development of this project. Anyone who simply wishes to
_use_ the PreTeXt-CLI can stop reading here.
From the "Clone or Download" button on GitHub, copy the `REPO_URL` into the below
command to clone the project.
```bash
git clone [REPO_URL]
cd pretext-cli
```
### Using a valid Python installation
Developers and contributors must install a
version of Python matching the requirements in `pyproject.toml`.
### Installing dependencies
<details>
<summary><b>Optional</b>: use pyenv as a virtual environment</summary>
The `pyenv` tool for Linux automates the process of running the correct
version of Python when working on this project (even if you have
other versions of Python installed on your system).
- https://github.com/pyenv/pyenv#installation
Run the following, replacing `PYTHON_VERSION` with your desired version.
```
pyenv install PYTHON_VERSION
```
#### Steps on Windows
On Windows, you can either use the bash shell and follow the directions above,
or try [pyenv-win](https://github.com/pyenv-win/pyenv-win#installation). In
the latter case, make sure to follow all the installation instructions, including
the **Finish the installation** steps. Then proceed to follow the directions above to
install a version of Python matching `pyproject.toml`. Finally, you may then need
to manually add that version of Python to your path.
</details>
<br/>
The first time you set up your development environment, you should follow these steps:
1. Follow these instructions to install `poetry`.
- https://python-poetry.org/docs/#installation
- Note 2022/06/21: you may ignore "This installer is deprecated". See
[python-poetry/poetry/issues/4128](https://github.com/python-poetry/poetry/issues/4128)
2. Install dependencies into a virtual environment with this command.
```
poetry install
```
3. Fetch a copy of the core pretext library and bundle templates by running
```
poetry run python scripts/fetch_core.py
```
The last command above should also be run when returning to development after some time, since the core commit you develop against might have changed.
Make sure you are in a `poetry shell` during development mode so that you
execute the development version of `pretext-cli` rather than the system-installed
version.
```
pretext --version # returns system version
poetry shell
pretext --version # returns version being developed
```
When inside a `poetry shell` you can navigate to other folders and run pretext commands. Doing so will use the current development environment version of pretext.
In newer versions of `poetry`, the `shell` command is not available anymore and is a [plugin](https://github.com/python-poetry/poetry-plugin-shell) instead. Alternatively, the command `poetry env activate` will print a line that you can then run to activate the virtual environment:
```
pretext --version # returns system version
poetry env activate # prints something like `source .venv/bin/activate`, which you should now run
source .venv/bin/activate
pretext --version # returns version being developed
```
### Updating dependencies
<details>
<summary>Show instructions</summary>
To add dependencies for the package, run
```
poetry add DEPENDENCY-NAME
```
If someone else has added a dependency:
```
poetry install
```
</details>
### Using a local copy of `PreTeXtBook/pretext`
See [docs/core_development.md](docs/core_development.md).
### Formatting code before a commit
All `.py` files are formatted with the [black](https://black.readthedocs.io/en/stable/)
python formatter and checked by [flake8](https://flake8.pycqa.org/en/latest/).
Proper formatting is enforced by checks in the Continuous Integration framework.
Before you commit code, you should make sure it is formatted with `black` and
passes `flake8` by running the following commands (on linux or mac)
from the _root_ project folder (most likely `pretext-cli`).
```
poetry run black .
poetry run flake8
```
### Testing
Tests are contained in `tests/`. To run all tests:
```
poetry run pytest
```
To run a specific test, say `test_name` inside `test_file.py`:
```
poetry run pytest -k name
```
Tests are automatically run by GitHub Actions when pushing to identify
regressions.
### Packaging
To check if a successful build is possible:
```
poetry run python scripts/build_package.py
```
To publish a new alpha release, first add/commit any changes. Then
the following handles bumping versions, publishing to PyPI,
and associated Git management.
```
poetry run python scripts/release_alpha.py
```
Publishing a stable release is similar:
```
poetry run python scripts/release_stable.py # patch +0.+0.+1
poetry run python scripts/release_stable.py minor # +0.+1.0
poetry run python scripts/release_stable.py major # +1.0.0
```
### Asset generation
Generating assets is complicated. See [docs/asset-generation.md](docs/asset-generation.md)
---
## About
### PreTeXt-CLI Team
- [Oscar Levin](https://math.oscarlevin.com/) is co-creator and lead developer of PreTeXt-CLI.
- [Steven Clontz](https://clontz.org/) is co-creator and a regular contributor of PreTeXt-CLI.
- Development of PreTeXt-CLI would not be possible without the frequent
[contributions](https://github.com/PreTeXtBook/pretext-cli/graphs/contributors) of the
wider [PreTeXt-Runestone Open Source Ecosystem](https://prose.runestone.academy).
### A note and special thanks
A `pretext` package unrelated to the PreTeXtBook.org project was released on PyPI
several years ago by Alex Willmer. We are grateful for his willingness to transfer
this namespace to us.
As such, versions of this project before 1.0 are released on PyPI under the
name `pretextbook`, while versions 1.0 and later are released as `pretext`.
### About PreTeXt
The development of [PreTeXt's core](https://github.com/PreTeXtBook/pretext)
is led by [Rob Beezer](http://buzzard.ups.edu/).
| text/markdown | Oscar Levin | oscar.levin@unco.edu | null | null | GPL-3.0-or-later | null | [
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://pretextbook.org | null | <4.0,>=3.10 | [] | [] | [] | [
"lxml<7,>=6",
"requests<3,>=2",
"GitPython<4,>=3",
"click<9,>=8",
"pdfCropMargins<1.1.0,>=1.0.9",
"PyPDF2<2.6,>=2.5",
"pyMuPDF<2.0,>=1.24",
"click-log<0.5,>=0.4",
"ghp-import<3,>=2",
"single-version<2,>=1",
"playwright<2,>=1",
"pydantic-xml==2.14.3",
"qrcode<8,>=7",
"psutil<8,>=7",
"plastex<4,>=3",
"jinja2<4,>=3",
"coloraide<5,>=4",
"pelican[markdown]<5.0,>=4.10; extra == \"homepage\" or extra == \"all\"",
"prefig[text]<0.6.0,>=0.5.7; extra == \"prefigure\" or extra == \"all\"",
"citeproc-py<1,>=0"
] | [] | [] | [] | [
"Repository, https://github.com/PreTeXtBook/pretext-cli"
] | poetry/1.8.4 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T07:08:47.045350 | pretext-2.37.2.dev20260220070834.tar.gz | 17,057,420 | 88/cd/57770cff14489598b39f4c7075967a431186e31551f5546ed8acacb1ccdd/pretext-2.37.2.dev20260220070834.tar.gz | source | sdist | null | false | c276c169d3f656fba601c1723e0ad411 | 271cb8859014639905a78f6930c50c39ab6ef97665df87b6581a3040435056fe | 88cd57770cff14489598b39f4c7075967a431186e31551f5546ed8acacb1ccdd | null | [] | 223 |
2.4 | python-gmp | 0.6.0a1 | Safe bindings to the GNU GMP library | Python-GMP
==========
Python extension module, providing bindings to the GNU GMP via the `ZZ library
<https://github.com/diofant/zz>`_ (version 0.9.0 or later required). This
module shouldn't crash the interpreter.
The gmp module can be used as a `gmpy2`_/`python-flint`_ replacement,
providing an integer type (`mpz`_) compatible with Python's `int`_. It also
includes functions compatible with the Python stdlib's submodule `math.integer
<https://docs.python.org/3.15/library/math.integer.html>`_.
This module requires Python 3.11 or later and has been tested with CPython
3.11 through 3.14, with PyPy3.11 7.3.20, and with GraalPy 25.0.
Free-threading builds of CPython are supported.
Releases are available in the Python Package Index (PyPI) at
https://pypi.org/project/python-gmp/.
Motivation
----------
CPython (and most other Python implementations, like PyPy) is optimized to
work with small (machine-sized) integers, and the algorithms it uses for big
integers usually aren't the best known in the field. Fortunately, it's
possible to use bindings (for example, the `gmpy2`_ package) to the GNU GMP,
which aims to be faster than any other bignum library for all operand sizes.
But such extension modules usually rely on GMP's default memory management
and can't recover from an allocation failure, so it's easy to crash the
Python interpreter during an interactive session. The following gmpy2 example
will reproduce the crash if you set an address-space limit for the Python
interpreter (e.g. with the ``prlimit`` command on Linux):
.. code:: pycon
>>> import gmpy2
>>> gmpy2.__version__
'2.2.1'
>>> z = gmpy2.mpz(29925959575501)
>>> while True: # this loop will crash the interpreter
... z = z*z
...
GNU MP: Cannot allocate memory (size=46956584)
Aborted
The gmp module handles such errors correctly:
.. code:: pycon
>>> import gmp
>>> z = gmp.mpz(29925959575501)
>>> while True:
... z = z*z
...
Traceback (most recent call last):
File "<python-input-3>", line 2, in <module>
z = z*z
~^~
MemoryError
>>> # interpreter still works, all variables in
>>> # the current scope are available,
>>> z.bit_length() # including pre-failure value of z
93882077
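The blow-up in the loops above is easy to quantify: squaring roughly doubles the bit length on every iteration, so memory use grows exponentially, and about 21 squarings already reach the ~94-million-bit value shown above. A plain-Python illustration (ordinary ints stand in for ``mpz``; the arithmetic is the same):

```python
# Squaring doubles the bit length each pass, so memory grows exponentially.
# Plain Python ints stand in for gmp.mpz here; the arithmetic is identical.
z = 29925959575501
bits = []
for _ in range(6):
    bits.append(z.bit_length())
    z = z * z

print(bits)  # [45, 90, 180, 359, 717, 1433]
```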
Warning on --disable-alloca configure option
--------------------------------------------
You should use a GNU GMP library compiled with the ``--disable-alloca``
configure option, which prevents the use of alloca() for temporary workspace
allocation; otherwise this module may crash the interpreter on a stack
overflow.
.. _gmpy2: https://pypi.org/project/gmpy2/
.. _python-flint: https://pypi.org/project/python-flint/
.. _mpz: https://python-gmp.readthedocs.io/en/latest/#gmp.mpz
.. _int: https://docs.python.org/3/library/functions.html#int
| text/x-rst | null | Sergey B Kirpichev <skirpichev@gmail.com> | null | Sergey B Kirpichev <skirpichev@gmail.com> | null | gmp, multiple-precision, arbitrary-precision, bignum | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: POSIX",
"Programming Language :: C",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Free Threading :: 2 - Beta",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: Implementation :: GraalPy",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pytest; extra == \"tests\"",
"hypothesis; extra == \"tests\"",
"mpmath; extra == \"tests\"",
"python-gmp[tests]; extra == \"ci\"",
"pytest-xdist; extra == \"ci\"",
"sphinx>=8.2; extra == \"docs\"",
"python-gmp[docs,tests]; extra == \"develop\"",
"pre-commit; extra == \"develop\"",
"pyperf; extra == \"develop\""
] | [] | [] | [] | [
"Homepage, https://github.com/diofant/python-gmp",
"Source Code, https://github.com/diofant/python-gmp",
"Bug Tracker, https://github.com/diofant/python-gmp/issues",
"Documentation, https://python-gmp.readthedocs.io/en/latest/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:07:56.103977 | python_gmp-0.6.0a1.tar.gz | 64,804 | 3f/f8/fe7aabc14292be5fcbdacd09eadf26b65794b493077aa4084c92bceb5e47/python_gmp-0.6.0a1.tar.gz | source | sdist | null | false | 5f37df7880016bb6da935a1cc24921ce | c2dead36195665f6333b76bbd8e53ba062cba534d55d6aadbeaa94a98ecc3d54 | 3ff8fe7aabc14292be5fcbdacd09eadf26b65794b493077aa4084c92bceb5e47 | MIT | [
"LICENSE"
] | 3,143 |
2.4 | neo-whisper | 0.1.8 | Improve Whisper with RoPE and latest tokenizers of OpenAI | # NeoWhisper
Improve `Whisper` of OpenAI by integrating Rotary Positional Embeddings (RoPE) and adding more options for tokenizers available in pypi package `tiktoken`.
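For readers new to RoPE, the core idea is to rotate each (even, odd) pair of query/key features by a position-dependent angle, so that relative position falls out of the attention dot product. A minimal pure-Python sketch of the rotation (illustrative only — not NeoWhisper's actual implementation, which operates on torch tensors):

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Rotate feature pairs of `vec` by angles that depend on position `pos`.

    Illustrative sketch of Rotary Positional Embeddings (RoPE); real
    implementations vectorize this over whole query/key tensors.
    """
    dim = len(vec)
    out = []
    for i in range(0, dim, 2):
        theta = pos * base ** (-i / dim)      # per-pair rotation frequency
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out.extend([x * c - y * s, x * s + y * c])  # plain 2-D rotation
    return out

# Rotation preserves the vector norm; position 0 leaves the vector unchanged.
v = [1.0, 0.0, 0.5, -0.5]
assert rope_rotate(v, 0) == v
```

The key property is that the dot product of two rotated vectors depends only on the *difference* of their positions, which is what makes the embedding "relative".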
## Support My Work
While this work comes truly from the heart, each project represents a significant investment of time -- from deep-dive research and code preparation to the final narrative and editing process.
I am incredibly passionate about sharing this knowledge, but maintaining this level of quality is a major undertaking.
If you find my work helpful and are in a position to do so, please consider supporting my work with a donation.
You can click <a href="https://pay.ababank.com/oRF8/8yp6hy53">here</a> to donate or scan the QR code below.
Your generosity acts as a huge encouragement and helps ensure that I can continue creating in-depth, valuable content for you.
<figure>
<div style="text-align: center;"><a name='slotMachine' ><img src="https://kimang18.github.io/assets/fig/aba_qr_kimang.JPG" width="500" /></a></div>
<figcaption> Using Cambodian bank account, you can donate by scanning my ABA QR code here. (or click <a href="https://pay.ababank.com/oRF8/8yp6hy53">here</a>. Make sure that receiver's name is 'Khun Kim Ang'.) </figcaption>
</figure>
# Installation
```bash
pip install neo-whisper
```
## Requirement
```bash
pip install git+https://github.com/openai/whisper.git
```
# Usage
## Loading tokenizer
```python
from neo_whisper import get_tokenizer
tokenizer_name = 'cl100k_base'
tokenizer = get_tokenizer(multilingual=True, language='km', task='transcribe', encoder_name=tokenizer_name)
print(tokenizer.eot)
```
## Loading NeoWhisper model
```python
from neo_whisper import NeoWhisper, NeoModelDimensions
dims = NeoModelDimensions(
n_vocab=tokenizer.encoding.n_vocab, # use the tokenizer's vocab size
n_mels=80,
n_audio_ctx=1500,
n_audio_state=384,
n_audio_head=6,
n_audio_layer=4,
n_text_ctx=448,
n_text_state=384,
n_text_head=6,
n_text_kv_head=6,
n_text_layer=4
)
model = NeoWhisper(dims)
```
This `model` works like the original OpenAI whisper model (in fact, `NeoWhisper` inherits from `Whisper` of openai-whisper); the difference is that the TextDecoder of `NeoWhisper` integrates `RoPE`.
## Loading Original Whisper model
It is possible to load the model implemented in openai-whisper but with new tokenizer (such as `cl100k_base`).
```python
from neo_whisper import Whisper, ModelDimensions
dims = ModelDimensions(
n_vocab=tokenizer.encoding.n_vocab, # use the tokenizer's vocab size
n_mels=80,
n_audio_ctx=1500,
n_audio_state=384,
n_audio_head=6,
n_audio_layer=4,
n_text_ctx=448,
n_text_state=384,
n_text_head=6,
n_text_layer=4
)
model = Whisper(dims)
```
__NOTE:__ When using a __new__ tokenizer, you need to train the Text Decoder of your model.
## Train TextDecoder
You can check out the notebook below to train your own NeoWhisper.
I would like to highlight that you can __use your own tokenizer__ to train `NeoWhisper`, as long as it is available in the `tiktoken` pypi package, and I recommend doing so __for the Khmer language__.
[](https://colab.research.google.com/github/Kimang18/rag-demo-with-mlx/blob/main/NeoWhisper_cl100k_Train.ipynb)
I also have a video below about training the Text Decoder of NeoWhisper:
[](https://youtu.be/XJaqGjhiGxw)
__Remark__
When the config of `AudioEncoder` is the same as the original whisper audio encoder trained by OpenAI, we can load the pre-trained weights for the encoder from OpenAI and train only the text decoder.
To load a model with the `AudioEncoder` of OpenAI whisper, simply pass `neo_encoder=False` when initializing `NeoWhisper` (by default, `neo_encoder=True`).
```python
from neo_whisper import NeoWhisper, NeoModelDimensions
import whisper
dims = NeoModelDimensions(
n_vocab=tokenizer.encoding.n_vocab, # use the tokenizer's vocab size
n_mels=80,
n_audio_ctx=1500,
n_audio_state=384,
n_audio_head=6,
n_audio_layer=4,
n_text_ctx=448,
n_text_state=384,
n_text_head=6,
n_text_kv_head=6,
n_text_layer=4
)
model = NeoWhisper(dims, neo_encoder=False)
# load pre-trained weight of audio encoder
model.encoder.load_state_dict(whisper.load_model("tiny").encoder.state_dict())
# freeze the pre-trained weight
for p in model.encoder.parameters():
p.requires_grad = False
```
## Transcription
We can use the trained model for transcription in the same way as the `openai-whisper` pypi package.
The only difference is that you must specify `tokenizer_name` properly.
Concretely, the tokenizer used at transcription time must be the tokenizer used to train the model.
So, `tokenizer_name` __must be provided__ in the arguments of `transcribe`.
```python
import torch
from neo_whisper import (
get_tokenizer,
NeoWhisper,
NeoModelDimensions,
transcribe
)
tokenizer_name = 'cl100k_base'
tokenizer = get_tokenizer(multilingual=True, task='transcribe', encoder_name=tokenizer_name)
dims = NeoModelDimensions(
n_vocab=tokenizer.encoding.n_vocab, # use the tokenizer's vocab size
n_mels=80,
n_audio_ctx=1500,
n_audio_state=384,
n_audio_head=6,
n_audio_layer=4,
n_text_ctx=448,
n_text_state=384,
n_text_head=6,
n_text_kv_head=6,
n_text_layer=4
)
model = NeoWhisper(dims, neo_encoder=False) # if you use neo_encoder, specify accordingly
best_model_params_path = "path/to/your/weights.pt"
model.load_state_dict(torch.load(best_model_params_path))
result = transcribe(model, audio_array, verbose=True, tokenizer_name=tokenizer_name)  # audio_array: your loaded audio
print(result['text'])
```
## TODO:
- [X] implement decoding function for `NeoWhisper` and `Whisper`
- [X] implement transcription for `NeoWhisper` and `Whisper`
- [X] notebook colab for training `NeoWhisper`
- [ ] benchmarking
| text/markdown | KHUN Kimang | kimang.khun@polytechnique.org | null | null | null | null | [] | [] | https://github.com/kimang18/KrorngAI | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T07:06:44.428948 | neo_whisper-0.1.8.tar.gz | 28,150 | 38/a5/3f10f270cc2f7392c74b25ade034ed200353a1c1e5091415608cfabd1260/neo_whisper-0.1.8.tar.gz | source | sdist | null | false | 6a6c5affbfa15989ca41fba36c820bfe | 36235196533e151a59a63e970284637b942bf2ccf157aca24fa96ac57c257b7b | 38a53f10f270cc2f7392c74b25ade034ed200353a1c1e5091415608cfabd1260 | null | [
"LICENSE"
] | 176 |
2.1 | lunchbox | 0.9.9 | A library of various tools for common python tasks | <p>
<a href="https://www.linkedin.com/in/alexandergbraun" rel="nofollow noreferrer">
<img src="https://www.gomezaparicio.com/wp-content/uploads/2012/03/linkedin-logo-1-150x150.png"
alt="linkedin" width="30px" height="30px"
>
</a>
<a href="https://github.com/theNewFlesh" rel="nofollow noreferrer">
<img src="https://tadeuzagallo.com/GithubPulse/assets/img/app-icon-github.png"
alt="github" width="30px" height="30px"
>
</a>
<a href="https://pypi.org/user/the-new-flesh" rel="nofollow noreferrer">
<img src="https://cdn.iconscout.com/icon/free/png-256/python-2-226051.png"
alt="pypi" width="30px" height="30px"
>
</a>
<a href="http://vimeo.com/user3965452" rel="nofollow noreferrer">
<img src="https://cdn1.iconfinder.com/data/icons/somacro___dpi_social_media_icons_by_vervex-dfjq/500/vimeo.png"
alt="vimeo" width="30px" height="30px"
>
</a>
<a href="https://alexgbraun.com" rel="nofollow noreferrer">
<img src="https://i.ibb.co/fvyMkpM/logo.png"
alt="alexgbraun" width="30px" height="30px"
>
</a>
</p>
[](https://github.com/thenewflesh/lunchbox/blob/master/LICENSE)
[](https://github.com/thenewflesh/lunchbox/blob/master/docker/config/pyproject.toml)
[](https://pypi.org/project/lunchbox/)
[](https://pepy.tech/project/lunchbox)
<p><img src="sphinx/images/logo.png" style="max-width: 100%"></p>
# Introduction
A library of various tools for common python tasks
See [documentation](https://thenewflesh.github.io/lunchbox/) for details.
# Installation for Developers
### Docker
1. Install [docker-desktop](https://docs.docker.com/desktop/)
2. Ensure docker-desktop has at least 4 GB of memory allocated to it.
3. `git clone git@github.com:theNewFlesh/lunchbox.git`
4. `cd lunchbox`
5. `chmod +x bin/lunchbox`
6. `bin/lunchbox docker-start`
- If building on a silicon Mac change the value of the `PLATFORM` variable in
the cli.py module to `linux/arm64`.
The service should take a few minutes to start up.
Run `bin/lunchbox --help` for more help on the command line tool.
### ZSH Setup
1. `bin/lunchbox` must be run from this repository's top level directory.
2. Therefore, if using zsh, it is recommended that you paste the following line
in your ~/.zshrc file:
- `alias lunchbox="cd [parent dir]/lunchbox; bin/lunchbox"`
- Replace `[parent dir]` with the parent directory of this repository
3. Consider adding the following line to your ~/.zshrc if you are using a silicon Mac:
- `export DOCKER_DEFAULT_PLATFORM=linux/arm64`
4. Running the `zsh-complete` command will enable tab completion of the cli
commands in the next shell session.
For example:
- `lunchbox [tab]` will show you all the cli options, which you can press
tab to cycle through
- `lunchbox docker-[tab]` will show you only the cli options that begin with
"docker-"
# Installation for Production
### Python
`pip install lunchbox`
Please see the prod.dockerfile for an official example of how to build a docker
image with lunchbox.
### Docker
1. Install [docker-desktop](https://docs.docker.com/desktop/)
2. `docker pull theNewFlesh/lunchbox:[mode]-[version]`
---
# Quickstart Guide
This repository contains a suite of commands for the whole development
process, covering everything from testing to documentation generation and
publishing pip packages.
These commands can be accessed through:
- The VSCode task runner
- The VSCode task runner side bar
- A terminal running on the host OS
- A terminal within this repository's docker container
Running the `zsh-complete` command will enable tab completions of the CLI.
See the zsh setup section for more information.
### Command Groups
Development commands are grouped by one of 10 prefixes:
| Command | Description |
| ---------- | ---------------------------------------------------------------------------------- |
| build | Commands for building packages for testing and pip publishing |
| docker | Common docker commands such as build, start and stop |
| docs | Commands for generating documentation and code metrics |
| library | Commands for managing python package dependencies |
| session | Commands for starting interactive sessions such as jupyter lab and python |
| state | Command to display the current state of the repo and container |
| test | Commands for running tests, linter and type annotations |
| version | Commands for bumping project versions |
| quickstart | Display this quickstart guide |
| zsh | Commands for running a zsh session in the container and generating zsh completions |
### Common Commands
Here are some frequently used commands to get you started:
| Command | Description |
| ----------------- | ----------------------------------------------------------------- |
| docker-restart | Restart container |
| docker-start | Start container |
| docker-stop | Stop container |
| docs-full | Generate documentation, coverage report, diagram and code metrics |
| library-add | Add a given package to a given dependency group |
| library-graph-dev | Graph dependencies in dev environment |
| library-remove | Remove a given package from a given dependency group |
| library-search | Search for pip packages |
| library-update | Update dev dependencies |
| session-lab | Run jupyter lab server |
| state             | State of repository and Docker container                          |
| test-dev | Run all tests |
| test-lint | Run linting and type checking |
| zsh | Run ZSH session inside container |
| zsh-complete | Generate ZSH completion script |
---
# Development CLI
bin/lunchbox is a command line interface (defined in cli.py) that
works with Python 2.7 and above, as it has no dependencies.
Commands generally do not expect any arguments or flags.
Its usage pattern is: `bin/lunchbox COMMAND [-a --args]=ARGS [-h --help] [--dryrun]`
### Commands
The following is a complete list of all available development commands:
| Command | Description |
| -------------------------- | ------------------------------------------------------------------- |
| build-edit-prod-dockerfile | Edit prod.dockerfile to use local package                           |
| build-local-package | Generate local pip package in docker/dist |
| build-package | Generate pip package of repo |
| build-prod | Build production version of repo for publishing |
| build-publish | Run production tests first then publish pip package of repo to PyPi |
| build-publish-test | Run tests and then publish pip package of repo to test PyPi |
| build-test | Build test version of repo for prod testing |
| docker-build | Build development image |
| docker-build-from-cache | Build development image from registry cache |
| docker-build-no-cache | Build development image without cache |
| docker-build-prod | Build production image |
| docker-build-prod-no-cache | Build production image without cache |
| docker-container | Display the Docker container id |
| docker-destroy | Shutdown container and destroy its image |
| docker-destroy-prod | Shutdown production container and destroy its image |
| docker-image | Display the Docker image id |
| docker-prod | Start production container |
| docker-pull-dev | Pull development image from Docker registry |
| docker-pull-prod | Pull production image from Docker registry |
| docker-push-dev | Push development image to Docker registry |
| docker-push-dev-latest | Push development image to Docker registry with dev-latest tag |
| docker-push-prod | Push production image to Docker registry |
| docker-push-prod-latest | Push production image to Docker registry with prod-latest tag |
| docker-remove | Remove Docker container |
| docker-restart | Restart container |
| docker-start | Start container |
| docker-stop | Stop container |
| docs | Generate sphinx documentation |
| docs-architecture | Generate architecture.svg diagram from all import statements |
| docs-full | Generate documentation, coverage report, diagram and code metrics |
| docs-metrics | Generate code metrics report, plots and tables |
| docs-sphinx | Generate sphinx rst files |
| library-add | Add a given package to a given dependency group |
| library-graph-dev | Graph dependencies in dev environment |
| library-graph-prod | Graph dependencies in prod environment |
| library-install-dev | Install all dependencies into dev environment |
| library-install-prod | Install all dependencies into prod environment |
| library-list-dev | List packages in dev environment |
| library-list-prod | List packages in prod environment |
| library-lock-dev | Resolve dev.lock file |
| library-lock-prod | Resolve prod.lock file |
| library-remove | Remove a given package from a given dependency group |
| library-search | Search for pip packages |
| library-sync-dev | Sync dev environment with packages listed in dev.lock |
| library-sync-prod | Sync prod environment with packages listed in prod.lock |
| library-update | Update dev dependencies |
| library-update-pdm | Update PDM |
| quickstart | Display quickstart guide |
| session-lab | Run jupyter lab server |
| session-python | Run python session with dev dependencies |
| state | State of repository and Docker container |
| test-coverage | Generate test coverage report |
| test-dev | Run all tests |
| test-fast | Test all code excepts tests marked with SKIP_SLOW_TESTS decorator |
| test-format | Format all python files |
| test-lint | Run linting and type checking |
| test-prod | Run tests across all support python versions |
| version | Full resolution of repo: dependencies, linting, tests, docs, etc |
| version-bump | Bump repo's patch version up to x.x.20, then bump minor version |
| version-bump-major | Bump pyproject major version |
| version-bump-minor | Bump pyproject minor version |
| version-bump-patch | Bump pyproject patch version |
| version-commit | Tag with version and commit changes to master |
| zsh | Run ZSH session inside Docker container |
| zsh-complete | Generate oh-my-zsh completions |
| zsh-root | Run ZSH session as root inside Docker container |
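The `version-bump` rule above (bump the patch version up to x.x.20, then roll the minor) can be sketched as follows. This is a hypothetical helper for illustration, not lunchbox's actual code, which edits pyproject.toml:

```python
def bump(major: int, minor: int, patch: int) -> tuple:
    """Bump the patch version until it reaches 20, then roll the minor.

    Hypothetical sketch of the version-bump rule described above; the real
    command rewrites pyproject.toml rather than returning a tuple.
    """
    if patch >= 20:
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)

print(bump(0, 9, 19))  # (0, 9, 20)
print(bump(0, 9, 20))  # (0, 10, 0)
```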
### Flags
| Short | Long | Description |
| ----- | --------- | ---------------------------------------------------- |
| -a | --args | Additional arguments, this can generally be ignored |
| -h | --help | Prints command help message to stdout |
| | --dryrun | Prints command that would otherwise be run to stdout |
---
# Production CLI
lunchbox comes with a command line interface defined in command.py.
Its usage pattern is: `lunchbox COMMAND [ARGS] [FLAGS] [-h --help]`
## Commands
---
### bash-completion
Prints BASH completion code to be written to a _lunchbox completion file
Usage: `lunchbox bash-completion`
---
### slack
Posts a slack message to a given channel
Usage: `lunchbox slack URL CHANNEL MESSAGE`
| Argument | Description |
| -------- | ------------------------------------ |
| url | https://hooks.slack.com/services URL |
| channel | slack channel name |
| message | message to be posted |
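Under the hood, posting to a Slack incoming webhook is just an HTTP POST of a small JSON body. A stdlib sketch of what a command like this presumably does (hypothetical — see command.py for the real implementation):

```python
import json
from urllib import request

def build_slack_request(url, channel, message):
    """Build the HTTP request for a Slack incoming-webhook post.

    Hypothetical sketch; lunchbox's actual implementation lives in command.py.
    """
    body = json.dumps({"channel": channel, "text": message}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_slack_request(
    "https://hooks.slack.com/services/T000/B000/XXXX", "general", "hello"
)
# request.urlopen(req) would actually send it
```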
---
### zsh-completion
Prints ZSH completion code to be written to a _lunchbox completion file
Usage: `lunchbox zsh-completion`
| text/markdown | null | Alex Braun <alexander.g.braun@gmail.com> | null | null | MIT | tool, tools, general, slack, enforce | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"documentation, https://theNewFlesh.github.io/lunchbox",
"repository, https://github.com/thenewflesh/lunchbox"
] | pdm/2.26.6 CPython/3.13.12 Linux/6.11.0-1018-azure | 2026-02-20T07:05:21.219059 | lunchbox-0.9.9.tar.gz | 23,820 | cb/c0/7f203bec8b1c47debf15d7b16d71ca68a53a53b8d9363008db2456f9cb29/lunchbox-0.9.9.tar.gz | source | sdist | 0.9.9 | false | c58a4a2cbb19ab7c3656d7b819e4755f | dc04a20656c970a22932edfffab48cc12dcf967ae0c7d540a50f87d562f4d922 | cbc07f203bec8b1c47debf15d7b16d71ca68a53a53b8d9363008db2456f9cb29 | null | [] | 241 |
2.4 | cashpayyy | 1.1.1 | Official CashPay Payment Gateway SDK for Python | # CashPay Python SDK
Official Python SDK for integrating with CashPay Payment Gateway.
## Installation
```bash
pip install cashpay
```
## Quick Start
```python
from cashpay import CashPay
client = CashPay(
api_key='cpk_live_xxx',
api_secret='cps_live_xxx',
environment='production' # or 'sandbox'
)
```
## Usage Examples
### Check Balance
```python
# Get unified balance
balance = client.balance.get()
print(f"Total Balance: ₹{balance.total_balance / 100}")
# Get settlement balance
settlement = client.balance.get_settlement()
print(f"Available for withdrawal: ₹{settlement.available_withdrawal_amount / 100}")
# Get payout balance
payout = client.balance.get_payout()
print(f"Payout Balance: ₹{payout.payout_balance / 100}")
```
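All amounts in the API are integers in paise (₹1 = 100 paise), which is why the examples above divide by 100 for display. A small helper for converting rupee amounts without float rounding surprises (our own convenience function, not part of the SDK):

```python
from decimal import Decimal

def rupees_to_paise(amount):
    """Convert a rupee amount (e.g. '99.99') to integer paise exactly.

    Convenience helper, not part of the SDK. Decimal avoids binary-float
    rounding such as 99.99 * 100 == 9998.999999999998.
    """
    paise = Decimal(str(amount)) * 100
    if paise != paise.to_integral_value():
        raise ValueError("sub-paise precision not supported")
    return int(paise)

print(rupees_to_paise("99.99"))  # 9999
print(rupees_to_paise(100))      # 10000
```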
### Payins
```python
# 1. Create a Hosted Payment Page (Direct Redirect Flow)
page = client.payins.create_payment_page({
'amount': 10000, # ₹100
'orderId': 'ORDER_123',
'customerName': 'John Doe',
'customerEmail': 'john@example.com',
'customerPhone': '9876543210',
'returnUrl': 'https://yoursite.com/payment/result'
})
print(f"Payment URL: {page['paymentUrl']}")
# 2. Create UPI Intent (for Mobile App redirects)
intent = client.payins.create_intent({
'amount': 5000,
'orderId': 'ORDER_124',
'customer_phone': '9876543210'
})
print(f"UPI DeepLink: {intent['intentUrl']}")
# 3. Create Card Payment
card = client.payins.create_card({
'amount': 10000,
'orderId': 'ORDER_125',
'cardNumber': '4111111111111111',
'expiryMonth': '12',
'expiryYear': '2025',
'cvv': '123'
})
# Get payin status by payment ID
status = client.payins.get_status('payment-uuid')
# Get payin by order ID
payin = client.payins.get_by_order_id('ORDER_124')
print(f"Status: {payin['status']}, UTR: {payin['utr']}")
```
### Payment Links
```python
# Create a shareable payment link or QR
link = client.payment_links.create({
'amount': 50000,
'description': 'Invoice #001',
'type': 'one-time', # or 'reusable'
'outputType': 'link' # or 'qr'
})
print(f"Short URL: {link['shortUrl']}")
# List payment links
links = client.payment_links.list(page=1, limit=10, status='active')
# Deactivate a link
client.payment_links.deactivate('link-uuid')
```
### Beneficiaries & Bank Accounts
```python
# Add a beneficiary for payouts
beneficiary = client.beneficiaries.create({
'name': 'John Doe',
'accountNumber': '50100123456789',
'ifsc': 'HDFC0001234'
})
# Add a merchant bank account for settlements
bank = client.bank_accounts.create({
'accountNumber': '50100123456789',
'ifsc': 'HDFC0001234',
'accountHolderName': 'My Business'
})
```
### Payouts
```python
# Create a payout
payout = client.payouts.create(
beneficiary_id='ben_xxx',
amount=10000, # ₹100 in paise
reference_id='PAY-001',
narration='Salary payment',
mode='IMPS',
idempotency_key='unique-key'
)
print(f"Payout ID: {payout['id']}, Status: {payout['status']}")
# Create bulk payouts (max 100)
bulk_result = client.payouts.create_bulk([
{'beneficiaryId': 'ben_1', 'amount': 10000, 'referenceId': 'PAY-001'},
{'beneficiaryId': 'ben_2', 'amount': 20000, 'referenceId': 'PAY-002'},
], idempotency_key='bulk-key')
print(f"Success: {bulk_result['successCount']}, Failed: {bulk_result['failureCount']}")
# List payouts
payouts = client.payouts.list(page=1, limit=20, status='COMPLETED')
# Get payout by ID
payout_details = client.payouts.get('payout-uuid')
# Get payout by reference ID
payout_by_ref = client.payouts.get_by_reference_id('PAY-001')
# Cancel payout
cancelled = client.payouts.cancel('payout-uuid')
```
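The `idempotency_key` arguments above guard against duplicate payouts when a request is retried. A common pattern (our suggestion, not an SDK requirement) is to derive the key deterministically from the business reference, so a retry of the same logical payout reuses the same key:

```python
import uuid

# Any stable namespace UUID works; this one is made up for illustration.
PAYOUT_NAMESPACE = uuid.UUID("9b2c1c1e-8f4d-4a6e-b1a0-3d5e7f9c2b10")

def payout_idempotency_key(reference_id):
    """Derive a stable idempotency key from a business reference.

    uuid5 is deterministic: retrying the same reference yields the same key,
    so the gateway can de-duplicate the payout.
    """
    return str(uuid.uuid5(PAYOUT_NAMESPACE, reference_id))

key = payout_idempotency_key("PAY-001")
assert key == payout_idempotency_key("PAY-001")  # stable across retries
```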
### Settlements
```python
# Create settlement with saved bank account
settlement = client.settlements.create(
amount=100000, # ₹1000 in paise
bank_account_id='bank_xxx',
reference_id='SET-001',
idempotency_key='unique-key'
)
# Create settlement with direct bank details
direct_settlement = client.settlements.create(
amount=100000,
account_number='50100123456789',
ifsc='HDFC0001234',
account_holder_name='John Doe',
reference_id='SET-002'
)
# Create bulk settlements
bulk_settlements = client.settlements.create_bulk([
{'amount': 50000, 'bankAccountId': 'bank_1', 'referenceId': 'SET-001'},
{'amount': 75000, 'bankAccountId': 'bank_2', 'referenceId': 'SET-002'},
])
# List settlements
settlements = client.settlements.list(status='COMPLETED')
# Get settlement by ID
settlement_details = client.settlements.get('settlement-uuid')
# Cancel settlement
cancelled_settlement = client.settlements.cancel('settlement-uuid')
```
### Webhook Verification
```python
from flask import Flask, request
app = Flask(__name__)
@app.route('/webhook', methods=['POST'])
def webhook():
signature = request.headers.get('x-webhook-signature')
payload = request.get_data(as_text=True)
is_valid = client.verify_webhook(payload, signature, 'your-webhook-secret')
if not is_valid:
return 'Invalid signature', 401
event = request.get_json()
if event['type'] == 'payin.completed':
print('Payment completed:', event['data'])
elif event['type'] == 'payout.completed':
print('Payout completed:', event['data'])
elif event['type'] == 'settlement.completed':
print('Settlement completed:', event['data'])
return 'OK', 200
```
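Signature checks of this kind are typically an HMAC of the raw request body under your webhook secret. If you ever need to verify outside the SDK, here is a stdlib sketch (assuming an HMAC-SHA256 hex digest — confirm the exact scheme in the CashPay docs):

```python
import hashlib
import hmac

def verify_signature(payload, signature, secret):
    """Constant-time check of an HMAC-SHA256 hex signature.

    Assumes the gateway signs the raw body with HMAC-SHA256 and sends the
    hex digest; confirm the actual scheme in the official documentation.
    """
    expected = hmac.new(
        secret.encode("utf-8"), payload.encode("utf-8"), hashlib.sha256
    ).hexdigest()
    # compare_digest avoids timing side channels
    return hmac.compare_digest(expected, signature)

sig = hmac.new(b"whsec", b'{"type":"payin.completed"}', hashlib.sha256).hexdigest()
assert verify_signature('{"type":"payin.completed"}', sig, "whsec")
```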
## Error Handling
```python
from cashpay import CashPay, CashPayError
try:
payout = client.payouts.create(
beneficiary_id='invalid-id',
amount=10000
)
except CashPayError as e:
print(f"Error: {e.message}")
print(f"Status: {e.status_code}")
print(f"Code: {e.code}")
print(f"Details: {e.details}")
```
## Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `api_key` | str | required | Your API key |
| `api_secret` | str | required | Your API secret |
| `environment` | str | 'production' | 'sandbox' or 'production' |
| `base_url` | str | auto | Custom API base URL |
| `timeout` | int | 30 | Request timeout in seconds |
## Support
- Documentation: https://docs.cashpay.com
- Email: support@cashpay.com
- GitHub Issues: https://github.com/cashpay/cashpay-python-sdk/issues
| text/markdown | CashPay | support@cashpay.com | null | null | null | cashpay payment gateway upi payin payout settlement india | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/cashpay/cashpay-python-sdk | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T07:05:10.250268 | cashpayyy-1.1.1.tar.gz | 9,498 | 05/79/c550f448b464c3f2bd4850b6ebcec6b4bb32dc75ffbcc84e16bbf940db4a/cashpayyy-1.1.1.tar.gz | source | sdist | null | false | 18c44cd5762695569957c36115a97886 | 2c2ed21de053c1810ac710dbd32dd8c372fa04d23b8d9c7960339850ba63aa91 | 0579c550f448b464c3f2bd4850b6ebcec6b4bb32dc75ffbcc84e16bbf940db4a | null | [] | 225 |
2.4 | meddatasets | 0.1.0 | A curated collection of medical and healthcare datasets for data analysis, clinical research, epidemiology, and education. Includes cancer data, chronic disease diagnostics, hospital management records, public health, statistics and more from Kaggle sources. | # meddatasets
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
The `meddatasets` package provides a curated collection of medical and healthcare
datasets for data analysis, clinical research, epidemiology, and education in Python.
It includes cancer data, chronic disease diagnostics, hospital management records,
public health statistics, and more — sourced from Kaggle.
## Installation
You can install the `meddatasets` package from PyPI:
```bash
pip install meddatasets
```
## Usage
```python
import meddatasets as md
# List all available datasets
datasets = md.list_datasets()
print(datasets)
# Load a specific dataset
df = md.load_dataset('breast_cancer')
print(df.head())
# Describe dataset
df_01 = md.describe('smoking_cancer_risk')
print(df_01)
```
## 📊 Some Available Datasets
| Dataset | Description |
|---------|-------------|
| `breast_cancer` | Breast Cancer dataset derived from the Breast Cancer Wisconsin (Diagnostic) dataset.|
| `smoking_cancer_risk` | Smoking and cancer risk analysis dataset.|
| `covid_worldwide` | Dataset containing COVID-19 cases and deaths worldwide.|
| `water_pollution_disease` | Dataset containing data on water pollution and its impact on public health.|
> Run `meddatasets.list_datasets()` or `md.list_datasets()` (using `md` as alias) to see the full list of available datasets.
## Disclaimer
`meddatasets` is intended for **educational and research purposes only**.
The datasets provided should not be used for clinical diagnosis or medical
decision-making.
## License
The `meddatasets` library is released under the **MIT License**, allowing free use for both commercial and non-commercial purposes.
See the [LICENSE](LICENSE) file for details.
| text/markdown | Renzo Caceres Rossi | Renzo Caceres Rossi <arenzocaceresrossi@gmail.com> | null | Renzo Caceres Rossi <arenzocaceresrossi@gmail.com> | MIT License
Copyright (c) 2026 Renzo Caceres Rossi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| datasets, medicine, health, public health, cancer, clinical data, chronic diseases, diabetes, health statistics, epidemiology, data science, research, data analysis, hospital management, machine learning, kaggle | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Medical Science Apps.",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent",
"Natural Language :: English"
] | [] | https://github.com/lightbluetitan/meddatasets-py | null | >=3.8 | [] | [] | [] | [
"pandas>=1.5"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T07:04:36.383123 | meddatasets-0.1.0.tar.gz | 870,096 | 00/25/b51ba2991e9d7bc106ea61e991d3f1acac5081dcebc2c28e2e87a70c80d0/meddatasets-0.1.0.tar.gz | source | sdist | null | false | 62b559f9dc651f9fc72f2cd146084325 | 02c3ef480c1cf90e014c42d3b49e4086be40dc2f85fb58a5981ac4ca8bfa0725 | 0025b51ba2991e9d7bc106ea61e991d3f1acac5081dcebc2c28e2e87a70c80d0 | null | [
"LICENSE"
] | 277 |
2.4 | insightfacex | 0.7.4 | InsightFace Python Library | # InsightFace Python Library
## License
The code of the InsightFace Python Library is released under the MIT License. There is no limitation on either academic or commercial usage.
**The pretrained models we provided with this library are available for non-commercial research purposes only, including both auto-downloading models and manual-downloading models.**
## Install
### Install Inference Backend
For ``insightface<=0.1.5``, we use MXNet as the inference backend.
Starting from insightface>=0.2, we use onnxruntime as the inference backend.
You must install ``onnxruntime-gpu`` manually to enable GPU inference, or install ``onnxruntime`` for CPU-only inference.
## Change Log
### [0.7.1] - 2022-12-14
#### Changed
- Change model downloading provider to cloudfront.
### [0.7] - 2022-11-28
#### Added
- Add face swapping model and example.
#### Changed
- Set default ORT provider to CUDA and CPU.
### [0.6] - 2022-01-29
#### Added
- Add pose estimation in face-analysis app.
#### Changed
- Change model automated downloading url, to ucloud.
## Quick Example
```python
import cv2
import numpy as np
import insightface
from insightface.app import FaceAnalysis
from insightface.data import get_image as ins_get_image
app = FaceAnalysis(providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
img = ins_get_image('t1')
faces = app.get(img)
rimg = app.draw_on(img, faces)
cv2.imwrite("./t1_output.jpg", rimg)
```
This quick example will detect faces from the ``t1.jpg`` image and draw detection results on it.
## Model Zoo
In the latest version of the insightface library, we provide the following model packs:
The name in **bold** is the default model pack. **Auto** means the pack can be downloaded directly through the Python library.
Once you have manually downloaded a zip model pack, unzip it under `~/.insightface/models/` before calling the program.
| Name | Detection Model | Recognition Model | Alignment | Attributes | Model-Size | Link | Auto |
| ------------- | --------------- | -------------------- | ------------ | ---------- | ---------- | ------------------------------------------------------------ | ------------- |
| antelopev2 | SCRFD-10GF | ResNet100@Glint360K | 2d106 & 3d68 | Gender&Age | 407MB | [link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view?usp=sharing) | N |
| **buffalo_l** | SCRFD-10GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 326MB | [link](https://drive.google.com/file/d/1qXsQJ8ZT42_xSmWIYy85IcidpiZudOCB/view?usp=sharing) | Y |
| buffalo_m | SCRFD-2.5GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 313MB | [link](https://drive.google.com/file/d/1net68yNxF33NNV6WP7k56FS6V53tq-64/view?usp=sharing) | N |
| buffalo_s | SCRFD-500MF | MBF@WebFace600K | 2d106 & 3d68 | Gender&Age | 159MB | [link](https://drive.google.com/file/d/1pKIusApEfoHKDjeBTXYB3yOQ0EtTonNE/view?usp=sharing) | N |
| buffalo_sc | SCRFD-500MF | MBF@WebFace600K | - | - | 16MB | [link](https://drive.google.com/file/d/19I-MZdctYKmVf3nu5Da3HS6KH5LBfdzG/view?usp=sharing) | N |
Recognition Accuracy:
| Name | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) |
| :-------- | ------ | ------- | --------- | ----------- | ---------- | ----- | ------ | -------- | --------- |
| buffalo_l | 91.25 | 90.29 | 94.70 | 93.16 | 74.96 | 99.83 | 99.33 | 98.23 | 97.25 |
| buffalo_s | 71.87 | 69.45 | 80.45 | 73.39 | 51.03 | 99.70 | 98.00 | 96.58 | 95.02 |
*buffalo_m has the same accuracy as buffalo_l.*
*buffalo_sc has the same accuracy as buffalo_s.*
**Note that these models are available for non-commercial research purposes only.**
For insightface>=0.3.3, models are downloaded automatically once an ``app = FaceAnalysis()`` instance is initialized.
For insightface==0.3.2, you must first download the model package with the command:
```shell
insightface-cli model.download buffalo_l
```
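For the manual-download path described above, the unzip step can be sketched with the standard library (a minimal illustration; ``install_model_pack`` is a hypothetical helper, not part of the insightface API):

```python
import zipfile
from pathlib import Path

def install_model_pack(zip_path, root=None):
    """Extract a manually downloaded model pack zip under the models root.

    By default insightface looks for packs in ~/.insightface/models/.
    """
    root = Path(root) if root is not None else Path.home() / ".insightface" / "models"
    root.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        # A pack zip typically contains a top-level directory named after the pack.
        zf.extractall(root)
    return root
```

For example, ``install_model_pack("buffalo_m.zip")`` leaves the pack where ``FaceAnalysis(name='buffalo_m')`` can find it.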
## Use Your Own Licensed Model
You can simply create a new model directory under ``~/.insightface/models/`` and replace the pretrained models we provide with your own. Then call ``app = FaceAnalysis(name='your_model_zoo')`` to load them.
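A minimal sketch of that directory setup with the standard library (the pack name ``my_model_zoo`` and the model filenames below are placeholders, not real insightface assets):

```python
from pathlib import Path

# Hypothetical pack name; insightface discovers packs by directory name
# under the models root.
model_dir = Path.home() / ".insightface" / "models" / "my_model_zoo"
model_dir.mkdir(parents=True, exist_ok=True)

# Place your own ONNX files inside, e.g. (paths are placeholders):
# import shutil
# shutil.copy("my_detector.onnx", model_dir / "my_detector.onnx")

print(model_dir.name)  # -> my_model_zoo
```

Afterwards, ``app = FaceAnalysis(name='my_model_zoo')`` will pick the pack up.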
## Call Models
The latest insightface library only supports ONNX models. Once you have trained detection or recognition models with PyTorch, MXNet, or any other framework, you can convert them to the ONNX format and load them with the insightface library.
### Call Detection Models
```python
import cv2
import numpy as np
import insightface
from insightface.app import FaceAnalysis
from insightface.data import get_image as ins_get_image
# Method-1, use FaceAnalysis
app = FaceAnalysis(allowed_modules=['detection']) # enable detection model only
app.prepare(ctx_id=0, det_size=(640, 640))
# Method-2, load model directly
detector = insightface.model_zoo.get_model('your_detection_model.onnx')
detector.prepare(ctx_id=0, input_size=(640, 640))
```
### Call Recognition Models
```python
import cv2
import numpy as np
import insightface
from insightface.app import FaceAnalysis
from insightface.data import get_image as ins_get_image
handler = insightface.model_zoo.get_model('your_recognition_model.onnx')
handler.prepare(ctx_id=0)
```
| text/markdown | null | InsightFace Contributors <contact@insightface.ai>, Hameer Abbasi <hameerabbasi@yahoo.com> | null | null | null | null | [] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"numpy",
"onnx",
"tqdm",
"requests",
"matplotlib",
"Pillow",
"scipy",
"scikit-learn",
"scikit-image",
"easydict",
"cython",
"albumentations",
"prettytable"
] | [] | [] | [] | [
"Homepage, https://github.com/hameerabbasi/insightface",
"Repository, https://github.com/hameerabbasi/insightface.git",
"Issues, https://github.com/hameerabbasi/insightface/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T07:03:22.493833 | insightfacex-0.7.4.tar.gz | 424,825 | 06/a5/6d33d2505d9dbaaf767ab74e87c8b19250e7b8127953d2de57a442914284/insightfacex-0.7.4.tar.gz | source | sdist | null | false | f966dc17f9bd1fead4f4c0eb9a707ee7 | 45a25448207e5d5444745d8ff712e2474a0fcf27b6b3625c3299a57c45bbf948 | 06a56d33d2505d9dbaaf767ab74e87c8b19250e7b8127953d2de57a442914284 | MIT | [] | 3,530 |
2.1 | wordlift-client | 1.142.0 | WordLift API | WordLift API
| text/markdown | WordLift | hello@wordlift.io | null | null | (c) copyright 2022-present WordLift | OpenAPI, OpenAPI-Generator, WordLift API | [] | [] | null | null | null | [] | [] | [] | [
"urllib3<2.1.0,>=1.25.3",
"python-dateutil",
"aiohttp>=3.0.0",
"aiohttp-retry>=2.8.3",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:03:14.156911 | wordlift_client-1.142.0.tar.gz | 199,152 | 8f/23/d01a18a01cbaf698e31e98f4651c12ae4cd43f53aa5a0475de3286fb8f59/wordlift_client-1.142.0.tar.gz | source | sdist | null | false | 31a3557d29705435a6bbe9a64422cb49 | 0222e5c66c3ecec5ebc32a0f9320ef0d5020dc9b990f833fd7361758dc763dbc | 8f23d01a18a01cbaf698e31e98f4651c12ae4cd43f53aa5a0475de3286fb8f59 | null | [] | 291 |
2.4 | launchable | 1.121.1 | Launchable CLI | # Usage
See https://help.launchableinc.com/resources/cli-reference/ and
https://help.launchableinc.com/getting-started/.
# Development
## Preparation
We recommend Pipenv
```shell
pip install pipenv==2021.5.29
pipenv install --dev
```
In order to automatically format files with autopep8, this repository contains a
configuration for [pre-commit](https://pre-commit.com). Install the hook with
`pipenv run pre-commit install`.
## Load development environment
```shell
pipenv shell
```
## Run CLI tests
```shell
pipenv run test
```
## Run exe_deploy.jar tests
```shell
bazel test ...
```
## Add dependency
```shell
pipenv install --dev some-what-module
```
# How to release
[tagpr](https://github.com/Songmu/tagpr) creates a release pull request automatically when changes are pushed to the `v1` branch.
Merge the release pull request, then GitHub Actions automatically tags, creates a GitHub Release, and uploads the module to PyPI.
## How to update launchable/jar/exe_deploy.jar
```shell
./build-java.sh
```
# Installing CLI
You can install the `launchable` command from either source or [pypi](https://pypi.org/project/launchable/).
## Prerequisites
- Python >= 3.6
- Java >= 8
## Install from source
```sh
$ pwd
~/cli
$ python setup.py install
```
## Install from pypi
```sh
$ pip3 install --user --upgrade launchable~=1.0
```
## Versioning
This module follows [Semantic versioning](https://semver.org/) such as X.Y.Z.
* Major (X)
* Drastic update breaking backward compatibility
* Minor (Y)
* Add new plugins, options with backward compatibility
* Patch (Z)
* Fix bugs or minor behaviors
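The ``~=1.0`` specifier in the install command above follows from this scheme: it accepts any 1.Y.Z release but never 2.0. A minimal sketch of the compatibility rule (``is_compatible`` is an illustrative helper, not part of the CLI):

```python
def is_compatible(installed: str, required_major: int) -> bool:
    """Return True if an X.Y.Z version shares the required major version X."""
    major = int(installed.split(".")[0])
    return major == required_major

assert is_compatible("1.121.1", 1)      # any 1.Y.Z is accepted
assert not is_compatible("2.0.0", 1)    # a major bump is excluded
```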
| text/markdown | Launchable, Inc. | info@launchableinc.com | null | null | Apache Software License v2 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://launchableinc.com/ | null | >=3.6 | [] | [] | [] | [
"click<8.1,>=8.0; python_version == \"3.6\"",
"click<8.2,>=8.1; python_version > \"3.6\"",
"dataclasses; python_version == \"3.6\"",
"requests>=2.25; python_version >= \"3.6\"",
"urllib3>=1.26",
"junitparser>=4.0.0",
"setuptools",
"more_itertools>=7.1.0; python_version >= \"3.6\"",
"python-dateutil",
"tabulate",
"importlib-metadata"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.9 | 2026-02-20T07:02:45.627165 | launchable-1.121.1.tar.gz | 11,570,185 | cb/4e/a80229cf01e1cd92fc39070e94dd8ade1a1db1fcaf811548b0bf5632d235/launchable-1.121.1.tar.gz | source | sdist | null | false | aa356a1943675aa07e817c8ba6ee8cf3 | b07355a775c7c85eb105e0ed44971db5229c6cfbbf32056fab36410e8dfde7f2 | cb4ea80229cf01e1cd92fc39070e94dd8ade1a1db1fcaf811548b0bf5632d235 | null | [
"LICENSE.txt"
] | 6,129 |
2.3 | pylendar | 0.5.0 | Python port of the calendar reminder utility commonly found on BSD-style systems, which displays upcoming relevant dates. | # pylendar
Python port of the "calendar" reminder utility
commonly found on BSD-style systems,
which displays upcoming relevant dates.
* [FreeBSD calendar(1) man page](https://man.freebsd.org/cgi/man.cgi?calendar)
This utility has also been ported to Debian GNU/Linux,
so please see
[the Debian package](https://packages.debian.org/source/bookworm/bsdmainutils)
for more information.
| text/markdown | Fredrik Mellström | Fredrik Mellström <11281108+harkabeeparolus@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"astronomy-engine>=2.1.19",
"lunardate>=0.2.2",
"python-dateutil>=2.9.0.post0"
] | [] | [] | [] | [
"source, https://github.com/harkabeeparolus/pylendar"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:02:40.454696 | pylendar-0.5.0.tar.gz | 11,969 | df/2d/5f80343c3ceb0f2bb06b72318124defb38010e776bf180f3def14f844261/pylendar-0.5.0.tar.gz | source | sdist | null | false | e9252ee27ddf43d7df7596c3064c9a2d | 57d16d09ac42a0e5f20ef6616cb28ed2042394fa44b1c1e8d00a279d2e44599e | df2d5f80343c3ceb0f2bb06b72318124defb38010e776bf180f3def14f844261 | null | [] | 229 |