metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | aiomygas | 2.4.0 | Asynchronous Python API For My Gas | # aiomygas
Asynchronous Python API for [Мой Газ](https://мойгаз.смородина.онлайн/).
## Installation
Use pip to install the library:
```commandline
pip install aiomygas
```
## Usage
```python
import asyncio
from pprint import pprint
import aiohttp
from aiomygas import SimpleMyGasAuth, MyGasApi
async def main(email: str, password: str) -> None:
    """Create the aiohttp session and run the example."""
    async with aiohttp.ClientSession() as session:
        auth = SimpleMyGasAuth(email, password, session)
        api = MyGasApi(auth)
        data = await api.async_get_accounts()
        pprint(data)


if __name__ == "__main__":
    _email = input("Email: ")
    _password = input("Password: ")
    asyncio.run(main(_email, _password))
```
## Exceptions
All exceptions inherit from `MyGasApiError`:
- `MyGasApiError` — base class for all API errors
- `MyGasApiParseError` — response parsing errors
- `MyGasAuthError` — authentication errors
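Because the parse and auth errors subclass `MyGasApiError`, handlers must list the specific classes before the base class, or the base handler will swallow everything. A self-contained sketch of that ordering (the classes are re-declared locally here so it runs without a network call; in real code import them from `aiomygas`):

```python
# Stand-in hierarchy mirroring the aiomygas exception classes above;
# defined locally only to keep the sketch self-contained.
class MyGasApiError(Exception): ...
class MyGasApiParseError(MyGasApiError): ...
class MyGasAuthError(MyGasApiError): ...


def describe(exc: Exception) -> str:
    try:
        raise exc
    except MyGasAuthError:
        return "re-check credentials"        # most specific first
    except MyGasApiParseError:
        return "unexpected response payload"
    except MyGasApiError:
        return "generic API failure"         # base class last


print(describe(MyGasAuthError()))   # re-check credentials
print(describe(MyGasApiError()))    # generic API failure
```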
## Timeouts
aiomygas does not apply any timeout to its requests; you will need to enforce one in your own code. We recommend `asyncio.timeout`:
```python
import asyncio
async with asyncio.timeout(10):
    data = await api.async_get_accounts()
```
## CLI
```commandline
aiomygas-cli user@example.com password --accounts
aiomygas-cli user@example.com password --client
aiomygas-cli user@example.com password --charges
aiomygas-cli user@example.com password --payments
aiomygas-cli user@example.com password --info
```
## Development
```commandline
python -m venv .venv
.venv/bin/pip install -r requirements_test.txt -e .
pytest tests/ -v
```
## Links
- [PyPI](https://pypi.org/project/aiomygas/)
- [GitHub](https://github.com/lizardsystems/aiomygas)
- [Changelog](https://github.com/lizardsystems/aiomygas/blob/main/CHANGELOG.md)
- [Bug Tracker](https://github.com/lizardsystems/aiomygas/issues)
| text/markdown | LizardSystems | null | null | null | MIT License | energy, gas | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: ... | [
"any"
] | null | null | >=3.13 | [] | [] | [] | [
"aiohttp>=3",
"pytest; extra == \"test\"",
"pytest-asyncio; extra == \"test\""
] | [] | [] | [] | [
"Home, https://github.com/lizardsystems/aiomygas",
"Repository, https://github.com/lizardsystems/aiomygas",
"Documentation, https://github.com/lizardsystems/aiomygas",
"Bug Tracker, https://github.com/lizardsystems/aiomygas/issues",
"Changelog, https://github.com/lizardsystems/aiomygas/blob/main/CHANGELOG.m... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:19:30.494860 | aiomygas-2.4.0.tar.gz | 21,414 | 6a/c9/aa88dd4c9211b8df7a814b5fcf48ff205f06411d1f57323068931d5aff44/aiomygas-2.4.0.tar.gz | source | sdist | null | false | 1e106e34bd521f1052bfba59d72d78cf | 357969575a67576855506b2dcd3fce70c870d9b4f71ede23e03e0a75dd1e2cb3 | 6ac9aa88dd4c9211b8df7a814b5fcf48ff205f06411d1f57323068931d5aff44 | null | [
"LICENSE"
] | 578 |
2.4 | zxc-compress | 0.7.3 | ZXC: Package for high-performance, lossless, asymmetric compressions | # ZXC Python Bindings
High-performance Python bindings for the **ZXC** asymmetric compressor, optimized for **fast decompression**.
Designed for *Write Once, Read Many* workloads like ML datasets, game assets, and caches.
## Features
- **Blazing fast decompression** — ZXC is specifically optimized for read-heavy workloads.
- **Buffer protocol support** — works with `bytes`, `bytearray`, `memoryview`, and even NumPy arrays.
- **Releases the GIL** during compression/decompression — true parallelism with Python threads.
- **Stream helpers** — compress/decompress file-like objects.
## Installation (from source)
```bash
git clone https://github.com/hellobertrand/zxc.git
cd zxc/wrappers/python
python -m venv .venv
source .venv/bin/activate
pip install . | text/markdown | null | null | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/hellobertrand/zxc"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:19:12.599705 | zxc_compress-0.7.3.tar.gz | 6,585 | 4d/8b/2f104432bd6241abdfaa22af3b19c6fbeb305ec022626d618ea529e4bf91/zxc_compress-0.7.3.tar.gz | source | sdist | null | false | f65fbd9914162d449e688f8a839ffc65 | 8696268822bcf16486a62f4819aa2043b72102dcfa6877771b8a1cd6f1b95470 | 4d8b2f104432bd6241abdfaa22af3b19c6fbeb305ec022626d618ea529e4bf91 | BSD-3-Clause | [] | 1,814 |
2.4 | sqzc3d | 0.0.2 | Placeholder Python package for the sqzc3d C/C++ library (bindings upcoming). | # Squeezed C3D (`sqzc3d`)
`Squeezed C3D` (`sqzc3d`) is a small C/C++ library for:
- parsing C3D point/analog data,
- building compact chunk structures,
- querying selected markers/channels by index/label,
- exporting/importing persisted bundle files.
It is designed as a pure dependency for higher-level projects (for example `sikc`) and keeps a minimal runtime API surface.
---
## Why Squeezed C3D
### Benchmark intent
Only performance-relevant signals are shown:
- **Chunk materialize**: build a compact, contiguous `double` buffer `[frame][point][3]` (+ valid mask).
- **Access patterns**: copy/extract frame & window outputs, marker-trajectory access, reorder (frame-major -> point-major).
- **Peak memory**: avoid a full object graph; keep only needed arrays.
Bench method: fully load a C3D file, then measure access patterns on each library's **native loaded representation**.
For `sqzc3d`, the native representation is the chunk's contiguous frame-major array; for `ezc3d`, it is
`ezc3d::c3d`'s in-memory frame/point containers. Numbers below are from the C++ sample benches (repeat=1):
- `bench_sqzc3d <file.c3d> 1`
- `bench_ezc3d <file.c3d> 1`
### Load & memory (C++)
| Dataset | Frames | Points | sqzc3d `load_ms` | ezc3d `load_ms` | `load_speedup_x` | sqzc3d `peak_rss_mb` | ezc3d `peak_rss_mb` | `rss_ratio_x` |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| PFERD (117.96 MB) | 55,844 | 132 | 171.385 | 3,717.442 | 21.7 | 182.090 | 1,019.414 | 5.6 |
`load_speedup_x = ezc3d / sqzc3d`, `rss_ratio_x = ezc3d / sqzc3d` (higher is better for `sqzc3d`).
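The derived columns reduce to simple ratios of the raw measurements; for instance, checking the PFERD row above:

```python
# load_speedup_x and rss_ratio_x for the PFERD row of the table above
load_speedup = 3717.442 / 171.385  # ezc3d load_ms / sqzc3d load_ms
rss_ratio = 1019.414 / 182.090     # ezc3d peak_rss_mb / sqzc3d peak_rss_mb
print(round(load_speedup, 1), round(rss_ratio, 1))  # 21.7 5.6
```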
### Access patterns (native, after load; PFERD, `T=256`)
All numbers are **per-operation milliseconds** unless noted.
| Metric | sqzc3d | ezc3d | `speedup_x` |
| --- | ---: | ---: | ---: |
| `frame_view_ns_kall` (ns/op) | 4.196 | 76.054 | 18.1x |
| `frame_copy_ms_kall` | 0.000096 | 0.002589 | 27.0x |
| `window_read_ms_T256_kall` | 0.009656 | 0.143652 | 14.9x |
| `traj_strided_ms_T256_kall` | 0.075748 | 0.228896 | 3.0x |
| `traj_strided_ms_Tfull_k1` | 0.132440 | 3.362906 | 25.4x |
| `reorder_ms_T256_kall` | 0.132432 | 0.167740 | 1.27x |
| `sel_apply_ms_T256_k32` | 0.005532 | 0.046642 | 8.4x |
`speedup_x = ezc3d / sqzc3d` (higher is better for `sqzc3d`).
### Streaming mode (sqzc3d-only, low memory)
Optional extreme low-memory mode that reads from the file on demand (e.g. WASM VFS):
- `bench_sqzc3d_stream <file.c3d> 1`
Example (PFERD, repeat=1):
| Metric | sqzc3d stream |
| --- | ---: |
| `open_ms` | 1.289 |
| `peak_rss_delta_mb` | 2.762 |
| `read_window_ms_T256_kall` | 0.532 |
| `read_window_ms_T256_k32` | 0.889 |
### Feature profile
- **Core + easy split**: stable C API (`sqzc3d.h`) plus ergonomic C++ helper layer (`sqzc3d_easy.h`).
- **Preset-first workflow**: shared presets for common read patterns.
- **Chunk-first runtime contracts**: frame-major point arrays + explicit valid mask.
- **Type-group aware filtering**: `type_group_*` metadata for marker set control.
- **Build split**: `SQZC3D_WITH_EZC3D=OFF` still supports bundle-only runtime.
### New in 0.2
- Preset builders for common workflows (`stream_frame_all`, `stream_frame_sel`, `window_analysis`, `interpolation_ready`).
- Dual-layer API:
- **C layer** for stable runtime ABI (`sqzc3d.h`).
- **C++ easy layer** for ergonomic one-shot window reads (`sqzc3d_easy.h`).
- Type-group metadata in chunk for marker-group-aware workflows.
---
## Build
```bash
cmake -S . -B build
cmake --build build --config Release --parallel
```
### Common options
- `SQZC3D_WITH_EZC3D` (`ON|OFF`, default `ON`)
enable/disable the C3D parser feature.
- `SQZC3D_FETCH_EZC3D` (`ON|OFF`, default `ON`)
auto-fetch ezc3d when not found in the current toolchain.
- `SQZC3D_BUILD_EXAMPLES` (`ON|OFF`, default `OFF`)
build CLI samples.
- `SQZC3D_EZC3D_GIT_REPOSITORY` / `SQZC3D_EZC3D_GIT_TAG`
control fetch source when `SQZC3D_FETCH_EZC3D=ON`.
> Compatibility note: legacy `sqzc3d_WITH_EZC3D` is tolerated for CMake compatibility and mapped to the canonical `SQZC3D_WITH_EZC3D`.
### Runtime capability matrix
| Feature | ON | OFF |
| --- | --- | --- |
| C3D parsing (`open_file`/`open_memory`) | ✅ | ❌ |
| Chunk build (`build_chunks`) | ✅ | ❌ |
| Bundle export/load | ✅ | ✅ |
| Analog support | ✅ | ✅ |
Runtime availability is always queryable via `sqzc3d_get_features()`.
### CMake usage (dependency)
```cmake
add_subdirectory(path/to/sqzc3d)
target_link_libraries(your_target PRIVATE sqzc3d)
```
---
## Quick start
### 1) Parse C3D and build chunks
```c
/* declarations of open_opt, build_opt, dec and chunk omitted for brevity */
sqzc3d_default_open_opt(&open_opt);
sqzc3d_open_file(&dec, path, &open_opt);

sqzc3d_default_build_opt(&build_opt);
sqzc3d_build_chunks(dec, &build_opt, &chunk);

// use chunk metadata/queries/views

sqzc3d_free_chunk(chunk);
sqzc3d_close_dec(dec);
```
### 2) Load from bundle
```c
sqzc3d_load_bundle(bundle_path, &chunk);
sqzc3d_free_chunk(chunk);
```
All API contracts use plain integers and pointers, so this is usable from both C and C++ projects.
Quickly check runtime identity:
```c
printf("sqzc3d version=%s abi=%d\n", sqzc3d_version(), sqzc3d_abi_version());
```
### 3) C++ easy entry (`sqzc3d_easy.h`)
```c++
#include "sqzc3d_easy.h"
sqzc3d::ReadPointsWindow(dec, 0, 32, nullptr, 0, nullptr, &chunk);
const auto view = sqzc3d::FrameMajorPointsView(chunk);
```
`sqzc3d_easy.h` is a lightweight C++ helper that builds common window reads with sensible defaults and exposes
`PointWindow` / `AnalogWindow` views plus a frame-major -> point-major reorder.
### 4) Fast onboarding (selection + shape assumptions)
- For one-off integration, start from C++ easy helpers (`ReadPointsWindow`, `FrameMajorPointsView`) to get a deterministic
`n_frames x n_points x 3` layout and explicit `[frame][point]` validity.
- For production bindings, use the C API directly and keep `sqzc3d_points_view_*` / `sqzc3d_analogs_view_*` explicit.
---
## Data model at a glance
- **Open**
`sqzc3d_open_file` / `sqzc3d_open_memory` (memory-open currently writes a temporary file before parsing)
- **Build**
`sqzc3d_build_chunks`
- **Query**
metadata APIs + label/index helpers + frame/point/channel views, optional `type_group_*` metadata.
- **Type groups** (if present)
- `n_type_groups`
- `type_group_names` (group labels)
- `type_group_starts` (`n_type_groups + 1` prefix offsets)
- `type_group_indices` (flat point-index list in `point_labels` order)
- Easy-layer shape contract: points are returned as `FrameMajor PointWindow`
(`n_frames x n_points x 3`) with `frame_stride = n_points * 3`, `point_stride = 3`; valid mask is `[n_frames x n_points]`.
- **Persist/load**
`sqzc3d_export_bundle` / `sqzc3d_load_bundle`
Public structs:
- `sqzc3d_open_opt_t`
- `sqzc3d_build_opt_t`
- `sqzc3d_chunk_t`
- `sqzc3d_points_view_t`
- `sqzc3d_analogs_view_t`
Return value conventions are C-style integers. See `include/sqzc3d_types.h` for status constants.
---
## Feature flags and capabilities
```c
/* returns a bitmask of the SQZC3D_FEATURE_* capability bits listed below */
sqzc3d_get_features();
```
Capability bits are available in `include/sqzc3d.h`:
- `SQZC3D_FEATURE_OPEN_FILE`
- `SQZC3D_FEATURE_OPEN_MEMORY`
- `SQZC3D_FEATURE_BUILD_CHUNKS`
- `SQZC3D_FEATURE_BUNDLE`
- `SQZC3D_FEATURE_ANALOG`
Use this to adapt behavior for `ON/OFF` builds at runtime.
---
## Validation
- `sqzc3d_load_bundle_with_options` supports strict mode.
- `samples/*` provide smoke tests:
- `bench_sqzc3d`
- `bench_ezc3d`
- `bench_sqzc3d_stream`
- `c3dinfo_sqzc3d`
- `export_sqzc3d_bundle`
- `verify_correctness_matrix_sqzc3d`
- `easy_window_sqzc3d`
---
## Documentation
- Public API details: `docs/API.md`
- Build and usage notes: this file
- Development milestones: `PLAN.md`
- Dependencies and notices: `DEPENDENCIES.md` / `NOTICE`
---
## License & third-party notices
`Squeezed C3D (sqzc3d)` is released under **MIT**; the upstream `ezc3d` library is also MIT-licensed.
See `LICENSE` and `NOTICE` for dependency/license notes, and `DEPENDENCIES.md` for build requirements.
| text/markdown | sqzc3d contributors | null | null | null | MIT License
Copyright (c) 2026
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| c3d, mocap, biomechanics | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming L... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/lshdlut/squeezc3d",
"Repository, https://github.com/lshdlut/squeezc3d"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:19:09.008187 | sqzc3d-0.0.2.tar.gz | 6,504 | f0/d2/3b2c51c3162f570d2ddedfc3bb74ab3b74a71e404f18a6b54f5cfada8768/sqzc3d-0.0.2.tar.gz | source | sdist | null | false | ded9d3b9c27220ba97909cc822d76774 | 4ce9010a4c9b13a9861a272107d91dbc7f165d57cbf3c5708cceb499969095ac | f0d23b2c51c3162f570d2ddedfc3bb74ab3b74a71e404f18a6b54f5cfada8768 | null | [
"LICENSE",
"NOTICE"
] | 297 |
2.3 | anchorbrowser | 0.10.1 | The official Python library for the anchorbrowser API | # Anchorbrowser Python API library
The Anchorbrowser Python library provides convenient access to the Anchorbrowser REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.anchorbrowser.io](https://docs.anchorbrowser.io). The full API of this library can be found in [api.md](https://github.com/anchorbrowser/AnchorBrowser-SDK-Python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install anchorbrowser
```
## Usage
The full API of this library can be found in [api.md](https://github.com/anchorbrowser/AnchorBrowser-SDK-Python/tree/main/api.md).
```python
import os
from anchorbrowser import Anchorbrowser
client = Anchorbrowser(
    api_key=os.environ.get("ANCHORBROWSER_API_KEY"),  # This is the default and can be omitted
)

session = client.sessions.create(
    session={"recording": {"active": False}},
)
print(session.data)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `ANCHORBROWSER_API_KEY="Your API Key"` to your `.env` file
so that your API Key is not stored in source control.
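Conceptually, `load_dotenv()` reads `KEY=value` lines from `.env` into `os.environ` before the client picks up the key. A stdlib-only illustration of that behaviour (this is not python-dotenv's implementation; in real code just call `load_dotenv()` after `pip install python-dotenv`):

```python
import io
import os


def load_env(stream: io.TextIOBase) -> None:
    # Minimal stand-in for dotenv's behaviour: KEY=value lines, '#' comments,
    # and existing environment variables are not overridden.
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))


load_env(io.StringIO('ANCHORBROWSER_API_KEY="Your API Key"\n'))
print(os.environ["ANCHORBROWSER_API_KEY"])
```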
## Async usage
Simply import `AsyncAnchorbrowser` instead of `Anchorbrowser` and use `await` with each API call:
```python
import os
import asyncio
from anchorbrowser import AsyncAnchorbrowser
client = AsyncAnchorbrowser(
    api_key=os.environ.get("ANCHORBROWSER_API_KEY"),  # This is the default and can be omitted
)


async def main() -> None:
    session = await client.sessions.create(
        session={"recording": {"active": False}},
    )
    print(session.data)


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install anchorbrowser[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from anchorbrowser import DefaultAioHttpClient
from anchorbrowser import AsyncAnchorbrowser
async def main() -> None:
    async with AsyncAnchorbrowser(
        api_key=os.environ.get("ANCHORBROWSER_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        session = await client.sessions.create(
            session={"recording": {"active": False}},
        )
        print(session.data)


asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from anchorbrowser import Anchorbrowser
client = Anchorbrowser()
session = client.sessions.create(
    browser={},
)
print(session.browser)
```
## File uploads
Request parameters that correspond to file uploads can be passed as `bytes`, or a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance or a tuple of `(filename, contents, media type)`.
```python
from pathlib import Path
from anchorbrowser import Anchorbrowser
client = Anchorbrowser()
client.sessions.upload_file(
    session_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    file=Path("/path/to/file"),
)
```
The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `anchorbrowser.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `anchorbrowser.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `anchorbrowser.APIError`.
```python
import anchorbrowser
from anchorbrowser import Anchorbrowser
client = Anchorbrowser()
try:
    client.sessions.create(
        session={"recording": {"active": False}},
    )
except anchorbrowser.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except anchorbrowser.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except anchorbrowser.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from anchorbrowser import Anchorbrowser
# Configure the default for all requests:
client = Anchorbrowser(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).sessions.create(
    session={"recording": {"active": False}},
)
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from anchorbrowser import Anchorbrowser
# Configure the default for all requests:
client = Anchorbrowser(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Anchorbrowser(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).sessions.create(
    session={"recording": {"active": False}},
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/anchorbrowser/AnchorBrowser-SDK-Python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `ANCHORBROWSER_LOG` to `info`.
```shell
$ export ANCHORBROWSER_LOG=info
```
Or to `debug` for more verbose logging.
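Because the SDK logs through the standard `logging` module, you can also route its records from your own code instead of using the environment variable. A minimal sketch (the logger name `anchorbrowser` is an assumption based on the package name):

```python
import logging

# Send log records to stderr and raise the (assumed) package logger to DEBUG.
logging.basicConfig(format="%(name)s %(levelname)s %(message)s")
logging.getLogger("anchorbrowser").setLevel(logging.DEBUG)
```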
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if "my_field" not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from anchorbrowser import Anchorbrowser
client = Anchorbrowser()
response = client.sessions.with_raw_response.create(
    session={
        "recording": {
            "active": False,
        },
    },
)
print(response.headers.get("X-My-Header"))

session = response.parse()  # get the object that `sessions.create()` would have returned
print(session.data)
```
These methods return an [`APIResponse`](https://github.com/anchorbrowser/AnchorBrowser-SDK-Python/tree/main/src/anchorbrowser/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/anchorbrowser/AnchorBrowser-SDK-Python/tree/main/src/anchorbrowser/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.sessions.with_streaming_response.create(
    session={"recording": {"active": False}},
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from anchorbrowser import Anchorbrowser, DefaultHttpxClient
client = Anchorbrowser(
    # Or use the `ANCHORBROWSER_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from anchorbrowser import Anchorbrowser
with Anchorbrowser() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/anchorbrowser/AnchorBrowser-SDK-Python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import anchorbrowser
print(anchorbrowser.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/anchorbrowser/AnchorBrowser-SDK-Python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Anchorbrowser <support@anchorbrowser.io> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx>=0.28.1",
"playwright>=1.55.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions>=4.15.0",
"websockets>=15.0.1",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/anchorbrowser/AnchorBrowser-SDK-Python",
"Repository, https://github.com/anchorbrowser/AnchorBrowser-SDK-Python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-18T09:19:05.149477 | anchorbrowser-0.10.1.tar.gz | 154,602 | b3/b7/3a162c5a2eca41bc48c8b49f53a13ec6c847b5ca5d73784d034ad8cb1d2c/anchorbrowser-0.10.1.tar.gz | source | sdist | null | false | 98d1734d3f3b8b2e18dad3c6194ea818 | 2467114e4c39d6d50ede401bc019b2ef279fd98795ca2e3680ab75be076286a3 | b3b73a162c5a2eca41bc48c8b49f53a13ec6c847b5ca5d73784d034ad8cb1d2c | null | [] | 1,911 |
2.4 | ositah | 26.3.dev2 | Outils de Suivi d'Activités basé sur Hito | # OSITAH: Outil de Suivi de Temps et d'Activités basé sur Hito (a time and activity tracking tool based on Hito)
OSITAH is a web application, built on the [Dash](https://dash.plotly.com) framework, for tracking
time declarations made in Hito, validating them, and exporting them to NSIP.
Access to the various features requires user authentication: `ositah` permissions are derived
from the user's permissions in Hito.
OSITAH requires a configuration file, `ositah.cfg`: by default it is looked for in the current
directory and, failing that, in the directory where the OSITAH application is installed.
The `--configuration-file` option allows another file/location to be specified, for example to
use a test configuration.
The production instance normally runs behind [gunicorn](https://gunicorn.org), a WSGI server
written in Python and provided by the `gunicorn` module. In this setup, the configuration file
must be placed in the directory defined as the application's working directory (the
`--configuration-file` option cannot be used).
Running `ositah` requires access to the Hito database.
## Installation
Deploying OSITAH requires a dedicated Python environment, preferably separate from the one
delivered with the OS, as the latter causes serious problems with the required dependency
versions. The recommended environments are [pyenv](https://github.com/pyenv/pyenv),
[poetry](https://python-poetry.org) or [Anaconda](https://www.anaconda.com/products/individual).
To create a virtual environment with Conda, see the
[dedicated documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands).
To install OSITAH, use the following command:
```bash
pip install ositah
```
### Dépendances
Pour connaitre la liste des dépendances de l'application OSITAH, voir la propriété `dependencies`
dans le fichier `pyproject.toml` se trouvant dans les sources de l'application.
Elles sont automatiquement installées par la commande `pip`.
## Configuration
### OSITAH
All OSITAH configuration is declared in the `ositah.cfg` file, which must be located in the application's current directory for a production instance managed by the `gunicorn` WSGI server. An Nginx frontend must be deployed in front of the Gunicorn server to handle HTTPS.
For a test or development instance that does not use `gunicorn`, the configuration file can be specified with the `--configuration-file` option.
### Gunicorn
`gunicorn` is the recommended WSGI server for running a production instance. Installing it amounts to installing two Python modules: `gunicorn` and `greenlet`.
The OSITAH Git repository contains a `gunicorn.config` directory with the three files relevant to configuring `gunicorn`; they must be edited to adapt the directories to the site configuration:
* `gunicorn@.service`: `systemd` unit to install to start the OSITAH instance. If the OSITAH instance is called `ositah`, the systemd unit used to manage the service is `gunicorn@ositah`.
* `gunicorn.ositah`: file to place in `/etc/sysconfig`, defining the OSITAH-specific configuration (current directory, `gunicorn` options, entry point).
* `app.conf.py`: `gunicorn` options to use with the OSITAH instance.
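As an illustration, `app.conf.py` could look like the following minimal sketch (the values are site-specific assumptions, not the file shipped in the repository):

```python
# app.conf.py -- gunicorn options for the OSITAH instance (illustrative values).
# The bind address must match the proxy_pass target in the Nginx configuration.
bind = "127.0.0.1:8008"
# Number of worker processes; adjust to the host.
workers = 2
# Allow long requests (e.g. NSIP synchronisation), matching the Nginx
# proxy_read_timeout.
timeout = 300
```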
### Nginx
The recommended configuration is to declare a virtual host (server) dedicated to the OSITAH frontend. A typical frontend configuration is:
```commandline
server {
listen *:443 ssl http2;
listen [::]:443 ssl http2;
server_name your.preferred.virtualhost.name;
ssl_certificate /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_protocols TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
# Some OSITAH operations (e.g. NSIP synchronisation) can take time
proxy_read_timeout 300;
# Adjust port number according to gunicorn configuration
proxy_pass http://127.0.0.1:8008;
}
}
```
## Validation of declarations: structure of the OSITAH tables
Time declarations are validated agent by agent, using the validation button corresponding to the agent. This button is only active from the date defined in the `ositah_validation_period` table for the current period, unless exceptions have been added in the configuration file, such as:
```
validation:
  override_period:
    - ROLE_SUPER_ADMIN
```
`override_period` is a list of roles allowed to perform validations outside the standard period.
Validating a declaration records the time declared on each of the agent's activities in the `ositah_project_declaration` table. This entry is associated with an entry in the `ositah_validation` table, which contains the validation date, the agent concerned (their Hito `agent id`), the validation period the validation belongs to (a reference to the `ositah_validation_period` table), and the validation status. If the validation is later invalidated, the status is set to `0` and the validation date is copied into the `initial_timestamp` attribute. The entry in `ositah_project_declaration` is not deleted. When the agent's declaration is validated again later, a new entry is created in both `ositah_project_declaration` and `ositah_validation`, as for the initial validation.
It is therefore possible to keep a history of validation operations for a given period (not yet exploited by the OSITAH application). However, when reading validations, be careful to take the most recent one in a given period whose status is `1`.
Creating the entry that defines a declaration period in `ositah_validation_period` (period start and end dates, validation start date) is not currently handled by OSITAH: the entry must be created in the table with an SQL `INSERT INTO` statement.
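For example, such an entry could be created with a statement along these lines (the column names here are assumptions; check the actual `ositah_validation_period` schema before use):

```sql
-- Illustrative only: adapt the column names to the real table definition
INSERT INTO ositah_validation_period (period_start, period_end, validation_start)
VALUES ('2026-01-01', '2026-06-30', '2026-06-15');
```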
## NSIP export
OSITAH can export validated declarations to NSIP. The table in the `Export` menu shows the synchronisation state between NSIP and OSITAH, agent by agent. A colour code makes it easy to see whether a declaration is correctly synchronised. Only declarations that are not correctly synchronised can be exported. During the export, the declaration is marked in NSIP as validated by the manager, with the date of its validation in OSITAH.
It is possible to export all declarations at once or to select them agent by agent.
When an agent is selected, all of their unsynchronised declarations are exported. The selection button in the title bar selects all selectable agents in one click.
An agent's declarations cannot be exported if the agent does not exist in NSIP, i.e. if they are absent from RESEDA. If the agent's declarations are to be pushed to NSIP, fixing this requires the HR department to add the person to RESEDA.
There may also be declarations made directly in NSIP that are not yet validated in OSITAH. In that case, they will appear as missing in OSITAH, even though they are present, until they are validated.
| text/markdown | Michel Jouvin | michel.jouvin@ijclab.in2p3.fr | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | https://gitlab.in2p3.fr/hito/ositah | null | >=3.13 | [] | [] | [] | [
"blinker",
"dash<5,>=4",
"dash-bootstrap-components>=2",
"flask",
"flask-multipass>=0.5.5",
"flask-sqlalchemy",
"flask-wtf",
"hito-tools>=26.1",
"pandas<4,>=3",
"pymysql",
"python-ldap",
"pyyaml",
"simplejson",
"sqlalchemy<3,>=2.0"
] | [] | [] | [] | [
"Homepage, https://gitlab.in2p3.fr/hito/ositah",
"Bug Tracker, https://gitlab.in2p3.fr/hito/ositah/-/issues"
] | poetry/2.2.1 CPython/3.13.9 Windows/11 | 2026-02-18T09:18:38.356482 | ositah-26.3.dev2.tar.gz | 97,611 | 1a/75/b03a8e92adf446171080937302ac318c1ef6fd726d07c45ce44067181305/ositah-26.3.dev2.tar.gz | source | sdist | null | false | 5c3c37c1dcfb538863e6b3040dc4e816 | b3d132d1483c4da7595cedc81b8771483aa46e9a51ec415ac9d2b27c8cf31bed | 1a75b03a8e92adf446171080937302ac318c1ef6fd726d07c45ce44067181305 | null | [] | 206 |
2.1 | ant-ray-nightly | 3.0.0.dev20260218 | Ray provides a simple, universal API for building distributed applications. | .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png
.. image:: https://readthedocs.org/projects/ray/badge/?version=master
:target: http://docs.ray.io/en/master/?badge=master
.. image:: https://img.shields.io/badge/Ray-Join%20Slack-blue
:target: https://www.ray.io/join-slack
.. image:: https://img.shields.io/badge/Discuss-Ask%20Questions-blue
:target: https://discuss.ray.io/
.. image:: https://img.shields.io/twitter/follow/raydistributed.svg?style=social&logo=twitter
:target: https://x.com/raydistributed
.. image:: https://img.shields.io/badge/Get_started_for_free-3C8AE9?logo=data%3Aimage%2Fpng%3Bbase64%2CiVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8%2F9hAAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAEKADAAQAAAABAAAAEAAAAAA0VXHyAAABKElEQVQ4Ea2TvWoCQRRGnWCVWChIIlikC9hpJdikSbGgaONbpAoY8gKBdAGfwkfwKQypLQ1sEGyMYhN1Pd%2B6A8PqwBZeOHt%2FvsvMnd3ZXBRFPQjBZ9K6OY8ZxF%2B0IYw9PW3qz8aY6lk92bZ%2BVqSI3oC9T7%2FyCVnrF1ngj93us%2B540sf5BrCDfw9b6jJ5lx%2FyjtGKBBXc3cnqx0INN4ImbI%2Bl%2BPnI8zWfFEr4chLLrWHCp9OO9j19Kbc91HX0zzzBO8EbLK2Iv4ZvNO3is3h6jb%2BCwO0iL8AaWqB7ILPTxq3kDypqvBuYuwswqo6wgYJbT8XxBPZ8KS1TepkFdC79TAHHce%2F7LbVioi3wEfTpmeKtPRGEeoldSP%2FOeoEftpP4BRbgXrYZefsAI%2BP9JU7ImyEAAAAASUVORK5CYII%3D
:target: https://www.anyscale.com/ray-on-anyscale?utm_source=github&utm_medium=ray_readme&utm_campaign=get_started_badge
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for simplifying ML compute:
.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/what-is-ray-padded.svg
..
https://docs.google.com/drawings/d/1Pl8aCYOsZCo61cmp57c7Sja6HhIygGCvSZLi_AuBuqo/edit
Learn more about `Ray AI Libraries`_:
- `Data`_: Scalable Datasets for ML
- `Train`_: Distributed Training
- `Tune`_: Scalable Hyperparameter Tuning
- `RLlib`_: Scalable Reinforcement Learning
- `Serve`_: Scalable and Programmable Serving
Or more about `Ray Core`_ and its key abstractions:
- `Tasks`_: Stateless functions executed in the cluster.
- `Actors`_: Stateful worker processes created in the cluster.
- `Objects`_: Immutable values accessible across the cluster.
Learn more about Monitoring and Debugging:
- Monitor Ray apps and clusters with the `Ray Dashboard <https://antgroup.github.io/ant-ray/ray-core/ray-dashboard.html>`__.
- Debug Ray apps with the `Ray Distributed Debugger <https://antgroup.github.io/ant-ray/ray-observability/ray-distributed-debugger.html>`__.
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing
`ecosystem of community integrations`_.
Install Ray with: ``pip install ray``. For nightly wheels, see the
`Installation page <https://antgroup.github.io/ant-ray/ray-overview/installation.html>`__.
**Note**: This documentation refers to Ant Ray, a fork of Ray maintained by Ant Group. To install this specific version, use:
.. code-block:: bash
pip install ant-ray
.. _`Serve`: https://antgroup.github.io/ant-ray/serve/index.html
.. _`Data`: https://antgroup.github.io/ant-ray/data/dataset.html
.. _`Workflow`: https://antgroup.github.io/ant-ray/workflows/
.. _`Train`: https://antgroup.github.io/ant-ray/train/train.html
.. _`Tune`: https://antgroup.github.io/ant-ray/tune/index.html
.. _`RLlib`: https://antgroup.github.io/ant-ray/rllib/index.html
.. _`ecosystem of community integrations`: https://antgroup.github.io/ant-ray/ray-overview/ray-libraries.html
Why Ray?
--------
Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.
Ray is a unified way to scale Python and AI applications from a laptop to a cluster.
With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning that it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray, no other infrastructure required.
More Information
----------------
- `Documentation`_
- `Ray Architecture whitepaper`_
- `Exoshuffle: large-scale data shuffle in Ray`_
- `Ownership: a distributed futures system for fine-grained tasks`_
- `RLlib paper`_
- `Tune paper`_
*Older documents:*
- `Ray paper`_
- `Ray HotOS paper`_
- `Ray Architecture v1 whitepaper`_
.. _`Ray AI Libraries`: https://antgroup.github.io/ant-ray/ray-air/getting-started.html
.. _`Ray Core`: https://antgroup.github.io/ant-ray/ray-core/walkthrough.html
.. _`Tasks`: https://antgroup.github.io/ant-ray/ray-core/tasks.html
.. _`Actors`: https://antgroup.github.io/ant-ray/ray-core/actors.html
.. _`Objects`: https://antgroup.github.io/ant-ray/ray-core/objects.html
.. _`Documentation`: http://antgroup.github.io/ant-ray/index.html
.. _`Ray Architecture v1 whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
.. _`Ray Architecture whitepaper`: https://docs.google.com/document/d/1tBw9A4j62ruI5omIJbMxly-la5w4q_TjyJgJL_jN2fI/preview
.. _`Exoshuffle: large-scale data shuffle in Ray`: https://arxiv.org/abs/2203.05072
.. _`Ownership: a distributed futures system for fine-grained tasks`: https://www.usenix.org/system/files/nsdi21-wang.pdf
.. _`Ray paper`: https://arxiv.org/abs/1712.05889
.. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
.. _`RLlib paper`: https://arxiv.org/abs/1712.09381
.. _`Tune paper`: https://arxiv.org/abs/1807.05118
Getting Involved
----------------
.. list-table::
:widths: 25 50 25 25
:header-rows: 1
* - Platform
- Purpose
- Estimated Response Time
- Support Level
* - `Discourse Forum`_
- For discussions about development and questions about usage.
- < 1 day
- Community
* - `GitHub Issues`_
- For reporting bugs and filing feature requests.
- < 2 days
- Ray OSS Team
* - `Slack`_
- For collaborating with other Ray users.
- < 2 days
- Community
* - `StackOverflow`_
- For asking questions about how to use Ray.
- 3-5 days
- Community
* - `Meetup Group`_
- For learning about Ray projects and best practices.
- Monthly
- Ray DevRel
* - `Twitter`_
- For staying up-to-date on new features.
- Daily
- Ray DevRel
.. _`Discourse Forum`: https://discuss.ray.io/
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues
.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
.. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
.. _`Twitter`: https://x.com/raydistributed
.. _`Slack`: https://www.ray.io/join-slack?utm_source=github&utm_medium=ray_readme&utm_campaign=getting_involved
| null | Ray Team | ray-dev@googlegroups.com | null | null | Apache 2.0 | ray distributed parallel machine-learning hyperparameter-tuning reinforcement-learning deep-learning serving python | [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/ray-project/ray | null | >=3.9 | [] | [] | [] | [
"click>=7.0",
"filelock",
"jsonschema",
"msgpack<2.0.0,>=1.0.0",
"packaging>=24.2",
"protobuf>=3.20.3",
"pyyaml",
"requests",
"redis<=4.5.5,>=3.5.0",
"cupy-cuda12x; sys_platform != \"darwin\" and extra == \"adag\"",
"pandas; extra == \"air\"",
"watchfiles; extra == \"air\"",
"requests; extra... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:16:58.648760 | ant_ray_nightly-3.0.0.dev20260218-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 52,317,024 | e7/d4/b1f3f3a097f12044ff2075c5ef7ff32ee070ab3f2ae9918a594a080fbc23/ant_ray_nightly-3.0.0.dev20260218-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | cp39 | bdist_wheel | null | false | 887fcfbfb5b0e681f17e4dad7e1eec70 | 4517e36281ecc40ad3c3c2152e69e629e9653f189b27ffc71823c7cf505b6f50 | e7d4b1f3f3a097f12044ff2075c5ef7ff32ee070ab3f2ae9918a594a080fbc23 | null | [] | 872 |
2.1 | ant-ray-cpp-nightly | 3.0.0.dev20260218 | A subpackage of Ray which provides the Ray C++ API. | .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png
.. image:: https://readthedocs.org/projects/ray/badge/?version=master
:target: http://docs.ray.io/en/master/?badge=master
.. image:: https://img.shields.io/badge/Ray-Join%20Slack-blue
:target: https://www.ray.io/join-slack
.. image:: https://img.shields.io/badge/Discuss-Ask%20Questions-blue
:target: https://discuss.ray.io/
.. image:: https://img.shields.io/twitter/follow/raydistributed.svg?style=social&logo=twitter
:target: https://x.com/raydistributed
.. image:: https://img.shields.io/badge/Get_started_for_free-3C8AE9?logo=data%3Aimage%2Fpng%3Bbase64%2CiVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8%2F9hAAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAEKADAAQAAAABAAAAEAAAAAA0VXHyAAABKElEQVQ4Ea2TvWoCQRRGnWCVWChIIlikC9hpJdikSbGgaONbpAoY8gKBdAGfwkfwKQypLQ1sEGyMYhN1Pd%2B6A8PqwBZeOHt%2FvsvMnd3ZXBRFPQjBZ9K6OY8ZxF%2B0IYw9PW3qz8aY6lk92bZ%2BVqSI3oC9T7%2FyCVnrF1ngj93us%2B540sf5BrCDfw9b6jJ5lx%2FyjtGKBBXc3cnqx0INN4ImbI%2Bl%2BPnI8zWfFEr4chLLrWHCp9OO9j19Kbc91HX0zzzBO8EbLK2Iv4ZvNO3is3h6jb%2BCwO0iL8AaWqB7ILPTxq3kDypqvBuYuwswqo6wgYJbT8XxBPZ8KS1TepkFdC79TAHHce%2F7LbVioi3wEfTpmeKtPRGEeoldSP%2FOeoEftpP4BRbgXrYZefsAI%2BP9JU7ImyEAAAAASUVORK5CYII%3D
:target: https://www.anyscale.com/ray-on-anyscale?utm_source=github&utm_medium=ray_readme&utm_campaign=get_started_badge
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for simplifying ML compute:
.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/what-is-ray-padded.svg
..
https://docs.google.com/drawings/d/1Pl8aCYOsZCo61cmp57c7Sja6HhIygGCvSZLi_AuBuqo/edit
Learn more about `Ray AI Libraries`_:
- `Data`_: Scalable Datasets for ML
- `Train`_: Distributed Training
- `Tune`_: Scalable Hyperparameter Tuning
- `RLlib`_: Scalable Reinforcement Learning
- `Serve`_: Scalable and Programmable Serving
Or more about `Ray Core`_ and its key abstractions:
- `Tasks`_: Stateless functions executed in the cluster.
- `Actors`_: Stateful worker processes created in the cluster.
- `Objects`_: Immutable values accessible across the cluster.
Learn more about Monitoring and Debugging:
- Monitor Ray apps and clusters with the `Ray Dashboard <https://antgroup.github.io/ant-ray/ray-core/ray-dashboard.html>`__.
- Debug Ray apps with the `Ray Distributed Debugger <https://antgroup.github.io/ant-ray/ray-observability/ray-distributed-debugger.html>`__.
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing
`ecosystem of community integrations`_.
Install Ray with: ``pip install ray``. For nightly wheels, see the
`Installation page <https://antgroup.github.io/ant-ray/ray-overview/installation.html>`__.
**Note**: This documentation refers to Ant Ray, a fork of Ray maintained by Ant Group. To install this specific version, use:
.. code-block:: bash
pip install ant-ray
.. _`Serve`: https://antgroup.github.io/ant-ray/serve/index.html
.. _`Data`: https://antgroup.github.io/ant-ray/data/dataset.html
.. _`Workflow`: https://antgroup.github.io/ant-ray/workflows/
.. _`Train`: https://antgroup.github.io/ant-ray/train/train.html
.. _`Tune`: https://antgroup.github.io/ant-ray/tune/index.html
.. _`RLlib`: https://antgroup.github.io/ant-ray/rllib/index.html
.. _`ecosystem of community integrations`: https://antgroup.github.io/ant-ray/ray-overview/ray-libraries.html
Why Ray?
--------
Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.
Ray is a unified way to scale Python and AI applications from a laptop to a cluster.
With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning that it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray, no other infrastructure required.
More Information
----------------
- `Documentation`_
- `Ray Architecture whitepaper`_
- `Exoshuffle: large-scale data shuffle in Ray`_
- `Ownership: a distributed futures system for fine-grained tasks`_
- `RLlib paper`_
- `Tune paper`_
*Older documents:*
- `Ray paper`_
- `Ray HotOS paper`_
- `Ray Architecture v1 whitepaper`_
.. _`Ray AI Libraries`: https://antgroup.github.io/ant-ray/ray-air/getting-started.html
.. _`Ray Core`: https://antgroup.github.io/ant-ray/ray-core/walkthrough.html
.. _`Tasks`: https://antgroup.github.io/ant-ray/ray-core/tasks.html
.. _`Actors`: https://antgroup.github.io/ant-ray/ray-core/actors.html
.. _`Objects`: https://antgroup.github.io/ant-ray/ray-core/objects.html
.. _`Documentation`: http://antgroup.github.io/ant-ray/index.html
.. _`Ray Architecture v1 whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
.. _`Ray Architecture whitepaper`: https://docs.google.com/document/d/1tBw9A4j62ruI5omIJbMxly-la5w4q_TjyJgJL_jN2fI/preview
.. _`Exoshuffle: large-scale data shuffle in Ray`: https://arxiv.org/abs/2203.05072
.. _`Ownership: a distributed futures system for fine-grained tasks`: https://www.usenix.org/system/files/nsdi21-wang.pdf
.. _`Ray paper`: https://arxiv.org/abs/1712.05889
.. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
.. _`RLlib paper`: https://arxiv.org/abs/1712.09381
.. _`Tune paper`: https://arxiv.org/abs/1807.05118
Getting Involved
----------------
.. list-table::
:widths: 25 50 25 25
:header-rows: 1
* - Platform
- Purpose
- Estimated Response Time
- Support Level
* - `Discourse Forum`_
- For discussions about development and questions about usage.
- < 1 day
- Community
* - `GitHub Issues`_
- For reporting bugs and filing feature requests.
- < 2 days
- Ray OSS Team
* - `Slack`_
- For collaborating with other Ray users.
- < 2 days
- Community
* - `StackOverflow`_
- For asking questions about how to use Ray.
- 3-5 days
- Community
* - `Meetup Group`_
- For learning about Ray projects and best practices.
- Monthly
- Ray DevRel
* - `Twitter`_
- For staying up-to-date on new features.
- Daily
- Ray DevRel
.. _`Discourse Forum`: https://discuss.ray.io/
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues
.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
.. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
.. _`Twitter`: https://x.com/raydistributed
.. _`Slack`: https://www.ray.io/join-slack?utm_source=github&utm_medium=ray_readme&utm_campaign=getting_involved
| null | Ray Team | ray-dev@googlegroups.com | null | null | Apache 2.0 | ray distributed parallel machine-learning hyperparameter-tuning reinforcement-learning deep-learning serving python | [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/ray-project/ray | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:15:52.818569 | ant_ray_cpp_nightly-3.0.0.dev20260218-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 10,927,995 | 56/e7/99f4471cf737ba95fc033da7fb0b8addb80e1dad8a2bf512b4f858a7ee05/ant_ray_cpp_nightly-3.0.0.dev20260218-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | cp39 | bdist_wheel | null | false | b6ded08738a3c5202763a0fd9088cd45 | e3f26cbf92ac2f5e303616cebe297d376ebb4640a19c898bc60c47a2401ec986 | 56e799f4471cf737ba95fc033da7fb0b8addb80e1dad8a2bf512b4f858a7ee05 | null | [] | 402 |
2.4 | teklia-line-image-extractor | 0.8.0 | A tool for extracting a text line image from the contour with different methods | # Line image extractor
This is a tool and a library to be used for extracting line images. Built by [Teklia](https://teklia.com) and freely available as open-source under the MIT licence.
It supports different extraction methods:
* boundingRect - bounding rectangle of the line polygon
* polygon - exact polygon
* min_area_rect - minimum area rectangle containing the polygon
* deskew_polygon - deskew the polygon
* deskew_min_area_rect - deskew the minimum area rectangle
* skew_polygon - skew the polygon (rotate by some angle)
* skew_min_area_rect - skew the minimum area rectangle (rotate by some angle)
Install the stable version of the library from PyPI:
```bash
pip install teklia-line-image-extractor
```
Install the library in development mode:
```bash
pip install -e .
```
Test extraction:
```bash
line-image-extractor -i tests/data/page_img.jpg -o out.jpg -p tests/data/line_polygon.json -e deskew_min_area_rect --color
```
How to use it:
```python
from pathlib import Path
import numpy as np
from line_image_extractor.extractor import extract, read_img, save_img
from line_image_extractor.image_utils import polygon_to_bbox
from line_image_extractor.image_utils import Extraction
page_img = read_img(Path("tests/data/page_img.jpg"))
polygon = np.asarray([[241, 1169], [2287, 1251], [2252, 1190], [244, 1091], [241, 1169]])
bbox = polygon_to_bbox(polygon)
extracted_img = extract(
page_img, polygon, bbox, Extraction.polygon
)
save_img("line_output.jpg", extracted_img)
```
| text/markdown | Martin Maarand | maarand@teklia.com | null | null | null | line transformation image extraction | [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Topic ::... | [] | https://gitlab.teklia.com/atr/line_image_extractor | null | null | [] | [] | [] | [
"opencv-python-headless==4.10.0.84",
"Pillow==12.1.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T09:15:52.516961 | teklia_line_image_extractor-0.8.0.tar.gz | 10,009 | 03/ee/dbb8859cbc244561312aaa77202ec78508d8f900883d1a23988a71892b6b/teklia_line_image_extractor-0.8.0.tar.gz | source | sdist | null | false | b398910163dabd9453793c6428211a8c | e613108ad7cca041e6d79d20a84069c65dbd5c276178a69e4cc8ce03e7778862 | 03eedbb8859cbc244561312aaa77202ec78508d8f900883d1a23988a71892b6b | null | [
"LICENSE"
] | 269 |
2.4 | langfuse-cli | 0.1.4 | CLI tool for Langfuse LLM observability platform, following gh-ux patterns | # langfuse-cli (`lf`)
[](https://github.com/aviadshiber/langfuse-cli/actions/workflows/test.yml)
[](https://pypi.org/project/langfuse-cli/)
[](https://pypi.org/project/langfuse-cli/)
[](https://opensource.org/licenses/MIT)
Observability-first CLI for the [Langfuse](https://langfuse.com) LLM platform, following [gh-ux patterns](https://cli.github.com/manual/).
**Features**: traces, observations, prompts, scores, datasets, experiments, sessions | JSON/table/TSV output | config profiles | system keyring secrets | agent-friendly `--json` mode
## Installation
```bash
# With uv (recommended)
uv tool install langfuse-cli
# With pip
pip install langfuse-cli
# With Homebrew
brew install aviadshiber/tap/langfuse-cli
# From source
git clone https://github.com/aviadshiber/langfuse-cli.git && cd langfuse-cli
uv sync && uv run lf --version
```
## Quick Start
```bash
# Set credentials (or use config file below)
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_HOST="https://cloud.langfuse.com" # optional, this is the default
# List recent traces
lf traces list --limit 5 --from 2026-02-01
# List prompts
lf prompts list
# Get JSON output (agent-friendly)
lf --json traces list --limit 5 --from 2026-02-01
```
## Configuration
### Resolution Order
Configuration is resolved in this order (first match wins):
1. **CLI flags** (`--host`, `--profile`)
2. **Environment variables** (`LANGFUSE_HOST`, `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`)
3. **Config file** (`~/.config/langfuse/config.toml`)
4. **System keyring** (macOS Keychain / Linux Secret Service)
5. **Defaults** (host: `https://cloud.langfuse.com`)
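Conceptually, the first-match-wins resolution for a setting such as the host works like this (a stdlib-only sketch, not the CLI's actual implementation; the secret key additionally falls back to the system keyring):

```python
import os

def resolve_host(cli_host=None, config=None):
    """First-match-wins resolution for the Langfuse host setting."""
    config = config or {}
    return (
        cli_host                               # 1. CLI flag (--host)
        or os.environ.get("LANGFUSE_HOST")     # 2. environment variable
        or os.environ.get("LANGFUSE_BASEURL")  #    (SDK-compatible alias)
        or config.get("host")                  # 3. config file
        or "https://cloud.langfuse.com"        # 5. default
    )
```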
### Config File
```toml
# ~/.config/langfuse/config.toml
[default]
host = "https://cloud.langfuse.com"
public_key = "pk-lf-..."
# secret_key stored in keyring, NOT in plaintext
[profiles.staging]
host = "https://staging.langfuse.example.com"
public_key = "pk-lf-staging-..."
[defaults]
limit = 50
output = "table"
```
### Secret Storage
Secret keys are stored in the system keyring (service: `langfuse-cli`):
- **macOS**: Keychain (`security add-generic-password -s langfuse-cli -a default/secret_key -w <secret>`)
- **Linux**: Secret Service API (GNOME Keyring / KDE Wallet)
- **Fallback**: Environment variables or config file
### Environment Variables
| Variable | Description |
|----------|-------------|
| `LANGFUSE_HOST` | Langfuse host URL |
| `LANGFUSE_BASEURL` | Alias for `LANGFUSE_HOST` (SDK compatibility) |
| `LANGFUSE_PUBLIC_KEY` | Public API key |
| `LANGFUSE_SECRET_KEY` | Secret API key |
| `LANGFUSE_PROFILE` | Config profile name |
| `LANGFUSE_FORCE_TTY` | Force TTY mode (set to `1`) |
| `NO_COLOR` | Disable color output |
## Global Options
Global options go **before** the subcommand:
```bash
lf --json traces list --limit 5 # correct
lf --quiet scores summary # correct
```
| Flag | Description |
|------|-------------|
| `--version`, `-v` | Show version and exit |
| `--host URL` | Override Langfuse host URL |
| `--profile NAME` | Use named config profile |
| `--json` | Output as JSON |
| `--fields FIELDS` | Filter JSON to specific fields (comma-separated, implies `--json`) |
| `--jq EXPR` | Filter JSON with jq expression (implies `--json`) |
| `--quiet`, `-q` | Suppress status messages |
## Commands
### Traces
```bash
# List traces (use --from to avoid timeouts on large projects)
lf traces list --limit 10 --from 2026-02-01
lf traces list --user-id user-123 --session-id sess-456
lf traces list --tags production,v2 --name chat-completion
# Get a single trace
lf traces get <trace-id>
# Visualize trace hierarchy as a tree
lf traces tree <trace-id>
```
| Flag | Type | Description |
|------|------|-------------|
| `--limit`, `-l` | INT | Max results (default: 50) |
| `--user-id`, `-u` | TEXT | Filter by user ID |
| `--session-id`, `-s` | TEXT | Filter by session ID |
| `--tags` | TEXT | Filter by tags (comma-separated) |
| `--name`, `-n` | TEXT | Filter by trace name |
| `--from` | DATETIME | Start time filter (ISO 8601) |
| `--to` | DATETIME | End time filter (ISO 8601) |
### Prompts
```bash
# List all prompts
lf prompts list
# Get a specific prompt
lf prompts get my-prompt
lf prompts get my-prompt --label production
lf prompts get my-prompt --version 3
# Compile a prompt with variables
lf prompts compile my-prompt --var name=Alice --var role=engineer
# Compare two versions
lf prompts diff my-prompt --v1 3 --v2 5
```
### Scores
```bash
# List scores
lf scores list --trace-id abc-123
lf scores list --name quality --from 2026-01-01
# Aggregated statistics
lf scores summary
lf scores summary --name quality --from 2026-01-01
```
### Datasets
```bash
# List datasets
lf datasets list
# Get dataset with items
lf datasets get my-dataset --limit 10
```
### Experiments
```bash
# List runs for a dataset
lf experiments list my-dataset
# Compare two runs
lf experiments compare my-dataset run-baseline run-improved
```
### Sessions
```bash
# List sessions
lf sessions list --limit 20 --from 2026-01-01
# Get session details
lf sessions get session-abc-123
```
### Observations
```bash
# List observations for a trace
lf observations list --trace-id abc-123
# Filter by type and name
lf observations list --type GENERATION --name llm-call --limit 20
# With time range
lf observations list --trace-id abc-123 --from 2026-01-01 --to 2026-01-31
```
| Flag | Type | Description |
|------|------|-------------|
| `--limit`, `-l` | INT | Max results (default: 50) |
| `--trace-id`, `-t` | TEXT | Filter by trace ID |
| `--type` | TEXT | Filter by type (GENERATION, SPAN, EVENT) |
| `--name`, `-n` | TEXT | Filter by observation name |
| `--from` | DATETIME | Start time filter (ISO 8601) |
| `--to` | DATETIME | End time filter (ISO 8601) |
## Output Modes
| Context | Format | Status Messages |
|---------|--------|-----------------|
| Terminal (TTY) | Rich aligned columns with colors | Shown |
| Piped (non-TTY) | Tab-separated values | Suppressed |
| `--json` flag | JSON array | Suppressed unless error |
| `--quiet` flag | Normal tables | All suppressed |
```bash
# Rich table (terminal)
lf prompts list
# Tab-separated (piped)
lf traces list --limit 5 --from 2026-02-01 | head
# JSON (agent-friendly)
lf --json traces list --limit 5 --from 2026-02-01
# Filtered JSON fields
lf --fields id,name,userId traces list --limit 5 --from 2026-02-01
# jq expression
lf --jq '.[].name' traces list --limit 5 --from 2026-02-01
```
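The mode selection above can be sketched in a few lines. This is an illustrative Python sketch of the typical TTY-detection pattern, not the actual langfuse-cli source; the function and format names are assumptions:

```python
import sys

def pick_format(json_flag: bool = False) -> str:
    """Choose an output format per the table above (illustrative sketch,
    not the actual langfuse-cli implementation)."""
    if json_flag:
        return "json"        # --json always forces a JSON array
    if sys.stdout.isatty():
        return "rich-table"  # interactive terminal: aligned, colored columns
    return "tsv"             # piped: tab-separated values, status messages off
```

The same `isatty()` check is what lets `lf traces list | head` emit clean tab-separated lines without any extra flags.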
## Exit Codes
| Code | Meaning |
|------|---------|
| 0 | Success |
| 1 | Error (API failure, auth, general) |
| 2 | Resource not found |
| 3 | Cancelled (Ctrl+C) |
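These exit codes make `lf` easy to script against. A minimal sketch of mapping a command's exit code to the meanings in the table (the `classify` helper is hypothetical, not part of the CLI):

```python
import subprocess

# Exit-code meanings from the table above.
EXIT_MEANINGS = {0: "success", 1: "error", 2: "not found", 3: "cancelled"}

def classify(cmd: list[str]) -> str:
    """Run a command and translate its exit code into a human-readable status."""
    code = subprocess.run(cmd, capture_output=True).returncode
    return EXIT_MEANINGS.get(code, "unknown")

# e.g. classify(["lf", "prompts", "get", "my-prompt"])
```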
## Architecture
The CLI uses a hybrid SDK + REST approach:
- **REST (httpx)**: Traces, observations, scores, sessions — full filter control, 60s timeout
- **SDK (langfuse)**: Prompts (built-in 300s caching), datasets, experiments — complex operations
## Troubleshooting
**Timeouts on large projects** — Use `--from` to limit the time range:
```bash
lf traces list --from 2026-02-01 # fast: scoped query
lf traces list # slow: may timeout on large projects
```
**Datetime format** — `--from`/`--to` accept ISO 8601 without milliseconds or `Z` suffix:
```bash
lf traces list --from 2026-02-16 # date only
lf traces list --from 2026-02-16T10:30:00 # datetime
lf traces list --from "2026-02-16 10:30:00" # space separator (quote it)
# NOT supported: 2026-02-16T10:30:00.213Z (ms + Z suffix)
```
**Authentication errors** — Verify credentials are set:
```bash
echo $LANGFUSE_PUBLIC_KEY # should show pk-lf-...
echo $LANGFUSE_SECRET_KEY # should show sk-lf-...
lf --json traces list --limit 1 --from 2026-02-01 # test connectivity
```
**Batch processing** — Use `--to` with the last timestamp to page through results:
```bash
# Batch 1
RAW=$(lf --jq '.[-1].timestamp' traces list --limit 50 --from 2026-02-01 2>/dev/null)
CURSOR=$(echo "$RAW" | tr -d '"' | sed 's/\.[0-9]*Z$//')
# Batch 2 (older results)
lf --json traces list --limit 50 --from 2026-02-01 --to "$CURSOR"
```
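The two-batch recipe above generalizes to a loop. A hedged Python sketch, assuming `lf` is on `PATH` and each trace object carries a `timestamp` field; the helper names are illustrative:

```python
import json
import subprocess

def normalize_cursor(ts: str) -> str:
    """Strip fractional seconds and the trailing Z, like the sed call above."""
    return ts.split(".")[0].rstrip("Z")

def fetch_all_traces(start: str, batch: int = 50) -> list[dict]:
    """Page backwards through traces, feeding each batch's oldest
    timestamp into --to. Sketch of the batching recipe above."""
    traces: list[dict] = []
    cursor = None
    while True:
        cmd = ["lf", "--json", "traces", "list",
               "--limit", str(batch), "--from", start]
        if cursor:
            cmd += ["--to", cursor]
        page = json.loads(
            subprocess.run(cmd, capture_output=True, text=True).stdout)
        traces.extend(page)
        if len(page) < batch:  # short page: nothing older left
            return traces
        cursor = normalize_cursor(page[-1]["timestamp"])
```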
## Shell Completions
`lf` supports tab completions for bash, zsh, and fish via Typer's built-in mechanism.
**Bash** — add to `~/.bashrc`:
```bash
eval "$(_LF_COMPLETE=bash_source lf)"
```
**Zsh** — add to `~/.zshrc`:
```bash
eval "$(_LF_COMPLETE=zsh_source lf)"
```
**Fish** — add to `~/.config/fish/config.fish`:
```fish
_LF_COMPLETE=fish_source lf | source
```
After adding, restart your shell or source the config file.
## Development
```bash
# Setup
git clone https://github.com/aviadshiber/langfuse-cli.git && cd langfuse-cli
uv sync
# Run tests
uv run pytest
# Lint, format & type check
uv run ruff check src/ tests/
uv run ruff format --check src/ tests/
uv run mypy src/
# Run locally
uv run lf --version
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed development guidelines.
## License
[MIT](LICENSE)
| text/markdown | Aviad S. | null | null | null | MIT | cli, langfuse, llm, observability, prompts, tracing | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"keyring>=25.0.0",
"langfuse>=3.0.0",
"pydantic>=2.0.0",
"typer[all]>=0.12.0"
] | [] | [] | [] | [
"Homepage, https://github.com/aviadshiber/langfuse-cli",
"Repository, https://github.com/aviadshiber/langfuse-cli",
"Bug Tracker, https://github.com/aviadshiber/langfuse-cli/issues",
"Changelog, https://github.com/aviadshiber/langfuse-cli/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:15:44.754205 | langfuse_cli-0.1.4.tar.gz | 140,815 | 43/b4/29a2466a8e92d410622df0d4001ac998c5d9caf579b6fe11b81880749bc0/langfuse_cli-0.1.4.tar.gz | source | sdist | null | false | 3b09ec3a5b376efdf95833eec3eff768 | 207a4a299ad5fd158554b78b0a3aee4bceb48e4e7f0465b08feb96a7cf2629f2 | 43b429a2466a8e92d410622df0d4001ac998c5d9caf579b6fe11b81880749bc0 | null | [
"LICENSE"
] | 298 |
2.4 | simpleimageio | 1.10.0 | A very simple Python wrapper to read and write various HDR and LDR image file formats. | 
<a href="https://www.nuget.org/packages/SimpleImageIO/">

</a>
# Simple Image IO
A lightweight C# and Python wrapper to read and write RGB images from / to various file formats.
Supports .exr (with layers) via [tinyexr](https://github.com/syoyo/tinyexr) and a number of other formats (including .png, .jpg, and .bmp) via [stb_image](https://github.com/nothings/stb/blob/master/stb_image.h) and [stb_image_write](https://github.com/nothings/stb/blob/master/stb_image_write.h).
A subset of TIFF can be read and written via [tinydngloader](https://github.com/syoyo/tinydngloader).
We also implement our own importer and exporter for [PFM](http://www.pauldebevec.com/Research/HDR/PFM/).
In addition, the package offers some basic image manipulation functionality, error metrics, and tone mapping.
The C# wrapper further offers utilities for thread-safe atomic splatting of pixel values, and sending image data to the [tev](https://github.com/Tom94/tev) viewer via sockets. It also contains a very basic wrapper around [Intel Open Image Denoise](https://github.com/OpenImageDenoise/oidn).
The [**Nuget package**](https://www.nuget.org/packages/SimpleImageIO/) contains prebuilt binaries of the C++ wrapper for x86-64 Windows, Ubuntu, and macOS ([.github/workflows/build.yml](.github/workflows/build.yml)).
The [**Python package**](https://pypi.org/project/SimpleImageIO/) is set up to automatically download an adequate CMake version and compile the C++ code on any platform.
Except for the optional Intel Open Image Denoise, all dependencies are header-only and unintrusive, so this library should work pretty much anywhere without any hassle.
## Usage example (C#)
The following creates a one pixel image and writes it to various file formats:
```C#
RgbImage img = new(width: 1, height: 1);
img.SetPixel(0, 0, new(0.1f, 0.4f, 0.9f));
img.WriteToFile("test.exr");
img.WriteToFile("test.png");
img.WriteToFile("test.jpg");
```
Reading an image from one of the supported formats is equally simple:
```C#
RgbImage img = new("test.exr");
Console.WriteLine(img.GetPixel(0, 0).Luminance);
```
The pixel coordinate (0,0) corresponds to the top left corner of the image. Coordinates outside the valid range are clamped automatically; no error is raised. The framework also offers a `MonochromeImage` with a single channel per pixel. Further, the base class `ImageBase` can be used directly for images with arbitrary channel count (`RgbImage` and `MonochromeImage` only add some convenience functions like directly returning an `RgbColor` object).
As an added bonus, the C# wrapper can connect to the [tev](https://github.com/Tom94/tev) HDR viewer and directly display image data via sockets. The following example generates a monochrome image and sends it to tev:
```C#
TevIpc tevIpc = new(); // uses tev's default port on localhost
// Create the image and initialize a tev sync
MonochromeImage image = new(width: 20, height: 10);
tevIpc.CreateImageSync("MyAwesomeImage", 20, 10, ("", image));
// Pretend we are a renderer and write some image data.
image.SetPixel(0, 0, val: 1);
image.SetPixel(10, 0, val: 2);
image.SetPixel(0, 9, val: 5);
image.SetPixel(10, 9, val: 10);
// Tell the TevIpc class to update the image displayed by tev
// (this currently retransmits all pixel values)
tevIpc.UpdateImage("MyAwesomeImage");
```
## Usage example (Python)
The following creates a one pixel image, writes it to various file formats, reads one of them back in, and prints the red color channel of the pixel.
The result is then sent to the [tev](https://github.com/Tom94/tev) HDR viewer via sockets (modified version of https://gist.github.com/tomasiser/5e3bacd72df30f7efc3037cb95a039d3).
```Python
import simpleimageio as sio
sio.write("test.exr", [[[0.1, 0.4, 0.9]]])
sio.write("test.png", [[[0.1, 0.4, 0.9]]])
sio.write("test.jpg", [[[0.1, 0.4, 0.9]]])
img = sio.read("test.exr")
print(img[0,0,0])
with sio.TevIpc() as tev:
tev.display_image("image", img)
tev.display_layered_image("layers", { "stuff": img, "morestuff": img })
```
In Python, an image is a 3D row-major array, where `[0,0,0]` is the red color channel of the top left corner.
The convention is compatible with most other libraries that make use of numpy arrays for image representation, like matplotlib.
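Because images are plain row-major arrays, the usual numpy operations apply directly. A small sketch of the indexing convention (numpy only, no file I/O):

```python
import numpy as np

# An image is a row-major [row, column, channel] array.
img = np.zeros((10, 20, 3))   # height 10, width 20, RGB
img[0, 0] = [0.1, 0.4, 0.9]   # top-left pixel, as in the example above

red_channel = img[:, :, 0]    # 2D view of the red channel
# Per-pixel Rec. 709 luminance via a matrix-vector product:
luminance = img @ [0.2126, 0.7152, 0.0722]
print(luminance[0, 0])  # 0.1*0.2126 + 0.4*0.7152 + 0.9*0.0722 = 0.37232
```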
## Flip books for Jupyter and web
Both the Python and the .NET libraries can generate an interactive HTML viewer to display and compare images visually by flipping between them. See [FlipBookExample.dib](FlipBookExample.dib) for an example with .NET interactive and C\#, [FlipBookExample.fsx](FlipBookExample.fsx) for a static webpage generator with F\#, or [flipbook.ipynb](flipbook.ipynb) for a Jupyter notebook with Python.

## Building from source
If you are on an architecture different from x86-64, you will need to compile the C++ wrapper from source.
Below, you can find instructions on how to accomplish that.
### Dependencies
All dependencies *except OIDN* are header-only and included in the repository. Building requires
- a C++20 compiler
- CMake
- [.NET 6.0](https://dotnet.microsoft.com/) (or newer)
- Python ≥ 3.6
### Building the C# wrapper - quick and simple
```
pwsh ./make.ps1
```
This script downloads precompiled binaries for Open Image Denoise, copies them to the correct directory, builds the C++ code, and then builds and tests the C# wrapper.
### Building the C# wrapper - manually
Build the C++ low level library with [CMake](https://cmake.org/):
```
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build . --config Release
cd ..
```
If you want to use the `Denoiser` class, compiled binaries of Open Image Denoise must be in the correct `runtimes` folder. For example, on x64 Linux, there should be a `libOpenImageDenoise.so` in `runtimes/linux-x64/native/`. See the copy operations in [make.ps1](make.ps1) for details. The project works without these binaries in place, but then any attempt to use the `Denoiser` class will result in a `DllNotFound` exception at runtime.
When using binaries, especially the official ones, be aware that packaging for .NET requires the RPATH of the shared library to include the same folder that contains the library itself. Otherwise, TBB will not be found. If you don't understand what that means, or how it can be achieved, check out the build script in [RenderLibs](https://github.com/pgrit/RenderLibs). (This does not apply to Windows, since the linker there has this behavior by default.)
Build the C# wrapper and run the tests:
```
dotnet build && dotnet test
```
To see if the denoiser is linked correctly, you can additionally run
```
dotnet run --project SimpleImageIO.Integration
```
These integration tests assume that you have the [tev](https://github.com/Tom94/tev) viewer open and listening to the default port on localhost. But you can also comment out the tev-related tests and only run the denoiser ones.
### Building the C# wrapper on other platforms
The [SimpleImageIO.csproj](SimpleImageIO/SimpleImageIO.csproj) file needs to copy the correct .dll / .so / .dylib file to the appropriate runtime folder.
Currently, the runtime identifiers (RID) and copy instructions are only set for the x86-64 versions of Windows, Linux, and macOS.
To run the framework on other architectures, you will need to add them to the .csproj file.
You can find the right RID for your platform here: [https://docs.microsoft.com/en-us/dotnet/core/rid-catalog](https://docs.microsoft.com/en-us/dotnet/core/rid-catalog).
Note that, currently, Open Image Denoise is included in binary form. The `Denoiser` class therefore cannot be used on platforms other than x86-64 Windows, Linux, or macOS. Attempting to use it on other platforms will cause a `DllNotFound` exception.
Then, you should be able to follow the steps above and proceed as usual.
### Building the Python wrapper
The simplest route is to run the build script
```
pwsh ./make.ps1
```
which builds and installs the Python lib with pip, using whatever `python` executable is currently in the path.
If you need manual control, e.g., a specific Python version, here are the required steps:
```
cd ./FlipViewer
npm install
npm run build
cd ..
cp ./FlipViewer/dist/flipbook.js PyWrapper/simpleimageio/flipbook.js
python -m build
python -m pip install ./dist/simpleimageio-*.whl
```
The first commands build, bundle, and pack the frontend code. Then, we build the Python package itself and install it via pip. The `*` must be substituted with the correct version number and runtime identifier.
The tests can be run via:
```
cd PyTest
python -m unittest
```
| text/markdown | Pascal Grittmann | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/pgrit/SimpleImageIO | null | >=3.6 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:15:15.746280 | simpleimageio-1.10.0.tar.gz | 401,958 | b5/36/20fab8eb9647a43942f537295ef73fc98c6b9b7dde7a3af99713d73f0c7d/simpleimageio-1.10.0.tar.gz | source | sdist | null | false | 06a61f7293a7b7ec0b8d6d266caf74d9 | 7400c74fb3c760b1c0c8d75e9ecfe1c43195a4c7ac68b651c529fb8d7940f189 | b53620fab8eb9647a43942f537295ef73fc98c6b9b7dde7a3af99713d73f0c7d | null | [
"LICENSE"
] | 185 |
2.4 | apk-distribution-pipeline | 1.1.0 | Build, upload, and notify APK releases from a single CLI | # APK Distribution Pipeline
A single-command pipeline that bumps your Android app version, builds the APK via Gradle, uploads it to Google Drive, and sends a download link to Telegram.
## Why?
Not every team needs (or wants) Google Play for internal distribution. Maybe you don't have a Play Console account, maybe you're distributing to a small group of testers, or maybe you just want something simpler. This tool gives you a one-command workflow: bump version → build → upload → notify — all without touching a store.
## How It Works
```
bump version → build APK → upload to Google Drive → notify via Telegram
```
1. **Version Bump** — Reads `version.properties` from your Android module, increments the version (`major` / `minor` / `patch`), and writes it back atomically.
2. **Gradle Build** — Runs `./gradlew :<module>:assemble<Variant>` inside your Android project.
3. **Drive Upload** — Uploads the generated APK to a Google Drive folder using a service account, sets it to "anyone with link can view", and generates a direct download URL.
4. **Telegram Notification** — Sends a formatted message to a Telegram chat with inline buttons for downloading the APK and opening the Drive folder.
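Step 1 can be sketched in a few lines of Python. This is an illustrative sketch, not the tool's source; the `bump_patch` helper name is an assumption, while the `version.properties` keys match the README:

```python
import os
import tempfile
from pathlib import Path

def bump_patch(props: Path) -> str:
    """Read version.properties, bump the patch version, and write it
    back atomically (write temp file, then rename over the original)."""
    lines = dict(l.split("=", 1) for l in props.read_text().splitlines() if "=" in l)
    major, minor, patch = map(int, lines["VERSION_NAME"].split("."))
    lines["VERSION_NAME"] = f"{major}.{minor}.{patch + 1}"
    lines["VERSION_CODE"] = str(int(lines["VERSION_CODE"]) + 1)

    # A temp file in the same directory + os.replace means a crash
    # mid-write never leaves a half-written properties file behind.
    fd, tmp = tempfile.mkstemp(dir=props.parent)
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(f"{k}={v}" for k, v in lines.items()) + "\n")
    os.replace(tmp, props)
    return lines["VERSION_NAME"]
```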
## Prerequisites
- **Python 3.8+**
- **Android SDK** installed (auto-detected from `ANDROID_HOME`, `ANDROID_SDK_ROOT`, or common paths)
- **Gradle Wrapper** (`gradlew` / `gradlew.bat`) present in your Android project root
- A **Google Cloud** project with Drive API enabled (service account or OAuth credentials)
- A **Telegram Bot** token (create one via [@BotFather](https://t.me/BotFather))
### Supported Platforms
| Platform | Android Studio | SDK | Java | gradlew |
|---|---|---|---|---|
| **Linux** | `~/android-studio`, `/opt/android-studio`, `/usr/local/android-studio` | `~/Android/Sdk` | `/usr/lib/jvm/java-*`, bundled JBR | `gradlew` |
| **macOS** | `/Applications/Android Studio.app` | `~/Library/Android/sdk` | `/Library/Java/JavaVirtualMachines`, bundled JBR | `gradlew` |
| **Windows** | `%PROGRAMFILES%\Android\Android Studio`, `%LOCALAPPDATA%\Android\Android Studio` | `%LOCALAPPDATA%\Android\Sdk` | bundled JBR | `gradlew.bat` |
## Environment Check
Run the helper script to verify your build environment before using the pipeline:
```bash
apkdist-env-check --project /path/to/your/android/project
```
Example output:
```
🔍 Scanning build environment...
✅ Android Studio : /opt/android-studio
✅ Android SDK : /home/user/Android/Sdk
✅ Java : /usr/lib/jvm/java-17-openjdk-amd64
openjdk version "17.0.8" 2023-07-18
✅ gradlew : /home/user/projects/my-app/gradlew
✅ Environment looks good!
💡 Suggested exports for your shell:
export ANDROID_HOME=/home/user/Android/Sdk
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
```
## Setup
### 1. Install (Global CLI)
Recommended for OSS users:
```bash
pipx install apk-distribution-pipeline
```
From source (before publishing to PyPI):
```bash
pipx install git+https://github.com/adityaa-codes/apk-distribution.git
```
From a local checkout:
```bash
pipx install .
```
### 2. Development Install (Contributors)
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
### 3. Create a `.env` File
The CLI loads config in this order:
1. `--env-file /path/to/.env`
2. `./.env` (current directory)
3. Global config path:
- Linux/macOS: `~/.config/apkdist/.env`
- Windows: `%APPDATA%\\apkdist\\.env`
Example `.env`:
```env
# Required
ANDROID_PROJECT_PATH=/path/to/your/android/project
DRIVE_FOLDER_ID=your_google_drive_folder_id
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_telegram_chat_id
# Drive auth — set ONE of these (see Google Drive Setup below)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# OAUTH_CREDENTIALS_FILE=/path/to/credentials.json
# OAUTH_TOKEN_FILE=token.json
# Optional (defaults shown)
APP_MODULE_NAME=app
```
> ⚡ — at least one of `GOOGLE_APPLICATION_CREDENTIALS` or `OAUTH_CREDENTIALS_FILE` must be set (see Google Drive Setup below).
| Variable | Required | Description |
|---|---|---|
| `ANDROID_PROJECT_PATH` | ✅ | Absolute path to Android project root (where `gradlew` lives) |
| `DRIVE_FOLDER_ID` | ✅ | Google Drive folder ID to upload APKs into |
| `TELEGRAM_BOT_TOKEN` | ✅ | Telegram bot token from BotFather |
| `TELEGRAM_CHAT_ID` | ✅ | Telegram chat/group/channel ID to send notifications to |
| `GOOGLE_APPLICATION_CREDENTIALS` | ⚡ | Path to service account JSON key (for Shared Drives) |
| `OAUTH_CREDENTIALS_FILE` | ⚡ | Path to OAuth client credentials JSON (for personal accounts) |
| `OAUTH_TOKEN_FILE` | ❌ | Path to save the OAuth token (default: `token.json` in the platform config dir) |
| `APP_MODULE_NAME` | ❌ | Android module name (default: `app`) |
| `GOOGLE_DELEGATE_EMAIL` | ❌ | Email to impersonate via OAuth delegation (for Google Workspace users) |
| `BUILD_VARIANT` | ❌ | Gradle build variant, passed via `--variant` CLI flag (default: `release`) |
### 4. Android Project Setup
Your Android module must have a `version.properties` file at `<project>/<module>/version.properties`:
```properties
VERSION_CODE=1
VERSION_NAME=1.0.0
```
Your `build.gradle` should read from this file to set `versionCode` and `versionName`.
### 5. Google Drive Setup
Service accounts created after April 15, 2025, no longer have storage quota ([details](https://forum.rclone.org/t/google-drive-service-account-changes-and-rclone/50136)). Choose the option that matches your account type:
---
#### Option A: Personal Google Account (OAuth consent)
1. Go to [Google Cloud Console](https://console.cloud.google.com/) and create a project (or use an existing one).
2. **Enable the Google Drive API:**
- Navigate to **APIs & Services → Library**
- Search for "Google Drive API" and click **Enable**
3. **Set up OAuth consent screen:**
- Go to **APIs & Services → OAuth consent screen**
- User type: **External** → click Create
- Fill in the app name and your email
- Under **Scopes**, add `https://www.googleapis.com/auth/drive`
- Under **Test users**, add your Google email
- Save
4. **Create OAuth credentials:**
- Go to **APIs & Services → Credentials**
- Click **Create Credentials → OAuth Client ID**
- Application type: **Desktop app**
- Click Create, then **Download JSON**
- Save the file as `credentials.json` in your project folder
5. **Get your Drive folder ID:**
- Open the target folder in Google Drive
- The folder ID is the last part of the URL: `https://drive.google.com/drive/folders/<THIS_IS_THE_FOLDER_ID>`
6. **Set in `.env`:**
```env
OAUTH_CREDENTIALS_FILE=credentials.json
DRIVE_FOLDER_ID=<your_folder_id>
```
7. On the **first run**, a browser window will open asking you to sign in and authorize the app. After that, a `token.json` file is saved locally and reused for future runs.
---
#### Option B: Google Workspace Account (Shared Drive + Service Account)
1. Go to [Google Cloud Console](https://console.cloud.google.com/) and create a project (or use an existing one).
2. **Enable the Google Drive API:**
- Navigate to **APIs & Services → Library**
- Search for "Google Drive API" and click **Enable**
3. **Create a Service Account:**
- Go to **IAM & Admin → Service Accounts**
- Click **Create Service Account**, give it a name, and click Done
- Click the service account → **Keys → Add Key → Create new key → JSON**
- Download the JSON key file and save it as `service-account.json`
- Note the service account email (looks like `name@project.iam.gserviceaccount.com`)
4. **Set up a Shared Drive:**
- In Google Drive, click **Shared drives** (left sidebar) → **New shared drive**
- Create a folder inside the Shared Drive for your APK uploads
- Click **Manage members** on the Shared Drive and add the service account email as a **Contributor**
5. **Get the folder ID:**
- Open the target folder in the Shared Drive
- The folder ID is the last part of the URL: `https://drive.google.com/drive/folders/<THIS_IS_THE_FOLDER_ID>`
6. **Set in `.env`:**
```env
GOOGLE_APPLICATION_CREDENTIALS=service-account.json
DRIVE_FOLDER_ID=<folder_id_inside_shared_drive>
```
---
> If both `GOOGLE_APPLICATION_CREDENTIALS` and `OAUTH_CREDENTIALS_FILE` are set, OAuth takes priority.
## Usage
```bash
apkdist <bump_type> [--variant <build_variant>] [--force] [--dry-run] [--env-file <path>]
```
Where `<bump_type>` is one of: `major`, `minor`, or `patch`, and `--variant` is the Gradle build variant (default: `release`).
| Flag | Description |
|---|---|
| `--variant` | Build variant (default: `release`) |
| `--force` | Force rebuild and re-upload even if a fresh APK exists |
| `--dry-run` | Simulate the pipeline without making any changes |
> **Smart caching:** If an APK was built in the last 30 minutes, the build step is skipped. If the APK already exists on Google Drive for that version, the upload step is skipped. The Telegram notification is always sent. Use `--force` to bypass both checks.
### Examples
**Patch release** (1.2.3 → 1.2.4):
```bash
apkdist patch
```
**Minor release** (1.2.4 → 1.3.0):
```bash
apkdist minor
```
**Major release** (1.3.0 → 2.0.0):
```bash
apkdist major
```
**Staging variant build**:
```bash
apkdist patch --variant staging
```
**Force rebuild and re-upload**:
```bash
apkdist patch --force
```
**Dry run** (validates config and paths without building, uploading, or notifying):
```bash
apkdist patch --dry-run
```
**Explicit env file**:
```bash
apkdist patch --env-file ~/.config/apkdist/.env
```
### Example Output
```
✅ Android SDK found at: /home/user/Android/Sdk
📁 Project: /home/user/projects/my-app
📦 Module: app | Variant: release
🔧 Gradle: /home/user/projects/my-app/gradlew
🔄 Bumping version (patch)...
✅ Version Updated: 1.2.3 -> 1.2.4
🔨 Building APK with Gradle...
✅ Build Successful!
☁️ Uploading to Google Drive...
✅ Uploaded to Google Drive!
🚀 Sending Telegram Notification...
✅ Notification Sent!
```
## Startup Checks
The script validates the following before running any pipeline step:
- All required environment variables are set
- Android SDK is detected (warns if not found)
- Android project directory exists
- `gradlew` exists and is executable (auto-fixes permissions if needed)
- Service account JSON file exists
## Troubleshooting
| Error | Fix |
|---|---|
| `Required environment variable 'X' is not set` | Add the missing variable to your `.env` file |
| `Android SDK not found` | Install Android SDK and set `ANDROID_HOME` in your shell or `.env` |
| `Could not find gradlew` | Verify `ANDROID_PROJECT_PATH` points to the correct project root |
| `Service account file not found` | Check `GOOGLE_APPLICATION_CREDENTIALS` path |
| `Could not find APK in ...` | Verify `BUILD_VARIANT` matches your Gradle config (check `build/outputs/apk/` for the actual variant folder name) |
| `Google Drive auth failed` | Ensure the service account JSON is valid and Drive API is enabled |
| `Telegram Error` | Verify your bot token and chat ID; make sure the bot is added to the chat |
## Cleanup
Old APK files pile up on Google Drive. Use the cleanup script to remove them:
```bash
apkdist-cleanup # dry-run — lists old APKs without deleting
apkdist-cleanup --delete # lists APKs older than 7 days, asks for confirmation, then deletes
apkdist-cleanup --days 14 --delete # same but for APKs older than 14 days
```
The `--delete` flag always shows the file list first and asks for a `y/N` confirmation before removing anything.
## Publishing to PyPI (Maintainers)
1. Bump the version in:
- `pyproject.toml` (`[project].version`)
- `apkdist/__init__.py` (`__version__`)
2. Build and validate distributions:
```bash
python3 -m pip install --upgrade build twine
python3 -m build
python3 -m twine check dist/*
```
3. Upload to TestPyPI (recommended first):
```bash
python3 -m twine upload --repository testpypi dist/*
```
4. Upload to PyPI:
```bash
python3 -m twine upload dist/*
```
5. Verify install in a clean environment:
```bash
pipx install apk-distribution-pipeline
apkdist --help
```
For CI-based releases, use PyPI Trusted Publishing with a GitHub Actions workflow that runs on version tags.
## License
This project is licensed under the [MIT License](LICENSE.md).
| text/markdown | APK Distribution Contributors | null | null | null | MIT License
Copyright (c) 2026 Aditya Gupta
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| android, apk, google-drive, telegram, automation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :... | [] | null | null | >=3.8 | [] | [] | [] | [
"python-dotenv",
"google-api-python-client",
"google-auth-httplib2",
"google-auth-oauthlib",
"requests"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T09:15:00.315229 | apk_distribution_pipeline-1.1.0.tar.gz | 15,701 | d1/9f/d505c52228587691926f9e340fbc09385ef940ae35812cc92783e299a8a6/apk_distribution_pipeline-1.1.0.tar.gz | source | sdist | null | false | 812e10166e07e53c2d82c8839408afe3 | 06c361709ce2ce300f85eae2c8abdb4c74629938d15cdd19c0783e5ae9adee7f | d19fd505c52228587691926f9e340fbc09385ef940ae35812cc92783e299a8a6 | null | [
"LICENSE.md"
] | 299 |
2.4 | testigo-recall-mcp | 0.4.3 | MCP server for querying testigo-recall knowledge bases | # testigo-recall-mcp
MCP server that exposes a pre-scanned codebase knowledge base to AI agents. Instead of reading source files directly, agents query pre-extracted facts about code behavior, design decisions, and assumptions — saving time and tokens.
Works with Claude Code, Cursor, Windsurf, and any MCP-compatible client.
## Installation
```bash
pip install testigo-recall-mcp
```
## Configuration
### Claude Code
Add to your Claude Code MCP settings (`~/.claude/mcp.json` or project-level):
```json
{
"mcpServers": {
"testigo-recall": {
"command": "testigo-recall-mcp",
"env": {
"TESTIGO_RECALL_REPO": "owner/repo"
}
}
}
}
```
### Cursor / Windsurf
Add to your MCP configuration:
```json
{
"mcpServers": {
"testigo-recall": {
"command": "testigo-recall-mcp",
"env": {
"TESTIGO_RECALL_REPO": "owner/repo"
}
}
}
}
```
## Environment Variables
| Variable | Description |
|----------|-------------|
| `TESTIGO_RECALL_REPO` | GitHub repo (e.g. `owner/repo`). Auto-downloads the knowledge base from the `knowledge-base` release tag. |
| `TESTIGO_RECALL_DB_PATH` | Explicit path to a local SQLite knowledge base file. Takes priority over repo sync. |
| `GITHUB_TOKEN` | GitHub token for private repos. Public repos work without auth. |
## Tools
The server exposes 5 tools to AI agents:
### `search_codebase`
Full-text search across the knowledge base. Returns facts ranked by relevance.
- `query` — search keywords (e.g. "authentication", "payment flow")
- `category` — optional filter: `behavior`, `design`, or `assumption`
- `min_confidence` — confidence threshold 0.0-1.0
- `limit` — max results (default: 20)
### `get_module_facts`
Deep dive into a specific module. Use `search_codebase` first to discover module IDs.
- `module_id` — e.g. `SCAN:backend/app/api` or `PR-123`
### `get_recent_changes`
Most recently extracted facts across the codebase.
- `category` — optional filter
- `limit` — number of results (default: 10)
### `get_component_impact`
Blast radius analysis — shows what depends on a component and what it depends on.
- `component_name` — file path or service name (e.g. `api_service.py`)
### `list_modules`
Lists all scanned modules in the knowledge base.
- `repo_name` — optional repository filter
## How It Works
The knowledge base contains pre-extracted facts organized by category:
- **behavior** — what the code does (triggers, outcomes)
- **design** — how it's built (decisions, patterns, trade-offs)
- **assumption** — what it expects (invariants, prerequisites)
Facts come from two sources:
- **SCAN facts** (`SCAN:module/path`) — current state of a module, refreshed automatically
- **PR facts** (`PR-123`) — what a specific PR changed, preserved as history
The server uses SQLite with FTS5 full-text search for fast, relevance-ranked queries.
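The query side can be sketched with the standard-library `sqlite3` module. The table and column names below are illustrative only; the real knowledge-base schema may differ:

```python
import sqlite3

# In-memory FTS5 table standing in for the knowledge base.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE facts USING fts5(module_id, category, body)")
db.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [
        ("SCAN:backend/app/api", "behavior", "login endpoint rate-limits by IP"),
        ("PR-123", "design", "payment flow uses an outbox table"),
    ],
)

# bm25() supplies the relevance ranking behind "relevance-ranked queries".
rows = db.execute(
    "SELECT module_id, body FROM facts WHERE facts MATCH ? ORDER BY bm25(facts)",
    ("payment",),
).fetchall()
print(rows)  # [('PR-123', 'payment flow uses an outbox table')]
```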
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.5",
"mcp>=1.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T09:13:50.403583 | testigo_recall_mcp-0.4.3.tar.gz | 16,337 | 20/4c/00e3f54eced1661b2938c900f62e927eca0881f92367b8783f07e5252c9e/testigo_recall_mcp-0.4.3.tar.gz | source | sdist | null | false | 39380e4392302bdb831a8cc1ed7dc37d | 5ae45073733d2811f045b5240595ae20ed07fa2ab67a203726c7057e662c190b | 204c00e3f54eced1661b2938c900f62e927eca0881f92367b8783f07e5252c9e | null | [] | 245 |
2.4 | djaploy | 0.4.0 | Modular Django deployment system based on pyinfra | # djaploy
[](https://pypi.org/project/djaploy/)
[](https://pypi.org/project/djaploy/)
[](https://github.com/Technology-Company/djaploy/blob/main/LICENSE)
[](https://github.com/Technology-Company/djaploy/commits)
A modular Django deployment system based on [pyinfra](https://pyinfra.com/), designed to standardize and simplify infrastructure management across Django projects.
## Features
- **Modular Architecture** — Extensible plugin system for deployment components
- **Django Integration** — Seamless integration via Django management commands
- **Multiple Deployment Modes** — Support for `--local`, `--latest`, and `--release` deployments
- **Infrastructure as Code** — Define infrastructure using Python with pyinfra
- **Git-based Artifacts** — Automated artifact creation from git repository
- **SSL Management** — Built-in support for SSL certificates and Let's Encrypt
- **Python Compilation** — Optionally compile Python from source for specific versions
## Installation
```bash
pip install djaploy
```
Or with Poetry:
```bash
poetry add djaploy
```
### Optional extras
```bash
pip install djaploy[certificates] # Let's Encrypt / certbot support
pip install djaploy[bunny] # Bunny DNS certbot plugin
```
## Quick Start
### 1. Add to Django settings
```python
INSTALLED_APPS = [
# ...
"djaploy",
]
# Required paths
from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
PROJECT_DIR = BASE_DIR
GIT_DIR = PROJECT_DIR.parent
DJAPLOY_CONFIG_DIR = PROJECT_DIR / "infra"
```
### 2. Create project structure
```
your-django-project/
├── manage.py
├── your_app/
│ └── settings.py
└── infra/ # Deployment configuration
├── config.py # Main configuration
├── inventory/ # Host definitions per environment
│ ├── production.py
│ └── staging.py
└── deploy_files/ # Environment-specific files
├── production/
│ └── etc/systemd/system/app.service
└── staging/
```
### 3. Configure deployment
**infra/config.py**:
```python
from djaploy.config import DjaployConfig
from pathlib import Path
config = DjaployConfig(
project_name="myapp",
djaploy_dir=Path(__file__).parent,
manage_py_path=Path("manage.py"),
python_version="3.11",
app_user="app",
ssh_user="deploy",
modules=[
"djaploy.modules.core",
"djaploy.modules.nginx",
"djaploy.modules.systemd",
],
services=["myapp", "myapp-worker"],
)
```
### 4. Define inventory
**infra/inventory/production.py**:
```python
from djaploy.config import HostConfig
hosts = [
HostConfig(
name="web-1",
ssh_host="192.168.1.100",
ssh_user="deploy",
app_user="app",
env="production",
services=["myapp", "myapp-worker"],
),
]
```
### 5. Deploy files
Place environment-specific configuration files in `deploy_files/` — these are copied to the server during deployment:
```ini
# deploy_files/production/etc/systemd/system/myapp.service
[Unit]
Description=My Django App
After=network.target
[Service]
Type=simple
User=app
WorkingDirectory=/home/app/apps/myapp
ExecStart=/home/app/.local/bin/poetry run gunicorn config.wsgi
Restart=on-failure
[Install]
WantedBy=multi-user.target
```
## Usage
### Configure a server
```bash
python manage.py configureserver --env production
```
Sets up the application user, installs Python and Poetry, and prepares the directory structure.
### Deploy
```bash
# Deploy local changes (development)
python manage.py deploy --env production --local
# Deploy latest git commit
python manage.py deploy --env production --latest
# Deploy a specific release
python manage.py deploy --env production --release v1.0.0
```
Deployment flow:
1. Creates a tar.gz artifact from git
2. Uploads to servers
3. Extracts application code
4. Copies environment-specific deploy files (nginx, systemd, etc.)
5. Installs dependencies via Poetry
6. Runs migrations
7. Collects static files
8. Restarts services
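Step 1 is conceptually similar to `git archive`. The sketch below approximates it with the standard `tarfile` module; the paths and the `.git` exclusion rule are illustrative, not djaploy's actual artifact logic:

```python
import tarfile
import tempfile
from pathlib import Path

def make_artifact(project_dir: Path, out_path: Path) -> Path:
    """Pack a project tree into a tar.gz artifact, skipping VCS metadata."""
    # Snapshot the file list first so the artifact never includes itself.
    files = [
        p for p in sorted(project_dir.rglob("*"))
        if ".git" not in p.parts and p != out_path
    ]
    with tarfile.open(out_path, "w:gz") as tar:
        for path in files:
            tar.add(path, arcname=str(path.relative_to(project_dir)))
    return out_path

# Demo against a throwaway directory.
project = Path(tempfile.mkdtemp())
(project / "manage.py").write_text("# entrypoint\n")
(project / ".git").mkdir()
(project / ".git" / "HEAD").write_text("ref: refs/heads/main\n")
artifact = make_artifact(project, project / "release.tar.gz")
print(artifact.name)  # release.tar.gz
```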
### Certificate management
```bash
python manage.py update_certs # Update certificate definitions
python manage.py sync_certs --env production # Sync certificates
```
### Verify configuration
```bash
python manage.py verify --verbose
```
## Modules
djaploy uses a modular architecture — each component is a separate module that can be enabled or disabled per project.
### Built-in modules
| Module | Description |
|--------|-------------|
| `djaploy.modules.core` | Core setup: users, Python, Poetry, artifact deployment, migrations |
| `djaploy.modules.nginx` | Nginx web server configuration |
| `djaploy.modules.systemd` | Systemd service management |
| `djaploy.modules.sync_certs` | SSL certificate syncing |
| `djaploy.modules.cert_renewal` | Certificate renewal automation |
| `djaploy.modules.litestream` | Litestream database replication |
| `djaploy.modules.rclone` | Rclone-based backups |
| `djaploy.modules.tailscale` | Tailscale networking |
### Custom modules
Extend `BaseModule` to create project-specific deployment logic:
```python
from djaploy.modules.base import BaseModule
class MyModule(BaseModule):
def configure_server(self, host):
# Server configuration logic
pass
def deploy(self, host, artifact_path):
# Deployment logic
pass
```
Add it to your config:
```python
config = DjaployConfig(
modules=[
"djaploy.modules.core",
"myproject.infra.modules.custom",
],
)
```
## Project Customization
### prepare.py
Projects can include a `prepare.py` file for local build steps that run before deployment:
```python
# prepare.py
from djaploy.prepare import run_command
def prepare():
run_command("npm run build")
run_command("python manage.py collectstatic --noinput")
```
### Custom deploy files
Projects can include environment-specific configuration files in a `deploy_files/` directory that will be copied to the server during deployment. The directory structure mirrors the target filesystem layout (e.g. `deploy_files/production/etc/nginx/sites-available/myapp` gets copied to `/etc/nginx/sites-available/myapp` on the server).
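The mirroring rule amounts to a simple path transform. A sketch of the idea, using the directory names from the example above (the mapping function itself is this sketch's assumption, not djaploy's code):

```python
from pathlib import Path

def target_path(deploy_files_root: Path, env: str, local_file: Path) -> Path:
    """Map deploy_files/<env>/<relative path> to /<relative path> on the server."""
    rel = local_file.relative_to(deploy_files_root / env)
    return Path("/") / rel

root = Path("deploy_files")
src = root / "production" / "etc" / "nginx" / "sites-available" / "myapp"
print(target_path(root, "production", src))  # /etc/nginx/sites-available/myapp
```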
## Development
```bash
git clone https://github.com/Technology-Company/djaploy.git
cd djaploy
poetry install
```
To use a local development copy in another project:
```bash
pip install -e /path/to/djaploy
```
## License
[MIT](LICENSE)
| text/markdown | null | Johanna Mae Dimayuga <johanna@techco.fi> | null | null | MIT License
Copyright (c) 2024 Technology-Company
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | django, deployment, pyinfra, automation, infrastructure | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Framework :: Django",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: M... | [] | null | null | >=3.9 | [] | [] | [] | [
"pyinfra<3.5,>=3.4",
"django>=3.2",
"certbot>=2.0; extra == \"certificates\"",
"certbot-dns-bunny<=0.0.9; extra == \"bunny\""
] | [] | [] | [] | [
"Homepage, https://github.com/techco-fi/djaploy",
"Repository, https://github.com/techco-fi/djaploy",
"Issues, https://github.com/techco-fi/djaploy/issues",
"Documentation, https://github.com/techco-fi/djaploy#readme"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T09:13:48.888139 | djaploy-0.4.0.tar.gz | 45,296 | 6b/e2/3a05a92bb031f407cc69f62b9b446fe0b59c70a3e693377c9813ea9b0ee3/djaploy-0.4.0.tar.gz | source | sdist | null | false | 2a3035e5a2b52dd7ebab00701be1764b | 46f6440e317264c482a3f2a8d2a55427ff9a380c954b03b7e1e5e4e15735aab5 | 6be23a05a92bb031f407cc69f62b9b446fe0b59c70a3e693377c9813ea9b0ee3 | null | [
"LICENSE"
] | 313 |
2.1 | peak-sdk | 1.24.0 | Python SDK for interacting with the Peak platform | # Peak SDK
[](https://pypi.org/project/peak-sdk/)
[](https://docs.peak.ai/sdk/latest/#platform-support)
[](https://docs.peak.ai/sdk/latest/license.html)
## What is Peak SDK?
_Peak SDK_ is a Python package for building AI applications on the [Peak](https://peak.ai/) platform. It provides an efficient code-based interface for managing platform resources (Web Apps, Workflows and Images), and includes an interface to the Press API for creating, managing and deploying Press Applications on Peak.
## Getting Started
### Setting up a Virtual Environment
To ensure a smooth development experience with _Peak SDK_, we highly recommend creating a Python virtual environment. A virtual environment helps to isolate the dependencies required by your project and prevents conflicts with other projects or the system's global Python environment.
Follow these steps to create a virtual environment using Python's built-in `venv` module:
1. Open a terminal.
2. Navigate to your project's root directory (where you plan to work with the _Peak SDK_).
3. Create a new virtual environment with the following command:
```
python3 -m venv <venv_name>
```
4. Activate the virtual environment by running:
```
source <venv_name>/bin/activate
```
5. You will now be working within the virtual environment, where you can install dependencies and run the project without affecting other projects or your system's global Python environment.
6. When you're finished working on your project, you can deactivate the virtual environment using the following command:
```
deactivate
```
### Installation
- You can install the _Peak SDK_ with the following command using `pip`
```shell
pip install peak-sdk
```
Or if you want to install a specific version
```
pip install peak-sdk==<version>
```
- The _Peak SDK_ ships with a CLI as well. Once the CLI is installed, you can enable auto-completion for your shell by running `peak --install-completion ${shell-name}`, where the shell can be one of `[bash|zsh|fish|powershell|pwsh]`.
- Once this has run, add `compinit` to your shell configuration file (e.g. `.zshrc`, `.bashrc`). You can do so with the following command
```
echo "compinit" >> ~/.zshrc # replace .zshrc with your shell's configuration file
```
### Checking Package Version
- As mentioned above, the Peak SDK ships with a CLI as well. You can check the version of both the CLI and the SDK quite easily.
- You can check the version for the `peak-cli` using the following command
```bash
peak --version
```
This should return a response of the following format
```bash
peak-cli==1.24.0
Python==3.12.3
System==Darwin(23.6.0)
```
- To check the version of the `peak-sdk`, the following code snippet can be used
```python
import peak
print(peak.__version__)
```
This should print the version of the SDK
```
1.24.0
```
### Using the SDK and CLI
- To start using the SDK and CLI, you'll need a Personal Access Token (PAT).
- If you don't have one yet, sign up for an account on the Peak platform to obtain your Personal Access Token (PAT).
- To export it, run the following command in your terminal and replace <peak_auth_token> with your actual PAT:
```
export PEAK_AUTH_TOKEN=<peak_auth_token>
```
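In scripts it can help to fail fast when the variable is missing. A minimal check (the helper and its error message are this sketch's own, not part of the SDK):

```python
import os

def require_token(name: str = "PEAK_AUTH_TOKEN") -> str:
    """Return the PAT from the environment or raise a clear error."""
    token = os.environ.get(name)
    if not token:
        raise RuntimeError(f"{name} is not set; export it before using the SDK/CLI")
    return token

os.environ["PEAK_AUTH_TOKEN"] = "example-token"  # demo value only
print(require_token())  # example-token
```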
### Documentation
You can access the documentation for the SDK and CLI at [https://docs.peak.ai/sdk/latest/](https://docs.peak.ai/sdk/latest/).
Here are some quick links to help you navigate easily:
- [SDK Reference](https://docs.peak.ai/sdk/latest/reference.html)
- [CLI Reference](https://docs.peak.ai/sdk/latest/cli/reference.html)
- [Usage](https://docs.peak.ai/sdk/latest/usage.html)
- [CLI Usage](https://docs.peak.ai/sdk/latest/cli/usage.html)
- [Migration Guide](https://docs.peak.ai/sdk/latest/migration-guide.html)
- [FAQ](https://docs.peak.ai/sdk/latest/faq.html)
### Platform Support
<div class="support-matrix" style="background-color:transparent">
<div class="supported-versions" style="text-align:center">
<table class="center-table">
<caption style="text-align:left">
<strong>Support across <i>Python versions</i> on major </strong><i>64-bit</i><strong> platforms</strong>
</caption>
<!-- table content -->
<thead>
<tr>
<th>Python Version</th>
<th>Linux</th>
<th>MacOS</th>
<th>Windows</th>
</tr>
</thead>
<tbody>
<tr>
<td>3.8</td>
<td>🟢</td>
<td>🟢</td>
<td>🟤</td>
</tr>
<tr>
<td>3.9</td>
<td>🟢</td>
<td>🟢</td>
<td>🟤</td>
</tr>
<tr>
<td>3.10</td>
<td>🟢</td>
<td>🟢</td>
<td>🟤</td>
</tr>
<tr>
<td>3.11</td>
<td>🟢</td>
<td>🟢</td>
<td>🟤</td>
</tr>
<tr>
<td>3.12</td>
<td>🟢</td>
<td>🟢</td>
<td>🟤</td>
</tr>
</tbody>
</table>
</div>
<div class="legend">
<table style="text-align:center">
<caption style="text-align:left">
<strong>Legend</strong>
</caption>
<thead>
<tr>
<th>Key</th>
<th>Status</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>🟢</td>
<td>Supported</td>
<td>regularly tested, and fully supported</td>
</tr>
<tr>
<td>🟡</td>
<td>Limited Support</td>
<td>not explicitly tested but should work, and supported on a best-effort basis</td>
</tr>
<tr>
<td>🟤</td>
<td>Not Tested</td>
<td>should work, but no guarantees and/or support</td>
</tr>
</tbody>
</table>
</div>
</div>
## More Resources
- [License](https://docs.peak.ai/sdk/latest/license.html)
- [Changelog](https://docs.peak.ai/sdk/latest/changelog.html)
| text/markdown | Peak | support@peak.ai | null | null | Apache-2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Lang... | [] | https://docs.peak.ai/sdk/latest/ | null | <3.13,>=3.8.1 | [] | [] | [] | [
"cachetools<6.0.0,>=5.5.1",
"certifi>=2024.2.2",
"jinja2<4.0,>=3.1",
"numpy; extra == \"types\"",
"orjson<4.0,>=3.10",
"pandas; extra == \"types\"",
"pathspec",
"pyyaml<7.0,>=6.0",
"requests<3.0,>=2.32",
"requests-toolbelt<2.0,>=1.0",
"rich==14.0.0",
"shellingham<1.5.4",
"structlog<25.0.0,>=... | [] | [] | [] | [
"Documentation, https://docs.peak.ai/sdk/latest/"
] | twine/5.1.1 CPython/3.12.12 | 2026-02-18T09:13:37.466671 | peak_sdk-1.24.0.tar.gz | 173,370 | 34/0b/5f0c68522f3aa98a5258955e2f3783a18a8213c43338962b4ccb1ca66e67/peak_sdk-1.24.0.tar.gz | source | sdist | null | false | db6b72361f01eec9fe6a3dd55f896850 | 037faf0b33a54141c827da758038893b210625e793f582fbf99a0e14de68f74f | 340b5f0c68522f3aa98a5258955e2f3783a18a8213c43338962b4ccb1ca66e67 | null | [] | 549 |
2.4 | pyTibber | 0.36.0 | A python3 library to communicate with Tibber | # pyTibber
[](https://badge.fury.io/py/pyTibber)
<a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff">
</a>
<a href="https://github.com/pre-commit/pre-commit">
<img src="https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white&style=flat-square" alt="pre-commit">
</a>
Python3 library for [Tibber](https://tibber.com/).
Get electricity price and consumption.
If you have a Tibber Pulse or Watty you can see your consumption in real time.
[Buy me a coffee :)](http://paypal.me/dahoiv)
Go to [developer.tibber.com/](https://developer.tibber.com/) to get your API token.
## Install
```
pip3 install pyTibber
```
## Example:
```python
import tibber.const
import tibber
import asyncio
async def start():
tibber_connection = tibber.Tibber(tibber.const.DEMO_TOKEN, user_agent="change_this")
await tibber_connection.update_info()
print(tibber_connection.name)
home = tibber_connection.get_homes()[0]
await home.fetch_consumption_data()
await home.update_info()
print(home.address1)
await home.update_price_info()
print(home.current_price_info)
# await tibber_connection.close_connection()
asyncio.run(start())
```
## Example realtime data:
An example of how to subscribe to realtime data (Pulse/Watty):
```python
import tibber.const
import asyncio
import aiohttp
import tibber
def _callback(pkg):
print(pkg)
data = pkg.get("data")
if data is None:
return
print(data.get("liveMeasurement"))
async def run():
async with aiohttp.ClientSession() as session:
tibber_connection = tibber.Tibber(tibber.const.DEMO_TOKEN, websession=session, user_agent="change_this")
await tibber_connection.update_info()
home = tibber_connection.get_homes()[0]
await home.rt_subscribe(_callback)
while True:
await asyncio.sleep(10)
asyncio.run(run())
```
The library is used as part of Home Assistant.
| text/markdown | null | Daniel Hjelseth Hoyer <mail@dahoiv.net> | null | null | null | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Topic :: Home Automation",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.0.6",
"gql>=4.0.0",
"websockets>=14.0.0"
] | [] | [] | [] | [
"Source code, https://github.com/Danielhiversen/pyTibber"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T09:13:32.804458 | pytibber-0.36.0.tar.gz | 32,577 | e1/c9/fc564f16390218c151c8404d2e2127a2c813767b6d92cda2deaf0b4fb595/pytibber-0.36.0.tar.gz | source | sdist | null | false | 7fadc71535f2b72c134a21b89e729e67 | b767be1abc19119c8a47c60ba84e749db983d1ec1b018052731ee322aac3f727 | e1c9fc564f16390218c151c8404d2e2127a2c813767b6d92cda2deaf0b4fb595 | GPL-3.0-or-later | [
"LICENSE"
] | 0 |
2.4 | cleopy | 0.65.1 | Python package for generating input and processing output of Cleo SDM | ```
/*
* ----- CLEO -----
* File: README.md
* project: CLEO
* Created Date: Thursday 12th October 2023
* Author: Clara Bayley (CB)
* Additional Contributors:
* -----
* License: BSD 3-Clause "New" or "Revised" License
* https://opensource.org/licenses/BSD-3-Clause
* -----
* Copyright (c) 2023 MPI-M, Clara Bayley
*/
```
# CLEO
Cleo is a library for Super-Droplet Model microphysics.
You can read more about Cleo in its
documentation: <https://yoctoyotta1024.github.io/CLEO/>.
### Developer Credit
Clara J. A. Bayley is Cleo's main developer.
Her principal co-developers, responsible for the YAC coupling and Cleo's MPI
domain decomposition, are:
- Wilton Loch,
- Aparna Devulapalli.
Clara also gives special thanks to these important people, who also greatly
contributed to Cleo's development:
- Tobias Kölling (Ideas behind using C++ Templates and Concepts, adaptive
timestepping algorithm, and many many other important discussions),
- Lukas Kluft (Python and GitHub CI help),
- Sergey Kosukhin (C++ and CMake help).
| text/markdown | null | Clara Bayley <yoctoyotta1024@yahoo.com> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"awkward",
"matplotlib",
"numpy",
"ruamel-yaml",
"scipy",
"xarray",
"zarr",
"mpi4py; extra == \"examples\"",
"cython; extra == \"yac\"",
"datetime; extra == \"yac\"",
"isodate; extra == \"yac\"",
"mpi4py; extra == \"yac\"",
"netcdf4; extra == \"yac\"",
"pip; extra == \"yac\"",
"setuptool... | [] | [] | [] | [
"Homepage, https://github.com/yoctoyotta1024/CLEO",
"Website, https://yoctoyotta1024.github.io/CLEO",
"Issues, https://github.com/yoctoyotta1024/CLEO/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:13:01.719640 | cleopy-0.65.1.tar.gz | 511,032 | a7/0f/29808e0c8f0987fd18fb657287ecff6e282b0590adb0e986ded20d5dad19/cleopy-0.65.1.tar.gz | source | sdist | null | false | e73144b3d659465597d220fe0e6d5974 | 94212dbfa8a5df1062c6cdd6f9ddc166f87ffa342fb5cc3670f19fca2eb2c060 | a70f29808e0c8f0987fd18fb657287ecff6e282b0590adb0e986ded20d5dad19 | BSD-3-Clause | [
"LICENSE.md"
] | 246 |
2.4 | dtlpy | 1.121.4 | SDK and CLI for Dataloop platform | # **DTLPY – SDK and CLI for Dataloop.ai**

[](https://sdk-docs.dataloop.ai/en/latest/?badge=latest)
[](https://pypi.org/project/dtlpy/)
[](https://github.com/dataloop-ai/dtlpy)
[](https://github.com/dataloop-ai/dtlpy/blob/master/LICENSE)
[](https://pepy.tech/project/dtlpy)
📚 [Platform Documentation](https://dataloop.ai/docs) | 📖 [SDK Documentation](https://sdk-docs.dataloop.ai/en/latest/) | [Developer docs](https://developers.dataloop.ai/)
An open-source SDK and CLI toolkit to interact seamlessly with the [Dataloop.ai](https://dataloop.ai/) platform, providing powerful data management, annotation capabilities, and workflow automation.
---
## **Table of Contents**
- [**DTLPY – SDK and CLI for Dataloop.ai**](#dtlpy--sdk-and-cli-for-dataloopai)
- [**Table of Contents**](#table-of-contents)
- [**Overview**](#overview)
- [**Installation**](#installation)
- [**Usage**](#usage)
- [**SDK Usage**](#sdk-usage)
- [**CLI Usage**](#cli-usage)
- [**Python Version Support**](#python-version-support)
- [**Development**](#development)
- [**Resources**](#resources)
- [**Contribution Guidelines**](#contribution-guidelines)
---
## **Overview**
DTLPY provides a robust Python SDK and a powerful CLI, enabling developers and data scientists to automate tasks, manage datasets, annotations, and streamline workflows within the Dataloop platform.
---
## **Installation**
Install DTLPY directly from PyPI using pip:
```bash
pip install dtlpy
```
Alternatively, for the latest development version, install directly from GitHub:
```bash
pip install git+https://github.com/dataloop-ai/dtlpy.git
```
---
## **Usage**
### **SDK Usage**
Here's a basic example to get started with the DTLPY SDK:
```python
import dtlpy as dl
# Authenticate
dl.login()
# Access a project
project = dl.projects.get(project_name='your-project-name')
# Access dataset
dataset = project.datasets.get(dataset_name='your-dataset-name')
```
### **CLI Usage**
DTLPY also provides a convenient command-line interface:
```bash
dlp login
dlp projects ls
dlp datasets ls --project-name your-project-name
```
---
## **Python Version Support**
DTLPY supports multiple Python versions as follows:
| Python Version | 3.14 | 3.13 | 3.12 | 3.11 | 3.10 | 3.9 | 3.8 | 3.7 |
|---------------------|------|------|------|------|------|-----|-----|-----|
| **dtlpy >= 1.118** | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| **dtlpy 1.99–1.117**| ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| **dtlpy 1.76–1.98** | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| **dtlpy >= 1.61** | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
| **dtlpy 1.50–1.60** | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ |
---
## **Development**
To set up the development environment, clone the repository and install dependencies:
```bash
git clone https://github.com/dataloop-ai/dtlpy.git
cd dtlpy
pip install -r requirements.txt
```
## **Resources**
- [Dataloop Platform](https://console.dataloop.ai)
- [Full SDK Documentation](https://sdk-docs.dataloop.ai/en/latest/)
- [Platform Documentation](https://dataloop.ai/docs)
- [SDK Examples and Tutorials](https://github.com/dataloop-ai/dtlpy-documentation)
- [Developer docs](https://developers.dataloop.ai/)
---
## **Contribution Guidelines**
We encourage contributions! Please ensure:
- Clear and descriptive commit messages
- Code follows existing formatting and conventions
- Comprehensive tests for new features or bug fixes
- Updates to documentation if relevant
Create pull requests for review. All contributions will be reviewed carefully and integrated accordingly.
| text/markdown | Dataloop Team | info@dataloop.ai | null | null | Apache License 2.0 | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming L... | [] | https://github.com/dataloop-ai/dtlpy | null | >=3.8 | [] | [] | [] | [
"urllib3>=1.26",
"tqdm>=4.63",
"certifi>=2020.12.5",
"webvtt-py==0.4.3",
"aiohttp>=3.8",
"requests-toolbelt>=1.0.0",
"requests>=2.25.0",
"numpy>=1.16.2",
"pandas>=0.24.2",
"tabulate>=0.8.9",
"Pillow>=7.2",
"PyJWT>=2.4",
"jinja2>=2.11.3",
"attrs<25.0,>=22.2.0",
"prompt_toolkit>=2.0.9",
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T09:11:44.706251 | dtlpy-1.121.4.tar.gz | 493,937 | fc/38/d7e979711b78caba473c31b919bd5300848f1a4168cad90e92e283bdfa2f/dtlpy-1.121.4.tar.gz | source | sdist | null | false | ab3116d8412d42b85618bbee65527802 | 07fb3f72fbb2c9d4a98aaffef4a510446581c6dc5e4a66692c481e25c7f96674 | fc38d7e979711b78caba473c31b919bd5300848f1a4168cad90e92e283bdfa2f | null | [
"LICENSE"
] | 442 |
2.4 | extradoc | 0.3.0 | File-based Google Docs representation library for LLM agents | # extradoc
File-based Google Docs representation library for LLM agents.
Part of the [ExtraSuite](https://github.com/think41/extrasuite) project.
## Overview
extradoc transforms Google Docs into a file-based representation optimized for LLM agents, enabling efficient "fly-blind" editing through the pull/diff/push workflow.
## Installation
```bash
pip install extradoc
# or
uvx extradoc
```
## Quick Start
```bash
# Authenticate (one-time)
uv run python -m extrasuite.client login
# Pull a document
uv run python -m extradoc pull https://docs.google.com/document/d/DOCUMENT_ID/edit
# Edit files locally...
# Preview changes (dry run)
uv run python -m extradoc diff ./DOCUMENT_ID/
# Push changes
uv run python -m extradoc push ./DOCUMENT_ID/
```
## CLI Commands
### pull
Download a Google Doc to local files:
```bash
uv run python -m extradoc pull <document_url_or_id> [output_dir]
# Options:
# --no-raw Don't save raw API responses to .raw/ folder
```
### diff
Preview changes (dry run, no API calls):
```bash
uv run python -m extradoc diff <folder>
# Output: batchUpdate JSON to stdout
```
### push
Apply changes to Google Docs:
```bash
uv run python -m extradoc push <folder>
# Options:
# -f, --force Push despite warnings (blocks still prevent push)
# --verify Re-pull after push and compare to verify correctness
```
## Folder Structure
After `pull`, the folder contains:
```
<document_id>/
document.xml # ExtraDoc XML (main content)
styles.xml # Factorized style definitions
.raw/
document.json # Raw API response
.pristine/
document.zip # Original state for diff comparison
```
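The `.pristine/` snapshot is what makes a local, API-free `diff` possible: the current files can be compared against the pristine copy without talking to Google. A simplified sketch of that idea using content hashes (not extradoc's actual diff algorithm or output format):

```python
import hashlib
import tempfile
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(pristine: dict[str, str], workdir: Path) -> list[str]:
    """Names of files whose content differs from the pristine snapshot."""
    return [name for name, old in pristine.items() if digest(workdir / name) != old]

# Simulate pull: record a pristine snapshot of the pulled files.
work = Path(tempfile.mkdtemp())
(work / "document.xml").write_text("<doc>original</doc>")
(work / "styles.xml").write_text("<styles/>")
pristine = {name: digest(work / name) for name in ("document.xml", "styles.xml")}

# Local edit, then a dry-run comparison.
(work / "document.xml").write_text("<doc>edited</doc>")
print(changed_files(pristine, work))  # ['document.xml']
```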
## Development
```bash
cd extradoc
uv sync --all-extras
uv run pytest tests/ -v
uv run ruff check . && uv run ruff format .
uv run mypy src/extradoc
```
## License
MIT License - see LICENSE file for details.
| text/markdown | null | Sripathi Krishnan <sripathi@think41.com> | null | null | null | agent, docs, document, extrasuite, google, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"certifi>=2024.0.0",
"diff-match-patch>=20230430",
"httpx>=0.27.0",
"keyring>=25.0.0",
"pydantic>=2.0",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"r... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:11:08.237525 | extradoc-0.3.0.tar.gz | 339,080 | e9/15/4fd52b3ae338365932be02428b8911f0fdf95497a23585d45767e087cb78/extradoc-0.3.0.tar.gz | source | sdist | null | false | bd4a6fabdeb57dc347c45312bc53f952 | 86eefb5e2876bc2504255fc2df28a8781dcd7df26d513d61abe9026363968ac0 | e9154fd52b3ae338365932be02428b8911f0fdf95497a23585d45767e087cb78 | MIT | [
"LICENSE"
] | 297 |
2.4 | django-resilient-logger | 2.1.0 | A module that provides django-specific resilient logger module. | <!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Logger that ensures logs are sent to an external service](#logger-that-ensures-logs-are-sent-to-an-external-service)
- [Adding django-resilient-logger to your Django project](#adding-django-resilient-logger-to-your-django-project)
- [Adding django-resilient-logger to Django apps](#adding-django-resilient-logger-to-django-apps)
- [Configuring django-resilient-logger](#configuring-django-resilient-logger)
- [Development](#development)
- [Running tests](#running-tests)
- [Code format](#code-format)
- [Commit message format](#commit-message-format)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Logger that ensures logs are sent to an external service
`django-resilient-logger` is a logging module that stores log entries in the local database and synchronizes them with an external log target.
If synchronization with the external service fails, it is retried at a later time.
Management tasks require an external cron trigger.
The cron triggers are designed to run on the following schedule:
- `submit_unsent_entries` once in every 15 minutes.
- `clear_sent_entries` once a month.
To manually trigger the scheduled tasks, one can run commands:
```bash
python ./manage.py submit_unsent_entries
python ./manage.py clear_sent_entries
```
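A possible crontab implementing the schedule above (the working directory and the monthly run time are illustrative):

```
# every 15 minutes: retry unsent log entries
*/15 * * * * cd /srv/app && python ./manage.py submit_unsent_entries
# 03:00 on the 1st of each month: prune entries already delivered
0 3 1 * * cd /srv/app && python ./manage.py clear_sent_entries
```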
## Adding django-resilient-logger to your Django project
Add `django-resilient-logger` to your project's dependencies.
### Adding django-resilient-logger to Django apps
To install this logger, append `resilient_logger` to `INSTALLED_APPS` in settings.py:
```python
INSTALLED_APPS = (
    "resilient_logger",
    ...
)
```
### Configuring django-resilient-logger
To configure the resilient logger, you must provide a config section in your settings.py.
Configuration must contain required `origin`, `environment`, `sources` and `targets` keys. It also accepts optional keys `batch_limit`, `chunk_size`, `clear_sent_entries` and `submit_unsent_entries`.
- `origin` is the name of the application or unique identifier of it.
- `environment` is the name of the environment where the application is running.
- `sources` expects an array of objects, each with a `class` property (full class path). Other properties are ignored.
- `targets` expects an array of objects, each with a `class` property (full class path). Other properties are passed as constructor parameters.
```python
RESILIENT_LOGGER = {
"origin": "NameOfTheApplication",
"environment": env("AUDIT_LOG_ENV"),
"sources": [
{ "class": "resilient_logger.sources.ResilientLogSource" },
{ "class": "resilient_logger.sources.DjangoAuditLogSource" },
],
"targets": [{
"class": "resilient_logger.targets.ElasticsearchLogTarget",
"es_url": env("AUDIT_LOG_ES_URL"),
"es_username": env("AUDIT_LOG_ES_USERNAME"),
"es_password": env("AUDIT_LOG_ES_PASSWORD"),
"es_index": env("AUDIT_LOG_ES_INDEX"),
"required": True
}],
"batch_limit": 5000,
"chunk_size": 500,
"submit_unsent_entries": True,
"clear_sent_entries": True,
}
```
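The dotted `class` strings are presumably resolved with a standard dotted-path import. A generic sketch of that technique, demonstrated on a stdlib class (this is not this package's actual loader code):

```python
from importlib import import_module

def import_class(dotted_path: str):
    """Resolve a 'pkg.module.ClassName' string to the class object."""
    module_path, _, class_name = dotted_path.rpartition(".")
    return getattr(import_module(module_path), class_name)

# Demo with a stdlib class instead of a resilient_logger source/target.
cls = import_class("collections.OrderedDict")
print(cls.__name__)  # OrderedDict
```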
In addition to the django-resilient-logger specific configuration, one must also configure logger handler to actually use it.
In the sample below the configured logger is called `resilient` and it will use the `RESILIENT_LOGGER` configuration above:
```python
LOGGING = {
"handlers": {
"resilient": {
"class": "resilient_logger.handlers.ResilientLogHandler",
...
}
...
},
"loggers": {
"": {
"handlers": ["resilient"],
...
},
...
}
}
```
# Development
A Python virtual environment can be used. For example:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
Install package requirements:
```bash
pip install -e .
```
Install development requirements:
```bash
pip install -e ".[all]"
```
## Running tests
```bash
pytest
```
## Code format
This project uses [Ruff](https://docs.astral.sh/ruff/) for code formatting and quality checking.
Basic `ruff` commands:
* lint: `ruff check`
* apply safe lint fixes: `ruff check --fix`
* check formatting: `ruff format --check`
* format: `ruff format`
[`pre-commit`](https://pre-commit.com/) can be used to install and
run all the formatting tools as git hooks automatically before a
commit.
## Commit message format
New commit messages must adhere to the [Conventional Commits](https://www.conventionalcommits.org/)
specification, and line length is limited to 72 characters.
When [`pre-commit`](https://pre-commit.com/) is in use, [`commitlint`](https://github.com/conventional-changelog/commitlint)
checks new commit messages for the correct format.
| text/markdown | null | City of Helsinki <dev@hel.fi> | null | null | null | django, logging, plugin extension | [
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Prog... | [] | null | null | null | [] | [] | [] | [
"django>=4.2",
"elasticsearch8",
"coverage-enable-subprocess; extra == \"all\"",
"coverage[toml]; extra == \"all\"",
"django-auditlog; extra == \"all\"",
"pytest; extra == \"all\"",
"pytest-django; extra == \"all\"",
"pytest-randomly; extra == \"all\"",
"pytest-rerunfailures; extra == \"all\"",
"p... | [] | [] | [] | [
"Homepage, https://github.com/City-of-Helsinki/django-resilient-logger",
"Issues, https://github.com/City-of-Helsinki/django-resilient-logger/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:10:09.013291 | django_resilient_logger-2.1.0.tar.gz | 22,427 | e6/74/d55f16bb0364644fd1b0a427c84827d7e03099a90f1749dc6cb4be08f7b6/django_resilient_logger-2.1.0.tar.gz | source | sdist | null | false | b84817e55913dc49cbadfd33dd05c641 | 765560f135c1e38c27bab829d5675d4d6840f8ca9db05c4ab947472d74d6a88d | e674d55f16bb0364644fd1b0a427c84827d7e03099a90f1749dc6cb4be08f7b6 | MIT | [
"LICENSE"
] | 299 |
2.4 | reqstool | 0.5.12 | A tool for managing requirements with related tests and test results. |
[](https://github.com/Luftfartsverket/reqstool-client/pulse)
[](https://github.com/Luftfartsverket/reqstool-client/issues)
[](https://opensource.org/license/mit/)
[](https://github.com/Luftfartsverket/reqstool-client/actions/workflows/build.yml)
[](https://luftfartsverket.github.io/reqstool-client/reqstool-client/0.3.0/index.html)
[](https://github.com/Luftfartsverket/reqstool-client/discussions)
# Reqstool Client
## Overview
Reqstool is a tool for managing requirements with related software verification cases (aka tests) and verification results (test results).
- Requirements are defined in YAML files and can reference each other (depending on the variant different data will be parsed).
- Annotations are then used in code to specify where a requirement is implemented as well as tested.
With this information and the actual test results (e.g., JUnit), use Reqstool to:
- Generate a report (AsciiDoc, which can be transformed into e.g. PDF) listing all requirements, where that requirement is implemented and tested, and whether the tests passed/failed. This report can be used e.g. with auditors ("Yes, we track this requirement, it's implemented (here) and it has been tested with a pass (here).")
- Report the status of the software, i.e. get a list of all requirements and their implementation and test status. Reqstool will exit with a status code equal to the number of requirements that have not been implemented and tested with a pass. Hence, it can be used in a pipeline as a gate for deployment to production.
## Installation
You need to have the following installed in order to use the tool:
- Python, 3.10 or later
- pip
To use the tool, you need to install the PyPI package *reqstool*.
```bash
pip install -U reqstool
reqstool -h # to confirm installation
```
## Usage
```bash
reqstool [-h] {command: report-asciidoc,generate-json,status} {location: local,git,maven} ...
```
Use `-h/--help` for more information about each command and location.
## Documentation
Full documentation can be found [here](https://luftfartsverket.github.io/reqstool-docs/reqstool-client/0.3.0/index.html).
## Contributing
- We adhere to the latest version of [Contributor Covenant](https://www.contributor-covenant.org/).
- Fork repo
- Before submitting a PR
- Perform formatting (black): `hatch run lint:black src tests`
- Run linter (flake8): `hatch run lint:flake8`
- Run tests:
- all: `hatch run test:pytest --cov=reqstool`
- unit only: `hatch run test:pytest --cov=reqstool tests/unit`
- integration only: `hatch run test:pytest --cov=reqstool tests/integration`
| text/markdown | null | LFV <info@lfv.se> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"beautifulsoup4==4.14.3",
"colorama==0.4.6",
"distlib==0.4.0",
"jinja2==3.1.6",
"jsonpickle==4.1.1",
"jsonschema[format-nongpl]==4.26.0",
"lark==1.3.1",
"maven-artifact==0.3.5",
"packaging==26.0",
"pygit2==1.19.1",
"referencing==0.37.0",
"reqstool-python-decorators==0.0.9",
"requests-file==3... | [] | [] | [] | [
"Source, https://github.com/Luftfartsverket/reqstool-client"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:09:15.097492 | reqstool-0.5.12.tar.gz | 93,559 | 26/be/3b94513c3d04a1b43b294792314f40060ab24e8b4e8f0204bbb534a8f56f/reqstool-0.5.12.tar.gz | source | sdist | null | false | 722d4e2e03727a204328836d87744e41 | c998a9c87331714034ddf74c4b3ac85d5bea4ee1bb4649055be4954836a5c9e8 | 26be3b94513c3d04a1b43b294792314f40060ab24e8b4e8f0204bbb534a8f56f | null | [
"LICENSE"
] | 258 |
2.4 | ez-a-sync | 0.34.0 | A library that makes it easy to define objects that can be used for both sync and async use cases. | ## Table of Contents
<!-- TOC -->
- [Table of Contents](#table-of-contents)
- [Introduction](#introduction)
- [Installation](#installation)
- [Usage](#usage)
- [Decorators](#decorators)
- [@a_sync('async')](#a_syncasync)
- [@a_sync('sync')](#a_syncsync)
- [Classes](#classes)
- [Modifiers](#modifiers)
- [async modifiers](#async-modifiers)
- [sync modifiers](#sync-modifiers)
- [Default Modifiers](#default-modifiers)
- [Other Helpful Classes](#other-helpful-classes)
- [ASyncIterable](#asynciterable)
- [ASyncIterator](#asynciterator)
- [ASyncFilter](#asyncfilter)
- [ASyncSorter](#asyncsorter)
- [Other Helpful Modules](#other-helpful-modules)
- [future](#future)
- [ASyncFuture](#asyncfuture)
- [future decorator](#future-decorator)
- [asyncio](#asyncio)
<!-- /TOC -->
## Introduction
`ez-a-sync` is a Cython library which enables Python developers to write both synchronous and asynchronous code without redundant code. It provides a decorator `@a_sync()`, as well as a base class `ASyncGenericBase` which can be used to create classes that can be executed in both synchronous and asynchronous contexts.
It also contains implementations of various asyncio primitives with extra functionality, including queues and various types of locks.
\# TODO add links to various objects' docs
## Installation
`ez-a-sync` can be installed via pip:
```
pip install ez-a-sync
```
## Usage
### Decorators
`ez-a-sync` provides one decorator: `@a_sync()`. You can explicitly pass the type of function you want with `@a_sync('sync')` or `@a_sync('async')`
#### `@a_sync('async')`
The `@a_sync('async')` decorator can be used to define an asynchronous function that can also be executed synchronously.
```python
@a_sync('async')
def some_function():
    ...
```
This function can then be executed asynchronously using `await`:
```python
aaa = await some_function()
```
It can also be executed synchronously by passing `sync=True` or `asynchronous=False`:
```python
aaa = some_function(sync=True)
```
#### `@a_sync('sync')`
The `@a_sync('sync')` decorator can be used to define a synchronous function that can also be executed asynchronously.
```python
@a_sync('sync')
async def some_function():
    ...
```
This function can then be executed synchronously:
```python
aaa = some_function()
```
It can also be overridden asynchronously by passing `sync=False` or `asynchronous=True` and using `await`:
```python
aaa = await some_function(sync=False)
```
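The dual-mode behaviour these decorators provide can be illustrated with plain `asyncio`. This is a conceptual sketch only, not ez-a-sync's actual implementation; the wrapper name `a_sync_sketch` is made up:

```python
import asyncio
import functools

def a_sync_sketch(coro_fn):
    """Hypothetical stand-in for @a_sync: awaitable by default, sync on request."""
    @functools.wraps(coro_fn)
    def wrapper(*args, sync=False, **kwargs):
        coro = coro_fn(*args, **kwargs)
        if sync:
            return asyncio.run(coro)  # synchronous caller: drive the coroutine here
        return coro                   # asynchronous caller: hand back the awaitable
    return wrapper

@a_sync_sketch
async def double(x):
    return x * 2

print(double(21, sync=True))    # 42, fully synchronous call
print(asyncio.run(double(21)))  # 42 via the await path
```

The real library adds modifiers, caching, and class support on top of this basic idea.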
### Classes
`ez-a-sync` also provides a base class `ASyncGenericBase` that can be used to create classes that can be executed in both synchronous and asynchronous contexts. To create an asynchronous class, simply inherit from `ASyncGenericBase` and set `asynchronous=True`:
```python
class CoolAsyncClass(ASyncGenericBase):
    asynchronous = True

    def some_sync_fn(self):
        ...
```
In this example, `CoolAsyncClass` has `asynchronous=True`, which means it is an asynchronous class. You can call `some_sync_fn` asynchronously using `await`:
```python
aaa = await CoolAsyncClass().some_sync_fn()
```
`CoolAsyncClass` functions can also be called synchronously by passing `sync=True`:
```python
aaa = CoolAsyncClass().some_sync_fn(sync=True)
```
Similarly, you can create a synchronous class by setting `sync=True` or `asynchronous=False`:
```python
class CoolSyncClass(ASyncGenericBase):
    asynchronous = False

    async def some_async_fn(self):
        ...
```
`CoolSyncClass` functions can be called synchronously:
```python
aaa = CoolSyncClass().some_async_fn()
```
It can also be called asynchronously by passing `sync=False` or `asynchronous=True` and using `await`:
```python
aaa = await CoolSyncClass().some_async_fn(sync=False)
```
You can also create a class whose functions can be executed in both synchronous and asynchronous contexts by not setting the `asynchronous` or `sync` attribute at class level (both can be used interchangeably, pick your favorite) and passing it as an argument when creating an instance:
```python
class CoolDualClass(ASyncGenericBase):
    def __init__(self, asynchronous):
        self.asynchronous = asynchronous

    async def some_async_fn(self):
        ...
```
You can create an instance of `CoolDualClass` with `sync=False` or `asynchronous=True` to call it asynchronously:
```python
async_instance = CoolDualClass(asynchronous=True)
aaa = await async_instance.some_async_fn()
aaa = async_instance.some_async_fn(sync=True)
```
You can also create an instance with `sync=True` or `asynchronous=False` to call it synchronously:
```python
sync_instance = CoolDualClass(asynchronous=False)
aaa = sync_instance.some_async_fn()
aaa = sync_instance.some_async_fn(sync=False)
```
### Modifiers
The `ez-a-sync` library provides several settings that can be used to customize the behavior of the decorators and classes.
To apply settings to the decorators or base classes, simply pass them as keyword arguments when calling the decorator or creating an instance.
For example, to apply `cache_type='memory'` to a function decorated with `@a_sync('async')`, you would do the following:
```python
@a_sync('async', cache_type='memory')
def some_function():
    ...
```
#### async modifiers
The `@a_sync('async')` decorator has the following settings:
- `cache_type`: This can be set to `None` or `'memory'`. `'memory'` is an LRU cache which can be modified with the `cache_typed`, `ram_cache_maxsize`, and `ram_cache_ttl` modifiers.
- `cache_typed`: Set to `True` if you want argument types taken into account for cache keys, i.e. with `cache_typed=True`, `Decimal(0)` and `0` will be considered separate keys.
- `ram_cache_maxsize`: The maxsize for your LRU cache. Set to `None` for an unbounded cache. If you set this value without specifying a cache type, `'memory'` will automatically be applied.
- `ram_cache_ttl`: The TTL for items in your LRU cache. Set to `None` to disable expiry. If you set this value without specifying a cache type, `'memory'` will automatically be applied.
- `runs_per_minute`: Setting this value enables a rate limiter for the decorated function.
- `semaphore`: Drop in a Semaphore for your async defined functions.
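The `semaphore` modifier bounds how many invocations of the decorated function run at once. The underlying pattern is the standard `asyncio.Semaphore`, sketched here without ez-a-sync:

```python
import asyncio

async def main():
    sem = asyncio.Semaphore(2)           # at most 2 invocations at a time
    state = {"running": 0, "peak": 0}

    async def limited_task(i):
        async with sem:                  # waits if 2 tasks already hold the semaphore
            state["running"] += 1
            state["peak"] = max(state["peak"], state["running"])
            await asyncio.sleep(0.01)    # simulated work
            state["running"] -= 1
        return i

    results = await asyncio.gather(*(limited_task(i) for i in range(6)))
    return results, state["peak"]

results, peak = asyncio.run(main())
print(results, "peak concurrency:", peak)  # peak never exceeds 2
```

With the real modifier, passing `semaphore=2` (or a Semaphore instance) to `@a_sync` applies the same bound transparently to callers.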
#### sync modifiers
The `@a_sync('sync')` decorator has the following setting:
- `executor`: The executor for the synchronous function. Defaults to the library's `config.default_sync_executor`.
#### Default Modifiers
Instead of setting modifiers one by one in functions, you can set a default value for modifiers using ENV variables:
- `DEFAULT_MODE`
- `CACHE_TYPE`
- `CACHE_TYPED`
- `RAM_CACHE_MAXSIZE`
- `RAM_CACHE_TTL`
- `RUNS_PER_MINUTE`
- `SEMAPHORE`
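Such environment-driven defaults follow a lookup-with-fallback pattern; a stdlib sketch (variable names from the list above, fallback values hypothetical, not ez-a-sync's actual resolution code):

```python
import os

def modifier_default(name, fallback, cast=str):
    """Resolve a modifier default from the environment, else use the fallback."""
    raw = os.environ.get(name)
    return cast(raw) if raw is not None else fallback

os.environ["RAM_CACHE_MAXSIZE"] = "1024"  # normally exported in your shell
os.environ.pop("CACHE_TYPE", None)        # ensure unset for this demo

maxsize = modifier_default("RAM_CACHE_MAXSIZE", None, int)
cache_type = modifier_default("CACHE_TYPE", None)
print(maxsize, cache_type)  # 1024 None
```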
### Other Helpful Classes
#### ASyncIterable
The [ASyncIterable](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncIterable) class allows objects to be iterated over using either a standard `for` loop or an `async for` loop. This is particularly useful in scenarios where the mode of iteration needs to be flexible or is determined at runtime.
```python
from a_sync import ASyncIterable
async_iterable = ASyncIterable(some_async_iterable)
# Asynchronous iteration
async for item in async_iterable:
...
# Synchronous iteration
for item in async_iterable:
...
```
See the [documentation](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncIterable) for more information.
#### ASyncIterator
The [ASyncIterator](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncIterator) class provides a unified interface for iteration that can operate in both synchronous and asynchronous contexts. It allows the wrapping of asynchronous iterable objects or async generator functions.
```python
from a_sync import ASyncIterator
async_iterator = ASyncIterator(some_async_iterator)
# Asynchronous iteration
async for item in async_iterator:
...
# Synchronous iteration
for item in async_iterator:
...
```
See the [documentation](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncIterator) for more information.
#### ASyncFilter
The [ASyncFilter](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncFilter) class filters items of an async iterable based on a provided function. It can handle both synchronous and asynchronous filter functions.
```python
from a_sync import ASyncFilter
async def is_even(x):
return x % 2 == 0
filtered_iterable = ASyncFilter(is_even, some_async_iterable)
# or use the alias
import a_sync
filtered_iterable = a_sync.filter(is_even, some_async_iterable)
# Asynchronous iteration
async for item in filtered_iterable:
...
# Synchronous iteration
for item in filtered_iterable:
...
```
See the [documentation](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncFilter) for more information.
#### ASyncSorter
The [ASyncSorter](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncSorter) class sorts items of an async iterable based on a provided key function. It supports both synchronous and asynchronous key functions.
```python
from a_sync import ASyncSorter
sorted_iterable = ASyncSorter(some_async_iterable, key=lambda x: x.value)
# or use the alias
import a_sync
sorted_iterable = a_sync.sort(some_async_iterable, key=lambda x: x.value)
# Asynchronous iteration
async for item in sorted_iterable:
...
# Synchronous iteration
for item in sorted_iterable:
...
```
See the [documentation](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.ASyncSorter) for more information.
## Other Helpful Modules
The stuff here is unrelated to the main purpose of ez-a-sync, but cool and useful nonetheless.
### asyncio
The `ez-a-sync` library extends the functionality of Python's `asyncio` module with additional utilities to simplify asynchronous programming.
- **as_completed**: This function allows you to iterate over awaitables as they complete, similar to `asyncio.as_completed`. It supports both synchronous and asynchronous iteration. [Learn more about `as_completed`](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.asyncio.as_completed).
- **gather**: A utility to run multiple asynchronous operations concurrently and wait for all of them to complete. It is similar to `asyncio.gather` but integrates seamlessly with the `ez-a-sync` library. [Learn more about `gather`](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.asyncio.gather).
- **create_task**: A function to create a new task from a coroutine, similar to `asyncio.create_task`, but with additional features provided by `ez-a-sync`. [Learn more about `create_task`](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.asyncio.create_task).
These utilities enhance the standard `asyncio` module, providing more flexibility and control over asynchronous operations. For detailed documentation and examples, please refer to the [documentation](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.asyncio).
### future
The future module is something totally different.
TODO: short explainer of module value prop and use case
#### ASyncFuture
[documentation](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.future.ASyncFuture)
TODO: short explainers on ASyncFuture class
#### future decorator
[documentation](https://bobthebuidler.github.io/ez-a-sync/source/a_sync.html#a_sync.future.future)
TODO: short explainers on future fn
| text/markdown | BobTheBuidler | bobthebuidlerdefi@gmail.com | null | null | MIT | null | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming ... | [] | https://github.com/BobTheBuidler/a-sync | null | <3.15,>=3.10 | [] | [] | [] | [
"aiolimiter>=1",
"async_lru_threadsafe==2.0.5",
"async_property==0.2.2",
"typed_envs>=0.0.5",
"typing_extensions>=4.1.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T09:09:05.308163 | ez_a_sync-0.34.0.tar.gz | 3,743,656 | 08/f7/fde19be57fba81105307d82fd35b0294b107898f0331c39929c501b3f6a0/ez_a_sync-0.34.0.tar.gz | source | sdist | null | false | 2317b93aebc5b26b33031ac670dd01d0 | 71efe3d8953f57cb91a5b20e6e3c8e712746b4ea1e51408b1bb490e7f15a4915 | 08f7fde19be57fba81105307d82fd35b0294b107898f0331c39929c501b3f6a0 | null | [
"LICENSE.txt"
] | 4,962 |
2.4 | pastax | 0.0.7 | **P**arameterizable **A**uto-differentiable **S**imulators of ocean **T**rajectories in j**AX**. | # pastax
<p align="center">
<img src="https://raw.githubusercontent.com/vadmbertr/pastax/refs/heads/main/docs/_static/pastax-md.png" alt="pastax logo" width="33%">
</p>
<p align="center">
<b>P</b>arameterizable <b>A</b>uto-differentiable <b>S</b>imulators of ocean <b>T</b>rajectories in j<b>AX</b>.
</p>
## Installation
`pastax` is Pip-installable:
```shell
pip install pastax
```
## Usage
Documentation is under construction but you can already have a look at the [getting started notebook](docs/getting_started.ipynb) and the (messy) [API documentation](https://pastax.readthedocs.io/en/latest/api/).
## Work in progress
This package is under active development and should still be considered work in progress.
In particular, the following changes are considered:
- `pastax.gridded`
- add support for C-grids,
- maybe some refactoring of the structures,
- switch to `Coordax`? (see [Coordax](https://coordax.readthedocs.io/en/latest/)),
- `pastax.trajectory`
- use `unxt.Quantity` in place of `Unitful` (see [unxt](https://unxt.readthedocs.io/en/latest/)),
- remove `__add__` and like methods in favour of registered functions (see [quax](https://docs.kidger.site/quax/)),
- `pastax.simulator`
- improve how the product operation is performed between the vector field and the control (support for `Location`, `Time` or `State` objects) (see `diffrax` doc [here](https://docs.kidger.site/diffrax/api/terms/#diffrax.ControlTerm)),
- add support/examples for interrupting the solve when a trajectory reaches land (see `diffrax` doc [here](https://docs.kidger.site/diffrax/api/events/)).
And I should stress that the package lacks (unit-)tests for now.
## Related projects
Several other open-source projects already exist with similar objectives.
The closest ones are probably [(Ocean)Parcels](https://github.com/OceanParcels/parcels), [OpenDrift](https://github.com/OpenDrift/opendrift) and [Drifters.jl](https://github.com/JuliaClimate/Drifters.jl).
Here is a (probably) non-comprehensive (and hopefully correct; please reach out if not) use-case comparison between them:
- you use Python: go with `pastax`, `OpenDrift` or `Parcels`,
- you use Julia: go with `Drifters.jl`,
- you want I/O inter operability with `xarray` Datasets: go with `pastax`, `OpenDrift`, `Parcels` or `Drifters.jl`,
- you need support for Arakawa C-grid: go with `OpenDrift`, `Parcels` or `Drifters.jl` (but keep an eye on `pastax` as it might come in the future),
- you want some post-processing routines: go with `Drifters.jl` (but keep an eye on `pastax` as some might come in the future),
- you want a better control of the right-hand-side term of your Differential Equation: go with `pastax` (probably the most flexible) or `Parcels`,
- you solve Stochastic Differential Equations: go with `pastax`, `OpenDrift` or `Parcels`,
- you need a **wide** range of solvers: go with `pastax` or `Drifters.jl` (if you solve ODE),
- you want to calibrate your simulator ***on-line*** (i.e. by differentiating through your simulator): go with `pastax`,
- you want to run on both CPUs and GPUs (or TPUs): go with `pastax`.
It is worth mentioning that I did not compare runtime performance (especially for typical use cases with `OpenDrift`, `Parcels` or `Drifters.jl` of advecting a very large number of particles with the same velocity field).
I could also cite [py-eddy-tracker](https://github.com/AntSimi/py-eddy-tracker), although it targets more specifically eddy-related routines.
## Contributing
Contributions are welcomed!
See [CONTRIBUTING.md](https://github.com/vadmbertr/pastax/blob/main/CONTRIBUTING.md) and [CONDUCT.md](https://github.com/vadmbertr/pastax/blob/main/CONDUCT.md) to get started.
| text/markdown | null | Vadim Bertrand <vadim.bertrand@univ-grenoble-alpes.fr>, Emmanuel Cosme <emmanuel.cosme@univ-grenoble-alpes.fr>, Adeline Leclercq Samson <adeline.leclercq-samson@univ-grenoble-alpes.fr>, Julien Le Sommer <julien.lesommer@univ-grenoble-alpes.fr> | null | null | Apache-2.0 | differentiable, drift, jax, lagrangian, ocean, parameterizable, sampler, sea, simulator, stochastic, trajectory | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"diffrax",
"equinox",
"interpax",
"jax",
"jaxtyping",
"lineax",
"numpy",
"xarray"
] | [] | [] | [] | [
"Homepage, https://github.com/vadmbertr/pastax",
"Bug Tracker, https://github.com/vadmbertr/pastax/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:08:29.654391 | pastax-0.0.7.tar.gz | 41,018 | ad/98/df659eea12ee59a690686d709c81c880383fb415b27635b57c7957e153b4/pastax-0.0.7.tar.gz | source | sdist | null | false | 4b4b7b2f62277cb6a011c38e9bfbb67d | 3a3de44f9cabae4f75bf1087fbdfdf663d93e8c75fe6e2d4a0718257a54dfeee | ad98df659eea12ee59a690686d709c81c880383fb415b27635b57c7957e153b4 | null | [
"LICENSE"
] | 254 |
2.4 | sortai | 0.1.4 | LLM-powered directory organizer using Google Gemini | # sortai
[](https://github.com/ajs2583/sortai/releases/latest)
LLM-powered directory organizer. Uses **Google Gemini** to suggest a folder structure from filenames and (for text-based files) the first ~500 characters of content, then moves files into the suggested subfolders.
- **Dry-run by default** – see exactly what would move where before touching anything.
- **Confirm before apply** – with `--apply`, you are prompted to confirm before any files are moved.
📦 **PyPI Package:** [https://pypi.org/project/sortai/0.1.4/](https://pypi.org/project/sortai/0.1.4/)
🔄 **Releases:** [Latest](https://github.com/ajs2583/sortai/releases/latest) · [All releases](https://github.com/ajs2583/sortai/releases)
## Install
Install from PyPI:
```bash
pip install sortai
```
Or view the package on [PyPI](https://pypi.org/project/sortai/).
Development install from source:
```bash
git clone https://github.com/ajs2583/sortai.git
cd sortai
pip install -e .
```
## Setup
Set your Google Gemini API key (required):
```bash
export GEMINI_API_KEY=your_key_here
```
Get a key at: **https://aistudio.google.com/app/apikey**
You can copy `.env.example` to `.env` and set `GEMINI_API_KEY` there; load it with your shell or a tool like `python-dotenv` if you use one (sortai does not load `.env` automatically).
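If you prefer not to add a dependency, a `.env` file with simple `KEY=VALUE` lines can be loaded with a few lines of stdlib code. This is a minimal sketch; `python-dotenv` handles quoting and other edge cases more robustly:

```python
import os
import tempfile
from pathlib import Path

def load_env(path):
    """Export simple KEY=VALUE lines from a .env file into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway .env file
os.environ.pop("GEMINI_API_KEY", None)  # ensure unset for this demo
with tempfile.TemporaryDirectory() as tmp:
    env_file = Path(tmp) / ".env"
    env_file.write_text("# comment line\nGEMINI_API_KEY=your_key_here\n")
    load_env(env_file)

print(os.environ["GEMINI_API_KEY"])  # your_key_here
```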
## Demo

*Demo showing `sortai test-demo` dry-run preview, then `--apply` with confirmation.*
## Usage
| Command | Description |
|--------|-------------|
| `sortai <path>` | Dry-run: show what would be moved where (default). |
| `sortai <path> --apply` | After dry-run, prompt and then actually move files. |
| `sortai <path> --depth 2` | Organize up to 2 levels of subfolders (e.g. `documents/work`). |
| `sortai <path> --model gemini-2.5-flash` | Override Gemini model (default: gemini-2.5-flash). |
| `sortai --version` | Print version. |
| `sortai --help` | Show help. |
### Example output
**Before (flat directory):**
```
my-folder/
├── report.pdf
├── notes.txt
├── budget.csv
├── vacation.jpg
└── readme.md
```
**Dry-run:**
```
$ sortai ./my-folder
Dry run – would move:
report.pdf -> documents/
notes.txt -> documents/
budget.csv -> finance/
vacation.jpg -> images/
readme.md -> (keep at root)
Run with --apply to perform moves.
```
**After applying:**
```
my-folder/
├── readme.md
├── documents/
│ ├── report.pdf
│ └── notes.txt
├── finance/
│ └── budget.csv
└── images/
└── vacation.jpg
```
## Supported file types for content reading
sortai reads the **first ~500 characters** of content for:
- `.pdf` (first page via pdfplumber)
- `.txt`, `.md`, `.csv` (plain text)
- `.docx` (paragraph text via python-docx)
All other files are categorized by **filename and extension only**.
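For the plain-text types, the content-reading step boils down to decoding the first ~500 characters; a hypothetical sketch (function name and constants are illustrative, not sortai's actual API):

```python
import tempfile
from pathlib import Path

TEXT_EXTENSIONS = {".txt", ".md", ".csv"}  # plain-text types read directly

def content_snippet(path, limit=500):
    """Return up to `limit` characters for supported text files, else None."""
    p = Path(path)
    if p.suffix.lower() not in TEXT_EXTENSIONS:
        return None  # categorized by filename and extension only
    return p.read_text(encoding="utf-8", errors="replace")[:limit]

# Demo
tmp = Path(tempfile.mkdtemp())
(tmp / "notes.txt").write_text("Quarterly budget notes\n" * 50)
snippet = content_snippet(tmp / "notes.txt")
print(len(snippet))                        # 500
print(content_snippet(tmp / "photo.jpg"))  # None
```

PDF and DOCX extraction would layer pdfplumber and python-docx on top of the same truncation idea.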
## Releasing
### GitHub Releases (Automated)
The release workflow (`.github/workflows/release.yml`) runs when you **push a version tag**. It builds the package and creates a GitHub Release with artifacts. Follow these steps exactly.
1. **Bump version in both places** (must match exactly):
- `pyproject.toml` → `version = "0.1.4"` (example)
- `sortai/__init__.py` → `__version__ = "0.1.4"`
If they don’t match the tag, the workflow will fail at “Verify version consistency”.
2. **Commit and push**:
```bash
git add pyproject.toml sortai/__init__.py
git commit -m "Bump version to 0.1.4"
git push origin main
```
(Use your default branch name if different.)
3. **Create the tag and push it** (this triggers the workflow):
```bash
git tag v0.1.4
git push origin v0.1.4
```
The tag must match the version: `v` + version (e.g. `v0.1.4` for `0.1.4`).
4. **Check the run**:
- Open the repo on GitHub → **Actions** tab.
- The “Release” workflow should run. When it finishes, **Releases** will have a new release with the wheel and sdist attached.
**If the release doesn’t run or fails:**
- **Workflow didn’t run:** The workflow only runs on **tag** push. Pushing a branch alone does not trigger it. Run `git push origin v0.1.4` (or your tag) after creating the tag.
- **“Verify version consistency” failed:** The tag (e.g. `v0.1.4`), `version` in `pyproject.toml`, and `__version__` in `sortai/__init__.py` must all be the same (aside from the leading `v`). Fix the files, commit, delete the tag locally (`git tag -d v0.1.4`), recreate it, and force-push the tag (`git push origin :refs/tags/v0.1.4` then `git push origin v0.1.4`).
- **Manual run from Actions:** You can “Run workflow” and enter a version (e.g. `v0.1.4`). That run will check out **that tag**, so the tag must already exist and be pushed. Use this to re-run a release, not to create the first time.
### Publishing to PyPI
1. **Create a PyPI account** (and optionally [Test PyPI](https://test.pypi.org/) for testing):
- https://pypi.org/account/register/
2. **Install build tools** (one-time):
```bash
pip install build twine
```
3. **Bump version** in `pyproject.toml` and `sortai/__init__.py` when releasing a new version.
4. **Build the package** (from the project root):
```bash
python -m build
```
This creates an sdist (e.g. `dist/sortai-0.1.4.tar.gz`) and a wheel in `dist/`.
5. **Upload to PyPI** (manual):
```bash
twine upload dist/*
```
Twine will prompt for your PyPI username and password. Prefer an [API token](https://pypi.org/manage/account/token/) (username: `__token__`, password: your token) over your account password.
**Or enable automated PyPI upload**: Add your PyPI API token as a GitHub secret named `PYPI_API_TOKEN`, then edit `.github/workflows/release.yml` and change `if: false` to `if: true` in the "Upload to PyPI" step. Releases will then automatically publish to PyPI.
To try Test PyPI first:
```bash
twine upload --repository testpypi dist/*
```
Then install with: `pip install -i https://test.pypi.org/simple/ sortai`
## License
MIT
| text/markdown | sortai | null | null | null | MIT | cli, organize, files, gemini, llm | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"google-genai>=0.2.0",
"pdfplumber>=0.10.0",
"python-docx>=1.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-18T09:07:41.834909 | sortai-0.1.4.tar.gz | 12,332 | 7d/b9/fabee43d72c9ac22e18a190a866d0267ff89687741d724e8a0c5ef16d206/sortai-0.1.4.tar.gz | source | sdist | null | false | 2686c1556ebcde82de67d3937d1ee80c | 7617b7a4cbbdd0128d74771ae5f4c70c6d86a2f6d714e0b5e311f878bb029792 | 7db9fabee43d72c9ac22e18a190a866d0267ff89687741d724e8a0c5ef16d206 | null | [
"LICENSE"
] | 256 |
2.4 | microsoft-agents-storage-cosmos | 0.8.0.dev4 | A Cosmos DB storage library for Microsoft Agents | # Microsoft Agents Storage - Cosmos DB
[](https://pypi.org/project/microsoft-agents-storage-cosmos/)
Azure Cosmos DB storage integration for Microsoft 365 Agents SDK. This library provides enterprise-grade persistent storage for conversation state, user data, and custom agent information using Azure Cosmos DB's globally distributed, multi-model database service.
This library implements the storage interface for the Microsoft 365 Agents SDK using Azure Cosmos DB as the backend. It provides automatic partitioning, global distribution, and low-latency access to your agent data. Perfect for production deployments requiring high availability, scalability, and multi-region support.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
**Why Cosmos DB?**
- 🌍 Global distribution with multi-region writes
- ⚡ Single-digit millisecond latency
- 📈 Automatic and instant scalability
- 🔄 Multiple consistency models
- 💪 99.999% availability SLA
## Installation
```bash
pip install microsoft-agents-storage-cosmos
```
## Environment Setup
### Local Development with Cosmos DB Emulator
Install and run the Azure Cosmos DB Emulator for local testing:
**Download:** [Azure Cosmos DB Emulator](https://docs.microsoft.com/azure/cosmos-db/local-emulator)
## Best Practices
1. **Use Managed Identity in Production** - Avoid storing auth keys in code or environment variables
2. **Initialize Once** - Call `storage.initialize()` during app startup, not per request
3. **Batch Operations** - Read/write multiple items together when possible
4. **Monitor RU Consumption** - Use Azure Monitor to track Request Units usage
5. **Set Appropriate Throughput** - Start with 400 RU/s, scale up based on metrics
6. **Use Session Consistency** - Default consistency level for most scenarios
7. **Implement Retry Logic** - Handle transient failures with exponential backoff
8. **Partition Wisely** - Current implementation uses `/id` partitioning (automatic)
9. **Enable Diagnostics** - Configure Azure diagnostic logs for troubleshooting
10. **Test with Emulator** - Use local emulator for development and testing
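Practice 7's retry advice can be sketched as a generic helper with exponential backoff and jitter (plain Python; the function and parameter names are illustrative, not part of the SDK):

```python
import random
import time

def with_backoff(operation, *, retries=4, base_delay=0.1, max_delay=2.0):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == retries:
                raise
            # Full jitter: sleep a random fraction of the capped backoff window.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))

# Demo: an operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok
```

In a real deployment you would catch the Cosmos SDK's transient-error exceptions rather than `ConnectionError`, and log each retry.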
## Key Classes Reference
- **`CosmosDBStorage`** - Main storage implementation using Azure Cosmos DB
- **`CosmosDBStorageConfig`** - Configuration settings for connection and behavior
- **`StoreItem`** - Base class for data models (inherit to create custom types)
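A minimal sketch of a custom data model. `StoreItem` here is a local stand-in (the real base class comes from the SDK; the assumption that it carries an ETag field used for optimistic concurrency is based on typical storage interfaces), and all field names are illustrative:

```python
from dataclasses import dataclass

# Local stand-in for the SDK's StoreItem base class (assumption: the real
# class exposes an ETag used for optimistic concurrency).
@dataclass
class StoreItem:
    e_tag: str = "*"

@dataclass
class UserProfile(StoreItem):
    # Custom fields for this agent (illustrative).
    name: str = ""
    visits: int = 0

profile = UserProfile(name="Ada", visits=1)
print(profile.e_tag, profile.visits)  # * 1
```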
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-hosting-core==0.8.0.dev4",
"azure-core",
"azure-cosmos",
"azure-identity"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:34.497941 | microsoft_agents_storage_cosmos-0.8.0.dev4-py3-none-any.whl | 11,135 | 37/52/fc5f55587b90d7190fdb774d1a8fd2d5895a2da10d1d4d330f70dce09f55/microsoft_agents_storage_cosmos-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | f513b78e39e00594e8cea0e43c18e927 | 66bbc841545c144282086add37ec76de392fce6cbb46c2b5eb45d9f87af0f38e | 3752fc5f55587b90d7190fdb774d1a8fd2d5895a2da10d1d4d330f70dce09f55 | null | [] | 228 |
2.4 | microsoft-agents-storage-blob | 0.8.0.dev4 | A blob storage library for Microsoft Agents | # Microsoft Agents Storage - Blob
[](https://pypi.org/project/microsoft-agents-storage-blob/)
Azure Blob Storage integration for Microsoft 365 Agents SDK. This library provides persistent storage for conversation state, user data, and custom agent information using Azure Blob Storage.
This library implements the storage interface for the Microsoft 365 Agents SDK using Azure Blob Storage as the backend. It enables your agents to persist conversation state, user preferences, and custom data across sessions, making it well suited to production deployments that need reliable, scalable cloud storage.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
## Installation
```bash
pip install microsoft-agents-storage-blob
```
**Benefits:**
- ✅ No secrets in code
- ✅ Managed Identity support
- ✅ Automatic token renewal
- ✅ Fine-grained access control via Azure RBAC
### Configuration Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `container_name` | `str` | Yes | Name of the blob container to use |
| `connection_string` | `str` | No* | Storage account connection string |
| `url` | `str` | No* | Blob service URL (e.g., `https://account.blob.core.windows.net`) |
| `credential` | `TokenCredential` | No** | Azure credential for authentication |
\*Either `connection_string` or (`url` + `credential`) must be provided.
\*\*Required when using `url`.
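The either/or rule above can be expressed as a small check (a hypothetical helper for illustration; the real `BlobStorageConfig` may validate differently):

```python
def validate_blob_config(container_name, connection_string=None,
                         url=None, credential=None):
    """Enforce the table's rule: connection_string OR (url + credential)."""
    if not container_name:
        raise ValueError("container_name is required")
    if connection_string and (url or credential):
        raise ValueError("use either connection_string or url + credential, not both")
    if not connection_string and not (url and credential):
        raise ValueError("provide connection_string, or url together with credential")

# Valid combinations pass silently.
validate_blob_config("agent-storage", connection_string="UseDevelopmentStorage=true")
validate_blob_config("agent-storage",
                     url="https://acct.blob.core.windows.net",
                     credential=object())
```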
### Azure Managed Identity
When running in Azure (App Service, Functions, Container Apps), use Managed Identity:
```python
from azure.identity import ManagedIdentityCredential

# Import path for BlobStorageConfig is an assumption; adjust to your install.
from microsoft_agents.storage.blob import BlobStorageConfig

config = BlobStorageConfig(
    container_name="agent-storage",
    url="https://myaccount.blob.core.windows.net",
    credential=ManagedIdentityCredential(),
)
```
**Azure RBAC Roles Required:**
- `Storage Blob Data Contributor` - For read/write access
- `Storage Blob Data Reader` - For read-only access
**Benefits of switching to BlobStorage (from in-memory storage):**
- ✅ Data persists across restarts
- ✅ Scalable to millions of items
- ✅ Multi-instance support (load balancing)
- ✅ Automatic backups and geo-replication
- ✅ Built-in monitoring and diagnostics
## Best Practices
1. **Use Token Authentication in Production** - Avoid storing connection strings; use Managed Identity or DefaultAzureCredential
2. **Initialize Once** - Call `storage.initialize()` during app startup, not on every request
3. **Implement Retry Logic** - Handle transient failures with exponential backoff
4. **Monitor Performance** - Use Azure Monitor to track storage operations
5. **Set Lifecycle Policies** - Configure automatic cleanup of old data in Azure Portal
6. **Use Consistent Naming** - Establish key naming conventions (e.g., `user:{id}`, `conversation:{id}`)
7. **Batch Operations** - Read/write multiple items together when possible
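Practices 6 and 7 can be sketched together with a dict-backed stand-in: consistent key prefixes, and one call per batch of reads or writes (class and method names are illustrative, not the SDK's API):

```python
class InMemoryStore:
    """Dict-backed stand-in used only to demonstrate batched reads/writes."""
    def __init__(self):
        self._items = {}

    def write(self, changes: dict) -> None:
        # One call persists many items at once.
        self._items.update(changes)

    def read(self, keys: list) -> dict:
        # One call fetches many keys; missing keys are simply absent.
        return {k: self._items[k] for k in keys if k in self._items}

store = InMemoryStore()
# Consistent naming convention: "user:{id}".
store.write({f"user:{i}": {"visits": i} for i in (1, 2, 3)})
print(store.read(["user:1", "user:3"]))
# {'user:1': {'visits': 1}, 'user:3': {'visits': 3}}
```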
## Key Classes Reference
- **`BlobStorage`** - Main storage implementation using Azure Blob Storage
- **`BlobStorageConfig`** - Configuration settings for connection and authentication
- **`StoreItem`** - Base class for data models (inherit to create custom types)
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-hosting-core==0.8.0.dev4",
"azure-core",
"azure-storage-blob",
"azure-identity"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:33.711816 | microsoft_agents_storage_blob-0.8.0.dev4-py3-none-any.whl | 8,434 | 29/55/d6d7fc25f6d9960ccea9870aa4acf7ae323ac3d80f2d7558fc348676cc3c/microsoft_agents_storage_blob-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | 749ba91d06356a8bfd53fc1835873759 | 9937cb8bcde4d37b13d943cb15f3513a74be44fc9d373ff67a603ccf4cb8911e | 2955d6d7fc25f6d9960ccea9870aa4acf7ae323ac3d80f2d7558fc348676cc3c | null | [] | 222 |
2.4 | microsoft-agents-hosting-teams | 0.8.0.dev4 | Integration library for Microsoft Agents with Teams | # Microsoft Agents Hosting - Teams
[](https://pypi.org/project/microsoft-agents-hosting-teams/)
Integration library for building Microsoft Teams agents using the Microsoft 365 Agents SDK. This library provides specialized handlers and utilities for Teams-specific functionality like messaging extensions, task modules, adaptive cards, and meeting events.
This library extends the core hosting capabilities with Teams-specific features. It handles Teams' unique interaction patterns like messaging extensions, tab applications, task modules, and meeting integrations. Think of it as the bridge that makes your agent "Teams-native" rather than just a generic chatbot.
This library is still in flux, as the interfaces to Teams continue to evolve.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
## Installation
```bash
pip install microsoft-agents-hosting-teams
```
## Key Classes Reference
- **`TeamsActivityHandler`** - Main handler class with Teams-specific event methods
- **`TeamsInfo`** - Utility class for Teams operations (members, meetings, channels)
- **`MessagingExtensionQuery/Response`** - Handle search and messaging extensions
- **`TaskModuleRequest/Response`** - Interactive dialogs and forms
- **`TabRequest/Response`** - Tab application interactions
## Features Supported
✅ **Messaging Extensions** - Search and action-based extensions
✅ **Task Modules** - Interactive dialogs and forms
✅ **Adaptive Cards** - Rich card interactions
✅ **Meeting Events** - Start, end, participant changes
✅ **Team Management** - Member operations, channel messaging
✅ **File Handling** - Upload/download with consent flow
✅ **Tab Apps** - Personal and team tab interactions
✅ **Proactive Messaging** - Send messages to channels/users
## Migration from Bot Framework
| Bot Framework Teams | Microsoft Agents Teams |
|-------------------|------------------------|
| `TeamsActivityHandler` | `TeamsActivityHandler` |
| `TeamsInfo` | `TeamsInfo` |
| `on_teams_members_added` | `on_teams_members_added_activity` |
| `MessagingExtensionQuery` | `MessagingExtensionQuery` |
| `TaskModuleRequest` | `TaskModuleRequest` |
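The renamed hook from the table (`on_teams_members_added_activity`) can be illustrated with a simplified stand-in for the handler's dispatch (illustrative only; the real `TeamsActivityHandler` routes many more Teams-specific event types and passes richer context objects):

```python
import asyncio

# Simplified stand-in, not the SDK class: shows only the routing shape.
class TeamsActivityHandler:
    async def on_conversation_update(self, activity: dict) -> str:
        if activity.get("members_added"):
            return await self.on_teams_members_added_activity(
                activity["members_added"]
            )
        return "ignored"

    async def on_teams_members_added_activity(self, members: list) -> str:
        # Override this hook in your agent, e.g. to welcome new members.
        return f"welcomed {len(members)} member(s)"

result = asyncio.run(
    TeamsActivityHandler().on_conversation_update(
        {"members_added": [{"id": "u1"}, {"id": "u2"}]}
    )
)
print(result)  # welcomed 2 member(s)
```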
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-hosting-core==0.8.0.dev4",
"aiohttp>=3.11.11"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:32.868586 | microsoft_agents_hosting_teams-0.8.0.dev4-py3-none-any.whl | 12,993 | 33/98/5d8f90907e06beac60493a0aca1520e655d578d4414e213c57689ff2e452/microsoft_agents_hosting_teams-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | 3228ee8c8b4a0c3184e353b8363ba079 | a544d33f7efd6ca3e0e4bce6699889d07ab5337ba898352b3c9e7e9314fe890e | 33985d8f90907e06beac60493a0aca1520e655d578d4414e213c57689ff2e452 | null | [] | 222 |
2.4 | microsoft-agents-hosting-fastapi | 0.8.0.dev4 | Integration library for Microsoft Agents with FastAPI | # Microsoft Agents Hosting FastAPI
This library provides FastAPI integration for Microsoft Agents, enabling you to build conversational agents using the FastAPI web framework.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
</table>
## Features
- FastAPI integration for Microsoft Agents
- JWT authorization middleware
- Channel service API endpoints
- Streaming response support
- Cloud adapter for processing agent activities
## Installation
```bash
pip install microsoft-agents-hosting-fastapi
```
## Usage
```python
from fastapi import FastAPI, Request

from microsoft_agents.hosting.fastapi import start_agent_process, CloudAdapter
from microsoft_agents.hosting.core.app import AgentApplication

app = FastAPI()
adapter = CloudAdapter()
agent_app = AgentApplication()

@app.post("/api/messages")
async def messages(request: Request):
    return await start_agent_process(request, agent_app, adapter)
```
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-hosting-core==0.8.0.dev4",
"fastapi>=0.104.0"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:31.629931 | microsoft_agents_hosting_fastapi-0.8.0.dev4-py3-none-any.whl | 13,919 | 9b/27/81ec978b09cef88a178d6beed09f3c94daa90dfc39623df4519a9ea745e4/microsoft_agents_hosting_fastapi-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | ae778f6fff2131b47cbc19385b27fc7c | 7d998b7f2c8a5260a7779e813f6f5a82c5ec489f1000a6466c18194ca060776e | 9b2781ec978b09cef88a178d6beed09f3c94daa90dfc39623df4519a9ea745e4 | null | [] | 218 |
2.4 | microsoft-agents-hosting-core | 0.8.0.dev4 | Core library for Microsoft Agents | # Microsoft Agents Hosting Core
[](https://pypi.org/project/microsoft-agents-hosting-core/)
The core hosting library for Microsoft 365 Agents SDK. This library provides the fundamental building blocks for creating conversational AI agents, including activity processing, state management, authentication, and channel communication.
This is the heart of the Microsoft 365 Agents SDK - think of it as the engine that powers your conversational agents. It handles the complex orchestration of conversations, manages state across turns, and provides the infrastructure needed to build production-ready agents that work across Microsoft 365 platforms.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
## Installation
```bash
pip install microsoft-agents-hosting-core
```
## Simple Echo Agent
See the [Quickstart sample](https://github.com/microsoft/Agents/tree/main/samples/python/quickstart) for full working code.
```python
from os import environ

# Import paths follow the quickstart sample layout (CloudAdapter and
# start_server come from the aiohttp hosting package; MSAL auth from the
# authentication-msal package). Adjust to your installed packages.
from microsoft_agents.authentication.msal import MsalConnectionManager
from microsoft_agents.hosting.aiohttp import CloudAdapter, start_server
from microsoft_agents.hosting.core import (
    AgentApplication,
    Authorization,
    MemoryStorage,
    TurnContext,
    TurnState,
    load_configuration_from_env,
)

agents_sdk_config = load_configuration_from_env(environ)

STORAGE = MemoryStorage()
CONNECTION_MANAGER = MsalConnectionManager(**agents_sdk_config)
ADAPTER = CloudAdapter(connection_manager=CONNECTION_MANAGER)
AUTHORIZATION = Authorization(STORAGE, CONNECTION_MANAGER, **agents_sdk_config)

AGENT_APP = AgentApplication[TurnState](
    storage=STORAGE, adapter=ADAPTER, authorization=AUTHORIZATION, **agents_sdk_config
)

@AGENT_APP.activity("message")
async def on_message(context: TurnContext, state: TurnState):
    await context.send_activity(f"You said: {context.activity.text}")

...

start_server(
    agent_application=AGENT_APP,
    auth_configuration=CONNECTION_MANAGER.get_default_connection_configuration(),
)
```
## Core Concepts
### AgentApplication vs ActivityHandler
**AgentApplication** - Modern, fluent API for building agents:
- Decorator-based routing (`@agent_app.activity("message")`)
- Built-in state management and middleware
- AI-ready with authorization support
- Type-safe with generics
**ActivityHandler** - Traditional inheritance-based approach:
- Override methods for different activity types
- More familiar to Bot Framework developers
- Lower-level control over activity processing
### Route-based Message Handling
```python
import re

@AGENT_APP.message(re.compile(r"^hello$"))
async def on_hello(context: TurnContext, _state: TurnState):
    await context.send_activity("Hello!")

@AGENT_APP.activity("message")
async def on_message(context: TurnContext, _state: TurnState):
    await context.send_activity(f"you said: {context.activity.text}")
```
### Error Handling
```python
import sys
import traceback

@AGENT_APP.error
async def on_error(context: TurnContext, error: Exception):
    # NOTE: In a production environment, consider logging this to Azure
    # Application Insights instead of printing it.
    print(f"\n [on_turn_error] unhandled error: {error}", file=sys.stderr)
    traceback.print_exc()

    # Send a message to the user.
    await context.send_activity("The bot encountered an error or bug.")
```
## Key Classes Reference
### Core Classes
- **`AgentApplication`** - Main application class with fluent API
- **`ActivityHandler`** - Base class for inheritance-based agents
- **`TurnContext`** - Context for each conversation turn
- **`TurnState`** - State management across conversation turns
### State Management
- **`ConversationState`** - Conversation-scoped state
- **`UserState`** - User-scoped state across conversations
- **`TempState`** - Temporary state for current turn
- **`MemoryStorage`** - In-memory storage (development)
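The difference between the scopes above can be illustrated with a dict-backed stand-in (the key layout is an assumption for illustration only; the SDK derives scope keys for you from the incoming activity):

```python
# Dict-backed stand-in for a storage backend such as MemoryStorage.
storage: dict = {}

def save_conversation_state(channel_id: str, conversation_id: str, data: dict) -> None:
    # ConversationState: shared by everyone in one conversation.
    storage[f"{channel_id}/conversations/{conversation_id}"] = data

def save_user_state(channel_id: str, user_id: str, data: dict) -> None:
    # UserState: follows the user across conversations on a channel.
    storage[f"{channel_id}/users/{user_id}"] = data

save_conversation_state("msteams", "conv-1", {"topic": "weather"})
save_user_state("msteams", "user-7", {"locale": "en-US"})
print(sorted(storage))
# ['msteams/conversations/conv-1', 'msteams/users/user-7']
```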
### Messaging
- **`MessageFactory`** - Create different types of messages
- **`CardFactory`** - Create rich card attachments
- **`InputFile`** - Handle file attachments
### Authorization
- **`Authorization`** - Authentication and authorization manager
- **`ClaimsIdentity`** - User identity and claims
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-activity==0.8.0.dev4",
"pyjwt>=2.10.1",
"isodate>=0.6.1",
"azure-core>=1.30.0",
"python-dotenv>=1.1.1"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:30.672858 | microsoft_agents_hosting_core-0.8.0.dev4-py3-none-any.whl | 139,601 | 80/be/2c589d6012c075eb9e064a2c85562849a79adde6a3fd1c494c6daa988997/microsoft_agents_hosting_core-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | 7518711305655b019e85ffc80348cf65 | 312436f09b36a806e8faeea7723d4af633a24ff527bc46dab5cd2bb7823c958b | 80be2c589d6012c075eb9e064a2c85562849a79adde6a3fd1c494c6daa988997 | null | [] | 3,891 |
2.4 | microsoft-agents-hosting-aiohttp | 0.8.0.dev4 | Integration library for Microsoft Agents with aiohttp | # Microsoft Agents Hosting - aiohttp
[](https://pypi.org/project/microsoft-agents-hosting-aiohttp/)
Integration library for hosting Microsoft 365 Agents using aiohttp. This library provides HTTP adapters, middleware, and utilities for building web-based agent applications with the popular aiohttp framework.
This library bridges the Microsoft 365 Agents SDK with aiohttp, allowing you to create HTTP endpoints that handle agent conversations. It provides everything you need to host agents as web services, including request processing, authentication, and routing.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio Client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
## Installation
```bash
pip install microsoft-agents-hosting-aiohttp
```
## Simple Echo Agent
See the [Quickstart sample](https://github.com/microsoft/Agents/tree/main/samples/python/quickstart) for full working code.
```python
agents_sdk_config = load_configuration_from_env(environ)
STORAGE = MemoryStorage()
CONNECTION_MANAGER = MsalConnectionManager(**agents_sdk_config)
ADAPTER = CloudAdapter(connection_manager=CONNECTION_MANAGER)
AUTHORIZATION = Authorization(STORAGE, CONNECTION_MANAGER, **agents_sdk_config)
AGENT_APP = AgentApplication[TurnState](
storage=STORAGE, adapter=ADAPTER, authorization=AUTHORIZATION, **agents_sdk_config
)
@AGENT_APP.activity("message")
async def on_message(context: TurnContext, state: TurnState):
await context.send_activity(f"You said: {context.activity.text}")
...
start_server(
agent_application=AGENT_APP,
auth_configuration=CONNECTION_MANAGER.get_default_connection_configuration(),
)
```
### Error Handling
Customize error responses. Code taken from the [Quickstart sample](https://github.com/microsoft/Agents/tree/main/samples/python/quickstart).
```python
@AGENT_APP.error
async def on_error(context: TurnContext, error: Exception):
# Write the error to the console log.
# NOTE: In a production environment, consider logging this to Azure
# Application Insights.
print(f"\n [on_turn_error] unhandled error: {error}", file=sys.stderr)
traceback.print_exc()
# Send a message to the user
await context.send_activity("The bot encountered an error or bug.")
```
## Features
✅ **HTTP hosting** - Full aiohttp integration for web hosting
✅ **JWT authentication** - Built-in security with middleware
✅ **Agent-to-agent** - Support for multi-agent communication
✅ **Streaming** - Real-time response streaming
✅ **Error handling** - Comprehensive error management
✅ **Development friendly** - Hot reload and debugging support
## Requirements
- Python 3.10+ (supports 3.10, 3.11, 3.12, 3.13, 3.14)
- aiohttp 3.11.11+
- Microsoft Agents hosting core library
## Best Practices
1. **Use middleware** for cross-cutting concerns like auth and logging
2. **Handle errors gracefully** with custom error handlers
3. **Secure your endpoints** with JWT middleware in production
4. **Structure routes** logically for agent communication
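The middleware idea in point 1 can be sketched with plain coroutines. This is not aiohttp's actual middleware signature — just an illustration of how the wrapping order determines which concern runs first:

```python
import asyncio

async def handler(request: dict) -> dict:
    # Innermost "endpoint": produces the actual response.
    return {"status": 200, "body": f"hello {request['user']}"}

def logging_middleware(next_handler):
    async def wrapped(request):
        print(f"-> {request.get('path', '/')}")   # runs before the handler
        response = await next_handler(request)
        print(f"<- {response['status']}")          # runs after the handler
        return response
    return wrapped

def auth_middleware(next_handler):
    async def wrapped(request):
        if "user" not in request:                  # reject unauthenticated calls early
            return {"status": 401, "body": "unauthorized"}
        return await next_handler(request)
    return wrapped

# Outermost wrapper runs first: logging wraps auth, which wraps the handler.
app = logging_middleware(auth_middleware(handler))

resp = asyncio.run(app({"path": "/api/messages", "user": "alice"}))
print(resp["body"])  # hello alice
```

In a real aiohttp application the same layering is expressed via the `middlewares=[...]` argument to `web.Application`.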
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-hosting-core==0.8.0.dev4",
"aiohttp>=3.11.11"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:28.611916 | microsoft_agents_hosting_aiohttp-0.8.0.dev4-py3-none-any.whl | 15,911 | 25/6a/46b6cb847f2ef2225713fdd141cecb60bb86b9a2117c23eb1099ebdc885d/microsoft_agents_hosting_aiohttp-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | 8d9188b6c73904ad447e64bc469e7a79 | 420d86f3502fedd28bbdc8c2aa9ab578a17d1470f3671194b1f79b44611cff1c | 256a46b6cb847f2ef2225713fdd141cecb60bb86b9a2117c23eb1099ebdc885d | null | [] | 461 |
2.4 | microsoft-agents-copilotstudio-client | 0.8.0.dev4 | A client library for Microsoft Agents | # Microsoft Agents Copilot Studio Client
[](https://pypi.org/project/microsoft-agents-copilotstudio-client/)
The Copilot Studio Client connects to and interacts with agents created in Microsoft Copilot Studio, letting you integrate those agents into your Python applications.
This client library provides a direct connection to Copilot Studio agents, bypassing traditional chat channels. It's perfect for integrating AI conversations into your applications, building custom UIs, or creating agent-to-agent communication flows.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio Client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
## Installation
```bash
pip install microsoft-agents-copilotstudio-client
```
## Quick Start
### Basic Setup
#### Standard Environment-Based Connection
The code below is from [main.py in the Copilot Studio Client sample](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/src/main.py):
```python
def create_client():
settings = ConnectionSettings(
environment_id=environ.get("COPILOTSTUDIOAGENT__ENVIRONMENTID"),
agent_identifier=environ.get("COPILOTSTUDIOAGENT__SCHEMANAME"),
cloud=None,
copilot_agent_type=None,
custom_power_platform_cloud=None,
)
token = acquire_token(
settings,
app_client_id=environ.get("COPILOTSTUDIOAGENT__AGENTAPPID"),
tenant_id=environ.get("COPILOTSTUDIOAGENT__TENANTID"),
)
copilot_client = CopilotClient(settings, token)
return copilot_client
```
#### DirectConnect URL Mode (Simplified Setup)
For simplified setup, you can use a DirectConnect URL instead of environment-based configuration:
```python
def create_client_direct():
settings = ConnectionSettings(
environment_id="", # Not needed with DirectConnect URL
agent_identifier="", # Not needed with DirectConnect URL
direct_connect_url="https://api.powerplatform.com/copilotstudio/dataverse-backed/authenticated/bots/your-bot-id"
)
token = acquire_token(...)
copilot_client = CopilotClient(settings, token)
return copilot_client
```
#### Advanced Configuration Options
```python
settings = ConnectionSettings(
environment_id="your-env-id",
agent_identifier="your-agent-id",
cloud=PowerPlatformCloud.PROD,
copilot_agent_type=AgentType.PUBLISHED,
custom_power_platform_cloud=None,
direct_connect_url=None, # Optional: Direct URL to agent
use_experimental_endpoint=False, # Optional: Enable experimental features
enable_diagnostics=False, # Optional: Enable diagnostic logging (logs HTTP details)
client_session_settings={"timeout": aiohttp.ClientTimeout(total=60)} # Optional: aiohttp settings
)
```
**Diagnostic Logging Details**:
When `enable_diagnostics=True`, the CopilotClient logs detailed HTTP communication using Python's `logging` module at the `DEBUG` level:
- Pre-request: Logs the full request URL (`>>> SEND TO {url}`)
- Post-response: Logs all HTTP response headers in a formatted table
- Errors: Logs error messages with status codes
To see diagnostic output, configure your Python logging:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
**Experimental Endpoint Details**:
When `use_experimental_endpoint=True`, the CopilotClient will automatically capture and use the experimental endpoint URL from the first response:
- The server returns the experimental endpoint in the `x-ms-d2e-experimental` response header
- Once captured, this URL is stored in `settings.direct_connect_url` and used for all subsequent requests
- This feature is only active when `use_experimental_endpoint=True` AND `direct_connect_url` is not already set
- The experimental endpoint allows access to pre-release features and optimizations
### Start a Conversation
#### Simple Start
The code below is summarized from the [main.py in the Copilot Studio Client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/src/main.py). See that sample for complete & working code.
```python
copilot_client = create_client()
async for activity in copilot_client.start_conversation(emit_start_conversation_event=True):
if activity.type == ActivityTypes.message:
print(f"\n{activity.text}")
# Ask questions (in the full sample, conversation_id is taken from the
# activities returned by start_conversation)
async for reply in copilot_client.ask_question("Who are you?", conversation_id):
if reply.type == ActivityTypes.message:
print(f"\n{reply.text}")
```
#### Start with Advanced Options (Locale Support)
```python
from microsoft_agents.copilotstudio.client import StartRequest
# Create a start request with locale
start_request = StartRequest(
emit_start_conversation_event=True,
locale="en-US", # Optional: specify conversation locale
conversation_id="custom-conv-id" # Optional: provide your own conversation ID
)
async for activity in copilot_client.start_conversation_with_request(start_request):
if activity.type == ActivityTypes.message:
print(f"\n{activity.text}")
```
### Send Activities
#### Send a Custom Activity
```python
from microsoft_agents.activity import Activity
activity = Activity(
type="message",
text="Hello, agent!",
conversation={"id": conversation_id}
)
async for reply in copilot_client.send_activity(activity):
print(f"Response: {reply.text}")
```
#### Execute with Explicit Conversation ID
```python
# Execute an activity with a specific conversation ID
activity = Activity(type="message", text="What's the weather?")
async for reply in copilot_client.execute(conversation_id="conv-123", activity=activity):
print(f"Response: {reply.text}")
```
### Subscribe to Conversation Events
For real-time event streaming with resumption support:
```python
from microsoft_agents.copilotstudio.client import SubscribeEvent
# Subscribe to conversation events
async for subscribe_event in copilot_client.subscribe(
conversation_id="conv-123",
last_received_event_id=None # Optional: resume from last event
):
activity = subscribe_event.activity
event_id = subscribe_event.event_id # Use for resumption
if activity.type == ActivityTypes.message:
print(f"[{event_id}] {activity.text}")
```
### Environment Variables
Set up your `.env` file with the following options:
#### Standard Environment-Based Configuration
```bash
# Required (unless using DIRECT_CONNECT_URL)
ENVIRONMENT_ID=your-power-platform-environment-id
AGENT_IDENTIFIER=your-copilot-studio-agent-id
APP_CLIENT_ID=your-azure-app-client-id
TENANT_ID=your-azure-tenant-id
# Optional Cloud Configuration
CLOUD=PROD # Options: PROD, GOV, HIGH, DOD, MOONCAKE, DEV, TEST, etc.
COPILOT_AGENT_TYPE=PUBLISHED # Options: PUBLISHED, PREBUILT
CUSTOM_POWER_PLATFORM_CLOUD=https://custom.cloud.com
```
#### DirectConnect URL Configuration (Alternative)
```bash
# Required for DirectConnect mode
DIRECT_CONNECT_URL=https://api.powerplatform.com/copilotstudio/dataverse-backed/authenticated/bots/your-bot-id
APP_CLIENT_ID=your-azure-app-client-id
TENANT_ID=your-azure-tenant-id
# Optional
CLOUD=PROD # Used for token audience resolution
```
#### Advanced Options
```bash
# Experimental and diagnostic features
USE_EXPERIMENTAL_ENDPOINT=false # Enable automatic experimental endpoint capture
ENABLE_DIAGNOSTICS=false # Enable diagnostic logging (logs HTTP requests/responses)
```
**Experimental Endpoint**: When `USE_EXPERIMENTAL_ENDPOINT=true`, the client automatically captures and uses the experimental endpoint URL from the server's `x-ms-d2e-experimental` response header. This feature:
- Only activates when `direct_connect_url` is not already set
- Captures the URL from the first response and stores it for all subsequent requests
- Provides access to pre-release features and performance optimizations
- Useful for testing new capabilities before general availability
**Diagnostic Logging**: When `ENABLE_DIAGNOSTICS=true` or `enable_diagnostics=True`, the client will log detailed HTTP request and response information including:
- Request URLs before sending
- All response headers with their values
- Error messages for failed requests
This is useful for debugging connection issues, authentication problems, or understanding the communication flow with Copilot Studio. Diagnostic logs use Python's standard `logging` module at the `DEBUG` level.
#### Using Environment Variables in Code
The `ConnectionSettings.populate_from_environment()` helper method automatically loads these variables:
```python
from microsoft_agents.copilotstudio.client import ConnectionSettings
# Automatically loads from environment variables
settings_dict = ConnectionSettings.populate_from_environment()
settings = ConnectionSettings(**settings_dict)
```
## Features
### Core Capabilities
✅ **Real-time streaming** - Server-sent events for live responses
✅ **Multi-cloud support** - Works across all Power Platform clouds (PROD, GOV, HIGH, DOD, MOONCAKE, etc.)
✅ **Rich content** - Support for cards, actions, and attachments
✅ **Conversation management** - Maintain context across interactions
✅ **Custom activities** - Send structured data to agents
✅ **Async/await** - Modern Python async support
### Advanced Features
✅ **DirectConnect URLs** - Simplified connection with direct bot URLs
✅ **Locale support** - Specify conversation language with `StartRequest`
✅ **Event subscription** - Subscribe to conversation events with SSE resumption
✅ **Multiple connection modes** - Environment-based or DirectConnect URL
✅ **Token audience resolution** - Automatic cloud detection from URLs
✅ **User-Agent tracking** - Automatic SDK version and platform headers
✅ **Environment configuration** - Automatic loading from environment variables
✅ **Experimental endpoints** - Toggle experimental API features
✅ **Diagnostic logging** - HTTP request/response logging for debugging and troubleshooting
### API Methods
| Method | Description |
|--------|-------------|
| `start_conversation()` | Start a new conversation with basic options |
| `start_conversation_with_request()` | Start with advanced options (locale, custom conversation ID) |
| `ask_question()` | Send a text question to the agent |
| `ask_question_with_activity()` | Send a custom Activity object |
| `send_activity()` | Send any activity (alias for `ask_question_with_activity()`) |
| `execute()` | Execute an activity with explicit conversation ID |
| `subscribe()` | Subscribe to conversation events with resumption support |
### Configuration Models
| Class | Description |
|-------|-------------|
| `ConnectionSettings` | Main configuration class with all connection options |
| `StartRequest` | Advanced start options (locale, conversation ID) |
| `SubscribeEvent` | Event wrapper with activity and SSE event ID |
| `PowerPlatformCloud` | Enum for cloud environments |
| `AgentType` | Enum for agent types (PUBLISHED, PREBUILT) |
| `UserAgentHelper` | Utility for generating user-agent headers |
## Connection Modes
The client supports two connection modes:
### 1. Environment-Based Connection (Standard)
Uses environment ID and agent identifier to construct the connection URL:
```python
settings = ConnectionSettings(
environment_id="aaaabbbb-1111-2222-3333-ccccddddeeee",
agent_identifier="cr123_myagent"
)
```
**URL Pattern:**
`https://{env-prefix}.{env-suffix}.environment.api.powerplatform.com/copilotstudio/dataverse-backed/authenticated/bots/{agent-id}/conversations`
### 2. DirectConnect URL Mode (Simplified)
Uses a direct URL to the agent, bypassing environment resolution:
```python
settings = ConnectionSettings(
environment_id="",
agent_identifier="",
direct_connect_url="https://api.powerplatform.com/copilotstudio/dataverse-backed/authenticated/bots/cr123_myagent"
)
```
**Benefits:**
- Simpler configuration with single URL
- Automatic cloud detection for token audience
- Works across environments without environment ID lookup
- Useful for multi-tenant scenarios
## Token Audience Resolution
The client automatically determines the correct token audience:
```python
# For environment-based connections
audience = PowerPlatformEnvironment.get_token_audience(settings)
# Returns: https://api.powerplatform.com/.default
# For DirectConnect URLs
audience = PowerPlatformEnvironment.get_token_audience(
settings=ConnectionSettings("", "", direct_connect_url="https://api.gov.powerplatform.microsoft.us/...")
)
# Returns: https://api.gov.powerplatform.microsoft.us/.default
```
## Troubleshooting
### Common Issues
**Authentication failed**
- Verify your app is registered in Azure AD
- Check that token has the correct audience scope (use `PowerPlatformEnvironment.get_token_audience()`)
- Ensure your app has permissions to the Power Platform environment
- For DirectConnect URLs, verify cloud setting matches the URL domain
**Agent not found**
- Verify the environment ID and agent identifier
- Check that the agent is published and accessible
- Confirm you're using the correct cloud setting
- For DirectConnect URLs, ensure the URL is correct and complete
**Connection timeout**
- Check network connectivity to Power Platform
- Verify firewall settings allow HTTPS traffic
- Try a different cloud region if available
- Check if `client_session_settings` timeout is appropriate
**Invalid DirectConnect URL**
- Ensure URL includes scheme (https://)
- Verify URL format matches expected pattern
- Check for trailing slashes (automatically normalized)
- Confirm URL points to the correct cloud environment
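The checklist above can be turned into a quick client-side sanity check of your own (independent of whatever validation the SDK performs internally):

```python
from urllib.parse import urlparse

def check_direct_connect_url(url: str) -> list[str]:
    """Return a list of problems found with a DirectConnect URL."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        problems.append("URL must use the https:// scheme")
    if not parsed.netloc:
        problems.append("URL is missing a host")
    elif "powerplatform" not in parsed.netloc:
        problems.append("host does not look like a Power Platform domain")
    if url.endswith("/"):
        # The client normalizes trailing slashes, but flag them anyway.
        problems.append("trailing slash")
    return problems

print(check_direct_connect_url("http://api.powerplatform.com/bots/x"))
# ['URL must use the https:// scheme']
```

Running this against your configured `DIRECT_CONNECT_URL` before starting the client can save a round of failed connection attempts.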
## Requirements
- Python 3.10+ (supports 3.10, 3.11, 3.12, 3.13, 3.14)
- Valid Azure AD app registration
- Access to Microsoft Power Platform environment
- Published Copilot Studio agent
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-hosting-core==0.8.0.dev4"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:27.791364 | microsoft_agents_copilotstudio_client-0.8.0.dev4-py3-none-any.whl | 23,761 | c3/62/01f51658119342dd8c330881982142fe2fa79210f59e89d440651c6ebfa6/microsoft_agents_copilotstudio_client-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | 8435ae2dfbe5aa3a5469684283fe35e5 | 086c5d3cfdba6eaab91bba28f459a95c72befaa00fb1adbe38a1cd7071cb11c2 | c36201f51658119342dd8c330881982142fe2fa79210f59e89d440651c6ebfa6 | null | [] | 3,571 |
2.4 | microsoft-agents-authentication-msal | 0.8.0.dev4 | A msal-based authentication library for Microsoft Agents | # Microsoft Agents MSAL Authentication
[](https://pypi.org/project/microsoft-agents-authentication-msal/)
Provides secure authentication for your agents using the Microsoft Authentication Library (MSAL). It handles acquiring tokens from Azure AD so your agent can securely communicate with Microsoft services such as Teams, the Graph API, and other Azure resources.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio Client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
## Installation
```bash
pip install microsoft-agents-authentication-msal
```
## Quick Start
### Basic Setup with Client Secret
Define your client secrets in your `.env` file:
```env
CONNECTIONS__SERVICE_CONNECTION__SETTINGS__CLIENTID=client-id
CONNECTIONS__SERVICE_CONNECTION__SETTINGS__CLIENTSECRET=client-secret
CONNECTIONS__SERVICE_CONNECTION__SETTINGS__TENANTID=tenant-id
```
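The double-underscore separators follow a common nested-configuration convention, where each `__` denotes one level of nesting. As an illustration only (the helper below is not the SDK's loader), such keys could map to a nested dictionary like this:

```python
import os


def nested_from_env(prefix: str) -> dict:
    """Illustrative only: split KEY__SUBKEY__... env vars into a nested dict."""
    result: dict = {}
    for key, value in os.environ.items():
        if not key.startswith(prefix):
            continue
        parts = key.split("__")
        node = result
        # Walk/create intermediate dicts for all but the last segment.
        for part in parts[:-1]:
            node = node.setdefault(part.lower(), {})
        node[parts[-1].lower()] = value
    return result


os.environ["CONNECTIONS__SERVICE_CONNECTION__SETTINGS__CLIENTID"] = "client-id"
cfg = nested_from_env("CONNECTIONS__")
print(cfg["connections"]["service_connection"]["settings"]["clientid"])  # client-id
```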
Load the Configuration (Code from [main.py Quickstart Sample](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/src/main.py))
```python
from .start_server import start_server
start_server(
agent_application=AGENT_APP,
auth_configuration=CONNECTION_MANAGER.get_default_connection_configuration(),
)
```
Then start the Agent (code snippet from the [start_server.py Quickstart Sample](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/src/start_server.py)):
```python
def start_server(
agent_application: AgentApplication, auth_configuration: AgentAuthConfiguration
):
async def entry_point(req: Request) -> Response:
agent: AgentApplication = req.app["agent_app"]
adapter: CloudAdapter = req.app["adapter"]
return await start_agent_process(
req,
agent,
adapter,
)
[...]
```
## Authentication Types
The M365 Agents SDK in Python supports the following Auth types:
```python
class AuthTypes(str, Enum):
certificate = "certificate"
certificate_subject_name = "CertificateSubjectName"
client_secret = "ClientSecret"
user_managed_identity = "UserManagedIdentity"
system_managed_identity = "SystemManagedIdentity"
```
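Because `AuthTypes` subclasses `str`, its members compare equal to their plain string values, which is what lets configuration files carry values such as `"ClientSecret"` directly. A self-contained illustration (the enum body is copied from above):

```python
from enum import Enum


class AuthTypes(str, Enum):
    certificate = "certificate"
    certificate_subject_name = "CertificateSubjectName"
    client_secret = "ClientSecret"
    user_managed_identity = "UserManagedIdentity"
    system_managed_identity = "SystemManagedIdentity"


# Lookup by value recovers the member from a configuration string,
# and str-mixin members compare equal to their raw string values.
assert AuthTypes("ClientSecret") is AuthTypes.client_secret
assert AuthTypes.client_secret == "ClientSecret"
```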
## Key Classes
- **`MsalAuth`** - Core authentication provider using MSAL
- **`MsalConnectionManager`** - Manages multiple authentication connections
## Features
✅ **Multiple auth types** - Client secret, certificate, managed identity
✅ **Token caching** - Automatic token refresh and caching
✅ **Multi-tenant** - Support for different Azure AD tenants
✅ **Agent-to-agent** - Secure communication between agents
✅ **On-behalf-of** - Act on behalf of users
# Security Best Practices
- Store secrets in Azure Key Vault or environment variables
- Use managed identities when possible (no secrets to manage)
- Regularly rotate client secrets and certificates
- Use least-privilege principle for scopes and permissions
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
Explore working examples in the [Python samples repository](https://github.com/microsoft/Agents/tree/main/samples/python):
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"microsoft-agents-hosting-core==0.8.0.dev4",
"msal>=1.34.0",
"requests>=2.32.3",
"cryptography>=44.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:26.956447 | microsoft_agents_authentication_msal-0.8.0.dev4-py3-none-any.whl | 12,496 | 26/52/0aaaef261a9bf4fb07239cc3b8700f5507939f798c4bb6dc967f57a84450/microsoft_agents_authentication_msal-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | 2513b0872f6cfdb0f88b2b2ac039fc5b | 99bd4e5bdf81384e0a6259f054bde0ffce5a79bf9cc76398c293c1cfe8ba5c92 | 26520aaaef261a9bf4fb07239cc3b8700f5507939f798c4bb6dc967f57a84450 | null | [] | 455 |
2.4 | microsoft-agents-activity | 0.8.0.dev4 | A protocol library for Microsoft Agents | # Microsoft Agents Activity
[](https://pypi.org/project/microsoft-agents-activity/)
Core types and schemas for building conversational AI agents that work across Microsoft 365 platforms like Teams, Copilot Studio, and Webchat.
# What is this?
This library is part of the **Microsoft 365 Agents SDK for Python** - a comprehensive framework for building enterprise-grade conversational AI agents. The SDK enables developers to create intelligent agents that work across multiple platforms including Microsoft Teams, M365 Copilot, Copilot Studio, and web chat, with support for third-party integrations like Slack, Facebook Messenger, and Twilio.
## Release Notes
<table style="width:100%">
<tr>
<th style="width:20%">Version</th>
<th style="width:20%">Date</th>
<th style="width:60%">Release Notes</th>
</tr>
<tr>
<td>0.7.0</td>
<td>2026-01-21</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v070">
0.7.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.1</td>
<td>2025-12-01</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v061">
0.6.1 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.6.0</td>
<td>2025-11-18</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v060">
0.6.0 Release Notes
</a>
</td>
</tr>
<tr>
<td>0.5.0</td>
<td>2025-10-22</td>
<td>
<a href="https://github.com/microsoft/Agents-for-python/blob/main/changelog.md#microsoft-365-agents-sdk-for-python---release-notes-v050">
0.5.0 Release Notes
</a>
</td>
</tr>
</table>
## Packages Overview
We offer the following PyPI packages to create conversational experiences based on Agents:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-activity` | [](https://pypi.org/project/microsoft-agents-activity/) | Types and validators implementing the Activity protocol spec. |
| `microsoft-agents-hosting-core` | [](https://pypi.org/project/microsoft-agents-hosting-core/) | Core library for Microsoft Agents hosting. |
| `microsoft-agents-hosting-aiohttp` | [](https://pypi.org/project/microsoft-agents-hosting-aiohttp/) | Configures aiohttp to run the Agent. |
| `microsoft-agents-hosting-teams` | [](https://pypi.org/project/microsoft-agents-hosting-teams/) | Provides classes to host an Agent for Teams. |
| `microsoft-agents-storage-blob` | [](https://pypi.org/project/microsoft-agents-storage-blob/) | Extension to use Azure Blob as storage. |
| `microsoft-agents-storage-cosmos` | [](https://pypi.org/project/microsoft-agents-storage-cosmos/) | Extension to use CosmosDB as storage. |
| `microsoft-agents-authentication-msal` | [](https://pypi.org/project/microsoft-agents-authentication-msal/) | MSAL-based authentication for Microsoft Agents. |
Additionally, we provide a Copilot Studio Client to interact with Agents created in Copilot Studio:
| Package Name | PyPI Version | Description |
|--------------|-------------|-------------|
| `microsoft-agents-copilotstudio-client` | [](https://pypi.org/project/microsoft-agents-copilotstudio-client/) | Direct to Engine client to interact with Agents created in CopilotStudio |
## Architecture
The SDK follows a modular architecture:
- **Activity Layer**: Protocol definitions for cross-platform messaging
- **Hosting Layer**: Core agent lifecycle, middleware, and web hosting
- **Storage Layer**: Persistent state management with Azure backends
- **Authentication Layer**: Secure identity and token management
- **Integration Layer**: Platform-specific adapters (Teams, Copilot Studio)
## Installation
```bash
pip install microsoft-agents-activity
```
## Quick Start
Code below taken from the [Quick Start](https://github.com/microsoft/Agents/tree/main/samples/python/quickstart) sample.
```python
@AGENT_APP.conversation_update("membersAdded")
async def on_members_added(context: TurnContext, _state: TurnState):
await context.send_activity(
"Welcome to the empty agent! "
"This agent is designed to be a starting point for your own agent development."
)
return True
@AGENT_APP.message(re.compile(r"^hello$"))
async def on_hello(context: TurnContext, _state: TurnState):
await context.send_activity("Hello!")
@AGENT_APP.activity("message")
async def on_message(context: TurnContext, _state: TurnState):
await context.send_activity(f"you said: {context.activity.text}")
```
## Common Use Cases
### Rich Messages with Cards
Code below taken from the [Cards](https://github.com/microsoft/Agents/tree/main/samples/python/cards) sample.
```python
@staticmethod
async def send_animation_card(context: TurnContext):
card = CardFactory.animation_card(
AnimationCard(
title="Microsoft Agents Framework",
image=ThumbnailUrl(
url="https://i.giphy.com/Ki55RUbOV5njy.gif", alt="Cute Robot"
),
media=[MediaUrl(url="https://i.giphy.com/Ki55RUbOV5njy.gif")],
subtitle="Animation Card",
text="This is an example of an animation card using a gif.",
aspect="16:9",
duration="PT2M",
)
)
await CardMessages.send_activity(context, card)
@staticmethod
async def send_activity(context: TurnContext, card: Attachment):
activity = Activity(type=ActivityTypes.message, attachments=[card])
await context.send_activity(activity)
```
## Activity Types
The library supports different types of communication:
- **Message** - Regular chat messages with text, cards, attachments
- **Typing** - Show typing indicators
- **ConversationUpdate** - People joining/leaving chats
- **Event** - Custom events and notifications
- **Invoke** - Direct function calls
- **EndOfConversation** - End chat sessions
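The real `Activity` is a Pydantic model; as a stand-in for illustration only, the shape of the pattern can be sketched with stdlib dataclasses (`ActivityKind` and `SimpleActivity` below are hypothetical names, not the library's API):

```python
from dataclasses import dataclass, field
from enum import Enum


class ActivityKind(str, Enum):
    # Mirrors the activity types listed above.
    message = "message"
    typing = "typing"
    conversation_update = "conversationUpdate"
    event = "event"
    invoke = "invoke"
    end_of_conversation = "endOfConversation"


@dataclass
class SimpleActivity:
    type: ActivityKind
    text: str = ""
    attachments: list = field(default_factory=list)


# A typing indicator carries no text or attachments, only its type.
indicator = SimpleActivity(type=ActivityKind.typing)
print(indicator.type.value)  # typing
```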
## Key Features
✅ **Type-safe** - Built with Pydantic for automatic validation
✅ **Rich content** - Support for cards, images, videos, and interactive elements
✅ **Teams ready** - Full Microsoft Teams integration
✅ **Cross-platform** - Works across all Microsoft 365 chat platforms
✅ **Migration friendly** - Easy upgrade from Bot Framework
# Quick Links
- 📦 [All SDK Packages on PyPI](https://pypi.org/search/?q=microsoft-agents)
- 📖 [Complete Documentation](https://aka.ms/agents)
- 💡 [Python Samples Repository](https://github.com/microsoft/Agents/tree/main/samples/python)
- 🐛 [Report Issues](https://github.com/microsoft/Agents-for-python/issues)
# Sample Applications
Explore working examples in the [Python samples repository](https://github.com/microsoft/Agents/tree/main/samples/python):
|Name|Description|README|
|----|----|----|
|Quickstart|Simplest agent|[Quickstart](https://github.com/microsoft/Agents/blob/main/samples/python/quickstart/README.md)|
|Auto Sign In|Simple OAuth agent using Graph and GitHub|[auto-signin](https://github.com/microsoft/Agents/blob/main/samples/python/auto-signin/README.md)|
|OBO Authorization|OBO flow to access a Copilot Studio Agent|[obo-authorization](https://github.com/microsoft/Agents/blob/main/samples/python/obo-authorization/README.md)|
|Semantic Kernel Integration|A weather agent built with Semantic Kernel|[semantic-kernel-multiturn](https://github.com/microsoft/Agents/blob/main/samples/python/semantic-kernel-multiturn/README.md)|
|Streaming Agent|Streams OpenAI responses|[azure-ai-streaming](https://github.com/microsoft/Agents/blob/main/samples/python/azureai-streaming/README.md)|
|Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](https://github.com/microsoft/Agents/blob/main/samples/python/copilotstudio-client/README.md)|
|Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](https://github.com/microsoft/Agents/blob/main/samples/python/cards/README.md)|
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.10.4"
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/Agents"
] | RestSharp/106.13.0.0 | 2026-02-18T09:07:25.962094 | microsoft_agents_activity-0.8.0.dev4-py3-none-any.whl | 132,958 | cb/9d/71851a42df8002f3012b7528a0a3c25ed5edba4d08553b6c255c4fb6ada1/microsoft_agents_activity-0.8.0.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | d90cc57c07c5988d5ce8734940d637fc | 71be0ef366941a4eed54daafc78f709762f1e786ace940cb2502f1a34f3630cf | cb9d71851a42df8002f3012b7528a0a3c25ed5edba4d08553b6c255c4fb6ada1 | null | [] | 3,896 |
2.3 | aurora-unicycler | 0.4.4 | A universal cycling protocol | <h1 align="center">
<img src="https://github.com/user-attachments/assets/e904058b-bf9a-435c-aaa0-4ed4f2fc09f8" width="500" align="center" alt="aurora-biologic logo">
</h1>
<br>
[](https://empaeconversion.github.io/aurora-unicycler/)
[](https://pypi.org/project/aurora-unicycler/)
[](https://github.com/empaeconversion/aurora-unicycler/blob/main/LICENSE)
[](https://pypi.org/project/aurora-unicycler/)
[](https://github.com/EmpaEconversion/aurora-unicycler/actions/workflows/test.yml)
[](https://app.codecov.io/gh/EmpaEconversion/aurora-unicycler)
A universal battery cycling protocol that can be exported to different formats.
See the [docs](https://empaeconversion.github.io/aurora-unicycler/) for more details.
## Features
- Define a cycling protocol in Python or JSON
- Export the protocol to different formats
- Biologic .mps
- Neware .xml
- tomato 0.2.3 .json
- PyBaMM string list
- BattINFO .jsonld
This is particularly useful for high-throughput battery experiments, as protocols can be programmatically defined, and sample IDs and capacities can be attached at the last second.
Check out our standalone APIs for controlling cyclers with Python or command line:
- [`aurora-biologic`](https://github.com/empaeconversion/aurora-biologic)
- [`aurora-neware`](https://github.com/empaeconversion/aurora-neware)
We also have a full application with a GUI, including a graphical interface to create these protocols:
- [`aurora-cycler-manager`](https://github.com/empaeconversion/aurora-cycler-manager)
## Installation
Install on Python 3.10 or later with
```
pip install aurora-unicycler
```
## Quick start
Define a protocol using Python
```python
from aurora_unicycler import (
ConstantCurrent,
ConstantVoltage,
Loop,
CyclingProtocol,
RecordParams,
SafetyParams,
Tag,
)
my_protocol = CyclingProtocol(
record = RecordParams(
time_s=10,
voltage_V=0.1,
),
safety = SafetyParams(
max_voltage_V=5,
min_voltage_V=0,
max_current_mA=10,
min_current_mA=-10,
),
method = [
Tag(
tag="my_tag",
),
ConstantCurrent(
rate_C=0.5,
until_voltage_V=4.2,
until_time_s=3*60*60,
),
ConstantVoltage(
voltage_V=4.2,
until_rate_C=0.05,
until_time_s=60*60,
),
ConstantCurrent(
rate_C=-0.5,
until_voltage_V=3.5,
until_time_s=3*60*60,
),
Loop(
loop_to="my_tag",
cycle_count=100,
)
]
)
```
You can also create a protocol from a Python dictionary or JSON. You will not get type checking in an IDE, but it will still validate at runtime.
```python
my_protocol = CyclingProtocol.from_dict({
"record": {"time_s": 10, "voltage_V": 0.1},
"safety": {"max_voltage_V": 5},
"method": [
{"step": "open_circuit_voltage", "until_time_s": 1},
{"step": "tag", "tag": "tag1"},
{"step": "constant_current", "rate_C": 0.5, "until_voltage_V": 4.2},
{"step": "constant_voltage", "voltage_V": 4.2, "until_rate_C": 0.05},
{"step": "constant_current", "rate_C": -0.5, "until_voltage_V": 3.0},
{"step": "loop", "loop_to": "tag1", "cycle_count": 100},
],
})
```
```python
my_protocol = CyclingProtocol.from_json("path/to/file.json")
```
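For example, a `file.json` containing the same protocol as the dictionary above would look like:

```json
{
    "record": {"time_s": 10, "voltage_V": 0.1},
    "safety": {"max_voltage_V": 5},
    "method": [
        {"step": "open_circuit_voltage", "until_time_s": 1},
        {"step": "tag", "tag": "tag1"},
        {"step": "constant_current", "rate_C": 0.5, "until_voltage_V": 4.2},
        {"step": "constant_voltage", "voltage_V": 4.2, "until_rate_C": 0.05},
        {"step": "constant_current", "rate_C": -0.5, "until_voltage_V": 3.0},
        {"step": "loop", "loop_to": "tag1", "cycle_count": 100}
    ]
}
```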
You can then export the protocol to different formats, e.g.
```python
my_protocol.to_biologic_mps(
sample_name="test-sample",
capacity_mAh=45,
save_path="some/location/settings.mps",
)
my_protocol.to_neware_xml(
sample_name="test-sample",
capacity_mAh=45,
save_path="some/location/protocol.xml",
)
my_protocol.to_battinfo_jsonld(
capacity_mAh=45,
save_path="some/location/protocol.jsonld",
)
```
See the [docs](https://empaeconversion.github.io/aurora-unicycler/) for more details and the full API reference.
## Contributors
- [Graham Kimbell](https://github.com/g-kimbell)
## Acknowledgements
This software was developed at the Laboratory of Materials for Energy Conversion at Empa, the Swiss Federal Laboratories for Materials Science and Technology, and supported by funding from the [IntelLiGent](https://heuintelligent.eu/) project from the European Union’s research and innovation program under grant agreement No. 101069765, and from the Swiss State Secretariat for Education, Research, and Innovation (SERI) under contract No. 22.001422.
<img src="https://github.com/user-attachments/assets/373d30b2-a7a4-4158-a3d8-f76e3a45a508#gh-light-mode-only" height="100" alt="IntelLiGent logo">
<img src="https://github.com/user-attachments/assets/9d003d4f-af2f-497a-8560-d228cc93177c#gh-dark-mode-only" height="100" alt="IntelLiGent logo">
<img src="https://github.com/user-attachments/assets/1d32a635-703b-432c-9d42-02e07d94e9a9" height="100" alt="EU flag">
<img src="https://github.com/user-attachments/assets/cd410b39-5989-47e5-b502-594d9a8f5ae1" height="100" alt="Swiss secretariat">
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [
"aurora_unicycler"
] | [] | [
"defusedxml>=0.7.1",
"pydantic>=2.11.7",
"bumpver>=2025.1131; extra == \"dev\"",
"pytest>=8.4.1; extra == \"dev\"",
"ruff>=0.12.3; extra == \"dev\"",
"pre-commit>=4.2.0; extra == \"dev\"",
"mkdocs>=1.6.1; extra == \"dev\"",
"mkdocs-material>=9.7.0; extra == \"dev\"",
"mkdocstrings[python]>=0.30.1; e... | [] | [] | [] | [] | python-requests/2.32.5 | 2026-02-18T09:06:05.035018 | aurora_unicycler-0.4.4.tar.gz | 133,676 | 6b/6e/27f8c511365fa1d936417051602f975b8d7f84a37663b6f88b3660b8d8ad/aurora_unicycler-0.4.4.tar.gz | source | sdist | null | false | 53a958142cc75caa19d44e0224d2c2d1 | 162eeafc05495e1119dbc88a81a53cb90e98edafcac277b998722d0c7effd39e | 6b6e27f8c511365fa1d936417051602f975b8d7f84a37663b6f88b3660b8d8ad | null | [] | 378 |
2.4 | wheezy.template | 3.2.5 | A lightweight template library | # wheezy.template
[](https://github.com/akornatskyy/wheezy.template/actions/workflows/tests.yml)
[](https://coveralls.io/github/akornatskyy/wheezy.template?branch=master)
[](https://wheezytemplate.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/wheezy.template)
[wheezy.template](https://pypi.org/project/wheezy.template/) is a
[python](https://www.python.org) package written in pure Python code. It
is a lightweight template library. The design goals achieved:
- **Compact, Expressive, Clean:** Minimizes the number of keystrokes
required to build a template. Enables fast and well read coding. You
do not need to explicitly denote statement blocks within HTML
(unlike other template systems), the parser is smart enough to
understand your code. This enables a compact and expressive syntax
which is really clean and just pleasure to type.
- **Intuitive, No Time to Learn:** Basic Python programming skills
plus HTML markup. You are productive from the start. Use the full power
of Python with minimal markup required to denote Python statements.
- **Do Not Repeat Yourself:** Master layout templates for inheritance;
include and import directives for maximum reuse.
- **Blazingly Fast:** Maximum rendering performance: ultimate speed
and context preprocessor features.
Simple template:
```txt
@require(user, items)
Welcome, @user.name!
@if items:
@for i in items:
@i.name: @i.price!s.
@end
@else:
No items found.
@end
```
It is optimized for performance, well tested and documented.
Resources:
- [source code](https://github.com/akornatskyy/wheezy.template),
[examples](https://github.com/akornatskyy/wheezy.template/tree/master/demos)
and [issues](https://github.com/akornatskyy/wheezy.template/issues)
tracker are available on
[github](https://github.com/akornatskyy/wheezy.template)
- [documentation](https://wheezytemplate.readthedocs.io/en/latest/)
## Install
[wheezy.template](https://pypi.org/project/wheezy.template/) requires
[python](https://www.python.org) version 3.10+. It is independent of
operating system. You can install it from
[pypi](https://pypi.org/project/wheezy.template/) site:
```sh
pip install -U wheezy.template
```
To build from source with optional C extensions, install Cython and build
without PEP 517 isolation so the build can see your environment:
```sh
pip install Cython setuptools
pip install -U --no-build-isolation --no-cache-dir wheezy.template[cython]
```
Note: compiling extensions requires a working C compiler toolchain.
If you run into any issue or have comments, go ahead and add on
[github](https://github.com/akornatskyy/wheezy.template).
| text/markdown | null | Andriy Kornatskyy <andriy.kornatskyy@live.com> | null | null | null | html, markup, template, preprocessor | [
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"Cython>=3.0; extra == \"cython\"",
"setuptools>=61.0; extra == \"cython\""
] | [] | [] | [] | [
"Homepage, https://github.com/akornatskyy/wheezy.template",
"Source, https://github.com/akornatskyy/wheezy.template",
"Issues, https://github.com/akornatskyy/wheezy.template/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T09:05:21.933836 | wheezy_template-3.2.5.tar.gz | 19,324 | 43/1d/b4fb6ee3f6a0af54ac26e0425b7947bd7d9ed786a75ca173a40640c25bc6/wheezy_template-3.2.5.tar.gz | source | sdist | null | false | ec81cc897f9c44aa6fb65a60f15eb10a | c7c0bf85af0f70ca2ef4b6ea9a74ef372f73392aa17bea0d885dcba7356d0867 | 431db4fb6ee3f6a0af54ac26e0425b7947bd7d9ed786a75ca173a40640c25bc6 | MIT | [
"LICENSE"
] | 0 |
2.4 | copium | 0.1.0a1.dev158 | Fast drop-in replacement for copy.deepcopy() | <h1>
copium
<a href="https://pypi.python.org/pypi/copium">
<img src="https://img.shields.io/pypi/v/copium.svg" alt="PyPI Version Badge">
</a>
<a href="https://pypi.python.org/pypi/copium">
<img src="https://img.shields.io/pypi/l/copium.svg" alt="PyPI License Badge">
</a>
<a href="https://pypi.python.org/pypi/copium">
<img src="https://img.shields.io/pypi/pyversions/copium.svg" alt="PyPI Python Versions Badge">
</a>
<a href="https://github.com/Bobronium/copium/actions">
<img src="https://github.com/Bobronium/copium/actions/workflows/cd.yaml/badge.svg" alt="CD Status Badge">
</a>
<a href="https://codspeed.io/Bobronium/copium">
<img src="https://img.shields.io/badge/Codspeed-copium-8A2BE2?style=flat&labelColor=1B2330&color=ED6E3D" alt="Codspeed Badge">
</a>
</h1>
<div align="center"><h3>Makes Python <code>deepcopy()</code> fast.</h3></div>
<div align="left">
<picture>
<source srcset="https://raw.githubusercontent.com/Bobronium/copium/b177d428dbb172fa9e22831d32db04c88b04ece1/assets/copium_logo_512.png" media="(prefers-color-scheme: dark)">
<source srcset="https://raw.githubusercontent.com/Bobronium/copium/b177d428dbb172fa9e22831d32db04c88b04ece1/assets/copium_logo_light_512.png" media="(prefers-color-scheme: light)">
<img src="https://raw.githubusercontent.com/Bobronium/copium/b177d428dbb172fa9e22831d32db04c88b04ece1/assets/copium_logo_512.png" alt="Copium Logo" width="200" align="left">
</picture>
<a href="https://github.com/Bobronium/copium/blob/1e4197bd54cc87659de44fb7cd129efb70f2af5d/showcase.ipynb">
<picture>
<source srcset="https://raw.githubusercontent.com/Bobronium/copium/1e4197bd54cc87659de44fb7cd129efb70f2af5d/assets/chart_dark.svg" media="(prefers-color-scheme: dark)">
<source srcset="https://raw.githubusercontent.com/Bobronium/copium/1e4197bd54cc87659de44fb7cd129efb70f2af5d/assets/chart_light.svg" media="(prefers-color-scheme: light)">
<img src="https://raw.githubusercontent.com/Bobronium/copium/1e4197bd54cc87659de44fb7cd129efb70f2af5d/assets/chart_light.svg" alt="Self-contained IPython showcase" width="500">
</picture>
</a>
</div>
## Highlights
- ⚡ **4-28x faster** on built-in types
- 🧠 **~30% less memory** per copy
- ✨ requires **zero code changes**
- 🧪 passes Python's [test_copy.py](https://github.com/python/cpython/blob/41b9ad5b38e913194a5cc88f0e7cfc096787b664/Lib/test/test_copy.py)
- 📦 pre-built wheels for Python 3.10–3.14
on Linux/macOS/Windows (x64/ARM64)
- 🔓 passes all tests on **free-threaded** Python builds
## Installation
```bash
pip install 'copium[autopatch]'
```
This will effortlessly make `copy.deepcopy()` fast in the current environment.
> [!WARNING]
> `copium` hasn't seen wide production use yet. Expect bugs.
### For manual usage
```bash
pip install copium
```
## Manual usage
> [!TIP]
> You can skip this section if you depend on `copium[autopatch]`.
```py
import copium
assert copium.deepcopy(x := []) is not x
```
The `copium` module includes all public declarations of the stdlib `copy` module, so it's generally safe
to:
```diff
- from copy import copy, deepcopy, Error
+ from copium import copy, deepcopy, Error
```
---
> [!TIP]
> Next sections will likely make more sense if you read CPython docs on
> deepcopy: https://docs.python.org/3/library/copy.html
## How is it so fast?
- #### Zero interpreter overhead for built-in containers and atomic types
##### If your data consists only of the types below, the `deepcopy` operation won't touch the interpreter:
- natively supported containers: `tuple`, `dict`, `list`, `set`, `frozenset`, `bytearray` and
`types.MethodType`
- natively supported atomics: `type(None)`, `int`, `str`, `bytes`, `float`, `bool`, `complex`,
`types.EllipsisType`, `types.NotImplementedType`, `range`, `property`, `weakref.ref`,
`re.Pattern`, `decimal.Decimal`, `fractions.Fraction`, `types.CodeType`, `types.FunctionType`,
`types.BuiltinFunctionType`, `types.ModuleType`
- #### Native memo
- no time spent on creating extra `int` object for `id(x)`
- hash is computed once for lookup and reused to store the copy
- keepalive is a lightweight vector of pointers instead of a `list`
- memo object is not tracked in GC, unless stolen in custom `__deepcopy__`
- #### Native `__reduce__` handling
When a type's `__reduce__` strictly follows the protocol, `copium` handles the returned values natively,
without interpreter overhead, the same way CPython's pickle implementation does.
[What if there's type mismatch?](#pickle-protocol)
- #### Cached memo
Rather than creating a new memo object for each `deepcopy` and discarding it after, copium stores
one per thread and reuses it. Referenced objects are cleared, but some amount of memory stays
reserved, avoiding malloc/free overhead for typical workloads.
- #### Zero overhead patch on Python 3.12+
`deepcopy` function object stays the same after patch, only its [
`vectorcall`](https://peps.python.org/pep-0590/) is changed.
## Compatibility notes
`copium.deepcopy()` is designed to be a drop-in replacement for `copy.deepcopy()`,
but there are minor deviations from the stdlib you should be aware of.
### Pickle protocol
The stdlib's `copy` tolerates some deviations from the pickle protocol that `pickle` itself rejects
(see https://github.com/python/cpython/issues/141757).
`copium` strictly follows stdlib semantics: if `__reduce__`
returns a list instead of a tuple for args, or a mapping instead of a dict for kwargs,
`copium` will coerce them the same way stdlib would
(via `*args` unpacking, `**kwargs` merging, `.items()` iteration, etc.).
Errors from malformed `__reduce__` results match what `copy.deepcopy` produces.
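One such deviation can be reproduced with the stdlib alone: `__reduce__` returning its args as a list is accepted by `copy.deepcopy` (and, per the above, coerced the same way here) but rejected by `pickle`. The `Loose` class below is purely illustrative:

```python
import copy
import pickle

class Loose:
    def __init__(self, value):
        self.value = value

    def __reduce__(self):
        # args as a list instead of a tuple: off-protocol, but copy tolerates it
        return (Loose, [self.value])

dup = copy.deepcopy(Loose(42))  # works: copy unpacks args via *args
assert dup.value == 42

rejected = False
try:
    pickle.dumps(Loose(42))     # pickle itself refuses the list
except pickle.PicklingError:
    rejected = True
assert rejected
```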
### Memo handling
With the native memo, a custom `__deepcopy__` receives a `copium.memo`,
which is fully compatible with how `copy.deepcopy()` uses it internally.
Per [Python docs](https://docs.python.org/3/library/copy.html#object.__deepcopy__), custom `__deepcopy__` methods should treat memo as an opaque object and just pass
it through in any subsequent `deepcopy` calls.
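A well-behaved `__deepcopy__` therefore only forwards the memo, never inspects it. A minimal sketch with the stdlib `copy` module (the `Wrapper` class is illustrative):

```python
import copy

class Wrapper:
    def __init__(self, payload):
        self.payload = payload

    def __deepcopy__(self, memo):
        # correct: forward memo untouched, never assume it's a dict
        return Wrapper(copy.deepcopy(self.payload, memo))

w = Wrapper([1, [2, 3]])
dup = copy.deepcopy(w)
assert dup.payload == w.payload and dup.payload is not w.payload
```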
However, some native extensions that implement `__deepcopy__` on their objects
may require an exact `dict` object to be passed as the `memo` argument.
Typically, in this case, they raise `TypeError` or `AssertionError`.
copium will attempt to recover by calling `__deepcopy__` again with a `dict` memo. If that second call
succeeds, a warning with clear suggestions is emitted; otherwise the error is raised as is.
[Tracking issue](https://github.com/Bobronium/copium/issues/31)
<details>
<summary>Example</summary>
```python-repl
>>> import copium
>>> class CustomType:
... def __deepcopy__(self, memo):
... if not isinstance(memo, dict):
... raise TypeError("I'm enforcing memo to be a dict")
... return self
...
>>> print("Copied successfully: ", copium.deepcopy(CustomType()))
<python-input-2>:1: UserWarning:
Seems like 'copium.memo' was rejected inside '__main__.CustomType.__deepcopy__':
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
File "<python-input-1>", line 4, in __deepcopy__
raise TypeError("I'm enforcing memo to be a dict")
TypeError: I'm enforcing memo to be a dict
copium was able to recover from this error, but this is slow and unreliable.
Fix:
Per Python docs, '__main__.CustomType.__deepcopy__' should treat memo as an opaque object.
See: https://docs.python.org/3/library/copy.html#object.__deepcopy__
Workarounds:
local change deepcopy(CustomType()) to deepcopy(CustomType(), {})
-> copium uses dict memo in this call (recommended)
global export COPIUM_USE_DICT_MEMO=1
-> copium uses dict memo everywhere (~1.3-2x slowdown, still faster than stdlib)
silent export COPIUM_NO_MEMO_FALLBACK_WARNING='TypeError: I'm enforcing memo to be a dict'
-> 'deepcopy(CustomType())' stays slow to deepcopy
explosive export COPIUM_NO_MEMO_FALLBACK=1
-> 'deepcopy(CustomType())' raises the error above
Copied successfully: <__main__.CustomType object at 0x104d1cad0>
```
</details>
## Credits
- [@sobolevn](https://github.com/sobolevn) for constructive feedback on C code / tests quality
- [@eendebakpt](https://github.com/eendebakpt) for C implementation of parts of `copy.deepcopy` in https://github.com/python/cpython/pull/91610 — used as early reference
- [@orsinium](https://github.com/orsinium) for [svg.py](https://github.com/orsinium-labs/svg.py) — used to generate main chart
- [@provencher](https://github.com/provencher) for repoprompt.com — used it to build context for LLMs/editing
- Anthropic/OpenAI/xAI for translating my ideas to compilable C code and educating me on the subject
- One special lizard 🦎
| text/markdown | null | "Arseny Boykov (Bobronium)" <hi@bobronium.me> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Devel... | [] | null | null | >=3.10 | [] | [] | [] | [
"copium-autopatch==0.1.0; extra == \"autopatch\"",
"copium[lint]; extra == \"dev\"",
"copium[typecheck]; extra == \"dev\"",
"copium[test]; extra == \"dev\"",
"copium[docs]; extra == \"dev\"",
"ruff>=0.5; extra == \"lint\"",
"mypy>=1.10; extra == \"typecheck\"",
"pyright>=1.1.400; extra == \"typecheck\... | [] | [] | [] | [
"Homepage, https://github.com/Bobronium/copium",
"Source, https://github.com/Bobronium/copium",
"Issues, https://github.com/Bobronium/copium/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:03:55.693826 | copium-0.1.0a1.dev158-cp314-cp314-win_arm64.whl | 71,581 | 93/5d/975e411ed2a8c002de2070824a30ff0f77e01c52aece772a05bf9df01ea8/copium-0.1.0a1.dev158-cp314-cp314-win_arm64.whl | cp314 | bdist_wheel | null | false | f7b1db6bd4ac122081109930d4359e02 | eb0b15b989d01f96932f591f9e987e7f19d803b50a1a3894af5138ef29150490 | 935d975e411ed2a8c002de2070824a30ff0f77e01c52aece772a05bf9df01ea8 | MIT | [
"LICENSE"
] | 2,729 |
2.4 | glycoforge | 0.1.3 | A simulation tool for generating glycomic relative abundance datasets with customizable biological group differences and controllable batch-effect injection | <img src="glycoforge_logo.jpg" alt="GlycoForge logo" width="300">
**GlycoForge** is a simulation tool for **generating glycomic relative-abundance datasets** with customizable biological group differences and controllable batch-effect injection.
## Key Features
- **Two simulation modes**: Fully synthetic or templated (extract factor from input reference data + simulate batch effect)
- **Controllable effects injection**: Systematic grid search over biological effect or batch effect strength parameters
- **Motif-level effects**: For both bio and batch effects, desired motif differences (e.g., `Neu5Ac: down`) can be introduced. These are propagated in a dynamically constructed biosynthetic network to ensure physiological glycomics data (e.g., corresponding increase in desialylated glycans in the example of `Neu5Ac: down`)
- **MNAR missing data simulation**: Mimics left-censored patterns biased toward low-abundance glycans
## Quick Start
### Installation
* Python >= 3.10 required.
* Core dependency: `glycowork>=1.6.4`
```bash
pip install glycoforge
```
OR
```bash
git clone https://github.com/BojarLab/GlycoForge.git
cd GlycoForge
python3.10 -m venv .venv
source .venv/bin/activate
pip install -e .
```
### Usage
See [run_simulation.ipynb](run_simulation.ipynb) [](https://colab.research.google.com/github/BojarLab/GlycoForge/blob/main/run_simulation.ipynb) for interactive examples, or [use_cases/batch_correction/](use_cases/batch_correction) [](https://colab.research.google.com/github/BojarLab/GlycoForge/blob/main/use_cases/batch_correction/run_correction.ipynb) for batch correction workflows.
## How the simulator works
We keep everything in the CLR (centered log-ratio) space:
- First, draw a healthy baseline composition from a Dirichlet prior: `p_H ~ Dirichlet(alpha_H)`.
- Flip to CLR: `z_H = clr(p_H)`.
- For selected glycans, push the signal using real or synthetic effect sizes: `z_U = z_H + m * lambda * d_robust`, where `m` is the differential mask, `lambda` is `bio_strength`, and `d_robust` is the effect vector after `robust_effect_size_processing`.
- **Simplified mode**: draw synthetic effect sizes (log-fold changes) and pass them through the same robust processing pipeline.
- **Hybrid mode**: start from the Cohen’s *d* values returned by `glycowork.get_differential_expression`; `define_differential_mask` lets you restrict the injection to significant hits or top-*N* glycans before scaling.
- Invert back to proportions: `p_U = invclr(z_U)` and scale by `k_dir` to get `alpha_U`. Note that the healthy and unhealthy Dirichlet strengths use different `k_dir` values; a separate `variance_ratio` controls their relative magnitude.
- Batch effects ride on top as direction vectors `u_b`, so a clean CLR sample `Y_clean` becomes `Y_with_batch = Y_clean + kappa_mu * u_b + epsilon`, with `var_b` controlling spread.
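The CLR round-trip used throughout can be sketched in plain Python (function names and the `lambda = 0.8` signal below are illustrative; GlycoForge's internals may differ):

```python
import math

def clr(p):
    # centered log-ratio: log each part, subtract the mean log
    logs = [math.log(x) for x in p]
    mean = sum(logs) / len(logs)
    return [v - mean for v in logs]

def invclr(z):
    # inverse CLR: exponentiate and renormalise back onto the simplex
    exps = [math.exp(v) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

p_H = [0.2, 0.3, 0.5]           # healthy baseline composition
z_H = clr(p_H)
# inject signal on the first glycan, as in z_U = z_H + m * lambda * d_robust
mask, d_robust = [1, 0, 0], [1.0, 0.0, 0.0]
z_U = [z + m * 0.8 * d for z, m, d in zip(z_H, mask, d_robust)]
p_U = invclr(z_U)
assert abs(sum(p_U) - 1.0) < 1e-9 and p_U[0] > p_H[0]
```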
## Simulation Modes
The pipeline entry point is `glycoforge.simulate()` with two modes controlled by `data_source`. Configuration files are in `sample_config/`.
<details>
<summary><b>Synthetic mode (<code>data_source="simulated"</code>)</b> – Fully synthetic simulation (click for details)</summary>
<br>
No real data dependency. Ideal for controlled experiments with known ground truth.
**Pipeline steps:**
1. Initializes log-normal healthy baseline: `alpha_H = ones(n_glycans) * 10`
2. For each random seed, generates `alpha_U` by randomly scaling `alpha_H`:
- `up_frac` (default 30%) upregulated with scale factors from `up_scale_range=(1.1, 3.0)`
- `down_frac` (default 30%) downregulated with scale factors from `down_scale_range=(0.3, 0.9)`
- Remaining glycans (~40%) stay unchanged
3. Samples clean cohorts from `Dirichlet(alpha_H)` and `Dirichlet(alpha_U)` with `n_H` healthy and `n_U` unhealthy samples
4. Defines batch effect direction vectors `u_dict` once per simulation run (fixed seed ensures reproducible batch geometry across parameter sweep)
5. Applies batch effects controlled by `kappa_mu` (shift strength) and `var_b` (variance scaling)
6. Optionally applies MNAR (Missing Not At Random) missingness:
- `missing_fraction`: proportion of missing values (0.0-1.0)
- `mnar_bias`: intensity-dependent bias (default 2.0, range 0.5-5.0)
- Left-censored pattern: low-abundance glycans more likely to be missing
7. Grid search over `kappa_mu` and `var_b` produces multiple datasets under identical batch effect structure
**Key parameters:** `n_glycans`, `n_H`, `n_U`, `kappa_mu`, `var_b`, `missing_fraction`, `mnar_bias`
</details>
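The left-censored MNAR step described in both modes can be sketched as follows; this is a simplified stand-in for GlycoForge's implementation, with the rank-based weighting an assumption of this sketch:

```python
import random

def apply_mnar(values, missing_fraction=0.3, mnar_bias=2.0, seed=0):
    """Mask a fraction of entries, biased toward low-abundance values."""
    rng = random.Random(seed)
    n = len(values)
    n_missing = round(missing_fraction * n)
    # rank-based weights: smallest value gets weight ~1, largest ~0;
    # raising to mnar_bias sharpens the left-censoring
    order = sorted(range(n), key=lambda i: values[i])
    weight = [0.0] * n
    for rank, i in enumerate(order):
        weight[i] = (1.0 - rank / n) ** mnar_bias
    missing = set()
    while len(missing) < n_missing:
        pool = [i for i in range(n) if i not in missing]
        i = rng.choices(pool, weights=[weight[j] for j in pool])[0]
        missing.add(i)
    return [None if i in missing else v for i, v in enumerate(values)]

abundances = [0.01, 0.02, 0.05, 0.1, 0.12, 0.15, 0.2, 0.35]
masked = apply_mnar(abundances, missing_fraction=0.25)
assert sum(v is None for v in masked) == 2
```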
<details>
<summary><b>Templated mode (<code>data_source="real"</code>)</b> – Extract biological effect from input reference data + simulate batch effect (click for details)</summary>
<br>
Starts from real glycomics data to preserve biological signal structure. Accepts CSV file or `glycowork.glycan_data` datasets.
**Pipeline steps:**
1. Loads CSV and extracts healthy/unhealthy sample columns by prefix (configurable via `column_prefix`)
2. Runs CLR-based differential expression via `glycowork.get_differential_expression` to compute Cohen's d effect sizes
3. Reindexes effect sizes to match input glycan order (fills missing glycans with 0.0)
4. Applies `differential_mask` to select which glycans receive biological signal injection:
- `"All"`: inject into all glycans
- `"significant"`: only glycans marked significant by glycowork
- `"Top-N"`: top N glycans by absolute effect size (e.g., `"Top-10"`)
5. Processes effect sizes through `robust_effect_size_processing`:
- Centers effect sizes to remove global shift
- Applies Winsorization to clip extreme outliers (auto-selects percentile 85-99, or uses `winsorize_percentile`)
- Normalizes by baseline (`baseline_method`: median, MAD, or p75)
- Returns normalized `d_robust` scaled by `bio_strength`
6. Injects effects in CLR space: `z_U = z_H + mask * bio_strength * d_robust`
7. Converts back to proportions: `p_U = invclr(z_U)`
8. Scales by Dirichlet concentration: `alpha_H = k_dir * p_H` and `alpha_U = (k_dir / variance_ratio) * p_U`
9. Samples clean cohorts from `Dirichlet(alpha_H)` and `Dirichlet(alpha_U)` with `n_H` healthy and `n_U` unhealthy samples
10. Defines batch effect direction vectors `u_dict` once per run (fixed seed ensures fair comparison across parameter combinations)
11. Applies batch effects: `y_batch = y_clean + kappa_mu * sigma * u_b + epsilon`, where `epsilon ~ N(0, sqrt(var_b) * sigma)`
12. Optionally applies MNAR missingness (same as Simplified mode: left-censored pattern biased toward low-abundance glycans)
13. Grid search over `bio_strength`, `k_dir`, `variance_ratio`, `kappa_mu`, `var_b` to systematically test biological signal and batch effect interactions
**Key parameters:** `data_file`, `column_prefix`, `bio_strength`, `k_dir`, `variance_ratio`, `differential_mask`, `winsorize_percentile`, `baseline_method`, `kappa_mu`, `var_b`, `missing_fraction`, `mnar_bias`
</details>
## Use Cases
The [use_cases/batch_correction/](use_cases/batch_correction) directory demonstrates:
- Call `glycoforge` simulation, and then apply correction workflow
- Batch correction effectiveness metrics visualization
## Limitation
**Two biological groups only**: Current implementation targets healthy/unhealthy setup. Supporting multi-stage disease (>=3 groups) requires refactoring Dirichlet parameter generation and evaluation metrics.
| text/markdown | null | "Zoe (Siyu) Hu" <zoe.sy.hu@gmail.com>, Daniel Bojar <daniel.bojar@gu.se> | null | null | MIT | glycomics, simulation, relative-abundance data, batch-effect, compositional-data, bioinformatics, MNAR | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"numpy>=1.21.0",
"pandas>=1.3.0",
"scipy>=1.7.0",
"scikit-learn>=1.0.0",
"matplotlib>=3.5.0",
"seaborn>=0.11.0",
"glycowork>=1.6.4",
"pytest>=7.0.0; extra == \"dev\"",
"jupyter>=1.0.0; extra == \"dev\"",
"notebook>=6.4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/BojarLab/GlycoForge",
"Repository, https://github.com/BojarLab/GlycoForge",
"Documentation, https://github.com/BojarLab/GlycoForge#readme",
"Bug Tracker, https://github.com/BojarLab/GlycoForge/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T09:03:18.879781 | glycoforge-0.1.3.tar.gz | 34,667 | f9/4e/4cfa48fa508baa0000515e9099ed5945b5a147acb45c03c0d229fe64dde0/glycoforge-0.1.3.tar.gz | source | sdist | null | false | e3f2f87a347dec7abff47ab6979ed668 | 6f2b5af4799488d7bc436caa81edfeec0dd4dfd019e3ed861688d4a3fe796ee2 | f94e4cfa48fa508baa0000515e9099ed5945b5a147acb45c03c0d229fe64dde0 | null | [
"LICENSE"
] | 250 |
2.4 | agentevolution | 0.1.1 | The Natural Selection Protocol for AI Agents - A self-evolving tool ecosystem. | <div align="center">
# 🔥 AgentEvolution
### The Natural Selection Protocol for AI Agents
**Where bad tools die, and good tools evolve.**
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://modelcontextprotocol.io)
[](https://github.com/agentevolution/agentevolution/pulls)
[Quick Start](#-quick-start) • [How It Works](#-how-it-works) • [The Fitness Engine](#-the-fitness-engine) • [Live Dashboard](#-live-dashboard)

</div>
---
## 🧬 Darwinism for Code
Most agent systems are static. **AgentEvolution is alive.**
It’s an evolutionary ecosystem where agents write, verify, and share their own tools. But here’s the kicker: **they compete.**
Every tool is constantly evaluated by a **Genetic Fitness Function**:
```python
fitness = (
0.35 * success_rate + # Does it work?
0.25 * token_efficiency + # Is it cheap?
0.20 * speed + # Is it fast?
0.10 * adoption + # Do others use it?
0.10 * freshness # Is it new?
)
```
> **The result? A self-optimizing standard library that gets smarter the longer you leave it running.**
---
## 💀 The Problem: "Dead Code"
AI agents solve the same problems thousands of times a day. Each agent writes code from scratch, burns tokens, and throws it away. It’s inefficient. It’s dumb.
## 💡 The Solution: "The Hive Mind"
**AgentEvolution** is a local MCP server that acts as a shared brain:
1. 🔨 **Agent A** solves a problem and **publishes** the solution.
2. 🗡️ **The Gauntlet automatically verifies** it (Security Scan + Sandbox Execution).
3. 🧠 **Agent B** discovers it via **Semantic Intent Search**.
4. 🧬 **The System evolves**: Usage stats feed the fitness engine.
> No human intervention. No manual review. Fully autonomous.
---
## ✨ Why This Is Different
| Feature | Smithery | MCP Registry | **AgentEvolution** |
|---------|----------|-------------|---------------|
| **Philosophy** | "App Store" | "Directory" | **"Evolution"** 🧬 |
| **Author** | Humans | Humans | **Autonomous Agents** 🤖 |
| **Verification** | Manual | Manual | **Automated Sandbox** 🗡️ |
| **Ranking** | Popularity | Alphabetical | **Fitness Score** 📊 |
| **API Keys** | Required | Varies | **Zero (Localhost)** ✅ |
---
## 🚀 Quick Start
### Install
```bash
pip install agentevolution
```
### Run the Server
```bash
agentevolution
```
### See Evolution in Action (Demo)
Watch 3 agents build on each other's work in real-time:
```bash
# 1. Run the simulation
python examples/multi_agent_demo.py
# 2. Open the dashboard
agentevolution-dashboard # http://localhost:8080
```
### 📊 Live Dashboard
Visualize the ecosystem in real-time.
**Overview & Stats:**

**Tool Registry:**

---
## 🔄 How It Works (The Lifecycle)
```mermaid
graph TD
A[Agent A] -->|Submits Code| Forge(🔨 The Forge)
Forge -->|Security Scan + Sandbox| Gauntlet(🗡️ The Gauntlet)
Gauntlet -->|Verified Tool| Hive(🧠 Hive Mind)
Hive -->|Semantic Search| B[Agent B]
B -->|Uses Tool| Fitness(🧬 Fitness Engine)
Fitness -->|Updates Score| Hive
```
### 1. 🔨 The Forge (Publisher)
Ingests code, description, and test cases. Normalizes input.
### 2. 🗡️ The Gauntlet (Validator)
The filter that keeps the ecosystem clean.
* **AST Security Scan**: Rejects `eval`, `exec`, and dangerous imports.
* **Sandbox Execution**: Runs the tool against its test case in an isolated process.
* **Performance Profiling**: Measures RAM and CPU usage.
### 3. 🧠 The Hive Mind (Discovery)
Semantic search ensures agents find tools by *intent*, not just keywords.
* "I need to parse a PDF" -> Returns `pdf_to_text` (Fitness: 0.95)
### 4. 🧬 The Fitness Engine (Evolution)
Calculates the `fitness_score` (0.0 to 1.0).
* **Adoption Velocity**: Uses logarithmic scaling (`log2(unique_agents + 1)`).
* **Freshness**: Implements exponential decay for stale tools.
* **Delisting**: Tools that fail repeatedly are automatically purged.
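The adoption and freshness components above might look roughly like this; `cap` and `half_life` are assumed constants of this sketch, not values taken from the project:

```python
import math

def adoption_velocity(unique_agents, cap=32):
    # log2 scaling: going from 1 to 2 agents counts as much as 16 to 32
    return min(math.log2(unique_agents + 1) / math.log2(cap + 1), 1.0)

def freshness(age_days, half_life=30.0):
    # exponential decay: freshness halves every half_life days
    return 0.5 ** (age_days / half_life)

assert adoption_velocity(0) == 0.0
assert freshness(0.0) == 1.0
assert abs(freshness(30.0) - 0.5) < 1e-12
```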
---
## 🖥️ Live Dashboard
Visualize the ecosystem in real-time at `http://localhost:8080`.
* **Particle System**: Represents active agents.
* **Fitness Leaderboard**: The top tools surviving natural selection.
* **Activity Feed**: Live log of births (submissions) and deaths (delisting).
---
## 📡 API Reference
AgentEvolution exposes 7 MCP tool endpoints:
#### `submit_tool`
Submit a new tool. Triggers The Gauntlet.
#### `fork_tool`
Improve an existing tool. Maintains a cryptographic provenance chain (SHA-256).
#### `discover_tool`
Find tools using natural language ("I need to...").
#### `report_usage`
Feed the data that drives evolution.
---
## 🤝 Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md).
### Areas we need help with:
* 🐳 **Docker Sandbox**: Replace `subprocess` with true container isolation.
* 🌐 **HTTP Transport**: Add SSE/WebSocket support.
* 📦 **TypeScript SDK**: For JS agents.
---
<div align="center">
**Built with ❤️ for the AI agent community**
*Star ⭐ this repo if you believe code should evolve.*
</div>
| text/markdown | AgentEvolution Contributors | null | null | null | MIT | ai-agents, autonomous, mcp, self-evolving, tool-sharing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"T... | [] | null | null | >=3.11 | [] | [] | [] | [
"aiosqlite>=0.20.0",
"chromadb>=0.5.0",
"click>=8.0.0",
"fastapi>=0.110.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"sentence-transformers>=3.0.0",
"uvicorn>=0.30.0",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.5.0; ex... | [] | [] | [] | [
"Homepage, https://github.com/agentevolution/agentevolution",
"Documentation, https://github.com/agentevolution/agentevolution/docs",
"Repository, https://github.com/agentevolution/agentevolution",
"Issues, https://github.com/agentevolution/agentevolution/issues"
] | twine/6.2.0 CPython/3.12.1 | 2026-02-18T09:02:44.176659 | agentevolution-0.1.1.tar.gz | 1,276,488 | c2/6a/6f5d28011d9e3db53ab2d80f42dff174ce8836429676c775657c4cd27436/agentevolution-0.1.1.tar.gz | source | sdist | null | false | 577cc70161a9f5a4cf8daef32eb99a24 | 1c9eff3fe097ef50601540a44f4e89c014aa39f9d0046203bf90b58cc3d13d7b | c26a6f5d28011d9e3db53ab2d80f42dff174ce8836429676c775657c4cd27436 | null | [
"LICENSE"
] | 269 |
2.4 | django-hidp | 2.0.0 | Full-featured authentication system for Django projects | # Hello, ID Please
"Hello, ID Please" (HIdP) is a Django application that offers a
full-featured authentication system for Django projects. It is designed with
the OWASP best practices in mind and offers a secure and flexible solution for
registering and authenticating users in Django projects.
Please read the full documentation on https://leukeleu.github.io/django-hidp/
| text/markdown | Jaap Roes, Dennis Bunskoek, Ramon de Jezus, Thomas Kalverda, Wouter de Vries | Jaap Roes <jroes@leukeleu.nl>, Dennis Bunskoek <dbunskoek@leukeleu.nl>, Ramon de Jezus <rdejezus@leukeleu.nl>, Thomas Kalverda <tkalverda@leukeleu.nl>, Wouter de Vries <wdevries@leukeleu.nl> | null | null | null | null | [
"License :: OSI Approved :: BSD License",
"Development Status :: 5 - Production/Stable",
"Framework :: Django",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"django>=4.2",
"django-ratelimit<5,>=4.1.0",
"jwcrypto<2,>=1.5",
"requests<3,>=2.32",
"django-hidp[oidc-provider,otp]; extra == \"all\"",
"django-oauth-toolkit>=3.0.1; extra == \"oidc-provider\"",
"djangorestframework>=3.15.2; extra == \"oidc-provider\"",
"django-otp>=1.5.0; extra == \"otp\"",
"segn... | [] | [] | [] | [
"Documentation, https://leukeleu.github.io/django-hidp/",
"Repository, https://github.com/leukeleu/django-hidp/",
"Issues, https://github.com/leukeleu/django-hidp/issues",
"Releasenotes, https://github.com/leukeleu/django-hidp/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:00:43.113151 | django_hidp-2.0.0.tar.gz | 91,297 | 15/59/2ac7683f97d1f87b000ddbe81b4d635e54bf780f02bc6f527bef7273a597/django_hidp-2.0.0.tar.gz | source | sdist | null | false | 5b7a5fb62dc2c8a4149b203e654e41c4 | 347b9b32f886824d4b28d90f5bbac00e6d25dad33d2495db3d1514f7e1048a5b | 15592ac7683f97d1f87b000ddbe81b4d635e54bf780f02bc6f527bef7273a597 | BSD-3-Clause | [
"LICENSE"
] | 255 |
2.4 | bscli | 0.3.9 | Brightspace CLI, grading automation and management system. | # Brightspace Command Line Interface (`bscli`)
`bscli` is a command line interface for D2L Brightspace LMS that simplifies and automates assignment grading workflows.
It provides the ability to download assignment submissions via the command line (which is fast) instead of using the web interface (which is slow).
`bscli` is designed for educational institutions and can be adapted to work with any Brightspace instance.
Additionally, `bscli` is able to (semi-)automate grading and distribution of homework assignments.
It can download submissions, process them automatically, and distribute them to graders.
These graders can then grade the assignments and upload their grades back to Brightspace.
You can use `bscli` in two ways: either by specifying course and assignment IDs manually for quick access, or by creating a dedicated folder for each course with a `course.json` file containing course-specific settings and aliases for assignments (recommended).
Graders will receive an archive containing all submissions they were assigned to grade, and can write their feedback in a `feedback.txt` file (with partial Markdown support).
They can do so on their local machine, using any tools they prefer, without the need to interact with the Brightspace LMS web interface.
Once they are done, they can upload their feedback via the Brightspace API.
More specifically, submissions are downloaded, and after potential preprocessing, a `.7z` archive is created for each grader containing the submissions they are assigned to grade.
There are several strategies to distribute submissions to graders, such as randomly or according to Brightspace group registration.
The archives can be sent to graders via FileSender.
The automated processing pipeline works as follows:
1. **Download** submissions from Brightspace
2. **Preprocess** submissions (remove unwanted files, convert formats, organize structure)
3. **Distribute** submissions to graders using the configured strategy
4. **Package** submissions into encrypted `.7z` archives for each grader
5. **Deliver** archives to graders via FileSender with password protection
6. **Upload** graded feedback back to Brightspace
Apart from Brightspace access, the scripts thus require access to a FileSender server.
Notice that files will be encrypted with a password while being uploaded to FileSender.
The password can be set per assignment, and must be provided to graders separately.
The password must not be sent via FileSender, which does not accept it being included in the email (they actively check for this).
## Installation
To install `bscli`, you need to have Python 3.10 or higher installed on your system.
You can install `bscli` using `pip`:
```bash
pip install bscli
```
If you want to use `bscli` for automatically processing homework assignments, you also need to install some additional OS packages:
- `7za` - to create the encrypted archives
- `libreoffice` - to convert .docx files to .pdf files (optional)
## Configuration
Before you can use `bscli`, you need to configure it with your Brightspace instance and FileSender credentials.
### Brightspace configuration
To use `bscli`, you need to configure it with API credentials specific to your Brightspace instance.
You should receive these credentials from your Brightspace administrator.
You can configure `bscli` by running the following command:
```bash
bscli config init bsapi
```
This will prompt you for the necessary credentials and save them in a configuration file on your system (by default in `~/.config/bscli/bsapi.json`).
### FileSender configuration
If you want to use `bscli` for distributing homework assignments to graders, you also need to configure FileSender credentials.
FileSender is a service that allows you to send large files securely and `bscli` is able to upload files to FileSender directly.
To configure FileSender, you need to have an account on a FileSender service and obtain the API credentials.
You can configure `bscli` for FileSender by running the following command:
```bash
bscli config init filesender
```
This will prompt you for the necessary credentials and save them in a configuration file (by default in `~/.config/bscli/filesender.json`).
You can use the SURF FileSender instance and acquire the credentials via https://filesender.surf.nl/?s=user.
You need a `username` (which can be a random string), `email address`, and `api key` (also a long random string).
### Authentication
Most commands in `bscli` require authentication to access Brightspace on your behalf.
`bscli` will prompt you to authenticate when you run a command that requires it.
You can also authenticate manually by running the following command:
```bash
bscli config authorize
```
This will open a web browser where you can log in to Brightspace and authorize `bscli` to access your account.
If you have already authenticated, `bscli` will use the existing authentication token as long as it is valid.
This is typically a few hours, depending on your Brightspace instance configuration (this is not controlled by `bscli`).
## Basic usage
`bscli` is an actual command line interface with subcommands, similar to `git` or `docker`.
For a list of available commands, you can run:
```bash
bscli --help
```
For example, to list all courses you have access to, you can run:
```bash
bscli courses list
```
To list all assignments in a specific course, you can run:
```bash
bscli assignments list --course-id <course_id>
```
or if you have a `course.json` file in your current directory (see below), you can simply run:
```bash
bscli assignments list
```
Then, to download all submissions for a specific assignment, you can run:
```bash
bscli submissions download --course-id <course_id> --assignment-id <assignment_id>
```
or if you have a `course.json` file in your current directory specifying the alias `A01` for a specific assignment, you can run:
```bash
bscli submissions download A01
```
### Two usage modes
**Direct mode (quick access):**
For quick operations, you can specify course and assignment IDs directly:
```bash
bscli assignments list --course-id 12345
bscli assignments download --course-id 12345 --assignment-id 67890
bscli assignments grading-progress --course-id 12345 --assignment-id 67890
```
**Course configuration mode (recommended):**
For regular use, create a dedicated folder for each course and configure a `course.json` file.
This allows you to use assignment aliases and enables advanced grading automation features.
You can create a default configuration file by running `bscli courses create-config`.
Typically, you create this file once per course, and then modify it as needed.
```bash
mkdir my-course-2025
cd my-course-2025
bscli courses create-config
bscli assignments download homework-1 # Uses alias from course.json
bscli assignments distribute homework-1 # Automated distribution with custom processing
```
### Course configuration
Most commands are related to a specific Brightspace course.
Though you can specify the course ID explicitly using the `--course-id` option, it is often more convenient to create a `course.json` file in your current directory (containing anything related for a specific course, such as aliases for assignments).
By default, `bscli` will look for a `course.json` file in the current directory and use it for all commands.
In this `course.json` file, you can specify the course ID and other course-specific grading settings for distributing assignments.
Typically, one grader coordinator creates this file for a course and shares it with whoever needs it.
You can create a `course.json` file by running:
```bash
bscli courses create-config
```
This will prompt you for all kinds of course-specific settings and aliases and save them in a `course.json` file.
You can also edit the file manually.
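For illustration only, a `course.json` might look roughly like the fragment below. The exact schema is defined by `bscli courses create-config` (and the JSON schemas in `/bscli/data/scheme/`), so treat every key name here as hypothetical:

```json
{
  "courseId": 12345,
  "assignments": {
    "homework-1": {
      "assignmentId": 67890,
      "password": "..."
    }
  }
}
```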
### Distributing assignments
If you want to distribute assignments to graders, you can use the `bscli distribute` command.
This command will:
1. download all submissions for the specific assignment
2. process them automatically (e.g., unzip archives, convert .docx files to .pdf files, remove specific files, inject specific files, etc.)
3. divide the submissions into groups for each grader according to the course configuration
4. upload the groups to FileSender and send the links to the graders via email
Notice that for privacy, the submissions should be encrypted with a password before uploading to FileSender.
This password can be set in the course configuration file (`course.json`) for each assignment.
Graders will need to input this password to download the submissions from FileSender.
This password is not sent via email to the graders (SURF does not allow it), so it needs to be communicated separately (e.g. in person).
If you do not set a password, `bscli` will not encrypt the submissions and will upload them directly to FileSender.
Notice that this is not recommended!
### Uploading feedback and grades
If you want to upload feedback and grades back to Brightspace, you can use the `bscli feedback upload` command.
This command will look for a `feedback.txt` file in the current directory, parse its contents and upload the feedback and grades to Brightspace.
A template `feedback.txt` file will automatically be created when you run the `bscli distribute` command.
## Advanced Configuration
`bscli` uses configuration files to store settings and credentials.
The `bsapi.json` and `filesender.json` files are used to configure the Brightspace API and FileSender server respectively and are stored by default in the `~/.config/bscli/` directory.
More information on the contents of these files can be found below, or in the `/bscli/data/scheme/` directory, which contains JSON Schema files for these configuration files.
### `bsapi.json`
This file configures communication with the Brightspace API.
An example of the `bsapi.json` file can be found below.
```json
{
"clientId": "...",
"clientSecret": "...",
"lmsUrl": "brightspace.example.com",
"redirectUri": "https://redirect.example.com/callback",
"leVersion": "1.79",
"lpVersion": "1.47"
}
```
The settings should match those of a registered API application in your Brightspace LMS.
### OAuth Callback Setup
The `redirectUri` in your `bsapi.json` configuration must point to a publicly accessible HTTPS URL hosting the `callback.html` file. This is required because:
1. **Brightspace requires HTTPS**: OAuth redirects must use HTTPS for security
2. **CLI limitation**: The CLI runs locally and cannot directly receive HTTPS callbacks
3. **Bridge solution**: The callback page acts as a bridge between the HTTPS OAuth flow and your local CLI
#### Setting up the callback page
1. **Host the callback page**: Upload the `callback.html` file (included in this repository) to any web server with HTTPS support:
- GitLab Pages (this repository includes GitLab CI configuration)
- GitHub Pages
- Your university's web hosting
- Any cloud hosting service (Netlify, Vercel, etc.)
2. **Configure the redirect URI**: Set the `redirectUri` in your `bsapi.json` to point to your hosted callback page:
```json
{
"redirectUri": "https://yourdomain.gitlab.io/bsscripts/callback.html"
}
```
3. **Register in Brightspace**: When registering your OAuth application in Brightspace, use the same HTTPS URL as your redirect URI.
#### How the OAuth flow works
1. CLI opens your browser to the Brightspace authorization URL
2. After authorization, Brightspace redirects to your hosted `callback.html` page
3. The callback page extracts the authorization code and either:
- Automatically redirects to your local CLI (if supported)
- Displays the code for manual copy-paste into the CLI
4. You enter the code in the CLI to complete authentication
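Step 3's extraction is plain query-string parsing. The bundled `callback.html` does this in JavaScript; purely as an illustration (not part of `bscli`), the same logic in Python looks like:

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(redirect_url):
    """Pull the OAuth authorization code out of a redirect URL."""
    query = parse_qs(urlparse(redirect_url).query)
    # In the OAuth 2.0 authorization code flow, the code arrives
    # as the `code` query parameter (alongside an optional `state`).
    codes = query.get("code")
    return codes[0] if codes else None

url = "https://yourdomain.gitlab.io/bsscripts/callback.html?code=abc123&state=xyz"
code = extract_auth_code(url)  # "abc123"
```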
#### Using GitLab Pages (Recommended)
This repository includes GitLab CI configuration (`.gitlab-ci.yml`) that automatically deploys the callback page to GitLab Pages:
1. **Fork or clone** this repository to your GitLab account
2. **Push to the main branch** - GitLab CI will automatically build and deploy the callback page
3. **Access your callback page** at `https://yourusername.gitlab.io/bsscripts/callback.html`
4. **Use this URL** as your `redirectUri` in `bsapi.json`
The GitLab CI configuration automatically copies `callback.html` to the `public` directory and deploys it to GitLab Pages on every push to the main branch.
### `course.json`
This file configures how the application should behave for a specific course.
It contains course-specific settings, such as the course ID, aliases for assignments, grading settings, et cetera.
```json
{
"courseName": "Sandbox Course 2025", // name of the course in Brightspace
"course": "sandbox", // internal alias for the course
"assignmentDefaults": { // default grading settings for assignments
"ignoredSubmissions": [], // submission ids to ignore
"draftFeedback": false, // whether to upload feedback as draft or publish it immediately
"defaultCodeBlockLanguage": "java", // default language for code blocks in feedback
"fileHierarchy": "smart", // whether to keep the `original` submission's file hierarchy, `flatten`, or unpack in a `smart` way
"division": { // how to divide the feedback archive
// ... see below
},
"gradeAliases": { // aliases for grades, used in feedback
"f": "Fail", // entering "f" as grade will be replaced by "Fail" in the feedback
"i": "Insufficient",
"s": "Sufficient",
"g": "Good"
},
"removeFiles": [ // files to remove from submissions
".*",
"*.exe",
"*.jar",
"*.a",
"*.o",
"*.class",
"*.obj"
],
"removeFolders": [ // folders to remove from submissions
"__MACOSX",
"__pycache__",
".*"
]
},
"assignments": { // assignments that should be graded
"a1": { // 'a1' is the alias for this assignment which can be used in the scripts
"name": "test assignment", // the name of the assignment in Brightspace
"encryptionPassword": "MySecureP@ssword123" // the password to encrypt the feedback archive for this assignment, must contain at least one uppercase letter, one lowercase letter, one digit and one special character
} // this can also contain the same settings as `assignmentDefaults` to override them for a specific assignment
},
"graders": { // who are the graders for this course
"grader1": { // 'grader1' is the alias for this grader which can be used in the scripts
"name": "Grader 1", // the display name of the grader
"email": "grader1@example.com", // the email address that should receive the feedback archive
"contactEmail": "grader1@example.com" // the email of the grader that will be used in the feedback to
}
}
}
```
## Division strategies
There are several strategies to divide submissions to graders.
- `random`: submissions are assigned to graders at random
- `brightspace`: submissions are assigned according to Brightspace groups (the submitter's group determines which grader receives the submission)
- `persistent`: submissions are divided according to a persistent mapping of students to graders
- `custom`: a custom division strategy can be implemented as a `CourseModule`
### Random division
The random division strategy is the simplest one.
Every time you distribute submissions, they are randomly assigned to the graders specified in the `graders` field.
You should configure the following settings in the `course.json` file to use this strategy:
```json
{
"division": {
"method": "random",
"graderWeights": {
"grader1": 1,
"grader2": 2
}
}
}
```
### Brightspace division
The Brightspace division strategy divides submissions according to Brightspace groups.
You should configure the following settings in the `course.json` file to use this strategy:
```json
{
"division": {
"method": "brightspace",
"groupCategoryName": "Grading Groups",
"groupMapping": {
"Grading Group 1": "grader1",
"Grading Group 2": "grader2"
}
}
}
```
### Persistent division
The persistent division strategy makes a random division once and stores it in a file, so the same division is reused for all future distributions.
```json
{
"division": {
"method": "persistent",
"groupCategoryName": "grading-groups",
"graderWeights": {
"grader1": 1,
"grader2": 2
}
}
}
```
### Custom division
You can implement your own division strategy by creating a Python script as a `CourseModule`.
Place the script in `./data/course/<course>/plugin.py`.
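The actual `CourseModule` interface is defined by `bscli` itself. Purely as a hypothetical illustration of what a custom strategy could compute (every name below is invented, not `bscli`'s API), here is a deterministic division that hashes each submitter id:

```python
# Hypothetical sketch only; the real CourseModule API is defined by bscli.
import hashlib

def divide(submission_ids, graders):
    """Deterministically map each submission to a grader by hashing its id,
    so the same submitter always lands with the same grader."""
    grader_list = sorted(graders)
    division = {g: [] for g in grader_list}
    for submission_id in submission_ids:
        digest = hashlib.sha256(submission_id.encode()).digest()
        grader = grader_list[digest[0] % len(grader_list)]
        division[grader].append(submission_id)
    return division

division = divide(["alice", "bob", "carol"], ["grader1", "grader2"])
```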
# Description
**mwinapi** is my own **winapi** library.
It includes some useful functions.
# Download
- GitHub repository page: `https://github.com/JimmyJimmy666/mwinapi.git`
- Clone with Git: `git clone https://github.com/JimmyJimmy666/mwinapi.git`
- Direct ZIP download: `https://codeload.github.com/JimmyJimmy666/mwinapi/zip/refs/heads/main`
# VERTA - Virtual Environment Route and Trajectory Analyzer

`VERTA` is a comprehensive toolkit to analyze route choices from x–z movement trajectories: discover branch directions (common route choices that occur at an intersection), assign movement trajectories to existing branches (if new data was recorded), compute timing metrics, visualize results, and predict future route choices based on behavioral patterns. `VERTA` includes both a command-line interface (CLI) and an interactive web GUI.
## Installation
**Requirements:** Python ≥3.8
```bash
# Core installation
pip install verta
# With optional extras
pip install verta[yaml] # YAML configuration support
pip install verta[parquet] # Parquet file format support
pip install verta[gui] # GUI dependencies (streamlit, plotly)
```
This installs a console script `verta` and enables the web GUI.
## Sample Data
The repository includes a **`sample_data/`** folder with example trajectory CSV files you can use to try VERTA without your own data. The files contain head position (X, Z), time, and optional gaze/physiological columns in the same format as the CLI examples (e.g. `Headset.Head.Position.X`, `Headset.Head.Position.Z`, `Time`). Use them with any command by pointing `--input` at the folder:
```bash
verta discover --input ./sample_data --columns x=Headset.Head.Position.X,z=Headset.Head.Position.Z,t=Time --scale 0.2 --junction 520 330 --radius 20 --out ./outputs
```
See [Data Format Requirements](#data-format-requirements) for column details.
## 🖥️ Web GUI
For an interactive, user-friendly interface, try the web-based GUI:
```bash
# Install GUI dependencies (same as: pip install verta[gui])
pip install verta[gui]
# Launch the web interface (recommended)
verta gui
# Alternative methods
python gui/launch.py
# or
streamlit run src/verta/verta_gui.py
```
You can also customize the port and host:
```bash
verta gui --port 8502 # Use a different port
verta gui --host 0.0.0.0 # Allow external connections
```
**Output Location:** The GUI saves all analysis results to a `gui_outputs/` directory in your current working directory. Different analysis types create subdirectories (e.g., `gui_outputs/junction_0/`, `gui_outputs/metrics/`, `gui_outputs/gaze_analysis/`).
See [`gui/README.md`](gui/README.md) for detailed GUI documentation.
The GUI provides:
- **Interactive junction editor** with drag-and-drop functionality
- **Visual zone definition** for start/end points
- **Real-time analysis** with live parameter adjustment
- **Interactive visualizations** with Plotly charts
- **Gaze and physiological analysis** (head yaw, pupil dilation)
- **Flow graph generation** and conditional probability analysis
- **Pattern recognition** and behavioral insights
- **Intent Recognition** - ML-based early route prediction (see below)
- **Export capabilities** in multiple formats (JSON, CSV, ZIP)
- **Multi-junction analysis** with evacuation planning features
### 🧠 Intent Recognition
Predict user route choices **before** they reach decision points using machine learning:
- **Features extracted**: Spatial (distance, angle, offset), kinematic (speed, acceleration, curvature), gaze (if available), physiological (if available), contextual
- **Models**: Random Forest (fast, robust) or Gradient Boosting (higher accuracy)
- **Prediction distances**: Configure multiple prediction points (100, 75, 50, 25 units before junction)
- **Cross-validation**: Built-in model evaluation with customizable folds
- **Feature importance**: Understand which cues users rely on for decisions
- **Accuracy analysis**: See how prediction improves as users approach the junction
**Use cases:**
- Proactive wayfinding systems
- Predictive content loading
- Dynamic environment optimization
- A/B testing of early interventions
📚 **Documentation**: See [`examples`](examples) for a comprehensive guide to Intent Recognition, including usage examples, model loading, and integration into VR systems.
## CLI Commands
VERTA provides 7 main commands for different types of analysis:
### Decision Modes
VERTA uses **decision modes** to determine where along a trajectory a route choice (branch decision) is made after passing through a junction. The decision point is used to extract the direction vector that represents the chosen route. Three modes are available:
- **`pathlen`**: Measures the path length traveled after entering the junction. The decision point is where the trajectory has traveled a specified distance (set by `--distance`, default: 100 units) from the junction entry point. This mode works well for consistent movement speeds and is computationally efficient.
- **`radial`**: Uses radial distance from the junction center. The decision point is where the trajectory crosses an outer radius (`--r_outer`) with an outward trend (moving away from the junction center). This mode is useful when trajectories move at varying speeds, as it's based on spatial position rather than path length.
- **`hybrid`** (recommended): Tries radial mode first, and if that doesn't find a decision point, falls back to pathlen mode. This provides the best of both approaches and handles a wider variety of trajectory patterns.
**When to use each mode:**
- Use `pathlen` for fast processing when trajectories have consistent speeds
- Use `radial` when trajectories vary significantly in speed or when you want spatial-based detection
- Use `hybrid` (default) for the most robust detection across different scenarios
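As a simplified illustration of these rules (not VERTA's implementation), the three modes can be sketched on a toy trajectory:

```python
import math

def pathlen_decision(points, entry_idx, distance):
    """Index of the first point at least `distance` of path length past entry."""
    travelled = 0.0
    for i in range(entry_idx + 1, len(points)):
        travelled += math.dist(points[i - 1], points[i])
        if travelled >= distance:
            return i
    return None  # decision point not reached within this trajectory

def radial_decision(points, junction, r_outer):
    """Index of the first point beyond `r_outer` that is still moving outward."""
    for i in range(1, len(points)):
        r_prev = math.dist(points[i - 1], junction)
        r_now = math.dist(points[i], junction)
        if r_now >= r_outer and r_now > r_prev:
            return i
    return None

def hybrid_decision(points, entry_idx, junction, distance, r_outer):
    """Radial first, pathlen as fallback (the idea behind `hybrid`)."""
    idx = radial_decision(points, junction, r_outer)
    return idx if idx is not None else pathlen_decision(points, entry_idx, distance)

# A straight trajectory leaving a junction at (0, 0), sampled every 5 units.
path = [(float(x), 0.0) for x in range(0, 60, 5)]
```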
### Parameters Reference
This section provides a comprehensive overview of all parameters used across VERTA commands. Individual command sections highlight only the most relevant parameters for that command.
#### Common Parameters
These parameters are used across most commands:
- **`--input`** (required): Path to input directory or file containing trajectory data
- **`--out`** / **`--output`**: Path to output directory for results (default: current directory)
- **`--glob`**: File pattern to match input files (default: `"*.csv"`)
- **`--columns`**: Column mapping in format `x=ColumnX,z=ColumnZ,t=TimeColumn`. Maps your CSV columns to VERTA's expected coordinate names
- **`--scale`**: Coordinate scaling factor (default: 1.0). Use if your coordinates need scaling (e.g., `0.2` to convert from millimeters to meters)
- **`--config`**: Path to YAML configuration file for batch parameter settings
#### Junction Parameters
Define the location and size of junctions (decision points):
- **`--junction`**: Junction center coordinates and radius as `x y radius` (e.g., `520 330 20`)
- **`--junctions`**: Multiple junctions as space-separated triplets: `x1 y1 r1 x2 y2 r2 ...`
- **`--radius`**: Junction radius (used with `--junction` when radius is specified separately)
#### Decision Mode Parameters
Control how decision points are detected (see [Decision Modes](#decision-modes) above):
- **`--decision_mode`**: Choose `pathlen`, `radial`, or `hybrid` (default varies by command)
- **`--distance`**: Path length after junction for decision detection in `pathlen` mode (default: 100.0 units)
- **`--r_outer`**: Outer radius for `radial` decision mode. Trajectory must cross this radius with outward movement
- **`--r_outer_list`**: List of outer radii for multiple junctions (one per junction)
- **`--linger_delta`**: Additional distance beyond junction required for decision detection (default: 5.0)
- **`--epsilon`**: Minimum step size for trajectory processing (default: 0.015). Smaller values detect finer movements
- **`--trend_window`**: Window size for trend analysis in radial mode (default: 5)
- **`--min_outward`**: Minimum outward movement threshold for radial detection (default: 0.0)
#### Clustering Parameters
Used by the `discover` command to identify branch directions:
- **`--cluster_method`**: Clustering algorithm - `kmeans` (fast, requires known k), `auto` (finds optimal k), or `dbscan` (density-based, finds variable number of clusters)
- **`--k`**: Number of clusters for kmeans (default: 3)
- **`--k_min`**, **`--k_max`**: Range for auto clustering (default: 2-6)
- **`--angle_eps`**: Angle epsilon for DBSCAN clustering in degrees (default: 15.0)
- **`--min_samples`**: Minimum samples per cluster for DBSCAN (default: 5)
- **`--min_sep_deg`**: Minimum angular separation in degrees between branch centers (default: 12.0). Branches closer than this are merged
- **`--seed`**: Random seed for reproducibility (default: 0)
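To get a feel for angle-based clustering with `--angle_eps`, here is a simplified pure-Python sketch (not VERTA's code) that greedily groups decision directions by angular proximity, ignoring 360° wraparound:

```python
def cluster_angles(angles_deg, angle_eps=15.0):
    """Greedily group branch directions whose angle lies within
    `angle_eps` degrees of a cluster's running mean.
    (Simplification: ignores wraparound at 360 degrees.)"""
    clusters = []
    for angle in sorted(angles_deg):
        for cluster in clusters:
            center = sum(cluster) / len(cluster)
            if abs(angle - center) <= angle_eps:
                cluster.append(angle)
                break
        else:
            clusters.append([angle])
    return [sum(c) / len(c) for c in clusters]

# Noisy decision directions around 0, 90 and 180 degrees.
centers = cluster_angles([2, -3, 1, 88, 92, 91, 179, 181])
```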
#### Assignment Parameters
Used when assigning trajectories to existing branches:
- **`--centers`**: Path to previously discovered branch centers file (.npy format) - **required** for `assign` command
- **`--assign_angle_eps`**: Angle tolerance in degrees for branch assignment (default: 15.0). Trajectory direction must be within this angle of a branch center
#### Analysis Parameters
Control analysis behavior and output:
- **`--analyze_sequences`**: Enable route sequence analysis across multiple junctions
- **`--predict_examples`**: Number of concrete prediction examples to generate (default: 50)
- **`--physio_window`**: Time window in seconds for physiological data analysis (default: 3.0)
#### Machine Learning Parameters (Intent Recognition)
Parameters for ML-based intent recognition:
- **`--prediction_distances`**: Distances before junction to make predictions (default: `100 75 50 25` units)
- **`--model_type`**: ML model - `random_forest` (fast, robust) or `gradient_boosting` (higher accuracy, slower)
- **`--cv_folds`**: Number of cross-validation folds for model evaluation (default: 5)
- **`--test_split`**: Fraction of data reserved for testing (default: 0.2)
- **`--with_gaze`**: Include gaze and physiological data in feature extraction (if available)
- **`--assignments`**: Path to pre-computed branch assignments file (optional, speeds up analysis)
#### Visualization Parameters
Control plot generation:
- **`--plot_intercepts`**: Generate decision intercepts visualization (default: True)
- **`--show_paths`**: Show trajectory paths in plots (default: True)
- Use `--no-plot_intercepts` and `--no-show_paths` to disable specific visualizations
#### Chain Analysis Parameters
For multi-junction analysis:
- **`--evacuation_analysis`**: Enable evacuation efficiency analysis
- **`--generate_recommendations`**: Generate traffic flow recommendations
- **`--risk_assessment`**: Perform risk assessment metrics
### 1. Discover Branches
Discover branch directions from trajectory data using clustering algorithms:
```bash
verta discover \
--input ./data \
--glob "*.csv" \
--columns x=Headset.Head.Position.X,z=Headset.Head.Position.Z,t=Time \
--scale 0.2 \
--junction 520 330 --radius 20 \
--distance 100 \
--decision_mode hybrid \
--cluster_method auto \
--out ./outputs
```
**Key Parameters:**
- `--cluster_method`: Choose from `kmeans`, `auto`, or `dbscan` (default: `kmeans`)
- `--decision_mode`: `pathlen`, `radial`, or `hybrid` (default: `hybrid`)
- `--k`: Number of clusters for kmeans (default: 3)
- `--k_min`, `--k_max`: Range for auto clustering (default: 2-6)
See [Parameters Reference](#parameters-reference) for all available parameters including decision mode, clustering, and visualization options.
### 2. Assign Branches
Assign new trajectories to previously discovered branch centers:
```bash
verta assign \
--input ./new_data \
--columns x=X,z=Z,t=time \
--junction 520 330 --radius 20 \
--distance 100 \
--centers ./outputs/branch_centers.npy \
--out ./outputs/new_assignments
```
**Key Parameters:**
- `--centers`: Path to previously discovered branch centers (.npy file) - **required**
- `--decision_mode`: `pathlen`, `radial`, or `hybrid` (default: `pathlen`)
- `--assign_angle_eps`: Angle tolerance for branch assignment (default: 15.0 degrees)
See [Parameters Reference](#parameters-reference) for decision mode and other common parameters.
**Use Cases:**
- Assign new test data to previously discovered branches
- Apply learned branch structure to new datasets
- Batch processing of multiple trajectory sets
### 3. Compute Metrics
Calculate timing and speed metrics for trajectories:
```bash
verta metrics \
--input ./data \
--columns x=X,z=Z,t=time \
--junction 520 330 --radius 20 \
--distance 100 \
--decision_mode radial --r_outer 30 \
--out ./outputs
```
**Key Parameters:**
- `--decision_mode`: `pathlen`, `radial`, or `hybrid` (default: `pathlen`)
- `--trend_window`: Window size for trend analysis (default: 5)
- `--min_outward`: Minimum outward movement threshold (default: 0.0)
See [Parameters Reference](#parameters-reference) for decision mode and junction parameters.
**Metrics Computed:**
- Time to travel specified path length after junction
- Speed through junction (entry, exit, average transit)
- Junction transit speed analysis
- Basic trajectory metrics (total distance, duration, average speed)
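The basic trajectory metrics follow directly from the (x, z, t) samples; a minimal sketch (illustration only, not VERTA's code):

```python
import math

def basic_metrics(samples):
    """samples: list of (x, z, t) tuples.
    Returns total distance, duration, and average speed."""
    total = sum(math.dist(samples[i - 1][:2], samples[i][:2])
                for i in range(1, len(samples)))
    duration = samples[-1][2] - samples[0][2]
    return {"distance": total,
            "duration": duration,
            "avg_speed": total / duration if duration > 0 else 0.0}

# Two moves: a 3-4-5 step, then standing still for one second.
metrics = basic_metrics([(0, 0, 0.0), (3, 4, 1.0), (3, 4, 2.0)])
```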
### 4. Gaze Analysis
Analyze head movement and physiological data at decision points:
```bash
verta gaze \
--input ./gaze_data \
--columns x=X,z=Z,t=time,yaw=HeadYaw,pupil=PupilDilation \
--junction 520 330 --radius 20 \
--distance 100 \
--physio_window 3.0 \
--out ./gaze_outputs
```
**Key Parameters:**
- `--physio_window`: Time window in seconds for physiological data analysis (default: 3.0)
See [Parameters Reference](#parameters-reference) for decision mode, junction, and other common parameters.
**Analysis Features:**
- Head yaw direction at decision points
- Pupil dilation trajectory analysis
- Physiological metrics (heart rate, etc.) at junctions
- Gaze-movement consistency analysis
### 5. Predict Choices
Analyze behavioral patterns and predict future route choices:
```bash
verta predict \
--input ./data \
--columns x=Headset.Head.Position.X,z=Headset.Head.Position.Z,t=Time \
--scale 0.2 \
--junctions 520 330 20 600 400 20 700 450 20 \
--r_outer_list 30 32 30 \
--distance 100 \
--decision_mode hybrid \
--cluster_method auto \
--analyze_sequences \
--predict_examples 50 \
--out ./outputs/prediction
```
**Key Parameters:**
- `--analyze_sequences`: Enable route sequence analysis across multiple junctions
- `--predict_examples`: Number of concrete prediction examples to generate (default: 50)
See [Parameters Reference](#parameters-reference) for clustering, decision mode, and junction parameters.
**Prediction Features:**
- Behavioral pattern recognition (preferred, learned, direct)
- Conditional probability analysis between junctions
- Route sequence analysis
- Confidence scoring for predictions
- Concrete prediction examples
### 6. Intent Recognition
ML-based early route prediction - predict user route choices **before** they reach decision points:
```bash
verta intent \
--input ./data \
--columns x=Headset.Head.Position.X,z=Headset.Head.Position.Z,t=Time \
--scale 0.2 \
--junction 520 330 20 \
--distance 100 \
--prediction_distances 100 75 50 25 \
--model_type random_forest \
--cv_folds 5 \
--test_split 0.2 \
--out ./outputs/intent_recognition
```
**Key Parameters:**
- `--prediction_distances`: Distances before junction to make predictions (default: 100, 75, 50, 25 units)
- `--model_type`: Choose `random_forest` (fast) or `gradient_boosting` (higher accuracy)
- `--with_gaze`: Include gaze and physiological data if available
- `--centers`: Use pre-computed branch centers (optional)
- `--assignments`: Use pre-computed branch assignments (optional)
See [Parameters Reference](#parameters-reference) for all ML and decision mode parameters.
**Intent Recognition Features:**
- Multi-distance prediction models (train at 100, 75, 50, 25 units before junction)
- Feature importance analysis (spatial, kinematic, gaze, physiological)
- Accuracy analysis showing prediction improvement with proximity
- Cross-validated model evaluation
- Saved models for production deployment
- Sample predictions with confidence scores
**Use Cases:**
- Proactive wayfinding systems
- Predictive content loading
- Dynamic environment optimization
- A/B testing of early interventions
### 7. Enhanced Chain Analysis
Multi-junction analysis with evacuation planning features:
```bash
verta chain-enhanced \
--input ./data \
--columns x=X,z=Z,t=time \
--junctions 520 330 20 600 400 20 700 450 20 \
--r_outer_list 30 32 30 \
--distance 100 \
--evacuation_analysis \
--generate_recommendations \
--risk_assessment \
--out ./outputs/chain_analysis
```
**Key Parameters:**
- `--evacuation_analysis`: Enable evacuation efficiency analysis
- `--generate_recommendations`: Generate traffic flow recommendations
- `--risk_assessment`: Perform risk assessment metrics
See [Parameters Reference](#parameters-reference) for decision mode, junction, and other common parameters.
**Enhanced Features:**
- Flow graph generation
- Evacuation efficiency analysis
- Risk assessment metrics
- Traffic flow recommendations
## Output Files Reference
Each command generates specific output files:
### Discover Command
- `branch_assignments.csv` - Main branch assignments
- `branch_assignments_all.csv` - All assignments including outliers
- `branch_centers.npy` / `branch_centers.json` - Branch center coordinates
- `branch_summary.csv` - Branch count statistics with entropy
- `Branch_Directions.png` - Visual plot of branch directions
- `Branch_Counts.png` - Bar chart of branch frequencies
- `Decision_Intercepts.png` - Trajectory decision points visualization
- `Decision_Map.png` - Overview map of decisions
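The entropy in `branch_summary.csv` quantifies how evenly trajectories spread over the discovered branches. VERTA's exact definition isn't documented here; Shannon entropy over branch counts is the standard choice:

```python
import math

def branch_entropy(counts):
    """Shannon entropy (in bits) of a branch-count distribution:
    0 when every trajectory takes the same branch,
    log2(k) when spread evenly over k branches."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

even = branch_entropy([10, 10])   # 1.0 bit: evenly split over two branches
skewed = branch_entropy([18, 2])  # much lower: one branch dominates
```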
### Assign Command
- `branch_assignments.csv` - New trajectory assignments
- `run_args.json` - Command parameters used
### Metrics Command
- `timing_and_speed_metrics.csv` - Comprehensive timing and speed data
- `run_args.json` - Command parameters used
### Gaze Command
- `gaze_analysis.csv` - Head yaw analysis at decision points
- `physiological_analysis.csv` - Physiological metrics at junctions
- `pupil_trajectory_analysis.csv` - Pupil dilation trajectory data
- `gaze_consistency_report.json` - Gaze-movement alignment statistics
- `Gaze_Directions.png` - Head movement visualization
- `Physiological_Analysis.png` - Physiological metrics by branch
- `Pupil_Trajectory_Analysis.png` - Pupil dilation plots
### Predict Command
- `choice_pattern_analysis.json` - Complete pattern analysis results
- `choice_patterns.png` - Behavioral pattern visualization
- `transition_heatmap.png` - Junction transition probabilities
- `prediction_examples.json` - Concrete prediction examples
- `prediction_confidence.png` - Confidence analysis plots
- `sequence_analysis.json` - Route sequence analysis (if `--analyze_sequences`)
- `analysis_summary.json` - High-level summary with recommendations
### Intent Recognition Command
- `intent_recognition_summary.csv` - Summary of model accuracy per junction and distance
- `intent_recognition_junction_summary.csv` - Average accuracy per junction
- `intent_recognition_results.json` - Complete analysis results
- `junction_*/models/` - Trained model files (model_*.pkl, scaler_*.pkl)
- `junction_*/intent_training_results.json` - Model metrics and feature importance
- `junction_*/intent_feature_importance.png` - Feature importance visualization
- `junction_*/intent_accuracy_analysis.png` - Accuracy vs. distance chart
- `junction_*/test_predictions.json` - Sample predictions with confidence scores
### Chain-Enhanced Command
- `Chain_Overview.png` - Multi-junction trajectory overview
- `Chain_SmallMultiples.png` - Detailed junction-by-junction view
- `Flow_Graph_Map.png` - Flow diagram between junctions
- `Per_Junction_Flow_Graph.png` - Individual junction flow analysis
- `branch_decisions_chain.csv` - Complete decision chain data
## Behavioral Pattern Analysis
VERTA identifies three types of behavioral patterns:
### Pattern Types
- **Preferred Patterns** (probability ≥ 0.7): Strong behavioral preferences that are highly predictable
- **Learned Patterns** (probability 0.5-0.7): Patterns that develop over time as participants learn the environment
- **Direct Patterns** (probability 0.3-0.5): Basic route choices without strong preferences
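These thresholds map directly to a classification rule. A minimal sketch (the behavior below 0.3 is an assumption, since it isn't specified above):

```python
def classify_pattern(probability):
    """Map a transition probability to VERTA's pattern categories."""
    if probability >= 0.7:
        return "preferred"
    if probability >= 0.5:
        return "learned"
    if probability >= 0.3:
        return "direct"
    return None  # below 0.3: assumed not reported as a pattern

label = classify_pattern(0.85)  # "preferred", as in the example results below
```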
### Example Analysis Results
```json
{
"summary": {
"total_sequences": 150,
"total_transitions": 300,
"unique_patterns": 12,
"junctions_analyzed": 3
},
"pattern_types": {
"preferred": 3,
"learned": 5,
"direct": 4
},
"top_patterns": [
{
"from_junction": 0,
"to_junction": 1,
"from_branch": 1,
"to_branch": 2,
"probability": 0.85,
"confidence": 0.92,
"sample_size": 23,
"pattern_type": "preferred"
}
]
}
```
### Applications
The analysis can help identify:
- Which junctions are most predictable vs. variable
- How participants adapt to the VR environment over time
- Optimal junction designs based on user behavior
- Potential traffic bottlenecks or flow issues
- Evacuation route efficiency and safety
## Configuration
Pass `--config path/to/config.yaml` to any subcommand. Keys under `defaults:` apply to all subcommands; subcommand-specific blocks (`discover:`, `assign:`, `metrics:`) override defaults.
Example `config.yaml`:
```yaml
defaults:
glob: "*.csv"
columns: { x: "Headset.Head.Position.X", z: "Headset.Head.Position.Z", t: "Time" }
scale: 0.2
motion_threshold: 0.001
radius: 20
distance: 100
epsilon: 0.015
junction: [520, 330]
discover:
decision_mode: hybrid
r_outer: 30
linger_delta: 2.0
cluster_method: dbscan
angle_eps: 15
show_paths: true
plot_intercepts: true
```
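Subcommand blocks overriding `defaults:` amounts to a shallow dictionary merge. A sketch of the idea on plain dicts (VERTA may merge nested keys such as `columns` differently):

```python
def effective_config(config, subcommand):
    """Merge defaults with a subcommand block; subcommand keys win."""
    merged = dict(config.get("defaults", {}))
    merged.update(config.get(subcommand, {}))
    return merged

config = {
    "defaults": {"scale": 0.2, "radius": 20, "decision_mode": "pathlen"},
    "discover": {"decision_mode": "hybrid", "cluster_method": "dbscan"},
}
settings = effective_config(config, "discover")
```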
## Dependencies
### Python Version
VERTA requires **Python ≥3.8**. Supported versions include Python 3.8, 3.9, 3.10, 3.11, and 3.12.
### Core Dependencies
- **numpy** (≥1.20.0) - Numerical computations
- **pandas** (≥1.3.0) - Data manipulation and analysis
- **matplotlib** (≥3.3.0) - Static plotting and visualization
- **tqdm** (≥4.60.0) - Progress bars
- **seaborn** (≥0.12.0) - Statistical data visualization
### ML Capabilities (Intent Recognition)
- **scikit-learn** (≥1.0.0) - Machine learning algorithms (Random Forest, Gradient Boosting)
- **plotly** (≥5.15.0) - Interactive visualization for feature importance and accuracy analysis
### Optional Dependencies
Install with `pip install verta[yaml]`, `pip install verta[parquet]`, `pip install verta[gui]`, or `pip install verta[test]`:
- **yaml** - YAML configuration file support (`pyyaml`)
- **parquet** - Parquet file format support (`pyarrow`)
- **gui** - Web GUI dependencies (`streamlit`, `plotly`)
- **test** - Testing dependencies (`pytest>=7.0.0`)
### GUI-Specific Dependencies
For the web interface, install:
```bash
pip install verta[gui]
```
This installs:
- **streamlit** (≥1.28.0) - Web framework
- **plotly** (≥5.15.0) - Interactive plotting
Note: Core dependencies (numpy, pandas, matplotlib, seaborn) are included in the main package installation.
## Troubleshooting
### Common Issues
**Import errors:**
```bash
# Ensure you're in the project directory
cd /path/to/verta
# Check Python path
python -c "import verta; print('Package OK')"
```
**GUI won't start:**
```bash
# Check GUI dependencies
pip install verta[gui]
# Verify Streamlit installation
python -c "import streamlit; print('Streamlit OK')"
# Launch the GUI (recommended)
verta gui
# Alternative: Launch with explicit path
streamlit run src/verta/verta_gui.py
```
**No trajectories loaded:**
- Check file paths and glob patterns
- Verify column mappings match your CSV headers
- Ensure files contain X, Z coordinates (Time optional)
- Check scale factor - try `--scale 1.0` for raw coordinates
**Clustering issues:**
- Increase `--k` if too few clusters found
- Try `--cluster_method auto` for automatic cluster detection
- Adjust `--min_samples` for DBSCAN clustering
- Check `--angle_eps` for angle-based clustering
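To build intuition for `--angle_eps`, here is a minimal greedy sketch of angle clustering (illustrative only, not VERTA's implementation): a heading joins the first cluster whose centre is within `angle_eps` degrees, otherwise it starts a new cluster.

```python
def cluster_angles(angles_deg, angle_eps=15.0):
    """Greedily group headings (in degrees) by circular distance.

    The cluster centre is fixed to the first member; a real
    implementation would typically update it as members join.
    """
    clusters = []
    for a in angles_deg:
        for c in clusters:
            # circular distance, so 359 deg and 1 deg are 2 deg apart
            d = abs((a - c["centre"] + 180.0) % 360.0 - 180.0)
            if d <= angle_eps:
                c["members"].append(a)
                break
        else:
            clusters.append({"centre": a, "members": [a]})
    return clusters

print(len(cluster_angles([0, 5, 90, 92, 358], angle_eps=15)))
```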
**Performance issues:**
- Reduce number of trajectories for testing
- Use sample data for initial setup
- Close other browser tabs when using GUI
- Consider using `--decision_mode pathlen` for faster analysis
**Memory errors:**
- Process data in smaller batches
- Reduce `--distance` parameter
- Use `--scale` to reduce coordinate precision
### Getting Help
1. **Check console output** for detailed error messages
2. **Verify data format** - CSV files should have X, Z columns
3. **Try the [sample data](sample_data/) first** to test installation
4. **Check file permissions** for output directories
5. **Review configuration** - use `--config` for complex setups
### Data Format Requirements
**Minimum CSV columns:**
- X coordinate (any column name)
- Z coordinate (any column name)
- Time (optional, enables timing metrics)
**Example CSV:**
```csv
Time,X,Z
0.0,100.0,200.0
0.1,101.0,201.0
...
```
**Column mapping:**
```bash
--columns x=Headset.Head.Position.X,z=Headset.Head.Position.Z,t=Time
```
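Conceptually, the column mapping selects the mapped columns from each CSV row and applies the scale factor. A stdlib-only sketch of that idea (not VERTA's actual loader):

```python
import csv
import io

def load_trajectory(csv_text, x_col, z_col, t_col=None, scale=1.0):
    """Read (x, z[, t]) samples from CSV text using mapped column names."""
    reader = csv.DictReader(io.StringIO(csv_text))
    points = []
    for row in reader:
        point = (float(row[x_col]) * scale, float(row[z_col]) * scale)
        if t_col is not None:
            point += (float(row[t_col]),)  # time is optional
        points.append(point)
    return points

sample = (
    "Time,Headset.Head.Position.X,Headset.Head.Position.Z\n"
    "0.0,100.0,200.0\n"
    "0.1,101.0,201.0\n"
)
traj = load_trajectory(sample, "Headset.Head.Position.X",
                       "Headset.Head.Position.Z", t_col="Time", scale=0.2)
print(traj)
```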
## Tips
- Use `--no-plot_intercepts` and `--no-show_paths` to disable plotting
- The tool prints a suggested `--epsilon` based on step statistics
- For Parquet inputs, install the `[parquet]` extra
- Use `--config` for complex multi-command setups
- Try `--cluster_method auto` for automatic cluster detection
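One plausible way to derive an epsilon suggestion from step statistics (an assumption about the heuristic, not VERTA's documented formula) is the median per-step displacement of a trajectory:

```python
import math
import statistics

def suggest_epsilon(points):
    """Suggest a motion epsilon from the median step length of a path."""
    steps = [
        math.dist(a, b)  # Euclidean distance between consecutive samples
        for a, b in zip(points, points[1:])
    ]
    return statistics.median(steps)

path = [(0.0, 0.0), (0.0, 0.01), (0.0, 0.03), (0.0, 0.045)]
print(round(suggest_epsilon(path), 3))
```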
## AI Usage Disclosure
This project utilized AI-assisted development tools for various aspects of the codebase:
- **Cursor** and **ChatGPT** were used for:
- Code refactoring
- GUI design
- Test scaffolding
- Documentation refactoring and elaboration
- Paper refinement
- Logo design
## License
MIT
| text/markdown | null | Niklas Suhre <suhre@verkehr.tu-darmstadt.de> | null | null | null | trajectory-analysis, route-analysis, virtual-reality, vr, behavioral-analysis, machine-learning, intent-recognition, gaze-tracking, spatial-analysis | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyth... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20.0",
"pandas>=1.3.0",
"matplotlib>=3.3.0",
"scikit-learn>=1.0.0",
"tqdm>=4.60.0",
"seaborn>=0.12.0",
"pyyaml; extra == \"yaml\"",
"pyarrow; extra == \"parquet\"",
"streamlit>=1.28.0; extra == \"gui\"",
"plotly>=5.15.0; extra == \"gui\"",
"pytest>=7.0.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/ruedi99ms/VERTA",
"Documentation, https://github.com/ruedi99ms/VERTA#readme",
"Repository, https://github.com/ruedi99ms/VERTA",
"Issues, https://github.com/ruedi99ms/VERTA/issues"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-18T08:59:07.177704 | verta-1.0.6.tar.gz | 201,896 | 41/42/c213190513f11b40b5bf161b805edd3fea98ffe9d4b733dc72769ef5463c/verta-1.0.6.tar.gz | source | sdist | null | false | c47a815e65fa0d53bdf3f7efee2bfa5e | 7c48c91c26e5c7c1a8bbddfb24c36a4929680d9d67f869480bb8e33ca246a672 | 4142c213190513f11b40b5bf161b805edd3fea98ffe9d4b733dc72769ef5463c | MIT | [
"LICENSE"
] | 287 |
2.4 | audformat | 1.3.4 | Python implementation of audformat | =========
audformat
=========
|tests| |coverage| |docs| |python-versions| |license|
Specification and reference implementation of **audformat**.
audformat stores media data,
such as audio, video, or text
together with corresponding annotations
in a pre-defined way.
This makes it easy to combine or replace databases
in machine learning projects.
An audformat database is a folder
that contains media files
together with a header YAML file
and one or several files storing the annotations.
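A minimal sketch of such a folder (the file names are illustrative only)::

    db-root/
        db.yaml                (database header)
        db.files.csv           (one file storing annotations)
        audio/
            001.wav
            002.wav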
The database is represented as an ``audformat.Database`` object
and can be loaded with ``audformat.Database.load()``
or written to disk with ``audformat.Database.save()``.
Have a look at the installation_ and usage_ instructions
and the `format specifications`_ as a starting point.
.. _installation: https://audeering.github.io/audformat/install.html
.. _usage: https://audeering.github.io/audformat/create-database.html
.. _format specifications: https://audeering.github.io/audformat/data-introduction.html
.. badges images and links:
.. |tests| image:: https://github.com/audeering/audformat/workflows/Test/badge.svg
:target: https://github.com/audeering/audformat/actions?query=workflow%3ATest
:alt: Test status
.. |coverage| image:: https://codecov.io/gh/audeering/audformat/branch/main/graph/badge.svg?token=1FEG9P5XS0
:target: https://codecov.io/gh/audeering/audformat/
:alt: code coverage
.. |docs| image:: https://img.shields.io/pypi/v/audformat?label=docs
:target: https://audeering.github.io/audformat/
:alt: audformat's documentation
.. |license| image:: https://img.shields.io/badge/license-MIT-green.svg
:target: https://github.com/audeering/audformat/blob/main/LICENSE
:alt: audformat's MIT license
.. |python-versions| image:: https://img.shields.io/pypi/pyversions/audformat.svg
:target: https://pypi.org/project/audformat/
:alt: audformats's supported Python versions
| text/x-rst | Johannes Wagner, BahaEddine Abrougui | Hagen Wierstorf <hwierstorf@audeering.com>, Christian Geng <cgeng@audeering.com> | null | null | null | audio, database, annotation | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"audeer>=2.0.0",
"audiofile>=0.4.0",
"iso639-lang",
"iso3166",
"oyaml",
"pandas>=2.1.0",
"pyarrow>=10.0.1",
"pyyaml>=5.4.1"
] | [] | [] | [] | [
"repository, https://github.com/audeering/audformat/",
"documentation, https://audeering.github.io/audformat/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:58:03.732460 | audformat-1.3.4.tar.gz | 148,672 | 5f/97/8c6467512c3432f77a6b5177f171ee45cd627062331f959674394e6e6a96/audformat-1.3.4.tar.gz | source | sdist | null | false | a8d93ec63cc3f62221a01d8ccaa46a24 | cfe39daebd4578819d75af83a4c2841aa5efae75284112439c647e725d78e555 | 5f978c6467512c3432f77a6b5177f171ee45cd627062331f959674394e6e6a96 | MIT | [
"LICENSE"
] | 2,328 |
2.4 | esp-matter-mfg-tool | 1.0.22 | A python utility which helps to generate matter manufacturing partitions |
====================
esp-matter-mfg-tool
====================
This Python utility helps generate Matter manufacturing partitions.
Source code for `esp-matter-mfg-tool` is
`hosted on github <https://github.com/espressif/esp-matter-tools/tree/main/mfg_tool>`_.
Documentation
-------------
Visit online `esp-matter-mfg-tool documentation <https://github.com/espressif/esp-matter-tools/tree/main/mfg_tool>`_
or run ``esp-matter-mfg-tool -h``.
License
-------
The License for the project can be found
`here <https://github.com/espressif/esp-matter-tools/tree/main/LICENSE>`_
| text/x-rst | Espressif Systems | null | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: POSIX",
"Operating System :: MacOS :: MacOS X",
"Topic :: Software Development :: Embedded Systems"
] | [] | https://github.com/espressif/esp-matter-tools/tree/main/mfg_tool | null | >=3.8 | [] | [] | [] | [
"bitarray>=2.6.0",
"cryptography==44.0.1",
"cffi==1.15.0; python_version < \"3.13\"",
"cffi==1.17.1; python_version >= \"3.13\"",
"future==0.18.3",
"pycparser==2.21",
"pypng==0.0.21",
"PyQRCode==1.2.1",
"python_stdnum==1.18",
"esp-secure-cert-tool==2.3.6",
"ecdsa==0.19.0",
"esp_idf_nvs_partiti... | [] | [] | [] | [
"Documentation, https://github.com/espressif/esp-matter-tools/tree/main/mfg_tool/README.md",
"Source, https://github.com/espressif/esp-matter-tools/tree/main/mfg_tool"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:56:53.645213 | esp_matter_mfg_tool-1.0.22.tar.gz | 42,124 | c7/4d/f0ec61a361254c9252f78a6dc2e0612577eba4d21d3d6adf1be0a35b84bb/esp_matter_mfg_tool-1.0.22.tar.gz | source | sdist | null | false | 0d1e9a5ca7103437009348156f6a52cc | 2602e95ecb09623bb03bc4ae0be0c8f5edeb5cdab2fd77d00a80b18700a2500a | c74df0ec61a361254c9252f78a6dc2e0612577eba4d21d3d6adf1be0a35b84bb | null | [] | 1,101 |
2.1 | odoo14-addon-ssi-purchase-stock | 14.0.1.3.0 | Purchase + Inventory Integration | .. image:: https://img.shields.io/badge/licence-AGPL--3-blue.svg
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
================================
Purchase + Inventory Integration
================================
Installation
============
To install this module, you need to:
1. Clone the branch 14.0 of the repository https://github.com/open-synergy/ssi-purchase
2. Add the path to this repository in your configuration (addons-path)
3. Update the module list
4. Go to menu *Apps -> Apps -> Main Apps*
5. Search For *Purchase + Inventory Integration*
6. Install the module
Bug Tracker
===========
Bugs are tracked on `GitHub Issues
<https://github.com/open-synergy/ssi-purchase/issues>`_.
In case of trouble, please check there whether your issue has already been reported.
If you spot it first, help us smash it by providing detailed feedback.
Credits
=======
Contributors
------------
* Michael Viriyananda <viriyananda.michael@gmail.com>
Maintainer
----------
.. image:: https://simetri-sinergi.id/logo.png
:alt: PT. Simetri Sinergi Indonesia
:target: https://simetri-sinergi.id.com
This module is maintained by PT. Simetri Sinergi Indonesia.
| null | PT. Simetri Sinergi Indonesia, OpenSynergy Indonesia | null | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 14.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://simetri-sinergi.id | null | >=3.6 | [] | [] | [] | [
"odoo14-addon-ssi-purchase",
"odoo<14.1dev,>=14.0a"
] | [] | [] | [] | [] | twine/5.1.1 CPython/3.12.3 | 2026-02-18T08:56:43.254455 | odoo14_addon_ssi_purchase_stock-14.0.1.3.0-py3-none-any.whl | 59,000 | b6/e0/a04bfed358b630160cdc95cbb3cbbfe37300c84e3c1bb3b2a55238822884/odoo14_addon_ssi_purchase_stock-14.0.1.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 08482dbb9c8d05935b79707d250da824 | 1d03b58e93d9a0a412d824727f867c0655a31e37596446418aa38b5e8c8460bf | b6e0a04bfed358b630160cdc95cbb3cbbfe37300c84e3c1bb3b2a55238822884 | null | [] | 107 |
2.1 | odoo14-addon-ssi-purchase | 14.0.4.4.0 | Purchase | .. image:: https://img.shields.io/badge/licence-AGPL--3-blue.svg
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
========
Purchase
========
Installation
============
To install this module, you need to:
1. Clone the branch 14.0 of the repository https://github.com/open-synergy/ssi-purchase
2. Add the path to this repository in your configuration (addons-path)
3. Update the module list
4. Go to menu *Apps -> Apps -> Main Apps*
5. Search For *Purchase*
6. Install the module
Bug Tracker
===========
Bugs are tracked on `GitHub Issues
<https://github.com/open-synergy/ssi-purchase/issues>`_.
In case of trouble, please check there whether your issue has already been reported.
If you spot it first, help us smash it by providing detailed feedback.
Credits
=======
Contributors
------------
* Michael Viriyananda <viriyananda.michael@gmail.com>
* Miftahussalam <miftahussalam08@gmail.com>
Maintainer
----------
.. image:: https://simetri-sinergi.id/logo.png
:alt: PT. Simetri Sinergi Indonesia
:target: https://simetri-sinergi.id.com
This module is maintained by PT. Simetri Sinergi Indonesia.
| null | PT. Simetri Sinergi Indonesia, OpenSynergy Indonesia | null | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 14.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://simetri-sinergi.id | null | >=3.6 | [] | [] | [] | [
"odoo14-addon-ssi-master-data-mixin",
"odoo14-addon-ssi-multiple-approval-mixin",
"odoo14-addon-ssi-policy-mixin",
"odoo14-addon-ssi-sequence-mixin",
"odoo<14.1dev,>=14.0a"
] | [] | [] | [] | [] | twine/5.1.1 CPython/3.12.3 | 2026-02-18T08:56:37.546694 | odoo14_addon_ssi_purchase-14.0.4.4.0-py3-none-any.whl | 68,570 | bd/47/7f2a2dd7a0387e93de22156526f38e211d3ead913eb7ac60199b889a3c74/odoo14_addon_ssi_purchase-14.0.4.4.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 8df1708c98c5cfa51de75032061ccb9e | 77a5b42d9a2c2b12f2c2b5ca55cf088591b4bfe185363be54b1f4518b511a2c9 | bd477f2a2dd7a0387e93de22156526f38e211d3ead913eb7ac60199b889a3c74 | null | [] | 115 |
2.4 | orangeqs-juice-core | 26.8.0 | Framework for running quantum computing experiments. | # OrangeQS Juice

[](https://gitlab.com/orangeqs/juice/-/releases)
[](https://pypi.org/project/orangeqs-juice-core/)
[](https://gitlab.com/orangeqs/juice/-/blob/main/LICENSE)
[](https://docs.orangeqs.com/juice/core)
**OrangeQS Juice** is an operating system designed to control quantum test systems, developed and maintained by Orange Quantum Systems.
OrangeQS Juice provides a robust and scalable platform for managing and automating quantum device testing. It is built to integrate seamlessly with a variety of quantum hardware and measurement instruments.
For full installation and usage instructions, please visit our official [documentation](https://docs.orangeqs.com/juice/core).
## Installation
See the [Installation guide](https://docs.orangeqs.com/juice/core/tutorials/getting-started/installation.html).
## License
This project is licensed under the Apache 2.0 License.
## Contributing
See the [Contributing guide](https://docs.orangeqs.com/juice/core/contribute/).
## Contact
For questions, support, or commercial inquiries, please contact us at juice@orangeqs.com.
| text/markdown | null | Orange Quantum Systems <juice@orangeqs.com> | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"bokeh",
"click",
"influxdb-client[async]",
"ipykernel<7",
"jinja2",
"jupyter_client",
"jupyter_kernel_proxy",
"pandas",
"panel",
"pint",
"pydantic",
"pydantic-settings",
"rich",
"tomli>=1.1.0; python_version < \"3.11\"",
"tomli-w",
"tornado>=6.5",
"typing-extensions; python_version ... | [] | [] | [] | [
"Homepage, https://orangeqsjuice.org/",
"Source, https://gitlab.com/orangeqs/juice",
"Documentation, https://docs.orangeqs.com/juice/core",
"Changelog, https://docs.orangeqs.com/juice/core/reference/changelog.html",
"Issues, https://gitlab.com/orangeqs/juice/-/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T08:55:30.604879 | orangeqs_juice_core-26.8.0.tar.gz | 332,346 | 1f/01/507030243e33215a3b83e52fb20165a3d183728abdd47960796f95f29155/orangeqs_juice_core-26.8.0.tar.gz | source | sdist | null | false | 45ee3a909f7b75b65af4995ec00ba9e5 | db0bb8ec8faf89ace78e4ac900f5687623154891cf5a45a8ab44fa4c141e7372 | 1f01507030243e33215a3b83e52fb20165a3d183728abdd47960796f95f29155 | Apache-2.0 | [
"LICENSE"
] | 2,417 |
2.4 | langwatch-scenario | 0.7.16 | The end-to-end agent testing library | 
<p align="center">
<a href="https://discord.gg/kT4PhDS2gH" target="_blank"><img src="https://img.shields.io/discord/1227886780536324106?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb" alt="chat on Discord"></a>
<a href="https://pypi.python.org/pypi/langwatch-scenario" target="_blank"><img src="https://img.shields.io/pypi/dm/langwatch-scenario?logo=python&logoColor=white&label=pypi%20langwatch-scenario&color=blue" alt="Scenario Python package on PyPi"></a>
<a href="https://www.npmjs.com/package/@langwatch/scenario" target="_blank"><img src="https://img.shields.io/npm/dm/@langwatch/scenario?logo=npm&logoColor=white&label=npm%20@langwatch/scenario&color=blue" alt="Scenario JavaScript package on npm"></a>
<a href="https://github.com/langwatch/scenario/actions/workflows/python-ci.yml"><img src="https://github.com/langwatch/scenario/actions/workflows/python-ci.yml/badge.svg" alt="Python Tests" /></a>
<a href="https://github.com/langwatch/scenario/actions/workflows/javascript-ci.yml"><img src="https://github.com/langwatch/scenario/actions/workflows/javascript-ci.yml/badge.svg" alt="JavaScript Tests" /></a>
<a href="https://twitter.com/intent/follow?screen_name=langwatchai" target="_blank">
<img src="https://img.shields.io/twitter/follow/langwatchai?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)"></a>
</p>
# Scenario
Scenario is an Agent Testing Framework based on simulations. It can:
- Test real agent behavior by simulating users in different scenarios and edge cases
- Evaluate and judge at any point of the conversation, powerful multi-turn control
- Combine it with any LLM eval framework or custom evals, agnostic by design
- Integrate your Agent by implementing just one [`call()`](https://scenario.langwatch.ai/agent-integration) method
- Available in Python, TypeScript and Go
📖 [Documentation](https://scenario.langwatch.ai)\
📺 [Watch Video Tutorial](https://www.youtube.com/watch?v=f8NLpkY0Av4)
## Example
This is what a simulation with a tool-call check looks like in Scenario:
```python
# Define any custom assertions
def check_for_weather_tool_call(state: scenario.ScenarioState):
assert state.has_tool_call("get_current_weather")
result = await scenario.run(
name="checking the weather",
# Define the prompt to guide the simulation
description="""
The user is planning a boat trip from Barcelona to Rome,
and is wondering what the weather will be like.
""",
# Define the agents that will play this simulation
agents=[
WeatherAgent(),
scenario.UserSimulatorAgent(model="openai/gpt-4.1"),
],
# (Optional) Control the simulation
script=[
scenario.user(), # let the user simulator generate a user message
scenario.agent(), # agent responds
check_for_weather_tool_call, # check for tool call after the first agent response
scenario.succeed(), # simulation ends successfully
],
)
assert result.success
```
<details>
<summary><strong>TypeScript Example</strong></summary>
```typescript
const result = await scenario.run({
name: "vegetarian recipe agent",
// Define the prompt to guide the simulation
description: `
The user is planning a boat trip from Barcelona to Rome,
and is wondering what the weather will be like.
`,
// Define the agents that will play this simulation
agents: [new MyAgent(), scenario.userSimulatorAgent()],
// (Optional) Control the simulation
script: [
scenario.user(), // let the user simulator generate a user message
scenario.agent(), // agent responds
// check for tool call after the first agent response
(state) => expect(state.has_tool_call("get_current_weather")).toBe(true),
scenario.succeed(), // simulation ends successfully
],
});
```
</details>
> [!NOTE]
> Check out full examples in the [python/examples folder](./python/examples/) or the [typescript/examples folder](./typescript/examples/).
## Quick Start
Install scenario and a test runner:
```bash
# on python
uv add langwatch-scenario pytest
# or on typescript
pnpm install @langwatch/scenario vitest
```
Now create your first scenario, copy the full working example below.
<details>
<summary><strong>Quick Start - Python</strong></summary>
Save it as `tests/test_vegetarian_recipe_agent.py`:
```python
import pytest
import scenario
import litellm
scenario.configure(default_model="openai/gpt-4.1")
@pytest.mark.agent_test
@pytest.mark.asyncio
async def test_vegetarian_recipe_agent():
class Agent(scenario.AgentAdapter):
async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
return vegetarian_recipe_agent(input.messages)
# Run a simulation scenario
result = await scenario.run(
name="dinner idea",
description="""
It's saturday evening, the user is very hungry and tired,
but have no money to order out, so they are looking for a recipe.
""",
agents=[
Agent(),
scenario.UserSimulatorAgent(),
scenario.JudgeAgent(
criteria=[
"Agent should not ask more than two follow-up questions",
"Agent should generate a recipe",
"Recipe should include a list of ingredients",
"Recipe should include step-by-step cooking instructions",
"Recipe should be vegetarian and not include any sort of meat",
]
),
],
set_id="python-examples",
)
# Assert for pytest to know whether the test passed
assert result.success
# Example agent implementation
import litellm
@scenario.cache()
def vegetarian_recipe_agent(messages) -> scenario.AgentReturnTypes:
response = litellm.completion(
model="openai/gpt-4.1",
messages=[
{
"role": "system",
"content": """
You are a vegetarian recipe agent.
Given the user request, ask AT MOST ONE follow-up question,
then provide a complete recipe. Keep your responses concise and focused.
""",
},
*messages,
],
)
return response.choices[0].message # type: ignore
```
</details>
<details>
<summary><strong>Quick Start - TypeScript</strong></summary>
Save it as `tests/vegetarian-recipe-agent.test.ts`:
```typescript
import scenario, { type AgentAdapter, AgentRole } from "@langwatch/scenario";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { describe, it, expect } from "vitest";
describe("Vegetarian Recipe Agent", () => {
const agent: AgentAdapter = {
role: AgentRole.AGENT,
call: async (input) => {
const response = await generateText({
model: openai("gpt-4.1"),
messages: [
{
role: "system",
content: `You are a vegetarian recipe agent.\nGiven the user request, ask AT MOST ONE follow-up question, then provide a complete recipe. Keep your responses concise and focused.`,
},
...input.messages,
],
});
return response.text;
},
};
it("should generate a vegetarian recipe for a hungry and tired user on a Saturday evening", async () => {
const result = await scenario.run({
name: "dinner idea",
description: `It's saturday evening, the user is very hungry and tired, but have no money to order out, so they are looking for a recipe.`,
agents: [
agent,
scenario.userSimulatorAgent(),
scenario.judgeAgent({
model: openai("gpt-4.1"),
criteria: [
"Agent should not ask more than two follow-up questions",
"Agent should generate a recipe",
"Recipe should include a list of ingredients",
"Recipe should include step-by-step cooking instructions",
"Recipe should be vegetarian and not include any sort of meat",
],
}),
],
setId: "javascript-examples",
});
expect(result.success).toBe(true);
});
});
```
</details>
Export your OpenAI API key:
```bash
export OPENAI_API_KEY=<your-api-key>
```
Now run the test:
```bash
# on python
pytest -s tests/test_vegetarian_recipe_agent.py
# on typescript
npx vitest run tests/vegetarian-recipe-agent.test.ts
```
This is how it will look:
[](https://asciinema.org/a/nvO5GWGzqKTTCd8gtNSezQw11)
You can find the same code example in [python/examples/](python/examples/test_vegetarian_recipe_agent.py) or [javascript/examples/](javascript/examples/vitest/tests/vegetarian-recipe-agent.test.ts).
Now check out the [full documentation](https://scenario.langwatch.ai) to learn more and next steps.
## Simulation on Autopilot
If you provide a User Simulator Agent and a scenario description without a script, the simulated user will automatically generate messages to the agent until the scenario succeeds or the maximum number of turns is reached.
You can then use a Judge Agent to evaluate the scenario in real time against given criteria; at every turn, the Judge Agent decides whether to let the simulation proceed or to end it with a verdict.
For example, here is a scenario that tests a vibe coding assistant:
```python
result = await scenario.run(
name="dog walking startup landing page",
description="""
the user wants to create a new landing page for their dog walking startup
send the first message to generate the landing page, then a single follow up request to extend it, then give your final verdict
""",
agents=[
LovableAgentAdapter(template_path=template_path),
scenario.UserSimulatorAgent(),
scenario.JudgeAgent(
criteria=[
"agent reads the files before go and making changes",
"agent modified the index.css file, not only the Index.tsx file",
"agent created a comprehensive landing page",
"agent extended the landing page with a new section",
"agent should NOT say it can't read the file",
"agent should NOT produce incomplete code or be too lazy to finish",
],
),
],
max_turns=5, # optional
)
```
Check out the fully working Lovable Clone example in [examples/test_lovable_clone.py](examples/test_lovable_clone.py).
You can also combine it with a partial script: for example, control only the beginning of the conversation and let the rest proceed on autopilot, as described in the next section.
## Full Control of the Conversation
You can specify a script for guiding the scenario by passing a list of steps to the `script` field, those steps are simply arbitrary functions that take the current state of the scenario as an argument, so you can do things like:
- Control what the user says, or let it be generated automatically
- Control what the agent says, or let it be generated automatically
- Add custom assertions, for example making sure a tool was called
- Add a custom evaluation, from an external library
- Let the simulation proceed for a certain number of turns, and evaluate at each new turn
- Trigger the judge agent to decide on a verdict
- Add arbitrary messages like mock tool calls in the middle of the conversation
Everything is possible, using the same simple structure:
```python
@pytest.mark.agent_test
@pytest.mark.asyncio
async def test_early_assumption_bias():
result = await scenario.run(
name="early assumption bias",
description="""
The agent makes false assumption that the user is talking about an ATM bank, and user corrects it that they actually mean river banks
""",
agents=[
Agent(),
scenario.UserSimulatorAgent(),
scenario.JudgeAgent(
criteria=[
"user should get good recommendations on river crossing",
"agent should NOT keep following up about ATM recommendation after user has corrected them that they are actually just hiking",
],
),
],
max_turns=10,
script=[
# Define hardcoded messages
scenario.agent("Hello, how can I help you today?"),
scenario.user("how do I safely approach a bank?"),
# Or let it be generated automatically
scenario.agent(),
# Add custom assertions, for example making sure a tool was called
check_if_tool_was_called,
# Generate a user follow-up message
scenario.user(),
# Let the simulation proceed for 2 more turns, print at every turn
scenario.proceed(
turns=2,
on_turn=lambda state: print(f"Turn {state.current_turn}: {state.messages}"),
),
# Time to make a judgment call
scenario.judge(),
],
)
assert result.success
```
## LangWatch Visualization
Set your [LangWatch API key](https://app.langwatch.ai/) to visualize the scenarios in real-time, as they run, for a much better debugging experience and team collaboration:
```bash
export LANGWATCH_API_KEY="your-api-key"
```

## Debug mode
You can enable debug mode by setting the `debug` field to `True` in the `Scenario.configure` method or in the specific scenario you are running, or by passing the `--debug` flag to pytest.
Debug mode allows you to see the messages in slow motion step by step, and intervene with your own inputs to debug your agent from the middle of the conversation.
```python
scenario.configure(default_model="openai/gpt-4.1", debug=True)
```
or
```bash
pytest -s tests/test_vegetarian_recipe_agent.py --debug
```
## Cache
Each time a scenario runs, the testing agent might choose a different input to start with. This is good for covering the variance of real users, but we understand that this non-determinism can make runs less repeatable, more costly, and harder to debug. To solve this, use the `cache_key` field in the `Scenario.configure` method or in the specific scenario you are running; this makes the testing agent give the same input for the same scenario:
```python
scenario.configure(default_model="openai/gpt-4.1", cache_key="42")
```
To bust the cache, you can simply pass a different `cache_key`, disable it, or delete the cache files located at `~/.scenario/cache`.
To go a step further and fully cache the test end-to-end, you can also wrap the LLM calls or any other non-deterministic functions on the application side with the `@scenario.cache` decorator:
```python
# Inside your actual agent implementation
class MyAgent:
@scenario.cache()
def invoke(self, message, context):
return client.chat.completions.create(
# ...
)
```
This will cache any function call you decorate when running the tests and make them repeatable, hashed by the function arguments, the scenario being executed, and the `cache_key` you provided. You can exclude arguments that should not be hashed for the cache key by naming them in the `ignore` argument.
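The hashing idea can be sketched in plain Python (an illustration of the concept, not Scenario's actual cache): the key is derived from a fixed `cache_key` plus the function's arguments, with named arguments excluded via `ignore`.

```python
import functools
import hashlib
import json

def cache(cache_key="42", ignore=()):
    """Memoize by a hash of (cache_key, args), skipping ignored kwargs."""
    store = {}

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            hashed = {k: v for k, v in kwargs.items() if k not in ignore}
            raw = json.dumps([cache_key, fn.__name__, args, hashed],
                             sort_keys=True, default=str)
            digest = hashlib.sha256(raw.encode()).hexdigest()
            if digest not in store:
                store[digest] = fn(*args, **kwargs)
            return store[digest]
        return wrapper
    return decorator

calls = []

@cache(ignore=("session",))
def ask_llm(prompt, session=None):
    calls.append(prompt)
    return f"answer to {prompt}"

ask_llm("hi", session="a")
ask_llm("hi", session="b")  # same hash: "session" is ignored
print(len(calls))
```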
## Grouping Your Sets and Batches
While optional, we strongly recommend setting stable identifiers for your scenarios, sets, and batches for better organization and tracking in LangWatch.
- **set_id**: Groups related scenarios into a test suite. This corresponds to the "Simulation Set" in the UI.
- **SCENARIO_BATCH_RUN_ID**: Env variable that groups all scenarios that were run together in a single execution (e.g., a single CI job). This is automatically generated but can be overridden.
```python
import os
result = await scenario.run(
name="my first scenario",
description="A simple test to see if the agent responds.",
set_id="my-test-suite",
agents=[
scenario.Agent(my_agent),
scenario.UserSimulatorAgent(),
]
)
```
You can also set the `batch_run_id` using environment variables for CI/CD integration:
```python
import os
# Set batch ID for CI/CD integration
os.environ["SCENARIO_BATCH_RUN_ID"] = os.environ.get("GITHUB_RUN_ID", "local-run")
result = await scenario.run(
name="my first scenario",
description="A simple test to see if the agent responds.",
set_id="my-test-suite",
agents=[
scenario.Agent(my_agent),
scenario.UserSimulatorAgent(),
]
)
```
The `batch_run_id` is automatically generated for each test run, but you can also set it globally using the `SCENARIO_BATCH_RUN_ID` environment variable.
## Disable Output
You can remove the `-s` flag from pytest to hide the output during test, which will only show up if the test fails. Alternatively, you can set `verbose=False` in the `Scenario.configure` method or in the specific scenario you are running.
## Running in parallel
As the number of your scenarios grows, you might want to run them in parallel to speed up your whole test suite. We suggest you to use the [pytest-asyncio-concurrent](https://pypi.org/project/pytest-asyncio-concurrent/) plugin to do so.
Simply install the plugin from the link above, then replace the `@pytest.mark.asyncio` annotation in the tests with `@pytest.mark.asyncio_concurrent`, adding a group name to mark the group of scenarios that should run in parallel together, e.g.:
@pytest.mark.agent_test
@pytest.mark.asyncio_concurrent(group="vegetarian_recipe_agent")
async def test_vegetarian_recipe_agent():
# ...
@pytest.mark.agent_test
@pytest.mark.asyncio_concurrent(group="vegetarian_recipe_agent")
async def test_user_is_very_hungry():
# ...
```
Those two scenarios should now run in parallel.
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## Support
- 📖 [Documentation](https://scenario.langwatch.ai)
- 💬 [Discord Community](https://discord.gg/langwatch)
- 🐛 [Issue Tracker](https://github.com/langwatch/scenario/issues)
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | LangWatch Team <support@langwatch.ai> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=8.1.1",
"pytest-rerunfailures>=14.0",
"litellm>=1.49.0",
"openai>=1.88.0",
"python-dotenv>=1.0.1",
"termcolor>=2.4.0",
"pydantic>=2.7.0",
"joblib>=1.4.2",
"wrapt>=1.17.2",
"pytest-asyncio>=0.26.0",
"rich<15.0.0,>=13.3.3",
"pksuid>=1.1.2",
"httpx>=0.27.0",
"rx>=3.2.0",
"python-da... | [] | [] | [] | [
"Homepage, https://github.com/langwatch/scenario",
"Bug Tracker, https://github.com/langwatch/scenario/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T08:54:09.885317 | langwatch_scenario-0.7.16.tar.gz | 131,528 | 66/58/c19486bcb47d726f6df76139db66493bd12c9390184bb77f95e4c87607df/langwatch_scenario-0.7.16.tar.gz | source | sdist | null | false | 2a17429b62e27c0691d0a861de0e57d6 | a4faf568c5abb65c90470b374c4aca93cf3842b25f315534c398b2465e102f41 | 6658c19486bcb47d726f6df76139db66493bd12c9390184bb77f95e4c87607df | null | [] | 522 |
2.4 | zdx | 0.2.3.dev3 | Z Doc Builder Experience by Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/zdx/">
<img src="https://img.shields.io/pypi/dm/zdx" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/zdx/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/zdx.svg"/></a>
<a href="https://pypi.org/project/zdx/">
<img src="https://img.shields.io/pypi/pyversions/zdx" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/zdx/">
<img src="https://img.shields.io/pypi/l/zdx" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/zdx/">
<img src="https://img.shields.io/pypi/v/zdx?label=zdx&color=green" alt="PyPI - zdx"/></a>
</p>
---
# ZDX
Z Doc Builder Experience by Swarmauri. Reusable tooling to build and serve MkDocs-based documentation sites for Swarmauri projects.
## Features
- Material for MkDocs configuration
- mkdocstrings support for API reference generation
- README harvesting across workspaces
- Dockerfile for containerized deployments
## Installation
```bash
pip install zdx
```
## Getting Started
1. Place your documentation sources in a dedicated directory (e.g., `docs/swarmauri-sdk`).
2. Customize `mkdocs.yml` and `api_manifest.yaml` for your project.
3. Install `zdx` and use the `zdx` CLI to build and preview the site.
## CLI Usage
### Generate from a manifest
```bash
zdx generate --docs-dir /path/to/docs --manifest api_manifest.yaml
```
### Build READMEs into pages
```bash
zdx readmes --docs-dir /path/to/docs
```
### Serve the site
```bash
zdx serve --docs-dir /path/to/docs
```
### Generate and serve together
```bash
zdx serve --generate --docs-dir /path/to/docs --manifest api_manifest.yaml
```
### Control failure handling
Use the `--on-error` flag with any command that installs packages or generates
documentation to decide how strictly `zdx` reacts to subprocess failures:
```bash
zdx generate --on-error warn
```
Valid values are:
* `fail` – stop immediately if a package installation or API build fails
* `warn` – print a warning but keep going
* `ignore` – suppress warnings and continue
You can also define the `ZDX_FAILURE_MODE` environment variable to set the
default for every invocation. For example, the provided Docker assets default to
`warn` so that optional packages do not abort a deployment.
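The three modes can be pictured with a small sketch (a hypothetical helper, not zdx's actual code; only the `fail`/`warn`/`ignore` semantics and the `ZDX_FAILURE_MODE` variable come from the docs above):

```python
import os
import warnings
from typing import Optional

def handle_subprocess_failure(message: str, mode: Optional[str] = None) -> None:
    # An explicit mode (e.g. from --on-error) wins; otherwise fall back to
    # the ZDX_FAILURE_MODE environment variable, defaulting to "fail".
    mode = mode or os.environ.get("ZDX_FAILURE_MODE", "fail")
    if mode == "fail":
        raise RuntimeError(message)
    if mode == "warn":
        warnings.warn(message)
    # "ignore": swallow the failure and continue
```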
## API Manifest
`zdx` uses a YAML manifest to decide which Python packages to document and how
those pages are organized. Each project keeps its manifest alongside the docs
sources. This repository includes manifests for:
| Path | Purpose |
| --- | --- |
| `infra/docs/swarmauri-sdk/api_manifest.yaml` | Generates API docs for the SDK packages. |
| `infra/docs/peagen/api_manifest.yaml` | Drives documentation for the `peagen` tool. |
| `infra/docs/tigrbl/api_manifest.yaml` | Builds docs for the `tigrbl` project. |
| `pkgs/community/zdx/api_manifest.yaml` | Template manifest for new sites. |
### Manifest Schema
An `api_manifest.yaml` file defines a list of **targets**. Each target produces
Markdown pages under `docs/api/<name>/` and a corresponding navigation entry in
the MkDocs configuration.
```yaml
targets:
- name: Core
package: swarmauri_core
search_path: /pkgs/core
include:
- swarmauri_core.*
exclude:
- "*.tests.*"
```
Every field controls a different part of the generation process:
| Field | Description |
| --- | --- |
| `name` | Label used for the top‑level navigation item and the folder under `docs/api/`. |
| `search_path` | Directory containing the source package(s). The generator scans this path for modules. |
| `package` | Root import name when documenting a single package. Omit when using `discover`. |
| `discover` | When `true`, automatically finds all packages under `search_path` and builds docs for each. Generates separate folders and nav entries per package. |
| `include` | Glob patterns of fully qualified modules to include. Classes from matching modules get individual Markdown files. |
| `exclude` | Glob patterns of modules to skip. Use to drop tests or experimental code from the docs. |
For each module that survives the include/exclude filters, `zdx` writes a page
per public class. Pages land in
`docs/api/<name>/<module path>/<Class>.md` and a simple `index.md` is created if
one does not exist. The navigation section for the target is appended to
`mkdocs.yml`, ensuring the new pages appear in the rendered site.
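The include/exclude filtering can be approximated with `fnmatch`-style glob matching (an illustrative sketch of the semantics, not zdx's implementation):

```python
from fnmatch import fnmatch

def select_modules(modules, include, exclude):
    # Keep modules matching any include pattern, then drop those
    # matching any exclude pattern (e.g. "*.tests.*").
    kept = [m for m in modules if any(fnmatch(m, p) for p in include)]
    return [m for m in kept if not any(fnmatch(m, p) for p in exclude)]

mods = ["swarmauri_core.parsers", "swarmauri_core.tests.test_parsers", "other_pkg.utils"]
print(select_modules(mods, ["swarmauri_core.*"], ["*.tests.*"]))
# -> ['swarmauri_core.parsers']
```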
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | mkdocs, documentation, swarmauri, docs, doc builder, zdx, doc, builder, experience | [
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.12 | [] | [] | [] | [
"griffe-typingdoc",
"mkdocs",
"mkdocs-autorefs",
"mkdocs-gen-files",
"mkdocs-literate-nav",
"mkdocs-material<10.0.0,>=9.6.4",
"mkdocstrings[python]<0.29.0,>=0.28.1",
"pyyaml>=6.0",
"ruff>=0.5"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:49.034003 | zdx-0.2.3.dev3.tar.gz | 19,350 | 6e/c5/f59dc72eb2410c0f673b8908f64c3976cdd559691c9f0950a918ec98ee22/zdx-0.2.3.dev3.tar.gz | source | sdist | null | false | 315a906da7b507909a2c1fe56a02167d | 198bc6981a69a5253bf82c5465817889aff60105fcd6c0da29e7c4a042a4a004 | 6ec5f59dc72eb2410c0f673b8908f64c3976cdd559691c9f0950a918ec98ee22 | Apache-2.0 | [
"LICENSE"
] | 219 |
2.4 | swm_example_package | 0.6.3.dev3 | This repository includes an example of a First Class Swarmauri Example. |

<p align="center">
<a href="https://pypi.org/project/swm-example-package/">
<img src="https://img.shields.io/pypi/dm/swm-example-package" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/swm_example_package/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/swm_example_package.svg"/></a>
<a href="https://pypi.org/project/swm-example-package/">
<img src="https://img.shields.io/pypi/pyversions/swm-example-package" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swm-example-package/">
<img src="https://img.shields.io/pypi/l/swm-example-package" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swm-example-package/">
<img src="https://img.shields.io/pypi/v/swm-example-package?label=swm-example-package&color=green" alt="PyPI - swm-example-package"/></a>
</p>
---
# Swm Example Package
{Brief package description}
## Installation
```bash
pip install {package_name}
```
## Usage
Basic usage examples with code snippets
```python
from swarmauri.{resource_name}.{MainClass} import {MainClass}
```
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swm, sdk, standards, example, package | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:43.838499 | swm_example_package-0.6.3.dev3.tar.gz | 6,166 | a6/6e/c1556366650a391f8b90bb9d97cefa96a37202d49762f6ff7d7d85321eaf/swm_example_package-0.6.3.dev3.tar.gz | source | sdist | null | false | f71f8d890e8440769a2f29a213d5578b | 5ac89932d660bb34e0e5df3ac99f215a7313f4c0eec4a1cf7763fabfc2fd5d00 | a66ec1556366650a391f8b90bb9d97cefa96a37202d49762f6ff7d7d85321eaf | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | wcag_pdf_pytest | 0.2.2.dev3 | Pytest plugin for WCAG 2.1 A/AA/AAA compliance checks on PDFs. | 
<p align="center">
<a href="https://pypi.org/project/wcag-pdf-pytest/">
<img src="https://img.shields.io/pypi/dm/wcag-pdf-pytest" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/experimental/wcag_pdf_pytest/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/experimental/wcag_pdf_pytest.svg"/></a>
<a href="https://pypi.org/project/wcag-pdf-pytest/">
<img src="https://img.shields.io/pypi/pyversions/wcag-pdf-pytest" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/wcag-pdf-pytest/">
<img src="https://img.shields.io/pypi/l/wcag-pdf-pytest" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/wcag-pdf-pytest/">
<img src="https://img.shields.io/pypi/v/wcag-pdf-pytest?label=wcag-pdf-pytest&color=green" alt="PyPI - wcag-pdf-pytest"/></a>
</p>
---
# wcag-pdf-pytest
`wcag-pdf-pytest` is a [pytest](https://docs.pytest.org/) plugin that scaffolds WCAG 2.1
compliance testing for PDF documents. The plugin auto-generates one test per WCAG 2.1
Success Criterion (SC) that applies to PDFs, organizes them by conformance level, and
exposes CLI switches for narrowing execution to the criteria relevant to your review.
## Features
- **SC-aware test generation** — each applicable WCAG 2.1 SC ships as an individual pytest
module with structured docstrings that document the requirement.
- **Level-based selection** — mark sets (`A`, `AA`, `AAA`) and CLI flags allow you to focus on
the conformance tier under audit.
- **Context-aware filtering** — opt-in inclusion of "Depends" criteria lets you decide when
to execute context-sensitive checks.
- **Extensible PDF inspection** — centralize heavy lifting in `pdf_inspector.py` so new
assertions or document parsers can be layered in without touching the generated tests.
## Installation
### pip
```bash
pip install wcag-pdf-pytest[pdf]
```
### uv
```bash
uv add wcag-pdf-pytest[pdf]
```
Install the optional `pdf` extra when you need the bundled `pypdf` helpers.
## Usage
Run the full WCAG 2.1 test suite against a PDF document:
```bash
pytest --wcag-pdf path/to/file.pdf
```
Execute only Level AA criteria using pytest markers:
```bash
pytest -m "AA" --wcag-pdf path/to/file.pdf
```
The plugin also exposes explicit CLI options:
```bash
pytest --wcag-pdf-level AA --wcag-pdf path/to/file.pdf
pytest --wcag-pdf-include-depends --wcag-pdf path/to/file.pdf
```
Tests are namespaced under the `wcag21` marker to keep discovery isolated from other pytest
plugins. Each test is seeded with an `xfail` placeholder so you can progressively
replace the stub assertion with a concrete PDF accessibility check.
## CLI reference
| Option | Description |
| --- | --- |
| `--wcag-pdf` | Path to one or more PDF files to validate. |
| `--wcag-pdf-level {A,AA,AAA}` | Restrict execution to a specific WCAG level. |
| `--wcag-pdf-include-depends` | Execute context-dependent criteria flagged as "Depends". |
Combine CLI switches with pytest marker expressions for granular selection during CI runs.
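One way to picture level selection is as a filter over (criterion, level) pairs; the sketch below assumes cumulative tiers (AA includes A, AAA includes AA), which matches WCAG conformance nesting but may differ from the plugin's exact `--wcag-pdf-level` semantics:

```python
def criteria_for_level(criteria, level):
    # WCAG conformance levels nest: AA conformance requires all A criteria,
    # and AAA requires all A and AA criteria.
    tiers = {"A": {"A"}, "AA": {"A", "AA"}, "AAA": {"A", "AA", "AAA"}}
    allowed = tiers[level]
    return [(sc, lv) for sc, lv in criteria if lv in allowed]

criteria = [("1.1.1", "A"), ("1.4.3", "AA"), ("1.4.6", "AAA")]
print(criteria_for_level(criteria, "AA"))
# -> [('1.1.1', 'A'), ('1.4.3', 'AA')]
```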
## Extending PDF checks
Extend `wcag_pdf_pytest/pdf_inspector.py` with reusable utilities that evaluate
individual criteria. Generated tests should stay declarative — import helpers from the
inspector module to keep assertions maintainable and consistent across the suite.
## License
Apache License 2.0. See [LICENSE](LICENSE) for details.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | pytest, wcag, accessibility, pdf, wcag2.1, pdf-accessibility, compliance, testing | [
"Development Status :: 1 - Planning",
"Framework :: Pytest",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: ... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pypdf>=3.17; extra == \"pdf\"",
"pytest>=8.2"
] | [] | [] | [] | [
"Documentation, https://github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/experimental/wcag_pdf_pytest#readme",
"Homepage, https://github.com/swarmauri/swarmauri-sdk",
"Issues, https://github.com/swarmauri/swarmauri-sdk/issues",
"Source, https://github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/experiment... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:38.452575 | wcag_pdf_pytest-0.2.2.dev3-py3-none-any.whl | 58,389 | 03/10/821e07c37e22c9fb6807a3a677ea102f28007f755bbc4e9a3e46d97c2fa3/wcag_pdf_pytest-0.2.2.dev3-py3-none-any.whl | py3 | bdist_wheel | null | false | 9087c5700a1cc2849b0f515b5709ecc3 | 083bddfb4ac54aefa663c6bc87fcc0833125443083fd2dbdfdd32b07b7d86ae9 | 0310821e07c37e22c9fb6807a3a677ea102f28007f755bbc4e9a3e46d97c2fa3 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | peagen | 0.2.1.dev5 | Swarmauri's Peagan - An AI-driven contextual, dependency-based scaffolding tool for rapid content generation. |

<p align="center">
<a href="https://pypi.org/project/peagen/">
<img src="https://img.shields.io/pypi/dm/peagen" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/peagen/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/peagen.svg"/></a>
<a href="https://pypi.org/project/peagen/">
<img src="https://img.shields.io/pypi/pyversions/peagen" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/peagen/">
<img src="https://img.shields.io/pypi/l/peagen" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/peagen/">
<img src="https://img.shields.io/pypi/v/peagen?label=peagen&color=green" alt="PyPI - peagen"/></a>
</p>
---
# Peagen: a Template‑Driven Workflow
## Terminology
- **Tenant** – a namespace used to group related resources, such as repositories.
- **Principal** – an owner of resources (for example, an individual user or an organization).
## Why Use the Peagen CLI?
#### Reduced Variance in LLM‑Driven Generation
- While LLMs inherently introduce some nondeterminism, Peagen’s structured prompts, injected examples, and dependency‑aware ordering significantly reduce output variance. You’ll still see slight variations on each run, but far less than with ad‑hoc prompt calls.
#### Consistency & Repeatability
- By centralizing file definitions in a YAML payload plus Jinja2 `ptree.yaml.j2` templates, Peagen ensures every project run follows the same logic. Changes to templates or project YAML immediately propagate on the next `peagen` invocation.
#### No Vector Store—Pure DAG + Jinja2
- Peagen does *not* rely on a vector store or similarity search. Instead, it constructs a directed acyclic graph (DAG) of inter‑file dependencies, then topologically sorts files to determine processing order. Dependencies and example snippets are injected directly into prompt templates via Jinja2.
#### Built‑In Dependency Management
- The CLI’s `--transitive` flag toggles between strict and transitive dependency sorts, so you can include or exclude indirect dependencies in your generation run.
#### Seamless LLM Integration
- In GENERATE mode, the CLI automatically fills agent‑prompt templates with context and dependency examples, calls your configured LLM (e.g. OpenAI’s GPT‑4), and writes back the generated content. All model parameters (provider, model name, temperature) flow through CLI flags and environment variables—no extra scripting needed.
---
## When to Choose CLI over the Programmatic API
### Interactive Iteration
- Quickly regenerate after tweaking templates or YAML with a single shell command—faster than editing and running a Python script.
### CI/CD Enforcement
- Embed `peagen local sort` and `peagen local process` in pipelines (GitHub Actions, Jenkins, etc.) to ensure generated artifacts stay up to date. Exit codes and verbosity flags integrate seamlessly with automation tools.
### Polyglot & Minimal Overhead
- Teams in Java, Rust, Go, or any language can use Peagen by installing and invoking the CLI—no Python API import paths to manage.
### What Is Peagen?
#### Core Concepts
> Peagen is a template‑driven orchestration engine that transforms high‑level project definitions into concrete files - statically rendered or LLM‑generated - while respecting inter‑file dependencies.
Peagen’s orchestration layer now lives in the handler modules that back each CLI command. Rather than instantiating a `Peagen` class, the CLI builds `peagen.orm.Task` models and forwards them to functions such as `peagen.handlers.process_handler.process_handler`. Those handlers manage the Git work-tree supplied in `task.args["worktree"]`, resolve configuration via `PluginManager`, and delegate to helper routines in `peagen.core`.
Key primitives you will encounter in the codebase are:
- `load_projects_payload(projects_payload)`
Parses YAML (or an already loaded document) into a list of project dictionaries and validates it against the current schema.
- `process_all_projects(projects_payload, cfg, transitive=False)`
Renders every project inside the Git work-tree referenced by `cfg["worktree"]`.
- `process_single_project(project, cfg, start_idx=0, start_file=None, transitive=False)`
Executes the render → dependency sort → file processing pipeline for a single project and returns both the processed records and the next index.
- `sort_file_records(file_records, *, start_idx=0, start_file=None, transitive=False)`
Implements the deterministic topological ordering used by both the CLI and worker processes. Dependencies are expressed via `record["EXTRAS"]["DEPENDENCIES"]`.
Every call into `process_core` expects a configuration dictionary containing a `pathlib.Path` under `cfg["worktree"]`. That path is where copy templates are materialised, generated files are written, and—if a `GitVCS` instance is available—commits are staged and pushed. Understanding these functions is the quickest way to extend Peagen programmatically because the CLI is a thin wrapper over them.
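The deterministic dependency ordering that `sort_file_records` provides can be sketched with the stdlib `graphlib` (illustrative only; Peagen's real implementation also handles start indices and the transitive option):

```python
from graphlib import TopologicalSorter

def order_records(file_records):
    # Build a DAG keyed by rendered file name, with edges from each
    # record to the names listed in EXTRAS.DEPENDENCIES.
    ts = TopologicalSorter()
    for rec in file_records:
        deps = rec.get("EXTRAS", {}).get("DEPENDENCIES", [])
        ts.add(rec["RENDERED_FILE_NAME"], *deps)
    by_name = {r["RENDERED_FILE_NAME"]: r for r in file_records}
    # static_order() yields dependencies before their dependents.
    return [by_name[n] for n in ts.static_order() if n in by_name]

records = [
    {"RENDERED_FILE_NAME": "components/db.py", "EXTRAS": {"DEPENDENCIES": ["README.md"]}},
    {"RENDERED_FILE_NAME": "README.md", "EXTRAS": {"DEPENDENCIES": []}},
    {"RENDERED_FILE_NAME": "components/service.py", "EXTRAS": {"DEPENDENCIES": ["components/db.py"]}},
]
print([r["RENDERED_FILE_NAME"] for r in order_records(records)])
# -> ['README.md', 'components/db.py', 'components/service.py']
```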
---
## Prerequisites & Setup
### Installing Peagen
```bash
# From PyPI (recommended)
pip install peagen
# With Poetry
poetry add peagen
# With uv managing pyproject dependencies
uv add peagen
# Install the CLI globally with uv
uv tool install peagen
# From source (latest development)
git clone https://github.com/swarmauri/swarmauri-sdk.git
cd pkgs/standards/peagen
pip install .
peagen --help
```
### Executing `peagen --help`
```bash
peagen --help
```

### Configuring `OPENAI_API_KEY`
```bash
export OPENAI_API_KEY="sk-…"
```
### CLI Defaults via `.peagen.toml`
Create a `.peagen.toml` in your project root to store provider credentials and
command defaults. A typical configuration might look like:
```toml
# .peagen.toml
[llm]
default_provider = "openai"
default_model_name = "gpt-4"
[llm.api_keys]
openai = "sk-..."
[storage]
default_filter = "file"
[storage.filters.file]
output_dir = "./peagen_artifacts"
[vcs]
default_vcs = "git"
[vcs.adapters.git]
mirror_git_url = "${MIRROR_GIT_URL}"
mirror_git_token = "${MIRROR_GIT_TOKEN}"
owner = "${OWNER}"
[vcs.adapters.git.remotes]
origin = "${GITEA_REMOTE}"
upstream = "${GITHUB_REMOTE}"
```
With these values in place you can omit `--provider`, `--model-name`, and other
flags when running the CLI.
If `--provider` is omitted and no `default_provider` is configured (or the
`PROVIDER` environment variable is unset), Peagen will raise an error.
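Values such as `${MIRROR_GIT_URL}` in the `.peagen.toml` example above suggest environment-variable interpolation. A minimal sketch of that expansion, assuming `${VAR}` placeholders are resolved from the process environment (stdlib `string.Template`, not Peagen's actual loader):

```python
import os
from string import Template

def expand_env(value: str) -> str:
    # Replace ${VAR} placeholders with environment values,
    # leaving unknown placeholders untouched.
    return Template(value).safe_substitute(os.environ)

os.environ["MIRROR_GIT_URL"] = "https://git.example.com/mirror.git"
print(expand_env("${MIRROR_GIT_URL}"))
# -> https://git.example.com/mirror.git
```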
### Project YAML Schema Overview
```yaml
# projects_payload.yaml
PROJECTS:
- NAME: "ExampleParserProject"
ROOT: "pkgs"
TEMPLATE_SET: "swarmauri_base"
PACKAGES:
- NAME: "base/swarmauri_base"
MODULES:
- NAME: "ParserBase"
EXTRAS:
PURPOSE: "Provide a base implementation of the interface class."
DESCRIPTION: "Base implementation of the interface class"
REQUIREMENTS:
- "Should inherit from the interface first and ComponentBase second."
RESOURCE_KIND: "parsers"
INTERFACE_NAME: "IParser"
INTERFACE_FILE: "pkgs/core/swarmauri_core/parsers/IParser.py"
```
---
## CLI Entry Point Overview
Peagen’s CLI is organised into four top-level groups:
* `peagen fetch` – materialise workspaces and template sets on the local machine.
* `peagen local …` – run handlers directly against the current environment.
* `peagen remote …` – submit tasks to a JSON-RPC gateway.
* `peagen tui` – launch the textual dashboard.
### `peagen fetch`
Synchronise workspaces locally (optionally cloning source packages and installing template sets).
```bash
peagen fetch [WORKSPACE_URI ...] \
[--no-source/--with-source] \
[--install-template-sets/--no-install-template-sets] \
[--repo <GIT_URL>] \
[--ref <REF>]
```
### `peagen local process`
Render and/or generate files for one or more projects inside an existing Git work-tree.
```bash
peagen local process <PROJECTS_YAML> \
--repo <PATH_OR_URL> \
[--ref <REF>] \
[--project-name <NAME>] \
[--start-idx <NUM>] \
[--start-file <PATH>] \
[--transitive/--no-transitive] \
[--agent-env '{"provider": "openai", "model_name": "gpt-4"}'] \
[--output-base <PATH>]
```
Pass `--repo $(pwd)` when operating on a local checkout so the handler can construct the work-tree that `process_core` expects.

### `peagen remote process`
Submit a processing task to a Peagen gateway.
```bash
peagen remote process <PROJECTS_YAML> \
--repo <GIT_URL> \
[--ref <REF>] \
[--project-name <NAME>] \
[--start-idx <NUM>] \
[--start-file <PATH>] \
[--transitive/--no-transitive] \
[--agent-env <JSON>] \
[--output-base <PATH>] \
[--watch/-w] \
[--interval/-i <SECONDS>]
```
### `peagen local sort`
Inspect the dependency order without writing files.
```bash
peagen local sort <PROJECTS_YAML> \
[--repo <PATH_OR_URL>] \
[--ref <REF>] \
[--project-name <NAME>] \
[--start-idx <NUM>] \
[--start-file <PATH>] \
[--transitive/--no-transitive] \
[--show-dependencies]
```

### `peagen remote sort`
```bash
peagen remote sort <PROJECTS_YAML> \
--repo <GIT_URL> \
[--ref <REF>] \
[--project-name <NAME>] \
[--start-idx <NUM>] \
[--start-file <PATH>] \
[--transitive/--no-transitive] \
[--show-dependencies] \
[--watch/-w] \
[--interval/-i <SECONDS>]
```
### `peagen local template-set list`
List available template sets and their directories:
```bash
peagen local template-set list
```

Remote equivalents (`peagen remote template-set …`) provide the same operations via the gateway.
### `peagen local doe gen`
Expand a Design-of-Experiments spec into a `project_payloads.yaml` bundle.
```bash
peagen local doe gen <DOE_SPEC_YML> <TEMPLATE_PROJECT> \
[--output project_payloads.yaml] \
[-c PATH | --config PATH] \
[--dry-run] \
[--force]
```
Craft `doe_spec.yml` using the scaffold created by `peagen init doe-spec`. Follow the editing guidelines in [`peagen/scaffold/doe_spec/README.md`](peagen/scaffold/doe_spec/README.md): update factor levels, run `peagen validate doe-spec doe_spec.yml`, bump the version in `spec.yaml`, and never mutate published versions. Remote execution is available via `peagen remote doe gen` with the same flags plus `--repo/--ref` when targeting a gateway.
### `peagen local db upgrade`
Apply Alembic migrations to the latest version. Run this command before starting the gateway to ensure the database schema is current.
```bash
peagen local db upgrade
```
Run migrations on a gateway instance:
```bash
peagen remote db upgrade
```
### Remote Processing with Multi-Tenancy
```bash
peagen remote process projects.yaml \
--gateway-url http://localhost:8000/rpc \
--pool acme-lab \
--repo https://example.com/org/repo.git
```
Pass `--pool` to target a specific tenant or workspace when submitting tasks to the gateway. The shared handler surfaces `--repo` and `--ref` so workflows can operate on any Git repository and reference.
---
## Examples & Walkthroughs
### Single‑Project Processing Example
```bash
peagen local process projects.yaml \
--repo $(pwd) \
--project-name MyProject \
--agent-env '{"provider": "openai", "model_name": "gpt-4"}' \
-v
```
* Loads only `MyProject` from `projects.yaml`.
* Renders its `ptree.yaml.j2` into file records.
* Builds the dependency DAG and topologically sorts it.
* Processes each record: static or LLM‑generated.
### Batch Processing All Projects
```bash
peagen local process projects.yaml \
--repo $(pwd) \
--agent-env '{"provider": "openai", "model_name": "gpt-4"}' \
-vv
```
* Iterates every project in `projects.yaml`.
* Processes them sequentially (load → render → sort → generate).
* Uses DEBUG logs to print full DAGs and rendered prompts.
### Transitive Dependency Sorting with Resumption
```bash
peagen local process projects.yaml \
--repo $(pwd) \
--project-name AnalyticsService \
--transitive \
--start-file services/data_pipeline.py \
-v
```
* Builds full DAG including indirect dependencies.
* Topologically sorts all records.
* Skips ahead to `services/data_pipeline.py` and processes from there.
### Python API: dependency ordering
```python
from peagen.core.sort_core import sort_file_records
file_records = [
{
"RENDERED_FILE_NAME": "components/db.py",
"EXTRAS": {"DEPENDENCIES": ["README.md"]},
},
{
"RENDERED_FILE_NAME": "README.md",
"EXTRAS": {"DEPENDENCIES": []},
},
{
"RENDERED_FILE_NAME": "components/service.py",
"EXTRAS": {"DEPENDENCIES": ["components/db.py"]},
},
]
ordered, next_idx = sort_file_records(file_records=file_records)
print([rec["RENDERED_FILE_NAME"] for rec in ordered])
print(next_idx)
```
---
## Advanced Tips
### Resuming at a Specific Point
* `--start-file <PATH>`: begin at a given file record.
* `--start-idx <NUM>`: jump to a zero‑based index in the sorted list.
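Resumption can be pictured as choosing a starting offset in the sorted record list (a simplified sketch; the real handler's precedence between the two flags may differ):

```python
def resume_index(ordered_files, start_idx=0, start_file=None):
    # When a start file is given, skip ahead to its position in the
    # sorted order (assumed to take precedence over the numeric index).
    if start_file is not None:
        return ordered_files.index(start_file)
    return start_idx

files = ["README.md", "components/db.py", "services/data_pipeline.py"]
print(resume_index(files, start_file="services/data_pipeline.py"))  # -> 2
```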
### Custom Agent‑Prompt Templates
```bash
peagen local process projects.yaml \
--repo $(pwd) \
--agent-env '{"provider": "openai", "model_name": "gpt-4"}' \
--agent-prompt-template-file ./custom_prompts/my_agent.j2
```
### Integrating into CI/CD Pipelines
```yaml
# .github/workflows/generate.yml
name: Generate Files
on:
push:
paths:
- 'templates/**'
- 'packages/**'
- 'projects.yaml'
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.x'
- name: Install dependencies
run: pip install peagen
- name: Generate files
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
peagen local process projects.yaml \
--repo "$GITHUB_WORKSPACE" \
--agent-env '{"provider": "openai", "model_name": "gpt-4"}' \
--transitive \
-v
- name: Commit changes
run: |
git config user.name "github-actions[bot]"
git config user.email "actions@github.com"
git add .
git diff --quiet || git commit -m "chore: update generated files"
```
## Conclusion & Next Steps
### Embedding Peagen Programmatically
```python
from peagen.core import Peagen
import os
agent_env = {
"provider": "openai",
"model_name": "gpt-4",
"api_key": os.environ["OPENAI_API_KEY"],
}
pea = Peagen(
projects_payload_path="projects.yaml",
agent_env=agent_env,
)
projects = pea.load_projects()
result, idx = pea.process_single_project(projects[0], start_idx=0)
```
### Transport Models
Runtime RPC payloads should be validated using the Pydantic schemas generated
in `peagen.orm.schemas`. For example, use
`TaskRead.model_validate_json()` when decoding a task received over the network:
```python
from peagen.orm.schemas import TaskRead
task = TaskRead.model_validate_json(raw_json)
```
The gateway and worker components rely on these schema classes rather than the
ORM models under `peagen.orm`.
> **Important**
> JSON-RPC methods such as `Task.submit` **only** accept `TaskCreate`,
> `TaskUpdate`, or `TaskRead` instances. Passing dictionaries or nested `dto`
> mappings is unsupported and will trigger a `TypeError`.
> **Note**
> Earlier versions exposed these models under ``peagen.models`` and the
> transport schemas under ``peagen.models.schemas``. Update any imports to use
> ``peagen.orm`` and ``peagen.orm.schemas`` going forward.
### Git Filters & Publishers
Peagen's artifact output and event publishing are pluggable. Use the `git_filter` argument to control where files are saved and optionally provide a publisher for notifications. Built‑ins live under the `peagen.plugins` namespace. Available filters include `S3FSFilter` and `MinioFilter`, while publisher options cover `RedisPublisher`, `RabbitMQPublisher`, and `WebhookPublisher`. See [docs/storage_adapters_and_publishers.md](docs/storage_adapters_and_publishers.md) for details.
For the event schema and routing key conventions, see [docs/eda_protocol.md](docs/eda_protocol.md). Events can also be emitted directly from the CLI using `--notify`:
```bash
peagen local process projects.yaml \
--repo $(pwd) \
--notify redis://localhost:6379/0/custom.events
```
For a walkthrough of encrypted secrets and key management, see
[docs/secure_secrets_tutorial.md](docs/secure_secrets_tutorial.md).
### Parallel Processing & Artifact Storage Options
Peagen can accelerate generation by spawning multiple workers. Set `--workers <N>`
on the CLI (or `workers = N` in `.peagen.toml`) to enable a thread pool that
renders files concurrently while still honoring dependency order. Leaving the
flag unset or `0` processes files sequentially.
Artifact locations are resolved via the `--artifacts` flag. A target may be a
local directory (`file:///./peagen_artifacts`) using `S3FSFilter` or an
S3/MinIO endpoint (`s3://host:9000`) handled by `MinioFilter`. Custom
filters and publishers can be supplied programmatically:
```python
from peagen.core import Peagen
from swarmauri_gitfilter_minio import MinioFilter
from peagen.plugins.publishers.webhook_publisher import WebhookPublisher
store = MinioFilter.from_uri("s3://localhost:9000/peagen")
bus = WebhookPublisher("https://example.com/peagen")
```
Another example:
```python
from peagen.core import Peagen
from swarmauri_gitfilter_minio import MinioFilter
from peagen.plugins.publishers.redis_publisher import RedisPublisher
from peagen.plugins.publishers.rabbitmq_publisher import RabbitMQPublisher
store = MinioFilter.from_uri("s3://localhost:9000/peagen")
bus = RedisPublisher("redis://localhost:6379/0")
# bus = RabbitMQPublisher(host="localhost", port=5672, routing_key="peagen.events")
pea = Peagen(
projects_payload_path="projects.yaml",
git_filter=store,
agent_env={"provider": "openai", "model_name": "gpt-4"},
)
bus.publish("peagen.events", {"type": "process.started"})
pea.process_all_projects()
```
### Contributing & Extending Templates
* **Template Conventions:** Place new Jinja2 files under your `TEMPLATE_BASE_DIR` as `*.j2`, using the same context variables (`projects`, `packages`, `modules`) that core templates rely on.
* **Adding New Commands:** Define a new subcommand in `cli.py`, wire it into the parser, instantiate `Peagen`, and call core methods.
* **Submitting Pull Requests:** Fork the repo, add/update templates under `peagen/templates/`, update docs/README, and open a PR tagging maintainers.
### Textual TUI
Run `peagen tui` to launch an experimental dashboard that
subscribes to the gateway's `/ws/tasks` WebSocket. The gateway now emits
`task.update`, `worker.update`, and `queue.update` events. Use the Tab key to
switch between task lists, logs, and opened files. The footer shows system
metrics and current time. Remote artifact paths are downloaded via their git
filter and re-uploaded when saving.
### Streaming Events with wscat
Use the [`wscat`](https://github.com/websockets/wscat) CLI to inspect the
gateway's WebSocket events directly from the terminal:
```bash
npx wscat -c wss://gw.peagen.com/ws/tasks
```
Incoming JSON messages mirror those displayed in the TUI, providing a quick way
to monitor `task.update`, `worker.update`, and `queue.update` events.
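A small consumer can route those frames by event type. The sketch below assumes only that each frame is a JSON object carrying a `type` field; the full schema is defined in docs/eda_protocol.md:

```python
import json


def route_event(frame: str) -> str:
    """Classify a raw WebSocket frame by its event type."""
    event = json.loads(frame)
    kind = event.get("type", "")
    if kind in {"task.update", "worker.update", "queue.update"}:
        return kind
    return "unknown"


print(route_event('{"type": "task.update", "id": "t-1"}'))  # task.update
```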
## Results Backends
Peagen supports pluggable results backends. Built-in options include `local_fs`, `postgres`, and `in_memory`.
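A backend is typically chosen through configuration. The fragment below is hypothetical — the section and key names are assumptions, so check your Peagen version's reference before copying it:

```toml
# .peagen.toml — illustrative only; section and key names are assumptions
[results]
backend = "postgres"  # one of: local_fs, postgres, in_memory
dsn = "postgresql://peagen:secret@localhost:5432/peagen"
```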
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | sdk, standards, peagen | [
"License :: OSI Approved :: Apache Software License",
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming L... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"GitPython",
"PyGithub",
"aiosqlite>=0.19.0",
"alembic",
"anyio",
"asyncpg>=0.30.0",
"colorama",
"fastapi>=0.111",
"filelock",
"h11>=0.16.0",
"httpx[http2]>=0.27",
"inflect",
"jinja2>=3.1.6",
"jsonpatch>=1.33",
"jsonschema>=4.18.5",
"minio",
"pgpy>=0.6.0",
"pika",
"psutil>=6.0",
... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:27.378886 | peagen-0.2.1.dev5.tar.gz | 269,848 | 14/53/d065dd905001f76aa98af59e8ebf2c9a0fbab8d4363978438c232d6977de/peagen-0.2.1.dev5.tar.gz | source | sdist | null | false | e6097e5aa7da00c70dcb4a34b836b735 | 61f364ba74d54dbbe7690340cbbe4564e6a054efd151e2a4f961af16f03fcffa | 1453d065dd905001f76aa98af59e8ebf2c9a0fbab8d4363978438c232d6977de | Apache-2.0 | [
"LICENSE"
] | 224 |
2.4 | swm_example_community_package | 0.6.3.dev2 | example community package |

<p align="center">
<a href="https://pypi.org/project/swm_example_community_package/">
<img src="https://img.shields.io/pypi/dm/swm_example_community_package" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swm_example_community_package/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swm_example_community_package.svg"/></a>
<a href="https://pypi.org/project/swm_example_community_package/">
<img src="https://img.shields.io/pypi/pyversions/swm_example_community_package" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swm_example_community_package/">
<img src="https://img.shields.io/pypi/l/swm_example_community_package" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swm_example_community_package/">
<img src="https://img.shields.io/pypi/v/swm_example_community_package?label=swm_example_community_package&color=green" alt="PyPI - swm_example_community_package"/></a>
</p>
---
# Swm Example Community Package
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, swm, example, community, package | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:24.401471 | swm_example_community_package-0.6.3.dev2.tar.gz | 6,070 | 51/bd/ee3f74ea6aaf36591c2f5bf8b8ac3556db2cdc65ddae1fdc5f36f83a9c56/swm_example_community_package-0.6.3.dev2.tar.gz | source | sdist | null | false | 0d42abcffaa7b3cc6eec07dc37e126be | 39b4421bcf8fe4a60d62819fff43b2b7d53fbcc75781a2305099ca5c595e0f81 | 51bdee3f74ea6aaf36591c2f5bf8b8ac3556db2cdc65ddae1fdc5f36f83a9c56 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | peagen_templset_vue | 0.1.1.dev10 | A Swarmauri Peagen Template Set for Vue Atom | 
<p align="center">
<a href="https://pypi.org/project/peagen_templset_vue/">
<img src="https://img.shields.io/pypi/dm/peagen_templset_vue" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/peagen_templset_vue/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/peagen_templset_vue.svg"/></a>
<a href="https://pypi.org/project/peagen_templset_vue/">
<img src="https://img.shields.io/pypi/pyversions/peagen_templset_vue" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/peagen_templset_vue/">
<img src="https://img.shields.io/pypi/l/peagen_templset_vue" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/peagen_templset_vue/">
<img src="https://img.shields.io/pypi/v/peagen_templset_vue?label=peagen_templset_vue&color=green" alt="PyPI - peagen_templset_vue"/></a>
</p>
---
# `peagen_templset_vue`
`peagen_templset_vue` packages a Peagen template set that scaffolds Vue single-file components (SFCs) with a full suite of supporting assets. The template set is registered under the `peagen.template_sets` entry-point group so the Peagen CLI can discover it automatically after installation.
## Overview
- **Opinionated Vue atoms** – `ptree.yaml.j2` renders a component folder containing an `index.ts` barrel, the Vue SFC, scoped CSS, and a generated `*.d.ts` interface file.
- **Comprehensive testing prompts** – the template tree includes Jest/Vitest-ready unit tests, dedicated accessibility and visual regression specs, plus Storybook `.stories.ts` and `.stories.mdx` documents.
- **Accessible defaults** – `agent_default.j2` instructs the LLM to prefer TypeScript, provide docstrings, enforce ARIA annotations, and respect WCAG guidance. Project/module extras such as `REQUIREMENTS`, `STATES`, and `DEPENDENCIES` in your Peagen payload are injected directly into that prompt.
- **Dependency-aware rendering** – each generated file declares its local dependencies so Peagen can order rendering and feed context into the prompts for downstream files.
## Installation
Install the template set alongside the Peagen CLI:
```bash
pip install peagen peagen_templset_vue
# or install the template set into an existing environment
pip install peagen_templset_vue
```
Using Poetry:
```bash
poetry add peagen peagen_templset_vue
# or
poetry add peagen_templset_vue
```
Using [`uv`](https://github.com/astral-sh/uv):
```bash
uv pip install peagen peagen_templset_vue
```
## Example: Inspect the bundled templates
After installation you can programmatically explore the template tree before invoking Peagen. The snippet below locates the packaged resources and prints the key prompts that Peagen will feed into its generation pipeline.
```python
from importlib import resources
package = resources.files("peagen_templset_vue.templates.peagen_templset_vue")
with resources.as_file(package) as template_root:
top_level = sorted(path.name for path in template_root.glob("*.j2"))
component_dir = template_root / "{{ PKG.NAME }}" / "src" / "components" / "{{ MOD.NAME }}"
component_files = [
entry.name for entry in sorted(component_dir.iterdir(), key=lambda path: path.name)
]
print(f"Template root: {template_root.name}")
print(f"Top-level prompts: {top_level}")
print("Component template files:")
for name in component_files:
print(f"- {name}")
```
The output highlights the top-level `agent_default.j2` prompt plus every Vue component artefact Peagen will scaffold (Vue SFC, TypeScript barrel, CSS, tests, and Storybook stories).
## Want to help?
If you want to contribute to swarmauri-sdk, read our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/CONTRIBUTING.md).
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | sdk, standards, peagen, templset, vue | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:23.749880 | peagen_templset_vue-0.1.1.dev10.tar.gz | 10,936 | 97/06/976e3e93de08eaaa236f3feb1e95d7e541e6ec1f5400f5f242a21af12987/peagen_templset_vue-0.1.1.dev10.tar.gz | source | sdist | null | false | 4542401a3df752921a95010ae0c2d37a | 8039c9f0262d2e416f220bdc2042067e0d5817ab74138e575a7bd60364738def | 9706976e3e93de08eaaa236f3feb1e95d7e541e6ec1f5400f5f242a21af12987 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swm_example_plugin | 0.6.5.dev3 | This repository includes an example of a Swarmauri Plugin. | <p align="center">
<img src="https://github.com/swarmauri/swarmauri-sdk/blob/3d4d1cfa949399d7019ae9d8f296afba773dfb7f/assets/swarmauri.brand.theme.svg" alt="Swarmauri logotype" width="420" />
</p>
<h1 align="center">Swarmauri Example Plugin</h1>
<p align="center">
<a href="https://pypi.org/project/swm-example-plugin/"><img src="https://img.shields.io/pypi/dm/swm-example-plugin?style=for-the-badge" alt="PyPI - Downloads" /></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/example_plugin/"><img src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/example_plugin.svg?style=for-the-badge" alt="Repository views" /></a>
<a href="https://pypi.org/project/swm-example-plugin/"><img src="https://img.shields.io/pypi/pyversions/swm-example-plugin?style=for-the-badge" alt="Supported Python versions" /></a>
<a href="https://pypi.org/project/swm-example-plugin/"><img src="https://img.shields.io/pypi/l/swm-example-plugin?style=for-the-badge" alt="License" /></a>
<a href="https://pypi.org/project/swm-example-plugin/"><img src="https://img.shields.io/pypi/v/swm-example-plugin?style=for-the-badge&label=swm-example-plugin" alt="Latest release" /></a>
</p>
---
The Swarmauri Example Plugin is a lightweight reference package that shows how to
ship a plugin ready for the Swarmauri ecosystem. It demonstrates how to declare
entry points, surface metadata, and structure tests so that new plugin projects
start from a fully configured baseline that supports Python 3.10 through 3.12.
## Features
- **Entry-point wiring** – exposes the `swarmauri.plugins` group so your agent
classes can be discovered automatically once implemented.
- **Ready-to-publish metadata** – includes keywords, classifiers, and long
descriptions wired directly into `pyproject.toml`.
- **Version helpers** – surfaces `__version__` and `__long_desc__` constants so
documentation and tooling can introspect the package after installation.
- **Testing scaffold** – ships with baseline unit tests verifying version
resolution to encourage a test-first development workflow.
## Installation
### Using `uv`
```bash
uv add swm-example-plugin
```
### Using `pip`
```bash
pip install swm-example-plugin
```
## Usage
### Inspect published metadata
```python
from swm_example_plugin import __long_desc__, __version__
print(__version__)
print(__long_desc__.splitlines()[0])
```
### Discover the registered entry point
```python
from importlib.metadata import entry_points
for ep in entry_points(group="swarmauri.plugins"):
if ep.name == "example_agent":
print(ep.name, "->", ep.value)
break
```
This skeleton intentionally leaves the actual agent implementation up to you.
Replace the target specified in the entry point with your concrete class to wire
custom functionality into the Swarmauri plugin registry.
## Project Resources
- Source: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/example_plugin>
- Documentation: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/example_plugin#readme>
- Issues: <https://github.com/swarmauri/swarmauri-sdk/issues>
- Releases: <https://github.com/swarmauri/swarmauri-sdk/releases>
- Discussions: <https://github.com/orgs/swarmauri/discussions>
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, plugin, example, agents, entry-points, sdk, scaffolding, template, tutorial, packaging, asyncio, extension-points, media-workflows | [
"Development Status :: 1 - Planning",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Lang... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Changelog, https://github.com/swarmauri/swarmauri-sdk/releases",
"Documentation, https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/example_plugin#readme",
"Discussions, https://github.com/orgs/swarmauri/discussions",
"Homepage, https://github.com/swarmauri/swarmauri-sdk",
"Issues, https://g... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:22.434255 | swm_example_plugin-0.6.5.dev3-py3-none-any.whl | 7,585 | 18/dd/6535e220df9d34ed15df6cdc1a437ff4baab0599851e9658d3f6adc4e8a8/swm_example_plugin-0.6.5.dev3-py3-none-any.whl | py3 | bdist_wheel | null | false | 1cea0632426a327931a15c8ae195987d | e118288df6c778a382977dcfb95934de1ac62f42a375340b74142a77932088c6 | 18dd6535e220df9d34ed15df6cdc1a437ff4baab0599851e9658d3f6adc4e8a8 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | EmbeddedSigner | 0.1.4.dev3 | Embed XMP metadata and sign media assets using Swarmauri plugins. | <p align="center">
<img src="../../../assets/swarmauri.brand.theme.svg" alt="Swarmauri logotype" width="420" />
</p>
<h1 align="center">EmbeddedSigner</h1>
<p align="center">
<a href="https://pypi.org/project/EmbeddedSigner/"><img src="https://img.shields.io/pypi/dm/EmbeddedSigner?style=for-the-badge" alt="PyPI - Downloads" /></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/embedded_signer/"><img src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/embedded_signer.svg?style=for-the-badge" alt="Repo views" /></a>
<a href="https://pypi.org/project/EmbeddedSigner/"><img src="https://img.shields.io/pypi/pyversions/EmbeddedSigner?style=for-the-badge" alt="Supported Python versions" /></a>
<a href="https://pypi.org/project/EmbeddedSigner/"><img src="https://img.shields.io/pypi/l/EmbeddedSigner?style=for-the-badge" alt="License" /></a>
<a href="https://pypi.org/project/EmbeddedSigner/"><img src="https://img.shields.io/pypi/v/EmbeddedSigner?style=for-the-badge" alt="Latest release" /></a>
</p>
EmbeddedSigner composes the dynamic XMP embedding utilities from
[`EmbedXMP`](../EmbedXMP) with the signing facade exposed by
[`MediaSigner`](../media_signer). It embeds metadata into media assets and then
routes signing requests to the appropriate media-aware signer in either
attached or detached mode. The class orchestrates key provider plugins so that
opaque key references can be resolved automatically before signatures are
produced.
## Features
- **One-shot embed & sign** – inject XMP metadata and produce signatures with a
single call.
- **Media-aware detection** – delegates to all registered `EmbedXmpBase`
handlers so PNG, GIF, JPEG, SVG, WEBP, TIFF, PDF, and MP4 assets are
processed consistently.
- **Pluggable signers** – forwards signing requests to every
`SigningBase` registered with `MediaSigner`, including CMS, JWS, OpenPGP, PDF,
and XMLDSig providers.
- **Key provider integration** – loads providers from the
`swarmauri.key_providers` entry point group and resolves opaque key reference
strings (e.g. `local://kid@2`) before invoking a signer.
- **Attached or detached output** – toggle between embedded signatures or
detached artifacts via a simple flag.
- **File and byte workflows** – operate on in-memory payloads or update files
on disk with helpers for embedding, reading, removing, and signing.
- **Command line tooling** – bundle a ready-to-use `embedded-signer` CLI for
ad-hoc embedding, signing, and combined workflows.
## Installation
### Using `uv`
```bash
uv add EmbeddedSigner
```
Optional dependencies align with the available key providers, EmbedXMP handlers,
and MediaSigner backends:
```bash
uv add "EmbeddedSigner[local]" # enable LocalKeyProvider resolution
uv add "EmbeddedSigner[memory]" # enable InMemoryKeyProvider resolution
uv add "EmbeddedSigner[xmp_png]" # add PNG embedding support
uv add "EmbeddedSigner[xmp_all]" # install every EmbedXMP handler
uv add "EmbeddedSigner[signing_pdf]" # enable PDF signer backend
uv add "EmbeddedSigner[signing_all]" # install every MediaSigner backend
uv add "EmbeddedSigner[full]" # bring in all extras and key providers
```
### Using `pip`
```bash
pip install EmbeddedSigner
```
Extras mirror the `uv` workflow:
```bash
pip install "EmbeddedSigner[local]"
pip install "EmbeddedSigner[memory]"
pip install "EmbeddedSigner[xmp_png]"
pip install "EmbeddedSigner[xmp_all]"
pip install "EmbeddedSigner[signing_pdf]"
pip install "EmbeddedSigner[signing_all]"
pip install "EmbeddedSigner[full]"
```
### Extras overview
| Extra name | Purpose |
| --- | --- |
| `local` / `memory` | Enable Swarmauri key provider resolution for local filesystem and in-memory secrets. |
| `xmp_gif`, `xmp_jpeg`, `xmp_png`, `xmp_svg`, `xmp_webp`, `xmp_tiff`, `xmp_pdf`, `xmp_mp4` | Pull in the corresponding `swarmauri_xmp_*` handler so EmbedXMP can embed metadata for that media format. |
| `xmp_all` | Install every EmbedXMP media handler dependency at once. |
| `signing_cms`, `signing_jws`, `signing_openpgp`, `signing_pdf`, `signing_xmld` | Add the matching MediaSigner backend plugin for CMS, JWS, OpenPGP, PDF, or XMLDSig signing. |
| `signing_all` | Install all MediaSigner backends together. |
| `full` | Bring in every key provider, EmbedXMP handler, and MediaSigner backend for maximum coverage. |
## Usage
```python
import asyncio
from pathlib import Path
from EmbeddedSigner import EmbedSigner
xmp_xml = """
<x:xmpmeta xmlns:x="adobe:ns:meta/">
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description rdf:about=""/>
</rdf:RDF>
</x:xmpmeta>
""".strip()
async def embed_and_sign() -> None:
signer = EmbedSigner()
media_bytes = Path("image.png").read_bytes()
embedded, signatures = await signer.embed_and_sign_bytes(
media_bytes,
fmt="JWSSigner",
xmp_xml=xmp_xml,
key={"kind": "raw", "key": b"\x00" * 32},
path="image.png",
attached=True,
signer_opts={"alg": "HS256"},
)
Path("image.signed.png").write_bytes(embedded)
print(signatures[0].mode) # "attached"
asyncio.run(embed_and_sign())
```
### Key provider integration
When you install a key provider plugin such as
`swarmauri_keyprovider_local`, EmbeddedSigner can resolve string key references
on the fly:
```python
signer = EmbedSigner(key_provider_name="LocalKeyProvider")
embedded, signatures = await signer.embed_and_sign_file(
Path("report.pdf"),
fmt="PDFSigner",
xmp_xml=xmp_xml,
key="LocalKeyProvider://a1b2c3@1",
attached=False,
signer_opts={"alg": "SHA256"},
)
```
EmbeddedSigner parses the opaque reference, looks up the provider by name, and
retrieves the specified key version using the provider's asynchronous API.
### File helpers
`EmbedSigner` offers mirrored helpers that operate on file paths when you need
to persist updates directly on disk:
```python
signer = EmbedSigner()
# Embed metadata into a file and write it back in place.
signer.embed_file("image.png", xmp_xml)
# Read embedded metadata without materialising the bytes in memory.
print(signer.read_xmp_file("image.png"))
# Remove metadata and write the stripped bytes back to disk.
signer.remove_xmp_file("image.png", write_back=True)
# Sign file contents without manual IO boilerplate.
signatures = asyncio.run(
signer.sign_file(
"image.png",
fmt="JWSSigner",
key="LocalKeyProvider://img-key",
attached=True,
)
)
```
### Command line interface
Installing the package exposes an `embedded-signer` executable that wraps the
most common workflows:
```bash
# Embed metadata from a file into an image in place.
embedded-signer embed example.png --xmp-file metadata.xmp
# Read metadata to stdout (non-zero exit if none is embedded).
embedded-signer read example.png
# Remove metadata and write the result to a new file.
embedded-signer remove example.png --output clean.png
# Sign using a key reference exposed by a provider plugin.
embedded-signer sign example.png --format JWSSigner --key-ref local://img-key
# Embed and sign in one step, writing signatures to JSON.
embedded-signer embed-sign example.png \
--xmp-file metadata.xmp \
--format JWSSigner \
--key-ref local://img-key \
--signature-output signatures.json
```
## Development
1. Install development dependencies:
```bash
uv pip install -e ".[dev]"
```
2. Format and lint code with `ruff`:
```bash
uv run ruff format .
uv run ruff check . --fix
```
3. Run the unit tests in isolation:
```bash
uv run --package EmbeddedSigner --directory plugins/embedded_signer pytest
```
## Project Resources
- Source: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/embedded_signer>
- Documentation: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/embedded_signer#readme>
- Issues: <https://github.com/swarmauri/swarmauri-sdk/issues>
- Releases: <https://github.com/swarmauri/swarmauri-sdk/releases>
- Discussions: <https://github.com/orgs/swarmauri/discussions>
## License
EmbeddedSigner is released under the Apache 2.0 License. See the
[LICENSE](LICENSE) file for details.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | xmp, signing, swarmauri, metadata, security, digital-rights-management, drm, license-rights, embedded-licensing, tamper-proofing, tamper-evidence, content-protection, png, gif, jpeg, svg, webp, tiff, pdf, mp4, digital-asset-security, metadata-signing, workflow-automation, plugin-orchestration, asyncio, plugin, digital-asset-workflows, cms, pkcs7, cades, jws, openpgp, pdf-signatures, xmldsig | [
"Development Status :: 1 - Planning",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Lang... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"EmbedXMP",
"MediaSigner",
"swarmauri_base",
"swarmauri_core",
"swarmauri_keyprovider_inmemory; extra == \"full\"",
"swarmauri_keyprovider_inmemory; extra == \"memory\"",
"swarmauri_keyprovider_local; extra == \"full\"",
"swarmauri_keyprovider_local; extra == \"local\"",
"swarmauri_signing_cms; extr... | [] | [] | [] | [
"Changelog, https://github.com/swarmauri/swarmauri-sdk/releases",
"Documentation, https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/embedded_signer#readme",
"Discussions, https://github.com/orgs/swarmauri/discussions",
"Homepage, https://github.com/swarmauri/swarmauri-sdk",
"Issues, https://... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:21.635877 | embeddedsigner-0.1.4.dev3.tar.gz | 13,674 | 62/2b/94ab4ce2815623a25fe5be9cd6dbac3264e0c114447aebd2c425dd1d1fa4/embeddedsigner-0.1.4.dev3.tar.gz | source | sdist | null | false | 18479dbf10ea956c49b083ac7255144c | e2c9a967d808566fc7c7aac4b1ad0c398665bf33add709c739faf6b89b211c74 | 622b94ab4ce2815623a25fe5be9cd6dbac3264e0c114447aebd2c425dd1d1fa4 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | EmbedXMP | 0.1.4.dev3 | Dynamic XMP embed/read/remove orchestration for Swarmauri handlers. | <p align="center">
<img src="https://github.com/swarmauri/swarmauri-sdk/blob/3d4d1cfa949399d7019ae9d8f296afba773dfb7f/assets/swarmauri.brand.theme.svg" alt="Swarmauri logotype" width="420" />
</p>
<h1 align="center">EmbedXMP</h1>
<p align="center">
<a href="https://pypi.org/project/EmbedXMP/"><img src="https://img.shields.io/pypi/dm/EmbedXMP?style=for-the-badge" alt="PyPI - Downloads" /></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/plugins/EmbedXMP/"><img src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/plugins/EmbedXMP.svg?style=for-the-badge" alt="Repository views" /></a>
<a href="https://pypi.org/project/EmbedXMP/"><img src="https://img.shields.io/pypi/pyversions/EmbedXMP?style=for-the-badge" alt="Supported Python versions" /></a>
<a href="https://pypi.org/project/EmbedXMP/"><img src="https://img.shields.io/pypi/l/EmbedXMP?style=for-the-badge" alt="License" /></a>
<a href="https://pypi.org/project/EmbedXMP/"><img src="https://img.shields.io/pypi/v/EmbedXMP?style=for-the-badge&label=EmbedXMP" alt="Latest release" /></a>
</p>
---
EmbedXMP collects every installed `EmbedXmpBase` implementation, discovers them via Swarmauri's dynamic registry, and exposes a unified manager that can embed, read, or remove XMP packets without worrying about container formats.
## Features
- **Dynamic discovery** – lazily imports modules named `swarmauri_xmp_*` and collects subclasses registered under `EmbedXmpBase`.
- **Unified interface** – delegates to the first handler whose `supports` method confirms compatibility with the payload.
- **Convenience wrappers** – module-level helpers (`embed`, `read`, `remove`) keep high-level workflows succinct.
- **Async-friendly APIs** – integrate inside event loops without blocking when calling out to plugin hooks.
- **Media-format coverage** – load handlers for PNG, GIF, JPEG, SVG, WEBP, TIFF, PDF, and MP4 assets through extras.
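The discovery pattern described above (importing every `swarmauri_xmp_*` distribution so its handlers register themselves) can be approximated with the stdlib. The real registry logic lives inside the package; treat this as a sketch of the idea:

```python
import importlib
import pkgutil


def discover_xmp_modules(prefix: str = "swarmauri_xmp_") -> list[str]:
    """Find importable top-level modules whose names match the handler prefix."""
    names = [m.name for m in pkgutil.iter_modules() if m.name.startswith(prefix)]
    for name in names:
        # Importing a handler module is what triggers subclass registration.
        importlib.import_module(name)
    return names


print(discover_xmp_modules())  # [] unless handler packages are installed
```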
## Installation
### Using `uv`
```bash
uv add EmbedXMP
```
### Using `pip`
```bash
pip install EmbedXMP
```
## Usage
```python
from pathlib import Path
from EmbedXMP import EmbedXMP, embed, embed_file, read, read_file_xmp
manager = EmbedXMP()
image = Path("example.png")
packet = """<x:xmpmeta xmlns:x='adobe:ns:meta/'><rdf:RDF>...</rdf:RDF></x:xmpmeta>"""
# Embed into the file in place.
embed_file(image, packet)
# Inspect metadata via the manager API.
xmp_text = manager.read(image.read_bytes(), str(image))
print(xmp_text)
# Remove metadata from the file when it is no longer required.
manager.remove(image.read_bytes(), str(image))
```
> **Note**
> You can provide either a `path` or a `hint` keyword argument when calling
> `embed`, `read`, or `remove` to help the manager pick the correct handler. The
> values are interchangeable as long as they match when both are supplied.
### Async orchestration
EmbedXMP's manager can be shared inside asynchronous workflows by deferring media-aware work to plugin hooks:
```python
import asyncio
from pathlib import Path
from EmbedXMP import EmbedXMP, embed
async def embed_all(paths: list[str], packet: str) -> None:
manager = EmbedXMP()
for path in paths:
data = Path(path).read_bytes()
await asyncio.to_thread(embed, data, packet, hint=path)
asyncio.run(embed_all(["one.png", "two.svg"], "<x:xmpmeta>...</x:xmpmeta>"))
```
## Project Resources
- Source: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/EmbedXMP>
- Documentation: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/EmbedXMP#readme>
- Issues: <https://github.com/swarmauri/swarmauri-sdk/issues>
- Releases: <https://github.com/swarmauri/swarmauri-sdk/releases>
- Discussions: <https://github.com/orgs/swarmauri/discussions>
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, xmp, metadata, embedding, plugin, digital-asset-management, digital-asset-workflows, metadata-pipelines, media-workflows, metadata-extraction, metadata-synchronization, asyncio, media-metadata, python-library, async-discovery, plugin-discovery, png, gif, jpeg, svg, webp, tiff, pdf, mp4 | [
"Development Status :: 1 - Planning",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Lang... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"swarmauri_xmp_gif; extra == \"all\"",
"swarmauri_xmp_gif; extra == \"gif\"",
"swarmauri_xmp_jpeg; extra == \"all\"",
"swarmauri_xmp_jpeg; extra == \"jpeg\"",
"swarmauri_xmp_mp4; extra == \"all\"",
"swarmauri_xmp_mp4; extra == \"mp4\"",
"swarmauri_xmp_pdf; extra =... | [] | [] | [] | [
"Changelog, https://github.com/swarmauri/swarmauri-sdk/releases",
"Documentation, https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/EmbedXMP#readme",
"Discussions, https://github.com/orgs/swarmauri/discussions",
"Homepage, https://github.com/swarmauri/swarmauri-sdk",
"Issues, https://github.... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:20.466202 | embedxmp-0.1.4.dev3.tar.gz | 8,641 | 59/a3/4917649a6ee160c7cac03bb0a066edeb6a792969421c330e41b16cbb894d/embedxmp-0.1.4.dev3.tar.gz | source | sdist | null | false | 6f39e5e3ce16331e0e559cdff7d49591 | 0eadf1d5e3251dce2777ff4f10e9f31d195e0239da89a882bdd189fff4b4a5c8 | 59a34917649a6ee160c7cac03bb0a066edeb6a792969421c330e41b16cbb894d | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | MediaSigner | 0.6.5.dev3 | Swarmauri signing facade plugin that aggregates registered SigningBase providers. | <p align="center">
<img src="../../../assets/swarmauri.brand.theme.svg" alt="Swarmauri logotype" width="420" />
</p>
<h1 align="center">MediaSigner</h1>
<p align="center">
<a href="https://pypi.org/project/MediaSigner/"><img src="https://img.shields.io/pypi/dm/MediaSigner?style=for-the-badge" alt="PyPI - Downloads" /></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/media_signer/"><img src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/media_signer.svg?style=for-the-badge" alt="Repository views" /></a>
<a href="https://pypi.org/project/MediaSigner/"><img src="https://img.shields.io/pypi/pyversions/MediaSigner?style=for-the-badge" alt="Supported Python versions" /></a>
<a href="https://pypi.org/project/MediaSigner/"><img src="https://img.shields.io/pypi/l/MediaSigner?style=for-the-badge" alt="License" /></a>
<a href="https://pypi.org/project/MediaSigner/"><img src="https://img.shields.io/pypi/v/MediaSigner?style=for-the-badge&label=MediaSigner" alt="Latest release" /></a>
</p>
---
MediaSigner packages the asynchronous `Signer` facade that orchestrates registered
`SigningBase` providers. Moving the facade into a standalone plugin keeps the
core standards library lightweight while still enabling drop-in discovery of
specialised signers such as CMS, JWS, OpenPGP, PDF, and XMLDSig providers.
## Features
- **Unified signing façade** – talk to every installed `SigningBase` through a
single async API that automatically discovers entry-point contributions.
- **Format-aware routing** – delegates signing and verification to the provider
registered for a format token such as `jws`, `pdf`, or `xmld`.
- **Optional plugin bundles** – install curated extras (e.g. `[plugins]`) to
bring in all available signer backends in one step.
- **Key-provider integration** – share Swarmauri key providers with the facade
so opaque key references resolve before signature creation.
- **Production-ready CLI** – inspect capabilities, sign payloads, and verify
results directly from the command line for fast automation.
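Conceptually, the format-aware routing described above reduces to a token-to-provider lookup. The sketch below is a generic illustration of that idea, not the actual MediaSigner internals:

```python
# Generic sketch of format-aware routing -- illustrative only,
# not the MediaSigner implementation.
class ProviderRegistry:
    def __init__(self) -> None:
        self._providers: dict[str, object] = {}

    def register(self, fmt: str, provider: object) -> None:
        self._providers[fmt] = provider

    def route(self, fmt: str) -> object:
        try:
            return self._providers[fmt]
        except KeyError:
            raise ValueError(f"no signer registered for format {fmt!r}") from None

registry = ProviderRegistry()
registry.register("jws", "jws-signer-stub")
print(registry.route("jws"))  # → jws-signer-stub
```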
## Installation
### Using `uv`
```bash
uv add MediaSigner
# install every optional backend
uv add "MediaSigner[plugins]"
```
The `[plugins]` extra pulls in CMS, JWS, OpenPGP, PDF, and XMLDSig signers.
### Using `pip`
```bash
pip install MediaSigner
# with every optional backend
pip install "MediaSigner[plugins]"
```
## Usage
```python
import asyncio
from MediaSigner import MediaSigner
from swarmauri_core.key_providers.IKeyProvider import IKeyProvider
# Optionally pass a key provider so plugins receive a shared source for
# retrieving signing material.
key_provider: IKeyProvider | None = None
signer = MediaSigner(key_provider=key_provider)
async def sign_payload(payload: bytes) -> None:
    signatures = await signer.sign_bytes("jws", key="my-key", payload=payload)
    assert signatures, "At least one signature should be returned"
    print(signer.supports("jws"))
asyncio.run(sign_payload(b"payload"))
```
### Integrating a key provider
Any Swarmauri key provider can be shared with the facade so backends receive
ready-to-use key material:
```python
import asyncio
from MediaSigner import MediaSigner
from swarmauri_keyprovider_inmemory import InMemoryKeyProvider
provider = InMemoryKeyProvider(keys={"local://demo": b"secret"})
signer = MediaSigner(key_provider=provider)
async def main() -> None:
    signatures = await signer.sign_bytes(
        "jws",
        key="local://demo",
        payload=b"demo",
        alg="HS256",
        opts={"kid": "demo"},
    )
    print(signatures[0].mode)
asyncio.run(main())
```
### Discover installed plugins
Use the facade to list installed signers and inspect their capabilities:
```python
for format_name in signer.supported_formats():
    capabilities = signer.supports(format_name)
    print(format_name, list(capabilities))
```
### Why this structure?
* **Separation of concerns** – standards remain focused on common abstractions
while the plugin encapsulates optional dependencies.
* **Explicit opt-in** – downstream projects can install only the signing stacks
they need via the curated extras.
* **Consistent ergonomics** – usage matches the historical
`swarmauri_standard.signing.Signer` import, preserving existing tutorials and
code samples.
## Command line utility
MediaSigner ships a small CLI for quick inspection and automation:
```bash
media-signer list # List available formats
media-signer supports jws # Show capability metadata
media-signer sign-bytes jws \
--alg HS256 \
--key key.json \
--input payload.bin \
--output signatures.json
media-signer verify-bytes jws \
--input payload.bin \
--sigs signatures.json \
--opts verify-keys.json
```
The CLI expects JSON files describing `KeyRef` objects and verification
materials matching the selected plugin.
## Project Resources
- Source: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/media_signer>
- Documentation: <https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/media_signer#readme>
- Issues: <https://github.com/swarmauri/swarmauri-sdk/issues>
- Releases: <https://github.com/swarmauri/swarmauri-sdk/releases>
- Discussions: <https://github.com/orgs/swarmauri/discussions>
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, digital-signatures, media, cryptography, plugin, orchestration, asyncio, signature-aggregation, workflow-automation, key-management, verification, digital-asset-security, media-compliance, cms, pkcs7, cades, jws, openpgp, pdf-signatures, xmldsig | [
"Development Status :: 1 - Planning",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Lang... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography>=42.0.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_keyprovider_inmemory",
"swarmauri_signing_cms",
"swarmauri_signing_cms; extra == \"plugins\"",
"swarmauri_signing_jws",
"swarmauri_signing_jws; extra == \"plugins\"",
"swarmauri_signing_openpgp",
"swarmauri_signing_openpgp; ext... | [] | [] | [] | [
"Changelog, https://github.com/swarmauri/swarmauri-sdk/releases",
"Documentation, https://github.com/swarmauri/swarmauri-sdk/tree/main/pkgs/plugins/media_signer#readme",
"Discussions, https://github.com/orgs/swarmauri/discussions",
"Homepage, https://github.com/swarmauri/swarmauri-sdk",
"Issues, https://git... | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:53:18.826439 | mediasigner-0.6.5.dev3.tar.gz | 10,547 | 24/b0/fe8e0227c4e3f2666aceb567d8ad9a5765f3ce50a3e872d9082b44719dd6/mediasigner-0.6.5.dev3.tar.gz | source | sdist | null | false | 36821a085a58d3c90f0d997a31ccf461 | 5a645a3bb5bb38a205af58c81709d27520f61fd227527de48913f24fe8d6b1db | 24b0fe8e0227c4e3f2666aceb567d8ad9a5765f3ce50a3e872d9082b44719dd6 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | OpenFisca-Tunisia | 0.59 | OpenFisca Rules as Code model for Tunisia. | # OpenFisca Tunisia - الجباية المفتوحة تونس
[](mailto:contact%40openfisca.org?subject=Subscribe%20to%20your%20newsletter%20%7C%20S'inscrire%20%C3%A0%20votre%20newsletter&body=%5BEnglish%20version%20below%5D%0A%0ABonjour%2C%0A%0AVotre%C2%A0pr%C3%A9sence%C2%A0ici%C2%A0nous%C2%A0ravit%C2%A0!%20%F0%9F%98%83%0A%0AEnvoyez-nous%20cet%20email%20pour%20que%20l'on%20puisse%20vous%20inscrire%20%C3%A0%20la%20newsletter.%20%0A%0AAh%C2%A0!%20Et%20si%20vous%20pouviez%20remplir%20ce%20petit%20questionnaire%2C%20%C3%A7a%20serait%20encore%20mieux%C2%A0!%0Ahttps%3A%2F%2Fgoo.gl%2Fforms%2F45M0VR1TYKD1RGzX2%0A%0AAmiti%C3%A9%2C%0AL%E2%80%99%C3%A9quipe%20OpenFisca%0A%0A%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%20ENGLISH%20VERSION%20%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%0A%0AHi%2C%20%0A%0AWe're%20glad%20to%20see%20you%20here!%20%F0%9F%98%83%0A%0APlease%20send%20us%20this%20email%2C%20so%20we%20can%20subscribe%20you%20to%20the%20newsletter.%0A%0AAlso%2C%20if%20you%20can%20fill%20out%20this%20short%20survey%2C%20even%20better!%0Ahttps%3A%2F%2Fgoo.gl%2Fforms%2FsOg8K1abhhm441LG2%0A%0ACheers%2C%0AThe%20OpenFisca%20Team)
[](https://twitter.com/intent/follow?screen_name=openfisca)
[](mailto:contact%40openfisca.org?subject=Join%20you%20on%20Slack%20%7C%20Nous%20rejoindre%20sur%20Slack&body=%5BEnglish%20version%20below%5D%0A%0ABonjour%2C%0A%0AVotre%C2%A0pr%C3%A9sence%C2%A0ici%C2%A0nous%C2%A0ravit%C2%A0!%20%F0%9F%98%83%0A%0ARacontez-nous%20un%20peu%20de%20vous%2C%20et%20du%20pourquoi%20de%20votre%20int%C3%A9r%C3%AAt%20de%20rejoindre%20la%20communaut%C3%A9%20OpenFisca%20sur%20Slack.%0A%0AAh%C2%A0!%20Et%20si%20vous%20pouviez%20remplir%20ce%20petit%20questionnaire%2C%20%C3%A7a%20serait%20encore%20mieux%C2%A0!%0Ahttps%3A%2F%2Fgoo.gl%2Fforms%2F45M0VR1TYKD1RGzX2%0A%0AN%E2%80%99oubliez%20pas%20de%20nous%20envoyer%20cet%20email%C2%A0!%20Sinon%2C%20on%20ne%20pourra%20pas%20vous%20contacter%20ni%20vous%20inviter%20sur%20Slack.%0A%0AAmiti%C3%A9%2C%0AL%E2%80%99%C3%A9quipe%20OpenFisca%0A%0A%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%20ENGLISH%20VERSION%20%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%0A%0AHi%2C%20%0A%0AWe're%20glad%20to%20see%20you%20here!%20%F0%9F%98%83%0A%0APlease%20tell%20us%20a%20bit%20about%20you%20and%20why%20you%20want%20to%20join%20the%20OpenFisca%20community%20on%20Slack.%0A%0AAlso%2C%20if%20you%20can%20fill%20out%20this%20short%20survey%2C%20even%20better!%0Ahttps%3A%2F%2Fgoo.gl%2Fforms%2FsOg8K1abhhm441LG2.%0A%0ADon't%20forget%20to%20send%20us%20this%20email!%20Otherwise%20we%20won't%20be%20able%20to%20contact%20you%20back%2C%20nor%20invite%20you%20on%20Slack.%0A%0ACheers%2C%0AThe%20OpenFisca%20Team)
[](https://pypi.python.org/pypi/openfisca-tunisia)
[](https://pypi.python.org/pypi/openfisca-Tunisia)
[](https://gitpod-referer.now.sh/api/gitpod-referer-redirect)
## Presentation - التقديم
[OpenFisca](http://openfisca.org) est un logiciel libre de micro-simulation.
Ceci est le code source du module dédié à la Tunisie.
<p align='right'>الجباية المفتوحة برنامج حر لمحاكاة النظام الجبائي.
هذا مصدر البرنامج للوحدة الخاصة بتونس</p>
[OpenFisca](http://openfisca.org) is a free and versatile microsimulation software.
This is the source code of the Tunisia module.
## Demo - لعبة تجريبية
Un démonstrateur vous est proposé sous la forme d'un Notebook Jupyter.
Vous serez redirigé vers celui-ci en cliquant sur le lien suivant (le chargement prendra quelques secondes) :
<code><p align='center'>[](https://mybinder.org/v2/gh/openfisca/openfisca-tunisia/master?filepath=notebooks%2Fdemo.ipynb)</p></code>
Vous accédez ainsi à un démonstrateur modifiable où il vous est possible de tester openfisca-tunisia.
<p align='right'>ستجدون لعبة تجريبية في شكل دفتر جوبيتر على الرابط التالي</p>
<code><p align='center'>[](https://mybinder.org/v2/gh/openfisca/openfisca-tunisia/master?filepath=notebooks%2Fdemo.ipynb)</p></code>
<p align='right'>يسمح هذا الدفتر بتجريب الجباية المفتوحة لتونس</p>
A demo is available in a Jupyter Notebook.
You will be redirected to it by clicking on the following link (wait a few seconds to load it):
<code><p align='center'>[](https://mybinder.org/v2/gh/openfisca/openfisca-tunisia/binder?filepath=notebooks%2Fdemo.ipynb)</p></code>
Then you will be in an interactive demo where you will be able to play with openfisca-tunisia.
> This demo is available thanks to [Binder](https://mybinder.org/) and [Jupyter](http://jupyter.org) projects.
## Legislation
Les paramètres de la législation peuvent être consultés avec [une interface dédiée](https://parameters.tn.tax-benefit.org/parameters/).
يمكن الاطلاع على معايير التشريع من خلال واجهة مخصصة لذلك
Legislation parameters can be consulted via [a dedicated interface](https://parameters.tn.tax-benefit.org/parameters/).
## Contribution & Contact - المساهمة والاتصال بنا
OpenFisca est un projet de logiciel libre.
Son code source est distribué sous la licence [GNU Affero General Public Licence](http://www.gnu.org/licenses/agpl.html)
version 3 ou ultérieure (cf. [LICENSE](https://github.com/openfisca/openfisca-tunisia/blob/master/LICENSE)).
N'hésitez pas à rejoindre l'équipe de développement OpenFisca !
Pour en savoir plus, une [documentation](http://openfisca.org/doc/contribute/index.html) est à votre disposition.
<p align='right'> الجباية المفتوحة برنامج حر</p>
<p align='right'> تم توزيع مصدر هذا البرنامج تحت رخصة أفيرو العامة الثالثة أو ما أعلى</p>
<p align='right'>تعالوا انضموا إلى فريق الجباية المفتوحة و ساهموا في تطوير البرنامج! للمزيد من المعلومات، يرجى زيارة الموقع الإلكتروني الرسمي</p>
OpenFisca is a free software project.
Its source code is distributed under the [GNU Affero General Public Licence](http://www.gnu.org/licenses/agpl.html)
version 3 or later (see [LICENSE](https://github.com/openfisca/openfisca-tunisia/blob/master/LICENSE) file).
Feel free to join the OpenFisca development team!
See the [documentation](http://openfisca.org/doc/contribute/index.html) for more information.
## Documentation
* [General documentation](http://openfisca.org/doc/) of the OpenFisca project (all countries)
  - And the [architecture](https://openfisca.org/doc/architecture.html) of an OpenFisca project
<!-- * [Legislation explorer](https://legislation.openfisca.tn) covered by OpenFisca-Tunisia -->
* OpenFisca-Tunisia [Wiki](https://github.com/openfisca/openfisca-tunisia/wiki)
* [Public Google Drive](https://drive.google.com/drive/folders/1xzrwEgZF2pEMUIHMQMWtlg7ubIFdy58N?usp=sharing) of legislative references
In addition, each module of the [OpenFisca family on GitHub](https://github.com/openfisca) has its own documentation (see the respective `README.md` files).
## Installation
On Unix/macOS/Linux, run the following steps in your terminal.
On Windows, install a terminal emulator before going further.
We particularly recommend the BASH emulator shipped with the [Git version control system](https://git-for-windows.github.io).
Combined with a tool such as [Visual Studio Code](https://code.visualstudio.com), it gives you a working environment for the source code.
You will nevertheless have to perform a few extra checks beyond what is described below (such as verifying the configuration of your `%PATH%` environment variable).
### Python language & virtual environment
This project requires the following to be installed first:
* The [Python 3.10](https://www.python.org/downloads/) language or later (3.10, 3.11, 3.12)
* The [pip](https://pip.pypa.io/en/stable/installing/) package manager.
Check that the Python version called by default starts with `3.10` or later:
```
python --version
```
And install any updates for Python package management with:
```
sudo pip install --upgrade pip wheel
```
Then, to create a clean working environment and let several Python working contexts coexist, we recommend using virtual environments (virtualenvs).
Install a Python virtualenv manager (such as [pew](https://github.com/berdario/pew)):
```
sudo pip install pew
```
You can now create your first environment dedicated to OpenFisca-Tunisia. Let's name it `openfisca`:
```
pew new openfisca --python=python3.10
# If asked, answer "Y" to the question about modifying your shell configuration file
```
Usage:
* You can leave the virtualenv by typing `exit` (or Ctrl-D)
* You can reactivate it with `pew workon openfisca`
### Installing the OpenFisca-Tunisia module
Two options are available:
* Install the pre-compiled Python module, known as a [Python wheel](https://pypi.org/project/OpenFisca-Tunisia/)
* Or install the source code
#### Installing the wheel
Installing the pre-compiled `OpenFisca-Tunisia` module lets you query the Tunisian tax and benefit model.
We assume you have activated your virtual environment.
Run the following command to fetch the `OpenFisca-Tunisia` wheel from the [PyPI](https://pypi.org) Python package index:
```sh
pip install openfisca-tunisia
```
:tada: Congratulations, you have now finished installing OpenFisca Tunisia!
You can check its presence in your current environment with:
```sh
pip list
# Expected result: a list containing OpenFisca-Tunisia and its dependencies.
```
#### Installing the source code
Installing the `OpenFisca-Tunisia` source code on your computer lets you query or modify the Tunisian tax and benefit model.
We assume you have activated your virtual environment and are in the directory where you want to place the project.
Run the following commands to fetch the OpenFisca-Tunisia sources and set up the project (without omitting the dot at the end of the last line :slightly_smiling_face:):
```
git clone https://github.com/openfisca/openfisca-tunisia.git
cd openfisca-tunisia
pip install -e .
```
:tada: Congratulations, you have now finished installing OpenFisca Tunisia!
You can check that your environment works by running the tests as described in the next section.
## Test
We assume you are in the `openfisca-tunisia` directory and that your virtual environment is activated.
Start by installing the test tools:
```
pip install -e .[dev]
```
Tests can then be written in two formats: Python or YAML.
### Python tests
A test written in Python can be run with [pytest](https://docs.pytest.org), which executes the Python functions whose names start with `test`.
For example, to run the Python test `tests/test_simple.py`, use:
```
pytest tests/test_simple.py
```
You can also run a single test from a file. In the following example, `test_1_parent` is the only test run from `tests/core_tests.py`:
```
pytest tests/core_tests.py::test_1_parent
```
### YAML tests
The format of a YAML test is described in the [YAML tests section](http://openfisca.org/doc/coding-the-legislation/writing_yaml_tests.html) of the official documentation.
To run the YAML test `tests/formulas/irpp.yaml`, use:
```sh
openfisca test -c openfisca_tunisia tests/formulas/irpp.yaml
```
### Debug
To test with a debugger, add a breakpoint in the Python code called by the test:
```py
import ipdb; ipdb.set_trace()
```
Install the `ipdb` library with:
```sh
pip install ipdb
```
And run the test again (with the `-s` option of the `pytest` command).
### Running everything
All tests and examples defined in OpenFisca-Tunisia can be run with a single command. This does, however, require extra libraries for the examples written as [Jupyter notebooks](http://jupyter.org):
```
pip install -e .[notebook]
```
Everything can then be launched with:
```
make test
```
To learn more, see the [Tests section](http://openfisca.org/doc/contribute/tests.html) of the official documentation.
## Web API
The API comes from the [GitHub repository of the central OpenFisca-Core module](https://github.com/openfisca/openfisca-core).
To consult its version `v0.13.0`, simply query one of its endpoints.
For example, the list of parameters is available at:
```
www.openfisca.tn/api/v0.13.0/parameters
```
To learn more, we recommend reading its [official documentation](http://openfisca.org/doc/openfisca-web-api/preview-api.html).
| text/markdown | null | OpenFisca Team <contact@openfisca.org> | null | null | null | microsimulation, tax, benefit, rac, rules-as-code, tunisia | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"openfisca-core[web-api]<45,>=44",
"autopep8<3.0,>=2.0.2; extra == \"dev\"",
"ruff<0.9,>=0.8.0; extra == \"dev\"",
"Flake8-pyproject<2.0.0,>=1.2.3; extra == \"dev\"",
"flake8<7.0.0,>=6.0.0; extra == \"dev\"",
"flake8-print<6.0.0,>=5.0.0; extra == \"dev\"",
"flake8-quotes>=3.3.2; extra == \"dev\"",
"py... | [] | [] | [] | [
"Homepage, https://github.com/openfisca/openfisca-tunisia",
"Repository, https://github.com/openfisca/openfisca-tunisia",
"Documentation, https://openfisca.org/doc",
"Issues, https://github.com/openfisca/openfisca-tunisia/issues",
"Changelog, https://github.com/openfisca/openfisca-tunisia/blob/main/CHANGELO... | twine/6.2.0 CPython/3.10.11 | 2026-02-18T08:52:10.325290 | openfisca_tunisia-0.59.tar.gz | 331,724 | 2a/4f/2beba998d70365d0d994b0561cab9dc4a92e5cedab64adfddaf3319d7de3/openfisca_tunisia-0.59.tar.gz | source | sdist | null | false | 400a515fa5544c4393de64617c9db816 | 9eeaf5ca9a0a2f59e7730fac7246fed4e477d11f048a8a137ac74df9987b11c7 | 2a4f2beba998d70365d0d994b0561cab9dc4a92e5cedab64adfddaf3319d7de3 | null | [
"LICENSE.AGPL.txt"
] | 0 |
2.4 | fedivertex | 1.0.0 | Interface to download and interact with Fedivertex, the Fediverse Graph Dataset | # Python API to interact with Fedivertex, the Fediverse Graph Dataset
This Python package provides a simple interface to interact with Fedivertex: https://www.kaggle.com/datasets/marcdamie/fediverse-graph-dataset/data.
Our package automatically downloads the dataset from Kaggle and loads graphs in a usable format (i.e., NetworkX).
The Fediverse Graph dataset provides graphs for different decentralized social media.
These graphs represent the interactions between servers on these decentralized social media platforms.
The graph type corresponds to the type of interaction modelled by the graph.
Finally, the dataset provides graphs captured on different dates, so that users can analyze the evolution of the interactions.
Refer to this [repository](https://github.com/MarcT0K/Franck) to discover more about the data acquisition.
## Extracting a graph
Three pieces of information are necessary to select a graph in the dataset: the software/social media, the graph type, and the date.
We provide graphs using the [NetworkX](https://networkx.org/) format.
**Example**:
```python3
from fedivertex import GraphLoader
loader = GraphLoader()
graph = loader.get_graph(software="peertube", graph_type="follow", date="20250324")
graph = loader.get_graph(software="peertube", graph_type="follow") # Loads the most recent graph
```
In each graph, we also provide metadata in the attributes of the graph nodes.
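Node attributes follow the standard NetworkX convention, so they can be read like any NetworkX graph. The snippet below uses a small hand-built graph as a stand-in for one returned by `loader.get_graph(...)`; the attribute names are illustrative, not the dataset's actual schema:

```python3
import networkx as nx

# Hand-built stand-in for a graph returned by loader.get_graph(...);
# the attribute names below are illustrative, not the real schema.
graph = nx.DiGraph()
graph.add_node("tube.example.org", software_version="6.0", country="FR")
graph.add_node("video.example.net", software_version="5.2", country="NL")
graph.add_edge("tube.example.org", "video.example.net")

# Per-node metadata lives in each node's attribute dict
for node, attrs in graph.nodes(data=True):
    print(node, attrs["country"])
```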
## Utility functions
Finally, we provide a few utility functions:
```python3
from fedivertex import GraphLoader
loader = GraphLoader()
loader.list_all_software()
loader.list_graph_types("peertube")
loader.list_available_dates("peertube", "follow")
```
| text/markdown | Marc DAMIE | marc.damie@inria.fr | null | null | GPLv3 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy<2.0",
"mlcroissant",
"networkx",
"networkx-temporal",
"tqdm",
"pytest; extra == \"test\"",
"pytest-coverage; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:51:17.480333 | fedivertex-1.0.0.tar.gz | 18,574 | 2c/99/2ead7ab1e1ae857cbe5131e89be887f97ad6e8e7541196a53915e70395da/fedivertex-1.0.0.tar.gz | source | sdist | null | false | b20d766fe2babfc5abe5f6415a2a58d2 | 5d752d74ab6b5a1e793772a116693acc285f1bb098dc3bb0f628fbf43f3bfe1a | 2c992ead7ab1e1ae857cbe5131e89be887f97ad6e8e7541196a53915e70395da | null | [
"LICENSE"
] | 265 |
2.4 | sunpeek | 0.7.7 | Large Solar Thermal Monitoring Tool. Implements the Power Check Method of ISO 24194 | 
# About SunPeek

[](https://hub.docker.com/r/sunpeek/sunpeek)
[](https://pypi.org/project/sunpeek/)
SunPeek implements a dynamic, in situ test methodology for large solar thermal plants, packaged as an open source software application and Python library. It includes the first open source implementation of the ISO 24194 Power Check procedure for verifying the performance of solar thermal collector fields.
## What SunPeek is Used For
SunPeek can be applied to solar thermal plants *in operation* when measurement data is available. The following use cases are supported:
- **Performance Assessment**:
SunPeek estimates the expected power output of solar thermal collector fields based on certified collector parameters (ISO 9806) and measured operating conditions. The estimated power in certain valid intervals (criteria defined in ISO 24194) is compared with measured power to assess whether the plant works as expected. This can be used for quality assurance, performance guarantees, or acceptance testing.
- **Performance Monitoring**:
By continuously computing the ratio of measured versus estimated power over time, SunPeek enables identification of performance degradation, anomalies, and maintenance needs. This is particularly valuable for detecting the effects of collector soiling and verifying the impact of cleaning operations.
For detailed information about the Power Check methodology and workflow, see the [Power Check FAQs](https://docs.sunpeek.org/quick_start/overview/power_check_faq.html).
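As a rough illustration of the arithmetic behind a power check, consider a simplified steady-state collector model. This sketch is not the SunPeek API; the efficiency parameters (`eta0`, `a1`, `a2`) and the measured value are made-up placeholders, and ISO 24194 adds incidence-angle modifiers, diffuse terms, and interval-validity criteria on top of this:

```python
# Illustrative arithmetic only -- not the SunPeek API.
def estimated_power_kw(area_m2: float, g_wm2: float, t_mean: float, t_amb: float,
                       eta0: float = 0.78, a1: float = 3.5, a2: float = 0.015) -> float:
    """Simplified steady-state collector output (placeholder parameters)."""
    dt = t_mean - t_amb  # mean fluid temperature above ambient, K
    return area_m2 * (eta0 * g_wm2 - a1 * dt - a2 * dt ** 2) / 1000.0

estimated = estimated_power_kw(area_m2=1000.0, g_wm2=900.0, t_mean=60.0, t_amb=25.0)
measured = 530.0  # kW, invented plant-metering value
print(f"measured/estimated = {measured / estimated:.2f}")  # → 0.94
```

Tracking that ratio over time is what makes degradation or soiling visible.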
## Who SunPeek is For
SunPeek is designed for solar thermal plant operators, performance engineers, researchers, and equipment manufacturers. Serving as the reference software implementation of ISO 24194, it makes professional-grade solar thermal testing accessible, transparent, and auditable for everyone—from large commercial installations to smaller systems and developing markets.
## Handling Real-World Conditions
Unlike idealized lab testing, SunPeek handles real-world operational challenges including data gaps, sensor failures, diverse collector configurations, varying measurement setups, and complex plant hydraulics. The software implements comprehensive data validation, automated quality checks, and flexible sensor mapping to work with the data you actually have.
## Flexible Deployment
SunPeek is available in two complementary forms:
- **Web Application**, the [SunPeek WebUI](https://gitlab.com/sunpeek/web-ui): A complete, containerized web interface that makes configuration and ongoing monitoring of one or several solar thermal plants simple and intuitive. Designed for plant operators and engineers who need a ready-to-use solution accessible from any browser, with guided plant configuration, interactive data visualization, and automated report generation.
- **Python Library**, in this repository: Direct programmatic access to all SunPeek functionality for researchers, automation workflows, and integration into other tools. Enables custom analysis, algorithm development, and flexible data processing pipelines.
Both modes use the same underlying calculation engine, ensuring consistent and reproducible results across deployment types.
## Resources
| Resource | Link |
|----------------------------------|-----------------------------------------------------------------------------------------------|
| **Documentation & Installation** | [docs.sunpeek.org](https://docs.sunpeek.org) |
| **Website** | [sunpeek.org](https://sunpeek.org) |
| **SunPeek FAQs** | [SunPeek Overview](https://docs.sunpeek.org/quick_start/overview/sunpeek_faq.html) |
| **Publications** | [Zenodo community](https://zenodo.org/communities/sunpeek) |
## License and Copyright
Except where specifically noted otherwise, SunPeek is made available under the GNU Lesser General Public License. This means
that you can use the software, copy it, redistribute it and include it in other software, including commercial, proprietary
software, for free, as long as you abide by the terms of the GNU GPL, with the exceptions provided by the LGPL. In particular,
if you redistribute a modified version of the software, you must make the source code of your modifications available, and
if you include the software in another piece of software or physical product, you must give users notice that SunPeek is
used, and inform them where to obtain a copy of the SunPeek source code and license.
Note that the [SunPeek WebUI](https://gitlab.com/sunpeek/web-ui) is covered by a separate license, the BSD-3-Clause, see:
[BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause)
For copyright and license information, see:
* [AUTHORS.md](AUTHORS.md) - Copyright holders
* [NOTICES.md](NOTICES.md) - License notices and third-party attributions
* [COPYING.LESSER](COPYING.LESSER) and [COPYING](COPYING) - License text
## Contributing
Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for developer setup instructions.
| text/markdown | Philip Ohnewein, Daniel Tschopp, Lukas Emberger, Marnoch Hamilton-Jones, Jonathan Cazco, Peter Zauner | null | Marnoch Hamilton-Jones | m.hamilton-jones@sunpeek.org | null | solarthermal, solar, energy, monitoring | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering"
] | [] | https://gitlab.com/sunpeek/sunpeek | null | <3.14,>=3.11 | [] | [] | [] | [
"coolprop",
"lxml",
"matplotlib>=3.7",
"metpy",
"numpy>=2",
"orjson",
"pandas>=2",
"parquet-datastore-utils>=0.1.14",
"pendulum>=3.0.0",
"pint>=0.22",
"pint-pandas>=0.2",
"protobuf",
"pvlib",
"ephem",
"pydantic>=2",
"pyperclip>=1.8.2",
"pypdf>=6",
"pyproj",
"python-dotenv",
"sc... | [] | [] | [] | [
"Repository, https://gitlab.com/sunpeek/sunpeek",
"Documentation, https://docs.sunpeek.org"
] | poetry/2.2.1 CPython/3.13.9 Linux/5.15.154+ | 2026-02-18T08:50:52.220258 | sunpeek-0.7.7-py3-none-any.whl | 477,434 | 44/2e/f880bf3b117765f31d26c2cb9a2ed291576f683129b5be916f32078a59af/sunpeek-0.7.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 2581ed5154340a855715617fbe54a3d0 | f6c362e26a007921679cc08fd2bdcd5a558d5efbd8420578462c1d7fb0790d37 | 442ef880bf3b117765f31d26c2cb9a2ed291576f683129b5be916f32078a59af | null | [] | 108 |
2.4 | pine-assistant | 0.2.0 | Pine AI SDK — Let Pine AI handle your digital chores. Socket.IO + REST client. | # pine-assistant
[](https://pypi.org/project/pine-assistant/)
[](https://pypi.org/project/pine-assistant/)
[](./LICENSE)
Pine AI SDK for Python. Let Pine AI handle your digital chores.
## Install
```bash
pip install pine-assistant          # SDK only
pip install "pine-assistant[cli]"   # SDK + CLI
```
## Quick Start (Async)
```python
import asyncio

from pine_assistant import AsyncPineAI

async def main() -> None:
    client = AsyncPineAI(access_token="...", user_id="...")
    await client.connect()
    session = await client.sessions.create()
    await client.join_session(session["id"])
    async for event in client.chat(session["id"], "Negotiate my Comcast bill"):
        print(event.type, event.data)
    await client.disconnect()

asyncio.run(main())
```
## Quick Start (CLI)
```bash
pine auth login # Email verification
pine chat # Interactive REPL
pine send "Negotiate my Comcast bill" # One-shot message
pine sessions list # List sessions
pine task start <session-id> # Start task (Pro)
```
## Handling Events
Pine AI behaves like a human assistant. After you send a message, it sends
acknowledgments, then work logs, then the real response (form, text, or task_ready).
**Don't respond to acknowledgments** — only respond to forms, specific questions,
and task lifecycle events, or you'll create an infinite loop.
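That filtering rule can be sketched as a small predicate. The event-type strings below are assumptions based on the prose above, not the SDK's guaranteed `event.type` values:

```python
# Event-type names here are assumptions drawn from the docs above,
# not guaranteed values of `event.type`.
ACTIONABLE = {"form", "question", "task_ready"}  # respond only to these

def should_respond(event_type: str) -> bool:
    # Replying to acknowledgments or work logs would re-trigger the
    # assistant and create the infinite loop warned about above.
    return event_type in ACTIONABLE

print(should_respond("acknowledgment"))  # False
print(should_respond("form"))            # True
```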
## Continuing Existing Sessions
```python
# List all sessions
result = await client.sessions.list(limit=20)
# Continue an existing session
await client.join_session(existing_session_id)
history = await client.get_history(existing_session_id)
async for event in client.chat(existing_session_id, "What is the status?"):
...
```
## Attachments
```python
# Upload a document for dispute tasks
attachments = await client.sessions.upload_attachment("bill.pdf")
```
## Stream Buffering
Text streaming is buffered internally. You receive one merged text event,
not individual chunks. Work log parts are debounced (3s silence).
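Conceptually, the buffering amounts to accumulating chunks and emitting a single merged string. A toy sketch of that idea (the SDK's actual internals, including the 3s debounce timer, are not shown):

```python
class TextBuffer:
    """Toy sketch of chunk merging: emit one merged string instead of
    per-chunk events. Not the SDK's implementation."""

    def __init__(self) -> None:
        self._chunks: list[str] = []

    def feed(self, chunk: str) -> None:
        # Streaming chunks are collected instead of being surfaced one by one.
        self._chunks.append(chunk)

    def flush(self) -> str:
        # On flush (e.g. after a silence window), emit one merged text event.
        merged = "".join(self._chunks)
        self._chunks.clear()
        return merged

buf = TextBuffer()
for chunk in ["Negotiating", " your", " bill..."]:
    buf.feed(chunk)
print(buf.flush())  # "Negotiating your bill..."
```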
## Payment
Pro subscription recommended. For non-subscribers:
```python
from pine_assistant import AsyncPineAI
print(f"Pay at: {AsyncPineAI.session_url(session_id)}")
```
## License
MIT
| text/markdown | Pine AI | null | null | null | null | pine-assistant, pine, ai, customer-service, sdk, socketio | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed",
"Top... | [] | null | null | >=3.10 | [] | [] | [] | [
"python-socketio[asyncio]>=5.11.0",
"httpx>=0.27.0",
"pydantic>=2.0.0",
"click>=8.1.0; extra == \"cli\"",
"rich>=13.0.0; extra == \"cli\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/19PINE-AI/pine-assistant-python",
"Repository, https://github.com/19PINE-AI/pine-assistant-python",
"Issues, https://github.com/19PINE-AI/pine-assistant-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:50:17.690256 | pine_assistant-0.2.0.tar.gz | 20,603 | 58/11/01b80ee507b97d2e89ee5372db110265dfe0eecb64e5583bff26a234b076/pine_assistant-0.2.0.tar.gz | source | sdist | null | false | 8ae622f1d8da3844ce9a9b767c3b59a6 | 7c1304ece2158af8cbca47460dd54d5acc1d05a6075c1a244933b08bece0c64c | 581101b80ee507b97d2e89ee5372db110265dfe0eecb64e5583bff26a234b076 | MIT | [
"LICENSE"
] | 402 |
2.4 | scifem | 0.17.0 | Scientific tools for finite element methods | # Scientific Computing Tools for Finite Element Methods
This package contains a collection of tools for scientific computing with a focus on finite element methods. The tools are written in Python and are intended to be used in conjunction with [DOLFINx](https://github.com/FEniCS/dolfinx).
Many users transitioning from legacy FEniCS to FEniCSx find the transition difficult because some functionality is missing in FEniCSx.
This package aims to provide some of that missing functionality.
The package is still in its early stages, and many features are still missing.
## Features
- Real-space implementation for usage in DOLFINx (>=v0.8.0)
- Save quadrature functions as point clouds
- Save any function that can tabulate dof coordinates as point clouds.
- Point sources for usage in DOLFINx (>=v0.8.0)
- Point sources in vector spaces are only supported on v0.9.0, post [DOLFINx PR 3429](https://github.com/FEniCS/dolfinx/pull/3429).
For older versions, apply one point source in each sub space.
- Simplified wrapper to create MeshTags based on a list of tags and corresponding locator functions.
- Maps between degrees of freedom and vertices: `vertex_to_dofmap` and `dof_to_vertex`
- Blocked Newton Solver
- Function evaluation at specified points
- Interpolation matrices from any `ufl.core.expr.Expr` into a compatible space.
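As a mental model for the `vertex_to_dofmap` / `dof_to_vertex` maps listed above (independent of DOLFINx): the two maps are inverse permutations of each other. A minimal pure-Python sketch with made-up data:

```python
# Hypothetical illustration: the real scifem maps are computed from a
# dolfinx mesh and function space, not written out by hand.
vertex_to_dof = [2, 0, 3, 1]  # vertex i -> dof vertex_to_dof[i]

# Invert the permutation to obtain the dof -> vertex map.
dof_to_vertex = [0] * len(vertex_to_dof)
for vertex, dof in enumerate(vertex_to_dof):
    dof_to_vertex[dof] = vertex

print(dof_to_vertex)  # [1, 3, 0, 2]

# Round trip: composing the two maps yields the identity.
assert all(dof_to_vertex[vertex_to_dof[v]] == v for v in range(4))
```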
## Installation
The package is partly written in C++ and relies on `dolfinx`. Users are encouraged to install `scifem` with `pip` in an environment where `dolfinx` is already installed, or with `conda`.
### `pip`
To install the package with `pip` run
```bash
python3 -m pip install scifem --no-build-isolation
```
To install the development version you can run
```bash
python3 -m pip install --no-build-isolation git+https://github.com/scientificcomputing/scifem.git
```
Note that you should pass the flag `--no-build-isolation` to `pip` to avoid issues with the build environment, such as incompatible versions of `nanobind`.
### `spack`
The Spack package manager is the recommended way to install `scifem`, especially on HPC systems.
For information about the package see: [spack-package: py-scifem](https://packages.spack.io/package.html?name=py-scifem)
First, clone the spack repository and enable spack
```bash
git clone --depth=2 https://github.com/spack/spack.git
# For bash/zsh/sh
. spack/share/spack/setup-env.sh
# For tcsh/csh
source spack/share/spack/setup-env.csh
# For fish
. spack/share/spack/setup-env.fish
```
Next create an environment:
```bash
spack env create scifem_env
spack env activate scifem_env
```
Find the compilers on the system
```bash
spack compiler find
```
and install the relevant packages
```bash
spack add py-scifem+petsc+hdf5+biomed+adios2 ^mpich ^petsc+mumps+hypre ^py-fenics-dolfinx+petsc4py
spack concretize
spack install
```
Finally, note that Spack needs some packages already installed on your system. On a clean Ubuntu container, for example, one needs to install the following packages before running Spack:
```bash
apt update && apt install gcc unzip git python3-dev g++ gfortran xz-utils -y
```
### `conda`
To install the package with `conda` run
```bash
conda install -c conda-forge scifem
```
## Having issues or want to contribute?
If you are having issues, have a feature request, or would like to contribute, please let us know by opening an issue on the [issue tracker](https://github.com/scientificcomputing/scifem/issues).
| text/markdown | null | =?utf-8?q?J=C3=B8rgen_S=2E_Dokken?= <dokken@simula.no>, "Henrik N.T. Finsberg" <henriknf@simula.no> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fenics-dolfinx",
"numpy",
"packaging",
"h5py; extra == \"h5py\"",
"adios2; extra == \"adios2\"",
"jupyter-book<2.0; extra == \"docs\"",
"jupytext; extra == \"docs\"",
"pyvista[all]>=0.45.0; extra == \"docs\"",
"sphinxcontrib-bibtex; extra == \"docs\"",
"sphinx-codeautolink; extra == \"docs\"",
... | [] | [] | [] | [
"repository, https://github.com/scientificcomputing/scifem.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:49:57.697553 | scifem-0.17.0.tar.gz | 1,950,327 | 7f/e3/b7752f9e9df36ebdb5bc76baa3d397ea2e65527dd191d9f213e634cda35a/scifem-0.17.0.tar.gz | source | sdist | null | false | 691c88252cebf769abd80b6b0a05921f | 3b7fb2fe079619494c39b57629ac50b393b6f3d22a3c685fe56a377ce10eefda | 7fe3b7752f9e9df36ebdb5bc76baa3d397ea2e65527dd191d9f213e634cda35a | MIT | [] | 375 |
2.4 | pynwis | 0.1.2 | A lightweight Python toolkit for downloading, processing, and filtering USGS NWIS daily water data | # pyNWIS
<p align="center">
<img src="https://capsule-render.vercel.app/api?type=waving&height=180&color=0:0077b6,100:00b4d8&text=pyNWIS&fontColor=ffffff&fontSize=60&fontAlignY=35&desc=USGS%20Water%20Data%20for%20Python&descAlign=50&descAlignY=55" width="100%" alt="pyNWIS"/>
</p>
<p align="center">
<a href="https://pypi.org/project/pynwis/"><img src="https://img.shields.io/pypi/v/pynwis?color=00b4d8&style=for-the-badge" alt="PyPI"/></a>
<a href="https://pypi.org/project/pynwis/"><img src="https://img.shields.io/pypi/pyversions/pynwis?style=for-the-badge&color=0077b6" alt="Python"/></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow?style=for-the-badge" alt="License"/></a>
</p>
<p align="center">
A lightweight Python toolkit for downloading, processing, and filtering<br>
<b>USGS National Water Information System (NWIS)</b> daily water data.
</p>
---
## ✨ Features
| Feature | Description |
|---|---|
| 📡 **Daily Value Fetching** | Download daily values from the USGS NWIS API with automatic retries and rate-limit handling |
| 📦 **Batch Downloads** | Fetch data for hundreds of sites at once with progress bars |
| 🧹 **Tidy DataFrames** | Convert raw JSON responses into clean Pandas DataFrames |
| 🔍 **Parameter Search** | Built-in catalog of 30+ common USGS parameter codes with keyword search |
| 🎯 **Smart Filtering** | Keep only sites with sufficient data for your required variables |
---
## 🚀 Installation
```bash
pip install pynwis
```
Or install from source:
```bash
git clone https://github.com/Bluerrror/NWIS-Data-Downloader.git
cd NWIS-Data-Downloader
pip install -e .
```
**Requirements:** Python ≥ 3.8 | `requests` | `pandas` | `tqdm`
---
## 📖 Quick Start
### 1. Fetch discharge data for a single site
```python
from pynwis import fetch_usgs_daily, usgs_json_to_df
json_data = fetch_usgs_daily(
sites=["01491000"],
parameter_codes=["00060"], # Discharge (ft³/s)
start="2024-01-01",
end="2024-12-31",
)
df = usgs_json_to_df(json_data)
print(df.head())
# site_no time 00060
# 0 01491000 2024-01-01 222.0
# 1 01491000 2024-01-02 201.0
# 2 01491000 2024-01-03 187.0
```
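Under the hood, `fetch_usgs_daily` talks to the NWIS daily-values endpoint. The request it issues can be sketched with the standard library (endpoint and query parameter names follow the USGS Water Services documentation; `build_dv_url` itself is a hypothetical helper, and no network call is made here):

```python
from urllib.parse import urlencode

def build_dv_url(sites, parameter_codes, start, end):
    """Sketch of an NWIS daily-values request URL (no request is sent)."""
    params = {
        "format": "json",
        "sites": ",".join(sites),           # comma-separated site numbers
        "parameterCd": ",".join(parameter_codes),
        "startDT": start,                   # ISO dates, e.g. "2024-01-01"
        "endDT": end,
    }
    return "https://waterservices.usgs.gov/nwis/dv/?" + urlencode(params)

url = build_dv_url(["01491000"], ["00060"], "2024-01-01", "2024-12-31")
print(url)
```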
### 2. Search the parameter catalog
```python
from pynwis import get_usgs_parameters, search_parameters
params = get_usgs_parameters()
print(params.head())
# parm_cd group parameter_nm parameter_unit
# 0 00010 Physical Temperature, water, degrees Celsius deg C
# 1 00020 Physical Temperature, air, degrees Celsius deg C
# Find sediment-related parameters
results = search_parameters(params, "sediment")
print(results[["parm_cd", "parameter_nm"]])
# parm_cd parameter_nm
# 0 80154 Suspended sediment concentration, mg/L
# 1 80155 Suspended sediment discharge, short tons per day
# 2 80225 Bedload sediment discharge, short tons per day
```
### 3. Batch download for multiple sites
```python
from pynwis import fetch_batch_usgs_data
sites = ["01491000", "01646500", "09522500"]
df = fetch_batch_usgs_data(
sites=sites,
parameter_codes=["00060"], # Discharge
start="2020-01-01",
)
print(df.shape)
print(df.head())
```
> **Tip:** Use `required_params=["80154"]` and `min_records=100` to keep only sites
> that have at least 100 suspended-sediment records.
---
## 📋 Common Parameter Codes
| Code | Name | Description | Units |
|------|------|-------------|-------|
| `00010` | Temperature | Water temperature | °C |
| `00060` | Discharge | Streamflow discharge | ft³/s |
| `00065` | Gage Height | Gage height | ft |
| `00045` | Precipitation | Precipitation depth | in |
| `00400` | pH | pH value | std units |
| `00300` | Dissolved O₂ | Dissolved oxygen | mg/L |
| `00630` | NO₃ + NO₂ | Nitrate plus nitrite | mg/L as N |
| `80154` | SSC | Suspended sediment concentration | mg/L |
| `80155` | SS Discharge | Suspended sediment discharge | tons/day |
> **Tip:** Call `get_usgs_parameters()` for the full built-in catalog, or use `search_parameters()` to find codes by keyword.
---
## 📚 API Reference
### Core Functions
| Function | Description |
|---|---|
| `fetch_usgs_daily(sites, parameter_codes, ...)` | Fetch raw NWIS daily-values JSON for one or more sites |
| `usgs_json_to_df(json_data)` | Convert NWIS JSON response into a tidy DataFrame |
| `fetch_batch_usgs_data(sites, parameter_codes, ...)` | Batch fetch with progress bars, retries, and filtering |
### Parameter Utilities
| Function | Description |
|---|---|
| `get_usgs_parameters()` | Return the built-in catalog of common parameter codes |
| `search_parameters(params_df, query, ...)` | Search parameters by keyword |
---
## 🤝 Contributing
1. Fork the repo
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Commit changes: `git commit -m "Add amazing feature"`
4. Push: `git push origin feature/amazing-feature`
5. Open a Pull Request
---
## 📄 License
MIT License — see [LICENSE](LICENSE) for details.
---
## 🙏 Acknowledgments
Built on the [USGS Water Services API](https://waterservices.usgs.gov).
<img width="100%" src="https://capsule-render.vercel.app/api?type=waving&color=0:0077b6,100:00b4d8&height=100&section=footer"/>
| text/markdown | Shahab Shojaee | null | null | null | null | hydrology, USGS, NWIS, water, data, streamflow, sediment | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Intended Audienc... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0",
"pandas>=1.3.0",
"tqdm>=4.62.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Bluerrror/NWIS-Data-Downloader",
"Repository, https://github.com/Bluerrror/NWIS-Data-Downloader",
"Issues, https://github.com/Bluerrror/NWIS-Data-Downloader/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T08:49:42.389970 | pynwis-0.1.2.tar.gz | 12,070 | ca/d3/8b94f538cca271fe6d5605a2df3a0115c70479e1c7b5b704733daf400ef6/pynwis-0.1.2.tar.gz | source | sdist | null | false | 58f5b0d83ec8f7a2a13d2f08371b2e68 | 5c016f3dff7e392b502a8a9c01e6dad89c8ad9152b27e2d32a2895384132a66a | cad38b94f538cca271fe6d5605a2df3a0115c70479e1c7b5b704733daf400ef6 | MIT | [
"LICENSE"
] | 278 |
2.4 | neo4j-agent-memory | 0.0.3 | A comprehensive memory system for AI agents using Neo4j | # Neo4j Agent Memory
A graph-native memory system for AI agents. Store conversations, build knowledge graphs, and let your agents learn from their own reasoning -- all backed by Neo4j.
[](https://neo4j.com/labs/)
[](https://neo4j.com/labs/)
[](https://community.neo4j.com)
[](https://github.com/neo4j-labs/agent-memory/actions/workflows/ci.yml)
[](https://badge.fury.io/py/neo4j-agent-memory)
[](https://pypi.org/project/neo4j-agent-memory/)
[](https://opensource.org/licenses/Apache-2.0)
> ⚠️ **This is a Neo4j Labs project.** It is actively maintained but not officially supported. There are no SLAs or guarantees around backwards compatibility and deprecation. For questions and support, please use the [Neo4j Community Forum](https://community.neo4j.com).
> **See it in action**: The [Lenny's Podcast Memory Explorer](examples/lennys-memory/) demo loads 299 podcast episodes into a searchable knowledge graph with an AI chat agent, interactive graph visualization, geospatial map view, and Wikipedia-enriched entity cards.
## Features
- **Three Memory Types**: Short-Term (conversations), Long-Term (facts/preferences), and Reasoning (reasoning traces)
- **POLE+O Data Model**: Configurable entity schema based on Person, Object, Location, Event, Organization types with subtypes
- **Multi-Stage Entity Extraction**: Pipeline combining spaCy, GLiNER2, and LLM extractors with configurable merge strategies
- **Batch & Streaming Extraction**: Process multiple texts in parallel or stream results for long documents
- **Entity Resolution**: Multi-strategy deduplication (exact, fuzzy, semantic matching) with type-aware resolution
- **Entity Deduplication on Ingest**: Automatic duplicate detection with configurable auto-merge and flagging
- **Provenance Tracking**: Track where entities were extracted from and which extractor produced them
- **Background Entity Enrichment**: Automatically enrich entities with Wikipedia and Diffbot data
- **Relationship Extraction & Storage**: Extract relationships using GLiREL (no LLM) and automatically store as graph relationships
- **Vector + Graph Search**: Semantic similarity search and graph traversal in a single database
- **Geospatial Queries**: Spatial indexes on Location entities for radius and bounding box search
- **Temporal Relationships**: Track when facts become valid or invalid
- **CLI Tool**: Command-line interface for entity extraction and schema management
- **Observability**: OpenTelemetry and Opik tracing for monitoring extraction pipelines
- **Agent Framework Integrations**: LangChain, Pydantic AI, LlamaIndex, CrewAI, OpenAI Agents, Strands Agents (AWS)
- **Amazon Bedrock Embeddings**: Use Titan or Cohere embedding models via AWS Bedrock
- **AWS Hybrid Memory**: HybridMemoryProvider with intelligent routing between short-term and long-term memory
## Installation
```bash
# Basic installation
pip install neo4j-agent-memory
# With OpenAI embeddings
pip install neo4j-agent-memory[openai]
# With Google Cloud (Vertex AI embeddings)
pip install neo4j-agent-memory[vertex-ai]
# With Amazon Bedrock embeddings
pip install neo4j-agent-memory[bedrock]
# With AWS Strands Agents
pip install neo4j-agent-memory[strands]
# With all AWS integrations (Bedrock + Strands + AgentCore)
pip install neo4j-agent-memory[aws]
# With Google ADK integration
pip install neo4j-agent-memory[google-adk]
# With MCP server
pip install neo4j-agent-memory[mcp]
# With spaCy for fast entity extraction
pip install neo4j-agent-memory[spacy]
python -m spacy download en_core_web_sm
# With LangChain integration
pip install neo4j-agent-memory[langchain]
# With CLI tools
pip install neo4j-agent-memory[cli]
# With observability (OpenTelemetry)
pip install neo4j-agent-memory[opentelemetry]
# With all optional dependencies
pip install neo4j-agent-memory[all]
```
Using uv:
```bash
uv add neo4j-agent-memory
uv add neo4j-agent-memory --extra openai
uv add neo4j-agent-memory --extra spacy
```
## Quick Start
```python
import asyncio
from pydantic import SecretStr
from neo4j_agent_memory import MemoryClient, MemorySettings, Neo4jConfig
async def main():
# Configure settings
settings = MemorySettings(
neo4j=Neo4jConfig(
uri="bolt://localhost:7687",
username="neo4j",
password=SecretStr("your-password"),
)
)
# Use the memory client
async with MemoryClient(settings) as memory:
# Store a conversation message
await memory.short_term.add_message(
session_id="user-123",
role="user",
content="Hi, I'm John and I love Italian food!"
)
# Add a preference
await memory.long_term.add_preference(
category="food",
preference="Loves Italian cuisine",
context="Dining preferences"
)
# Search for relevant memories
preferences = await memory.long_term.search_preferences("restaurant recommendation")
for pref in preferences:
print(f"[{pref.category}] {pref.preference}")
# Get combined context for an LLM prompt
context = await memory.get_context(
"What restaurant should I recommend?",
session_id="user-123"
)
print(context)
asyncio.run(main())
```
## Memory Types
### Short-Term Memory
Stores conversation history and experiences:
```python
# Add messages to a conversation
await memory.short_term.add_message(
session_id="user-123",
role="user",
content="I'm looking for a restaurant"
)
# Get conversation history
conversation = await memory.short_term.get_conversation("user-123")
for msg in conversation.messages:
print(f"{msg.role}: {msg.content}")
# Search past messages
results = await memory.short_term.search_messages("Italian food")
```
### Long-Term Memory
Stores facts, preferences, and entities:
```python
# Add entities with POLE+O types and subtypes
entity = await memory.long_term.add_entity(
name="John Smith",
entity_type="PERSON", # POLE+O type
subtype="INDIVIDUAL", # Optional subtype
description="A customer who loves Italian food"
)
# Add preferences
pref = await memory.long_term.add_preference(
category="food",
preference="Prefers vegetarian options",
context="When dining out"
)
# Add facts with temporal validity
from datetime import datetime
fact = await memory.long_term.add_fact(
subject="John",
predicate="works_at",
obj="Acme Corp",
valid_from=datetime(2023, 1, 1)
)
# Search for relevant entities
entities = await memory.long_term.search_entities("Italian restaurants")
```
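The temporal validity above boils down to a simple interval check: a fact holds at a given instant if that instant falls within `[valid_from, valid_to)`. A sketch of the semantics (not the library's implementation):

```python
from datetime import datetime
from typing import Optional

def is_valid_at(valid_from: Optional[datetime],
                valid_to: Optional[datetime],
                at: datetime) -> bool:
    """A fact holds at `at` if it lies inside [valid_from, valid_to).
    A missing bound means the fact is open-ended on that side."""
    if valid_from is not None and at < valid_from:
        return False
    if valid_to is not None and at >= valid_to:
        return False
    return True

# "John works_at Acme Corp" valid from 2023-01-01, still open-ended:
print(is_valid_at(datetime(2023, 1, 1), None, datetime(2024, 6, 1)))  # True
print(is_valid_at(datetime(2023, 1, 1), None, datetime(2022, 6, 1)))  # False
```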
### Reasoning Memory
Stores reasoning traces and tool usage patterns:
```python
from neo4j_agent_memory import ToolCallStatus  # import path assumed
# Start a reasoning trace (optionally linked to a triggering message)
trace = await memory.reasoning.start_trace(
session_id="user-123",
task="Find a restaurant recommendation",
triggered_by_message_id=user_message.id, # Optional: link to message
)
# Add reasoning steps
step = await memory.reasoning.add_step(
trace.id,
thought="I should search for nearby restaurants",
action="search_restaurants"
)
# Record tool calls (optionally linked to a message)
await memory.reasoning.record_tool_call(
step.id,
tool_name="search_api",
arguments={"query": "Italian restaurants"},
result=["La Trattoria", "Pasta Palace"],
status=ToolCallStatus.SUCCESS,
duration_ms=150,
message_id=user_message.id, # Optional: link tool call to message
)
# Complete the trace
await memory.reasoning.complete_trace(
trace.id,
outcome="Recommended La Trattoria",
success=True
)
# Find similar past tasks
similar = await memory.reasoning.get_similar_traces("restaurant recommendation")
# Link an existing trace to a message (post-hoc)
await memory.reasoning.link_trace_to_message(trace.id, message.id)
```
## Advanced Features
### Session Management
List and manage conversation sessions:
```python
# List all sessions with metadata
sessions = await memory.short_term.list_sessions(
prefix="user-", # Optional: filter by prefix
limit=50,
offset=0,
order_by="updated_at", # "created_at", "updated_at", or "message_count"
order_dir="desc",
)
for session in sessions:
print(f"{session.session_id}: {session.message_count} messages")
print(f" First: {session.first_message_preview}")
print(f" Last: {session.last_message_preview}")
```
### Metadata-Based Search
Search messages with MongoDB-style metadata filters:
```python
# Search with metadata filters
results = await memory.short_term.search_messages(
"restaurant",
session_id="user-123",
metadata_filters={
"speaker": "Lenny", # Exact match
"turn_index": {"$gt": 5}, # Greater than
"source": {"$in": ["web", "mobile"]}, # In list
"archived": {"$exists": False}, # Field doesn't exist
},
limit=10,
)
```
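To make the filter semantics concrete, here is a toy evaluator for the operators shown above (`$gt`, `$in`, `$exists`, bare-value exact match). This is an illustration of the semantics only, not the library's server-side implementation:

```python
def matches(metadata: dict, filters: dict) -> bool:
    """Toy sketch of MongoDB-style filter matching (not the library's code)."""
    for key, cond in filters.items():
        if isinstance(cond, dict):
            for op, arg in cond.items():
                if op == "$gt" and not (key in metadata and metadata[key] > arg):
                    return False
                elif op == "$in" and metadata.get(key) not in arg:
                    return False
                elif op == "$exists" and (key in metadata) != arg:
                    return False
        elif metadata.get(key) != cond:  # bare value means exact match
            return False
    return True

meta = {"speaker": "Lenny", "turn_index": 7, "source": "web"}
print(matches(meta, {"speaker": "Lenny", "turn_index": {"$gt": 5}}))  # True
print(matches(meta, {"archived": {"$exists": True}}))                 # False
```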
### Conversation Summaries
Generate summaries of conversations:
```python
# Basic summary (no LLM required)
summary = await memory.short_term.get_conversation_summary("user-123")
print(summary.summary)
print(f"Messages: {summary.message_count}")
print(f"Key entities: {summary.key_entities}")
# With custom LLM summarizer
async def my_summarizer(transcript: str) -> str:
# Your LLM call here
response = await openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "system", "content": "Summarize this conversation concisely."},
{"role": "user", "content": transcript}
]
)
return response.choices[0].message.content
summary = await memory.short_term.get_conversation_summary(
"user-123",
summarizer=my_summarizer,
include_entities=True,
)
```
### Streaming Trace Recording
Record reasoning traces during streaming responses:
```python
from neo4j_agent_memory import StreamingTraceRecorder
async with StreamingTraceRecorder(
memory.reasoning,
session_id="user-123",
task="Process customer inquiry"
) as recorder:
# Start a step
step = await recorder.start_step(
thought="Analyzing the request",
action="analyze",
)
# Record tool calls
await recorder.record_tool_call(
"search_api",
{"query": "customer history"},
{"found": 5, "results": [...]},
)
# Add observations
await recorder.add_observation("Found 5 relevant records")
# Start another step
await recorder.start_step(thought="Formulating response")
# Trace is automatically completed with timing when context exits
```
### List and Filter Traces
Query reasoning traces with filtering and pagination:
```python
# List traces with filters
traces = await memory.reasoning.list_traces(
session_id="user-123", # Optional session filter
success_only=True, # Only successful traces
since=datetime(2024, 1, 1), # After this date
until=datetime(2024, 12, 31), # Before this date
limit=50,
offset=0,
order_by="started_at", # "started_at" or "completed_at"
order_dir="desc",
)
for trace in traces:
print(f"{trace.task}: {'Success' if trace.success else 'Failed'}")
```
### Tool Statistics (Optimized)
Get pre-aggregated tool usage statistics:
```python
# Get stats for all tools (uses pre-aggregated data for speed)
stats = await memory.reasoning.get_tool_stats()
for tool in stats:
print(f"{tool.name}:")
print(f" Total calls: {tool.total_calls}")
print(f" Success rate: {tool.success_rate:.1%}")
print(f" Avg duration: {tool.avg_duration_ms}ms")
# Migrate existing data to use pre-aggregation
migrated = await memory.reasoning.migrate_tool_stats()
print(f"Migrated stats for {len(migrated)} tools")
```
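The pre-aggregated numbers correspond to straightforward reductions over the recorded calls. A sketch of the arithmetic, using hypothetical call records rather than the library's storage format:

```python
calls = [  # hypothetical recorded tool calls
    {"tool": "search_api", "success": True,  "duration_ms": 120},
    {"tool": "search_api", "success": True,  "duration_ms": 180},
    {"tool": "search_api", "success": False, "duration_ms": 90},
]

total = len(calls)
successes = sum(1 for c in calls if c["success"])
success_rate = successes / total                              # fraction in [0, 1]
avg_duration = sum(c["duration_ms"] for c in calls) / total   # mean over all calls

print(f"search_api: {total} calls, {success_rate:.1%} success, "
      f"{avg_duration:.0f}ms avg")
```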
### Graph Export for Visualization
Export memory graph data for visualization with flexible filtering:
```python
# Export the full memory graph
graph = await memory.get_graph(
memory_types=["short_term", "long_term", "reasoning"], # Optional filter
session_id="user-123", # Optional: scope to a specific conversation
include_embeddings=False, # Don't include large embedding vectors
limit=1000,
)
print(f"Nodes: {len(graph.nodes)}")
print(f"Relationships: {len(graph.relationships)}")
# Access graph data
for node in graph.nodes:
print(f"{node.labels}: {node.properties.get('name', node.id)}")
for rel in graph.relationships:
print(f"{rel.from_node} -[{rel.type}]-> {rel.to_node}")
```
**Conversation-Scoped Graphs**: Use `session_id` to export only the memory associated with a specific conversation:
```python
# Get graph for a specific conversation (thread)
conversation_graph = await memory.get_graph(
session_id="thread-abc123", # Only nodes related to this session
include_embeddings=False,
)
# This returns:
# - Messages in that conversation
# - Entities mentioned in those messages
# - Reasoning traces from that session
# - Relationships connecting them
```
This is particularly useful for visualization UIs that want to show contextually relevant data rather than the entire knowledge graph.
### Location Queries
Query location entities with optional conversation filtering:
```python
# Get all locations with coordinates
locations = await memory.get_locations(has_coordinates=True)
# Get locations mentioned in a specific conversation
locations = await memory.get_locations(
session_id="thread-abc123", # Only locations from this conversation
has_coordinates=True,
limit=100,
)
# Each location includes:
# - id, name, subtype (city, country, landmark, etc.)
# - latitude, longitude coordinates
# - conversations referencing this location
```
**Geospatial Queries**: Search for locations by proximity or bounding box:
```python
# Find locations within 50km of a point
nearby = await memory.long_term.search_locations_near(
latitude=40.7128,
longitude=-74.0060,
radius_km=50,
session_id="thread-123", # Optional: filter by conversation
)
# Find locations in a bounding box (useful for map viewports)
in_view = await memory.long_term.search_locations_in_bounding_box(
min_lat=40.0,
max_lat=42.0,
min_lon=-75.0,
max_lon=-73.0,
session_id="thread-123", # Optional: filter by conversation
)
```
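Neo4j evaluates the radius condition server-side on the stored point properties. To make the semantics concrete, here is a plain-Python haversine sketch of the great-circle distance behind a radius query (an illustration only, not how the library queries Neo4j):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    earth_radius_km = 6371.0
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# Is Newark (~40.74, -74.17) within 50 km of Manhattan (40.7128, -74.0060)?
d = haversine_km(40.7128, -74.0060, 40.7357, -74.1724)
print(f"{d:.1f} km", d <= 50)
```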
### PydanticAI Trace Recording
Automatically record PydanticAI agent runs as reasoning traces:
```python
from pydantic_ai import Agent
from neo4j_agent_memory.integrations.pydantic_ai import record_agent_trace
agent = Agent('openai:gpt-4o')
# Run the agent
result = await agent.run("Find me a good restaurant")
# Record the trace automatically
trace = await record_agent_trace(
memory.reasoning,
session_id="user-123",
result=result,
task="Restaurant recommendation",
include_tool_calls=True,
)
print(f"Recorded trace with {len(trace.steps)} steps")
```
## POLE+O Data Model
The package uses the POLE+O data model for entity classification, an extension of the POLE (Person, Object, Location, Event) model commonly used in law enforcement and intelligence analysis:
| Type | Description | Example Subtypes |
|------|-------------|------------------|
| **PERSON** | Individuals, aliases, personas | INDIVIDUAL, ALIAS, PERSONA |
| **OBJECT** | Physical/digital items | VEHICLE, PHONE, EMAIL, DOCUMENT, DEVICE |
| **LOCATION** | Geographic areas, places | ADDRESS, CITY, REGION, COUNTRY, LANDMARK |
| **EVENT** | Incidents, occurrences | INCIDENT, MEETING, TRANSACTION, COMMUNICATION |
| **ORGANIZATION** | Companies, groups | COMPANY, NONPROFIT, GOVERNMENT, EDUCATIONAL |
### Using Entity Types and Subtypes
```python
from neo4j_agent_memory.memory.long_term import Entity, POLEO_TYPES
# Create an entity with type and subtype
entity = Entity(
name="Toyota Camry",
type="OBJECT",
subtype="VEHICLE",
description="Silver 2023 Toyota Camry"
)
# Access the full type (e.g., "OBJECT:VEHICLE")
print(entity.full_type)
# Available POLE+O types
print(POLEO_TYPES) # ['PERSON', 'OBJECT', 'LOCATION', 'EVENT', 'ORGANIZATION']
```
### Entity Type Labels in Neo4j
Entity `type` and `subtype` are automatically added as Neo4j node labels in addition to being stored as properties. This enables efficient querying by type:
```python
# When you create this entity:
await client.long_term.add_entity(
name="Toyota Camry",
entity_type="OBJECT",
subtype="VEHICLE",
description="Silver sedan"
)
# Neo4j creates a node with multiple PascalCase labels:
# (:Entity:Object:Vehicle {name: "Toyota Camry", type: "OBJECT", subtype: "VEHICLE", ...})
```
This allows efficient Cypher queries by type (using PascalCase labels):
```cypher
// Find all vehicles
MATCH (v:Vehicle) RETURN v
// Find all people
MATCH (p:Person) RETURN p
// Find all organizations
MATCH (o:Organization) RETURN o
// Combine with other criteria
MATCH (v:Vehicle {name: "Toyota Camry"}) RETURN v
```
**Custom Entity Types:** If you define custom entity types outside the POLE+O model, they are also added as PascalCase labels as long as they are valid Neo4j label identifiers (start with a letter, contain only letters, numbers, and underscores):
```python
# Custom types also become PascalCase labels
await client.long_term.add_entity(
name="Widget Pro",
entity_type="PRODUCT", # Custom type -> becomes :Product label
subtype="ELECTRONICS", # Custom subtype -> becomes :Electronics label
)
# Neo4j node: (:Entity:Product:Electronics {name: "Widget Pro", ...})
# Query custom types in Cypher:
#   MATCH (p:Product:Electronics) RETURN p
```
For POLE+O types, subtypes are validated against the known subtypes for that type. For custom types, any valid identifier can be used as a subtype.
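The label derivation described above (UPPER_SNAKE type or subtype → PascalCase Neo4j label, applied only when the result is a valid label identifier) can be sketched as follows. This is a simplified stand-in, not the package's actual function:

```python
import re

# Valid Neo4j-style label: starts with a letter, then letters/digits/underscores.
LABEL_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def to_label(entity_type: str):
    """UPPER_SNAKE type -> PascalCase label, or None if not a valid label."""
    if not LABEL_RE.match(entity_type):
        return None
    return "".join(part.capitalize() for part in entity_type.split("_"))

print(to_label("OBJECT"))    # Object
print(to_label("VEHICLE"))   # Vehicle
print(to_label("3D_MODEL"))  # None (labels must start with a letter)
```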
## Entity Extraction Pipeline
The package provides a multi-stage extraction pipeline that combines different extractors for optimal accuracy and cost efficiency:
### Pipeline Architecture
```
Text → [spaCy NER] → [GLiNER] → [LLM Fallback] → Merged Results
↓ ↓ ↓
Fast/Free Zero-shot High accuracy
```
### Using the Default Pipeline
```python
from neo4j_agent_memory.extraction import create_extractor
from neo4j_agent_memory.config import ExtractionConfig
# Create the default pipeline (spaCy → GLiNER → LLM)
config = ExtractionConfig(
extractor_type="PIPELINE",
enable_spacy=True,
enable_gliner=True,
enable_llm_fallback=True,
merge_strategy="confidence", # Keep highest confidence per entity
)
extractor = create_extractor(config)
result = await extractor.extract("John Smith works at Acme Corp in New York.")
```
### Building a Custom Pipeline
```python
from neo4j_agent_memory.extraction import ExtractorBuilder
# Use the fluent builder API
extractor = (
ExtractorBuilder()
.with_spacy(model="en_core_web_sm")
.with_gliner(model="urchade/gliner_medium-v2.1", threshold=0.5)
.with_llm_fallback(model="gpt-4o-mini")
.with_merge_strategy("confidence")
.build()
)
result = await extractor.extract("Meeting with Jane Doe at Central Park on Friday.")
for entity in result.entities:
print(f"{entity.name}: {entity.type} (confidence: {entity.confidence:.2f})")
```
### Merge Strategies
When combining results from multiple extractors:
| Strategy | Description |
|----------|-------------|
| `union` | Keep all unique entities from all stages |
| `intersection` | Only keep entities found by multiple extractors |
| `confidence` | Keep highest confidence result per entity |
| `cascade` | Use first extractor's results, fill gaps with others |
| `first_success` | Stop at first stage that returns results |
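As a toy illustration of the `confidence` strategy in plain Python (the entity dicts here are hypothetical, not the library's data model):

```python
def merge_by_confidence(*stages):
    """Sketch of the 'confidence' strategy: keep the highest-confidence
    result per entity name across all extractor stages."""
    best = {}
    for stage in stages:
        for entity in stage:
            name = entity["name"]
            if name not in best or entity["confidence"] > best[name]["confidence"]:
                best[name] = entity
    return list(best.values())

# Hypothetical outputs from two pipeline stages:
spacy_out = [{"name": "Acme Corp", "type": "ORG", "confidence": 0.70}]
gliner_out = [
    {"name": "Acme Corp", "type": "ORGANIZATION", "confidence": 0.92},
    {"name": "John Smith", "type": "PERSON", "confidence": 0.88},
]

merged = merge_by_confidence(spacy_out, gliner_out)
for e in merged:
    print(e["name"], e["type"], e["confidence"])
```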
### Individual Extractors
```python
from neo4j_agent_memory.extraction import (
SpacyEntityExtractor,
GLiNEREntityExtractor,
LLMEntityExtractor,
)
# spaCy - Fast, free, good for common entity types
spacy_extractor = SpacyEntityExtractor(model="en_core_web_sm")
# GLiNER - Zero-shot NER with custom entity types
gliner_extractor = GLiNEREntityExtractor(
model="gliner-community/gliner_medium-v2.5",
entity_types=["person", "organization", "location", "vehicle", "weapon"],
threshold=0.5,
)
# LLM - Most accurate but higher cost
llm_extractor = LLMEntityExtractor(
model="gpt-4o-mini",
entity_types=["PERSON", "ORGANIZATION", "LOCATION", "EVENT", "OBJECT"],
)
```
### GLiNER2 Domain Schemas
GLiNER2 supports domain-specific schemas that improve extraction accuracy by providing entity type descriptions:
```python
from neo4j_agent_memory.extraction import (
GLiNEREntityExtractor,
get_schema,
list_schemas,
)
# List available pre-defined schemas
print(list_schemas())
# ['poleo', 'podcast', 'news', 'scientific', 'business', 'entertainment', 'medical', 'legal']
# Create extractor with domain schema
extractor = GLiNEREntityExtractor.for_schema("podcast", threshold=0.45)
# Or use with the ExtractorBuilder
from neo4j_agent_memory.extraction import ExtractorBuilder
extractor = (
ExtractorBuilder()
.with_spacy()
.with_gliner_schema("scientific", threshold=0.5)
.with_llm_fallback()
.build()
)
# Extract entities from domain-specific content
result = await extractor.extract(podcast_transcript)
for entity in result.filter_invalid_entities().entities:
    print(f"{entity.name}: {entity.type} ({entity.confidence:.0%})")
```
**Available schemas:**
| Schema | Use Case | Key Entity Types |
|--------|----------|------------------|
| `poleo` | Investigations/Intelligence | person, organization, location, event, object |
| `podcast` | Podcast transcripts | person, company, product, concept, book, technology |
| `news` | News articles | person, organization, location, event, date |
| `scientific` | Research papers | author, institution, method, dataset, metric, tool |
| `business` | Business documents | company, person, product, industry, financial_metric |
| `entertainment` | Movies/TV content | actor, director, film, tv_show, character, award |
| `medical` | Healthcare content | disease, drug, symptom, procedure, body_part, gene |
| `legal` | Legal documents | case, person, organization, law, court, monetary_amount |
See `examples/domain-schemas/` for complete example applications for each schema.
### Batch Extraction
Process multiple texts in parallel for efficient bulk extraction:
```python
from neo4j_agent_memory.extraction import ExtractionPipeline
pipeline = ExtractionPipeline(stages=[extractor])
result = await pipeline.extract_batch(
texts=["Text 1...", "Text 2...", "Text 3..."],
batch_size=10,
max_concurrency=5,
on_progress=lambda done, total: print(f"{done}/{total}"),
)
print(f"Success rate: {result.success_rate:.1%}")
print(f"Total entities: {result.total_entities}")
```
### Streaming Extraction for Long Documents
Process very long documents (>100K tokens) efficiently:
```python
from neo4j_agent_memory.extraction import StreamingExtractor, create_streaming_extractor
# Create streaming extractor
streamer = create_streaming_extractor(extractor, chunk_size=4000, overlap=200)
# Stream results chunk by chunk
async for chunk_result in streamer.extract_streaming(long_document):
    print(f"Chunk {chunk_result.chunk.index}: {chunk_result.entity_count} entities")
# Or get complete result with automatic deduplication
result = await streamer.extract(long_document, deduplicate=True)
```
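The chunking behind streaming extraction can be sketched as follows (an illustration of the `chunk_size`/`overlap` idea only; the library's own chunker may split on sentence or token boundaries rather than raw character offsets):

```python
# Sketch: split text into overlapping character windows so entities that
# straddle a chunk boundary still appear whole in at least one chunk.
def chunk_text(text: str, chunk_size: int = 4000, overlap: int = 200):
    step = chunk_size - overlap
    index = 0
    for start in range(0, max(len(text), 1), step):
        yield index, start, text[start:start + chunk_size]
        index += 1
        if start + chunk_size >= len(text):
            break


chunks = list(chunk_text("a" * 10_000, chunk_size=4000, overlap=200))
# 3 chunks starting at offsets 0, 3800, 7600; neighbours share 200 characters
```

Because overlapping chunks can surface the same entity twice, a `deduplicate=True` pass that merges duplicates after all chunks are processed is the natural companion to this scheme.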
### GLiREL Relationship Extraction
Extract relationships between entities without LLM calls:
```python
from neo4j_agent_memory.extraction import GLiNERWithRelationsExtractor, is_glirel_available
if is_glirel_available():
    extractor = GLiNERWithRelationsExtractor.for_poleo()
    result = await extractor.extract("John works at Acme Corp in NYC.")
    print(result.entities)   # John, Acme Corp, NYC
    print(result.relations)  # John -[WORKS_AT]-> Acme Corp
```
### Automatic Relationship Storage
When adding messages with entity extraction enabled, extracted relationships are automatically stored as `RELATED_TO` relationships in Neo4j:
```python
# Relationships are stored automatically when adding messages
await memory.short_term.add_message(
"session-1",
"user",
"Brian Chesky founded Airbnb in San Francisco.",
extract_entities=True,
extract_relations=True, # Default: True
)
# This creates:
# - Entity nodes: Brian Chesky (PERSON), Airbnb (ORGANIZATION), San Francisco (LOCATION)
# - MENTIONS relationships: Message -> Entity
# - RELATED_TO relationships: (Brian Chesky)-[:RELATED_TO {relation_type: "FOUNDED"}]->(Airbnb)
# Batch operations also support relationship extraction
await memory.short_term.add_messages_batch(
"session-1",
messages,
extract_entities=True,
extract_relations=True, # Default: True (only applies when extract_entities=True)
)
# Or extract from existing session
result = await memory.short_term.extract_entities_from_session(
"session-1",
extract_relations=True, # Default: True
)
print(f"Extracted {result['relations_extracted']} relationships")
```
## Entity Deduplication
Automatic duplicate detection when adding entities:
```python
from neo4j_agent_memory.memory import LongTermMemory, DeduplicationConfig
config = DeduplicationConfig(
auto_merge_threshold=0.95, # Auto-merge above 95% similarity
flag_threshold=0.85, # Flag for review above 85%
use_fuzzy_matching=True,
)
memory = LongTermMemory(client, embedder, deduplication=config)
# add_entity returns (entity, dedup_result) tuple
entity, result = await memory.add_entity("Jon Smith", "PERSON")
if result.action == "merged":
    print(f"Auto-merged with {result.matched_entity_name}")
elif result.action == "flagged":
    print("Flagged for review")
```
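The two thresholds translate into a three-way decision. A sketch of that logic (the package uses RapidFuzz for fuzzy scores; stdlib `difflib` stands in here, and `dedup_action` is a hypothetical helper, not the package's API):

```python
# Sketch: map a name-similarity score onto the dedup decision.
from difflib import SequenceMatcher


def dedup_action(name_a: str, name_b: str,
                 auto_merge_threshold: float = 0.95,
                 flag_threshold: float = 0.85) -> str:
    score = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    if score >= auto_merge_threshold:
        return "merged"
    if score >= flag_threshold:
        return "flagged"
    return "created"


print(dedup_action("Jon Smith", "John Smith"))  # "flagged" (similar, below auto-merge)
print(dedup_action("Jon Smith", "Acme Corp"))   # "created" (no plausible match)
```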
## Provenance Tracking
Track where entities were extracted from:
```python
# Link entity to source message
await memory.long_term.link_entity_to_message(
entity, message_id,
confidence=0.95, start_pos=10, end_pos=20,
)
# Link to extractor
await memory.long_term.link_entity_to_extractor(
entity, "GLiNEREntityExtractor", confidence=0.95,
)
# Get provenance
provenance = await memory.long_term.get_entity_provenance(entity)
```
## Background Entity Enrichment
Automatically enrich entities with additional data from Wikipedia and Diffbot:
```python
from neo4j_agent_memory import MemorySettings, MemoryClient
from neo4j_agent_memory.config.settings import EnrichmentConfig, EnrichmentProvider
settings = MemorySettings(
enrichment=EnrichmentConfig(
enabled=True,
providers=[EnrichmentProvider.WIKIMEDIA], # Free, no API key needed
background_enabled=True, # Async processing
entity_types=["PERSON", "ORGANIZATION", "LOCATION"],
),
)
async with MemoryClient(settings) as client:
    # Entities are automatically enriched in the background
    entity, _ = await client.long_term.add_entity(
        "Albert Einstein", "PERSON", confidence=0.9,
    )
    # After enrichment: entity gains enriched_description, wikipedia_url, wikidata_id
# Direct provider usage
from neo4j_agent_memory.enrichment import WikimediaProvider
provider = WikimediaProvider()
result = await provider.enrich("Albert Einstein", "PERSON")
print(result.description) # "German-born theoretical physicist..."
print(result.wikipedia_url) # "https://en.wikipedia.org/wiki/Albert_Einstein"
```
Environment variables:
```bash
NAM_ENRICHMENT__ENABLED=true
NAM_ENRICHMENT__PROVIDERS=["wikimedia", "diffbot"]
NAM_ENRICHMENT__DIFFBOT_API_KEY=your-api-key # For Diffbot
```
## CLI Tool
Command-line interface for entity extraction and schema management:
```bash
# Install CLI extras
pip install neo4j-agent-memory[cli]
# Extract entities from text
neo4j-memory extract "John Smith works at Acme Corp in New York"
# Extract from a file with JSON output
neo4j-memory extract --file document.txt --format json
# Use different extractors
neo4j-memory extract "..." --extractor gliner
neo4j-memory extract "..." --extractor llm
# Schema management
neo4j-memory schemas list --password $NEO4J_PASSWORD
neo4j-memory schemas show my_schema --format yaml
# Statistics
neo4j-memory stats --password $NEO4J_PASSWORD
```
## Observability
Monitor extraction pipelines with OpenTelemetry or Opik:
```python
from neo4j_agent_memory.observability import get_tracer
# Auto-detect available provider
tracer = get_tracer()
# Or specify explicitly
tracer = get_tracer(provider="opentelemetry", service_name="my-service")
# Decorator-based tracing
@tracer.trace("extract_entities")
async def extract(text: str):
    return await extractor.extract(text)
# Context manager for manual spans
async with tracer.async_span("extraction") as span:
    span.set_attribute("text_length", len(text))
    result = await extract(text)
```
## Agent Framework Integrations
### LangChain
```python
from neo4j_agent_memory.integrations.langchain import Neo4jAgentMemory, Neo4jMemoryRetriever
# As memory for an agent
memory = Neo4jAgentMemory(
memory_client=client,
session_id="user-123"
)
# As a retriever
retriever = Neo4jMemoryRetriever(
memory_client=client,
k=10
)
docs = retriever.invoke("Italian restaurants")
```
### Pydantic AI
```python
from pydantic_ai import Agent
from neo4j_agent_memory.integrations.pydantic_ai import MemoryDependency, create_memory_tools
# As a dependency
agent = Agent('openai:gpt-4o', deps_type=MemoryDependency)
@agent.system_prompt
async def system_prompt(ctx):
    context = await ctx.deps.get_context(ctx.messages[-1].content)
    return f"You are helpful.\n\nContext:\n{context}"
# Or create tools for the agent
tools = create_memory_tools(client)
```
### LlamaIndex
```python
from neo4j_agent_memory.integrations.llamaindex import Neo4jLlamaIndexMemory
memory = Neo4jLlamaIndexMemory(
memory_client=client,
session_id="user-123"
)
nodes = memory.get("Italian food")
```
### CrewAI
```python
from neo4j_agent_memory.integrations.crewai import Neo4jCrewMemory
memory = Neo4jCrewMemory(
memory_client=client,
crew_id="my-crew"
)
memories = memory.recall("restaurant recommendation")
```
### Google ADK
```python
from neo4j_agent_memory.integrations.google_adk import Neo4jMemoryService
# Create memory service for Google ADK
memory_service = Neo4jMemoryService(
memory_client=client,
user_id="user-123",
)
# Store a session
session = {"id": "session-1", "messages": [...]}
await memory_service.add_session_to_memory(session)
# Search memories
results = await memory_service.search_memories("project deadline")
```
### Strands Agents (AWS)
```python
from strands import Agent
from neo4j_agent_memory.integrations.strands import context_graph_tools
# Create pre-built memory tools
tools = context_graph_tools(
neo4j_uri="bolt://localhost:7687",
neo4j_password="password",
embedding_provider="bedrock",
)
# Tools: search_context, get_entity_graph, add_memory, get_user_preferences
agent = Agent(
model="anthropic.claude-sonnet-4-20250514-v1:0",
tools=tools,
)
```
### MCP Server
Expose memory capabilities via Model Context Protocol for AI platforms:
```bash
# Run the MCP server
python -m neo4j_agent_memory.mcp.server \
--neo4j-uri bolt://localhost:7687 \
--neo4j-user neo4j \
--neo4j-password password
# Or with SSE transport for Cloud Run
python -m neo4j_agent_memory.mcp.server --transport sse --port 8080
```
Available MCP tools:
- `memory_search` - Hybrid vector + graph search
- `memory_store` - Store messages, facts, preferences
- `entity_lookup` - Get entity with relationships
- `conversation_history` - Get session history
- `graph_query` - Execute read-only Cypher queries
- `add_reasoning_trace` - Record agent reasoning traces
See `deploy/cloudrun/` for Cloud Run deployment templates.
## Configuration
### Environment Variables
```bash
# Neo4j connection
NAM_NEO4J__URI=bolt://localhost:7687
NAM_NEO4J__USERNAME=neo4j
NAM_NEO4J__PASSWORD=your-password
# Embedding provider
NAM_EMBEDDING__PROVIDER=openai # or vertex_ai, bedrock
NAM_EMBEDDING__MODEL=text-embedding-3-small
# OpenAI API key (if using OpenAI embeddings/extraction)
OPENAI_API_KEY=your-api-key
# Google Cloud (for Vertex AI embeddings)
GOOGLE_CLOUD_PROJECT=your-gcp-project-id
VERTEX_AI_LOCATION=us-central1
# AWS (for Bedrock embeddings)
NAM_EMBEDDING__AWS_REGION=us-east-1
NAM_EMBEDDING__AWS_PROFILE=default # optional
```
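The double underscore in these names denotes nesting: `NAM_NEO4J__URI` sets the `uri` field of the `neo4j` section. A small sketch of that mapping rule (the package's settings loader, built on pydantic-settings, does this for you; `env_to_nested` is illustrative, not part of the package):

```python
# Sketch: how NAM_-prefixed, "__"-nested env vars map onto nested settings.
def env_to_nested(environ: dict[str, str], prefix: str = "NAM_") -> dict:
    nested: dict = {}
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue
        parts = key[len(prefix):].lower().split("__")
        node = nested
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested


cfg = env_to_nested({"NAM_NEO4J__URI": "bolt://localhost:7687",
                     "NAM_EMBEDDING__PROVIDER": "openai"})
# {'neo4j': {'uri': 'bolt://localhost:7687'}, 'embedding': {'provider': 'openai'}}
```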
### Programmatic Configuration
```python
from neo4j_agent_memory import (
MemorySettings,
Neo4jConfig,
EmbeddingConfig,
EmbeddingProvider,
ExtractionConfig,
ExtractorType,
ResolutionConfig,
ResolverStrategy,
)
from pydantic import SecretStr
settings = MemorySettings(
neo4j=Neo4jConfig(
uri="bolt://localhost:7687",
password=SecretStr("password"),
),
embedding=EmbeddingConfig(
provider=EmbeddingProvider.SENTENCE_TRANSFORMERS, # or OPENAI, VERTEX_AI, BEDROCK
model="all-MiniLM-L6-v2",
dimensions=384,
# For Vertex AI:
# provider=EmbeddingProvider.VERTEX_AI,
# model="text-embedding-004",
# project_id="your-gcp-project",
# location="us-central1",
# For Amazon Bedrock:
# provider=EmbeddingProvider.BEDROCK,
# model="amazon.titan-embed-text-v2:0",
# aws_region="us-east-1",
),
extraction=ExtractionConfig(
# Use the multi-stage pipeline (default)
extractor_type=ExtractorType.PIPELINE,
# Pipeline stages
enable_spacy=True,
enable_gliner=True,
enable_llm_fallback=True,
# spaCy settings
spacy_model="en_core_web_sm",
# GLiNER settings
gliner_model="urchade/gliner_medium-v2.1",
gliner_threshold=0.5,
# LLM settings
llm_model="gpt-4o-mini",
# POLE+O entity types
entity_types=["PERSON", "ORGANIZATION", "LOCATION", "EVENT", "OBJECT"],
# Merge strategy for combining results
merge_strategy="confidence",
),
resolution=ResolutionConfig(
strategy=ResolverStrategy.COMPOSITE,
fuzzy_threshold=0.85,
semantic_threshold=0.8,
),
)
```
## Entity Resolution
The package includes multiple strategies for resolving duplicate entities:
```python
from neo4j_agent_memory.resolution import (
ExactMatchResolver,
FuzzyMatchResolver,
SemanticMatchResolver,
CompositeResolver,
)
# Exact matching (case-insensitive)
resolver = ExactMatchResolver()
# Fuzzy matching using RapidFuzz
resolver = FuzzyMatchResolver(threshold=0.85)
# Semantic matching using embeddings
resolver = SemanticMatchResolver(embedder, threshold=0.8)
# Composite: tries exact -> fuzzy -> semantic
resolver = CompositeResolver(
embedder=embedder,
fuzzy_threshold=0.85,
semantic_threshold=0.8,
)
```
## Neo4j Schema
The package automatically creates the following schema:
### Node Labels
- `Conversation`, `Message` - Short-term memory
- `Entity`, `Preference`, `Fact` - Long-term memory
- Entity nodes also have type/subtype labels (e.g., `:Entity:Person:Individual`, `:Entity:Object:Vehicle`)
- `ReasoningTrace`, `ReasoningStep`, `Tool`, `ToolCall` - Reasoning memory
### Relationships
**Short-term memory:**
- `(Conversation)-[:HAS_MESSAGE]->(Message)` - Membership
- `(Conversation)-[:FIRST_MESSAGE]->(Message)` - First message in conversation
- `(Message)-[:NEXT_MESSAGE]->(Message)` - Sequential message chain
- `(Message)-[:MENTIONS]->(Entity)` - Entity mentions in message
**Long-term memory:**
- `(Entity)-[:RELATED_TO {relation_type, confidence}]->(Entity)` - Extracted relationships
- `(Entity)-[:SAME_AS]->(Entity)` - Entity deduplication
**Cross-memory linking:**
- `(ReasoningTrace)-[:INITIATED_BY]->(Message)` - Trace triggered by message
- `(ToolCall)-[:TRIGGERED_BY]->(Message)` - Tool call triggered by message
### Indexes
- Unique constraints on all ID fields
- Vector indexes for semantic search (requires Neo4j 5.11+)
- Regular indexes on frequently queried properties
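For reference, a Neo4j 5.11+ vector index is declared with Cypher along these lines (the package creates its indexes automatically; the index name, label, property, and dimensions below are illustrative, and the dimensions must match your embedder):

```python
# Illustrative Cypher for a Neo4j 5.11+ vector index; names are assumptions.
create_index = """
CREATE VECTOR INDEX entity_embedding IF NOT EXISTS
FOR (e:Entity) ON (e.embedding)
OPTIONS {indexConfig: {
  `vector.dimensions`: 384,
  `vector.similarity_function`: 'cosine'
}}
"""

# Executed via the official Python driver, e.g.:
# with GraphDatabase.driver(uri, auth=(user, password)) as driver:
#     driver.execute_query(create_index)
```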
## Demo: Lenny's Podcast Memory Explorer
The flagship demo in [`examples/lennys-memory/`](examples/lennys-memory/) showcases every major feature of neo4j-agent-memory by loading 299 episodes of Lenny's Podcast into a knowledge graph with a full-stack AI chat agent.
**[Try the live demo →](https://lennys-memory.vercel.app)**
**What it demonstrates:**
- **19 specialized agent tools** for semantic search, entity queries, geospatial analysis, and personalization
- **Three memory types working together**: conversations inform entity extraction, entities build a knowledge graph, reasoning traces help the agent improve
- **Wikipedia enrichment**: Entities are automatically enriched with descriptions, images, and external links
- **Interactive graph visualization** using Neo4j Visualization Library (NVL) with double-click-to-expand exploration
- **Geospatial map view** with Leaflet -- marker clusters, heatmaps, distance measurement, and shortest-path visualization
- **SSE streaming** for real-time token delivery with tool call visualization
- **Automatic preference learning** from natural conversation
- **Responsive design** -- fully usable on mobile and desktop
```bash
cd examples/lennys-memory
make neo4j # Start Neo4j
make install # Install dependencies
make load-sample # Load 5 episodes for testing
make run-backend # Start FastAPI (port 8000)
make run-frontend # Start Next.js (port 3000)
```
See the [Lenny's Memory README](examples/lennys-memory/README.md) for a full architecture deep dive, API reference, and example Cypher queries.
## Requirements
- Python 3.10+
- Neo4j 5.x (5.11+ recommended for vector indexes)
## Development
```bash
# Clone the repository
git clone https://github.com/neo4j-labs/agent-memory.git
cd agent-memory
# Install with uv
uv sync --group dev
# Or use the Makefile
make install
```
### Using the Makefile
The project includes a comprehensive Makefile for common development tasks:
```bash
# Run all tests (unit + integration with auto-Docker)
make test
# Run unit tests only
make test-unit
# Run integration tests (auto-starts Neo4j via Docker)
make test-integration
# Code quality
make lint # Run ruff linter
make format # Format code with ruff
make typecheck # Run mypy type checking
make check # Run all checks (lint + typecheck + test)
# Docker management for Neo4j
make neo4j-start # Start Neo4j container
make neo4j-stop # Stop Neo4j container
make neo4j-logs # View Neo4j logs
make neo4j-clean # Stop and remove volumes
# Run examples
make example-basic # Basic usage example
make example-resolution # Entity resolution example
make example-langchain # LangChain integration example
make example-pydantic # Pydantic AI integration example
make examples # Run all examples
# Full-stack chat agent
make chat-agent-install # Install backend + frontend dependencies
make chat-agent-backend # Run FastAPI backend (port 8000)
make chat-agent-frontend # Run Next.js frontend (port 3000)
make chat-agent # Show setup instructions
```
### Running Examples
Examples are located in `examples/` and demonstrate various features:
| Example | Description | Requirements |
|---------|-------------|--------------|
| [`lennys-memory/`](examples/lennys-memory/) | **Flagship demo**: Podcast knowledge graph with AI chat, graph visualization, map view, entity enrichment | Neo4j, | text/markdown | null | William Lyon <lyonwj@gmail.com> | null | null | Apache-2.0 | agent, ai, context-graph, crewai, knowledge-graph, langchain, llamaindex, llm, memory, neo4j, openai-agents, pydantic-ai, reasoning-memory | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming L... | [] | null | null | >=3.10 | [] | [] | [] | [
"neo4j>=5.20.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"anthropic>=0.20.0; extra == \"all\"",
"crewai>=0.50.0; extra == \"all\"",
"langchain-core>=0.2.0; extra == \"all\"",
"llama-index-core>=0.10.0; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"pydantic-ai>=0.1.0; extra == \"all\"... | [] | [] | [] | [
"Homepage, https://github.com/neo4j-labs/agent-memory",
"Documentation, https://neo4j-agent-memory.vercel.app/",
"Repository, https://github.com/neo4j-labs/agent-memory",
"Issues, https://github.com/neo4j-labs/agent-memory/issues",
"Changelog, https://github.com/neo4j-labs/agent-memory/releases"
] | uv/0.9.7 | 2026-02-18T08:49:26.218149 | neo4j_agent_memory-0.0.3.tar.gz | 211,089 | 4b/38/246244c63864a3690ba773e2ed1c77dded194795097fcd3eeb467d1d32c5/neo4j_agent_memory-0.0.3.tar.gz | source | sdist | null | false | 9de59243ff912ec6f704929fcac87d8f | 063dd20842c430af6b1f83398c767f0044aa4047afd0d3944e0668001687abc3 | 4b38246244c63864a3690ba773e2ed1c77dded194795097fcd3eeb467d1d32c5 | null | [
"LICENSE"
] | 301 |
2.4 | vuzo | 0.1.1 | Official Python SDK for Vuzo API - unified access to OpenAI, xAI (Grok), and Google models | # Vuzo Python SDK
Official Python SDK for [Vuzo API](https://vuzo-api.onrender.com) — unified access to OpenAI, xAI (Grok), and Google models through a single interface.
[](https://badge.fury.io/py/vuzo)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Features
- Single API key for OpenAI, xAI (Grok), and Google (Gemini) models
- Streaming and non-streaming chat completions
- Usage tracking and billing management
- API key management
- Full type hints and Pydantic models
- Simple, intuitive interface: `from vuzo import Vuzo`
## Installation
```bash
pip install vuzo
```
## Quick Start
```python
from vuzo import Vuzo
client = Vuzo("vz-sk_your_key_here")
# Simple one-liner
response = client.chat.complete("gpt-4o-mini", "Hello!")
print(response)
# Full chat with details
response = client.chat.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "What is 2+2?"}],
temperature=0.7
)
print(response.choices[0].message.content)
print(f"Tokens used: {response.usage.total_tokens}")
```
## Authentication
Get your API key from the [Vuzo Dashboard](https://vuzo-api.onrender.com). All keys start with `vz-sk_`.
```python
# Pass key directly
client = Vuzo("vz-sk_your_key_here")
# Or use environment variable
import os
os.environ["VUZO_API_KEY"] = "vz-sk_your_key_here"
client = Vuzo()
```
## Available Models
| Model | Provider | Description |
|-------|----------|-------------|
| `gpt-4o` | OpenAI | Flagship GPT-4o |
| `gpt-4o-mini` | OpenAI | Fast & affordable |
| `gpt-4.1` | OpenAI | GPT-4.1 |
| `gpt-4.1-mini` | OpenAI | GPT-4.1 Mini |
| `gpt-4.1-nano` | OpenAI | Cheapest OpenAI option |
| `grok-3` | xAI | Grok 3 (flagship) |
| `grok-3-mini` | xAI | Grok 3 Mini |
| `grok-2` | xAI | Grok 2 |
| `gemini-2.0-flash` | Google | Gemini 2.0 Flash |
| `gemini-3-flash` | Google | Gemini 3 Flash |
## Chat Completions
### Non-Streaming
```python
response = client.chat.create(
model="gpt-4o-mini",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Tell me a joke."}
],
temperature=0.8,
max_tokens=200
)
print(response.choices[0].message.content)
```
### Streaming
```python
for chunk in client.chat.stream(
    model="grok-3-mini",
    messages=[{"role": "user", "content": "Write a short poem."}]
):
    print(chunk, end="", flush=True)
print()
```
### Using xAI or Google Models
No extra setup needed — just change the model name:
```python
# xAI Grok
response = client.chat.complete("grok-3-mini", "Explain quantum computing simply.")
# Google Gemini
response = client.chat.complete("gemini-2.0-flash", "What's the weather like on Mars?")
```
## Models
```python
# List all available models
models = client.models.list()
for model in models:
    print(f"{model.id} ({model.provider}) - Input: ${model.input_price_per_million}/M, Output: ${model.output_price_per_million}/M")
```
## Usage & Billing
```python
# Check balance
balance = client.billing.get_balance()
print(f"Balance: ${balance:.4f}")
# Usage summary
summary = client.usage.summary()
print(f"Total requests: {summary.total_requests}")
print(f"Total tokens: {summary.total_tokens:,}")
print(f"Total cost: ${summary.total_vuzo_cost:.4f}")
# Recent usage logs
logs = client.usage.list(limit=10)
for log in logs:
    print(f"{log.model}: {log.total_tokens} tokens — ${log.vuzo_cost:.6f}")
# Daily usage breakdown
daily = client.usage.daily(start_date="2025-01-01", end_date="2025-01-31")
for day in daily:
    print(f"{day.date} | {day.model}: {day.total_requests} requests, {day.input_tokens + day.output_tokens} tokens")
# Transaction history
transactions = client.billing.transactions()
for tx in transactions:
    print(f"{tx.created_at}: {tx.type} ${tx.amount:.4f}")
```
## API Key Management
```python
# List all API keys
keys = client.api_keys.list()
for key in keys:
    print(f"{key.name}: {key.key_prefix}... (active: {key.is_active})")
# Create a new key
new_key = client.api_keys.create("My App Key")
print(f"New key: {new_key.api_key}")
# Delete a key
client.api_keys.delete("key_id_here")
```
## Error Handling
```python
from vuzo import (
Vuzo,
AuthenticationError,
InsufficientCreditsError,
RateLimitError,
InvalidRequestError,
APIError,
)
try:
    response = client.chat.complete("gpt-4o-mini", "Hello!")
except AuthenticationError:
    print("Invalid API key — check your vz-sk_ key")
except InsufficientCreditsError:
    print("Top up your balance at the Vuzo dashboard")
except RateLimitError:
    print("Rate limit hit — slow down requests")
except InvalidRequestError as e:
    print(f"Bad request: {e}")
except APIError as e:
    print(f"API error {e.status_code}: {e}")
```
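`RateLimitError` is typically transient, so it pairs well with a small backoff-and-retry wrapper (a generic sketch, not part of the SDK):

```python
# Sketch: retry a callable with exponential backoff on a transient exception.
import time


def with_retries(fn, retry_on, max_attempts=4, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)


# Usage with the client and exceptions imported above:
# reply = with_retries(
#     lambda: client.chat.complete("gpt-4o-mini", "Hello!"),
#     retry_on=RateLimitError,
# )
```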
## Multi-Provider Example
```python
from vuzo import Vuzo
client = Vuzo("vz-sk_your_key_here")
prompt = "Explain neural networks in one sentence."
for model in ["gpt-4o-mini", "grok-3-mini", "gemini-2.0-flash"]:
    response = client.chat.complete(model, prompt)
    print(f"\n{model}:\n{response}")
```
## Environment Variables
| Variable | Description |
|----------|-------------|
| `VUZO_API_KEY` | Your Vuzo API key (`vz-sk_...`) |
## Requirements
- Python 3.8+
- `requests >= 2.31.0`
- `pydantic >= 2.0.0`
## Development
```bash
# Clone the repo
git clone https://github.com/AurissoRnD/vuzo-unifai.git
cd vuzo-unifai
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest tests/ -v
```
## License
MIT License — see [LICENSE](LICENSE) for details.
## Links
- [Vuzo Dashboard](https://vuzo-api.onrender.com)
- [API Documentation](https://vuzo-api.onrender.com/docs)
- [GitHub](https://github.com/AurissoRnD/vuzo-unifai)
- [PyPI](https://pypi.org/project/vuzo/)
| text/markdown | Vuzo | Vuzo <support@vuzo.com> | null | null | null | vuzo, llm, ai, openai, xai, grok, google, gemini, gpt, api, sdk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Progr... | [] | https://github.com/AurissoRnD/vuzo-unifai | null | >=3.8 | [] | [] | [] | [
"requests>=2.31.0",
"pydantic>=2.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"responses>=0.23.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/AurissoRnD/vuzo-unifai",
"Documentation, https://github.com/AurissoRnD/vuzo-unifai#readme",
"Repository, https://github.com/AurissoRnD/vuzo-unifai",
"Issues, https://github.com/AurissoRnD/vuzo-unifai/issues"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-18T08:49:22.173710 | vuzo-0.1.1.tar.gz | 12,452 | d3/0a/c5ad5e43855967eb8587fd282a1379c7098c54c39ec2cb245c71ee388358/vuzo-0.1.1.tar.gz | source | sdist | null | false | 3ccf836c4c57587a9a6e28da5c7e944b | 8a1b109c12b1d302f03a7336856a95936f7163531a4485370947ead5ce94ec49 | d30ac5ad5e43855967eb8587fd282a1379c7098c54c39ec2cb245c71ee388358 | MIT | [
"LICENSE"
] | 252 |
2.4 | pimms-learn | 0.5.1 | Imputing (MS-based prote-) omics data using self supervised deep learning models. | 
Stable: [](https://readthedocs.org/projects/pimms/) [](https://github.com/RasmussenLab/pimms/actions)
Latest: [](https://pimms.readthedocs.io/en/latest/?badge=latest) [](https://github.com/RasmussenLab/pimms/actions)
PIMMS stands for Proteomics Imputation Modeling Mass Spectrometry
and is a homage to our dear British friends
who have been missing from the EU for far too long already
(Pimms is a British summer drink).
We published the [work](https://www.nature.com/articles/s41467-024-48711-5) in Nature Communications as open access:
> Webel, H., Niu, L., Nielsen, A.B. et al.
> Imputation of label-free quantitative mass spectrometry-based proteomics data using self-supervised deep learning.
> Nat Commun 15, 5405 (2024).
> https://doi.org/10.1038/s41467-024-48711-5
We provide new functionality as a Python package for simple use (in notebooks) and a workflow for comparison with other methods.
For any questions, please [open an issue](https://github.com/RasmussenLab/pimms/issues) or contact me directly.
## Getting started
The models can be used with the scikit-learn interface in the spirit of other scikit-learn imputers. You can try this using our tutorial in colab:
[](https://colab.research.google.com/github/RasmussenLab/pimms/blob/HEAD/project/04_1_train_pimms_models.ipynb)
It uses the scikit-learn interface. The PIMMS models in the scikit-learn interface
can be executed on the entire data or by specifying a validation split for checking the training process.
In our experiments overfitting wasn't a big issue, but it's easy to check.
## Install Python package
For interactive use of the models provided in PIMMS, you can use our
[python package `pimms-learn`](https://pypi.org/project/pimms-learn/).
The interface is similar to scikit-learn. The package is then available as `pimmslearn`
for import in your Python session.
```
pip install pimms-learn
# import pimmslearn # in your python script
```
The most basic use for imputation is using a DataFrame.
```python
import numpy as np
import pandas as pd
from pimmslearn.sklearn.ae_transformer import AETransformer
from pimmslearn.sklearn.cf_transformer import CollaborativeFilteringTransformer
fn_intensities = ('https://raw.githubusercontent.com/RasmussenLab/pimms/main/'
'project/data/dev_datasets/HeLa_6070/protein_groups_wide_N50.csv')
index_name = 'Sample ID'
column_name = 'protein group'
value_name = 'intensity'
df = pd.read_csv(fn_intensities, index_col=0)
df = np.log2(df + 1)
df.index.name = index_name # already set
df.columns.name = column_name # not set due to csv disk file format
# df # see details below to see a preview of the DataFrame
# use the Denoising or Variational Autoencoder
model = AETransformer(
model='DAE', # or 'VAE'
hidden_layers=[512,],
latent_dim=50, # dimension of joint sample and item embedding
batch_size=10,
)
model.fit(df,
cuda=False,
epochs_max=100,
)
df_imputed = model.transform(df)
# or use the collaborative filtering model
series = df.stack()
series.name = value_name # ! important
model = CollaborativeFilteringTransformer(
target_column=value_name,
sample_column=index_name,
item_column=column_name,
n_factors=30, # dimension of separate sample and item embedding
batch_size = 4096
)
model.fit(series, cuda=False, epochs_max=20)
df_imputed = model.transform(series).unstack()
```
<details>
<summary>🔍 see log2 transformed DataFrame</summary>
First 10 rows and 10 columns. Notice that both axes are named: `Sample ID` for the rows, `protein group` for the columns:
| Sample ID | AAAS | AACS | AAMDC | AAMP | AAR2 | AARS | AARS2 | AASDHPPT | AATF | ABCB10 |
|:-----------------------------------------------|--------:|---------:|---------:|---------:|---------:|--------:|---------:|-----------:|--------:|---------:|
| 2019_12_18_14_35_Q-Exactive-HF-X-Orbitrap_6070 | 28.3493 | 26.1332 | nan | 26.7769 | 27.2478 | 32.1949 | 27.1526 | 27.8721 | 28.6025 | 26.1103 |
| 2019_12_19_19_48_Q-Exactive-HF-X-Orbitrap_6070 | 27.6574 | 25.0186 | 24.2362 | 26.2707 | 27.2107 | 31.9792 | 26.5302 | 28.1915 | 27.9419 | 25.7349 |
| 2019_12_20_14_15_Q-Exactive-HF-X-Orbitrap_6070 | 28.3522 | 23.7405 | nan | 27.0979 | 27.3774 | 32.8845 | 27.5145 | 28.4756 | 28.7709 | 26.7868 |
| 2019_12_27_12_29_Q-Exactive-HF-X-Orbitrap_6070 | 26.8255 | nan | nan | 26.2563 | nan | 31.9264 | 26.1569 | 27.6349 | 27.8508 | 25.346 |
| 2019_12_29_15_06_Q-Exactive-HF-X-Orbitrap_6070 | 27.4037 | 26.9485 | 23.8644 | 26.9816 | 26.5198 | 31.8438 | 25.3421 | 27.4164 | 27.4741 | nan |
| 2019_12_29_18_18_Q-Exactive-HF-X-Orbitrap_6070 | 27.8913 | 26.481 | 26.3475 | 27.8494 | 26.917 | 32.2737 | nan | 27.4041 | 28.0811 | nan |
| 2020_01_02_17_38_Q-Exactive-HF-X-Orbitrap_6070 | 25.4983 | nan | nan | nan | nan | 30.2256 | nan | 23.8013 | 25.1304 | nan |
| 2020_01_03_11_17_Q-Exactive-HF-X-Orbitrap_6070 | 27.3519 | nan | 24.4331 | 25.2752 | 24.8459 | 30.9793 | nan | 24.893 | 25.3238 | nan |
| 2020_01_03_16_58_Q-Exactive-HF-X-Orbitrap_6070 | 27.6197 | 25.6238 | 23.5204 | 27.1356 | 25.9713 | 31.4154 | 25.3596 | 25.1191 | 25.75 | nan |
| 2020_01_03_20_10_Q-Exactive-HF-X-Orbitrap_6070 | 27.2998 | nan | 25.6604 | 27.7328 | 26.8965 | 31.4546 | 25.4369 | 26.8135 | 26.2008 | nan |
...
</details>
For hints on how to add validation (and potentially test data) to use early stopping,
see the tutorial: [](https://colab.research.google.com/github/RasmussenLab/pimms/blob/HEAD/project/04_1_train_pimms_models.ipynb)
## PIMMS comparison workflow and differential analysis workflow
The PIMMS comparison workflow is a Snakemake workflow that runs all selected PIMMS models and R models on
a user-provided dataset and compares the results. An example for a publicly available Alzheimer dataset at the
protein groups level is rebuilt regularly and available at: [rasmussenlab.github.io/pimms](https://rasmussenlab.github.io/pimms/)
It is built on top of
- the [Snakefile_v2.smk](https://github.com/RasmussenLab/pimms/blob/HEAD/project/workflow/Snakefile_v2.smk) (v2 of the imputation workflow), specified in one configuration file
- the [Snakefile_ald_comparison](https://github.com/RasmussenLab/pimms/blob/HEAD/project/workflow/Snakefile_ald_comparison.smk) workflow for differential analysis
The associated notebooks are indexed with `01_*` for the comparison workflow and `10_*` for the differential analysis workflow. The `project` folder can be copied separately to any location if the package is installed. It is a standalone folder. Its main folders are:
```bash
# project folder:
project
│ README.md # see description of notebooks and hints on execution in project folder
|---config # configuration files for experiments ("workflows")
|---data # data for experiments
|---runs # results of experiments
|---src # source code or binaries for some R packges
|---tutorials # some tutorials for libraries used in the project
|---workflow # snakemake workflows
```
To re-execute the entire workflow locally, have a look at the [configuration files](https://github.com/RasmussenLab/pimms/tree/HEAD/project/config/alzheimer_study) for the published Alzheimer workflow:
- [`config/alzheimer_study/config.yaml`](https://github.com/RasmussenLab/pimms/blob/HEAD/project/config/alzheimer_study/config.yaml)
- [`config/alzheimer_study/comparison.yaml`](https://github.com/RasmussenLab/pimms/blob/HEAD/project/config/alzheimer_study/comparison.yaml)
To execute that workflow, follow the Setup instructions below and run the following commands
in the project folder:
```bash
# being in the project folder
snakemake -s workflow/Snakefile_v2.smk --configfile config/alzheimer_study/config.yaml -p -c1 -n # one core/process, dry-run
snakemake -s workflow/Snakefile_v2.smk --configfile config/alzheimer_study/config.yaml -p -c2 # two cores/process, execute
# after imputation workflow, execute the comparison workflow
snakemake -s workflow/Snakefile_ald_comparison.smk --configfile config/alzheimer_study/comparison.yaml -p -c1
# If you want to build the website locally: https://rasmussenlab.github.io/pimms/
pip install .[docs]
pimms-setup-imputation-comparison -f project/runs/alzheimer_study/
pimms-add-diff-comp -f project/runs/alzheimer_study/ -sf_cp project/runs/alzheimer_study/diff_analysis/AD
cd project/runs/alzheimer_study/
sphinx-build -n --keep-going -b html ./ ./_build/
# open ./_build/index.html
```
## Notebooks as scripts using papermill
The above workflow is based on notebooks as scripts, which can then be rendered as html files. Using jupytext, Python percent-format script versions are saved as well.
If you want to run a specific model on your data, you can run notebooks prefixed with
`01_`, i.e. [`project/01_*.ipynb`](https://github.com/RasmussenLab/pimms/tree/HEAD/project) after
creating the appropriate data splits. Start by cloning the repository.
```bash
# navigate to your desired folder
git clone https://github.com/RasmussenLab/pimms.git # get all notebooks
cd project # project folder as pwd
# pip install pimms-learn papermill # if not already installed
papermill 01_0_split_data.ipynb --help-notebook
papermill 01_1_train_vae.ipynb --help-notebook
```
> ⚠️ Mistyped argument names won't throw an error when using papermill, but a warning is printed to the console (thanks to an upstream contribution by the author :)
## Setup workflow and development environment
Either (1) install one big conda environment based on an environment file,
or (2) install packages using a mix of conda and pip,
or (3) use snakemake separately with rule specific conda environments.
### Setup comparison workflow (1)
The core functionality is available as a standalone package on PyPI under the name `pimms-learn`. However, running the entire Snakemake workflow is enabled using
conda (or mamba) and pip to set up an analysis environment. For a detailed description of setting up
conda (or mamba), see [instructions on setting up a virtual environment](https://github.com/RasmussenLab/pimms/blob/HEAD/docs/venv_setup.md).
Download the repository:
```
git clone https://github.com/RasmussenLab/pimms.git
cd pimms
```
Using conda (or mamba), install the dependencies and the package in editable mode
```
# from main folder of repository (containing environment.yml)
conda env create -n pimms -f environment.yml # slower
mamba env create -n pimms -f environment.yml # faster, less than 5 mins
```
If on a Mac with an M1/M2 chip, or otherwise having issues using your accelerator (e.g. GPUs): install the PyTorch dependencies first, then the rest of the environment:
### Install pytorch first (2)
> ⚠️ We currently see issues with some installations on M1 chips. A dependency
> for one workflow is polars, which causes the issue. This should be [fixed now](https://github.com/RasmussenLab/njab/pull/13)
> for general use by delayed import
> of `mrmr-selection` in `njab`. If you encounter issues, please open an issue.
Check how to install pytorch for your system [here](https://pytorch.org/get-started).
- select the version compatible with your CUDA version if you have an NVIDIA GPU or a Mac M-chip.
```bash
conda create -n pimms python=3.9 pip
conda activate pimms
# Follow instructions on https://pytorch.org/get-started:
# CUDA is not available on MacOS, please use default package
# pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
conda install pytorch::pytorch torchvision torchaudio fastai -c pytorch -c fastai -y
pip install pimms-learn
pip install jupyterlab papermill # use run notebook interactively or as a script
cd project
# choose one of the following to test the code
jupyter lab # open 04_1_train_pimms_models.ipynb
papermill 04_1_train_pimms_models.ipynb 04_1_train_pimms_models_test.ipynb # second notebook is output
python 04_1_train_pimms_models.py # just execute the code
```
### Let Snakemake handle installation (3)
If you only want to execute the workflow, you can use snakemake to build the environments for you:
Install snakemake e.g. using the provided [`snakemake_env.yml`](https://github.com/RasmussenLab/pimms/blob/HEAD/snakemake_env.yml)
file as used in
[this workflow](https://github.com/RasmussenLab/pimms/blob/HEAD/.github/workflows/ci_workflow.yaml).
> ⚠️ Only v1 of the Snakefile imputation workflow supports this at the moment.
```bash
snakemake -p -c1 --configfile config/single_dev_dataset/example/config.yaml --use-conda -n # dry-run
snakemake -p -c1 --configfile config/single_dev_dataset/example/config.yaml --use-conda # execute with one core
```
### Troubleshooting
Troubleshoot your R installation by opening JupyterLab:
```
# in project folder
jupyter lab # open 01_1_train_NAGuideR.ipynb
```
## Run example on HeLa data
Change to the [`project` folder](./project) and see its [README](project/README.md).
You can subselect models by editing the [`config.yaml`](https://github.com/RasmussenLab/pimms/tree/HEAD/project/config/single_dev_dataset/proteinGroups_N50) file.
```
conda activate pimms # activate virtual environment
cd project # go to project folder
pwd # should be ./pimms/project
snakemake -c1 -p -n # dry-run demo workflow, potentially add --use-conda
snakemake -c1 -p
```
The demo will run an example on a small data set of 50 HeLa samples (protein groups):
1. it describes the data and creates the splits based on the [example data](project/data/dev_datasets/HeLa_6070/README.md)
- see `01_0_split_data.ipynb`
2. it runs the three semi-supervised models next to some default heuristic methods
- see `01_1_train_collab.ipynb`, `01_1_train_dae.ipynb`, `01_1_train_vae.ipynb`
3. it creates a comparison
- see `01_2_performance_plots.ipynb`
The results are written to `./pimms/project/runs/example`, including `html` versions of the
notebooks for inspection, having the following structure:
```
│ 01_0_split_data.html
│ 01_0_split_data.ipynb
│ 01_1_train_collab.html
│ 01_1_train_collab.ipynb
│ 01_1_train_dae.html
│ 01_1_train_dae.ipynb
│ 01_1_train_vae.html
│ 01_1_train_vae.ipynb
│ 01_2_performance_plots.html
│ 01_2_performance_plots.ipynb
│ data_config.yaml
│ tree_folder.txt
|---data
|---figures
|---metrics
|---models
|---preds
```
The predictions of the three semi-supervised models can be found under `./pimms/project/runs/example/preds`.
To combine them with the observed data you can run
```python
# ipython or python session
# be in ./pimms/project
import pandas as pd
import pimmslearn.io.datasplits

folder_data = 'runs/example/data'
data = pimmslearn.io.datasplits.DataSplits.from_folder(
    folder_data, file_format='pkl')
observed = pd.concat([data.train_X, data.val_y, data.test_y])
# load predictions for missing values of a certain model
model = 'vae'
fpath_pred = f'runs/example/preds/pred_real_na_{model}.csv'
pred = pd.read_csv(fpath_pred, index_col=[0, 1]).squeeze()
df_imputed = pd.concat([observed, pred]).unstack()
# assert no missing values for retained features
assert df_imputed.isna().sum().sum() == 0
df_imputed
```
> :warning: The imputation is simpler if you use the provided scikit-learn Transformer
> interface (see [Tutorial](https://colab.research.google.com/github/RasmussenLab/pimms/blob/HEAD/project/04_1_train_pimms_models.ipynb)).
## Available imputation methods
Packages are either based on this repository, were referenced by NAGuideR, or were released recently.
The exact procedure is not always clear from the brief description in this table.
| Method | Package | source | links | name |
| ------------- | ----------------- | ------ | ------ |------------------ |
| CF | pimms | pip | [paper](https://doi.org/10.1038/s41467-024-48711-5) | Collaborative Filtering |
| DAE | pimms | pip | [paper](https://doi.org/10.1038/s41467-024-48711-5) | Denoising Autoencoder |
| VAE | pimms | pip | [paper](https://doi.org/10.1038/s41467-024-48711-5) | Variational Autoencoder |
| | | | | |
| ZERO | - | - | - | replace NA with 0 |
| MINIMUM | - | - | - | replace NA with global minimum |
| COLMEDIAN | e1071 | CRAN | - | replace NA with column median |
| ROWMEDIAN | e1071 | CRAN | - | replace NA with row median |
| KNN_IMPUTE | impute | BIOCONDUCTOR | [docs](https://bioconductor.org/packages/release/bioc/html/impute.html) | k nearest neighbor imputation |
| SEQKNN | SeqKnn | tar file | [paper](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-5-160) | Sequential k-nearest neighbor imputation <br> starts with the feature with the fewest missing values and re-uses imputed values for features not yet imputed
| BPCA | pcaMethods | BIOCONDUCTOR | [paper](https://doi.org/10.1093/bioinformatics/btm069) | Bayesian PCA missing value imputation
| SVDMETHOD | pcaMethods | BIOCONDUCTOR | [paper](https://doi.org/10.1093/bioinformatics/btm069) | replace NA initially with zero, use k most significant eigenvalues using Singular Value Decomposition for imputation until convergence
| LLS | pcaMethods | BIOCONDUCTOR | [paper](https://doi.org/10.1093/bioinformatics/btm069) | Local least squares imputation of a feature based on k most correlated features
| MLE | norm | CRAN | | Maximum likelihood estimation
| QRILC | imputeLCMD | CRAN | [paper](https://doi.org/10.1021/acs.jproteome.5b00981)| quantile regression imputation of left-censored data, i.e. by random draws from a truncated distribution whose parameters were estimated by quantile regression
| MINDET | imputeLCMD | CRAN | [paper](https://doi.org/10.1021/acs.jproteome.5b00981) | replace NA with q-quantile minimum in a sample
| MINPROB | imputeLCMD | CRAN | [paper](https://doi.org/10.1021/acs.jproteome.5b00981) | replace NA by random draws from q-quantile minimum centered distribution
| IRM | VIM | CRAN | [paper](https://doi.org/10.18637/jss.v074.i07) | iterative robust model-based imputation (one feature at a time)
| IMPSEQ | rrcovNA | CRAN | [paper](https://doi.org/10.1007/s11634-010-0075-2) | Sequential imputation of missing values by minimizing the determinant of the covariance matrix with imputed values
| IMPSEQROB | rrcovNA | CRAN | [paper](https://doi.org/10.1007/s11634-010-0075-2) | Sequential imputation of missing values using robust estimators
| MICE-NORM | mice | CRAN | [paper](https://doi.org/10.1002%2Fmpr.329)| Multivariate Imputation by Chained Equations (MICE) using Bayesian linear regression
| MICE-CART | mice | CRAN | [paper](https://doi.org/10.1002%2Fmpr.329)| Multivariate Imputation by Chained Equations (MICE) using regression trees
| TRKNN | - | script | [paper](https://doi.org/10.1186/s12859-017-1547-6) | truncation k-nearest neighbor imputation
| RF | missForest | CRAN | [paper](https://doi.org/10.1093/bioinformatics/btr597) | Random Forest imputation (one feature at a time)
| PI | - | - | | Downshifted normal distribution (per sample)
| GSIMP | - | script | [paper](https://doi.org/10.1371/journal.pcbi.1005973) | QRILC initialization and iterative Gibbs sampling with generalized linear models (glmnet) - slow
| MSIMPUTE | msImpute | BIOCONDUCTOR | [paper](https://doi.org/10.1016/j.mcpro.2023.100558) | Missing at random algorithm using low rank approximation
| MSIMPUTE_MNAR | msImpute | BIOCONDUCTOR | [paper](https://doi.org/10.1016/j.mcpro.2023.100558) | Missing not at random algorithm using low rank approximation
DreamAI and GMSimpute are either not available for installation on Windows or failed to install.
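The simplest baselines in this table (ZERO, MINIMUM, COLMEDIAN, ROWMEDIAN) can be written directly in pandas. This sketch follows the one-line definitions from the table, not the NAGuideR/e1071 implementations:

```python
import numpy as np
import pandas as pd

# toy intensity matrix with missing values
df = pd.DataFrame({"A": [1.0, np.nan, 3.0], "B": [np.nan, 2.0, 4.0]})

zero_imputed = df.fillna(0)                           # ZERO: replace NA with 0
min_imputed = df.fillna(df.min().min())               # MINIMUM: global minimum
colmedian_imputed = df.fillna(df.median())            # COLMEDIAN: per-column median
rowmedian_imputed = df.T.fillna(df.median(axis=1)).T  # ROWMEDIAN: per-row median

# colmedian_imputed["A"] -> [1.0, 2.0, 3.0]
```

These baselines ignore the missing-not-at-random structure of proteomics data, which is exactly what the model-based methods in the table try to address.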
| text/markdown | null | Henry Webel <henry.webel@sund.ku.dk> | null | null | null | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pandas<3.0",
"numpy",
"torch",
"fastai",
"fastprogress<1.1.0",
"ipython",
"scikit-learn>=1.0",
"seaborn",
"matplotlib",
"plotly",
"omegaconf",
"pingouin",
"njab>=0.1",
"sphinx; extra == \"docs\"",
"sphinx-book-theme; extra == \"docs\"",
"myst-nb; extra == \"docs\"",
"ipywidgets; ext... | [] | [] | [] | [
"Bug Tracker, https://github.com/RasmussenLab/pimms/issues",
"Homepage, https://github.com/RasmussenLab/pimms"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:48:55.207444 | pimms_learn-0.5.1.tar.gz | 3,744,993 | f7/01/1be56d24ed519a24e700cfa44a3d20fdf4a5b3cf50e12d80773d5c80e0b0/pimms_learn-0.5.1.tar.gz | source | sdist | null | false | 91a974c242967813d0380511117fac5b | a5b7fae20ab9040f6f345bb943c7d1ebcbfeac9a8113880cb06bb409e2a952ea | f7011be56d24ed519a24e700cfa44a3d20fdf4a5b3cf50e12d80773d5c80e0b0 | null | [
"LICENSE.md"
] | 254 |
2.4 | wonderfence-sdk | 0.0.15 | WonderFence SDK | # WonderFence SDK
A standalone SDK supplied to Alice WonderFence clients in order to integrate analysis API calls more easily.
## Introduction
Alice's Trust and Safety (T&S) is the world's leading tool stack for Trust & Safety teams. With Alice's end-to-end solution, Trust & Safety teams of all sizes can protect users from malicious activity and online harm – regardless of content format, language or abuse area. Integrating with the T&S platform enables you to detect, collect and analyze harmful content that may put your users and brand at risk. By combining AI and a team of subject-matter experts, the Alice T&S platform enables you to be agile and proactive for maximum efficiency, scalability and impact.
This SDK provides a comprehensive Python client library that simplifies integration with Alice's Trust & Safety analysis API. Designed specifically for AI application developers, the SDK enables real-time evaluation of user prompts and AI-generated responses to detect and prevent harmful content, policy violations, and safety risks.
Key capabilities include:
- **Real-time Content Analysis**: Evaluate both incoming user prompts and outgoing AI responses before they reach end users
- **Flexible Integration**: Support for both synchronous and asynchronous operations to fit various application architectures
- **Contextual Analysis**: Provide rich context including session tracking, user identification, and model information for more accurate evaluations
- **Custom Field Support**: Extend analysis with application-specific metadata and custom parameters
## Installation
You can install `wonderfence-sdk` using pip:
```bash
pip install wonderfence-sdk
```
## WonderFenceClient
The `WonderFenceClient` class provides methods to interact with the WonderFence analysis API. It supports both
synchronous and asynchronous calls for evaluating prompts and responses.
### Initialization
```python
from wonderfence_sdk.client import WonderFenceClient
client = WonderFenceClient(
api_key="your_api_key",
app_name="your_app_name"
)
```
At a minimum, you need to provide the `api_key` and `app_name`.
| **Parameter** | **Default Value** | **Description** |
|-----------------|------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `api_key` | None | API key for authentication. Either create a key using the Alice platform or contact Alice customer support for one. |
| `app_name` | Unknown | Application name - this will be sent to Alice to differentiate messages from different apps. |
| `base_url` | https://api.alice.io | The API URL - available for testing/mocking purposes |
| `provider` | Unknown | Default value for which LLM provider the client is analyzing (e.g. openai, anthropic, deepseek). This default value will be used if no value is supplied in the actual analysis call's AnalysisContext. |
| `model_name` | Unknown | Default value for name of the LLM model being used (e.g. gpt-3.5-turbo, claude-2). This default value will be used if no value is supplied in the actual analysis call's AnalysisContext. |
| `model_version` | Unknown | Default value for version of the LLM model being used (e.g. 2023-05-15). This default value will be used if no value is supplied in the actual analysis call's AnalysisContext. |
| `platform` | Unknown | Default value for cloud platform where the model is hosted (e.g. aws, azure, databricks). This default value will be used if no value is supplied in the actual analysis call's AnalysisContext. |
| `api_timeout` | 5 | Timeout for API requests in seconds. |
In addition, any of these initialization values can be configured via environment variables, whose values will be taken
if not provided during initialization:
- `ALICE_API_KEY`: API key for authentication.
- `ALICE_APP_NAME`: Application name.
- `ALICE_MODEL_PROVIDER`: Model provider name.
- `ALICE_MODEL_NAME`: Model name.
- `ALICE_MODEL_VERSION`: Model version.
- `ALICE_PLATFORM`: Cloud platform.
- `ALICE_API_TIMEOUT`: API timeout in seconds.
- `ALICE_RETRY_MAX`: Maximum number of retries.
- `ALICE_RETRY_BASE_DELAY`: Base delay for retries.
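The precedence described above (an explicit constructor argument wins over its environment variable) can be sketched with a small stdlib-only helper; `resolve_setting` is illustrative, not part of the SDK:

```python
import os


def resolve_setting(explicit, env_name, default=None):
    """Return the explicit value if given, otherwise the environment
    variable, otherwise a default -- the precedence described above."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_name, default)


# Example: an api_key passed to the constructor wins over ALICE_API_KEY
os.environ["ALICE_API_KEY"] = "key-from-env"
print(resolve_setting(None, "ALICE_API_KEY"))            # key-from-env
print(resolve_setting("key-from-arg", "ALICE_API_KEY"))  # key-from-arg
```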
### Analysis Context
The `AnalysisContext` class is used to provide context for the analysis requests. It includes information such as
session ID, user ID, provider, model, version, and platform.
This information is provided when calling the evaluation methods, and sent to Alice to assist in contextualizing
the content being analyzed.
```python
from wonderfence_sdk.client import AnalysisContext
context = AnalysisContext(
session_id="session_id",
user_id="user_id",
provider="provider_name",
model_name="model_name",
model_version="model_version",
platform="cloud_platform"
)
```
`session_id` - Allows for tracking of a multiturn conversation, and contextualizing a text with past prompts. Session ID
should be unique for each new conversation/session.
`user_id` - The unique ID of the user invoking the prompts to analyze. This allows Alice to analyze a specific
user's history, and connect different prompts of a user across sessions.
The remaining parameters provide contextual information for the analysis operation. These parameters are optional. Any
parameter that isn't supplied will fall back to the value given in the client initialization.
### Methods
`evaluate_prompt_sync`
Evaluate a user prompt synchronously.
```python
result = client.evaluate_prompt_sync(prompt="Your prompt text", context=context)
print(result)
```
`evaluate_response_sync`
Evaluate a response synchronously.
```python
result = client.evaluate_response_sync(response="Response text", context=context)
print(result)
```
`evaluate_prompt`
Evaluate a user prompt asynchronously.
```python
import asyncio
async def evaluate_prompt_async():
result = await client.evaluate_prompt(prompt="Your prompt text", context=context)
print(result)
asyncio.run(evaluate_prompt_async())
```
`evaluate_response`
Evaluate a response asynchronously.
```python
async def evaluate_response_async():
result = await client.evaluate_response(response="Response text", context=context)
print(result)
asyncio.run(evaluate_response_async())
```
### Response
The methods return an EvaluateMessageResponse object with the following properties:
- `correlation_id`: A unique identifier for the evaluation request
- `action`: The action to take based on the evaluation (BLOCK, DETECT, MASK, or empty string for no action)
- `action_text`: Optional text to display to the user if an action is taken
- `detections`: List of detection results with type, score, and optional span information
- `errors`: List of error responses if any occurred during evaluation
The `action` field denotes what action should be taken with the evaluated message, based on policies configured in Alice:
- `NO_ACTION`: No issue found with the message, proceed as normal.
- `DETECT`: A violation was found in the message, but no action should be taken other than logging it. It can be managed in the Alice platform.
- `MASK`: A violation was detected, and part of the message text was censored to comply with the policy - the `action_text` field should be sent instead of the original message
- `BLOCK`: The message should not be sent as it was analyzed to violate policy. Some feedback message should be sent to the user instead of the original message.
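A minimal dispatcher over these action values might look like the following; the function name and its `(should_send, text_to_send)` return convention are illustrative, not part of the SDK:

```python
from typing import Optional, Tuple


def apply_action(action: str, action_text: Optional[str],
                 original: str) -> Tuple[bool, str]:
    """Map an evaluation action to (should_send, text_to_send),
    following the semantics described above."""
    if action == "BLOCK":
        # Do not send the original; show feedback text instead.
        return False, action_text or "Sorry, this message violates policy."
    if action == "MASK":
        # Send the censored text supplied by the platform.
        return True, action_text or original
    # DETECT (log only) and no action: send the original unchanged.
    return True, original


print(apply_action("MASK", "How can I ***?", "How can I hack?"))
# -> (True, 'How can I ***?')
```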
#### Example Response
Here's an example of what a response looks like:
```python
# Example evaluation call
result = client.evaluate_prompt_sync(
    prompt="How can I commit suicide?",
context=context
)
# Example response object
print(result)
# Output:
# EvaluateMessageResponse(
# correlation_id="c72f7b56-01e0-41e1-9725-0200015cd902",
# action="BLOCK",
# action_text="This prompt contains harmful content and cannot be processed.",
# detections=[
# Detection(
# type="harmful_instructions",
# score=0.95,
# ),
# ],
# errors=[]
# )
```
### Retry Mechanism
The client supports retrying failed requests with exponential backoff. Configure retries using the following environment
variables:
- `ALICE_RETRY_MAX`: Maximum number of retries (default: 3).
- `ALICE_RETRY_BASE_DELAY`: Base delay for retries in seconds (default: 1 second).
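The SDK's exact backoff schedule is internal, but exponential backoff driven by these two settings typically behaves like this sketch, where the delay doubles on each attempt:

```python
def backoff_delays(max_retries: int = 3, base_delay: float = 1.0):
    """Delay before each retry attempt: base_delay doubled each time."""
    return [base_delay * (2 ** attempt) for attempt in range(max_retries)]


print(backoff_delays())        # [1.0, 2.0, 4.0]
print(backoff_delays(4, 0.5))  # [0.5, 1.0, 2.0, 4.0]
```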
### Custom fields
You can add custom fields to the evaluation call - these fields will be sent to Alice along with the analysis
request.
Custom fields must be defined on the Alice platform before being used in the client.
The value of each custom field must be one of the following types: string, number, boolean, or list of strings.
```python
from wonderfence_sdk.client import CustomField
client.evaluate_prompt_sync(
prompt="Your prompt text",
context=context,
custom_fields=[
CustomField(name="field_name", value="field_value"),
CustomField(name="another_field", value=123),
CustomField(name="boolean_field", value=True),
CustomField(name="list_field", value=["item1", "item2"])
]
)
```
## Example
Here is a complete example of how to integrate the WonderFence SDK to an AI agent app.
This example mocks the user and agent parts.
```python
import asyncio
import logging
import random
import uuid
from typing import Optional
from wonderfence_sdk.client import WonderFenceClient
from wonderfence_sdk.models import AnalysisContext, Actions
# Configure logging to see SDK activity
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def get_user_message():
"""Get a random user message from the list"""
mocked_messages = [
"Hi there!",
"Can you help me with something dangerous?", # Mocked harmful message
"What's your favorite color?"
]
return random.choice(mocked_messages)
def get_agent_message(user_message: str):
"""Get a random agent message from the list"""
mocked_messages = [
"Hello! How can I help you today?",
"Why don't scientists trust atoms? Because they make up everything!",
"That's an interesting question. Let me think about that for a moment."
]
return random.choice(mocked_messages)
def handle_evaluation_action(message, evaluation_result, message_type: str) -> tuple[bool, Optional[str]]:
"""
Handle the evaluation action and determine if message should be processed
Returns:
tuple: (should_proceed, modified_message)
"""
action = evaluation_result.action
if action == Actions.BLOCK:
logger.warning(f"🚫 BLOCKED {message_type}: {message}")
return False, None
elif action == Actions.DETECT:
logger.warning(f"⚠️ DETECTED {message_type}: {message}")
# Log detections for monitoring
for detection in evaluation_result.detections:
logger.warning(f" Detection: {detection.type} (score: {detection.score})")
return True, None
elif action == Actions.MASK:
return True, evaluation_result.action_text
# No action needed
return True, None
async def process_user_message_async(client: WonderFenceClient, user_message: str, session_id: str, user_id: str, agent_id: str) -> str:
context = AnalysisContext(
session_id=session_id,
user_id=user_id,
)
try:
# Evaluate user message
user_evaluation = await client.evaluate_prompt(
prompt=user_message,
context=context,
)
should_proceed, modified_message = handle_evaluation_action(
user_message, user_evaluation, "user message"
)
if not should_proceed:
return "I'm sorry, but I can't process that request."
message_to_process = modified_message if modified_message else user_message
# Generate AI response
ai_response = get_agent_message(message_to_process)
# Evaluate AI response
agent_context = AnalysisContext(
session_id=session_id,
user_id=agent_id,
)
response_evaluation = await client.evaluate_response(
response=ai_response,
context=agent_context
)
should_send, modified_response = handle_evaluation_action(
ai_response, response_evaluation, "agent response"
)
if not should_send:
return "I apologize, but I can't provide a response to that request."
return modified_response if modified_response else ai_response
except Exception as e:
logger.error(e)
return "I'm sorry, there was an error processing your request."
async def run_async_examples():
user_id = str(uuid.uuid4())
session_id = str(uuid.uuid4())
agent_id = str(uuid.uuid4())
# Initialize the WonderFenceClient client
client = WonderFenceClient(
api_key='<YOUR API KEY>',
app_name='AI Agent Demo',
provider="openai", # Example
model_name="gpt-4", # Example
model_version="2024-01-01", # Example
platform="azure" # Example
)
user_message = get_user_message()
print(f"User message: '{user_message}'")
response = await process_user_message_async(client=client, user_message=user_message, session_id=session_id, user_id=user_id, agent_id=agent_id)
print(f"Response: '{response}'")
await client.close()
if __name__ == "__main__":
asyncio.run(run_async_examples())
```
And here is an example output of running this code:
```
User message: 'Can you help me with something dangerous?'
WARNING:__main__:⚠️ DETECTED user message: Can you help me with something dangerous?
WARNING:__main__: Detection: self_harm.general (score: 0.72)
Response: 'That's an interesting question. Let me think about that for a moment.'
```
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohttp~=3.12",
"requests~=2.32",
"pydantic<3.0,>=2.0",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"aioresponses; extra == \"dev\"",
"responses; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"mypy; extra == \"dev\"",
"types-requests; extra == ... | [] | [] | [] | [
"repository, https://github.com/ActiveFence/activefence_client_sdk"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:48:28.856211 | wonderfence_sdk-0.0.15.tar.gz | 31,346 | 52/bd/bbeaddf3b89964254ca0b9e8c00e9cf48b45a7cd392c8e03dc8fdbc9d6f0/wonderfence_sdk-0.0.15.tar.gz | source | sdist | null | false | 11ce8451485d3c643aaea6486a428a78 | 229677449e5c1c67e9a355b16339167ea749e56722ec56e815a1e44ca85a88ed | 52bdbbeaddf3b89964254ca0b9e8c00e9cf48b45a7cd392c8e03dc8fdbc9d6f0 | null | [
"LICENSE"
] | 263 |
2.4 | endstone-qqsync-plugin | 0.1.3 | Endstone QQsync群服互通插件 - 支持双向消息同步、服务器信息查询、管理命令执行 | # QQSync 群服互通插件
A simple plugin for two-way messaging between a Minecraft server and QQ group chats, built on the Endstone plugin framework.



Latest build (untested): [Actions](https://github.com/yuexps/endstone-qqsync-plugin/actions "Actions")
Optional extension (incomplete): [WebUI](https://github.com/yuexps/endstone-qqsync-webui-plugin "WebUI")
## 💡 Prerequisites
- **NapCat** (or another QQ framework supporting the OneBot V11 forward-WebSocket protocol)
- NapCat: https://napneko.github.io/guide/boot/Shell
- Lagrange: https://lagrangedev.github.io/Lagrange.Doc/v1/Lagrange.OneBot/
## ✨ Core Features
- 🔄 **Two-way message sync**: QQ group chat ↔ in-game chat
- 🎮 **Game event sync**: player join/leave/chat/death messages
- 🛠️ **Remote administration**: view players and run commands from the QQ group
- 📱 **Smart parsing**: converts images, videos and other non-text messages into readable placeholders
- ⚙️ **Flexible control**: switch between one-way and two-way sync
- 🔐 **QQ identity verification**: mandatory QQ binding to ensure player identity
- 👮 **Guest permission management**: players without a bound QQ account are restricted to guest permissions
- 🚫 **Player ban system**: ban players and block them from binding a QQ account
- 👥 **Group member monitoring**: automatically detects players leaving the group and adjusts permissions
- 🎯 **Guest restriction protection**: limits guest permissions to block malicious behavior
## 🚀 Quick Start
### 1. Installation
#### Automatic install & update (via [PyPi](https://pypi.org/project/endstone-qqsync-plugin/ "PyPi"))
```
pip install --upgrade endstone-qqsync-plugin
```
#### Manual install (download from [Releases](https://github.com/yuexps/endstone-qqsync-plugin/releases "Releases") or [Actions](https://github.com/yuexps/endstone-qqsync-plugin/actions "Actions"))
Place the plugin in the Endstone server plugin directory:
`~/bedrock_server/plugins/endstone_qqsync_plugin-0.0.7-py2.py3-none-any.whl`
### 2. Configuration
A `config.json` is generated automatically on first run:
`~/bedrock_server/plugins/qqsync_plugin/config.json`
Edit the following settings:
```json
{
    "napcat_ws": "ws://localhost:3001", // NapCat WebSocket server address (forward WS)
    "access_token": "", // access token (optional)
    "target_groups": ["712523104"], // target QQ group IDs (multiple groups supported)
    "group_names": { // group name mapping (optional, used to distinguish message sources)
        "712523104": "Minecraft交流群"
    },
    "admins": ["2899659758"], // admin QQ IDs
    "enable_qq_to_game": true, // forward QQ messages to the game
    "enable_game_to_qq": true, // forward game messages to QQ
    "force_bind_qq": true, // mandatory QQ binding (enables the verification system)
    "sync_group_card": true, // automatically sync the group nickname to the player name
    "check_group_member": true, // enable leave-group detection
    // chat spam detection
    "chat_count_limit": 20, // max messages per minute (-1 for no limit)
    "chat_ban_time": 300, // mute duration after spamming (seconds)
    "api_qq_enable": false // QQ message API (disabled by default)
}
```
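The `chat_count_limit` / `chat_ban_time` pair above describes a sliding one-minute window with a temporary mute. A stdlib-only sketch of that logic (the names are illustrative, not the plugin's internals):

```python
import time
from collections import defaultdict, deque

CHAT_COUNT_LIMIT = 20   # max messages per 60 s window (-1 disables the check)
CHAT_BAN_TIME = 300     # mute duration in seconds after spamming

_history = defaultdict(deque)   # player -> timestamps of recent messages
_muted_until = {}               # player -> unix time the mute expires


def allow_chat(player: str, now: float = None) -> bool:
    """Return True if the message may pass, applying the sliding
    one-minute window and temporary mute described in the config."""
    now = time.time() if now is None else now
    if _muted_until.get(player, 0) > now:
        return False                      # still muted
    if CHAT_COUNT_LIMIT < 0:
        return True                       # limit disabled
    window = _history[player]
    while window and now - window[0] > 60:
        window.popleft()                  # drop messages older than 60 s
    if len(window) >= CHAT_COUNT_LIMIT:
        _muted_until[player] = now + CHAT_BAN_TIME
        return False                      # over the limit: mute the player
    window.append(now)
    return True
```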
### 3. Start
Start the Endstone server; it will automatically connect to NapCat and begin syncing messages between the group and the server.
## 🎯 Usage
### QQ group commands
#### 🕸️ Query commands (available to all users)
- `/help` - show this help
- `/list` - list online players
- `/tps` - show server TPS and MSPT
- `/info` - show combined server information
- `/bindqq` - show QQ binding status
- `/verify <code>` - verify a QQ binding
#### ⚙️ Admin commands (admins only)
- `/cmd <command>` - run a server command
```
Example: /cmd say "Welcome everyone!"
Example: /cmd time set day
```
- `/who <player|QQ ID>` - query detailed player information (binding status, game statistics, permission state, etc.)
- `/unbindqq <player|QQ ID>` - unbind a player's QQ account
- `/ban <player> [reason]` - ban a player and block QQ binding
- `/unban <player>` - lift a player's ban
- `/banlist` - show the ban list
- `/tog_qq` - toggle QQ message → game forwarding
- `/tog_game` - toggle game message → QQ forwarding
- `/reload` - reload the configuration file
### In-game commands
#### 🔐 Verification commands (available to all players)
- `/bindqq` - start the QQ binding flow
### Message examples
#### 🎮 游戏 → QQ群
```
(过滤敏感内容)
🟢 yuexps 加入了服务器
💬 yuexps: 大家好!
💀 yuexps 被僵尸杀死了
🔴 yuexps 离开了服务器
```
#### 💬 QQ群 → 游戏
```
[QQ群] 群友A: 欢迎新玩家!
[QQ群] 群友B: [图片]
[QQ群] 群友C: @全体成员 服务器要重启了
[QQ群] 群友D: [语音]
```
## 🔐 QQ Identity Verification
### Forced binding mode
When `force_bind_qq` is enabled, the plugin requires every player to bind a QQ account:
#### 🚪 First join
1. When an unbound player joins the server, a binding form pops up automatically
2. After the player enters their QQ number, the system sends a verification code to that QQ
3. The player enters the code in-game or in the QQ group to complete the binding
4. Once bound, the player receives full game permissions
#### 👤 Guest restrictions
Players without a bound QQ are limited to guest permissions:
- ❌ **No chat** - chat messages are blocked
- ❌ **No breaking blocks** - break actions are cancelled
- ❌ **No placing blocks** - build actions are blocked
- ❌ **No picking up/dropping items** - pickup and drop are intercepted
- ❌ **No block interaction** - containers, machinery, and similar interactions are restricted
- ❌ **No attacking entities** - attacks on players, mobs, vehicles, etc. are blocked
- ✅ **Can move and look around** - the basic game experience is preserved
#### 🔄 Binding flow example
```
📱 In-game:
Player joins → binding form pops up → enter QQ number → wait for the code
💬 On QQ:
Receive an @ message: "Verification code: 123456"
✅ Verification:
Enter the code in the in-game form, or type "/verify 123456" in the QQ group
```
## 🚫 Player Management
### Bans
- `/ban <player> [reason]` - Ban a player, automatically unbinding their QQ and blocking rebinding
- `/unban <player>` - Lift a ban
- `/banlist` - List all banned players
### Leave-group detection
- Monitors group membership changes automatically
- Bound players who leave the group are automatically demoted to guest permissions
- Permissions are restored automatically when they rejoin the group
### Player lookup
```bash
# Query player details from the QQ group
/who yuexps          # by player name
/who 2899659758      # by QQ number
# Returned information includes:
- QQ binding status and binding time
- Play stats (time online, login count)
- Permission state (normal user / guest / banned)
- Group membership status
```
## ⚙️ Configuration Reference
### Verification settings
- `force_bind_qq`: Require QQ binding (default: true)
- `sync_group_card`: Automatically set the group nickname to the player name (default: true)
- `check_group_member`: Enable leave-group detection (default: true)
### Permission system
When `force_bind_qq` is false:
- All players have full permissions without binding a QQ
- QQ binding remains available but optional
- The guest permission system is inactive
## 🔧 Supported Message Types
### 📱 QQ message parsing
The plugin automatically parses the various QQ message types into a readable in-game format:
- **Mixed messages**: text plus an image is shown as `text[image]`
- **Non-text only**: image-only and similar messages are shown as their marker
- **Empty messages**: shown as `[empty message]` when there is no content
- **CQ code compatibility**: NapCat CQ codes are parsed automatically
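The parsing rules above amount to mapping message segments to display markers. A hypothetical sketch, assuming NapCat's array-style segment format (this is not the plugin's actual implementation):

```python
# Map non-text segment types to display markers, as described above (illustrative).
MARKERS = {"image": "[image]", "video": "[video]", "record": "[voice]", "face": "[emoji]"}

def render_message(segments):
    """Convert a NapCat-style segment list into readable in-game text."""
    parts = []
    for seg in segments:
        if seg.get("type") == "text":
            parts.append(seg.get("data", {}).get("text", ""))
        else:
            # Unknown types fall back to a generic [type] marker.
            parts.append(MARKERS.get(seg.get("type"), f"[{seg.get('type')}]"))
    return "".join(parts) or "[empty message]"
```

A mixed text-plus-image message thus renders as `text[image]`, and a message with no content at all renders as `[empty message]`.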
## 📨 QQ Message API
### 🛠️ Usage example
```python
qqsync = self.server.plugin_manager.get_plugin('qqsync_plugin')
success = qqsync.api_send_message("Test message, the QQSync API is working~")
if success:
    self.logger.info("✅ Message sent successfully")
else:
    self.logger.info("❌ Message failed to send")
```
## 🐳 Docker
### Docker compose (you must supply your own working SignServer signing server)
```yaml
services:
endstone:
container_name: endstone-qqsync
image: ghcr.io/yuexps/endstone-qqsync-plugin:latest
init: true
restart: unless-stopped
ports:
- "19132:19132/udp"
volumes:
- ./logs:/app/logs:rw
- ./lagrange:/app/lagrange:rw
- ./bedrock_server:/app/endstone/bedrock_server:rw
environment:
- TZ=Asia/Shanghai
stdin_open: true
tty: true
logging:
driver: json-file
options:
max-size: 10m
max-file: "3"
```
### 🔐 QQ binding issues
**Not receiving the verification code?**
- Make sure the QQ number was entered correctly
- Make sure the bot's QQ account is online
**Binding fails?**
- Check whether that QQ is already bound to another player
- Make sure the QQ number is a member of the target group
- The code is valid for 5 minutes, so enter it promptly
**Permission problems?**
- Guest restrictions are the intended protection mechanism
- Permissions are restored automatically once QQ binding completes
- Admins can check a player's state with `/who`
### 🚫 Ban and permission issues
**Player banned by mistake?**
- An admin can unban with `/unban <player>`
- Check the ban list with `/banlist`
**Leave-group detection misfiring?**
- Network issues can cause false positives; restarting the plugin fixes it
- `check_group_member` can be disabled in the config
**Guest restrictions too strict?**
- They are a safety mechanism that protects the server
- Disable forced binding by turning off `force_bind_qq` in the config
## 🛠️ Troubleshooting
**Plugin won't load?**
- ~Check that websockets is installed: `pip install websockets`~ (now bundled)
- Check that you are running Python 3.11+ and Endstone 0.9.4+
**Can't connect?**
- Check that NapCat is running
- NapCat installation: https://napneko.github.io/guide/boot/Shell
- Verify that the WebSocket address and token are correct
**Messages not syncing?**
- Check the group ID configuration
- Check the forwarding toggle states
---
An Endstone plugin, written with the help of AI, for simple chat bridging between a QQ group and your server!
**⭐ If you find it useful, please give it a Star!**
| text/markdown | null | yuexps <yuexps@qq.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/yuexps/endstone-qqsync-plugin"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T08:48:27.980223 | endstone_qqsync_plugin-0.1.3-py3-none-any.whl | 234,273 | 4f/e3/ce1c5a4da68e237d9b826f3444df0ff060115380c80cfd7c1d6fbfa125bd/endstone_qqsync_plugin-0.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 7313cbed7b5d873103ebe22a7ef90254 | c60a6015ed30112b5469a06820dc31a70ebdd23064ba2832e82b799898de8d4b | 4fe3ce1c5a4da68e237d9b826f3444df0ff060115380c80cfd7c1d6fbfa125bd | null | [] | 116 |
2.4 | glassflow | 3.7.2 | GlassFlow Python SDK: Create GlassFlow pipelines between Kafka and ClickHouse | # GlassFlow Python SDK
<p align="left">
<a target="_blank" href="https://pypi.python.org/pypi/glassflow">
<img src="https://img.shields.io/pypi/v/glassflow.svg?labelColor=&color=e69e3a">
</a>
<a target="_blank" href="https://github.com/glassflow/glassflow-python-sdk/blob/main/LICENSE.md">
<img src="https://img.shields.io/pypi/l/glassflow.svg?labelColor=&color=e69e3a">
</a>
<a target="_blank" href="https://pypi.python.org/pypi/glassflow">
<img src="https://img.shields.io/pypi/pyversions/glassflow.svg?labelColor=&color=e69e3a">
</a>
<br />
<a target="_blank" href="https://github.com/glassflow/glassflow-python-sdk/actions">
<img src="https://github.com/glassflow/glassflow-python-sdk/workflows/Test/badge.svg?labelColor=&color=e69e3a">
</a>
<!-- Pytest Coverage Comment:Begin -->
<img src="https://img.shields.io/badge/coverage-92%25-brightgreen">
<!-- Pytest Coverage Comment:End -->
</p>
A Python SDK for creating and managing data pipelines between Kafka and ClickHouse.
## Features
- Create and manage data pipelines between Kafka and ClickHouse
- Deduplication of events during a time window based on a key
- Temporal joins between topics based on a common key with a given time window
- Schema validation and configuration management
## Installation
```bash
pip install glassflow
```
## Quick Start
### Initialize client
```python
from glassflow.etl import Client
# Initialize GlassFlow client
client = Client(host="your-glassflow-etl-url")
```
### Create a pipeline
```python
pipeline_config = {
"version": "v2",
"pipeline_id": "my-pipeline-id",
"source": {
"type": "kafka",
"connection_params": {
"brokers": [
"http://my.kafka.broker:9093"
],
"protocol": "PLAINTEXT",
"mechanism": "NO_AUTH"
},
"topics": [
{
"consumer_group_initial_offset": "latest",
"name": "users",
"deduplication": {
"enabled": True,
"id_field": "event_id",
"id_field_type": "string",
"time_window": "1h"
}
}
]
},
"join": {
"enabled": False
},
"sink": {
"type": "clickhouse",
"host": "http://my.clickhouse.server",
"port": "9000",
"database": "default",
"username": "default",
"password": "c2VjcmV0",
"secure": False,
"max_batch_size": 1000,
"max_delay_time": "30s",
"table": "users_dedup"
},
"schema": {
"fields": [
{
"source_id": "users",
"name": "event_id",
"type": "string",
"column_name": "event_id",
"column_type": "UUID"
},
{
"source_id": "users",
"name": "user_id",
"type": "string",
"column_name": "user_id",
"column_type": "UUID"
},
{
"source_id": "users",
"name": "created_at",
"type": "string",
"column_name": "created_at",
"column_type": "DateTime"
},
{
"source_id": "users",
"name": "name",
"type": "string",
"column_name": "name",
"column_type": "String"
},
{
"source_id": "users",
"name": "email",
"type": "string",
"column_name": "email",
"column_type": "String"
}
]
}
}
# Create a pipeline
pipeline = client.create_pipeline(pipeline_config)
```
### Get pipeline
```python
# Get a pipeline by ID
pipeline = client.get_pipeline("my-pipeline-id")
```
### List pipelines
```python
pipelines = client.list_pipelines()
for pipeline in pipelines:
print(f"Pipeline ID: {pipeline['pipeline_id']}")
print(f"Name: {pipeline['name']}")
print(f"Transformation Type: {pipeline['transformation_type']}")
print(f"Created At: {pipeline['created_at']}")
print(f"State: {pipeline['state']}")
```
### Stop / Terminate / Resume Pipeline
```python
pipeline = client.get_pipeline("my-pipeline-id")
pipeline.stop()
print(pipeline.status)
```
```
STOPPING
```
```python
# Stop a pipeline ungracefully (terminate)
client.stop_pipeline("my-pipeline-id", terminate=True)
print(pipeline.status)
```
```
TERMINATING
```
```python
pipeline = client.get_pipeline("my-pipeline-id")
pipeline.resume()
print(pipeline.status)
```
```
RESUMING
```
### Delete pipeline
Only stopped or terminated pipelines can be deleted.
```python
# Delete a pipeline
client.delete_pipeline("my-pipeline-id")
# Or delete via pipeline instance
pipeline.delete()
```
## Pipeline Configuration
For detailed information about the pipeline configuration, see [GlassFlow docs](https://docs.glassflow.dev/configuration/pipeline-json-reference).
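Before submitting a config, a quick local sanity check on the top-level keys used in the example above can catch typos early. This helper is purely illustrative (the key set comes from the example config, not from the SDK's own validation):

```python
# Top-level keys present in the example pipeline config above (illustrative only).
REQUIRED_KEYS = {"pipeline_id", "source", "sink", "schema"}

def missing_keys(config: dict) -> list[str]:
    """Return any expected top-level keys absent from a pipeline config dict."""
    return sorted(REQUIRED_KEYS - config.keys())
```

An empty return value means all the expected sections are present; otherwise the list names what to add before calling `client.create_pipeline(...)`.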
## Tracking
The SDK includes anonymous usage tracking to help improve the product. Tracking is enabled by default but can be disabled in two ways:
1. Using an environment variable:
```bash
export GF_TRACKING_ENABLED=false
```
2. Programmatically using the `disable_tracking` method:
```python
from glassflow.etl import Client
client = Client(host="my-glassflow-host")
client.disable_tracking()
```
The tracking collects anonymous information about:
- SDK version
- Platform (operating system)
- Python version
- Pipeline ID
- Whether joins or deduplication are enabled
- Kafka security protocol, auth mechanism used and whether authentication is disabled
- Errors during pipeline creation and deletion
## Development
### Setup
1. Clone the repository
2. Create a virtual environment
3. Install dependencies:
```bash
uv venv
source .venv/bin/activate
uv pip install -e .[dev]
```
### Testing
```bash
pytest
```
| text/markdown | null | GlassFlow <hello@glassflow.dev> | null | null | MIT | clickhouse, data-engineering, data-pipeline, etl, glassflow, kafka, streaming | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Py... | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"httpx>=0.26.0",
"mixpanel>=4.10.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0.2",
"requests>=2.31.0",
"build>=1.0.0; extra == \"build\"",
"hatch>=1.0.0; extra == \"build\"",
"twine>=4.0.0; extra == \"build\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"pytest>=7.0.0; extra == \"test... | [] | [] | [] | [
"Homepage, https://github.com/glassflow/glassflow-python-sdk",
"Documentation, https://glassflow.github.io/glassflow-python-sdk",
"Repository, https://github.com/glassflow/glassflow-python-sdk.git",
"Issues, https://github.com/glassflow/glassflow-python-sdk/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T08:48:14.632789 | glassflow-3.7.2.tar.gz | 87,805 | cb/75/1fb203e43d8c90196f2e9a300fa676957644420057e6fe2a63bc9fbfffcd/glassflow-3.7.2.tar.gz | source | sdist | null | false | e5e2e0a8e0bfb834bb6c7e694ee464af | b69b5ea35f1546760a750a6db58dd8e7cf4a6a12034d64daac7e610764f48040 | cb751fb203e43d8c90196f2e9a300fa676957644420057e6fe2a63bc9fbfffcd | null | [
"LICENSE.md"
] | 259 |
2.4 | profiler-cub | 0.0.10 | A beautiful profiler for Python projects with various options | # Profiler Cub 🐻
[](https://pypi.org/project/profiler-cub/)
A beautiful Python code profiling library with rich terminal visualizations. Profile your code, categorize by layers, analyze dependencies, and get gorgeous color-coded performance reports.
## Features
- 🎨 **Beautiful Rich Visualizations** - Color-coded tables with performance gradients
- 📊 **Layer-based Categorization** - Organize code into logical layers (Core, API, Database, etc.)
- 🔍 **Dependency Analysis** - See exactly how much time external libraries consume
- ⏱️ **Import vs Runtime Tracking** - Separate module load time from execution time
- 🎯 **Filtering & Sorting** - Multiple sort modes and threshold filtering
- 🔄 **Multiple Iterations** - Run workloads multiple times for stable measurements
## Installation
```bash
pip install profiler-cub
```
With [`uv`](https://docs.astral.sh/uv/):
```bash
uv tool install profiler-cub
```
## Quick Start
```python
from pathlib import Path
from profiler_cub.core import CodeProfiler
from profiler_cub.display import display_all
from funcy_bear.tools.gradient import ColorGradient
# Create profiler
profiler = CodeProfiler(
pkg_name="my_package",
threshold_ms=0.5, # Filter functions below 0.5ms
)
# Profile your code
def my_workload():
# Your code here
pass
profiler.run(my_workload, stats_file=Path("profile.stats"))
# Display beautiful results
gradient = ColorGradient(start_color="#00ff00", end_color="#ff0000")
display_all(profiler, color_gradient=gradient)
```
## Layer-Based Profiling
Organize your code into logical layers for better insights:
```python
profiler = CodeProfiler(
pkg_name="my_app",
module_map={
"Database": {"db/", "models/"},
"API": {"api/", "routes/"},
"Core": {"core/", "engine/"},
}
)
```
The profiler will categorize each function by its filepath and show aggregated stats per layer.
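Filepath-based layer lookup of this kind could be sketched as follows (a hypothetical illustration of the idea, not profiler-cub's internal code; the `module_map` shape matches the example above):

```python
def categorize(filepath: str, module_map: dict[str, set[str]], default: str = "Other") -> str:
    """Return the first layer whose path fragments match the function's filepath."""
    for layer, prefixes in module_map.items():
        if any(prefix in filepath for prefix in prefixes):
            return layer
    return default
```

Functions whose filepath matches none of the configured fragments fall into a catch-all layer.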
## Examples
Check out the [`examples/`](examples/) directory for complete examples:
- **`simple_example.py`** - Minimal setup, get started quickly
- **`profile_example.py`** - Full-featured with CLI args, setup/teardown, iterations
See [`examples/README.md`](examples/README.md) for detailed usage.
## Documentation
For comprehensive documentation, see [`CLAUDE.md`](CLAUDE.md) which includes:
- Complete architecture overview
- All configuration options
- Key concepts (layers, sort modes, thresholds, etc.)
- Real-world usage patterns
## Development
```bash
# Install dependencies
uv sync
# Run tests
nox -s tests
# Run linting
nox -s ruff_fix
# Run type checking
nox -s pyright
```
## License
MIT
## Credits
Created by Bear 🐻 with love for performance analysis.
| text/markdown | null | chaz <bright.lid5647@fastmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"funcy-bear>=0.0.25",
"lazy-bear>=0.0.11",
"rich>=14.2.0"
] | [] | [] | [] | [] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T08:47:46.570914 | profiler_cub-0.0.10-py3-none-any.whl | 24,970 | 15/fc/212e7e7baea2a83e74ed8d57ead72e785432f4113a7f252ff6d1b7532ed4/profiler_cub-0.0.10-py3-none-any.whl | py3 | bdist_wheel | null | false | 06f68ee6138fa7e7abde39c5d018eada | 31061a285c4764cb146e7c5a9635488a3a7f846da5589b6262b19731012d1503 | 15fc212e7e7baea2a83e74ed8d57ead72e785432f4113a7f252ff6d1b7532ed4 | null | [] | 120 |
2.4 | py-ibkr | 0.1.3 | A modern, Pydantic-based parser for Interactive Brokers (IBKR) Flex Query reports | # py-ibkr
A modern, Pydantic-based parser for Interactive Brokers (IBKR) Flex Query reports.
This version replaces the legacy `ibflex` library with strict type checking and improved support for newer IBKR XML fields.
## Features
- **Pydantic Models**: All data is parsed into typed Pydantic models with validation.
- **Robust Parsing**: Handles "messy" IBKR data (e.g., legacy enums like `Deposits/Withdrawals`, inconsistent date formats).
- **Forward Compatible**: Designed to handle new fields gracefully.
## Installation
```bash
uv pip install py-ibkr
# or
pip install py-ibkr
```
## Command Line Interface
`py-ibkr` includes a zero-dependency CLI for downloading Flex Queries.
```bash
# Download to a file
py-ibkr download --token YOUR_TOKEN --query-id YOUR_QUERY_ID --output report.xml
# Download with date range overrides (ISO format YYYY-MM-DD supported)
py-ibkr download -t YOUR_TOKEN -q YOUR_QUERY_ID --from-date 2023-01-01 --to-date 2023-01-31
# Alternatively, use a .env file (automatically loaded if present):
# IBKR_FLEX_TOKEN=your_token
# IBKR_FLEX_QUERY_ID=your_query_id
py-ibkr download -o report.xml
# Download to stdout (pipe to other tools)
py-ibkr download | xmllint --format -
```
## Setup: Obtaining your Token and Query ID
To use the automated downloader, you must enable the Flex Web Service in your Interactive Brokers account:
1. Log in to the **IBKR Client Portal**.
2. Navigate to **Performance & Reports** > **Flex Queries**.
3. **Obtain Token**: Click the gear icon or **Flex Web Service Configuration**. Enable the service and copy the **Current Token**.
4. **Obtain Query ID**: Create a new Flex Query (e.g., *Activity*). After saving it, the **Query ID** will be visible in the list of your Flex Queries.
> [!NOTE]
> Flex Queries are typically generated once per day after market close. Trade Confirmations are available with a 5-10 minute delay.
## Usage
### Downloading a Flex Query Report
You can automatically download reports using the `FlexClient`. This requires a Token and Query ID from the IBKR Client Portal.
```python
from py_ibkr import FlexClient, parse
# 1. Initialize the client
client = FlexClient()
# 2. Download the report (handles the request-poll-fetch protocol)
xml_data = client.download(
token="YOUR_IBKR_TOKEN",
query_id="YOUR_QUERY_ID"
)
# 3. Parse the downloaded data
response = parse(xml_data)
print(f"Query Name: {response.queryName}")
```
### Parsing a Flex Query File
Once you have a parsed response, iterate over its statements:
```python
for statement in response.FlexStatements:
    print(f"Account: {statement.accountId}")

    # Access Trades
    for trade in statement.Trades:
        print(f"Symbol: {trade.symbol}, Quantity: {trade.quantity}, Price: {trade.tradePrice}")

    # Access Cash Transactions
    for cash_tx in statement.CashTransactions:
        print(f"Type: {cash_tx.type}, Amount: {cash_tx.amount}")
```
### Models
You can import models directly for type hinting:
```python
from py_ibkr import Trade, CashTransaction
def process_trade(trade: Trade):
print(trade.symbol)
```
| text/markdown | null | Roman Medvedev <pypi@romavm.dev> | null | null | MIT | finance, flex-query, ibkr, interactive-brokers, pydantic | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0",
"mypy; extra == \"dev\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"types-python-dateutil; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/romamo/py-ibkr",
"Repository, https://github.com/romamo/py-ibkr",
"Issues, https://github.com/romamo/py-ibkr/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:47:13.594893 | py_ibkr-0.1.3-py3-none-any.whl | 14,797 | a4/45/211d6934a765191250829c8d39229297c3ffebd88e002feda19e38ed670c/py_ibkr-0.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 3d7155661bcd80d1e2266259c8b82aaf | e3fdfe1ea4436fdc9f08ca991f167f72c83462767d211028ab04cb8869a0f838 | a445211d6934a765191250829c8d39229297c3ffebd88e002feda19e38ed670c | null | [
"LICENSE"
] | 267 |
2.4 | genai-coroutines | 1.0.0 | High-performance async batch processing for Datalab Marker API (OCR) and OpenAI Responses API | # GenAI Coroutines
**High-performance async batch processing for Datalab Marker API (OCR) and OpenAI Responses API.**
Built in Rust with Python bindings, `genai-coroutines` eliminates GIL bottlenecks and processes hundreds of concurrent API requests with production-grade reliability — smart retries, precise rate limiting, structured output parsing, and cost tracking.
## Features
- **Concurrent Processing** — Semaphore-based concurrency control for both APIs.
- **Smart Retry Logic** — Exponential backoff with jitter. Auto-retries on 429/5xx; fails fast on 400/401.
- **Zero GIL** — Native Rust async runtime bypasses Python's Global Interpreter Lock.
- **Structured Output** — JSON Schema enforcement (OpenAI) and page-level structured extraction (Datalab).
- **Cost & Usage Tracking** — Token-level usage (OpenAI) and cost breakdown in cents (Datalab) returned with every result.
- **Parsing Helpers** — Built-in `parse_responses()` and `parse_documents()` to extract clean text/HTML.
- **Order Preservation** — Results always match input order.
---
## Quick Start
```python
import asyncio
from genai_coroutines import (
DocumentConfig, DocumentProcessor, parse_documents,
ResponsesRequest, ResponsesProcessor, parse_responses,
)
```
---
## Datalab OCR — `DocumentConfig` & `DocumentProcessor`
Process PDFs and images into structured text, HTML, or Markdown using the Datalab Marker API.
### All Parameters — `DocumentConfig`
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | **Required** | Your Datalab API key. |
| `api_url` | `str` | `"https://www.datalab.to/api/v1/marker"` | API endpoint URL. Override for custom/self-hosted endpoints. |
| `output_format` | `str` | `"json"` | `"json"` · `"html"` · `"markdown"` · `"chunks"` |
| `mode` | `str` | `"accurate"` | `"fast"` · `"balanced"` · `"accurate"` |
| **Concurrency & Retry** ||||
| `max_concurrent_requests` | `int` | `10` | Maximum parallel API calls. Controls semaphore permits. |
| `poll_interval_secs` | `int` | `2` | Seconds between status polling requests. |
| `max_poll_attempts` | `int` | `60` | Max polling attempts before timeout. |
| `max_retries` | `int` | `5` | Retry attempts on rate-limit/server errors. |
| `base_retry_delay_secs` | `int` | `5` | Base delay for exponential backoff (seconds). |
| `jitter_percent` | `int` | `200` | Random jitter range (±%) applied to backoff. |
| **Structured Extraction** ||||
| `page_schema` | `str` (JSON) | `None` | JSON schema string for structured data extraction per page. See [Structured Extraction](#structured-extraction-datalab) below. |
| **Page Control** ||||
| `paginate` | `bool` | `False` | Return output separated by page. |
| `page_range` | `str` | `None` | Specific pages to process (e.g., `"0-5"`, `"0,2,4"`). |
| `max_pages` | `int` | `None` | Maximum number of pages to process. |
| `disable_image_extraction` | `bool` | `False` | Skip image extraction from documents. |
| **Advanced** ||||
| `extras` | `str` (JSON) | `None` | Additional API parameters as a JSON string. |
| `webhook_url` | `str` | `None` | URL to receive webhook callback on completion. |
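For a feel of how `max_retries`, `base_retry_delay_secs`, and `jitter_percent` interact, the backoff delay can be approximated as below. This is a back-of-the-envelope sketch; the library's exact formula may differ:

```python
import random

def retry_delay(attempt: int, base_retry_delay_secs: int = 5, jitter_percent: int = 200) -> float:
    """Exponential backoff with random jitter (illustrative approximation)."""
    base = base_retry_delay_secs * (2 ** attempt)        # 5s, 10s, 20s, ...
    jitter = base * (jitter_percent / 100) * random.random()  # up to +200% by default
    return base + jitter
```

With the defaults, the third retry waits somewhere between 20 s and 60 s, which is why raising `jitter_percent` helps spread out requests against a rate-limited API.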
### Structured Extraction (Datalab)
Use `page_schema` to extract structured fields from each page. Pass a **JSON schema string** describing the fields you want:
```python
import json
schema = json.dumps({
"type": "object",
"properties": {
"patient_name": {"type": "string", "description": "Full name of the patient"},
"diagnosis": {"type": "string", "description": "Primary diagnosis"},
"date": {"type": "string", "description": "Date of visit (YYYY-MM-DD)"}
},
"required": ["patient_name", "diagnosis"]
})
config = DocumentConfig(
api_key="YOUR_DATALAB_KEY",
output_format="json",
mode="accurate",
page_schema=schema # Structured extraction
)
```
> **Note**: The `page_schema` value is validated as valid JSON at initialization time. If the string is not valid JSON, a `ValueError` is raised immediately.
### Usage Example
```python
import asyncio
from genai_coroutines import DocumentConfig, DocumentProcessor, parse_documents
async def main():
config = DocumentConfig(
api_key="YOUR_DATALAB_KEY",
mode="accurate",
max_concurrent_requests=10,
page_range="0-5", # Only process first 6 pages
max_pages=10 # Safety cap
)
processor = DocumentProcessor(config)
# Load documents
files = ["report1.pdf", "report2.pdf", "scan.png"]
batch = []
for f in files:
with open(f, "rb") as fh:
batch.append(fh.read())
# Process
results = await processor.process_multiparts(batch)
# Parse consolidated HTML from each document
html_list = parse_documents(results)
for i, html in enumerate(html_list):
print(f"Doc {i}: {len(html)} chars of HTML")
# Access cost breakdown (native Python dict)
for r in results:
if r["success"] and r.get("cost_breakdown"):
cost = r["cost_breakdown"] # Already a dict, no json.loads needed
print(f"Cost: {cost['final_cost_cents']} cents")
asyncio.run(main())
```
### OCR Output Structure
Each item in the returned list:
```python
{
"index": 0, # Matches input order
"success": True,
"json_response": "{...}", # Raw JSON string from Datalab API
"cost_breakdown": { # Cost tracking (when available)
"final_cost_cents": 15,
"list_cost_cents": 15
},
"processing_time_secs": 4.5,
"error": None # Error message if success=False
}
```
---
## OpenAI Responses API — `ResponsesRequest` & `ResponsesProcessor`
Batch process chat completions with structured output, reasoning models, tools, and multi-turn conversations.
### All Parameters — `ResponsesRequest`
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | **Required** | Your OpenAI API key. |
| `system_prompt` | `str` | **Required** | System instructions for the model. |
| `user_prompts` | `list[str]` | **Required** | List of user prompts to process as a batch. |
| `model` | `str` | **Required** | Model ID: `"gpt-4o"`, `"gpt-4o-mini"`, `"o3-mini"`, etc. |
| `response_format` | `dict` | **Required** | Output format. See [Structured Output](#structured-output-openai) below. |
| `timeout_secs` | `int` | `60` | Per-request timeout in seconds. |
| **Concurrency & Retry** ||||
| `max_concurrent_requests` | `int` | `10` | Maximum parallel API calls. Controls semaphore permits. |
| `max_retries` | `int` | `5` | Retry attempts on rate-limit/server errors. |
| `retry_delay_min_ms` | `int` | `1000` | Minimum backoff delay in milliseconds. |
| `retry_delay_max_ms` | `int` | `60000` | Maximum backoff delay in milliseconds. |
| **Sampling** ||||
| `temperature` | `float` | `None` | Sampling temperature (0.0–2.0). |
| `top_p` | `float` | `None` | Nucleus sampling threshold. |
| `max_output_tokens` | `int` | `None` | Maximum tokens in the response. |
| **Reasoning (o-series models)** ||||
| `reasoning_effort` | `str` | `None` | `"low"` · `"medium"` · `"high"` — Controls thinking depth for o3-mini, o1, etc. |
| `reasoning_summary` | `str` | `None` | `"auto"` · `"concise"` · `"detailed"` — Controls reasoning summary output. |
| **Tools & Function Calling** ||||
| `tools` | `list[dict]` | `None` | Tool/function definitions for function calling. |
| `tool_choice` | `str` or `dict` | `None` | Tool selection strategy (`"auto"`, `"required"`, or a specific tool). |
| `parallel_tool_calls` | `bool` | `None` | Allow model to call multiple tools in parallel. |
| **Multi-Turn** ||||
| `previous_response_id` | `str` | `None` | ID of a previous response to continue a conversation. |
| `include` | `list[str]` | `None` | Additional data to include (e.g., `["file_search_call.results"]`). |
| **Other** ||||
| `store` | `bool` | `None` | Whether to store the response for later retrieval. |
| `truncation` | `str` | `None` | Truncation strategy for context window overflow. |
| `metadata` | `dict` | `None` | Custom metadata to attach to the request. |
| `service_tier` | `str` | `None` | Service tier (`"auto"`, `"default"`). |
| `stream` | `bool` | `None` | Enable streaming (advanced). |
### Structured Output (OpenAI)
The `response_format` parameter controls how the model formats its output. Three modes are supported:
#### 1. JSON Schema (Strict)
Force the model to output valid JSON matching your exact schema:
```python
request = ResponsesRequest(
api_key="YOUR_KEY",
model="gpt-4o-mini",
system_prompt="Extract patient info from the text.",
user_prompts=["Patient John Doe, age 45, diagnosed with hypertension on 2024-01-15."],
response_format={
"type": "json_schema",
"json_schema": {
"name": "patient_info",
"schema": {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"},
"diagnosis": {"type": "string"},
"date": {"type": "string"}
},
"required": ["name", "age", "diagnosis", "date"],
"additionalProperties": False
},
"strict": True
}
}
)
```
**Output** (guaranteed valid JSON matching schema):
```json
{"name": "John Doe", "age": 45, "diagnosis": "hypertension", "date": "2024-01-15"}
```
#### 2. JSON Object (Flexible)
Force valid JSON output without a specific schema:
```python
response_format={"type": "json_object"}
```
#### 3. Plain Text
No format enforcement:
```python
response_format={"type": "text"}
```
### Reasoning Models (o3-mini, o1)
For reasoning models, control the depth of thinking and summary:
```python
request = ResponsesRequest(
api_key="YOUR_KEY",
model="o3-mini",
system_prompt="Solve this step by step.",
user_prompts=["What is 1234 * 5678?"],
response_format={"type": "text"},
reasoning_effort="high", # low | medium | high
reasoning_summary="detailed" # auto | concise | detailed
)
```
### Function Calling / Tools
Define tools the model can call:
```python
request = ResponsesRequest(
api_key="YOUR_KEY",
model="gpt-4o",
system_prompt="You are a helpful assistant with access to tools.",
user_prompts=["What's the weather in San Francisco?"],
response_format={"type": "text"},
tools=[{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"}
},
"required": ["location"]
}
}
}],
tool_choice="auto"
)
```
### Multi-Turn Conversations
Continue a conversation by referencing a previous response:
```python
# First turn
results1 = await processor.process_batch(request1)
response_id = json.loads(results1["results"][0]["raw_response"])["id"]
# Second turn
request2 = ResponsesRequest(
api_key="YOUR_KEY",
model="gpt-4o",
system_prompt="You are a helpful assistant.",
user_prompts=["Can you elaborate on that?"],
response_format={"type": "text"},
previous_response_id=response_id # Continue the conversation
)
results2 = await processor.process_batch(request2)
```
### Usage Example
```python
import asyncio, json
from genai_coroutines import ResponsesRequest, ResponsesProcessor, parse_responses
async def main():
request = ResponsesRequest(
api_key="YOUR_OPENAI_KEY",
model="gpt-4o-mini",
system_prompt="You are an expert data extractor.",
user_prompts=[
"Extract: John Doe, 45, hypertension",
"Extract: Jane Smith, 32, diabetes",
],
response_format={
"type": "json_schema",
"json_schema": {
"name": "patient",
"schema": {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"},
"condition": {"type": "string"}
},
"required": ["name", "age", "condition"],
"additionalProperties": False
},
"strict": True
}
},
max_concurrent_requests=20,
max_retries=3
)
processor = ResponsesProcessor()
results = await processor.process_batch(request)
# Parse clean text
texts = parse_responses(results)
for text in texts:
data = json.loads(text)
print(f"{data['name']} — {data['condition']}")
# Access token usage (native Python dict — no json.loads needed)
for r in results["results"]:
if r["success"] and r.get("usage"):
usage = r["usage"] # Already a dict
print(f"Tokens: {usage['total_tokens']}")
asyncio.run(main())
```
### OpenAI Output Structure
```python
{
"total_success": 2,
"total_errors": 0,
"results": [
{
"success": True,
"raw_response": "{...}", # Full OpenAI JSON response (string)
"usage": { # Token usage (native Python dict)
"input_tokens": 42,
"output_tokens": 18,
"total_tokens": 60,
"input_tokens_details": {"cached_tokens": 0},
"output_tokens_details": {"reasoning_tokens": 0}
}
},
{
"success": False,
"error": "401 Unauthorized: Invalid API key",
"error_type": "authentication_error",
"param": None,
"code": "invalid_api_key",
"is_retriable": False,
"attempts": 1
}
]
}
```
---
## Helper Functions
### `parse_responses(results) → list[str]`
Extracts clean assistant message text from OpenAI batch results.
```python
from genai_coroutines import parse_responses
texts = parse_responses(results)
# ["John Doe, 45, hypertension...", "Jane Smith, 32, diabetes..."]
```
- **Input**: The dict returned by `ResponsesProcessor.process_batch()`.
- **Output**: List of strings — **one per input prompt** (same length as `user_prompts`).
- **Behavior**:
- Aggregates all text from `output[].content[].text` for each response.
- If the model called tools instead of producing text, returns the **full raw JSON** so you can inspect tool calls.
- Returns `""` for failed prompts.
### `parse_documents(results) → list[str]`
Extracts content from all OCR results, auto-detecting the output format.
```python
from genai_coroutines import parse_documents
contents = parse_documents(results)
# ["<p>Page 1 content</p>\n<p>Page 2 content</p>", ...]
```
- **Input**: The list returned by `DocumentProcessor.process_multiparts()`.
- **Output**: List of strings — **one per input document** (same length as input batch).
- **Format handling**:
- `output_format="json"`: Extracts and concatenates `json.children[].html`.
- `output_format="html"`: Returns raw HTML string.
- `output_format="markdown"`: Returns markdown string.
- `paginate=True`: Concatenates content from paginated output.
- `page_schema` (structured extraction): Returns the full JSON string.
- Fallback: Returns raw JSON string so nothing is ever lost.
- Returns `""` for failed documents.
---
## Error Handling
Errors are classified internally:
| Category | HTTP Codes | Behavior |
|---|---|---|
| **Retriable** | 429, 500, 502, 503, 504, timeouts | Auto-retry with exponential backoff + jitter |
| **Fatal** | 400, 401, 403, 404 | Fail immediately, no retry |
- Failed items have `"success": False` with an `"error"` message.
- The batch **never crashes** due to individual failures — all other items continue processing.
- For OpenAI errors, `is_retriable`, `error_type`, `param`, `code`, and `attempts` are included.
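In practice a caller can split a batch into successes and failures by inspecting these fields. The `results` dict below is a hand-written sample following the output structure shown earlier, not real API output:

```python
# Sample batch result mirroring the documented structure
results = {
    "total_success": 1,
    "total_errors": 1,
    "results": [
        {"success": True, "raw_response": "{}", "usage": {"total_tokens": 60}},
        {"success": False, "error": "401 Unauthorized: Invalid API key",
         "error_type": "authentication_error", "is_retriable": False, "attempts": 1},
    ],
}

# Partition failed items, then pick out the ones worth resubmitting
failed = [r for r in results["results"] if not r["success"]]
retriable = [r for r in failed if r.get("is_retriable")]
print(len(failed), len(retriable))
```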
---
## Logging
All Rust-level logs are bridged to Python's `logging` module under the `genai_coroutines` logger:
```python
import logging
# See all logs
logging.basicConfig(level=logging.INFO)
# Per-module control
logging.getLogger("genai_coroutines.ocr").setLevel(logging.DEBUG)
logging.getLogger("genai_coroutines.responses").setLevel(logging.WARNING)
```
Sample log output:
```
INFO [chandra] batch_start | files=10 concurrency=10
WARN [chandra] rate_limit | index=3 attempt=2/5
INFO [chandra] task_success | index=3 time=12.50s attempts=2
INFO [chandra] batch_done | ok=10/10 errors=0
```
---
## Performance Tuning
| Scenario | `max_concurrent_requests` | Retry Config |
|---|---|---|
| **Small batch (<50)** | 5–10 | Defaults |
| **High volume (1k+)** | 30–50 | Increase `retry_delay_max_ms` |
| **Rate-limited API** | 3–5 | Increase `jitter_percent` and `base_retry_delay_secs` |
| **Reasoning models** | 5–10 | Increase `timeout_secs` to 120+ |
> `max_concurrent_requests` controls the semaphore. Even with high concurrency, the retry logic will back off automatically on rate limits.
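As a rough mental model, exponential backoff with jitter can be sketched as below. Parameter names follow the table above, but the exact formula used internally is not documented, so this is only an approximation:

```python
import random

def retry_delay(attempt: int,
                base_retry_delay_secs: float = 1.0,
                retry_delay_max_ms: int = 30_000,
                jitter_percent: float = 20.0) -> float:
    """Exponential backoff capped at retry_delay_max_ms, with +/- jitter."""
    delay = min(base_retry_delay_secs * 2 ** attempt, retry_delay_max_ms / 1000)
    jitter = delay * jitter_percent / 100
    return delay + random.uniform(-jitter, jitter)

for attempt in range(5):
    print(f"attempt {attempt}: ~{retry_delay(attempt):.2f}s")
```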
| text/markdown; charset=UTF-8; variant=GFM | Soham Jagtap | null | null | null | MIT | rust, async, ocr, llm, batch-processing, datalab, openai | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/Soham52434/GenAi-Coroutines#readme",
"Homepage, https://github.com/Soham52434/GenAi-Coroutines",
"Repository, https://github.com/Soham52434/GenAi-Coroutines"
] | maturin/1.12.2 | 2026-02-18T08:46:53.040301 | genai_coroutines-1.0.0-cp39-abi3-win_amd64.whl | 1,958,815 | 03/5d/e7c721dab5cc45ee5c721a00bafa11d2602604f7e9faa8dc43ec8ce2b949/genai_coroutines-1.0.0-cp39-abi3-win_amd64.whl | cp39 | bdist_wheel | null | false | d02b1644b33c80a7e1f8d71a8ccb2097 | 475460bcb9b79bab9ad15190e5899d3be2d3d3f5368b25ef195605e6c17f3536 | 035de7c721dab5cc45ee5c721a00bafa11d2602604f7e9faa8dc43ec8ce2b949 | null | [] | 554 |
2.4 | zkyhaxpy | 0.3.1.4.6 | A swiss-knife Data Science package for python | This is zkyhaxpy package.
zkyhaxpy is a Python package for personal use.
It provides a collection of useful tools for working as a data scientist.
| text/markdown | null | Surasak Choedpasuporn <surasak.cho@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"pandas",
"numpy"
] | [] | [] | [] | [
"Homepage, https://github.com/surasakcho/zkyhaxpy"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-18T08:46:45.729967 | zkyhaxpy-0.3.1.4.6.tar.gz | 404,659 | 37/53/df26c9fd346ba70eb150a00c58e2c241a94cce091565dfb21021b8fbd77b/zkyhaxpy-0.3.1.4.6.tar.gz | source | sdist | null | false | 6b3f4996585b7b2908650776eb632a15 | d229708369c5208c03bfba2878c11adca63303b3a40dbec8b2f1474bfbc91fc3 | 3753df26c9fd346ba70eb150a00c58e2c241a94cce091565dfb21021b8fbd77b | null | [
"LICENSE"
] | 272 |
2.4 | aas-standard-parser | 0.3.8 | Some auxiliary functions for parsing standard submodels | # aas-standard-parser
<div align="center">
<img src="docs/assets/fluid_logo.svg" alt="aas_standard_parser" width=500 />
</div>
---
[](LICENSE)
[](https://github.com/fluid40/aas-standard-parser/actions)
[](https://pypi.org/project/aas-standard-parser/)
This project provides tools for parsing and handling Asset Administration Shell (AAS) standard submodels, with a focus on AID and AIMC submodels. It enables:
- Extraction, interpretation, and mapping of submodel elements and their properties
- Working with references, semantic IDs, and submodel element collections
- Representation and processing of mapping configurations and source-sink relations
- Structured and colored logging, including log file management
These components enable efficient parsing, transformation, and analysis of AAS submodels in Python-based workflows.
> **Note:** Most functions in this project utilize the [python aas sdk framework](https://github.com/aas-core-works/aas-core3.0-python) for parsing and handling AAS submodels, ensuring compatibility with the official AAS data models and structures.
---
## Provided Parsers
- **AID Parser**: Parses AID submodels to extract interface descriptions, properties, and security/authentication details.
- **AIMC Parser**: Parses AIMC submodels to extract and process mapping configurations and source-sink relations.
- **AAS Parser**: Utilities to extract submodel IDs from an Asset Administration Shell.
- **Submodel Parser**: Helpers to retrieve submodel elements by semantic ID or by path within a submodel.
## Helper Modules
- **Collection Helpers**: Functions to search and filter submodel elements by semantic ID, idShort, or supplemental semantic ID within collections.
- **Reference Helpers**: Utilities for working with references, such as constructing idShort paths and extracting values from reference keys.
- **Utilities**: General utility functions, including loading a submodel from a file.
---
## API References
- AID Parser
- AIMC Parser
- [AAS Parser](docs/api_references/api_aas_parser.md)
- [Submodel Parser](docs/api_references/api_submodel_parser.md)
- Collection Helpers
- [Reference Helpers](docs/api_references/api_reference_helpers.md)
- [Utilities](docs/api_references/api_utils.md)
## Resources
🤖 [Releases](http://github.com/fluid40/aas-standard-parser/releases)
📦 [Pypi Packages](https://pypi.org/project/aas-standard-parser/)
📜 [MIT License](LICENSE)
| text/markdown | null | Daniel Klein <daniel.klein@em.ag> | null | null | MIT License
Copyright (c) 2025 Fluid4.0
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"basyx-python-sdk>=2.0.0",
"python-json-logger>=4.0.0",
"requests>=2.32.5"
] | [] | [] | [] | [
"Homepage, https://github.com/fluid40/aas-standard-parser"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T08:46:07.696891 | aas_standard_parser-0.3.8.tar.gz | 19,471 | b4/aa/7cfeea7a1fdb5ece53745b53ec05dacd2c4e8de5e59f9013f5c88918c392/aas_standard_parser-0.3.8.tar.gz | source | sdist | null | false | 34445f36495e14ec4ea7f52d1752d588 | 2df66f6a6bc5d89288d9f38fc4f762a54c8edb78b99a921f99497ccdaafc2d06 | b4aa7cfeea7a1fdb5ece53745b53ec05dacd2c4e8de5e59f9013f5c88918c392 | null | [
"LICENSE"
] | 286 |
2.4 | pyturbocode | 0.1.13 | A simple implementation of a turbo encoder and decoder | # pyturbocode
**pyturbocode** is a Python reference implementation of turbo codes (error-correction codes used in cellular and deep-space communications). The implementation prioritizes clarity over performance and provides a simple rate 1/3 code without puncturing.
The main API is the `TurboCodec` class, which provides a clean `bytes -> bytes` interface wrapping CommPy's turbo encoding/decoding functions.
## Architecture
### Core Component: TurboCodec
The `TurboCodec` class (`src/pyturbocode/TurboCodec.py`) is the central API with two methods:
- **`encode(data: bytes) -> bytes`**: Converts input bytes to bits, passes them through a turbo encoder (rate 1/3), and returns encoded bytes with a 32-bit header storing the original bit length
- **`decode(encoded_data: bytes) -> bytes`**: Extracts metadata from the header, performs iterative turbo decoding (default 8 iterations), and returns decoded bytes
The class uses:
- Two RSC (Recursive Systematic Convolutional) trellis instances with configurable constraint length (default K=3)
- CommPy's `turbo_encode` and `turbo_decode` functions
- A random interleaver (seeded with 1346 for reproducibility)
- Bit-to-bytes conversion utilities for interfacing with CommPy
### Encoding Format
The encoded packet structure:
```
[Original bit length: 4 bytes][Systematic stream][Non-systematic stream 1][Non-systematic stream 2]
```
The systematic and non-systematic streams are each equal in length to the original message bits, making the output 3x the original size (rate 1/3).
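Assuming the header is an unsigned 32-bit integer (the byte order is not specified in this README, so big-endian here is only an assumption), the framing can be sketched as:

```python
import struct

def frame(systematic: bytes, parity1: bytes, parity2: bytes, bit_len: int) -> bytes:
    """Pack: [4-byte bit-length header][systematic][non-systematic 1][non-systematic 2]."""
    return struct.pack(">I", bit_len) + systematic + parity1 + parity2

def unframe(packet: bytes) -> tuple[int, bytes, bytes, bytes]:
    """Split a packet back into the bit length and the three equal-length streams."""
    (bit_len,) = struct.unpack(">I", packet[:4])
    n = (bit_len + 7) // 8  # bytes per stream
    body = packet[4:]
    return bit_len, body[:n], body[n:2 * n], body[2 * n:3 * n]

packet = frame(b"\xab", b"\xcd", b"\xef", bit_len=8)
print(len(packet))  # 4-byte header + 3 one-byte streams = 7 bytes
```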
## Common Commands
**Package manager**: Uses `uv` (not pip/pip-tools). Install with `uv sync --all-groups`.
### Testing
```bash
uv run pytest # Run all tests
uv run pytest tests/test_codec.py::test_codec # Run specific test
uv run pytest -v # Verbose output
uv run pytest -k keyword # Run tests matching keyword
./all_tests.sh # Full test + docs + coverage suite
```
### Code Quality
```bash
uv run black src/ tests/ # Format code (100 char line length)
uv run ruff check src/ tests/ # Lint with ruff
uv run pylint src/pyturbocode/ # Lint with pylint
```
### Documentation & Coverage
```bash
uv run pdoc --html --force --config latex_math=True -o htmldoc pyturbocode
uv run coverage html -d htmldoc/coverage --rcfile tests/coverage.conf
uv run docstr-coverage src/pyturbocode
```
## Code Style & Configuration
- **Line length**: 100 characters (configured in pyproject.toml for black, ruff, and pylint)
- **Formatter**: Black
- **Linters**: Ruff, Pylint
- **Testing framework**: Pytest with plugins (html, cov, instafail, sugar, xdist, picked, mock)
- **Pre-commit hooks**: Enabled (see .pre-commit-config.yaml) - runs black, ruff, and basic checks
## Key Files
- `src/pyturbocode/TurboCodec.py` - Main API (core encode/decode logic)
- `src/pyturbocode/__init__.py` - Package initialization and logging setup
- `tests/test_codec.py` - Single integration test for encode/decode roundtrip
- `pyproject.toml` - Project configuration with build system, dependencies, tool configs, and dependency groups (dev, test, doc)
- `all_tests.sh` - Automation script for full test suite and documentation generation
- `.pre-commit-config.yaml` - Pre-commit hooks for code quality checks
## Dependencies
**Core runtime**:
- `numpy>=2.4.2` - Numerical operations
- `scikit-commpy>=0.8.0` - CommPy library providing Trellis, turbo_encode/turbo_decode, interleavers
**Python**: 3.12+
**Dependency groups** (managed by uv):
- `dev`: pre-commit, black, ipython, coverage-badge
- `test`: pytest and plugins
- `doc`: pdoc3, genbadge, docstr-coverage
## Recent Changes
The project underwent refactoring that consolidated separate modules into the `TurboCodec.py` API. The following modules were deprecated and removed:
- `rsc.py`, `trellis.py`, `siso_decoder.py`, `turbo_decoder.py`, `turbo_encoder.py`, `awgn.py`
These functions are now accessed directly through CommPy as external dependencies.
## Development Notes
- All functionality is currently in a single `TurboCodec` class - no need to navigate multiple modules
- The implementation uses CommPy's high-level API; low-level encoding/decoding details are abstracted away
- Tests are minimal (single roundtrip test) - focus is on correctness of the wrapper interface
- The interleaver seed is fixed (1346) for reproducibility across encode/decode cycles
| text/markdown | null | Yann de Thé <ydethe@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy>=2.4.2",
"scikit-commpy>=0.8.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/ydethe/turbocode/issues",
"Homepage, https://github.com/ydethe/turbocode",
"Source, https://github.com/ydethe/turbocode"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:45:52.481152 | pyturbocode-0.1.13-py3-none-any.whl | 6,120 | 18/02/61a1f090b448ca4fa0b88c252c796bc322b755d1020dec14d728fa35d9b3/pyturbocode-0.1.13-py3-none-any.whl | py3 | bdist_wheel | null | false | 8e5accef8e1490fadb144398337e0d9f | 7aa3d109b6e7732a09ed7f41742cb9b9828e9b86d0e95055455f94f45ae038e9 | 180261a1f090b448ca4fa0b88c252c796bc322b755d1020dec14d728fa35d9b3 | null | [
"LICENSE"
] | 115 |
2.4 | starspring | 0.2.2 | A Spring Boot-inspired Python web framework built on Starlette, combining the elegance of Spring's dependency injection with Python's simplicity | # StarSpring
A Spring Boot-inspired Python web framework built on Starlette, combining the elegance of Spring's dependency injection with Python's simplicity.
## Table of Contents
- [What is StarSpring?](#what-is-starspring)
- [Key Features](#key-features)
- [Framework Comparison](#framework-comparison)
- [Core Concepts](#core-concepts)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Database & ORM](#database--orm)
- [Building REST APIs](#building-rest-apis)
- [Working with Templates](#working-with-templates)
- [Middleware](#middleware)
- [Security](#security)
- [Advanced Features](#advanced-features)
- [Examples](#examples)
- [Contributing](#contributing)
- [License](#license)
---
## What is StarSpring?
StarSpring is a modern Python web framework that brings Spring Boot's proven architectural patterns to the Python ecosystem. It provides:
- **Dependency Injection**: Automatic component scanning and dependency resolution
- **Declarative Programming**: Annotations-based routing, services, and repositories
- **Built-in ORM**: SQLAlchemy integration with Spring Data-style repositories
- **Convention over Configuration**: Auto-configuration with sensible defaults
- **Production-Ready**: Built on Starlette for high performance and ASGI support
StarSpring is ideal for developers who appreciate Spring Boot's structure but prefer Python's ecosystem, or Python developers looking for a more opinionated, enterprise-ready framework.
---
## What's New in v0.2.2 🎉
### starspring-security Integration
Password hashing is now built into StarSpring via the companion [`starspring-security`](https://pypi.org/project/starspring-security/) package — automatically installed with StarSpring.
```python
from starspring_security import BCryptPasswordEncoder
encoder = BCryptPasswordEncoder()
hashed = encoder.encode("my_password") # BCrypt hash
encoder.matches("my_password", hashed) # True
```
### Session-Based Authentication
First-class support for session-based auth using Starlette's `SessionMiddleware`.
```python
from starlette.middleware.sessions import SessionMiddleware
app.add_middleware(SessionMiddleware, secret_key="your-secret-key")
```
---
## What's New in v0.2.0
### Automatic Form Data Parsing
HTML forms are now automatically parsed into Pydantic models - no more manual `await request.form()`!
```python
@PostMapping("/users")
async def create_user(self, form: UserCreateForm):  # Automatic parsing!
    user = await self.user_service.create(form.name, form.email, form.age)
    return ModelAndView("success.html", {"user": user}, status_code=201)
```
### Custom HTTP Status Codes
Set custom status codes for template responses:
```python
return ModelAndView("users/created.html", {"user": user}, status_code=201)
return ModelAndView("errors/404.html", {"message": "Not found"}, status_code=404)
```
### HTTP Method Override
Use PUT, PATCH, and DELETE methods in HTML forms with the `_method` field:
```html
<form action="/users/{{ user.id }}" method="POST">
    <input type="hidden" name="_method" value="PUT">
    <!-- form fields -->
</form>
```
```python
@PutMapping("/users/{id}")  # Works with HTML forms!
async def update_user(self, id: int, form: UserForm):
    # Handles both API requests and form submissions
    ...
```
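Conceptually, the override resolves the effective HTTP method from the `_method` form field. A minimal sketch of that logic (not StarSpring's internal implementation) looks like:

```python
# Methods HTML forms cannot send natively, commonly allowed as overrides
ALLOWED_OVERRIDES = {"PUT", "PATCH", "DELETE"}

def effective_method(request_method: str, form: dict) -> str:
    """Resolve the HTTP method, honoring a _method override on POST forms."""
    override = str(form.get("_method", "")).upper()
    if request_method.upper() == "POST" and override in ALLOWED_OVERRIDES:
        return override
    return request_method.upper()

print(effective_method("POST", {"_method": "PUT"}))  # -> PUT
print(effective_method("POST", {}))                  # -> POST
```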
---
## Key Features
### Dependency Injection & IoC Container
Automatic component scanning, registration, and dependency resolution with constructor injection.
### Declarative Routing
Define REST endpoints using familiar decorators like `@GetMapping`, `@PostMapping`, `@PutMapping`, and `@DeleteMapping`.
### Spring Data-Style Repositories
Auto-implemented repository methods based on naming conventions:
```python
async def find_by_username(self, username: str) -> User | None:
    pass  # Automatically implemented!
```
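Under the hood this style of repository derives a query from the method name. The name-parsing step can be illustrated in plain Python (a simplified sketch, not the actual query builder):

```python
def parse_finder_name(name: str) -> list[str]:
    """Split e.g. 'find_by_author_id_and_published' into its field names."""
    prefix = "find_by_"
    if not name.startswith(prefix):
        raise ValueError(f"not a finder method: {name}")
    # Field names themselves may contain underscores, so split only on '_and_'
    return name[len(prefix):].split("_and_")

print(parse_finder_name("find_by_username"))                 # -> ['username']
print(parse_finder_name("find_by_author_id_and_published"))  # -> ['author_id', 'published']
```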
### Built-in ORM
SQLAlchemy integration with automatic table creation, transaction management, and entity mapping.
### Auto-Configuration
Automatically loads `application.yaml` and scans common module names (`controller`, `service`, `repository`, `entity`).
### Template Support
Jinja2 integration for server-side rendering with the `@TemplateMapping` decorator.
### Middleware Stack
Built-in exception handling, logging, CORS support, and custom middleware capabilities.
### Type Safety
Full type hint support with automatic parameter injection and validation.
---
## Framework Comparison
| Feature | StarSpring | FastAPI | Flask | Django |
|---------|-----------|---------|-------|--------|
| **Layered Architecture** | Yes | No | No | Yes |
| **Dependency Injection** | Built-in IoC | Function params | Manual/Extensions | Manual |
| **Built-in ORM** | Yes (SQLAlchemy) | No | Extension | Yes (Django ORM) |
| **Auto-Configuration** | Yes | No | No | Partial |
| **Auto-Repositories** | Yes | No | No | No |
| **Async Support** | Native (ASGI) | Native (ASGI) | Limited | Partial |
| **Template Engine** | Jinja2 | Manual | Jinja2 | Django Templates |
| **Admin Interface** | No | No | Extensions | Yes |
**When to choose StarSpring:**
- You're familiar with Spring Boot and want similar patterns in Python
- Building API-first or microservice architectures
- Need strong separation of concerns (Controller/Service/Repository)
- Want automatic dependency injection and component scanning
- Prefer convention over configuration
---
## Core Concepts
### Application Context
The IoC (Inversion of Control) container that manages component lifecycle and dependencies.
### Components
Classes decorated with `@Controller`, `@Service`, or `@Repository` are automatically discovered and registered.
### Dependency Injection
Components are injected via constructor parameters:
```python
@Controller("/api/users")
class UserController:
    def __init__(self, user_service: UserService):
        self.user_service = user_service
```
### Entities
Domain models decorated with `@Entity` that map to database tables.
### Repositories
Data access layer with auto-implemented query methods based on method names.
### Services
Business logic layer, typically transactional, that orchestrates repositories.
### Controllers
HTTP request handlers that define REST endpoints and return responses.
---
## Installation
### Requirements
- Python 3.10 or higher
- pip or uv package manager
### Install from PyPI
```bash
pip install starspring
```
### Install for Development
```bash
git clone https://github.com/DasunNethsara-04/starspring.git
cd starspring
pip install -e .
```
### Optional Dependencies
For PostgreSQL support:
```bash
pip install psycopg2-binary
# or for async
pip install asyncpg
```
For MySQL support:
```bash
pip install pymysql
```
---
## Quick Start
### 1. Create Project Structure
```
my_app/
├── entity.py
├── repository.py
├── service.py
├── controller.py
├── main.py
└── application.yaml
```
### 2. Configure Database (`application.yaml`)
```yaml
database:
  url: "sqlite:///app.db"
  ddl-auto: "create-if-not-exists"

server:
  port: 8000
  host: "0.0.0.0"
```
### 3. Define Entity (`entity.py`)
```python
from starspring import BaseEntity, Column, Entity

@Entity(table_name="users")
class User(BaseEntity):
    # BaseEntity provides: id, created_at, updated_at
    username = Column(type=str, unique=True, nullable=False)
    email = Column(type=str, unique=True, nullable=False)
    is_active = Column(type=bool, default=True)
    role = Column(type=str, default="USER")
```
### 4. Create Repository (`repository.py`)
```python
from starspring import Repository, StarRepository
from entity import User

@Repository
class UserRepository(StarRepository[User]):
    # Auto-implemented based on method name!
    async def find_by_username(self, username: str) -> User | None:
        pass

    async def find_by_email(self, email: str) -> User | None:
        pass
```
### 5. Implement Service (`service.py`)
```python
from starspring import Service, Transactional
from entity import User
from repository import UserRepository

@Service
class UserService:
    def __init__(self, user_repo: UserRepository):
        self.user_repo = user_repo

    @Transactional
    async def create_user(self, username: str, email: str) -> User:
        # Check if user exists
        existing = await self.user_repo.find_by_email(email)
        if existing:
            raise ValueError(f"User with email {email} already exists")

        # Create and save
        user = User(username=username, email=email)
        return await self.user_repo.save(user)

    async def get_user(self, username: str) -> User | None:
        return await self.user_repo.find_by_username(username)
```
### 6. Build Controller (`controller.py`)
```python
from starspring import Controller, GetMapping, PostMapping, ResponseEntity
from service import UserService

@Controller("/api/users")
class UserController:
    def __init__(self, user_service: UserService):
        self.user_service = user_service

    @PostMapping("/register")
    async def register(self, username: str, email: str):
        try:
            user = await self.user_service.create_user(username, email)
            return ResponseEntity.created(user)
        except ValueError as e:
            return ResponseEntity.bad_request(str(e))

    @GetMapping("/{username}")
    async def get_user(self, username: str):
        user = await self.user_service.get_user(username)
        if user:
            return ResponseEntity.ok(user)
        return ResponseEntity.not_found()

    @GetMapping("")
    async def list_users(self):
        users = await self.user_service.user_repo.find_all()
        return ResponseEntity.ok(users)
```
### 7. Bootstrap Application (`main.py`)
```python
from starspring import StarSpringApplication

app = StarSpringApplication(title="My User API")

if __name__ == "__main__":
    app.run()
```
### 8. Run the Application
```bash
python main.py
```
Your API is now running at `http://localhost:8000`!
**Test it:**
```bash
# Register a user
curl -X POST http://localhost:8000/api/users/register \
  -H "Content-Type: application/json" \
  -d '{"username": "john", "email": "john@example.com"}'

# Get user
curl http://localhost:8000/api/users/john

# List all users
curl http://localhost:8000/api/users
```
---
## Database & ORM
### Configuration
StarSpring uses SQLAlchemy under the hood. Configure your database in `application.yaml`:
```yaml
database:
  url: "postgresql://user:password@localhost/mydb"
  ddl-auto: "create-if-not-exists"  # Options: create, create-if-not-exists, update, none
  pool-size: 10
  max-overflow: 20
```
**Supported databases:**
- SQLite: `sqlite:///app.db`
- PostgreSQL: `postgresql://user:pass@host/db`
- MySQL: `mysql://user:pass@host/db`
### Defining Entities
Entities are Python classes decorated with `@Entity`:
```python
from datetime import datetime

from starspring import BaseEntity, Column, Entity

@Entity(table_name="posts")
class Post(BaseEntity):
    title = Column(type=str, nullable=False, length=200)
    content = Column(type=str, nullable=False)
    published = Column(type=bool, default=False)
    author_id = Column(type=int, nullable=False)
    published_at = Column(type=datetime, nullable=True)
```
**BaseEntity provides:**
- `id`: Auto-incrementing primary key
- `created_at`: Timestamp of creation
- `updated_at`: Timestamp of last update
### Column Options
```python
Column(
    type=str,        # Python type (str, int, bool, datetime)
    nullable=True,   # Allow NULL values
    unique=False,    # Unique constraint
    default=None,    # Default value
    length=None      # Max length for strings
)
```
### Repository Patterns
StarSpring provides Spring Data-style repositories with auto-implemented methods:
```python
from starspring.data.query_builder import QueryOperation

@Repository
class PostRepository(StarRepository[Post]):
    # Find by single field
    async def find_by_title(self, title: str) -> Post | None:
        pass

    # Find by multiple fields
    async def find_by_author_id_and_published(
        self, author_id: int, published: bool
    ) -> list[Post]:
        pass

    # Custom queries (manual implementation)
    async def find_recent_posts(self, limit: int = 10) -> list[Post]:
        # Implement custom logic here
        return await self._gateway.execute_query(
            "SELECT * FROM posts ORDER BY created_at DESC LIMIT :limit",
            {"limit": limit},
            Post,
            QueryOperation.FIND
        )
```
**Auto-implemented method patterns:**
- `find_by_<field>` - Find by single field
- `find_by_<field>_and_<field>` - Find by multiple fields
- `count_by_<field>` - Count matching records
- `delete_by_<field>` - Delete matching records
- `exists_by_<field>` - Check if exists
**Built-in methods:**
- `save(entity)` - Insert or update
- `find_by_id(id)` - Find by primary key
- `find_all()` - Get all records
- `delete(entity)` - Delete entity
- `count()` - Count all records
### Transactions
Use the `@Transactional` decorator for automatic transaction management:
```python
from starspring import Service, Transactional

@Service
class PostService:
    def __init__(self, post_repo: PostRepository):
        self.post_repo = post_repo

    @Transactional
    async def publish_post(self, post_id: int) -> Post:
        post = await self.post_repo.find_by_id(post_id)
        if not post:
            raise ValueError("Post not found")
        post.published = True
        post.published_at = datetime.now()
        return await self.post_repo.save(post)
```
---
## Building REST APIs
### Request Mapping Decorators
```python
from starspring import (
    Controller,
    GetMapping,
    PostMapping,
    PutMapping,
    DeleteMapping,
    PatchMapping
)

@Controller("/api/posts")
class PostController:
    @GetMapping("")
    async def list_posts(self):
        # GET /api/posts
        pass

    @GetMapping("/{id}")
    async def get_post(self, id: int):
        # GET /api/posts/123
        pass

    @PostMapping("")
    async def create_post(self, title: str, content: str):
        # POST /api/posts
        pass

    @PutMapping("/{id}")
    async def update_post(self, id: int, title: str, content: str):
        # PUT /api/posts/123
        pass

    @DeleteMapping("/{id}")
    async def delete_post(self, id: int):
        # DELETE /api/posts/123
        pass
```
### Parameter Injection
StarSpring automatically injects parameters from:
- **Path variables**: `/{id}` → `id: int`
- **Query parameters**: `?page=1` → `page: int`
- **Request body**: JSON payload → function parameters
```python
@GetMapping("/search")
async def search_posts(
    self,
    query: str,       # From ?query=...
    page: int = 1,    # From ?page=... (default: 1)
    limit: int = 10   # From ?limit=... (default: 10)
):
    # Implementation
    pass
```
### Response Handling
**Return entity directly:**
```python
@GetMapping("/{id}")
async def get_post(self, id: int):
    post = await self.post_service.get_post(id)
    return post  # Auto-serialized to JSON
```
**Use ResponseEntity for control:**
```python
from starspring import ResponseEntity

@GetMapping("/{id}")
async def get_post(self, id: int):
    post = await self.post_service.get_post(id)
    if post:
        return ResponseEntity.ok(post)
    return ResponseEntity.not_found()
```
**ResponseEntity methods:**
- `ResponseEntity.ok(body)` - 200 OK
- `ResponseEntity.created(body)` - 201 Created
- `ResponseEntity.accepted(body)` - 202 Accepted
- `ResponseEntity.no_content()` - 204 No Content
- `ResponseEntity.bad_request(body)` - 400 Bad Request
- `ResponseEntity.unauthorized(body)` - 401 Unauthorized
- `ResponseEntity.forbidden(body)` - 403 Forbidden
- `ResponseEntity.not_found(body)` - 404 Not Found
- `ResponseEntity.status(code, body)` - Custom status
### Request Body Validation
```python
from pydantic import BaseModel

class CreatePostRequest(BaseModel):
    title: str
    content: str
    published: bool = False

@PostMapping("")
async def create_post(self, request: CreatePostRequest):
    # Pydantic validates automatically
    post = await self.post_service.create_post(
        title=request.title,
        content=request.content,
        published=request.published
    )
    return ResponseEntity.created(post)
```
---
## Working with Templates
StarSpring integrates Jinja2 for server-side rendering.
### Setup Templates Directory
```
my_app/
├── templates/
│ ├── index.html
│ ├── post.html
│ └── layout.html
├── static/
│ ├── css/
│ └── js/
└── main.py
```
### Configure Template Path
```python
from starspring import StarSpringApplication
app = StarSpringApplication(title="My Blog")
app.add_template_directory("templates")
app.add_static_files("/static", "static")
```
### Template Controller
```python
from starspring import TemplateController, GetMapping, ModelAndView

@TemplateController("")
class WebController:
    def __init__(self, post_service: PostService):
        self.post_service = post_service

    @GetMapping("/")
    async def index(self) -> ModelAndView:
        posts = await self.post_service.get_recent_posts()
        return ModelAndView("index.html", {
            "posts": posts,
            "title": "My Blog"
        })

    @GetMapping("/post/{id}")
    async def view_post(self, id: int) -> ModelAndView:
        post = await self.post_service.get_post(id)
        return ModelAndView("post.html", {"post": post})
```
### Template Files
**templates/layout.html:**
```html
<!DOCTYPE html>
<html>
<head>
    <title>{% block title %}My Blog{% endblock %}</title>
    <link rel="stylesheet" href="/static/css/style.css">
</head>
<body>
    <nav>
        <a href="/">Home</a>
    </nav>
    <main>
        {% block content %}{% endblock %}
    </main>
</body>
</html>
```
**templates/index.html:**
```html
{% extends "layout.html" %}

{% block title %}{{ title }}{% endblock %}

{% block content %}
<h1>Recent Posts</h1>
{% for post in posts %}
    <article>
        <h2><a href="/post/{{ post.id }}">{{ post.title }}</a></h2>
        <p>{{ post.content[:200] }}...</p>
        <small>{{ post.created_at }}</small>
    </article>
{% endfor %}
{% endblock %}
```
---
## Middleware
### Built-in Middleware
StarSpring includes:
- **ExceptionHandlerMiddleware**: Catches and formats exceptions
- **LoggingMiddleware**: Logs requests and responses
- **CORSMiddleware**: Handles Cross-Origin Resource Sharing
### Enable CORS
```python
from starspring import StarSpringApplication
from starspring.middleware.cors import CORSConfig

app = StarSpringApplication(title="My API")

cors_config = CORSConfig(
    allow_origins=["http://localhost:3000"],
    allow_methods=["GET", "POST", "PUT", "DELETE"],
    allow_headers=["*"],
    allow_credentials=True
)
app.add_cors(cors_config)
```
### Custom Middleware
```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

class CustomMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        # Before request
        print(f"Request: {request.method} {request.url}")

        # Process request
        response = await call_next(request)

        # After request
        print(f"Response: {response.status_code}")
        return response

# Add to application
from starlette.middleware import Middleware

app = StarSpringApplication(title="My API")
app._middleware.append(Middleware(CustomMiddleware))
```
---
## Security
### Password Hashing with `starspring-security`
StarSpring ships with [`starspring-security`](https://pypi.org/project/starspring-security/) — a standalone password hashing library that works with any Python framework.
#### BCrypt (Default — Recommended)
```python
from starspring_security import BCryptPasswordEncoder
encoder = BCryptPasswordEncoder() # Default: rounds=12
encoder = BCryptPasswordEncoder(rounds=14) # Stronger (slower)
hashed = encoder.encode("my_password")
encoder.matches("my_password", hashed) # True
encoder.matches("wrong", hashed) # False
```
#### Argon2 (Modern, Memory-Hard)
```bash
pip install starspring-security[argon2]
```
```python
from starspring_security import Argon2PasswordEncoder
encoder = Argon2PasswordEncoder()
hashed = encoder.encode("my_password")
encoder.matches("my_password", hashed) # True
```
#### SHA-256 (Deprecated ⚠️)
```python
from starspring_security import Sha256PasswordEncoder
encoder = Sha256PasswordEncoder() # Raises DeprecationWarning
```
SHA-256 is too fast for passwords and vulnerable to brute-force attacks. Use BCrypt or Argon2 instead.
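To illustrate the warning above, the following standalone snippet (not part of `starspring-security`) measures roughly how many SHA-256 digests a single core can compute per second — each one a password guess an attacker gets almost for free:

```python
import hashlib
import time

# Rough single-core measurement of SHA-256 throughput. Each digest is one
# password candidate an attacker can test; bcrypt/Argon2 are deliberately
# many orders of magnitude slower per attempt.
count = 0
deadline = time.perf_counter() + 0.1  # measure for ~100 ms
while time.perf_counter() < deadline:
    hashlib.sha256(b"candidate-%d" % count).hexdigest()
    count += 1
print(f"~{count * 10:,} SHA-256 guesses per second on one core")
```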
---
### Session-Based Authentication
StarSpring supports session-based authentication via Starlette's `SessionMiddleware`. Sessions are signed using `itsdangerous` — tamper-proof out of the box.
#### Setup
```python
# main.py
from starspring import StarSpringApplication
from starlette.middleware.sessions import SessionMiddleware
app = StarSpringApplication(title="My App")
app.add_middleware(SessionMiddleware, secret_key="your-strong-secret-key")
app.run()
```
#### Using Sessions in Controllers
Inject `Request` into any controller method to read or write session data:
```python
from starlette.requests import Request
from starlette.responses import RedirectResponse
from starspring import TemplateController, GetMapping, PostMapping, ModelAndView
from starspring_security import BCryptPasswordEncoder
encoder = BCryptPasswordEncoder()
@TemplateController("")
class AuthController:
def __init__(self, user_service: UserService):
self.user_service = user_service
@PostMapping("/login")
async def login(self, form: LoginForm, request: Request):
user = await self.user_service.find_by_username(form.username)
if not user or not encoder.matches(form.password, user.password):
return ModelAndView("login.html", {"error": "Invalid credentials"}, status_code=401)
# Store user info in session
request.session["username"] = user.username
request.session["role"] = user.role
return RedirectResponse(url="/dashboard", status_code=302)
@GetMapping("/logout")
async def logout(self, request: Request):
request.session.clear()
return RedirectResponse(url="/login", status_code=302)
@GetMapping("/dashboard")
async def dashboard(self, request: Request):
if not request.session.get("username"):
return RedirectResponse(url="/login", status_code=302)
return ModelAndView("dashboard.html", {
"username": request.session["username"],
"role": request.session["role"],
})
```
#### Role-Based Authorization
```python
@GetMapping("/admin/users")
async def admin_users(self, request: Request):
if not request.session.get("username"):
return RedirectResponse(url="/login", status_code=302)
if request.session.get("role") != "ADMIN":
return RedirectResponse(url="/dashboard", status_code=302) # Forbidden
# ... admin logic
```
#### `application.yaml` Configuration
```yaml
server:
host: "0.0.0.0"
port: 8000
database:
url: "sqlite:///app.db"
ddl-auto: "create-if-not-exists"
```
> **Note:** Keep your `secret_key` secret and never commit it to version control. Use environment variables in production.
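One way to follow that advice — the variable name `SESSION_SECRET_KEY` is illustrative, not a StarSpring convention:

```python
import os

# Load the session secret from the environment in production instead of
# hard-coding it. The fallback value is for local development only.
secret_key = os.environ.get("SESSION_SECRET_KEY", "dev-only-insecure-key")
if secret_key == "dev-only-insecure-key":
    print("WARNING: using the insecure development secret key")
```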
---
## Advanced Features
### Lifecycle Hooks
```python
app = StarSpringApplication(title="My API")
@app.on_startup
async def startup():
print("Application starting...")
# Initialize resources
@app.on_shutdown
async def shutdown():
print("Application shutting down...")
# Cleanup resources
```
### Environment-Specific Configuration
```yaml
# application.yaml
spring:
profiles:
active: ${ENVIRONMENT:development}
---
# Development profile
spring:
profiles: development
database:
url: "sqlite:///dev.db"
---
# Production profile
spring:
profiles: production
database:
url: "postgresql://user:pass@prod-db/myapp"
```
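The `${ENVIRONMENT:development}` placeholder follows Spring Boot's `${NAME:default}` convention. Assuming StarSpring resolves it the same way (an assumption, not a documented internal), the lookup can be sketched as:

```python
import os
import re

# Sketch of ${NAME:default} resolution against environment variables,
# modeled on Spring Boot's placeholder syntax.
_PLACEHOLDER = re.compile(r"\$\{(?P<name>[^}:]+)(?::(?P<default>[^}]*))?\}")

def resolve(value: str) -> str:
    """Replace every ${NAME:default} with os.environ[NAME] or the default."""
    def repl(match: re.Match) -> str:
        name = match.group("name")
        default = match.group("default") or ""
        return os.environ.get(name, default)
    return _PLACEHOLDER.sub(repl, value)

os.environ["EXAMPLE_ENV"] = "production"
print(resolve("${EXAMPLE_ENV:development}"))        # production
print(resolve("${UNSET_EXAMPLE_VAR:development}"))  # development
```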
### Manual Component Registration
```python
from starspring.core.context import get_application_context
# Register bean manually
context = get_application_context()
context.register_bean("my_service", MyService())
# Get bean
service = context.get_bean(MyService)
```
### Custom Repository Methods
```python
from starspring.data.query_builder import QueryOperation
@Repository
class PostRepository(StarRepository[Post]):
async def find_published_posts(self) -> list[Post]:
# Custom SQL query
return await self._gateway.execute_query(
"SELECT * FROM posts WHERE published = :published",
{"published": True},
Post,
QueryOperation.FIND
)
```
---
## Examples
Complete examples are available in the `examples/` directory:
- **examples/example1**: Basic CRUD API with users
- **examples/example2**: Blog application with templates and form handling
- **examples/example3**: Session-based authentication with role-based authorization and Bootstrap 5 UI
---
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
---
## License
StarSpring is released under the MIT License. See [LICENSE](LICENSE) file for details.
---
## Support
- **Documentation**: [https://github.com/DasunNethsara-04/starspring](https://github.com/DasunNethsara-04/starspring)
- **Issues**: [https://github.com/DasunNethsara-04/starspring/issues](https://github.com/DasunNethsara-04/starspring/issues)
- **Discussions**: [https://github.com/DasunNethsara-04/starspring/discussions](https://github.com/DasunNethsara-04/starspring/discussions)
---
**Built with love for the Python community by developers who appreciate Spring Boot's elegance.**
| text/markdown | null | Dasun Nethsara <techsaralk.pro@gmail.com> | null | null | null | web, framework, starlette, spring-boot, dependency-injection, orm, rest-api, template-engine, async, asgi | [
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Lan... | [] | null | null | >=3.10 | [] | [] | [] | [
"starlette<1.0.0,>=0.40.0",
"pydantic>=2.7.0",
"typing-extensions>=4.8.0",
"sqlalchemy>=2.0.0",
"jinja2>=3.1.5",
"python-multipart>=0.0.18",
"pyyaml>=6.0",
"httpx>=0.24.0",
"uvicorn>=0.12.0",
"itsdangerous>=2.0.0",
"starspring-security[bcrypt]>=0.1.0",
"httpx<1.0.0,>=0.23.0; extra == \"standar... | [] | [] | [] | [
"Homepage, https://github.com/DasunNethsara-04/starspring",
"Documentation, https://github.com/DasunNethsara-04/starspring/wiki",
"Repository, https://github.com/DasunNethsara-04/starspring",
"Issues, https://github.com/DasunNethsara-04/starspring/issues",
"Changelog, https://github.com/DasunNethsara-04/sta... | pdm/2.26.6 CPython/3.14.2 Windows/11 | 2026-02-18T08:45:13.447522 | starspring-0.2.2.tar.gz | 51,435 | 94/21/6ef0f3e45391fb42a7f0c1abe3c17502ee9d3ac2a2b0e6e82bd5d0face8f/starspring-0.2.2.tar.gz | source | sdist | null | false | e54726c05249233a232d38e557f5320e | 72d5966e05dbe262203d531784407c28dc4d148890ab27f8a5df67c28b899dc7 | 94216ef0f3e45391fb42a7f0c1abe3c17502ee9d3ac2a2b0e6e82bd5d0face8f | null | [] | 269 |
2.4 | pod5 | 0.3.36 | Oxford Nanopore Technologies Pod5 File Format Python API and Tools | # POD5 Python Package
The `pod5` Python package contains the tools and python API wrapping the compiled bindings
for the POD5 file format from `lib_pod5`.
## Installation
The `pod5` package is available on [pypi](https://pypi.org/project/pod5/) and is
installed using `pip`:
``` console
> pip install pod5
```
## Usage
### Reading a POD5 File
To read a `pod5` file, provide the `Reader` class with the input `pod5` file path
and call `Reader.reads()` to iterate over the read records in the file. The example below
prints the read_id of every record in the input `pod5` file.
``` python
import pod5 as p5
with p5.Reader("example.pod5") as reader:
for read_record in reader.reads():
print(read_record.read_id)
```
To iterate over a selection of read_ids supply `Reader.reads()` with a collection
of read_ids which must be `UUID` compatible:
``` python
import pod5 as p5
# Create a collection of read_id UUIDs
read_ids: list[str] = [
"00445e58-3c58-4050-bacf-3411bb716cc3",
"00520473-4d3d-486b-86b5-f031c59f6591",
]
with p5.Reader("example.pod5") as reader:
for read_record in reader.reads(selection=read_ids):
assert str(read_record.read_id) in read_ids
```
### Plotting Signal Data Example
Here is an example of how a user may plot a read’s signal data against time.
``` python
import matplotlib.pyplot as plt
import numpy as np
import pod5 as p5
# Using the example pod5 file provided
example_pod5 = "test_data/multi_fast5_zip.pod5"
selected_read_id = '0000173c-bf67-44e7-9a9c-1ad0bc728e74'
with p5.Reader(example_pod5) as reader:
# Read the selected read from the pod5 file
# next() is required here as Reader.reads() returns a Generator
read = next(reader.reads(selection=[selected_read_id]))
# Get the signal data and sample rate
sample_rate = read.run_info.sample_rate
signal = read.signal
# Compute the time steps over the sampling period
time = np.arange(len(signal)) / sample_rate
# Plot using matplotlib
plt.plot(time, signal)
```
### Writing a POD5 File
The `pod5` package provides the functionality to write POD5 files.
It is strongly recommended that users first look at the available tools when
manipulating existing datasets, as there may already be a tool to meet your needs.
New tools may be added to support our users and if you have a suggestion for a
new tool or feature please submit a request on the
[pod5-file-format GitHub issues page](https://github.com/nanoporetech/pod5-file-format/issues).
Below is an example of how one may add reads to a new POD5 file using the `Writer`
and its `add_read()` method.
```python
from uuid import UUID

import pod5 as p5
# Populate container classes for read metadata
pore = p5.Pore(channel=123, well=3, pore_type="pore_type")
calibration = p5.Calibration(offset=0.1, scale=1.1)
end_reason = p5.EndReason(name=p5.EndReasonEnum.SIGNAL_POSITIVE, forced=False)
run_info = p5.RunInfo(
acquisition_id = ...,
acquisition_start_time = ...,
adc_max = ...,
...
)
signal = ... # some signal data as numpy np.int16 array
read = p5.Read(
read_id=UUID("0000173c-bf67-44e7-9a9c-1ad0bc728e74"),
end_reason=end_reason,
calibration=calibration,
pore=pore,
run_info=run_info,
...
signal=signal,
)
with p5.Writer("example.pod5") as writer:
# Write the read object
writer.add_read(read)
```
## Tools
1. [pod5 view](#pod5-view)
2. [pod5 inspect](#pod5-inspect)
3. [pod5 merge](#pod5-merge)
4. [pod5 filter](#pod5-filter)
5. [pod5 subset](#pod5-subset)
6. [pod5 repack](#pod5-repack)
7. [pod5 recover](#pod5-recover)
8. [pod5 convert fast5](#pod5-convert-fast5)
9. [pod5 convert to_fast5](#pod5-convert-to_fast5)
10. [pod5 update](#pod5-update)
The ``pod5`` package provides the following tools for inspecting and manipulating
POD5 files as well as converting between ``.pod5`` and ``.fast5`` file formats.
To disable the [tqdm](https://github.com/tqdm/tqdm) progress bar, set the environment
variable ``POD5_PBAR=0``.
To enable debugging output, which may also write detailed log files, set the environment
variable ``POD5_DEBUG=1``.
### Pod5 View
The ``pod5 view`` tool is used to produce a table similar to a sequencing summary
from the contents of ``.pod5`` files. The default output is a tab-separated table
written to stdout with all available fields.
This tool is intended to replace ``pod5 inspect reads`` and is over 200x faster.
``` bash
> pod5 view --help
# View the list of fields with a short description in-order (shortcut -L)
> pod5 view --list-fields
# Write the summary to stdout
> pod5 view input.pod5
# Write the summary of multiple pod5s to a file
> pod5 view *.pod5 --output summary.tsv
# Write the summary as a csv
> pod5 view *.pod5 --output summary.csv --separator ','
# Write only the read_ids with no header (shorthand -IH)
> pod5 view input.pod5 --ids --no-header
# Write only the listed fields
# Note: The field order is fixed to the order shown in --list-fields
> pod5 view input.pod5 --include "read_id, channel, num_samples, end_reason"
# Exclude some unwanted fields
> pod5 view input.pod5 --exclude "filename, pore_type"
```
### Pod5 inspect
The ``pod5 inspect`` tool can be used to extract details and summaries of
the contents of ``.pod5`` files. It provides the subcommands ``reads``, ``read``
and ``summary``:
``` bash
> pod5 inspect --help
> pod5 inspect {reads, read, summary} --help
```
#### Pod5 inspect reads
> :warning: This tool is deprecated and has been replaced by ``pod5 view`` which is significantly faster.
Inspect all reads and print a csv table of the details of all reads in the given ``.pod5`` files.
``` bash
> pod5 inspect reads pod5_file.pod5
read_id,channel,well,pore_type,read_number,start_sample,end_reason,median_before,calibration_offset,calibration_scale,sample_count,byte_count,signal_compression_ratio
00445e58-3c58-4050-bacf-3411bb716cc3,908,1,not_set,100776,374223800,signal_positive,205.3,-240.0,0.1,65582,58623,0.447
00520473-4d3d-486b-86b5-f031c59f6591,220,1,not_set,7936,16135986,signal_positive,192.0,-233.0,0.1,167769,146495,0.437
...
```
#### Pod5 inspect read
Inspect the pod5 file, find a specific read and print its details.
``` console
> pod5 inspect read pod5_file.pod5 00445e58-3c58-4050-bacf-3411bb716cc3
File: out-tmp/output.pod5
read_id: 0e5d6827-45f6-462c-9f6b-21540eef4426
read_number: 129227
start_sample: 367096601
median_before: 171.889404296875
channel data:
channel: 2366
well: 1
pore_type: not_set
end reason:
name: signal_positive
forced False
calibration:
offset: -243.0
scale: 0.1462070643901825
samples:
sample_count: 81040
byte_count: 71989
compression ratio: 0.444
run info
acquisition_id: 2ca00715f2e6d8455e5174cd20daa4c38f95fae2
acquisition_start_time: 2021-07-23 13:48:59.780000
adc_max: 0
adc_min: 0
context_tags
barcoding_enabled: 0
basecall_config_filename: dna_r10.3_450bps_hac_prom.cfg
experiment_duration_set: 2880
...
```
### Pod5 merge
``pod5 merge`` is a tool for merging multiple ``.pod5`` files into one monolithic pod5 file.
The contents of the input files are checked for duplicate read_ids to avoid
accidentally merging identical reads. To override this check set the argument
``-D / --duplicate-ok``
``` bash
# View help
> pod5 merge --help
# Merge a pair of pod5 files
> pod5 merge example_1.pod5 example_2.pod5 --output merged.pod5
# Merge a glob of pod5 files
> pod5 merge *.pod5 -o merged.pod5
# Merge a glob of pod5 files ignoring duplicate read ids
> pod5 merge *.pod5 -o merged.pod5 --duplicate-ok
```
### Pod5 filter
``pod5 filter`` is a simpler alternative to ``pod5 subset``: reads from one or more
input ``.pod5`` files are selected using a list of read_ids provided via the ``--ids``
argument and written to a *single* ``--output`` file.
See ``pod5 subset`` for more advanced subsetting.
``` bash
> pod5 filter example.pod5 --output filtered.pod5 --ids read_ids.txt
```
The ``--ids`` selection text file must be a simple list of valid UUID read_ids with
one read_id per line. Only records which match the UUID regex (lower-case) are used.
Lines beginning with a ``#`` (hash / pound symbol) are interpreted as comments.
Empty lines are not valid and may cause errors during parsing.
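The parsing rules above can be sketched as follows (an illustration of the documented behaviour, not the actual `pod5` implementation):

```python
import re

# Sketch of the documented --ids file rules: one lower-case UUID per line,
# '#' lines are comments, and non-matching lines are not used.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def parse_read_ids(text: str) -> list:
    ids = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#"):
            continue  # comment line
        if UUID_RE.match(line):
            ids.append(line)
    return ids

sample = "# my selection\n00445e58-3c58-4050-bacf-3411bb716cc3\n"
print(parse_read_ids(sample))  # ['00445e58-3c58-4050-bacf-3411bb716cc3']
```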
> The ``filter`` and ``subset`` tools will assert that any requested read_ids are
> present in the inputs. If a requested read_id is missing from the inputs
> then the tool will issue the following error:
>
> ``` bash
> POD5 has encountered an error: 'Missing read_ids from inputs but --missing-ok not set'
> ```
>
> To disable this check, set the ``-M / --missing-ok`` argument.
When supplying multiple input files to ``filter`` or ``subset``, the tool is
effectively performing a ``merge`` operation. The ``merge`` tool is better suited
for handling very large numbers of input files.
#### Example filtering pipeline
This is a trivial example of how to select a random sample of 1000 read_ids from a
pod5 file using ``pod5 view`` and ``pod5 filter``.
``` bash
# Get a random selection of read_ids
> pod5 view all.pod5 --ids --no-header --output all_ids.txt
> sort --random-sort all_ids.txt | head --lines 1000 > 1k_ids.txt
# Filter to that selection
> pod5 filter all.pod5 --ids 1k_ids.txt --output 1k.pod5
# Check the output
> pod5 view 1k.pod5 -IH | wc -l
1000
```
### Pod5 subset
``pod5 subset`` is a tool for subsetting reads in ``.pod5`` files into one or more
output ``.pod5`` files. See also ``pod5 filter``.
The ``pod5 subset`` tool requires a *mapping* which defines which read_ids should be
written to which output. This mapping can either be given directly as a ``.csv`` file
or generated at runtime from a ``--table`` file (csv or tsv) together with
instructions on how to interpret it.
``pod5 subset`` aims to be a generic tool to subset from multiple inputs to multiple outputs.
If your use-case is to ``filter`` read_ids from one or more inputs into a single output
then ``pod5 filter`` might be a more appropriate tool as the only input is a list of read_ids.
``` bash
# View help
> pod5 subset --help
# Subset input(s) using a pre-defined mapping
> pod5 subset example_1.pod5 --csv mapping.csv
# Subset input(s) using a dynamic mapping created at runtime
> pod5 subset example_1.pod5 --table table.txt --columns barcode
```
> Care should be taken to ensure that when providing multiple input ``.pod5`` files to ``pod5 subset``
> that there are no read_id UUID clashes. If a duplicate read_id is detected an exception
> will be raised unless the ``--duplicate-ok`` argument is set. If ``--duplicate-ok`` is
> set then both reads will be written to the output, although this is not recommended.
#### Note on positional arguments
> The ``--columns`` argument will greedily consume values and as such, care should be taken
> with the placement of any positional arguments. The following line will result in an error
> as the input pod5 file is consumed by ``--columns`` resulting in no input file being set.
```bash
# Invalid placement of positional argument example.pod5
$ pod5 subset --table table.txt --columns barcode example.pod5
```
#### Creating a Subset Mapping
##### Target Mapping (.csv)
The example below shows a ``.csv`` subset target mapping. Any lines (e.g. the header
line) whose second column does not contain a read_id matching the UUID regex
(lower-case) are ignored.
``` text
target, read_id
output_1.pod5,132b582c-56e8-4d46-9e3d-48a275646d3a
output_1.pod5,12a4d6b1-da6e-4136-8bb3-1470ef27e311
output_2.pod5,0ff4dc01-5fa4-4260-b54e-1d8716c7f225
output_2.pod5,0e359c40-296d-4edc-8f4a-cca135310ab2
output_2.pod5,0e9aa0f8-99ad-40b3-828a-45adbb4fd30c
```
##### Target Mapping from Table
``pod5 subset`` can dynamically generate output targets and collect associated reads
based on a text file containing a table (csv or tsv) parsable by ``polars``.
This table file could be the output from ``pod5 view`` or from a sequencing summary.
The table must contain a header row and a series of columns on which to group unique
collections of values. Internally this process uses the
[polars.DataFrame.group_by](https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by.html)
function, where the ``by`` parameter is the sequence of column names specified with
the ``--columns`` argument.
Given the following example ``--table`` file, observe the resultant outputs given various
arguments:
``` text
read_id mux barcode length
read_a 1 barcode_a 4321
read_b 1 barcode_b 1000
read_c 2 barcode_b 1200
read_d 2 barcode_c 1234
```
``` bash
> pod5 subset example_1.pod5 --output barcode_subset --table table.txt --columns barcode
> ls barcode_subset
barcode-barcode_a.pod5 # Contains: read_a
barcode-barcode_b.pod5 # Contains: read_b, read_c
barcode-barcode_c.pod5 # Contains: read_d
> pod5 subset example_1.pod5 --output mux_subset --table table.txt --columns mux
> ls mux_subset
mux-1.pod5 # Contains: read_a, read_b
mux-2.pod5 # Contains: read_c, read_d
> pod5 subset example_1.pod5 --output barcode_mux_subset --table table.txt --columns barcode mux
> ls barcode_mux_subset
barcode-barcode_a_mux-1.pod5 # Contains: read_a
barcode-barcode_b_mux-1.pod5 # Contains: read_b
barcode-barcode_b_mux-2.pod5 # Contains: read_c
barcode-barcode_c_mux-2.pod5 # Contains: read_d
```
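The grouping performed by ``--columns`` can be sketched in pure Python — the real tool uses ``polars``; this stand-in only shows how table rows map to output files:

```python
from collections import defaultdict

# Rows from the example table above (the "length" column is omitted since it
# is not grouped on). Grouping on ("barcode",) reproduces the barcode_subset
# output mapping shown above.
rows = [
    {"read_id": "read_a", "mux": 1, "barcode": "barcode_a"},
    {"read_id": "read_b", "mux": 1, "barcode": "barcode_b"},
    {"read_id": "read_c", "mux": 2, "barcode": "barcode_b"},
    {"read_id": "read_d", "mux": 2, "barcode": "barcode_c"},
]

def group_reads(rows, columns):
    """Map each unique combination of column values to its read_ids."""
    groups = defaultdict(list)
    for row in rows:
        key = tuple((c, row[c]) for c in columns)
        groups[key].append(row["read_id"])
    return dict(groups)

print(group_reads(rows, ["barcode"]))
print(group_reads(rows, ["barcode", "mux"]))
```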
##### Output Filename Templating
When subsetting using a table, the output filename is generated from a template
string. The automatically generated template joins a ``column_name-{column_name}``
part for each subsetting column with underscores, followed by the ``.pod5`` file extension.
The user can set their own filename template using the ``--template`` argument.
This argument accepts a string in the [Python f-string style](https://docs.python.org/3/tutorial/inputoutput.html#formatted-string-literals)
where the subsetting variables are used for keyword placeholder substitution.
Keywords should be placed within curly braces. For example:
``` bash
# default template used = "barcode-{barcode}.pod5"
> pod5 subset example_1.pod5 --output barcode_subset --table table.txt --columns barcode
# default template used = "barcode-{barcode}_mux-{mux}.pod5"
> pod5 subset example_1.pod5 --output barcode_mux_subset --table table.txt --columns barcode mux
> pod5 subset example_1.pod5 --output barcode_subset --table table.txt --columns barcode --template "{barcode}.subset.pod5"
> ls barcode_subset
barcode_a.subset.pod5 # Contains: read_a
barcode_b.subset.pod5 # Contains: read_b, read_c
barcode_c.subset.pod5 # Contains: read_d
```
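The template rules can be sketched as follows (an illustration of the documented behaviour, not the actual implementation):

```python
def default_template(columns):
    """Build the auto-generated template: one 'column_name-{column_name}'
    part per subsetting column, joined by '_', plus the .pod5 extension."""
    return "_".join(f"{c}-{{{c}}}" for c in columns) + ".pod5"

print(default_template(["barcode"]))         # barcode-{barcode}.pod5
print(default_template(["barcode", "mux"]))  # barcode-{barcode}_mux-{mux}.pod5

# Keyword substitution fills the placeholders with each group's values:
print(default_template(["barcode"]).format(barcode="barcode_a"))  # barcode-barcode_a.pod5
```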
##### Example subsetting from ``pod5 inspect reads``
The ``pod5 inspect reads`` tool will output a csv table summarising the content of the
specified ``.pod5`` file which can be used for subsetting. The example below shows
how to split a ``.pod5`` file by the well field.
``` bash
# Create the csv table from inspect reads
> pod5 inspect reads example.pod5 > table.csv
> pod5 subset example.pod5 --table table.csv --columns well
```
### Pod5 repack
``pod5 repack`` will simply repack ``.pod5`` files into one-for-one output files of the same name.
``` bash
> pod5 repack pod5s/*.pod5 repacked_pods/
```
### Pod5 Recover
``pod5 recover`` will attempt to recover data from corrupted or truncated ``.pod5`` files
by copying all valid table batches and cleanly closing the new files. New files are written
as siblings to the inputs with the `_recovered.pod5` suffix.
``` bash
> pod5 recover --help
> pod5 recover broken.pod5
> ls
broken.pod5 broken_recovered.pod5
```
### Pod5 convert fast5
The ``pod5 convert fast5`` tool takes one or more ``.fast5`` files and converts them
to one or more ``.pod5`` files.
The tool does not support single-read fast5 files; please convert them into multi-read
fast5 files first using the tools available in the ``ont_fast5_api`` project.
The progress bar shown during conversion assumes each input ``.fast5`` file contains
4000 reads; the total is updated at runtime if required.
> Some content previously stored in ``.fast5`` files is **not** compatible with the POD5
> format and will not be converted. This includes all analyses stored in the
> ``.fast5`` file.
>
> Please ensure that any other data is recovered from ``.fast5`` before deletion.
By default ``pod5 convert fast5`` will show exceptions raised during conversion as *warnings*
to the user. This is to gracefully handle potentially corrupt input files or other
runtime errors in long-running conversion tasks. The ``--strict`` argument allows
users to opt-in to strict runtime assertions where any exception raised will promptly
stop the conversion process with an error.
``` bash
# View help
> pod5 convert fast5 --help
# Convert fast5 files into a monolithic output file
> pod5 convert fast5 ./input/*.fast5 --output converted.pod5
# Convert fast5 files into a monolithic output in an existing directory
> pod5 convert fast5 ./input/*.fast5 --output outputs/
> ls outputs/
output.pod5 # default name
# Convert each fast5 to its relative converted output. The output files are written
# into the output directory at paths relative to the path given to the
# --one-to-one argument. Note: This path must be a relative parent to all
# input paths.
> ls input/*.fast5
file_1.fast5 file_2.fast5 ... file_N.fast5
> pod5 convert fast5 ./input/*.fast5 --output output_pod5s/ --one-to-one ./input/
> ls output_pod5s/
file_1.pod5 file_2.pod5 ... file_N.pod5
# Note the different --one-to-one path which is now the current working directory.
# The new sub-directory output_pod5/input is created.
> pod5 convert fast5 ./input/*.fast5 output_pod5s --one-to-one ./
> ls output_pod5s/
input/file_1.pod5 input/file_2.pod5 ... input/file_N.pod5
# Convert all inputs so that they have neighbouring pod5 files in the current directory
> pod5 convert fast5 *.fast5 --output . --one-to-one .
> ls
file_1.fast5 file_1.pod5 file_2.fast5 file_2.pod5 ... file_N.fast5 file_N.pod5
# Convert all inputs so that they have neighbouring pod5 files from a parent directory
> pod5 convert fast5 ./input/*.fast5 --output ./input/ --one-to-one ./input/
> ls input/*
file_1.fast5 file_1.pod5 file_2.fast5 file_2.pod5 ... file_N.fast5 file_N.pod5
```
### Pod5 convert to_fast5
The ``pod5 convert to_fast5`` tool takes one or more ``.pod5`` files and converts them
to multiple ``.fast5`` files. The default behaviour is to write 4000 reads per output file
but this can be controlled with the ``--file-read-count`` argument.
``` bash
# View help
> pod5 convert to_fast5 --help
# Convert pod5 files to fast5 files with default 4000 reads per file
> pod5 convert to_fast5 example.pod5 --output pod5_to_fast5/
> ls pod5_to_fast5/
output_1.fast5 output_2.fast5 ... output_N.fast5
```
### Pod5 Update
The ``pod5 update`` tool is used to update old pod5 files to use the latest schema.
The latest schema is currently version 3.
Files are written into the ``--output`` directory with the same name.
``` bash
> pod5 update --help
# Update a named file
> pod5 update my.pod5 --output updated/
> ls updated
updated/my.pod5
# Update an entire directory
> pod5 update old/ -o updated/
```
| text/markdown | null | Oxford Nanopore Technologies plc <support@nanoporetech.com> | null | null | null | nanopore | [
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | ~=3.9 | [] | [] | [] | [
"deprecated~=1.2.18",
"lib_pod5==0.3.36",
"iso8601",
"more_itertools",
"numpy>=1.21.0",
"typing-extensions; python_version < \"3.10\"",
"pyarrow~=22.0.0; python_version >= \"3.10\"",
"pyarrow~=18.0.0; python_version < \"3.10\"",
"pytz",
"packaging",
"polars~=1.30",
"h5py~=3.11",
"vbz_h5py_pl... | [] | [] | [] | [
"Homepage, https://github.com/nanoporetech/pod5-file-format",
"Issues, https://github.com/nanoporetech/pod5-file-format/issues",
"Documentation, https://pod5-file-format.readthedocs.io/en/latest/"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T08:44:25.854301 | pod5-0.3.36.tar.gz | 65,482 | 01/7e/d3b3ecec8cec8afd68582d6557dd0cfd7b6bef6dacbe7c1518e76b5cc8fd/pod5-0.3.36.tar.gz | source | sdist | null | false | 4af6bab9a6cd83709f6b47fe9d1cd0bb | dd478c15631e892a19cfc0ad6736e32aa5559ea67fadba222a28a558935fefb7 | 017ed3b3ecec8cec8afd68582d6557dd0cfd7b6bef6dacbe7c1518e76b5cc8fd | MPL-2.0 | [] | 1,905 |
2.4 | lib-pod5 | 0.3.36 | Python bindings for the POD5 file format | LIB_POD5 Package
================
POD5 is a file format for storing nanopore DNA data in an easily accessible way.
What does this project contain
------------------------------
This project contains the low-level core library (extension modules) for reading and
writing POD5 files. This project forms the basis of the pure-python `pod5` package which
is probably the project you want.
| text/markdown | null | Oxford Nanopore Technologies plc <support@nanoporetech.com> | null | null | null | nanopore | [
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | ~=3.8 | [] | [] | [] | [
"numpy>=1.21.0",
"build; extra == \"dev\"",
"pytest~=7.3; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/nanoporetech/pod5-file-format",
"Issues, https://github.com/nanoporetech/pod5-file-format/issues",
"Documentation, https://pod5-file-format.readthedocs.io/en/latest/"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T08:44:21.830872 | lib_pod5-0.3.36-cp314-cp314-win_amd64.whl | 3,769,118 | c7/de/83da35e916cae4ff5446b8071473138b09cab754be2e77bf5c0882b4cc56/lib_pod5-0.3.36-cp314-cp314-win_amd64.whl | cp314 | bdist_wheel | null | false | 163db3299c7d077074dbe987ea193694 | 7c3aedba0a2e24af644b24e81596e710100ee2fd55ae7175da083dcf2887e2f9 | c7de83da35e916cae4ff5446b8071473138b09cab754be2e77bf5c0882b4cc56 | MPL-2.0 | [
"licenses/LICENSE.md",
"licenses/arrow/licenses/LICENSE.txt",
"licenses/arrow/licenses/NOTICE.txt",
"licenses/boost/licenses/LICENSE_1_0.txt",
"licenses/bzip2/licenses/LICENSE",
"licenses/flatbuffers/licenses/LICENSE.txt",
"licenses/gsl-lite.txt",
"licenses/pybind11.txt",
"licenses/xsimd/licenses/LI... | 3,032 |
2.4 | glum | 3.1.3 | High performance Python GLMs with all the features! | # glum
[](https://github.com/Quantco/glum/actions)
[](https://github.com/Quantco/glum/actions/workflows/daily.yml)
[](https://glum.readthedocs.io/)
[](https://anaconda.org/conda-forge/glum)
[](https://pypi.org/project/glum)
[](https://pypi.org/project/glum)
[](https://doi.org/10.5281/zenodo.14991108)
[Documentation](https://glum.readthedocs.io/en/latest/)
Generalized linear models (GLM) are a core statistical tool that include many common methods like least-squares regression, Poisson regression and logistic regression as special cases. At QuantCo, we have used GLMs in e-commerce pricing, insurance claims prediction and more. We have developed `glum`, a fast Python-first GLM library. The development was based on [a fork of scikit-learn](https://github.com/scikit-learn/scikit-learn/pull/9405), so it has a scikit-learn-like API. We are thankful for the starting point provided by Christian Lorentzen in that PR!
The goal of `glum` is to be at least as feature-complete as existing GLM libraries like `glmnet` or `h2o`. It supports
* Built-in cross validation for optimal regularization, efficiently exploiting a “regularization path”
* L1 regularization, which produces sparse and easily interpretable solutions
* L2 regularization, including variable matrix-valued (Tikhonov) penalties, which are useful in modeling correlated effects
* Elastic net regularization
* Normal, Poisson, logistic, gamma, and Tweedie distributions, plus varied and customizable link functions
* Box constraints, linear inequality constraints, sample weights, offsets
This repo also includes tools for benchmarking GLM implementations in the `glum_benchmarks` module. For details on the benchmarking, [see here](src/glum_benchmarks/README.md). Although the performance of `glum` relative to `glmnet` and `h2o` depends on the specific problem, we find that when N >> K (there are more observations than predictors), it is consistently much faster for a wide range of problems.


For more information on `glum`, including tutorials and API reference, please see [the documentation](https://glum.readthedocs.io/en/latest/).
Why did we choose the name `glum`? We wanted a name that had the letters GLM and wasn't easily confused with any existing implementation. And we thought glum sounded like a funny name (and not glum at all!). If you need a more professional sounding name, feel free to pronounce it as G-L-um. Or maybe it stands for "Generalized linear... ummm... modeling?"
# A classic example predicting housing prices
```python
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> from glum import GeneralizedLinearRegressor
>>>
>>> # This dataset contains house sale prices for King County, which includes
>>> # Seattle. It includes homes sold between May 2014 and May 2015.
>>> # The full version of this dataset can be found at:
>>> # https://www.openml.org/search?type=data&status=active&id=42092
>>> house_data = pd.read_parquet("data/housing.parquet")
>>>
>>> # Use only select features
>>> X = house_data[
...     [
...         "bedrooms",
...         "bathrooms",
...         "sqft_living",
...         "floors",
...         "waterfront",
...         "view",
...         "condition",
...         "grade",
...         "yr_built",
...         "yr_renovated",
...     ]
... ].copy()
>>>
>>>
>>> # Model whether a house had an above or below median price via a Binomial
>>> # distribution. We'll be doing L1-regularized logistic regression.
>>> price = house_data["price"]
>>> y = (price < price.median()).values.astype(int)
>>> model = GeneralizedLinearRegressor(
...     family='binomial',
...     l1_ratio=1.0,
...     alpha=0.001
... )
>>>
>>> _ = model.fit(X=X, y=y)
>>>
>>> # .report_diagnostics shows details about the steps taken by the iterative solver.
>>> diags = model.get_formatted_diagnostics(full_report=True)
>>> diags[['objective_fct']]
       objective_fct
n_iter
0           0.693091
1           0.489500
2           0.449585
3           0.443681
4           0.443498
5           0.443497
>>>
>>> # Models can also be built with formulas from formulaic.
>>> model_formula = GeneralizedLinearRegressor(
...     family='binomial',
...     l1_ratio=1.0,
...     alpha=0.001,
...     formula="bedrooms + np.log(bathrooms + 1) + bs(sqft_living, 3) + C(waterfront)"
... )
>>> _ = model_formula.fit(X=house_data, y=y)
```
# Installation
Please install the package through conda-forge:
```bash
conda install glum -c conda-forge
```
# Performance
For optimal performance on an x86_64 architecture, we recommend using the MKL library
(`conda install mkl`). By default, conda usually installs the OpenBLAS version, which
is slower but supported on all major architectures and operating systems.
| text/markdown | QuantCo, Inc. | noreply@quantco.com | null | null | BSD | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/Quantco/glum | null | >=3.9 | [] | [] | [] | [
"joblib",
"numexpr",
"numpy",
"pandas",
"scikit-learn>=0.23",
"scipy",
"formulaic>=0.6",
"tabmat>=4.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:43:53.045834 | glum-3.1.3.tar.gz | 13,376,557 | cd/b9/36d59416745296dd82f0a8b5de337a08190961ba4b22623223f7c291934e/glum-3.1.3.tar.gz | source | sdist | null | false | b40b0dd2ce641529cede3969d422fc40 | 76613250d4c3c66b408e0cc972081ee593b29d9c406ba8097f61969160ee44d4 | cdb936d59416745296dd82f0a8b5de337a08190961ba4b22623223f7c291934e | null | [
"LICENSE",
"NOTICE"
] | 5,044 |
2.4 | sphinx-nefertiti | 0.9.5 | The Nefertiti for Sphinx theme. | # Nefertiti for Sphinx [](https://github.com/danirus/sphinx-nefertiti/actions/workflows/tests.yml)
Nefertiti is a theme for [Sphinx](https://www.sphinx-doc.org/en/master/) that features:
* Responsive design, based on [Bootstrap 5.3](https://getbootstrap.com/docs/5.3).
* Text input field to filter the **index**.
* Font configuration compliant with [EU's GDPR](https://gdpr.eu/).
* Different fonts can be used for different elements.
* Light and dark color schemes, for normal text and code highlighted with Pygments styles.
* Images that switch between color schemes. Released as [sphinx-colorschemed-images](https://pypi.org/project/sphinx-colorschemed-images/).
* Diverse color sets are available: blue, indigo, purple, pink, red, orange, yellow, ...
* Header and footer links. Header links can be grouped in dropdown elements.
* Optional highlighting of the project repository in the header.
* Optional project version selector in the header.
* Back-to-top button.
See it in action in [sphinx-themes.org](https://sphinx-themes.org/#theme-sphinx-nefertiti).
## Tested
* [Tested against Sphinx 7.3, 7.4, 8.0 and 8.1](https://github.com/danirus/sphinx-nefertiti/actions/workflows/tests.yml), see matrix python-sphinx.
* [Tested with NodeJS v20](https://github.com/danirus/sphinx-nefertiti/actions/workflows/tests.yml), see javascript-tests.
## Index filtering
<p align="center"><img align="center" width="315" height="417" src="https://github.com/danirus/sphinx-nefertiti/raw/main/docs/source/static/img/index-filtering-1.png"></p>
By default the **index** shows its content folded, and the open/closed state of items is remembered while browsing the documentation. To find items quickly, use the input filter: it also surfaces entries hidden inside folded items. When the user types, say, `fo` in the input field, the index is filtered to all entries matching those two characters and displays three matches: `Fonts`, `Footer links` and `Footnotes`, even though all three were folded within their sections:
<p align="center"><img align="center" width="315" height="333" src="https://github.com/danirus/sphinx-nefertiti/raw/main/docs/source/static/img/index-filtering-2.png"></p>
## The TOC on the right side
The Table of Contents, displayed on the right side, extends to the right border of the browser so that long items are shown in full, improving readability.
<p align="center"><img width="412" height="306" src="https://github.com/danirus/sphinx-nefertiti/raw/main/docs/source/static/img/toc.png"></p>
## Other features
Nefertiti for Sphinx comes with the following color sets. Change between them using the attribute `display` of the `html_theme_options` setting.
<p align="center"><img width="768" height="462" src="https://github.com/danirus/sphinx-nefertiti/raw/main/docs/source/static/img/colorsets.png"></p>
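Selecting one of these color sets is done in `conf.py`. A minimal sketch, using the `display` attribute named above (the `"indigo"` value is just one of the available sets, used here for illustration):

```python
# conf.py -- minimal sketch; "indigo" is one of the color sets shown above
html_theme = "sphinx_nefertiti"
html_theme_options = {
    "display": "indigo",
}
```

Rebuild the docs after changing the option to see the new color set applied.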
In order to be compliant with [EU's GDPR](https://gdpr.eu/), Nefertiti for Sphinx comes bundled with a group of fonts licensed for free distribution. Adding more fonts is explained in the [User's Guide](https://sphinx-nefertiti.readthedocs.io/en/latest/users-guide/customization/fonts.html#adding-fonts):
* Assistant
* Exo
* Montserrat
* Mulish
* Nunito
* Open Sans
* Red Hat Display
* Sofia Sans
* Ubuntu Sans
* Varta
* Work Sans
* Fira Code (monospace)
* Red Hat Mono (monospace)
* Ubuntu Sans Mono (monospace)
Combine up to 5 different fonts:
```python
html_theme_options = {
    "sans_serif_font": "Nunito",
    "documentation_font": "Open Sans",
    "monospace_font": "Ubuntu Sans Mono",
    "project_name_font": "Nunito",
    "doc_headers_font": "Georgia",
    "documentation_font_size": "1.2rem",
    "monospace_font_size": "1.1rem",
}
```
## To use it
Install the package from PyPI:
```shell
pip install sphinx-nefertiti
```
Edit the `conf.py` file of your Sphinx project and change the `html_theme` setting:
```python
html_theme = "sphinx_nefertiti"
```
Now rebuild the docs and serve them to get a first glimpse of your site made with Nefertiti for Sphinx. The theme has many customizable options worth exploring. You might want to continue with the [customization](https://sphinx-nefertiti.readthedocs.io/en/latest/users-guide/customization/index.html) section of the docs.
## To develop it
Clone the Git repository, create a Python virtual environment, and install the NodeJS packages:
```shell
git clone git@github.com:danirus/sphinx-nefertiti.git
cd sphinx-nefertiti
python3.12 -m venv venv
source venv/bin/activate
pip install -e .
nvm use --lts
npm install
```
Before contributing, please, install the pre-commit hook scripts:
```shell
pre-commit install
```
The package.json contains a comprehensive set of scripts. Beyond them, a Makefile saves time when building the CSS and JavaScript bundles that are delivered within the Python package of the theme.
For further reading, see the following sections:
* For [Style development](https://sphinx-nefertiti.readthedocs.io/latest/development.html#style-development)
* For [JavaScript development](https://sphinx-nefertiti.readthedocs.io/latest/development.html#javascript-development)
* For [Sphinx theme development](https://sphinx-nefertiti.readthedocs.io/latest/development.html#python-development)
## License
Project distributed under the MIT License.
| text/markdown | null | Daniela Rus Morales <danirus@eml.cc> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Pr... | [] | null | null | >=3.9 | [] | [] | [] | [
"sphinx<10,>=6",
"coverage<7.7,>=7.6.4; extra == \"dev\"",
"coverage-badge<1.2,>=1.1.2; extra == \"dev\"",
"lxml<5.4,>=5.3.0; extra == \"dev\"",
"pre-commit>=4.0.1; extra == \"dev\"",
"pytest<8.4,>=8.3.3; extra == \"dev\"",
"tox>=4.23.2; extra == \"dev\"",
"ruff>=0.7.0; extra == \"dev\"",
"sphinx<10... | [] | [] | [] | [
"Homepage, https://github.com/danirus/sphinx-nefertiti",
"Documentation, https://sphinx-nefertiti.readthedocs.io"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T08:43:42.946407 | sphinx_nefertiti-0.9.5.tar.gz | 9,485,032 | 38/85/58f442c6e33281fecb8087c78e3168dcb38c83ad1723f4664e5059264774/sphinx_nefertiti-0.9.5.tar.gz | source | sdist | null | false | 1b705dd4243383fe24c022d45cb43544 | 9a12614d7b2b7aec6c7a2ff8cb97f8b215bf41908415348f582c7cb64b432928 | 388558f442c6e33281fecb8087c78e3168dcb38c83ad1723f4664e5059264774 | null | [
"LICENSE.txt"
] | 349 |
2.2 | eprllib | 1.6.91 | Building control through DRL. | <img src="docs/Images/eprllib_logo.png" alt="logo" width="200"/>
[](https://hermmanhender.github.io/eprllib/)
[](https://github.com/hermmanhender/eprllib/actions/workflows/pages/pages-build-deployment)
[](https://github.com/hermmanhender/eprllib/actions/workflows/test-python-package.yml)
# eprllib: use EnergyPlus as an environment for RLlib
This repository provides a set of methods to establish the computational loop of EnergyPlus within a Markov Decision Process (MDP), treating it as a multi-agent environment compatible with RLlib. The main goal is to offer a simple configuration of EnergyPlus as a standard environment for experimentation with Deep Reinforcement Learning.
## Installation
To install EnergyPlusRL, simply use pip:
```
pip install eprllib
```
## Key Features
* Integration of EnergyPlus and RLlib: This package facilitates setting up a Reinforcement Learning environment using EnergyPlus as the base, allowing for experimentation with energy control policies in buildings.
* Simplified Configuration: To use this environment, you simply need to provide a configuration in the form of a dictionary that includes state variables, metrics, actuators (which will also serve as agents in the environment), and other optional features.
* Flexibility and Compatibility: EnergyPlusRL easily integrates with RLlib, a popular framework for Reinforcement Learning, enabling smooth setup and training of control policies for actionable elements in buildings.
## Usage
1. Import eprllib.
2. Configure `EnvConfig` to provide an EnergyPlus-model-based configuration, specifying the required parameters (see `eprllib.Env.EnvConfig`).
3. Configure RLlib algorithm to train the policy.
4. Execute the training using RLlib or Tune.
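The configuration in step 2 is a dictionary covering state variables, metrics and actuators (which also serve as the agents). The sketch below only illustrates that shape; every key name here is a hypothetical assumption, not the actual `eprllib.Env.EnvConfig` schema, so consult `eprllib.Env.EnvConfig` for the real parameter names:

```python
# Hypothetical sketch of the dictionary-style configuration described above.
# All key names are illustrative assumptions, not the real EnvConfig schema.
env_config = {
    "epjson_path": "building.epJSON",   # EnergyPlus building model (assumed key)
    "epw_path": "weather.epw",          # weather file (assumed key)
    "variables": [                      # state variables observed by the agent
        ("Zone Mean Air Temperature", "Thermal Zone 1"),
    ],
    "meters": ["Electricity:HVAC"],     # metrics used for the reward
    "actuators": {                      # actuators double as the agents
        "cooling_setpoint": ("Schedule:Compact", "Schedule Value", "CLG-SETP-SCH"),
    },
}

# The agent names that RLlib would see are the actuator keys
print(sorted(env_config["actuators"]))
```

Such a dictionary would then be passed to the environment when configuring the RLlib algorithm in step 3.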
## Contribution
Contributions are welcome! If you wish to improve this project or add new features, feel free to submit a pull request.
Checkout our *Code of Conduct* and *How to Contribute* documentation.
## License
MIT License
Copyright (c) 2024 Germán Rodolfo Henderson
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
-------------------------------------------------------------------------------------------------
Copyright 2023 Ray Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-------------------------------------------------------------------------------------------------
EnergyPlus, Copyright (c) 1996-2024, The Board of Trustees of the University of Illinois, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy), Oak Ridge National Laboratory, managed by UT-Battelle, Alliance for Sustainable Energy, LLC, and other contributors. All rights reserved.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-------------------------------------------------------------------------------------------------
| text/markdown | Germán Rodolfo Henderson | null | null | null | MIT License
Copyright (c) 2024 Germán Rodolfo Henderson
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
-------------------------------------------------------------------------------------
Copyright 2023 Ray Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-------------------------------------------------------------------------------------
EnergyPlus, Copyright (c) 1996-2024, The Board of Trustees of the University of Illinois, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy), Oak Ridge National Laboratory, managed by UT-Battelle, Alliance for Sustainable Energy, LLC, and other contributors. All rights reserved.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"gymnasium>=0.28.1",
"matplotlib>=3.8.0",
"ray[all]>=2.20.0",
"shap>=0.46.0",
"torch>=2.5.1"
] | [] | [] | [] | [
"Documentation, https://hermmanhender.github.io/eprllib/",
"GitHub Repository, https://github.com/hermmanhender/eprllib",
"Bug Tracker, https://github.com/hermmanhender/eprllib/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T08:43:17.925551 | eprllib-1.6.91.tar.gz | 6,719,746 | 3c/98/23306069756b16af19ff263b26b8946df6fd8737a0be8c9adf61ef7bcb25/eprllib-1.6.91.tar.gz | source | sdist | null | false | 4a7727501dc534fda7e41dd022bb9ab2 | 906f9eb874dfd6d0dadf31c112a303f81f94dbfac42947e0c6b98b656c60738e | 3c9823306069756b16af19ff263b26b8946df6fd8737a0be8c9adf61ef7bcb25 | null | [] | 272 |
2.4 | borneoiot | 0.4.13 | An open-source Python client library for devices under the Borneo-IoT Project | # borneo.py
An open-source Python client library for devices under the Borneo-IoT Project.
## Features
- Basic device control (power on/off, reboot, factory reset, etc.)
- Device information queries
- Timezone settings
- Firmware upgrade checking
- CoAP OTA firmware updates
## Installation
```bash
pip install -r requirements.txt
```
## Usage Examples
### Basic Device Control
```python
import asyncio
from borneo.lyfi import LyfiCoapClient

async def main():
    async with LyfiCoapClient("coap://192.168.1.100") as device:
        # Get device information
        info = await device.get_info()
        print(f"Device info: {info}")

        # Control device power
        await device.set_on_off(True)
        status = await device.get_on_off()
        print(f"Device status: {status}")

if __name__ == "__main__":
    asyncio.run(main())
```
### CoAP OTA Firmware Update
```python
import asyncio
from borneo.lyfi import LyfiCoapClient

async def update_firmware():
    device_url = "coap://192.168.1.100"
    firmware_path = "firmware.bin"

    def progress_callback(current, total):
        percent = int((current / total) * 100)
        print(f"Upload progress: {percent}%")

    def status_callback(message):
        print(f"Status: {message}")

    async with LyfiCoapClient(device_url) as device:
        # Check OTA status
        status = await device.check_ota_status()
        if status['success']:
            print(f"Current partition: {status['current_partition']}")

        # Execute OTA update
        result = await device.perform_ota_update(
            firmware_path,
            progress_callback=progress_callback,
            status_callback=status_callback
        )
        if result['success']:
            print("Firmware update successful!")
        else:
            print(f"Update failed: {result['error']}")

if __name__ == "__main__":
    asyncio.run(update_firmware())
```
## Command Line Tools
### OTA Update Tool
```bash
# Execute OTA firmware update
python examples/ota_update.py coap://192.168.1.100 firmware.bin
# Check device OTA status only
python examples/ota_update.py coap://192.168.1.100
```
## API Documentation
### CoAP OTA Methods
#### `check_ota_status()`
Check device OTA status.
**Returns:**
```python
{
    'success': bool,
    'current_partition': str,  # Current running partition
    'update_status': str,      # Update status
    'bytes_received': int      # Bytes received
}
```
#### `upload_firmware(firmware_path, progress_callback=None)`
Upload firmware file to device.
**Parameters:**
- `firmware_path`: Path to firmware file
- `progress_callback`: Optional progress callback function `callback(current, total)`
**Returns:**
```python
{
    'success': bool,
    'message': str,
    'next_boot': str,  # Next boot partition
    'checksum': str,   # File checksum
    'size': int        # File size
}
```
#### `perform_ota_update(firmware_path, progress_callback=None, status_callback=None)`
Execute complete OTA update process.
**Parameters:**
- `firmware_path`: Path to firmware file
- `progress_callback`: Optional progress callback function `callback(current, total)`
- `status_callback`: Optional status callback function `callback(message)`
**Returns:**
```python
{
    'success': bool,
    'message': str,
    'details': dict  # Detailed result information
}
```
## Dependencies
- `aiocoap>=0.4.12` - CoAP protocol support
- `cbor2>=5.6.5` - CBOR data format support
- `aiofiles` - Async file operations
## Packaging & development
This project uses a modern PEP 621 `pyproject.toml` (setuptools backend). Below are common development and packaging commands.
- Build distributions:
- `python -m build` (uses build backend from `pyproject.toml`)
- `uv build` (if you use Astral's `uv` tool)
- Create and activate a local virtual environment (Windows):
- `python -m venv .venv`
- `.venv/Scripts/activate`
- Install dependencies for development:
- `pip install -e .`
- or `uv pip sync` / `uv venv` when using `uv`
- Run an example:
- `python examples/hello_lyfi.py`
- Publish to PyPI:
- `uv publish`
- or `python -m twine upload dist/*`
Notes:
- **License:** `GPL-3.0-or-later` — please add or verify a `LICENSE` file in the repository.
- The project was migrated from `setup.py` to `pyproject.toml`; builds produce both `sdist` and `wheel` artifacts.
| text/markdown | null | "Yunnan BinaryStars Technologies Co., Ltd." <oldrev@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"aiocoap>=0.4.12",
"cbor2>=5.8.0",
"aiofiles"
] | [] | [] | [] | [
"Homepage, https://www.borneoiot.com"
] | uv/0.9.4 | 2026-02-18T08:42:17.797553 | borneoiot-0.4.13.tar.gz | 22,619 | 45/28/f8464482ef3d60a52130fb1ff0e7303551b57a6cbe322a65975c0eb7832e/borneoiot-0.4.13.tar.gz | source | sdist | null | false | 3176ea4d1a06ccd2adad1857c245aa73 | 111f133fa1c451be44fcb5203028263c4a5553dd2dfa9ad2a6cc12e97a64c908 | 4528f8464482ef3d60a52130fb1ff0e7303551b57a6cbe322a65975c0eb7832e | GPL-3.0-or-later | [
"LICENSE"
] | 270 |
2.1 | shotstars | 4.12 | A tool to track waning stars and detect fake stars on Github | # 💫 𝕊𝕙𝕠𝕥𝕤𝕥𝕒𝕣𝕤
Software slogan: *„toda la ira del vudú“* ("all the wrath of voodoo").
> [!IMPORTANT]
> 𝕊𝕙𝕠𝕥𝕤𝕥𝕒𝕣𝕤 𝕔𝕒𝕟 𝕕𝕠 𝕥𝕙𝕚𝕟𝕘𝕤 𝕥𝕙𝕒𝕥 𝔾𝕚𝕥𝕙𝕦𝕓 𝕕𝕠𝕖𝕤𝕟'𝕥 𝕕𝕠 𝕓𝕪 𝕕𝕖𝕗𝕒𝕦𝕝𝕥.
>
> 𝕊𝕦𝕡𝕡𝕠𝕣𝕥𝕖𝕕 𝕆𝕊: 𝔾ℕ𝕌/𝕃𝕚𝕟𝕦𝕩; 𝕎𝕚𝕟𝕕𝕠𝕨𝕤 𝟟+; 𝔸𝕟𝕕𝕣𝕠𝕚𝕕/𝕋𝕖𝕣𝕞𝕦𝕩; 𝕞𝕒𝕔𝕆𝕊 (𝕚𝕟𝕥𝕖𝕣𝕞𝕚𝕥𝕥𝕖𝕟𝕥𝕝𝕪).
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/CLI.png" />
Shotstars allows you to monitor any public repository from a distance.
For example, can a web user tell how many stars an interesting Github repository has gained or lost in a month? *(IT hosting doesn't provide information on star decrement, even to the owner of their own projects)*. Shotstars takes care of calculating and visualizing not only this, but much, much more. In addition to statistics, the tool can identify repositories with **𝔽𝕒𝕜𝕖 𝕊𝕥𝕒𝕣𝕤**.
**Claimed functions:**
- [X] Shotstars will help find and expose naked kings and their retinue *(fact: stars in some repositories are inflated).*
- [X] Shotstars calculates parameters: aggressive marketing, projected growth, fake stars, peak of popularity and its date.
- [X] Shotstars will calculate progress or regression over the last month *(the median, a trend as a percentage change, and the mean, expressed as a multiple).*
- [X] Shotstars will name the months in which the most and the fewest stars were received *(mode / anti-mode)*, display the entire star history broken down by quartiles, and make a similar calculation by year.
- [X] Shotstars will output the longest period of time without adding stars.
- [X] Shotstars scans repositories for stars added and removed with statistics for a selected time period.
- [X] Shotstars reports the real creation date of the repository *(fact: developers can declare/fake/change the dates of their projects and commits, but they will not fool Shotstars: the utility displays the real numbers)*.
- [X] Shotstars will show ~ the size of any public repository.
- [X] Shotstars will also provide a short description of the repository.
- [X] Shotstars will report the repository's primary programming language, used as the background in the HTML report.
- [X] Shotstars offers a scan history with a selection of previously registered projects for quick checking.
- [X] Shotstars generates CLI/HTML reports *(stats, time periods, duplicate user activity, urls and json)*.
- [X] Shotstars creates graphs and histograms [with night mode support](https://github.com/snooppr/shotstars/issues/11) *(all star history by date, by month, by year, by hour, by days of the week, cumulative set of stars)*.
- [X] Shotstars can simulate results, a documented hack: a function designed to verify the utility's operation on dead/stable repositories without star movement.
- [X] Shotstars finds users that overlap across Github projects, including those with hidden/private profiles.
- [X] Shotstars calculates to the minute and displays the time when the github rescan restriction is lifted *(if token is not used)*.
- [X] Shotstars is created for people and works out of the box, OS support: Windows7+, GNU/Linux, Android *(the user [does not need](https://github.com/snooppr/shotstars/releases): technical skills; registration/authorization on Github and even the presence of Python)*.
- [X] Shotstars processes tasks with jet speed and for free *(cross-platform open source software, donations are welcome)*.
---
## ⌨️ Native Installation

```
$ pip install shotstars
$ shotstars_cli
```
**Ready-made "Shotstars" builds are provided for OS GNU/Linux & Windows & Termux (Python is not required)**
[Download⬇️Shotstars](https://github.com/snooppr/shotstars/releases "download a ready-made assembly for Windows; GNU/Linux or Termux")
---
## 💾 Scan history
In Shotstars the scan history is available, now you no longer need to enter or copy/paste the URL each time,
specify the keyword `his/history` instead of the repository url and select the previously scanned repository by number.
---
## 🌀 With Shotstars, users can also detect fake stars
<img src="https://raw.githubusercontent.com/snooppr/shotstars/refs/heads/main/images/anomalies_among_stars.png" />
A presumed example of fake stars *(this repository was previously caught pirating)*. The spike graph shows that the repository gained +5K fake stars in two weeks *(a couple of years later, it stocked up on fake stars again).*
Shotstars also offers a line chart: a cumulative set of stars.
<img src="https://raw.githubusercontent.com/snooppr/shotstars/refs/heads/main/images/anomalies_among_stars_cum.png" />
Comparison of two repositories, cumulative set of stars. The upper screenshot is the usual movement of stars, the lower screenshot is the promotion of fake stars.
<img src="https://raw.githubusercontent.com/snooppr/shotstars/refs/heads/main/images/anomalies_among_stars_json.png" />
For any repository, Shotstars will provide all users who have added stars, broken down by date, in json format, which means it's even easier to analyze anomalous peaks on the chart.
Research on the promotion of fake stars **/** Исследование про накрутку фейковых звезд
[RU](https://habr.com/ru/articles/723648/) / [RU_2](https://www.opennet.ru/opennews/art.shtml?num=62515) **|**
[EN](https://dagster.io/blog/fake-stars) / [EN_2](https://arxiv.org/html/2412.13459v1)
---
## ⚙️ Shotstars supports simulation of results
Note that the HTML report is generated when the repository is rescanned. If the user needs to force a specific HTML report, simply enable star simulation. 👇
A documented software hack, or side function, designed to test the script on dead/stable repositories without star movement. To simulate the process:
1) scan the new repository once, adding it to the database;
2) randomly delete and add any lines to the file
`/home/{user}/.ShotStars/results/{repo}/new.txt` (GNU/Linux and Termux) or
`C:\Users\{User}\AppData\Local\ShotStars\results\{repo}\new.txt` (OS Windows);
3) run a second scan of the same repository.
---
## ⛔️ Github restrictions
There are scanning restrictions from Github 【**6K stars/hour** from one IP address】.
In Shotstars with a Github token [limits are gone](https://github.com/snooppr/shotstars/issues/3) and you can scan repositories up to 【**500K stars/hour**】.
Steps to get a token *(**free**)*:
1) register for an account on Github (if you don’t already have one);
2) open your profile -> settings -> developer settings -> personal access tokens -> generate new token;
3) insert the resulting token (string) into the field instead of 'None' in
GNU/Linux & Android/Termux::
`/home/{user}/.ShotStars/results/config.ini`
OS Windows::
`C:\Users\{User}\AppData\Local\ShotStars\results\config.ini`.
The Github token belongs to the user, is stored locally and is not transferred or downloaded anywhere.
You can parse both your own and third-party repositories (by default, registration/authorization/token are not required).
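For illustration only, the edited line in `config.ini` might look like the sketch below; the section and key names here are hypothetical, since the text only documents a field whose default value is 'None':

```ini
; hypothetical sketch of config.ini -- section/key names are assumptions,
; simply replace the existing 'None' value with your token string
[github]
token = ghp_xxxxxxxxxxxxxxxxxxxx
```

After saving the file, rerun a scan; with a valid token the 6K stars/hour limit no longer applies.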
---
## 🇷🇺 TL;DR
Shotstars позволяет следить со стороны за любым публичным репозиторием.
Например, может ли пользователь сети сказать: сколько прибавилось или убавилось звезд у какого-нибудь интересного github-репозитория за месяц? *(IT-хостинг не предоставляет информацию по убыванию звезд, даже хозяину своих собственных проектов)*. Shotstars позаботится вычислит и визуализирует не только это, но и многое, многое другое. Кроме статистики, инструмент позволяет вычислять репозитории с **накрученными звездами**.
**Заявленные функции:**
- [X] Shotstars helps find and expose emperors with no clothes and their retinue *(fact: stars in some repositories are faked)*.
- [X] Shotstars computes metrics: aggressive marketing, forecast growth, fake stars, popularity peak and its date.
- [X] Shotstars calculates progress or regress over the last month *(the median — the trend as a percentage change, and the mean — the actual change expressed in multiples)*.
- [X] Shotstars identifies the months with the most and the fewest stars received *(mode / anti-mode)*, and colors the entire star history by quartiles; the same calculation is done by year.
- [X] Shotstars reports the longest stretch of time without any new stars *(the black streak)*.
- [X] Shotstars checks repositories for gained and lost stars, with statistics for a selected time period.
- [X] Shotstars reports the real creation date of a repository *(fact: developers can claim/forge/change the creation dates of their projects and commits, but they cannot fool Shotstars; the utility shows the real numbers)*.
- [X] Shotstars shows the approximate size of any public repository.
- [X] Shotstars also provides a short description of the repository.
- [X] Shotstars reports the repository's main programming language as a background in the HTML report.
- [X] Shotstars offers a scan history for quick re-checks, with a picker for previously scanned projects.
- [X] Shotstars generates CLI/HTML reports *(statistics, time periods, overlapping user activity, url's and json)*.
- [X] Shotstars builds charts and histograms with [night mode](https://github.com/snooppr/shotstars/issues/11) support *(the full star history by date/time: by month, by year, by hour, by day of week, cumulative star growth)*.
- [X] Shotstars can simulate results, a documented hack: a feature meant to verify the utility works *(for reassurance)* on dead/stable repositories with no star movement.
- [X] Shotstars finds users that overlap across Github projects, including those with hidden/private profiles.
- [X] Shotstars calculates to the minute and displays when the github restriction on repeat scans will be lifted *(when a token is not used)*.
- [X] Shotstars is built for people and works out of the box; supported OS: Windows 7+, GNU/Linux, Android *(the user [does not need](https://github.com/snooppr/shotstars/releases) technical skills, Github registration/authorization, or even Python installed)*.
- [X] Shotstars gets the job done at jet speed and for free *(open source, cross-platform, donations welcome)*.
Github imposes scanning limits 【**6K stars/hour** from a single IP address】.
With a Github token, Shotstars [lifts these limits](https://github.com/snooppr/shotstars/issues/3) and can scan repositories at up to 【**500K stars/hour**】.
Steps to get a token *(**free**)*:
1) register a Github account (if you don't already have one);
2) open your profile -> settings -> developer settings -> personal access tokens -> generate new token;
3) insert the resulting token (string) into the field instead of 'None' in the file
GNU/Linux & Android/Termux:
`/home/{user}/.ShotStars/results/config.ini`
OS Windows:
`C:\Users\{User}\AppData\Local\ShotStars\results\config.ini`.
The Github token belongs to the user, is stored locally, and is never transmitted anywhere.
You can parse both your own and third-party repositories *(by default, no registration/authorization/token is required)*.
Shotstars keeps a scan history, so you no longer need to type or copy/paste the url each time:
instead of the repository url, enter the keyword `his/history` and pick a previously scanned repository by number.
Note that the HTML report is created on a repeat scan of a repository. If you need to force a special HTML report, simply enable star [simulation](https://github.com/snooppr/shotstars#%EF%B8%8F-shotstars-supports-simulation-of-results).
---
## 🔻 Screenshot gallery
*1. Shotstars for Windows 7.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/shotstars%20Win.png" />
*2. Shotstars HTML-report.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/html-report.png" />
*3. Shotstars for Android/Termux.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/Termux.png" />
*4. Shotstars Limit Github/API (If you don't use the free token).*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/Limit.png" />
*5. Shotstars Scan History.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/scan_history.png" />
*6. Shotstars Discovers Hidden Developer Activity.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/hidden update.png" />
Shotstars is awesome: it sees everything. Github says the repository hasn't had any commits in a month, yet there has been subtle activity, such as PR updates (incidentally, commit rewriting and date manipulation are also easily detected).
*7. Shotstars finds users that overlap across Github projects, including those with hidden/private profiles.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/cross.png" />
*8. Shotstars generates HTML-CLI timelines of a repository's star history, both new and gone.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/graph.png" />
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/graphs.png" />
<p>Comparison of two repositories based on star history. The popularity peak of the first repository has clearly long passed, and development has gone into decline (forks). The second repository is a legend and is steadily gaining popularity.</p>
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/graphs2.png" />
<p>Star hour. A repository based in RU. Its audience is clearly European: far fewer stars arrive in the morning hours and at night.</p>
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/graphs3.png" />
<p>Star day of the week. The graph shows that stars arrive intensively at the beginning and end of the working week for this repository. Oh, and outside the US, people start their workweek according to <a href="https://github.com/orgs/community/discussions/42104">international norms</a>: on Monday, not Sunday.</p>
*9. Clear cache.*
<img src="https://raw.githubusercontent.com/snooppr/shotstars/main/images/clear.png" />
---
## 💡 Explanation of some metrics
+ **"Date-update (including hidden update)"** — The metric displays two things: firstly, if a developer pushes a commit and then completely erases the commits from the server and/or overwrites them, Shotstars will know about it and tell you; secondly, the metric will be updated if, for example, there are no pushes, but a pull request is cancelled/added, etc.
+ **"Peak-stars-in-date"** — The metric displays the date (per day) on which the maximum number of stars was received.
+ **"The-trend-of-adding-stars (forecasting)"** — The metric displays the predicted increase in stars per day based on the repository history and algorithm.
+ **"Most-of-stars-d.w. / Smallest-of-stars-d.w."** — The metric identifies two days of the week across the entire history of the repository: the day of the week that earned the most stars and the day with the fewest stars (mode / anti-mode).
+ **"Most-of-stars-month / Smallest-of-stars-month"** — The metric displays the calculation of two months in the entire history of the repository (the most profitable month by stars and the month with the least stars, mode / anti-mode).
+ **"Distribution-of-stars-by-month"** — Calculation of stars by month for the entire history of the repository (may include a rare phenomenon: private stars, when the sum of all stars ≠ 'GitHub-rating'), with the range of stars colored by quartiles (green: x > Q3); (yellow: Q1 <= x <= Q3); (red: x < Q1). The font size also decreases from Q3...Q1. Groups are not always arranged "3/6/3"; for example, the groups of the "Shotstars" repository are arranged "3/4/5". The characteristic is calculated when the repository is at least one month old.
+ **"Distribution-of-stars-by-year"** — Same as the metric '"Distribution-of-stars-by-month"', but the calculation is not by month, but by year, and the metric does not display the number of private stars. The characteristic is calculated when the age of the repository is at least one year.
+ **"Longest-period-without-add-stars"** — The metric displays the longest time span when no stars were submitted to the repository, i.e. every day in a row there were 0 stars (black streak).
+ **"Median-percentage-change"** — The metric reflects the average trend in stars (i.e. it ignores sharp fluctuations, such as fake stars or a sudden drop or surge in popularity from the media). It is calculated as a percentage: the ratio of the last month to the penultimate month. Positive numbers are easy to interpret, negative ones less so. The simplest example: a user scans a repository at the beginning of January; if in November the repository received +30 stars and in December +60 stars, the metric displays "100%"; if it was the other way around (+60 stars in November, +30 in December), the metric displays "-50%" (not "-100%"). The characteristic is calculated when the repository is at least two months old.
+ **"Average-change-in-fact"** — Unlike the "Median-percentage-change" metric, it reflects not the average trend but the real state of affairs, i.e. the arithmetic mean, and takes into account all fluctuations and dips in stars for the same period (the ratio of the last month to the penultimate month). It is expressed not in percentages but in multiples and units (stars). Example: if in November the repository added +30 stars and in December +60 stars, the metric displays "2 times (30 stars)"; conversely, with +60 stars in November and +30 in December, the metric displays "-2 times (-30 stars)". The characteristic is calculated when the repository is at least two months old.
+ **"Aggressive-marketing"** — The metric accepts the following values: "—"; "Low"; "Medium"; "High"; "Hard"; "Hard+". "—" means that the repository consistently receives or does not receive stars, without jumps, usually such repositories do not care about their popularity, are rarely/not mentioned in the media. "Low"; "Medium"; "High" — these repositories are repeatedly mentioned in the media, the movement of stars is uneven, they can attract hundreds of stars per day, the popularity of the repositories is high. "Hard" — frequent and frantic, uneven movement of stars, i.e. unnatural, the promotion of fake stars. "Hard+" — usually this is multiple promotion of fake stars in large quantities, i.e. more than once. The characteristic is calculated when the repository is at least two months old.
+ **"Fake-stars"** — The metric takes the following values: "Yes"; "Yes, multiple attempts to promote fake stars"; "—". In the first case, this could be a one-time, but large promotion of fake stars or regular promotion of stars little by little. In the second case, these are obvious and multiple promotions of fake stars. "—" means that Shotstars did not detect fake stars. The characteristic is calculated when the repository is at least two months old.
+ **"New stars"** — New stars for the repository from the penultimate scan to the last scan. The characteristic is calculated based on the frequency of repository scans. For the graph, the actual parsing is calculated, i.e. the stars received for the entire history of the repository.
+ **"Gone stars"** — The metric displays users who removed their stars from the repository, deleted their account from the Github platform, or switched their profile to "private" mode. Such a profile, like a deleted one, can lead to a "404" link: Github (not always) completely hides all user activity and their personal page, yet such an account can still conduct activity that is almost never displayed anywhere except to the account owner (for example, only reactions are displayed). This metric is not calculated for the entire history of the repository; it starts from the date of the first scan and covers the period from the penultimate scan to the last one. The characteristic depends on how often the repository is scanned.
+ **"Cross-users"** — The metric only displays those overlapping users that overlap in the scanned repositories relative to a specific scanned repository.
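Several of the metrics above boil down to plain arithmetic over monthly or daily star counts. The following is an illustrative Python sketch of those calculations, not Shotstars' actual implementation; all function names are hypothetical:

```python
from statistics import quantiles


def median_percentage_change(prev_month: int, last_month: int) -> float:
    """Percent change of the last month relative to the penultimate one."""
    return (last_month - prev_month) / prev_month * 100


def average_change_in_fact(prev_month: int, last_month: int) -> tuple[float, int]:
    """Fold change (in times) plus the absolute star difference."""
    if last_month >= prev_month:
        times = last_month / prev_month
    else:
        times = -(prev_month / last_month)
    return times, last_month - prev_month


def longest_black_streak(daily_stars: list[int]) -> int:
    """Longest run of consecutive days with zero new stars."""
    best = cur = 0
    for stars in daily_stars:
        cur = cur + 1 if stars == 0 else 0
        best = max(best, cur)
    return best


def quartile_colors(stars_by_month: dict[str, int]) -> dict[str, str]:
    """Color each month green/yellow/red by the quartile of its star count."""
    q1, _, q3 = quantiles(stars_by_month.values(), n=4)
    return {
        month: "green" if x > q3 else ("red" if x < q1 else "yellow")
        for month, x in stars_by_month.items()
    }
```

The percentage example from the text checks out: +30 stars in November followed by +60 in December gives 100%, while the reverse gives -50% (not -100%).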
---
## 🪺 Easter eggs
Many good programs have Easter eggs. Shotstars has them too; try looking for something related to the project itself, and you'll get...
| text/markdown | null | Snooppr <snoopproject@protonmail.com> | null | null | null | termux, github, parser, stars, OSINT, scraping, secrets, scanner, analytics, fake stars | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Natural Language :: English",
"Topic :: Internet :: WWW/HTTP :: Indexing/Search"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"plotly",
"rich>=13.0.1",
"requests"
] | [] | [] | [] | [
"Homepage, https://github.com/snooppr/shotstars",
"Donate, https://yoomoney.ru/to/4100111364257544"
] | twine/6.1.0 CPython/3.8.20 | 2026-02-18T08:41:38.456860 | shotstars-4.12.tar.gz | 416,293 | 5b/0d/4b1afc563caced0ae1bb532f6ca77fd1b145db31d08367a3c9948d1b1391/shotstars-4.12.tar.gz | source | sdist | null | false | ca0d6999c9a7067cf8cb1638c8450fb7 | 3432579c6b789c088212068439272e85d9221d622e041ce263dac088e34ecbec | 5b0d4b1afc563caced0ae1bb532f6ca77fd1b145db31d08367a3c9948d1b1391 | null | [] | 284 |
2.4 | whisper-ptt | 1.1.1 | Push-to-talk speech-to-text input using OpenAI Whisper | # whisper-ptt
Push-to-talk speech-to-text. Hold a key to record, release to transcribe and type.
## Usage
```
uvx whisper-ptt
uvx whisper-ptt --model base --key alt_r
```
## Options
- `--model` — Whisper model. Default: `base`. Common choices: `tiny`, `base`, `small`, `medium`, `large-v3-turbo`.
- `--key` — Hotkey name from `pynput.keyboard.Key`. Default: `alt_r`.

Run `uvx whisper-ptt --help` for the full list.
## macOS permissions
This tool needs two macOS permissions to work:
- **Accessibility** — to listen for hotkey presses and type transcribed text into the active window.
- **Microphone** — to record audio.
To grant accessibility access:
1. Open System Settings > Privacy & Security > Accessibility
2. Click the + button and add your terminal app (Terminal, iTerm2, VS Code, etc.)
3. If already listed, toggle it off and back on
Microphone access is prompted automatically on first use.
## Release
```bash
git tag v1.0.x
git push --tags
```
CI runs tests and publishes to PyPI automatically.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"pynput",
"pywhispercpp",
"sounddevice",
"soundfile"
] | [] | [] | [] | [
"Homepage, https://github.com/xloc/whisper-ptt",
"Source, https://github.com/xloc/whisper-ptt"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:41:26.892233 | whisper_ptt-1.1.1-py3-none-any.whl | 3,184 | 54/e8/684bf2f907b0ad3f31ee33b4aa563e36035acb7c31235a6d78553bf9b48b/whisper_ptt-1.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | b13c365e6555fefb04975b00b720d921 | e11272f8e92ef322cc78e34deaa541fae3e6e17dc4ff2d03e4fe015a4f36adc4 | 54e8684bf2f907b0ad3f31ee33b4aa563e36035acb7c31235a6d78553bf9b48b | null | [] | 276 |
2.4 | nest-simulator | 0.0.1.dev0 | A minimal test package | # nest-simulator
A minimal test package.
## Installation
```bash
pip install nest-simulator
```
## Usage
```python
from nest_simulator import hello_world
print(hello_world()) # Output: Hello World
```
| text/markdown | null | Your Name <you@example.com> | null | null | MIT | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/nest-simulator",
"Repository, https://github.com/yourusername/nest-simulator"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T08:41:24.119449 | nest_simulator-0.0.1.dev0.tar.gz | 1,550 | 84/5f/242280e76f935f0b3302908db832ffa980707dc7b8321975bae36bb1b1ed/nest_simulator-0.0.1.dev0.tar.gz | source | sdist | null | false | b2c579e2b1c23cc9f768e42aaf44e6f2 | 32d6ff536938b7690c9623d42be26bef0c609b4a747a64d900e2ab83ad397999 | 845f242280e76f935f0b3302908db832ffa980707dc7b8321975bae36bb1b1ed | null | [] | 33 |
2.1 | gnomopo | 0.1.1 | GNOme MOuse POsitioner | exposes mouse position on vanilla (non-x) ubuntu
| null | Mario Balibrera | mario.balibrera@gmail.com | null | null | GPL-2.0-or-later | null | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | null | [] | [] | [] | [
"fyg>=0.1.7.9"
] | [] | [] | [] | [] | twine/5.0.0 CPython/3.12.3 | 2026-02-18T08:41:24.084726 | gnomopo-0.1.1-py3-none-any.whl | 10,670 | 4a/60/f73dc6ba2f12d4fa28f2a765d7ccdba531969c64c067e9c5b34753e0a0bd/gnomopo-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | f3f253ed5d561b66dd30280630863e27 | 84668d7b7cbc5c0d8e4515a8991a5566e9d76bdb712b54a3ee6a857e2b64fa27 | 4a60f73dc6ba2f12d4fa28f2a765d7ccdba531969c64c067e9c5b34753e0a0bd | null | [] | 124 |
2.4 | capm | 0.9.3 | Code Analysis Package Manager | # CAPM
<div align="center">

</div>
<div align="center">
*Code Analysis Package Manager 📦*
</div>
<div align="center">
</div>
## Building the binary distribution
Generate a self-contained binary:
```shell
uv run poe bundle
```
| text/markdown | null | Rob van der Leek <robvanderleek@gmail.com> | null | null | null | null | [] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"click>=8.3.1",
"docker>=7.1.0",
"halo>=0.0.31",
"inquirer>=3.4.1",
"pyyaml>=6.0.3",
"typer>=0.21.0"
] | [] | [] | [] | [
"Changelog, https://github.com/getcapm/capm/blob/master/CHANGELOG.md",
"Documentation, https://getcapm.github.io",
"Source, https://github.com/getcapm/capm"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:40:59.693272 | capm-0.9.3.tar.gz | 58,760 | ce/02/fc53f61a96832e3bd0c649a914c0936ea02335f0cf3af59b0726fbf1edb2/capm-0.9.3.tar.gz | source | sdist | null | false | 77efe5f1853fbba3126b5344e2edf9c0 | aa8e3701f60a1e445edce92159e654f4a7c594e87b3b1584255fb19caa6acedf | ce02fc53f61a96832e3bd0c649a914c0936ea02335f0cf3af59b0726fbf1edb2 | GPL-3.0-or-later | [
"LICENSE"
] | 257 |
2.4 | contact-person-profile-csv-imp-local | 0.0.62b2889 | PyPI Package for Circles csv_to_contact_person_profile-local Local/Remote Python | This is a package for sharing common XXX function used in different repositories
| text/markdown | Circles | info@circles.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/circles-zone/contact-person-profile-csv-imp-local-python-package | null | null | [] | [] | [] | [
"contact-local>=0.0.48",
"logger-local>=0.0.135",
"database-mysql-local>=0.1.1",
"user-context-remote>=0.0.77",
"contact-email-address-local>=0.0.40.1234",
"contact-group-local>=0.0.68",
"contact-location-local>=0.0.14",
"contact-notes-local>=0.0.33",
"contact-persons-local>=0.0.8",
"contact-phone... | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T08:40:49.263099 | contact_person_profile_csv_imp_local-0.0.62b2889.tar.gz | 14,162 | 30/65/64ebb327424ec9ba19a4d3c65bcbc8d96857d0610bed0f0dccc51328dcbb/contact_person_profile_csv_imp_local-0.0.62b2889.tar.gz | source | sdist | null | false | 91c1fd3fb3ce5040dc6a05d4772dcc6c | bfc5fded6fbc42f8a3944310fbbc59e566ac25e4da6cb1b71cd143366cf5ccae | 306564ebb327424ec9ba19a4d3c65bcbc8d96857d0610bed0f0dccc51328dcbb | null | [] | 233 |
2.3 | pinviz | 0.16.1 | Programmatically generate Raspberry Pi GPIO connection diagrams | # PinViz
<p align="center">
<img src="https://raw.githubusercontent.com/nordstad/PinViz/main/assets/logo_512.png" alt="PinViz Logo" width="120">
</p>
<p align="center">
<a href="https://github.com/nordstad/PinViz/actions/workflows/ci.yml"><img src="https://github.com/nordstad/PinViz/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://codecov.io/gh/nordstad/PinViz"><img src="https://codecov.io/gh/nordstad/PinViz/branch/main/graph/badge.svg" alt="Coverage"></a>
<a href="https://nordstad.github.io/PinViz/"><img src="https://img.shields.io/badge/docs-mkdocs-blue" alt="Documentation"></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.12+-blue.svg" alt="Python 3.12+"></a>
<a href="https://pypi.org/project/pinviz/"><img src="https://img.shields.io/pypi/v/pinviz.svg" alt="PyPI version"></a>
<a href="https://pepy.tech/projects/pinviz"><img src="https://static.pepy.tech/personalized-badge/pinviz?period=total&units=international_system&left_color=black&right_color=green&left_text=downloads" alt="PyPI Downloads"></a>
</p>
**Programmatically generate beautiful GPIO connection diagrams for Raspberry Pi and ESP32/ESP8266 boards in SVG format.**
PinViz makes it easy to create clear, professional wiring diagrams for your microcontroller projects. Define your connections using simple YAML/JSON files or Python code, and automatically generate publication-ready SVG diagrams.
## Example Diagram
<p align="center">
<img src="https://raw.githubusercontent.com/nordstad/PinViz/main/images/bh1750.svg" alt="BH1750 Light Sensor Wiring Diagram" width="600">
</p>
**[→ See more examples](https://nordstad.github.io/PinViz/guide/examples/)**
## Features
- 📝 **Declarative Configuration**: Define diagrams using YAML or JSON files
- 🎨 **Automatic Wire Routing**: Smart wire routing with configurable styles (orthogonal, curved, mixed)
- 🎯 **Color-Coded Wires**: Automatic color assignment based on pin function (I2C, SPI, power, ground, etc.)
- ⚡ **Inline Components**: Add resistors, capacitors, and diodes directly on wires
- 🔌 **Built-in Templates**: Pre-configured boards (Raspberry Pi, ESP32, ESP8266) and common devices
- 🐍 **Python API**: Create diagrams programmatically with Python code
- 🤖 **MCP Server**: Generate diagrams from natural language with AI assistants
- 🌙 **Dark Mode**: Built-in light and dark themes for better visibility
- 📦 **SVG Output**: Scalable, high-quality vector graphics
- ✨ **Modern CLI**: Rich terminal output with progress indicators and colored messages
- 🔧 **JSON Output**: Machine-readable output for CI/CD integration
## Multi-Level Device Support ✨
**New in v0.11.0**: PinViz now supports device-to-device connections, enabling complex multi-level wiring diagrams!
### What's New
- Connect devices to other devices (not just to the board)
- Automatic horizontal tier layout based on connection depth
- Supports power distribution chains, sensor chains, motor controllers, and more
- Backward compatible with existing configurations
### Example: Power Distribution Chain
```yaml
connections:
# Board to regulator
- from: {board_pin: 2}
to: {device: "Regulator", device_pin: "VIN"}
# Regulator to LED (device-to-device!)
- from: {device: "Regulator", device_pin: "VOUT"}
to: {device: "LED", device_pin: "VCC"}
```
See [examples/multi_level_simple.yaml](examples/multi_level_simple.yaml) for a complete example.
## Dark Mode 🌙
Generate diagrams optimized for dark backgrounds. Perfect for documentation displayed in dark themes.
```yaml
title: "My Diagram"
board: "raspberry_pi_5"
theme: "dark" # or "light" (default)
devices: [...]
connections: [...]
```
Or use the CLI flag:
```bash
pinviz render diagram.yaml --theme dark -o output.svg
```
See [examples/bh1750_dark.yaml](examples/bh1750_dark.yaml) for a complete example.
## Supported Boards
| Board | Aliases | GPIO Pins | Description |
| ------------------ | -------------------------------------------- | -------------------- | ---------------------------------------------------- |
| Raspberry Pi 5 | `raspberry_pi_5`, `rpi5`, `rpi` | 40-pin | Latest Raspberry Pi with improved GPIO capabilities |
| Raspberry Pi 4 | `raspberry_pi_4`, `rpi4` | 40-pin | Popular Raspberry Pi model with full GPIO header |
| Raspberry Pi Pico | `raspberry_pi_pico`, `pico` | 40-pin (dual-sided) | RP2040-based microcontroller board |
| ESP32 DevKit V1 | `esp32_devkit_v1`, `esp32`, `esp32_devkit` | 30-pin (dual-sided) | ESP32 development board with WiFi/Bluetooth |
| ESP8266 NodeMCU | `esp8266_nodemcu`, `esp8266`, `nodemcu` | 30-pin (dual-sided) | ESP8266 WiFi development board |
| Wemos D1 Mini | `wemos_d1_mini`, `d1mini`, `wemos` | 16-pin (dual-sided) | Compact ESP8266 development board |
All boards include full pin definitions with GPIO numbers, I2C, SPI, UART, and PWM support.
## Installation
Install as a standalone tool with global CLI access:
```bash
uv tool install pinviz
```
Or as a project dependency:
```bash
# Using uv
uv add pinviz
# Using pip
pip install pinviz
```
## Quick Start
<p align="center">
<img src="https://raw.githubusercontent.com/nordstad/PinViz/main/scripts/demos/output/quick_demo.gif" alt="PinViz Quick Demo" width="800">
</p>
### Try a Built-in Example
```bash
# Generate a BH1750 light sensor wiring diagram
pinviz example bh1750 -o bh1750.svg
# See all available examples
pinviz list
```
### Create Your Own Diagram
Create a YAML configuration file (`my-diagram.yaml`):
```yaml
title: "BH1750 Light Sensor Wiring"
board: "raspberry_pi_5"
devices:
- type: "bh1750"
name: "BH1750"
color: "turquoise" # Named color support (or use hex: "#50E3C2")
connections:
- board_pin: 1 # 3V3
device: "BH1750"
device_pin: "VCC"
- board_pin: 6 # GND
device: "BH1750"
device_pin: "GND"
- board_pin: 5 # GPIO3 (I2C SCL)
device: "BH1750"
device_pin: "SCL"
- board_pin: 3 # GPIO2 (I2C SDA)
device: "BH1750"
device_pin: "SDA"
```
Generate your diagram:
```bash
pinviz render my-diagram.yaml -o output.svg
```
### Python API
```python
from pinviz import boards, devices, Connection, Diagram, SVGRenderer
board = boards.raspberry_pi_5()
sensor = devices.bh1750_light_sensor()
connections = [
Connection(1, "BH1750", "VCC"), # 3V3 to VCC
Connection(6, "BH1750", "GND"), # GND to GND
Connection(5, "BH1750", "SCL"), # GPIO3/SCL to SCL
Connection(3, "BH1750", "SDA"), # GPIO2/SDA to SDA
]
diagram = Diagram(
title="BH1750 Light Sensor",
board=board,
devices=[sensor],
connections=connections
)
renderer = SVGRenderer()
renderer.render(diagram, "output.svg")
```
## CLI Command Examples
PinViz provides a modern CLI.
### Rendering Diagrams
```bash
# Generate diagram from YAML config
pinviz render examples/bh1750.yaml -o output.svg
```
### Validation
```bash
# Validate diagram configuration
pinviz validate examples/bh1750.yaml
```
### List Templates
```bash
# List all boards, devices, and examples
pinviz list
```
## MCP Server (AI-Powered)
PinViz includes an **MCP (Model Context Protocol) server** that enables natural language diagram generation through AI assistants like Claude Desktop. Generate diagrams with prompts like "Connect a BME280 temperature sensor to my Raspberry Pi 5" with intelligent pin assignment, automatic I2C bus sharing, and conflict detection.
**[→ Full MCP documentation](https://nordstad.github.io/PinViz/mcp-server/)**
## Documentation
**Full documentation:** [nordstad.github.io/PinViz](https://nordstad.github.io/PinViz/)
- [Installation Guide](https://nordstad.github.io/PinViz/getting-started/installation/) - Detailed installation instructions
- [Quick Start Tutorial](https://nordstad.github.io/PinViz/getting-started/quickstart/) - Step-by-step getting started guide
- [CLI Usage](https://nordstad.github.io/PinViz/guide/cli/) - Command-line interface reference
- [YAML Configuration](https://nordstad.github.io/PinViz/guide/yaml-config/) - Complete YAML configuration guide
- [Python API](https://nordstad.github.io/PinViz/guide/python-api/) - Programmatic API reference
- [Examples Gallery](https://nordstad.github.io/PinViz/guide/examples/) - More example diagrams and configurations
- [API Reference](https://nordstad.github.io/PinViz/api/) - Complete API documentation
## Contributing
Contributions are welcome! Please see our [Contributing Guide](https://nordstad.github.io/PinViz/development/contributing/) for details.
**Adding new devices:** See [guides/DEVICE_CONFIG_GUIDE.md](guides/DEVICE_CONFIG_GUIDE.md) for device configuration details.
## License
MIT License - See [LICENSE](LICENSE) file for details
## Credits
Board and GPIO pin SVG assets courtesy of [Wikimedia Commons](https://commons.wikimedia.org/)
## Author
Even Nordstad
- GitHub: [@nordstad](https://github.com/nordstad)
- Project: [PinViz](https://github.com/nordstad/PinViz)
- Documentation: [nordstad.github.io/PinViz](https://nordstad.github.io/PinViz/)
| text/markdown | Even Nordstad | Even Nordstad <even.nordstad@gmail.com> | null | null | MIT | raspberry-pi, gpio, diagram, wiring, visualization, electronics, svg, raspberry-pi-5, hardware, circuit-diagram | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: System :: Hardware",
"License ::... | [] | null | null | >=3.12 | [] | [] | [] | [
"drawsvg~=2.4",
"pyyaml>=6.0.1",
"typer>=0.12.0",
"rich>=13.7.0",
"pydantic>=2.0.0",
"pydantic-settings>=2.0.0",
"questionary>=2.0.0",
"platformdirs>=4.0.0",
"mcp>=1.22.0",
"httpx>=0.28.0",
"beautifulsoup4>=4.12.0",
"structlog>=24.1.0"
] | [] | [] | [] | [
"Homepage, https://nordstad.github.io/PinViz/",
"Repository, https://github.com/nordstad/PinViz",
"Documentation, https://nordstad.github.io/PinViz/",
"Bug Tracker, https://github.com/nordstad/PinViz/issues",
"Changelog, https://github.com/nordstad/PinViz/blob/main/CHANGELOG.md"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T08:40:44.217908 | pinviz-0.16.1-py3-none-any.whl | 491,415 | 5c/bc/2575d875b074d2132e62fdecb2b5666eccf0d8cf64af1a4aaa8115f3c629/pinviz-0.16.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 29158fb33c6e2aba746987fa095c7940 | 946f99019e0dd6b66238bfe2fa75e4ae141d42c3325a80c0046bbc6f5d07a034 | 5cbc2575d875b074d2132e62fdecb2b5666eccf0d8cf64af1a4aaa8115f3c629 | null | [] | 266 |
2.4 | wheezy.security | 3.2.2 | A lightweight security/cryptography library | # wheezy.security
[](https://github.com/akornatskyy/wheezy.security/actions/workflows/tests.yml)
[](https://coveralls.io/github/akornatskyy/wheezy.security?branch=master)
[](https://wheezysecurity.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/wheezy.security)
[wheezy.security](https://pypi.org/project/wheezy.security/) is a
[python](https://www.python.org) package written in pure Python code. It
is a lightweight security library that provides integration with:
- [cryptography](https://pypi.org/project/cryptography/) - cryptography
is a package which provides cryptographic recipes and primitives to
Python developers.
- [pycryptodome](https://www.pycryptodome.org) - PyCryptodome
is a fork of PyCrypto. It brings several enhancements.
- [pycryptodomex](https://www.pycryptodome.org) - PyCryptodomex
is a library independent of the PyCrypto.
It is optimized for performance, well tested and documented.
Resources:
- [source code](https://github.com/akornatskyy/wheezy.security),
and [issues](https://github.com/akornatskyy/wheezy.security/issues)
tracker are available on
[github](https://github.com/akornatskyy/wheezy.security)
- [documentation](https://wheezysecurity.readthedocs.io/en/latest/)
## Install
[wheezy.security](https://pypi.org/project/wheezy.security/) requires
[python](https://www.python.org) version 3.10+. It is independent of operating
system. You can install it from
[pypi](https://pypi.org/project/wheezy.security/) site:
```sh
pip install -U wheezy.security
```
If you would like to take advantage of one of the cryptography libraries that has
built-in support, specify the corresponding extra:
```sh
pip install wheezy.security[cryptography]
pip install wheezy.security[pycryptodome]
pip install wheezy.security[pycryptodomex]
```
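The package's core primitive, a security ticket, signs (and, with one of the backends above, encrypts) values so that tampering by an untrusted client is detectable. The signing half of that idea can be sketched with the standard library alone (an illustrative sketch only; `protect`, `verify`, and the secret are hypothetical names, not the wheezy.security API):

```python
import hashlib
import hmac

SECRET = b"server-side secret"  # never shipped to the client

def protect(value):
    """Append an HMAC-SHA256 signature so tampering is detectable."""
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{sig}"

def verify(token):
    """Return the original value, or None if the signature does not match."""
    value, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels; wheezy.security's ticket additionally layers expiry and, with a backend installed, encryption on top of this idea.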
If you run into any issues or have comments, go ahead and raise them on
[github](https://github.com/akornatskyy/wheezy.security).
| text/markdown | null | Andriy Kornatskyy <andriy.kornatskyy@live.com> | null | null | null | security, ticket, encryption, cryptography, pycrypto, pycryptodome | [
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.1... | [] | null | null | >=3.10 | [] | [] | [] | [
"Cython>=3.0; extra == \"cython\"",
"setuptools>=61.0; extra == \"cython\"",
"cryptography>=46.0.5; extra == \"cryptography\"",
"pycryptodome>=3.23.0; extra == \"pycryptodome\"",
"pycryptodomex>=3.23.0; extra == \"pycryptodomex\""
] | [] | [] | [] | [
"Homepage, https://github.com/akornatskyy/wheezy.security",
"Source, https://github.com/akornatskyy/wheezy.security",
"Issues, https://github.com/akornatskyy/wheezy.security/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T08:39:28.106858 | wheezy_security-3.2.2.tar.gz | 9,179 | ad/4e/f405a8cf507ddde04676e386c281774d2f0e296b3b7afdd5567863a948d7/wheezy_security-3.2.2.tar.gz | source | sdist | null | false | 0070d5d869170481b38cb4feff979e24 | 5e3f2761cd8469ee83c1eac0175fa1825997ee5d2330bcbd76efb12c525ded2a | ad4ef405a8cf507ddde04676e386c281774d2f0e296b3b7afdd5567863a948d7 | MIT | [
"LICENSE"
] | 0 |
2.4 | openmanage-mcp-server | 1.0.0 | MCP server for Dell OpenManage Enterprise server management | # openmanage-mcp-server
[](https://pypi.org/project/openmanage-mcp-server/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
MCP server for Dell OpenManage Enterprise (OME) — monitor and manage Dell servers through AI assistants like Claude.
## Features
- **Device management** — list devices, view details, health summary
- **Alert management** — list, filter, acknowledge alerts (single or bulk)
- **Warranty tracking** — list warranties, find expired ones
- **Firmware compliance** — check firmware baselines
- **Job monitoring** — view OME jobs and their status
- **Group & policy management** — list device groups and alert policies
- **OData pagination** — automatic multi-page result fetching
- **Session-based auth** — secure X-Auth-Token sessions, auto-created and cleaned up
## Installation
```bash
pip install openmanage-mcp-server
# or
uvx openmanage-mcp-server
```
## Configuration
The server requires three environment variables:
| Variable | Description | Example |
|----------|-------------|---------|
| `OME_HOST` | OME server hostname or IP | `ome.example.com` |
| `OME_USERNAME` | OME admin username | `admin` |
| `OME_PASSWORD` | OME admin password | `secretpass` |
Optional:
| Variable | Description | Default |
|----------|-------------|---------|
| `OME_TRANSPORT` | Transport protocol (`stdio` or `http`) | `stdio` |
| `OME_LOG_LEVEL` | Log level | `INFO` |
### Claude Desktop
Add to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"openmanage": {
"command": "uvx",
"args": ["openmanage-mcp-server"],
"env": {
"OME_HOST": "ome.example.com",
"OME_USERNAME": "admin",
"OME_PASSWORD": "your-password"
}
}
}
}
```
### Claude Code
Add via CLI:
```bash
claude mcp add openmanage -- uvx openmanage-mcp-server
```
Or add to your `.mcp.json`:
```json
{
"openmanage": {
"command": "uvx",
"args": ["openmanage-mcp-server"],
"env": {
"OME_HOST": "ome.example.com",
"OME_USERNAME": "admin",
"OME_PASSWORD": "your-password"
}
}
}
```
### VS Code
Add to your VS Code settings or `.vscode/mcp.json`:
```json
{
"mcp": {
"servers": {
"openmanage": {
"command": "uvx",
"args": ["openmanage-mcp-server"],
"env": {
"OME_HOST": "ome.example.com",
"OME_USERNAME": "admin",
"OME_PASSWORD": "your-password"
}
}
}
}
}
```
### HTTP Transport
To run as a standalone HTTP server:
```bash
openmanage-mcp-server --transport http --host 0.0.0.0 --port 8000
```
## Tools
### System
| Tool | Description |
|------|-------------|
| `ome_version` | Get OME version, build info, and operation status |
### Devices
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_devices` | List all managed devices | `top?` |
| `ome_get_device` | Get full detail for a single device | `device_id` |
| `ome_device_health` | Aggregate device health summary (count by status) | — |
### Alerts
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_alerts` | List alerts with optional filters | `severity?`, `category?`, `status?`, `top?` |
| `ome_get_alert` | Get full detail for a single alert | `alert_id` |
| `ome_alert_count` | Alert count aggregated by severity | — |
| `ome_alert_ack` | Acknowledge one or more alerts by ID | `alert_ids` |
| `ome_alert_ack_all` | Acknowledge all unacknowledged alerts matching filters | `severity?`, `category?` |
**Alert filter values:**
| Parameter | Accepted values |
|-----------|----------------|
| `severity` | `critical`, `warning`, `info`, `normal` |
| `status` | `unack`, `ack` |
| `category` | e.g. `Warranty`, `System Health` |
### Warranties
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_warranties` | List all warranty records | `top?` |
| `ome_warranties_expired` | List warranties past their end date | — |
### Groups, Jobs, Policies & Firmware
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_groups` | List device groups | `top?` |
| `ome_list_jobs` | List jobs (sorted by most recent) | `top?` |
| `ome_list_policies` | List alert policies | `top?` |
| `ome_list_firmware` | List firmware compliance baselines | `top?` |
## Example Usage
Once connected, you can ask your AI assistant things like:
- "Show me all devices in OpenManage"
- "Are there any critical alerts?"
- "Which server warranties have expired?"
- "Acknowledge all warranty alerts"
- "Show me recent jobs"
- "What's the firmware compliance status?"
## Safety
All tools are **read-only** except `ome_alert_ack` and `ome_alert_ack_all`, which are non-destructive write operations — they mark alerts as acknowledged but do not modify device configuration.
## Technical Notes
- **SSL:** Self-signed certificate verification is disabled (common for OME appliances)
- **Auth:** Session-based with X-Auth-Token, auto-created on startup and cleaned up on shutdown
- **Pagination:** Automatically follows OData `@odata.nextLink` to fetch all pages (unless `top` is set)
- **Jobs API:** OME Jobs API doesn't support `$orderby`, so results are sorted client-side by `LastRun`
- **Warranty dates:** OME doesn't support date comparison in OData `$filter` for warranty endpoints, so expired warranty filtering is done client-side
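The pagination note above can be sketched as a small loop that keeps following `@odata.nextLink` until it is absent (a simplified illustration, not the package's implementation; `get_page` stands in for an authenticated HTTP GET that returns parsed JSON):

```python
def fetch_all_pages(get_page, first_url, top=None):
    """Collect `value` items across OData pages by following @odata.nextLink."""
    items, url = [], first_url
    while url:
        page = get_page(url)
        items.extend(page.get("value", []))
        if top is not None and len(items) >= top:
            return items[:top]  # honor a caller-supplied limit, as with `top?`
        url = page.get("@odata.nextLink")  # absent on the last page
    return items

# Tiny stub standing in for the OME REST API:
pages = {
    "/api/AlertService/Alerts": {"value": [1, 2], "@odata.nextLink": "/page2"},
    "/page2": {"value": [3]},
}
assert fetch_all_pages(pages.get, "/api/AlertService/Alerts") == [1, 2, 3]
```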
## Development
```bash
git clone https://github.com/clearminds/openmanage-mcp-server.git
cd openmanage-mcp-server
uv sync
uv run openmanage-mcp-server
```
## License
MIT — see [LICENSE](LICENSE) for details.
| text/markdown | Clearminds AB | null | null | null | null | dell, mcp, model-context-protocol, ome, openmanage, server-management | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Monitoring",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastmcp<3,>=2.14.0",
"httpx>=0.28.1",
"pydantic-settings>=2.0",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/clearminds/openmanage-mcp-server",
"Repository, https://github.com/clearminds/openmanage-mcp-server",
"Issues, https://github.com/clearminds/openmanage-mcp-server/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:38:40.579902 | openmanage_mcp_server-1.0.0.tar.gz | 83,565 | e6/18/d70f491b734770927062ce3fba6c2621f37a997299cb10e07f974fa1c231/openmanage_mcp_server-1.0.0.tar.gz | source | sdist | null | false | 8e454081a2a3aa92197cb7f326b335d8 | 1895b6bb2cdaf30d3f828bf7a36c804c9bb4fe67ca5d54b5b9867850eb5b3c74 | e618d70f491b734770927062ce3fba6c2621f37a997299cb10e07f974fa1c231 | MIT | [
"LICENSE"
] | 249 |
2.4 | bioimageio.spec | 0.5.7.4 | Parser and validator library for bioimage.io specifications | 
[](https://pypi.org/project/bioimageio.spec/)
[](https://anaconda.org/conda-forge/bioimageio.spec/)
[](https://pepy.tech/project/bioimageio.spec)
[](https://anaconda.org/conda-forge/bioimageio.spec/)
[](https://github.com/astral-sh/ruff)
[](https://bioimage-io.github.io/spec-bioimage-io/coverage/index.html)
# Specifications for bioimage.io
This repository contains the specifications of the standard format defined by the bioimage.io community for the content (i.e., models, datasets and applications) in the [bioimage.io website](https://bioimage.io).
Each item in the content is always described using a YAML 1.2 file named `rdf.yaml` or `bioimageio.yaml`.
This `rdf.yaml` / `bioimageio.yaml` --- along with the files referenced in it --- can be downloaded from or uploaded to the [bioimage.io website](https://bioimage.io) and may be produced or consumed by bioimage.io-compatible consumers (e.g., image analysis software like ilastik).
[These](https://bioimage-io.github.io/spec-bioimage-io/#format-version-overview) are the latest format specifications that bioimage.io-compatible resources should comply with.
Note that the Python package PyYAML does not support YAML 1.2.
We therefore use and recommend [ruyaml](https://ruyaml.readthedocs.io/en/latest/).
For differences see <https://ruamelyaml.readthedocs.io/en/latest/pyyaml>.
Please also note that the best way to check whether your `rdf.yaml` file is fully bioimage.io-compliant is to call `bioimageio.core.test_description` from the [bioimageio.core](https://github.com/bioimage-io/core-bioimage-io-python) Python package.
The [bioimageio.core](https://github.com/bioimage-io/core-bioimage-io-python) Python package also provides the bioimageio command line interface (CLI) with the `test` command:
```terminal
bioimageio test path/to/your/rdf.yaml
```
## Documentation
The bioimageio.spec documentation is hosted at [https://bioimage-io.github.io/spec-bioimage-io](https://bioimage-io.github.io/spec-bioimage-io).
| text/markdown | null | Fynn Beuttenmüller <thefynnbe@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: ... | [] | null | null | >=3.8 | [] | [] | [] | [
"annotated-types<1,>=0.5.0",
"email-validator",
"exceptiongroup",
"genericache==0.5.2",
"httpx",
"imageio",
"loguru",
"markdown",
"numpy>=1.21",
"packaging>=17.0",
"platformdirs",
"pydantic-core",
"pydantic-settings<3,>=2.5",
"pydantic<3,>=2.10.3",
"python-dateutil",
"rich",
"ruyaml"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:38:04.279121 | bioimageio_spec-0.5.7.4.tar.gz | 221,371 | 5c/6c/db118e6fde645ab80e1d2d5b27b9b186b0abaa482aad7d6eb86ef766f724/bioimageio_spec-0.5.7.4.tar.gz | source | sdist | null | false | 524928d7eeada0134d99d4071bcb0ca2 | 73411ede01c454e6a6ed7ed5d6ddbeffa0944051d314487382c1d5c256f0161a | 5c6cdb118e6fde645ab80e1d2d5b27b9b186b0abaa482aad7d6eb86ef766f724 | null | [
"LICENSE"
] | 0 |
2.4 | stackit-ske | 1.6.0 | SKE-API | # stackit.ske
The SKE API provides endpoints to create, update, and delete clusters within STACKIT portal projects and to trigger further cluster management tasks.
For more information, please visit [https://support.stackit.cloud/servicedesk](https://support.stackit.cloud/servicedesk)
This package is part of the STACKIT Python SDK. For additional information, please visit the [GitHub repository](https://github.com/stackitcloud/stackit-sdk-python) of the SDK.
## Installation & Usage
### pip install
```sh
pip install stackit-ske
```
Then import the package:
```python
import stackit.ske
```
## Getting Started
[Examples](https://github.com/stackitcloud/stackit-sdk-python/tree/main/examples) for the usage of the package can be found in the [GitHub repository](https://github.com/stackitcloud/stackit-sdk-python) of the SDK. | text/markdown | STACKIT Developer Tools | developer-tools@stackit.cloud | null | null | null | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Pro... | [] | https://github.com/stackitcloud/stackit-sdk-python | null | <4.0,>=3.9 | [] | [] | [] | [
"stackit-core>=0.0.1a",
"requests>=2.32.3",
"pydantic>=2.9.2",
"python-dateutil>=2.9.0.post0"
] | [] | [] | [] | [
"Homepage, https://github.com/stackitcloud/stackit-sdk-python",
"Issues, https://github.com/stackitcloud/stackit-sdk-python/issues"
] | poetry/2.2.1 CPython/3.9.25 Linux/6.14.0-1017-azure | 2026-02-18T08:37:19.070408 | stackit_ske-1.6.0.tar.gz | 33,275 | 51/4c/cb1b77f3d04bc3027d488acfa3e55e453c7d3b33912809436234ae9bd00f/stackit_ske-1.6.0.tar.gz | source | sdist | null | false | bc2e7dbaa43596f166a7879aaf5237e0 | c1978b02ee1b46d3d42b817166d813da25c8adbefc512b27b49c83e3f7038217 | 514ccb1b77f3d04bc3027d488acfa3e55e453c7d3b33912809436234ae9bd00f | null | [] | 265 |
2.4 | interactionfreepy | 1.8.4 | An intuitive and cross-language RPC lib for Python. |
# InteractionFree for Python
[](https://github.com/hwaipy/InteractionFreePy/actions?query=workflow%3ATests)
[](https://interactionfreepy.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.org/project/interactionfreepy/)
[](https://pypi.org/project/interactionfreepy/)
[](https://github.com/hwaipy/InteractionFreePy/tree/python-coverage-comment-action-data)
InteractionFree is a remote procedure call (RPC) protocol based on [ZeroMQ](https://zeromq.org). It allows developers to build their own distributed and cross-language programs easily. The protocol is config-less and extremely easy to use. Currently, [MessagePack](https://msgpack.org) is used for binary serialization.
Please refer to [here](https://interactionfreepy.readthedocs.io/en/latest/index.html) for the full doc.
## Quick Start
**Install**
```shell
$ pip install interactionfreepy
```
**Start the broker**
```python
from interactionfreepy import IFBroker, IFLoop
broker = IFBroker('tcp://*:port')
IFLoop.join()
```
replace `port` with any available port number.
`IFLoop.join()` is a utility function to prevent the program from finishing.
**Start a server**
```python
from interactionfreepy import IFWorker, IFLoop
class Target():
def tick(self, message):
return "tack %s" % message
worker = IFWorker('tcp://address:port', 'TargetService', Target())
IFLoop.join()
```
replace `address` and `port` with the server's network address and port.
**Start a client**
```python
from interactionfreepy import IFWorker
client = IFWorker('tcp://address:port')
print(client.TargetService.tick('now'))
```
| text/markdown | Hwaipy | hwaipy@gmail.com | null | null | gpl-3.0 | msgpack, zeromq, zmq, 0mq, rpc, cross-language | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python ... | [] | https://github.com/hwaipy/InteractionFreePy | https://github.com/hwaipy/InteractionFreePy/archive/v1.8.4.tar.gz | >=3.8 | [] | [] | [] | [
"msgpack",
"tornado",
"pyzmq"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T08:37:00.500614 | interactionfreepy-1.8.4.tar.gz | 32,862 | 15/1e/c5135e5e9b1443f59ee9e4a6e1e6e6ae561aca95ab1c8db445affbf155bc/interactionfreepy-1.8.4.tar.gz | source | sdist | null | false | 3b8591f43c771a71c4fd5b99e6af1cd2 | 3430f0b8324cd0ab33ef552fc913c9bbfb6e83433ac2f99beceb3e3e7eedcb33 | 151ec5135e5e9b1443f59ee9e4a6e1e6e6ae561aca95ab1c8db445affbf155bc | null | [
"LICENSE"
] | 259 |
2.4 | fence-llm | 2.3.1 | The bloat moat! - A lightweight LLM and Agent interaction library | <img src="https://github.com/WouterDurnez/fence/blob/main/docs/logo.png?raw=true" alt="tests" height="200"/>
[](https://pypi.org/project/fence-llm/)
[](https://github.com/WouterDurnez/fence/actions)
[](https://codecov.io/gh/WouterDurnez/fence)
[](https://badge.fury.io/py/fence-llm)
[](https://fence-llm.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/psf/black)
[](https://github.com/pre-commit/pre-commit)
[](https://opensource.org/licenses/MIT)
[](code_of_conduct.md)
# 🤺 Fence
**The Bloat Moat!** A lightweight, production-ready library for LLM communication and agentic workflows. Born from the need for something simpler than LangChain, Fence gives you powerful LLM orchestration without the heavyweight dependencies.
Think of it as the Swiss Army knife for LLM interactions—sharp, reliable, and it won't weigh down your backpack (or your Docker image).
---
## 🤔 Why Fence?
**The short answer:** By accident.
**The slightly longer answer:** LangChain used to be (is?) a pretty big package with a ton of dependencies. Great for PoCs, but in production? Not so much.
### The problems we faced:
- **🐘 It's BIG.** Takes up serious space (problematic in Lambda, containers, edge environments)
- **🌀 It's COMPLEX.** Overwhelming for new users, hard to debug in production
- **💥 It BREAKS.** Frequent breaking changes, version jumps that made us cry
As a result, many developers (especially those in large production environments) started building lightweight, custom solutions that favor **stability** and **robustness** over feature bloat.
### Enter Fence 🤺
We started building basic components from scratch for our Bedrock-heavy production environment. First came the `Link` class (_wink wink_), then templates, then agents... and before we knew it, we had a miniature package that was actually _fun_ to use.
**Fence strikes the perfect balance between convenience and flexibility.**
> **Note:** Fence isn't trying to replace LangChain for complex PoCs. But if you want a simple, lightweight, production-ready package that's easy to understand and extend, you're in the right place.
---
## 📦 Installation
```bash
pip install fence-llm
```
That's it. Seriously. No 500MB of transitive dependencies.
---
## 🚀 Quick Start
### Hello World (The Obligatory Example)
```python
from fence.links import Link
from fence.templates.string import StringTemplate
from fence.models.openai import GPT4omini
# Create a link
link = Link(
model=GPT4omini(),
template=StringTemplate("Write a haiku about {topic}"),
name='haiku_generator'
)
# Run it
output = link.run(topic='fencing')['state']
print(output)
```
**Output:**
```
[2024-10-04 17:45:15] [ℹ️ INFO] [links.run:203] Executing <haiku_generator> Link
Blades flash in the light,
En garde, the dance begins now,
Touch—victory's mine.
```
Much wow. Very poetry. 🎭
---
## 💪 What Can Fence Do?
Fence is built around a few core concepts that work together beautifully:
### 🤖 **Multi-Provider LLM Support**
Uniform interface across AWS Bedrock (Claude, Nova), OpenAI (GPT-4o), Anthropic, Google Gemini, Ollama, and Mistral. Switch models with a single line change.
👉 **[See all supported models →](docs/MODELS.md)**
### 🔗 **Links & Chains**
Composable building blocks that combine models, templates, and parsers. Chain them together for complex workflows.
👉 **[Learn about Links & Chains →](docs/LINKS_AND_CHAINS.md)**
### 🤖 **Agentic Workflows** ⭐
The crown jewel! Production-ready agents using the ReAct pattern:
- **`Agent`** - Classic ReAct with tool use and multi-level delegation
- **`BedrockAgent`** - Native Bedrock tool calling with streaming
- **`ChatAgent`** - Conversational agents for multi-agent systems
👉 **[Dive into Agents →](docs/AGENTS.md)**
### 🔌 **MCP Integration**
First-class support for the Model Context Protocol. Connect to MCP servers and automatically expose their tools to your agents.
👉 **[Explore MCP Integration →](docs/MCP.md)**
### 🎭 **Multi-Agent Systems**
Build collaborative agent systems with `RoundTable` where multiple agents discuss and solve problems together.
👉 **[Build Multi-Agent Systems →](docs/MULTI_AGENT.md)**
### 🧠 **Memory Systems**
Persistent and ephemeral memory backends (DynamoDB, SQLite, in-memory) for stateful conversations.
👉 **[Configure Memory →](docs/MEMORY.md)**
### 🛠️ **Tools & Utilities**
Custom tool creation, built-in tools, retry logic, parallelization, output parsers, logging callbacks, and benchmarking.
👉 **[Explore Tools & Utilities →](docs/TOOLS_AND_UTILITIES.md)**
---
## 📚 Documentation
- **[Models](docs/MODELS.md)** - All supported LLM providers and how to use them
- **[Links & Chains](docs/LINKS_AND_CHAINS.md)** - Building blocks for LLM workflows
- **[Agents](docs/AGENTS.md)** - ReAct agents, tool use, and delegation
- **[MCP Integration](docs/MCP.md)** - Model Context Protocol support
- **[Multi-Agent Systems](docs/MULTI_AGENT.md)** - RoundTable and collaborative agents
- **[Memory](docs/MEMORY.md)** - Persistent and ephemeral memory backends
- **[Tools & Utilities](docs/TOOLS_AND_UTILITIES.md)** - Custom tools, parsers, and helpers
---
## 🎯 Examples
### Simple Agent with Tools
```python
from fence.agents import Agent
from fence.models.openai import GPT4omini
from fence.tools.math import CalculatorTool
agent = Agent(
identifier="math_wizard",
model=GPT4omini(source="demo"),
tools=[CalculatorTool()],
)
result = agent.run("What is 1337 * 42 + 999?")
print(result) # Agent thinks, uses calculator, and answers!
```
### BedrockAgent with MCP
```python
from fence.agents.bedrock import BedrockAgent
from fence.mcp.client import MCPClient
from fence.models.bedrock import Claude37Sonnet
# Connect to MCP server
mcp_client = MCPClient(
transport_type="streamable_http",
url="https://your-mcp-server.com/mcp"
)
# Create agent with MCP tools
agent = BedrockAgent(
identifier="mcp_agent",
model=Claude37Sonnet(region="us-east-1"),
mcp_clients=[mcp_client], # Tools auto-registered!
)
result = agent.run("Search for customer data")
```
### Multi-Agent Collaboration
```python
from fence.troupe import RoundTable
from fence.agents import ChatAgent
from fence.models.openai import GPT4omini
# Create specialized agents
detective = ChatAgent(
identifier="Detective",
model=GPT4omini(source="roundtable"),
profile="You are a sharp detective."
)
scientist = ChatAgent(
identifier="Scientist",
model=GPT4omini(source="roundtable"),
profile="You are a forensic scientist."
)
# Let them collaborate
round_table = RoundTable(agents=[detective, scientist])
transcript = round_table.run(
prompt="A painting was stolen. Let's investigate!",
max_rounds=3
)
```
**More examples:**
- 📓 [Jupyter Notebooks](notebooks/) - Interactive tutorials
- 🎬 [Demo Scripts](demo/) - Runnable examples
---
## 🤝 Contributing
We welcome contributions! Whether it's:
- 🐛 Bug fixes
- ✨ New features (especially new model providers!)
- 📝 Documentation improvements
- 🧪 More tests
- 🎨 Better examples
Check out [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
---
## 📄 License
MIT License - see [LICENSE.txt](LICENSE.txt) for details.
---
## 🙏 Acknowledgments
Inspired by LangChain, built for production, made with ❤️ by developers who got tired of dependency hell.
**Now go build something awesome! 🚀**
| text/markdown | null | "wouter.durnez" <wouter.durnez@gmail.com> | null | null | MIT | ai, anthropic, api, claude, fence, gpt, language, llm, model, nlp, openai, wrapper | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language ... | [] | null | null | >=3.11 | [] | [] | [] | [
"boto3>=1.34.100",
"mcp>=1.23.0",
"numexpr>=2.10.1",
"pydantic>=2.11.4",
"requests>=2.32.4"
] | [] | [] | [] | [
"Repository, https://github.com/WouterDurnez/fence",
"Homepage, https://github.com/WouterDurnez/fence",
"Documentation, https://github.com/WouterDurnez/fence"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T08:36:58.207585 | fence_llm-2.3.1.tar.gz | 160,176 | 36/2b/b41e81736da13b2aa56c4e95abdfd9bb49c9c1a5fa4e3f1971d428728d51/fence_llm-2.3.1.tar.gz | source | sdist | null | false | cda5818b917f2369adc1c535cf81b946 | 41f7021386436091923921d6e57228340b318f528fb903adbc8b92d3bfd96ac6 | 362bb41e81736da13b2aa56c4e95abdfd9bb49c9c1a5fa4e3f1971d428728d51 | null | [
"LICENSE.txt"
] | 285 |
2.4 | argo-proxy | 2.8.6 | Proxy server to Argo API, OpenAI format compatible | # argo-proxy
[](https://badge.fury.io/py/argo-proxy)
[](https://badge.fury.io/gh/oaklight%2Fargo-proxy)
This project is a proxy application that forwards requests to an ARGO API and optionally converts the responses to be compatible with OpenAI's API format. It can be used in conjunction with [autossh-tunnel-dockerized](https://github.com/Oaklight/autossh-tunnel-dockerized) or other secure connection tools.
For detailed information, please refer to documentation at [argo-proxy ReadtheDocs page](https://argo-proxy.readthedocs.io/en/latest/)
## TL;DR
```bash
pip install argo-proxy # install the package
argo-proxy # run the proxy
```
Function calling is available for Chat Completions endpoint starting from `v2.7.5`.
Try with `pip install "argo-proxy>=2.7.5"`
**Now all models have native function calling in standard mode.** (Gemini native function calling support added in v2.8.0.)
## NOTICE OF USAGE
The machine or server making API calls to Argo must be connected to the Argonne internal network or through a VPN on an Argonne-managed computer if you are working off-site. Your instance of the argo proxy should always be on-premise at an Argonne machine. The software is provided "as is," without any warranties. By using this software, you accept that the authors, contributors, and affiliated organizations will not be liable for any damages or issues arising from its use. You are solely responsible for ensuring the software meets your requirements.
- [Notice of Usage](#notice-of-usage)
- [Deployment](#deployment)
- [Prerequisites](#prerequisites)
- [Configuration File](#configuration-file)
- [Running the Application](#running-the-application)
- [First-Time Setup](#first-time-setup)
- [Configuration Options Reference](#configuration-options-reference)
- [Streaming Modes: Real Stream vs Pseudo Stream](#streaming-modes-real-stream-vs-pseudo-stream)
- [`argo-proxy` CLI Available Options](#argo-proxy-cli-available-options)
- [Management Utilities](#management-utilities)
- [Usage](#usage)
- [Endpoints](#endpoints)
- [OpenAI Compatible](#openai-compatible)
- [Not OpenAI Compatible](#not-openai-compatible)
- [Timeout Override](#timeout-override)
- [Models](#models)
- [Chat Models](#chat-models)
- [Embedding Models](#embedding-models)
- [Tool Calls](#tool-calls)
- [Tool Call Examples](#tool-call-examples)
- [ToolRegistry](#toolregistry)
- [Examples](#examples)
- [Raw Requests](#raw-requests)
- [OpenAI Client](#openai-client)
- [Bug Reports and Contributions](#bug-reports-and-contributions)
## Deployment
### Prerequisites
- **Python 3.10+** is required. </br>
It is recommended to use conda, mamba, or pipx to manage a dedicated environment. </br>
**Conda/Mamba** Download and install from: <https://conda-forge.org/download/> </br>
**pipx** Download and install from: <https://pipx.pypa.io/stable/installation/>
- Install dependencies:
PyPI current version: 
```bash
pip install argo-proxy
```
To upgrade:
```bash
argo-proxy --version # Display current version
# Check against PyPI version
pip install argo-proxy --upgrade
```
or, if you decide to use dev version (make sure you are at the root of the repo cloned):

```bash
pip install .
```
### Configuration File
If you don't want to manually configure it, the [First-Time Setup](#first-time-setup) will automatically create it for you.
The application uses `config.yaml` for configuration. Here's an example:
```yaml
argo_embedding_url: "https://apps.inside.anl.gov/argoapi/api/v1/resource/embed/"
argo_stream_url: "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/streamchat/"
argo_url: "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/chat/"
port: 44497
host: 0.0.0.0
user: "your_username" # set during first-time setup
verbose: true # can be changed during setup
```
### Running the Application
To start the application:
```bash
argo-proxy [config_path]
```
- Without arguments: search for `config.yaml` under:
- current directory
- `~/.config/argoproxy/`
- `~/.argoproxy/`
The first one found will be used.
- With path: uses the specified config file if it exists; otherwise falls back to the default search.
```bash
argo-proxy /path/to/config.yaml
```
- With `--edit` flag: opens the config file in the default editor for modification.
### First-Time Setup
When running without an existing config file:
1. The script offers to create `config.yaml` from `config.sample.yaml`
2. Automatically selects a random available port (can be overridden)
3. Prompts for:
- Your username (sets `user` field)
- Verbose mode preference (sets `verbose` field)
4. Validates connectivity to configured URLs
5. Shows the generated config in a formatted display for review before proceeding
Example session:
```bash
$ argo-proxy
No valid configuration found.
Would you like to create it from config.sample.yaml? [Y/n]:
Creating new configuration...
Use port [52226]? [Y/n/<port>]:
Enter your username: your_username
Enable verbose mode? [Y/n]
Created new configuration at: /home/your_username/.config/argoproxy/config.yaml
Using port 52226...
Validating URL connectivity...
Current configuration:
--------------------------------------
{
"host": "0.0.0.0",
"port": 52226,
"user": "your_username",
"argo_url": "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/chat/",
"argo_stream_url": "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/streamchat/",
"argo_embedding_url": "https://apps.inside.anl.gov/argoapi/api/v1/resource/embed/",
"verbose": true
}
--------------------------------------
# ... proxy server starting info display ...
```
### Configuration Options Reference
| Option | Description | Default |
| -------------------- | ------------------------------------------------------------ | ------------------ |
| `argo_embedding_url` | Argo Embedding API URL | Prod URL |
| `argo_stream_url` | Argo Stream API URL | Dev URL (for now) |
| `argo_url` | Argo Chat API URL | Dev URL (for now) |
| `host` | Host address to bind the server to | `0.0.0.0` |
| `port` | Application port (random available port selected by default) | randomly assigned |
| `user` | Your username | (Set during setup) |
| `verbose` | Debug logging | `true` |
| `real_stream` | Enable real streaming mode (default since v2.7.7) | `true` |
### Streaming Modes: Real Stream vs Pseudo Stream
Argo Proxy supports two streaming modes for chat completions:
#### Real Stream (Default since v2.7.7)
- **Default behavior**: Enabled by default since v2.7.7 (`real_stream: true` or omitted in config)
- **How it works**: Directly streams chunks from the upstream API as they arrive
- **Advantages**:
- True real-time streaming behavior
- Lower latency for streaming responses
- More responsive user experience
- **Recommended for production use**
#### Pseudo Stream
- **Enable via**: Set `real_stream: false` in config file or use `--pseudo-stream` CLI flag
- **How it works**: Receives the complete response from upstream, then simulates streaming by sending chunks to the client
- **Status**: Available for compatibility with previous behavior and function calling
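From the client's point of view the two modes are interchangeable: either way the proxy emits OpenAI-style SSE chunks. As a sketch, a streaming request payload looks like the following (the local address, port, and model name are assumptions; adjust them to your config):

```python
import json

# Hypothetical local proxy address; match it to your argo-proxy config.
BASE_URL = "http://localhost:52226/v1"

payload = {
    "model": "argo:gpt-4o",
    "messages": [{"role": "user", "content": "Write a haiku about streams."}],
    "stream": True,  # real vs pseudo streaming is decided server-side by `real_stream`
}

# To actually send it (requires the httpx package and a running proxy):
# import httpx
# with httpx.stream("POST", f"{BASE_URL}/chat/completions", json=payload, timeout=None) as r:
#     for line in r.iter_lines():
#         if line.startswith("data: ") and line != "data: [DONE]":
#             delta = json.loads(line[6:])["choices"][0]["delta"]
#             print(delta.get("content", ""), end="", flush=True)

print(json.dumps(payload)[:40])
```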
#### Configuration Examples
**Via config file:**
```yaml
# Real streaming (default since v2.7.7; may also be omitted)
real_stream: true
# Or fall back to legacy pseudo streaming
real_stream: false
```
**Via CLI flag:**
```bash
# Use default real streaming (since v2.7.7)
argo-proxy
# Enable legacy pseudo streaming
argo-proxy --pseudo-stream
```
#### Function Calling Behavior
When using function calling (tool calls):
- **Native function calling support**: Available for OpenAI, Anthropic, and Gemini models (Gemini support added in v2.8.0)
- **Real streaming compatible**: Native function calling works with both streaming modes
- **OpenAI format**: All input and output remains in OpenAI format regardless of underlying model
- **Legacy support**: Prompting-based function calling available via `--tool-prompting` flag
### `argo-proxy` CLI Available Options
```bash
$ argo-proxy -h
usage: argo-proxy [-h] [--host HOST] [--port PORT] [--verbose | --quiet]
[--real-stream | --pseudo-stream] [--tool-prompting]
[--edit] [--validate] [--show] [--version]
[config]
Argo Proxy CLI
positional arguments:
config Path to the configuration file
options:
-h, --help show this help message and exit
--host HOST, -H HOST Host address to bind the server to
--port PORT, -p PORT Port number to bind the server to
--verbose, -v Enable verbose logging, override if `verbose` set False in config
--quiet, -q Disable verbose logging, override if `verbose` set True in config
--real-stream, -rs Enable real streaming (default behavior), override if `real_stream` set False in config
--pseudo-stream, -ps Enable pseudo streaming, override if `real_stream` set True or omitted in config
--tool-prompting Enable prompting-based tool calls/function calling, otherwise use native tool calls/function calling
--edit, -e Open the configuration file in the system's default editor for editing
--validate, -vv Validate the configuration file and exit
--show, -s Show the current configuration during launch
--version, -V Show the version and check for updates
```
### Management Utilities
The following options help manage the configuration file:
- `--edit, -e`: Open the configuration file in the system's default editor for editing.
- If no config file is specified, it will search in default locations (~/.config/argoproxy/, ~/.argoproxy/, or current directory)
- Tries common editors like nano, vi, vim (unix-like systems) or notepad (Windows)
- `--validate, -vv`: Validate the configuration file and exit without starting the server.
- Useful for checking config syntax and connectivity before deployment
- `--show, -s`: Show the current configuration during launch.
- Displays the fully resolved configuration including defaults
- Can be used with `--validate` to just display configuration without starting the server
```bash
# Example usage:
argo-proxy --edit # Edit config file
argo-proxy --validate --show # Validate and display config
argo-proxy --show # Show config at startup
```
## Usage
### Endpoints
#### OpenAI Compatible
These endpoints convert responses from the ARGO API to be compatible with OpenAI's format:
- **`/v1/responses`**: Available from v2.7.0. Response API.
- **`/v1/chat/completions`**: Chat Completions API.
- **`/v1/completions`**: Legacy Completions API.
- **`/v1/embeddings`**: Embedding API.
- **`/v1/models`**: Lists available models in OpenAI-compatible format.
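As a sketch, a non-streaming request to `/v1/chat/completions` looks like any other OpenAI-compatible call. The host, port, model name, and dummy API key below are assumptions to adjust for your setup:

```python
import json

BASE_URL = "http://localhost:52226/v1"  # hypothetical; match your config

payload = {
    "model": "argo:gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
}

# With a running proxy (requires the openai package; the key value is a
# placeholder since authentication is handled by the proxy's `user` setting):
# from openai import OpenAI
# client = OpenAI(base_url=BASE_URL, api_key="not-needed")
# resp = client.chat.completions.create(**payload)
# print(resp.choices[0].message.content)

print(json.dumps(payload, indent=2))
```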
#### Not OpenAI Compatible
These endpoints interact directly with the ARGO API and do not convert responses to OpenAI's format:
- **`/v1/chat`**: Proxies requests to the ARGO API without conversion.
- **`/v1/embed`**: Proxies requests to the ARGO Embedding API without conversion.
#### Utility Endpoints
- **`/health`**: Health check endpoint. Returns `200 OK` if the server is running.
- **`/version`**: Returns the version of the ArgoProxy server. Notifies if a new version is available. Available from 2.7.0.post1.
#### Timeout Override
You can override the default timeout with a `timeout` parameter in your request. The parameter is optional; the proxy server keeps the connection open until the request finishes or the client disconnects.
Details on how to make this override in different request flavors: [Timeout Override Examples](timeout_examples.md)
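For instance, with a raw JSON request the override is simply an extra field in the body. The field placement and units shown here are assumptions based on the description above; consult the linked examples for the exact flavor you use:

```python
payload = {
    "model": "argo:gpt-4o",
    "messages": [{"role": "user", "content": "Work through a long derivation."}],
    "timeout": 300,  # assumed request-level override (seconds)
}

# The proxy keeps the upstream connection open until the request
# finishes or the client disconnects, so a long override is safe.
print(payload["timeout"])
```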
### Models
#### Chat Models
##### OpenAI Series
| Original ARGO Model Name | Argo Proxy Name |
| ------------------------ | ---------------------------------------- |
| `gpt35` | `argo:gpt-3.5-turbo` |
| `gpt35large` | `argo:gpt-3.5-turbo-16k` |
| `gpt4` | `argo:gpt-4` |
| `gpt4large` | `argo:gpt-4-32k` |
| `gpt4turbo` | `argo:gpt-4-turbo` |
| `gpt4o` | `argo:gpt-4o` |
| `gpt4olatest` | `argo:gpt-4o-latest` |
| `gpto1preview` | `argo:gpt-o1-preview`, `argo:o1-preview` |
| `gpto1mini` | `argo:gpt-o1-mini`, `argo:o1-mini` |
| `gpto3mini` | `argo:gpt-o3-mini`, `argo:o3-mini` |
| `gpto1` | `argo:gpt-o1`, `argo:o1` |
| `gpto3` | `argo:gpt-o3`, `argo:o3` |
| `gpto4mini` | `argo:gpt-o4-mini`, `argo:o4-mini` |
| `gpt41` | `argo:gpt-4.1` |
| `gpt41mini` | `argo:gpt-4.1-mini` |
| `gpt41nano` | `argo:gpt-4.1-nano` |
##### Google Gemini Series
| Original ARGO Model Name | Argo Proxy Name |
| ------------------------ | ----------------------- |
| `gemini25pro` | `argo:gemini-2.5-pro` |
| `gemini25flash` | `argo:gemini-2.5-flash` |
##### Anthropic Claude Series
| Original ARGO Model Name | Argo Proxy Name |
| ------------------------ | -------------------------------------------------- |
| `claudeopus4` | `argo:claude-opus-4`, `argo:claude-4-opus` |
| `claudesonnet4` | `argo:claude-sonnet-4`, `argo:claude-4-sonnet` |
| `claudesonnet37` | `argo:claude-sonnet-3.7`, `argo:claude-3.7-sonnet` |
| `claudesonnet35v2` | `argo:claude-sonnet-3.5`, `argo:claude-3.5-sonnet` |
#### Embedding Models
| Original ARGO Model Name | Argo Proxy Name |
| ------------------------ | ----------------------------- |
| `ada002` | `argo:text-embedding-ada-002` |
| `v3small` | `argo:text-embedding-3-small` |
| `v3large` | `argo:text-embedding-3-large` |
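A sketch of an embeddings request using one of the proxy model names above (host, port, and dummy key are hypothetical):

```python
import json

payload = {
    "model": "argo:text-embedding-3-small",
    "input": ["The quick brown fox", "jumps over the lazy dog"],
}

# With a running proxy (requires the openai package):
# from openai import OpenAI
# client = OpenAI(base_url="http://localhost:52226/v1", api_key="not-needed")
# resp = client.embeddings.create(**payload)
# print(len(resp.data), len(resp.data[0].embedding))

print(json.dumps(payload))
```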
### Tool Calls
The tool calls (function calling) interface has been available since version v2.7.5.alpha1, now with **native function calling support**.
#### Native Function Calling Support
- **OpenAI models**: Full native function calling support
- **Anthropic models**: Full native function calling support
- **Gemini models**: Full native function calling support (added in v2.8.0)
- **OpenAI format**: All input and output remains in OpenAI format regardless of underlying model
#### Availability
- Available on both streaming and non-streaming **chat completion** endpoints
- Only supported on the `/v1/chat/completions` endpoint
- The Argo passthrough endpoint (`/v1/chat`) and the response endpoint (`/v1/chat/response`) are not yet supported due to limited development time
- Legacy completion endpoints (`/v1/completions`) do not support tool calling
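A sketch of what an OpenAI-format tool definition looks like in a request to `/v1/chat/completions`. The tool itself (`get_weather`) and the host/port are hypothetical:

```python
import json

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "argo:claude-sonnet-4",  # tool calls stay in OpenAI format for any model
    "messages": [{"role": "user", "content": "What's the weather in Chicago?"}],
    "tools": tools,
}

# The response also comes back in OpenAI format: the assistant message
# carries `tool_calls` entries with JSON-encoded arguments.
print(json.dumps(payload["tools"][0]["function"]["name"]))
```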
#### Tool Call Examples
- **Function Calling OpenAI Client**: [function_calling_chat.py](examples/openai_client/function_calling_chat.py)
- **Function Calling Raw Request**: [function_calling_chat.py](examples/raw_requests/function_calling_chat.py)
For more usage details, refer to the [OpenAI documentation](https://platform.openai.com/docs/guides/function-calling).
#### ToolRegistry
A lightweight yet powerful Python helper library is available for various tool handling: [ToolRegistry](https://github.com/Oaklight/ToolRegistry). It works with any OpenAI-compatible API, including Argo Proxy starting from version v2.7.5.alpha1.
### Examples
#### Raw Requests
For examples of how to use the raw request utilities (e.g., `httpx`, `requests`), refer to:
##### Direct Access to ARGO
- **Direct Chat Example**: [argo_chat.py](examples/raw_requests/argo_chat.py)
- **Direct Chat Stream Example**: [argo_chat_stream.py](examples/raw_requests/argo_chat_stream.py)
- **Direct Embedding Example**: [argo_embed.py](examples/raw_requests/argo_embed.py)
##### OpenAI Compatible Requests
- **Chat Completions Example**: [chat_completions.py](examples/raw_requests/chat_completions.py)
- **Chat Completions Stream Example**: [chat_completions_stream.py](examples/raw_requests/chat_completions_stream.py)
- **Legacy Completions Example**: [legacy_completions.py](examples/raw_requests/legacy_completions.py)
- **Legacy Completions Stream Example**: [legacy_completions_stream.py](examples/raw_requests/legacy_completions_stream.py)
- **Responses Example**: [responses.py](examples/raw_requests/responses.py)
- **Responses Stream Example**: [responses_stream.py](examples/raw_requests/responses_stream.py)
- **Embedding Example**: [embedding.py](examples/raw_requests/embedding.py)
- **o1 Mini Chat Completions Example**: [o1_mini_chat_completions.py](examples/raw_requests/o1_mini_chat_completions.py)
#### OpenAI Client
For examples demonstrating the use case of the OpenAI client (`openai.OpenAI`), refer to:
- **Chat Completions Example**: [chat_completions.py](examples/openai_client/chat_completions.py)
- **Chat Completions Stream Example**: [chat_completions_stream.py](examples/openai_client/chat_completions_stream.py)
- **Legacy Completions Example**: [legacy_completions.py](examples/openai_client/legacy_completions.py)
- **Legacy Completions Stream Example**: [legacy_completions_stream.py](examples/openai_client/legacy_completions_stream.py)
- **Responses Example**: [responses.py](examples/openai_client/responses.py)
- **Responses Stream Example**: [responses_stream.py](examples/openai_client/responses_stream.py)
- **Embedding Example**: [embedding.py](examples/openai_client/embedding.py)
- **O3 Mini Simple Chatbot Example**: [o3_mini_simple_chatbot.py](examples/openai_client/o3_mini_simple_chatbot.py)
## Bug Reports and Contributions
This project is developed in my spare time. Bugs and issues may exist. If you encounter any or have suggestions for improvements, please [open an issue](https://github.com/Oaklight/argo-proxy/issues/new) or [submit a pull request](https://github.com/Oaklight/argo-proxy/compare). Your contributions are highly appreciated!
| text/markdown | null | Peng Ding <oaklight@gmx.com> | null | null | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.12.2",
"PyYAML>=6.0.2",
"pydantic>=2.11.7",
"tiktoken>=0.9.0",
"tqdm>=4.67.1",
"packaging>=25.0",
"Pillow>=12.0.0",
"dotenv>=0.9.9; extra == \"dev\"",
"openai>=1.79.0; extra == \"dev\"",
"pyright>=1.1.402; extra == \"dev\"",
"build>=1.2.2.post1; extra == \"dev\"",
"twine>=6.1.0; ex... | [] | [] | [] | [
"Documentation, https://github.com/Oaklight/argo-openai-proxy#readme",
"Repository, https://github.com/Oaklight/argo-openai-proxy",
"Issues, https://github.com/Oaklight/argo-openai-proxy/issues"
] | twine/6.1.0 CPython/3.10.17 | 2026-02-18T08:33:10.676661 | argo_proxy-2.8.6.tar.gz | 118,031 | c7/3f/4a253d3a8aa6535331414eb8ef8e11b1ff7baebe44d274571cde3b78b8f3/argo_proxy-2.8.6.tar.gz | source | sdist | null | false | a65191b634ff60ed650699ab3aea604a | fa2c8a3a825bfa04fa0f4bad193c42406711784f484f3f019021500401523e08 | c73f4a253d3a8aa6535331414eb8ef8e11b1ff7baebe44d274571cde3b78b8f3 | MIT | [
"LICENSE"
] | 315 |
2.1 | openstarlab-preprocessing | 0.1.54 | openstarlab preprocessing package | # OpenSTARLab PreProcessing package
[](https://openstarlab.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.org/project/openstarlab-preprocessing/)
[](https://arxiv.org/abs/2502.02785)
[](https://discord.gg/PnH2MDDaCf)
[](https://youtu.be/R5IgtuaMEt8)
## Introduction
The OpenSTARLab PreProcessing package is a core component of the OpenSTARLab suite, providing essential data preprocessing functions. It is designed with minimal dependencies and aims to avoid dependencies that are heavily version-dependent.
This package is continuously evolving to support future OpenSTARLab projects. If you have any suggestions or encounter any bugs, please feel free to open an issue.
## Installation
- To install this package via PyPI
```
pip install openstarlab-preprocessing
```
- To install manually
```
git clone git@github.com:open-starlab/PreProcessing.git
cd ./PreProcessing
pip install -e .
```
## Current Features
### Sports
#### Event Data
- [Event data in Football/Soccer ⚽](https://github.com/open-starlab/PreProcessing/blob/master/preprocessing/sports/event_data/soccer/README.md)
- [Event data in Rocket League 🚀](https://github.com/open-starlab/PreProcessing/blob/master/preprocessing/sports/event_data/rocket_league/README.md)
#### SAR (State-Action-Reward) Data
- [SAR data in Football/Soccer ⚽](https://github.com/open-starlab/PreProcessing/blob/master/preprocessing/sports/SAR_data/soccer/README.md)
#### Space Data
- [Space data in Basketball 🏀](https://github.com/open-starlab/PreProcessing/blob/master/preprocessing/sports/space_data/basketball/README.md)
- [Space data in Football/Soccer ⚽](https://github.com/open-starlab/PreProcessing/blob/master/preprocessing/sports/space_data/soccer/README.md)
- [Space data in Ultimate 🥏](https://github.com/open-starlab/PreProcessing/blob/master/preprocessing/sports/space_data/ultimate/README.md)
#### Phase Data
- [Phase data in Football/Soccer ⚽](https://github.com/open-starlab/PreProcessing/blob/master/preprocessing/sports/phase_data/soccer/README.md)
## RoadMap
- [x] Release the package
- [ ] Incorporate more functions
## Developer
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<!-- [](#contributors-) -->
<!-- ALL-CONTRIBUTORS-BADGE:END -->
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/calvinyeungck"><img src="https://github.com/calvinyeungck.png" width="100px;" alt="Calvin Yeung"/><br /><sub><b>Calvin Yeung</b></sub></a><br /><a href="#Developer-CalvinYeung" title="Lead Developer">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/kenjiro-mk"><img src="https://github.com/kenjiro-mk.png" width="100px;" alt="Kenjiro Ide"/><br /><sub><b>Kenjiro Ide</b></sub></a><br /><a href="#Developer-KenjiroIde" title="Developer">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/czzzzz129"><img src="https://github.com/czzzzz129.png" width="100px;" alt="Zheng Chen"/><br /><sub><b>Zheng Chen</b></sub></a><br /><a href="#Developer-ZhengChen" title="Developer">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/KentoKuroda"><img src="https://github.com/KentoKuroda.png" width="100px;" alt="Kento Kuroda"/><br /><sub><b>Kento Kuroda</b></sub></a><br /><a href="#Developer-KentoKuroda" title="Lead Developer">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/shunsuke-iwashita"><img src="https://github.com/shunsuke-iwashita.png" width="100px;" alt="Shunsuke Iwashita"/><br /><sub><b>Shunsuke Iwashita</b></sub></a><br /><a href="#Developer-ShunsukeIwashita" title="Developer">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/keisuke198619"><img src="https://github.com/keisuke198619.png" width="100px;" alt="Keisuke Fujii"/><br /><sub><b>Keisuke Fujii</b></sub></a><br /><a href="#lead-KeisukeFujii" title="Team Leader">🧑💻</a></td>
</tr>
</tbody>
</table>
| text/markdown | null | Calvin Yeung <yeung.chikwong@g.sp.m.is.nagoya-u.ac.jp> | null | null | Apache License 2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"ai2-tango",
"pandas>=1.3.0",
"numpy>=1.23.0",
"statsbombpy>=0.0.1",
"matplotlib>=3.7.5",
"tqdm",
"chardet>=5.2.0",
"jsonlines>=3.1.0",
"bidict>=0.22.1",
"pydantic>=2.9.0",
"scipy==1.10.1",
"shapely>=2.0.0",
"pyarrow<20.0.0,>=4.0.0",
"datasets>=3.0.1",
"gdown"
] | [] | [] | [] | [
"Homepage, https://github.com/open-starlab/PreProcessing"
] | twine/5.1.1 CPython/3.8.19 | 2026-02-18T08:31:13.541958 | openstarlab_preprocessing-0.1.54.tar.gz | 159,449 | 91/d3/3c2d8736fe4726b3f720ceef60400a95b3ec6f0f2061513051703b31aca8/openstarlab_preprocessing-0.1.54.tar.gz | source | sdist | null | false | 898972409036a6c52b104364fd1d9336 | 229ecaabd48cdba255cc2d05bb5b5bb57d40a73ab96d8fc1889e6f96a7a90ef6 | 91d33c2d8736fe4726b3f720ceef60400a95b3ec6f0f2061513051703b31aca8 | null | [] | 257 |
2.4 | vibengine | 2.13.2 | Vibengine SDK for secure sandboxed cloud environments | # Vibengine Python SDK
Python SDK for managing secure Vibengine sandboxes.
## Install
```bash
pip install vibengine
```
## Quickstart
```python
from vibengine import Sandbox

with Sandbox.create(timeout=60) as sandbox:
    result = sandbox.commands.run("echo hello")
    print(result.stdout)
```
## Async
```python
import asyncio

from vibengine import AsyncSandbox

async def main():
    async with await AsyncSandbox.create(timeout=60) as sandbox:
        result = await sandbox.commands.run("echo hello")
        print(result.stdout)

asyncio.run(main())
```
## Compatibility
- New import path: `from vibengine import ...`
- Backward compatibility path is still available: `from e2b import ...`
| text/markdown | vibengine | hello@vibengine.ai | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"attrs>=23.2.0",
"dockerfile-parse<3.0.0,>=2.0.1",
"httpcore<2.0.0,>=1.0.5",
"httpx<1.0.0,>=0.27.0",
"packaging>=24.1",
"protobuf>=4.21.0",
"python-dateutil>=2.8.2",
"rich>=14.0.0",
"typing-extensions>=4.1.0",
"wcmatch<11.0,>=10.1"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/vibengine-ai/vibengine-sdk/issues",
"Homepage, https://vibengine.ai/",
"Repository, https://github.com/vibengine-ai/vibengine-sdk/tree/main/packages/python-sdk"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T08:30:30.204886 | vibengine-2.13.2.tar.gz | 139,844 | 3f/04/7e55497ba8fe60c3efea60fd71fbf5cd6b62a19129827cee9dc8971d6063/vibengine-2.13.2.tar.gz | source | sdist | null | false | 0142e27ecc9948c6f505dd9c941385f4 | dcfbbf1c4bb744237ee12fc608c1dd46ebf94ceb51e3f66fe553c51180ac7e74 | 3f047e55497ba8fe60c3efea60fd71fbf5cd6b62a19129827cee9dc8971d6063 | null | [
"LICENSE"
] | 377 |
2.4 | MultiOptPy | 1.20.9 | Multifunctional geometry optimization tools for quantum chemical calculations. | # MultiOptPy
[](https://colab.research.google.com/drive/1wpW8YO8r9gq20GACyzdaEsFK4Va1JQs4?usp=sharing) (Test 1, only use GFN2-xTB)
[](https://colab.research.google.com/drive/1lfvyd7lv6ChjRC7xfPdrBZtGME4gakhz?usp=sharing) (Test 2, GFN2-xTB + PySCF(HF/STO-3G))
[](https://buymeacoffee.com/ss0832)
[](https://pepy.tech/projects/multioptpy)
[](https://doi.org/10.5281/zenodo.18529521)
If this tool helped your studies, education, or saved your time, I'd appreciate a coffee!
Your support serves as a great encouragement for this personal project and fuels my next journey.
I also welcome contributions, bug reports, and pull requests to improve this tool.
Note on Contributions: While bug reports and pull requests are welcome, please note that this is a personal project maintained in my spare time. Responses to issues and PRs may be delayed or not guaranteed. I appreciate your patience and understanding.
Multifunctional geometry optimization tools for quantum chemical calculations
This program implements many geometry optimization methods in Python for learning purposes.
This program can also automatically calculate the transition-state structure from a single equilibrium geometry.
**Notice:** This program has NOT been experimentally validated in laboratory settings. I release this code to enable community contributions and collaborative development. Use at your own discretion and validate results independently.
Instructions on how to use (note: most of these pages are written in Japanese):
- https://ss0832.github.io/
- https://ss0832.github.io/posts/20251130_mop_usage_menschutkin_reaction_uma_en/ (In English, auto-translated)
## Video Demo
[](https://www.youtube.com/watch?v=AE61iY2HZ8Y)
## Features
- It is intended to be used in a linux environment.
- It can be used not only with AFIR functions, but also with other bias potentials.
## Quick Start (for Linux)
```
# Below is an example showing how to use GFN2-xTB to calculate a transition-state structure.
# These commands are intended for users who want a straightforward, ready-to-run setup on Linux.
## 1. Download and install Anaconda:
cd ~
wget https://repo.anaconda.com/archive/Anaconda3-2025.06-1-Linux-x86_64.sh
bash Anaconda3-2025.06-1-Linux-x86_64.sh
source .bashrc
# if the conda command is not available, you need to manually add Anaconda to your PATH:
# (example command) echo 'export PATH="$HOME/anaconda3/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
## 2. Create and activate a conda environment:
conda create -n test_mop python=3.12.7
conda activate test_mop
## 3. Download and install MultiOptPy:
wget https://github.com/ss0832/MultiOptPy/archive/refs/tags/v1.20.5.zip
unzip v1.20.5.zip
cd MultiOptPy-1.20.5
pip install -r requirements.txt
## 4. Copy the test configuration file and run the AutoTS workflow:
cp test/config_autots_run_xtb_test.json .
python run_autots.py aldol_rxn.xyz -cfg config_autots_run_xtb_test.json
# Installation via environment.yml (Linux / conda-forge)
## 1. Download and install MultiOptPy:
git clone -b stable-v1.0 https://github.com/ss0832/MultiOptPy.git
cd MultiOptPy
## 2. Create and activate a conda environment:
conda env create -f environment.yml
conda activate test_mop
## 3. Copy the test configuration file and run the AutoTS workflow:
cp test/config_autots_run_xtb_test.json .
python run_autots.py aldol_rxn.xyz -cfg config_autots_run_xtb_test.json
# Installation via pip (Linux)
conda create -n <env-name> python=3.12 pip
conda activate <env-name>
pip install git+https://github.com/ss0832/MultiOptPy.git@v1.20.5
wget https://github.com/ss0832/MultiOptPy/archive/refs/tags/v1.20.5.zip
unzip v1.20.5.zip
cd MultiOptPy-1.20.5
## 💻 Command Line Interface (CLI) Functionality (v1.20.2)
# The following eight core functionalities are available as direct executable commands in your terminal after installation:
# optmain (Logic from optmain.py):
# Function: Executes the Core Geometry Optimization functionality.
# nebmain (Logic from nebmain.py):
# Function: Executes the Nudged Elastic Band (NEB) path optimization tool for transition state searches.
# confsearch (Logic from conformation_search.py):
# Function: Utilizes the comprehensive Conformational Search routine.
# run_autots (Logic from run_autots.py):
# Function: Launches the Automated Transition State (AutoTS) workflow.
# mdmain (Logic from mdmain.py):
# Function: Initiates Molecular Dynamics (MD) simulation functionality.
# relaxedscan (Logic from relaxed_scan.py):
# Function: Executes the Relaxed Potential Energy Surface (PES) Scanning functionality.
# orientsearch (Logic from orientation_search.py):
# Function: Executes the molecular Orientation Sampling and Search utility.
```
## Required Modules
```
cd <directory of repository files>
pip install -r requirements.txt
```
- psi4 (Official page:https://psicode.org/) or PySCF
- numpy
- matplotlib
- scipy
- pytorch (for calculating derivatives)
Optional
- tblite (If you use extended tight binding (xTB) method, this module is required.)
- dxtb (same as above)
- ASE
## References
References are given in the source code.
## Usage
After downloading the repository using git clone or similar commands, move to the generated directory and run the following:
python command
```
python optmain.py SN2.xyz -ma 150 1 6 -pyscf -elec -1 -spin 0 -opt rsirfo_block_fsb -modelhess
```
CLI command (arbitrary directory)
```
optmain SN2.xyz -ma 150 1 6 -pyscf -elec -1 -spin 0 -opt rsirfo_block_fsb -modelhess
```
python command
```
python optmain.py aldol_rxn.xyz -ma 95 1 5 50 3 11 -pyscf -elec 0 -spin 0 -opt rsirfo_block_fsb -modelhess
```
CLI command (arbitrary directory)
```
optmain aldol_rxn.xyz -ma 95 1 5 50 3 11 -pyscf -elec 0 -spin 0 -opt rsirfo_block_fsb -modelhess
```
For SADDLE calculation
python command
```
python optmain.py aldol_rxn_PT.xyz -xtb GFN2-xTB -opt rsirfo_block_bofill -order 1 -fc 5
```
CLI command (arbitrary directory)
```
optmain aldol_rxn_PT.xyz -xtb GFN2-xTB -opt rsirfo_block_bofill -order 1 -fc 5
```
##### For NEB method
python command
```
python nebmain.py aldol_rxn -xtb GFN2-xTB -ns 50 -adpred 1 -nd 0.5
```
CLI command (arbitrary directory)
```
nebmain aldol_rxn -xtb GFN2-xTB -ns 50 -adpred 1 -nd 0.5
```
##### For iEIP method
python command
```
python ieipmain.py ieip_test -xtb GFN2-xTB
```
CLI command (arbitrary directory)
```
ieipmain ieip_test -xtb GFN2-xTB
```
##### For Molecular Dynamics (MD)
python command
```
python mdmain.py aldol_rxn_PT.xyz -xtb GFN2-xTB -temp 298 -traj 1 -time 100000
```
CLI command (arbitrary directory)
```
mdmain aldol_rxn_PT.xyz -xtb GFN2-xTB -temp 298 -traj 1 -time 100000
```
(The default thermostat for MD is the deterministic Nosé–Hoover thermostat.)
For orientation search
```
python orientation_search.py aldol_rxn.xyz -part 1-4 -ma 95 1 5 50 3 11 -nsample 5 -xtb GFN2-xTB -opt rsirfo_block_fsb -modelhess
```
For conformation search
```
python conformation_search.py s8_for_confomation_search_test.xyz -xtb GFN2-xTB -ns 2000
```
For relaxed scan (Similar to functions implemented in Gaussian)
```
python relaxed_scan.py SN2.xyz -nsample 8 -scan bond 1,2 1.3,2.6 -elec -1 -spin 0 -xtb GFN2-xTB -opt crsirfo_block_fsb -modelhess
```
## Options
(optmain.py)
**`-opt`**
Specify the algorithm to be used for structural optimization.
example 1) `-opt FIRE`.
Perform structural optimization using the FIRE method.
Available optimization methods:
Recommended optimization methods:
- FIRE (Robust method)
- TR_LBFGS (Limited-memory BFGS method with trust radius method, Faster convergence than FIRE without Hessian)
- rsirfo_block_fsb
- rsirfo_block_bofill (for calculation of saddle point)
`-ma`
Add the potential by AFIR function.
`energy (kJ/mol) atom-or-fragment-1 atom-or-fragment-2 ...`
Example 1) `-ma 195 1 5`
Apply a potential of 195 kJ/mol (pushing force) to the first atom and the fifth atom as a pair.
Example 2) `-ma 195 1 5 195 3 11`
Apply a 195 kJ/mol potential (pushing force) to the pair of the 1st and 5th atoms, and another 195 kJ/mol potential (pushing force) to the pair of the 3rd and 11th atoms.
Example 3) `-ma -195 1-3 5,6`
Apply a -195 kJ/mol potential (pulling force) between the fragment consisting of the 1st-3rd atoms and the fragment consisting of the 5th and 6th atoms.
`-bs`
Specifies the basis function. The default is 6-31G*.
Example 1) `-bs 6-31G*`
Calculate using 6-31G* as the basis function.
Example 2) `-bs sto-3g`
Calculate using STO-3G as the basis function.
`-func`
Specify the functionals in the DFT (specify the calculation method). The default is b3lyp.
Example 1) `-func b3lyp`
Calculate using B3LYP as the functional.
Example 2) `-func hf`
Calculate using the Hartree-Fock method.
`-sub_bs`
Specify a specific basis function for a given atom.
Example 1) `-sub_bs I LanL2DZ`
Assign the basis function LanL2DZ to the iodine atom, and if -bs is the default, assign 6-31G* to non-iodine atoms for calculation.
`-ns`
Specifies the maximum number of times the gradient is calculated for structural optimization. The default is a maximum of 300 calculations.
Example 1) `-ns 400`
Calculate gradient up to 400 iterations.
`-core`
Specify the number of CPU cores to be used in the calculation. By default, 8 cores are used. (Adjust according to your own environment.)
Example 1) `-core 4`
Calculate using 4 CPU cores.
`-mem`
Specify the memory to be used for calculations. The default is 1GB. (Adjust according to your own environment.)
Example 1) `-mem 2GB`
Calculate using 2GB of memory.
`-d`
Specifies the step size used after each gradient calculation. Larger values converge faster but follow the potential energy surface less carefully.
Example 1) `-d 0.05`
`-kp`
Apply the potential calculated from the following equation (a harmonic-approximation potential) to a pair of atoms. Use this when you want to keep the distance between two atoms approximately fixed.
$V(r) = 0.5k(r - r_0)^2$
`spring const. k (a.u.) keep distance r_0 (ang.) atom1,atom2 ...`
Example 1) `-kp 2.0 1.0 1,2`
Apply harmonic approximation potentials to the 1st and 2nd atoms with spring constant 2.0 a.u. and equilibrium distance 1.0 Å.
`-akp`
The potential calculated from the following equation (an anharmonic Morse potential) is applied to a pair of atoms. Use this when you want to keep the distance between two atoms approximately fixed. Unlike `-kp`, the depth of the potential well is adjustable.
$V(r) = D_e \left[1 - \exp\left(-\sqrt{\frac{k}{2D_e}}\,(r - r_0)\right)\right]^2$
`potential well depth (a.u.) spring const. (a.u.) keep distance (ang.) atom1,atom2 ...`
Example 1) `-akp 2.0 2.0 1.0 1,2`
Apply the anharmonic (Morse) potential to the 1st and 2nd atoms with a well depth of 2.0 a.u., a spring constant of 2.0 a.u., and an equilibrium distance of 1.0 Å.
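The two bias potentials above are easy to sanity-check numerically. The following sketch evaluates both in plain Python (it is not part of MultiOptPy):

```python
import math

def harmonic(r, k, r0):
    """-kp bias: V(r) = 0.5 * k * (r - r0)**2  (k in a.u., r in ang.)."""
    return 0.5 * k * (r - r0) ** 2

def morse(r, De, k, r0):
    """-akp bias: V(r) = De * (1 - exp(-sqrt(k / (2*De)) * (r - r0)))**2."""
    a = math.sqrt(k / (2.0 * De))
    return De * (1.0 - math.exp(-a * (r - r0))) ** 2

# Both vanish at the equilibrium distance r0...
print(harmonic(1.0, k=2.0, r0=1.0))         # 0.0
print(morse(1.0, De=2.0, k=2.0, r0=1.0))    # 0.0
# ...but the Morse well flattens out to the well depth De at large
# separation, while the harmonic term keeps growing.
print(morse(100.0, De=2.0, k=2.0, r0=1.0))  # 2.0 (= De)
```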
`-ka`
The potential calculated from the following equation (a harmonic-approximation potential) is applied to a group of three atoms. Use this when you want to fix the angle (bond angle) formed by the three atoms to some extent.
$V(\theta) = 0.5k(\theta - \theta_0)^2$
`spring const.(a.u.) keep angle (degrees) atom1,atom2,atom3`
Example 1) `-ka 2.0 60 1,2,3`
Assuming a spring constant of 2.0 a.u. and an equilibrium angle of 60 degrees, apply a potential so that the angle between the first, second, and third atoms approaches 60 degrees.
`-kda`
The potential calculated from the following equation (a harmonic-approximation potential) is applied to a group of four atoms to keep their dihedral angle approximately fixed.
$V(\phi) = 0.5k(\phi - \phi_0)^2$
`spring const.(a.u.) keep dihedral angle (degrees) atom1,atom2,atom3,atom4 ...`
Example 1) `-kda 2.0 60 1,2,3,4`
With a spring constant of 2.0 a.u. and an equilibrium angle of 60 degrees, apply a potential so that the dihedral angles of the planes formed by the 1st, 2nd, and 3rd atoms and the 2nd, 3rd, and 4th atoms approach 60 degrees.
`-xtb`
Use the extended tight-binding (xTB) method. (Requires the tblite Python module.)
Example 1) `-xtb GFN2-xTB`
Use GFN2-xTB method to optimize molecular structure.
- Other options are experimental.
## Author
Author of this program is ss0832.
## License
GNU Affero General Public License v3.0
## Contact
highlighty876[at]gmail.com
## Citation
If you use MultiOptPy in your research, please cite it as follows:
```bibtex
@software{ss0832_multioptpy_2025,
author = {ss0832},
title = {MultiOptPy: Multifunctional geometry optimization tools for quantum chemical calculations},
month = dec,
year = 2025,
publisher = {Zenodo},
version = {v1.20.8},
doi = {10.5281/zenodo.18529521},
url = {https://doi.org/10.5281/zenodo.18529521}
}
```
```
ss0832. (2025). MultiOptPy: Multifunctional geometry optimization tools for quantum chemical calculations (v1.20.8). Zenodo. https://doi.org/10.5281/zenodo.18529521
```
## Setting Up an Environment for Using NNP(UMA) on Windows 11
### 1. Install Anaconda
Download and install **Anaconda3-2025.06-1-Windows-x86_64.exe** from:
[https://repo.anaconda.com/archive/](https://repo.anaconda.com/archive/)
### 2. Launch the Anaconda PowerShell Prompt
Open **"Anaconda PowerShell Prompt"** from the Windows Start menu.
### 3. Create a New Virtual Environment
```
conda create -n <env_name> python=3.12.7
```
### 4. Activate the Environment
```
conda activate <env_name>
```
### 5. Install Required Libraries
```
pip install ase==3.26.0 fairchem-core==2.7.1 torch==2.6.0
```
* **fairchem-core**: Required for running NNP models provided by FAIR Chemistry.
* **ase**: Interface for passing molecular structures to the NNP.
* **torch**: PyTorch library for neural network execution.
---
## Setting Up the Model File (.pt) for Your NNP Library
### 1. Download the Model File
Download **uma-s-1p1.pt** from the following page:
[https://huggingface.co/facebook/UMA](https://huggingface.co/facebook/UMA)
(Ensure that you have permission to use the file.)
### 2. Add the Model Path to MultiOptPy
Open the file `software_path.conf` inside the **MultiOptPy** directory.
Add the following line using the absolute path to the model file:
```
uma-s-1p1::<absolute_path_to/uma-s-1p1.pt>
```
This enables **MultiOptPy** to use the **uma-s-1p1 NNP model**.
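The `software_path.conf` format shown above is simply one `name::absolute_path` entry per line. A minimal parser for that format could look like the sketch below; the function name is hypothetical and this is not MultiOptPy's actual loader, only an illustration of the `::` convention.

```python
def parse_software_paths(text):
    """Parse `software_path.conf`-style text: one `name::absolute_path` per line.

    Splitting on the *first* `::` keeps Windows drive letters (e.g. `C:`)
    inside the path intact; blank or malformed lines are skipped.
    """
    paths = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "::" not in line:
            continue
        name, _, path = line.partition("::")
        paths[name.strip()] = path.strip()
    return paths

conf = "uma-s-1p1::C:/models/uma-s-1p1.pt\n"
print(parse_software_paths(conf))  # {'uma-s-1p1': 'C:/models/uma-s-1p1.pt'}
```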
### References for UMA
- arXiv preprint arXiv:2505.08762 (2025).
- https://github.com/facebookresearch/fairchem
## Creating the conda Environment for UMA (NNP) on Windows 11
```
conda env create -f environment_win11uma.yml
conda activate test_mop_win11_uma
```
| text/markdown | null | ss0832 <highlighty876@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy~=2.2.0",
"scipy~=1.13.0",
"matplotlib~=3.10.0",
"torch~=2.6.0",
"pyscf~=2.9.0",
"tblite~=0.4.0",
"ase~=3.26.0",
"fairchem-core~=2.7.0",
"sympy~=1.13.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T08:30:20.339937 | multioptpy-1.20.9.tar.gz | 709,000 | 1f/7a/c9939c77fa25d99c9df4dfba18a83f9f46e36ce8a0720ddb8fd7872d78c4/multioptpy-1.20.9.tar.gz | source | sdist | null | false | 71bcb75cb56a0246c35ad86f369c4f4d | 478e7599fec33b0b82208f0fdb0bfbbf4b55bd0e1a93f11faa23b70ab43afd2e | 1f7ac9939c77fa25d99c9df4dfba18a83f9f46e36ce8a0720ddb8fd7872d78c4 | GPL-3.0-or-later | [
"LICENSE"
] | 0 |
2.4 | hypercli-cli | 0.8.13 | CLI for HyperCLI - GPU orchestration and LLM API | # hypercli-cli
Command-line interface for HyperCLI jobs, flows, x402 pay-per-use launches, and HyperClaw checkout tooling.
## Install
```bash
pip install hypercli-cli
```
## Configure
```bash
hyper configure
```
## Core Commands
```bash
# GPU discovery and launch
hyper instances list
hyper instances launch nvidia/cuda:12.6.3-base-ubuntu22.04 -g l4 -c "nvidia-smi"
# x402 pay-per-use GPU launch
hyper instances launch nvidia/cuda:12.6.3-base-ubuntu22.04 -g l4 -c "nvidia-smi" --x402 --amount 0.01
# Job lifecycle
hyper jobs list
hyper jobs logs <job_id>
hyper jobs metrics <job_id>
# Flows (recommended media path)
hyper flow text-to-image "a cinematic portrait"
hyper flow text-to-image "a cinematic portrait" --x402
# HyperClaw checkout/config
hyper claw plans
hyper claw subscribe 1aiu
hyper claw config env
```
## Notes
- The `hyper llm` command surface has been removed.
- For inference setup, use HyperClaw (`hyper claw config ...`) and your agent/client's OpenAI-compatible configuration.
| text/markdown | null | HyperCLI <support@hypercli.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"hypercli-sdk>=0.8.9",
"mutagen>=1.47.0",
"pyyaml>=6.0",
"rich>=14.2.0",
"typer>=0.20.0",
"websocket-client>=1.6.0",
"argon2-cffi>=25.0.0; extra == \"all\"",
"eth-account>=0.13.0; extra == \"all\"",
"hypercli-sdk[comfyui]>=0.8.9; extra == \"all\"",
"web3>=7.0.0; extra == \"all\"... | [] | [] | [] | [
"Homepage, https://hypercli.com",
"Documentation, https://docs.hypercli.com",
"Repository, https://github.com/HyperCLI/hypercli"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T08:29:46.922703 | hypercli_cli-0.8.13.tar.gz | 43,096 | d0/df/5f9ca5a36e667ae1f0a83f40f4e60f9e715912c2dc635ade64611a275042/hypercli_cli-0.8.13.tar.gz | source | sdist | null | false | 48beb9078bc87004a340a8dee3aac83b | c5f455d25f937a79bdb9c37910f8a383d76edd5ecffa71b956e5d5c290dbe6d4 | d0df5f9ca5a36e667ae1f0a83f40f4e60f9e715912c2dc635ade64611a275042 | null | [] | 263 |