# ggblab — A JupyterLab extension for learning geometry and Python programming side-by-side with GeoGebra
Summary: ggblab embeds a GeoGebra applet into JupyterLab and exposes an asynchronous Python API to drive the applet programmatically from notebooks. It uses a dual-channel communication design (an IPython Comm per kernel plus an optional out-of-band socket) to balance responsiveness and message size. Note: `init()` must be executed in its own notebook cell to avoid a timing race between the frontend's `comm_open` and kernel-side comm registration.
🚀 Quick links:
- **Binder (Demo)**: Hosted demo and instructions to launch the examples in JupyterLab — see [binder/README.md](binder/README.md).
- **Blog**: Project news and writeups — see [blog/README.md](blog/README.md) (local preview).
[PyPI](https://pypi.org/project/ggblab/) · [Tests](https://github.com/moyhig-ecs/ggblab/actions/workflows/tests.yml) · [Coverage](https://codecov.io/gh/moyhig-ecs/ggblab) · [License](LICENSE) · [Docs](https://ggblab.readthedocs.io/en/latest/) · [Binder demo](https://mybinder.org/v2/gh/moyhig-ecs/ggblab/main?urlpath=lab/tree/examples/example.ipynb)
ggblab embeds a GeoGebra applet into JupyterLab and provides a compact, async Python API to send commands and call GeoGebra functions. Open a panel from the Command Palette or from Python (`GeoGebra().init()`), interact with the applet visually, and drive it programmatically from notebook code.
Note (Feb 2026): this repo recently added a small IPython magic to help initialize the kernel-side registration, and the optional `ggblab_extra` parser now supports grouping for certain `list` constructions (e.g., `Tangent` / `Intersect` lists). See the `ggblab_extra` docs for details and example usage.
Note: v1.5.0 adds experimental support for a VS Code extension (`geogebra-injector`) that can open a GeoGebra webview and connect to notebook kernels running inside VS Code. Install the VS Code extension `geogebra-injector` (version 1.5.0) from the Marketplace or build the `.vsix` in `vscode-extension/`.
For VS Code usage the extension prefers connection JSON supplied via the clipboard or a workspace file (`.vscode/ggblab.json`) due to platform constraints in passing complex command payloads from notebook links. See `vscode-extension/README.md` for detailed instructions and security notes.
### Features
- **Dual Coding System**: Geometric visualization + Python code in a unified workspace—students learn through both visual and symbolic representations
- Programmatic launch via `GeoGebra().init()` (recommended), which uses ipylab to pass communication settings before widget initialization (Command ID: `ggblab:create`, label: "React Widget"). Use the Command Palette to open a panel; the Launcher tile has been removed.
- Call GeoGebra commands (`command`) and API functions (`function`) from Python through the `GeoGebra` helper
- **Domain Bridge**: Construction dependencies in GeoGebra map isomorphically to variable scoping in Python—teach computational thinking through geometric structure
- **Transfer of Learning**: Knowledge learned in geometric context transfers to computational thinking and vice versa. Dual representations strengthen understanding across both domains.
- **ML-ready parser outputs**: The parser enriches the construction DataFrame with production-ready features (e.g. `Sequence`, `DependsOn`, `DependsOn_minimal`) that are stored as native Polars types and are directly usable for feature engineering, graph models, and inference pipelines — see [Parser outputs for ML / Inference](#parser-outputs-for-ml--inference).
- **Communication and events**: ggblab uses a dual-channel design — an IPython Comm (`ggblab-comm`) per notebook kernel plus an out-of-band socket (Unix domain socket or WebSocket) for larger messages and event transport. The frontend watches applet events (add/remove/rename/clear and dialog messages) and forwards them to the kernel over these channels.
By design, a notebook kernel drives one side-by-side GeoGebra applet via a single Comm; other notebooks can open and drive their own applets. Controlling multiple applets from the same kernel is possible in principle but is not enabled by default—implementations will be considered if a clear use case is proposed.
(Design note: this avoids message correlation issues because IPython Comm cannot receive during cell execution and asyncio send/receive logic relies on a single shared buffer. See [ggblab/utils.py section 8](ggblab/utils.py) and [architecture.md](docs/architecture.md#asyncio-design-challenges-in-jupyter) for details.)
### Requirements
- JupyterLab >= 4.0
- Python >= 3.10
- Browser access to https://cdn.geogebra.org/apps/deployggb.js
- For development: Node.js and `jlpm` are used for optional frontend build steps (see the Development Workflow); neither is required for classroom installs.
## 📦 Package Ecosystem
ggblab consists of two packages:
### **ggblab** (Core) — JupyterLab Extension
Interactive GeoGebra widgets with bidirectional Python ↔ GeoGebra communication.
**Core features (this repo):**
- Embedded GeoGebra applet in JupyterLab
- Dual-channel communication: a per-kernel IPython Comm (`ggblab-comm`) plus an out-of-band socket (Unix domain socket or WebSocket)
- Async Python API to send commands and call GeoGebra functions; frontend watches applet events and forwards them to the kernel
- .ggb file I/O (`ggb_file`, alias `ggb_construction`)
- Design note: one Comm per notebook kernel drives one side-by-side applet by default; other notebooks can open their own applets. Multi-applet-from-one-kernel is not enabled by default.
### **ggblab_extra** — Analysis & Educational Tools
> **Note**: The optional `ggblab_extra` package (import name `ggblab_extra`) is currently being restructured and will be republished as a standalone package or distribution. The core `ggblab` package remains lightweight.
# ggblab
Compact overview and quick guide for contributors and users.
ggblab embeds a GeoGebra applet into JupyterLab and provides an asynchronous Python API to drive the applet programmatically from notebooks. It is intended for teaching geometry side-by-side with Python and for producing reproducible construction datasets useful for analysis and ML workflows.
---
## Quick Links
- Documentation: https://ggblab.readthedocs.io/
- Examples & notebooks: [examples/](examples/)
- Demo (Binder): see [binder/README.md](binder/README.md)
- Blog / News: [blog/README.md](blog/README.md)
## Key Features
- Side-by-side GeoGebra applet embedded in JupyterLab.
- Async Python API: send algebraic commands and call GeoGebra functions from notebooks.
- Dual-channel communication: IPython Comm for normal messages and an optional out-of-band socket for responsiveness during cell execution.
- Construction I/O: load/save `.ggb` (base64 zip), XML, JSON, and normalized DataFrame outputs (Polars) for analysis and ML.
- Rich error-handling and pre-flight validation to catch many issues before execution.
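A pre-flight check of the kind mentioned above can be sketched as follows; `validate_command` and the specific checks it performs are illustrative assumptions, not ggblab's actual validation API:

```python
# Hypothetical sketch of a pre-flight command check; not ggblab's real API.
def validate_command(cmd: str) -> None:
    """Reject obviously malformed GeoGebra commands before sending them."""
    if not cmd or not cmd.strip():
        raise ValueError("empty command")
    # Balanced parentheses is a cheap structural sanity check
    depth = 0
    for ch in cmd:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                raise ValueError(f"unbalanced ')' in {cmd!r}")
    if depth != 0:
        raise ValueError(f"unclosed '(' in {cmd!r}")

validate_command("Circle(A, 1)")  # passes silently
```

Catching such errors before sending avoids silent failures, since GeoGebra reports problems only via dialog popups (see Error Handling below).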
## Installation
Recommended (end users / JupyterHub):
```bash
pip install ggblab
```
Development (edit & test locally):
```bash
pip install -e ".[dev]"
jupyter labextension develop . --overwrite
jlpm build # optional: only needed for frontend changes
jupyter lab
```
## Jupyter server CORS / allow_origin (for webview integrations)
If you run ggblab inside an environment where a webview or external host
needs to connect to the local Jupyter server (for example a VS Code webview),
you may need to allow cross-origin requests from the host. In most local
deployments using plain HTTP this is not a concern; however, some webview
integrations require the server to accept cross-origin requests.
Quick runtime option:
```bash
# Older JupyterLab running on the legacy notebook server (NotebookApp)
jupyter lab --NotebookApp.allow_origin='*'
# Newer jupyter_server-based JupyterLab (ServerApp)
jupyter lab --ServerApp.allow_origin='*'
```
Persistent config (user-level): add to `~/.jupyter/jupyter_notebook_config.py`
or `~/.jupyter/jupyter_server_config.py` depending on your server:
```python
# allow all origins (use with caution on publicly accessible servers)
c.NotebookApp.allow_origin = '*'        # use c.ServerApp.allow_origin with jupyter_server
# optionally allow credentials (cookies) if your host sends them
c.NotebookApp.allow_credentials = True  # use c.ServerApp.allow_credentials with jupyter_server
```
Security note:
- `allow_origin='*'` permits all origins and should only be used in local
or tightly controlled environments. Do not use it on public-facing servers
without additional access controls. A safer pattern is to restrict the
allowed origin to the exact webview origin or to proxy REST requests via
the host extension (recommended for VS Code integrations).
Notes:
- For managed JupyterHub deployments, `pip install ggblab` is usually sufficient — no manual `jlpm` steps are required.
## Quick Start (Python)
In a notebook cell:
```python
# Cell 1 — run init() in its own cell and wait for it to complete
from ggblab.ggbapplet import GeoGebra
ggb = GeoGebra()
await ggb.init()  # initialize comm/socket and open the applet panel
```
```python
# Cell 2 — drive the applet
await ggb.command("A = (0,0)")
val = await ggb.function("getValue", ["A"])
print(val)
```
Tips:
- Important: do NOT run `await ggb.init()` in the same notebook cell as other applet commands. The frontend's `comm_open` and the kernel's comm registration race when `init()` shares a cell with code that talks to the applet. Always run `await ggb.init()` in its own cell and wait for it to complete before sending further commands.
- Each GeoGebra panel shows the kernel id (first 8 chars) to help match notebooks↔applets.
Note on kernel registration:
- To ensure reliable Comm registration in classroom environments, load the kernel extension before creating the widget using the IPython magic or the helper below:
```python
%load_ext ggblab
# or
from IPython import get_ipython
import ggblab
ggblab.load_ipython_extension(get_ipython())
```
This guarantees the kernel-side Comm target (`jupyter.ggblab`) is registered before the frontend probes for it.
### IPython magic examples
Use the `%ggb` (line) and `%%ggb` (cell) magics to send GeoGebra commands directly from the notebook. Examples:
Line magic — simple command:
```python
%ggb A = (0,0)
```
Line magic — API call (synchronous when no running loop):
```python
%ggb api getValue(A)
# returns the value of A (e.g. "(0,0)") or schedules an async task in running loops
```
Cell magic — multi-line commands:
```python
%%ggb
# create points and a circle
A = (0,0)
B = (1,0)
Circle(A,1)
```
Using a Python variable containing commands (brace form `{name}` required):
```python
cmds = "A = (0,0)\nB = Point(A)\n"
%ggb {cmds}
```
Notes:
- Only the brace form (`%ggb {varname}`) is expanded by the magic to avoid accidental variable interpolation.
- When running inside an async event loop the magics schedule tasks and return immediately; results are published to IPython's display hook and stored in `_` / `Out` when available.
## Examples
- See [examples/example.ipynb](examples/example.ipynb) for a basic demo.
- `ggblab_extra` contains advanced parsing and scene-development tools: see [ggblab_extra docs index](docs/ggblab_extra_index.md) for how to access the optional extras and their documentation.
### Plotting: Matplotlib vs GeoGebra
If you use SymPy to generate numeric samples, you can render them either with classic Python plotting libraries (e.g. Matplotlib) or with an interactive GeoGebra panel. See [examples/eg7_plotting.ipynb](examples/eg7_plotting.ipynb) for a short example that:
- Shows how to sample a SymPy expression with `lambdify` and `numpy.linspace`.
- Renders the same samples with Matplotlib (static) and with GeoGebra (interactive `LineGraph` or list-based import).
- Demonstrates capturing a GeoGebra PNG via `getPNGBase64` and displaying it inside the notebook.
This comparison is useful for deciding whether to prioritize static publication quality (Matplotlib) or interactive student exploration (GeoGebra).
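The notebook linked above samples with SymPy's `lambdify` and `numpy.linspace`; the same idea can be sketched with only the standard library. The list name `l1`, the sample count, and the range are illustrative, and the resulting string would be sent with `await ggb.command(cmd)`:

```python
import math

# Sample sin(x) over [0, 2*pi] (analogous to numpy.linspace with 50 points)
n = 50
xs = [i * (2 * math.pi) / (n - 1) for i in range(n)]
ys = [math.sin(x) for x in xs]

# Build a GeoGebra list-of-points command: l1 = {(x1, y1), (x2, y2), ...}
points = ", ".join(f"({x:.4f}, {y:.4f})" for x, y in zip(xs, ys))
cmd = "l1 = {" + points + "}"
print(cmd[:60], "...")
```

In a notebook you would then pass `cmd` to the applet, or plot `xs`/`ys` with Matplotlib for a static figure.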
## Recent Changes (since 1.0.2)
This release series includes several reliability and observability fixes focused on widget restore, frontend lifecycle handling, and the backend communication layer. Key changes in this workspace:
- **Restorer & widget lifecycle**: the JupyterLab `ILayoutRestorer`/tracker logic was adjusted so panels persist across browser reloads and the widget's internal teardown now runs on `onCloseRequest()` instead of during layout restore to avoid premature disposal.
- **Comm refactor (backend)**: `ggblab/ggblab/comm.py` moved from polling to a future-based synchronization model (`pending_futures`) and added mutex protection around shared state to prevent race conditions during OOB responses. A watchdog prevents indefinite waits for responses.
- **OOB send serialization (frontend)**: `callRemoteSocketSend` now serializes socket sends and adds a short inter-send delay to avoid kernel `requestExecute` churn when many applet listeners fire concurrently.
- **Reduced noisy logs**: connect/disconnect and socket lifecycle messages are now aggregated and rate-limited to reduce log spam during normal operation.
- **Version bump**: package version updated to `1.3.0`.
- **Assumptions → Conjectures**: Added utilities and frontend hooks to help derive conjectures from GeoGebra construction assumptions. The extension exposes a `listen` facility that allows the frontend to continuously observe object values and notify the kernel when premises change; this makes it convenient to keep derived conjectures in sync from Python-side logic.
- **Listener observability**: The `listen` mechanism enables kernel-side notebooks to react to object mutations (add/remove/modify) and update derived analysis or conjectures continuously without manual polling.
- **Immediate listener delivery & suppression**: The frontend now invokes the registered `listen` callback immediately after registration so the kernel receives the current object value without waiting for the next change. To reduce noisy updates, redundant notifications are suppressed when the object's stringified value hasn't changed since the last send.
- **Configurable GeoGebra flavor (`appName`)**: The frontend accepts an `appName` parameter (passed via `GeoGebra().init(appName)` in Python or via `args.appName` when opening the widget from the Command Palette) to select the GeoGebra flavor to initialize. Supported values:
- `graphing` — GeoGebra Graphing Calculator
- `geometry` — GeoGebra Geometry
- `3d` — GeoGebra 3D Graphing Calculator
- `classic` — GeoGebra Classic
- `suite` — GeoGebra Calculator Suite (default)
- `evaluator` — Equation Editor
- `scientific` — Scientific Calculator
- `notes` — GeoGebra Notes
The kernel-side `GeoGebra.init(appName)` validates the value and will raise `ValueError` for unsupported values.
- **ipywidgets / ipympl interop**: Improved compatibility with ipywidgets-based backends (e.g. ipympl) so they can asynchronously process Comm messages without conflicting with ggblab's Comm handling. To avoid surprising transient kernel-side Comms during initialization, ggblab yields to frontend widget managers using a `post_execute` drain and keeps the optional ipywidgets bridge disabled by default. Advanced users may review the `enable_widget_bridge` flag in `ggblab/comm.py` if they need a kernel-side widget bridge.
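The `appName` validation described above can be sketched as follows; `SUPPORTED_APP_NAMES` and `check_app_name` are illustrative names, while the actual check lives in `GeoGebra.init`:

```python
# Illustrative sketch of appName validation; ggblab's implementation may differ.
SUPPORTED_APP_NAMES = {
    "graphing", "geometry", "3d", "classic",
    "suite", "evaluator", "scientific", "notes",
}

def check_app_name(app_name: str = "suite") -> str:
    """Return app_name if it names a known GeoGebra flavor, else raise ValueError."""
    if app_name not in SUPPORTED_APP_NAMES:
        raise ValueError(
            f"unsupported appName {app_name!r}; expected one of {sorted(SUPPORTED_APP_NAMES)}"
        )
    return app_name
```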
If you need more detail on any bullet, see the corresponding source files:
- Frontend: [src/index.ts](src/index.ts) and [src/widget.tsx](src/widget.tsx)
- Backend: [ggblab/comm.py](ggblab/comm.py)
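The future-based synchronization described in the Comm-refactor bullet can be sketched with asyncio; `RequestBroker` and its method names are illustrative, not ggblab's internal API:

```python
import asyncio

# Sketch: each outgoing request gets a Future keyed by message id; the
# response handler resolves it. A lock guards the shared pending table,
# and wait_for acts as the watchdog against indefinite waits.
class RequestBroker:
    def __init__(self):
        self._pending = {}
        self._lock = asyncio.Lock()

    async def send_recv(self, msg_id, send, timeout=3.0):
        fut = asyncio.get_running_loop().create_future()
        async with self._lock:
            self._pending[msg_id] = fut
        send(msg_id)  # hand the message to the transport
        try:
            return await asyncio.wait_for(fut, timeout)
        finally:
            async with self._lock:
                self._pending.pop(msg_id, None)

    async def on_response(self, msg_id, payload):
        async with self._lock:
            fut = self._pending.get(msg_id)
        if fut and not fut.done():
            fut.set_result(payload)

async def demo():
    broker = RequestBroker()
    def send(msg_id):  # fake transport: respond on the next loop iteration
        asyncio.get_running_loop().create_task(broker.on_response(msg_id, "pong"))
    return await broker.send_recv("msg-1", send)

print(asyncio.run(demo()))
```

Keying responses by message id is what removes the correlation problem of a single shared receive buffer.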
## Development & Testing
Run Python tests:
```bash
pip install -e ".[dev]" pytest pytest-cov
pytest tests/ -v
```
Frontend tests and tasks (only for frontend development):
```bash
jlpm install
jlpm test
jlpm build
```
## Future Work / Roadmap
- Automate kernel-side comm registration: explore a JupyterLab plugin hook or kernel startup snippet so `ggblab-comm` can be registered automatically when a kernel starts or a workspace is opened.
- Robust widget-manager integration: investigate a formally supported widget-manager registration flow so ipywidgets and ggblab interoperate without manual init steps.
- Multi-applet support: enable safe multi-applet control from a single kernel while preserving message isolation and avoiding comm conflicts.
- Improved OOB reliability: refine the out-of-band socket protocol (handshakes, retries, backpressure) to further reduce edge-case races and improve performance for large messages.
- Packaging & distribution: simplify classroom installation (bundled labextension wheels, docker-based IoC deployment recipes) and publish reproducible release artifacts.
- Tests & CI: expand end-to-end tests that exercise kernel↔frontend comm timing, including simulated slow networks and delayed frontends.
Contributions and experiment proposals are welcome — open an issue or a PR with a short design note outlining trade-offs and testing strategy.
CI: GitHub Actions runs tests and coverage on PRs — see `.github/workflows/tests.yml`.
## Documentation
Full docs are in `docs/` and published at https://ggblab.readthedocs.io/. Start at [docs/index.md](docs/index.md) or the high-level [philosophy](docs/philosophy.md).
## Contributing
1. Fork and create a feature branch.
2. Run tests and linters locally.
3. Open a PR with a clear description and tests where applicable.
For large changes, please open an issue first to discuss design.
## License
BSD-3-Clause
---
### Quick Reference
| Document | Primary Audience | Key Insight |
| --- | --- | --- |
| **[ggblab_extra docs (index)](docs/ggblab_extra_index.md)** | **Educators, textbook authors** | **Optional advanced guides: scene development, SymPy integration, and examples** |
| **[docs/scoping.md](docs/scoping.md)** | Educators, Students | Geometric construction teaches programming scoping |
| **[docs/philosophy.md](docs/philosophy.md)** | Contributors, Researchers | ggblab = GeoGebra → Timeline → Manim → Video pipeline |
| **[SymPy Integration](docs/sympy_integration.md)** | Math/CS Instructors | Symbolic proof + code generation + manim export (core overview; advanced guides in optional `ggblab_extra`) |
| **[docs/architecture.md](docs/architecture.md)** | Developers | Dual-channel communication (core) |
| **[TODO.md](TODO.md)** | Contributors | Concrete next steps prioritized by learning value |
| **[API Reference](https://ggblab.readthedocs.io/en/latest/api.html)** | Developers | Complete Python API documentation |
### Examples
- Sample notebook: [examples/example.ipynb](examples/example.ipynb)
- Demo video:

Run steps:
```python
# Cell 1 — initialize in its own cell
%load_ext autoreload
%autoreload 2
from ggblab import GeoGebra
ggb = await GeoGebra().init()  # open GeoGebra widget on the left
```
```python
# Cell 2 — load and decode a construction
import io
c = ggb.construction.load('/path/to/your.ggb')  # supports .ggb, zip, JSON, XML
o = c.ggb_schema.decode(io.StringIO(c.geogebra_xml))  # geogebra_xml is auto-stripped to construction
o
```
### Construction I/O (example)
Use `ConstructionIO` (preferred) to build a normalized Polars DataFrame from a `.ggb` file or directly from a running applet. `DataFrameIO` is kept as a compatibility alias.
```python
from ggblab.construction_io import ConstructionIO
# From a .ggb file (requires a GeoGebra runner instance)
df_from_file = await ConstructionIO.initialize_dataframe(
    ggb, file='path/to/example.ggb',
    _columns=ConstructionIO.COLUMNS + ["ShowObject", "ShowLabel", "Auxiliary"]
)
# Or build from the running applet state
df_from_applet = await ConstructionIO.initialize_dataframe(ggb, use_applet=True)
print(df_from_file.head())
```
Note: Supports `.ggb` (base64-encoded zip), plain zip, JSON, and XML formats. The `geogebra_xml` is automatically narrowed to the `construction` element and scientific notation is normalized. Schema/decoding APIs may evolve.
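As a rough sketch of the archive handling described in the note (not ggblab's actual loader), extracting `geogebra.xml` from a `.ggb` supplied either as a plain zip or as a base64-encoded zip might look like:

```python
import base64
import io
import zipfile

def extract_geogebra_xml(data: bytes) -> str:
    """Return geogebra.xml from .ggb content given as a zip or a base64-encoded zip."""
    # Plain zip archives start with the b"PK" magic; a base64-encoded zip does not
    # (base64 of b"PK\x03\x04" begins with "UEs"), so fall back to decoding.
    if not data.startswith(b"PK"):
        data = base64.b64decode(data)
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return zf.read("geogebra.xml").decode("utf-8")
```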
### Saving construction
Save the current construction (archive when Base64 is set, otherwise plain XML):
```python
from ggblab import GeoGebra
ggb = await GeoGebra().init()  # run init() in its own cell before the commands below
c = ggb.construction.load('/path/to/your.ggb')
# Save to XML (when no Base64 is set)
c.save('/tmp/construction.xml')
# Save to a .ggb file name; content depends on state:
# - if Base64 is set -> decoded archive (.ggb zip)
# - else -> plain XML bytes (extension does not enforce format)
c.save('/tmp/construction.ggb')
```
#### Saving behavior and defaults
- `c.save()` with no arguments writes to the next available filename derived from the originally loaded `source_file` (e.g., `name_1.ggb`, `name_2.ggb`, ...). Use `c.save(overwrite=True)` to overwrite the original `source_file`.
- If `construction.base64_buffer` is set (e.g., from `getBase64()` or `load()`), `save()` writes the decoded archive; otherwise it writes the in-memory `geogebra_xml` as plain XML.
- Target file extension does not enforce format: if Base64 is absent, saving to a `.ggb` path will still write plain XML bytes.
- Note: `getBase64()` from the applet may not include non-XML artifacts present in the original `.ggb` archive (e.g., thumbnails or other resources). Saving after API-driven changes can therefore produce a leaner archive.
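The next-available-filename rule above can be sketched as follows; `next_available` is an illustrative helper, not ggblab's implementation:

```python
from pathlib import Path

# Sketch of deriving name_1.ggb, name_2.ggb, ... from the loaded source_file.
def next_available(source_file: str) -> Path:
    p = Path(source_file)
    n = 1
    while True:
        candidate = p.with_name(f"{p.stem}_{n}{p.suffix}")
        if not candidate.exists():
            return candidate
        n += 1
```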
### Use Cases (from examples/eg3_applet.ipynb)
#### 1) Algebraic commands and API functions
```python
# Algebraic command
r = await ggb.command("O = (0, 0)")
# API functions
r = await ggb.function("getAllObjectNames")
r = await ggb.function("newConstruction")
```
#### 2) Load .ggb and draw via Base64
```python
# Load a .ggb (base64-encoded zip)
c = ggb.construction.load('path/to/file.ggb')
# Render in applet
await ggb.function("setBase64", [ggb.construction.base64_buffer.decode('utf-8')])
```
#### 3) Layer visibility control
```python
from itertools import zip_longest
# Hide layers 0-9: zip_longest pairs each layer index with the fillvalue -> [(0, False), (1, False), ...]
layers = range(10)
await ggb.function("setLayerVisible", list(zip_longest(layers, [], fillvalue=False)))
# Then show only layers 9 and 0
layers = [9, 0]
await ggb.function("setLayerVisible", list(zip_longest(layers, [], fillvalue=True)))
```
#### 4) XML attribute edit roundtrip
```python
import xmlschema

# Pull XML for object 'A'
r = await ggb.function("getXML", ['A'])
# Decode to schema dict, modify, and encode back
o2 = c.ggb_schema.decode(r)
o2['show'][0]['@object'] = False
x = xmlschema.etree_tostring(c.ggb_schema.encode(o2, 'element'))
# Apply to applet
await ggb.function("evalXML", [x])
```
#### 5) Roundtrip save from applet state
```python
# Fetch current applet state as base64 and save
r = await ggb.function("getBase64")
ggb.construction.base64_buffer = r.encode('ascii')
c.save() # next available filename based on source_file
# c.save(overwrite=True) # to overwrite the original
```
### Object Dependency Analysis (ggblab_extra)
Advanced parsing, dependency graphs, and subgraph extraction now live in **ggblab_extra**. See [ggblab_extra docs index](docs/ggblab_extra_index.md) for how to access the optional extras and their full documentation.
### Architecture
- **Frontend** ([src/index.ts](src/index.ts), [src/widget.tsx](src/widget.tsx)): Registers the plugin `ggblab:plugin` and command `ggblab:create`. Creates a `GeoGebraWidget` ReactWidget that loads GeoGebra from the CDN, opens an IPython Comm target (default `test3`), executes commands/functions, and mirrors add/remove/rename/clear events plus dialog notices back to the kernel. Results can also be forwarded over the external socket when provided.
- **Backend** ([ggblab/ggbapplet.py](ggblab/ggbapplet.py), [ggblab/comm.py](ggblab/comm.py), [ggblab/file.py](ggblab/file.py)): Initializes a singleton `GeoGebra`, spins up a Unix-socket/TCP WebSocket server, registers the IPython Comm target, and drives the frontend command via ipylab. `ggb_comm.send_recv` waits for responses; `ggb_file` (alias `ggb_construction`) loads multiple file formats (`.ggb`, zip, JSON, XML) and provides `geogebra_xml` + `ggb_schema` for converting construction XML to schema objects. Advanced parsing and verification live in `ggblab_extra`.
- **Styles** ([style/index.css](style/index.css), [style/base.css](style/base.css)): Ensure the embedded applet fills the available area.
**Browser reload & state restoration**
- What happens: A full browser reload or JupyterLab restart destroys the front-end JavaScript context (DOM and in-memory applet instances).
- Why: The browser resets the JS runtime; Lumino's `ILayoutRestorer` recreates widgets by invoking the saved command, but it does not preserve prior in-memory objects.
- Consequence: Kernel connections (`kernelId`) persist on the server, but GeoGebra applets are re-created in the client. A dispose→create cycle on reload is unavoidable.
#### Communication Architecture
**Dual-channel design**: ggblab uses two communication channels between the frontend and backend:
1. **Primary channel (IPython Comm over WebSocket)**:
- Handles command/function calls and event notifications
- Managed by Jupyter/JupyterHub infrastructure with reverse proxy support
- Connection health guaranteed by Jupyter/JupyterHub
- **Limitation**: IPython Comm cannot receive messages while a notebook cell is executing
2. **Out-of-band channel (Unix Domain Socket on POSIX / TCP WebSocket on Windows)**:
- Addresses the Comm limitation by enabling message reception during cell execution
- Allows GeoGebra applet responses to be received even when Python is busy executing code
- Connection is opened/closed per transaction (no persistent connection)
- No auto-reconnection needed due to transient nature
This dual-channel approach ensures that interactive operations (e.g., retrieving object values, updating constructions) remain responsive even during long-running cell execution.
See [architecture.md](docs/architecture.md) for detailed design rationale and implementation notes.
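The per-transaction out-of-band exchange can be sketched with asyncio; the host, port, line framing, and message shape here are illustrative, and only the 3-second watchdog mirrors the documented timeout:

```python
import asyncio
import json

# Sketch: connect, send one JSON message, await one reply with a 3-second
# watchdog, then close. No persistent connection, hence no reconnect logic.
async def oob_request(payload: dict, host="127.0.0.1", port=8765, timeout=3.0):
    reader, writer = await asyncio.open_connection(host, port)
    try:
        writer.write(json.dumps(payload).encode() + b"\n")
        await writer.drain()
        line = await asyncio.wait_for(reader.readline(), timeout)  # TimeoutError on silence
        return json.loads(line)
    finally:
        writer.close()  # transient channel: closed immediately after the transaction
        await writer.wait_closed()
```

Because each call opens and closes its own connection, the kernel can receive applet responses mid-cell without maintaining a long-lived listener.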
##### Architecture Diagram
```mermaid
flowchart TB
subgraph Browser
FE[JupyterLab Frontend + GeoGebra Applet]
end
subgraph Server
K[Python Kernel]
S["Socket Bridge (Unix or TCP WebSocket)"]
end
FE -- "IPython Comm (WebSocket)\nvia JupyterHub proxy" --> K
FE -- "Out-of-band socket (transient)" --> S
S --- K
FE -. "GeoGebra asset" .-> CDN[cdn.geogebra.org/apps/deployggb.js]
```
#### Security & Compatibility
- Reverse proxy-friendly: Operates over JupyterLab's IPython Comm/WebSocket within the platform's auth/CSRF boundaries.
- CORS-aware CDN: GeoGebra is loaded from `https://cdn.geogebra.org/apps/deployggb.js` as a static asset; no cross-origin API calls from the browser beyond this script.
- Same-origin messaging: Kernel↔widget interactions remain within Jupyter's origin; no custom headers or cookies required.
- Optional socket bridge: Transient Unix/TCP bridge opens per transaction and closes immediately to avoid long-lived external listeners; improves responsiveness during cell execution.
- JupyterHub readiness: Validated in managed JupyterHub (Kubernetes) behind reverse proxies.
#### Error Handling and Limitations
**Primary channel (IPython Comm)**: Error handling is managed automatically by Jupyter/JupyterHub infrastructure. Connection failures are detected and handled transparently; kernel status is visible in the JupyterLab UI.
**Out-of-band channel**: The secondary channel has a **3-second timeout** for receiving responses. If no response arrives within this window, a `TimeoutError` is raised in Python:
```python
try:
result = await ggb.function("getValue", ["a"])
except TimeoutError:
print("GeoGebra did not respond within 3 seconds")
```
**GeoGebra API constraint**: The GeoGebra API does **not** provide explicit error response codes. Instead, errors are communicated through **dialog popups** displayed in the browser. The frontend monitors these dialog events and forwards error information via the primary Comm channel. For errors that do not trigger dialogs (e.g., malformed responses), the timeout is the primary error signal.
See [architecture.md § Error Handling](docs/architecture.md#error-handling) for details on error detection and recovery strategies.
### Settings
The current settings schema ([schema/plugin.json](schema/plugin.json)) exposes no user options yet but is ready for future configuration.
### Development Workflow
```bash
pip install -e ".[dev]"
jupyter labextension develop . --overwrite
jlpm build # or `jlpm watch` during development
jupyter lab # run in another terminal
```
To remove the dev link, uninstall and delete the `ggblab` symlink listed by `jupyter labextension list`.
### Testing
**Automated Testing (GitHub Actions)**:
- Continuous integration configured via [.github/workflows/tests.yml](.github/workflows/tests.yml)
- Runs on `main` and `dev` branches on every push and pull request
- Tests across Python 3.10, 3.11, 3.12 on Ubuntu, macOS, and Windows
- Coverage reports uploaded to Codecov
**Running Tests Locally**:
```bash
# Install test dependencies
pip install -e ".[dev]"
pip install pytest pytest-cov
# Run all tests
pytest tests/ -v
# Run specific test module
pytest tests/test_parser.py -v
# Run with coverage report
pytest tests/ --cov=ggblab --cov-report=html
```
**Frontend Tests**:
- `jlpm install && jlpm test`
**Integration Tests (Playwright/Galata)**:
- See [ui-tests/README.md](ui-tests/README.md)
- Build with `jlpm build:prod`, then `cd ui-tests && jlpm install && jlpm playwright test`
### Release
See [RELEASE.md](RELEASE.md) for publishing to PyPI/NPM or using Jupyter Releaser; bump versions with `hatch version`.
### Known Issues and Gaps
#### Frontend Limitations
- **No explicit error handling UI**: Communication failures between frontend and backend are logged to console but not displayed to users. Currently relies on browser console for debugging.
- **Limited event notification**: Only monitors basic GeoGebra events (add/remove/rename/clear objects, dialogs). Advanced events like slider changes, conditional visibility toggles, or script execution results are not automatically propagated.
- **Hardcoded Comm target**: The Comm target name is hardcoded as `'test3'` with no option for customization without code changes.
- **TypeScript strict checks disabled**: Some type assertions use `any` type, reducing type safety. Widget props lack full interface documentation.
- **No input validation**: Commands and function arguments are not validated before sending to GeoGebra; invalid requests may cause silent failures.
#### Backend Limitations
- **Singleton pattern constraint**: Only one active GeoGebra instance per kernel session. Attempting to create multiple instances will reuse the same connection.
- **Out-of-band communication timeout**: The out-of-band socket channel has a 3-second timeout. If the frontend does not respond within this window, the backend raises a timeout exception.
- **Limited error handling on out-of-band channel**: GeoGebra API does not provide explicit error responses, so errors are communicated indirectly:
- GeoGebra displays error dialogs (native popups) when operations fail (e.g., invalid syntax in algebraic commands)
- The frontend monitors dialog events and forwards error messages via the primary Comm channel
- Errors without a dialog (e.g., malformed JSON responses) result in timeout exceptions or silent failures
- **Parser subgraph extraction (`parse_subgraph()`) performance issues**:
- **Combinatorial explosion**: Generates $2^n$ path combinations where $n$ = number of root objects. Performance degrades rapidly with 15+ independent roots.
- **Infinite loop risk**: May hang indefinitely under certain graph topologies.
- **Limited N-ary dependency support**: Only handles 1-ary and 2-ary dependencies; 3+ objects jointly creating an output are ignored.
- **Redundant computation**: Neighbor lookups are recalculated unnecessarily in loops.
- See [architecture.md § Dependency Parser Architecture](docs/architecture.md#dependency-parser-architecture) for detailed analysis and planned improvements.
#### General Limitations
- ✅ **Unit tests**: Comprehensive Python test suite with pytest (parser, GeoGebra applet, construction handling)
- ✅ **CI/CD pipeline**: Automated testing on all pull requests via GitHub Actions (Python 3.10+, multi-OS)
- 🔄 **Incomplete integration tests**: No Playwright tests yet for critical workflows (command execution, file loading, event handling)
- **Minimal documentation**: No dedicated developer guide beyond code comments; architecture rationale is not documented.
### Project Assessment (Objective)
- **Maturity**: Early-stage (0.x). Core functionality works for driving GeoGebra via dual channels, but integration-test coverage and release safeguards are still incomplete.
- **Strengths**: Clear architecture docs; dual-channel communication to mitigate Comm blocking; supports multiple file formats; dependency parser groundwork.
- **Key Risks**: Integration tests still absent; parser `parse_subgraph()` has performance/loop risks on large graphs; hardcoded Comm target; minimal UX for error surfacing.
- **Maintainability**: TypeScript not strict; some `any` and limited input validation; parser algorithm needs replacement for scale.
- **Operational Gaps**: No monitoring/telemetry; no retry/backoff around sockets; release process manual.
### Future Enhancements and Roadmap
#### Short Term (v0.8.x)
1. **Error Handling & User Feedback**
- Add user-facing error notifications for Comm/WebSocket failures
- Improve out-of-band error reporting: detect timeout conditions and propagate as Python exceptions with context
- Support for custom timeout configuration in `GeoGebra()` initialization
- Enhanced error message recovery from GeoGebra dialog content
- Provide more descriptive error messages in the UI when operations fail
2. **Parser Optimization** (`v0.7.3`)
- Remove debug output; add optional logging via `logging` module
- Add early termination check to detect infinite loops in `parse_subgraph()`
- Cache neighbor computation to reduce redundant graph traversals
- Extend N-ary dependency support (currently limited to 1-2 parents; should support 3+)
3. **Event System Expansion**
- Subscribe to additional GeoGebra events (slider value changes, object property changes, script execution)
- Expose event system to Python API via `ggb.on_event()` pattern
- Log all events with timestamps for debugging
4. **Configuration & Customization**
- Add settings UI to choose Comm target name and socket configuration
- Allow custom GeoGebra CDN URL (for offline or private CDN scenarios)
- Implement widget position/size preferences (split-right, split-left, tab, etc.)
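The neighbor-caching item above reflects a standard memoization pattern. A generic standalone sketch (not ggblab's parser code; the graph and names are illustrative) using `functools.lru_cache` so each object's parents are resolved once however often a traversal revisits it:

```python
from functools import lru_cache

# Toy dependency graph: child -> parents (illustrative names only)
EDGES = {
    "C": ("A", "B"),
    "D": ("C",),
    "E": ("C", "B"),
}

calls = {"count": 0}

@lru_cache(maxsize=None)
def neighbors(obj: str) -> tuple:
    """Look up parents once; repeated queries hit the cache."""
    calls["count"] += 1
    return EDGES.get(obj, ())

# The traversal queries "C" three times but computes it only once
for node in ("C", "D", "E"):
    for parent in neighbors(node):
        neighbors(parent)

print(calls["count"])  # -> 5 unique lookups despite 8 queries
```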
#### Medium Term (v1.0)
1. **Type Safety & Code Quality**
- Enable TypeScript strict mode and eliminate `any` types
- Add JSDoc for all public TypeScript/Python APIs
- Increase test coverage to >80% for both frontend and backend
- Add comprehensive unit tests for parser, especially for edge cases and large graphs
2. **Parser Algorithm Replacement**
- Replace `parse_subgraph()` with topological sort + reachability pruning approach
- Reduce time complexity from $O(2^n)$ to $O(n(n+m))$
- Support arbitrary N-ary dependencies (not limited to 2 parents)
- Eliminate infinite loop risk through deterministic algorithm
- See [architecture.md § Dependency Parser Architecture](docs/architecture.md#dependency-parser-architecture) for detailed design
3. **Advanced Features**
- **Multi-panel support**: Allow multiple GeoGebra instances in different notebook cells
- **State persistence**: Save/restore GeoGebra construction state to notebook or file
- **Real-time collaboration**: Support multiple users viewing/editing the same construction
- **Animation API**: Programmatic animation of objects with timeline control
- **Custom tool definitions**: Allow users to define and persist custom GeoGebra tools
4. **Integration Improvements**
- **Jupyter Widgets (ipywidgets) support**: Make GeoGebra embeddable in `ipywidgets` environments
- **Matplotlib/Plotly integration**: Export construction data to visualization libraries
- **NumPy/Pandas integration**: Bidirectional data sync with DataFrames
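The topological-sort-plus-reachability-pruning approach named in the parser replacement item can be illustrated with the standard library. This is a generic sketch, not ggblab's planned implementation (the design lives in architecture.md); the graph and names are invented:

```python
from graphlib import TopologicalSorter

# Toy construction graph: object -> set of parents (illustrative names)
PARENTS = {
    "A": set(), "B": set(),
    "mid": {"A", "B"},        # N-ary dependencies handled uniformly
    "out": {"mid"},
    "unrelated": {"B"},
}

def subgraph_for(target: str) -> list:
    """Return the target's ancestors in a valid construction order, O(n + m)."""
    # Reachability pruning: walk parent edges backwards from the target
    keep, stack = set(), [target]
    while stack:
        node = stack.pop()
        if node not in keep:
            keep.add(node)
            stack.extend(PARENTS[node])
    # Deterministic topological order restricted to the reachable nodes
    ts = TopologicalSorter({n: PARENTS[n] & keep for n in keep})
    return list(ts.static_order())

order = subgraph_for("out")
print(order)  # parents always precede children; "unrelated" is pruned
```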
#### Long Term (v1.5+)
1. **Performance & Scalability**
- WebSocket batching for high-frequency updates (e.g., animations)
- Caching layer for repeated function calls
- Support for serverless/container environments without persistent sockets
- Memoization of subgraph extraction results
2. **ML/Data Science Features**
- Built-in geometry solvers with numerical optimization (scipy integration)
- Constraint solving interface
- Interactive visualization of mathematical models
3. **Parser Enhancements**
- Weighted edges representing construction order preference
- Interactive subgraph selection UI
- Integration with constraint solving for optimal construction paths
4. **Ecosystem & Standards**
- JupyterHub compatibility testing and official support
- Jupyter Notebook (classic) extension variant
- Conda-forge packaging
- Official plugin for popular JupyterLab distributions (JupyterHub, Google Colab, etc.)
### Contributing
Contributions are welcome! Please open an issue or pull request at the [GitHub repository](https://github.com/moyhig-ecs/ggblab).
## Note about legacy `parse_subgraph`
The `ggblab_extra.construction_parser` module preserves the original `parse_subgraph` heuristic under the name `parse_subgraph_legacy()` for reproducibility and compatibility. The legacy function retains the human-oriented search strategy and original control flow; it is kept because some users prefer its behavior.
Prefer the refactored `parse_subgraph()` for new development — it uses clearer variable names, removes debug output, and is easier to maintain. Call `parse_subgraph_legacy()` only when you specifically need the legacy behavior.
## Parser outputs for ML / Inference
ggblab does more than visualize geometry — it converts constructions into first-class, production-ready datasets for machine learning and inference.
- Rich, ML-friendly features: the parser annotates the construction DataFrame with engineered columns such as `Sequence`, `DependsOn`, and `DependsOn_minimal` that encode ordering, full ancestor lists, and compact parent sets respectively.
- Native Polars types: all metadata are stored as proper Polars types (including list/Utf8 columns) so they integrate directly with feature p | text/markdown | null | Manabu Higashida <manabu@higashida.net> | null | null | BSD 3-Clause License
Copyright (c) 2025, ggblab
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | jupyter, jupyterlab, jupyterlab-extension | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: 4",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles",
"ipykernel",
"ipylab",
"networkx",
"numpy",
"polars",
"pyperclip",
"sympy",
"websockets",
"xmlschema"
] | [] | [] | [] | [
"Homepage, https://github.com/moyhig-ecs/ggblab#readme",
"Bug Tracker, https://github.com/moyhig-ecs/ggblab/issues",
"Repository, https://github.com/moyhig-ecs/ggblab"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:57:08.603305 | ggblab-1.6.0.tar.gz | 188,994 | d8/2b/619533b254d679c8af32c98472174f46142c7769c58ec5ab08868b4391e1/ggblab-1.6.0.tar.gz | source | sdist | null | false | 3583740a12ec36dfbdd651120ca5cc78 | 64240e97cb75df5ddfe095defd2d463b95dc4bcfb03e0097ced3da73f0545c51 | d82b619533b254d679c8af32c98472174f46142c7769c58ec5ab08868b4391e1 | null | [
"LICENSE"
] | 217 |
2.4 | outscraper | 6.0.2 | Python bindings for the Outscraper API | Outscraper Python Library
=========================
The library provides convenient access to the `Outscraper
API <https://app.outscraper.com/api-docs>`__ from applications written
in the Python language. Allows using `Outscraper’s
services <https://outscraper.com/services/>`__ from your code.
`API Docs <https://app.outscraper.com/api-docs>`__
Installation
------------
Python 3+
.. code:: bash
pip install outscraper
`Link to the python package
page <https://pypi.org/project/outscraper/>`__
Initialization
--------------
.. code:: python
from outscraper import OutscraperClient
client = OutscraperClient(api_key='SECRET_API_KEY')
`Link to the profile page to create the API
key <https://app.outscraper.com/profile>`__
Scrape Google Search
--------------------
.. code:: python
# Google Search
results = client.google_search('bitcoin')
# Google Search News
results = client.google_search_news('election', language='en')
Scrape Google Maps (Places)
---------------------------
.. code:: python
# Search for businesses in specific locations:
results = client.google_maps_search('restaurants brooklyn usa', limit=20, language='en')
# Get data of the specific place by id
results = client.google_maps_search('ChIJrc9T9fpYwokRdvjYRHT8nI4', language='en')
# Search with many queries (batching)
results = client.google_maps_search([
'restaurants brooklyn usa',
'bars brooklyn usa',
], language='en')
Scrape Google Maps Reviews
--------------------------
.. code:: python
# Get reviews of the specific place by id
results = client.google_maps_reviews('ChIJrc9T9fpYwokRdvjYRHT8nI4', reviews_limit=20, language='en')
# Get reviews for places found by search query
results = client.google_maps_reviews('Memphis Seoul brooklyn usa', reviews_limit=20, limit=500, language='en')
# Get only new reviews during last 24 hours
from datetime import datetime, timedelta
yesterday_timestamp = int((datetime.now() - timedelta(1)).timestamp())
results = client.google_maps_reviews(
'ChIJrc9T9fpYwokRdvjYRHT8nI4', sort='newest', cutoff=yesterday_timestamp, reviews_limit=100, language='en')
Scrape Google Maps Photos
-------------------------
.. code:: python
results = client.google_maps_photos(
'Trump Tower, NY, USA', photosLimit=20, language='en')
Scrape Google Maps Directions
-----------------------------
.. code:: python
results = client.google_maps_directions(['29.696596, 76.994928 30.7159662444353, 76.8053887016268', '29.696596, 76.994928 30.723065, 76.770169'])
Scrape Google Play Reviews
--------------------------
.. code:: python
results = client.google_play_reviews(
'com.facebook.katana', reviews_limit=20, language='en')
Emails And Contacts Scraper
---------------------------
.. code:: python
results = client.emails_and_contacts(['outscraper.com'])
`More
examples <https://github.com/outscraper/outscraper-python/tree/master/examples>`__
Responses examples
------------------
Google Maps (Places) response example:
.. code:: json
[
[
{
"name": "Colonie",
"full_address": "127 Atlantic Ave, Brooklyn, NY 11201",
"borough": "Brooklyn Heights",
"street": "127 Atlantic Ave",
"city": "Brooklyn",
"postal_code": "11201",
"country_code": "US",
"country": "United States of America",
"us_state": "New York",
"state": "New York",
"plus_code": null,
"latitude": 40.6908464,
"longitude": -73.9958422,
"time_zone": "America/New_York",
"popular_times": null,
"site": "http://www.colonienyc.com/",
"phone": "+1 718-855-7500",
"type": "American restaurant",
"category": "restaurants",
"subtypes": "American restaurant, Cocktail bar, Italian restaurant, Organic restaurant, Restaurant, Wine bar",
"posts": null,
"rating": 4.6,
"reviews": 666,
"reviews_data": null,
"photos_count": 486,
"google_id": "0x89c25a4590b8c863:0xc4a4271f166de1e2",
"place_id": "ChIJY8i4kEVawokR4uFtFh8npMQ",
"reviews_link": "https://search.google.com/local/reviews?placeid=ChIJY8i4kEVawokR4uFtFh8npMQ&q=restaurants+brooklyn+usa&authuser=0&hl=en&gl=US",
"reviews_id": "-4277250731621359134",
"photo": "https://lh5.googleusercontent.com/p/AF1QipN_Ani32z-7b9XD182oeXKgQ-DIhLcgL09gyMZf=w800-h500-k-no",
"street_view": "https://lh5.googleusercontent.com/p/AF1QipN_Ani32z-7b9XD182oeXKgQ-DIhLcgL09gyMZf=w1600-h1000-k-no",
"working_hours_old_format": "Monday: 5\\u20139:30PM | Tuesday: Closed | Wednesday: Closed | Thursday: 5\\u20139:30PM | Friday: 5\\u20139:30PM | Saturday: 11AM\\u20133PM,5\\u20139:30PM | Sunday: 11AM\\u20133PM,5\\u20139:30PM",
"working_hours": {
"Monday": "5\\u20139:30PM",
"Tuesday": "Closed",
"Wednesday": "Closed",
"Thursday": "5\\u20139:30PM",
"Friday": "5\\u20139:30PM",
"Saturday": "11AM\\u20133PM,5\\u20139:30PM",
"Sunday": "11AM\\u20133PM,5\\u20139:30PM"
},
"business_status": "OPERATIONAL",
"about": {
"Service options": {
"Dine-in": true,
"Delivery": false,
"Takeout": false
},
"Health & safety": {
"Mask required": true,
"Staff required to disinfect surfaces between visits": true
},
"Highlights": {
"Fast service": true,
"Great cocktails": true,
"Great coffee": true
},
"Popular for": {
"Lunch": true,
"Dinner": true,
"Solo dining": true
},
"Accessibility": {
"Wheelchair accessible entrance": true,
"Wheelchair accessible restroom": true,
"Wheelchair accessible seating": true
},
"Offerings": {
"Coffee": true,
"Comfort food": true,
"Healthy options": true,
"Organic dishes": true,
"Small plates": true,
"Vegetarian options": true,
"Wine": true
},
"Dining options": {
"Dessert": true
},
"Amenities": {
"High chairs": true
},
"Atmosphere": {
"Casual": true,
"Cozy": true,
"Romantic": true,
"Upscale": true
},
"Crowd": {
"Groups": true
},
"Planning": {
"Dinner reservations recommended": true,
"Accepts reservations": true,
"Usually a wait": true
},
"Payments": {
"Credit cards": true
}
},
"range": "$$$",
"reviews_per_score": {
"1": 9,
"2": 10,
"3": 47,
"4": 129,
"5": 471
},
"reserving_table_link": "https://resy.com/cities/ny/colonie",
"booking_appointment_link": "https://resy.com/cities/ny/colonie",
"owner_id": "114275131377272904229",
"verified": true,
"owner_title": "Colonie",
"owner_link": "https://www.google.com/maps/contrib/114275131377272904229",
"location_link": "https://www.google.com/maps/place/Colonie/@40.6908464,-73.9958422,14z/data=!4m8!1m2!2m1!1sColonie!3m4!1s0x89c25a4590b8c863:0xc4a4271f166de1e2!8m2!3d40.6908464!4d-73.9958422"
},
...
]
]
Google Maps Reviews response example:
.. code:: json
{
"name": "Memphis Seoul",
"address": "569 Lincoln Pl, Brooklyn, NY 11238, \\u0421\\u043f\\u043e\\u043b\\u0443\\u0447\\u0435\\u043d\\u0456 \\u0428\\u0442\\u0430\\u0442\\u0438",
"address_street": "569 Lincoln Pl",
"address_borough": "\\u041a\\u0440\\u0430\\u0443\\u043d-\\u0413\\u0430\\u0439\\u0442\\u0441",
"address_city": "Brooklyn",
"time_zone": "America/New_York",
"type": "\\u0420\\u0435\\u0441\\u0442\\u043e\\u0440\\u0430\\u043d",
"types": "\\u0420\\u0435\\u0441\\u0442\\u043e\\u0440\\u0430\\u043d",
"postal_code": "11238",
"latitude": 40.6717258,
"longitude": -73.9579098,
"phone": "+1 347-349-2561",
"rating": 3.9,
"reviews": 32,
"site": "http://www.getmemphisseoul.com/",
"photos_count": 77,
"google_id": "0x89c25bb5950fc305:0x330a88bf1482581d",
"reviews_link": "https://www.google.com/search?q=Memphis+Seoul,+569+Lincoln+Pl,+Brooklyn,+NY+11238,+%D0%A1%D0%BF%D0%BE%D0%BB%D1%83%D1%87%D0%B5%D0%BD%D1%96+%D0%A8%D1%82%D0%B0%D1%82%D0%B8&ludocid=3677902399965648925#lrd=0x89c25bb5950fc305:0x330a88bf1482581d,1",
"reviews_id": "3677902399965648925",
"photo": "https://lh5.googleusercontent.com/p/X_6-QqMphC_ctqs3bHSqFg",
"working_hours": "\\u0432\\u0456\\u0432\\u0442\\u043e\\u0440\\u043e\\u043a: 16:00\\u201322:00 | \\u0441\\u0435\\u0440\\u0435\\u0434\\u0430: 16:00\\u201322:00 | \\u0447\\u0435\\u0442\\u0432\\u0435\\u0440: 16:00\\u201322:00 | \\u043f\\u02bc\\u044f\\u0442\\u043d\\u0438\\u0446\\u044f: 16:00\\u201322:00 | \\u0441\\u0443\\u0431\\u043e\\u0442\\u0430: 16:00\\u201322:00 | \\u043d\\u0435\\u0434\\u0456\\u043b\\u044f: 16:00\\u201322:00 | \\u043f\\u043e\\u043d\\u0435\\u0434\\u0456\\u043b\\u043e\\u043a: 16:00\\u201322:00",
"reviews_per_score": "1: 6, 2: 0, 3: 4, 4: 3, 5: 19",
"verified": true,
"reserving_table_link": null,
"booking_appointment_link": null,
"owner_id": "100347822687163365487",
"owner_link": "https://www.google.com/maps/contrib/100347822687163365487",
"reviews_data": [
{
"google_id": "0x89c25bb5950fc305:0x330a88bf1482581d",
"autor_link": "https://www.google.com/maps/contrib/112314095435657473333?hl=en-US",
"autor_name": "Eliott Levy",
"autor_id": "112314095435657473333",
"review_text": "Very good local comfort fusion food ! \\nKimchi coleslaw !! Such an amazing idea !",
"review_link": "https://www.google.com/maps/reviews/data=!4m5!14m4!1m3!1m2!1s112314095435657473333!2s0x0:0x330a88bf1482581d?hl=en-US",
"review_rating": 5,
"review_timestamp": 1560692128,
"review_datetime_utc": "06/16/2019 13:35:28",
"review_likes": null
},
{
"google_id": "0x89c25bb5950fc305:0x330a88bf1482581d",
"autor_link": "https://www.google.com/maps/contrib/106144075337788507031?hl=en-US",
"autor_name": "fenwar1",
"autor_id": "106144075337788507031",
"review_text": "Great wings with several kinds of hot sauce. The mac and cheese ramen is excellent.\\nUPDATE:\\nReturned later to try the meatloaf slider, a thick meaty slice topped with slaw and a fantastic sauce- delicious. \\nConsider me a regular.\\ud83d\\udc4d",
"review_link": "https://www.google.com/maps/reviews/data=!4m5!14m4!1m3!1m2!1s106144075337788507031!2s0x0:0x330a88bf1482581d?hl=en-US",
"review_rating": 5,
"review_timestamp": 1571100055,
"review_datetime_utc": "10/15/2019 00:40:55",
"review_likes": null
},
...
]
}
Google Play Reviews response example:
.. code:: json
[
[
{
"autor_name": "candice petrancosta",
"autor_id": "113798143822975084287",
"autor_image": "https://play-lh.googleusercontent.com/a-/AOh14GiBRe-07Fmx8MyyVyrZP6TkSGenrs97e1_MG7Z-sWA",
"review_text": "I love FB but the app has been pissing me off lately. It keeps having problems. Now my public page for my business is not letting me see my notifications and it is very annoying. Also, it keeps saying that I have a message when I don\'t. That\'s been a probably for a very long time that comes and goes. I hate seeing the icon showing me that I have a message when I do not \\ud83d\\ude21",
"review_rating": 1,
"review_likes": 964,
"version": "328.1.0.28.119",
"review_timestamp": 1627360161,
"review_datetime_utc": "07/27/2021 04:29:21",
"owner_answer": null,
"owner_answer_timestamp": null,
"owner_answer_timestamp_datetime_utc": null
},
{
"autor_name": "Deren Nickerson",
"autor_id": "117741211939002621733",
"autor_image": "https://play-lh.googleusercontent.com/a/AATXAJwIXPpnodqFFvB9oQEsk8XYFqtkEcfDEmNr704=mo",
"review_text": "Technical support is non-existent whatsoever. Currently hiding behind the guise of a lack of reviewers being able to sit and stare at a computer screen due to a pandemic that forces people to stay at and work from home. Using auto-bots to destroy people\'s only methods of communicating with the outside world. I bet Facebook literally has blood on their hands from all the people who have killed themselves due to having their accounts needlessly disabled for months. Also you can\'t remove the app..",
"review_rating": 1,
"review_likes": 225,
"version": "328.1.0.28.119",
"review_timestamp": 1627304448,
"review_datetime_utc": "07/26/2021 13:00:48",
"owner_answer": null,
"owner_answer_timestamp": null,
"owner_answer_timestamp_datetime_utc": null
},
{
"autor_name": "Tj Symula",
"autor_id": "103540836420891624440",
"autor_image": "https://play-lh.googleusercontent.com/a/AATXAJxW4-DAYNCAgj2OQ41lQadAQtBxX4G_Aqn-Urvc=mo",
"review_text": "I have been logged into facebook for as long as I can remember, but I\'ve been booted somehow. I\'ve sent several emails with no response. All of my logins for multiple sites, I\'ve used the \\"login with facebook\\" option. I have no way to retrieve emails and passwords that I changed years ago, please help me fix this issue, its hindering my ability to use many online features on my phone.",
"review_rating": 1,
"review_likes": 181,
"version": "328.1.0.28.119",
"review_timestamp": 1627307359,
"review_datetime_utc": "07/26/2021 13:49:19",
"owner_answer": null,
"owner_answer_timestamp": null,
"owner_answer_timestamp_datetime_utc": null
},
...
]
]
Emails & Contacts Scraper response example:
.. code:: json
[
{
"query": "outscraper.com",
"domain": "outscraper.com",
"emails": [
{
"value": "service@outscraper.com",
"sources": [
{
"ref": "https://outscraper.com/",
"extracted_on": "2021-09-27T07:45:30.386000",
"updated_on": "2021-11-18T12:59:15.602000"
},
...
]
},
{
"value": "support@outscraper.com",
"sources": [
{
"ref": "https://outscraper.com/privacy-policy/",
"extracted_on": "2021-11-18T12:51:39.716000",
"updated_on": "2021-11-18T12:51:39.716000"
}
]
}
],
"phones": [
{
"value": "12812368208",
"sources": [
{
"ref": "https://outscraper.com/",
"extracted_on": "2021-11-18T12:59:15.602000",
"updated_on": "2021-11-18T12:59:15.602000"
},
...
]
}
],
"socials": {
"facebook": "https://www.facebook.com/outscraper/",
"github": "https://github.com/outscraper",
"linkedin": "https://www.linkedin.com/company/outscraper/",
"twitter": "https://twitter.com/outscraper",
"whatsapp": "https://wa.me/12812368208",
"youtube": "https://www.youtube.com/channel/UCDYOuXSEenLpt5tKNq-0l9Q"
},
"site_data": {
"description": "Scrape Google Maps Places, Business Reviews, Photos, Play Market Reviews, and more. Get any public data from the internet by applying cutting-edge technologies.",
"generator": "WordPress 5.8.2",
"title": "Outscraper - get any public data from the internet"
}
}
]
Contributing
------------
Bug reports and pull requests are welcome on GitHub at
https://github.com/outscraper/outscraper-python.
| text/x-rst | Outscraper | support@outscraper.com | null | null | MIT | outscraper webscraper extractor google api maps search json scrape parser reviews google play amazon | [
"Programming Language :: Python",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Utilities"
] | [] | https://github.com/outscraper/outscraper-python | null | null | [] | [] | [] | [
"requests"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T08:57:00.094786 | outscraper-6.0.2.tar.gz | 26,621 | 0c/53/ee80b647f357a1135adc76ba84b105c932d4b25266fa1b8cda63b3ccd35d/outscraper-6.0.2.tar.gz | source | sdist | null | false | 680f8a26040f11b18bd810206112bf6d | 95fa5ffe10b93b4c2ff52931c85dd344249e54efb7ba16173febb54583cd9980 | 0c53ee80b647f357a1135adc76ba84b105c932d4b25266fa1b8cda63b3ccd35d | null | [
"LICENSE"
] | 676 |
2.4 | wallpaper-fetcher | 0.2.8 | A python script to automatically fetch and apply the daily Bing wallpaper on Linux, Mac or Windows. | [](https://pypi.org/project/wallpaper-fetcher/)
# Wallpaper Fetcher
Small CLI program to automatically download and set the daily Bing wallpaper on Windows, Linux, or Mac.
```console
> wallpaper-fetcher -h
usage: Wallpaper Fetcher [-h] [-f] [-n NUMBER] [-r RES] [-d] [-l LOCALE] [-o OUTPUT] [-v] [--debug]
This little tool fetches the Bing wallpaper of the day and automatically applies it (Windows/Mac/Linux).
options:
-h, --help show this help message and exit
-f, --force Force re-download an already downloaded image (default: False)
-n, --number NUMBER Number of latest wallpapers to download (default: 1)
-r, --res RES Custom resolution. Use --valid-res to see all valid resolutions (default: UHD)
-d, --download Only download the wallpaper(s) without updating the desktop background (default: False)
-l, --locale LOCALE The market to use (default: en-US)
-o, --output OUTPUT Output directory where the wallpapers should be saved (default: None)
-u, --update Automatically update the wallpaper every x seconds (default: False)
-i, --update-interval UPDATE_INTERVAL
The interval in seconds to use to update the wallpaper (default: 300)
-a, --attached Run wallpaper rotation in attached mode (see all logs) (default: False)
-s, --stop Stop the wallpaper rotator (default: False)
-v, --version Prints the installed version number (default: False)
--enable-auto Enable autostart (using the supplied arguments) (default: False)
--disable-auto Remove autostart (default: False)
--valid-res List all valid resolutions (default: False)
--debug Set log level to debug (default: False)
```
In addition, the [executable](https://github.com/Johannes11833/BingWallpaperFetcher/releases) versions of this program support enabling autostart which automatically downloads the current wallpaper of the day on login.
To enable autostart, use `--enable-auto` and to disable it use `--disable-auto`:
```
--enable-auto Enable autostart (default: False)
--disable-auto Remove autostart (default: False)
```
## Credits
- The source code in [set_wallpaper.py](wallpaper_fetcher/set_wallpaper.py) was copied from the [Textual Paint](https://github.com/1j01/textual-paint) project licensed under the [MIT License](https://github.com/1j01/textual-paint?tab=MIT-1-ov-file).
| text/markdown | Johannes Gundlach | 24914225+Johannes11833@users.noreply.github.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"requests<3.0.0,>=2.32.5",
"rich<15.0.0,>=14.3.2"
] | [] | [] | [] | [
"repository, https://github.com/Johannes11833/BingWallpaperFetcher"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:56:37.848987 | wallpaper_fetcher-0.2.8.tar.gz | 12,177 | fc/93/24bb764eedeb2634d3f4b7ede349f42f84e17fe4182d7677974450aed3bd/wallpaper_fetcher-0.2.8.tar.gz | source | sdist | null | false | 7c8c1dffb475ab56e90418a162f756df | 7b4cf72ed47102e759184af45d7b01ebd19bdec7c78f13f53ccc3ebe5977fb52 | fc9324bb764eedeb2634d3f4b7ede349f42f84e17fe4182d7677974450aed3bd | null | [] | 202 |
2.4 | tsagentkit | 2.0.3 | Minimalist time-series forecasting toolkit for coding agents | # tsagentkit
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
Minimalist time-series forecasting toolkit for coding agents.
`tsagentkit` provides:
- a fixed panel data contract
- zero-config TSFM ensemble forecasting
- a small set of pipeline primitives for agent customization
- explicit TSFM model lifecycle control via `ModelCache`
For architecture details and design rationale, see `docs/DESIGN.md`.
## Install
```bash
pip install tsagentkit
```
## Data Contract
Input data must include these required columns:
- `unique_id`: series identifier
- `ds`: timestamp
- `y`: target value
Custom column remapping is not supported. Required columns must be non-null.
```python
import pandas as pd
# Valid input schema (minimal)
raw_df = pd.DataFrame({
"unique_id": ["A"] * 30,
"ds": pd.date_range("2025-01-01", periods=30, freq="D"),
"y": range(30),
})
```
## Quick Start
```python
from tsagentkit import forecast
result = forecast(raw_df, h=7, freq="D")
print(result.df.head())
```
## Standard Pipeline API
```python
from tsagentkit import ForecastConfig, run_forecast
config = ForecastConfig(h=7, freq="D", ensemble_method="median")
result = run_forecast(raw_df, config)
print(result.df.head())
```
## Building-Block Pipeline
```python
from tsagentkit import (
ForecastConfig,
validate,
build_dataset,
make_plan,
fit_all,
predict_all,
ensemble,
)
config = ForecastConfig(h=7, freq="D")
df = validate(raw_df)
dataset = build_dataset(df, config)
models = make_plan(tsfm_only=True)
artifacts = fit_all(models, dataset)
predictions = predict_all(models, artifacts, dataset, h=config.h)
ensemble_df = ensemble(predictions, method=config.ensemble_method, quantiles=config.quantiles)
print(ensemble_df.head())
```
## Performance
`tsagentkit` runs parallel model fitting and prediction by default. Large panels (>50k rows) automatically use streaming ensemble to reduce memory usage.
```python
# Opt-out if needed (e.g., memory-constrained environments)
result = forecast(raw_df, h=7, parallel_fit=False, parallel_predict=False)
```
## Model Cache Lifecycle
`ModelCache` manages loaded TSFM instances and avoids expensive reloads.
```python
from tsagentkit import ModelCache, forecast
from tsagentkit.models.registry import REGISTRY
# Optional preload
models = [m for m in REGISTRY.values() if m.is_tsfm]
ModelCache.preload(models)
# Reuses cached models across calls
result = forecast(raw_df, h=7)
# Explicit release
ModelCache.unload() # all models
# ModelCache.unload("chronos") # one model
# Optional inspection
print(ModelCache.list_loaded())
```
`ModelCache.unload()` semantics:
- releases all `tsagentkit`-owned model references
- calls adapter unload hooks when available
- triggers best-effort backend cleanup (`gc.collect`, CUDA/MPS cache clear)
- cannot reclaim memory still referenced by external user code
## Public API
Top-level (`from tsagentkit import ...`):
- `forecast`, `run_forecast`
- `ForecastConfig`, `ForecastResult`, `RunResult`
- `TSDataset`, `CovariateSet`
- `validate`, `build_dataset`, `make_plan`, `fit_all`, `predict_all`, `ensemble`
- `ModelCache`
- `REGISTRY`, `ModelSpec`, `list_models`
- `LengthAdjustment`, `adjust_context_length`, `validate_prediction_length`
- `get_effective_limits`, `check_data_compatibility`
- `resolve_device`
- `check_health`
- `TSAgentKitError`, `EContract`, `ENoTSFM`, `EInsufficient`, `ETemporal`
Inspection API (`from tsagentkit.inspect import ...`):
- `list_models`
- `check_health`, `HealthReport`
## Errors
Core error types:
- `EContract`: input contract violations
- `ENoTSFM`: TSFM registry invariant violation (internal misconfiguration)
- `EInsufficient`: not enough successful model outputs
- `ETemporal`: temporal integrity violations
```python
from tsagentkit import EContract, forecast
try:
    result = forecast(raw_df, h=7)
except EContract as e:
    print(e.code, e.hint)
    raise
```
## Developer Commands
```bash
uv sync --all-extras
uv run pytest
uv run mypy src/tsagentkit
uv run ruff format src/
# Real TSFM smoke tests (live adapters)
TSFM_RUN_REAL=1 uv run pytest tests/ci/test_real_tsfm_smoke_gate.py tests/ci/test_standard_pipeline_real_smoke.py
```
## License
Apache-2.0
| text/markdown | null | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"chronos-forecasting>=2.0.0",
"datasets>=2.14.0",
"gluonts",
"hierarchicalforecast>=1.0.0",
"huggingface-hub",
"import-linter>=2.0.0",
"mypy>=1.0.0",
"numpy>=1.24.0",
"pandas>=2.0.0",
"pyarrow>=12.0.0",
"pydantic>=2.0.0",
"pytest-cov>=4.0.0",
"pytest>=7.0.0",
"ruff>=0.1.0",
"scipy<1.12.0,>=1.11.3",
"sktime>=0.24.0",
"statsforecast>=1.7.0",
"toolz>=0.12.0",
"torch",
"tqdm>=4.65.0",
"transformers>=4.56.0",
"tsagentkit-patchtst-fm>=1.0.2",
"tsagentkit-timesfm>=1.0.1",
"tsagentkit-uni2ts",
"tsfeatures>=0.4.5",
"tsfresh>=0.20.0",
"typing-extensions>=4.0.0",
"utilsforecast>=0.1.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:56:29.414971 | tsagentkit-2.0.3.tar.gz | 344,913 | c6/f7/eddc30325e0e193b381839a06b908f526ee66770002d3fc690ea2a012e40/tsagentkit-2.0.3.tar.gz | source | sdist | null | false | b020489a6bbe25947151e7795061f1ab | 39c174a0a5e419e4fa975d9c1c829fa9593de6032d2408b6b168b257f0dd2f53 | c6f7eddc30325e0e193b381839a06b908f526ee66770002d3fc690ea2a012e40 | null | [
"LICENSE"
] | 219 |
2.4 | safe-file-walker | 1.0.0 | Secure filesystem traversal for Python with hardlink deduplication, rate limiting, and DoS protection | # Safe File Walker
[](LICENSE)
[](https://www.python.org/downloads/)
[](https://github.com/psf/black)




> ⚠️ **Non-Commercial Use Only** — Commercial use requires explicit permission.
> Contact: [@saicon001 on Telegram](https://t.me/saicon001)
A **production-grade**, **security-hardened** file system walker for Python with built-in protection against common vulnerabilities and resource exhaustion attacks.
## 🛡️ Why Use Safe File Walker?
Standard file walking utilities (`os.walk`, `pathlib.rglob`) are vulnerable to multiple security issues:
- **Path Traversal** attacks via symbolic links
- **Hardlink duplication** bypassing rate limits
- **Resource exhaustion** from infinite recursion or large directories
- **TOCTOU (Time-of-Check-Time-of-Use)** race conditions
- **Memory leaks** from unbounded inode caching
Safe File Walker solves all these problems while providing enterprise-grade features:
✅ **Hardlink deduplication** with LRU cache
✅ **Rate limiting** to prevent I/O DoS
✅ **Symlink sandboxing** with strict boundary enforcement
✅ **TOCTOU-safe** atomic operations
✅ **Real-time statistics** and observability
✅ **Deterministic ordering** for reproducible results
✅ **Zero external dependencies**
## 📊 Feature Comparison
| Feature | Safe File Walker | `os.walk` | GNU `find` | Rust `fd` |
|---------|------------------|-----------|------------|-----------|
| Hardlink deduplication (LRU) | ✅ | ❌ | ❌ | ❌ |
| Rate limiting | ✅ | ❌ | ❌ | ❌ |
| Symlink sandbox | ✅ | ⚠️ | ✅ | ✅ |
| Depth + timeout control | ✅ | ❌ | ⚠️ | ❌ |
| Observability callbacks | ✅ | ❌ | ❌ | ❌ |
| Real-time statistics | ✅ | ❌ | ❌ | ❌ |
| Deterministic order | ✅ | ❌ | ✅ | ✅ |
| TOCTOU-safe | ✅ | ⚠️ | ⚠️ | ✅ |
| Context manager | ✅ | ❌ | ❌ | ❌ |
## 🚀 Installation
```bash
pip install safe-file-walker
```
Or install directly from source:
```bash
pip install git+https://github.com/saiconfirst/safe_file_walker.git
```
## 🚀 Quick Start
```python
from pathlib import Path
from safe_file_walker import SafeFileWalker, SafeWalkConfig
# Scan current directory with safe defaults
config = SafeWalkConfig(root=Path(".").resolve())
with SafeFileWalker(config) as walker:
    for file_path in walker:
        print(file_path)
```
## 📖 Basic Usage
```python
from pathlib import Path
from safe_file_walker import SafeFileWalker, SafeWalkConfig
# Configure the walker
config = SafeWalkConfig(
    root=Path("/secure/data").resolve(),
    max_rate_mb_per_sec=5.0,  # Limit I/O to 5 MB/s
    follow_symlinks=False,    # Never follow symlinks (default)
    timeout_sec=300,          # Stop after 5 minutes
    max_depth=10,             # Only go 10 levels deep
    deterministic=True        # Sort directory entries for reproducibility
)
# Walk files safely
with SafeFileWalker(config) as walker:
    for file_path in walker:
        print(f"Processing: {file_path}")
print(f"Stats: {walker.stats}")
```
## 🔍 Advanced Features
### Observability & Monitoring
Get real-time statistics and audit skipped files:
```python
from pathlib import Path
from safe_file_walker import SafeFileWalker, SafeWalkConfig

def on_skip(path, reason):
    print(f"[AUDIT] Skipped {path}: {reason}")

config = SafeWalkConfig(
    root=Path("/data"),
    on_skip=on_skip,
    max_unique_files=100_000  # LRU cache size for hardlink deduplication
)
with SafeFileWalker(config) as walker:
    for file_path in walker:
        process_file(file_path)  # your per-file handler
# Get final statistics
stats = walker.stats
print(f"Processed {stats.files_yielded} files in {stats.time_elapsed:.2f}s")
```
### Memory-Conscious Walking
For directories with millions of files, disable deterministic ordering to save memory:
```python
config = SafeWalkConfig(
    root=Path("/large-dataset"),
    deterministic=False,      # Process entries in filesystem order (saves memory)
    max_unique_files=50_000   # Smaller LRU cache
)
```
## 🛡️ Security Guarantees
Safe File Walker provides comprehensive protection against common file system traversal vulnerabilities:
### Path Traversal Protection
- Uses `path.is_relative_to(root)` to ensure all paths stay within boundaries
- Resolves symbolic links before boundary checks when `follow_symlinks=True`
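The boundary check described above can be sketched in a few lines (illustrative only; `is_within` is not part of this library's API):

```python
from pathlib import Path

def is_within(root: Path, candidate: Path) -> bool:
    # Resolve symlinks and ".." components first, then enforce the boundary.
    # Path.is_relative_to is available on Python 3.9+.
    return candidate.resolve().is_relative_to(root.resolve())
```

For example, `is_within(Path("/data"), Path("/data/../etc/passwd"))` is `False` because the candidate resolves outside the root.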
### Symlink Attack Prevention
- By default, never follows symbolic links (`follow_symlinks=False`)
- When enabled, validates resolved paths against root boundary
- Prevents symlink cycles with depth limiting
### Hardlink Deduplication
- Tracks files by `(device_id, inode_number)` to prevent processing the same file multiple times
- Uses LRU cache with configurable size to prevent memory exhaustion
- Essential for forensic analysis and backup tools
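As a rough sketch of the idea (not the library's actual implementation), deduplicating by `(st_dev, st_ino)` with an LRU bound looks like this:

```python
import os
from collections import OrderedDict

def walk_dedup(root, cache_size=100_000):
    """Yield each physical file once, keyed by (device, inode)."""
    seen = OrderedDict()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path, follow_symlinks=False)
            key = (st.st_dev, st.st_ino)
            if key in seen:
                seen.move_to_end(key)  # refresh LRU position
                continue
            seen[key] = True
            if len(seen) > cache_size:
                seen.popitem(last=False)  # evict least-recently-seen inode
            yield path
```

Two hardlinks to the same file share an inode, so the second path is skipped.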
### Resource Exhaustion Protection
- **Timeout**: Automatic termination after configured time
- **Depth limiting**: Prevents infinite recursion
- **Rate limiting**: Controls I/O bandwidth consumption
- **Memory bounds**: Configurable LRU cache size
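The rate-limiting idea reduces to a simple byte budget; the sketch below illustrates it and is not the package's actual limiter:

```python
import time

class ByteRateLimiter:
    """Sleep just long enough to keep throughput under a bytes-per-second budget."""

    def __init__(self, max_mb_per_sec: float):
        self.rate = max_mb_per_sec * 1024 * 1024  # bytes per second
        self.start = time.monotonic()
        self.consumed = 0

    def throttle(self, nbytes: int) -> None:
        self.consumed += nbytes
        earliest = self.consumed / self.rate          # time at which this total is allowed
        elapsed = time.monotonic() - self.start
        if earliest > elapsed:
            time.sleep(earliest - elapsed)
```

Calling `throttle(n)` after reading `n` bytes keeps sustained throughput at or below the configured rate.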
### TOCTOU Safety
- Uses `os.scandir()` together with `DirEntry.stat()`, so metadata comes from the directory scan itself
- Narrows the window between path checking and file access (TOCTOU races cannot be fully eliminated, only minimized)
## 📐 API Reference
### `SafeWalkConfig`
Configuration class with the following parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `root` | `Path` | *required* | Absolute path to root directory |
| `max_rate_mb_per_sec` | `float` | `10.0` | Maximum I/O rate in MB/s |
| `follow_symlinks` | `bool` | `False` | Whether to follow symbolic links |
| `timeout_sec` | `float` | `3600.0` | Maximum execution time in seconds |
| `max_depth` | `Optional[int]` | `None` | Maximum directory depth (0 = root only) |
| `max_unique_files` | `int` | `1_000_000` | LRU cache size for hardlink deduplication |
| `deterministic` | `bool` | `True` | Sort directory entries for reproducible order |
| `on_skip` | `Callable[[Path, str], None]` | `None` | Callback for skipped files/directories |
### `WalkStats`
Immutable statistics object with the following fields:
| Field | Type | Description |
|-------|------|-------------|
| `files_yielded` | `int` | Number of files successfully processed |
| `files_skipped` | `int` | Number of files skipped |
| `dirs_skipped` | `int` | Number of directories skipped |
| `bytes_processed` | `int` | Total bytes processed (for rate limiting) |
| `time_elapsed` | `float` | Total execution time in seconds |
### `SafeFileWalker`
Main walker class that implements:
- Context manager protocol (`__enter__`, `__exit__`)
- Iterator protocol (`__iter__`)
- Statistics property (`stats`)
- String representation (`__repr__`)
## 🧪 Testing
The library includes comprehensive tests covering:
- Security vulnerability scenarios
- Performance benchmarks
- Edge cases and error handling
- Cross-platform compatibility
Run tests with:
```bash
pytest tests/ -v
```
## 📈 Performance Characteristics
- **Time Complexity**: O(n log n) worst case (with `deterministic=True`), O(n) best case
- **Space Complexity**: O(max_unique_files + directory_size)
- **System Calls**: ~1.5 per file (optimal for security requirements)
- **Memory Usage**: Configurable and bounded
## 🎯 Use Cases
- **Backup and archival tools**
- **Forensic analysis software**
- **File integrity monitoring systems**
- **Malware scanners**
- **Data migration utilities**
- **Security auditing tools**
## ❓ FAQ
### Why not just use `os.walk` or `pathlib.rglob`?
`os.walk` and `pathlib.rglob` are vulnerable to symlink attacks, hardlink duplication, and resource exhaustion. Safe File Walker adds hardened traversal guarantees while maintaining comparable performance.
### How does hardlink deduplication work?
Each file is identified by `(device_id, inode_number)`. An LRU cache tracks already‑processed files, preventing duplicate work when the same file appears via multiple hardlinks.
### What’s the performance impact of rate limiting?
Rate limiting adds negligible per-file overhead. It prevents I/O-based denial-of-service attacks and ensures predictable resource consumption.
### Can I use Safe File Walker in async code?
Yes. While the walker itself is synchronous, you can run it in a thread pool (e.g., `asyncio.to_thread`) or process files asynchronously inside the loop.
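A minimal pattern for that, using a stand-in blocking walk (substitute your own `SafeFileWalker` loop):

```python
import asyncio
import os

def walk_sync(root):
    # Stand-in for a blocking SafeFileWalker iteration.
    return [os.path.join(dp, f) for dp, _dirs, fs in os.walk(root) for f in fs]

async def main(root):
    # Run the blocking traversal off the event loop (asyncio.to_thread, Python 3.9+).
    return await asyncio.to_thread(walk_sync, root)
```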
### Is it safe to follow symbolic links?
Only if you trust the entire directory tree. By default, symlinks are **never** followed (`follow_symlinks=False`). When enabled, each resolved path is validated against the root boundary.
### What happens on timeout?
The walker raises a `TimeoutError` and stops iteration. Any files already yielded remain valid.
### Can I use this for real‑time malware scanning?
Absolutely. The combination of rate limiting, timeout control, and deterministic ordering makes it ideal for security scanning pipelines.
### How do I contribute?
See [Contributing](#-contributing). We welcome security audits, performance improvements, and new feature proposals.
---
## 💖 Support
If you find this project useful, consider supporting its development:
- **Commercial licensing** for business use: contact [@saicon001 on Telegram](https://t.me/saicon001)
- **Sponsor** the developer via [GitHub Sponsors](https://github.com/sponsors/saiconfirst) (coming soon)
- **Star** the repository to show your appreciation
---
## 📜 License
Non-Commercial License - commercial use requires explicit written permission from the author. See [LICENSE](LICENSE) for full terms.
## 🙏 Acknowledgments
This implementation was developed through rigorous security auditing and iterative refinement, incorporating best practices from:
- Secure coding guidelines (CERT, OWASP)
- File system security research
- Production incident post-mortems
- Performance optimization techniques
## 📬 Contributing
## 💬 Stay in Touch
If you're using **Safe File Walker** in your projects or just want to say hello:
- 🤝 **Morally support** the project by sharing feedback
- 💡 Suggest improvements or report edge cases
- ❤️ **Materially support** via [Kaspi](https://kaspi.kz/qr?id=YOUR_ID) if it saves you time or enhances security
I’d love to hear from you!
[](https://t.me/saicon001)
Contributions are welcome! Please open an issue or pull request on GitHub.
---
**Safe File Walker** — because secure file system traversal shouldn't be an afterthought.
| text/markdown | saiconfirst | saiconfirst <247169576+saiconfirst@users.noreply.github.com> | null | null | Non-Commercial | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Security",
"Topic :: System :: Filesystems"
] | [] | https://github.com/saiconfirst/safe-file-walker | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/saiconfirst/safe_file_walker",
"Repository, https://github.com/saiconfirst/safe_file_walker",
"Issues, https://github.com/saiconfirst/safe_file_walker/issues"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-20T08:54:26.979952 | safe_file_walker-1.0.0.tar.gz | 13,355 | 41/7d/a80f6fac874667187a5167918ff70e10463d0f5cf6910f3990938d71f1cc/safe_file_walker-1.0.0.tar.gz | source | sdist | null | false | 07696a36c873279af7e25aca39912a37 | 341d41dde4261d43c2b549bcf6a10ba1439faed481fb548244ae224bb540b9b1 | 417da80f6fac874667187a5167918ff70e10463d0f5cf6910f3990938d71f1cc | null | [
"LICENSE"
] | 227 |
2.4 | syntaxmatrix | 3.1.2 | SyntaxMatrix: A Framework for building owned AI Platform. | SyntaxMatrix Platform Provisioner (SPP)
SyntaxMatrix Platform Provisioner (SPP) is a self-hosted AI platform framework for building, deploying, and operating production-grade AI applications with licensing, governance, and modular extensibility built in.
SPP is designed for developers, educators, research teams, and enterprises who need full control over their AI systems while retaining a clear upgrade path to commercial capabilities.
What is SyntaxMatrix Platform Provisioner?
SPP is an open-core platform provisioner that enables organisations to:
Deploy AI-powered web applications
Provision AI assistants, ML tooling, and admin panels
Enforce feature access via licensing
Maintain full data ownership and self-hosting control
Each client runs their own independent instance.
SyntaxMatrix does not host or operate client environments.
Key Capabilities
Self-Hosted AI Platform
Deploy on your own infrastructure (cloud or on-prem).
Modular Architecture
Enable or disable platform capabilities via entitlements.
Licensing & Entitlements
Commercial features are controlled through a secure licence system.
Admin Panel & Governance
Manage users, content, data ingestion, and configuration.
AI & ML Tooling
Built-in support for AI assistants, retrieval, analytics, and experimentation.
Production-Ready
Designed for real deployments, not demos.
Open-Core Model
SyntaxMatrix Platform Provisioner follows an open-core approach:
Core framework → MIT Licence
Premium features → Commercial Licence
This allows you to:
Start freely
Self-host fully
Upgrade only when advanced capabilities are required
Licensing Overview
Open-Source Components (MIT)
The core platform is released under the MIT Licence, allowing:
Commercial use
Modification
Redistribution
Commercial Licence (Required for Premium Features)
Certain features require a paid subscription, including (but not limited to):
Advanced AI modules
Enterprise-grade limits
Premium admin capabilities
Commercial support tooling
Licence enforcement features
Commercial features are governed by the SyntaxMatrix Commercial Licence Agreement.
Using Premium Features without a valid licence is not permitted.
Subscription & Billing
Subscriptions are managed via Stripe
Licences are validated remotely
Paid plans take effect immediately
Cancellation applies at the end of the billing period
Grace periods may apply for payment issues
All enforcement is automated and transparent.
Self-Hosting Philosophy
SyntaxMatrix is built on a client-owned infrastructure model:
You deploy your own instance
You own your data
You control your environment
You choose when (and if) to upgrade
This architecture is intentional and central to the product’s design.
Typical Use Cases
AI education platforms
Internal enterprise AI tools
Research environments
AI-powered dashboards
Multi-tenant AI services
Regulated or privacy-sensitive deployments
Installation
SPP is distributed via PyPI.
Installation details are intentionally minimal here to avoid coupling the README to internal APIs.
Full setup instructions are provided in the official documentation.
Documentation
Comprehensive documentation covers:
Architecture & design
Licensing model
Deployment workflows
Client-side integration
Security considerations
Documentation is provided separately and kept version-aligned with releases.
Support & Contact
Website: https://syntaxmatrix.com
Licensing: licence@syntaxmatrix.com
Commercial enquiries: info@syntaxmatrix.com
Legal
© SyntaxMatrix Limited
All rights reserved.
Use of Premium Features requires a valid commercial subscription.
| text/markdown | Bob Nti | info@syntaxmatrix.net | null | null | MIT | null | [
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"Flask>=3.0.3",
"requests>=2.32.3",
"pytz<2026,>=2025.2",
"Markdown>=3.7",
"pypdf>=5.4.0",
"PyPDF2==3.0.1",
"nest-asyncio>=1.6.0",
"python-dotenv>=1.1.0",
"openai>=1.84.0",
"google-genai>=1.19.0",
"anthropic>=0.67.0",
"reportlab>=4.4.3",
"lxml>=6.0.2",
"flask-login>=0.6.3",
"pandas>=2.2.3",
"numpy>=2.0.2",
"matplotlib>=3.9.4",
"plotly>=6.3.0",
"seaborn>=0.13.2",
"scikit-learn>=1.6.1",
"jupyter_client>=8.6.3",
"ipykernel>=6.29.5",
"ipython",
"statsmodels",
"sqlalchemy>=2.0.42",
"cryptography>=45.0.6",
"regex>=2025.11.3",
"tiktoken>=0.12.0",
"xgboost>=2.1.4",
"beautifulsoup4>=4.12.2",
"html5lib>=1.1",
"shap>=0.42.0",
"gunicorn==22.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T08:54:21.642011 | syntaxmatrix-3.1.2.tar.gz | 790,989 | 8b/c2/5cf82dc1ec04acc4a3287cc354fb1618b2f317331479dd466cad0988852f/syntaxmatrix-3.1.2.tar.gz | source | sdist | null | false | 8be10c2b8ee303fd0581c201933ca760 | 08ace8ec3451ec7a71584ae1a8e03b08d3ffb25f1fc8420caf381716c6431118 | 8bc25cf82dc1ec04acc4a3287cc354fb1618b2f317331479dd466cad0988852f | null | [] | 227 |
2.4 | CurrencyConverter | 0.18.15 | A currency converter using the European Central Bank data. | .. image:: https://raw.githubusercontent.com/alexprengere/currencyconverter/master/logo/cc3.png
|actions|_ |cratev|_ |crated|_
.. _actions : https://github.com/alexprengere/currencyconverter/actions/workflows/python-package.yml
.. |actions| image:: https://github.com/alexprengere/currencyconverter/actions/workflows/python-package.yml/badge.svg
.. _cratev : https://pypi.org/project/CurrencyConverter/
.. |cratev| image:: https://img.shields.io/pypi/v/currencyconverter.svg
.. _crated : https://pypi.org/project/CurrencyConverter/
.. |crated| image:: https://static.pepy.tech/badge/currencyconverter
This is a currency converter that uses historical rates against a reference currency (Euro). It is compatible with Python 3.9+.
Currency data sources
---------------------
The default source is the `European Central Bank <https://www.ecb.europa.eu>`_. This is the ECB historical rates for 42 currencies against the Euro since 1999.
It can be downloaded here: `eurofxref-hist.zip <https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip>`_.
The converter can use different sources as long as the format is the same.
Note that the currency converter does not query the API in real time, to avoid the overhead of the HTTP request. It uses embedded data in the library, which might not be up to date.
If you need the latest data, please refer to the *data* section.
Installation
------------
You can install directly after cloning:
.. code-block:: bash

    $ pip install --user .

Or use the Python package:

.. code-block:: bash

    $ pip install --user currencyconverter
Command line tool
-----------------
After installation, you should have ``currency_converter`` in your ``$PATH``:
.. code-block:: bash

    $ currency_converter 100 USD --to EUR
    100.000 USD = 87.512 EUR on 2016-05-06
Python API
----------
Create once the currency converter object:
.. code-block:: python

    >>> from currency_converter import CurrencyConverter
    >>> c = CurrencyConverter()
Convert from ``EUR`` to ``USD`` using the last available rate:
.. code-block:: python

    >>> c.convert(100, 'EUR', 'USD')  # doctest: +SKIP
    137.5...
Default target currency is ``EUR``:
.. code-block:: python

    >>> c.convert(100, 'EUR')
    100.0
    >>> c.convert(100, 'USD')  # doctest: +SKIP
    72.67...
You can change the date of the rate:
.. code-block:: python

    >>> from datetime import date  # datetime works too
    >>> c.convert(100, 'EUR', 'USD', date=date(2013, 3, 21))
    129...
Data
~~~~
You can use your own currency file, as long as it has the same format (ECB):
.. code-block:: python

    from currency_converter import CurrencyConverter, ECB_URL, SINGLE_DAY_ECB_URL

    # Load the packaged data (might not be up to date)
    c = CurrencyConverter()

    # Download the full history, this will be up to date. Current value is:
    # https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip
    c = CurrencyConverter(ECB_URL)

    # Download only the latest available day. Current value is:
    # https://www.ecb.europa.eu/stats/eurofxref/eurofxref.zip
    c = CurrencyConverter(SINGLE_DAY_ECB_URL)

    # Load your custom file
    c = CurrencyConverter('./path/to/currency/file.csv')
Since the raw data is updated only once a day, it might be better to only download it once a day:
.. code-block:: python

    import os.path as op
    import urllib.request
    from datetime import date

    from currency_converter import ECB_URL, CurrencyConverter

    filename = f"ecb_{date.today():%Y%m%d}.zip"
    if not op.isfile(filename):
        urllib.request.urlretrieve(ECB_URL, filename)
    c = CurrencyConverter(filename)
Fallbacks
~~~~~~~~~
Some rates are missing:
.. code-block:: python

    >>> c.convert(100, 'BGN', date=date(2010, 11, 21))
    Traceback (most recent call last):
    RateNotFoundError: BGN has no rate for 2010-11-21
But we have a fallback mode for those, using a linear interpolation of the
closest known rates, as long as you ask for a date within the currency date bounds:
.. code-block:: python

    >>> c = CurrencyConverter(fallback_on_missing_rate=True)
    >>> c.convert(100, 'BGN', date=date(2010, 11, 21))
    51.12...
The fallback method can be configured with the ``fallback_on_missing_rate_method`` parameter, which currently supports ``"linear_interpolation"`` and ``"last_known"`` values.
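For intuition, the linear interpolation fallback amounts to weighting the two nearest known rates by their distance in days (a sketch of the idea, not the library's internals):

.. code-block:: python

    from datetime import date

    def interpolate_rate(d, d0, r0, d1, r1):
        # Weight the surrounding known rates by how close d is to each bound.
        weight = (d - d0).days / (d1 - d0).days
        return r0 + weight * (r1 - r0)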
We also have a fallback mode for dates outside the currency bounds:
.. code-block:: python

    >>> c = CurrencyConverter()
    >>> c.convert(100, 'EUR', 'USD', date=date(1986, 2, 2))
    Traceback (most recent call last):
    RateNotFoundError: 1986-02-02 not in USD bounds 1999-01-04/2016-04-29
    >>>
    >>> c = CurrencyConverter(fallback_on_wrong_date=True)
    >>> c.convert(100, 'EUR', 'USD', date=date(1986, 2, 2))  # fallback to 1999-01-04
    117.89...
Decimal
~~~~~~~
If you need exact conversions, you can use the ``decimal`` option to use ``decimal.Decimal`` internally when parsing rates.
This will slow down the load time by a factor of 10, though.
.. code-block:: python

    >>> c = CurrencyConverter(decimal=True)
    >>> c.convert(100, 'EUR', 'USD', date=date(2013, 3, 21))
    Decimal('129.100')
Other attributes
~~~~~~~~~~~~~~~~
+ ``bounds`` lets you know the first and last available date for each currency
.. code-block:: python

    >>> first_date, last_date = c.bounds['USD']
    >>> first_date
    datetime.date(1999, 1, 4)
    >>> last_date  # doctest: +SKIP
    datetime.date(2016, 11, 14)
+ ``currencies`` is a set containing all available currencies
.. code-block:: python

    >>> c.currencies  # doctest: +SKIP
    set(['SGD', 'CAD', 'SEK', 'GBP', ...
    >>> 'AAA' in c.currencies
    False
    >>> c.convert(100, 'AAA')
    Traceback (most recent call last):
    ValueError: AAA is not a supported currency
| text/x-rst | Alex Prengère | alex.prengere@gmail.com | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | https://github.com/alexprengere/currencyconverter | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:54:03.965743 | currencyconverter-0.18.15.tar.gz | 3,835,069 | 4c/d1/5b29f48cdb7082eeebae9a072fead7abbb96423618dd042273ecc4dee995/currencyconverter-0.18.15.tar.gz | source | sdist | null | false | 0d743bffaea153e5363cf2914ce93931 | bc9290e513f76f6e5881bc4a12eacefa04e762a2ddedd33ab1b2e00c91f073d4 | 4cd15b29f48cdb7082eeebae9a072fead7abbb96423618dd042273ecc4dee995 | null | [
"LICENSE"
] | 0 |
2.4 | VertexEngine | 1.5.0 | A high-performance SDK for Python Development. | # VertexEngine/Vertex
VertexEngine is a GUI and Game Engine for python applications, it works best if you use py installer
## Offical Extensions:
[VertexEngine-WebEngine](https://pypi.org/project/VertexEngine-WebEngine)
[VertexEngine-CLI](https://pypi.org/project/VertexEngine-CLI/1.0/)
## Help
- See the [Documentation](https://vertexenginedocs.netlify.app/) for help.
- Support Email: FinalFacility0828@gmail.com
## Community
Discord is out NOW!
[Discord Server](https://discord.com/channels/1468208686869643327/1468208687670890588)
## Change Logs (1.0rc1 - 1.5.0), NEW!
### 1.5.0
- Added PYGAME WIDGETS! (from VertexEngine.VertexWidgets import PygameVWidgets)
### 1.5rc5
- FINAL RC!
- WARNING: ALL HOMEPAGE ACCOUNTS WILL BE CLEARED, BUT DONATIONS WILL BE KEPT IN THE DONATION ACCOUNT.
- Added more Docstrings!
- Added moving the FancyButton and Text classes respectively.
- Pls use the Extensions, they're pretty lonely :(
### 1.5rc4
- Added More Docstrings so documentation won't be as stressful for me to do :)
- Expanded InputSystem!
- Fixed GameEngine!
### 1.5rc2
- Now Allows custom BG colors in GameEngine!
- Allows resizing of AssetManager images!
### 1.5rc1
- Homepage only updates on major updates now.
- Fixed asset manager bugs
### 1.4.0
- Fixed a lot of bugs
- Updated outdated templates!
- Docs update!
### 1.4rc2
- Final RC!
- Fixed internals for the WebEngine and CLI.
- Added 1 New Module:
- OptionBtnWidget
### 1.4rc1
- 21 Stabilization Fixes!
- Added builtin KeyDown/Up events to GameEngine!
- Stripped unnecessary stuff.
### 1.3.0
- SUPPORT FOR PYTHON 3.14!
- DOCUMENTATION UPDATED!
### 1.3rc4
- DISCORD SERVER IS OUT NOW!
- FINAL RC!
### 1.3rc2
- Added 2 new VertexUI APIs:
- AutoFontLabel > Automatically Adapts to screen size!
- Card > A mini VBox in card layout for card based UI!
### 1.3rc1
- No more docs for RCs as the new functions are experimental
- Moved SimpleGUI and AdvancedVWidgets into VertexEngine.VertexWidgets
- 10 Days before the discord server is launched!
- Added:
- Responsive Layout (Adapts to screen size)
- Centered Layout (Anchors child widgets to the center)
### 1.2rc4
- Final RC!
- Docs will be updated on Feb 04, 2026.
- New Library!:
~ VertexEngine.AdvancedWidgets
### 1.2rc3
- Added 1 New Library:
~ HBox, VBox but horizontally not vertically.
### 1.2rc1
- Added a new Extension: [VertexEngine-CLI](https://pypi.org/project/VertexEngine-CLI/1.0/)
### 1.1
- Bugfixed WebEngine
- Compression
### 1.1rc4
- FINAL RC!
- Added 2 New Libraries: ~ VertexEngine.WebEngine ~ VertexEngine.WebView
### 1.1rc3
Revamped Input System!
DEMO GAMES!
Updated Scene Docs
### Version 1.1rc2
Documentation Expansion! ~ Fixed the Changelogs
New Input System!
Old System (Qt) deprecated and not recommended for use.
### Version 1.1rc1
New Library! (And Modules)!: ~ InputSystem ~ Buttons (Mouse and Widget) ~ Keyboard Input
### Version 1.0.1
Added Changelogs!
### Version 1.0
Added 2 New Libraries: ~ VertexEngine.SimpleGUI ~ VertexEngine.VertexScreenModifiers
### Version 1.0rc2
Final Release Candidate
Added 1 New Library!: ~ VertexUI
### Version 1.0rc1
Size Compression
Added 1 New Library!: ~ VertexScreen
## How to install Pyinstaller
Step 1. Type in: `pip install pyinstaller`
Step 2. Wait a few minutes; don't worry if it takes an hour or more, it will finish.
Step 3. To use PyInstaller, type: `python -m PyInstaller --onefile *.py`
Available flags:
- `--noconsole` > disables the console when you run the app
- `--onefile` > compresses all of the code into one file
- `--icon` > the `*.ico` file you type after it will be set as the app icon
## How to install VertexEngine/Vertex:
Step 1: Type in pip install VertexEngine
Step 2: Wait a few min, don't worry if it takes 1 hr or more, it will finish
Pygame and PyQt6 are compatible with Vertex, so you can use Pygame's collision system or PyQt6's UI system in VertexEngine.
## Dependencies
Vertex has heavy dependencies since it's a game engine. The requirements are:
| Dependency | Version |
|------------------|--------------------------------------|
| PyQt6 | >=6.7 |
| Pygame | >=2.0 |
| Python | >=3.10 |
## About Me ❔
I am a solo developer in Diliman, Quezon City, who makes things for fun :) 77 Rd 1, 53 Rd 3 Bg-Asa QC
## 📄 License
VertexEngine/Vertex is governed by the MIT License. This license allows others to tweak the code. However, I would like my name to be in the credits if you choose this as the starting ground for your next library.
| text/markdown | null | Tyrel Miguel <annbasilan0828@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"PyQt6>=6.7",
"pygame>=2.6; python_version < \"3.14\"",
"pygame-ce>=2.5.0; python_version >= \"3.14\""
] | [] | [] | [] | [
"Homepage, https://vertexengine.onrender.com",
"Documentation, https://vertexenginedocs.netlify.app/",
"Source, https://github.com/TyrelGomez/VertexEngine-Code",
"Issues, https://github.com/TyrelGomez/VertexEngine-Code/issues",
"Discord, https://discord.com/channels/1468208686869643327/1468208687670890588"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T08:54:00.850580 | vertexengine-1.5.0.tar.gz | 27,781 | bb/cf/9fb9d5aaec0305fbf5f56e69b3bd48b512f5b9ecf0cb5566a5413ff4f999/vertexengine-1.5.0.tar.gz | source | sdist | null | false | 6e3c64b0cb9d82bcc43ecc92db247082 | c36de53344bc6c8fa063ed90195d21b675678b65bef52a6e4537fbeae19dca1d | bbcf9fb9d5aaec0305fbf5f56e69b3bd48b512f5b9ecf0cb5566a5413ff4f999 | null | [
"LICENSE"
] | 0 |
2.4 | dmi-reader | 1.0.0 | Cross-platform DMI hardware identifier reader without root/admin privileges | # `dmi-reader` — Cross-Platform DMI Hardware Identifier Reader
<div align="center">
[](LICENSE)
[](https://www.python.org/downloads/)
[](https://pypi.org/project/dmi-reader/)




Securely retrieve unique hardware identifiers (DMI, UUID, serial numbers) across Linux, Windows, and macOS — **without requiring root/admin privileges**.
</div>
---
## 🚀 Features
- ✅ **Cross-Platform**: Works on Linux, Windows, macOS
- ✅ **No Root/Admin Required**: Reads DMI safely without elevated permissions
- ✅ **Thread-Safe Caching**: Efficient, avoids repeated system calls
- ✅ **Container-Aware**: Automatically detects Docker, Podman, etc.
- ✅ **Graceful Degradation**: Handles missing or malformed DMI data gracefully
- ✅ **Fallback Support**: Uses `machine-id`, `hostname` when DMI unavailable
- ✅ **Production-Ready**: Robust error handling, logging, type hints
---
## 📦 Installation
```bash
pip install dmi-reader
```
### Dependencies
- On **Windows**: automatically installs `wmi` and `pywin32`
- On **Linux/macOS**: no additional dependencies needed
---
## 💡 Usage
### Basic usage
```python
from dmi_reader import get_dmi_info
# Get all available DMI information (with fallback)
info = get_dmi_info(include_fallback=True)
print(info)
# Example output:
# {
# 'system_uuid': '123e4567-e89b-12d3-a456-426614174000',
# 'board_serial': 'ABC123456',
# 'chassis_serial': 'CHS789012',
# 'product_name': 'VMware Virtual Platform',
# 'manufacturer': 'VMware, Inc.'
# }
# Strict mode – only DMI data, no fallback
info = get_dmi_info(include_fallback=False)
```
### Advanced: caching and threading
The library is thread‑safe and caches results automatically. If you need fresh data on every call, use `force_refresh`:
```python
from dmi_reader import get_dmi_info
# Force a fresh read (bypass cache)
info = get_dmi_info(force_refresh=True)
```
### Integration with logging
```python
import logging
from dmi_reader import get_dmi_info
logging.basicConfig(level=logging.DEBUG)
info = get_dmi_info()
# The library logs at DEBUG level, helpful for debugging
```
### Use case: device fingerprinting
```python
from dmi_reader import get_dmi_info
import hashlib
import json
def device_fingerprint() -> str:
    info = get_dmi_info(include_fallback=True)
    # Sort keys for consistent hashing
    data = json.dumps(info, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()[:16]
print(f"Device fingerprint: {device_fingerprint()}")
```
### Use in web applications (FastAPI example)
```python
from fastapi import FastAPI
from dmi_reader import get_dmi_info
app = FastAPI()
@app.get("/system/info")
async def system_info():
    return get_dmi_info(include_fallback=True)
```
---
## 🤔 Why dmi‑reader?
| Feature | dmi‑reader | `dmidecode` (Linux) | `wmic` (Windows) | `system_profiler` (macOS) |
|---------|------------|---------------------|------------------|---------------------------|
| **No root/admin** | ✅ Yes | ❌ Requires `sudo` | ⚠️ May need admin | ✅ Yes |
| **Cross‑platform** | ✅ Linux, Windows, macOS | ❌ Linux only | ❌ Windows only | ❌ macOS only |
| **Python API** | ✅ Clean, typed | ❌ Shell parsing | ❌ Shell parsing | ❌ Shell parsing |
| **Container‑aware** | ✅ Skips in containers | ❌ May fail | ❌ N/A | ❌ N/A |
| **Thread‑safe** | ✅ With caching | ❌ Process‑based | ❌ Process‑based | ❌ Process‑based |
| **Graceful fallback** | ✅ Uses machine‑id, hostname | ❌ Fails | ❌ Fails | ❌ Fails |
**dmi‑reader** is the only library in this comparison that provides a **uniform and secure** Python API for reading hardware identifiers across all major platforms, with no external dependencies outside of Windows (where `wmi`/`pywin32` are used).
---
## 🛠️ Supported Platforms
| Platform | Method | Requires Root/Admin |
|----------|--------|--------------------|
| Linux | `/sys/class/dmi/id/` | No |
| Windows | WMI (with timeout) | No |
| macOS | `system_profiler` | No |
---
## ⚠️ License & Commercial Use
This software is **free for personal and non-commercial use only**.
**Any commercial use** — including but not limited to:
- Integration into commercial products
- Use in business operations
- Distribution as part of paid services
- Use in corporate environments
**requires explicit written permission from the author.**
To request a commercial license, contact:
**Telegram**: [@saicon001](https://t.me/saicon001)
---
## 🧪 Testing
```bash
# Clone the repository
git clone https://github.com/saiconfirst/dmi_reader.git
cd dmi_reader
# Install dependencies
pip install -r requirements.txt
# Run test script
python test.py
```
---
## 🤝 Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
---
## ❓ FAQ
### Is this library safe to use in production?
Yes. It has zero external dependencies (except `wmi`/`pywin32` on Windows, which are auto‑installed), handles errors gracefully, and is thread‑safe.
### Does it work inside Docker containers?
Yes. It automatically detects container environments (Docker, Podman, etc.) and skips DMI reading (which would fail). Fallback identifiers (`machine‑id`, `hostname`) are used instead.
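Container detection of this kind is commonly implemented by probing well-known runtime markers. The sketch below shows the general approach under those assumptions; `running_in_container` is a hypothetical name, and the library's exact checks may differ:

```python
import os


def running_in_container() -> bool:
    """Heuristic container check (illustrative, not the library's exact logic):
    /.dockerenv exists under Docker, and /proc/1/cgroup often names the
    container runtime under cgroup v1."""
    if os.path.exists("/.dockerenv"):
        return True
    try:
        with open("/proc/1/cgroup") as f:
            content = f.read()
    except OSError:
        return False
    return any(marker in content for marker in ("docker", "podman", "kubepods", "lxc"))
```

Note that this heuristic is best-effort: cgroup v2 hosts may expose none of these markers, which is why a fallback identifier strategy is still needed.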
### Can I use this for license key generation?
Yes, many developers use hardware identifiers as part of license validation. However, note that DMI data can be spoofed in virtualized environments. Use it as **one factor** in a multi‑factor validation scheme.
### How does it compare to reading `/sys/class/dmi/id` directly?
Some files under `/sys/class/dmi/id` (serial numbers in particular) are readable only by root; dmi‑reader reads the world‑readable entries and falls back where access is denied, so it **never requires elevated privileges**. It also adds caching, container detection, and fallback mechanisms.
### What happens if DMI data is missing or invalid?
The library falls back to `machine‑id` (Linux), `hostname` (Windows/macOS), or a combination. You can control this with `include_fallback=False` to get only DMI data.
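A fallback chain similar in spirit can be sketched in a few lines; this is illustrative only (`best_effort_machine_id` is a hypothetical name, and the library's internal order may differ):

```python
import socket


def best_effort_machine_id() -> str:
    """Try the DMI product UUID, then the systemd machine-id, then the
    hostname. Each source is skipped if the file is missing or unreadable."""
    for path in ("/sys/class/dmi/id/product_uuid", "/etc/machine-id"):
        try:
            with open(path) as f:
                value = f.read().strip()
        except OSError:
            continue  # unreadable or missing: fall through to the next source
        if value:
            return value
    return socket.gethostname()  # last resort, always available
</antml>```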
### Is there a performance overhead?
The first call may take a few milliseconds (WMI on Windows is slower). Subsequent calls hit an in‑memory cache and return in microseconds.
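The general pattern behind a thread‑safe, refreshable cache can be sketched as below. This is a minimal illustration of the technique, not the library's actual internals (`expensive_read` and `cached_info` are hypothetical names):

```python
import threading
import time

_lock = threading.Lock()
_cache = None


def expensive_read() -> dict:
    """Stand-in for a slow WMI/sysfs query."""
    time.sleep(0.01)
    return {"system_uuid": "stub"}


def cached_info(force_refresh: bool = False) -> dict:
    """Lock-protected cache: the slow read runs once, later calls are
    near-instant, and force_refresh=True bypasses the cached value."""
    global _cache
    if _cache is not None and not force_refresh:
        return _cache  # fast path, no lock needed for a filled cache
    with _lock:
        if _cache is None or force_refresh:
            _cache = expensive_read()
        return _cache
```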
### Can I contribute?
Absolutely! See [Contributing](#-contributing). We welcome bug reports, feature requests, and pull requests.
---
## 💖 Support
If you find this project useful, consider supporting its development:
- **Commercial licensing** for business use: contact [@saicon001 on Telegram](https://t.me/saicon001)
- **Sponsor** the developer via [GitHub Sponsors](https://github.com/sponsors/saiconfirst) (coming soon)
- **Star** the repository to show your appreciation
---
## 📜 License
This project is licensed under a Non-Commercial License. Commercial use requires explicit written permission from the author. See the [LICENSE](LICENSE) file for full terms.
---
<div align="center">
Made with ❤️ by [saiconfirst](https://github.com/saiconfirst)
For commercial licensing inquiries: **[@saicon001](https://t.me/saicon001)**
</div>
| text/markdown | null | saiconfirst <247169576+saiconfirst@users.noreply.github.com> | null | null | Non-Commercial | null | [
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Hardware",
"Topic :: Security"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"wmi>=1.5.1; sys_platform == \"win32\"",
"pywin32>=227; sys_platform == \"win32\"",
"pytest>=7.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"flake8>=5.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/saiconfirst/dmi_reader",
"Repository, https://github.com/saiconfirst/dmi_reader",
"Issues, https://github.com/saiconfirst/dmi_reader/issues"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-20T08:54:00.565208 | dmi_reader-1.0.0-py3-none-any.whl | 7,708 | bf/8d/6ee34c3e042d52aa2b6779b8cc97ee972a2fd14b8b4c1395a6a77219380e/dmi_reader-1.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6aed86518e36aee718ab9fdfc24b654b | 4c90767885d0c4ab5d09b5722fa08d0a35cf402ebb7a6a0c46019f7f0e484b65 | bf8d6ee34c3e042d52aa2b6779b8cc97ee972a2fd14b8b4c1395a6a77219380e | null | [
"LICENSE"
] | 104 |
2.4 | jmcomic | 2.6.14 | Python API For JMComic (禁漫天堂) | <!-- Top title & stats badges -->
<div align="center">
<h1 style="margin-top: 0" align="center">Python API for JMComic</h1>
<p align="center">
<strong>A Python API for accessing JMComic (web & mobile endpoints), with an integrated GitHub Actions downloader 🚀</strong>
</p>
[](https://github.com/hect0x7)
[](https://github.com/hect0x7/JMComic-Crawler-Python/stargazers)
[](https://github.com/hect0x7/JMComic-Crawler-Python/forks)
[](https://github.com/hect0x7/JMComic-Crawler-Python/releases/latest)
[](https://pepy.tech/projects/jmcomic)
[](https://github.com/hect0x7/JMComic-Crawler-Python)
</div>
> This project wraps a Python API for crawling JM.
>
> With just a few lines of Python code, you can download albums from JM to your local machine, with the images already decoded and processed.
>
> **Friendly reminder: be kind to JM. To reduce the load on its servers, please don't crawl too many albums at once 🙏🙏🙏**.
[Tutorial: download JMComic albums with GitHub Actions](./assets/docs/sources/tutorial/1_github_actions.md)
[Tutorial: export and download your JMComic favorites](./assets/docs/sources/tutorial/10_export_favorites.md)

## Project Overview
The core feature of this project is downloading albums.
Around that, it provides a framework that is easy to use, easy to extend, and able to satisfy specialized download needs.
The core functionality is stable, and the project is in maintenance mode.
Beyond downloading, other JMComic endpoints are implemented on demand. Currently available:
- Login
- Album search (all search options supported)
- Image download and decoding
- Categories / rankings
- Album / chapter details
- Personal favorites
- API encryption/decryption (the mobile app's API)
## Installation
> ⚠ If you don't have Python installed, install it first (version >= 3.7 required; [download from python.org](https://www.python.org/downloads/))
* Install from the official PyPI index (recommended; the same command also upgrades)
```shell
pip install jmcomic -i https://pypi.org/simple -U
```
* Install from source
```shell
pip install git+https://github.com/hect0x7/JMComic-Crawler-Python
```
## Quick Start
### 1. Download an album
The following code downloads every chapter of album `JM123` as images:
```python
import jmcomic  # import the module (install it first)
jmcomic.download_album('123')  # pass the album id to download the whole album locally
```
The `download_album` method also takes an `option` parameter that controls the download configuration: JM domains, network proxy, image format conversion, plugins, and more.
You will likely need some of these options. The recommended workflow is to create an option from a configuration file and download with it, as described next:
### 2. Download an album with an option file
1. First, create a configuration file, e.g. `option.yml`.
This file has a specific syntax; see the [option file guide](./assets/docs/sources/option_file_syntax.md).
As a demonstration, to convert downloaded images to PNG, put the following in `option.yml`:
```yml
download:
  image:
    suffix: .png  # convert downloaded images to PNG
```
2. Then run the following Python code:
```python
import jmcomic
# create the option object
option = jmcomic.create_option_by_file('path to your option file, e.g. D:/option.yml')
# download an album using the option
jmcomic.download_album(123, option)
# equivalent: option.download_album(123)
```
### 3. Command line
> If you only want to download albums, the command line is even simpler than the approaches above.
>
> For example, on Windows, press Win+R and type `jmcomic xxx` to download an album.
Examples:
Download album 123:
```sh
jmcomic 123
```
Download album 123 and chapter 456 at the same time:
```sh
jmcomic 123 p456
```
The command-line mode also supports a custom option, via an environment variable or a command-line argument:
a. Pass the option file path with the `--option` argument
```sh
jmcomic 123 --option="D:/a.yml"
```
b. Set the environment variable `JM_OPTION_PATH` to the option file path (recommended)
> See your OS documentation for setting environment variables, or use the PowerShell command `setx JM_OPTION_PATH "D:/a.yml"` (takes effect after a restart)
```sh
jmcomic 123
```
## Advanced Usage
See the documentation home page → [jmcomic.readthedocs.io](https://jmcomic.readthedocs.io/zh-cn/latest)
(Tip: jmcomic offers many download options; for most download needs you can find a relevant option or plugin.)
## Highlights
- **Bypasses Cloudflare's anti-bot protection**
- **Implements the latest encryption/decryption algorithm of the JMComic app API (1.6.3)**
- Multiple ways to use it:
  - GitHub Actions: enter an album id on the web page to download ([tutorial: download JMComic albums with GitHub Actions](./assets/docs/sources/tutorial/1_github_actions.md))
  - Command line: no Python code needed, simple to use ([tutorial: download JMComic albums from the command line](./assets/docs/sources/tutorial/2_command_line.md))
  - Python code: the most fundamental and powerful way, requires some Python knowledge
- Supports both the **web** and **mobile** client implementations, switchable via configuration (**the mobile API has no IP restrictions and better compatibility; the web API is region-restricted but faster**)
- **Automatic retry and domain switching**
- **Multi-threaded downloads** (down to one thread per image, extremely efficient)
- **Highly configurable**
  - Works out of the box with no configuration
  - Configuration can be generated from files in several formats
  - Configurable points include: `request domains` `client implementation` `disk cache` `concurrent chapter/image downloads` `image format conversion` `download path rules` `request metadata (headers, cookies, proxies)` `Traditional/Simplified Chinese conversion`
and more
- **Highly extensible**
  - Custom callbacks before/after album/chapter/image downloads
  - Custom classes: `Downloader (scheduling)` `Option (configuration)` `Client (requests)` `entity classes`, etc.
  - Custom logging and exception listeners
- **Plugin support for easily extending functionality, and for using others' plugins. Built-in plugins include**:
  - login plugin
  - hardware usage monitor plugin
  - download-new-chapters-only plugin
  - archive (zip) plugin
  - download-images-by-suffix plugin
  - QQ email notification plugin
  - browser-cookies auto-login plugin
  - export-favorites-to-CSV plugin
  - merge-images-into-PDF plugin
  - merge-images-into-long-PNG plugin
  - duplicate-file detection and removal plugin
  - view-local-chapters-in-browser plugin
## Notes
* Python >= 3.7, 3.9+ recommended, since jmcomic's dependencies may drop support for versions below 3.9.
* This is a personal project; docs and examples may lag behind. Feel free to open an issue with questions.
## Repository Layout
* .github: GitHub Actions configuration files
* assets: non-code resource files
  * docs: project documentation
  * option: option (configuration) files
* src: source code
  * jmcomic: the `jmcomic` module
* tests: test code, using unittest
* usage: example / usage code
## Acknowledgements
### Image segmentation algorithm + JMComic mobile API
<a href="https://github.com/tonquer/JMComic-qt">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github-readme-stats.vercel.app/api/pin/?username=tonquer&repo=JMComic-qt&theme=radical" />
<source media="(prefers-color-scheme: light)" srcset="https://github-readme-stats.vercel.app/api/pin/?username=tonquer&repo=JMComic-qt" />
<img alt="Repo Card" src="https://github-readme-stats.vercel.app/api/pin/?username=tonquer&repo=JMComic-qt" />
</picture>
</a>
| text/markdown | hect0x7 | hect0x7 <93357912+hect0x7@users.noreply.github.com> | null | null | MIT License
Copyright (c) 2023 hect0x7
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| python, jmcomic, 18comic, 禁漫天堂, NSFW | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows"
] | [] | https://github.com/hect0x7/JMComic-Crawler-Python | null | >=3.7 | [] | [] | [] | [
"commonx>=0.6.38",
"curl-cffi",
"pillow",
"pycryptodome",
"pyyaml"
] | [] | [] | [] | [
"Homepage, https://github.com/hect0x7/JMComic-Crawler-Python",
"Documentation, https://jmcomic.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:53:32.843088 | jmcomic-2.6.14.tar.gz | 70,494 | 70/b9/cd9caf23bbd97e376d7d133c4edbb48450946fd7ae52a7fe9aa9d94cedbb/jmcomic-2.6.14.tar.gz | source | sdist | null | false | 6ec6657a8a21c960f680e3d9900b2e18 | 2527a5891494948925df33e17f6e520760679740542eb3fca727bb5669026423 | 70b9cd9caf23bbd97e376d7d133c4edbb48450946fd7ae52a7fe9aa9d94cedbb | null | [
"LICENSE"
] | 1,167 |
2.1 | taurus-pyqtgraph | 0.9.8 | Taurus extension providing pyqtgraph-based widgets | taurus_pyqtgraph is an extension for the Taurus package.
It adds the taurus.qt.qtgui.tpg submodule which provides pyqtgraph-based
widgets.
| null | Taurus Community | null | Taurus Community | tauruslib-devel@lists.sourceforge.net | LGPLv3+ | Taurus, pyqtgraph, plugin, widgets | [
"Development Status :: 3 - Alpha",
"Environment :: X11 Applications :: Qt",
"Environment :: Win32 (MS Windows)",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Operating System :: Unix",
"Operating System :: OS Independent",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: User Interfaces",
"Topic :: Software Development :: Widget Sets"
] | [
"Linux"
] | https://gitlab.com/taurus-org/taurus_pyqtgraph | https://gitlab.com/taurus-org/taurus_pyqtgraph | >=3.5 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.23 | 2026-02-20T08:53:00.454569 | taurus_pyqtgraph-0.9.8.tar.gz | 1,730,244 | 0e/1e/dfe3badadf64ead1e2be38560e7feb68515f61e7c58ce9701d307258f8ed/taurus_pyqtgraph-0.9.8.tar.gz | source | sdist | null | false | 0e8b2e8911f101b3d8ade24a175ceafe | e86eb8622bd172cef2f1c234eca8fcb319e701947035826fa0c9daadf8864a36 | 0e1edfe3badadf64ead1e2be38560e7feb68515f61e7c58ce9701d307258f8ed | null | [] | 209 |
2.4 | KratosSwimmingDEMApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Swimming DEM Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosdemapplication==10.4.0",
"kratosfluiddynamicsapplication==10.4.0",
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:44.744073 | kratosswimmingdemapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 1,866,888 | 9f/89/6c0c9621315952c7c5f1bd8e4841beb67ebf266151fa23ba41eca91cf975/kratosswimmingdemapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 97e62ee059809dcdda2c5af4443af0f2 | ab3f107c06bb7bd14d9f9bfb1f92d8455238e4329a1fa33590bb5c8207ffbfe6 | 9f896c0c9621315952c7c5f1bd8e4841beb67ebf266151fa23ba41eca91cf975 | null | [] | 0 |
2.4 | KratosStructuralMechanicsApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # Structural Mechanics Application
| **Application** | **Description** | **Status** | **Authors** |
|:---------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------:|:-----------:|
| `StructuralMechanicsApplication` | The *Structural Mechanics Application* contains a series of structural elements, as well as solid elements, the corresponding strategies, solvers and *Constitutive Laws Application* within *Kratos Multiphysics*. | <img src="https://img.shields.io/badge/Status-%F0%9F%9A%80%20Actively%20developed-Green" width="300px"> | @KratosMultiphysics/structural-mechanics |
<p align="center">
<img src="https://github.com/KratosMultiphysics/Examples/raw/master/structural_mechanics/validation/beam_roll_up/data/rollup.gif" alt="Solution" style="width: 300px;"/>
<img src="https://github.com/KratosMultiphysics/Examples/raw/master/structural_mechanics/use_cases/tensile_test_example/data/animation.gif" alt="Solution" style="width: 300px;"/>
<img src="https://github.com/KratosMultiphysics/Examples/raw/master/structural_mechanics/validation/beam_shallow_angled_structure/data/shallowAngleBeam.gif" alt="Solution" style="width: 300px;"/>
<img src="https://github.com/KratosMultiphysics/Examples/raw/master/structural_mechanics/validation/catenoid_formfinding/data/catenoid_normal.gif" alt="Solution" style="width: 300px;"/>
<img src="https://github.com/KratosMultiphysics/Examples/raw/master/structural_mechanics/validation/four_point_sail_formfinding/data/fourpoint_sail.gif" alt="Solution" style="width: 300px;"/>
<img src="https://github.com/KratosMultiphysics/Examples/raw/master/structural_mechanics/validation/two_dimensional_circular_truss_arch_snapthrough/data/DispCtrl.gif" alt="Solution" style="width: 300px;"/>
</p>
The application includes tests to verify that it works properly.
## Features:
- **A set of *Neumann* conditions**:
* *Point loads (loads applied directly on the nodes)*
* *Point moment (a discrete moment applied directly on the nodes)*
* *Line load (a distributed load applied over a line)*
* *Surface load (a distributed load applied over a face)*
* *A simple point contact condition based on distance*
- **Solid elements**:
* *Small displacement elements*
* Irreducible (pure displacement)
* Mixed formulation ($BBar$)
* Mixed formulation ($U-\varepsilon$)
* *Total Lagrangian elements*
* Irreducible (pure displacement)
* Mixed formulation ($U-\Delta V/V$)
* Mixed formulation ($Q1P0$)
* *Updated Lagrangian* elements, irreducible (pure displacement)
* *Total Lagrangian prismatic solid-shell element (*SPrism*)*
- **Structural elements**:
* *Zero-dimensional elements* :
* Nodal concentrated element (both 2D/3D). Includes nodal damping, nodal mass and nodal stiffness
* *Uni-dimensional elements* :
* Spring-damper element (3D)
* Cable element (3D)
* Truss element (3D)
* Corotational beam element (both 2D/3D)
* *Two-dimensional elements* :
* Membrane (pre-stressed)
* Isotropic shell element
* Thin shell (Quadrilateral and triangular)
* Thick shell (Quadrilateral and triangular)
- **Constitutive laws**:
* *Isotropic laws (Plane strain, plane stress and 3D)*
* *The ones available in [`ConstitutiveLawsApplication`](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/ConstitutiveLawsApplication/README.md)*
- **Adjoint Sensitivity Analysis**:
* *This feature provides the framework to compute sensitivities of structural responses (e.g. displacements, strain energy or stresses) with respect to different types of design variables (e.g. nodal coordinates, material or cross-sectional properties or load intensity) with the adjoint approach*
- **Strategies**:
* *Formfinding strategies*
* *Eigensolver strategy*
* *Harmonic analysis strategies*
- **Schemes**:
* *Relaxation scheme*
* *Eigen solver scheme*
- **Convergence criteria**:
* *For displacement and other DoFs*
* *For displacement and rotation*
- **Utilities and processes**:
* *A process to post-process eigenvalues*
* *A *GiDIO* utility for eigenvalues*
* *Process to compute the global mass of the system*
* *Process to identify the neighbours in a prismatic mesh*
* *Process to transform a pure shell mesh (local dimension equal to 2), to solid-shell mesh (pure 3D mesh)*
- **100+ Python unittests, including validation tests, and several C++ tests**
## Examples:
Examples can be found [here](https://github.com/KratosMultiphysics/Examples/tree/master/structural_mechanics). | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratoslinearsolversapplication==10.4.0",
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:42.671845 | kratosstructuralmechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 8,818,622 | 0d/8f/ea0fe3047701bd7911b69674206626a38c06b94b21108d853ab4b3277921/kratosstructuralmechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 27b2c5bc5a39f9edfc5a1e6e0477ed53 | 004551496cbf0450700c14467db3181aad234dfb1bbc9727e74dc4133478222c | 0d8fea0fe3047701bd7911b69674206626a38c06b94b21108d853ab4b3277921 | null | [] | 0 |
2.4 | KratosStatisticsApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # Statistics Application
The Statistics application consists of widely used methods for calculating statistics on various containers of KratosMultiphysics. There are two main groups of statistical methods, **Spatial** and **Temporal**. **Spatial** methods calculate statistics on spatial containers and output the values whenever they are called. **Temporal** methods calculate statistics on the fly during a transient simulation. All temporal methods guarantee that the resulting statistics are the same as if one had accumulated all the data up to that time instance and then calculated the statistics. All of these methods in each group are `OpenMP` and `MPI` compatible, and tested.
The following table summarizes the capabilities of the Statistics Application.
| Statistical Methods | Norm Types | Spatial Domain | Temporal Domain | Data types |
|---------------------------------------|-------------------------|---------------------------------------------------------------|------------------------------------------------------------------------------|------------|
| [Sum](#sum) | [Value](#value) | [Spatial methods](#spatial-methods) | [Temporal methods](#temporal-methods) | Double |
| [Mean](#mean) | [Magnitude](#magnitude) | [Spatial containers](#spatial-containers) | [Temporal containers](#temporal-containers) | Array 3D |
| [Root mean square](#root-mean-square) | [Euclidean](#euclidean) | [nodal_historical](#spatial-nodal-historical) | [nodal_historical_historical](#temporal-nodal-historical-historical) | Vector |
| [Variance](#variance) | [Infinity](#infinity) | [nodal_non_historical](#spatial-nodal-non-historical) | [nodal_historical_non_historical](#temporal-nodal-historical-non-historical) | Matrix |
| [Min](#min) | [P-Norm](#p-norm) | [condition_non_historical](#spatial-condition-non-historical) | [nodal_non_historical](#temporal-nodal-non-historical) | |
| [Max](#max) | [Lpq-Norm](#lpq-norm) | [element_non_historical](#spatial-element-non-historical) | [element_non_historical](#temporal-element-non-historical) | |
| [Median](#median) | [Frobenius](#frobenius) | [Spatial statistics process](#spatial-statistics-process) | [condition_non_historical](#temporal-condition-non-historical) | |
| [Distribution](#distribution) | [Trace](#trace) | | [Temporal statistics process](#temporal-statistics-process) | |
| [Norm methods](#norm-methods) | [Index](#index-based) | | | |
| | [Component](#component-based) | | | |
## JSON Examples
To compute statistics of a simulation via JSON configuration, StatisticsApplication provides `spatial_statistics_process` for spatial statistics and `temporal_statistics_process` for temporal statistics. These processes can be included via JSON settings under `auxiliary_processes`.
### Spatial statistics process examples
The following example illustrates different methods applied to different containers with different norms. `input_variable_settings` holds an array of methods for a specified container, norm, and set of variables; they can be customized to your requirements. `output_settings` controls how the output is written.
```json
{
"kratos_module" : "KratosMultiphysics.StatisticsApplication",
"python_module" : "spatial_statistics_process",
"Parameters" : {
"model_part_name" : "test_model_part",
"input_variable_settings" : [
{
"method_name" : "sum",
"norm_type" : "none",
"container" : "nodal_historical",
"variable_names" : ["PRESSURE", "VELOCITY"],
"method_settings": {}
},
{
"method_name" : "mean",
"norm_type" : "none",
"container" : "nodal_non_historical",
"variable_names" : ["PRESSURE", "VELOCITY"],
"method_settings": {}
},
{
"method_name" : "variance",
"norm_type" : "none",
"container" : "element_non_historical",
"variable_names" : ["PRESSURE", "VELOCITY"],
"method_settings": {}
},
{
"method_name" : "rootmeansquare",
"norm_type" : "none",
"container" : "condition_non_historical",
"variable_names" : ["PRESSURE", "VELOCITY"],
"method_settings": {}
},
{
"method_name" : "sum",
"norm_type" : "magnitude",
"container" : "nodal_historical",
"variable_names" : ["PRESSURE", "VELOCITY", "LOAD_MESHES", "GREEN_LAGRANGE_STRAIN_TENSOR"],
"method_settings": {}
},
{
"method_name" : "mean",
"norm_type" : "pnorm_2.5",
"container" : "nodal_non_historical",
"variable_names" : ["VELOCITY", "LOAD_MESHES", "GREEN_LAGRANGE_STRAIN_TENSOR"],
"method_settings": {}
},
{
"method_name" : "variance",
"norm_type" : "component_x",
"container" : "condition_non_historical",
"variable_names" : ["VELOCITY"],
"method_settings": {}
},
{
"method_name" : "rootmeansquare",
"norm_type" : "index_3",
"container" : "nodal_non_historical",
"variable_names" : ["LOAD_MESHES"],
"method_settings": {}
},
{
"method_name" : "min",
"norm_type" : "frobenius",
"container" : "nodal_non_historical",
"variable_names" : ["GREEN_LAGRANGE_STRAIN_TENSOR"],
"method_settings": {}
}
],
"output_settings" : {
"output_control_variable": "STEP",
"output_time_interval" : 1,
"write_kratos_version" : false,
"write_time_stamp" : false,
"output_file_settings" : {
"file_name" : "<model_part_name>_<container>_<norm_type>_<method_name>.dat",
"output_path": "spatial_statistics_process",
"write_buffer_size" : -1
}
}
}
}
```
### Temporal statistics process example
The following is an example of using `temporal_statistics_process` from JSON. For `"norm_type": "none"`, the output variables must match the input variables in order and data type. If `"norm_type"` is not `"none"`, the output variables must be scalars. If `"container": "nodal_historical_historical"` is used, the output variables must be added to the `NodalSolutionStepVariables` list in Kratos, since this container type writes temporal statistics variables to the nodal historical container. These JSON settings can also be added to the `auxiliary_processes` list.
For details about all the available statistical methods, norm types, etc., please refer to the rest of this `README.md` file.
```json
{
"kratos_module": "KratosMultiphysics.StatisticsApplication",
"python_module": "temporal_statistics_process",
"Parameters": {
"model_part_name": "FluidModelPart.fluid_computational_model_part",
"input_variable_settings": [
{
"method_name": "variance",
"norm_type": "none",
"container": "nodal_historical_non_historical",
"echo_level": 1,
"method_settings": {
"input_variables": [
"VELOCITY",
"PRESSURE"
],
"output_mean_variables": [
"VECTOR_3D_MEAN",
"SCALAR_MEAN"
],
"output_variance_variables": [
"VECTOR_3D_VARIANCE",
"SCALAR_VARIANCE"
]
}
}
],
"statistics_start_point_control_variable_name": "TIME",
"statistics_start_point_control_value": 2.5
}
}
```
## Method definitions
There are two types of methods under each of the **Spatial** and **Temporal** method groups, namely **Value** and **Norm** methods. In these methods, the index `i` refers to the spatial element index, and `k` refers to the time step.
### Value methods
#### Sum
In the case of the spatial domain, it adds up all the variable values for a given container and returns the summed value, as shown in the following equation. x<sub>i</sub> is the i<sup>th</sup> element's variable value in the corresponding container. The result has the same type as the user-specified variable.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{r}&space;=&space;\sum_{i=1}^N{\underline{x}_i}}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{r}&space;=&space;\sum_{i=1}^N{\underline{x}_i}}" title="\color{Black}{\underline{r} = \sum_{i=1}^N{\underline{x}_i}}" /></a>
The following is an example of summing the non-historical `VELOCITY` over all nodes of a model part:
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
sum = KratosStats.SpatialMethods.NonHistorical.Nodes.ValueMethods.Sum(model_part, Kratos.VELOCITY)
```
In the case of the temporal domain, the **Sum** method is the time-integrated quantity of a specific variable. It is stored for each element under a user-specified variable in a user-specified container. x<sub>k</sub> is the variable value at the k<sup>th</sup> time step. The result has the same type as the user-specified variable.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{r}&space;=&space;\sum_{k=1}^{P}{\underline{x}_k\Delta&space;t_k}&space;\quad&space;where&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{r}&space;=&space;\sum_{k=1}^{P}{\underline{x}_k\Delta&space;t_k}&space;\quad&space;where&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" title="\color{Black}{\underline{r} = \sum_{k=1}^{P}{\underline{x}_k\Delta t_k} \quad where \quad \Delta t_k = T_{k} - T_{k-1} \quad \forall T_k \in \left\lbrace T_{initial}, ..., T_{end} \right\rbrace}" /></a>
The following is an example of computing the time integral of the non-historical velocity. The input variable is the nodal non-historical container's `VELOCITY`, and the output variable is the same container's `DISPLACEMENT`, where the integrated value is stored for each node. The `0` is the echo level for this method object. The blank `""` indicates that the value method is used.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
sum_method = KratosStats.TemporalMethods.NonHistorical.Nodes.ValueMethods.Sum.Array(model_part, "", Kratos.VELOCITY, 0, Kratos.DISPLACEMENT)
integration_starting_time = 2.0
sum_method.InitializeStatisticsMethod(integration_starting_time)
for t in range(3, 6):
    sum_method.CalculateStatistics()
```
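To make the time-integration formula concrete, here is a plain-Python sketch of the weighted sum (illustrative only; `time_integrated_sum` and its inputs are hypothetical helpers, not part of the Kratos API):

```python
# Plain-Python illustration of the temporal Sum formula:
# r = sum_k x_k * dt_k, with dt_k = T_k - T_{k-1}
def time_integrated_sum(samples):
    """samples: ordered (time, value) pairs; the first pair only defines T_initial."""
    return sum(x1 * (t1 - t0) for (t0, _), (t1, x1) in zip(samples, samples[1:]))

# A constant value of 2.0 integrated from t = 2 to t = 5 gives 2.0 * 3 = 6.0:
print(time_integrated_sum([(2.0, 2.0), (3.0, 2.0), (4.0, 2.0), (5.0, 2.0)]))  # 6.0
```

The first sample only marks the integration starting time (`InitializeStatisticsMethod` plays that role in the Kratos example above); each later sample contributes its value weighted by the elapsed step.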
#### Mean
In the case of the spatial domain, it calculates the mean of a given variable for a given container and returns it as shown in the following equation. x<sub>i</sub> is the i<sup>th</sup> element's variable value of the corresponding container. The result will have the same type as the variable specified by the user. (If the variable has a higher dimension than a scalar, the mean of each dimension is calculated separately, giving a mean with the same dimension as the input.)
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{r}&space;=&space;\frac{1}{N}\sum_{i=1}^{N}\underline{x}_i}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{r}&space;=&space;\frac{1}{N}\sum_{i=1}^{N}\underline{x}_i}" title="\color{Black}{\underline{r} = \frac{1}{N}\sum_{i=1}^{N}\underline{x}_i}" /></a>
The following is an example of the mean calculation of non-historical `VELOCITY` over all of the model part's nodes:
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
mean = KratosStats.SpatialMethods.NonHistorical.Nodes.ValueMethods.Mean(model_part, Kratos.VELOCITY)
```
In the case of the temporal domain, the **Mean** method is the time-integrated quantity's mean for a specific variable. It will be stored for each element under a user-specified variable in a user-specified container. x<sub>k</sub> is the variable value of the corresponding container item at the k<sup>th</sup> time step. The result will have the same type as the variable specified by the user, preserving the dimensionality as in the spatial case.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{\bar{x}}&space;=&space;\frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k\Delta&space;t_k}&space;\quad&space;where&space;\quad&space;T_{total}&space;=&space;T_{end}&space;-&space;T_{initial}&space;\quad&space;and&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{\bar{x}}&space;=&space;\frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k\Delta&space;t_k}&space;\quad&space;where&space;\quad&space;T_{total}&space;=&space;T_{end}&space;-&space;T_{initial}&space;\quad&space;and&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" title="\color{Black}{\underline{\bar{x}} = \frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k\Delta t_k} \quad where \quad T_{total} = T_{end} - T_{initial} \quad and \quad \Delta t_k = T_{k} - T_{k-1} \quad \forall T_k \in \left\lbrace T_{initial}, ..., T_{end} \right\rbrace}" /></a>
The following is an example of the mean calculation of non-historical velocity. The input variable is the nodal non-historical container's `VELOCITY`, and the output variable is the same container's `VECTOR_3D_MEAN`, where the mean will be stored for each node. The `0` is the echo level for this method object, and the blank `""` indicates that the value method is used.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
mean_method = KratosStats.TemporalMethods.NonHistorical.Nodes.ValueMethods.Mean.Array(model_part, "", Kratos.VELOCITY, 0, KratosStats.VECTOR_3D_MEAN)
integration_starting_time = 2.0
mean_method.InitializeStatisticsMethod(integration_starting_time)
for t in range(3, 6):
    mean_method.CalculateStatistics()
```
#### Root mean square
In the case of the spatial domain, it calculates the root mean square of a given variable for a given container and returns it as shown in the following equation. x<sub>i</sub> is the i<sup>th</sup> element's variable value of the corresponding container. The result will have the same type as the variable specified by the user. (If the variable has a higher dimension than a scalar, the root mean square of each dimension is calculated separately, giving a result with the same dimension as the input.)
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{r}&space;=&space;\sqrt{\frac{1}{N}\sum_{i=1}^{N}{\underline{x}^2_i}&space;}}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{r}&space;=&space;\sqrt{\frac{1}{N}\sum_{i=1}^{N}{\underline{x}^2_i}&space;}}" title="\color{Black}{\underline{r} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}{\underline{x}^2_i} }}" /></a>
The following is an example of the root mean square calculation of non-historical `VELOCITY` over all of the model part's nodes:
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
rms = KratosStats.SpatialMethods.NonHistorical.Nodes.ValueMethods.RootMeanSquare(model_part, Kratos.VELOCITY)
```
In the case of the temporal domain, the **Root Mean Square** method is the time-integrated quantity's root mean square for a specific variable. It will be stored for each element under a user-specified variable in a user-specified container. x<sub>k</sub> is the variable value of the corresponding container item at the k<sup>th</sup> time step. The result will have the same type as the variable specified by the user, preserving the dimensionality as in the spatial case.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{r}&space;=&space;\sqrt{\frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k^2\Delta&space;t_k}}&space;\quad&space;where&space;\quad&space;T_{total}&space;=&space;T_{end}&space;-&space;T_{initial}&space;\quad&space;and&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{r}&space;=&space;\sqrt{\frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k^2\Delta&space;t_k}}&space;\quad&space;where&space;\quad&space;T_{total}&space;=&space;T_{end}&space;-&space;T_{initial}&space;\quad&space;and&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" title="\color{Black}{\underline{r} = \sqrt{\frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k^2\Delta t_k}} \quad where \quad T_{total} = T_{end} - T_{initial} \quad and \quad \Delta t_k = T_{k} - T_{k-1} \quad \forall T_k \in \left\lbrace T_{initial}, ..., T_{end} \right\rbrace}" /></a>
The following is an example of the root mean square calculation of non-historical velocity. The input variable is the nodal non-historical container's `VELOCITY`, and the output variable is the same container's `VECTOR_3D_MEAN`, where the root mean square value will be stored for each node. The `0` is the echo level for this method object, and the blank `""` indicates that the value method is used.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
rms_method = KratosStats.TemporalMethods.NonHistorical.Nodes.ValueMethods.RootMeanSquare.Array(model_part, "", Kratos.VELOCITY, 0, KratosStats.VECTOR_3D_MEAN)
integration_starting_time = 2.0
rms_method.InitializeStatisticsMethod(integration_starting_time)
for t in range(3, 6):
    rms_method.CalculateStatistics()
```
#### Variance
In the case of the spatial domain, it calculates the variance of a given variable for a given container and returns the mean and variance as shown in the following equations. x<sub>i</sub> is the i<sup>th</sup> element's variable value of the corresponding container. The results will have the same type as the variable specified by the user. (If the variable has a higher dimension than a scalar, each dimension is treated separately, giving results with the same dimension as the input.)
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{\bar{x}}&space;=&space;\frac{1}{N}\sum_{i=1}^N{\underline{x}_i}}&space;\\&space;\color{Black}{\underline{v}&space;=&space;\frac{1}{N}\sum_{i=1}^N{\left(\underline{x}_i&space;-&space;\underline{\bar{x}}&space;\right&space;)^2}}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{\bar{x}}&space;=&space;\frac{1}{N}\sum_{i=1}^N{\underline{x}_i}}&space;\\&space;\color{Black}{\underline{v}&space;=&space;\frac{1}{N}\sum_{i=1}^N{\left(\underline{x}_i&space;-&space;\underline{\bar{x}}&space;\right&space;)^2}}" title="\color{Black}{\underline{\bar{x}} = \frac{1}{N}\sum_{i=1}^N{\underline{x}_i}} \\ \color{Black}{\underline{v} = \frac{1}{N}\sum_{i=1}^N{\left(\underline{x}_i - \underline{\bar{x}} \right )^2}}" /></a>
The following is an example of the variance calculation of non-historical `VELOCITY` over all of the model part's nodes:
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
mean, variance = KratosStats.SpatialMethods.NonHistorical.Nodes.ValueMethods.Variance(model_part, Kratos.VELOCITY)
```
In the case of the temporal domain, the **Variance** method is the time-integrated quantity's variance for a specific variable. The mean and variance will be stored for each element under user-specified variables in a user-specified container. x<sub>k</sub> is the variable value of the corresponding container item at the k<sup>th</sup> time step. The results will have the same type as the variable specified by the user, preserving the dimensionality as in the spatial case.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{\underline{\bar{x}}&space;=&space;\frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k\Delta&space;t_k}}&space;\\&space;\color{Black}{Var\left(\underline{x}&space;\right&space;)&space;=&space;\frac{1}{T_{total}}\sum_{k=1}^{P}{\left(\underline{x}_k&space;-&space;\underline{\bar{x}}&space;\right&space;)^2\Delta&space;t_k}}&space;\\&space;\\&space;\color{Black}{&space;\quad&space;where}&space;\\&space;\\&space;\color{Black}{&space;T_{total}&space;=&space;T_{end}&space;-&space;T_{initial}&space;\quad&space;and&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{\underline{\bar{x}}&space;=&space;\frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k\Delta&space;t_k}}&space;\\&space;\color{Black}{Var\left(\underline{x}&space;\right&space;)&space;=&space;\frac{1}{T_{total}}\sum_{k=1}^{P}{\left(\underline{x}_k&space;-&space;\underline{\bar{x}}&space;\right&space;)^2\Delta&space;t_k}}&space;\\&space;\\&space;\color{Black}{&space;\quad&space;where}&space;\\&space;\\&space;\color{Black}{&space;T_{total}&space;=&space;T_{end}&space;-&space;T_{initial}&space;\quad&space;and&space;\quad&space;\Delta&space;t_k&space;=&space;T_{k}&space;-&space;T_{k-1}&space;\quad&space;\forall&space;T_k&space;\in&space;\left\lbrace&space;T_{initial},&space;...,&space;T_{end}&space;\right\rbrace}" title="\color{Black}{\underline{\bar{x}} = \frac{1}{T_{total}}\sum_{k=1}^{P}{\underline{x}_k\Delta t_k}} \\ \color{Black}{Var\left(\underline{x} \right ) = \frac{1}{T_{total}}\sum_{k=1}^{P}{\left(\underline{x}_k - \underline{\bar{x}} \right )^2\Delta t_k}} \\ \\ \color{Black}{ \quad where} \\ \\ \color{Black}{ T_{total} = T_{end} - T_{initial} \quad and \quad \Delta t_k = T_{k} - T_{k-1} \quad \forall T_k \in 
\left\lbrace T_{initial}, ..., T_{end} \right\rbrace}" /></a>
The following is an example of the variance calculation of non-historical velocity. The input variable is the nodal non-historical container's `VELOCITY`; the output variable `VECTOR_3D_MEAN` will store the mean and `VECTOR_3D_VARIANCE` will store the variance in the same container for each node. The `0` is the echo level for this method object, and the blank `""` indicates that the value method is used.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
variance_method = KratosStats.TemporalMethods.NonHistorical.Nodes.ValueMethods.Variance.Array(model_part, "", Kratos.VELOCITY, 0, KratosStats.VECTOR_3D_MEAN, KratosStats.VECTOR_3D_VARIANCE)
integration_starting_time = 2.0
variance_method.InitializeStatisticsMethod(integration_starting_time)
for t in range(3, 6):
    variance_method.CalculateStatistics()
```
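To make the time-weighted mean and variance formulas above concrete, here is a plain-Python sketch (illustrative only; `temporal_mean_and_variance` and its inputs are hypothetical helpers, not part of the Kratos API):

```python
# Plain-Python illustration of the time-weighted mean and variance:
# mean = (1/T_total) * sum_k x_k * dt_k
# var  = (1/T_total) * sum_k (x_k - mean)^2 * dt_k
def temporal_mean_and_variance(samples):
    """samples: ordered (time, value) pairs; the first pair defines T_initial."""
    t_total = samples[-1][0] - samples[0][0]
    steps = [(t1 - t0, x1) for (t0, _), (t1, x1) in zip(samples, samples[1:])]
    mean = sum(x * dt for dt, x in steps) / t_total
    variance = sum((x - mean) ** 2 * dt for dt, x in steps) / t_total
    return mean, variance

# A constant signal has itself as mean and zero variance:
print(temporal_mean_and_variance([(2.0, 3.0), (3.0, 3.0), (4.0, 3.0)]))  # (3.0, 0.0)
```

Note that the variance uses the same Δt<sub>k</sub> weights as the mean, so both are consistent with the equations above.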
#### Min
In the case of the spatial domain, it returns the minimum of a given variable's norm for a given container, together with the id of the corresponding item. x<sub>i</sub> is the i<sup>th</sup> element's variable value of the corresponding container. The results will be a double and an integer, irrespective of the input type, since higher-dimensional variable types are reduced to scalars by the use of norms.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{v&space;=&space;\min_{\underline{x}_i&space;\in&space;\mathbf{T}}&space;|\underline{x}_i|}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{v&space;=&space;\min_{\underline{x}_i&space;\in&space;\mathbf{T}}&space;|\underline{x}_i|}" title="\color{Black}{v = \min_{\underline{x}_i \in \mathbf{T}} |\underline{x}_i|}" /></a>
The following is an example of the min method on the magnitude of non-historical `VELOCITY` over all of the model part's nodes. It returns a tuple, the first element being the minimum and the second the id of the node where the minimum is found.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
min_value, min_id = KratosStats.SpatialMethods.NonHistorical.Nodes.NormMethods.Min(model_part, Kratos.VELOCITY, "magnitude")
```
In the case of the temporal domain, the **Min** method returns the minimum value over the temporal domain, together with the time at which the minimum was found. The minimum and its occurrence time will be stored for each element under user-specified variables in a user-specified container. x<sub>k</sub> is the variable value of the corresponding container item at the k<sup>th</sup> time step. Since this is a norm method, the results are doubles, irrespective of the input type.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{v&space;=&space;\min_{\underline{x}_k&space;\in&space;\mathbf{T}}&space;|\underline{x}_k|}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{v&space;=&space;\min_{\underline{x}_k&space;\in&space;\mathbf{T}}&space;|\underline{x}_k|}" title="\color{Black}{v = \min_{\underline{x}_k \in \mathbf{T}} |\underline{x}_k|}" /></a>
The following is an example of the min method on non-historical velocity. The input variable is the nodal non-historical container's `VELOCITY`; the output variable `VECTOR_3D_NORM` will store the minimum and `TIME` will store the time at which the minimum occurred, for each node. The `0` is the echo level for this method object, and `"magnitude"` indicates that the magnitude norm is used.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
min_method = KratosStats.TemporalMethods.NonHistorical.Nodes.NormMethods.Min.Array(model_part, "magnitude", Kratos.VELOCITY, 0, KratosStats.VECTOR_3D_NORM, Kratos.TIME)
integration_starting_time = 2.0
min_method.InitializeStatisticsMethod(integration_starting_time)
for t in range(3, 6):
    min_method.CalculateStatistics()
```
#### Max
In the case of the spatial domain, it returns the maximum of a given variable's norm for a given container, together with the id of the corresponding item. x<sub>i</sub> is the i<sup>th</sup> element's variable value of the corresponding container. The results will be a double and an integer, irrespective of the input type, since higher-dimensional variable types are reduced to scalars by the use of norms.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{v&space;=&space;\max_{\underline{x}_i&space;\in&space;\Omega}&space;|\underline{x}_i|}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{v&space;=&space;\max_{\underline{x}_i&space;\in&space;\Omega}&space;|\underline{x}_i|}" title="\color{Black}{v = \max_{\underline{x}_i \in \Omega} |\underline{x}_i|}" /></a>
The following is an example of the max method on the magnitude of non-historical `VELOCITY` over all of the model part's nodes. It returns a tuple, the first element being the maximum and the second the id of the node where the maximum is found.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
max_value, max_id = KratosStats.SpatialMethods.NonHistorical.Nodes.NormMethods.Max(model_part, Kratos.VELOCITY, "magnitude")
```
In the case of the temporal domain, the **Max** method returns the maximum value over the temporal domain, together with the time at which the maximum was found. The maximum and its occurrence time will be stored for each element under user-specified variables in a user-specified container. x<sub>k</sub> is the variable value of the corresponding container item at the k<sup>th</sup> time step. Since this is a norm method, the results are doubles, irrespective of the input type.
<a href="https://www.codecogs.com/eqnedit.php?latex=\color{Black}{v&space;=&space;\max_{\underline{x}_k&space;\in&space;\mathbf{T}}&space;|\underline{x}_k|}" target="_blank"><img src="https://latex.codecogs.com/svg.latex?\color{Black}{v&space;=&space;\max_{\underline{x}_k&space;\in&space;\mathbf{T}}&space;|\underline{x}_k|}" title="\color{Black}{v = \max_{\underline{x}_k \in \mathbf{T}} |\underline{x}_k|}" /></a>
The following is an example of the max method on non-historical velocity. The input variable is the nodal non-historical container's `VELOCITY`; the output variable `VECTOR_3D_NORM` will store the maximum and `TIME` will store the time at which the maximum occurred, for each node. The `0` is the echo level for this method object, and `"magnitude"` indicates that the magnitude norm is used.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
max_method = KratosStats.TemporalMethods.NonHistorical.Nodes.NormMethods.Max.Array(model_part, "magnitude", Kratos.VELOCITY, 0, KratosStats.VECTOR_3D_NORM, Kratos.TIME)
integration_starting_time = 2.0
max_method.InitializeStatisticsMethod(integration_starting_time)
for t in range(3, 6):
    max_method.CalculateStatistics()
```
#### Median
Median returns the median value, in the spatial domain, of a given variable's norm for a given container. x<sub>i</sub> is the i<sup>th</sup> element's variable value of the corresponding container. The result will be a double, irrespective of the input type, since higher-dimensional variable types are reduced to scalars by the use of norms.
The following is an example of the median method on the magnitude of non-historical `VELOCITY` over all of the model part's nodes. It returns a double, which is the median.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
median_value = KratosStats.SpatialMethods.NonHistorical.Nodes.NormMethods.Median(model_part, Kratos.VELOCITY, "magnitude")
```
#### Distribution
The Distribution method calculates the distribution of a given variable with respect to a given norm, in a given container, in the spatial domain. x<sub>i</sub> is the i<sup>th</sup> element's variable value of the corresponding container. The result will be a tuple with the following entries, in order:
1. min in the domain, or the user-specified min value
2. max in the domain, or the user-specified max value
3. group limits of the distribution (upper limits only) (double array)
4. number of occurrences of items within each group (int array)
5. percentage distribution of the number of occurrences within each group (double array)
6. each group's separate mean
7. each group's separate variance
This method accepts its parameters as a `Kratos.Parameters` object; if nothing is provided, the following defaults are assumed.
```json
{
"number_of_value_groups" : 10,
"min_value" : "min",
"max_value" : "max"
}
```
Here, `"min_value"` can be either `"min"` or a `double` value. In the case of a double, it is used as the minimum when creating the groups that identify the distribution. `"max_value"` can likewise be either `"max"` or a `double` value, which determines the maximum value when creating the groups. Two additional groups are created, apart from the `"number_of_value_groups"` specified by the user, to represent values below `"min_value"` and above `"max_value"`.
The following is an example of the distribution method on the magnitude of non-historical `VELOCITY` over all of the model part's nodes.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
min_value, max_value, group_upper_values, group_histogram, group_percentage_distribution, group_means, group_variances = KratosStats.SpatialMethods.NonHistorical.Nodes.NormMethods.Distribution(model_part, Kratos.VELOCITY, "magnitude")
```
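The grouping scheme described above can be sketched in plain Python (the `group_counts` helper is hypothetical and illustrative, not the Kratos API):

```python
# Hypothetical sketch of the distribution grouping: `number_of_value_groups`
# equal-width groups between min_value and max_value, plus one underflow
# and one overflow group for out-of-range values.
def group_counts(norms, number_of_value_groups, min_value, max_value):
    width = (max_value - min_value) / number_of_value_groups
    upper_limits = [min_value + width * (i + 1) for i in range(number_of_value_groups)]
    counts = [0] * (number_of_value_groups + 2)  # [underflow, groups..., overflow]
    for v in norms:
        if v < min_value:
            counts[0] += 1
        elif v > max_value:
            counts[-1] += 1
        else:
            index = min(int((v - min_value) / width), number_of_value_groups - 1)
            counts[index + 1] += 1
    return upper_limits, counts

limits, counts = group_counts([0.5, 1.5, 2.5], 2, 0.0, 2.0)
print(limits)   # [1.0, 2.0]
print(counts)   # [0, 1, 1, 1] -> nothing below 0.0, one per group, one above 2.0
```

The per-group means and variances in the Kratos tuple would then be computed over the members of each group.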
### Norm methods
All of the value methods mentioned above also have a norm version. In these methods, higher-dimensional values are transformed into scalar values by a user-specified norm, and the statistics are then calculated with the chosen method. The supported norm types may differ depending on the variable data type. A few methods support only the norm version. The following table summarizes the availability of value and norm methods.
| Statistical Methods | Value Method | Norm Method |
|---------------------------------------|--------------|-------------|
| [Sum](#sum) | [x] | [x] |
| [Mean](#mean) | [x] | [x] |
| [Root mean square](#root-mean-square) | [x] | [x] |
| [Variance](#variance) | [x] | [x] |
| [Min](#min) | | [x] |
| [Max](#max) | | [x] |
| [Median](#median) | | [x] |
| [Distribution](#distribution) | | [x] |
The following example shows the variance method under the **value** category and under the **norm** category for nodal non-historical velocity. The norm method uses `"magnitude"` as the norm to reduce `VELOCITY` to a scalar value. `value_mean` and `value_variance` will be of `Array 3D` type, whereas `norm_mean` and `norm_variance` will be of `Double` type.
```python
import KratosMultiphysics as Kratos
import KratosMultiphysics.StatisticsApplication as KratosStats
model = Kratos.Model()
model_part = model.CreateModelPart("test_model_part")
value_mean, value_variance = KratosStats.SpatialMethods.NonHistorical.Nodes.ValueMethods.Variance(model_part, Kratos.VELOCITY)
norm_mean, norm_variance = KratosStats.SpatialMethods.NonHistorical.Nodes.NormMethods.Variance(model_part, Kratos.VELOCITY, "magnitude")
```
## Norm definitions
A few different norms are predefined in this application. The following table summarizes the available norms for each variable data type.
| | Double | Array 3D | Vector | Matrix |
|-------------------------------|--------|----------|--------|--------|
| [Value](#value) | [x] | | | |
| [Magnitude](#magnitude) | [x] | [x] | [x] | [x] |
| [Euclidean](#euclidean) | | [x] | [x] | |
| [Infinity](#infinity) | | [x] | [x] | [x] |
| [P-Norm](#p-norm) | | [x] | [x] | [x] |
| [Lpq-Norm](#lpq-norm) | | | | [x] |
| [Frobenius](#frobenius) | | | | [x] |
| [Trace](#trace) | | | | [x] |
| [Index](#index-based) | | | [x] | [x] |
| [Component](#component-based) | | [x] | | |
### Value
This returns the exact value. It is only available for `Double`-type variables.
### Magnitude
This returns the second norm of the variable.
1. For a `Double` type, it returns the absolute value.
2. For an `Array 3D` type, it returns the magnitude of the 3D vector.
3. For a `Vector` type, it returns the square root of the sum of squares of its components.
4. For a `Matrix` type, it returns the [frobenius](#frobenius) norm.
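As an illustration of these rules, assuming plain Python lists stand in for the Kratos vector and matrix types, the `Array 3D`/`Vector` and `Matrix` cases can be sketched as follows (`vector_magnitude` and `frobenius_norm` are illustrative helpers, not the Kratos API):

```python
import math

def vector_magnitude(v):
    # Array 3D / Vector case: square root of the sum of squared components
    return math.sqrt(sum(x * x for x in v))

def frobenius_norm(a):
    # Matrix case: square root of the sum of squares of all entries
    return math.sqrt(sum(x * x for row in a for x in row))

print(vector_magnitude([3.0, 4.0, 0.0]))         # 5.0
print(frobenius_norm([[1.0, 2.0], [2.0, 4.0]]))  # 5.0
```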
### Euclidean
This also returns the second norm of the variable, for the following variable types:
1. For an `Array 3D` type, it returns the magnitude of the 3D vector.
2. For a `Vector` type, it returns the square root of the sum of squares of its components.
### Frobenius
This is only available for `Matrix`-type variables. The following equation illustrates the norm, where `A` is the matrix and a<sub>ij</sub> is the value at the i<sup>th</sup> row and j<sup>th</sup> column.
<a hr | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:40.270277 | kratosstatisticsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,189,112 | d8/e4/bbb587cdb7e6a6e53f4abfb6acbf0c9279c7127af41d465a0e9733452699/kratosstatisticsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 73d89991a62f452095dbf5f8709b6312 | 6d75113e2ad2684270bb0f4ffd79308b4c811f1a3bc702334de215929fb058f4 | d8e4bbb587cdb7e6a6e53f4abfb6acbf0c9279c7127af41d465a0e9733452699 | null | [] | 0 |
2.4 | KratosShapeOptimizationApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # ShapeOptimizationApplication
The Kratos ShapeOptimizationApplication contains an implementation of the Vertex Morphing method for node-based shape optimization.
It can be used in combination with the Adjoint Sensitivity Analysis capabilities of the Kratos StructuralMechanicsApplication and ConvectionDiffusionApplication.
Additionally, it offers an interface for using external response functions via custom Python code.
## Examples
A few application examples can be found [here](https://github.com/KratosMultiphysics/Examples/tree/master/shape_optimization)
### References
- Bletzinger, K.-U. A consistent frame for sensitivity filtering and the vertex assigned morphing of optimal shape. Struct Multidisc Optim 49, 873–895 (2014). https://doi.org/10.1007/s00158-013-1031-5
- Bletzinger, K.-U. (2017). Shape Optimization. In Encyclopedia of Computational Mechanics Second Edition (eds E. Stein, R. Borst and T.J.R. Hughes). https://doi.org/10.1002/9781119176817.ecm2109
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:38.282202 | kratosshapeoptimizationapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,321,459 | 52/02/daa12ce5aca0e80f11958d1f525dc6cb63b197ef75409f20cdc05cbaceb2/kratosshapeoptimizationapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 4fb73e8db26ec11a487410ac834f1387 | 11ea6cf65a7873e4889e58a60767a0f83ecd5489d97dd9daaa4011dc42b3c3cc | 5202daa12ce5aca0e80f11958d1f525dc6cb63b197ef75409f20cdc05cbaceb2 | null | [] | 0 |
2.4 | KratosShallowWaterApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # Shallow water application
This is a research application that provides a set of tools for oceanographic and hydrographic simulations over shallow domains. The background of the stabilization method is explained in [^1].
## Overview
| | BDF | Crank-Nicolson | Adams-Moulton |
|---------------|:------------------:|:------------------:|:------------------:|
| Gravity waves | :heavy_check_mark: | :heavy_check_mark: | |
| Saint-Venant | :heavy_check_mark: | | |
| Boussinesq | :heavy_check_mark: | | :heavy_check_mark: |
## Dependencies
This application does not have other application dependencies at compile time. The following Python libraries may be required:
- `scipy` is used by the wave generator and by the benchmarks
- `numpy` is used to generate solitary waves and analytical solutions by the benchmarks
If the coupling with the Navier-Stokes equations is required [^2], add the following applications to compilation:
- [HDF5Application](../HDF5Application/README.md)
- [MappingApplication](../MappingApplication/README.md)
- [PfemFluidDynamicsApplication](../PfemFluidDynamicsApplication/README.md)
## References
[^1]: M. Masó, I. De-Pouplana, E. Oñate. A FIC-FEM stabilized formulation for the shallow water equations over partially dry domains. Computer Methods in Applied Mechanics and Engineering, 389C (2022) 114362 [10.1016/j.cma.2021.114362](https://doi.org/10.1016/j.cma.2021.114362)
[^2]: M. Masó, A. Franci, I. de-Pouplana, A. Cornejo and E. Oñate, A Lagrangian-Eulerian procedure for the coupled solution of the Navier-Stokes and shallow water equations for landslide-generated waves. Advanced Modelling and Simulation in Engineering Sciences, (2022) [10.21203/rs.3.rs-1457837/v1](https://doi.org/10.21203/rs.3.rs-1457837/v1) (in press)
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:36.793230 | kratosshallowwaterapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,645,674 | 6b/2b/81cfd26ba23f19698779e617f41ae35b30d4366fa820d05f0f341fa47110/kratosshallowwaterapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 8c6eb1f4c838b3acbc00dff626dc5824 | c017382f7d12e15f4e6b9db64a61e17ea0d9af59a754c7e3688086219810c344 | 6b2b81cfd26ba23f19698779e617f41ae35b30d4366fa820d05f0f341fa47110 | null | [] | 0 |
2.4 | simplepyble | 0.12.2.dev2 | The ultimate fully-fledged cross-platform BLE library, designed for simplicity and ease of use. | |PyPI Licence|
SimplePyBLE
===========
The ultimate cross-platform library and bindings for Bluetooth Low Energy (BLE), designed for simplicity and ease of use.
Key Features
------------
* **Cross-Platform**: Enterprise-grade support for Windows, macOS, Linux
* **Easy Integration**: Clean, consistent API across all platforms
* **Multiple Language Bindings**: Production-ready bindings for C, C++, Python, Java and Rust, with more coming soon
* **Commercial Ready**: Source-available commercial license for proprietary applications
Support & Resources
--------------------
We're here to help you succeed with SimpleBLE:
* **Documentation**: Visit our `Documentation`_ page for comprehensive guides
* **Commercial Support**: Check out |website|_ or |email|_ about licensing and professional services.
* **Community**: Join our `Discord`_ server for discussions and help
**Don't hesitate to reach out if you need assistance - we're happy to help!**
Installation
------------
You can install SimplePyBLE from PyPI using pip: ::

    pip install simplepyble
Usage
-----
Please review our `code examples`_ on GitHub for more information on how to use
SimplePyBLE.
Asynchronous Support
--------------------
SimplePyBLE provides an asynchronous API via the ``simplepyble.aio`` module. This module
is designed to work with ``asyncio`` and provides a more idiomatic way to handle
asynchronous operations in Python.
Example: ::

    import asyncio
    from simplepyble.aio import Adapter

    async def main():
        adapters = Adapter.get_adapters()
        adapter = adapters[0]

        async with adapter:
            await adapter.scan_for(5000)
            peripherals = adapter.scan_get_results()
            for peripheral in peripherals:
                print(f"Found: {peripheral.identifier()} [{peripheral.address()}]")

    if __name__ == "__main__":
        asyncio.run(main())
Check out the `async examples`_ for more details.
To run the built-in REST server, you can use the following command: ::

    python3 -m simplepyble.server --host 127.0.0.1 --port 8000
License
=======
Since January 20th, 2025, SimpleBLE has been available under the Business Source License 1.1 (BUSL-1.1). Each
version of SimpleBLE converts to the GNU General Public License version 3 four years after its initial release.
The project is free to use for non-commercial purposes, but commercial use requires a commercial license. We
also offer FREE commercial licenses for small projects and early-stage companies - reach out to discuss your use case!
**Why purchase a commercial license?**
- Build and deploy unlimited commercial applications
- Use across your entire development team
- Zero revenue sharing or royalty payments
- Choose features that match your needs and budget
- Priority technical support included
- Clear terms for integrating into MIT-licensed projects
**Looking for information on pricing and commercial terms of service?** Visit |website-url|_ for more details.
For further enquiries, please |email|_ or |leavemessage|_ and we can discuss the specifics of your situation.
----
**SimpleBLE** is a project powered by |caos|_.
.. Links
.. |email| replace:: email us
.. _email: mailto:contact@simpleble.org
.. |leavemessage| replace:: leave us a message on our website
.. _leavemessage: https://www.simpleble.org/contact?utm_source=pypi_simplepyble&utm_medium=referral&utm_campaign=simplepyble_readme
.. |website| replace:: our website
.. _website: https://simpleble.org?utm_source=pypi_simplepyble&utm_medium=referral&utm_campaign=simplepyble_readme
.. |website-url| replace:: www.simpleble.org
.. _website-url: https://simpleble.org?utm_source=pypi_simplepyble&utm_medium=referral&utm_campaign=simplepyble_readme
.. |caos| replace:: **The California Open Source Company**
.. _caos: https://californiaopensource.com?utm_source=pypi_simplepyble&utm_medium=referral&utm_campaign=simplepyble_readme
.. _SimplePyBLE: https://pypi.org/project/simplepyble/
.. _SimpleBLE: https://github.com/simpleble/simpleble/
.. _code examples: https://github.com/simpleble/simpleble/tree/main/examples/simplepyble
.. _async examples: https://github.com/simpleble/simpleble/tree/main/examples/simplepyble
.. _Discord: https://discord.gg/N9HqNEcvP3
.. _Documentation: https://docs.simpleble.org
.. |PyPI Licence| image:: https://img.shields.io/pypi/l/simplepyble
| text/x-rst | Kevin Dewald | kevin@simpleble.org | null | null | null | null | [
"License :: Free for non-commercial use",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only"
] | [
"Windows"
] | https://github.com/simpleble/simpleble | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:52:35.615589 | simplepyble-0.12.2.dev2.tar.gz | 387,306 | d6/7e/373df070ad090035e879b262473e02a0675610c85a9cd52cb705f7ef2f88/simplepyble-0.12.2.dev2.tar.gz | source | sdist | null | false | ee5d4a015e8eb98ce279a5823cd0d1b9 | b18112501d7389730ad3a9063a0d22b05a7227044697db6a557b5f766e927386 | d67e373df070ad090035e879b262473e02a0675610c85a9cd52cb705f7ef2f88 | null | [
"LICENSE.md"
] | 3,585 |
2.4 | KratosRomApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## ROM Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:35.284557 | kratosromapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,232,290 | d0/61/8b930d4f7cb5dda791abd4a55719b0ad147a024667873dd5249309097796/kratosromapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 09cba887a8cb66556657c6fbf753c443 | 0e1f162e3759bf26faf70d9cb410739f2b6cd11cdf99e4c7ec63b258bd0e680a | d0618b930d4f7cb5dda791abd4a55719b0ad147a024667873dd5249309097796 | null | [] | 0 |
2.4 | KratosRANSApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## RANS Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:33.486397 | kratosransapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 4,119,772 | bf/df/2b8f82bfa5b2bb11c51c85de0cd0dc7bfae60e95df0e54c8c747af143d94/kratosransapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 569e7234a83dd459f514c3ad974b491e | 803a80e6b5a2615c77a6a17bfcfceb9d778cb639bf9f7dd3702eda622210b8c8 | bfdf2b8f82bfa5b2bb11c51c85de0cd0dc7bfae60e95df0e54c8c747af143d94 | null | [] | 0 |
2.4 | KratosPoromechanicsApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Poromechanics Application
The Poromechanics Application contains developments in coupled solid-pore fluid interaction problems within Kratos Multiphysics.
### Features:
- UPl small displacement element for saturated porous media (with equal order interpolation, unstable under incompressible-undrained conditions)
- Stable UPl small displacement element for saturated porous media (with higher order interpolation for displacements)
- FIC-stabilized UPl small displacement element for saturated porous media (with equal order interpolation for displacements)
- UPl quasi-zero-thickness interface elements for defining cracks and joints
- Local linear elastic damage model (Simo-Ju and modified von Mises)
- Non-local linear elastic damage model (Simo-Ju and modified von Mises)
- Bilinear cohesive fracture model (for quasi-zero-thickness interface elements)
- Fracture propagation utility based on the combination of the damage model with the insertion of interface elements after remeshing with GiD
- GiD GUI available in [KratosPoroGiDInterface](https://github.com/ipouplana/KratosPoroGiDInterface)
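As a hedged illustration of the isotropic damage laws listed above (the actual Simo-Ju and modified von Mises models define their own equivalent strain measures and softening laws), a minimal 1D strain-driven damage update might look like this; all values and the linear softening rule are hypothetical:

```python
# Generic 1D isotropic damage update (illustration only; not the
# application's Simo-Ju or modified von Mises implementation).
E = 30e9    # Young's modulus [Pa] (hypothetical value)
r0 = 1e-4   # damage threshold on the equivalent strain (hypothetical)

def damaged_stress(eps, r_hist):
    """Return (stress, updated history) for a strain-driven damage model."""
    r = max(r_hist, abs(eps))        # history variable never decreases
    if r <= r0:
        d = 0.0                      # below threshold: purely elastic
    else:
        d = 1.0 - (r0 / r)           # simple linear softening in r
    return (1.0 - d) * E * eps, r

# Loading past the threshold degrades the stiffness...
s1, r = damaged_stress(2e-4, r0)
# ...and unloading keeps the accumulated damage (secant stiffness)
s2, r = damaged_stress(1e-4, r)
```

The key property shown is irreversibility: the history variable `r` only grows, so unloading follows the degraded secant stiffness rather than recovering the virgin modulus.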
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0",
"kratosstructuralmechanicsapplication==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:31.711317 | kratosporomechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 6,644,322 | 70/05/42acb64cd9487a7c23c51d6cc5695a6431e3056f8dbd75acb7196728d331/kratosporomechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 6a24a95c4e54bd9201fe86d31ddebe42 | 2d1eb56d8c2aa639464d4cfcf0be189b207ee8c87de538d8f8b4c75c269f6b71 | 700542acb64cd9487a7c23c51d6cc5695a6431e3056f8dbd75acb7196728d331 | null | [] | 0 |
2.4 | KratosOptimizationApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # OptimizationApplication
The Kratos OptimizationApplication is a framework for solving optimization problems in continuum mechanics. It is designed to handle both gradient-based (adjoint-based) and gradient-free methods.
## Main Features
* State-of-the-art techniques and algorithms for shape, thickness and material/topology optimization.
* Efficient and consistent filtering techniques for parametrization-free shape, thickness and material/topology optimization.
* Abstract problem formulation which enables concurrent and nested multilevel-multi-scale optimization problems.
* Adaptive gradient-projection technique, developed specially for problems with an arbitrary large number of design variables of different scales.
* Modular implementation which enables analysis and optimization of multi-physics problems.
* Realization and implementation of additive manufacturing constraints, e.g. overhang conditions (support structures), stackability and geometric limitations. | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:29.443189 | kratosoptimizationapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 4,219,164 | 68/8a/4ab65e9ebc58a99fe65ac8894f57f8e0832623e172dfc78292a61db19ca6/kratosoptimizationapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 43b19c3e565be3cfbfa7472487a2dfb4 | 5d1c1df117665b6f551619d0b1b3ca4a79bc8a2f0f4293f99cd9213a5cb817bd | 688a4ab65e9ebc58a99fe65ac8894f57f8e0832623e172dfc78292a61db19ca6 | null | [] | 0 |
2.4 | KratosMultiphysics | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | <p align=center><img height="72.125%" width="72.125%" src="https://raw.githubusercontent.com/KratosMultiphysics/Documentation/master/Wiki_files/Home/kratos.png"></p>
[![License][license-image]][license] [![C++][c++-image]][c++standard] [![DOI][DOI-image]][DOI] [![GitHub stars][stars-image]][stars] [![Twitter][twitter-image]][twitter] [![Youtube][youtube-image]][youtube]
[![Release][release-image]][releases]
<a href="https://github.com/KratosMultiphysics/Kratos/releases/latest"><img src="https://img.shields.io/github/release-date/KratosMultiphysics/Kratos?label="></a>
<a href="https://github.com/KratosMultiphysics/Kratos/compare/Release-10.4.0...master"><img src="https://img.shields.io/github/commits-since/KratosMultiphysics/Kratos/latest?label=commits%20since"></a>
<a href="https://github.com/KratosMultiphysics/Kratos/commit/master"><img src="https://img.shields.io/github/last-commit/KratosMultiphysics/Kratos?label=latest%20commit"></a>
[](https://pypi.org/project/KratosMultiphysics/)
[](https://pepy.tech/project/KratosMultiphysics)
[release-image]: https://img.shields.io/badge/release-10.4.0-green.svg?style=flat
[releases]: https://github.com/KratosMultiphysics/Kratos/releases
[license-image]: https://img.shields.io/badge/license-BSD-green.svg?style=flat
[license]: https://github.com/KratosMultiphysics/Kratos/blob/master/kratos/license.txt
[c++-image]: https://img.shields.io/badge/C++-20-blue.svg?style=flat&logo=c%2B%2B
[c++standard]: https://isocpp.org/std/the-standard
[Nightly-Build]: https://github.com/KratosMultiphysics/Kratos/workflows/Nightly%20Build/badge.svg
[Nightly-link]: https://github.com/KratosMultiphysics/Kratos/actions?query=workflow%3A%22Nightly+Build%22
[DOI-image]: https://zenodo.org/badge/DOI/10.5281/zenodo.3234644.svg
[DOI]: https://doi.org/10.5281/zenodo.3234644
[stars-image]: https://img.shields.io/github/stars/KratosMultiphysics/Kratos?label=Stars&logo=github
[stars]: https://github.com/KratosMultiphysics/Kratos/stargazers
[twitter-image]: https://img.shields.io/twitter/follow/kratosmultiphys.svg?label=Follow&style=social
[twitter]: https://twitter.com/kratosmultiphys
[youtube-image]: https://badges.aleen42.com/src/youtube.svg
[youtube]:https://www.youtube.com/@kratosmultiphysics3578
_KRATOS Multiphysics_ ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++ and comes with an extensive Python interface. Read more in the [Overview](https://github.com/KratosMultiphysics/Kratos/wiki/Overview).
**Kratos** is **free** under BSD-4 [license](https://github.com/KratosMultiphysics/Kratos/blob/master/kratos/license.txt) and can be used even in commercial software as it is. Many of its main applications are also free and BSD-4 licensed but each derived application can have its own proprietary license.
# Main Features
**Kratos** is __multiplatform__ and available for __Windows, Linux__ (several distros) and __macOS__.
**Kratos** is __OpenMP__ and __MPI__ parallel and scalable up to thousands of cores.
**Kratos** provides a core which defines the common framework and several applications which work like plug-ins and can be extended to diverse fields.
## Its main applications are:
- [DEM](applications/DEMApplication) for the simulation of cohesive and non-cohesive, spherical and non-spherical particles
- [Fluid Dynamics](applications/FluidDynamicsApplication/README.md) provides 2D and 3D incompressible fluid formulations
- [Fluid Structure Interaction](applications/FSIApplication/README.md) for the solution of different FSI problems
- [Structural Mechanics](applications/StructuralMechanicsApplication/README.md) provides solutions for solid, shell and beam structures with linear and nonlinear, static and dynamic behavior
- [Contact Structural Mechanics](applications/ContactStructuralMechanicsApplication/README.md) for contact problems, used along with the [Structural Mechanics application](applications/StructuralMechanicsApplication/README.md)
## Some main modules are:
- [Linear Solvers](applications/LinearSolversApplication/README.md)
- [Trilinos](applications/TrilinosApplication/README.md)
- [Metis](applications/MetisApplication/README.md)
- [Meshing](applications/MeshingApplication/README.md)
# Documentation
Here you can find the basic documentation of the project:
## Getting Started
* Getting Kratos (Last compiled Release)
    * [Kratos from `pip`](https://pypi.org/project/KratosMultiphysics/): simply run `pip install KratosMultiphysics-all` in a terminal
* [Kratos for GiD](https://github.com/KratosMultiphysics/Kratos/wiki/Getting-Kratos-binaries-(via-GiD))
<!---
Rewrite this in the Documentation page
-->
* Compiling Kratos
* [See INSTALL.md](https://github.com/KratosMultiphysics/Kratos/blob/master/INSTALL.md)
## Tutorials
* [Running an example from GiD](https://kratosmultiphysics.github.io/Kratos/pages/Kratos/For_Users/Tutorials/Running_An_Example_From_GiD.html)
* [Kratos input files and I/O](https://kratosmultiphysics.github.io/Kratos/pages/Kratos/For_Users/Tutorials/Kratos_Input_Files_And_Io.html)
* [Data management](https://kratosmultiphysics.github.io/Kratos/pages/Kratos/For_Users/Tutorials/Data_Management.html)
* [Solving strategies](https://kratosmultiphysics.github.io/Kratos/pages/Kratos/For_Users/Tutorials/Solving_Strategies.html)
* [Manipulating solution values](https://kratosmultiphysics.github.io/Kratos/pages/Kratos/For_Developers/Tutorials/Manipulating-solution-values.html)
* [Multiphysics](https://kratosmultiphysics.github.io/Kratos/pages/Kratos/For_Users/Tutorials/Multiphysics_Example.html)
## More documentation
[Documentation](https://kratosmultiphysics.github.io/Kratos/)
# Examples of use
Kratos has been used for the simulation of many different problems in a wide variety of disciplines, ranging from wind flow around singular buildings to granular domain dynamics. Some examples and validation benchmarks simulated by Kratos can be found [here](https://kratosmultiphysics.github.io/Examples/)
<span>
<img align="center" src="https://github.com/KratosMultiphysics/Examples/raw/master/fluid_dynamics/use_cases/barcelona_wind/resources/BarcelonaVelocityVector.png" width="288">
Barcelona Wind Simulation
</span>
<br>
# Contributors
Organizations contributing to Kratos:
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/CIMNE_logo.png" width="128">
<br><br><p>International Center for Numerical Methods in Engineering</p>
<br><br>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/TUM_Logo.png" width="128">
<br><p>Chair of Structural Analysis<br>Technical University of Munich </p>
<br><br>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/altair-sponsor-logo.png" width="128">
<br><p>Altair Engineering</p>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/Deltares_logo.png" width="128">
<br><p>Deltares</p>
<br><br>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/TUBraunschweig_logo.svg" width="128">
<p>Institute of Structural Analysis<br>Technische Universität Braunschweig</p>
<br><br>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/logo_UNIPD.svg" width="128">
<p> University of Padova, Italy </p>
<br><br>
# Our Users
Some users of the technologies developed in Kratos are:
<span>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/AIRBUS_logo.png" width="128">
<p>Airbus Defence and Space<br>Stress Methods & Optimisation Department</p>
</span>
<span>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/siemens_logo.png" width="128">
<p>Siemens AG<br>Corporate Technology</p>
</span>
<span>
<img align="left" src="https://github.com/KratosMultiphysics/Documentation/raw/master/Wiki_files/Logos/onera_logo.png" width="128">
<p>ONERA, The French Aerospace Lab<br>Applied Aerodynamics Department</p>
</span>
🤗 Looking forward to seeing your logo here!
# Special Thanks To
## In Kratos Core:
- [Boost](http://www.boost.org/) for uBLAS
- [pybind11](https://github.com/pybind/pybind11) for exposing C++ to python
- [GidPost](https://www.gidsimulation.com/downloads/gidpost-2-11-library-to-write-postprocess-results-for-gid-in-ascii-binary-or-hdf5-format/) providing output to [GiD](https://www.gidsimulation.com/)
- [AMGCL](https://github.com/ddemidov/amgcl) for its highly scalable multigrid solver
- [JSON](https://github.com/nlohmann/json) JSON for Modern C++
- [ZLib](https://zlib.net/) The compression library
- [Google Test](https://github.com/google/googletest) for unit testing C++ code
- [Google Benchmark](https://github.com/google/benchmark) for microbenchmarking and performance testing
## In applications:
- [Eigen](http://eigen.tuxfamily.org) For linear solvers used in the [LinearSolversApplication](applications/LinearSolversApplication)
- [Trilinos](https://trilinos.org/) for MPI linear algebra and solvers used in [TrilinosApplication](applications/TrilinosApplication)
- [METIS](http://glaros.dtc.umn.edu/gkhome/views/metis) for partitioning in [MetisApplication](applications/MetisApplication/README.md)
- [CoSimIO](https://github.com/KratosMultiphysics/CoSimIO) for performing coupled simulations with external solvers within the [CoSimulationApplication](applications/CoSimulationApplication/README.md). The CoSimIO in Kratos uses the following libraries:
- [Boost](http://www.boost.org/) for the `intrusive_ptr`
- [filesystem](https://github.com/gulrak/filesystem) Header-only single-file std::filesystem compatible helper library, based on the C++17 specs
- [asio](https://think-async.com/Asio/) for socket based interprocess communication
# How to cite Kratos?
Please, use the following references when citing Kratos in your work.
- [Dadvand, P., Rossi, R. & Oñate, E. An Object-oriented Environment for Developing Finite Element Codes for Multi-disciplinary Applications. Arch Computat Methods Eng 17, 253–297 (2010). https://doi.org/10.1007/s11831-010-9045-2](https://doi.org/10.1007/s11831-010-9045-2)
- [Dadvand, P., Rossi, R., Gil, M., Martorell, X., Cotela, J., Juanpere, E., Idelsohn, S., Oñate, E. (2013). Migration of a generic multi-physics framework to HPC environments. Computers & Fluids. 80. 301–309. 10.1016/j.compfluid.2012.02.004.](https://doi.org/10.1016/j.compfluid.2012.02.004)
- [Vicente Mataix Ferrándiz, Philipp Bucher, Rubén Zorrilla, Suneth Warnakulasuriya, Alejandro Cornejo, Riccardo Rossi, Carlos Roig, jcotela, Josep Maria, tteschemacher, Miguel Masó, Guillermo Casas, Marc Núñez, Pooyan Dadvand, Salva Latorre, Ignasi De Pouplana, Joaquín Irazábal González, AFranci, Ferran Arrufat, riccardotosi, Aditya Ghantasala, Klaus Sautter, Peter Wilson, dbaumgaertner, Bodhinanda Chandra, Armin Geiser, Máté Kelemen, lluís, Inigo Lopez, jgonzalezusua. (2025). KratosMultiphysics/Kratos: Release v10.2.3 (v10.2.3). Zenodo. https://doi.org/10.5281/zenodo.15687676](https://zenodo.org/records/15687676)
| text/markdown | null | Kratos Team <kratosmultiphysics@gmail.com> | null | null | BSD-4-Clause | null | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:26.552081 | kratosmultiphysics-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 31,928,724 | 43/89/1d30420fbc44dd767a1d06f23a8baa8a23b5497e702e031d8bd75c30bf1c/kratosmultiphysics-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 8e2adc1e21840a98a87c8904584857ca | b16d100e2b822d244f2ff0ca6ee65f61db30723d6701dff25a8d147e5db18c03 | 43891d30420fbc44dd767a1d06f23a8baa8a23b5497e702e031d8bd75c30bf1c | null | [] | 0 |
2.4 | KratosMPMApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # MPM Application
This application implements the **Material Point Method (MPM)**, with the main motivation of simulating non-linear, highly deformable materials such as free-surface flows, geomechanical phenomena, and extreme events involving impact, penetration, fragmentation, blast, multi-phase interaction, failure evolution, etc.

## Theory
Particle or meshfree methods are a family of methods in which the state of a system is represented by a set of particles, without a fixed connectivity. As a consequence, these methods are particularly well suited for the analysis of moving discontinuities and large deformations with breaking and fragmentation. This approach does not suffer from the mesh distortion and entanglement issues posed by other Lagrangian discretizations such as the Finite Element Method (FEM).
The **Material Point Method** (MPM) is a hybrid technique which uses a fixed background grid (or mesh) for solving the governing equations in a FEM fashion, and a set of material points (MPs) for storing all the historical variables and material information. The MPM has gained remarkable popularity due to its capability in simulating problems involving history-dependent materials and large deformations. As MPM is able to combine the strengths of both Eulerian and Lagrangian methods, it has been used in various engineering applications and industrial settings, in particular in geomechanics and in the environmental fluid dynamics field.
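The grid/particle split described above can be illustrated with a minimal, hypothetical 1D particle-to-grid (P2G) transfer using linear shape functions. This is a didactic sketch, not the Kratos implementation; all names and values are made up for illustration:

```python
import numpy as np

# Schematic 1D particle-to-grid (P2G) mass/momentum transfer used in MPM.
# Linear (hat) shape functions on a fixed background grid; material points
# carry mass and velocity. Illustration only, not the Kratos code.
def p2g(x_p, m_p, v_p, n_nodes, dx):
    """Scatter particle mass and momentum to grid nodes with linear weights."""
    mass = np.zeros(n_nodes)
    mom = np.zeros(n_nodes)
    for xp, mp, vp in zip(x_p, m_p, v_p):
        i = int(xp // dx)           # index of the grid node left of the particle
        w_right = xp / dx - i       # linear weight to the right node
        w_left = 1.0 - w_right
        mass[i] += w_left * mp
        mass[i + 1] += w_right * mp
        mom[i] += w_left * mp * vp
        mom[i + 1] += w_right * mp * vp
    return mass, mom

# One particle of mass 2 halfway between nodes 0 and 1 splits its mass 50/50
mass, mom = p2g([0.25], [2.0], [1.0], n_nodes=3, dx=0.5)
```

Mass and momentum are conserved by construction since the linear weights of each particle sum to one; after solving on the grid, the reverse grid-to-particle (G2P) step updates the material points.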
## Getting Started
The `MPMApplication` is part of the Kratos Multiphysics framework and can be obtained in two different ways:
* by installing the Kratos binaries using the package manager `pip` (suggested for users that want to use the application like a black-box);
* by downloading the source code and compiling it (suggested for developers).
### Getting Binaries with `pip` (users)
Kratos binaries are available for Linux, Windows and MacOS and can be installed by using the `pip` package manager.
Open the terminal and run the following command:
```bash
python3 -m pip install KratosMPMApplication
```
This command will install the following packages:
* `KratosMultiphysics`: Kratos Multiphysics Core;
* `KratosMPMApplication`: application implementing MPM;
* `KratosLinearSolversApplication`: dependency required by `MPMApplication`.
### Build from Source (developers)
Instructions on how to download, compile and run Kratos in your local machine for development and testing purposes are available for Linux, Windows and MacOS distributions in the [installation page](https://github.com/KratosMultiphysics/Kratos/blob/master/INSTALL.md).
In particular, in order to use the `MPMApplication` it is also required to compile the auxiliary `LinearSolversApplication`.
* In **Linux**, the following lines must appear in the `/path_to_kratos/scripts/standard_configure.sh` file:
```bash
export KRATOS_APPLICATIONS=
add_app ${KRATOS_APP_DIR}/MPMApplication
add_app ${KRATOS_APP_DIR}/LinearSolversApplication
```
* In **Windows**, the following lines must appear in the `/path_to_kratos/scripts/standard_configure.bat` file:
```console
set KRATOS_APPLICATIONS=
CALL :add_app %KRATOS_APP_DIR%\MPMApplication;
CALL :add_app %KRATOS_APP_DIR%\LinearSolversApplication;
```
## GUI
A GUI (Graphic User Interface) for the MPM application is also available within the pre and post processing software [GiD](https://www.gidhome.com/). Instructions on how to download and install it are available in the [`GiDInterface` repository](https://github.com/KratosMultiphysics/GiDInterface/tree/master/). A basic knowledge of GiD is required.
Any software able to handle `vtk` files can be used for post processing (e.g., [Paraview](https://www.paraview.org/), [VisIt](https://visit-dav.github.io/visit-website/index.html)).
## Examples & Tutorials
* Use-cases and validation examples are available in the MPM section of the [Examples repository](https://kratosmultiphysics.github.io/Examples/).
* Unit tests of the main features can be found in the [tests](https://github.com/KratosMultiphysics/Kratos/tree/master/applications/MPMApplication/tests) folder.
* A step-by-step tutorial using GiD for both pre and post processing is available [here](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Step-by-step_Tutorial_in_GiD/introduction.html).
## Features
The following features are currently available and subject to development within the `MPMApplication`.
**Formulations**
* Irreducible formulation (u displacement based)
* Mixed UP (displacement/pressure) formulation
**Element types**
* Updated Lagrangian elements - triangular and quadrilateral (2D) and tetrahedral and hexahedral (3D), structured and unstructured, using classical or partitioned quadrature rules (the latter limited to explicit MPM)
* Updated Lagrangian axis-symmetric elements - triangular and quadrilateral (2D), structured and unstructured
* Updated Lagrangian mixed UP elements - triangular (2D) and tetrahedral (3D), structured and unstructured, stabilized using Variational Multiscale Stabilization (VMS) or Pressure Projection techniques
**Constitutive laws**
* [Linear isotropic elastic materials](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Constitutive_Laws/constitutive_laws.html#linear-elasticity) - plane strain, plane stress, axis-symmetric, and 3D
* [Hyperelastic Neo-Hookean laws](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Constitutive_Laws/constitutive_laws.html#hyperelastic-neohookean) - finite strain, plane strain, axis-symmetric, and 3D
* Elasto-plastic laws:
* [Mohr Coulomb](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Constitutive_Laws/constitutive_laws.html#mohr-coulomb) - finite strain, associative and non-associative, plane strain, axis-symmetric, and 3D
* [Mohr Coulomb with Strain Softening](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Constitutive_Laws/constitutive_laws.html#mohr-coulomb-strain-softening) - finite strain, associative and non-associative, plane strain, axis-symmetric, and 3D
* Critical state laws:
* [Modified Cam-Clay](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Constitutive_Laws/constitutive_laws.html#modified-cam-clay) - finite strain, plane strain, axis-symmetric, and 3D
* Johnson-Cook thermal plastic law (explicit MPM only)
* Displacement-based [Newtonian Fluid Law](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Constitutive_Laws/constitutive_laws.html#newtonian-fluid) - plane strain and 3D
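For reference, a compressible Neo-Hookean law of the common textbook form τ = μ(b − I) + λ ln(J) I can be sketched as follows. This is a generic form and not necessarily the exact variant implemented in the application; the material constants are hypothetical:

```python
import numpy as np

# Generic compressible Neo-Hookean law (textbook form): Kirchhoff stress
# tau = mu*(b - I) + lam*ln(J)*I, with deformation gradient F,
# left Cauchy-Green tensor b = F F^T and Jacobian J = det(F).
def neo_hookean_kirchhoff(F, mu, lam):
    J = np.linalg.det(F)
    b = F @ F.T
    I = np.eye(F.shape[0])
    return mu * (b - I) + lam * np.log(J) * I

# An undeformed state (F = I) must be stress-free
tau0 = neo_hookean_kirchhoff(np.eye(3), mu=80e9, lam=120e9)

# Simple shear (volume-preserving, J = 1) gives a shear stress of mu * gamma
F = np.eye(3)
F[0, 1] = 0.1
tau = neo_hookean_kirchhoff(F, mu=80e9, lam=120e9)
```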
**Boundary conditions**
* Grid-Based Conditions (conforming): applied directly at the background nodes
* Neumann: [Point load](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Grid-based_Boundary_Conditions/load.html)
* Neumann: [Line load](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Grid-based_Boundary_Conditions/load.html) (a distributed load applied over a line)
* Neumann: [Surface load](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Grid-based_Boundary_Conditions/load.html) (a distributed load applied over a face)
* Dirichlet: [Slip](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Grid-based_Boundary_Conditions/slip_boundary_condition.html) and [non-slip](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Grid-based_Boundary_Conditions/fixed_displacement_boundary_condition.html) conditions for arbitrary inclination
* Material Point-Based Conditions (non-conforming): applied on movable boundary particles
* Neumann:
* [moving point load](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Material_Point-based_Boundary_Conditions/point_load.html)
* interface condition for partitioned coupling with DEM
* Dirichlet: fixed, slip or contact condition
* [penalty method](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Material_Point-based_Boundary_Conditions/dirichlet_boundary_particles.html)
* [Lagrange multiplier method](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Material_Point-based_Boundary_Conditions/dirichlet_boundary_particles.html) (only for triangles and tetrahedra)
* [perturbed Lagrangian method](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/Processes/Material_Point-based_Boundary_Conditions/dirichlet_boundary_particles.html)
* interface condition for partitioned coupling with FEM, RBS,...
**Time schemes**
* [Implicit](https://kratosmultiphysics.github.io/Kratos/pages/Applications/MPM_Application/MPM_Solver/mpm_implicit_solver.html) - Newmark/Bossak prediction and correction scheme for static, quasi-static, and dynamic problems
* Explicit
**Other features**
* Partitioned coupling with Finite Element Method (FEM) - weak and strong coupling of nonconforming discretization
* Partitioned coupling with the Discrete Element Method (DEM)
* Partitioned coupling with the Rigid Body Solver (RBS)
* Material point erase features - to delete material points outside the interest domain
## References
Recommended references for implementation details of MPM in Kratos:
* Singer, V., (2024). **Partitioned Coupling Strategies to Simulate the Impact of Granular Mass Flows on Flexible Protective Structures**, *PhD Thesis*, Technical University of Munich. PDF: <a href="https://mediatum.ub.tum.de/1743069">https://mediatum.ub.tum.de/1743069</a>
* Singer, V., Teschemacher, T., Larese, A., Wüchner, R., Bletzinger, K.U. (2024). **Lagrange multiplier imposition of non-conforming essential boundary conditions in implicit Material Point Method**, *Computational Mechanics*, 73, 1311–1333 DOI: <a href="https://doi.org/10.1007/s00466-023-02412-w">10.1007/s00466-023-02412-w</a>.
* Singer, V., Sautter, K.B., Larese, A., Wüchner, R., Bletzinger K.-U., (2023) **Partitioned Coupling Approaches for the Simulation of Natural Hazards Impacting Protective Structures**, *VIII International Conference on Particle-Based Methods*. DOI: <a href="https://doi.org/10.23967/c.particles.2023.002">10.23967/c.particles.2023.002</a>.
* Singer, V., Larese, A., Wüchner, R., Bletzinger K.-U., (2023). **Partitioned MPM-FEM Coupling Approach for Advanced Numerical Simulation of Mass-Movement Hazards Impacting Flexible Protective Structures**, *X International Conference on Computational Methods for Coupled Problems in Science and Engineering*. DOI: <a href="https://doi.org/10.23967/c.coupled.2023.026">10.23967/c.coupled.2023.026</a>.
* Singer, V., Sautter, K.B., Larese, A., Wüchner, R., Bletzinger, K.-U. (2022). **A partitioned material point method and discrete element method coupling scheme**, *Advanced Modeling and Simulation in Engineering Sciences*, 9(16). DOI: <a href="https://doi.org/10.1186/s40323-022-00229-5">10.1186/s40323-022-00229-5</a>.
* Wilson, P., (2022). **A computational impact analysis approach leveraging non-conforming spatial, temporal and methodological discretisations**, *PhD Thesis*, University of Queensland. DOI: <a href="https://doi.org/10.14264/3e10f66">10.14264/3e10f66</a>.
* Singer, V., Bodhinanda, C., Larese, A., Wüchner, R., Bletzinger K.-U., (2021). **A Staggered Material Point Method and Finite Element Method Coupling Scheme Using Gauss Seidel Communication Pattern**, *9th edition of the International Conference on Computational Methods for Coupled Problems in Science and Engineering*. DOI: <a href="https://doi.org/10.23967/coupled.2021.006">10.23967/coupled.2021.006</a>.
* Chandra, B., Singer, V., Teschemacher, T., Wuechner, R., & Larese, A. (2021). **Nonconforming Dirichlet boundary conditions in implicit material point method by means of penalty augmentation**, *Acta Geotechnica*, 16(8), 2315-2335. DOI: <a href="https://doi.org/10.1007/s11440-020-01123-3">10.1007/s11440-020-01123-3</a>.
* Wilson, P., Wüchner, R., & Fernando, D. (2021). **Distillation of the material point method cell crossing error leading to a novel quadrature‐based C0 remedy**, *International Journal for Numerical Methods in Engineering*, 122(6), 1513-1537. DOI: <a href="https://doi.org/10.1002/nme.6588">10.1002/nme.6588</a>.
* Iaconeta, I., Larese, A., Rossi, R., & Oñate, E. (2018). **A stabilized mixed implicit Material Point Method for non-linear incompressible solid mechanics**, *Computational Mechanics*, 1-18. DOI: <a href="https://doi.org/10.1007/s00466-018-1647-9">10.1007/s00466-018-1647-9</a>.
* Iaconeta, I., Larese, A., Rossi, R., & Zhiming, G. (2016). **Comparison of a material point method and a Galerkin meshfree method for the simulation of cohesive-frictional materials**, *Materials*, 10(10), p. 1150. DOI: <a href="https://doi.org/10.3390/ma10101150">10.3390/ma10101150</a>.
## License
The `MPMApplication` is **Open Source**. The main code and program structure are available and intended to grow with the needs of any user willing to expand them. The **BSD** license allows the existing code to be used and distributed without restriction, while leaving developers free to build new parts of the code on either an open or a closed basis.
## Contact
* **Antonia Larese** - *Group Leader* - [antonia.larese@unipd.it](mailto:antonia.larese@unipd.it)
* **Veronika Singer** - *Developer* - [veronika.singer@tum.de](mailto:veronika.singer@tum.de)
* **Laura Moreno** - *Developer* - [laura.morenomartinez@ua.es](mailto:laura.morenomartinez@ua.es)
* **Andi Makarim Katili** - *Developer* - [andi.katili@tum.de](mailto:andi.katili@tum.de)
* **Nicolò Crescenzio** - *Developer* - [nicolo.crescenzio@math.unipd.it](mailto:nicolo.crescenzio@math.unipd.it)
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratoslinearsolversapplication==10.4.0",
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:21.654120 | kratosmpmapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 3,846,140 | 43/83/da7c17eada8df929aebb56b7819319f1c7d9af7564d6ac6101ff6c28efa8/kratosmpmapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 85dc1963194ab29d3cb73aa91ffe3881 | 1c2f09f44690869864b924e03d9c967dc0e75328a6599b9b2a28f5ae483b815d | 4383da7c17eada8df929aebb56b7819319f1c7d9af7564d6ac6101ff6c28efa8 | null | [] | 0 |
2.4 | KratosMeshMovingApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## MeshMoving Application
The MeshMoving Application (formerly ALEapplication) contains different mesh-update techniques.
A typical use case for mesh moving is Fluid-Structure Interaction (FSI), where the fluid mesh is moved after the displacements of the structure are mapped onto the interface.
### Features:
- laplacian mesh motion solver
- structural similarity mesh motion solver
- computation of mesh velocities (ALE)
- ALE-fluid solver for the usage in FSI
- works both in OpenMP and MPI
### Usage
Check the tests for examples on how to use the solvers.
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:19.913688 | kratosmeshmovingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,416,725 | ac/b2/ce80cc14799e9747eec78a8d40af61e0f87036fde9198b41fe2f44f33b68/kratosmeshmovingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | a4a00a6ec9ee2e78cdbf9c94116a446b | 25fe91fd520f1d1be1c39daa2193a9559d4cc9e52a38934afc85525bd94523bc | acb2ce80cc14799e9747eec78a8d40af61e0f87036fde9198b41fe2f44f33b68 | null | [] | 0 |
2.4 | KratosMeshingApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # Meshing Application

The Meshing Application provides several tools to create, manipulate and interact with meshes. It contains interfaces to both Kratos and third-party libraries (Triangle, TetGen, MMG).
The application offers the functionalities listed below. If an object is listed without methods, it can be called using its __Execute()__ function.
- [Interface](#interface)
* [Custom IO](#custom-io)
* [Utilities](#utilities)
* [Meshers](#meshers)
* [Processes](#processes)
+ [Metrics](#metrics)
+ [LevelSet](#levelset)
+ [Hessian](#hessian)
+ [Error](#error)
- [External Libraries](#external-libraries)
* [TetGen](#tetgen)
* [MMG](#mmg)
## Interface
### Custom IO
* __PFEMGidIO__: A specialized instance of GiDIO for the PFEM Application. It redefines several IO methods:
* _WriteMesh_
* _WriteNodeMesh_
* _InitializeMesh_
* _FinalizeMesh_
* _InitializeResults_
* _FinalizeResults_
* _WriteNodalResults_
* _PrintOnGaussPoints_
* _Flush_
* _CloseResultFile_
### Utilities
* __MeshTransfer2D__
* __MeshTransfer3D__:
* _DirectModelPartInterpolation_
* _DirectScalarVarInterpolation_
* _DirectVectorialVarInterpolation_
* __BinBasedMeshTransfer2D__
* __BinBasedMeshTransfer3D__: Alternative implementation of the __MeshTransfer__ utility based on bins. Inherits the procedures from __MeshTransfer__ and also adds:
* _MappingFromMovingMesh_ScalarVar_
* _MappingFromMovingMesh_VectorialVar_
* _MappingFromMovingMesh_VariableMeshes_ScalarVar_
* _MappingFromMovingMesh_VariableMeshes_VectorialVar_
* __LocalRefineTriangleMesh__: Refines a Triangular Mesh.
* __LocalRefinePrismMesh__: Refines a Prism Mesh.
* __LocalRefineSPrismMesh__: Refines a SPrism Mesh.
* __LocalRefineTetrahedraMesh__: Refines a Tetrahedra Mesh.
* __Cutting_Isosurface_Application__:
* _GenerateScalarVarCut_
* _GenerateVectorialComponentVarCut_
* _GenerateVectorialVarCut_
* _AddModelPartElements_
* _AddSkinConditions_
* _UpdateCutData_
* _DeleteCutData_
### Meshers
* __TriGenPFEMModeler__:
* _ReGenerateMesh_
* __TriGenGLASSModeler__:
* _ReGenerateMeshGlass_
* __TriGenPFEMModelerVMS__:
* _ReGenerateMesh_
* __TriGenPFEMSegment__:
* _ReGenerateMesh_
### Processes
* __InternalVariablesInterpolationProcess__: Interpolates internal variables between two meshes.
#### Metrics
* __MetricFastInit2D__:
* __MetricFastInit3D__:
#### LevelSet
* __ComputeLevelSetSolMetricProcess2D__:
* __ComputeLevelSetSolMetricProcess3D__:
#### Hessian
* __ComputeHessianSolMetricProcess2D__: For double values.
* __ComputeHessianSolMetricProcess3D__: For double values.
* __ComputeHessianSolMetricProcessComp2D__: For components.
* __ComputeHessianSolMetricProcessComp3D__: For components.
#### Error
* __ComputeErrorSolMetricProcess2D__:
* __ComputeErrorSolMetricProcess3D__:
## External Libraries
The Meshing Application can make use of several third-party libraries as an alternative (or sometimes the only) way to implement the
interface shown above. You can find information about these libraries on their respective pages, listed below:
### TetGen
[TetGen](http://wias-berlin.de/software/index.jsp?id=TetGen&lang=1) is a program to generate tetrahedral meshes of any 3D polyhedral domains.
Please note that the __TetGen license is not compatible__ with Kratos, hence it is not included as part of Kratos. You must indicate at compile time where an existing TetGen installation can be found on your system.
TetGen enables the use of the following __utilities__:
* __TetgenVolumeMesher__:
* _AddHole_
* _GenerateMesh_
* __TetrahedraReconnectUtility__:
* _EvaluateQuality_
* _TestRemovingElements_
* _OptimizeQuality_
* _FinalizeOptimization_
* _updateNodesPositions_
* _setMaxNumThreads_
* _setBlockSize_
* _isaValidMesh_
TetGen also enables the use of the following __meshers__:
* __TetGenPfemModeler__:
* _ReGenerateMesh_
* __TetGenPfemRefineFace__:
* _ReGenerateMesh_
* __TetGenPfemContact__:
* _ReGenerateMesh_
* __TetGenCDT__:
* _GenerateCDT_
* __TetGenPfemModelerVms__:
* _ReGenerateMesh_
### MMG
[MMG](https://www.mmgtools.org/) is an open source software for simplicial remeshing. It provides 3 applications and 4 libraries. In Kratos it provides the following additional __procedures__:
* __MmgProcess__: This class is a remesher which uses the MMG library. It covers both the 2D and 3D cases (solids and surfaces). The remesher keeps the previous submodelparts and interpolates the nodal values between the old and the new mesh.
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:18.381267 | kratosmeshingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,920,622 | 8d/61/72f654a0cba6529a09cd2521c704b127ccb0843fa26fe28abd26ba5086c2/kratosmeshingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | a9d8dea2ad279548e05b3535076efad3 | b5d0fd8f81fedf56c53347be57f70abb509c8e6ead24547e61c9c9666a69592b | 8d6172f654a0cba6529a09cd2521c704b127ccb0843fa26fe28abd26ba5086c2 | null | [] | 0 |
2.4 | KratosMedApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # MedApplication
The Med Application is an interface to the MED-library. This library writes med-files, which contain mesh, field results and other data, and is based on [HDF5](https://www.hdfgroup.org/solutions/hdf5/). This format is used by [Salome](https://www.salome-platform.org/) and [Code_Aster](https://code-aster.org).
## Installation
The MED-library is an external library, which must be installed before the application can be compiled.
### Ubuntu
On Ubuntu, it can be installed with `sudo apt-get install libmedc-dev`. This installs all required dependencies, including HDF5.
The source code is available on the Salome website for a manual installation. In this case also HDF5 needs to be installed separately.
Use `MED_ROOT` to specify the path to the MED installation in the CMake of Kratos.
### Arch / Manjaro
Packages related to *Salome* and *MED* for arch-based distros can be installed from the [AUR](https://en.wikipedia.org/wiki/Arch_Linux#Arch_User_Repository_(AUR)). The MedApplication requires [med-serial](https://aur.archlinux.org/packages/med-serial) (for non-MPI builds) or [med-openmpi](https://archlinux.org/packages/extra/x86_64/med-openmpi/) (for MPI builds with OpenMPI).
```
yay -S med-serial med-openmpi
```
## Usage
- In Salome, mesh groups are translated into SubModelParts. Different geometries and nodes can be added.
- Nested SubModelParts (SubSubModelParts etc.) can be created by specifying a name containing `.`, i.e. the same way it usually works in Kratos.
- The number of characters is restricted in MED: 64 for the main mesh name and 80 for group names. Everything beyond these limits is cut.
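The name-length limits above can be illustrated with a minimal plain-Python sketch; this only mirrors the documented truncation behavior (64 characters for the main mesh name, 80 for group names) and does not call the MED library itself:

```python
# MED name-length limits as documented above (illustration only, not the MED API)
MAX_MESH_NAME_CHARS = 64   # main mesh name
MAX_GROUP_NAME_CHARS = 80  # group names

def med_truncate(name: str, limit: int) -> str:
    """Return the name as MED would store it: everything beyond the limit is cut."""
    return name[:limit]

# a deeply nested SubModelPart name that exceeds the group limit
group = "structure.interface.loaded_nodes." + "x" * 100
print(med_truncate(group, MAX_GROUP_NAME_CHARS))  # only the first 80 characters survive
```

In practice this means two group names that differ only after the limit collide in the med-file, so keep nested SubModelPart names short.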
## Development
- Use [HDFView](https://www.hdfgroup.org/downloads/hdfview/) to inspect the med-files.
- Make sure to check the return value of every med-library function call.
- The med-library does not check whether invalid data is written to the file. This must be ensured by the user, as the med-library is a thin wrapper around HDF5.
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:16.692484 | kratosmedapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 6,890,201 | 36/9b/59590e60f0670d10d663077f78b433a5ead8fbb5ec094ede7a58885f518b/kratosmedapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | c8a97138702210f2f6775a12c2014bb8 | 349a438d3e92802410b335efce47aa816767f649a590b5014d1c2bbeb0b49ad9 | 369b59590e60f0670d10d663077f78b433a5ead8fbb5ec094ede7a58885f518b | null | [] | 0 |
2.4 | KratosMappingApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Mapping Application
The Mapping Application contains the core developments in mapping data between non-matching grids. It works in both shared and distributed (**MPI**) memory environments as well as in 1D, 2D and 3D domains.
### Overview
- [List of features](#list-of-features)
- [Dependencies](#dependencies)
- [Mapping in CoSimulation](#Mapping-in-CoSimulation)
- [Basic Usage](#basic-usage)
- [Advanced Usage](#advanced-usage)
- [Available Mappers](#available-mappers)
- [When to use which Mapper?](#when-to-use-which-mapper)
- [Using the Mapper for ModelParts that are not part of all ranks](#using-the-mapper-for-modelparts-that-are-not-part-of-all-ranks)
- [Miscellaneous functionalities](#miscellaneous-functionalities)
- [FAQ](#faq)
### List of features
- Parallelism:
- Serial (no parallelism)
- Shared memory (OpenMP)
- Distributed memory (MPI)
- Domain sizes: 1D / 2D / 3D
- Matching and non-matching grids
- Different mapping technologies (see [here](#available-mappers)):
- Nearest Neighbor
- Nearest Element
- Barycentric
- Radial Basis Function Mapper
- Beam Mapper
- Metamappers
- 3D/2D metamapper (metamapper which obtains the solution for the 3D destination model part from the original 2D solution)
- Mapping operations (see [here](#customizing-the-behavior-of-the-mapping-with-flags))
### Dependencies
The serial / shared memory parallel compilation of the Mapping Application doesn't have any dependencies (except the `KratosCore`).
The distributed compilation of the Mapping Application depends on the [Trilinos library](https://trilinos.github.io/). Also most of the MPI-solvers in Kratos depend on Trilinos, see the [Trilinos Application](../TrilinosApplication).
### Mapping in CoSimulation
The Mapping Application can be used for mapping within the [CoSimulation Application](../CoSimulationApplication). This can be done by using the [KratosMappingDataTransferOperator](../CoSimulationApplication/python_scripts/data_transfer_operators/kratos_mapping.py).
### Basic Usage
The _Mapper_ maps nodal data from one _ModelPart_ to another. This means that the input for the _Mapper_ is two _ModelParts_, the **Origin** and the **Destination**. Furthermore settings in the form of _Kratos::Parameters_ are passed.
The _Mapper_ is constructed using the _MapperFactory_. See the following basic example.
```py
# import the Kratos Core
import KratosMultiphysics as KM
# import the MappingApplication to load the mappers
import KratosMultiphysics.MappingApplication as KratosMapping
# create ModelParts
# ...
mapper_settings = KM.Parameters("""{
"mapper_type": "nearest_neighbor",
"echo_level" : 0
}""")
# creating a mapper for shared memory
mapper = KM.MapperFactory.CreateMapper(
model_part_origin,
model_part_destination,
mapper_settings
)
```
For constructing an _MPI-Mapper_ use the `MPIExtension` instead:
```py
# creating a mapper for distributed memory
from KratosMultiphysics.MappingApplication import MPIExtension as MappingMPIExtension
mpi_mapper = MappingMPIExtension.MPIMapperFactory.CreateMapper(
model_part_origin,
model_part_destination,
mapper_settings
)
```
After constructing the _Mapper_ / _MPI-Mapper_ it can be used immediately to map any scalar and vector quantities, no further initialization is necessary.\
The **Map** function is used to map values from the **Origin** to the **Destination**. For this the _Variables_ have to be specified. See the following example for mapping scalar quantities.
```py
# mapping scalar quantities
# this maps the nodal quantities of TEMPERATURE on the origin-ModelPart
# to the nodal quantities of AMBIENT_TEMPERATURE on the destination-ModelPart
mapper.Map(KM.TEMPERATURE, KM.AMBIENT_TEMPERATURE)
```
The **Map** function is overloaded, this means that mapping vector quantities works in the same way as mapping scalar quantities.
```py
# mapping vector quantities
# this maps the nodal quantities of VELOCITY on the origin-ModelPart
# to the nodal quantities of MESH_VELOCITY on the destination-ModelPart.
mapper.Map(KM.VELOCITY, KM.MESH_VELOCITY)
```
Mapping from **Destination** to **Origin** can be done using the **InverseMap** function which works in the same way as the **Map** function.
```py
# inverse mapping scalar quantities
# this maps the nodal quantities of AMBIENT_TEMPERATURE on the destination-ModelPart
# to the nodal quantities of TEMPERATURE on the origin-ModelPart
mapper.InverseMap(KM.TEMPERATURE, KM.AMBIENT_TEMPERATURE)
# inverse mapping vector quantities
# this maps the nodal quantities of MESH_VELOCITY on the destination-ModelPart
# to the nodal quantities of VELOCITY on the origin-ModelPart
mapper.InverseMap(KM.VELOCITY, KM.MESH_VELOCITY)
```
For the 3D/2D metamapper the relevant settings are the following, where `base_mapper` is the backend mapper to be used.
```py
mapper_params = KM.Parameters("""{
"mapper_type" : "projection_3D_2D",
"base_mapper" : "nearest_neighbor",
"search_settings" : {},
"echo_level" : 0
}""")
```
### Advanced Usage
The previous section introduced the basics of using the _MappingApplication_. The more advanced usage is explained in this section.
#### Customizing the behavior of the mapping with Flags
By default the mapping functions **Map** and **InverseMap** will overwrite the values where they map to. In order to add instead of overwrite the values the behavior can be customized by using _Kratos::Flags_. Consider in the following example that several forces are acting on a surface. Overwriting the values would cancel the previously applied forces.
```py
# Instead of overwriting, this will add the values to the existing ones
mapper.Map(KM.REACTION, KM.FORCE, KM.Mapper.ADD_VALUES)
```
Sometimes it can be necessary to swap the signs of quantities that are to be mapped. This can be done with the following:
```py
# Swapping the sign, i.e. multiplying the values with (-1)
mapper.Map(KM.DISPLACEMENT, KM.MESH_DISPLACEMENT, KM.Mapper.SWAP_SIGN)
```
The flags can also be combined:
```py
mapper.Map(KM.REACTION, KM.FORCE, KM.Mapper.ADD_VALUES | KM.Mapper.SWAP_SIGN)
```
Historical nodal values are used by default. Mapping to and from nonhistorical nodal values is also supported; the following examples show the usage:
This maps the values from the origin (`REACTION`) as historical values to the destination (`FORCE`) as nonhistorical values:
```py
mapper.Map(KM.REACTION, KM.FORCE, KM.Mapper.TO_NON_HISTORICAL)
```
This maps the values from the origin (`REACTION`) as nonhistorical values to the destination (`FORCE`) as historical values:
```py
mapper.Map(KM.REACTION, KM.FORCE, KM.Mapper.FROM_NON_HISTORICAL)
```
This maps the values from the destination (`FORCE`) as historical values to the origin (`REACTION`) as nonhistorical values:
```py
mapper.InverseMap(KM.REACTION, KM.FORCE, KM.Mapper.TO_NON_HISTORICAL)
```
This maps the values from the destination (`FORCE`) as nonhistorical values to the origin (`REACTION`) as historical values:
```py
mapper.InverseMap(KM.REACTION, KM.FORCE, KM.Mapper.FROM_NON_HISTORICAL)
```
Of course it is possible to use both origin and destination nonhistorical. This maps the values from the origin (`REACTION`) as nonhistorical values to the destination (`FORCE`) as nonhistorical values:
```py
mapper.Map(KM.REACTION, KM.FORCE, KM.Mapper.FROM_NON_HISTORICAL | KM.Mapper.TO_NON_HISTORICAL)
```
Many _Mappers_ internally construct a mapping matrix. It is possible to use the transpose of this matrix for mapping with `USE_TRANSPOSE`. This is often used for conservative mapping of forces in FSI, when the virtual work on both interfaces should be preserved.
```py
mapper.Map(KM.REACTION, KM.FORCE, KM.Mapper.USE_TRANSPOSE)
```
#### Updating the Interface
In case of moving interfaces (e.g. in a problem involving Contact between bodies) it can become necessary to update the _Mapper_ to take the new geometrical positions of the interfaces into account.\
One way of doing this would be to construct a new _Mapper_, but this is not efficient and sometimes not even possible.
Hence the _Mapper_ provides the **UpdateInterface** function for updating itself with respect to the new geometrical positions of the interfaces.\
Note that this is potentially an expensive operation due to searching the new geometrical neighbors on the interface.
```py
mapper.UpdateInterface()
```
#### Checking which mappers are available
The following can be used to see which _Mappers_ are available:
```py
# available mappers for shared memory
KM.MapperFactory.GetRegisteredMapperNames()
# available mappers for distributed memory
MappingMPIExtension.MPIMapperFactory.GetRegisteredMapperNames()
# check if mapper for shared memory exists
KM.MapperFactory.HasMapper("mapper_name")
# check if mapper for distributed memory exists
MappingMPIExtension.MPIMapperFactory.HasMapper("mapper_name")
```
#### Search settings
The search of neighbors / partners on the other side of the interface is a crucial task when creating the mapper. Especially in distributed computations (MPI) this can be very expensive and time consuming. Hence the search of the mapper is very optimized to provide robust and fast results. For this the search works in several iterations where the search radius is increased in each iteration.
The default settings of the search are working fine in most cases, but in some special cases it might still be necessary to tweak and optimize the settings. The following settings are available (as sub-parameter `search_settings` of the settings that are given to the mapper):
| name | type | default| description |
|---|---|---|---|
| `search_radius`| `double` | computed | The search radius to start with in the first iteration. In each next iteration it will be increased by multiplying with `search_radius_increase_factor` (`search_radius *= search_radius_increase_factor`) |
| `max_search_radius` | `double` | computed | The max search radius to use. |
| `search_radius_increase_factor`| `double` | `2.0` | factor by which the search radius is increasing in each search iteration (see above). **Tuning this parameter is usually the best way to achieve a faster search**. In many cases decreasing it will speed up the search, especially for volumetric mapping, but it is case dependent. |
| `max_num_search_iterations` | `int` | computed (min 3) | max number of search iterations that are conducted. If the search is successful earlier, it terminates earlier. The more heterogeneous the mesh, the larger this value will be. |
It is recommended to set the `echo_level` to 2 or higher for getting useful information from the search. This will help to debug the search in case of problems.
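As a concrete example, the parameters from the table can be passed as the `search_settings` sub-parameter of the mapper settings. The following sketch uses only the parameter names documented above; the numeric values are illustrative assumptions and should be tuned per case:

```json
{
    "mapper_type" : "nearest_neighbor",
    "echo_level"  : 2,
    "search_settings" : {
        "search_radius"                 : 0.25,
        "max_search_radius"             : 2.0,
        "search_radius_increase_factor" : 1.5,
        "max_num_search_iterations"     : 5
    }
}
```

Here the search would try radii 0.25, 0.375, 0.5625, ... (each iteration multiplying by `search_radius_increase_factor`) until partners are found or the limits are reached.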
### Available Mappers
This section explains the theory behind the mappers.
#### Nearest Neighbor
The _NearestNeighborMapper_ is a very simple/basic _Mapper_. For each node it searches the closest neighbor (node) on the other side of the interface. During mapping it gets/sets its value to the value of its closest neighbor.
This mapper is best suited for problems where both interfaces have a similar discretization. Furthermore it is very robust and can be used for setting up problems when one does not (yet) want to deal with mapping.
Internally it constructs the mapping matrix, hence it offers the usage of the transposed mapping matrix. When using this, for very inhomogeneous interface discretizations it can come to oscillations in the mapped quantities.
**Supported mesh topologies**: This mapper only works with nodes and hence supports any mesh topology
#### Nearest Neighbor for IGA scenarios
The _NearestNeighborMapperIGA_ is a simple and robust Mapper for IGA/FEM partitioned simulations. For each node on the FEM side, it finds the closest integration point on the IGA interface.
During mapping, it evaluates the IGA shape functions at that location to assemble the mapping matrix.
This mapper is suited for cases where the origin domain is discretized with IGA elements and the destination with any node-based discretization technique (e.g., FEM or FCV).
Internally, it constructs the mapping matrix, which also allows the use of its transpose for conservative mapping (e.g., mapping forces from FEM to IGA). In cases of highly inhomogeneous interface discretizations, using the transpose may introduce oscillations in the mapped values.
**Supported mesh topologies**: This mapper operates on nodes and supports any mesh topology. The only requirement is that the origin domain must be the IGA domain; otherwise, the mapping problem is not well defined.
#### Nearest Element
The _NearestElementMapper_ projects nodes to the elements (or conditions) on the other side of the interface. Mapping is then done by interpolating the values of the nodes of the elements by using the shape functions at the projected position. The _NearestElementMapper_ supports IGA/FEM partitioned simulations where the origin must be the IGA domain. Each FEM node is projected onto the IGA surface or its boundary curves, and the shape functions are evaluated at that point to assemble the mapping matrix.
This mapper is best suited for problems where the _NearestNeighborMapper_ cannot be used, i.e. for cases where the discretization on the interfaces is different. Note that it is less robust than the _NearestNeighborMapper_ due to the projections it performs. In case a projection fails, it uses an approximation that is similar to the approach of the _NearestNeighborMapper_. This can be disabled by setting `use_approximation` to `false` in the mapper-settings.
Internally it constructs the mapping matrix, hence it offers the usage of the transposed mapping matrix. When using this, for very inhomogeneous interface discretizations it can come to oscillations in the mapped quantities.
**Supported mesh topologies**: Any mesh topology available in Kratos, which includes the most common linear and quadratic geometries, see [here](../../kratos/geometries).
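A settings sketch for this mapper, assuming the registered name is `nearest_element` (by analogy with `nearest_neighbor` above); `use_approximation` is the setting mentioned above for disabling the nearest-neighbor-like fallback when a projection fails:

```json
{
    "mapper_type"       : "nearest_element",
    "use_approximation" : false,
    "echo_level"        : 0
}
```

With `use_approximation` disabled, a failed projection is an error rather than silently falling back, which can be useful for validating interface discretizations.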
#### Barycentric
The _BarycentricMapper_ uses the closest nodes to reconstruct a geometry. This geometry is used in the same way as in the _NearestElementMapper_ for interpolating the nodal values using the shape functions.
This mapper can be used when no geometries are available and interpolative properties of the mapper are required. E.g. for particle methods when only nodes or point-based entities are available. Overall it can be seen as combining the advantages of the _NearestNeighborMapper_ (which only requires points as input) with the advantages of the _NearestElementMapper_ (which has interpolative properties). The disadvantage is that the reconstruction of the geometry can cause problems in complex situations, hence it should only be used if the _NearestElementMapper_ cannot be used.
Furthermore, the geometry type for the reconstruction/interpolation has to be chosen with the `interpolation_type` setting. The following types are available: `line`, `triangle` and `tetrahedra`.
Internally it constructs the mapping matrix, hence it offers the usage of the transposed mapping matrix. When using this, for very inhomogeneous interface discretizations it can come to oscillations in the mapped quantities.
**Supported mesh topologies**: This mapper only works with nodes and hence supports any mesh topology.
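A sketch of the corresponding settings (the `interpolation_type` key is described above; the `mapper_type` value is an assumption following the naming of the other mappers):

```json
"mapper_settings" : {
    "mapper_type"        : "barycentric",
    "interpolation_type" : "triangle"
}
```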
#### Radial Basis Function (RBF) Mapper
The _RadialBasisFunctionMapper_ is a global, mesh-independent mapper that constructs a smooth interpolation field based on Radial Basis Functions (RBFs). In contrast to purely local methods, this mapper uses all (or a user-defined subset of) points from the origin interface to build an RBF system that is then evaluated at the destination points.
This allows for smooth, high-quality transfer of field quantities between arbitrarily discretized, non-matching, or strongly non-uniform interfaces. It is therefore particularly suitable for multi-physics problems where interface meshes can differ substantially.
The default configuration looks as follows:
```json
"mapper_settings" : {
"echo_level" : 0,
"radial_basis_function_type" : "thin_plate_spline",
"additional_polynomial_degree" : 0,
"origin_is_iga" : false,
"destination_is_iga" : false,
"max_support_points" : 0,
"use_all_rbf_support_points": true,
"precompute_mapping_matrix" : true,
"search_settings" : {},
"linear_solver_settings" : {}
}
```
**Important notes:**
- **Only the origin domain can be IGA.** The mapper can extract coordinates from IGA Gauss points on the origin side (`"origin_is_iga": true`). The destination side must currently be a standard finite element mesh (nodes as mapping coordinates).
- **Global RBF system.** Internally, the mapper assembles and solves a global RBF interpolation system. If `"precompute_mapping_matrix": true`, the resulting mapping matrix is stored and reused, allowing efficient repeated mapping calls.
- **Support points.**
  - `"use_all_rbf_support_points": true` → all origin points contribute to the RBF system.
  - `"max_support_points" > 0` → restricts support to a local neighborhood for each destination point.
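The core idea — assemble a kernel matrix over the origin points, solve for the weights, then evaluate the interpolant at the destination points — can be sketched in plain Python. This is a toy illustration only (a Gaussian kernel instead of Kratos' default thin-plate spline, a naive dense solver, and no polynomial augmentation):

```python
import math

def kernel(r, eps=1.0):
    # Gaussian RBF kernel; Kratos' default is "thin_plate_spline" instead
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    # naive Gaussian elimination with partial pivoting (stand-in for the linear solver)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_map(origin_pts, origin_vals, dest_pts):
    # 1) assemble and solve the global RBF system: one weight per origin point
    A = [[kernel(math.dist(p, q)) for q in origin_pts] for p in origin_pts]
    w = solve(A, origin_vals)
    # 2) evaluate the interpolant at the destination points
    return [sum(wj * kernel(math.dist(d, p)) for wj, p in zip(w, origin_pts))
            for d in dest_pts]
```

Precomputing the evaluation over fixed destination points corresponds to the `precompute_mapping_matrix` option: repeated mapping calls then reduce to a matrix-vector product.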
#### Beam Mapper
The _BeamMapper_ provides support for mapping between 1D beam elements and 2D/3D surface meshes. It follows the formulation of Wang (2019) and is intended for cases where beam DOFs (displacements and rotations) must be transferred consistently to a surrounding surface, e.g. in FSI or beam–solid coupling.
The mapper projects each surface node onto the undeformed beam centerline and assigns a local rigid cross section. The motion of every projected surface point is obtained through rigid body motion of this cross section, using the beam’s axial and rotational DOFs. Hermitian shape functions are used along the beam to interpolate both displacements and rotations.
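For small rotations, the rigid cross-section motion described above reduces to `u_s = u_b + θ_b × r`, where `r` points from the centerline projection to the surface node. A minimal sketch of this kinematic relation (illustrative only, not the Kratos implementation):

```python
def cross(a, b):
    # 3D cross product
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def surface_displacement(u_beam, theta_beam, r):
    """Small-rotation rigid-body motion of a surface point: u_s = u_b + theta x r."""
    w = cross(theta_beam, r)
    return tuple(u + c for u, c in zip(u_beam, w))
```

For example, a small twist of the beam about its axis moves a surface node offset from the centerline tangentially, on top of the translation of the cross section.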
A typical JSON configuration looks as follows:
```json
"mapper" : {
"type" : "kratos_beam_mapping",
"model_part_name_beam" : "Structure.Parts_Beam_beam",
"model_part_name_surface" : "Structure.Parts_Shell_wet_surface",
"solver_name_beam": "beam_structure",
"solver_name_surface" : "dummy_fluid",
"echo_level": 3,
"mapper_settings" : {
"mapper_type" : "beam_mapper",
"use_corotation" : false,
"search_settings": {
"max_num_search_iterations" : 30,
"search_radius": 3.0
},
"echo_level": 0
}
}
```
**Explanation of the main entries:**
- `"type"`: selects the CoSimulation mapper. "kratos_beam_mapping" activates the _BeamMapper_.
- `"model_part_name_beam" / "model_part_name_surface"`: names of the origin beam model part and the target surface model part used for the mapping.
- `"solver_name_beam" / "solver_name_surface"`: identifiers of the solvers that own these model parts. They are used internally by the CoSimulation framework to retrieve nodal values.
- `"echo_level"`: controls the amount of printed information for debugging.
- `"mapper_settings"`
- `"mapper_type"`: must be "beam_mapper" to use the _BeamMapper_.
- `"use_corotation"`: enables the co-rotational formulation (false → linear mapping, true → large-rotation mapping).
- `"search_settings"`: parameters controlling the projection of surface nodes onto the beam centerline.
**Note:**
This mapper currently supports FEM beam elements only. IGA beams are not yet supported.
### When to use which Mapper?
- **Matching Interface**\
For a matching interface the _NearestNeighborMapper_ is the best / fastest choice. Note that the ordering / numbering of the nodes doesn't matter.
- **Interfaces with almost matching discretizations**\
In this case both the _NearestNeighborMapper_ and the _NearestElementMapper_ can yield good results.
- **Interfaces with non-matching discretizations**\
The _NearestElementMapper_ is recommended because it results in smoother mapping results due to the interpolation using the shape functions.
- **Interfaces with non-matching discretizations when no geometries are available for interpolation**\
The _NearestElementMapper_ cannot be used as it requires geometries for the interpolation. Here the _BarycentricMapper_ is recommended because it reconstructs geometries from the surrounding nodes and then uses them to interpolate.
### Using the Mapper for ModelParts that are not part of all ranks
In MPI parallel simulations usually all `ModelParts` are distributed across all ranks. However, in some cases this does not hold, for example in FSI when the fluid runs on all ranks but the structure runs serially on one rank. In this case it is necessary to do the following:
- Create a dummy-`ModelPart` on the ranks that do not have the original ModelPart.
- **IMPORTANT**: This `ModelPart` must have a `DataCommunicator` that is not defined on the ranks where the original `ModelPart` does not exist, i.e. a `DataCommunicator` containing only the ranks of the original `ModelPart`.
- Create an MPI-mapper as explained [above](#basic-usage), using the original and the dummy `ModelPart`s on the respective ranks.
Check [this test](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/MappingApplication/tests/blade_mapping_test.py) for more details and a usage example.
For the following example these assumptions are made:
- Overall 4 MPI processes are used
- `model_part_fluid` is distributed across all 4 ranks
- `model_part_structure` is not distributed and exists only on rank 0
```py
import KratosMultiphysics as KM
import KratosMultiphysics.mpi as KratosMPI
# MPIExtension provides the MPIMapperFactory used below
from KratosMultiphysics.MappingApplication import MPIExtension as MappingMPIExtension
# "model_part_fluid" was already read and exists on all ranks
# "model_part_structure" was already read and exists only on rank 0
# getting the DataCommunicator that wraps `MPI_COMM_WORLD` i.e. contains all ranks
world_data_comm = KM.ParallelEnvironment.GetDataCommunicator("World")
# define the ranks on which the structure ModelPart exists
# structure can also be distributed across several (but not all) ranks
structure_ranks = [0]
# create a DataCommunicator containing only the structure ranks
structure_ranks_data_comm_name = "structure_ranks"
data_comm_all_structure_ranks = KratosMPI.DataCommunicatorFactory.CreateFromRanksAndRegister(
world_data_comm,
structure_ranks,
structure_ranks_data_comm_name)
# create a dummy ModelPart on the ranks where the original ModelPart does not exist
if world_data_comm.Rank() not in structure_ranks:
dummy_model = KM.Model()
model_part_structure = dummy_model.CreateModelPart("structure_dummy")
# Important: set the DataCommunicator so that the Mapper knows on which ranks the ModelPart is only a dummy
KratosMPI.ModelPartCommunicatorUtilities.SetMPICommunicator(model_part_structure, data_comm_all_structure_ranks)
# now the Mapper can be created with the original and the dummy ModelParts
mpi_mapper = MappingMPIExtension.MPIMapperFactory.CreateMapper(
model_part_fluid,
model_part_structure,
mapper_settings
)
```
### Miscellaneous functionalities
- [serial_output_process](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/MappingApplication/python_scripts/serial_output_process.py): This process can be used to map results to one rank and then do postprocessing on this rank. This has two advantages:
- Some output formats write one file per rank in distributed simulations, which leads to many files when running with many cores. This process collects the results on one rank and can hence reduce the number of files significantly.
- Different meshes can be used to do the postprocessing. This is in particular useful when the computational mesh is very fine, but a coarser mesh would be sufficient for postprocessing.
<ins>The following input parameters are used:</ins>
- `model_part_name_origin`: name of the origin ModelPart where the data comes from (is being mapped from)
- `model_part_name_destination`: name of destination ModelPart where the data is mapped to. This ModelPart is being read.
- `mdpa_file_name_destination`: name of the mdpa file containing the mesh that is used for the destination
- `historical_variables_destination`: list of historical variables that are allocated on the destination ModelPart
- `destination_rank`: rank on which the processing of the destination happens (i.e. the rank on which the destination ModelPart is read). Note that this increases the memory usage on that rank significantly, especially for large destination meshes. The default is rank 0, which in most distributed simulations already acts as the master rank with increased computational effort. Hence it can make sense to use another rank, preferably on another compute node, to balance the memory and computational load.
- `mapper_settings`: settings that are passed to the mapper, as explained above
- `mapping_settings`: list of mapping steps to be executed before the postprocessing is done. `variable_origin` and `variable_destination` must be specified, while `mapping_options` is optional and can contain the flags as explained above.
- `output_process_settings`: The settings for the output process (which will be only executed on the destination rank). **Important**: For mapping onto a serial ModelPart, the DataCommunicator is set as explained [here](#using-the-mapper-for-modelparts-that-are-not-part-of-all-ranks). This means that the destination ModelPart is not valid on other ranks and can hence not be used in the regular postprocessing (which happens also on the ranks where it is not valid and hence some MPI-functionalities would fail)
Example input:
~~~js
"python_module" : "serial_output_process",
"kratos_module" : "KratosMultiphysics.MappingApplication",
"Parameters" : {
"model_part_name_origin" : "FluidModelPart",
"model_part_name_destination" : "PostProcessing",
"mdpa_file_name_destination" : "coarse_mesh",
"historical_variables_destination" : ["REACTION", "DISPLACEMENT"],
"mapper_settings" : {"mapper_type" : "nearest_neighbor"},
"mapping_settings" : [{
"variable_origin" : "REACTION",
"variable_destination" : "REACTION"
},{
"variable_origin" : "REACTION",
"variable_destination" : "REACTION",
"mapping_options" : ["add_values"]
},{
"variable_origin" : "MESH_DISPLACEMENT",
"variable_destination" : "DISPLACEMENT"
}],
"output_process_settings" : {
"python_module" : "vtk_output_process",
"kratos_module" : "KratosMultiphysics",
"Parameters" : {
// ...
}
}
}
~~~
### FAQ
- **Is mapping of elemental / conditional data or gauss-point values possible?**\
The mapper only supports mapping of nodal data. In order to map other quantities, they first have to be inter-/extrapolated to the nodes.
- **Something is not working with the mapping. What should I do?**\
Problems with mapping can have many sources. The first step in debugging is to increase the `echo_level` of the _Mapper_; in many cases warnings are then shown that point to the problem.
- **I get oscillatory solutions when mapping with `USE_TRANSPOSE`**\
Research has shown that "simple" mappers like _NearestNeighbor_ and _NearestElement_ can have problems with mapping with the transpose (i.e. when using `USE_TRANSPOSE`) if the meshes are very different. Using the _MortarMapper_ technology can improve this situation. This _Mapper_ is currently under development.
- **Projections find the wrong result**\
For complex geometries the projections can fail to find the correct result if many lines or surfaces are close. In those situations it helps to partition the mapping interface and construct multiple mappers with the smaller interfaces.
- **Creation of the mapper takes very long**\
Often this is because of unsuitable search settings. If the settings do not fit the problem, the mapper creation time can increase by several orders of magnitude! Check [here](#search-settings) for an explanation of how to set the search settings in case the defaults are not working well.
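Regarding the `USE_TRANSPOSE` question above: the transposed matrix is used in the first place because it conserves the sum of the mapped quantity (e.g. the total force), as this toy example with a hand-written nearest-neighbor-style mapping matrix illustrates:

```python
# Toy illustration: displacements are mapped with a mapping matrix M,
# forces with its transpose M^T, which preserves the total sum.
M = [[1, 0, 0],
     [0, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]  # 4 destination nodes x 3 origin nodes, rows sum to 1

# consistent mapping (origin -> destination): d = M @ o
disp_origin = [0.5, 1.0, 1.5]
disp_dest = [sum(M[i][j] * disp_origin[j] for j in range(3)) for i in range(4)]

# conservative mapping (destination -> origin): f_o = M^T @ f_d
forces_dest = [1.0, 2.0, 3.0, 4.0]
forces_origin = [sum(M[i][j] * forces_dest[i] for i in range(4)) for j in range(3)]
assert abs(sum(forces_origin) - sum(forces_dest)) < 1e-12  # total force preserved
```

The oscillations mentioned above stem from the fact that `M^T` conserves sums but does not interpolate: a node that receives contributions from many destination nodes can accumulate a much larger value than its neighbors.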
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:14.616522 | kratosmappingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,136,661 | f4/5b/dc8d750f3c28d5bd1e539bc853c060906d3650aff268e0e31ce53997ed73/kratosmappingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 3e9136d80df0a445f2cd7a0156f10616 | e21ca9e542cff992f843b5eb31af8b452c91db8124daeafa91cb27053c5301fa | f45bdc8d750f3c28d5bd1e539bc853c060906d3650aff268e0e31ce53997ed73 | null | [] | 0 |
2.4 | KratosLinearSolversApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | @anchor LinearSolversApplicationMainPage
# LinearSolversApplication
The *LinearSolversApplication* is a thin wrapper for the [Eigen linear algebra library](http://eigen.tuxfamily.org/index.php?title=Main_Page).
## Direct sparse solvers
The application provides the following direct sparse solvers:
| Python class | solver_type | Matrix kind | Domain | Dependencies |
|--------------------------|------------------------|:-----------:|:--------:|:------------:|
| SparseLUSolver | `sparse_lu` | Square | Real | None |
| SparseQRSolver | `sparse_qr` | Rectangular | Real | None |
| SparseCGSolver | `sparse_cg` | SPD* | Real | None |
| PardisoLLTSolver | `pardiso_llt` | SPD* | Real | Intel® MKL |
| PardisoLDLTSolver | `pardiso_ldlt` | SPD* | Real | Intel® MKL |
| PardisoLUSolver | `pardiso_lu` | Square | Real | Intel® MKL |
| ComplexSparseLUSolver | `sparse_lu_complex` | Square | Complex | None |
| ComplexPardisoLLTSolver | `pardiso_llt_complex` | SPD* | Complex | Intel® MKL |
| ComplexPardisoLDLTSolver | `pardiso_ldlt_complex` | SPD* | Complex | Intel® MKL |
| ComplexPardisoLUSolver | `pardiso_lu_complex` | Square | Complex | Intel® MKL |
| CholmodSolver | `cholmod` | SPD* | Real | SuiteSparse |
| UmfPackSolver | `umfpack` | Square | Real | SuiteSparse |
| ComplexUmfPackSolver | `umfpack_complex` | Square | Complex | SuiteSparse |
| SPQRSolver | `spqr` | Rectangular | Real | SuiteSparse |
| ComplexSPQRSolver | `spqr_complex` | Rectangular | Complex | SuiteSparse |
*SPD = Symmetric Positive Definite
**Example**:
```json
{
"solver_type": "eigen_sparse_lu"
}
```
## Direct dense solvers
The application provides the following direct solvers for dense systems of equations:
| Python class | solver_type | Matrix requirements | Domain | Dependencies |
| --------------------------------------- | -------------------------------------- | :-----------------: | :-----: | :----------: |
| DenseColPivHouseholderQRSolver** | `dense_col_piv_householder_qr` | None | Real | None |
| DenseHouseholderQRSolver** | `dense_householder_qr` | None | Real | None |
| DenseLLTSolver** | `dense_llt` | SPD* | Real | None |
| DensePartialPivLUSolver** | `dense_partial_piv_lu` | Invertible | Real | None |
| ComplexDenseColPivHouseholderQRSolver | `complex_dense_col_piv_householder_qr` | None | Complex | None |
| ComplexDenseHouseholderQRSolver | `complex_dense_householder_qr` | None | Complex | None |
| ComplexDensePartialPivLUSolver | `complex_dense_partial_piv_lu` | Invertible | Complex | None |
*SPD = Symmetric Positive Definite
**Can also be used to solve equation systems with multiple right hand sides.
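**Example** (same settings layout as for the sparse solvers above):

```json
{
    "solver_type": "dense_col_piv_householder_qr"
}
```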
## Generalized eigensystem solvers
The application provides the following generalized eigensystem `Ax=λBx` solvers for sparse matrices.
| Python class | solver_type | Matrix kind A | Matrix kind B | Domain | Dependencies |
|----------------------------------------|----------------------------|:-------------:|:-------------:|:--------:|:------------:|
| EigensystemSolver | `eigen_eigensystem` | Symmetric | SPD* | Real | None |
| SpectraSymGEigsShiftSolver | `spectra_sym_g_eigs_shift` | Symmetric | SPD* | Real | None |
| FEASTGeneralEigensystemSolver** | `feast` | General | General | Real | Intel® MKL |
| ComplexFEASTGeneralEigensystemSolver** | `feast_complex` | General | General | Complex | Intel® MKL |
*SPD = Symmetric Positive Definite
**A special version for symmetric matrices can be triggered in the solver settings.
`EigensystemSolver` and `SpectraSymGEigsShiftSolver` compute the smallest eigenvalues and corresponding eigenvectors of the system. MKL routines are used automatically if they are available.
`SpectraSymGEigsShiftSolver` interfaces a solver from the [Spectra library](https://spectralib.org/), and has a shift mode that can be used to compute the smallest eigenvalues > `shift`.
**Example:**
```json
{
"solver_type": "spectra_sym_g_eigs_shift",
"number_of_eigenvalues": 3,
"max_iteration": 1000,
"echo_level": 1
}
```
If the application is compiled with MKL, [FEAST 4.0](http://www.ecs.umass.edu/~polizzi/feast/) can be used to solve the generalized eigenvalue problem for real and complex systems (symmetric or unsymmetric). The cmake switch `USE_EIGEN_FEAST` must be set to `ON` with
```batch
-DUSE_EIGEN_FEAST=ON \
```
**Example:**
```json
{
"solver_type": "feast",
"symmetric": true,
"number_of_eigenvalues": 3,
"search_lowest_eigenvalues": true,
"e_min" : 0.0,
"e_max" : 0.2
}
```
## Build instructions
1. Set the required definitions for cmake
As for any other app:
**Windows:** in `configure.bat`
```batch
set KRATOS_APPLICATIONS=%KRATOS_APPLICATIONS%%KRATOS_APP_DIR%\LinearSolversApplication;
```
**Linux:** in `configure.sh`
```console
add_app ${KRATOS_APP_DIR}/LinearSolversApplication
```
2. Build Kratos
3. Setup the `ProjectParameters.json`
```json
"linear_solver_settings": {
"solver_type" : "LinearSolversApplication.sparse_lu"
}
```
4. Run the simulation
## Enable MKL (optional)
In case you have installed [MKL](https://software.intel.com/en-us/mkl) (see below), you can also use the Pardiso solvers.
1. Run the MKL setup script before building Kratos:
**Windows:**
```batch
call "C:\Program Files (x86)\Intel\oneAPI\mkl\latest\env\vars.bat" intel64 lp64
```
**Linux:**
```console
source /opt/intel/oneapi/setvars.sh intel64
```
2. Add the following flag to CMake to your configure script:
**Windows:**
```batch
-DUSE_EIGEN_MKL=ON ^
```
**Linux:**
```console
-DUSE_EIGEN_MKL=ON \
```
3. Build Kratos
4. Usage:
**Windows:**
```batch
call "C:\Program Files (x86)\Intel\oneAPI\mkl\latest\env\vars.bat" intel64 lp64
```
**Linux:**
Set the environment before using MKL
```console
source /opt/intel/oneapi/setvars.sh intel64
```
## Install MKL on Ubuntu with apt
Intel MKL can be installed with apt on Ubuntu. A guide can be found [here](https://neelravi.com/post/intel-oneapi-install/).
For example, to install the MKL 2022 version:
```console
sudo bash
# <type your user password when prompted. this will put you in a root shell>
# If they are not installed, you can install using the following command:
sudo apt update
sudo apt -y install cmake pkg-config build-essential
# use wget to fetch the Intel repository public key
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
# add to your apt sources keyring so that archives signed with this key will be trusted.
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
# remove the public key
rm GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
# Configure apt client to use Intel repository
sudo add-apt-repository "deb https://apt.repos.intel.com/oneapi all main"
# Install all MKL related dependencies. You can install full HPC with: sudo apt install intel-hpckit
sudo apt install intel-oneapi-mkl-devel
# Exit
exit
```
To enable the MKL environment (needs to be done before build/run) use
```console
source /opt/intel/oneapi/setvars.sh intel64
```
## Enable *SuiteSparse* (optional)
[*SuiteSparse*](https://github.com/DrTimothyAldenDavis/SuiteSparse) is a collection of open-source sparse matrix algorithms, including efficient implementations of various factorizations. *Kratos* currently provides wrappers for *CHOLMOD*, *UMFPACK*, and *SPQR*.
Install the *SuiteSparse* package on your system, and set the `USE_EIGEN_SUITESPARSE` flag in your *CMake* configuration to `ON`. One way of doing this is appending the list of arguments you pass to *CMake* in your configure script with:
```bash
-DUSE_EIGEN_SUITESPARSE:BOOL=ON
```
### Installing *SuiteSparse*
- Arch Linux
- *SuiteSparse* is [available](https://archlinux.org/packages/extra/x86_64/suitesparse/) through your package manager.
- `sudo pacman -S suitesparse`
- Debian (and derivatives, including Ubuntu)
- *SuiteSparse* is [available](https://packages.debian.org/sid/libsuitesparse-dev) through your package manager.
- `sudo apt install libsuitesparse-dev`
- Fedora (and derivatives, including RHEL)
- *SuiteSparse* is [available](https://packages.fedoraproject.org/pkgs/suitesparse/suitesparse-devel/) through your package manager.
- `sudo dnf install suitesparse-devel`
- MacOS
- *SuiteSparse* is [available](https://formulae.brew.sh/formula/suite-sparse) on *Homebrew* for both Intel and Apple Silicon machines.
- `brew install suite-sparse`
- Windows
- *SuiteSparse* is available through [*vcpkg*](https://vcpkg.io/en/package/suitesparse) and [*MSYS2*](https://packages.msys2.org/packages/mingw-w64-x86_64-suitesparse) package managers.
- `vcpkg install suitesparse suitesparse-spqr suitesparse-umfpack` (remember to add to build script `-DCMAKE_TOOLCHAIN_FILE="vcpkg_path\vcpkg\scripts\buildsystems\vcpkg.cmake"`)
- `pacman -S mingw-w64-x86_64-suitesparse`
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:08.509244 | kratoslinearsolversapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,838,854 | 99/7d/5750412accc46b358996ca578609e7daf47538aeec24028d3916da6b5d93/kratoslinearsolversapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 89818fe3a3af2256d91c417e9b414804 | c8e58dafc5b1afaf51f5f32caadbdd743af9d5b87c7b8639172fd104e0145fa1 | 997d5750412accc46b358996ca578609e7daf47538aeec24028d3916da6b5d93 | null | [] | 0 |
2.4 | KratosIgaApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## IGA Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:06.937110 | kratosigaapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 3,865,118 | 87/36/b45c11f7e09092f8d59df3ae84392b474e540826f75c7b92c2bef9a0e680/kratosigaapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | b79fbc1a2869647cb2bc65d03d63c2a6 | 91c06d8b0f542fa51c903ebdb73ac041387ec31c7391248cba736305e87e9471 | 8736b45c11f7e09092f8d59df3ae84392b474e540826f75c7b92c2bef9a0e680 | null | [] | 0 |
2.4 | KratosHDF5Application | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # HDF5Application
The *HDF5Application* enables the serialization of a model part with or without MPI using the [HDF5 library](https://support.hdfgroup.org/HDF5/). The model part is stored in an HDF5 file, which can be used for:
* Viewing a model part with [HDFVIEW](https://support.hdfgroup.org/products/java/hdfview/)
* Scientific visualization with tools supporting [XDMF](http://www.xdmf.org/index.php/Main_Page). Tools, which are known to work, include [ParaView 5.4](https://www.paraview.org/) and [VisIt](https://wci.llnl.gov/simulation/computer-codes/visit/).
* Checkpointing (under development).
* Re-partitioning (not supported yet).
- [HDF5Application](#hdf5application)
- [Installing HDF5 (minimum version 1.8)](#installing-hdf5-minimum-version-18)
- [Serial HDF5 Library](#serial-hdf5-library)
- [Ubuntu](#ubuntu)
- [Parallel HDF5 Library](#parallel-hdf5-library)
- [Ubuntu](#ubuntu-1)
- [Build Instructions](#build-instructions)
- [Installing h5py](#installing-h5py)
- [Note](#note)
- [Kratos processes](#kratos-processes)
- [Initialization from hdf5 process](#initialization-from-hdf5-process)
- [Single mesh temporal input process](#single-mesh-temporal-input-process)
- [Single mesh temporal output process](#single-mesh-temporal-output-process)
- [Multiple mesh temporal output process](#multiple-mesh-temporal-output-process)
- [Single mesh xdmf output process for Paraview](#single-mesh-xdmf-output-process-for-paraview)
- [User defined I/O process](#user-defined-io-process)
## Installing HDF5 (minimum version 1.8)
The HDF5 C libraries are used with the *HDF5Application*. If Kratos is configured with MPI then the parallel HDF5 library must be installed. Otherwise, the serial HDF5 library is used.
### Serial HDF5 Library
#### Ubuntu
```
sudo apt-get install libhdf5-dev
```
### Parallel HDF5 Library
#### Ubuntu
```
sudo apt-get install libhdf5-openmpi-dev
```
## Build Instructions
1. Install serial or parallel HDF5 library
2. Configure Kratos to build the *HDF5Application* (following the standard [instructions](https://github.com/KratosMultiphysics/Kratos/blob/master/INSTALL.md))
- For GNU/Linux:
```
add_app ${KRATOS_APP_DIR}/HDF5Application
```
- For Windows:
```
CALL :add_app %KRATOS_APP_DIR%\HDF5Application;
```
3. Build Kratos
## Installing h5py
This package is needed to use some of the python-files, e.g. `xdmf_utils.py`
```
sudo apt-get install python-h5py / python3-h5py
```
## Note
The minimum version for the GCC compiler is **4.9**. This is because earlier versions don't fully support *regular expressions*.
## Kratos processes
There are a few HDF5 processes available which can be integrated into the workflow via the `ProjectParameters.json` file.
### Initialization from hdf5 process
This process can be used to initialize a given model part from existing HDF5 files. The illustrated example reads the VELOCITY and PRESSURE variables from the "hdf5_output/example_initialization_file.h5" file into the nodal historical data value container, and into the nodal, elemental and condition non-historical data value containers. This process can also read flags from the HDF5 file and populate the model part accordingly.
For the ```list_of_variables``` in either ```nodal_data_value_settings```, ```element_data_value_settings``` or ```condition_data_value_settings``` one can specify ```ALL_VARIABLES_FROM_FILE```, which reads all the variables available in the input HDF5 file and populates the model part accordingly. ```ALL_VARIABLES_FROM_FILE``` must be used alone (it has to be the only entry in the list).
```json
{
"kratos_module": "KratosMultiphysics.HDF5Application",
"python_module": "initialization_from_hdf5_process",
"Parameters": {
"model_part_name": "MainModelPart",
"file_settings": {
"file_name": "hdf5_output/example_initialization_file.h5",
"echo_level": 1
},
"nodal_solution_step_data_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_data_value_settings": {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"element_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"condition_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_flag_value_settings": {
"list_of_variables": [
"SLIP"
]
},
"element_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
},
"condition_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
}
}
}
```
### Single mesh temporal input process
This process is used to initialize model part variables at each time step from HDF5 files. The list of HDF5 files to read from is specified via the "file_name" setting using the ```<time>``` tag, as shown in the following example. If no ```<time>``` tag is present in "file_name", the same file is read at each time step and the specified variables are initialized from it. The mesh is only read from the initial time step file; the results for each container are read from the time series h5 files.
For the ```list_of_variables``` in either ```nodal_data_value_settings```, ```element_data_value_settings``` or ```condition_data_value_settings``` one can specify ```ALL_VARIABLES_FROM_FILE```, which reads all the variables available in the input HDF5 file and populates the model part accordingly. ```ALL_VARIABLES_FROM_FILE``` must be used alone (it has to be the only entry in the list).
```json
{
"kratos_module": "KratosMultiphysics.HDF5Application",
"python_module": "single_mesh_temporal_input_process",
"Parameters": {
"model_part_name": "MainModelPart",
"file_settings": {
"file_name": "hdf5_output/<model_part_name>-<time>.h5",
"time_format": "0.4f",
"echo_level": 1
},
"nodal_solution_step_data_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_data_value_settings": {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"element_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"condition_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_flag_value_settings": {
"list_of_variables": [
"SLIP"
]
},
"element_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
},
"condition_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
}
}
}
```
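As a rough illustration of how the ```<time>``` tag in "file_name" expands — assuming plain string substitution with the configured "time_format"; the exact Kratos internals may differ:

```python
# Hypothetical illustration of the <time> tag expansion with "time_format": "0.4f".
file_pattern = "hdf5_output/<model_part_name>-<time>.h5"
name = (file_pattern
        .replace("<model_part_name>", "MainModelPart")
        .replace("<time>", format(1.5, "0.4f")))
# name == "hdf5_output/MainModelPart-1.5000.h5"
```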
### Single mesh temporal output process
This process can be used to output data from model parts to HDF5. The mesh is written only to the first time step h5 file; the remaining time series h5 files contain only the results for the nodal/element/condition containers.
For the ```list_of_variables``` in ```nodal_solution_step_data_settings```, one can specify ```ALL_VARIABLES_FROM_VARIABLES_LIST``` as the only entry in the list, in which case all the variables in the corresponding model part's solution step variable list are written.
```json
{
"kratos_module": "KratosMultiphysics.HDF5Application",
"python_module": "single_mesh_temporal_output_process",
"Parameters": {
"model_part_name": "MainModelPart",
"file_settings": {
"file_name": "hdf5_output/<model_part_name>-<time>.h5",
"time_format": "0.4f",
"max_files_to_keep": "unlimited",
"echo_level": 1
},
"output_time_settings": {
"step_frequency": 1,
"time_frequency": 1.0
},
"nodal_solution_step_data_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_data_value_settings": {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"element_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"element_gauss_point_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"condition_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"condition_gauss_point_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_flag_value_settings": {
"list_of_variables": [
"SLIP"
]
},
"element_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
},
"condition_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
}
}
}
```
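The ```file_name``` setting above combines the ```<model_part_name>``` and ```<time>``` placeholders with ```time_format```. As a rough illustration of how such a pattern expands (a hypothetical re-implementation for clarity, not the actual Kratos code):

```python
def expand_file_name(pattern, model_part_name, time, time_format="0.4f"):
    # Illustrative placeholder substitution; the real Kratos logic may differ.
    return (pattern
            .replace("<model_part_name>", model_part_name)
            .replace("<time>", format(time, time_format)))

print(expand_file_name("hdf5_output/<model_part_name>-<time>.h5",
                       "MainModelPart", 1.5))
# -> hdf5_output/MainModelPart-1.5000.h5
```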
### Multiple mesh temporal output process
This process is used to output model part variable data to HDF5 with the mesh written for each time step. This is useful when the deformed mesh needs to be written at every time step.
For the ```list_of_variables``` in ```nodal_solution_step_data_settings```, one can specify ```ALL_VARIABLES_FROM_VARIABLES_LIST``` as the only entry in the list, in which case all variables in the corresponding model part's solution step variable list are written.
```json
{
"kratos_module": "KratosMultiphysics.HDF5Application",
"python_module": "multiple_mesh_temporal_output_process",
"Parameters": {
"model_part_name": "MainModelPart",
"file_settings": {
"file_name": "hdf5_output/<model_part_name>-<time>.h5",
"time_format": "0.4f",
"max_files_to_keep": "unlimited",
"echo_level": 1
},
"output_time_settings": {
"step_frequency": 1,
"time_frequency": 1.0
},
"model_part_output_settings": {
"prefix": "/ModelData"
},
"nodal_solution_step_data_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_data_value_settings": {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"element_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"element_gauss_point_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"condition_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"condition_gauss_point_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_flag_value_settings": {
"list_of_variables": [
"SLIP"
]
},
"element_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
},
"condition_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
}
}
}
```
### Single mesh XDMF output process for ParaView
This process outputs model part variable data for each time step and additionally writes an XDMF file after each time step, which is required to visualize the HDF5 data in ParaView. This process requires ```h5py``` to be installed for the ```python``` interpreter that ```Kratos Multiphysics``` is compiled against.
For the ```list_of_variables``` in ```nodal_solution_step_data_settings```, one can specify ```ALL_VARIABLES_FROM_VARIABLES_LIST``` as the only entry in the list, in which case all variables in the corresponding model part's solution step variable list are written.
```json
{
"kratos_module": "KratosMultiphysics.HDF5Application",
"python_module": "single_mesh_xdmf_output_process",
"Parameters": {
"model_part_name": "MainModelPart",
"file_settings": {
"file_name": "hdf5_output/<model_part_name>-<time>.h5",
"time_format": "0.4f",
"max_files_to_keep": "unlimited",
"echo_level": 1
},
"output_time_settings": {
"step_frequency": 1,
"time_frequency": 1.0
},
"model_part_output_settings": {
"prefix": "/ModelData"
},
"nodal_solution_step_data_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_data_value_settings": {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"element_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"condition_data_value_settings" : {
"list_of_variables": [
"VELOCITY",
"PRESSURE"
]
},
"nodal_flag_value_settings": {
"list_of_variables": [
"SLIP"
]
},
"element_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
},
"condition_flag_value_settings" : {
"list_of_variables": [
"SLIP"
]
}
}
}
```
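Since the output is plain HDF5, the files can also be inspected directly with ```h5py```. A minimal sketch (the group layout below is invented for illustration and need not match what Kratos actually writes):

```python
import h5py
import numpy as np

# Write a tiny file mimicking one time-step output (layout is hypothetical).
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("ResultsData/NodalSolutionStepData/VELOCITY",
                     data=np.zeros((4, 3)))
    f.create_dataset("ResultsData/NodalSolutionStepData/PRESSURE",
                     data=np.zeros(4))

def list_datasets(path):
    """Recursively collect the paths of all datasets in an HDF5 file."""
    names = []
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                names.append(name)
        f.visititems(visit)
    return sorted(names)

print(list_datasets("demo.h5"))
```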
### User defined I/O process
Users can build their own I/O processes from components of the HDF5 application. The following is such an example: it writes the model part and ```DISPLACEMENT``` values for each time step at ```finalize_solution_step```.
```json
{
"kratos_module": "KratosMultiphysics.HDF5Application",
"python_module": "user_defined_io_process",
"Parameters": {
"model_part_name": "MainModelPart",
"process_step": "finalize_solution_step",
"controller_settings": {
"controller_type": "temporal_controller",
"time_frequency": 0.5
},
"io_settings": {
"file_name": "results/<model_part_name>-<time>.h5"
},
"list_of_operations": [
{
"operation_type": "model_part_output"
},
{
"operation_type": "nodal_solution_step_data_output",
"list_of_variables": ["DISPLACEMENT"]
}
]
}
}
```
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:05.010314 | kratoshdf5application-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 8,312,881 | 0d/29/c18fe8655fc28128a6b1bd63b0d531b1a306eb97a1ebe4fb7b2ead3809db/kratoshdf5application-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | c49ddce9f3fcca0d25548d94211188ea | d50d9bcc746cdca9f2ce1bd396abc074392846dd18e2c4276871ab99274e9c04 | 0d29c18fe8655fc28128a6b1bd63b0d531b1a306eb97a1ebe4fb7b2ead3809db | null | [] | 0 |
2.4 | pwauto | 0.1.3 | A high-efficiency UI automation framework based on Playwright | # pwauto
A high-efficiency UI automation testing framework based on [Playwright](https://playwright.dev/python/) and [pytest](https://docs.pytest.org/).
Built to reduce the boilerplate of UI automation with enterprise-grade features out of the box.
## 🌟 Key Features
- **Global Auto-Login**: Log in just once per test session. Authentication state is automatically saved (`state.json`) and instantly reused across all subsequent tests.
- **Smart Failure Traces**: Automatically saves Playwright Traces (DOM snapshots, network requests, console logs, and videos) upon test failure, seamlessly attaching them to your Allure reports.
- **Out-of-the-Box Fixtures**:
- `page`: Ready-to-use page fixture injected with global authenticated state.
- `guest_page`: Clean, unauthenticated page fixture for testing login/registration flows.
- **Environment & Data Driven**: Built-in support for multiple `.env` environments and JSON-based data-driven testing.
- **Feishu (Lark) Integration**: Automatically sends elegant test execution summary cards to your Feishu group upon completion.
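For the JSON-based data-driven testing mentioned above, a typical pattern is to parametrize a test from a JSON document. A minimal sketch (the data is inlined here but would normally live in a file, and the field names are hypothetical):

```python
import json
import pytest

# Inlined for the example; in practice load this from a JSON data file.
CASES = json.loads("""
[
  {"username": "alice", "password": "secret", "should_pass": true},
  {"username": "", "password": "secret", "should_pass": false}
]
""")

@pytest.mark.parametrize("case", CASES,
                         ids=lambda c: c["username"] or "empty-username")
def test_login(case):
    # Replace this stub with real page-object interactions.
    assert isinstance(case["should_pass"], bool)
```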
## 📦 Installation
```bash
pip install pwauto
playwright install chromium
```
## 🚀 Quick Start
### 1. Create a Page Object
```python
from pwauto.core.base import BasePage
class MyPage(BasePage):
def open(self):
self.navigate("/my-path")
def submit(self):
self.click('role=button[name="Submit"]', name="Click Submit Button")
```
### 2. Write a Test
Just request the `page` fixture (comes with authenticated state):
```python
def test_my_feature(page):
my_page = MyPage(page)
my_page.open()
my_page.submit()
```
### 3. Run and Report
```bash
# Run tests against the dev environment
pytest --env dev
# View Allure report with attached failure traces
allure serve ./reports/allure-results
```
---
*Note: For detailed project structure, configuration, and advanced usage, please refer to the complete README in the GitHub repository.* | text/markdown | null | Yunchao Zhang <yunchaozhang@outlook.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"allure-pytest",
"loguru",
"playwright>=1.40.0",
"pytest-playwright",
"pytest>=8.0.0",
"python-dotenv",
"requests>=2.28.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T08:52:01.169918 | pwauto-0.1.3.tar.gz | 8,925 | 31/56/c21281f7fa51f8789642dd1ea9a1f1c58758a7c02114c08936c89a655cad/pwauto-0.1.3.tar.gz | source | sdist | null | false | e0ebf18792db1de557e629dcea19dd3b | 007119d745bdd66f51b998eeedb57a4cf1564c513360ff8dc0826966ac5e8d1a | 3156c21281f7fa51f8789642dd1ea9a1f1c58758a7c02114c08936c89a655cad | null | [] | 206 |
2.4 | KratosGeoMechanicsApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Geo-Mechanics Application
The Geo-Mechanics Application contains features needed for common geotechnical/geomechanical applications within Kratos Multiphysics.
### Features:
- K<sub>0</sub> procedure, Quasi-static, dynamic
- Staged analysis
- Automatic time stepping
- 2D (plane strain and axisymmetric) and 3D UPw small displacement element for saturated and partially saturated porous media (with
equal order interpolation, unstable under incompressible-undrained
conditions)
- 2D (plane strain and axisymmetric) and 3D Stable UPw small displacement element for saturated and partially saturated porous media
(with higher order interpolation for displacements)
- 2D (plane strain and axisymmetric) and 3D FIC-Stabilized UPw small displacement element for saturated and partially saturated porous media
(with equal order interpolation for displacements)
- UPw Quasi-zero-thickness interface elements for defining cracks and
joints under saturated and partially saturated conditions
- UPw Updated-Lagrangian element for saturated and partially saturated porous media (with
equal order interpolation, unstable under incompressible-undrained
conditions)
- Stable UPw Updated-Lagrangian element for saturated and partially saturated porous media
(with higher order interpolation for displacements)
- 2D and 3D truss and cable elements
- 2D curved beam elements with 3 nodes
- 1D, 2D and 3D steady-state and transient groundwater flow elements
- Loading User Defined Soil Models (UDSM) dll/so, written in PLAXIS format
- Loading User Materials (UMAT) dll/so, written in ABAQUS format
### How to compile Geo-Mechanics Application
Make sure that the following lines are properly set in the configuration file:
#### Windows:
~~~
CALL :add_app %KRATOS_APP_DIR%\LinearSolversApplication;
CALL :add_app %KRATOS_APP_DIR%\StructuralMechanicsApplication;
CALL :add_app %KRATOS_APP_DIR%\GeoMechanicsApplication;
~~~
#### Linux:
~~~
add_app ${KRATOS_APP_DIR}/LinearSolversApplication;
add_app ${KRATOS_APP_DIR}/StructuralMechanicsApplication;
add_app ${KRATOS_APP_DIR}/GeoMechanicsApplication;
~~~
#### Note:
- MPI has not been tested and does not work.
- The UMAT/UDSM constitutive models are not included in this repository. Some practical constitutive models can be found at https://soilmodels.com for instance.
| text/markdown | null | Kratos Team <kratos@deltares.nl> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratoslinearsolversapplication==10.4.0",
"kratosmultiphysics==10.4.0",
"kratosstructuralmechanicsapplication==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:52:01.030403 | kratosgeomechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 9,850,687 | 72/19/3cd61b1213128056fea573620aecaa51197a13899ca6972816e1e2b76c8e/kratosgeomechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 015dfa5f4112c2604aedb36ab5e245a6 | 4ebdb73af9d02cb1625763e215b6c09d69a4333a43ba5aa423c7dd9d6123d5da | 72193cd61b1213128056fea573620aecaa51197a13899ca6972816e1e2b76c8e | null | [] | 0 |
2.4 | KratosFSIApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Fluid-Structure Interaction Application
The Fluid-Structure Interaction Application contains the core developments in Fluid-Structure Interaction (FSI) within Kratos Multiphysics.
### Features:
- Partitioned relaxation and Quasi-Newton coupling schemes.
- Support for MPI parallelization (with Trilinos Application).
- Non-matching meshes support (see MappingApplication).
- Laplacian and structural mesh solvers (see MeshMovingApplication).
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosfluiddynamicsapplication==10.4.0",
"kratosmappingapplication==10.4.0",
"kratosmeshmovingapplication==10.4.0",
"kratosmultiphysics==10.4.0",
"kratosstructuralmechanicsapplication==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:58.741559 | kratosfsiapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 1,498,540 | bd/6d/a6d7a010479702d915880073800661097a2db6bc5aa4359be6b060a1bb41/kratosfsiapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 9f4a0bfa606a375befc2c9c61bde3701 | c2fcf6914805f8b2e78617c6dab02ba90de90b274a4d7d070643c9eeb9bebfd5 | bd6da6d7a010479702d915880073800661097a2db6bc5aa4359be6b060a1bb41 | null | [] | 0 |
2.4 | KratosFluidDynamicsApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Fluid Dynamics Application
| Application | Description | Status | Authors
| ------------- | ------------| :----: | -------
| `FluidDynamicsApplication` | The Fluid Dynamics Application contains the core developments in Computational Fluid Dynamics (CFD) within Kratos Multiphysics. | <img src="https://img.shields.io/badge/Status-%F0%9F%94%A7Maintained-blue" width="300px"> | Rubén Zorrilla (rzorrilla@cimne.upc.edu) <br /> Riccardo Rossi (rrossi@cimne.upc.edu) <br /> Jordi Cotela (jcotela@altair.com)
### 1. General features:
- Stabilized FEM solvers for incompressible, weakly-compressible and compressible flow problems.
- Support for MPI parallelization (with _MetisApplication_ and _TrilinosApplication_).
- Arbitrary Lagrangian-Eulerian (ALE) formulation allows for mesh deformation during the simulation (see _MeshMovingApplication_).
- Compatible with meshes made up with linear elements.
<p align="center">
<img src="https://github.com/KratosMultiphysics/Examples/blob/master/fluid_dynamics/use_cases/barcelona_wind/resources/Scalability.png?raw=true" alt="Wind flow over Barcelona scalability test" style="width: 600px;"/>
</p>
_Wind flow over Barcelona scalability test. More info [here](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_dynamics/use_cases/barcelona_wind/README.md)._
### 2. Incompressible flows
#### Features
The simulation of viscous incompressible flows is the main capability of this application.
The application includes a variety of stabilized 2D/3D **Navier-Stokes** and **Stokes** solvers.
Limited support to 2D axisymmetric problems is also included.
Among the wide variety of stabilization techniques present in the literature, in this application the **Variational MultiScale (VMS)** (both with quasi-static and dynamic subscales), **Orthogonal SubScales (OSS)** and **Finite Increment Calculus (FIC)** methods are implemented.
All the incompressible flow elements of the application support both **Newtonian** and **non-Newtonian** (Bingham, Herschel-Bulkley) constitutive models.
A set of **boundary conditions** are included in the application. On top of the standard fixed velocity/pressure there exists the possibility to impose slip boundary conditions using MultiFreedom Constraints (MFCs) or periodic conditions using MultiPoint Constraints (MPCs).
Concerning the wall modelling, the application features linear-log and Navier-slip wall models, with the possibility to easily extend to other models.
The application also includes two different solution strategies. First one is the standard **monolithic** one in which both velocity and pressure equations are solved at once using a Newton-Raphson solver. Second one is a segregated **fractional step** strategy that accelerates the solution procedure (we note that this is only compatible with the VMS formulation).
#### Examples
- [Body-fitted 100 Re cylinder](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_dynamics/validation/body_fitted_cylinder_100Re/README.md)
### 3. Weakly-compressible flows
#### Features
Similar to the described above incompressible solver, the application also includes a **VMS stabilized weakly compressible Navier-Stokes** formulation.
This solver modifies the mass conservation equation to add a slight compressibility which relates the pressure to the volume variation thanks to the inclusion of a pressure-density equation of state.
The energy equation remains uncoupled so thermal effects are assumed to be negligible.
### 4. Compressible flows
#### Features
The application includes a 2D/3D explicit compressible solver implementing a **VMS and OSS stabilized full Navier-Stokes formulations** written in **conservative variables** (momentum, density and total energy).
A set of **explicit strategies** can be used
- Forward Euler
- Midpoint rule
- 3rd order Total Variation Diminishing Runge-Kutta (RK3-TVD)
- 4th order Runge-Kutta (RK4)
Two different **shock capturing** techniques are provided
- Physics-based shock capturing
- Entropy-based shock capturing
This solver can be combined in a multistage fashion with the ones in the [_CompressiblePotentialFlowApplication_](https://github.com/KratosMultiphysics/Kratos/tree/master/applications/CompressiblePotentialFlowApplication).
By doing so, the potential solution can be used as initial condition to ease and accelerate the convergence of the full Navier-Stokes simulation.
As a final note, we shall remark that at current date this solver only supports shared memory parallelism (OpenMP).
<p align="center">
<img src="https://github.com/KratosMultiphysics/Examples/blob/master/fluid_dynamics/validation/compressible_step_woodward_colella/data/step_woodward.gif?raw=true" alt="Woodward and Colella's Mach 3 step density field." style="width: 600px;"/>
</p>
_Woodward and Colella's Mach 3 step density field._
#### Examples
- [Transonic flow around a NACA0012 profile](https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/compressible_naca_0012_Ma_0.8/README.md)
- [Multistage transonic flow around a NACA0012 profile](https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/multistage_compressible_naca_0012_Ma_0.8/README.md)
- [Transonic flow around a NACA0012 profile at a 3° angle](https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/compressible_naca_0012_Ma_0.8_aoa_3/README.md)
- [Sod shock tube](https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/compressible_sod_shock_tube/README.md)
- [Supersonic flow in Woodward and Colella's Mach 3 step](https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/compressible_step_woodward_colella/README.md)
- [Supersonic flow over a wedge](https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/compressible_Wedge/README.md)
### 5. Unfitted mesh methods
#### Features
The embedded solver allows the resolution of problems with **unfitted boundaries**, including flows around volumetric and volumeless (i.e. shell-like) bodies.
Starting from a distance field, either analytical or obtained with any of the levelset algorithms in _KratosCore_, the embedded solver uses a **Cut-FEM** approach to solve the problem.
This approach only supports simplicial meshes (linear triangles and tetrahedra).
This solver, which can be used in combination with all the formulations described in the incompressible flow section, makes it possible to efficiently solve flows around non-watertight or poorly defined geometries (e.g. STL) as well as cases involving arbitrarily large boundary displacements and rotations.
Current research on this topic include the development of **Shifted Boundary Method (SBM)** solvers.
<p align="center">
<img src="https://github.com/KratosMultiphysics/Examples/blob/master/fluid_dynamics/validation/embedded_moving_cylinder/data/embedded_moving_cylinder_p.gif?raw=true" alt="Embedded moving cylinder velocity field [m/s]." style="width: 600px;"/>
</p>
_Embedded moving cylinder example velocity field._
#### Examples
- [Embedded moving cylinder](https://github.com/KratosMultiphysics/Examples/tree/master/fluid_dynamics/validation/embedded_moving_cylinder/README.md)
### 6. Two-phase flows
#### Features
The _FluidDynamicsApplication_ includes a solver for the resolution of **biphasic (Newtonian-air) viscous incompressible flows**.
This solver uses a **levelset based approach** which combines the **implicit** fluid solver with convection and redistancing solvers (see the _KratosCore_ for more information about these).
The solver is able to account for the pressure discontinuities thanks to a local enrichment plus an element-by-element static condensation, which avoids the need to reform the sparse matrix graph at each time step.
Besides, the solver is also equipped with a strategy to revert the mass losses introduced by the levelset approach.
#### Examples
- [Two-fluids dam break scenario](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_dynamics/validation/two_fluid_dam_break/README.md)
- [Two-fluids wave propagation](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_dynamics/validation/two_fluid_wave/README.md)
### 7. Multiscale modelling
The application also includes limited support for the multiscale modelling following the **Representative Volume Element (RVE)** approach.
### 8. Multiphysics problems
#### Features
The _FluidDynamicsApplication_ can be coupled with other applications to solve multiphysics problems such as **Fluid-Structure Interaction (FSI)** (see _FSIApplication_) or **thermally-coupled** flows (buoyancy and Conjugate Heat Transfer (CHT)) (see _ConvectionDiffusionApplication_).
#### Examples
Conjugate Heat Transfer:
- [Cylinder cooling Re = 100 and Pr = 2](https://github.com/KratosMultiphysics/Examples/blob/master/conjugate_heat_transfer/validation/cylinder_cooling_Re100_Pr2/README.md)
Fluid-Structure Interaction:
- [FSI lid driven cavity](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_structure_interaction/validation/fsi_lid_driven_cavity/README.md)
- [Mixer with flexible blades (embedded)](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_structure_interaction/validation/embedded_fsi_mixer_Y/README.md)
- [Mok benchmark](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_structure_interaction/validation/fsi_mok/README.md)
- [Mok benchmark (embedded)](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_structure_interaction/validation/embedded_fsi_mok/README.md)
- [Turek benchmark - FSI2](https://github.com/KratosMultiphysics/Examples/blob/master/fluid_structure_interaction/validation/fsi_turek_FSI2/README.md)
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratoslinearsolversapplication==10.4.0",
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:56.355013 | kratosfluiddynamicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 15,306,120 | 46/85/6cbcc1155b5099b3aaa569eceae08fa510f12935066cb789034b72daba6c/kratosfluiddynamicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 51d88022f97f0d92a3607dc3b88d46d8 | a81ed2faf5ff8639a836f9862ef4a08ed9077b42d52a49a8244fb59b6a7c1a66 | 46856cbcc1155b5099b3aaa569eceae08fa510f12935066cb789034b72daba6c | null | [] | 0 |
2.4 | KratosDemStructuresCouplingApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## DEM-Structures Coupling Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosdemapplication==10.4.0",
"kratosmultiphysics==10.4.0",
"kratosporomechanicsapplication==10.4.0",
"kratosstructuralmechanicsapplication==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:53.446250 | kratosdemstructurescouplingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 1,611,119 | 3d/44/6fbd159d0a46fe0ea6caca6da62a32b951cfc2cce5edd4e84b1b030bb4b5/kratosdemstructurescouplingapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 8d86b660e9aff279fdc2bbb6041bc0d1 | 770c95faacddcba688482076a412f439158d43ac46b802d16ef1c90c2dfe3a7b | 3d446fbd159d0a46fe0ea6caca6da62a32b951cfc2cce5edd4e84b1b030bb4b5 | null | [] | 0 |
2.4 | KratosDEMApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # DEM Application
This application focuses on the Discrete Element Method (DEM), a particle method for modeling the bulk behavior of granular materials and many geomaterials such as coal, ores, soil, rocks, aggregates, pellets, tablets and powders.
The [DEMpack Team](www.cimne.com/dem) at [CIMNE](www.cimne.com) is in charge of all developments related to the DEM.
For the coupling between DEM and Fluids, go to the [Swimming DEM Application](https://github.com/KratosMultiphysics/Kratos/tree/master/applications/SwimmingDEMApplication).
For the coupling between DEM and thermal effects, go to the [Thermal DEM Application](https://github.com/KratosMultiphysics/Kratos/tree/master/applications/ThermalDEMApplication).
## Getting started
This application is part of the Kratos Multiphysics Platform. Instructions on how to get you a copy of the project up and running on your local machine for development and testing purposes are available for both [Linux](http://kratos-wiki.cimne.upc.edu/index.php/LinuxInstall) and [Windows](http://kratos-wiki.cimne.upc.edu/index.php/Windows_7_Download_and_Installation) systems.
### Prerequisites
Build [Kratos](https://github.com/KratosMultiphysics/Kratos/wiki) and, before that, make sure that you add
``` cmake
-DDEM_APPLICATION=ON
```
amongst the compilation options, so the DEM application is compiled.
No auxiliary external libraries are needed.
## Theory
The DEM is a numerical method that has been applied to simulate and analyze flow behavior in a wide range of disciplines including mechanical and process engineering, pharmaceutical, materials science, agricultural engineering and more.
Coupling with fluid is already available through the Swimming-DEM application, also integrated in the Kratos Multiphysics Platform.
The fundamental theoretical background corresponding to the discontinuous (granular matter) part of the code can be found in the DEM literature easily.
### Contact laws
The contact laws are implemented in [this folder](https://github.com/KratosMultiphysics/Kratos/tree/master/applications/DEMApplication/custom_constitutive). Note that the letter 'D' or 'd' in the file name stands for 'discontinuum'; it refers to non-cohesive or slightly cohesive contacts.
#### Linear repulsive force
The simplest representation of a repulsive contact force between a sphere and a wall is given by a linear law, where the force acting on the sphere when contacting the wall is a linear function of the indentation, which in turn implies a quadratic dependence on the contact radius.
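As a sketch (illustrative only, not the application's implementation), the linear law reduces to:

```python
def linear_normal_force(kn, delta):
    """Linear repulsive normal force: F = kn * delta for a positive
    indentation delta, and zero when there is no contact."""
    return kn * delta if delta > 0.0 else 0.0

print(linear_normal_force(100.0, 0.25))  # -> 25.0
```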
#### Non-Linear repulsive force
Hertz solved in 1882 the non-cohesive normal contact between a sphere and a plane. In 1971 Johnson, Kendall and Roberts presented the solution (JKR-Theory) for the same problem in this case adding cohesive behaviour. Not much later, Derjaguin, Müller and Toporov published similar results (DMT-Theory).
Both theories are closely related and valid in their respective regimes: the JKR theory is adequate for the study of flexible, large spheres, while the DMT theory is especially suited to represent the behaviour of rigid, small ones.
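The Hertzian normal force can be sketched as follows (the generic textbook formula, not necessarily the exact form used by the application):

```python
import math

def hertz_normal_force(E1, nu1, E2, nu2, R1, R2, delta):
    """Hertz contact: F = (4/3) * E_eff * sqrt(R_eff) * delta**1.5,
    with the effective modulus 1/E_eff = (1-nu1^2)/E1 + (1-nu2^2)/E2
    and the effective radius 1/R_eff = 1/R1 + 1/R2."""
    E_eff = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff) * delta ** 1.5

# Doubling the indentation scales the force by 2**1.5 (about 2.83).
F1 = hertz_normal_force(7e10, 0.3, 7e10, 0.3, 0.01, 0.01, 1e-6)
F2 = hertz_normal_force(7e10, 0.3, 7e10, 0.3, 0.01, 0.01, 2e-6)
```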
## Numerical approach
The application includes two types of DEM elements used for different purposes:
* Spheric Particle - Base element used to simulate granular materials (non cohesive or slightly cohesive)
* Spheric Continuum Particle - With specific built-in variables to simulate fracture in cohesive materials. It can also be understood as a discretization method of the continuum by using spheres.
And has the following easy-to-use capabilities:
* Interaction with FEM-based walls - Objects that cannot be crossed by DEM spheres. The user can choose to impose Linear-periodic conditions or rigid body conditions.
* Inlets - Inject new particles during the simulation, linked to given material properties, with user-defined granulometry, mass flow and particle type (single particles or clusters). Inlets are based on FEM walls, and boundary conditions can also be applied to them.
* Initial conditions on particle elements.
* Boundary conditions on particle elements.
It also includes several predefined cluster formations to be used.
### DEM strategies
#### Non-cohesive materials strategy
Once contact between two spheres occurs, the forces at the contact point are computed. The interaction between the two contacting spheres can be represented by two forces with the same module but opposite directions. This force F can be decomposed into its normal and shear components Fn and Fs, respectively.
The contact interface for the simplest formulation is characterized by the normal and tangential stiffness Kn and Ks, respectively, a frictional device obeying the Coulomb law with a frictional coefficient, and a dashpot defined by a contact damping coefficient.
In order to represent irregular particles with spheres, a numerical correction is used. The rolling friction imposes a virtual moment opposite to particle rotation and dependent on its size.
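One common form of this correction (illustrative; the application offers several rolling friction models) applies a constant-magnitude resisting moment:

```python
def rolling_friction_moment(mu_rolling, normal_force, radius, omega):
    """Resisting moment of magnitude mu_rolling * |Fn| * R, directed
    against the current angular velocity (zero if the particle does
    not rotate)."""
    if omega == 0.0:
        return 0.0
    direction = -1.0 if omega > 0.0 else 1.0
    return direction * mu_rolling * abs(normal_force) * radius
```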
#### Continuum materials strategy
For continuum materials simulations, the contact between particles can resist tractions up to a certain limit, when breakage occurs. Depending on the chosen constitutive law, the computation of the forces changes. In the basic versions, a bond strategy is used, but more advanced laws use a non-local stress-tensor based strategy.
### DEM integration schemes
The standard translational and rotational equations for the motion of rigid bodies are used to compute the dynamics of the spheres and clusters. The following schemes can be chosen separately for translation and rotation:
* Symplectic Euler
* Velocity Verlet
* Forward Euler
* Taylor
Also, two rotational specific integration schemes are available:
* [Runge-Kutta](https://link.springer.com/article/10.1007/s40571-019-00232-5)
* Quaternion based
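For instance, a symplectic (semi-implicit) Euler step updates the velocity first and then advances the position with the new velocity. A one-dimensional sketch:

```python
def symplectic_euler_step(x, v, a, dt):
    """One symplectic Euler step: v is updated before x, which gives
    better long-term energy behaviour than forward Euler."""
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Free fall from rest under gravity for a single 10 ms step:
x, v = symplectic_euler_step(0.0, 0.0, -9.81, 0.01)
```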
### Contact search
The contact detection basically consists in determining, for every target object, which other objects are in contact with it, and then applying the corresponding interaction. A search is usually not needed at every time step, since the time step size is generally limited by the stability of the explicit integration of the equations of motion.
A bins based technique is used for this purpose.
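A bins (uniform grid) search can be sketched as follows (a generic illustration of the idea, not the application's actual implementation):

```python
from collections import defaultdict
from itertools import product

def build_bins(centers, cell):
    """Hash each sphere centre into a uniform grid cell of size `cell`."""
    bins = defaultdict(list)
    for i, (x, y, z) in enumerate(centers):
        bins[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return bins

def neighbour_candidates(i, centers, bins, cell):
    """Candidate contacts for sphere i: everything in its own cell and
    the 26 adjacent cells (to be refined by an exact distance check)."""
    cx, cy, cz = (int(c // cell) for c in centers[i])
    out = []
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        out.extend(j for j in bins.get((cx + dx, cy + dy, cz + dz), ())
                   if j != i)
    return out

centers = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
bins = build_bins(centers, cell=1.0)
print(neighbour_candidates(0, centers, bins, cell=1.0))  # -> [1]
```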
## Available interfaces
### DEM
This is the package that allows a user to create, run and analyze the results of a DEM simulation for discontinuum / granular / slightly cohesive materials. Requires [GiD](https://www.gidhome.com/) - Pre and Post Processing software. It has both 2D and 3D versions. Check the manuals, follow the tutorials or play with the preloaded sample problems in order to learn how this application works.
### Cohesive-DEM
This package combines the features of the previous one with the simulation of continuum/cohesive materials. It also offers the possibility of tackling both 2D and 3D problems. Check the manuals or tutorials, or load the test examples in the GUI, in order to learn how this problem type works.
### Fluid-DEM
This package allows you to simulate a wide spectrum of problems involving the interaction of granular DEM and fluids. This application has only a 3D version. Check the existing manuals or tutorials to get a feel for how to work with this application.
## Contact
* **Miguel Angel Celigueta** - *Core Development* - [maceli@cimne.upc.edu](mailto:maceli@cimne.upc.edu)
* **Salva Latorre** - *Granular materials* - [latorre@cimne.upc.edu](mailto:latorre@cimne.upc.edu)
* **Ferran Arrufat** - *Cohesive materials* - [farrufat@cimne.upc.edu](mailto:farrufat@cimne.upc.edu)
* **Guillermo Casas** - *Fluid coupling* - [gcasas@cimne.upc.edu](mailto:gcasas@cimne.upc.edu)
* **Joaquín Irazabal** - *Particle clusters & DEM-Solid interaction* - [jirazabal@cimne.upc.edu](mailto:jirazabal@cimne.upc.edu)
* **Joaquín González-Usúa** - *Fluid coupling* - [jgonzalez@cimne.upc.edu](mailto:jgonzalez@cimne.upc.edu)
* **Chengshun Shang** - *Bonded particle models* - [cshang@cimne.upc.edu](mailto:cshang@cimne.upc.edu)
## License
The DEM application is OPEN SOURCE. The main code and program structure are available and intended to grow with the needs of any users willing to expand them. The BSD (Berkeley Software Distribution) license allows the existing code to be used and distributed without restriction, while new parts of the code may be developed on an open or closed basis at the developers' discretion.
## New GIDInterface for Kratos
The new GIDInterface currently under development can be found [here](https://github.com/KratosMultiphysics/GiDInterface). Based on the customLib, it includes the interfaces for most of the Kratos applications in addition to the new DEM interface.
## FAQ
### What to do if particles behave strangely
* Check the Young Modulus. Materials with high stiffness may require smaller time steps to ensure stability.
* Check the material density.
* Check the time step. If the time step is too large, the elements can fail to interact with each other. In the worst case scenarios, the simulation may even crash.
* Check the frequency of neighbours' search. If the search is not done frequently enough, new contacts may not be detected.
* Check the restitution coefficient of the material. Explicit integration schemes gain energy noticeably unless a sufficiently small time step is used. If the time step is large (but stable), use the restitution coefficient to compensate for the energy gain and obtain more realistic results.
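As a rough rule of thumb connecting stiffness and the stable time step mentioned above, one can estimate a critical step from the contact oscillation period; the safety factor below is an illustrative assumption, not a value prescribed by the application:

```py
import math

def critical_time_step(mass, stiffness, safety=0.2):
    """Rough explicit-DEM stability estimate: a safety fraction of the
    contact oscillation period 2*sqrt(m/k). Stiffer materials (larger k)
    therefore demand smaller time steps."""
    return safety * 2.0 * math.sqrt(mass / stiffness)
```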
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:51.562930 | kratosdemapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 4,792,908 | 4e/ca/39226e3649fa883a05190721fab105b0f3f5e8c01143d8f1fd7a239280ac/kratosdemapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | c5b0da7c72676663b7e7ac6c14346a6e | 8752cfb765790a5d857aa33866f37d914f7a6e4a467066a361040130234b98c1 | 4eca39226e3649fa883a05190721fab105b0f3f5e8c01143d8f1fd7a239280ac | null | [] | 0 |
2.4 | KratosDamApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ### Using MPI in DamApplication ###
Note: For the moment, MPI only works on Linux and requires compiling METIS_APPLICATION and TRILINOS_APPLICATION. Non-local Damage does not work in MPI.
## Instructions to compile DamApplication for MPI (tested in Ubuntu 16.04) ##
1. Make sure that the following lines are properly set in the configure.sh file:
-DMETIS_APPLICATION=ON \
-DMETIS_INCLUDE_DIR="/usr/include/" \
-DPARMETIS_ROOT_DIR="/usr/lib/" \
-DTRILINOS_APPLICATION=ON \
-DTRILINOS_LIBRARY_DIR="/usr/lib/x86_64-linux-gnu/" \
-DTRILINOS_INCLUDE_DIR="/usr/include/trilinos/" \
-DTRILINOS_LIBRARY_PREFIX="trilinos_" \
-DCONVECTION_DIFFUSION_APPLICATION=ON \
-DEXTERNAL_SOLVERS_APPLICATION=ON \
-DSTRUCTURAL_MECHANICS_APPLICATION=ON \
-DPOROMECHANICS_APPLICATION=ON \
-DDAM_APPLICATION=ON \
-DUSE_DAM_MPI=ON \
2. Uncomment (remove #~ ) the following line in GiDInterface/Kratos.gid/apps/Dam/python/dam_main.py
#~ import KratosMultiphysics.TrilinosApplication as TrilinosApplication
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0",
"kratosporomechanicsapplication==10.4.0",
"kratosstructuralmechanicsapplication==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:49.125394 | kratosdamapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 4,457,532 | 37/6b/d56b5ab51e8d3e2362169a3b28bad434f46689eb96fbcb90712792cedca7/kratosdamapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | f7c8685442aaa73e232c5e6e99ab52ea | 67d5e185687ee112b629f7184d4e46d20aeaa835920f946f620906f2296d8cb1 | 376bd56b5ab51e8d3e2362169a3b28bad434f46689eb96fbcb90712792cedca7 | null | [] | 0 |
2.4 | KratosCSharpWrapperApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## CSharp Wrapper Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:47.122620 | kratoscsharpwrapperapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 1,675,787 | 3d/b2/840c810a71c987b842a349da2b662cf6d6a6f3a7825b8cd3dc397c961543/kratoscsharpwrapperapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | e1d10d60ea469796c7f9e15c030d994a | 659b17979e28bb821c590602a74b14184481a31208debff3c367bf39f91256c7 | 3db2840c810a71c987b842a349da2b662cf6d6a6f3a7825b8cd3dc397c961543 | null | [] | 0 |
2.4 | KratosCoSimulationApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | # CoSimulation Application
The CoSimulation Application contains the core developments in coupling black-box solvers and other software-tools within Kratos Multiphysics.
<a name="overview"></a>
## Overview
- [CoSimulation Application](#cosimulation-application)
- [Overview](#overview)
- [List of features](#list-of-features)
- [Dependencies](#dependencies)
- [Examples](#examples)
- [User Guide](#user-guide)
- [Setting up a coupled simulation](#setting-up-a-coupled-simulation)
- [The JSON configuration file](#the-json-configuration-file)
- [Basic FSI example](#basic-fsi-example)
- [Developer Guide](#developer-guide)
- [Structure of the Application](#structure-of-the-application)
- [How to couple a new solver / software-tool?](#how-to-couple-a-new-solver--software-tool)
- [Interface of SolverWrapper](#interface-of-solverwrapper)
- [Remote controlled CoSimulation](#remote-controlled-cosimulation)
- [Using a solver in MPI](#using-a-solver-in-mpi)
- [References](#references)
<a name="list-of-features"></a>
## List of features
- Various features available for CoSimulation:
- [Coupling Algorithms](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/coupled_solvers)
- [Wrappers for various solvers and other software-tools](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/solver_wrappers)
- [Data Transfer Operators (including Mapping)](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/data_transfer_operators)
- [Convergence Accelerators](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/convergence_accelerators)
- [Convergence Criteria](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/convergence_criteria)
- [Predictors](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/predictors)
- Support for MPI parallelization. This is independent of whether or not the used solvers support/run in MPI.
- Coupling of Kratos <=> Kratos without overhead since the same database is used and data duplication is avoided.
- The [MappingApplication](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/MappingApplication) is used for mapping between nonmatching grids.
<a name="dependencies"></a>
## Dependencies
The CoSimulation Application itself doesn't have any dependencies (except the `KratosCore` / `KratosMPICore` for serial/MPI-compilation).
For running coupled simulations the solvers to be used have to be available. Those dependencies are python-only.
The [MappingApplication](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/MappingApplication) is required when mapping is used.
<a name="examples"></a>
## Examples
The examples can be found in the [examples repository](https://github.com/KratosMultiphysics/Examples/tree/master/co_simulation).
Please also refer to the [tests](tests) for examples of how the coupling can be configured.
Especially the [Mok-FSI](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/tests/fsi_mok) and the [Wall-FSI](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/tests/fsi_wall) tests are very suitable for getting a basic understanding.
<a name="user-guide"></a>
## User Guide
This section guides users of the _CoSimulationApplication_ through setting up and performing coupled simulations. The overall workflow is the same as for most Kratos applications. It consists of the following files:
- **MainKratosCoSim.py** This file is to be executed with python to run the coupled simulation
- **ProjectParametersCoSim.json** This file contains the configuration for the coupled simulation
<a name="user-guide-setting-up-a-coupled-simulation"></a>
### Setting up a coupled simulation
For running a coupled simulation, at least the two files above are required. In addition, the input files for the solvers / codes participating in the coupled simulation are necessary.
The **MainKratosCoSim.py** file looks like this (see also [here](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/MainKratosCoSim.py)):
```py
import KratosMultiphysics as KM
from KratosMultiphysics.CoSimulationApplication.co_simulation_analysis import CoSimulationAnalysis
"""
For user-scripting it is intended that a new class is derived
from CoSimulationAnalysis to do modifications
Check also "kratos/python_scripts/analysis-stage.py" for available methods that can be overridden
"""
parameter_file_name = "ProjectParametersCoSim.json"
with open(parameter_file_name,'r') as parameter_file:
parameters = KM.Parameters(parameter_file.read())
simulation = CoSimulationAnalysis(parameters)
simulation.Run()
```
It can be executed with python:
```
python MainKratosCoSim.py
```
If the coupled simulation runs in a distributed environment (MPI) then MPI is required to launch the script
```
mpiexec -np 4 python MainKratosCoSim.py --using-mpi
```
Note the passing of the `--using-mpi` flag, which tells Kratos that it runs in MPI.
<a name="user-guide-the-json-configuration-file"></a>
### The JSON configuration file
The configuration of the coupled simulation is written in `json` format, same as for the rest of Kratos.
It contains two settings:
- _problem_data_: this setting contains global settings of the coupled problem.
```json
"start_time" : 0.0,
"end_time" : 15.0,
"echo_level" : 0, // verbosity, higher values mean more output
"print_colors" : true, // use colors in the prints
"parallel_type" : "OpenMP" // or "MPI"
```
- _solver_settings_: the settings of the coupled solver.
```json
"type" : "coupled_solvers.gauss_seidel_weak", // type of the coupled solver, see python_scripts/coupled_solvers
"predictors" : [], // list of predictors
"num_coupling_iterations" : 10, // max number of coupling iterations, only available for strongly coupled solvers
"convergence_accelerators" : [] // list of convergence accelerators, only available for strongly coupled solvers
"convergence_criteria" : [] // list of convergence criteria, only available for strongly coupled solvers
"data_transfer_operators" : {} // map of data transfer operators (e.g. mapping)
"coupling_sequence" : [] // list specifying in which order the solvers are called
"solvers" : {} // map of solvers participating in the coupled simulation, specifying their input and interfaces
```
See the next section for a basic example with more explanations.
<a name="user-guide-basic-fsi-example"></a>
### Basic FSI example
This example is the Wall FSI benchmark, see [1], chapter 7.5.3. The Kratos solvers are used to solve this problem. The input files for this example can be found [here](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/tests/fsi_wall)
```json
{
"problem_data" :
{
"start_time" : 0.0,
"end_time" : 3.0,
"echo_level" : 0, // printing no additional output
"print_colors" : true, // using colors for prints
"parallel_type" : "OpenMP"
},
"solver_settings" :
{
"type" : "coupled_solvers.gauss_seidel_weak", // weakly coupled simulation, no interface convergence is checked
"echo_level" : 0, // no additional output from the coupled solver
"predictors" : [ // using a predictor to improve the stability of the simulation
{
"type" : "average_value_based",
"solver" : "fluid",
"data_name" : "load"
}
],
"data_transfer_operators" : {
"mapper" : {
"type" : "kratos_mapping",
"mapper_settings" : {
"mapper_type" : "nearest_neighbor" // using a simple mapper, see the README in the MappingApplications
}
}
},
"coupling_sequence":
[
{
"name": "structure", // the structural solver comes first
"input_data_list": [ // before solving, the following data is imported in the structural solver
{
"data" : "load",
"from_solver" : "fluid",
"from_solver_data" : "load", // the fluid loads are mapped onto the structure
"data_transfer_operator" : "mapper", // using the mapper defined above (nearest neighbor)
"data_transfer_operator_options" : ["swap_sign"] // in Kratos, the loads have the opposite sign, hence it has to be swapped
}
],
"output_data_list": [ // after solving, the displacements are mapped to the fluid solver
{
"data" : "disp",
"to_solver" : "fluid",
"to_solver_data" : "disp",
"data_transfer_operator" : "mapper"
}
]
},
{
"name": "fluid", // the fluid solver solves after the structure
"output_data_list": [],
"input_data_list": []
}
],
"solvers" : // here we specify the solvers, their input and interfaces for CoSimulation
{
"fluid":
{
"type" : "solver_wrappers.kratos.fluid_dynamics_wrapper", // using the Kratos FluidDynamicsApplication for the fluid
"solver_wrapper_settings" : {
"input_file" : "fsi_wall/ProjectParametersCFD" // input file for the fluid solver
},
"data" : { // definition of interfaces used in the simulation
"disp" : {
"model_part_name" : "FluidModelPart.NoSlip2D_FSI_Interface",
"variable_name" : "MESH_DISPLACEMENT",
"dimension" : 2
},
"load" : {
"model_part_name" : "FluidModelPart.NoSlip2D_FSI_Interface",
"variable_name" : "REACTION",
"dimension" : 2
}
}
},
"structure" :
{
"type" : "solver_wrappers.kratos.structural_mechanics_wrapper", // using the Kratos StructuralMechanicsApplication for the structure
"solver_wrapper_settings" : {
"input_file" : "fsi_wall/ProjectParametersCSM" // input file for the structural solver
},
"data" : { // definition of interfaces used in the simulation
"disp" : {
"model_part_name" : "Structure.GENERIC_FSI_Interface",
"variable_name" : "DISPLACEMENT",
"dimension" : 2
},
"load" : {
"model_part_name" : "Structure.GENERIC_FSI_Interface",
"variable_name" : "POINT_LOAD",
"dimension" : 2
}
}
}
}
}
}
```
<a name="developer-guide"></a>
## Developer Guide
<a name="developer-guide_structure-of-the-application"></a>
### Structure of the Application
The _CoSimulationApplication_ consists of the following main components (taken from [2]):
- **SolverWrapper**: Baseclass and CoSimulationApplication-interface for all solvers/codes participating in the coupled simulation, each solver/code has its own specific version.
- **CoupledSolver**: Implements coupling schemes such as weak/strong coupling with *Gauss-Seidel/Jacobi* pattern. It derives from SolverWrapper such that it can be used in nested coupled simulations.
- **IO**: Responsible for communication and data exchange with external solvers/codes
- **DataTransferOperator**: Transfers data from one discretization to another, e.g. by use of mapping techniques
- **CouplingOperation**: Tool for customizing coupled simulations
- **ConvergenceAccelerator**: Accelerating the solution in strongly coupled simulations by use of relaxation or Quasi-Newton techniques
- **ConvergenceCriteria**: Checks if convergence is achieved in a strongly coupled simulation.
- **Predictor**: Improves the convergence by using a prediction as initial guess for the coupled solution
The following UML diagram shows the relation between these components:
<p align="center">
<img src="https://github.com/KratosMultiphysics/Documentation/blob/master/Readme_files/CoSimulationApplication/CoSimulation_uml.png?raw=true" style="width: 300px;"/>
</p>
Besides the functionalities [listed above](#list-of-features), the modular design of the application makes it straightforward to add a new or customized version of e.g. a _ConvergenceAccelerator_. It is not necessary to have those custom python scripts inside the _CoSimulationApplication_, it is sufficient that they are in a directory that is included in the _PYTHONPATH_ (e.g. the working directory).
<a name="developer-guide_how-to-couple-a-new-solver--software-tool"></a>
### How to couple a new solver / software-tool?
The _CoSimulationApplication_ is very modular and designed to be extended to coupling of more solvers / software-tools. This requires basically two components on the Kratos side:
The interface between the _CoSimulationApplication_ and a solver is done with the [**SolverWrapper**](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/base_classes/co_simulation_solver_wrapper.py). This wrapper is specific to every solver and calls the solver-custom methods based on the input of CoSimulation.
The second component necessary is an [**IO**](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/base_classes/co_simulation_io.py). This component is used by the SolverWrapper and is responsible for the exchange of data (e.g. mesh, field-quantities, geometry etc) between the solver and the _CoSimulationApplication_.
In principle three different options are possible for exchanging data with CoSimulation:
- For very simple solvers IO can directly be done in python inside the SolverWrapper, which makes a separate IO superfluous (see e.g. a [python-only single degree of freedom solver](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/solver_wrappers/sdof))
- Using the [_CoSimIO_](https://github.com/KratosMultiphysics/CoSimIO). This is the preferred way of exchanging data with the _CoSimulationApplication_. It is currently available for _C++_, _C_, and _Python_. The _CoSimIO_ is included as the [KratosCoSimIO](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/solver_wrappers/kratos_co_sim_io.py) and can be used directly. Its modular and Kratos-independent design as a _detached interface_ allows for easy integration into other codes.
- Using a custom solution based on capabilities that are offered by the solver that is to be coupled.
The following picture shows the interaction of these components with the _CoSimulationApplication_ and the external solver:
<p align="center">
<img src="https://raw.githubusercontent.com/KratosMultiphysics/Documentation/master/Readme_files/CoSimulationApplication/detached_interface.png" style="width: 300px;"/>
</p>
<a name="developer-guide_interface-of-solverwrapper"></a>
#### Interface of SolverWrapper
The [**SolverWrapper**](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/base_classes/co_simulation_solver_wrapper.py) is the interface in the _CoSimulationApplication_ to all involved codes / solvers. It provides the following interface (adapted from [2]), which is also called in this order:
- **Initialize**: This function is called once at the beginning of the simulation; it e.g. reads the input files and prepares the internal data structures.
- The solution loop is split into the following six functions:
- **AdvanceInTime**: Advancing in time and preparing the data structure for the next time step.
- **InitializeSolutionStep**: Applying boundary conditions
- **Predict**: Predicting the solution of this time step to accelerate the solution.\
The following are iterated until convergence in a strongly coupled solution:
- **SolveSolutionStep**: Solving the problem for this time step. This is the only function that can be called multiple times in an iterative (strongly coupled) solution procedure.
- **FinalizeSolutionStep**: Updating internals after solving this time step.
- **OutputSolutionStep**: Writing output at the end of a time step
- **Finalize**: Finalizing and cleaning up after the simulation
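The call order above can be sketched as a plain-Python driver loop (a hypothetical simplification to the weakly coupled, single-pass case; the method names mirror the interface listed above):

```py
def run_coupled_simulation(solvers, end_time):
    """Illustrative driver calling the SolverWrapper interface in order."""
    for s in solvers:
        s.Initialize()
    time = 0.0
    while time < end_time:
        time = max(s.AdvanceInTime(time) for s in solvers)
        for s in solvers:
            s.InitializeSolutionStep()
            s.Predict()
        for s in solvers:
            s.SolveSolutionStep()   # repeated in a strongly coupled iteration
        for s in solvers:
            s.FinalizeSolutionStep()
            s.OutputSolutionStep()
    for s in solvers:
        s.Finalize()
```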
Each of these functions can implement functionalities to communicate with the external solver, telling it what to do. However, this is often skipped if the data exchange itself is used to synchronize the solvers, as is common in "classical" coupling tools: the code to be coupled internally duplicates the coupling sequence and synchronizes with the coupling tool through the data exchange.
An example of a _SolverWrapper_ coupled to an external solver using this approach can be found [here](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/solver_wrappers/external/external_solver_wrapper.py). Only the mesh exchange is done explicitly at the beginning of the simulation, the data exchange is done inside _SolveSolutionStep_.
The coupled code has to duplicate the coupling sequence; for a weak coupling (using _CoSimIO_) it would look e.g. like this:
```py
# solver initializes ...
CoSimIO::ExportMesh(...) # send meshes to the CoSimulationApplication
# start solution loop
while time < end_time:
CoSimIO::ImportData(...) # get interface data
# solve the time step
CoSimIO::ExportData(...) # send new data to the CoSimulationApplication
```
An example for an FSI problem where the structural solver of Kratos is used as an external solver can be found [here](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/tests/structural_mechanics_analysis_with_co_sim_io.py). The _CoSimIO_ is used for communicating between the _CoSimulationApplication_ and the structural solver.
While this approach is commonly used, it has the significant drawback that the coupling sequence has to be duplicated, which not only invites bugs and deadlocks but also severely limits the usability when trying different coupling algorithms: not only the input for the _CoSimulationApplication_ has to be changed, but also the source code of the external solver!
Hence a better solution is proposed in the next section:
<a name="developer-guide_remote-controller-cosimulation"></a>
#### Remote controlled CoSimulation
A unique feature of Kratos CoSimulation (in combination with the _CoSimIO_) is the remotely controlled CoSimulation. The main difference to the "classical" approach which duplicates the coupling sequence in the external solver is to give the full control to CoSimulation. This is the most flexible approach from the point of CoSimulation, as then neither the coupling sequence nor any other coupling logic has to be duplicated in the external solver.
In this approach the external solver registers the functions necessary to perform coupled simulations through the _CoSimIO_. These are then called remotely through the _CoSimulationApplication_. This way any coupling algorithm can be used without changing anything in the external solver.
```py
# defining functions to be registered
def SolveSolution():
    ...  # external solver solves the time step

def ExportData():
    ...  # external solver exports data to the CoSimulationApplication
# after defining the functions they can be registered in the CoSimIO:
CoSimIO::Register(SolveSolution)
CoSimIO::Register(ExportData)
# ...
# After all the functions are registered and the solver is fully initialized for CoSimulation, the Run method is called
CoSimIO::Run() # this function runs the coupled simulation. It returns only after finishing
```
A [simple example of this can be found in the _CoSimIO_](https://github.com/KratosMultiphysics/CoSimIO/blob/master/tests/integration_tutorials/cpp/run.cpp).
The _SolverWrapper_ for this approach sends a small control signal in each of its functions to the external solver to tell it what to do. This could be implemented as follows:
```py
class RemoteControlSolverWrapper(CoSimulationSolverWrapper):
# ...
# implement other methods as necessary
# ...
def InitializeSolutionStep(self):
data_config = {
"type" : "control_signal",
"control_signal" : "InitializeSolutionStep"
}
self.ExportData(data_config)
def SolveSolutionStep(self):
for data_name in self.settings["solver_wrapper_settings"]["export_data"].GetStringArray():
# first tell the controlled solver to import data
data_config = {
"type" : "control_signal",
"control_signal" : "ImportData",
"data_identifier" : data_name
}
self.ExportData(data_config)
# then export the data from Kratos
data_config = {
"type" : "coupling_interface_data",
"interface_data" : self.GetInterfaceData(data_name)
}
self.ExportData(data_config)
# now the external solver solves
super().SolveSolutionStep()
for data_name in self.settings["solver_wrapper_settings"]["import_data"].GetStringArray():
# first tell the controlled solver to export data
data_config = {
"type" : "control_signal",
"control_signal" : "ExportData",
"data_identifier" : data_name
}
self.ExportData(data_config)
# then import the data to Kratos
data_config = {
"type" : "coupling_interface_data",
"interface_data" : self.GetInterfaceData(data_name)
}
self.ImportData(data_config)
```
A full example for this can be found [here](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/tests/structural_mechanics_analysis_remote_controlled.py).
If it is possible for an external solver to implement this approach, it is recommended to use it as it is the most robust and flexible.
Nevertheless both approaches are possible with the _CoSimulationApplication_.
<a name="developer-guide_using-a-solver-in-mpi"></a>
### Using a solver in MPI
By default, each _SolverWrapper_ makes use of all ranks in MPI. This can be changed, e.g. if the solver that is wrapped by the _SolverWrapper_ does not support MPI, or to use fewer ranks.
The base _SolverWrapper_ provides the `_GetDataCommunicator` function for this purpose. In the baseclass, the default _DataCommunicator_ (which contains all ranks in MPI) is returned. The _SolverWrapper_ will be instantiated on all the ranks on which this _DataCommunicator_ is defined (i.e. on the ranks where `data_communicator.IsDefinedOnThisRank() == True`).
If a solver does not support MPI-parallelism then it can only run on one rank. In such cases it should return a _DataCommunicator_ which contains only one rank. For this purpose the function `KratosMultiphysics.CoSimulationApplication.utilities.data_communicator_utilities.GetRankZeroDataCommunicator` can be used. Other custom solutions are also possible, see for example the [structural_solver_wrapper](https://github.com/KratosMultiphysics/Kratos/blob/master/applications/CoSimulationApplication/python_scripts/solver_wrappers/kratos/structural_mechanics_wrapper.py).
<a name="references"></a>
## References
- [1] Wall, Wolfgang A., _Fluid structure interaction with stabilized finite elements_, PhD Thesis, University of Stuttgart, 1999, http://dx.doi.org/10.18419/opus-127
- [2] Bucher et al., _Realizing CoSimulation in and with a multiphysics framework_, conference proceedings, IX International Conference on Computational Methods for Coupled Problems in Science and Engineering, 2021, https://www.scipedia.com/public/Bucher_et_al_2021a
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:45.210806 | kratoscosimulationapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 2,514,438 | ef/b8/d6848025da46c603bfe3e82bbb258ea15665767b72384db2a0d77dc8fb25/kratoscosimulationapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 881cf13fcf497a04c6d492f36fefcfba | 0b5f6c519daa2e5ffab5c05bf3b29e2e57af8582e417fc8e0add9c1294ea46d7 | efb8d6848025da46c603bfe3e82bbb258ea15665767b72384db2a0d77dc8fb25 | null | [] | 0 |
2.4 | KratosConvectionDiffusionApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. |
## Convection Diffusion Application
A set of elements, conditions, strategies and solvers necessary for the solution of convection-diffusion problems.
<p align="center">
<img src="https://raw.githubusercontent.com/KratosMultiphysics/Documentation/master/Readme_files/ConvectionDiffusionApplication.png" alt="Solution" style="width: 600px;"/>
</p>
The application includes tests to check its proper functioning.
### Features:
- A set of *Neumann* conditions:
* Flux conditions
* Thermal conditions
- Elements:
* Laplacian element (both 2D/3D)
* Eulerian convection-diffusion (both 2D/3D)
* Convection-diffusion (both 2D/3D)
* Convection-diffusion with change of phase (2D)
* Explicit eulerian convection-diffusion (both 2D/3D)
- Strategies:
* Non-linear/linear convection-diffusion strategy
* Eulerian convection-diffusion strategy
* Semi-Eulerian convection-diffusion strategy
- Utilities and others:
* BFECC convection utility
* BFECC elemental limiter convection utility
* Convection particle
* Face-heat utilities
* Move particle utility
* Pure convection tools
* Pure convection (Crank-Nicolson) tools
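As a minimal, hypothetical illustration of the kind of equation these elements solve (not the application's own discretization), one explicit finite-difference step of 1D convection-diffusion looks like:

```py
def ftcs_step(u, dt, dx, velocity, diffusivity):
    """One explicit central-difference step of du/dt + a*du/dx = k*d2u/dx2
    on interior nodes, with fixed end values (illustrative sketch only)."""
    new = list(u)
    for i in range(1, len(u) - 1):
        conv = -velocity * (u[i + 1] - u[i - 1]) / (2.0 * dx)
        diff = diffusivity * (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx**2
        new[i] = u[i] + dt * (conv + diff)
    return new
```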
| text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:42.334679 | kratosconvectiondiffusionapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 3,415,614 | 0e/28/4d7084718e6b7fd2eafef40f5d954ff241e8d0093d3a1cc21ae478a0fd7f/kratosconvectiondiffusionapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | ced9d94db25ee4fb4768658a4a604118 | d793d1e19c55801f0b374ab5d69d150be6a39b9c19823ebd9a4fd9bf2d4977ff | 0e284d7084718e6b7fd2eafef40f5d954ff241e8d0093d3a1cc21ae478a0fd7f | null | [] | 0 |
2.4 | blocked-matrix-utils | 1.1.0 | Add your description here | [](https://circleci.com/gh/vahtras/util)
[](https://travis-ci.org/vahtras/util)
[](https://coveralls.io/github/vahtras/util?branch=master)

# blocked-matrix-utils
## About
This package contains utilities for handling numpy arrays with subblocking.
## Install
The latest release versions can be installed from PyPI, either with uv
(recommended)
~~~
$ uv add blocked-matrix-utils
~~~
or with regular pip
~~~
$ python -m pip install blocked-matrix-utils
~~~
In addition, the package is available on conda-forge
~~~
$ conda install -c conda-forge blocked-matrix-utils
~~~
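The package's own classes are not documented in this README, so as a plain-`numpy` illustration of the subblocking idea only (the block layout, offsets, and variable names below are my assumptions, not this package's API):

```python
import numpy as np

# Two sub-blocks of different sizes.
a = np.arange(4.0).reshape(2, 2)
b = np.arange(9.0).reshape(3, 3)

# Assemble a block-diagonal matrix from the sub-blocks.
dims = [a.shape[0], b.shape[0]]
offsets = np.cumsum([0] + dims)          # block boundaries: [0, 2, 5]
full = np.zeros((offsets[-1], offsets[-1]))
for blk, lo, hi in zip((a, b), offsets[:-1], offsets[1:]):
    full[lo:hi, lo:hi] = blk

# Recover each sub-block by slicing with the stored offsets.
blocks = [full[lo:hi, lo:hi] for lo, hi in zip(offsets[:-1], offsets[1:])]
```

Keeping the offsets alongside the full array is the essential bookkeeping a blocked-matrix utility automates.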
| text/markdown | null | Olav Vahtras <vahtras@kth.se> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy>=2.2.6",
"scipy>=1.15.3"
] | [] | [] | [] | [] | uv/0.9.25 {"installer":{"name":"uv","version":"0.9.25","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T08:51:41.786836 | blocked_matrix_utils-1.1.0-py3-none-any.whl | 11,493 | 7b/fc/b0ff19943b438f2b346b53af5809741fd48f63afcdeeaa19c2a1e2d21b96/blocked_matrix_utils-1.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 5bca2c276f2a71cc28ecb5e0addfd1f1 | 79b85c2a37f51cf805754092ae8f77e33d5451201b813633036d1c4b278aab81 | 7bfcb0ff19943b438f2b346b53af5809741fd48f63afcdeeaa19c2a1e2d21b96 | null | [
"LICENSE.txt"
] | 237 |
2.4 | KratosContactStructuralMechanicsApplication | 10.4.0 | The Contact Structural Mechanics Application contains the contact mechanics implementations that can be used by the Structural Mechanics Application within Kratos Multiphysics. | # Contact Structural Mechanics Application
| **Application** | **Description** | **Status** | **Authors** |
|:---------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------:|:-----------:|
| `ContactStructuralMechanicsApplication` | The *Contact Structural Mechanics Application* contains the contact mechanics implementations that can be used by the *Structural Mechanics Application* and *Constitutive Laws Application* within *Kratos Multiphysics* | <img src="https://img.shields.io/badge/Status-%F0%9F%94%A7Maintained-blue" width="300px"> | [*Vicente Mataix Ferrándiz*](mailto:vmataix@altair.com) <br /> [*Alejandro Cornejo Velázquez*](mailto:acornejo@cimne.upc.edu) |
<p align="center">
<img src="https://raw.githubusercontent.com/KratosMultiphysics/Examples/master/contact_structural_mechanics/validation/double_arch/data/result.gif" alt="Solution" style="width: 300px;"/>
<img src="https://raw.githubusercontent.com/KratosMultiphysics/Examples/master/contact_structural_mechanics/validation/double_arch/data/result_frictional.gif" alt="Solution" style="width: 300px;"/>
<img src="https://raw.githubusercontent.com/KratosMultiphysics/Examples/master/contact_structural_mechanics/use_cases/in_ring/data/animation.gif" alt="Solution" style="width: 300px;"/>
<img src="https://github.com/KratosMultiphysics/Examples/raw/master/contact_structural_mechanics/use_cases/hyperelastic_tubes/data/half_cylinders.gif" alt="Solution" style="width: 300px;"/>
<img src="https://raw.githubusercontent.com/KratosMultiphysics/Examples/master/mmg_remeshing_examples/use_cases/contacting_cylinders/data/nodal_h.gif" alt="Solution" style="width: 300px;"/>
<img src="https://raw.githubusercontent.com/KratosMultiphysics/Examples/master/contact_structural_mechanics/use_cases/self_contact/data/animation.gif" alt="Solution" style="width: 300px;"/>
</p>
The application includes tests to check the proper functioning of the application.
## 😎 Features:
- **Mesh tying conditions based on the mortar formulation**
- **Augmented Lagrangian contact conditions based on the mortar formulation**
    * *Frictionless formulation*
    * *Frictional formulation*
- **Penalty contact conditions based on the mortar formulation**
    * *Frictionless formulation*
    * *Frictional formulation*
- **Simplified *MPC* conditions based on the mortar formulation. The mortar formulation computes the weights, allowing a simplified *NTN* and a simplified *NTS* to be built**
    * *Frictionless formulation*
    * *Frictional formulation*
    * *Mesh tying formulation, with tension checking*
- **Self-contact compatible**
- **Strategies, processes, solvers and convergence criteria used by the contact formulation**
- **Several strategies for adaptive remeshing**
- **Search utilities to create the contact conditions**
- **Frictional laws (WIP) to consider different types of frictional behaviour**
- **+115 Python unittests, including validation tests, and +85 C++ tests**
## ⚙️ Examples:
Examples can be found [here](https://github.com/KratosMultiphysics/Examples/tree/master/contact_structural_mechanics), and [here](https://github.com/KratosMultiphysics/Examples/tree/master/mmg_remeshing_examples/) for several contact adaptive remeshing examples.
## 🗎 Documentation:
Further information regarding the formulation can be accessed in Chapter 4 of the *PhD thesis* authored by [Vicente Mataix Ferrándiz](mailto:vmataix@altair.com), available on [UPC Commons](https://upcommons.upc.edu/bitstream/2117/328952/1/TVMF1de1.pdf).
| text/markdown | null | Vicente Mataix Ferrándiz <vmataix@altair.com> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0",
"kratosstructuralmechanicsapplication==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:39.920644 | kratoscontactstructuralmechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 18,112,445 | 04/68/f9fe83138c8289af7464d9b00f9c6a464f68225aec65db4844668bdd670c/kratoscontactstructuralmechanicsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 3d416fdea61202e8d1e04a06e570ecbd | 8d9b9bd42148591ecff2ca43c914d8fa4f55ff83fd852ad0ed22873705400dee | 0468f9fe83138c8289af7464d9b00f9c6a464f68225aec65db4844668bdd670c | null | [] | 0 |
2.4 | KratosConstitutiveLawsApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. |
# Constitutive Laws Application
| **Application** | **Description** | **Status** | **Authors** |
|:---------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------:|:-----------:|
| `ConstitutiveLawsApplication` | The *Constitutive Laws Application* contains a series of constitutive laws implementations within *Kratos Multiphysics*. | <img src="https://img.shields.io/badge/Status-%F0%9F%9A%80%20Actively%20developed-Green" width="300px"> | Alejandro Cornejo Velázquez *(acornejo@cimne.upc.edu )* <br /> Sergio Jimenez Reyes *(sjimenez@cimne.upc.edu)* <br /> Riccardo Rossi *(rrossi@cimne.upc.edu)* <br /> Rubén Zorrilla Martínez *(rzorrilla@cimne.upc.edu)* <br /> Vicente Mataix Ferrándiz *(vmataix@altair.com)* |
The application includes tests to check the proper functioning of the application.
## 😎 Features:
- **Constitutive laws**
* *Orthotropic law (Plane stress)*
* *Hyperelastic laws*
* Neo-Hookean
* Kirchhoff
* *Small displacement isotropic plasticity laws (just 3D)*
* Combining:
* Yield surfaces:
* VonMises
* ModifiedMohrCoulomb
* Tresca
* DruckerPrager
* Plastic potential:
* VonMises
* ModifiedMohrCoulomb
* Tresca
* DruckerPrager
* Complete list:
* SmallStrainIsotropicPlasticity3DVonMisesVonMises
* SmallStrainIsotropicPlasticity3DVonMisesModifiedMohrCoulomb
* SmallStrainIsotropicPlasticity3DVonMisesDruckerPrager
* SmallStrainIsotropicPlasticity3DVonMisesTresca
* SmallStrainIsotropicPlasticity3DModifiedMohrCoulombVonMises
* SmallStrainIsotropicPlasticity3DModifiedMohrCoulombModifiedMohrCoulomb
* SmallStrainIsotropicPlasticity3DModifiedMohrCoulombDruckerPrager
* SmallStrainIsotropicPlasticity3DModifiedMohrCoulombTresca
* SmallStrainIsotropicPlasticity3DTrescaVonMises
* SmallStrainIsotropicPlasticity3DTrescaModifiedMohrCoulomb
* SmallStrainIsotropicPlasticity3DTrescaDruckerPrager
* SmallStrainIsotropicPlasticity3DTrescaTresca
* SmallStrainIsotropicPlasticity3DDruckerPragerVonMises
* SmallStrainIsotropicPlasticity3DDruckerPragerModifiedMohrCoulomb
* SmallStrainIsotropicPlasticity3DDruckerPragerDruckerPrager
* SmallStrainIsotropicPlasticity3DDruckerPragerTresca
* *Small displacement isotropic damage laws (just 3D)*
* Combining:
* Yield surfaces:
* VonMises
* ModifiedMohrCoulomb
* Tresca
* DruckerPrager
* Rankine
* SimoJu
* Damage potential:
* VonMises
* ModifiedMohrCoulomb
* Tresca
* DruckerPrager
* Complete list:
* SmallStrainIsotropicDamage3DVonMisesVonMises
* SmallStrainIsotropicDamage3DVonMisesModifiedMohrCoulomb
* SmallStrainIsotropicDamage3DVonMisesDruckerPrager
* SmallStrainIsotropicDamage3DVonMisesTresca
* SmallStrainIsotropicDamage3DModifiedMohrCoulombVonMises
* SmallStrainIsotropicDamage3DModifiedMohrCoulombModifiedMohrCoulomb
* SmallStrainIsotropicDamage3DModifiedMohrCoulombDruckerPrager
* SmallStrainIsotropicDamage3DModifiedMohrCoulombTresca
* SmallStrainIsotropicDamage3DTrescaVonMises
* SmallStrainIsotropicDamage3DTrescaModifiedMohrCoulomb
* SmallStrainIsotropicDamage3DTrescaDruckerPrager
* SmallStrainIsotropicDamage3DTrescaTresca
* SmallStrainIsotropicDamage3DDruckerPragerVonMises
* SmallStrainIsotropicDamage3DDruckerPragerModifiedMohrCoulomb
* SmallStrainIsotropicDamage3DDruckerPragerDruckerPrager
* SmallStrainIsotropicDamage3DDruckerPragerTresca
* SmallStrainIsotropicDamage3DRankineVonMises
* SmallStrainIsotropicDamage3DRankineModifiedMohrCoulomb
* SmallStrainIsotropicDamage3DRankineDruckerPrager
* SmallStrainIsotropicDamage3DRankineTresca
* SmallStrainIsotropicDamage3DSimoJuVonMises
* SmallStrainIsotropicDamage3DSimoJuModifiedMohrCoulomb
* SmallStrainIsotropicDamage3DSimoJuDruckerPrager
* SmallStrainIsotropicDamage3DSimoJuTresca
- **Utilities**
* *Generic constitutive laws utilities*
* *Tangent operator AD*
- **Processes**
* *Automatic initial damage*
* *Advance in time HCF*
- **Several Python unittests, including validation tests, and several C++ tests**
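The class names in the complete lists above follow a strict pattern (prefix + yield surface + plastic/damage potential), so the lists can be regenerated mechanically. A quick cross-check of that naming convention (plain Python, not part of Kratos itself):

```python
# Cross-check of the naming pattern used by the complete lists above.
surfaces = ["VonMises", "ModifiedMohrCoulomb", "Tresca", "DruckerPrager"]
potentials = ["VonMises", "ModifiedMohrCoulomb", "DruckerPrager", "Tresca"]

plasticity = [f"SmallStrainIsotropicPlasticity3D{s}{p}"
              for s in surfaces for p in potentials]

# The damage laws add two extra yield surfaces (Rankine, SimoJu).
damage = [f"SmallStrainIsotropicDamage3D{s}{p}"
          for s in surfaces + ["Rankine", "SimoJu"] for p in potentials]
```

This yields the 16 plasticity and 24 damage law names enumerated above.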
## ⚙️ Examples:
Examples can be found [in the same folder as the *Structural Mechanics Application*](https://github.com/KratosMultiphysics/Examples/tree/master/structural_mechanics). | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:36.452004 | kratosconstitutivelawsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 8,447,980 | d2/59/92f73fc10fbdf3a466c8a35c4de3f2481fc65c636d829aa6344b6605a90d/kratosconstitutivelawsapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | b9c1e54b789126fd5d12082ebda9764d | efdf88f6f7dad5455b2c5e3b427263d42fe126ebde1731e4d0e9d4e8c395f1b9 | d25992f73fc10fbdf3a466c8a35c4de3f2481fc65c636d829aa6344b6605a90d | null | [] | 0 |
2.4 | KratosCompressiblePotentialFlowApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Compressible Potential Flow Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:33.716538 | kratoscompressiblepotentialflowapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 3,951,939 | a1/57/f3d7c5d1ab7a38926f1b16978fec9d8e370db93f3b4a976f31d1cfb7aa30/kratoscompressiblepotentialflowapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 44090ce26337d54f967b971d2c36cfe1 | 470b70719acb049be7d3a5a4e2e8b0a361656f9d006bfa45d748836c8dd2b2e7 | a157f3d7c5d1ab7a38926f1b16978fec9d8e370db93f3b4a976f31d1cfb7aa30 | null | [] | 0 |
2.4 | KratosChimeraApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Chimera Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:31.271143 | kratoschimeraapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 1,174,576 | 40/29/f9907f4c656b8e4b23bb5eea888101f29d247669cdf61980ec94fe4ee2a9/kratoschimeraapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | c94d0ddc95789dab7f2321b5710f3b2a | fbc14b7c23508eb7b8557ed3bed8ba208c96388d9dac3e2e04811f3a14970d18 | 4029f9907f4c656b8e4b23bb5eea888101f29d247669cdf61980ec94fe4ee2a9 | null | [] | 0 |
2.4 | KratosCableNetApplication | 10.4.0 | KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++, and counts with an extensive Python interface. | ## Cable Net Application | text/markdown | null | Kratos Team <kratos@listas.cimne.upc.edu> | null | null | BSD-4-Clause | null | [
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"kratosmultiphysics==10.4.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:51:29.618924 | kratoscablenetapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | 1,591,050 | f8/5d/ad9875d56a8174fd48587b8ffd10d87b4cd1e9f0111c2ede42ac3b146056/kratoscablenetapplication-10.4.0-cp314-cp314-macosx_15_0_arm64.whl | cp314 | bdist_wheel | null | false | 1274958485b233aac4acbcd092da054c | 95300ac03e8920e91ce16bc6842283e5240bd3261e1a9f10d03c760c56940c9b | f85dad9875d56a8174fd48587b8ffd10d87b4cd1e9f0111c2ede42ac3b146056 | null | [] | 0 |
2.4 | qmi | 0.51.2 | The Quantum Measurement Infrastructure framework | [](https://github.com/QuTech-Delft/QMI/blob/v0.51.2/.github/badges/pylint.svg)
[](https://github.com/QuTech-Delft/QMI/blob/v0.51.2/.github/badges/mypy.svg)
[](https://qmi.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/QuTech-Delft/QMI/blob/v0.51.2/.github/badges/coverage.svg)
[](https://github.com/QuTech-Delft/QMI/blob/v0.51.2/.github/badges/tests.svg)
# Quantum Measurement Infrastructure
QMI is a Python 3 framework for controlling laboratory equipment. It is suitable for anything ranging from one-off
scientific experiments to robust operational setups.
QMI is developed by [QuTech](https://qutech.nl) to support advanced physics experiments involving quantum bits.
However, other than its name and original purpose, there is nothing specifically *quantum* about QMI — it is potentially
useful in any environment where monitoring and control of measurement equipment is needed.
## Dependencies
The full functioning of this software is dependent on several external Python packages, dynamic libraries and drivers.
The following items are not delivered as part of this software and must be acquired and installed by the user separately,
when necessary for the use of a specific QMI driver:
- [ADwin.py](https://pypi.org/project/ADwin/)
- [libadwin.so, adwin32.dll, adwin64.dll](https://www.adwin.de/us/download/download.html)
- [aravis](https://github.com/AravisProject/aravis)
- [Aviosys HTTP API](https://aviosys.com/products/lib/httpapi.html)
- [Boston Micromachines DM SDK](https://bostonmicromachines.com/dmsdk/)
- [libdwf.dll, libdwf.so](https://digilent.com/reference/software/waveforms/waveforms-sdk/start)
- [JPE cacli.exe](https://www.jpe-innovations.com/wp-content/uploads/CPSC_v7.3.20201222.zip)
- [libmh150.so](https://www.picoquant.com/dl_software/MultiHarp150/MultiHarp150_160_V3_0.zip)
- [libhh400.so](https://www.picoquant.com/dl_software/HydraHarp400/HydraHarp400_SW_and_DLL_v3_0_0_3.zip)
- [libph300.so](https://www.picoquant.com/dl_software/PicoHarp300/PicoHarp300_SW_and_DLL_v3_0_0_3.zip)
- [libusb](https://libusb.info/)
- [mcculw](https://pypi.org/project/mcculw/)
- [Picotech PicoSDK ps3000a, PicoSDK ps4000a](https://www.picotech.com/downloads)
- [PyGObject](https://pypi.org/project/PyGObject/)
- [tcdbase.dll](https://www.qutools.com/files/quTAU_release/quTAU_Setup_4.3.3_win.exe), libtcdbase.so
- [RPi.GPIO](https://pypi.org/project/RPi.GPIO/)
- [Silicon Labs CP210x USB to UART Bridge](https://www.silabs.com/developers/usb-to-uart-bridge-vcp-drivers)
- [uldaq](https://pypi.org/project/uldaq/)
- [usbdll.dll](https://www.newport.com/software-and-drivers)
- [VCP driver](https://ftdichip.com/Drivers/vcp-drivers/)
- [stmcdc.inf](https://www.wieserlabs.com/products/radio-frequency/flexdds-ng-dual/FlexDDS-NG-ad9910_standalone.zip)
- [wlmData.dll, libwlmData.so](https://www.highfinesse.com/en/support/software-update.html)
- [zhinst](https://pypi.org/project/zhinst/)
Usage of the third-party software, drivers or libraries can be subject to copyright and license terms of the provider. Please review their terms before using the software, driver or library.
## Installation
Install with pip from https://pypi.org/project/qmi/: `pip install qmi`.
## Documentation
### Latest version
The latest version of the documentation can be found [here](https://qmi.readthedocs.io/en/latest/).
### Installing for generating documentation
To install the packages needed to build the QMI documentation, run:
```
pip install -e .[rtd]
```
To build the 'readthedocs' documentation locally, run:
```
cd documentation/sphinx
./make-docs
```
The documentation can then be found in the `build/html` directory.
## Contribute
For contribution guidelines see [CONTRIBUTING](CONTRIBUTING.md).
| text/markdown | null | QuTech <H.K.Ervasti@tudelft.nl> | null | null | Technische Universiteit Delft hereby disclaims all copyright interest in the program “QMI (Quantum Measurement Infrastructure)”, a lab instrument remote procedure call control framework written by QuTech.
Professor Lieven Vandersypen, Director of Research QuTech.
(c) 2023, QuTech, Delft, The Netherlands.
This work is licensed under a MIT OSS licence
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| qmi, hardware, software, interface, laboratory, physics | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Operating System :: Unix",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Scientific/Engineering :: Physics",
"Natural Language :: English"
] | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"numpy",
"scipy",
"h5py>=3.7.0",
"pyserial",
"pyusb",
"python-vxi11",
"standard-xdrlib",
"pytz",
"psutil",
"colorama",
"jsonschema",
"pydwf",
"setuptools; extra == \"dev\"",
"wheel; extra == \"dev\"",
"twine; extra == \"dev\"",
"astroid; extra == \"dev\"",
"coverage; extra == \"dev\"",
"pylint>=3.0; extra == \"dev\"",
"mypy; extra == \"dev\"",
"sphinx; extra == \"dev\"",
"sphinx_rtd_theme; extra == \"dev\"",
"bump2version; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/QuTech-Delft/QMI",
"Repository, https://github.com/QuTech-Delft/QMI.git",
"Changelog, https://github.com/QuTech-Delft/QMI/blob/stable-0-51/CHANGELOG.md",
"Issues, https://github.com/QuTech-Delft/QMI/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T08:51:28.609102 | qmi-0.51.2.tar.gz | 1,644,902 | e1/b2/f88715d4021a4aa420c9e48bc63115d9e9e6e8c0dda96f712aae45c040d7/qmi-0.51.2.tar.gz | source | sdist | null | false | b45fd59ed148bb493cea51ef7a0de690 | f57cea530a457c7c889f0eb7f4c874e7a2778c56905e55ad806690ebfe705bd9 | e1b2f88715d4021a4aa420c9e48bc63115d9e9e6e8c0dda96f712aae45c040d7 | null | [
"LICENSE.md"
] | 190 |
2.4 | slurm-script-generator | 0.2.1 | Generate slurm scripts. | # slurm-script-generator
``` bash
pip install slurm-script-generator
```
## Generate scripts
Generate a slurm script to `slurm_script.sh` with
``` bash
generate-slurm-script --nodes 1 --ntasks-per-node 16
```
```
#!/bin/bash
##########################################
#SBATCH --nodes=1                # number of nodes on which to run
#SBATCH --ntasks-per-node=16     # number of tasks to invoke on each node
##########################################
```
To save the script to file `my_script.sh` use `--output`:
``` bash
generate-slurm-script --nodes 1 --ntasks-per-node 16 --output my_script.sh
```
You can also generate scripts in Python programmatically:
``` python
from slurm_script_generator.pragmas import Nodes, Ntasks_per_node
from slurm_script_generator.slurm_script import SlurmScript
slurm_script = SlurmScript(
custom_command="srun ./bin > run.out",
)
slurm_script.add_pragma(Nodes(value=2))
slurm_script.add_pragma(Ntasks_per_node(value=16))
print(slurm_script)
```
```
#!/bin/bash
##########################################
#SBATCH --nodes=2                # number of nodes on which to run
#SBATCH --ntasks-per-node=16     # number of tasks to invoke on each node
##########################################
srun ./bin > run.out
```
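For intuition, the pragma pattern the package wraps can be sketched in plain Python. This is an illustration only, not the package's internals; the helper name and column width are made up here:

```python
# Conceptual sketch of composing "#SBATCH" pragma lines; the real
# SlurmScript/pragma classes in the package are richer than this.
def sbatch_line(flag: str, value, comment: str) -> str:
    return f"#SBATCH --{flag}={value}".ljust(42) + f"# {comment}"

header = "#" * 42
pragmas = [
    sbatch_line("nodes", 2, "number of nodes on which to run"),
    sbatch_line("ntasks-per-node", 16, "number of tasks to invoke on each node"),
]
script = "\n".join(
    ["#!/bin/bash", header, *pragmas, header, "srun ./bin > run.out"]
)
print(script)
```

Each pragma object contributes one `#SBATCH` line plus an aligned comment, and the script is the concatenation of shebang, pragma block, and command.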
You can also generate a string representation of the script with
`generate_script`:
``` python
script = slurm_script.generate_script()
```
To export the settings to a json file you can use `--export-json`:
``` bash
generate-slurm-script --nodes 2 --export-json setup.json
```
```
#!/bin/bash
##########################################
#SBATCH --nodes=2                # number of nodes on which to run
##########################################
```
This JSON file can be used as a basis for creating new scripts
``` bash
generate-slurm-script --input setup.json --ntasks-per-node 16
```
```
#!/bin/bash
##########################################
#SBATCH --nodes=2                # number of nodes on which to run
#SBATCH --ntasks-per-node=16     # number of tasks to invoke on each node
##########################################
```
### Add modules
Add modules with
``` bash
generate-slurm-script --input setup.json --ntasks-per-node 16 --modules gcc/13 openmpi/5.0
```
```
#!/bin/bash
##########################################
#SBATCH --nodes=2                # number of nodes on which to run
#SBATCH --ntasks-per-node=16     # number of tasks to invoke on each node
##########################################
module purge                     # Purge modules
module load gcc/13 openmpi/5.0   # modules
module list                      # List loaded modules
```
### Add virtual environment
``` bash
generate-slurm-script --nodes 1 --ntasks-per-node 16 --venv ~/virtual_envs/env
```
```
#!/bin/bash
##########################################
#SBATCH --nodes=1                # number of nodes on which to run
#SBATCH --ntasks-per-node=16     # number of tasks to invoke on each node
##########################################
source /Users/max/virtual_envs/env/bin/activate   # virtual environment
```
### Other
All optional arguments can be shown with
``` bash
generate-slurm-script -h
```
usage: generate-slurm-script [-h] [-A NAME] [-b TIME] [--bell] [--bb SPEC]
[--bbf FILE_NAME] [-c NCPUS] [--comment NAME]
[--container PATH] [--container-id ID]
[--cpu-freq MIN[-MAX[:GOV]]] [--delay-boot MINS]
[-d TYPE:JOBID[:TIME]] [--deadline TIME]
[-D PATH] [--get-user-env] [--gres LIST]
[--gres-flags OPTS] [-H] [-I [SECS]] [-J NAME]
[-k] [-K [SIGNAL]] [-L NAMES] [-M NAMES]
[-m TYPE] [--mail-type TYPE] [--mail-user USER]
[--mcs-label MCS] [-n N] [--nice VALUE]
[-N NODES] [--ntasks-per-node N]
[--oom-kill-step [0|1]] [-O] [--power FLAGS]
[--priority VALUE] [--profile VALUE]
[-p PARTITION] [-q QOS] [-Q] [--reboot] [-s]
[--signal [R:]NUM[@TIME]] [--spread-job]
[--stderr STDERR] [--stdout STDOUT]
[--switches MAX_SWITCHES[@MAX_TIME]] [-S CORES]
[--thread-spec THREADS] [-t MINUTES]
[--time-min MINUTES] [--tres-bind ...]
[--tres-per-task LIST] [--use-min-nodes]
[--wckey WCKEY] [--cluster-constraint LIST]
[--contiguous] [-C LIST] [-F FILENAME] [--mem MB]
[--mincpus N] [--reservation NAME] [--tmp MB]
[-w HOST [HOST ...]] [-x HOST [HOST ...]]
[--exclusive-user] [--exclusive-mcs]
[--mem-per-cpu MB] [--resv-ports]
[--sockets-per-node S] [--cores-per-socket C]
[--threads-per-core T] [-B S[:C[:T]]]
[--ntasks-per-core N] [--ntasks-per-socket N]
[--hint HINT] [--mem-bind BIND]
[--cpus-per-gpu N] [-G N] [--gpu-bind ...]
[--gpu-freq ...] [--gpus-per-node N]
[--gpus-per-socket N] [--gpus-per-task N]
[--mem-per-gpu --MEM_PER_GPU]
[--disable-stdout-job-summary] [--nvmps]
[--line-length LINE_LENGHT]
[--modules MODULES [MODULES ...]]
[--vars ENVIRONMENT_VARS [ENVIRONMENT_VARS ...]]
[--venv VENV] [--printenv] [--print-self]
[--likwid] [--input INPUT_PATH]
[--output OUTPUT_PATH] [--export-json JSON_PATH]
[--command COMMAND]
Slurm job submission options
options:
-h, --help show this help message and exit
-A, --account NAME charge job to specified account (default: None)
-b, --begin TIME defer job until HH:MM MM/DD/YY (default: None)
--bell ring the terminal bell when the job is allocated
(default: None)
--bb SPEC burst buffer specifications (default: None)
--bbf FILE_NAME burst buffer specification file (default: None)
-c, --cpus-per-task NCPUS
number of cpus required per task (default: None)
--comment NAME arbitrary comment (default: None)
--container PATH Path to OCI container bundle (default: None)
--container-id ID OCI container ID (default: None)
--cpu-freq MIN[-MAX[:GOV]]
requested cpu frequency (and governor) (default: None)
--delay-boot MINS delay boot for desired node features (default: None)
-d, --dependency TYPE:JOBID[:TIME]
defer job until condition on jobid is satisfied
(default: None)
--deadline TIME remove the job if no ending possible before this
deadline (default: None)
-D, --chdir PATH change working directory (default: None)
--get-user-env used by Moab. See srun man page (default: None)
--gres LIST required generic resources (default: None)
--gres-flags OPTS flags related to GRES management (default: None)
-H, --hold submit job in held state (default: None)
-I, --immediate [SECS]
exit if resources not available in "secs" (default:
None)
-J, --job-name NAME name of job (default: None)
-k, --no-kill do not kill job on node failure (default: None)
-K, --kill-command [SIGNAL]
signal to send terminating job (default: None)
-L, --licenses NAMES required license, comma separated (default: None)
-M, --clusters NAMES Comma separated list of clusters to issue commands to
(default: None)
-m, --distribution TYPE
distribution method for processes to nodes (default:
None)
--mail-type TYPE notify on state change (default: None)
--mail-user USER who to send email notification for job state changes
(default: None)
--mcs-label MCS mcs label if mcs plugin mcs/group is used (default:
None)
-n, --ntasks N number of processors required (default: None)
--nice VALUE decrease scheduling priority by value (default: None)
-N, --nodes NODES number of nodes on which to run (default: None)
--ntasks-per-node N number of tasks to invoke on each node (default: None)
--oom-kill-step [0|1]
set the OOMKillStep behaviour (default: None)
-O, --overcommit overcommit resources (default: None)
--power FLAGS power management options (default: None)
--priority VALUE set the priority of the job (default: None)
--profile VALUE enable acct_gather_profile for detailed data (default:
None)
-p, --partition PARTITION
partition requested (default: None)
-q, --qos QOS quality of service (default: None)
-Q, --quiet quiet mode (suppress informational messages) (default:
None)
--reboot reboot compute nodes before starting job (default:
None)
-s, --oversubscribe oversubscribe resources with other jobs (default:
None)
--signal [R:]NUM[@TIME]
send signal when time limit within time seconds
(default: None)
--spread-job spread job across as many nodes as possible (default:
None)
--stderr, -e STDERR File to redirect stderr (%x=jobname, %j=jobid)
(default: None)
--stdout, -o STDOUT File to redirect stdout (%x=jobname, %j=jobid)
(default: None)
--switches MAX_SWITCHES[@MAX_TIME]
optimum switches and max time to wait for optimum
(default: None)
-S, --core-spec CORES
count of reserved cores (default: None)
--thread-spec THREADS
count of reserved threads (default: None)
-t, --time MINUTES time limit (default: None)
--time-min MINUTES minimum time limit (if distinct) (default: None)
--tres-bind ... task to tres binding options (default: None)
--tres-per-task LIST list of tres required per task (default: None)
--use-min-nodes if a range of node counts is given, prefer the smaller
count (default: None)
--wckey WCKEY wckey to run job under (default: None)
--cluster-constraint LIST
specify a list of cluster constraints (default: None)
--contiguous demand a contiguous range of nodes (default: None)
-C, --constraint LIST
specify a list of constraints (default: None)
-F, --nodefile FILENAME
request a specific list of hosts (default: None)
--mem MB minimum amount of real memory (default: None)
--mincpus N minimum number of logical processors per node
(default: None)
--reservation NAME allocate resources from named reservation (default:
None)
--tmp MB minimum amount of temporary disk (default: None)
-w, --nodelist HOST [HOST ...]
request a specific list of hosts (default: None)
-x, --exclude HOST [HOST ...]
exclude a specific list of hosts (default: None)
--exclusive-user allocate nodes in exclusive mode for cpu consumable
resource (default: None)
--exclusive-mcs allocate nodes in exclusive mode when mcs plugin is
enabled (default: None)
--mem-per-cpu MB maximum amount of real memory per allocated cpu
(default: None)
--resv-ports reserve communication ports (default: None)
--sockets-per-node S number of sockets per node to allocate (default: None)
--cores-per-socket C number of cores per socket to allocate (default: None)
--threads-per-core T number of threads per core to allocate (default: None)
-B, --extra-node-info S[:C[:T]]
combine request of sockets, cores and threads
(default: None)
--ntasks-per-core N number of tasks to invoke on each core (default: None)
--ntasks-per-socket N
number of tasks to invoke on each socket (default:
None)
--hint HINT Bind tasks according to application hints (default:
None)
--mem-bind BIND Bind memory to locality domains (default: None)
--cpus-per-gpu N number of CPUs required per allocated GPU (default:
None)
-G, --gpus N count of GPUs required for the job (default: None)
--gpu-bind ... task to gpu binding options (default: None)
--gpu-freq ... frequency and voltage of GPUs (default: None)
--gpus-per-node N number of GPUs required per allocated node (default:
None)
--gpus-per-socket N number of GPUs required per allocated socket (default:
None)
--gpus-per-task N number of GPUs required per spawned task (default:
None)
--mem-per-gpu --MEM_PER_GPU
real memory required per allocated GPU (default: None)
--disable-stdout-job-summary
disable job summary in stdout file for the job
(default: None)
--nvmps launching NVIDIA MPS for job (default: None)
--line-length LINE_LENGTH
line length before start of comment (default: 40)
--modules MODULES [MODULES ...]
Modules to load (e.g., --modules mod1 mod2 mod3)
(default: [])
--vars ENVIRONMENT_VARS [ENVIRONMENT_VARS ...]
Environment variables to export (e.g., --vars VAR1=a
VAR2=b) (default: [])
--venv VENV virtual environment to load with `source
VENV/bin/activate` (default: None)
--printenv print all environment variables (default: False)
--print-self print the batch script in the batch script (default:
False)
--likwid Set up likwid environment variables (default: False)
--input INPUT_PATH path to input json file (default: None)
--output OUTPUT_PATH json path to save slurm batch script to (default:
None)
--export-json JSON_PATH
path to export yaml for generating the slurm script to
(default: None)
--command COMMAND Add a custom command at the end of the script (e.g.
mpirun -n 8 ./bin > run.out) (default: None)
| text/markdown | Max | null | null | null | null | python | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"black; extra == \"dev\"",
"check-manifest; extra == \"dev\"",
"isort; extra == \"dev\"",
"slurm-script-generator[docs]; extra == \"dev\"",
"slurm-script-generator[test]; extra == \"dev\"",
"pytest; extra == \"test\"",
"coverage; extra == \"test\"",
"ipykernel; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"nbconvert; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"jupyterlab; extra == \"docs\"",
"linkify-it-py; extra == \"docs\"",
"pre-commit; extra == \"docs\"",
"pyproject-fmt; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"sphinx-book-theme; extra == \"docs\"",
"sphinxcontrib-mermaid; extra == \"docs\"",
"sphinx-design; extra == \"docs\""
] | [] | [] | [] | [
"Source, https://github.com/max-models/slurm-script-generator"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T08:50:51.770710 | slurm_script_generator-0.2.1.tar.gz | 15,260 | ac/8d/9efeb3c05520ed466ed5f0af8984ff4232211684d9bfd9116659c18a4e1b/slurm_script_generator-0.2.1.tar.gz | source | sdist | null | false | a2e842a39619526f1eabcf2550d13734 | a747799b7d89be505736e4e01362bed04f9d69ea00aff0866cce791f93217afd | ac8d9efeb3c05520ed466ed5f0af8984ff4232211684d9bfd9116659c18a4e1b | null | [] | 213 |
2.4 | simpleaible | 0.12.2.dev2 | SimpleAIBLE is an AI-friendly BLE toolkit (MCP server & REST API) powered by SimplePyBLE. | |PyPI Licence|
SimpleAIBLE
===========
An AI-friendly BLE toolkit powered by `SimpleBLE`_. Scan, connect, and interact with Bluetooth Low Energy devices from AI agents and scripts.
Key Features
------------
* **MCP Server**: Expose BLE operations as tools for MCP-capable clients (Cursor, Claude Code, Windsurf, etc.)
* **HTTP Server**: Control BLE devices over a REST API
* **Agent Skills**: Teach your AI assistant how to work with Bluetooth devices using reusable skill files
* **Cross-Platform**: Works on Windows, macOS, and Linux
Support & Resources
--------------------
We're here to help you succeed with SimpleAIBLE:
* **Documentation**: Visit our `docs`_ page for comprehensive guides
* **Commercial Support**: Check out |website|_ or |email|_ about licensing and professional services.
* **Community**: Join our `Discord`_ server for discussions and help
**Don't hesitate to reach out if you need assistance - we're happy to help!**
Installation
------------
Install SimpleAIBLE using your preferred package manager:
Using uv (recommended): ::
uv tool install simpleaible
Or using pip: ::
pip install simpleaible
MCP Server
----------
Expose BLE operations as tools for MCP-capable clients (Cursor, Claude Code, etc.).
Configure it in your MCP client with the following command: ``"command": "simpleaible-mcp"``.
See the `MCP Server docs`_ for full tool documentation and client-specific setup.
HTTP Server
-----------
Run the REST API for controlling BLE devices remotely: ::
simpleaible-http --host 127.0.0.1 --port 8000
See the `HTTP Server docs`_ for the full API reference.
Agent Skills
------------
Install the SimpleAIBLE skill to give your AI agent built-in knowledge of BLE workflows: ::
npx skills add https://github.com/simpleble/simpleble --skill simpleaible
See the `Agent Skills docs`_ for more details.
License
=======
SimpleAIBLE is available under the Business Source License 1.1 (BUSL-1.1). Each
version of SimpleAIBLE converts to the GNU General Public License version 3 four years after its initial release.
The project is free to use for non-commercial purposes, but requires a commercial license for commercial use. We
also offer FREE commercial licenses for small projects and early-stage companies - reach out to discuss your use case!
**Why purchase a commercial license?**
- Build and deploy unlimited commercial applications
- Use across your entire development team
- Zero revenue sharing or royalty payments
- Choose features that match your needs and budget
- Priority technical support included
- Clear terms for integrating into MIT-licensed projects
**Looking for information on pricing and commercial terms of service?** Visit |website-url|_ for more details.
For further enquiries, please |email|_ or |leavemessage|_ and we can discuss the specifics of your situation.
----
**SimpleAIBLE** is a project powered by |caos|_.
.. Links
.. |email| replace:: email us
.. _email: mailto:contact@simpleble.org
.. |leavemessage| replace:: leave us a message on our website
.. _leavemessage: https://www.simpleble.org/contact?utm_source=pypi_simpleaible&utm_medium=referral&utm_campaign=simpleaible_readme
.. |website| replace:: our website
.. _website: https://simpleble.org?utm_source=pypi_simpleaible&utm_medium=referral&utm_campaign=simpleaible_readme
.. |website-url| replace:: www.simpleble.org
.. _website-url: https://simpleble.org?utm_source=pypi_simpleaible&utm_medium=referral&utm_campaign=simpleaible_readme
.. |caos| replace:: **The California Open Source Company**
.. _caos: https://californiaopensource.com?utm_source=pypi_simpleaible&utm_medium=referral&utm_campaign=simpleaible_readme
.. _SimpleBLE: https://github.com/simpleble/simpleble/
.. _docs: https://docs.simpleble.org/
.. _Discord: https://discord.gg/N9HqNEcvP3
.. _MCP Server docs: https://docs.simpleble.org/simpleaible/mcp
.. _HTTP Server docs: https://docs.simpleble.org/simpleaible/http
.. _Agent Skills docs: https://docs.simpleble.org/simpleaible/skills
.. |PyPI Licence| image:: https://img.shields.io/pypi/l/simpleaible
| text/x-rst | null | Kevin Dewald <kevin@simpleble.org> | null | null | null | ble, bluetooth, mcp, ai, simpleble | [
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"License :: Free for non-commercial use",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"simplepyble",
"fastmcp",
"fastapi",
"uvicorn",
"pydantic"
] | [] | [] | [] | [
"Homepage, https://github.com/simpleble/simpleble"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:50:46.989099 | simpleaible-0.12.2.dev2.tar.gz | 9,745 | c2/4f/96fb1aae261b3e2699801bc73e8cab8cc3f41b8a7605089a2fc25c1640a4/simpleaible-0.12.2.dev2.tar.gz | source | sdist | null | false | 81c5078e527bdc0aac6246b0a70d0dfb | 8921d47cadbef254ded6512e60ad883ad059f5f2de08ce7245c2d01668da2be6 | c24f96fb1aae261b3e2699801bc73e8cab8cc3f41b8a7605089a2fc25c1640a4 | null | [] | 180 |
2.2 | isage-vdb | 0.2.0.1 | High-Performance Vector Database with Pluggable ANNS Architecture | # SageVDB C++ Core Library
**High-Performance Vector Database with Pluggable ANNS Architecture**
SageVDB is a C++20 library that provides efficient vector similarity search, metadata management, and a flexible plugin system for Approximate Nearest Neighbor Search (ANNS) algorithms. It serves as the native core for the SAGE VDB middleware component.
> Usage Mode Guide: see `docs/USAGE_MODES.md` for the positioning, data flow, and examples of the Standalone / BYO-Embedding / Plugin / Service modes.
## 🎯 Features
### Core Capabilities
- **Exact and Approximate Search**: Support for brute-force exact search and pluggable ANNS algorithms
- **Multiple Distance Metrics**: L2 (Euclidean), Inner Product, Cosine similarity
- **Metadata Management**: Efficient key-value metadata storage and filtering
- **Batch Operations**: Optimized batch insertion and search
- **Persistence**: Save and load database state to/from disk
- **Thread-Safe**: Concurrent read operations supported
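The three distance metrics score similarity differently; as an illustrative sketch in plain Python (not the library's optimized implementation):

```python
import math

def l2(a, b):
    # Euclidean distance: lower means more similar
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):
    # Raw dot product: higher means more similar
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Inner product of the normalized vectors, in [-1, 1]
    return inner_product(a, b) / (
        math.sqrt(inner_product(a, a)) * math.sqrt(inner_product(b, b))
    )
```

Cosine similarity is just the inner product of unit-normalized vectors, which is why engines often implement it that way.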
### ANNS Plugin System
- **Pluggable Architecture**: Easy integration of new ANNS algorithms
- **Algorithm Registry**: Dynamic registration and discovery
- **Big-ANN Compatible**: Parameters follow [big-ann-benchmarks](https://github.com/erikbern/ann-benchmarks) conventions
- **Built-in Algorithms**:
- `brute_force`: Exact search, supports incremental updates and deletions
- `faiss`: FAISS integration (when available)
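The `brute_force` baseline scores every stored vector against the query and keeps the top k; a toy Python sketch of the idea (not the plugin itself):

```python
def brute_force_knn(entries, query, k):
    # entries: list of (id, vector); exact k-NN by L2 distance
    scored = sorted(
        (sum((x - y) ** 2 for x, y in zip(vec, query)) ** 0.5, vid)
        for vid, vec in entries
    )
    return [vid for _, vid in scored[:k]]
```

Incremental add and deletion are trivial in this scheme (append to or remove from the list), which is why the brute-force plugin can support both.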
### Multimodal Support
- **Cross-Modal Fusion**: Combine features from text, images, audio, video, etc.
- **Fusion Strategies**: Concatenation, weighted average, attention, tensor fusion, bilinear pooling
- **Extensible**: Register custom modality processors and fusion strategies
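To illustrate one of the listed strategies, weighted-average fusion can be sketched as follows (hypothetical helper; assumes all modality embeddings share one dimension, unlike concatenation, which stacks differently sized embeddings into one longer vector):

```python
def weighted_average_fusion(modality_vectors, weights):
    # modality_vectors: dict name -> equal-length list[float]
    # weights: dict name -> float, normalized below to sum to 1
    total = sum(weights[name] for name in modality_vectors)
    dim = len(next(iter(modality_vectors.values())))
    fused = [0.0] * dim
    for name, vec in modality_vectors.items():
        w = weights[name] / total
        for i, value in enumerate(vec):
            fused[i] += w * value
    return fused
```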
## 🔧 Build Requirements
### Required
- **C++20 compatible compiler** (GCC 11+, Clang 14+, or MSVC 19.29+)
- **CMake 3.12+**
- **BLAS/LAPACK** (for linear algebra operations)
### Optional
- **OpenMP** - Parallel processing (recommended)
- **FAISS** - Facebook AI Similarity Search integration
- **OpenCV** - Image processing for multimodal features
- **FFmpeg** - Audio/video processing for multimodal features
- **gperftools** - Performance profiling
## 🚀 Quick Start
### One-Command Setup (Recommended)
```bash
# Clone and setup in one go
git clone https://github.com/intellistream/sageVDB.git
cd sageVDB
./quickstart.sh
```
The `quickstart.sh` script will:
- ✓ Install git hooks (pre-commit, pre-push)
- ✓ Check dependencies (CMake, C++ compiler, Python)
- ✓ Optionally build the project
- ✓ Optionally install Python package in development mode
**What the git hooks do**:
- `pre-commit`: Checks for trailing whitespace, large files, debug statements
- `pre-push`: Manages version updates and PyPI publishing workflow
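The trailing-whitespace check, for example, boils down to a test like this (illustrative sketch, not the project's actual hook script):

```python
def has_trailing_whitespace(text):
    # True if any line ends with a space or tab -- what the
    # pre-commit hook would flag before allowing a commit
    return any(line != line.rstrip(" \t") for line in text.splitlines())
```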
### Manual Building
```bash
cd sageVDB
# Basic build
./build.sh
# Production build with optimizations
BUILD_TYPE=Release ./build.sh
# Enable profiling
SAGE_ENABLE_GPERFTOOLS=ON ./build.sh
# The build produces:
# - build/libsage_vdb.so # Shared library
# - build/test_sage_vdb # Test executable
# - install/lib/libsage_vdb.so # Installed library
# - install/include/sage_vdb/ # Public headers
```
### CMake Build Options
```bash
cmake -B build -S . \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_TESTS=ON \
-DUSE_OPENMP=ON \
-DENABLE_MULTIMODAL=ON \
-DENABLE_OPENCV=OFF \
-DENABLE_FFMPEG=OFF \
-DENABLE_GPERFTOOLS=OFF
cmake --build build -j$(nproc)
```
### Running Tests
```bash
cd build
ctest --verbose
# Or run directly
./test_sage_vdb
./test_multimodal
```
## 📖 Usage Examples
### Basic Vector Search
```cpp
#include <sage_vdb/sage_vdb.h>
using namespace sage_vdb;
int main() {
// Create database configuration
DatabaseConfig config(128); // 128-dimensional vectors
config.index_type = IndexType::FLAT;
config.metric = DistanceMetric::L2;
config.anns_algorithm = "brute_force";
// Initialize database
SageVDB db(config);
// Add vectors with metadata
Vector vec1(128, 0.1f);
Metadata meta1 = {{"category", "A"}, {"text", "first vector"}};
VectorId id1 = db.add(vec1, meta1);
// Batch add
std::vector<Vector> vectors = {
Vector(128, 0.2f),
Vector(128, 0.3f)
};
std::vector<Metadata> metadata = {
{{"category", "B"}},
{{"category", "A"}}
};
auto ids = db.add_batch(vectors, metadata);
// Search for nearest neighbors
Vector query(128, 0.15f);
auto results = db.search(query, 5); // Find 5 nearest neighbors
for (const auto& result : results) {
std::cout << "ID: " << result.id
<< ", Distance: " << result.score
<< ", Category: " << result.metadata.at("category")
<< std::endl;
}
// Filtered search
auto filtered = db.filtered_search(
query,
SearchParams(5),
[](const Metadata& meta) {
return meta.at("category") == "A";
}
);
return 0;
}
```
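Conceptually, `filtered_search` applies the metadata predicate before ranking, so distance computation only touches matching vectors; a toy Python sketch of that flow (not the C++ implementation):

```python
def filtered_search(entries, query, k, predicate):
    # entries: list of (id, vector, metadata dict)
    # Keep only entries whose metadata passes the predicate,
    # then rank the survivors by L2 distance to the query.
    scored = sorted(
        (sum((x - y) ** 2 for x, y in zip(vec, query)) ** 0.5, vid)
        for vid, vec, meta in entries
        if predicate(meta)
    )
    return [vid for _, vid in scored[:k]]
```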
### Using FAISS Plugin
```cpp
#include <sage_vdb/sage_vdb.h>
int main() {
DatabaseConfig config(768);
config.metric = DistanceMetric::L2;
config.anns_algorithm = "faiss";
// FAISS-specific build parameters
config.anns_build_params["index_type"] = "IVF256,Flat";
config.anns_build_params["metric"] = "l2";
// FAISS-specific query parameters
config.anns_query_params["nprobe"] = "8";
SageVDB db(config);
// Training data for IVF index
std::vector<Vector> training_data;
// ... populate training_data ...
db.train_index(training_data);
// Add vectors
// ... add your data ...
// Build index
db.build_index();
// Query
auto results = db.search(query, 10);
return 0;
}
```
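The `nprobe` parameter controls how many inverted lists an IVF index visits per query; a toy sketch of that trade-off (illustrative Python only, unrelated to FAISS's actual internals):

```python
def ivf_search(centroids, inverted_lists, query, k, nprobe):
    # Rank cluster centroids by distance to the query, then score
    # only the vectors stored under the nprobe closest clusters.
    def l2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    ranked = sorted(range(len(centroids)), key=lambda c: l2(centroids[c], query))
    scored = []
    for c in ranked[:nprobe]:
        scored.extend((l2(vec, query), vid) for vid, vec in inverted_lists[c])
    scored.sort()
    return [vid for _, vid in scored[:k]]
```

Raising `nprobe` improves recall at the cost of scanning more lists; setting it to the number of clusters degenerates to exhaustive search.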
### Multimodal Database
```cpp
#include <sage_vdb/multimodal_sage_vdb.h>
using namespace sage_vdb;
int main() {
// Configure multimodal database
DatabaseConfig config;
config.dimension = 0; // Will be auto-calculated from modalities
MultimodalSageVDB mdb(config);
// Register modality processors
auto text_processor = std::make_shared<TextModalityProcessor>(768);
auto image_processor = std::make_shared<ImageModalityProcessor>(512);
mdb.register_modality("text", text_processor);
mdb.register_modality("image", image_processor);
// Set fusion strategy
auto attention_fusion = std::make_shared<AttentionFusion>();
mdb.set_fusion_strategy(attention_fusion);
// Add multimodal data
std::unordered_map<std::string, Vector> modality_data;
modality_data["text"] = Vector(768, 0.5f); // Text embedding
modality_data["image"] = Vector(512, 0.3f); // Image embedding
Metadata metadata = {{"caption", "A beautiful sunset"}};
mdb.add_multimodal(modality_data, metadata);
// Multimodal query
std::unordered_map<std::string, Vector> query_data;
query_data["text"] = Vector(768, 0.6f);
auto results = mdb.search_multimodal(query_data, 10);
return 0;
}
```
### Persistence
```cpp
#include <sage_vdb/sage_vdb.h>
int main() {
DatabaseConfig config(128);
SageVDB db(config);
// Add data
// ...
// Save to disk
db.save("my_database.SageVDB");
// Later, load from disk
SageVDB db2(config);
db2.load("my_database.SageVDB");
// Database is ready to use
auto results = db2.search(query, 10);
return 0;
}
```
## 🔌 Plugin Development
### Creating a Custom ANNS Algorithm
1. **Implement the `ANNSAlgorithm` interface**:
```cpp
#include <sage_vdb/anns/anns_interface.h>
class MyANNS : public ANNSAlgorithm {
public:
// Identity
std::string name() const override { return "my_anns"; }
std::string version() const override { return "1.0.0"; }
std::string description() const override { return "My custom ANNS"; }
// Capabilities
bool supports_metric(DistanceMetric metric) const override {
return metric == DistanceMetric::L2;
}
bool supports_incremental_add() const override { return true; }
bool supports_deletion() const override { return false; }
// Build
void fit(const std::vector<VectorEntry>& data,
const AlgorithmParams& params) override {
// Build your index here
dimension_ = data.empty() ? 0 : data[0].vector.size();
// ... your implementation ...
}
// Query
ANNSResult query(const Vector& q, const QueryConfig& config) override {
// Perform search
ANNSResult result;
// ... your implementation ...
return result;
}
// Batch query (optional optimization)
std::vector<ANNSResult> query_batch(
const std::vector<Vector>& queries,
const QueryConfig& config) override {
// Default implementation calls query() for each
return ANNSAlgorithm::query_batch(queries, config);
}
// Lifecycle
bool is_built() const override { return built_; }
void save(const std::string& path) override { /* save index */ }
void load(const std::string& path) override { /* load index */ }
private:
bool built_ = false;
Dimension dimension_ = 0;
// ... your data structures ...
};
```
2. **Create a factory**:
```cpp
class MyANNSFactory : public ANNSFactory {
public:
std::string algorithm_name() const override { return "my_anns"; }
std::unique_ptr<ANNSAlgorithm> create(
const DatabaseConfig& config) override {
return std::make_unique<MyANNS>();
}
AlgorithmParams default_build_params() const override {
AlgorithmParams params;
params.set("my_param", 42);
return params;
}
AlgorithmParams default_query_params() const override {
AlgorithmParams params;
params.set("search_depth", 10);
return params;
}
};
```
3. **Register the algorithm**:
```cpp
// In a .cpp file (NOT in a header)
REGISTER_ANNS_ALGORITHM(MyANNSFactory);
```
4. **Use it**:
```cpp
DatabaseConfig config(128);
config.anns_algorithm = "my_anns";
config.anns_build_params["my_param"] = "100";
SageVDB db(config);
```
### Custom Fusion Strategy
```cpp
#include <sage_vdb/fusion_strategies.h>
class MyFusionStrategy : public FusionStrategy {
public:
std::string name() const override { return "my_fusion"; }
Vector fuse(const std::unordered_map<std::string, Vector>& modality_vectors,
const std::unordered_map<std::string, float>& weights) override {
// Implement your fusion logic
Vector result;
// ... your implementation ...
return result;
}
};
// Register and use
auto strategy = std::make_shared<MyFusionStrategy>();
multimodal_db.register_fusion_strategy("my_fusion", strategy);
multimodal_db.set_fusion_strategy_by_name("my_fusion");
```
## 📊 API Reference
### Core Classes
#### `SageVDB`
Main database class for vector operations.
**Methods**:
- `add(vector, metadata)` - Add single vector
- `add_batch(vectors, metadata)` - Batch add vectors
- `remove(id)` - Remove vector by ID
- `update(id, vector, metadata)` - Update existing vector
- `search(query, k)` - Find k nearest neighbors
- `filtered_search(query, params, filter)` - Search with metadata filtering
- `batch_search(queries, params)` - Batch search
- `build_index()` - Build/rebuild the index
- `train_index(training_data)` - Train index (for algorithms that need it)
- `save(filepath)` - Persist to disk
- `load(filepath)` - Load from disk
- `size()` - Number of vectors
- `dimension()` - Vector dimension
#### `MultimodalSageVDB`
Extended database for multimodal data fusion.
**Methods**:
- `register_modality(name, processor)` - Register modality processor
- `set_fusion_strategy(strategy)` - Set fusion strategy
- `add_multimodal(modality_data, metadata)` - Add multimodal entry
- `search_multimodal(query_data, k)` - Multimodal search
#### `VectorStore`
Low-level vector storage and retrieval.
#### `MetadataStore`
Metadata management and filtering.
#### `QueryEngine`
Search coordination and result ranking.
### Configuration Structures
#### `DatabaseConfig`
```cpp
struct DatabaseConfig {
IndexType index_type;
DistanceMetric metric;
Dimension dimension;
std::string anns_algorithm;
std::unordered_map<std::string, std::string> anns_build_params;
std::unordered_map<std::string, std::string> anns_query_params;
// ... index-specific params ...
};
```
#### `SearchParams`
```cpp
struct SearchParams {
uint32_t k; // Number of results
uint32_t nprobe; // Search scope (IVF)
float radius; // Radius search
bool include_metadata; // Include metadata in results
};
```
### Enumerations
#### `IndexType`
- `FLAT` - Brute force (exact)
- `IVF_FLAT` - Inverted file
- `IVF_PQ` - Inverted file with product quantization
- `HNSW` - Hierarchical NSW
- `AUTO` - Automatic selection
#### `DistanceMetric`
- `L2` - Euclidean distance
- `INNER_PRODUCT` - Inner product
- `COSINE` - Cosine similarity
## 🏗️ Architecture
```
SageVDB/
├── include/sage_vdb/ # Public headers
│ ├── common.h # Common types and constants
│ ├── sage_vdb.h # Main database interface
│ ├── multimodal_sage_vdb.h # Multimodal extension
│ ├── vector_store.h # Vector storage backend
│ ├── metadata_store.h # Metadata management
│ ├── query_engine.h # Search coordinator
│ ├── fusion_strategies.h # Multimodal fusion
│ ├── modality_processors.h # Modality handlers
│ └── anns/ # ANNS plugin system
│ ├── anns_interface.h # Plugin interface
│ ├── brute_force_plugin.h
│ └── faiss_plugin.h
├── src/ # Implementation
│ ├── sage_vdb.cpp
│ ├── vector_store.cpp
│ ├── metadata_store.cpp
│ ├── query_engine.cpp
│ ├── multimodal_sage_vdb.cpp
│ ├── fusion_strategies.cpp
│ └── anns/
│ ├── anns_interface.cpp
│ ├── brute_force_plugin.cpp
│ └── faiss_plugin.cpp
├── tests/ # Unit tests
│ ├── test_sage_vdb.cpp
│ └── test_multimodal.cpp
├── cmake/ # CMake modules
│ ├── FindBLASLAPACK.cmake
│ └── gperftools.cmake
├── build/ # Build output (generated)
├── install/ # Install output (generated)
├── CMakeLists.txt # Build configuration
├── build.sh # Build script
└── README.md # This file
```
## 🧪 Testing
### Unit Tests
```bash
# Build and run all tests
cd build
make test
# Run with verbose output
ctest -V
# Run specific test
./test_sage_vdb
./test_multimodal
```
### Performance Benchmarks
```bash
# Enable profiling
cmake -B build -DENABLE_GPERFTOOLS=ON
cmake --build build
# Run with profiler
CPUPROFILE=sage_vdb.prof ./build/test_sage_vdb
google-pprof --text ./build/test_sage_vdb sage_vdb.prof
```
### CI/CD
GitHub Actions workflows are configured in `.github/workflows/`:
- `ci-tests.yml` - Full test suite on push/PR
- `quick-test.yml` - Fast smoke tests
## 🔍 Troubleshooting
### libstdc++ Version Issues
If you encounter `GLIBCXX_3.4.30` errors in conda environments:
```bash
# Update libstdc++ in conda
conda install -c conda-forge libstdcxx-ng -y
# Or use system libstdc++
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
```
The build script (`build.sh`) automatically detects and handles this.
### FAISS Not Found
If FAISS is not detected but you have it installed:
```bash
# Set FAISS_ROOT before building
export FAISS_ROOT=/path/to/faiss
cmake -B build -DFAISS_ROOT=$FAISS_ROOT
```
Or install via conda:
```bash
conda install -c conda-forge faiss-cpu
# or
conda install -c conda-forge faiss-gpu
```
### OpenMP Not Available
OpenMP is optional but recommended for performance:
```bash
# Disable OpenMP if unavailable
cmake -B build -DUSE_OPENMP=OFF
```
## 📈 Performance Tips
1. **Use batch operations** when adding/querying multiple vectors
2. **Choose an appropriate index type**:
   - Fewer than 10K vectors: use `FLAT` (exact search)
   - 10K-1M vectors: use `IVF_FLAT` or `HNSW`
   - More than 1M vectors: use `IVF_PQ` for memory efficiency
3. **Enable OpenMP** for parallel processing
4. **Tune ANNS parameters** based on your accuracy/speed tradeoff
5. **Pre-allocate memory** for large datasets
6. **Use metadata filtering** to reduce search space
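The size thresholds in tip 2 can be encoded as a simple rule of thumb (hypothetical helper; the cutoffs are the guideline values above, not hard limits):

```python
def choose_index_type(num_vectors):
    # Rule-of-thumb mapping from dataset size to index type
    if num_vectors < 10_000:
        return "FLAT"      # exact search is still affordable
    if num_vectors <= 1_000_000:
        return "HNSW"      # or "IVF_FLAT"
    return "IVF_PQ"        # quantize for memory efficiency
```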
## 🧵 Multi-Threading and Service Integration
### Thread Safety Considerations
SageVDB is designed to be **service-friendly** and can seamlessly integrate with SAGE's multi-threaded service architecture:
#### Current Thread Safety Status
```cpp
// Read operations are thread-safe (concurrent reads allowed)
// Write operations should be serialized
std::vector<QueryResult> results = db.search(query, 10); // Thread-safe
```
#### Making SageVDB Fully Thread-Safe
If you plan to upgrade SageVDB to a fully multi-threaded engine, you have several options:
**Option 1: Internal Locking (Recommended for Service Use)**
```cpp
class SageVDB {
private:
mutable std::shared_mutex rw_mutex_; // Reader-writer lock
public:
VectorId add(const Vector& vector, const Metadata& metadata = {}) {
std::unique_lock<std::shared_mutex> lock(rw_mutex_);
// ... add implementation ...
}
std::vector<QueryResult> search(const Vector& query, uint32_t k) const {
std::shared_lock<std::shared_mutex> lock(rw_mutex_); // Multiple readers
// ... search implementation ...
}
};
```
**Option 2: Lock-Free Data Structures**
```cpp
// Use concurrent data structures for high-throughput scenarios
#include <tbb/concurrent_vector.h>
#include <tbb/concurrent_hash_map.h>
class VectorStore {
private:
tbb::concurrent_vector<Vector> vectors_;
tbb::concurrent_hash_map<VectorId, size_t> id_to_index_;
};
```
**Option 3: Thread-Local Index Copies (Read-Heavy Workloads)**
```cpp
class SageVDB {
private:
    std::atomic<std::shared_ptr<const Index>> shared_index_;  // Immutable index (C++20 atomic shared_ptr)
std::atomic<int> version_;
public:
void rebuild_index() {
// Build new index
auto new_index = std::make_shared<Index>(/* ... */);
shared_index_.store(new_index); // Atomic swap
version_.fetch_add(1);
}
};
```
### Integration with SAGE Service Layer
**The good news: SAGE's service architecture is designed to handle multi-threaded backends!**
#### How SAGE Service Layer Works
```python
# SAGE's ServiceManager handles thread safety automatically
class ServiceManager:
def __init__(self):
self._executor = ThreadPoolExecutor(max_workers=10)
self._lock = threading.Lock()
def call_sync(self, service_name, *args, **kwargs):
# Each service call runs in isolated context
# Your multi-threaded SageVDB is safe here!
return service.method(*args, **kwargs)
def call_async(self, service_name, *args, **kwargs):
# Async calls use thread pool
# Multiple concurrent requests are handled properly
return self._executor.submit(self.call_sync, ...)
```
#### Service Integration Example
Even with a multi-threaded SageVDB engine, the service wrapper remains simple:
```python
# packages/sage-middleware/.../sage_vdb_service.py
from threading import Lock
class SageVDBService:
"""Thread-safe service wrapper for multi-threaded SageVDB."""
def __init__(self, dimension: int = 768):
self._db = SageVDB.from_config(DatabaseConfig(dimension))
# Optional: Add Python-level locking if C++ doesn't provide it
self._write_lock = Lock()
def add(self, vector: np.ndarray, metadata: dict = None) -> int:
# Option A: If SageVDB has internal locking, just call it
return self._db.add(vector, metadata or {})
# Option B: If you need Python-level coordination
# with self._write_lock:
# return self._db.add(vector, metadata or {})
def search(self, query: np.ndarray, k: int = 5) -> List[dict]:
# Read operations are typically thread-safe
# No locking needed if C++ provides read concurrency
results = self._db.search(query, k=k)
return [{"id": r.id, "score": r.score, "metadata": r.metadata}
for r in results]
```
#### Usage in SAGE Pipeline
```python
from sage.core.api.local_environment import LocalEnvironment
from sage.core.api.function.map_function import MapFunction
class VectorSearch(MapFunction):
def execute(self, data):
# Concurrent calls are safe!
# SAGE's ServiceManager handles thread coordination
results = self.call_service("sage_vdb", data["query"], method="search", k=10)
# Or async for higher throughput
future = self.call_service_async("sage_vdb", data["query"], method="search", k=10)
results = future.result(timeout=5.0)
return results
# Register multi-threaded SageVDB service
env = LocalEnvironment()
env.register_service("sage_vdb", lambda: SageVDBService(dimension=768))
# Multiple concurrent requests work fine
(
env.from_batch(QuerySource, queries)
.map(VectorSearch) # Can run in parallel
.sink(ResultSink)
)
env.submit()
```
### Multi-Threading Best Practices
#### 1. **Choose the Right Threading Model**
```cpp
// For SAGE service integration, prefer these patterns:
// Pattern A: Reader-Writer Lock (balanced read/write)
class SageVDB {
mutable std::shared_mutex mutex_;
// Readers don't block each other
// Writers have exclusive access
};
// Pattern B: Partitioned Locking (high concurrency)
class SageVDB {
static constexpr size_t NUM_PARTITIONS = 16;
std::array<std::mutex, NUM_PARTITIONS> partition_locks_;
size_t get_partition(VectorId id) {
return id % NUM_PARTITIONS;
}
};
// Pattern C: Lock-Free (expert mode)
class SageVDB {
std::atomic<Index*> current_index_;
// RCU-style updates
};
```
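Pattern B maps each vector ID to one of a fixed set of locks, so writes to different partitions never contend; a Python analogue of the same idea (illustrative sketch mirroring the C++ pattern above):

```python
import threading

class PartitionedStore:
    NUM_PARTITIONS = 16

    def __init__(self):
        self._locks = [threading.Lock() for _ in range(self.NUM_PARTITIONS)]
        self._data = [{} for _ in range(self.NUM_PARTITIONS)]

    def _partition(self, vector_id):
        # Same ID always maps to the same partition
        return vector_id % self.NUM_PARTITIONS

    def put(self, vector_id, vector):
        p = self._partition(vector_id)
        with self._locks[p]:  # only this partition is blocked
            self._data[p][vector_id] = vector

    def get(self, vector_id):
        p = self._partition(vector_id)
        with self._locks[p]:
            return self._data[p].get(vector_id)
```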
#### 2. **GIL Awareness (Python Bindings)**
```cpp
// In Python bindings, release GIL for long operations
#include <pybind11/pybind11.h>
py::class_<SageVDB>(m, "SageVDB")
.def("search", [](const SageVDB& db, const Vector& query, int k) {
// Release Python GIL during C++ computation
py::gil_scoped_release release;
auto results = db.search(query, k);
py::gil_scoped_acquire acquire;
return results;
}, "Perform vector search");
```
#### 3. **Service-Level Connection Pooling**
```python
class SageVDBServicePool:
"""Pool of SageVDB instances for maximum concurrency."""
def __init__(self, dimension: int, pool_size: int = 4):
self._pool = [SageVDB(DatabaseConfig(dimension))
for _ in range(pool_size)]
self._current = 0
self._lock = threading.Lock()
def get_instance(self) -> SageVDB:
with self._lock:
idx = self._current
self._current = (self._current + 1) % len(self._pool)
return self._pool[idx]
def search(self, query, k=10):
# Round-robin across instances
db = self.get_instance()
return db.search(query, k)
```
### Performance Benchmarks: Single-Threaded vs Multi-Threaded
| Scenario | Single-Threaded | Multi-Threaded (4 cores) | Speedup |
|----------|----------------|--------------------------|---------|
| Concurrent Reads (1M vectors) | 100 QPS | 380 QPS | 3.8x |
| Mixed Read/Write (90/10) | 85 QPS | 240 QPS | 2.8x |
| Batch Insert (10K vectors) | 12K/sec | 35K/sec | 2.9x |
### Migration Checklist
If you're upgrading SageVDB to multi-threaded:
- [ ] Add `std::shared_mutex` or equivalent to core data structures
- [ ] Protect index updates with exclusive locks
- [ ] Allow concurrent reads with shared locks
- [ ] Release Python GIL in pybind11 bindings for long operations
- [ ] Add thread-safety tests (see `tests/test_thread_safety.cpp`)
- [ ] Update documentation to specify thread-safety guarantees
- [ ] Consider lock-free alternatives for hot paths
- [ ] Profile under concurrent load (use `perf` or `gperftools`)
### Example: Thread-Safe Index Update
```cpp
class SageVDB {
private:
mutable std::shared_mutex index_mutex_;
std::unique_ptr<ANNSAlgorithm> index_;
public:
void rebuild_index() {
// Build new index without holding lock
auto new_index = create_new_index();
new_index->fit(vectors_);
// Quick swap under exclusive lock
{
std::unique_lock lock(index_mutex_);
index_.swap(new_index);
}
// old index destroyed here (outside lock)
}
std::vector<QueryResult> search(const Vector& query, uint32_t k) const {
// Shared lock allows concurrent searches
std::shared_lock lock(index_mutex_);
return index_->query(query, QueryConfig{k});
}
};
```
### Summary
**Yes, SageVDB can absolutely work as a SAGE service even when multi-threaded!**
✅ **Why it works:**
- SAGE's `ServiceManager` already handles concurrent service calls
- Thread pool executor isolates each request
- Python GIL can be released in C++ for true parallelism
- Service wrapper can add additional coordination if needed
✅ **Recommended approach:**
1. Add internal locking to SageVDB C++ code (reader-writer pattern)
2. Release GIL in Python bindings for compute-intensive operations
3. Keep service wrapper simple - let C++ handle thread safety
4. Use `call_service_async` for high concurrency in pipelines
✅ **No breaking changes needed:**
- Service interface remains identical
- Existing SAGE pipelines work without modification
- Performance improves automatically with multi-threading
## 🔗 Integration
### Python Bindings
Python bindings are provided in `../python/` using pybind11:
```python
import _sage_vdb
config = _sage_vdb.DatabaseConfig(128)
db = _sage_vdb.SageVDB(config)
# ... use from Python ...
```
Use the optional `sage-anns` Python backend (no C++ rebuild required):
```python
from sagevdb import create_database
db = create_database(
128,
backend="sage-anns",
algorithm="faiss_hnsw",
metric="l2",
M=32,
ef_construction=200,
)
```
See `../README.md` for Python API documentation.
### Shared Library
Link against `libsage_vdb.so`:
```cmake
find_library(sage_vdb_LIB sage_vdb HINTS ${sage_vdb_ROOT}/lib)
target_link_libraries(my_app ${sage_vdb_LIB})
```
## 📚 Documentation
- **[ANNS Plugin Guide](../docs/anns_plugin_guide.md)** - Detailed plugin development
- **[Multimodal Design](../docs/multimodal_fusion_design.md)** - Architecture overview
- **[Multimodal Features](docs/guides/README_Multimodal.md)** - Multimodal usage guide
- **[Parent README](../README.md)** - SageVDB middleware documentation
## 🤝 Contributing
We welcome contributions! Please:
1. Follow C++20 best practices
2. Add tests for new features
3. Update documentation
4. Run `clang-format` before committing:
```bash
clang-format -i $(find src include -name '*.cpp' -o -name '*.h')
```
## 📄 License
This project is part of the SAGE system. See the [LICENSE](../../../../../LICENSE) file in the repository root.
## 🙏 Acknowledgments
- Inspired by [big-ann-benchmarks](https://github.com/erikbern/ann-benchmarks)
- FAISS integration from [Facebook AI](https://github.com/facebookresearch/faiss)
- Built with modern C++20 features
---
**Part of the SAGE Project** - [Documentation](../../../../../README.md) | [Issues](https://github.com/intellistream/SAGE/issues)
## Component Versions
<!-- START_VERSION_TABLE -->
| Component | Status | Latest Version |
|-----------|--------|----------------|
| [isage-vdb](https://pypi.org/project/isage-vdb/) | [](https://pypi.org/project/isage-vdb/) | `0.1.5` |
<!-- END_VERSION_TABLE -->
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | MIT | vector database, ANNS, similarity search, FAISS, embedding | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Database",
"License :: OSI Approved :: MIT License",
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.19.0",
"faiss-cpu>=1.7.0",
"isage-anns>=0.1.3",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"isage-pypi-publisher; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sageVDB",
"Documentation, https://github.com/intellistream/sageVDB#readme",
"Repository, https://github.com/intellistream/sageVDB",
"Issues, https://github.com/intellistream/sageVDB/issues"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T08:50:44.238844 | isage_vdb-0.2.0.1-cp311-cp311-manylinux_2_34_x86_64.whl | 472,645 | 6b/18/e0903b68423441c1cee9e74fdb0c2a4012f3dd708a5881bea71d231f9b18/isage_vdb-0.2.0.1-cp311-cp311-manylinux_2_34_x86_64.whl | cp311 | bdist_wheel | null | false | e5d1c7df9ae3746244b6e88a273e928e | b3711903278775e5d892410b0a95328047bd2bb3d4a79b5402de27c838c7338c | 6b18e0903b68423441c1cee9e74fdb0c2a4012f3dd708a5881bea71d231f9b18 | null | [] | 139 |
2.4 | anticipator-ai | 0.1.0 | Runtime threat detection for multi-agent AI systems | # anticipator
Anticipator is an open-source security toolkit for multi-agent AI systems. It detects credential exposure, prompt injection, and anomalous agent behavior across LangGraph, CrewAI, and AutoGen before they become incidents.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"pyahocorasick>=2.0",
"langgraph>=1.0.0; extra == \"langgraph\"",
"langchain>=1.0.0; extra == \"langgraph\"",
"crewai>=1.0.0; extra == \"crewai\"",
"langgraph>=1.0.0; extra == \"all\"",
"langchain>=1.0.0; extra == \"all\"",
"crewai>=1.0.0; extra == \"all\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.1 | 2026-02-20T08:49:24.033020 | anticipator_ai-0.1.0.tar.gz | 5,518 | ce/50/be1f21a7662587ed000784de1cee8a8bd1bfc5e0fbbd5812c02e2d41fab7/anticipator_ai-0.1.0.tar.gz | source | sdist | null | false | 920423a8a9e068b48aa1d7c673329b26 | 14b878e61b5af5320069842782829fc7f740f2b7884171ca3712e64fc3725ebb | ce50be1f21a7662587ed000784de1cee8a8bd1bfc5e0fbbd5812c02e2d41fab7 | Apache-2.0 | [
"LICENSE"
] | 238 |
2.4 | bat-adk | 2026.2 | Software Development Kit for building AI Agents in BubbleRAN MX-PDK and MX-AI | # BubbleRAN Agentic Toolkit - Agent Development Kit (ADK)
[](../LICENSE)
[](https://pypi.org/project/bat-adk/)
The **BAT-ADK** is a Python-based Software Development Kit designed to simplify the development, deployment, and integration of AI Agents within the BubbleRAN architecture.
This repository includes the ADK framework ([BubbleRAN Software License](https://bubbleran.com/resources/files/BubbleRAN_Licence-Agreement-1.3.pdf)).
## Key Features
- 🛠️ Easy-to-use Python SDK for developing AI Agents
- 🔗 Integrates the [LangGraph](https://pypi.org/project/langgraph/) library with the [A2A SDK](https://pypi.org/project/a2a-sdk/) and the MCP SDK for building production-ready AI Agents (beyond POCs)
- ☁️ Ready for Cloud-Native deployment with BubbleRAN [MX-AI](https://bubbleran.com/products/mx-ai/)
- 🧩 Prebuilt agentic workflows (e.g. ReAct, A2A communication)
## Getting Started
### Prerequisites
- Python 3.12+
- `uv` (recommended) or `pip`
## Installation
### Using `uv`
```bash
uv add bat-adk
```
### Using `pip`
```bash
pip install bat-adk
```
## Documentation
The BAT-ADK uses [`pydoc-markdown`](https://pydoc-markdown.readthedocs.io/) to generate API documentation directly from Python docstrings.
### Generating the Documentation
To build the documentation locally, run:
```bash
uv run pydoc-markdown
```
The generated documentation will be available at `adk/build/docs/content/bat-adk`.
| text/markdown | null | Andrea LEONE <andrea.leone@bubbleran.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"a2a-sdk>=0.3.20",
"a2a-sdk[http-server]",
"httpx>=0.28.1",
"langchain>=0.3.24",
"langchain-mcp-adapters<0.2.0,>=0.1.13",
"langchain-nvidia-ai-endpoints>=0.3.9",
"langchain-openai>=0.3.14",
"langchain-ollama>=0.3.3",
"langgraph>=1.0.0",
"mcp[cli]>=1.17.0",
"pydantic>=2.10.6",
"pyyaml>=6.0.1",
"uvicorn>=0.37.0"
] | [] | [] | [] | [
"Homepage, https://bubbleran.com/",
"Repository, https://github.com/bubbleran/bat"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-20T08:49:08.059452 | bat_adk-2026.2.tar.gz | 32,279 | 8c/10/ccfc20a668b406f60fbc3ba9407d17ea945b616a73f5ff6a6767f0d0e246/bat_adk-2026.2.tar.gz | source | sdist | null | false | 45a86a4bf5b4566dd4458240c9c80918 | 3a14d28b57147e53ff7a11505e87f3c65765c784d2b9288457433112ef325809 | 8c10ccfc20a668b406f60fbc3ba9407d17ea945b616a73f5ff6a6767f0d0e246 | null | [] | 213 |
2.4 | pmos-brain | 3.2.0 | Semantic Knowledge Graph with Graph Analytics, Event Sourcing, Enrichment Pipeline, Vector Search, MCP Server & Quality Scoring | # PM-OS Brain
```
╭────╮
╭────┤ ● ├────╮
│ ╰──┬─╯ │
╭──┴─╮ │ ╭──┴─╮
╭────┤ ● ├────┼───┤ ● ├────╮
│ ╰──┬─╯ │ ╰──┬─╯ │
╭──┴─╮ │ ╭──┴─╮ │ ╭──┴─╮
│ ● ├────┼───┤ ● ├────┼───┤ ● │
╰──┬─╯ │ ╰──┬─╯ │ ╰──┬─╯
│ ╭──┴─╮ │ ╭──┴─╮ │
╰────┤ ● ├────┼───┤ ● ├────╯
╰──┬─╯ │ ╰──┬─╯
│ ╭──┴─╮ │
╰────┤ ● ├────╯
╰────╯
██████╗ ██████╗ █████╗ ██╗███╗ ██╗
██╔══██╗██╔══██╗██╔══██╗██║████╗ ██║
██████╔╝██████╔╝███████║██║██╔██╗ ██║
██╔══██╗██╔══██╗██╔══██║██║██║╚██╗██║
██████╔╝██║ ██║██║ ██║██║██║ ╚████║
╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═╝╚═╝ ╚═══╝
Semantic Knowledge Graph for AI Agents
```
[](https://badge.fury.io/py/pmos-brain)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A structured knowledge management system that stores entities (people, projects, teams) as markdown files with YAML frontmatter, connected through typed relationships. Includes event sourcing, a compressed entity index generator, vector search, MCP server, and graph analytics. Part of the [PM-OS](https://github.com/feamando/pm-os) ecosystem.
## What's New in v3.1.0
- **MCP Server** — Expose your knowledge graph to any MCP-compatible AI client (Cursor, Windsurf, Claude Code) with 5 built-in tools
- **Vector Search** — ChromaDB + sentence-transformers semantic search across all entities with embedding-based edge inference
- **Canonical Resolver** — Multi-format entity resolution (`$id`, slug, path, alias) with fuzzy matching
- **Enhanced Search** — Inverted index with Porter stemming, O(1) alias lookup, query expansion, and optional semantic fallback
- **Brain Query** — Combined BRAIN (keyword) + GRAPH (traversal) query interface with relevance scoring
- **Enrichment Orchestrator** — Multi-mode enrichment (full/quick/report/boot/orphan) with pluggable external enrichers
- **PM Frameworks** — 35 product management framework documents for reference and agent context
- **Enhanced Orphan Analyzer** — Standalone marking, pending enrichment tracking, event audit trails
### v3.0.0
- **Event Helpers** — Pydantic-validated event creation with factory methods and automatic compaction
- **Event Query CLI** — Query entity timelines, recent activity, and event statistics
- **Brain Index Generator** — Compressed `BRAIN.md` entity index for passive agent context
- **Retrieval-Led Reasoning** — Recommended usage pattern for AI agent integration
## Installation
```bash
# Basic installation
pip install pmos-brain
# With specific LLM provider
pip install pmos-brain[anthropic] # Claude
pip install pmos-brain[openai] # GPT-4
pip install pmos-brain[gemini] # Gemini
pip install pmos-brain[mistral] # Mistral
pip install pmos-brain[ollama] # Local models
# With all LLM providers
pip install pmos-brain[llm]
# With vector search (ChromaDB + sentence-transformers)
pip install pmos-brain[vector]
# With MCP server
pip install pmos-brain[mcp]
# With integrations
pip install pmos-brain[slack]
pip install pmos-brain[jira]
pip install pmos-brain[github]
pip install pmos-brain[integrations] # All integrations
# Everything
pip install pmos-brain[all]
```
## Quick Start
### Python API
```python
from pmos_brain import Brain, LLMClient
# Initialize brain
brain = Brain("./my-brain")
# Search entities
results = brain.search("product manager")
for entity in results:
print(f"{entity.name} ({entity.entity_type})")
# Get specific entity
person = brain.get("Entities/Jane_Smith")
print(person.relationships)
# Create new entity
project = brain.create(
name="Mobile App v2",
entity_type="project",
content="# Mobile App v2\n\nRedesign project...",
metadata={"status": "in_progress", "priority": "P1"}
)
# Use LLM for entity extraction
llm = LLMClient() # Uses ANTHROPIC_API_KEY by default
response = llm.complete(
"Extract all person names from this text: ...",
system="Return names as a JSON array."
)
```
### CLI
```bash
# Initialize a new brain
pmos-brain setup ./my-brain
# Search entities
pmos-brain search "product manager" --brain ./my-brain
# List all entities
pmos-brain list --type person
# Get entity details
pmos-brain get Entities/Jane_Smith
# Validate brain structure
pmos-brain validate
# Query entity events
pmos-brain events timeline Entities/Jane_Smith.md
pmos-brain events recent --days 7
pmos-brain events stats --since 2026-01-01
# Generate compressed entity index
pmos-brain index --config team.yaml --output BRAIN.md
# Combined BRAIN+GRAPH query (v3.1.0)
pmos-brain query "mobile app" --limit 5
pmos-brain query "project launch" --no-graph --format json
# Semantic search (v3.1.0, requires pmos-brain[vector])
pmos-brain search "checkout flow redesign" --semantic
pmos-brain vector build # Build vector index
pmos-brain vector query "onboarding" # Query vector index
pmos-brain vector stats # Index statistics
# Resolve entity references (v3.1.0)
pmos-brain resolve "jane-smith"
pmos-brain resolve "entity/person/jane-smith"
# Run enrichment (v3.1.0)
pmos-brain enrich --mode quick
pmos-brain enrich --mode report
# Start MCP server (v3.1.0, requires pmos-brain[mcp])
pmos-brain mcp
```
## Event Sourcing
Brain v3.0.0 introduces a structured event sourcing system. Every entity change is tracked as an immutable event in the entity's YAML frontmatter.
### Event Helpers API
```python
from pmos_brain import EventHelper
# Create a field update event
event = EventHelper.create_field_update(
actor="system/enricher",
field="role",
new_value="Director",
old_value="Senior Manager",
)
# Create a relationship event
event = EventHelper.create_relationship_event(
actor="user/jane",
target="entity/team/platform",
rel_type="member_of",
operation="add",
)
# Create a status change event
event = EventHelper.create_status_change(
actor="system/workflow",
old_status="active",
new_status="archived",
)
# Append event to entity frontmatter (auto-increments version, compacts at threshold)
frontmatter = {"$version": 1, "$events": []}
EventHelper.append_to_frontmatter(frontmatter, event)
```
### Event Types
| Type | Description |
|------|-------------|
| `entity_create` | Entity was created |
| `entity_delete` | Entity was deleted |
| `field_update` | A field value changed |
| `relationship_add` | A relationship was added |
| `relationship_remove` | A relationship was removed |
| `status_change` | Entity status changed |
| `enrichment` | Data enriched from external source |
| `compacted_summary` | Summarized event group (from compaction) |
### Event Compaction
When an entity accumulates more than 10 events, automatic compaction runs: the first event (creation) and the most recent events are preserved, while middle events are summarized into a `compacted_summary` event. This keeps frontmatter lean without losing history.
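The policy reads roughly like this in Python (a sketch of the behavior described above, not the library's implementation; the threshold and keep-count values here are illustrative):

```python
def compact_events(events, threshold=10, keep_recent=5):
    """Keep the creation event and the most recent events; summarize the rest."""
    if len(events) <= threshold:
        return events  # below the threshold, nothing to compact
    first, middle, recent = events[0], events[1:-keep_recent], events[-keep_recent:]
    summary = {
        "type": "compacted_summary",
        "count": len(middle),
        "types": sorted({e["type"] for e in middle}),
    }
    return [first, summary, *recent]

history = [{"type": "entity_create"}] + [{"type": "field_update"}] * 12
compacted = compact_events(history)
# creation + summary + 5 recent = 7 entries
```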
### Event Query
```python
from pmos_brain import EventQuery
from pathlib import Path
from datetime import datetime, timedelta, timezone
query = EventQuery(brain_path=Path("./my-brain"))
# Get entity timeline
timeline = query.get_timeline("Entities/Jane_Smith.md")
# Recent events across all entities
since = datetime.now(timezone.utc) - timedelta(days=7)
events = query.get_recent(since=since, limit=50)
# Event statistics
stats = query.get_stats(since=since)
print(f"Total: {stats['total']}, By type: {stats['by_type']}")
```
## Brain Index Generator
The `BrainIndexGenerator` creates a compressed `BRAIN.md` file — a pipe-delimited entity index designed for loading into AI agent context windows.
### Two-Tier Architecture
- **Tier 1 (Team)**: Manager, direct reports, stakeholders — includes full relationship data
- **Tier 2 (Connected)**: One-hop relationship targets from Tier 1 + hot topics — compact format
### Usage
```python
from pmos_brain import BrainIndexGenerator
from pathlib import Path
generator = BrainIndexGenerator(
brain_path=Path("./my-brain"),
team_config={
"user": {"name": "Jane Smith", "position": "Director"},
"manager": {"id": "john-doe", "name": "John Doe", "role": "VP"},
"reports": [
{"id": "alice-b", "name": "Alice B", "role": "PM", "squad": "Alpha"},
],
"stakeholders": [
{"id": "bob-c", "name": "Bob C", "role": "CTO"},
],
}
)
# Optional: include hot topic entities in Tier 2
generator.set_hot_topics(["mobile-app-v2", "quarterly-planning"])
content = generator.generate()
Path("BRAIN.md").write_text(content)
```
### CLI
```bash
# Generate with team config
brain-index --brain-path ./my-brain --config team.yaml --output BRAIN.md
# Or via the main CLI
pmos-brain index --brain ./my-brain --config team.yaml
```
### Output Format
```markdown
# BRAIN.md — Entity Index
<!-- Generated: 2026-02-11T12:00:00Z | Entities: 45 | Tier1: 8 | Tier2: 37 -->
## Team (Tier 1)
id|type|role|squad|status|relationships
jane-smith|person|Director||active|manages:alice-b,member_of:leadership
alice-b|person|PM|Alpha|active|reports_to:jane-smith,owns:mobile-app
## Connected Entities (Tier 2)
id|type|name|status
mobile-app|project|Mobile App|active
platform-team|team|Platform Team|active
```
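Rows in this format split cleanly on `|`, so an agent or downstream script can recover structure without any parser dependency. A quick sketch against the Tier 1 sample row above (the helper name is ours, not part of the pmos-brain API):

```python
def parse_tier1_row(row: str) -> dict:
    """Parse one Tier 1 line of BRAIN.md into a dict (fields per the sample)."""
    entity_id, etype, role, squad, status, rels = row.split("|")
    # relationships are comma-separated type:target pairs
    relationships = dict(r.split(":", 1) for r in rels.split(",")) if rels else {}
    return {"id": entity_id, "type": etype, "role": role,
            "squad": squad or None, "status": status,
            "relationships": relationships}

row = "alice-b|person|PM|Alpha|active|reports_to:jane-smith,owns:mobile-app"
entity = parse_tier1_row(row)
# entity["relationships"] == {"reports_to": "jane-smith", "owns": "mobile-app"}
```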
## MCP Server
The Brain MCP server exposes your knowledge graph to any MCP-compatible AI client (Cursor, Windsurf, Claude Code, etc.).
### Tools
| Tool | Description |
|------|-------------|
| `search_entities` | Keyword + semantic search across entities |
| `get_entity` | Retrieve full entity content by path |
| `query_knowledge` | Combined BRAIN+GRAPH query |
| `get_relationships` | Get entity relationships |
| `list_entities` | List entities by type |
### Usage
```bash
# Start the MCP server
pmos-brain mcp --brain ./my-brain
# Or set brain path via environment variable
export BRAIN_PATH=./my-brain
python -m pmos_brain.mcp.server
```
### MCP Client Configuration
Add to your MCP client config (e.g., Cursor `mcp.json`):
```json
{
"brain": {
"command": "brain-mcp",
"env": {
"BRAIN_PATH": "/path/to/your/brain"
}
}
}
```
## Vector Search
ChromaDB-powered semantic search using sentence-transformers embeddings. Enables fuzzy, meaning-based entity discovery.
```python
from pmos_brain.vector import BrainVectorIndex, VECTOR_AVAILABLE
if VECTOR_AVAILABLE:
vi = BrainVectorIndex(brain_path=Path("./my-brain"))
# Build/rebuild the index
vi.build()
# Semantic search
results = vi.query("checkout flow redesign", n_results=10)
for r in results:
print(f"{r['id']} (distance: {r['distance']:.3f})")
```
### Embedding Edge Inference
Automatically discover potential relationships between entities based on embedding similarity:
```python
from pmos_brain.vector.edge_inferrer import EmbeddingEdgeInferrer
inferrer = EmbeddingEdgeInferrer(brain_path=Path("./my-brain"))
report = inferrer.infer_edges(entity_type="person", threshold=0.7)
for edge in report.edges:
print(f"{edge.source} → {edge.target} (confidence: {edge.confidence:.2f})")
```
## Canonical Resolver
Resolve entity references in any format to their canonical path:
```python
from pmos_brain import CanonicalResolver
resolver = CanonicalResolver(brain_path=Path("./my-brain"))
# All of these resolve to the same entity
resolver.resolve("jane-smith") # slug
resolver.resolve("entity/person/jane-smith") # $id
resolver.resolve("Entities/Jane_Smith.md") # file path
resolver.resolve("Jane") # alias
# Find similar entities (fuzzy matching)
resolver.find_similar("jne-smith", limit=5)
```
## Enrichment Orchestrator
Multi-mode enrichment pipeline for improving graph density and data quality:
```python
from pmos_brain.enrichers.orchestrator import BrainEnrichmentOrchestrator
orchestrator = BrainEnrichmentOrchestrator(brain_path=Path("./my-brain"))
# Full enrichment: health → soft edges → decay scan → hints → health comparison
result = orchestrator.run(mode="full")
# Quick mode: only soft edge inference
result = orchestrator.run(mode="quick")
# Report mode: analysis only, no changes
result = orchestrator.run(mode="report")
# Orphan cleanup: 4-phase orphan resolution
result = orchestrator.run(mode="orphan")
```
### Pluggable External Enrichers
Register custom enrichers for your data sources:
```python
from pmos_brain.enrichers.orchestrator import BrainEnrichmentOrchestrator, ExternalEnricher
class MySlackEnricher:
"""Implements ExternalEnricher protocol."""
def enrich_entity(self, entity_path, brain_path) -> dict:
# Your enrichment logic here
return {"relationships_added": 3}
orchestrator = BrainEnrichmentOrchestrator(brain_path=Path("./my-brain"))
orchestrator.register_enricher(MySlackEnricher())
result = orchestrator.run(mode="full")
```
## PM Frameworks
Brain v3.1.0 includes 35 product management framework documents in the `frameworks/` directory. These can be loaded into agent context or used as reference material:
- Competitive Analysis
- Conducting User Interviews
- Designing Growth Loops
- Evaluating Trade-offs
- Planning Under Uncertainty
- Prioritization Frameworks
- Writing Product Specs
- ...and 28 more
## Retrieval-Led Reasoning
Research on AI agent architectures (Vercel, 2025) shows that **passive context** — loading relevant knowledge into an agent's context window at session start — significantly outperforms tool-based retrieval for structured knowledge tasks. In benchmarks, agents with pre-loaded context achieved 100% task pass rates versus 53% for agents relying on tool calls to retrieve information on demand.
### Why This Matters
Tool-based retrieval (e.g., "search for person X, then read their file") introduces latency, costs tokens on tool orchestration, and creates failure modes when the agent doesn't know what to search for. Passive context gives the agent immediate access to the knowledge graph structure without any tool calls.
### Recommended Pattern
1. **Generate** `BRAIN.md` at session start (or after enrichment runs)
2. **Load** `BRAIN.md` into the agent's system prompt or initial context
3. **Instruct** the agent to consult the index before referencing entities
Example system prompt snippet:
```
You have access to the entity index in BRAIN.md. Before referencing any person,
team, project, or system, check BRAIN.md first. For entities not in the index,
use the brain_loader tool or read the entity file directly.
```
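Steps 1-3 can be wired together in a few lines. A sketch (the helper below is hypothetical glue code, not part of the pmos-brain API; step 1 would be `pmos-brain index` or `BrainIndexGenerator`):

```python
from pathlib import Path

def load_passive_context(base_prompt: str, index_path: Path) -> str:
    """Prepend a generated BRAIN.md index to an agent's system prompt."""
    if not index_path.exists():
        return base_prompt  # fall back gracefully if the index wasn't generated
    return f"{base_prompt}\n\n# Entity Index (BRAIN.md)\n{index_path.read_text()}"

# Example with a throwaway index file
index_file = Path("BRAIN.example.md")
index_file.write_text("id|type|role\njane-smith|person|Director\n")
prompt = load_passive_context("You are a PM assistant.", index_file)
index_file.unlink()
```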
### When to Regenerate
- After enrichment pipeline runs (new data ingested)
- At the start of each agent session
- After significant entity changes (new team members, project status updates)
The compressed pipe-delimited format keeps the index under ~8KB, small enough for any context window while covering 100+ entities.
## Entity Structure
Entities are markdown files with YAML frontmatter:
```markdown
---
$type: person
$version: 3
$status: active
$updated: "2026-02-11T10:00:00Z"
name: Jane Smith
aliases: [Jane, J. Smith]
role: Senior Product Manager
$relationships:
- type: member_of
target: "entity/team/consumer"
- type: owns
target: "entity/project/mobile-app"
$events:
- event_id: evt-abc123
type: entity_create
actor: system/setup
timestamp: "2026-01-15T09:00:00Z"
changes:
- field: $schema
operation: set
value: brain://entity/person/v1
message: Created entity
---
# Jane Smith
Senior Product Manager on the Consumer team.
## Current Focus
- Mobile App v2 redesign
- Push notification strategy
```
## LLM Providers
Brain supports multiple LLM providers with automatic fallback:
| Provider | Models (Latest) | Best For |
|----------|-----------------|----------|
| **Anthropic** | claude-sonnet-4-20250514, claude-opus-4-20250514 | Entity extraction, reasoning |
| **OpenAI** | gpt-4o, o1, o3-mini | General purpose, embeddings |
| **Gemini** | gemini-2.0-flash-exp, gemini-2.0-pro-exp | Fast summarization |
| **Mistral** | mistral-large-2411, codestral-2405 | Balanced cost/quality |
| **Ollama** | llama3.2, qwen2.5, deepseek-r1, phi4 | Local/offline, privacy |
| **Groq** | llama-3.3-70b-versatile | Ultra-fast inference |
| **Bedrock** | claude-3-5-sonnet, amazon.nova-pro | Enterprise AWS |
```python
from pmos_brain import LLMClient
# Uses config/env for provider selection
client = LLMClient()
# Or specify provider
client = LLMClient(provider="anthropic")
# With fallback
client = LLMClient(
provider="anthropic",
fallback=["openai", "ollama"]
)
# Generate completion
response = client.complete("What is 2+2?")
print(response.content)
# Generate embeddings
embeddings = client.embed(["text to embed"])
print(embeddings.dimensions)
```
## Configuration
Create `config.yaml` in your brain directory:
```yaml
llm:
provider: anthropic
fallback: [openai, gemini, ollama]
providers:
anthropic:
model: claude-sonnet-4-20250514
openai:
model: gpt-4o
embedding_model: text-embedding-3-large
# Team config for Brain Index Generator
user:
name: "Jane Smith"
position: "Director of Product"
team:
manager:
id: john-doe
name: "John Doe"
role: "VP of Product"
reports:
- id: alice-engineer
name: "Alice Engineer"
role: "Staff Engineer"
squad: "Platform"
stakeholders:
- id: bob-designer
name: "Bob Designer"
role: "Head of Design"
```
Or use environment variables:
```bash
export LLM_PROVIDER=anthropic
export ANTHROPIC_API_KEY=sk-ant-...
export LLM_FALLBACK_ORDER=openai,ollama
```
## Directory Structure
```
my-brain/
├── Entities/ # People, teams, companies
│ ├── Jane_Smith.md
│ └── Team_Consumer.md
├── Projects/ # Active projects
│ └── Mobile_App.md
├── Architecture/ # Technical documentation
├── Strategy/ # Strategic documents
├── Decisions/ # ADRs and decisions
├── Inbox/ # Unprocessed data
├── .schema/ # Entity schemas
├── .chroma/ # Vector index (generated by pmos-brain vector build)
├── frameworks/ # PM framework reference docs
├── registry.yaml # Entity index
├── BRAIN.md # Compressed entity index (generated)
└── config.yaml # Configuration
```
## Development
```bash
# Clone repo
git clone https://github.com/feamando/brain.git
cd brain
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest
# Run specific tests
pytest tools/tests/test_event_helpers.py -v
# Format code
black src/
ruff check src/
```
## License
MIT License - see [LICENSE](LICENSE) for details.
---
*Part of [PM-OS](https://github.com/feamando/pm-os) - Product Management Operating System*
| text/markdown | null | PM-OS Team <pm-os@example.com> | null | null | MIT | ai-agent, brain-index, embeddings, enrichment, event-sourcing, graph-analytics, knowledge-graph, llm, mcp, pm-os, product-management, semantic, vector-search | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonschema>=4.0",
"pydantic>=2.0",
"python-dotenv>=1.0",
"pyyaml>=6.0",
"requests>=2.28",
"anthropic>=0.18; extra == \"all\"",
"atlassian-python-api>=3.0; extra == \"all\"",
"black>=23.0; extra == \"all\"",
"chromadb>=0.4; extra == \"all\"",
"google-api-python-client>=2.0; extra == \"all\"",
"google-auth-httplib2>=0.1; extra == \"all\"",
"google-auth-oauthlib>=1.0; extra == \"all\"",
"google-generativeai>=0.4; extra == \"all\"",
"matplotlib>=3.0; extra == \"all\"",
"mcp>=1.0; extra == \"all\"",
"mistralai>=0.1; extra == \"all\"",
"mypy>=1.0; extra == \"all\"",
"networkx>=3.0; extra == \"all\"",
"numpy>=1.24; extra == \"all\"",
"ollama>=0.1; extra == \"all\"",
"openai>=1.0; extra == \"all\"",
"pygithub>=2.0; extra == \"all\"",
"pytest-asyncio>=0.21; extra == \"all\"",
"pytest>=7.0; extra == \"all\"",
"ruff>=0.1; extra == \"all\"",
"sentence-transformers>=2.0; extra == \"all\"",
"slack-sdk>=3.0; extra == \"all\"",
"spacy>=3.0; extra == \"all\"",
"anthropic>=0.18; extra == \"anthropic\"",
"boto3>=1.34; extra == \"bedrock\"",
"black>=23.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"numpy>=1.24; extra == \"embeddings\"",
"sentence-transformers>=2.0; extra == \"embeddings\"",
"anthropic>=0.18; extra == \"full\"",
"atlassian-python-api>=3.0; extra == \"full\"",
"black>=23.0; extra == \"full\"",
"chromadb>=0.4; extra == \"full\"",
"google-api-python-client>=2.0; extra == \"full\"",
"google-auth-httplib2>=0.1; extra == \"full\"",
"google-auth-oauthlib>=1.0; extra == \"full\"",
"google-generativeai>=0.4; extra == \"full\"",
"matplotlib>=3.0; extra == \"full\"",
"mcp>=1.0; extra == \"full\"",
"mistralai>=0.1; extra == \"full\"",
"mypy>=1.0; extra == \"full\"",
"networkx>=3.0; extra == \"full\"",
"numpy>=1.24; extra == \"full\"",
"ollama>=0.1; extra == \"full\"",
"openai>=1.0; extra == \"full\"",
"pygithub>=2.0; extra == \"full\"",
"pytest-asyncio>=0.21; extra == \"full\"",
"pytest>=7.0; extra == \"full\"",
"ruff>=0.1; extra == \"full\"",
"sentence-transformers>=2.0; extra == \"full\"",
"slack-sdk>=3.0; extra == \"full\"",
"spacy>=3.0; extra == \"full\"",
"google-generativeai>=0.4; extra == \"gemini\"",
"pygithub>=2.0; extra == \"github\"",
"google-api-python-client>=2.0; extra == \"google\"",
"google-auth-httplib2>=0.1; extra == \"google\"",
"google-auth-oauthlib>=1.0; extra == \"google\"",
"matplotlib>=3.0; extra == \"graph\"",
"networkx>=3.0; extra == \"graph\"",
"atlassian-python-api>=3.0; extra == \"integrations\"",
"google-api-python-client>=2.0; extra == \"integrations\"",
"google-auth-httplib2>=0.1; extra == \"integrations\"",
"google-auth-oauthlib>=1.0; extra == \"integrations\"",
"pygithub>=2.0; extra == \"integrations\"",
"slack-sdk>=3.0; extra == \"integrations\"",
"atlassian-python-api>=3.0; extra == \"jira\"",
"litellm>=1.0; extra == \"litellm\"",
"anthropic>=0.18; extra == \"llm\"",
"google-generativeai>=0.4; extra == \"llm\"",
"mistralai>=0.1; extra == \"llm\"",
"ollama>=0.1; extra == \"llm\"",
"openai>=1.0; extra == \"llm\"",
"mcp>=1.0; extra == \"mcp\"",
"mistralai>=0.1; extra == \"mistral\"",
"spacy>=3.0; extra == \"nlp\"",
"ollama>=0.1; extra == \"ollama\"",
"openai>=1.0; extra == \"openai\"",
"slack-sdk>=3.0; extra == \"slack\"",
"chromadb>=0.4; extra == \"vector\"",
"numpy>=1.24; extra == \"vector\"",
"sentence-transformers>=2.0; extra == \"vector\""
] | [] | [] | [] | [
"Homepage, https://github.com/feamando/brain",
"Documentation, https://github.com/feamando/brain#readme",
"Repository, https://github.com/feamando/brain",
"Issues, https://github.com/feamando/brain/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:48:02.558584 | pmos_brain-3.2.0.tar.gz | 156,588 | 26/da/75e44b5d14e91a9251217cde45c5f3a120b407418498c297d62d564c5eab/pmos_brain-3.2.0.tar.gz | source | sdist | null | false | 3273c36f9d1d4abfd9f212ee4e3fd02c | afe03a79d7520c48f4c6022ad4cffc661f04ffb8b06c9885a8910eeb1318b057 | 26da75e44b5d14e91a9251217cde45c5f3a120b407418498c297d62d564c5eab | null | [
"LICENSE"
] | 211 |
2.4 | rahavard | 0.0.75 | Re-Usable Utils |
# rahavard
Under construction! Not ready for use yet! Currently experimenting and planning!
| text/markdown | Davoud Arsalani | d_arsalani@yahoo.com | null | null | null | python | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Framework :: Django",
"License :: OSI Approved :: MIT License",
"Topic :: Utilities",
"Operating System :: Unix",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows"
] | [] | https://github.com/davoudarsalani/rahavard | null | null | [] | [] | [] | [
"beautifulsoup4",
"convert_numbers",
"django",
"jdatetime",
"natsort"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:47:36.638010 | rahavard-0.0.75.tar.gz | 13,651 | 31/d7/5fd5d9208c8466c1a80595a54b461285c6d5193f2e10da9d1efc0665abfe/rahavard-0.0.75.tar.gz | source | sdist | null | false | 573c88f70436f6fd55758b41c637053e | 839787f780266028c378990156d70744893d2488c89fb23b586fc08410af487a | 31d75fd5d9208c8466c1a80595a54b461285c6d5193f2e10da9d1efc0665abfe | null | [
"LICENSE"
] | 231 |
2.4 | celine-sdk | 1.2.1 | CELINE SDK | # celine-sdk
Shared CELINE SDK:
- Versioned OpenAPI specs under `openapi/<service>/v<version>/openapi.json`
- Generated OpenAPI clients under `celine.sdk.openapi.<package>`
- Shared infrastructure:
- OIDC token providers (`celine.sdk.auth`)
- MQTT broker abstraction (`celine.sdk.broker`)
- Pydantic settings (`celine.sdk.settings`)
## CLI
```bash
# fetch and version specs (writes to ./openapi)
celine-sdk spec fetch services.yaml
# list discovered versions
celine-sdk spec list
# generate clients (requires: pip install 'celine-sdk[gen]')
celine-sdk generate services.yaml
```
## services.yaml
```yaml
services:
digital-twin:
package: dt
openapi: http://dt:8000/openapi.json
policies:
openapi: http://policies:8000/openapi.json
```
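To make the schema concrete, here is a small sketch that consumes the parsed `services.yaml` as a plain dict. The `resolve_services` helper is illustrative (not part of the SDK), and the fallback of the package name to the service key when `package` is omitted is an assumption for illustration only.

```python
def resolve_services(config: dict) -> dict:
    """Normalize a parsed services.yaml mapping.

    Each service entry must provide an 'openapi' spec URL; as an
    illustrative assumption, the generated package name falls back
    to the service key (with dashes mapped to underscores).
    """
    resolved = {}
    for name, spec in config.get("services", {}).items():
        resolved[name] = {
            "package": spec.get("package", name.replace("-", "_")),
            "openapi": spec["openapi"],
        }
    return resolved

# Mirrors the services.yaml example above, already parsed into a dict
config = {
    "services": {
        "digital-twin": {"package": "dt", "openapi": "http://dt:8000/openapi.json"},
        "policies": {"openapi": "http://policies:8000/openapi.json"},
    }
}
print(resolve_services(config))
```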
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiomqtt>=2.5.0",
"attrs>=25.4.0",
"cachetools>=7.0.0",
"celine-regorus>=0.9.1.post20260219095826",
"cryptography>=46.0.5",
"httpx>=0.28.1",
"pydantic>=2.12.5",
"pydantic-settings>=2.12.0",
"pyjwt>=2.11.0",
"pyyaml>=6.0.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:47:35.911895 | celine_sdk-1.2.1.tar.gz | 125,906 | 26/b6/913391b38c5d747f82b3f105277f526d726a3e8e5a9498c1a66af03a581e/celine_sdk-1.2.1.tar.gz | source | sdist | null | false | ced9437db49da70c5b9d4f48f7ad8620 | 2eb6a54cae91c0f7b4ecdff94b5a4a625a32ffa58a4cbb97ede52b489504162a | 26b6913391b38c5d747f82b3f105277f526d726a3e8e5a9498c1a66af03a581e | null | [
"LICENSE"
] | 215 |
2.4 | snid-sage | 1.2.1 | SNID SAGE - SuperNova IDentification – Spectral Analysis and Guided Exploration | # SNID SAGE - Advanced Supernova Spectral Analysis
[](https://python.org)
[](LICENSE)
[]()
<img src="docs/images/5.MatchTemplateFlux.png" alt="Match Template Flux" style="border: 2px solid #333; border-radius: 4px;">
**SNID SAGE** (SuperNova IDentification – Spectral Analysis and Guided Exploration) is your go-to tool for analyzing supernova spectra. It combines an intuitive PySide6/Qt graphical interface with the original SNID (Blondin & Tonry 2007) cross-correlation techniques, enhanced with modern clustering for classification choice, high-performance plotting via `pyqtgraph`, and LLM-powered analysis summaries and interactive chat assistance.
## Quick Installation
### Install from PyPI (Recommended)
```bash
pip install snid-sage
```
Python support: 3.10–3.13 (3.14 not yet supported due to dependency wheels).
This installs both the CLI and the full GUI stack by default, as defined in `pyproject.toml`.
### Using a virtual environment (recommended)
```bash
# Create virtual environment
python -m venv snid_env
# Activate environment
# Windows:
snid_env\Scripts\activate
# macOS/Linux:
source snid_env/bin/activate
# Install
pip install snid-sage
```
### Development installation
```bash
git clone https://github.com/FiorenSt/SNID-SAGE.git
cd SNID-SAGE
pip install -e .
```
Note: For user installs, you can use `pip install --user` to avoid system-wide changes.
## Getting Started
### Launch the GUI
```bash
snid-sage
```
### Use the CLI
```bash
# Single spectrum analysis (templates auto-discovered). Saves summary (.output) and plots by default
sage data/sn2003jo.dat -o results/
# Batch processing (default saves per-object summary and plots)
sage batch "data/*.dat" -o results/
# Batch from a CSV list with per-row redshift (if provided)
sage batch --list-csv "data/spectra_list.csv" -o results/
```
## Documentation & Support
- **[Complete Documentation](https://fiorenst.github.io/SNID-SAGE/)**
- **[First Analysis Guide](https://fiorenst.github.io/SNID-SAGE/quickstart/first-analysis/)**
- **[GUI Manual](https://fiorenst.github.io/SNID-SAGE/gui/interface-overview/)**
- **[CLI Reference](https://fiorenst.github.io/SNID-SAGE/cli/command-reference/)**
- **[AI Integration](https://fiorenst.github.io/SNID-SAGE/ai/overview/)**
- **[Troubleshooting](https://fiorenst.github.io/SNID-SAGE/reference/troubleshooting/)**
## Supported Data Formats
- FITS files (.fits, .fit)
- ASCII tables (.dat, .txt, .ascii, .asci, .flm)
- Space-separated values with flexible column detection
- Custom formats with configurable parsers
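To illustrate the simplest ASCII case, a two-column space-separated spectrum can be read with a few lines of standard-library Python. This is a simplified sketch, not SNID SAGE's actual parser (which handles flexible column detection and more formats):

```python
def read_ascii_spectrum(lines):
    """Parse wavelength/flux pairs from space-separated text.

    Skips comment lines (starting with '#') and blank lines;
    extra columns beyond the first two are ignored.
    """
    wave, flux = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        cols = line.split()
        wave.append(float(cols[0]))
        flux.append(float(cols[1]))
    return wave, flux

sample = ["# SN 2003jo", "4000.0 1.2e-15", "4001.0 1.3e-15 0.1"]
print(read_ascii_spectrum(sample))
```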
## Research & Citation
If you use SNID SAGE in your research, please cite:
```bibtex
@software{snid_sage_2025,
title={SNID SAGE: A Modern Framework for Interactive Supernova
Classification and Spectral Analysis},
author={F. Stoppa},
year={In Prep, 2025},
url={https://github.com/FiorenSt/SNID-SAGE}
}
```
## Community & Support
- **[Report Bug](https://github.com/FiorenSt/SNID-SAGE/issues)**
- **[Request Feature](https://github.com/FiorenSt/SNID-SAGE/issues)**
- **[Discussions](https://github.com/FiorenSt/SNID-SAGE/discussions)**
- **[Email Support](mailto:fiorenzo.stoppa@physics.ox.ac.uk)**
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
<div align="center">
**Made with care for the astronomical community**
[Documentation](https://fiorenst.github.io/SNID-SAGE/) • [Report Bug](https://github.com/FiorenSt/SNID-SAGE/issues) • [Request Feature](https://github.com/FiorenSt/SNID-SAGE/issues) • [Discussions](https://github.com/FiorenSt/SNID-SAGE/discussions)
</div>
| text/markdown | null | Fiorenzo Stoppa <fiorenzo.stoppa@physics.ox.ac.uk> | null | Fiorenzo Stoppa <fiorenzo.stoppa@physics.ox.ac.uk> | null | astronomy, supernova, spectrum, analysis, snid, machine-learning | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Astronomy",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Natural Language :: English"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy>=1.19.0",
"scipy>=1.7.0",
"h5py>=2.10.0",
"pandas>=1.1.0",
"matplotlib>=3.3.0",
"astropy>=4.0.0",
"scikit-learn>=1.0.0",
"requests>=2.25.0",
"PySide6<6.9.1,>=6.7",
"pyqtgraph>=0.13.0",
"packaging>=20.0",
"platformdirs>=3.0.0",
"pytest>=6.0.0; extra == \"dev\"",
"pytest-cov>=2.10.0; extra == \"dev\"",
"black>=21.0.0; extra == \"dev\"",
"flake8>=3.8.0; extra == \"dev\"",
"mypy>=0.800; extra == \"dev\"",
"mkdocs>=1.5; extra == \"dev\"",
"mkdocs-material>=9.0; extra == \"dev\"",
"sphinx>=4.0.0; extra == \"dev\"",
"sphinx-rtd-theme>=1.0.0; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\"",
"build>=0.8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/FiorenSt/SNID-SAGE",
"Documentation, https://fiorenst.github.io/SNID-SAGE",
"Repository, https://github.com/FiorenSt/SNID-SAGE.git",
"Bug Reports, https://github.com/FiorenSt/SNID-SAGE/issues",
"Download, https://github.com/FiorenSt/SNID-SAGE/archive/refs/heads/main.zip"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T08:47:29.640962 | snid_sage-1.2.1.tar.gz | 2,534,794 | 8d/d8/bd0170dbc5e5ca42e8f40ffd6f7144a56afe8dfc786297caecc9c5b6df26/snid_sage-1.2.1.tar.gz | source | sdist | null | false | 33fc768e6307decfad605a5b47bc07d7 | 7089d024829bcaa6a4b0fbaec7b681d68311391b7a2b5d41a098560b93b9f5f5 | 8dd8bd0170dbc5e5ca42e8f40ffd6f7144a56afe8dfc786297caecc9c5b6df26 | MIT | [
"LICENSE"
] | 264 |
2.4 | verify-everything | 0.1.12 | LLM-based code review tool that finds issues tests and linters miss | # Vet : Verify Everything
[](https://pypi.python.org/pypi/verify-everything/)
[](https://www.gnu.org/licenses/agpl-3.0)

[](https://discord.gg/sBAVvHPUTE)
Vet is a standalone verification tool for **code changes** and **coding agent behavior**.
It reviews git diffs, and optionally an agent's conversation history, to find issues that tests and linters often miss. Vet is optimized for use by humans, CI, and coding agents.
## Why Vet
- **Verification for agentic workflows**: "the agent said it ran tests" is not the same as "all tests ran successfully".
- **CI-friendly safety net**: catches classes of problems that may not be covered by existing tests.
- **Bring-your-own-model**: can run against hosted providers or local/self-hosted OpenAI-compatible endpoints.
## Installation
```bash
pip install verify-everything
```
Or install from source:
```bash
pip install git+https://github.com/imbue-ai/vet.git
```
## Quickstart
Run Vet in the current repo:
```bash
vet "Implement X without breaking Y"
```
Compare against a base ref/commit:
```bash
vet "Refactor storage layer" --base-commit main
```
## Using Vet with Coding Agents
Vet ships as an [agent skill](https://agentskills.io) that coding agents like [OpenCode](https://opencode.ai) and [Codex](https://github.com/openai/codex) can discover and use automatically. When installed, agents will proactively run vet after code changes and include conversation history for better analysis.
### Install the skill
```bash
curl -fsSL https://raw.githubusercontent.com/imbue-ai/vet/main/install-skill.sh | bash
```
You will be prompted to choose between:
- **Project level**: installs into `.agents/skills/vet/` and `.claude/skills/vet/` at the repo root (run from your repo directory)
- **User level**: installs into `~/.agents/`, `~/.opencode/`, `~/.claude/`, and `~/.codex/` skill directories, discovered globally by all agents
<details>
<summary>Manual installation</summary>
#### Project Level
From the root of your git repo:
```bash
for dir in .agents .claude; do
mkdir -p "$dir/skills/vet/scripts"
for file in SKILL.md scripts/export_opencode_session.py scripts/export_codex_session.py scripts/export_claude_code_session.py; do
curl -fsSL "https://raw.githubusercontent.com/imbue-ai/vet/main/skills/vet/$file" \
-o "$dir/skills/vet/$file"
done
done
```
#### User Level
```bash
for dir in ~/.agents ~/.opencode ~/.claude ~/.codex; do
mkdir -p "$dir/skills/vet/scripts"
for file in SKILL.md scripts/export_opencode_session.py scripts/export_codex_session.py scripts/export_claude_code_session.py; do
curl -fsSL "https://raw.githubusercontent.com/imbue-ai/vet/main/skills/vet/$file" \
-o "$dir/skills/vet/$file"
done
done
```
</details>
### Security note
The `--history-loader` option executes the specified shell command as the current user to load the conversation history. It is important to review history loader commands and shared config presets before use.
## GitHub PRs (Actions)
Vet can run on pull requests using the reusable GitHub Action.
Create `.github/workflows/vet.yml`:
```yaml
name: Vet
permissions:
contents: read
pull-requests: write
on:
pull_request:
types: [opened, edited, synchronize, reopened]
jobs:
vet:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
env:
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: imbue-ai/vet@main
with:
agentic: false
```
The action handles Python setup, vet installation, merge base computation, and posting the review to the PR. `ANTHROPIC_API_KEY` must be set as a repository secret when using Anthropic models (the default). See [`action.yml`](https://github.com/imbue-ai/vet/blob/main/action.yml) for all available inputs.
## How it works
Vet snapshots the repo and diff, optionally adds a goal and agent conversation, runs LLM checks, then filters/deduplicates findings into a final list of issues.

## Output & exit codes
- Exit code `0`: no issues found
- Exit code `1`: unexpected runtime error
- Exit code `2`: invalid usage/configuration error
- Exit code `10`: issues found
Output formats:
- `text`
- `json`
- `github`
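In a CI wrapper, the documented exit codes can be interpreted like this (a minimal sketch; the `run_vet` helper and the status strings are illustrative, not part of the vet API):

```python
import subprocess  # used only by the illustrative wrapper below

# Documented vet exit codes
VET_STATUS = {
    0: "no issues found",
    1: "unexpected runtime error",
    2: "invalid usage/configuration error",
    10: "issues found",
}

def interpret_exit_code(code: int) -> str:
    return VET_STATUS.get(code, f"unknown exit code: {code}")

def run_vet(goal: str) -> str:
    """Illustrative wrapper: run vet and report its documented status."""
    proc = subprocess.run(["vet", goal])
    return interpret_exit_code(proc.returncode)
```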
## Configuration
### Model configuration
Vet supports custom model definitions using OpenAI-compatible endpoints via JSON config files searched in:
- `$XDG_CONFIG_HOME/vet/models.json` (or `~/.config/vet/models.json`)
- `.vet/models.json` at your repo root
#### Example `models.json`
```json
{
"providers": {
"openrouter": {
"name": "OpenRouter",
"api_type": "openai_compatible",
"base_url": "https://openrouter.ai/api/v1",
"api_key_env": "OPENROUTER_API_KEY",
"models": {
"gpt-5.2": {
"model_id": "openai/gpt-5.2",
"context_window": 400000,
"max_output_tokens": 128000,
"supports_temperature": true
},
"kimi-k2": {
"model_id": "moonshotai/kimi-k2",
"context_window": 131072,
"max_output_tokens": 32768,
"supports_temperature": true
}
}
}
}
}
```
Then:
```bash
vet "Harden error handling" --model gpt-5.2
```
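The config search order above can be sketched with standard-library path handling (illustrative only; `find_models_config` is not part of vet):

```python
import os
from pathlib import Path

def find_models_config(repo_root):
    """Return the first models.json found, in the documented search order:
    $XDG_CONFIG_HOME/vet/models.json (or ~/.config/vet/models.json),
    then .vet/models.json at the repo root."""
    xdg = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    candidates = [
        Path(xdg) / "vet" / "models.json",
        Path(repo_root) / ".vet" / "models.json",
    ]
    for path in candidates:
        if path.is_file():
            return path
    return None
```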
### Configuration profiles (TOML)
Vet supports named profiles so teams can standardize CI usage without long CLI invocations.
Profiles set defaults like model choice, enabled issue codes, output format, and thresholds.
See [the example](https://github.com/imbue-ai/vet/blob/main/.vet/configs.toml) in this project.
### Custom issue guides
You can customize the guide text for the issue codes via `guides.toml`. Guide files are loaded from:
- `$XDG_CONFIG_HOME/vet/guides.toml` (or `~/.config/vet/guides.toml`)
- `.vet/guides.toml` at your repo root
#### Example `guides.toml`
```toml
[logic_error]
suffix = """
- Check for integer overflow in arithmetic operations
"""
[insecure_code]
replace = """
- Check for SQL injection: flag any string concatenation or f-string formatting used to build SQL queries rather than parameterized queries
- Check for XSS: flag user-supplied data rendered into HTML templates without proper escaping or sanitization
- Check for path traversal: flag file operations where user input flows into file paths without validation against directory traversal (e.g. ../)
- Check for insecure cryptography: flag use of deprecated or weak algorithms (e.g. MD5, SHA1 for security purposes, DES, RC4)
- Check for hardcoded credentials: flag passwords, API keys, or tokens embedded directly in source code
"""
```
Section keys must be valid issue codes (`vet --list-issue-codes`). Each section supports three optional fields: `prefix` (prepends to built-in guide), `suffix` (appends to built-in guide), and `replace` (fully replaces the built-in guide). `prefix` and `suffix` can be used together, but `replace` is mutually exclusive with the other two. Guide text should be formatted as a list.
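The merge rules can be sketched in a few lines of plain Python operating on an already-parsed section (the `effective_guide` helper is illustrative, not a vet internal):

```python
def effective_guide(builtin: str, section: dict) -> str:
    """Combine a built-in guide with a guides.toml section.

    'replace' swaps out the built-in text entirely and cannot be
    combined with 'prefix'/'suffix', which wrap the built-in guide.
    """
    if "replace" in section:
        if "prefix" in section or "suffix" in section:
            raise ValueError("'replace' is mutually exclusive with prefix/suffix")
        return section["replace"]
    return section.get("prefix", "") + builtin + section.get("suffix", "")

# Parsed form of the [logic_error] section above
section = {"suffix": "- Check for integer overflow in arithmetic operations\n"}
print(effective_guide("- Check boundary conditions\n", section))
```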
## Community
Join the [Imbue Discord](https://discord.gg/sBAVvHPUTE) for discussion, questions, and support. For bug reports and feature requests, please use [GitHub Issues](https://github.com/imbue-ai/vet/issues).
## License
This project is licensed under the [GNU Affero General Public License v3.0 (AGPL-3.0-only)](https://github.com/imbue-ai/vet/blob/main/LICENSE).
| text/markdown | Imbue | null | null | null | null | code-review, llm, verification, linting, ai, git, diff | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"jinja2",
"loguru",
"pydantic>=2.11.4",
"anyio",
"attrs",
"cachetools",
"cattrs",
"diskcache>=5.6.3",
"httpx",
"pathspec",
"pygit2>=1.18.0",
"pyhumps",
"tblib==2.0.0",
"toml",
"typeid-python",
"yasoo",
"anthropic~=0.54",
"openai>=1.79.0",
"tiktoken",
"groq>=0.18.0",
"google-genai>=1.26.0",
"async_lru",
"libcst"
] | [] | [] | [] | [
"Homepage, https://github.com/imbue-ai/vet",
"Repository, https://github.com/imbue-ai/vet",
"Issues, https://github.com/imbue-ai/vet/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:47:26.369669 | verify_everything-0.1.12.tar.gz | 187,783 | 40/b4/e6d9c5c965a9dbbc46c3e8d00f72746f0dc9362ddeebe2b932e9c44a26f5/verify_everything-0.1.12.tar.gz | source | sdist | null | false | a496633ec6d13baef74bbebae1e21236 | 14e86148b70219527d12159bf19c4ae4efc0780ea8a9768c275955e94ea4931e | 40b4e6d9c5c965a9dbbc46c3e8d00f72746f0dc9362ddeebe2b932e9c44a26f5 | AGPL-3.0-only | [
"LICENSE"
] | 235 |
2.4 | utilsds | 2.0.4 | Solution for DS Team | # utilsds
Utilsds is a library that includes classes and functions used in data science projects such as:
- **algorithm**:
- `Algorithm`: Base class for fitting, training, and getting hyperparameters of machine learning models.
- **data_ops**:
- `DataOperations`: Handle data operations locally and with Google Cloud services (BigQuery and Cloud Storage).
- BigQuery operations:
- `load_bq_data`: Load data from tables, views, and SQL files.
- `save_bq_view`, `save_bq_table`: Save views and tables.
- `load_bq_procedure`: Execute stored procedures.
- `load_bq_details`: Get table/view details and schema.
- `delete_bq_data`: Delete data with safety confirmations.
- `dry_run`: Perform dry runs to estimate query costs.
- Cloud Storage operations:
- `save_gcs_bucket`: Create buckets.
- `save_gcs_file`, `load_gcs_file`: Save and load files (.pkl, .json, .csv, .html, .sql).
- Local file operations:
- `save_local_file`, `load_local_file`: Save and load files (.pkl, .json, .csv, .html, .sql).
- **data_processing**:
- `SkewnessTransformer`: Transform skewed data using various methods (IHS, neglog, Yeo-Johnson, quantile).
- `NullReplacer`: Replace null values in specified columns with configurable strategies.
- `ColumnDropper`: Drop specified columns from a DataFrame.
- `OutliersCleaner`: Clean outliers by clipping values outside specified percentile ranges.
- `CategoricalMapper`: Map values in categorical columns according to a specified mapping scheme.
- `NumericalMapper`: Convert numerical columns to categorical by binning.
- `Encoder`: One-hot encode categorical columns in the data.
- `Normalizer`: Normalize numerical columns using a provided scaler.
- **data_split**:
- `train_test_validation_split`: Split data into training, testing, and validation sets.
- `resample_X_y`: Resample training data and the target column.
- **ds_statistics**:
- `test_kruskal_wallis`: Perform the Kruskal-Wallis statistical test.
- `test_agosto_pearsona`: Test for normality using the D'Agostino-Pearson test.
- **evaluate**:
- `ModelEvaluator`: Evaluate models and generate plots for diagnostics.
- `ShapExplainer`: Explain model predictions using SHAP values.
- **experiments**:
- `VertexExperiment`: Manage experiments with Vertex AI.
- **optuna**:
- `Optuna`: Optimize hyperparameters using Optuna.
- **metrics**:
- `Metrics`: Calculate metrics for both classification and regression models.
- **modeling**:
- `Modeling`: Manage modeling, metrics, and logging with Vertex AI.
- **Supervised**:
- `LazyClassifier`: A classifier that automatically trains and evaluates multiple models.
- `LazyRegressor`: A regressor that automatically trains and evaluates multiple models.
- `get_card_split`: Function to split data into card-like groups.
- `adjusted_rsquared`: Calculate adjusted R-squared for regression models.
- **visualization**:
- `MetricsPlot`: Compare metrics for different parameter values.
- `Radar`: Create radar plots for visualizing data.
- `cluster_characteristics`: Analyze cluster characteristics.
- `comparison_density`: Compare density distributions.
- `elbow_visualisation`: Visualize the elbow method for clustering.
- `describe_clusters_metrics`: Describe metrics for clusters.
- `category_null_variables`: Visualize null variables in categorical data.
- `normal_distr_plots`: Visualize normal distribution plots.
- `distplot_limitations`: Visualize limitations of distplot.
- `boxplot_limitations`: Visualize limitations of boxplot.
- `violinplot_limitations`: Visualize limitations of violinplot.
- `countplot_limitations`: Visualize limitations of countplot.
- `categorical_variable_perc`: Visualize percentage of categorical variables.
- `spearman_correlation`: Visualize Spearman correlation.
- `calculate_crammers_v`: Calculate Cramér's V.
- **what_if_streamlit**:
- `ShapSaver`: Save SHAP explainer components for lazy loading in what-if analysis.
- `ColumnMetadataGenerator`: Generate column metadata from a DataFrame or CSV file.
- **monitoring**:
- `mapping`: Create column mapping from configuration file for Evidently.
- `test_data`: Test data for issues using Evidently test suites.
- `check_data_drift`: Check data for drift using Evidently metrics.
- `send_email_with_table`: Send email notifications with HTML tables for monitoring alerts.
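As one concrete example of the helpers listed above, adjusted R² penalizes R² by the number of predictors. A minimal stand-alone sketch (not the library's actual implementation):

```python
def adjusted_r_squared(r2: float, n_samples: int, n_features: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    if n_samples - n_features - 1 <= 0:
        raise ValueError("need n_samples > n_features + 1")
    return 1 - (1 - r2) * (n_samples - 1) / (n_samples - n_features - 1)

# Adding predictors lowers the adjusted score for the same raw R^2
print(adjusted_r_squared(0.90, n_samples=100, n_features=5))
```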
| text/markdown | null | DS Team <ds@sts.pl> | null | null | MIT License | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pandas>=2.2.2",
"numpy>=1.26.0",
"scikit-learn>=1.5.0",
"scipy>=1.13.0",
"matplotlib>=3.9.0",
"seaborn>=0.13.2",
"google-cloud-bigquery>=3.25.0",
"google-cloud-bigquery-storage>=2.25.0",
"google-cloud-storage>=2.17.0",
"google-cloud-aiplatform>=1.60.0",
"db-dtypes>=1.3.0",
"xgboost>=2.1.0",
"lightgbm>=4.4.0",
"optuna>=4.0.0",
"shap>=0.45.0",
"numba>=0.60.0",
"plotly>=5.23.0",
"evidently<0.6.7,>=0.4.39",
"tqdm>=4.66.0",
"cloudpickle>=3.0.0",
"duckdb>=1.1.0",
"nbformat>=5.9.0",
"pygments>=2.17.0",
"pandas-gbq>=0.23.0",
"jinja2>=3.1.3",
"ipython>=8.20.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T08:47:04.428022 | utilsds-2.0.4.tar.gz | 49,404 | 97/aa/11810a1209fa6f4283ce730afe4a10982fcf66849880772abc54ebfb6f5c/utilsds-2.0.4.tar.gz | source | sdist | null | false | 974b326a406c993c1feeae584f1830e0 | 52a0961facac4cb59fdceb48c498a5feb0891948b7334b49f2ac317039b38991 | 97aa11810a1209fa6f4283ce730afe4a10982fcf66849880772abc54ebfb6f5c | null | [] | 201 |
2.4 | pathable | 0.5.0 | Object-oriented paths | # pathable
[](https://pypi.org/project/pathable/)
[](https://pypi.org/project/pathable/)
[](https://pypi.org/project/pathable/)
## About
Pathable provides a small set of "path" objects for traversing hierarchical data (mappings, lists, and other subscriptable trees) using a familiar path-like syntax.
It’s especially handy when you want to:
* express deep lookups as a single object (and pass it around)
* build paths incrementally (`p / "a" / 0 / "b"`)
* safely probe (`exists()`, `get(...)`) or strictly require segments (`//`)
## Key features
* Intuitive path-based navigation for nested data (e.g., dicts/lists)
* Pluggable accessor layer for custom backends
* Pythonic, chainable API for concise and readable code
* Per-instance (bounded LRU) cached lookup accessor for repeated reads of the same tree
## Quickstart
```python
from pathable import LookupPath
data = {
"parts": {
"part1": {"name": "Part One"},
"part2": {"name": "Part Two"},
}
}
root = LookupPath.from_lookup(data)
name = (root / "parts" / "part2" / "name").read_value()
assert name == "Part Two"
```
## Usage
```python
from pathable import LookupPath
data = {
"parts": {
"part1": {"name": "Part One"},
"part2": {"name": "Part Two"},
}
}
p = LookupPath.from_lookup(data)
# Concatenate path segments with /
parts = p / "parts"
# Check membership (mapping keys or list indexes)
assert "part2" in parts
# Read a value
assert (parts / "part2" / "name").read_value() == "Part Two"
# Iterate children as paths
for child in parts:
print(child, child.read_value())
# Work with keys/items
print(list(parts.keys()))
print({k: v.read_value() for k, v in parts.items()})
# Safe access
print(parts.get("missing", default=None))
# Strict access (raises KeyError if missing)
must_exist = parts // "part2"
# "Open" yields the current value as a context manager
with parts.open() as parts_value:
assert isinstance(parts_value, dict)
# Optional metadata
print(parts.stat())
```
## Filesystem example
Pathable can also traverse the filesystem via an accessor.
```python
from pathlib import Path
from pathable import FilesystemPath
root_dir = Path(".")
p = FilesystemPath.from_path(root_dir)
readme = p / "README.md"
if readme.exists():
content = readme.read_value() # bytes
print(content[:100])
```
## Core concepts
* `BasePath` is a pure path (segments + separator) with `/` joining.
* `AccessorPath` is a `BasePath` bound to a `NodeAccessor`, enabling `read_value()`, `exists()`, `keys()`, iteration, etc.
* `FilesystemPath` is an `AccessorPath` specialized for filesystem objects.
* `LookupPath` is an `AccessorPath` specialized for mapping/list lookups.
Notes on parsing:
* A segment like `"a/b"` is split into parts using the separator.
* `None` segments are ignored.
* `"."` segments are ignored (relative no-op).
* Operations like `relative_to()` and `is_relative_to()` also respect the instance separator.
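The parsing rules above can be sketched as a small pure-Python function (illustrative only, not pathable's actual implementation):

```python
def parse_parts(segments, separator="/"):
    """Split string segments on the separator, dropping None and '.'."""
    parts = []
    for seg in segments:
        if seg is None:
            continue
        if isinstance(seg, str):
            parts.extend(p for p in seg.split(separator) if p and p != ".")
        else:
            parts.append(seg)  # e.g. int list indexes pass through unchanged
    return parts

print(parse_parts(["a/b", None, ".", 0, "c"]))  # → ['a', 'b', 0, 'c']
```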
Equality and ordering:
* `BasePath` equality, hashing, and ordering are all based on both `separator` and `parts`.
* Ordering is separator-sensitive and deterministic, even when parts mix types (e.g. ints and strings).
* Path parts are type-sensitive (`0` is not equal to `"0"`).
Lookup caching:
* `LookupPath` uses a per-instance LRU cache (default maxsize: 128) on its accessor.
* You can control it via `path.accessor.clear_cache()`, `path.accessor.disable_cache()`, and `path.accessor.enable_cache(maxsize=...)`.
* `path.accessor.node` is immutable; to point at a different tree, create a new `LookupPath`/accessor.
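The caching behaviour can be pictured with `functools.lru_cache` (a sketch under stated assumptions; pathable's real accessor API differs in detail):

```python
from functools import lru_cache

class CachedLookup:
    """Minimal sketch of an LRU-cached lookup over an immutable tree."""

    def __init__(self, node, maxsize=128):
        self._node = node  # treated as immutable, like the accessor's node
        self.enable_cache(maxsize=maxsize)

    def enable_cache(self, maxsize=128):
        @lru_cache(maxsize=maxsize)
        def _lookup(parts):
            value = self._node
            for part in parts:
                value = value[part]  # mapping key or list index
            return value
        self._lookup = _lookup

    def disable_cache(self):
        self.enable_cache(maxsize=0)  # lru_cache(0) caches nothing

    def clear_cache(self):
        self._lookup.cache_clear()

    def read(self, *parts):
        return self._lookup(parts)

tree = {"parts": {"part1": {"name": "Part One"}, "part2": {"name": "Part Two"}}}
acc = CachedLookup(tree)
print(acc.read("parts", "part2", "name"))  # repeated reads hit the cache
```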
## Installation
Recommended way (via pip):
```console
pip install pathable
```
Alternatively you can download the code and install from the repository:
```console
pip install -e git+https://github.com/p1c2u/pathable.git#egg=pathable
```
## Benchmarks
Benchmarks live in `tests/benchmarks/` and produce JSON reports.
Local run (recommended as modules):
```console
poetry run python -m tests.benchmarks.bench_parse --output reports/bench-parse.json
poetry run python -m tests.benchmarks.bench_lookup --output reports/bench-lookup.json
```
Quick sanity run:
```console
poetry run python -m tests.benchmarks.bench_parse --quick --output reports/bench-parse.quick.json
poetry run python -m tests.benchmarks.bench_lookup --quick --output reports/bench-lookup.quick.json
```
Compare two results (fails if candidate is >20% slower in any scenario):
```console
poetry run python -m tests.benchmarks.compare_results \
--baseline reports/bench-before.json \
--candidate reports/bench-after.json \
--tolerance 0.20
```
CI (on-demand):
- GitHub Actions workflow `Benchmarks` runs via `workflow_dispatch` and uploads the JSON artifacts.
| text/markdown | Artur Maciag | maciag.artur@gmail.com | null | null | Apache-2.0 | dict, dictionary, list, lookup, path, pathable | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/p1c2u/pathable"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:47:00.748852 | pathable-0.5.0.tar.gz | 16,655 | 72/55/b748445cb4ea6b125626f15379be7c96d1035d4fa3e8fee362fa92298abf/pathable-0.5.0.tar.gz | source | sdist | null | false | 8fa26ebc69d4c0aefc4f7ee91ec4b9c8 | d81938348a1cacb525e7c75166270644782c0fb9c8cecc16be033e71427e0ef1 | 7255b748445cb4ea6b125626f15379be7c96d1035d4fa3e8fee362fa92298abf | null | [
"LICENSE"
] | 1,241,034 |
2.4 | pulumi-aws | 7.21.0a1771571746 | A Pulumi package for creating and managing Amazon Web Services (AWS) cloud resources. | <p align="center">
<a href="https://www.pulumi.com?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=top-logo" title="Pulumi AWS Provider - Build and Deploy Infrastructure as Code Solutions on Any Cloud">
<img src="https://www.pulumi.com/images/logo/logo-on-white-box.svg?" width="350">
</a>
</p>
[](https://github.com/pulumi/pulumi-aws/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/aws)
[](https://pypi.org/project/pulumi-aws)
[](https://badge.fury.io/nu/pulumi.aws)
[](https://pkg.go.dev/github.com/pulumi/pulumi-aws/sdk/v7/go)
[](https://github.com/pulumi/pulumi-aws/blob/master/LICENSE)
# Amazon Web Services (AWS) provider
The Amazon Web Services (AWS) resource provider for Pulumi lets you use AWS resources in your cloud programs. To use
this package, [install the Pulumi CLI](https://www.pulumi.com/docs/get-started/install/). For a streamlined Pulumi walkthrough, including language runtime installation and AWS configuration, select "Get Started" below.
<div>
<a href="https://www.pulumi.com/docs/get-started/aws/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=get-started" title="Get Started">
<img src="https://www.pulumi.com/images/get-started.svg?" width="120">
</a>
</div>
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/aws
or `yarn`:
$ yarn add @pulumi/aws
### Python
To use from Python, install using `pip`:
$ pip install pulumi_aws
### Go
To use from Go, use `go get` to grab the latest version of the library:
$ go get github.com/pulumi/pulumi-aws/sdk/v7
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Aws
## Concepts
The `@pulumi/aws` package provides a strongly typed means to build cloud applications that create and interact closely with AWS resources. Resources are exposed across the full breadth of AWS services and their properties, including (but not limited to) API Gateway, CloudFormation, EC2, ECS, IAM, and Lambda. Many convenience APIs have also been added to make development easier, help avoid common mistakes, and provide stronger typing.
### Serverless Functions
The `aws.lambda.CallbackFunction` class allows you to create an AWS Lambda function directly from a JavaScript/TypeScript function object with the right signature. A Pulumi program can define the handler as an ordinary function in the language of choice, and Pulumi performs the appropriate transformation into the final AWS Lambda resource.
This makes many APIs easier to use, such as defining a Lambda to execute when an S3 Bucket is manipulated,
or a CloudWatch timer is fired. To see some examples of this in action, please refer to the `examples/` directory.
## Configuration
The following configuration points are available:
- `aws:region` - (Required) This is the AWS region.
- `aws:accessKey` - (Optional) This is the AWS access key. It can also be sourced from the
`AWS_ACCESS_KEY_ID` environment variable, or via a shared credentials file if `aws:profile` is specified.
- `aws:secretKey` - (Optional) This is the AWS secret key. It can also be sourced from the
`AWS_SECRET_ACCESS_KEY` environment variable, or via a shared credentials file if `aws:profile` is specified.
- `aws:profile` - (Optional) This is the AWS profile name as set in the shared credentials file.
- `aws:sharedCredentialsFiles` - (Optional) List of paths to the shared credentials file. If not set and a profile
is used, the default value is [~/.aws/credentials]. A single value can also be set with the
`AWS_SHARED_CREDENTIALS_FILE` environment variable.
- `aws:token` - (Optional) Session token for validating temporary credentials. Typically provided after successful
identity federation or Multi-Factor Authentication (MFA) login. With MFA login, this is the session token provided
afterward, not the 6-digit MFA code used to get temporary credentials. It can also be sourced from the
`AWS_SESSION_TOKEN` environment variable.
- `aws:maxRetries` - (Optional) This is the maximum number of times an API call is retried, in the case where requests
are being throttled or experiencing transient failures. The delay between the subsequent API calls increases
exponentially. If omitted, the default value is `25`.
- `aws:allowedAccountIds` - (Optional) List of allowed AWS account IDs to prevent you from mistakenly using an incorrect
one. Conflicts with `aws:forbiddenAccountIds`.
- `aws:endpoints` - (Optional) Configuration block for customizing service endpoints. See the Custom Service Endpoints Guide for more information about connecting to alternate AWS endpoints or AWS compatible solutions. See also `aws:useFipsEndpoint`.
- `aws:forbiddenAccountIds` - (Optional) List of forbidden AWS account IDs to prevent you from mistakenly using the wrong
one. Conflicts with `aws:allowedAccountIds`.
- `aws:assumeRole` - (Optional) Supports the following (optional) arguments:
`durationSeconds`: Number of seconds to restrict the assume-role session duration.
`externalId`: External identifier to use when assuming the role.
`policy`: IAM policy in JSON format that further restricts permissions for the IAM role being assumed.
`policyArns`: Set of Amazon Resource Names (ARNs) of IAM policies that further restrict permissions for the role.
`roleArn`: Amazon Resource Name (ARN) of the IAM role to assume.
`sessionName`: Session name to use when assuming the role.
`tags`: Map of assume-role session tags.
- `aws:insecure` - (Optional) Explicitly allow the provider to perform "insecure" SSL requests. If omitted, the default value is `false`.
- `aws:skipCredentialsValidation` - (Optional) Skip the credentials validation via the STS API. Useful for AWS API implementations that do not have STS available or implemented. Default value is `false`. Can be set via the environment variable `AWS_SKIP_CREDENTIALS_VALIDATION`.
- `aws:skipRegionValidation` - (Optional) Skip validation of provided region name. Useful for AWS-like implementations that use their own region names or to bypass the validation for regions that aren't publicly available yet. Default value is `true`.
- `aws:skipRequestingAccountId` - (Optional) Skip requesting the account ID. Useful for AWS API implementations that do not have the IAM, STS API, or metadata API. Default value is `false`. When set, the use of ARNs is limited, as no account ID is available to construct them.
- `aws:skipMetadataApiCheck` - (Optional) Skip the AWS Metadata API check. Useful for AWS API implementations that do not have a metadata API endpoint. This prevents the provider from authenticating via the Metadata API; you may need to use other authentication methods such as static credentials, configuration variables, or environment variables. Can be set via the environment variable `AWS_SKIP_METADATA_API_CHECK`.
- `aws:s3UsePathStyle` - (Optional) Set this to true to force the request to use path-style addressing, i.e., `http://s3.amazonaws.com/BUCKET/KEY`. By default, the S3 client will use virtual hosted bucket addressing, `http://BUCKET.s3.amazonaws.com/KEY`, when possible. Specific to the Amazon S3 service. Default is `false`.
- `aws:useFipsEndpoint` - (Optional) Force the provider to resolve endpoints with FIPS capability. Can also be set with the `AWS_USE_FIPS_ENDPOINT` environment variable.
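These values are set per stack with the Pulumi CLI; a minimal sketch (the region, profile, and role ARN below are placeholders):
```bash
# Required: pick the AWS region for this stack
pulumi config set aws:region us-west-2

# Optional: use a named profile from the shared credentials file
pulumi config set aws:profile my-profile

# Optional: nested values such as aws:assumeRole are set with --path
pulumi config set --path 'aws:assumeRole.roleArn' arn:aws:iam::123456789012:role/my-role
```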
## Reference
For further information, visit [AWS in the Pulumi Registry](https://www.pulumi.com/registry/packages/aws/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=reference)
or for detailed API reference documentation, visit [AWS API Docs in the Pulumi Registry](https://www.pulumi.com/registry/packages/aws/api-docs/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=reference).
## Pulumi developer resources
Delve deeper into our project with additional resources:
- [Get Started with Pulumi](https://www.pulumi.com/docs/get-started/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=examples-resources): Deploy a simple application in AWS, Azure, Google Cloud, or Kubernetes using Pulumi.
- [Documentation](https://www.pulumi.com/docs/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=examples-resources): Learn about Pulumi concepts, follow user guides, and consult the reference documentation.
- [Pulumi Blog](https://www.pulumi.com/blog/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=examples-resources) - Stay in the loop with our latest tech announcements, insightful articles, and updates.
- [Registry](https://www.pulumi.com/registry/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=examples-resources): Search for packages and learn about the supported resources you need. Install the package directly into your project, browse the API documentation, and start building.
- [Try Pulumi AI](https://www.pulumi.com/ai/?utm_campaign=pulumi-pulumi-aws-github-repo&utm_source=github.com&utm_medium=examples-resources) - Use natural-language prompts to generate Pulumi infrastructure-as-code programs in any language.
## Pulumi roadmap
Review the planned work for the upcoming quarter and a selected backlog of issues that are on our mind but not yet scheduled on the [Pulumi Roadmap.](https://github.com/orgs/pulumi/projects/44)
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, aws | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-aws"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T08:45:36.829218 | pulumi_aws-7.21.0a1771571746.tar.gz | 8,779,601 | f3/f8/7c841257e193012286450ca255a5b053d4769731cc9097d17f727a5b74e5/pulumi_aws-7.21.0a1771571746.tar.gz | source | sdist | null | false | 23d3fede665a13d7a1d4a2def742d601 | a47ed217188c1145c8734cc1ff7fc2cc8c05be2554ea6839af0bd2a461761054 | f3f87c841257e193012286450ca255a5b053d4769731cc9097d17f727a5b74e5 | null | [] | 234 |
2.4 | bytetok | 0.2.0 | A fast, modular and light-weight BPE tokenizer for NLP research and prototyping. | # ByteTok
[](https://github.com/VihangaFTW/bytetok/actions/workflows/release.yaml)



ByteTok implements Byte Pair Encoding (BPE) at the byte-level with a Rust-accelerated core for training and encoding. Text is first converted to raw bytes (0-255), then iteratively merged using learned pair statistics.
The training algorithm is based on an optimized [BPE algorithm](https://aclanthology.org/2023.findings-acl.38.pdf) from the paper _A Formal Perspective on Byte-Pair Encoding_. The research has enabled ByteTok to achieve O(N log V) training time and O(N log N) encoding time versus the naive O(NV) approach.
> Here, N denotes the length of the input text and V is the tokenizer's vocabulary size.
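To make the merge step concrete, here is a naive pure-Python sketch of byte-level BPE training (the quadratic O(NV) variant the paper improves upon; the `train_bpe` name and structure are illustrative only, not bytetok's API or its Rust implementation):

```python
from collections import Counter

def train_bpe(text: str, num_merges: int):
    """Naive byte-level BPE: repeatedly merge the most frequent adjacent pair."""
    ids = list(text.encode("utf-8"))   # raw bytes, values 0-255
    merges = {}                        # (left, right) -> new token id
    next_id = 256
    for _ in range(num_merges):
        pairs = Counter(zip(ids, ids[1:]))   # adjacent-pair statistics
        if not pairs:
            break
        best = max(pairs, key=pairs.get)     # most frequent pair
        merges[best] = next_id
        out, i = [], 0                       # replace the pair everywhere
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == best:
                out.append(next_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out
        next_id += 1
    return merges, ids

merges, ids = train_bpe("aaabdaaabac", 2)
```

Each merge shortens the token sequence and adds one vocabulary entry; the optimized algorithm produces the same merges with better asymptotics.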
## Features
- **High-performance Rust-powered training, encoding, and decoding**: Engineered from the ground up with a parallel processing pipeline to handle large-scale NLP datasets (1GB+) efficiently, enabling rapid processing for modern LLM applications.
- **Built-in regex patterns**: Choose from pre-tokenization regex presets including GPT-2, GPT-4, GPT-4o, LLaMA 3, Qwen 2, and DeepSeek.
- **Custom regex patterns**: Supported alongside the built-in presets.
- **Special token strategies**: Control how special tokens are handled during encoding.
- **Serialization**: Supports versioned `.model` / `.vocab` file formats for saving tokenizer state, as well as easy loading via a `from_pretrained()` function.
## History
This project started as a weekend experiment with BPE for text compression. I later needed a tokenizer for my custom GPT, which was bottlenecked by context length due to character-level encoding. I wanted a simple API that did four things correctly at a reasonable speed:
- Train on custom text
- Save learned encodings
- Encode text
- Decode text
Feel free to check out robust libraries such as OpenAI's [tiktoken](https://github.com/openai/tiktoken) and Google's [sentencepiece](https://github.com/google/sentencepiece) that are widely adopted in production environments. Tiktoken resembles ByteTok the most, but it should be noted that ByteTok provides a training pipeline which Tiktoken lacks.
In contrast, ByteTok was developed with a different focus. It prioritizes simplicity and usability by offering a clear API that efficiently maps strings to lists of token IDs. All this without burdening users with overly complex configuration or excessive parameters.
## Benchmarks
These benchmarks were conducted on a Linux x86_64 system equipped with an Intel Core i7-12700H processor (20 cores @ 4.70 GHz) and 32GB DDR5 RAM. Encoding and decoding throughput represent the speed of `encode_batch()` and `decode_batch()` operations, respectively.
Dataset: [Sci-Fi Books (Gutenberg)](https://huggingface.co/datasets/stevez80/Sci-Fi-Books-gutenberg)
| Corpus Size | Vocab Size | Training Time | Encoding Throughput | Decoding Throughput | Compression Ratio | Size Reduction |
| ----------- | ---------- | ------------- | ------------------- | ------------------- | ----------------- | -------------- |
| 132.36 MB | 10,000 | 4.58 mins | 16.12 MB/sec | 82.4M tokens/sec | 1.38x | 27.5% |
| 216.96 MB | 10,000 | 8.75 mins | 13.82 MB/sec | 81.4M tokens/sec | 1.60x | 37.7% |
| 216.96 MB | 25,000 | 9.74 mins | 14.55 MB/sec | 70.2M tokens/sec | 1.68x | 40.6% |
| 216.96 MB | 50,000 | 10.67 mins | 14.99 MB/sec | 77.0M tokens/sec | 1.75x | 42.7% |
| 326.96 MB | 50,000 | 16.19 mins | 14.61 MB/sec | 79.3M tokens/sec | 1.44x | 30.7% |
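If the compression ratio is read as input bytes divided by output tokens (my assumption about the table's definition), the size-reduction column follows as `1 - 1/ratio`, up to rounding of the printed ratios:

```python
def size_reduction(compression_ratio: float) -> float:
    """Fraction of size saved: one token replaces `ratio` bytes on average."""
    return 1 - 1 / compression_ratio

# A 1.38x ratio corresponds to roughly 27.5% fewer tokens than input bytes.
reduction_pct = round(size_reduction(1.38) * 100, 1)
```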
## Requirements
- Python >= 3.12
## Installation
Install from PyPI:
```bash
# with pip
pip install bytetok
# or with uv (recommended)
uv add bytetok
```
### Building from Source
If you want to develop or build from source, you will need the Rust toolchain [rustup](https://rustup.rs/).
```bash
# clone the repository
git clone https://github.com/VihangaFTW/bytetok.git
# install with uv
uv sync
# or build with maturin
uv sync --group dev
uv run maturin develop --release
```
## Quick Start
Here you will find the primary workflows for using ByteTok tokenizers. For detailed API usage and additional features, see the [full documentation in the Wiki](https://github.com/VihangaFTW/bytetok/wiki/ByteTok-Documentation).
### Basics
The API has been designed with simplicity in mind:
```python
import bytetok as btok
# Create a tokenizer with a built-in pattern (default: gpt4o).
tokenizer = btok.get_tokenizer("gpt4o")
# Train on text.
tokenizer.train("your training corpus here...", vocab_size=1000)
# Encode and decode.
tokens = tokenizer.encode("Hello, world!")
text = tokenizer.decode(tokens)
assert text == "Hello, world!"
# Save and reload.
tokenizer.save("my_tokenizer")
reloaded = btok.from_pretrained("my_tokenizer.model")
```
Custom regex patterns can be used for pre-tokenization:
```python
import bytetok as btok
# Create a tokenizer with a custom pattern
# For example, split on whitespace and punctuation.
tokenizer = btok.get_tokenizer(custom_pattern=r"\w+|[^\w\s]")
```
For best results, it is recommended to choose from the built-in presets, which have been extensively validated.
### Parallel Encoding
ByteTok supports parallel encoding and decoding for faster processing of large batches of text.
Use `encode_batch` to encode large collections of texts in parallel, then decode the resulting lists of token IDs in parallel with `decode_batch`:
```python
import bytetok as btok
tokenizer = btok.get_tokenizer("gpt4o")
tokenizer.train("your training corpus here...", vocab_size=1000)
# Encode a batch of texts in parallel.
texts = ["First document...", "Second document...", "Third document..."]
encoded = tokenizer.encode_batch(texts)
# Decode the batch in parallel.
decoded = tokenizer.decode_batch(encoded, errors="replace")
assert decoded[0] == "First document..."
```
### Special Tokens
Register special tokens after training, then encode with a strategy to control how they are handled:
```python
import bytetok as btok
tokenizer = btok.get_tokenizer("gpt4o")
tokenizer.train("your training corpus here...", vocab_size=1000)
# Set special tokens (IDs must be >= vocab size).
tokenizer.set_special_tokens({"<|endoftext|>": 15005, "<|pad|>": 13005})
# Encode with strategy: "all" allows special tokens in text; "none" ignores them.
strategy = btok.get_strategy("all")
tokens = tokenizer.encode("Hello<|endoftext|>world", strategy=strategy)
# Batch encoding with special tokens.
encoded = tokenizer.encode_batch(
["Doc one.", "Doc two<|pad|>padding", "Doc three."],
strategy=strategy,
)
```
ByteTok automatically checks for conflicts, such as special tokens that would replace existing tokens in the vocabulary or duplicate one another.
## Acknowledgment
ByteTok is inspired by Andrej Karpathy's [minbpe](https://github.com/karpathy/minbpe). A walkthrough of the _minbpe_ repository is available on his YouTube channel [here](https://youtu.be/zduSFxRajkE).
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
| text/markdown; charset=UTF-8; variant=GFM | Vihanga Malaviarachchi | null | null | null | MIT | bpe, tokenizer, nlp, byte-pair-encoding | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"regex"
] | [] | [] | [] | [
"Homepage, https://github.com/VihangaFTW/bytetok",
"Issues, https://github.com/VihangaFTW/bytetok/issues",
"Repository, https://github.com/VihangaFTW/bytetok"
] | maturin/1.12.3 | 2026-02-20T08:45:20.824770 | bytetok-0.2.0.tar.gz | 37,514 | f6/4b/5411501e548b3ac0115626557aeb6acb62e4b9aec395827345c8e0095e0e/bytetok-0.2.0.tar.gz | source | sdist | null | false | 6bea32662214d3876a8b11e08ba9639b | 3e96686a7a426f6d5cbe19b64eaa9eb5ec45ebd01f367587d9ed8a7ae539c196 | f64b5411501e548b3ac0115626557aeb6acb62e4b9aec395827345c8e0095e0e | null | [] | 455 |
2.4 | MindsDB | 26.0.0rc3 | MindsDB's AI SQL Server enables developers to build AI tools that need access to real-time data to perform their tasks |
<a name="readme-top"></a>
<div align="center">
<a href="https://pypi.org/project/MindsDB/" target="_blank"><img src="https://badge.fury.io/py/MindsDB.svg" alt="MindsDB Release"></a>
<a href="https://www.python.org/downloads/" target="_blank"><img src="https://img.shields.io/badge/python-3.10.x%7C%203.11.x%7C%203.12.x%7C%203.13.x-brightgreen.svg" alt="Python supported"></a>
<a href="https://hub.docker.com/u/mindsdb" target="_blank"><img src="https://img.shields.io/docker/pulls/mindsdb/mindsdb" alt="Docker pulls"></a>
<br />
<br />
<a href="https://trendshift.io/repositories/3068" target="_blank"><img src="https://trendshift.io/api/badge/repositories/3068" alt="mindsdb%2Fmindsdb | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
<a href="https://github.com/mindsdb/mindsdb">
<img src="/docs/assets/mindsdb_logo.png" alt="MindsDB" width="300">
</a>
<p align="center">
<br />
<a href="https://www.mindsdb.com?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo">Website</a>
·
<a href="https://docs.mindsdb.com?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo">Docs</a>
·
<a href="https://mindsdb.com/contact">Contact us for a Demo</a>
·
<a href="https://mindsdb.com/joincommunity?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo">Community Slack</a>
</p>
</div>
----------------------------------------
MindsDB enables humans, AI, agents, and applications to get highly accurate answers across large-scale data sources.
<a href="https://www.youtube.com/watch?v=MX3OKpnsoLM" target="_blank">
<img src="https://github.com/user-attachments/assets/119e7b82-f901-4214-a26f-ff7c5ad86064" alt="MindsDB Demo">
</a>
## Install MindsDB Server
MindsDB is an open-source server that can be deployed anywhere - from your laptop to the cloud, and everywhere in between. And yes, you can customize it to your heart's content.
* [Using Docker Desktop](https://docs.mindsdb.com/setup/self-hosted/docker-desktop). This is the fastest and recommended way to get started and have it all running.
* [Using Docker](https://docs.mindsdb.com/setup/self-hosted/docker). This is also simple, but gives you more flexibility on how to further customize your server.
[MindsDB has an MCP server built in](https://docs.mindsdb.com/mcp/overview) that enables your MCP applications to connect, unify and respond to questions over large-scale federated data—spanning databases, data warehouses, and SaaS applications.
----------------------------------------
# Core Philosophy: Connect, Unify, Respond
MindsDB's architecture is built around three fundamental capabilities:
## [Connect](https://docs.mindsdb.com/integrations/data-overview) Your Data
You can connect to hundreds of enterprise [data sources (learn more)](https://docs.mindsdb.com/integrations/data-overview). These integrations allow MindsDB to access data wherever it resides, forming the foundation for all other capabilities.
## [Unify](https://docs.mindsdb.com/mindsdb_sql/overview) Your Data
In many situations, it’s important to be able to prepare and unify data before generating responses from it. MindsDB SQL offers knowledge bases and views that allow indexing and organizing structured and unstructured data as if it were unified in a single system.
* [**KNOWLEDGE BASES**](https://docs.mindsdb.com/mindsdb_sql/knowledge-bases) – Index and organize unstructured data for efficient Q&A.
* [**VIEWS**](https://docs.mindsdb.com/mindsdb_sql/sql/create/view) – Simplify data access by creating unified views across different sources (no-ETL).
Unification of data can be automated using JOBs:
* [**JOBS**](https://docs.mindsdb.com/mindsdb_sql/sql/create/jobs) – Schedule synchronization and transformation tasks for real-time processing.
## [Respond](https://docs.mindsdb.com/mindsdb_sql/agents/agent) From Your Data
Chat with Your Data
* [**AGENTS**](https://docs.mindsdb.com/mindsdb_sql/agents/agent) – Configure built-in agents specialized in answering questions over your connected and unified data.
* [**MCP**](https://docs.mindsdb.com/mcp/overview) – Connect to MindsDB through the MCP (Model Context Protocol) for seamless interaction.
----------------------------------------
## 🤝 Contribute
Interested in contributing to MindsDB? Follow our [installation guide for development](https://docs.mindsdb.com/contribute/install?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo).
You can find our [contribution guide here](https://docs.mindsdb.com/contribute/contribute?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo).
We welcome suggestions! Feel free to open new issues with your ideas, and we’ll guide you.
This project adheres to a [Contributor Code of Conduct](https://github.com/mindsdb/mindsdb/blob/main/CODE_OF_CONDUCT.md). By participating, you agree to follow its terms.
Also, check out our [community rewards and programs](https://mindsdb.com/community?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo).
## 🤍 Support
If you find a bug, please submit an [issue on GitHub](https://github.com/mindsdb/mindsdb/issues/new/choose).
Here’s how you can get community support:
* Ask a question in our [Slack Community](https://mindsdb.com/joincommunity?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo).
* Join our [GitHub Discussions](https://github.com/mindsdb/mindsdb/discussions).
* Post on [Stack Overflow](https://stackoverflow.com/questions/tagged/mindsdb) with the MindsDB tag.
For commercial support, please [contact the MindsDB team](https://mindsdb.com/contact?utm_medium=community&utm_source=github&utm_campaign=mindsdb%20repo).
## 💚 Current Contributors
<a href="https://github.com/mindsdb/mindsdb/graphs/contributors">
<img src="https://contributors-img.web.app/image?repo=mindsdb/mindsdb" />
</a>
Generated with [contributors-img](https://contributors-img.web.app).
## 🔔 Subscribe for Updates
Join our [Slack community](https://mindsdb.com/joincommunity)
| text/markdown | MindsDB Inc | jorge@mindsdb.com | null | null | Elastic License 2.0 | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/mindsdb/mindsdb | https://pypi.org/project/mindsdb/ | <3.14,>=3.10 | [] | [] | [] | [
"packaging",
"flask==3.0.3",
"werkzeug==3.0.6",
"flask-restx<2.0.0,>=1.3.0",
"pandas==2.2.3",
"python-multipart==0.0.20",
"cryptography>=35.0",
"psycopg[binary]",
"psutil~=7.0",
"sqlalchemy<3.0.0,>=2.0.0",
"psycopg2-binary",
"alembic>=1.3.3",
"redis<6.0.0,>=5.0.0",
"walrus==0.9.3",
"flask-compress>=1.0.0",
"appdirs>=1.0.0",
"mindsdb-sql-parser~=0.13.8",
"pydantic==2.12.5",
"duckdb==1.3.0; sys_platform == \"win32\"",
"duckdb~=1.3.2; sys_platform != \"win32\"",
"requests==2.32.4",
"dateparser==1.2.0",
"dill==0.3.6",
"numpy~=2.0",
"pytz",
"botocore",
"boto3>=1.34.131",
"python-dateutil",
"lark",
"prometheus-client==0.20.0",
"sentry-sdk[flask]==2.14.0",
"pyaml==23.12.0",
"uvicorn<1.0.0,>=0.30.0",
"a2wsgi~=1.10.10",
"starlette>=0.49.1",
"sse-starlette==2.3.3",
"pydantic_core>=2.33.2",
"pyjwt==2.10.1",
"pymupdf==1.25.2",
"filetype",
"charset-normalizer",
"openpyxl",
"aipdf==0.0.7.0",
"pyarrow<=19.0.0",
"orjson==3.11.3",
"mind-castle>=0.4.9",
"pydantic-ai>=0.0.14",
"bs4",
"urllib3>=2.6.3",
"openai<3.0.0,>=2.9.0; extra == \"agents\"",
"langchain-community==0.3.27; extra == \"agents\"",
"langchain-core==0.3.77; extra == \"agents\"",
"langchain-experimental==0.3.4; extra == \"agents\"",
"transformers>=4.42.4; extra == \"agents\"",
"mindsdb-evaluator==0.0.21; extra == \"agents\"",
"litellm==1.63.14; extra == \"agents\"",
"mcp~=1.10.1; extra == \"agents\"",
"httpx==0.28.1; extra == \"agents\"",
"jwcrypto==1.5.6; extra == \"agents\"",
"typing-extensions==4.14.1; extra == \"agents\"",
"pre-commit>=2.16.0; extra == \"dev\"",
"watchfiles==0.19.0; extra == \"dev\"",
"setuptools==78.1.1; extra == \"dev\"",
"wheel; extra == \"dev\"",
"deptry==0.20.0; extra == \"dev\"",
"twine; extra == \"dev\"",
"importlib_metadata==7.2.1; extra == \"dev\"",
"ruff==0.11.11; extra == \"dev\"",
"lxml==5.3.0; extra == \"kb\"",
"pgvector==0.3.6; extra == \"kb\"",
"langchain-core==0.3.77; extra == \"kb\"",
"litellm==1.63.14; extra == \"kb\"",
"langfuse==2.53.3; extra == \"langfuse\"",
"opentelemetry-api==1.27.0; extra == \"opentelemetry\"",
"opentelemetry-sdk==1.27.0; extra == \"opentelemetry\"",
"opentelemetry-exporter-otlp==1.27.0; extra == \"opentelemetry\"",
"opentelemetry-instrumentation-requests==0.48b0; extra == \"opentelemetry\"",
"opentelemetry-instrumentation-flask==0.48b0; extra == \"opentelemetry\"",
"opentelemetry-distro==0.48b0; extra == \"opentelemetry\"",
"scipy==1.15.3; extra == \"test\"",
"docker>=5.0.3; extra == \"test\"",
"openai<3.0.0,>=2.9.0; extra == \"test\"",
"pytest<9.0.0,>=8.3.5; extra == \"test\"",
"pytest-subtests; extra == \"test\"",
"pytest-xdist; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-json-report==1.5.0; extra == \"test\"",
"pytest-metadata==3.1.1; extra == \"test\"",
"python-dotenv==1.1.1; extra == \"test\"",
"responses; extra == \"test\"",
"coveralls; extra == \"test\"",
"locust; extra == \"test\"",
"ollama>=0.1.7; extra == \"test\"",
"anthropic>=0.21.3; extra == \"test\"",
"langchain-google-genai>=2.0.0; extra == \"test\"",
"mindsdb-sdk; extra == \"test\"",
"filelock==3.20.1; extra == \"test\"",
"mysql-connector-python==9.1.0; extra == \"test\"",
"walrus==0.9.3; extra == \"test\"",
"pymongo==4.8.0; extra == \"test\"",
"pytest-json-report==1.5.0; extra == \"test\"",
"appdirs>=1.0.0; extra == \"test\"",
"anthropic>=0.21.3; extra == \"all-extras\"",
"opentelemetry-sdk==1.27.0; extra == \"all-extras\"",
"opentelemetry-instrumentation-flask==0.48b0; extra == \"all-extras\"",
"setuptools==78.1.1; extra == \"all-extras\"",
"pytest-xdist; extra == \"all-extras\"",
"pytest-json-report==1.5.0; extra == \"all-extras\"",
"lxml==5.3.0; extra == \"all-extras\"",
"opentelemetry-api==1.27.0; extra == \"all-extras\"",
"opentelemetry-instrumentation-requests==0.48b0; extra == \"all-extras\"",
"watchfiles==0.19.0; extra == \"all-extras\"",
"langchain-google-genai>=2.0.0; extra == \"all-extras\"",
"coveralls; extra == \"all-extras\"",
"openai<3.0.0,>=2.9.0; extra == \"all-extras\"",
"mindsdb-evaluator==0.0.21; extra == \"all-extras\"",
"langchain-core==0.3.77; extra == \"all-extras\"",
"locust; extra == \"all-extras\"",
"scipy==1.15.3; extra == \"all-extras\"",
"mindsdb-sdk; extra == \"all-extras\"",
"deptry==0.20.0; extra == \"all-extras\"",
"pytest-cov; extra == \"all-extras\"",
"mcp~=1.10.1; extra == \"all-extras\"",
"mysql-connector-python==9.1.0; extra == \"all-extras\"",
"jwcrypto==1.5.6; extra == \"all-extras\"",
"opentelemetry-distro==0.48b0; extra == \"all-extras\"",
"appdirs>=1.0.0; extra == \"all-extras\"",
"docker>=5.0.3; extra == \"all-extras\"",
"pytest<9.0.0,>=8.3.5; extra == \"all-extras\"",
"langfuse==2.53.3; extra == \"all-extras\"",
"ollama>=0.1.7; extra == \"all-extras\"",
"ruff==0.11.11; extra == \"all-extras\"",
"pre-commit>=2.16.0; extra == \"all-extras\"",
"importlib_metadata==7.2.1; extra == \"all-extras\"",
"python-dotenv==1.1.1; extra == \"all-extras\"",
"filelock==3.20.1; extra == \"all-extras\"",
"langchain-experimental==0.3.4; extra == \"all-extras\"",
"pymongo==4.8.0; extra == \"all-extras\"",
"pytest-metadata==3.1.1; extra == \"all-extras\"",
"pytest-subtests; extra == \"all-extras\"",
"langchain-community==0.3.27; extra == \"all-extras\"",
"typing-extensions==4.14.1; extra == \"all-extras\"",
"pgvector==0.3.6; extra == \"all-extras\"",
"wheel; extra == \"all-extras\"",
"twine; extra == \"all-extras\"",
"walrus==0.9.3; extra == \"all-extras\"",
"opentelemetry-exporter-otlp==1.27.0; extra == \"all-extras\"",
"transformers>=4.42.4; extra == \"all-extras\"",
"responses; extra == \"all-extras\"",
"litellm==1.63.14; extra == \"all-extras\"",
"httpx==0.28.1; extra == \"all-extras\"",
"anthropic>=0.21.3; extra == \"all\"",
"opentelemetry-sdk==1.27.0; extra == \"all\"",
"opentelemetry-instrumentation-flask==0.48b0; extra == \"all\"",
"setuptools==78.1.1; extra == \"all\"",
"pytest-xdist; extra == \"all\"",
"pytest-json-report==1.5.0; extra == \"all\"",
"lxml==5.3.0; extra == \"all\"",
"opentelemetry-api==1.27.0; extra == \"all\"",
"opentelemetry-instrumentation-requests==0.48b0; extra == \"all\"",
"watchfiles==0.19.0; extra == \"all\"",
"langchain-google-genai>=2.0.0; extra == \"all\"",
"coveralls; extra == \"all\"",
"openai<3.0.0,>=2.9.0; extra == \"all\"",
"mindsdb-evaluator==0.0.21; extra == \"all\"",
"langchain-core==0.3.77; extra == \"all\"",
"locust; extra == \"all\"",
"scipy==1.15.3; extra == \"all\"",
"mindsdb-sdk; extra == \"all\"",
"deptry==0.20.0; extra == \"all\"",
"pytest-cov; extra == \"all\"",
"mcp~=1.10.1; extra == \"all\"",
"mysql-connector-python==9.1.0; extra == \"all\"",
"jwcrypto==1.5.6; extra == \"all\"",
"opentelemetry-distro==0.48b0; extra == \"all\"",
"appdirs>=1.0.0; extra == \"all\"",
"docker>=5.0.3; extra == \"all\"",
"pytest<9.0.0,>=8.3.5; extra == \"all\"",
"langfuse==2.53.3; extra == \"all\"",
"ollama>=0.1.7; extra == \"all\"",
"ruff==0.11.11; extra == \"all\"",
"pre-commit>=2.16.0; extra == \"all\"",
"importlib_metadata==7.2.1; extra == \"all\"",
"python-dotenv==1.1.1; extra == \"all\"",
"filelock==3.20.1; extra == \"all\"",
"langchain-experimental==0.3.4; extra == \"all\"",
"pymongo==4.8.0; extra == \"all\"",
"pytest-metadata==3.1.1; extra == \"all\"",
"pytest-subtests; extra == \"all\"",
"langchain-community==0.3.27; extra == \"all\"",
"typing-extensions==4.14.1; extra == \"all\"",
"pgvector==0.3.6; extra == \"all\"",
"wheel; extra == \"all\"",
"twine; extra == \"all\"",
"walrus==0.9.3; extra == \"all\"",
"opentelemetry-exporter-otlp==1.27.0; extra == \"all\"",
"transformers>=4.42.4; extra == \"all\"",
"responses; extra == \"all\"",
"litellm==1.63.14; extra == \"all\"",
"httpx==0.28.1; extra == \"all\"",
"sqlalchemy-access>=2.0.0; sys_platform == \"win32\" and extra == \"access\"",
"pyodbc>=5.0.0; sys_platform == \"win32\" and extra == \"access\"",
"aerospike~=13.0.0; extra == \"aerospike\"",
"jaydebeapi; extra == \"altibase\"",
"pyodbc; extra == \"altibase\"",
"anthropic==0.18.1; extra == \"anthropic\"",
"mysql-connector-python==9.1.0; extra == \"apache-doris\"",
"mysql-connector-python==9.1.0; extra == \"aurora\"",
"azure-storage-blob; extra == \"azure-blob\"",
"azure-core>=1.38.0; extra == \"azure-blob\"",
"pydantic-settings>=2.1.0; extra == \"bedrock\"",
"sqlalchemy-bigquery; extra == \"bigquery\"",
"google-cloud-bigquery[pandas]; extra == \"bigquery\"",
"google-auth; extra == \"bigquery\"",
"google-auth-oauthlib; extra == \"bigquery\"",
"binance-connector; extra == \"binance\"",
"scikit-learn==1.5.2; extra == \"byom\"",
"virtualenv; extra == \"byom\"",
"scylla-driver; extra == \"cassandra\"",
"chromadb~=0.6.3; extra == \"chromadb\"",
"onnxruntime==1.20.1; extra == \"chromadb\"",
"ckanapi; extra == \"ckan\"",
"clickhouse-sqlalchemy>=0.3.1; extra == \"clickhouse\"",
"sqlalchemy-spanner; extra == \"cloud-spanner\"",
"sqlparse>=0.5.4; extra == \"cloud-spanner\"",
"google-cloud-spanner; extra == \"cloud-spanner\"",
"pymssql>=2.1.4; extra == \"cloud-sql\"",
"mysql-connector-python==9.1.0; extra == \"cloud-sql\"",
"aiohttp>=3.13.3; extra == \"cohere\"",
"cohere==4.5.1; extra == \"cohere\"",
"couchbase==4.3.1; extra == \"couchbase\"",
"couchbase==4.3.1; extra == \"couchbasevector\"",
"sqlalchemy-cratedb; extra == \"crate\"",
"urllib3>=2.6.0; extra == \"crate\"",
"crate; extra == \"crate\"",
"pymysql; extra == \"d0lt\"",
"databend-sqlalchemy; extra == \"databend\"",
"databricks-sql-connector<4.0.0,>=3.7.1; extra == \"databricks\"",
"scylla-driver; extra == \"datastax\"",
"ibm-db; extra == \"db2\"",
"ibm-db-sa; extra == \"db2\"",
"jaydebeapi; extra == \"derby\"",
"pymongo==4.8.0; extra == \"documentdb\"",
"sqlalchemy_dremio; extra == \"dremio\"",
"urllib3>=2.2.2; extra == \"dropbox\"",
"dropbox; extra == \"dropbox\"",
"pydruid; extra == \"druid\"",
"portalocker; extra == \"duckdb-faiss\"",
"faiss-cpu>=1.7.4; extra == \"duckdb-faiss\"",
"mysql-connector-python==9.1.0; extra == \"edgelessdb\"",
"urllib3>=2.6.0; extra == \"elasticsearch\"",
"elasticsearch<9.0.0,>=8.0.0; extra == \"elasticsearch\"",
"chardet; extra == \"email\"",
"pyodbc; extra == \"empress\"",
"eventbrite-python; extra == \"eventbrite\"",
"faunadb; extra == \"faunadb\"",
"sqlalchemy-firebird<3.0.0,>=2.0.0; extra == \"firebird\"",
"fdb; extra == \"firebird\"",
"google-auth; extra == \"gcs\"",
"google-cloud-storage; extra == \"gcs\"",
"gcsfs; extra == \"gcs\"",
"fsspec; extra == \"gcs\"",
"aiohttp>=3.13.3; extra == \"gcs\"",
"pygithub==2.6.1; extra == \"github\"",
"python-gitlab; extra == \"gitlab\"",
"google-api-python-client; extra == \"gmail\"",
"google-auth; extra == \"gmail\"",
"google-auth-oauthlib; extra == \"gmail\"",
"google-analytics-admin; extra == \"google-analytics\"",
"google-api-python-client; extra == \"google-analytics\"",
"google-auth; extra == \"google-analytics\"",
"google-api-python-client; extra == \"google-books\"",
"google-auth; extra == \"google-books\"",
"google-api-python-client; extra == \"google-calendar\"",
"google-auth; extra == \"google-calendar\"",
"google-auth-oauthlib; extra == \"google-calendar\"",
"google-api-python-client; extra == \"google-content-shopping\"",
"google-auth; extra == \"google-content-shopping\"",
"google-auth; extra == \"google-fit\"",
"tzlocal; extra == \"google-fit\"",
"google-api-python-client; extra == \"google-fit\"",
"google; extra == \"google-fit\"",
"google-auth-oauthlib; extra == \"google-fit\"",
"pillow; extra == \"google-gemini\"",
"google-generativeai==0.3.2; extra == \"google-gemini\"",
"protobuf>=4.25.8; extra == \"google-search\"",
"google-auth; extra == \"google-search\"",
"google-api-python-client; extra == \"google-search\"",
"urllib3>=2.2.2; extra == \"google-search\"",
"zipp>=3.19.1; extra == \"google-search\"",
"mysql-connector-python==9.1.0; extra == \"greptimedb\"",
"tiktoken; extra == \"groq\"",
"pydantic-settings>=2.1.0; extra == \"groq\"",
"sqlalchemy-hana; extra == \"hana\"",
"hdbcli; extra == \"hana\"",
"pyhive; extra == \"hive\"",
"thrift; extra == \"hive\"",
"thrift-sasl; extra == \"hive\"",
"pyodbc==4.0.34; extra == \"hsqldb\"",
"hubspot-api-client==12.0.0; extra == \"hubspot\"",
"huggingface-hub; extra == \"huggingface-api\"",
"hugging_py_face; extra == \"huggingface-api\"",
"filelock>=3.20.3; extra == \"huggingface-api\"",
"evaluate==0.4.3; extra == \"huggingface\"",
"datasets==2.16.1; extra == \"huggingface\"",
"torch==2.8.0; extra == \"huggingface\"",
"transformers>=4.42.4; extra == \"huggingface\"",
"huggingface-hub==0.29.3; extra == \"huggingface\"",
"nltk==3.9.1; extra == \"huggingface\"",
"evaluate==0.4.3; extra == \"huggingface-cpu\"",
"datasets==2.16.1; extra == \"huggingface-cpu\"",
"torch==2.8.0+cpu; extra == \"huggingface-cpu\"",
"transformers>=4.42.4; extra == \"huggingface-cpu\"",
"huggingface-hub==0.29.3; extra == \"huggingface-cpu\"",
"nltk==3.9.1; extra == \"huggingface-cpu\"",
"ibm-cos-sdk; extra == \"ibm-cos\"",
"pyignite; extra == \"ignite\"",
"impyla; extra == \"impala\"",
"influxdb3-python; extra == \"influxdb\"",
"urllib3>=2.6.3; extra == \"influxdb\"",
"sqlalchemy-informix; extra == \"informix\"",
"pyodbc; extra == \"ingres\"",
"sqlalchemy-ingres[all]; extra == \"ingres\"",
"atlassian-python-api; extra == \"jira\"",
"lancedb~=0.3.1; extra == \"lancedb\"",
"lance; extra == \"lancedb\"",
"libsql-experimental; extra == \"libsql\"",
"protobuf==4.25.8; extra == \"lindorm\"",
"pyphoenix; extra == \"lindorm\"",
"phoenixdb; extra == \"lindorm\"",
"litellm==1.80.8; extra == \"litellm\"",
"pydantic-settings>=2.1.0; extra == \"llama-index\"",
"llama-index==0.13.0; extra == \"llama-index\"",
"llama-index-readers-web; extra == \"llama-index\"",
"llama-index-embeddings-openai; extra == \"llama-index\"",
"mysql-connector-python==9.1.0; extra == \"mariadb\"",
"pymysql; extra == \"matrixone\"",
"jaydebeapi; extra == \"maxdb\"",
"mediawikiapi; extra == \"mediawiki\"",
"mendeley; extra == \"mendeley\"",
"pymilvus==2.3; extra == \"milvus\"",
"sqlparse>=0.5.4; extra == \"mlflow\"",
"mlflow; extra == \"mlflow\"",
"pymonetdb; extra == \"monetdb\"",
"sqlalchemy-monetdb; extra == \"monetdb\"",
"pymongo==4.8.0; extra == \"mongodb\"",
"msal; extra == \"ms-one-drive\"",
"msal; extra == \"ms-teams\"",
"botbuilder-schema; extra == \"ms-teams\"",
"botframework-connector; extra == \"ms-teams\"",
"pymssql>=2.1.4; extra == \"mssql\"",
"pymssql>=2.1.4; extra == \"mssql-odbc\"",
"pyodbc>=5.2.0; extra == \"mssql-odbc\"",
"mysql-connector-python==9.1.0; extra == \"mysql\"",
"requests-oauthlib>=1.3.1; extra == \"netsuite\"",
"newsapi-python; extra == \"newsapi\"",
"notion-client; extra == \"notion\"",
"jaydebeapi; extra == \"nuo-jdbc\"",
"mysql-connector-python==9.1.0; extra == \"oceanbase\"",
"tiktoken; extra == \"openai\"",
"openbb==4.3.1; extra == \"openbb\"",
"openbb-core==1.3.1; extra == \"openbb\"",
"overpy; extra == \"openstreetmap\"",
"oracledb==3.3.0; extra == \"oracle\"",
"google-generativeai>=0.1.0; extra == \"palm\"",
"paypalrestsdk; extra == \"paypal\"",
"pgvector==0.3.6; extra == \"pgvector\"",
"pyphoenix; extra == \"phoenix\"",
"phoenixdb; extra == \"phoenix\"",
"pinecone-client==5.0.1; extra == \"pinecone\"",
"pinotdb; extra == \"pinot\"",
"plaid-python; extra == \"plaid\"",
"urllib3>=2.6.0; extra == \"plaid\"",
"mysql-connector-python==9.1.0; extra == \"planetscale\"",
"portkey-ai>=1.8.2; extra == \"portkey\"",
"pycaret[models]; extra == \"pycaret\"",
"pycaret; extra == \"pycaret\"",
"urllib3>=2.6.0; extra == \"qdrant\"",
"qdrant-client; extra == \"qdrant\"",
"questdb; extra == \"questdb\"",
"qbosdk; extra == \"quickbooks\"",
"praw; extra == \"reddit\"",
"rocketchat_API; extra == \"rocket-chat\"",
"urllib3>=2.6.0; extra == \"rocket-chat\"",
"mysql-connector-python==9.1.0; extra == \"rockset\"",
"salesforce_api==0.1.45; extra == \"salesforce\"",
"scylla-driver; extra == \"scylla\"",
"urllib3>=2.6.0; extra == \"sendinblue\"",
"sib_api_v3_sdk; extra == \"sendinblue\"",
"ShopifyAPI; extra == \"shopify\"",
"mysql-connector-python==9.1.0; extra == \"singlestore\"",
"slack_sdk==3.30.0; extra == \"slack\"",
"snowflake-connector-python[pandas]==3.15.0; extra == \"snowflake\"",
"snowflake-sqlalchemy==1.7.0; extra == \"snowflake\"",
"solace-pubsubplus; extra == \"solace\"",
"sqlalchemy-solr; extra == \"solr\"",
"sqlparse>=0.5.4; extra == \"solr\"",
"sqlalchemy-sqlany; extra == \"sqlany\"",
"sqlanydb; extra == \"sqlany\"",
"pysqream>=3.2.5; extra == \"sqreamdb\"",
"pysqream_sqlalchemy>=0.8; extra == \"sqreamdb\"",
"mysql-connector-python==9.1.0; extra == \"starrocks\"",
"urllib3>=2.2.2; extra == \"strava\"",
"stravalib; extra == \"strava\"",
"stripe; extra == \"stripe\"",
"pysurrealdb; extra == \"surrealdb\"",
"symbl; extra == \"symbl\"",
"requests>=2.32.4; extra == \"tdengine\"",
"urllib3>=2.5.0; extra == \"tdengine\"",
"taospy; extra == \"tdengine\"",
"teradatasqlalchemy; extra == \"teradata\"",
"teradatasql; extra == \"teradata\"",
"mysql-connector-python==9.1.0; extra == \"tidb\"",
"pyhive; extra == \"trino\"",
"trino~=0.313.0; extra == \"trino\"",
"twilio; extra == \"twilio\"",
"tweepy; extra == \"twitter\"",
"google-auth; extra == \"vertex\"",
"google-cloud-aiplatform>=1.35.0; extra == \"vertex\"",
"google-auth-oauthlib; extra == \"vertex\"",
"vertica-python; extra == \"vertica\"",
"sqlalchemy-vertica-python; extra == \"vertica\"",
"mysql-connector-python==9.1.0; extra == \"vitess\"",
"weaviate-client~=3.24.2; extra == \"weaviate\"",
"html2text; extra == \"web\"",
"dotty-dict==1.3.1; extra == \"webz\"",
"webzio==1.0.2; extra == \"webz\"",
"twilio; extra == \"whatsapp\"",
"xata; extra == \"xata\"",
"youtube-transcript-api; extra == \"youtube\"",
"google-api-python-client; extra == \"youtube\"",
"google-auth; extra == \"youtube\"",
"google-auth-oauthlib; extra == \"youtube\"",
"zenpy; extra == \"zendesk\"",
"pyzotero; extra == \"zotero\"",
"pymonetdb; extra == \"all-handlers-extras\"",
"databricks-sql-connector<4.0.0,>=3.7.1; extra == \"all-handlers-extras\"",
"hubspot-api-client==12.0.0; extra == \"all-handlers-extras\"",
"salesforce_api==0.1.45; extra == \"all-handlers-extras\"",
"llama-index==0.13.0; extra == \"all-handlers-extras\"",
"google-generativeai>=0.1.0; extra == \"all-handlers-extras\"",
"google-cloud-bigquery[pandas]; extra == \"all-handlers-extras\"",
"sqlalchemy-firebird<3.0.0,>=2.0.0; extra == \"all-handlers-extras\"",
"botframework-connector; extra == \"all-handlers-extras\"",
"binance-connector; extra == \"all-handlers-extras\"",
"sqlalchemy-bigquery; extra == \"all-handlers-extras\"",
"snowflake-sqlalchemy==1.7.0; extra == \"all-handlers-extras\"",
"urllib3>=2.6.3; extra == \"all-handlers-extras\"",
"urllib3>=2.2.2; extra == \"all-handlers-extras\"",
"sqlalchemy-sqlany; extra == \"all-handlers-extras\"",
"fsspec; extra == \"all-handlers-extras\"",
"requests>=2.32.4; extra == \"all-handlers-extras\"",
"pillow; extra == \"all-handlers-extras\"",
"dotty-dict==1.3.1; extra == \"all-handlers-extras\"",
"thrift-sasl; extra == \"all-handlers-extras\"",
"aerospike~=13.0.0; extra == \"all-handlers-extras\"",
"google-api-python-client; extra == \"all-handlers-extras\"",
"lance; extra == \"all-handlers-extras\"",
"pinotdb; extra == \"all-handlers-extras\"",
"trino~=0.313.0; extra == \"all-handlers-extras\"",
"scylla-driver; extra == \"all-handlers-extras\"",
"sqlalchemy-ingres[all]; extra == \"all-handlers-extras\"",
"portalocker; extra == \"all-handlers-extras\"",
"sqlalchemy-informix; extra == \"all-handlers-extras\"",
"jaydebeapi; extra == \"all-handlers-extras\"",
"scikit-learn==1.5.2; extra == \"all-handlers-extras\"",
"weaviate-client~=3.24.2; extra == \"all-handlers-extras\"",
"rocketchat_API; extra == \"all-handlers-extras\"",
"onnxruntime==1.20.1; extra == \"all-handlers-extras\"",
"sqlanydb; extra == \"all-handlers-extras\"",
"ibm-db-sa; extra == \"all-handlers-extras\"",
"solace-pubsubplus; extra == \"all-handlers-extras\"",
"google-cloud-spanner; extra == \"all-handlers-extras\"",
"pinecone-client==5.0.1; extra == \"all-handlers-extras\"",
"pymilvus==2.3; extra == \"all-handlers-extras\"",
"virtualenv; extra == \"all-handlers-extras\"",
"tweepy; extra == \"all-handlers-extras\"",
"thrift; extra == \"all-handlers-extras\"",
"faiss-cpu>=1.7.4; extra == \"all-handlers-extras\"",
"google-cloud-storage; extra == \"all-handlers-extras\"",
"qdrant-client; extra == \"all-handlers-extras\"",
"impyla; extra == \"all-handlers-extras\"",
"transformers>=4.42.4; extra == \"all-handlers-extras\"",
"vertica-python; extra == \"all-handlers-extras\"",
"chardet; extra == \"all-handlers-extras\"",
"ckanapi; extra == \"all-handlers-extras\"",
"huggingface-hub==0.29.3; extra == \"all-handlers-extras\"",
"pycaret; extra == \"all-handlers-extras\"",
"urllib3>=2.5.0; extra == \"all-handlers-extras\"",
"pyodbc>=5.2.0; extra == \"all-handlers-extras\"",
"dropbox; extra == \"all-handlers-extras\"",
"sib_api_v3_sdk; extra == \"all-handlers-extras\"",
"protobuf==4.25.8; extra == \"all-handlers-extras\"",
"libsql-experimental; extra == \"all-handlers-extras\"",
"sqlalchemy-solr; extra == \"all-handlers-extras\"",
"pygithub==2.6.1; extra == \"all-handlers-extras\"",
"newsapi-python; extra == \"all-handlers-extras\"",
"zipp>=3.19.1; extra == \"all-handlers-extras\"",
"gcsfs; extra == \"all-handlers-extras\"",
"databend-sqlalchemy; extra == \"all-handlers-extras\"",
"aiohttp>=3.13.3; extra == \"all-handlers-extras\"",
"google-analytics-admin; extra == \"all-handlers-extras\"",
"portkey-ai>=1.8.2; extra == \"all-handlers-extras\"",
"sqlalchemy-monetdb; extra == \"all-handlers-extras\"",
"atlassian-python-api; extra == \"all-handlers-extras\"",
"pycaret[models]; extra == \"all-handlers-extras\"",
"mlflow; extra == \"all-handlers-extras\"",
"webzio==1.0.2; extra == \"all-handlers-extras\"",
"msal; extra == \"all-handlers-extras\"",
"datasets==2.16.1; extra == \"all-handlers-extras\"",
"pymongo==4.8.0; extra == \"all-handlers-extras\"",
"pyodbc==4.0.34; extra == \"all-handlers-extras\"",
"tiktoken; extra == \"all-handlers-extras\"",
"pyhive; extra == \"all-handlers-extras\"",
"hugging_py_face; extra == \"all-handlers-extras\"",
"tzlocal; extra == \"all-handlers-extras\"",
"pysqream_sqlalchemy>=0.8; extra == \"all-handlers-extras\"",
"botbuilder-schema; extra == \"all-handlers-extras\"",
"qbosdk; extra == \"all-handlers-extras\"",
"stripe; extra == \"all-handlers-extras\"",
"slack_sdk==3.30.0; extra == \"all-handlers-extras\"",
"cohere==4.5.1; extra == \"all-handlers-extras\"",
"crate; extra == \"all-handlers-extras\"",
"sqlalchemy-cratedb; extra == \"all-handlers-extras\"",
"anthropic==0.18.1; extra == \"all-handlers-extras\"",
"pyignite; extra == \"all-handlers-extras\"",
"taospy; extra == \"all-handlers-extras\"",
"pymysql; extra == \"all-handlers-extras\"",
"influxdb3-python; extra == \"all-handlers-extras\"",
"symbl; extra == \"all-handlers-extras\"",
"fdb; extra == \"all-handlers-extras\"",
"hdbcli; extra == \"all-handlers-extras\"",
"openbb==4.3.1; extra == \"all-handlers-extras\"",
"google-auth; extra == \"all-handlers-extras\"",
"stravalib; extra == \"all-handlers-extras\"",
"sqlalchemy-hana; extra == \"all-handlers-extras\"",
"torch==2.8.0; extra == \"all-handlers-extras\"",
"sqlparse>=0.5.4; extra == \"all-handlers-extras\"",
"nltk==3.9.1; extra == \"all-handlers-extras\"",
"pyzotero; extra == \"all-handlers-extras\"",
"plaid-python; extra == \"all-handlers-extras\"",
"mysql-connector-python==9.1.0; extra == \"all-handlers-extras\"",
"google-generativeai==0.3.2; extra == \"all-handlers-extras\"",
"openbb-core==1.3.1; extra == \"all-handlers-extras\"",
"lancedb~=0.3.1; extra == \"all-handlers-extras\"",
"mendeley; extra == \"all-handlers-extras\"",
"torch==2.8.0+cpu; extra == \"all-handlers-extras\"",
"azure-storage-blob; extra == \"all-handlers-extras\"",
"llama-index-embeddings-openai; extra == \"all-handlers-extras\"",
"pyodbc; extra == \"all-handlers-extras\"",
"google-cloud-aiplatform>=1.35.0; extra == \"all-handlers-extras\"",
"llama-index-readers-web; extra == \"all-handlers-extras\"",
"evaluate==0.4.3; extra == \"all-handlers-extras\"",
"html2text; extra == \"all-handlers-extras\"",
"teradatasql; extra == \"all-handlers-extras\"",
"questdb; extra == \"all-handlers-extras\"",
"clickhouse-sqlalchemy>=0.3.1; extra == \"all-handlers-extras\"",
"pyphoenix; extra == \"all-handlers-extras\"",
"litellm==1.80.8; extra == \"all-handlers-extras\"",
"zenpy; extra == \"all-handlers-extras\"",
"ibm-cos-sdk; extra == \"all-handlers-extras\"",
"pysqream>=3.2.5; extra == \"all-handlers-extras\"",
"protobuf>=4.25.8; extra == \"all-handlers-extras\"",
"filelock>=3.20.3; extra == \"all-handlers-extras\"",
"pydruid; extra == \"all-handlers-extras\"",
"google; extra == \"all-handlers-extras\"",
"ShopifyAPI; extra == \"all-handlers-extras\"",
"pgvector==0.3.6; extra == \"all-handlers-extras\"",
"sqlalchemy-vertica-python; extra == \"all-handlers-extras\"",
"mediawikiapi; extra == \"all-handlers-extras\"",
"snowflake-connector-python[pandas]==3.15.0; extra == \"all-handlers-extras\"",
"pydantic-settings>=2.1.0; extra == \"all-handlers-extras\"",
"sqlalchemy_dremio; extra == \"all-handlers-extras\"",
"twilio; extra == \"all-handlers-extras\"",
"huggingface-hub; extra == \"all-handlers-extras\"",
"ibm-db; extra == \"all-handlers-extras\"",
"teradatasqlalchemy; extra == \"all-handlers-extras\"",
"azure-core>=1.38.0; extra == \"all-handlers-extras\"",
"couchbase==4.3.1; extra == \"all-handlers-extras\"",
"paypalrestsdk; extra == \"all-handlers-extras\"",
"faunadb; extra == \"all-handlers-extras\"",
"xata; extra == \"all-handlers-extras\"",
"google-auth-oauthlib; extra == \"all-handlers-extras\"",
"pysurrealdb; extra == \"all-handlers-extras\"",
"pyodbc>=5.0.0; sys_platform == \"win32\" and extra == \"all-handlers-extras\"",
"notion-client; extra == \"all-handlers-extras\"",
"requests-oauthlib>=1.3.1; extra == \"all-handlers-extras\"",
"chromadb~=0.6.3; extra == \"all-handlers-extras\"",
"elasticsearch<9.0.0,>=8.0.0; extra == \"all-handlers-extras\"",
"youtube-transcript-api; extra == \"all-handlers-extras\"",
"python-gitlab; extra == \"all-handlers-extras\"",
"sqlalchemy-access>=2.0.0; sys_platform == \"win32\" and extra == \"all-handlers-extras\"",
"oracledb==3.3.0; extra == \"all-handlers-extras\"",
"sqlalchemy-spanner; extra == \"all-handlers-extras\"",
"pymssql>=2.1.4; extra == \"all-handlers-extras\"",
"praw; extra == \"all-handlers-extras\"",
"overpy; extra == \"all-handlers-extras\"",
"urllib3>=2.6.0; extra == \"all-handlers-extras\"",
"phoenixdb; extra == \"all-handlers-extras\"",
"eventbrite-python; extra == \"all-handlers-extras\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.15 | 2026-02-20T08:44:25.509459 | mindsdb-26.0.0rc3.tar.gz | 2,638,172 | 18/44/5ea240d2440d7a82246df4130650b0fc8f6501fb2dfab3a0098fcd33b1cb/mindsdb-26.0.0rc3.tar.gz | source | sdist | null | false | d507a11300521820979048d7951caa70 | ebaf13b99c66618bcbdf5f7ec59b5f1aa15f39e6c3ee317e5d89e851fd67b2fc | 18445ea240d2440d7a82246df4130650b0fc8f6501fb2dfab3a0098fcd33b1cb | null | [
"LICENSE"
] | 0 |
2.4 | jentis-core | 0.0.1 | Core package for Jentis | # Jentis Core: Professional-Grade Tool Management
The `jentis.core` module provides a robust and intuitive system for creating, managing, and integrating tools with AI agents. It is the foundation of the Jentis Agentic Kit, designed for high-performance, type-safe, and framework-agnostic tool definition.
## Key Features
- **Declarative Tool Creation:** Use the elegant `@tool` decorator to instantly convert Python functions into AI-ready tools.
- **Automatic Schema Generation:** Parameters, type hints, and docstrings are automatically parsed to generate a complete tool schema.
- **Type Safety:** Leverages Python's type hints for robust parameter validation at runtime.
- **Docstring Intelligence:** Automatically extracts tool and parameter descriptions from your function's docstring, supporting Google-style, NumPy-style, and reStructuredText formats.
- **Flexible & Agnostic:** Designed to work with any AI agent or multi-agent system, with zero boilerplate.
## Quickstart: Using the `@tool` Decorator
Creating a tool is as simple as decorating a Python function. The system handles the rest.
```python
from jentis.core import tool
@tool
def calculator(expression: str) -> str:
"""
Performs mathematical calculations on a given expression.
Args:
expression (str): The mathematical expression to evaluate.
"""
try:
# Note: eval() is used for demonstration. For production, use a safer alternative.
return str(eval(expression))
except Exception as e:
return f"Error: {e}"
# Example Usage:
# result = calculator("2 + 2 * (3 - 1)")
# print(result) # Output: 6
```
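As the comment in the example notes, `eval()` is unsafe for untrusted input. One safer approach — a sketch, not part of `jentis.core` — is to walk the expression's AST and allow only whitelisted arithmetic nodes:

```python
import ast
import operator

# Whitelisted operators; any other syntax is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported syntax: {type(node).__name__}")
    return walk(ast.parse(expression, mode="eval"))

print(safe_eval("2 + 2 * (3 - 1)"))  # 6
```

Because the walker only recognizes numeric constants and the listed operators, anything else — names, attribute access, calls — raises `ValueError` instead of executing.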
The `@tool` decorator automatically registers the function with the following metadata:
- **Name:** `calculator`
- **Description:** "Performs mathematical calculations on a given expression."
- **Parameters:** An `expression` of type `str` that is required.
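The extraction mechanism can be approximated in plain Python. The sketch below is illustrative only — it is not the `jentis.core` implementation — and handles Google-style `name (type): text` parameter lines:

```python
import inspect
import re
from typing import Any, Callable

def build_schema(func: Callable[..., Any]) -> dict[str, Any]:
    """Derive a minimal tool schema from a function's signature and docstring."""
    sig = inspect.signature(func)
    doc = inspect.getdoc(func) or ""
    # The first docstring line becomes the tool description.
    description = doc.splitlines()[0] if doc else ""
    # Google-style "name (type): text" lines become parameter descriptions.
    param_docs = dict(re.findall(r"^\s*(\w+)\s*\([^)]*\):\s*(.+)$", doc, re.MULTILINE))
    parameters = {
        name: {
            "type": getattr(p.annotation, "__name__", str(p.annotation)),
            "description": param_docs.get(name, ""),
            "required": p.default is inspect.Parameter.empty,
        }
        for name, p in sig.parameters.items()
    }
    return {"name": func.__name__, "description": description, "parameters": parameters}

def calculator(expression: str) -> str:
    """Performs mathematical calculations on a given expression.

    Args:
        expression (str): The mathematical expression to evaluate.
    """
    return expression

schema = build_schema(calculator)
```

Running `build_schema(calculator)` yields the name, description, and a required `expression: str` parameter — the same metadata listed above.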
## Advanced Usage: The `Tool` Class
For fine-grained control or dynamic tool creation, you can instantiate the `Tool` class directly.
```python
from jentis.core import Tool
def web_search(query: str, limit: int = 10) -> str:
"""
Searches the web for a given query.
Args:
query (str): The search query to execute.
limit (int): The maximum number of results to return.
"""
return f"Simulating search for '{query}' with a limit of {limit} results."
# Manually define the tool
search_tool = Tool(
name="web_search",
description="Searches the web for information.",
function=web_search,
parameters={
"query": {"type": "str", "description": "The search query", "required": True},
"limit": {"type": "int", "description": "Maximum number of results", "required": False},
}
)
# Example Usage:
# search_results = search_tool.run(query="Jentis Agentic Kit")
# print(search_results)
```
## Integrating with AI Agents
Once defined, tools can be seamlessly added to any agent that conforms to the Jentis tool interface.
```python
# Assuming 'agent' is an instance of a Jentis-compatible AI agent
agent.add_tools(calculator, search_tool)
# The agent can now intelligently invoke these tools based on user input.
# agent.run("What is 5 factorial?")
# agent.run("Find information about Python decorators.")
```
This system empowers developers to build sophisticated and reliable AI applications by providing a clean, powerful, and easy-to-use tool management framework.
| text/markdown | Jentis | author@example.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/pypa/sampleproject | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Tracker, https://github.com/pypa/sampleproject/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:40:59.065991 | jentis_core-0.0.1.tar.gz | 3,649 | 8e/59/37d1450a113d8169cc3d9899d41827b73f1784a8b01df684c34f61805714/jentis_core-0.0.1.tar.gz | source | sdist | null | false | 051bafedad89d75cf1b0559af725c6a8 | 82810cff28088ef9ea024c2bcfebba833e5a1f108c684948f44daa7e6d6f1c2b | 8e5937d1450a113d8169cc3d9899d41827b73f1784a8b01df684c34f61805714 | null | [
"LICENSE"
] | 229 |
2.4 | ptars | 0.0.16 | Convert from protobuf to arrow and back in rust | # ptars
[![PyPI Version][pypi-image]][pypi-url]
[![Python Version][versions-image]][versions-url]
[![PyPI Wheel][wheel-image]][wheel-url]
[![Documentation][docs-image]][docs-url]
[![Downloads][downloads-image]][downloads-url]
[![Downloads][downloads-month-image]][downloads-month-url]
[![Crates.io][crates-image]][crates-url]
[![Crates.io Downloads][crates-downloads-image]][crates-url]
[![docs.rs][docsrs-image]][docsrs-url]
[![Build Status][build-image]][build-url]
[![codecov][codecov-image]][codecov-url]
[![License][license-image]][license-url]
[![Ruff][ruff-image]][ruff-url]
[![snyk][snyk-image]][snyk-url]
[![Github Stars][stars-image]][stars-url]
[![GitHub issues][github-issues-image]][github-issues-url]
[![GitHub Release][release-image]][release-url]
[![Release Date][release-date-image]][release-url]
[![Last Commit][last-commit-image]][commits-url]
[![Commit Activity][commit-activity-image]][commits-url]
[![Open PRs][open-prs-image]][prs-url]
[![Contributors][contributors-image]][contributors-url]
[![Contributing][contributing-image]][contributing-url]
[![FOSSA Status][fossa-image]][fossa-url]
[![Repo Size][repo-size-image]][repo-size-url]
[![Rust][rust-image]][rust-url]
[![Apache Arrow][apache-arrow-image]][apache-arrow-url]
[![prek][prek-image]][prek-url]
[Repository](https://github.com/0x26res/ptars) |
[Python Documentation](https://ptars.readthedocs.io/) |
[Python Installation](https://ptars.readthedocs.io/en/latest/getting-started/#installation) |
[PyPI](https://pypi.org/project/ptars/) |
[Rust Crate](https://crates.io/crates/ptars) |
[Rust Documentation](https://docs.rs/ptars)
Fast conversion from Protocol Buffers to Arrow, and back, using Rust, with Python bindings.
## Example
Take a protobuf:
```protobuf
message SearchRequest {
string query = 1;
int32 page_number = 2;
int32 result_per_page = 3;
}
```
And convert serialized messages directly to `pyarrow.RecordBatch`:
```python
from ptars import HandlerPool
messages = [
SearchRequest(
query="protobuf to arrow",
page_number=0,
result_per_page=10,
),
SearchRequest(
query="protobuf to arrow",
page_number=1,
result_per_page=10,
),
]
payloads = [message.SerializeToString() for message in messages]
pool = HandlerPool([SearchRequest.DESCRIPTOR.file])
handler = pool.get_for_message(SearchRequest.DESCRIPTOR)
record_batch = handler.list_to_record_batch(payloads)
```
| query | page_number | result_per_page |
|:------------------|--------------:|------------------:|
| protobuf to arrow | 0 | 10 |
| protobuf to arrow | 1 | 10 |
You can also convert a `pyarrow.RecordBatch` back to serialized protobuf messages:
```python
array: pa.BinaryArray = handler.record_batch_to_array(record_batch)
messages_back: list[SearchRequest] = [
SearchRequest.FromString(s.as_py()) for s in array
]
```
## Configuration
Customize Arrow type mappings with `PtarsConfig`:
```python
from ptars import HandlerPool, PtarsConfig
config = PtarsConfig(
timestamp_unit="us", # microseconds instead of nanoseconds
timestamp_tz="America/New_York",
)
pool = HandlerPool([SearchRequest.DESCRIPTOR.file], config=config)
```
## Benchmark against protarrow
[Ptars](https://github.com/0x26res/ptars) is a Rust implementation of
[protarrow](https://github.com/tradewelltech/protarrow),
which is implemented in plain Python.
Ptars is:
- 2.5 times faster when converting from proto to arrow.
- 3 times faster when converting from arrow to proto.
```benchmark
---- benchmark 'to_arrow': 2 tests ----
Name (time in ms) Mean
---------------------------------------
protarrow_to_arrow 9.4863 (2.63)
ptars_to_arrow 3.6009 (1.0)
---------------------------------------
---- benchmark 'to_proto': 2 tests -----
Name (time in ms) Mean
----------------------------------------
protarrow_to_proto 20.8297 (3.20)
ptars_to_proto 6.5013 (1.0)
----------------------------------------
```
[pypi-image]: https://img.shields.io/pypi/v/ptars
[pypi-url]: https://pypi.org/project/ptars/
[versions-image]: https://img.shields.io/pypi/pyversions/ptars
[versions-url]: https://pypi.org/project/ptars/
[wheel-image]: https://img.shields.io/pypi/wheel/ptars
[wheel-url]: https://pypi.org/project/ptars/
[docs-image]: https://readthedocs.org/projects/ptars/badge/?version=latest
[docs-url]: https://ptars.readthedocs.io/en/latest/
[downloads-image]: https://pepy.tech/badge/ptars
[downloads-url]: https://pepy.tech/project/ptars
[downloads-month-image]: https://pepy.tech/badge/ptars/month
[downloads-month-url]: https://pepy.tech/project/ptars
[build-image]: https://github.com/0x26res/ptars/actions/workflows/ci.yaml/badge.svg
[build-url]: https://github.com/0x26res/ptars/actions/workflows/ci.yaml
[codecov-image]: https://codecov.io/gh/0x26res/ptars/branch/main/graph/badge.svg?token=XMFH27IL70
[codecov-url]: https://codecov.io/gh/0x26res/ptars
[license-image]: http://img.shields.io/:license-Apache%202-blue.svg
[license-url]: https://github.com/0x26res/ptars/blob/master/LICENSE
[ruff-image]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
[ruff-url]: https://github.com/astral-sh/ruff
[snyk-image]: https://snyk.io/advisor/python/ptars/badge.svg
[snyk-url]: https://snyk.io/advisor/python/ptars
[stars-image]: https://img.shields.io/github/stars/0x26res/ptars
[stars-url]: https://github.com/0x26res/ptars
[github-issues-image]: https://img.shields.io/badge/issue_tracking-github-blue.svg
[github-issues-url]: https://github.com/0x26res/ptars/issues
[contributing-image]: https://img.shields.io/badge/PR-Welcome-%23FF8300.svg?
[contributing-url]: https://github.com/0x26res/ptars/blob/main/DEVELOPMENT.md
[fossa-image]: https://app.fossa.com/api/projects/git%2Bgithub.com%2F0x26res%2Fptars.svg?type=shield
[fossa-url]: https://app.fossa.com/projects/git%2Bgithub.com%2F0x26res%2Fptars?ref=badge_shield
[repo-size-image]: https://img.shields.io/github/repo-size/0x26res/ptars
[repo-size-url]: https://github.com/0x26res/ptars
[crates-image]: https://img.shields.io/crates/v/ptars
[crates-url]: https://crates.io/crates/ptars
[crates-downloads-image]: https://img.shields.io/crates/d/ptars
[docsrs-image]: https://docs.rs/ptars/badge.svg
[docsrs-url]: https://docs.rs/ptars
[last-commit-image]: https://img.shields.io/github/last-commit/0x26res/ptars
[commits-url]: https://github.com/0x26res/ptars/commits/main
[commit-activity-image]: https://img.shields.io/github/commit-activity/m/0x26res/ptars
[open-prs-image]: https://img.shields.io/github/issues-pr/0x26res/ptars
[prs-url]: https://github.com/0x26res/ptars/pulls
[contributors-image]: https://img.shields.io/github/contributors/0x26res/ptars
[contributors-url]: https://github.com/0x26res/ptars/graphs/contributors
[release-image]: https://img.shields.io/github/v/release/0x26res/ptars
[release-date-image]: https://img.shields.io/github/release-date/0x26res/ptars
[release-url]: https://github.com/0x26res/ptars/releases
[rust-image]: https://img.shields.io/badge/rust-%23000000.svg?logo=rust&logoColor=white
[rust-url]: https://www.rust-lang.org/
[apache-arrow-image]: https://img.shields.io/badge/Apache%20Arrow-powered-orange.svg
[apache-arrow-url]: https://arrow.apache.org/
[prek-image]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/j178/prek/master/docs/assets/badge-v0.json
[prek-url]: https://github.com/j178/prek
| text/markdown; charset=UTF-8; variant=GFM | null | 0x26res <0x26res@gmail.com.co> | null | 0x26res <0x26res@gmail.com> | Apache-2.0 | apache-arrow, data, protobuf, rust | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"googleapis-common-protos",
"protobuf",
"pyarrow"
] | [] | [] | [] | [
"changelog, https://github.com/0x26res/ptars/releases",
"code-of-conduct, https://github.com/0x26res/ptars/blob/main/CODE_OF_CONDUCT.md",
"contributing, https://github.com/0x26res/ptars/blob/main/CONTRIBUTING.md",
"documentation, https://ptars.readthedocs.io/en/latest/",
"issues, https://github.com/0x26res/ptars/issues",
"repository, https://github.com/0x26res/ptars"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:40:10.861799 | ptars-0.0.16.tar.gz | 60,152 | 2d/ba/70dd511e574d26e88a17554d6d1ac66fb35f94dd0fb50498c54110dabec1/ptars-0.0.16.tar.gz | source | sdist | null | false | 3973fa66642fe9918905e1df0a8b1dcc | 3afbefb1326e8ecab1cf1424d3c8e0e0ecc0641878a0b50e7016fae0f0cd3a65 | 2dba70dd511e574d26e88a17554d6d1ac66fb35f94dd0fb50498c54110dabec1 | null | [
"LICENSE"
] | 2,390 |
2.4 | python-wfirma | 0.1.0 | Python library for wFirma API with synchronous and asynchronous support | # python-wfirma
Python client for the [wFirma](https://wfirma.pl/) accounting API. Supports both synchronous and asynchronous usage.
> **Status**: Alpha (v0.1.0). The API surface may change before 1.0.
## Installation
```bash
pip install python-wfirma
```
Or with [uv](https://github.com/astral-sh/uv):
```bash
uv add python-wfirma
```
## Usage
### API Key Authentication
```python
from wfirma.sync.auth import APIKeyAuth
from wfirma.sync.client import WFirmaClient
auth = APIKeyAuth(
access_key="your_access_key",
secret_key="your_secret_key",
app_key="your_app_key",
)
with WFirmaClient(auth=auth, company_id=123) as client:
contractor = client.contractors.add(name="ACME Sp. z o.o.", nip="1234567890")
invoice = client.invoices.add(
invoice={
"contractor_id": contractor.id,
"type": "normal",
"paid": "0",
}
)
invoices = client.invoices.find()
```
### Async
```python
import asyncio
from wfirma.async_.auth import APIKeyAuth
from wfirma.async_.client import WFirmaClient
async def main() -> None:
auth = APIKeyAuth(
access_key="your_access_key",
secret_key="your_secret_key",
app_key="your_app_key",
)
async with WFirmaClient(auth=auth, company_id=123) as client:
contractor = await client.contractors.add(
name="ACME Sp. z o.o.", nip="1234567890"
)
invoices = await client.invoices.find()
asyncio.run(main())
```
### OAuth 2.0
```python
from wfirma.sync.auth import OAuth2Auth
from wfirma.sync.client import WFirmaClient
from wfirma.config import Environment
oauth = OAuth2Auth(
client_id="your_client_id",
client_secret="your_client_secret",
redirect_uri="https://yourapp.example.com/callback",
environment=Environment.PRODUCTION,
)
# Step 1: redirect user to authorization URL
url = oauth.build_authorization_url(scope="invoices-read")
# Step 2: exchange the code from callback
token = oauth.exchange_code(code="authorization_code_from_callback")
# Step 3: use the client
with WFirmaClient(auth=oauth, company_id=123) as client:
invoices = client.invoices.find()
```
## Configuration
The library reads credentials from environment variables when using `from_env()` class methods:
```bash
# .env
WFIRMA_APP_KEY=your_app_key
WFIRMA_APP_SECRET=your_app_secret
WFIRMA_ENVIRONMENT=sandbox # or production
WFIRMA_COMPANY_ID=123
```
```python
from wfirma import get_config
config = get_config()
print(config.base_url) # https://sandbox-api2.wfirma.pl
```
## Supported Resources
The client exposes the following wFirma API resources. Each resource provides methods matching the upstream API (typically `add`, `find`, `get`, `edit`, `delete`):
**Core**: invoices, contractors, goods, payments, expenses, documents
**Company**: company, company_accounts, company_packs
**Declarations**: declaration_countries, declaration_body_jpkvat, declaration_body_pit
**Warehouse**: warehouses, warehouse documents (PW, PZ, R, RW, WZ, ZD, ZPD, ZPM)
**Reference data**: tags, series, terms, term_groups, vat_codes, translation_languages, taxregisters, interests, ledger_accountant_years, ledger_operation_schemas
**Users & misc**: users, user_companies, vehicles, vehicle_run_rates, payment_cashboxes, invoice_deliveries, invoice_descriptions, notes, webhooks
## Development
Requires Python 3.12+. The project uses [uv](https://github.com/astral-sh/uv) for dependency management.
```bash
git clone https://github.com/dekoza/python-wfirma.git
cd python-wfirma
uv venv && uv sync --extra dev
uv run pre-commit install
```
### Tests
```bash
uv run pytest # all tests
uv run pytest --cov=wfirma --cov-report=html # with coverage
uv run pytest -n auto # parallel
```
### Linting & type-checking
```bash
uv run ruff format src tests
uv run ruff check --fix src tests
uv run mypy src
```
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md).
## License
MIT — see [LICENSE](LICENSE).
---
This library is not affiliated with wFirma. It is an independent project.
Built on [httpx](https://www.python-httpx.org/), [Pydantic](https://docs.pydantic.dev/), and [authlib](https://docs.authlib.org/).
| text/markdown | Python wFirma Contributors | null | null | null | MIT | accounting, api, async, httpx, invoicing, wfirma | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Office/Business :: Financial :: Accounting",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"anyio>=4.0.0",
"authlib>=1.6.6",
"email-validator>=2.3.0",
"httpx[http2]>=0.27.0",
"pendulum>=3.1.0",
"pydantic-xml>=2.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"mypy>=1.8.0; extra == \"dev\"",
"pre-commit>=3.6.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"respx>=0.21.0; extra == \"dev\"",
"ruff>=0.2.0; extra == \"dev\"",
"tox>=4.0.0; extra == \"dev\"",
"myst-parser>=2.0.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=1.25.0; extra == \"docs\"",
"sphinx-rtd-theme>=2.0.0; extra == \"docs\"",
"sphinx>=7.2.0; extra == \"docs\"",
"click>=8.1.0; extra == \"examples\"",
"fastapi>=0.109.0; extra == \"examples\"",
"flask>=3.0.0; extra == \"examples\"",
"ipython>=8.20.0; extra == \"examples\"",
"jupyter>=1.0.0; extra == \"examples\"",
"uvicorn>=0.27.0; extra == \"examples\"",
"beautifulsoup4>=4.12.0; extra == \"scraping\"",
"lxml>=5.1.0; extra == \"scraping\""
] | [] | [] | [] | [
"Homepage, https://github.com/dekoza/python-wfirma",
"Repository, https://github.com/dekoza/python-wfirma",
"Issues, https://github.com/dekoza/python-wfirma/issues"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T08:39:33.319639 | python_wfirma-0.1.0.tar.gz | 203,491 | 98/63/8ebc1f17c7a21ea6fc56f0d4fe89d47beb5be6332ac3da2d777872206191/python_wfirma-0.1.0.tar.gz | source | sdist | null | false | b2bd538f2852b94092a8afe9c5e46965 | 7362bec621b232f53498df44e294d90ba5e0ffb831b7052cb74ff55cf9142145 | 98638ebc1f17c7a21ea6fc56f0d4fe89d47beb5be6332ac3da2d777872206191 | null | [
"LICENSE"
] | 224 |
2.4 | longtongue | 1.2.1 | Generate customized Password/Passphrase wordlist based on target information | <p align="center">
<img src="https://github.com/edoardottt/images/blob/main/longtongue/logo.png"><br>
<b>Generate customized Password/Passphrase wordlist based on target information</b>
<br>
<sub>
Coded with 💙 by edoardottt
</sub>
<br>
<!--Tweet button-->
<a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Fedoardottt%2Flongtongue%20&text=longtongue%20-%20Customized%20Password/Passphrase%20List%20inputting%20Target%20Info%20%21&hashtags=pentesting%2Clinux%2Cpython%2Cnetwork%2Cpassword%2Cbugbounty%2Cinfosec" target="_blank">Share on Twitter!
</a>
</p>
Installation ⬇️
----
### pipx
```bash
pipx install longtongue
```
### Source
```console
git clone https://github.com/edoardottt/longtongue.git
cd longtongue
pip install -r requirements.txt
python3 longtongue.py
```
Usage 💻
----
```console
usage: longtongue [-h] (-p | -c | -v) [-l | -L] [-y] [-n] [-m MINLENGTH] [-M MAXLENGTH] [-P COMMON_PASSWORD_LIST]
Generate customized Password/Passphrase wordlist based on target information
options:
-h, --help show this help message and exit
-p, --person The target is a person
-c, --company The target is a company
-v, --version Show the version of this program
-l, --leet Add also complete 1337(leet) passwords
-L, --leetall Add also ALL possible le37(leet) passwords
-y, --years Add also years to password. See years range inside longtongue.py
-n, --numbers Add also numbers to password. See numbers range inside longtongue.py
-m, --minlength MINLENGTH
Set the minimum length for passwords (default 0).
-M, --maxlength MAXLENGTH
Set the maximum length for passwords (default 100).
-P, --common-password-list COMMON_PASSWORD_LIST
Set the file which contains common passwords (default included in the source).
```
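The `-l`, `-y`, and `-n` flags above describe common wordlist mutations: leet substitution plus year and number suffixes. A toy stdlib sketch of that kind of expansion — the substitution table and ranges are assumptions for illustration, not longtongue's actual code:

```python
# Assumed leet substitution table (a common subset; the tool defines its own)
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

def expand(words, years=range(2020, 2023), numbers=range(0, 3)):
    """Return each base word plus leet (-l), year (-y) and number (-n) variants."""
    out = set()
    for word in words:
        variants = {word, word.translate(LEET)}     # -l: full leet variant
        for v in variants:
            out.update(f"{v}{y}" for y in years)    # -y: append years
            out.update(f"{v}{n}" for n in numbers)  # -n: append numbers
        out.update(variants)
    return sorted(out)

wordlist = expand(["password"])
print("p455w0rd" in wordlist, "password2021" in wordlist)  # → True True
```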
Examples 📖
-------
- `python longtongue.py -v`
- `python longtongue.py -h`
- `python longtongue.py -p`
- `python longtongue.py -pl`
- `python longtongue.py -pln`
- `python longtongue.py -plny`
- `python longtongue.py -c`
- `python longtongue.py -cl`
- `python longtongue.py -cln`
- `python longtongue.py -clny`
- `python longtongue.py -c -m 10`
- `python longtongue.py -p -m 10`
- `python longtongue.py -c -P ./common-passwords.txt`
- `python longtongue.py -p -P ./common-passwords.txt`
Changelog 📌
-------
Detailed changes for each release are documented in the [release notes](https://github.com/edoardottt/longtongue/releases).
Contributing 🤝
------
If you want to contribute to this project, open an [issue](https://github.com/edoardottt/longtongue/issues) or a [pull request](https://github.com/edoardottt/longtongue/pulls).
License 📝
-------
This repository is under [GNU General Public License v3.0](https://github.com/edoardottt/longtongue/blob/main/LICENSE).
Visit [edoardottt.com](https://edoardottt.com) to contact me.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T08:37:21.985088 | longtongue-1.2.1.tar.gz | 19,981 | 7c/6b/38797b0828eabc8336ffa6dcaa5663e3a29c740a99c3583d611a002c5427/longtongue-1.2.1.tar.gz | source | sdist | null | false | 5ee7a1861e43654af96d4604cc34263e | 8eb4dbb5f2dd2773bbd752c416b5d2eebaadd0e3f8d5f9a45ac998e4628aa7cb | 7c6b38797b0828eabc8336ffa6dcaa5663e3a29c740a99c3583d611a002c5427 | null | [
"LICENSE"
] | 244 |
2.1 | odoo-addon-l10n-ro-dvi | 19.0.0.1.0 | Romania - DVI | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=============
Romania - DVI
=============
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:f0dbe340c8aa3f7f699d08ffdf6cf6df7fe0bbaad4897baa33c3cfacbc3cca9e
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Mature-brightgreen.png
:target: https://odoo-community.org/page/development-status
:alt: Mature
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fl10n--romania-lightgray.png?logo=github
:target: https://github.com/OCA/l10n-romania/tree/19.0/l10n_ro_dvi
:alt: OCA/l10n-romania
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/l10n-romania-19-0/l10n-romania-19-0-l10n_ro_dvi
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/l10n-romania&target_branch=19.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
DVI - import customs declaration (declaraţie vamală de import)

Links the purchase invoice with the DVI (landed cost).

A DVI with two lines and VAT is generated automatically.

Account 447 must be a reconciliation account so that it can be closed
through the bank.
**Table of contents**
.. contents::
:local:
Usage
=====
DVI - Import Customs Declaration
To create a DVI, go to Accounting -> Actions -> DVI, then:

- Create a new record
- Complete the tax with VAT 19% deductible and link the invoices for
  this DVI
- Complete the "Customs Duty Value" and "Customs Commission Value"
- Complete the DVI lines quantity with the declared quantity
- Click the "Post" button to validate the customs declaration
On posting, a landed cost is created to distribute the amounts to the
correct products and to create the account moves for the VAT paid.
You can revert a declaration, which creates new negative valuation
layers and cancels the account move of the initial landed cost.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/l10n-romania/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/l10n-romania/issues/new?body=module:%20l10n_ro_dvi%0Aversion:%2019.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Terrabit
* NextERP Romania
Contributors
------------
- `Terrabit <https://www.terrabit.ro>`__:
- Dorin Hongu <dhongu@gmail.com>
Do not contact contributors directly about support or help with
technical issues.
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-dhongu| image:: https://github.com/dhongu.png?size=40px
:target: https://github.com/dhongu
:alt: dhongu
.. |maintainer-feketemihai| image:: https://github.com/feketemihai.png?size=40px
:target: https://github.com/feketemihai
:alt: feketemihai
Current `maintainers <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-dhongu| |maintainer-feketemihai|
This module is part of the `OCA/l10n-romania <https://github.com/OCA/l10n-romania/tree/19.0/l10n_ro_dvi>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Terrabit,NextERP Romania,Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 19.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 6 - Mature"
] | [] | https://github.com/OCA/l10n-romania | null | null | [] | [] | [] | [
"odoo-addon-l10n_ro_stock_account_landed_cost==19.0.*",
"odoo==19.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T08:37:06.331409 | odoo_addon_l10n_ro_dvi-19.0.0.1.0-py3-none-any.whl | 737,019 | 3f/ed/e590ec5ee4d378801fbf1cd9e059ebe6d25f10c1f4abc37409673c7fa87e/odoo_addon_l10n_ro_dvi-19.0.0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 30d3310e99f616755861d74d8c975638 | ee2a16b6f322d37f2f07d89445849af28895763244382d1102abf57185ae0d04 | 3fede590ec5ee4d378801fbf1cd9e059ebe6d25f10c1f4abc37409673c7fa87e | null | [] | 94 |
2.4 | decode-jwt | 1.2.3 | A simple CLI tool to decode JWT tokens without verification | # decode-jwt
A simple command-line tool to decode JWT tokens without verification.
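Decoding without verification only base64url-decodes the first two dot-separated segments of the token; a stdlib-only sketch of the idea (illustrative, not this package's implementation):

```python
import base64
import json

def decode_jwt(token: str) -> tuple[dict, dict]:
    """Return (header, payload) without verifying the signature."""
    header_b64, payload_b64, _signature = token.split(".")

    def b64url_decode(segment: str) -> dict:
        # base64url segments may lack '=' padding; restore it before decoding
        padded = segment + "=" * (-len(segment) % 4)
        return json.loads(base64.urlsafe_b64decode(padded))

    return b64url_decode(header_b64), b64url_decode(payload_b64)

# Sample token: header {"alg":"HS256","typ":"JWT"}, payload {"sub":"123","name":"demo"}
token = (
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
    "eyJzdWIiOiIxMjMiLCJuYW1lIjoiZGVtbyJ9."
    "sig-not-checked"
)
header, payload = decode_jwt(token)
print(header["alg"], payload["sub"])  # → HS256 123
```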
## Installation
```bash
pip install decode-jwt
```
| text/markdown | t.me/danger_ff_like | dangertg738@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T08:36:56.992451 | decode_jwt-1.2.3.tar.gz | 2,163 | d0/e2/398b35e28fae242f4305625fcd722d164b8aaa5370dbfeefa739f51a22d0/decode_jwt-1.2.3.tar.gz | source | sdist | null | false | 85927d0601ba34343298a82a603b832b | 8ad34886127bb725eda39e41c86b0d1b0ef6867d39f897a819b59c9bd8656a37 | d0e2398b35e28fae242f4305625fcd722d164b8aaa5370dbfeefa739f51a22d0 | null | [] | 262 |
2.4 | fremem | 0.2.1 | A persistent vector memory MCP server with multi-project isolation | # Fremem (formerly MCP Memory Server)



A persistent vector memory server for Windsurf, VS Code, and other MCP-compliant editors.
## 🌟 Philosophy
- **Privacy-first, local-first AI memory:** Your data stays on your machine.
- **No vendor lock-in:** Uses open standards and local files.
- **Built for MCP:** Designed specifically to enhance Windsurf, Cursor, and other MCP-compatible IDEs.
## ℹ️ Status (v0.2.0)
**Stable:**
- ✅ Local MCP memory with Windsurf/Cursor
- ✅ Multi-project isolation
- ✅ Ingestion of Markdown docs
**Not stable yet:**
- 🚧 Auto-ingest (file watching)
- 🚧 Memory pruning
- 🚧 Remote sync
> **Note:** There are two ways to run this server:
> 1. **Local IDE (stdio)**: Used by Windsurf/Cursor (default).
> 2. **Docker/Server (HTTP)**: Used for remote deployments or Docker (exposes port 8000).
## 🏥 Health Check
To verify the server binary runs correctly:
```bash
# From within the virtual environment
python -m fremem.server --help
```
## ✅ Quickstart (5-Minute Setup)
There are two ways to set this up: **Global Install** (recommended for ease of use) or **Local Dev**.
### Option A: Global Install (Like `npm -g`)
This method allows you to run `fremem` from anywhere without managing virtual environments manually.
**1. Install `pipx` (if not already installed):**
*MacOS (via Homebrew):*
```bash
brew install pipx
pipx ensurepath
# Restart your terminal after this!
```
*Linux/Windows:*
See [pipx installation instructions](https://github.com/pypa/pipx).
**2. Install `fremem`:**
```bash
# Install from PyPI
pipx install fremem
# Verify installation
fremem --help
```
**Configure Windsurf / VS Code:**
Since `pipx` puts the executable in your PATH, the config is simpler:
```json
{
  "mcpServers": {
    "memory": {
      "command": "fremem",
      "args": [],
      "env": {
        "MCP_MEMORY_PATH": "/Users/YOUR_USERNAME/mcp-memory-data"
      }
    }
  }
}
```
> **Note on `MCP_MEMORY_PATH`**: This is where `fremem` will store its persistent database. You can point this to any directory you like (it is created if it doesn't exist). We recommend something like `~/mcp-memory-data` or `~/.fremem-data`. It must be an absolute path.
### Option B: Local Dev Setup
**1. Clone and Setup**
```bash
git clone https://github.com/iamjpsharma/fremem.git
cd fremem
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install dependencies AND the package in editable mode
pip install -e .
```
**2. Configure Windsurf / VS Code (Local Dev)**
Add this to your `mcpServers` configuration (e.g., `~/.codeium/windsurf/mcp_config.json`):
**Note:** Replace `/ABSOLUTE/PATH/TO/fremem` with the actual full path to the cloned directory.
```json
{
  "mcpServers": {
    "memory": {
      "command": "/ABSOLUTE/PATH/TO/fremem/.venv/bin/python",
      "args": ["-m", "fremem.server"],
      "env": {
        "MCP_MEMORY_PATH": "/ABSOLUTE/PATH/TO/fremem/mcp_memory_data"
      }
    }
  }
}
```
*In local dev mode, it's common to store the data inside the repo (ignored by git), but you can use any absolute path.*
## 🚀 Usage
### 0. HTTP Server (New)
You can run the server via HTTP (SSE) if you prefer:
```bash
# Run on port 8000
python -m fremem.server_http
```
Access the SSE endpoint at `http://localhost:8000/sse` and send messages to `http://localhost:8000/messages`.
### 🐳 Run with Docker
To run the server in a container:
```bash
# Build the image
docker build -t fremem .
# Run the container
# Mount your local data directory to /data inside the container
docker run -p 8000:8000 -v $(pwd)/mcp_memory_data:/data fremem
```
The server will be available at `http://localhost:8000/sse`.
### 1. Ingestion (Adding Context)
Use the included helper script `ingest.sh` to add files to a specific project.
```bash
# ingest.sh <project_name> <file1> <file2> ...
# Example: Project "Thaama"
./ingest.sh project-thaama \
docs/architecture.md \
src/main.py
# Example: Project "OpenClaw"
./ingest.sh project-openclaw \
README.md \
CONTRIBUTING.md
```
### 💡 Project ID Naming Convention
It is recommended to use a consistent prefix for your project IDs to avoid collisions:
- `project-thaama`
- `project-openclaw`
- `project-myapp`
### 2. Connect in Editor
Once configured, the following tools will be available to the AI Assistant:
- **`memory_search(project_id, q, filter=None)`**: Semantic search. Supports metadata filtering (e.g., `filter={"type": "code"}`). Returns distance scores.
- **`memory_add(project_id, id, text)`**: Manual addition.
- **`memory_list_sources(project_id)`**: List the specific files ingested.
- **`memory_delete_source(project_id, source)`**: Remove a specific file.
- **`memory_stats(project_id)`**: Get chunk count.
- **`memory_reset(project_id)`**: Clear all memories for a project.
The AI will effectively have "long-term memory" of the files you ingested.
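Under the hood, `memory_search` ranks stored chunks by vector distance to the query embedding. A toy illustration of distance-scored retrieval, with bag-of-words counts standing in for the real `all-MiniLM-L6-v2` embeddings (not fremem's implementation):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a sentence-transformer embedding: a bag-of-words vector
    return Counter(text.lower().split())

def cosine_distance(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return 1.0 - dot / norm if norm else 1.0

chunks = [
    "the flux capacitor requires 1.21 gigawatts",
    "ingestion splits markdown files into chunks",
    "project ids use a project- prefix",
]

def memory_search(query: str, top_k: int = 2):
    """Return the top_k chunks closest to the query, with distance scores."""
    q = embed(query)
    scored = sorted((cosine_distance(q, embed(c)), c) for c in chunks)
    return scored[:top_k]

for dist, chunk in memory_search("how much power does the flux capacitor need"):
    print(f"{dist:.2f}  {chunk}")
```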
## 🛠 Troubleshooting
- **"fremem: command not found" after installing**:
- This means `pipx` installed the binary to a location not in your system's PATH (e.g., `~/.local/bin`).
- **Fix:** Run `pipx ensurepath` and restart your terminal.
- **Manual Fix:** Add `export PATH="$PATH:$HOME/.local/bin"` to your shell config (e.g., `~/.zshrc`).
- **"No MCP server found" or Connection errors**:
- Check the output of `pwd` to ensure your absolute paths in `mcp_config.json` are 100% correct.
- Ensure the virtual environment (`.venv`) is created and dependencies are installed.
- **"Wrong project_id used"**:
- The AI sometimes guesses the project ID. You can explicitly tell it: "Use project_id 'project-thaama'".
- **Embedding Model Downloads**:
- On the first run, the server downloads the `all-MiniLM-L6-v2` model (approx 100MB). This may cause a slight delay on the first request.
## 🗑️ Uninstalling
To remove `fremem` from your system:
**If installed via `pipx` (Global):**
```bash
pipx uninstall fremem
```
**If installed locally (Dev):**
Just delete the directory.
## 📁 Repo Structure
```
/
├── src/fremem/
│ ├── server.py # Main MCP server entry point
│ ├── ingest.py # Ingestion logic
│ └── db.py # LanceDB wrapper
├── ingest.sh # Helper script
├── requirements.txt # Top-level dependencies
├── pyproject.toml # Package config
├── mcp_memory_data/ # Persistent vector storage (gitignored)
└── README.md
```
## 🗺️ Roadmap
### ✅ Completed (v0.1.x)
- [x] Local vector storage (LanceDB)
- [x] Multi-project isolation
- [x] Markdown ingestion
- [x] PDF ingestion
- [x] Semantic chunking strategies
- [x] Windows support + editable install fixes
- [x] HTTP transport wrapper (SSE)
- [x] Fix resource listing errors (clean MCP UX)
- [x] Robust docs + 5-minute setup
- [x] Multi-IDE support (Windsurf, Cursor-compatible MCP)
### 🚀 Near-Term (v0.2.x – Production Readiness)
**🧠 Memory Governance**
- [x] List memory sources per project
- [x] Delete memory by source (file-level deletion)
- [x] Reset memory per project
- [x] Replace / reindex mode (prevent stale chunks)
- [x] Memory stats (chunk count, last updated, size)
**🎯 Retrieval Quality**
- [x] Metadata filtering (e.g., type=decision | rules | context)
- [x] Similarity scoring in results
- [ ] Hybrid search (semantic + keyword)
- [ ] Return evidence + similarity scores with search results
- [ ] Configurable top_k defaults per project
**⚙️ Dev Workflow**
- [ ] Auto-ingest on git commit / file change
- [ ] `mcp-memory init <project-id>` bootstrap command
- [ ] Project templates (PROJECT_CONTEXT.md, DECISIONS.md, AI_RULES.md)
### 🧠 Advanced RAG (v0.3.x – Differentiators)
- [ ] Hierarchical retrieval (summary-first, detail fallback)
- [ ] Memory compression (old chunks → summaries)
- [ ] Temporal ranking (prefer newer decisions)
- [ ] Scoped retrieval (planner vs coder vs reviewer agents)
- [ ] Query rewrite / expansion for better recall
### 🏢 Team / SaaS Mode (Optional)
*Philosophy: Local-first remains the default. SaaS is an optional deployment mode.*
**🔐 Auth & Multi-Tenancy**
- [ ] Project-level auth (API keys or JWT)
- [ ] Org / team separation
- [ ] Audit logs for memory changes
**☁️ Remote Storage Backends (Pluggable)**
- [ ] S3-compatible vector store backend
- [ ] Postgres / pgvector backend
- [ ] Sync & Federation (Local ↔ Remote)
### 🚫 Non-Goals
- ❌ No mandatory cloud dependency
- ❌ No vendor lock-in
- ❌ No chat history as “memory” by default (signal > noise)
- ❌ No model fine-tuning
| text/markdown | null | Jaiprakash Sharma <user@example.com> | null | null | MIT | cursor, lancedb, mcp, memory, rag, vector-database, windsurf | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"lancedb>=0.17.0",
"mcp>=1.0.0",
"pypdf>=3.0.0",
"sentence-transformers>=3.3.0",
"sse-starlette>=2.1.0",
"starlette>=0.37.0",
"torch>=2.2.0",
"uvicorn>=0.30.0",
"build; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/iamjpsharma/fremem",
"Repository, https://github.com/iamjpsharma/fremem.git",
"Issues, https://github.com/iamjpsharma/fremem/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T08:36:45.396110 | fremem-0.2.1.tar.gz | 21,799 | 64/21/6a2635703862bf6c14d83ff9082d0a0f58d68e8160f6dee8e09e1c015807/fremem-0.2.1.tar.gz | source | sdist | null | false | dce7e68298273e98eeede3408440431d | 19ce851b967ead1d79964abfa08dbf7fdafa253c1f0707000b14bec6ba9f3a16 | 64216a2635703862bf6c14d83ff9082d0a0f58d68e8160f6dee8e09e1c015807 | null | [
"LICENSE"
] | 246 |
2.4 | ketju | 0.1.1 | A Python library | # Ketju
Ketju is a small RAG-focused codebase for experimenting with document ingestion and question answering.
Its goal is to be a lightweight RAG engine enabling rapid prototyping, experimentation, and iteration.
## Installation
### Using `uv` (recommended)
```bash
uv sync
```
Install extras as needed:
```bash
uv sync --extra rag
uv sync --extra pgvector
uv sync --extra docling
uv sync --extra agent
uv sync --extra agui
uv sync --extra observability
uv sync --extra examples
uv sync --extra dev
```
## Examples
### Chat with PDFs
Paste file paths into the conversation. Launch via:
```
uv run python -m ketju.examples.basic_usage
```
### Chat with docs using pgvector
```
➜ ketju git:(main) uv run python -m ketju.examples.pgvector_example --database-url "postgresql://postgres:admin123@127.0.0.1:5432/ketju_db"
Starting the agent...
You can ask questions about contents in pdf files, just paste the path to the file in the chat.
ketju-agent ➤ compare the time travel properties of docs/interdimensional-systems/Infini
te_Improbability_Drive_Technical_Manual.pdf and docs/interdimensional-systems/Flux_Capac
itor_Technical_Reference_Manual.pdf present tehcnical specs in a table
Below is a comparative table of the time travel properties and technical specifications
of the Infinite Improbability Drive and the Flux Capacitor:
Infinite Improbability Drive
Specification (IID) Flux Capacitor
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Functionality Probabilistic Temporal Displacement
Faster-Than-Light Propulsion
Transit Duration 0.8 to 1.3 seconds (nominal Temporal energy delivery
1.0 second) within 2.3 milliseconds
Improbability Index 10^6:1 to 10^24:1 N/A
Quantum Field Density 4.2e9 to 9.5e9 J/m³ Chroniton Flux Density:
3.6e12 particles/cm³
Reality Phase Variance 0.002 to 0.0005 Δφ N/A
Causality Drift ±0.05 to ±1.2 milliseconds N/A
Power Requirements Peak: 2.8e15 Joules; Nominal Input: 1.21
Sustained: 3.5e12 Watts Gigawatts
Cooling Medium N/A Liquid Nitrogen
Weight N/A 15.4 kg
Housing Material N/A Reinforced Titanium Alloy
Dimensions N/A 450 x 320 x 150 mm
Warnings Risk of uncontrolled Improper synchronization can
metaphysical side effects lead to spacetime shear
events or disintegration
Primary Risk Vector Reality Instability Causality Disruption
### Key Observations:
• The IID operates based on improbability levels and focuses on navigation through
controlled quantum fields, while the Flux Capacitor is centered around achieving time
travel via temporal displacement.
• The IID does not provide precise time travel capabilities, whereas the Flux Capacitor
requires a specific speed and energy input to achieve temporal displacement with high
accuracy.
• The IID emphasizes energy at a much higher scale compared to the Flux Capacitor,
which requires relatively lower energy for its operations.
Let me know if you need any more information or additional comparisons!
ketju-agent ➤
```
### AG-UI
Start backend
```
uv run python -m ketju.examples.agui_agent
```
or
```
uv run python -m ketju.examples.agui_agent --path=a/path/to/a/folder/or/pdf/file
```
and then start the frontend
```
cd ui
pnpm install
next dev
```
### Using `pip` (fallback)
```bash
pip install -e .
pip install -e '.[rag]'
```
## Running examples
Examples are available as importable modules under `ketju.examples`:
```bash
uv run python -m ketju.examples.basic_usage
uv run python -m ketju.examples.rag_comparison
uv run python -m ketju.examples.agui
uv run python -m ketju.examples.pgvector_example --database-url "postgresql://postgres:password@127.0.0.1:5432/ketju_db"
```
An instrumentation demo that prints spans to the console:
```bash
uv sync --extra rag --extra observability
uv run python -m ketju.examples.instrumentation_demo
```
To reduce noisy third-party logs while using the interactive CLI:
```bash
uv run python -m ketju.examples.basic_usage --log-level WARNING
```
To see all logs again:
```bash
uv run python -m ketju.examples.basic_usage --no-quiet
```
The top-level `examples/` scripts remain as thin wrappers for convenience:
```bash
uv run python examples/basic_usage.py
```
## Development
### Tests
```bash
uv run pytest
```
With coverage (requires `pytest-cov`):
```bash
pytest --cov=ketju --cov-report=term-missing
```
### Linting
```bash
uv run ruff check src tests
```
## Project layout
```
ketju/
├── src/ketju/ # library code
├── tests/ # unit tests (no optional deps required)
├── examples/ # wrapper scripts + docs
└── pyproject.toml # metadata + dependency groups (extras)
```
| text/markdown | null | Christoffer Björkskog <christoffer.bjorkskog@novia.fi> | Novia UAS | null | MIT | library, novia | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"devtools>=0.12.2",
"pydantic>=2.0.0",
"pydantic-ai-slim[cli,openai]; extra == \"agent\"",
"llama-index-protocols-ag-ui>=0.2.3; extra == \"agui\"",
"pydantic-ai-slim[ag-ui]; extra == \"agui\"",
"starlette; extra == \"agui\"",
"uvicorn; extra == \"agui\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff; extra == \"dev\"",
"docling>=2.68.0; extra == \"docling\"",
"langchain-docling>=2.0.0; extra == \"docling\"",
"llama-index-node-parser-docling>=0.4.2; extra == \"docling\"",
"llama-index-readers-docling>=0.4.2; extra == \"docling\"",
"chromadb; extra == \"examples\"",
"docling>=2.68.0; extra == \"examples\"",
"langchain-docling>=2.0.0; extra == \"examples\"",
"llama-index-embeddings-ollama>=0.8.6; extra == \"examples\"",
"llama-index-llms-litellm>=0.6.3; extra == \"examples\"",
"llama-index-node-parser-docling>=0.4.2; extra == \"examples\"",
"llama-index-observability-otel>=0.2.1; extra == \"examples\"",
"llama-index-protocols-ag-ui>=0.2.3; extra == \"examples\"",
"llama-index-readers-docling>=0.4.2; extra == \"examples\"",
"llama-index-vector-stores-chroma>=0.5.5; extra == \"examples\"",
"llama-index>=0.14.12; extra == \"examples\"",
"logfire>=4.10.0; extra == \"examples\"",
"opentelemetry-instrumentation-llamaindex>=0.51.1; extra == \"examples\"",
"pydantic-ai-slim[ag-ui,cli,openai]; extra == \"examples\"",
"pypdf>=6.6.0; extra == \"examples\"",
"pysrt>=1.1.2; extra == \"examples\"",
"pysword>=0.2.8; extra == \"examples\"",
"python-dotenv; extra == \"examples\"",
"starlette; extra == \"examples\"",
"uvicorn; extra == \"examples\"",
"llama-index-observability-otel>=0.2.1; extra == \"observability\"",
"logfire>=4.10.0; extra == \"observability\"",
"opentelemetry-exporter-otlp; extra == \"observability\"",
"opentelemetry-instrumentation-llamaindex>=0.51.1; extra == \"observability\"",
"opentelemetry-sdk; extra == \"observability\"",
"llama-index-storage-docstore-postgres>=0.4.1; extra == \"pgvector\"",
"llama-index-storage-index-store-postgres>=0.5.1; extra == \"pgvector\"",
"llama-index-storage-kvstore-postgres>=0.4.3; extra == \"pgvector\"",
"llama-index-vector-stores-postgres>=0.7.3; extra == \"pgvector\"",
"chromadb; extra == \"rag\"",
"llama-index-embeddings-ollama>=0.8.6; extra == \"rag\"",
"llama-index-llms-litellm>=0.6.3; extra == \"rag\"",
"llama-index-vector-stores-chroma>=0.5.5; extra == \"rag\"",
"llama-index>=0.14.12; extra == \"rag\"",
"pypdf>=6.6.0; extra == \"rag\"",
"pysrt>=1.1.2; extra == \"rag\"",
"pysword>=0.2.8; extra == \"rag\""
] | [] | [] | [] | [
"Homepage, https://github.com/Novia-RDI-Seafaring/ketju",
"Repository, https://github.com/Novia-RDI-Seafaring/ketju",
"Issues, https://github.com/Novia-RDI-Seafaring/ketju/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:35:47.553011 | ketju-0.1.1.tar.gz | 360,248 | 5d/a7/8dcf7a747bd04369fd93cf00f9860b48327b0303b4a2a53747afb76c833c/ketju-0.1.1.tar.gz | source | sdist | null | false | dfb5631ec3f561f72b9b18cabdbf5cfa | 620024949a395a54507bc081f36908b7194d4eae831cd5180c475869abc165de | 5da78dcf7a747bd04369fd93cf00f9860b48327b0303b4a2a53747afb76c833c | null | [
"LICENSE"
] | 244 |
2.3 | slpkg | 5.5.4 | Package manager utility for Slackware Linux | [<img src="https://gitlab.com/dslackw/slpkg/-/raw/site/docs/images/logo.png" title="slpkg">](https://dslackw.gitlab.io/slpkg)
## About
Slpkg is a software package manager that installs, updates, and removes packages on <a href="https://www.slackware.com" target="_blank">Slackware</a>-based systems. It automatically calculates dependencies and determines the required steps for package installation. Slpkg simplifies managing machine groups by eliminating manual updates. The tool adheres to the standards of the <a href="https://www.slackbuilds.org" target="_blank">slackbuilds.org</a> organization for building packages and follows Slackware Linux's procedures for package installation, upgrades, and removal.
## Homepage
Visit the project website [here](https://dslackw.gitlab.io/slpkg/).
## Source
* <a href="https://gitlab.com/dslackw/slpkg" target="_blank">GitLab</a> repository.
* <a href="https://slackbuilds.org/repository/15.0/system/slpkg/" target="_blank">SlackBuilds.org</a> repository.
* <a href="https://sourceforge.net/projects/slpkg/" target="_blank">SourceForge</a> repository.
* <a href="https://pypi.org/project/slpkg/" target="_blank">PyPI</a> repository.
## License
[MIT License](https://dslackw.gitlab.io/slpkg/license/)
## Donate
Did you know that we developers love coffee?
[<img src="https://gitlab.com/dslackw/slpkg/-/raw/site/docs/images/paypaldonate.png" alt="paypal" title="donate">](https://www.paypal.me/dslackw)
## Support
Please support:
* <a href="https://www.patreon.com/slackwarelinux" target="_blank">Slackware</a> project.
* <a href="https://slackbuilds.org/contributors/" target="_blank">SlackBuilds</a> project.
* <a href="https://alien.slackbook.org/blog/" target="_blank">AlienBob</a> project.
Thank you all for your support!
## Copyrights
Slackware® is a Registered Trademark of Patrick Volkerding.
Linux is a Registered Trademark of Linus Torvalds.
| text/markdown | null | Dimitris Zlatanidis <dslackw@gmail.com> | null | Dimitris Zlatanidis <dslackw@gmail.com> | null | slackware, linux, package, manager, tool | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Natural Language :: English",
"Environment :: Console",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Unix Shell",
"Topic :: Utilities",
"Topic :: Software Development :: Build Tools",
"Topic :: System :: Archiving :: Packaging",
"Topic :: System :: Software Distribution",
"Topic :: System :: Installation/Setup",
"Topic :: System :: Systems Administration",
"Topic :: System :: Software Distribution",
"Development Status :: 5 - Production/Stable"
] | [] | null | null | >=3.9 | [] | [
"slpkg"
] | [] | [
"tomlkit>=0.13.2",
"pythondialog>=3.5.3; extra == \"dialog\"",
"PySocks>=1.7.1; extra == \"socks\""
] | [] | [] | [] | [
"Changelog, https://gitlab.com/dslackw/slpkg/-/blob/master/CHANGELOG.md",
"Documentation, https://dslackw.gitlab.io/slpkg/",
"Homepage, https://dslackw.gitlab.io/slpkg/",
"Repository, https://gitlab.com/dslackw/slpkg.git"
] | python-requests/2.32.5 | 2026-02-20T08:35:46.518837 | slpkg-5.5.4.tar.gz | 129,587 | 38/43/14e0a2288e287eac12193ab090f6cee52f549b17d36e5ceac6b2fb7794aa/slpkg-5.5.4.tar.gz | source | sdist | null | false | 37167950cb7ae083fa1f94e763afe12a | 8534b0c88472340f8beb26ba65384d2abfbae56b576f644a800042f44bc4ade8 | 384314e0a2288e287eac12193ab090f6cee52f549b17d36e5ceac6b2fb7794aa | null | [] | 246 |
2.4 | trajectopy | 4.0.9 | Trajectory Evaluation in Python | <div align="center">
<h1>Trajectopy - Trajectory Evaluation in Python</h1>
<a href="https://github.com/gereon-t/trajectopy/releases"><img src="https://img.shields.io/github/v/release/gereon-t/trajectopy?label=version" /></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.10+-blue.svg" /></a>
<a href="https://github.com/gereon-t/trajectopy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/gereon-t/trajectopy" /></a>
<a href="https://github.com/psf/black"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
<br />
<a href="https://github.com/gereon-t/trajectopy"><img src="https://img.shields.io/badge/Windows-0078D6?st&logo=windows&logoColor=white" /></a>
<a href="https://github.com/gereon-t/trajectopy"><img src="https://img.shields.io/badge/Linux-FCC624?logo=linux&logoColor=black" /></a>
<a href="https://github.com/gereon-t/trajectopy"><img src="https://img.shields.io/badge/mac%20os-000000?&logo=apple&logoColor=white" /></a>
<div>
<img src=".images/uni_bonn.svg" height="50" />
<img src=".images/igg.gif" height="50" />
</div>
<h4>Trajectopy is a Python package with an optional graphical user interface for empirical trajectory evaluation. </h4>
<p align="center">
<img style="border-radius: 10px;" src="https://raw.githubusercontent.com/gereon-t/trajectopy/main/.images/trajectopy_gif_low_quality.gif">
</p>
Using [Mapbox](https://www.mapbox.com/), you can visualize your trajectories on a map:
<p align="center">
<img style="border-radius: 10px;" src=".images/plot.png">
</p>
</div>
## Installation
Full version (with GUI):
```bash
pip install "trajectopy[gui]"
```
Python package only:
```bash
pip install trajectopy
```
## Documentation
<a href="https://gereon-t.github.io/trajectopy/" target="_blank">https://gereon-t.github.io/trajectopy/</a>
## Key Features
Trajectopy offers a range of features, including:
- __Interactive GUI__: A user-friendly interface that enables seamless interaction with your trajectory data, making it easy to visualize, align, and compare trajectories.
- __Alignment__: An advanced trajectory alignment algorithm that can be tailored to the specific application and supports similarity transformation, lever-arm, and time-shift estimation.
- __Comparison__: Absolute and relative trajectory comparison metrics (__ATE and RPE__) that can be computed using various pose-matching methods.
- __Data Import/Export__: Support for importing and exporting data, ensuring compatibility with your existing workflows.
- __Customizable Visualization__: Powered by [Plotly](https://plotly.com/) or [Matplotlib](https://matplotlib.org/), trajectopy offers a range of interactive plots that can be customized to your needs. ([Demo](https://htmlpreview.github.io/?https://github.com/gereon-t/trajectopy/blob/main/example_data/report.html))
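The typical evaluation workflow — load two trajectories, then compute the ATE/RPE metrics listed above — can be sketched as follows (file paths are placeholders; check the documentation for the exact API of your installed version):

```python
import trajectopy as tpy

# Load ground-truth and estimated trajectories from file
gt_traj = tpy.Trajectory.from_file("ground_truth.traj")
est_traj = tpy.Trajectory.from_file("estimated.traj")

# Absolute and relative trajectory error (ATE / RPE)
ate_result = tpy.ate(trajectory_gt=gt_traj, trajectory_est=est_traj)
rpe_result = tpy.rpe(trajectory_gt=gt_traj, trajectory_est=est_traj)
```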
## Web Application (Docker)
A simple web application is available at [gereon-t/trajectopy-web](https://github.com/gereon-t/trajectopy-web) that allows you to use the core functionality of Trajectopy using Docker.
## Citation
If you use this library for any academic work, please cite our original [paper](https://www.degruyter.com/document/doi/10.1515/jag-2024-0040/html).
```bibtex
@article{Tombrink2024,
url = {https://doi.org/10.1515/jag-2024-0040},
title = {Spatio-temporal trajectory alignment for trajectory evaluation},
author = {Gereon Tombrink and Ansgar Dreier and Lasse Klingbeil and Heiner Kuhlmann},
journal = {Journal of Applied Geodesy},
doi = {10.1515/jag-2024-0040},
year = {2024},
codeurl = {https://github.com/gereon-t/trajectopy},
}
```
| text/markdown | null | Gereon Tombrink <tombrink@igg.uni-bonn.de> | null | Gereon Tombrink <tombrink@igg.uni-bonn.de> | GPLv3 | alignment, epsg, evaluation, leverarm, robotics, similarity, trajectory | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"autograd>=1.8.0",
"jinja2>=3.1.6",
"matplotlib>=3.9.4",
"networkx>=3.2.1",
"numpy>=2.0.2",
"pandas>=2.2.3",
"plotly>=6.1.0",
"pyproj>=3.6.1",
"rich>=14.0.0",
"rosbags>=0.9.23",
"scipy>=1.13.1",
"pyqt6; extra == \"gui\""
] | [] | [] | [] | [
"Homepage, https://gereon-t.github.io/trajectopy/",
"Documentation, https://gereon-t.github.io/trajectopy/",
"Repository, https://github.com/gereon-t/trajectopy",
"Bug Tracker, https://github.com/gereon-t/trajectopy/issues"
] | uv/0.9.14 {"installer":{"name":"uv","version":"0.9.14","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T08:35:20.288539 | trajectopy-4.0.9.tar.gz | 24,748,763 | 01/e9/5b4a07e3ed0648d7fa26414c9bd81164cf8e89225130fdfdcc7148f2e929/trajectopy-4.0.9.tar.gz | source | sdist | null | false | 902145692583ae78ff750ee84161e93f | f9d76ca0a746dd309490cc37f773ee0b41ceafda6dcb93ffd8ee5f6306f5d5d7 | 01e95b4a07e3ed0648d7fa26414c9bd81164cf8e89225130fdfdcc7148f2e929 | null | [
"LICENSE"
] | 239 |
2.4 | tvb-ext-xircuits | 3.0.1 | Jupyterlab extension for building TVB workflows in a visual and interactive manner | <p>
<img src="style/icons/TVB_logo.svg" alt="TVB logo" title="TVB" height="100" style="padding: 15px"/>
<img src="style/icons/VBT_logo.svg" alt="VBT logo" title="VBT" height="100" />
</p>
# tvb-ext-xircuits
This is a JupyterLab extension built as a prototype for building EBRAINS
(including TVB simulator, Siibra API) workflows in a visual and interactive manner. It
extends the existing [Xircuits](https://xircuits.io/) JupyterLab extension
by adding new components and new features on top.
Starting with version 2.0.0, tvb-ext-xircuits can be installed in **lightweight** mode or in **full** mode.
**Full** mode means that the extension will be fully working and able to run workflows.
**Lightweight** mode means that only the front-end part of the extension is available: users can see all of the extension's components, but running workflows will not work.
To install the extension locally and in full mode (recommended):

```
pip install tvb-ext-xircuits[full]
```

To install the extension in lightweight mode (only for specialized users):

```
pip install tvb-ext-xircuits
```
For dev mode setup there are 2 alternatives:
1. Using `jlpm`:
`jlpm` is a JupyterLab-provided, locked version of `yarn` and has a similar usage:
```
conda activate [my-env]
pip install --upgrade pip
pip install -e .[full]
jupyter labextension develop . --overwrite # Link your development version of the extension with JupyterLab
jupyter server extension enable tvbextxircuits # Enable the server extension
tvbextxircuits
```
2. Using `yarn`:
You need to have a dedicated `Python env`, `yarn`, `rust` and `cargo` (from https://rustup.rs/) prepared:
```
conda activate [my-env]
pip install --upgrade pip
pip install -e .[full]
yarn install
yarn install:extension
tvbextxircuits
```
To rebuild the extension after making changes to it:

```
# Rebuild TypeScript source after making changes
jlpm build
# Rebuild extension after making any changes
jupyter lab build
```

To rebuild automatically:

```
# Watch the source directory in another terminal tab
jlpm run watch
# Run Xircuits in watch mode in one terminal tab
jupyter lab --watch
```
## Notes
To see detailed info related to TVB components, you must first run the command `python generate_description_files.py`
Notebooks generated can be found at `TVB_generated_notebooks/<xircuits_id>`
## Acknowledgments
Copyright (c) 2022-2025 to Xircuits Team See: https://github.com/XpressAI/xircuits
Copyright (c) 2022-2023 to TVB-Xircuits team (SDL Neuroscience Juelich, INS Marseille, Codemart) for changes in this fork.
Copyright (c) 2024-2025 to Codemart - Brainiacs team for further changes in this fork.
This extension is built on top of the Xircuits (https://xircuits.io) Jupyter extension, and it adds custom features tailored for EBRAINS and VBT environments.
This project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).
This project has received funding from the European Union’s Horizon Europe Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project).
This project has received funding from the European Union’s Research and Innovation Program Horizon Europe under Grant Agreement No. 101137289 (Virtual Brain Twin Project).
| text/markdown | null | TVB-Xircuits Team <science@codemart.ro> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: 4",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"asgiref",
"dill",
"docutils",
"importlib-resources",
"ipykernel",
"joblib",
"json5",
"jupyter-core",
"jupyter-server<3,>=2.0.1",
"jupyterlab-server",
"jupyterlab-widgets",
"jupyterlab>=4.0.0",
"nbformat",
"notebook-shim",
"numpy",
"packaging",
"pyunicore",
"requests",
"toml",
"tornado",
"tvb-ext-bucket",
"tvb-ext-unicore",
"tvb-library",
"sbi; extra == \"full\"",
"siibra; extra == \"full\"",
"torch; extra == \"full\"",
"tvb-framework; extra == \"full\"",
"tvb-gdist; extra == \"full\"",
"tvb-widgets>=1.0; extra == \"full\"",
"vbi[inference]<=0.3; extra == \"full\"",
"coverage; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-jupyter[server]>=0.6.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/the-virtual-brain/tvb-ext-xircuits",
"Bug Tracker, https://req.thevirtualbrain.org",
"Repository, https://github.com/the-virtual-brain/tvb-ext-xircuits"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T08:34:59.842602 | tvb_ext_xircuits-3.0.1.tar.gz | 3,170,314 | e9/c8/e29a98dab546d18c2057935d90e70d811f549174720329d9544ff19520ab/tvb_ext_xircuits-3.0.1.tar.gz | source | sdist | null | false | 020413ec6f1619e18b475c6bcc2eb131 | 9fc78531b9caf6a4015133847ad5aed71a5edaccade4c9a7fb579ba0e763d49e | e9c8e29a98dab546d18c2057935d90e70d811f549174720329d9544ff19520ab | null | [
"LICENSE"
] | 224 |
2.3 | cmem-plugin-base | 4.16.1 | Base classes for developing eccenca Corporate Memory plugins. | # cmem-plugin-base
Base classes for developing eccenca Corporate Memory plugins.
[](https://github.com/eccenca/cmem-plugin-base/actions) [](https://pypi.org/project/cmem-plugin-base) [](https://pypi.org/project/cmem-plugin-base)
[![poetry][poetry-shield]][poetry-link] [![ruff][ruff-shield]][ruff-link] [![mypy][mypy-shield]][mypy-link] [![copier][copier-shield]][copier]
[poetry-link]: https://python-poetry.org/
[poetry-shield]: https://img.shields.io/endpoint?url=https://python-poetry.org/badge/v0.json
[ruff-link]: https://docs.astral.sh/ruff/
[ruff-shield]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json&label=Code%20Style
[mypy-link]: https://mypy-lang.org/
[mypy-shield]: https://www.mypy-lang.org/static/mypy_badge.svg
[copier]: https://copier.readthedocs.io/
[copier-shield]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/copier-org/copier/master/img/badge/badge-grayscale-inverted-border-purple.json
| text/markdown | eccenca GmbH | cmempy-developer@eccenca.com | Sebastian Tramp | sebastian.tramp@eccenca.com | Apache-2.0 | eccenca Corporate Memory, plugins, DataIntegration | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/eccenca/cmem-plugin-base | null | <4.0,>=3.13 | [] | [] | [] | [
"cmem-cmempy>=25.2.0",
"pydantic<3.0.0,>=2.12.2",
"python-ulid<4.0.0,>=3.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/eccenca/cmem-plugin-base"
] | poetry/2.1.4 CPython/3.13.11 Linux/6.11.0-1018-azure | 2026-02-20T08:34:54.032162 | cmem_plugin_base-4.16.1.tar.gz | 29,036 | ae/b8/6a907d88a6e63ca961ac33443fcf54092ca9d0716341e68d4659099b661a/cmem_plugin_base-4.16.1.tar.gz | source | sdist | null | false | 21213df645218b5da7c9b602c9281d99 | 03c8ce275cae4d047ef56278b9aed7fc64f432820f65dce11e4eb13e62cf1228 | aeb86a907d88a6e63ca961ac33443fcf54092ca9d0716341e68d4659099b661a | null | [] | 302 |
2.4 | utilsdpngr | 0.0.4 | Solution for retrain DpNgr | # utilsds
Utilsds is a library that includes classes and functions used in data science projects.
| text/markdown | null | DS Team <ds@sts.pl> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pandas>=2.2.2",
"numpy>=1.26.0",
"scikit-learn>=1.5.0",
"matplotlib>=3.9.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T08:33:40.124554 | utilsdpngr-0.0.4.tar.gz | 13,234 | 02/4c/f28f892aa1ee397bd1abf3ccca90a8ebff071924cb1b479bace4410e231d/utilsdpngr-0.0.4.tar.gz | source | sdist | null | false | 0ce9ad821b60d04beb89977c37ba4574 | c64b811428a6763a97df178755465af7da4540f3a48982744f199879770fc441 | 024cf28f892aa1ee397bd1abf3ccca90a8ebff071924cb1b479bace4410e231d | null | [] | 228 |
2.1 | seaweedfs-bin | 4.13 | 🌿SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. | # seaweedfs-bin
https://github.com/seaweedfs/seaweedfs
SeaweedFS is a simple and highly scalable distributed file system. There are two objectives:
1. to store billions of files!
2. to serve the files fast!
SeaweedFS started as an Object Store to handle small files efficiently.
Instead of managing all file metadata in a central master,
the central master only manages volumes on volume servers,
and these volume servers manage files and their metadata.
This relieves concurrency pressure from the central master and spreads file metadata into volume servers,
allowing faster file access (O(1), usually just one disk read operation).
There are only 40 bytes of disk storage overhead for each file's metadata.
It is so simple with O(1) disk reads that you are welcome to challenge the performance with your actual use cases.
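The design described above is visible directly in the HTTP API. Assuming a default local deployment (master on port 9333, volume server on port 8080), a write/read cycle looks roughly like this:

```bash
# 1. Ask the master to assign a file id and a volume server location
curl http://localhost:9333/dir/assign
# e.g. {"fid":"3,01637037d6","url":"127.0.0.1:8080",...}

# 2. Upload the file straight to the assigned volume server
curl -F file=@/path/to/photo.jpg http://127.0.0.1:8080/3,01637037d6

# 3. Read it back later: one volume lookup, usually one disk read
curl http://127.0.0.1:8080/3,01637037d6
```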
SeaweedFS started by implementing [Facebook's Haystack design paper](http://www.usenix.org/event/osdi10/tech/full_papers/Beaver.pdf).
Also, SeaweedFS implements erasure coding with ideas from
[f4: Facebook’s Warm BLOB Storage System](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-muralidhar.pdf), and has a lot of similarities with [Facebook’s Tectonic Filesystem](https://www.usenix.org/system/files/fast21-pan.pdf)
On top of the object store, the optional Filer can support directories and POSIX attributes.
Filer is a separate linearly-scalable stateless server with customizable metadata stores,
e.g., MySql, Postgres, Redis, Cassandra, HBase, Mongodb, Elastic Search, LevelDB, RocksDB, Sqlite, MemSql, TiDB, Etcd, CockroachDB, YDB, etc.
For any distributed key value stores, the large values can be offloaded to SeaweedFS.
With the fast access speed and linearly scalable capacity,
SeaweedFS can work as a distributed Key-Large-Value store.
SeaweedFS can transparently integrate with the cloud.
With hot data on local cluster, and warm data on the cloud with O(1) access time,
SeaweedFS can achieve both fast local access time and elastic cloud storage capacity.
What's more, the cloud storage access API cost is minimized.
Faster and cheaper than direct cloud storage!
## install
```sh
pip install seaweedfs-bin
```
| text/markdown | null | dowon <ks2515@naver.com> | null | null | Apache 2.0 | blob storage, data lake, distributed file system, distributed storage, erasure coding, file system, object storage, s3, seaweedfs | [
"Programming Language :: Go",
"Topic :: System :: Filesystems"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"repository, https://github.com/Bing-su/pip-binary-factory",
"seaweedfs, https://github.com/seaweedfs/seaweedfs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:33:34.723876 | seaweedfs_bin-4.13.tar.gz | 95,924,996 | 4e/ba/ea88612890d804cec110356d1c5af9e6cfb70cbd5c1ab25e2a73014ef964/seaweedfs_bin-4.13.tar.gz | source | sdist | null | false | 6136bfda02681e90b4dcc1bb3749c9a2 | d33ce00ebbf349e30d16eb41aa838b9e08db36b7212ff51246661c66b147184b | 4ebaea88612890d804cec110356d1c5af9e6cfb70cbd5c1ab25e2a73014ef964 | null | [] | 473 |
2.1 | pocketbase-bin | 0.36.4 | Open Source realtime backend in 1 file | # pocketbase-bin
https://pocketbase.io/
https://github.com/pocketbase/pocketbase
[PocketBase](https://pocketbase.io) is an open source Go backend, consisting of:
- embedded database (_SQLite_) with **realtime subscriptions**
- built-in **files and users management**
- convenient **Admin dashboard UI**
- and simple **REST-ish API**
**For documentation and examples, please visit https://pocketbase.io/docs.**
> [!WARNING]
> Please keep in mind that PocketBase is still under active development
> and therefore full backward compatibility is not guaranteed before reaching v1.0.0.
## PyPI package
```sh
pip install pocketbase-bin
```
Compared to the original, this package has the following differences:
- The `pb_data`, `pb_migrations` directories are created in the current working directory, not next to the executable.
- The default value of `publicDir` is also set to `pb_public` in the current working directory.
- `pocketbase update` command is disabled.
## Python SDK
[pocketbase](https://pypi.org/project/pocketbase/)
[pocketbase-async](https://pypi.org/project/pocketbase-async/)
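As an illustration, with the `pocketbase` SDK linked above, talking to a locally running instance looks roughly like this (the `posts` collection is hypothetical; check the SDK's own documentation for the current API):

```python
from pocketbase import PocketBase

# Connect to a locally running PocketBase instance
client = PocketBase("http://127.0.0.1:8090")

# Fetch the first page (20 records) of a hypothetical "posts" collection
result = client.collection("posts").get_list(1, 20)
for record in result.items:
    print(record.id)
```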
| text/markdown | null | dowon <ks2515@naver.com> | null | null | MIT | admin, backend, golang | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Go",
"Topic :: Database"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"pocketbase, https://pocketbase.io",
"repository, https://github.com/Bing-su/pip-binary-factory"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:33:03.782023 | pocketbase_bin-0.36.4.tar.gz | 12,314 | ec/dd/441ae714c15c94725d2cf6670c7126b9a7caf22bfd0fa2ce62b8b9edb3a2/pocketbase_bin-0.36.4.tar.gz | source | sdist | null | false | 0237bae6c24d4f502d186df0b8021308 | 6f8da7d1b652551375d2e1c571ab0d01fcbd25b249fb619f6442d4e968f16341 | ecdd441ae714c15c94725d2cf6670c7126b9a7caf22bfd0fa2ce62b8b9edb3a2 | null | [] | 780 |
2.4 | insight-proto | 0.0.55 | Insight Python Protobuf | # Language Independent Interface Types For INSIGHT
The proto files can be consumed as Git submodules or copied and built directly in the consumer project.
The compiled files are published to central repositories (Maven, ...).
## Prerequisites
### Python
```bash
pip install grpcio grpcio-tools mypy-protobuf
```
### Go
```bash
# protoc (protobuf compiler)
# Ubuntu/Debian:
sudo apt install -y protobuf-compiler
# macOS:
brew install protobuf
# Go protobuf plugins
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
go install github.com/srikrsna/protoc-gen-gotag@latest
```
Ensure `$GOPATH/bin` (usually `~/go/bin`) is on your `PATH`.
## Generate gRPC Client Libraries
To generate the raw gRPC client libraries, use `make gen-${LANGUAGE}`. Currently supported languages are:
* python
* golang
## Using a local build
When testing, you can build the Python package locally using `make build-python`. By default this builds version `0.0.1-dev`, which can then be installed using `pip install`; pass `VERSION` to build a specific version:
```bash
make build-python VERSION=0.0.2
```
Then update the agent's `pyproject.toml`:
```toml
dependencies = [
"insight-proto @ file:///home/bdonnell/repo/github/opentrace/insight-proto/build/python",
]
```
Then run pip install for the agent:
```bash
pip install -e ".[dev]"
```
Because this is a dev build, pip sometimes gets confused, so you might need to uninstall first.
```bash
pip uninstall insight-proto
```
## Go
Build the Go protobuf locally and update the `insight-proto-go` submodule:
```bash
make build-go-local
```
Then in your Go service, add a `replace` directive in `go.mod` to point at your local copy:
```go
replace github.com/opentrace/insight-proto-go => /home/bdonnell/repo/github/opentrace/insight-proto/insight-proto-go
```
## Releasing
Releases are published via GitHub Actions when a new release is tagged on GitHub.
| text/markdown | null | Ben Donnelly <ben@opentrace.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed"
] | [] | null | null | >3.7 | [] | [] | [] | [
"protobuf==6.33.2",
"build; extra == \"dev\"",
"certifi>=2024.7.4; extra == \"dev\"",
"grpcio-tools>=1.59.0; extra == \"dev\"",
"grpcio>=1.59.0; extra == \"dev\"",
"idna>=3.7; extra == \"dev\"",
"mypy-protobuf>=3.5.0; extra == \"dev\"",
"pygments>=2.15.0; extra == \"dev\"",
"requests>=2.32.0; extra == \"dev\"",
"setuptools>=70.0.0; extra == \"dev\"",
"tqdm>=4.66.3; extra == \"dev\"",
"twine; extra == \"dev\"",
"urllib3>=2.2.2; extra == \"dev\"",
"zipp>=3.19.1; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T08:32:59.280650 | insight_proto-0.0.55.tar.gz | 48,220 | 77/13/fe6400398d10b052a725dabb8ab445bd9c46deaccaec096a57ac5bbf4388/insight_proto-0.0.55.tar.gz | source | sdist | null | false | f3f87cc0df47ed9e8c30144b68cf98f5 | 5747ccb29ac2e6b8451ad5692e6b217c851e98be0336fcf132da464046a0a6ee | 7713fe6400398d10b052a725dabb8ab445bd9c46deaccaec096a57ac5bbf4388 | AGPL-3.0-only | [] | 251 |
2.1 | fzf-bin | 0.68.0 | fzf - 🌸 A command-line fuzzy finder | # fzf-bin
https://github.com/junegunn/fzf
**fzf** is an interactive Unix filter for the command line that can be used with any list: files, command history, processes, hostnames, bookmarks, git commits, etc.
See the original [repository](https://github.com/junegunn/fzf) for more information.

This is a Python wrapper that can be installed with pip.
## install
```sh
pip install fzf-bin
```
| text/markdown | null | dowon <ks2515@naver.com> | null | null | MIT | fuzzy finder, fzf, tui | [
"Programming Language :: Go",
"Topic :: Software Development :: User Interfaces",
"Topic :: Terminals"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"repository, https://github.com/Bing-su/pip-binary-factory"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:32:41.463858 | fzf_bin-0.68.0.tar.gz | 385,212 | 57/c9/8c898965c0cc926a47a52b0beee5434c74cc28e1007a052ed936450c65f1/fzf_bin-0.68.0.tar.gz | source | sdist | null | false | 0fa439884de4d82127ecdc125de457f8 | 36ca696f7ced8850be97ecd403b0b6d94add948e3ac43f8ce6e1938ba2108cc4 | 57c98c898965c0cc926a47a52b0beee5434c74cc28e1007a052ed936450c65f1 | null | [] | 875 |
2.1 | pydevd-pycharm | 253.31033.139 | PyCharm Debugger (used in PyCharm and PyDev) | # PyDev.Debugger
PyCharm's fork of [PyDev.Debugger][pydevd].
## Installation
In general, the debugger backend should **NOT** be installed separately if you're using an IDE which already
bundles it (such as PyDev or PyCharm).
## Compatibility
It is, however, available on PyPI so that it can be installed with `pip` for remote debugging -- when
debugging a process which runs on another machine, it's possible to `pip install pydevd-pycharm` and, in the code, use
`pydevd_pycharm.settrace(host='10.1.1.1')` to connect the debugger backend to the debugger UI running in the IDE
(whereas previously the sources had to be manually copied from the IDE installation).
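A minimal bootstrap for that workflow might look as follows. This is a hedged sketch: `'10.1.1.1'` and `12345` are placeholders that must match the "Python Debug Server" run configuration in the IDE, and `stdoutToServer`/`stderrToServer` mirror console output to the IDE.

```python
import sys

# Illustrative remote-debug bootstrap; host and port are placeholders.
try:
    import pydevd_pycharm
    pydevd_pycharm.settrace('10.1.1.1', port=12345,
                            stdoutToServer=True, stderrToServer=True)
except Exception:
    # pydevd-pycharm not installed, or no debug server reachable yet.
    pass

# True only if the debugger module was actually loaded.
debugger_attached = 'pydevd_pycharm' in sys.modules
```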
It should be compatible with Python 2.6 onwards (as well as Jython 2.7, IronPython and PyPy -- and
any other variant which properly supports the Python structure for debuggers -- i.e.: sys.settrace/threading.settrace).
Recent versions contain speedup modules using Cython, which are generated with a few changes in the regular files
to `cythonize` the files. To update and compile the cython sources (and generate some other auto-generated files),
`build_tools/build.py` should be run -- note that the resulting .pyx and .c files should be committed.
To generate a distribution with the precompiled binaries for the IDE, `build_binaries_windows.py` should be run (
note that the environments must be pre-created as specified in that file).
To generate a distribution to upload to PyPI, `python setup.py sdist bdist_wheel` should be run for each python version
which should have a wheel, and afterwards `twine upload -s dist/pydevd-*` should be run to actually upload the contents
to PyPI.
## Dependencies
CI dependencies are stored in `ci-requirements/`. These are high-level dependencies required to initialize tests execution.
Basically `tox` and its transitive requirements.
Test dependencies are stored in `test-requirements/`. These dependencies are required for successful execution of all the tests.
For local development you only need CI dependencies. Test dependencies are completely handled by `tox`, assuming you are running tests
through it.
Dependencies are pinned and split by supported Python version. It is done ...
- to avoid a rogue dependency update crashing the tests (and, consequently, the overnight safe-push if the test is in the aggregator),
- to have reproducible builds,
- to avoid having to find a set of dependencies which satisfies all the supported Python versions simultaneously.
For more details on the current dependency declaration approach see [PCQA-914][PCQA-914] and [PCQA-904][PCQA-904].
## Tests
Tests are executed via `tox` with the help of `pytest`.
To run all tests ...
```shell
tox
```
To run tests against a specific Python version, e.g., Python 3.13 ...
```shell
tox -e py313
```
To run a specific test against a specific Python version ...
```shell
tox -e py313 -- pydev_tests/test_pyserver.py::TestCPython::test_message
```
[pydevd]: https://github.com/fabioz/PyDev.Debugger
[PCQA-904]: https://youtrack.jetbrains.com/issue/PCQA-904
[PCQA-914]: https://youtrack.jetbrains.com/issue/PCQA-914
| text/markdown | JetBrains, Fabio Zadrozny and others | null | null | null | Apache 2.0 | pydev, pydevd, pydev.debugger, pycharm | [
"Development Status :: 6 - Mature",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Debuggers"
] | [] | https://github.com/JetBrains/intellij-community | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/1.12.1 pkginfo/1.4.2 requests/2.20.1 setuptools/40.6.2 requests-toolbelt/0.8.0 tqdm/4.28.1 CPython/3.7.1 | 2026-02-20T08:31:45.673563 | pydevd_pycharm-253.31033.139.tar.gz | 7,635,270 | d2/d2/f9281d9130c0566bff7e7ee10ff1bb800313231239ddd52a5f859263846a/pydevd_pycharm-253.31033.139.tar.gz | source | sdist | null | false | 8b555da92b632a27dff683ba6b63cabc | afb296dc0dda381e5931c82fde1ae53c0e406f126accb93149679bd666f1bed3 | d2d2f9281d9130c0566bff7e7ee10ff1bb800313231239ddd52a5f859263846a | null | [] | 213 |
2.4 | arcane-bing | 0.5.7 | Helpers to request bing API | # Arcane bing
This package is based on [bingads](https://docs.microsoft.com/en-us/advertising/guides/request-download-report?view=bingads-13).
## Get Started
```sh
pip install arcane-bing
```
## Example Usage
### Reporting
```python
from arcane.bing import Client  # import added for completeness; matches the Campaign Service example

bing_client = Client(
credentials=Config.BING_ADS_CREDENTIALS,
secrets_bucket=Config.SECRETS_BUCKET,
refresh_token_location=Config.BING_ADS_REFRESH_TOKEN,
storage_client=storage_client
)
reporting_service_manager, reporting_service = bing_client.get_bing_ads_api_client()
report_request = build_campaigns_report(reporting_service, bing_account_id)
result_file_path = bing_client.submit_and_download(report_request, reporting_service_manager)
```
### Campaign Service
:warning: For some API methods, you must provide the client's account id and the manager's customer id
```python
from arcane.bing import Client
from arcane.bing.helpers import parse_webfault_errors, parse_bing_response
from suds import WebFault  # suds is the SOAP client underlying bingads
bing_client = Client(
credentials=Config.BING_ADS_CREDENTIALS,
secrets_bucket=Config.SECRETS_BUCKET,
refresh_token_location=Config.BING_ADS_REFRESH_TOKEN,
storage_client=storage_client,
customer_id=CUSTOMER_ID,
account_id=ACCOUNT_ID
)
campaign_service = bing_client.get_service_client(service_name='CampaignManagement')
try:
response = campaign_service.GetCampaignsByAccountId(AccountId=ACCOUNT_ID)
all_campaigns = parse_bing_response(response)['Campaign']
# do stuff with all_campaigns
except WebFault as e:
bing_error = parse_webfault_errors(e)
# do stuff with bing_error
```
| text/markdown | Arcane | product@wearcane.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"arcane-core<2.0.0,>=1.6.0",
"backoff>=1.10.0",
"bingads==13.0.26"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T08:31:13.210213 | arcane_bing-0.5.7-py3-none-any.whl | 7,003 | e6/73/82519b4d2f76a048b625213848eb43f3ffda9ef60c724e4a194f3aa6b72b/arcane_bing-0.5.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 804d6bd21d3b4dc3594edd445f78da14 | 1177d5c427498b141f6844513bfa023452a3b7847e412d35a93066c85ea38534 | e67382519b4d2f76a048b625213848eb43f3ffda9ef60c724e4a194f3aa6b72b | null | [] | 235 |
2.4 | exasol-extension-license-protocol | 0.4.0 | Software License Generator and Validator for closed-source Extensions. | # Exasol Extension License Protocol Python
Software License Generator and Validator for closed-source Exasol Extensions.
It contains software components and command-line tools that enable signing a new license and validating the authenticity of an existing license.
_As this project is part of Exasol Text AI, the [Exasol Text AI license](https://www.exasol.com/terms-and-conditions/) applies to it, too._
| text/markdown | null | Christoph Kuhnke <christoph.kuhnke@exasol.com> | null | null | null | exasol, extension_license_protocol | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"cryptography<46,>=45.0.2",
"humanfriendly<11,>=10.0",
"pydantic<3,>=2.11.4",
"pyhanko-certvalidator<1,>=0.27.0",
"toml<0.11.0,>=0.10.2"
] | [] | [] | [] | [] | python-httpx/0.28.1 | 2026-02-20T08:30:04.520900 | exasol_extension_license_protocol-0.4.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl | 2,946,481 | 9f/64/ed529b492b0c3167ecf62dec40debf9622a9fde2865f1b947e4a0202de8f/exasol_extension_license_protocol-0.4.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl | cp310 | bdist_wheel | null | false | ce66e75e3a03089ccc2e6a5e4167af2e | 9ab6243bcd569c501aeef8ac30c791964a726cdce838b05650d46dfb28abd6e9 | 9f64ed529b492b0c3167ecf62dec40debf9622a9fde2865f1b947e4a0202de8f | LicenseRef-Proprietary | [] | 108 |
2.2 | wxCvModule | 0.1.0 | High-performance OpenCV + wxWidgets integration for Python | # wxCvModule User Guide
[](https://badge.fury.io/py/wxcvmodule)
[](https://pypi.org/project/wxcvmodule/)
[](https://opensource.org/licenses/MIT)
> **Version**: v1.5  **Last updated**: 2026-02-19
---
## Table of Contents
1. [Introduction](#1-introduction)
2. [Installation](#2-installation)
3. [Platform Notes](#3-platform-notes)
4. [Quick Start](#4-quick-start)
5. [Core Classes](#5-core-classes)
6. [ROI Tools in Detail](#6-roi-tools-in-detail)
7. [Overlay System](#7-overlay-system)
8. [Event Callbacks](#8-event-callbacks)
9. [Window Embedding and Resize](#9-window-embedding-and-resize)
10. [API Quick Reference](#10-api-quick-reference)
11. [FAQ](#11-faq)
---
## 1. Introduction
`wxCvModule` is a high-performance hybrid C++/Python library that seamlessly integrates **OpenCV image processing** and the **wxWidgets GUI** into Python wxPython applications.
### Key Features
| Feature | Description |
|------|------|
| **High-performance rendering** | Direct C++ rendering pipeline; large images stay responsive |
| **Rich ROI tools** | Rectangle, rotated rectangle, circle, annulus, polygon, point, and line segment |
| **Interactive controls** | Ctrl+wheel zoom, wheel panning, middle-button drag, context menu |
| **Python-friendly** | Pass NumPy arrays directly, with zero-copy shared memory |
| **Event delegation** | Right-click, mouse-move, and double-click events can be delegated to Python |
| **Cross-platform** | Windows, Linux, and macOS are all supported |
---
## 2. Installation
### 2.1 Install via pip (recommended)
```bash
pip install wxcvmodule
```
Install the required Python dependencies:
```bash
pip install wxPython numpy opencv-python
```
### 2.2 Building from source
To build from source, see the build instructions in `CLAUDE.md` and the per-platform build guides:
- **Windows**: `docs/Windows_Local_Build_Guide.md`
- **Linux**: `docs/Linux_Wheel_Build_Guide.md`
- **macOS**: the macOS build section of CLAUDE.md
After building, package a wheel with:
```bash
pip install build scikit-build-core
python -m build --wheel
```
---
## 3. Platform Notes
> This chapter is the most important reading before using wxCvModule, especially for macOS users.
### 3.1 Windows
Windows is the **simplest** platform, with no special restrictions.
- **Import order**: unrestricted; the order of `import wx` and `import wxCvModule` does not affect functionality.
- **Panel coordinates (x, y)**: use `-1` (let the system decide automatically).
- **Dependency management**: the wheel bundles all DLLs (OpenCV, wxWidgets); nothing else needs to be installed.
- **Path format**: `LoadImage()` fully supports Unicode paths, including Chinese (internally uses `std::wstring`).
- **GUI backend**: Win32 API (`__WXMSW__`)
**Typical initialization example:**
```python
import wx
import wxCvModule  # import order does not matter on Windows

handle = container.GetHandle()
cv_panel = wxCvModule.wxCvROIAdvPanel(handle, wx.ID_ANY, -1, -1, width, height)
```
---
### 3.2 Linux
Linux uses the GTK3 backend, so the GTK3 runtime must be installed on the system.
- **Import order**: unrestricted.
- **Panel coordinates (x, y)**: use `-1`.
- **System dependencies**: GTK3 must be present on the system.
**Installing GTK3 (if not already installed):**
```bash
# Ubuntu / Debian
sudo apt install libgtk-3-0
# Fedora / RHEL
sudo dnf install gtk3
# Arch Linux
sudo pacman -S gtk3
```
- **Path format**: `LoadImage()` handles paths as UTF-8 strings; Unicode paths are supported.
- **Wheel size**: about 11–12 MB. OpenCV and wxWidgets are statically linked into the `.so`; GTK3 comes from the system.
- **GUI backend**: GTK3 (`__WXGTK__`)
- **Image codecs**: HEIF (`.heic`) and JPEG XL (`.jxl`) require libheif / libjxl (bundled in the wheel).
**Typical initialization example:**
```python
import wx
import wxCvModule
handle = container.GetHandle()
cv_panel = wxCvModule.wxCvROIAdvPanel(handle, wx.ID_ANY, -1, -1, width, height)
```
---
### 3.3 macOS (important differences)
macOS differs **fundamentally in architecture** from Windows/Linux; read the notes below before use.
#### ⚠️ Key rule: import order
```python
# ✅ Correct: import wx first, then wxCvModule
import wx
import wxCvModule

# ❌ Wrong: raises RuntimeError
import wxCvModule  # wx is not loaded yet — crash!
import wx
```
**Why**: the macOS build of wxCvModule uses the `-undefined dynamic_lookup` technique, so wx symbols are resolved at runtime from wxPython's dylib. If wxPython has not been loaded first, `wxAppConsole::GetInstance()` on the C++ side cannot find a wxApp and raises an explicit `RuntimeError`.
#### ⚠️ Panel coordinates must be (0, 0)
```python
# ✅ Correct on macOS: use 0, 0
cv_panel = wxCvModule.wxCvROIAdvPanel(handle, wx.ID_ANY, 0, 0, width, height)

# ❌ Wrong on macOS: -1, -1 may prevent the panel from embedding correctly
cv_panel = wxCvModule.wxCvROIAdvPanel(handle, wx.ID_ANY, -1, -1, width, height)
```
#### ⚠️ The C++ panel must be released manually when the window closes
```python
def on_close(self, event):
    self.cv_panel = None   # triggers the C++ destructor → keeps wxApp from hanging on exit
    self.roi_panel = None
    self.Destroy()
```
**Why**: on macOS a hidden wxFrame serves as the temporary parent of the C++ panel. If it is not released, wxApp hangs on exit because a top-level window still exists.
#### Platform comparison summary
| Item | Windows | Linux | macOS |
|------|---------|-------|-------|
| Import-order restriction | None | None | **`import wx` must come first** |
| Panel x, y arguments | `-1, -1` | `-1, -1` | **`0, 0`** |
| Release panel on close | Recommended | Recommended | **Required** |
| wxWidgets linkage | Dynamic (separate DLLs) | Static (embedded) | dynamic_lookup (shares wxPython's) |
| GUI backend | Win32 | GTK3 | Cocoa |
| Unicode path support | `std::wstring` | UTF-8 | UTF-8 |
#### Best practice for cross-platform code
To make the same code run correctly on all three platforms:
```python
import sys
import os

# Locate wxCvModule automatically
_dir = os.path.dirname(os.path.abspath(__file__))
for _p in [_dir, os.path.join(_dir, "..", "build"),
           os.path.join(_dir, "..", "build", "Release")]:
    if os.path.isdir(_p) and _p not in sys.path:
        sys.path.insert(0, os.path.normpath(_p))

# wx must come before wxCvModule (required on macOS, harmless elsewhere)
import wx
import wxCvModule

# Panel coordinates: 0,0 on macOS; -1,-1 elsewhere
_X = 0 if sys.platform == "darwin" else -1
_Y = 0 if sys.platform == "darwin" else -1

# Create the panel
cv_panel = wxCvModule.wxCvROIAdvPanel(handle, wx.ID_ANY, _X, _Y, w, h)
```
---
## 4. Quick Start
### 4.1 Basic image display (wxCvPanel)
```python
import sys
import wx
import numpy as np
import wxCvModule
_X = 0 if sys.platform == "darwin" else -1
_Y = 0 if sys.platform == "darwin" else -1
class BasicViewerFrame(wx.Frame):
def __init__(self):
super().__init__(None, title="wxCvModule Basic Viewer", size=(800, 600))
self.cv_panel = None
        # Create the container panel
self.container = wx.Panel(self)
self.container.SetBackgroundColour(wx.Colour(40, 40, 40))
        # Bind events
self.Bind(wx.EVT_SHOW, self.on_show)
self.Bind(wx.EVT_CLOSE, self.on_close)
self.container.Bind(wx.EVT_SIZE, self.on_resize)
self.Centre()
def on_show(self, event):
if event.IsShown() and self.cv_panel is None:
wx.CallAfter(self.init_panel)
event.Skip()
def on_close(self, event):
        self.cv_panel = None  # required on macOS
self.Destroy()
def init_panel(self):
handle = self.container.GetHandle()
size = self.container.GetSize()
self.cv_panel = wxCvModule.wxCvPanel(
handle, wx.ID_ANY, _X, _Y, size.width, size.height
)
self.cv_panel.SetCenterImageEnable(True)
        # Create a test image
img = np.zeros((480, 640, 3), dtype=np.uint8)
img[:, :, 0] = 100 # B channel
self.cv_panel.SetMat(img)
self.cv_panel.SetZoomToFit()
def on_resize(self, event):
if self.cv_panel and self.cv_panel.IsOk():
sz = self.container.GetSize()
self.cv_panel.SetSize(sz.width, sz.height)
self.cv_panel.Refresh()
event.Skip()
if __name__ == "__main__":
app = wx.App()
BasicViewerFrame().Show()
app.MainLoop()
```
### 4.2 ROI editing (wxCvROIAdvPanel)
```python
class ROIEditorFrame(wx.Frame):
def __init__(self):
super().__init__(None, title="ROI Editor", size=(900, 650))
self.roi_panel = None
panel = wx.Panel(self)
sizer = wx.BoxSizer(wx.VERTICAL)
self.container = wx.Panel(panel)
sizer.Add(self.container, 1, wx.EXPAND | wx.ALL, 4)
        # ROI mode selector
mode_sizer = wx.BoxSizer(wx.HORIZONTAL)
mode_sizer.Add(wx.StaticText(panel, label="ROI Mode:"), 0,
wx.ALIGN_CENTER_VERTICAL | wx.RIGHT, 6)
self.mode_choice = wx.Choice(panel, choices=[
"Nothing", "Point", "Line", "Rectangle",
"RotatedRect", "Circle", "Annulus", "Polygon"
])
self.mode_choice.SetSelection(0)
self.mode_choice.Bind(wx.EVT_CHOICE, self.on_mode_change)
mode_sizer.Add(self.mode_choice)
sizer.Add(mode_sizer, 0, wx.ALL, 4)
panel.SetSizer(sizer)
self.Bind(wx.EVT_SHOW, self.on_show)
self.Bind(wx.EVT_CLOSE, self.on_close)
self.container.Bind(wx.EVT_SIZE, self.on_resize)
self.Centre()
def on_show(self, event):
if event.IsShown() and self.roi_panel is None:
wx.CallAfter(self.init_panel)
event.Skip()
def on_close(self, event):
        self.roi_panel = None  # required on macOS
self.Destroy()
def init_panel(self):
handle = self.container.GetHandle()
size = self.container.GetSize()
self.roi_panel = wxCvModule.wxCvROIAdvPanel(
handle, wx.ID_ANY, _X, _Y, size.width, size.height
)
        # Enable all ROI tools and the context menu
self.roi_panel.SetFuncEnable(True, True, True)
self.roi_panel.SetMenuROIEnable(True, True, True, True, True, True, True)
self.roi_panel.SetCenterImageEnable(True)
        # Set the crop callback: fires when the user finishes an ROI
self.roi_panel.SetOnCropCallback(self.on_crop)
        # Load an image
img = np.zeros((480, 640, 3), dtype=np.uint8)
self.roi_panel.SetMat(img)
self.roi_panel.SetZoomToFit()
def on_mode_change(self, event):
if self.roi_panel:
self.roi_panel.SetROIMode(self.mode_choice.GetSelection())
def on_resize(self, event):
if self.roi_panel and self.roi_panel.IsOk():
sz = self.container.GetSize()
self.roi_panel.SetSize(sz.width, sz.height)
self.roi_panel.Refresh()
event.Skip()
def on_crop(self, rect):
        # rect = (x, y, width, height), in original-image coordinates
wx.CallAfter(print, f"ROI Crop: {rect}")
```
### 4.3 Logic-only engine (wxCvEngine, no GUI required)
`wxCvEngine` suits batch image processing and command-line tools — **no wxPython and no visible window are needed**.
```python
import wxCvModule
import numpy as np

engine = wxCvModule.wxCvEngine()

# Set the image
img = np.zeros((480, 640, 3), dtype=np.uint8)
engine.SetMat(img)
print(f"Image size: {engine.GetImageSize()}")  # (width, height)

# Image-processing operations
engine.ConvertToGray()
engine.Resize(320, 240)
engine.GaussianBlur(5, 1.5)
edges = engine.Canny(50, 150)  # returns the result numpy array directly

# Fetch the processed image
result = engine.GetMat()  # returns numpy.ndarray

# Load from / save to file (Unicode paths supported)
engine.LoadImage("/path/to/image.png")
engine.SaveImage("/output/result.jpg")
engine.LoadImage("/path/with/unicode/圖片.png", wxCvModule.IMREAD_GRAYSCALE)
```
---
## 5. Core Classes
### 5.1 wxCvEngine — logic-only image engine
No GUI dependency; suited to image preprocessing, batch conversion, and similar tasks.
| Method | Description |
|------|------|
| `SetMat(img)` | Set the image (NumPy BGR/BGRA/Gray array) |
| `GetMat()` | Get the current image (NumPy array) |
| `HasImage()` | Whether an image is loaded |
| `GetImageSize()` | Returns `(width, height)` |
| `LoadImage(path, flag?)` | Load from file (Unicode paths supported) |
| `SaveImage(path)` | Save to file |
| `ConvertToGray()` | Convert to grayscale |
| `Resize(w, h)` | Resize |
| `GaussianBlur(ksize, sigma)` | Gaussian blur |
| `Canny(t1, t2)` | Canny edge detection; returns the result image |
| `Clear()` | Clear the image |
**imread flag constants:**
```python
wxCvModule.IMREAD_COLOR      # color (default)
wxCvModule.IMREAD_GRAYSCALE  # grayscale
wxCvModule.IMREAD_UNCHANGED  # keep the alpha channel
```
### 5.2 wxCvPanel — basic image display panel
Inherits wxScrolledCanvas; provides zoom, pan, and centered display.
| Method | Description |
|------|------|
| `SetMat(img)` | Set the displayed image |
| `GetMat()` | Get the current image |
| `LoadImage(path)` | Load from file (Unicode paths supported) |
| `SetZoomToFit()` | Zoom to fit the window |
| `SetOriginal()` | Restore 1:1 original size |
| `SetZoomIn()` | Zoom in |
| `SetZoomOut()` | Zoom out |
| `SetCenterImageEnable(bool)` | Enable centered image display |
| `SetCanvasBgColor(r, g, b)` | Set the canvas background color |
| `SetSize(w, h)` | Resize the panel |
| `Refresh()` | Force a repaint |
| `IsOk()` | Whether the panel initialized correctly |
| `GetHandle()` | Get the native window handle |
### 5.2.1 Mouse shortcuts
| Action | Behavior |
|------|------|
| Wheel up / down | Scroll the view vertically (when the image is zoomed in) |
| Horizontal wheel tilt | Scroll horizontally (mice with a horizontal wheel) |
| **Ctrl + wheel up** | **Zoom in**, centered on the mouse position |
| **Ctrl + wheel down** | **Zoom out**, centered on the mouse position |
| Middle-button drag | Pan the view freely (when the image is zoomed in) |
| Right click | Open the ROI context menu (or trigger the Python callback) |
> This design follows the conventions of mainstream annotation tools such as CVAT and LabelMe.
### 5.3 wxCvROIAdvPanel — advanced ROI editing panel
Inherits `wxCvPanel`, adding a complete ROI toolset and event system.
In addition to everything inherited from wxCvPanel, it provides:
| Method | Description |
|------|------|
| `SetFuncEnable(menu, crop, move)` | Enable context menu / crop / move |
| `SetMenuROIEnable(...)` | Control which ROI tools appear in the context menu |
| `SetROIMode(mode)` | Set the current ROI tool (0–7) |
| `GetROIMode()` | Get the current ROI tool |
| `GetRect()` | Get the ROI bounding box `(x, y, w, h)` |
| `GetPolygonPoints()` | Get polygon vertices `[(x,y), ...]` |
| `GetRotateAngle()` | Get the rotation angle |
| `GetInnerRadius()` | Get the annulus inner radius |
| `GetOuterRadius()` | Get the annulus outer radius |
| `GetStartAngle()` | Get the annulus start angle |
| `GetEndAngle()` | Get the annulus end angle |
| `GetLeftMouseDownPoint()` | Get the drag start point (image coordinates) |
| `GetLeftMouseUpPoint()` | Get the drag end point (image coordinates) |
| `SetEditingROI(mode, pts, angle)` | Set the ROI programmatically |
| `AppendOverlay(mode, pts, color, size, angle?)` | Add an overlay |
| `AppendMaskOverlay(mask, color, alpha)` | Add a mask overlay |
| `ClearOverlay()` | Clear all overlays |
| `SetDisplayOverlay(bool)` | Show / hide overlays |
| `UpdateDrawImage(bool)` | Force a re-render (pass True to rebuild the cache) |
| `ConvertMaskToPolygon(mask)` | Convert a binary mask to polygon points |
---
## 6. ROI Tools in Detail
### 6.1 ROI mode numbers
| Number | Name | Description | Interaction |
|------|------|------|----------|
| 0 | Nothing | No tool (clears the ROI) | — |
| 1 | Point | Point | Left click |
| 2 | Line | Line segment | Left drag |
| 3 | Rectangle | Rectangle | Left drag |
| 4 | RotatedRect | Rotated rectangle | Left drag; drag an edge to rotate |
| 5 | Circle | Circle | Left drag |
| 6 | Annulus | Annulus (ring sector) | Left drag |
| 7 | Polygon | Polygon | Left-click point by point; **double-click to finish** (or click the first vertex to close) |
```python
roi_panel.SetROIMode(3)  # switch to the rectangle tool
```
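As a convenience, the mode numbers in the table above can be mirrored in a plain Python mapping — illustrative only, not part of the wxCvModule API — so code reads by name instead of magic numbers:

```python
# Illustrative name→number mapping mirroring the ROI mode table above.
ROI_MODES = {
    "Nothing": 0, "Point": 1, "Line": 2, "Rectangle": 3,
    "RotatedRect": 4, "Circle": 5, "Annulus": 6, "Polygon": 7,
}

def roi_mode(name: str) -> int:
    """Look up an ROI mode number by (case-insensitive) name."""
    normalized = {key.lower(): value for key, value in ROI_MODES.items()}
    return normalized[name.lower()]
```

With this in place, `roi_panel.SetROIMode(roi_mode("rectangle"))` reads the same as the table.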
### 6.2 ROI parameter formats
These are the point-list formats used by `SetEditingROI` and `AppendOverlay` (**very important — an incorrect format fails silently**):
| Mode | SetEditingROI pts format | AppendOverlay pts format | Notes |
|------|----------------------|----------------------|------|
| Point (1) | `[(x, y)]` | `[(x, y), ...]` multiple points | — |
| Line (2) | `[(x1, y1), (x2, y2)]` | same | — |
| Rectangle (3) | `[(left, top), (width, height)]` | same | — |
| RotatedRect (4) | `[(left, top), (width, height)]` + `angle` argument | same + `angle` argument | — |
| Circle (5) | `[(cx, cy), (radius, 0)]` | same | the (x,y) returned by `GetRect` is the **center** |
| Annulus (6) | `[(cx, cy), (outer_r, 0), (inner_r, 0), (start, end)]` | `[(cx, cy), (0, 0), (inner_r, outer_r), (start, end)]` | ⚠️ the two formats differ slightly |
| Polygon (7) | `[(x1, y1), (x2, y2), ..., (xn, yn)]` | same | — |
> **Annulus note**: `SetEditingROI` and `AppendOverlay` order the pts[1] and pts[2] fields differently; double-check before use.
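Since the two annulus layouts differ only in how the radii are packed, a small pure-Python shim — a sketch based on the table above, not part of the wxCvModule API — can translate the `SetEditingROI` layout into the `AppendOverlay` layout:

```python
def annulus_edit_to_overlay(pts):
    """Repack annulus points from the SetEditingROI layout
    [(cx, cy), (outer_r, 0), (inner_r, 0), (start, end)]
    into the AppendOverlay layout
    [(cx, cy), (0, 0), (inner_r, outer_r), (start, end)]."""
    (cx, cy), (outer_r, _), (inner_r, _), (start, end) = pts
    return [(cx, cy), (0, 0), (inner_r, outer_r), (start, end)]
```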
### 6.3 Reading ROI data
```python
mode = roi_panel.GetROIMode()
rect = roi_panel.GetRect()  # (x, y, w, h)

if mode == 3:  # Rectangle
    x, y, w, h = rect
    print(f"Rectangle: ({x}, {y}) w={w} h={h}")
elif mode == 4:  # RotatedRect
    x, y, w, h = rect
    angle = roi_panel.GetRotateAngle()
    print(f"Rotated rect: ({x}, {y}) w={w} h={h} angle={angle:.1f}°")
elif mode == 5:  # Circle
    # Note: for Circle, GetRect returns (cx, cy, ?, ?); x, y are the center
    cx, cy = rect[0], rect[1]
    radius = roi_panel.GetOuterRadius()
    print(f"Circle: center=({cx}, {cy}) radius={radius}")
elif mode == 6:  # Annulus
    cx, cy = rect[0], rect[1]
    inner_r = roi_panel.GetInnerRadius()
    outer_r = roi_panel.GetOuterRadius()
    start = roi_panel.GetStartAngle()
    end = roi_panel.GetEndAngle()
    print(f"Annulus: center=({cx}, {cy}) inner={inner_r} outer={outer_r} "
          f"angles={start:.1f}°~{end:.1f}°")
elif mode == 7:  # Polygon
    pts = roi_panel.GetPolygonPoints()
    print(f"Polygon: {len(pts)} vertices")
    for i, (x, y) in enumerate(pts):
        print(f"  [{i}] ({x:.1f}, {y:.1f})")
```
### 6.4 Setting the ROI programmatically (SetEditingROI)
```python
# Rectangle ROI (top-left at 100,80; width 200, height 150)
roi_panel.SetROIMode(3)
roi_panel.SetEditingROI(3, [(100, 80), (200, 150)], 0.0)

# Circle ROI (center 320,240; radius 80)
roi_panel.SetROIMode(5)
roi_panel.SetEditingROI(5, [(320, 240), (80, 0.1)], 0.0)
# Note: use 0.1 rather than 0 for the second point's y value,
# so the C++ validity check does not reject it

# Polygon ROI (five-pointed star)
import math
pts = []
for j in range(10):
    a = -math.pi/2 + j * math.pi/5
    r = 100 if j % 2 == 0 else 40
    pts.append((320 + r*math.cos(a), 240 + r*math.sin(a)))
roi_panel.SetROIMode(7)
roi_panel.SetEditingROI(7, pts, 0.0)
```
---
## 7. Overlay System
The overlay system layers multiple ROI shapes on top of the image, each with its own color — useful for displaying several annotation results at once.
### 7.1 Basic usage
```python
# Clear any existing overlays
roi_panel.ClearOverlay()

# Colors are OpenCV BGR: (Blue, Green, Red)
red = (0, 0, 255)
green = (0, 255, 0)
blue = (255, 0, 0)
yellow = (0, 255, 255)

# AppendOverlay(mode, points, color, line_width, angle=0.0)
# Rectangle: [(left, top), (width, height)]
roi_panel.AppendOverlay(3, [(50, 50), (200, 100)], red, 2)
# Circle: [(cx, cy), (radius, 0)]
roi_panel.AppendOverlay(5, [(320, 240), (80, 0)], green, 2)
# Rotated rectangle: with an angle argument
roi_panel.AppendOverlay(4, [(150, 150), (180, 90)], blue, 2, 30.0)
# Annulus: [(center), (0, 0), (inner_r, outer_r), (start_angle, end_angle)]
roi_panel.AppendOverlay(6, [(400, 300), (0, 0), (40, 80), (30, 210)], yellow, 2)

# Show the overlays
roi_panel.SetDisplayOverlay(True)
# Update the view (True = rebuild the cache)
roi_panel.UpdateDrawImage(True)
```
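Because overlay colors use OpenCV's BGR channel order, a one-line helper — illustrative only, not part of the wxCvModule API — keeps RGB values from being passed in the wrong order:

```python
def rgb_to_bgr(r: int, g: int, b: int) -> tuple:
    """Convert an (R, G, B) triple to the (B, G, R) order OpenCV expects."""
    return (b, g, r)
```

For example, `rgb_to_bgr(255, 0, 0)` yields the `(0, 0, 255)` red used above.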
### 7.2 Mask overlays
Useful for displaying the output of semantic-segmentation models (e.g. SAM masks):
```python
import numpy as np
import cv2

# Build a binary mask (0 = background, 255 = foreground)
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.fillPoly(mask, [np.array([(100,100),(300,100),(300,300),(100,300)])], 255)

# AppendMaskOverlay(mask, color_bgr, alpha)
# alpha: 0.0 = fully transparent, 1.0 = fully opaque
roi_panel.AppendMaskOverlay(mask, (0, 0, 255), 0.5)  # translucent red
roi_panel.SetDisplayOverlay(True)
roi_panel.UpdateDrawImage(True)
```
### 7.3 Converting a mask to a polygon
```python
# Convert a pixel mask produced by an AI model into an editable polygon
polygon_pts = roi_panel.ConvertMaskToPolygon(mask)

# Set it as the editable ROI
roi_panel.SetEditingROI(7, polygon_pts, 0.0)
roi_panel.SetROIMode(7)
```
---
## 8. Event Callbacks
### 8.1 Crop event (user finishes an ROI)
Fires after the user finishes dragging out an ROI:
```python
def on_crop(rect):
    # rect = (x, y, w, h), in original-image coordinates
    x, y, w, h = rect
    print(f"ROI done: x={x:.0f} y={y:.0f} w={w:.0f} h={h:.0f}")

roi_panel.SetOnCropCallback(on_crop)
```
> **Note**: the callback fires on a C++ thread. To update the GUI (e.g. a `wx.TextCtrl`), you must use `wx.CallAfter`:
> ```python
> def on_crop(rect):
>     wx.CallAfter(self.info_label.SetLabel, f"Rect: {rect}")
> ```
### 8.2 Right-click delegation (REQ-006)
Intercept right clicks and implement custom behavior in Python:
```python
def on_right_click(pt, hit_index):
    """
    pt        : (x, y) in original-image coordinates (floats)
    hit_index : index of the overlay that was clicked; -1 if no ROI was hit
    return True  → intercept (suppress the native C++ context menu)
    return False → do not intercept (show the native C++ context menu)
    """
    x, y = pt
    if hit_index >= 0:
        print(f"Clicked ROI #{hit_index} at ({x:.1f}, {y:.1f})")
        # Implement a custom menu here
        return True   # intercept the native menu
    return False      # no ROI hit; show the native menu

roi_panel.SetOnRightClickCallback(on_right_click)

# Restore the native C++ menu
roi_panel.SetOnRightClickCallback(None)
```
### 8.3 Mouse move and double-click (REQ-010)
Track the mouse position on the image in real time, plus double-click events (handy for SAM-style interactive annotation, quick deletion, etc.):
```python
def on_mouse_move(pt):
    """pt = (x, y) in original-image coordinates; fires at high frequency"""
    # Use CallAfter for GUI updates, to avoid cross-thread problems
    wx.CallAfter(status_bar.SetStatusText, f"Position: ({pt[0]:.1f}, {pt[1]:.1f})")

def on_left_dclick(pt):
    """Left double-click — e.g. quick selection / a SAM foreground point"""
    wx.CallAfter(print, f"Left double-click: ({pt[0]:.1f}, {pt[1]:.1f})")

def on_right_dclick(pt):
    """Right double-click — e.g. quick deletion / a SAM background point"""
    wx.CallAfter(print, f"Right double-click: ({pt[0]:.1f}, {pt[1]:.1f})")

roi_panel.SetOnMouseMoveCallback(on_mouse_move)
roi_panel.SetOnLeftDClickCallback(on_left_dclick)
roi_panel.SetOnRightDClickCallback(on_right_dclick)

# Cancel a callback
roi_panel.SetOnMouseMoveCallback(None)
```
> **Performance note**: `on_mouse_move` is a high-frequency callback; the Python GIL cost is incurred only while a callback is registered. With none registered there is no performance impact at all.
---
## 9. Window Embedding and Resize
### 9.1 Manual resize synchronization
Because the C++ panel is embedded via native APIs (not managed by a wxSizer), it **does not resize with its parent container automatically** — you must synchronize manually:
```python
class MyFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, size=(800, 600))
        self.container = wx.Panel(self)
        self.cv_panel = None
        # Bind the container's resize event (the container, not the frame)
        self.container.Bind(wx.EVT_SIZE, self.on_container_resize)

    def on_container_resize(self, event):
        if self.cv_panel and self.cv_panel.IsOk():
            sz = self.container.GetSize()
            self.cv_panel.SetSize(sz.width, sz.height)
            self.cv_panel.Refresh()
        event.Skip()  # must be called so the sizer keeps working
```
### 9.2 Delayed initialization (must happen after the window is shown)
The C++ panel needs a valid native handle to initialize, and the handle only becomes reliably available after the window is shown — so use `EVT_SHOW` + `wx.CallAfter`:
```python
def on_show(self, event):
    if event.IsShown() and self.cv_panel is None:
        wx.CallAfter(self.init_panel)  # delay one event loop so the handle is valid
    event.Skip()
```
---
## 10. API Quick Reference
### Display control
| Method | Description |
|------|------|
| `SetMat(img)` | Set a NumPy array image (BGR/BGRA/Gray) |
| `GetMat()` | Get the image |
| `LoadImage(path)` | Load from file (Unicode paths supported) |
| `SetZoomToFit()` | Zoom to fit |
| `SetOriginal()` | 1:1 original size |
| `SetZoomIn()` | Zoom in |
| `SetZoomOut()` | Zoom out |
| `SetCenterImageEnable(bool)` | Enable image centering |
| `SetCanvasBgColor(r, g, b)` | Set the background color |
### ROI control
| Method | Description |
|------|------|
| `SetROIMode(mode)` | Set the tool (0–7) |
| `GetROIMode()` | Get the current tool |
| `SetFuncEnable(m, c, mv)` | Enable menu / crop / move |
| `SetMenuROIEnable(...)` | Control per-tool menu visibility (7 bools) |
| `SetEditingROI(mode, pts, angle)` | Set the ROI programmatically |
| `GetRect()` | Get the bounding box `(x, y, w, h)` |
| `GetPolygonPoints()` | Get polygon vertices |
| `GetRotateAngle()` | Get the rotation angle |
| `GetInnerRadius()` | Get the annulus inner radius |
| `GetOuterRadius()` | Get the annulus outer radius |
| `GetStartAngle()` | Get the annulus start angle |
| `GetEndAngle()` | Get the annulus end angle |
### Overlay control
| Method | Description |
|------|------|
| `AppendOverlay(mode, pts, color, size, angle?)` | Add an overlay shape |
| `AppendMaskOverlay(mask, color, alpha)` | Add a mask overlay |
| `ClearOverlay()` | Clear all overlays |
| `SetDisplayOverlay(bool)` | Show / hide overlays |
| `UpdateDrawImage(True)` | Force a re-render |
| `ConvertMaskToPolygon(mask)` | Convert a mask to a polygon |
### 回調設定
| 方法 | 說明 |
|------|------|
| `SetOnCropCallback(fn)` | ROI 完成回調 `fn(rect)` |
| `SetOnRightClickCallback(fn)` | 右鍵回調 `fn(pt, hit_index) → bool` |
| `SetOnMouseMoveCallback(fn)` | 滑鼠移動回調 `fn(pt)` |
| `SetOnLeftDClickCallback(fn)` | 左鍵雙擊回調 `fn(pt)` |
| `SetOnRightDClickCallback(fn)` | 右鍵雙擊回調 `fn(pt)` |
---
## 11. FAQ
### Q1: macOS raises `RuntimeError: wxCvModule on macOS requires 'import wx' before 'import wxCvModule'`
**Cause**: The macOS build of wxCvModule uses `dynamic_lookup` to resolve wx symbols from wxPython at runtime. If wxPython is not loaded first, the C++ side cannot find the wxApp instance.
**Fix**: Make sure `import wx` executes before `import wxCvModule`, including on all indirect import paths.
---
### Q2: The panel is blank after creation (white or black screen)
Common causes and fixes:
| Symptom | Likely cause | Fix |
|------|----------|----------|
| White screen (macOS) | Direct NSView embedding without Reparent | Use the APIs provided by wxCvModule; do not do Cocoa embedding yourself |
| Black screen (macOS) | `setWantsLayer:YES` conflict | Do not set a layer manually on the container NSView |
| Blank (all platforms) | `init_panel` ran before the window was shown | Use `EVT_SHOW` + `wx.CallAfter` to defer initialization |
| Blank (all platforms) | Handle fetched too early | Make sure `GetHandle()` is called only after `Show()` |
---
### Q3: The Python process does not exit after the window closes (macOS)
**Cause**: The C++ hidden frame (a hidden wxFrame) is a top-level window and keeps wxApp from shutting down.
**Fix**: Release the panels manually in `EVT_CLOSE`:
```python
def on_close(self, event):
    self.cv_panel = None   # triggers the C++ destructor → destroys the hidden frame
    self.roi_panel = None
    self.Destroy()
```
---
### Q4: The image does not rescale after a resize
The C++ panel is not managed by a wxSizer, so you must synchronize it manually:
```python
def on_resize(self, event):
    if self.cv_panel and self.cv_panel.IsOk():
        sz = self.container.GetSize()
        self.cv_panel.SetSize(sz.width, sz.height)
        self.cv_panel.Refresh()
    event.Skip()
```
---
### Q5: Linux reports `libgtk-3.so: cannot open shared object file`
Install the GTK3 runtime libraries:
```bash
sudo apt install libgtk-3-0    # Ubuntu/Debian
sudo dnf install gtk3          # Fedora/RHEL
sudo pacman -S gtk3            # Arch Linux
```
---
### Q6: Assertion failures or crashes when a callback updates the GUI
C++ callbacks may fire on a non-GUI thread. Use `wx.CallAfter` so GUI updates run on the main thread:
```python
def on_mouse_move(pt):
    # ❌ updating directly may crash
    # self.label.SetLabel(f"{pt}")
    # ✅ update safely via CallAfter
    wx.CallAfter(self.label.SetLabel, f"({pt[0]:.0f}, {pt[1]:.0f})")
```
---
### Q7: `AppendOverlay` has no effect
Common causes:
1. Forgetting to call `SetDisplayOverlay(True)` to enable overlay display.
2. Forgetting to call `UpdateDrawImage(True)` to trigger a redraw.
3. An incorrectly formatted point set (see the format table in Section 6.2).
```python
roi_panel.ClearOverlay()
roi_panel.AppendOverlay(3, [(100, 100), (200, 150)], (0, 0, 255), 2)
roi_panel.SetDisplayOverlay(True)   # required!
roi_panel.UpdateDrawImage(True)     # required!
```
---
*This guide applies to wxCvModule v1.5 and later.*
*For complete example programs, see `examples/python_demo.py`.* | text/markdown | null | wxCvRoot <karatow2022@outlook.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other data formats.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such attribution notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly declare otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [2026] [wxCvRoot]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: C++",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Topic :: Scientific/Engineering :: Image Processing",
"Topic :: Multimedia :: Graphics :: Viewers"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy>=1.19.0",
"wxPython>=4.1.0; platform_system == \"Windows\" or platform_system == \"Darwin\""
] | [] | [] | [] | [
"Homepage, https://github.com/wxCvRoot",
"Documentation, https://github.com/wxCvRoot/wxCvModule-docs",
"Repository, https://github.com/wxCvRoot/wxCvModule",
"Issues, https://github.com/wxCvRoot/wxCvModule/issues"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-20T08:29:37.140386 | wxcvmodule-0.1.0-cp39-cp39-macosx_26_0_arm64.whl | 32,750,090 | c9/e0/614b549b763b850d4780869b358e0c6d4700cacce06d6fbb00434b45e89f/wxcvmodule-0.1.0-cp39-cp39-macosx_26_0_arm64.whl | cp39 | bdist_wheel | null | false | accf6b43dbe0c3a177c373264ee1383c | e0c6b0d34eb6f02c720ebfbf97ec74121d9e6eee4560473b4bbef40a0c38c5b0 | c9e0614b549b763b850d4780869b358e0c6d4700cacce06d6fbb00434b45e89f | null | [] | 0 |
2.4 | epanet-plus | 0.2.2 | Python interface for EPANET-PLUS (incl. EPANET and EPANET-MSX) | [](https://pypi.org/project/epanet-plus/)
[](https://opensource.org/licenses/MIT)

[](https://github.com/WaterFutures/EPANET-PLUS/actions/workflows/build_test.yml)
[](https://epanet-plus.readthedocs.io/en/stable/?badge=stable)
[](https://pepy.tech/project/epanet-plus)
[](https://pepy.tech/project/epanet-plus)
# EPANET-PLUS
EPANET-PLUS is a C library that merges [EPANET](https://github.com/OpenWaterAnalytics/EPANET)
and [EPANET-MSX](https://github.com/OpenWaterAnalytics/epanet-msx) into a single library.
Most importantly, it also provides a Python package with a high-performance interface
(i.e., C extension) to the C library, together with additional helper functions for an easier
use of EPANET and EPANET-MSX.
If you are interested in creating and simulating complex scenarios, we recommend taking a look
at [EPyT-Flow](https://github.com/WaterFutures/EPyT-Flow), which builds upon EPANET-PLUS.
## Unique Features
Unique features of EPANET-PLUS that make it superior to other Python interfaces of EPANET are the following:
- High-performance (single) interface to the latest version of EPANET and EPANET-MSX
- Additional C-functions to extend EPANET and EPANET-MSX
- Python toolkit with handy functions for working with EPANET and EPANET-MSX
## Installation
Note that EPANET-PLUS supports Python 3.10 - 3.14.
The Python package contains the C library as a C extension and is
already pre-built for all major platforms.
### PyPI
```
pip install epanet-plus
```
### Git
Download or clone the repository:
```
git clone https://github.com/WaterFutures/EPANET-PLUS.git
cd EPANET-PLUS
```
Install all requirements as listed in [REQUIREMENTS.txt](https://raw.githubusercontent.com/WaterFutures/EPANET-PLUS/main/REQUIREMENTS.txt):
```
pip install -r REQUIREMENTS.txt
```
Build and install the package:
```
pip install .
```
## Quick Example
```python
from epanet_plus import EPyT, EpanetConstants

if __name__ == "__main__":
    # Load an .inp file in EPANET using the toolkit class
    epanet_api = EPyT("net2-cl2.inp")

    # Print some general information
    print(f"All nodes: {epanet_api.get_all_nodes_id()}")
    print(f"All links: {epanet_api.get_all_links_id()}")
    print(f"Simulation duration in seconds: {epanet_api.get_simulation_duration()}")
    print(f"Hydraulic time step in seconds: {epanet_api.get_hydraulic_time_step()}")
    print(f"Demand model: {epanet_api.get_demand_model()}")

    # Run hydraulic simulation and output pressure at each node (at every simulation step)
    epanet_api.openH()
    epanet_api.initH(EpanetConstants.EN_NOSAVE)
    tstep = 1
    while tstep > 0:
        t = epanet_api.runH()
        print(epanet_api.getnodevalues(EpanetConstants.EN_PRESSURE))
        tstep = epanet_api.nextH()
    epanet_api.closeH()

    # Close EPANET
    epanet_api.close()
```
## Documentation
Documentation is available on Read the Docs: [https://epanet-plus.readthedocs.io/en/stable/](https://epanet-plus.readthedocs.io/en/stable/)
## License
MIT license -- see [LICENSE](LICENSE)
## How to Cite?
If you use this software, please cite it as follows:
```bibtex
@misc{github:epanetplus,
  author       = {André Artelt},
  title        = {{EPANET-PLUS}},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {https://github.com/WaterFutures/EPANET-PLUS}
}
```
## How to get Support?
If you come across any bug or need assistance, please feel free to open a new
[issue](https://github.com/WaterFutures/EPyT-Flow/issues/)
if none of the existing issues answers your question.
## How to Contribute?
Contributions (e.g. creating issues, pull-requests, etc.) are welcome --
please make sure to read the [code of conduct](CODE_OF_CONDUCT.md) and
follow the [developers' guidelines](DEVELOPERS.md).
| text/markdown | null | André Artelt <aartelt@techfak.uni-bielefeld.de> | null | null | null | epanet, water, networks, hydraulics, quality, simulations | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/WaterFutures/EPANET-PLUS",
"Documentation, https://epanet-plus.readthedocs.io/en/stable/",
"Repository, https://github.com/WaterFutures/EPANET-PLUS.git",
"Issues, https://github.com/WaterFutures/EPANET-PLUS/issues"
] | twine/6.1.0 CPython/3.10.13 | 2026-02-20T08:29:07.819148 | epanet_plus-0.2.2.tar.gz | 333,975 | 2c/63/ea446e40532bd37b4fc7831e3ab9266d411fde104f6354e2f157d80014f3/epanet_plus-0.2.2.tar.gz | source | sdist | null | false | df9fa63669207193b2438cd180202032 | 1f6fcbb5210accd3ed97f96efb0e91ad92b750127899ffe6a740eb9f2f3023f9 | 2c63ea446e40532bd37b4fc7831e3ab9266d411fde104f6354e2f157d80014f3 | MIT | [
"LICENSE"
] | 2,451 |
2.4 | s33-modem-mon | 0.1.1 | Tools for monitoring S33 modems | # s33-modem-mon
Tools for the S33 modem.
1. A simple command-line tool to monitor and fetch stats from the modem using
its HNAP API.
```bash
$ s33 --password mypassword
{
  "customer_status_startup_sequence": {
    "downstream_frequency_hertz": 741000000,
    "downstream_comment": "Locked",
    "connectivity_status": "OK",
    "connectivity_comment": "Operational",
    "boot_status": "OK",
    "boot_comment": "Operational",
    "config_file_status": "OK",
    "config_file_comment": "",
    "security_status": "Enabled",
    "security_comment": "BPI+"
  },
  "customer_status_connection_info": {
    "system_uptime_seconds": 602183,
    "system_time_unix": 1771475283,
    "network_access": "Allowed"
  },
  "customer_status_downstream_channel_info": {
    "channels": [
      {
        "channel_id": 31,
        "lock_status": "Locked",
        "modulation": "QAM256",
        "frequency_hz": 741000000,
        "power_dbmv": 11.0,
        "snr_db": 37.0,
        "corrected_count": 0,
        "uncorrectable_count": 0
...
```
2. A background service to continuously read stats from the modem and insert
them into a TimescaleDB database for long-term monitoring and analysis.
```bash
$ s33mon --modem-password mypassword --db-host localhost --db-user myuser --db-password mypassword
2026-02-19 05:54:53,110 INFO s33mon.mon: Connecting to myuser@localhost:5432/s33mon ...
2026-02-19 05:54:53,157 INFO s33mon.mon: Schema is ready.
2026-02-19 05:54:53,157 INFO s33mon.mon: Starting poll loop — modem: 192.168.0.1, interval: 900s
2026-02-19 05:54:53,305 INFO httpx: HTTP Request: POST https://192.168.0.1/HNAP1/ "HTTP/1.1 200 OK"
2026-02-19 05:54:53,361 INFO httpx: HTTP Request: POST https://192.168.0.1/HNAP1/ "HTTP/1.1 200 OK"
2026-02-19 05:55:00,120 INFO httpx: HTTP Request: POST https://192.168.0.1/HNAP1/ "HTTP/1.1 200 OK"
2026-02-19 05:55:00,163 INFO s33mon.mon: Pull #3 stored successfully.
2026-02-19 05:55:00,163 INFO s33mon.mon: Sleeping 900s ...
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"httpx>=0.28.1",
"psycopg2-binary>=2.9.11"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T08:28:08.611571 | s33_modem_mon-0.1.1-py3-none-any.whl | 10,855 | 5c/f1/dea78b2eed0c12c498884bee62c46eb48e6f0977eba90db58461bd70bf95/s33_modem_mon-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | c6bc0caff3b9cc911a41ff642c880a30 | 132bff81abd19c116fe89160cb36b76f62dbf3f83400ce884ada695d5e7139bb | 5cf1dea78b2eed0c12c498884bee62c46eb48e6f0977eba90db58461bd70bf95 | null | [] | 235 |
2.4 | smartspread | 1.1.3 | A Python library for Google Sheets with high-level API, automatic type inference, and efficient caching. | # SmartSpread
A Python library for Google Sheets that extends [gspread](https://gspread.readthedocs.io/) with a high-level API, automatic type inference, and efficient caching.
## Features
- **Simple API**: Intuitive interface for spreadsheet and tab operations
- **Multiple Data Formats**: Work with DataFrames, list of dicts, or list of lists
- **Automatic Type Inference**: Smart conversion of numeric, string, and None values
- **Efficient Caching**: Minimizes API calls to stay within rate limits
- **Pandas Integration**: Seamless DataFrame read/write operations
- **Row Operations**: Update or insert rows based on column patterns
## Installation
```bash
pip install smartspread
```
## Quick Start
### Authentication
1. Create a [Google Cloud Project](https://console.cloud.google.com/)
2. Enable the Google Sheets API
3. Create a service account and download credentials JSON
4. Share your spreadsheet with the service account email
### Basic Usage
```python
from smartspread import SmartSpread
# Initialize with credentials
spread = SmartSpread(
    sheet_identifier="your-spreadsheet-id-or-name",
    key_file="path/to/credentials.json"
)
# Get or create a tab
tab = spread.tab("MyTab")
# Read data as DataFrame
df = tab.read_data()
# Modify data
tab.data["new_column"] = "value"
# Write back to Google Sheets
tab.write_data(overwrite_tab=True)
```
### Update Rows by Pattern
```python
# Update existing row or insert new one
tab.update_row_by_column_pattern(
    column="ID",
    value=123,
    updates={"Status": "completed", "Updated": "2024-01-01"}
)
tab.write_data(overwrite_tab=True)
```
### Filter Data
```python
# Filter rows by pattern
filtered = tab.filter_rows_by_column("Name", "Alice")
print(filtered)
```
### Work with Different Formats
```python
# DataFrame format (default)
tab_df = spread.tab("Sheet1", data_format="DataFrame")
df = tab_df.data # pandas DataFrame
# List of dicts format
tab_dict = spread.tab("Sheet2", data_format="dict")
data = tab_dict.data # [{"col1": "val1", ...}, ...]
# List of lists format
tab_list = spread.tab("Sheet3", data_format="list")
data = tab_list.data # [["header1", "header2"], ["val1", "val2"], ...]
```
### Refresh Data
```python
# Reload data after external changes
tab.refresh()
# Refresh spreadsheet metadata
spread.refresh()
```
## API Reference
### SmartSpread
- `SmartSpread(sheet_identifier, key_file=None, service_account_data=None, user_email=None)`
- `spread.tab(tab_name, data_format="DataFrame", keep_number_formatting=False)` - Get or create tab
- `spread.tab_names` - List all tab names
- `spread.tab_exists(tab_name)` - Check if tab exists
- `spread.url` - Get spreadsheet URL
- `spread.grant_access(email, role="owner")` - Grant access to user
- `spread.refresh()` - Clear cache and reload metadata
### SmartTab
- `tab.read_data()` - Read data from Google Sheets
- `tab.write_data(overwrite_tab=False, as_table=False)` - Write data to Google Sheets
- `tab.update_row_by_column_pattern(column, value, updates)` - Update or insert row
- `tab.filter_rows_by_column(column, pattern)` - Filter rows by pattern
- `tab.refresh()` - Reload data from Google Sheets
- `tab.data` - Access the data (DataFrame, list of dicts, or list of lists)
## Notes
- Google Sheets API has rate limits (60 requests/minute for free tier)
- SmartSpread uses caching to minimize API calls
- Empty cells are represented as `None` in DataFrames
- Integer columns use nullable `Int64` dtype to preserve `None` values
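The last two notes can be reproduced in plain pandas, independent of SmartSpread (a sketch of the dtype behavior only, not SmartSpread's own code):

```python
import pandas as pd

# A column read back from a sheet with one empty cell
ids = pd.array([1, 2, None], dtype="Int64")  # nullable integer dtype
df = pd.DataFrame({"ID": ids, "Name": ["a", "b", "c"]})

print(df["ID"].dtype)         # Int64
print(df["ID"].isna().sum())  # 1 -- the empty cell survives as a missing value
```

With the plain `int64` dtype the same column would either fail to construct or silently become `float64`, losing the distinction between 0 and "empty".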
## Changelog
### v1.1.3 (2024)
- Fixed: pd.NA values now properly sanitized to None in list and dict output formats
### v1.1.2 (2024)
- Changed: Package renamed to `smartspread` (no underscore) for cleaner imports
- Added: Backwards compatibility for `from smart_spread import ...` with deprecation warning
### v1.1.1 (2024)
- Fixed: JSON serialization error when using `data_format="list"` with nullable Int64 columns containing `pd.NA` values
## License
MIT License - see LICENSE file for details.
## Links
- [GitHub Repository](https://github.com/Redundando/smart_spread)
- [PyPI Package](https://pypi.org/project/smartspread/)
| text/markdown | null | Arved Klöhn <arved.kloehn@gmail.com> | null | null | MIT | google-sheets, gspread, pandas, spreadsheet, automation, dataframe | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial :: Spreadsheet",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"gspread>=5.0.0",
"pandas>=1.3.0",
"logorator>=0.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Redundando/smart_spread",
"Repository, https://github.com/Redundando/smart_spread",
"Issues, https://github.com/Redundando/smart_spread/issues",
"Documentation, https://github.com/Redundando/smart_spread#readme"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T08:27:00.053016 | smartspread-1.1.3.tar.gz | 12,993 | bd/3e/170d1707b535e17efbb058a6410a36f9a96dbe34cb08b0f20b6d5018732c/smartspread-1.1.3.tar.gz | source | sdist | null | false | 31ecc270f606cf75879e9c81344f089b | 657c15e55f529c65487cd782195ca5b3e842d346eb75fbfeb89827f94a155919 | bd3e170d1707b535e17efbb058a6410a36f9a96dbe34cb08b0f20b6d5018732c | null | [] | 222 |
2.4 | kingfisher-bin | 1.84.0 | Kingfisher secret scanning CLI (packaged binary) | # Kingfisher (Python wheel)
This package ships the Kingfisher CLI as a platform-specific Python wheel.
The `kingfisher` console script executes the bundled binary for your
OS/architecture.
## Usage
```bash
pip install kingfisher-bin
kingfisher --help
```
## Development
Use the helper script in `scripts/build-pypi-wheel.sh` from the repo root to
build a wheel for a specific target after compiling the Rust binary.
| text/markdown | MongoDB | null | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/mongodb/kingfisher",
"Repository, https://github.com/mongodb/kingfisher"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:25:42.519752 | kingfisher_bin-1.84.0-py3-none-any.whl | 38,443,344 | af/d6/56b762c22c5f8bcb792b5fda4cdfe7ae4a2bfbd9a9752327021b9ff991e6/kingfisher_bin-1.84.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 03661ed47faee85f7712e4ae7f7f5fd8 | 202ee0f338af96b55759e2a83888dcf7c0aaef7c5c52bce348ee283fb8bbbf51 | afd656b762c22c5f8bcb792b5fda4cdfe7ae4a2bfbd9a9752327021b9ff991e6 | null | [] | 120 |
2.4 | neops_graphql | 1.17.0b55 | A low-level generated GraphQL client for Neops | # Generated Neops GraphQL Client
```shell
pip install neops_graphql
```
**ALPHA**
A low-level generated GraphQL client for Neops.
This is a low-level client and should not be included directly in a project.
## Generate a new client
To generate a new client, execute the following steps:
```shell
make get-latest-schema
poetry install --no-root
poetry run ariadne-codegen
poetry build
```
## Publish a new client
```shell
# Get API token on pypi
poetry config pypi-token.pypi your-api-token
poetry publish
``` | text/markdown | null | Leandro Lerena <leandro.lerena@zebbra.ch> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx",
"pydantic",
"websockets"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T08:25:04.353666 | neops_graphql-1.17.0b55.tar.gz | 37,395 | 67/17/ec0584d1f9e1b8cdc44fbd0d15971e3403352c9143c7c6789aad1fb6bb37/neops_graphql-1.17.0b55.tar.gz | source | sdist | null | false | cfe34176e9f7b0e930ed8151ddd08a29 | 6bee7562b0193882cbcdb6d80de0ae158b143479cd887e115a3273a5abe2ec10 | 6717ec0584d1f9e1b8cdc44fbd0d15971e3403352c9143c7c6789aad1fb6bb37 | null | [] | 0 |
2.4 | chemap | 0.3.3 | Library for computing molecular fingerprint based similarities as well as dimensionality reduction based chemical space visualizations. |
<img src="./materials/chemap_logo_green_pink.png" width="400">

[](https://pypi.org/project/chemap/)

[](https://www.rdkit.org/)
# chemap - Mapping chemical space
Library for computing molecular fingerprint based similarities as well as dimensionality reduction based chemical space visualizations.
## Installation
`chemap` can be installed using pip.
```bash
pip install chemap
```
Or, to include UMAP computation on either CPU or GPU, choose one of the following options:
- CPU version: ```pip install "chemap[cpu]"```
- GPU version (CUDA 12): ```pip install "chemap[gpu-cu12]"```
- GPU version (CUDA 13): ```pip install "chemap[gpu-cu13]"```
## Fingerprint computations (choose from `RDKit` or `scikit-fingerprints`)
Fingerprints can be computed using generators from `RDKit` or `scikit-fingerprints`.
This includes popular fingerprint types such as:
### Path-based and circular fingerprints
- RDKit fingerprints
- Morgan fingerprints
- FCFP fingerprints
- ...
### Predefined substructure fingerprints
- MACCS fingerprints
- PubChem fingerprints
- Klekota-Roth fingerprints
- ...
### Topological distance based fingerprints
- Atom pair fingerprints
## Fingerprint computations II (implementations in `chemap`)
Due to some limitations of existing implementations, chemap also provides its own fingerprint generators.
These can generate folded as well as unfolded fingerprints, each in either a binary or a count variant.
- MAP4 fingerprint --> `from chemap.fingerprints import MAP4Gen`
- Lingo fingerprint --> `from chemap.fingerprints import LingoFingerprint`
And, while not really a fingerprint in the classical sense, chemap provides a simple element count vector/fingerprint, useful as a baseline for benchmarking tasks (or as an additional component of a fingerprint). It does nothing more than count the number of H's, C's, O's, etc.
- ElementCount fingerprint --> `from chemap.fingerprints import ElementCountFingerprint`
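As a naive, self-contained illustration of the element-count idea described above (this is not chemap's `ElementCountFingerprint`, which parses molecules properly and accounts for details like implicit hydrogens and bracket atoms):

```python
import re
from collections import Counter

def naive_element_counts(smiles: str) -> Counter:
    """Tally a few common element symbols in a SMILES string.

    Illustration only: ignores implicit hydrogens, bracket atoms, and
    aromatic lowercase symbols. A real implementation should parse the
    molecule with RDKit instead of scanning the string.
    """
    return Counter(re.findall(r"Cl|Br|[BCNOSPFI]", smiles))

print(naive_element_counts("CCO"))  # ethanol: Counter({'C': 2, 'O': 1})
```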
Here is a code example:
```python
import numpy as np
import scipy.sparse as sp
from rdkit.Chem import rdFingerprintGenerator
from skfp.fingerprints import MAPFingerprint, AtomPairFingerprint
from chemap import compute_fingerprints, DatasetLoader, FingerprintConfig
ds_loader = DatasetLoader()
# Load a single dataset from a local file
smiles = ds_loader.load("tests/data/smiles.csv")
# or load a dataset collection from a DOI based registry (e.g., Zenodo)
files = ds_loader.load_collection("10.5281/zenodo.18682050")
# pass one of the absolute file paths from files
smiles = ds_loader.load(files[0])
# ----------------------------
# RDKit: Morgan (folded, dense)
# ----------------------------
morgan = rdFingerprintGenerator.GetMorganGenerator(radius=3, fpSize=4096)
X_morgan = compute_fingerprints(
smiles,
morgan,
config=FingerprintConfig(
count=False,
folded=True,
return_csr=False, # dense numpy
invalid_policy="raise",
),
)
print("RDKit Morgan:", X_morgan.shape, X_morgan.dtype)
# -----------------------------------
# RDKit: RDKitFP (folded, CSR sparse)
# -----------------------------------
rdkitfp = rdFingerprintGenerator.GetRDKitFPGenerator(fpSize=4096)
X_rdkitfp_csr = compute_fingerprints(
smiles,
rdkitfp,
config=FingerprintConfig(
count=False,
folded=True,
return_csr=True, # SciPy CSR
invalid_policy="raise",
),
)
assert sp.issparse(X_rdkitfp_csr)
print("RDKit RDKitFP (CSR):", X_rdkitfp_csr.shape, X_rdkitfp_csr.dtype, "nnz=", X_rdkitfp_csr.nnz)
# --------------------------------------------------
# scikit-fingerprints: MAPFingerprint (folded, dense)
# --------------------------------------------------
# MAPFingerprint is a MinHash-like fingerprint (different from MAP4 lib).
map_fp = MAPFingerprint(fp_size=4096, count=False, sparse=False)
X_map = compute_fingerprints(
smiles,
map_fp,
config=FingerprintConfig(
count=False,
folded=True,
return_csr=False,
invalid_policy="raise",
),
)
print("skfp MAPFingerprint:", X_map.shape, X_map.dtype)
# ----------------------------------------------------
# scikit-fingerprints: AtomPairFingerprint (folded, CSR)
# ----------------------------------------------------
atom_pair = AtomPairFingerprint(fp_size=4096, count=False, sparse=False, use_3D=False)
X_ap_csr = compute_fingerprints(
smiles,
atom_pair,
config=FingerprintConfig(
count=False,
folded=True,
return_csr=True,
invalid_policy="raise",
),
)
assert sp.issparse(X_ap_csr)
print("skfp AtomPair (CSR):", X_ap_csr.shape, X_ap_csr.dtype, "nnz=", X_ap_csr.nnz)
# (Optional) convert CSR -> dense if you need a NumPy array downstream:
X_ap = X_ap_csr.toarray().astype(np.float32, copy=False)
```
## UMAP Chemical Space Visualization
`chemap` provides functions to compute UMAP coordinates based on molecular fingerprints.
Depending on your system and installation, this can run on the GPU via the very fast `cuml` library
using `create_chem_space_umap_gpu`; this variant only supports "cosine" as a metric and
folded (fixed-size) fingerprints.
The alternative is the numba-based variant `create_chem_space_umap`, which is still optimized
but much slower than the GPU version. In return, it supports Tanimoto
as a metric and can also handle unfolded fingerprints.
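Tanimoto here is the standard Jaccard-style measure on binary fingerprints; a minimal numpy sketch of the corresponding distance (not chemap's internal metric implementation):

```python
import numpy as np

def tanimoto_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Tanimoto (Jaccard) distance: 1 - |A AND B| / |A OR B| on binary vectors.
    inter = int(np.logical_and(a, b).sum())
    union = int(np.logical_or(a, b).sum())
    return 1.0 - inter / union if union else 0.0

a = np.array([1, 1, 0, 1, 0], dtype=bool)
b = np.array([1, 0, 0, 1, 1], dtype=bool)
print(tanimoto_distance(a, b))  # 0.5 (2 shared bits out of 4 set overall)
```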
Example:
```python
from rdkit.Chem import rdFingerprintGenerator
from chemap.plotting import create_chem_space_umap, scatter_plot_hierarchical_labels
data_plot = create_chem_space_umap(
data_compounds, # dataframe with smiles and class/subclass etc. information
col_smiles="smiles",
inplace=False,
x_col="x",
y_col="y",
fpgen = rdFingerprintGenerator.GetMorganGenerator(radius=9, fpSize=4096),
)
# Plot
fig, ax, _, _ = scatter_plot_hierarchical_labels(
data_plot,
x_col="x",
y_col="y",
superclass_col="Superclass",
class_col="Class",
low_superclass_thres=2500,
low_class_thres=5000,
max_superclass_size=10_000,
)
```
| text/markdown | Florian Huber | florian.huber@hs-duesseldorf.de | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"cuml-cu12>=25.6.0; platform_system == \"Linux\" and extra == \"gpu-cu12\"",
"cuml-cu13>=26.0.0; platform_system == \"Linux\" and extra == \"gpu-cu13\"",
"cupy-cuda12x>=13.0.0; platform_system == \"Linux\" and extra == \"gpu-cu12\"",
"cupy-cuda13x>=13.0.0; platform_system == \"Linux\" and extra == \"gpu-cu13\"",
"joblib>=1.3.2",
"map4>=1.1.3",
"matplotlib>=3.10.1",
"numba>=0.61.2",
"numpy>=2.1.0",
"pandas>=2.2.1",
"pooch>=1.8.2",
"pynndescent>=0.5.13; extra == \"cpu\"",
"rdkit>=2024.9.6",
"scikit-fingerprints>=1.15.0",
"scipy>=1.14.2",
"tqdm>=4.67.1",
"umap-learn>=0.5.8; extra == \"cpu\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:24:49.538564 | chemap-0.3.3.tar.gz | 57,635 | 1c/cb/8727ce4d413a196e5112cb6c255aee2e1a5baf28eb7c937f8d57e72d8278/chemap-0.3.3.tar.gz | source | sdist | null | false | d184cdd0954ae24f32b6ad38be503877 | c9e4b39e32c1756a130bb3272dde529272ff287e4cd47a5101a891a74973d319 | 1ccb8727ce4d413a196e5112cb6c255aee2e1a5baf28eb7c937f8d57e72d8278 | MIT | [
"LICENSE"
] | 240 |
2.4 | slurm-aio-gui | 1.1.2 | A modern, cross-platform GUI for managing and monitoring SLURM clusters. | # Slurm AIO
A modern, cross-platform GUI for managing and monitoring SLURM clusters, designed for simplicity and efficiency.
## ✨ Features
* **Real-time Monitoring:** Visualize cluster status (Nodes, CPU, GPU, and RAM usage) and track the job queue as it happens.
* **Project-Based Organization:** Group your jobs into projects for better management and clarity.
* **Intuitive Job Management:** Easily create, submit, modify, duplicate, and cancel jobs through a user-friendly interface.
* **Integrated Tools:**
* Browse remote directories on the cluster.
* Open an SSH terminal directly to the cluster or a running job's node.
* View job output and error logs in real-time.
* **Notifications:** Get Discord notifications for job status changes (start, completion, failure).
* **Modern UI:** A clean, dark-themed interface with helpful toast notifications.
## 📦 Installation Options
You can install **Slurm AIO** in two ways:
### 1. Install via pip (Recommended)
```sh
pip install slurm-aio-gui
```
After installation, run the app using:
```sh
slurm-aio-gui
```
### 2. Install from source
#### Prerequisites
* Python 3.8+
* Access to a SLURM cluster via SSH
* `sshpass` is required for password-based terminal authentication on Linux/macOS.
* **Ubuntu/Debian:** `sudo apt-get install sshpass`
* **macOS (Homebrew):** `brew install sshpass`
#### Steps
1. **Clone the repository:**
```sh
git clone https://github.com/Morelli-01/slurm_gui.git
cd slurm_gui
```
2. **Install the dependencies:**
```sh
pip install -r requirements.txt
```
3. **Run the application:**
```sh
python main_application.py
```
The first time you run the application, you will be prompted to enter your cluster's SSH connection details.
## 📸 Screenshots
*Visualization of the cluster status and the jobs panel.*


## 📄 License
This project is licensed under the MIT License.
| text/markdown | null | Morelli-01 <nicolamorelli30008@gmail.com> | null | null | MIT License | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering",
"Development Status :: 4 - Beta",
"Environment :: X11 Applications :: Qt"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"bcrypt==4.3.0",
"certifi==2025.4.26",
"cffi==1.17.1",
"charset-normalizer==3.4.2",
"cryptography==45.0.3",
"idna==3.10",
"paramiko==3.5.1",
"pycparser==2.22",
"PyNaCl==1.5.0",
"PyQt6==6.9.0",
"PyQt6-Qt6==6.9.0",
"PyQt6_sip==13.10.2",
"requests==2.32.3",
"urllib3==2.4.0",
"packaging",
"toml"
] | [] | [] | [] | [
"Homepage, https://github.com/Morelli-01/slurm_gui",
"Bug Tracker, https://github.com/Morelli-01/slurm_gui/issues"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T08:24:48.766882 | slurm_aio_gui-1.1.2.tar.gz | 483,370 | 80/fc/ee508dfd13ff96e36e1c69a1d98d01d88292c70216857f35e18276070954/slurm_aio_gui-1.1.2.tar.gz | source | sdist | null | false | 7ddf747a358f58ebac823ba6e4faffa7 | c7eabadde27331473d6353e222acd305ecc2b901099938b1130f7a6fa0271d84 | 80fcee508dfd13ff96e36e1c69a1d98d01d88292c70216857f35e18276070954 | null | [
"LICENSE"
] | 258 |
2.4 | entropy-audio-embeddings | 0.1.0 | entropy-audio-embeddings: audio mapping to low dimensional metric space | # Audio entropy embeddings
**entropy_audio_embeddings** provides tools to map audio files into a low dimensional metric space. The interface takes an mp3 file as input and generates a 2d numpy matrix as output. The embeddings are robust to time shifting and noisy environments while still discriminating between versions, e.g. of music recordings. With this low dimensional representation, tasks like audio retrieval and search by excerpt can be solved.
## Installation
The following packages are required:
- librosa>=0.11.0
- pydub>=0.25.1
- numpy>=2.4.2
```
$ pip install entropy-audio-embeddings
```
## Usage
```
$ python3 example.py
```
For example:
```
from pathlib import Path
from entropy_audio_embeddings import multiband_entropy_from_mp3

filename = Path("audio.mp3")  # path to your mp3 file
print("Getting embeddings from mp3 file")
entropies = multiband_entropy_from_mp3(filename, sr=44100, n_fft=2048, window_size=2048, hop_length=2048, shingle_size=10, shingle_step=2)
print("Done getting embeddings from mp3 file")
```
## Results
<pre>
------ Entropies ------
numpy array:
shape: (24, t)
t = number of time frames
</pre>
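The entries of this matrix are per-band spectral entropies. As a sketch of the underlying idea (not the package's exact implementation, which follows the multi-band signature of the cited papers), the Shannon entropy of a power spectrum treated as a probability mass is:

```python
import numpy as np

def spectral_entropy(power: np.ndarray) -> float:
    # Shannon entropy (in bits) of a power spectrum, after normalizing
    # it to a probability distribution over frequency bins.
    p = power / power.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

flat = np.ones(16)        # flat spectrum: maximal entropy, log2(16) = 4 bits
peaked = np.zeros(16)
peaked[3] = 1.0           # single spectral peak: zero entropy
print(spectral_entropy(flat))
print(spectral_entropy(peaked))
```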
## Cite
[1] Camarena-Ibarrola, A., Chávez, E., & Tellez, E. S. (2009, November). Robust radio broadcast monitoring using a multi-band spectral entropy signature. In Iberoamerican Congress on Pattern Recognition (pp. 587-594). Berlin, Heidelberg: Springer Berlin Heidelberg.
[2] Camarena-Ibarrola, A., Luque, F., & Chavez, E. (2017, November). Speaker identification through spectral entropy analysis. In 2017 IEEE international autumn meeting on power, electronics and computing (ROPEC) (pp. 1-6). IEEE.
| text/markdown | Fernando Luque | ing.fernando.luqueg@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/fluques/entropy_audio_embeddings | null | >=3.10 | [] | [] | [] | [
"numpy",
"librosa",
"pydub"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T08:24:05.502335 | entropy_audio_embeddings-0.1.0.tar.gz | 4,564 | 06/da/d52a19311235baffb7ca987755ecfee01302df873d4012bf43f5d9b6f014/entropy_audio_embeddings-0.1.0.tar.gz | source | sdist | null | false | 242f50ce6965a099d58329f132af02c1 | 0bc1cee64f6ebc0b00e5bd8ec38b696c4547bb20af9188463ae28938b64f3301 | 06dad52a19311235baffb7ca987755ecfee01302df873d4012bf43f5d9b6f014 | null | [] | 252 |
2.4 | docling-glm-ocr | 0.3.0 | A docling OCR plugin for GLM-OCR | # docling-glm-ocr
A docling OCR plugin that delegates text recognition to a remote
[GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) model served by vLLM.
---
<p align="center">
<a href="https://github.com/DCC-BS/docling-glm-ocr">GitHub</a>
|
<a href="https://pypi.org/project/docling-glm-ocr/">PyPI</a>
</p>
---
[](https://pypi.org/project/docling-glm-ocr/)
[](https://pypi.org/project/docling-glm-ocr/)
[](https://github.com/DCC-BS/docling-glm-ocr/blob/main/LICENSE)
[](https://github.com/DCC-BS/docling-glm-ocr/actions/workflows/main.yml)
[](https://github.com/astral-sh/ruff)
[](https://codecov.io/gh/DCC-BS/docling-glm-ocr)
## Overview
`docling-glm-ocr` is a [docling](https://github.com/DS4SD/docling) plugin that
replaces the built-in OCR stage with a call to a remote
[GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) model hosted on a
[vLLM](https://github.com/vllm-project/vllm) server.
Each page crop is sent to the vLLM OpenAI-compatible chat completion endpoint
as a base64-encoded image. The model returns Markdown-formatted text which
docling merges back into the document structure.
The plugin registers itself under the `"glm-ocr-remote"` OCR engine key so it
can be selected per-request through docling or docling-serve without changing
application code.
## Requirements
- Python 3.13+
- A running vLLM server hosting `zai-org/GLM-OCR` (or any compatible model)
## Installation
```bash
# with uv (recommended)
uv add docling-glm-ocr
# with pip
pip install docling-glm-ocr
```
## Usage
### Python SDK
```python
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import PdfPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling_glm_ocr import GlmOcrRemoteOptions
pipeline_options = PdfPipelineOptions(
allow_external_plugins=True,
ocr_options=GlmOcrRemoteOptions(
api_url="http://localhost:8001/v1/chat/completions",
model_name="zai-org/GLM-OCR",
),
)
converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
}
)
result = converter.convert("document.pdf")
print(result.document.export_to_markdown())
```
### docling-serve
Select the engine per-request via the standard API:
```bash
curl -X POST http://localhost:5001/v1/convert/source \
-H 'Content-Type: application/json' \
-d '{
"options": {
"ocr_engine": "glm-ocr-remote"
},
"sources": [{"kind": "http", "url": "https://arxiv.org/pdf/2501.17887"}]
}'
```
The server must have `DOCLING_SERVE_ALLOW_EXTERNAL_PLUGINS=true` set so the
plugin is loaded automatically.
## Configuration
### Environment variables
| Variable | Description | Default |
|---|---|---|
| `GLMOCR_REMOTE_OCR_API_URL` | vLLM chat completion URL | `http://localhost:8001/v1/chat/completions` |
| `GLMOCR_REMOTE_OCR_PROMPT` | Text prompt sent with each image crop | see below |
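A sketch of how such an env-with-default lookup typically resolves (the helper name is hypothetical; the plugin's actual configuration code may differ):

```python
import os

DEFAULT_API_URL = "http://localhost:8001/v1/chat/completions"

def resolve_api_url() -> str:
    # The environment variable, when set, takes precedence over the default.
    return os.environ.get("GLMOCR_REMOTE_OCR_API_URL", DEFAULT_API_URL)

print(resolve_api_url())
```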
### `GlmOcrRemoteOptions`
All options can be set programmatically via `GlmOcrRemoteOptions`:
| Option | Type | Description | Default |
|---|---|---|---|
| `api_url` | `str` | OpenAI-compatible chat completion URL | `GLMOCR_REMOTE_OCR_API_URL` env or `http://localhost:8001/v1/chat/completions` |
| `model_name` | `str` | Model name sent to vLLM | `zai-org/GLM-OCR` |
| `prompt` | `str` | Text prompt for each image crop | `GLMOCR_REMOTE_OCR_PROMPT` env or default prompt |
| `timeout` | `float` | HTTP timeout per crop (seconds) | `120` |
| `max_tokens` | `int` | Max tokens per completion | `16384` |
| `scale` | `float` | Image crop rendering scale | `3.0` |
| `max_concurrent_requests` | `int` | Max concurrent API requests | `10` |
| `max_retries` | `int` | Max retry attempts for HTTP errors | `3` |
| `retry_backoff_factor` | `float` | Exponential backoff factor for retries | `2.0` |
| `lang` | `list[str]` | Language hint (passed to docling) | `["en"]` |
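With the defaults above (`max_retries=3`, `retry_backoff_factor=2.0`), an exponential schedule would wait roughly 1 s, 2 s, and 4 s between attempts. The exact schedule the plugin uses is an assumption here; a common interpretation of a backoff factor is:

```python
def backoff_delays(max_retries: int = 3, factor: float = 2.0) -> list[float]:
    # Delay before retry attempt i (1-based): factor ** (i - 1) seconds.
    return [factor ** (i - 1) for i in range(1, max_retries + 1)]

print(backoff_delays())  # [1.0, 2.0, 4.0]
```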
Default prompt:
```
Recognize the text in the image and output in Markdown format.
Preserve the original layout (headings/paragraphs/tables/formulas).
Do not fabricate content that does not exist in the image.
```
## Architecture
```mermaid
flowchart LR
subgraph docling
Pipeline --> GlmOcrRemoteModel
end
subgraph vLLM
GLMOCR["zai-org/GLM-OCR"]
end
GlmOcrRemoteModel -- "POST /v1/chat/completions\n(base64 image)" --> GLMOCR
GLMOCR -- "Markdown text" --> GlmOcrRemoteModel
```
For each page the model:
1. Collects OCR regions from the docling layout analysis
2. Renders each region using the page backend (scale configurable, default 3×)
3. Encodes the crop as a base64 PNG data URI
4. POSTs concurrent chat completion requests to the vLLM endpoint (with retry logic)
5. Returns the recognised text as `TextCell` objects for docling to merge
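Step 3 boils down to standard base64 data-URI encoding; a minimal sketch (not the plugin's internal code):

```python
import base64

def png_data_uri(png_bytes: bytes) -> str:
    # Wrap raw PNG bytes as a data URI, the form accepted by
    # OpenAI-compatible chat-completion image messages.
    return "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")

# The 8-byte PNG file signature stands in for a real rendered crop here.
uri = png_data_uri(b"\x89PNG\r\n\x1a\n")
print(uri)  # data:image/png;base64,iVBORw0KGgo=
```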
## Starting a GLM-OCR vLLM server
```bash
docker run -d \
--rm --name ocr-glm \
--gpus device=0 \
--ipc=host \
-p 8001:8000 \
-v "${HOME}/.cache/huggingface:/root/.cache/huggingface" \
-e "HF_TOKEN=${HF_TOKEN}" \
--entrypoint /bin/bash \
vllm/vllm-openai:latest \
-c "uv pip install --system --upgrade transformers && \
exec vllm serve zai-org/GLM-OCR \
--served-model-name zai-org/GLM-OCR \
--port 8000 \
--trust-remote-code"
```
The plugin will connect to `http://localhost:8001/v1/chat/completions` by default.
## Development
### Setup
```bash
git clone https://github.com/DCC-BS/docling-glm-ocr.git
cd docling-glm-ocr
make install
```
### Available commands
```
make install Install dependencies and pre-commit hooks
make check Run all quality checks (ruff lint, format, ty type check)
make test Run tests with coverage report
make build Build distribution packages
make publish Publish to PyPI
```
### Running tests
```bash
make test
```
Tests are in `tests/` and use [pytest](https://pytest.org).
Coverage reports are generated at `coverage.xml` and printed to the terminal.
#### End-to-end tests
The e2e tests hit a real vLLM server and are **skipped by default**.
To run them, set the server URL and use the `e2e` marker:
```bash
GLMOCR_REMOTE_OCR_API_URL=http://localhost:8001/v1/chat/completions pytest -m e2e
```
### Code quality
This project uses:
- **[ruff](https://github.com/astral-sh/ruff)** – linting and formatting
- **[ty](https://github.com/astral-sh/ty)** – type checking
- **[pre-commit](https://pre-commit.com/)** – pre-commit hooks
Run all checks:
```bash
make check
```
### Releasing
Releases are published to PyPI automatically.
Update the version in `pyproject.toml`, then trigger the **Publish** workflow from GitHub Actions:
```
GitHub → Actions → Publish to PyPI → Run workflow
```
The workflow tags the commit, builds the package, and publishes to PyPI via trusted publishing.
## License
[MIT](LICENSE) © Data Competence Center Basel-Stadt
| text/markdown | null | Yanick Schraner <yanick.schraner@bs.ch>, Tobias Bollinger <tobias.bollinger@bs.ch> | null | null | MIT | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"docling>=2.73",
"httpx>=0.27",
"pillow>=10.0"
] | [] | [] | [] | [
"Homepage, https://github.com/DCC-BS/docling-glm-ocr",
"Repository, https://github.com/DCC-BS/docling-glm-ocr",
"Issues, https://github.com/DCC-BS/docling-glm-ocr/issues",
"Changelog, https://github.com/DCC-BS/docling-glm-ocr/releases"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T08:23:40.318460 | docling_glm_ocr-0.3.0-py3-none-any.whl | 9,598 | ec/2d/c2362aa6260f65c9c78192393db117b04804fcdb223c6f718e2004467194/docling_glm_ocr-0.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | f02e7164dd729a31883612ecc2013a5e | beb7eac38e290326355c2ce19feccdb778351c3466e49061d1eff193db75da3c | ec2dc2362aa6260f65c9c78192393db117b04804fcdb223c6f718e2004467194 | null | [
"LICENSE"
] | 248 |
2.4 | mathrobo | 0.0.3 | basic mathematical library for robotics research | # Mathrobo
Mathrobo is a lightweight library designed to support mathematical optimization and computations related to robotics.
## Installation
### Clone the repository
```bash
git clone https://github.com/MathRobotics/MathRobo.git
cd MathRobo
```
### Install dependencies with uv
Sync the environment from `pyproject.toml` and `uv.lock`:
```bash
uv sync
```
### Install the package in editable mode
```bash
uv pip install -e .
```
## Examples
Refer to the examples in the `examples` folder, where you can find Jupyter notebooks and scripts demonstrating various use cases of the library.
## Usage
You can also work with spatial transformations using the `SE3` class. The
following snippet creates a 90 degree rotation around the Z axis with a
translation, applies the transformation to a point and then inverts it:
```python
import numpy as np
import mathrobo as mr
# Rotation of 90 deg about Z and translation of 1 m along X
rot = mr.SO3.exp(np.array([0.0, 0.0, 1.0]), np.pi / 2)
T = mr.SE3(rot, np.array([1.0, 0.0, 0.0]))
point = np.array([0.0, 1.0, 0.0])
transformed = T @ point
recovered = T.inv() @ transformed
print(transformed)
print(recovered)
```
Here is a quick example that computes the numerical gradient of a simple function:
```python
import numpy as np
import mathrobo as mr
f = lambda x: np.sum(x**2)
x = np.array([1.0, 2.0, -3.0])
grad = mr.numerical_grad(x, f)
print(grad)
```
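Under the hood, such a numerical gradient is typically a finite-difference scheme. A self-contained central-difference sketch (not necessarily mathrobo's exact implementation) reproduces the result above:

```python
import numpy as np

def central_diff_grad(x: np.ndarray, f, eps: float = 1e-6) -> np.ndarray:
    # Central difference: df/dx_i ~ (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps)
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.array([1.0, 2.0, -3.0])
print(central_diff_grad(x, lambda v: np.sum(v**2)))  # ~ [2.0, 4.0, -6.0]
```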
You can also perform numerical integration using the Gaussian quadrature helper
`gq_integrate`:
```python
import numpy as np
import mathrobo as mr
f = lambda s: np.array([np.sin(s)])
val = mr.gq_integrate(f, 0.0, np.pi, digit=5)
print(val) # ~ 2.0
```
For spline trajectories, SciPy's `BSpline` can be used to create and evaluate a
curve:
```python
import numpy as np
from scipy.interpolate import make_interp_spline
t = np.array([0, 1, 2, 3])
points = np.array([0.0, 1.0, 0.0, 1.0])
spl = make_interp_spline(t, points, k=3)
ts = np.linspace(0, 3, 20)
ys = spl(ts)
print(ys)
```
## Running Tests
Run the test suite with uv:
```bash
uv run pytest
```
## Contributing
Contributions are welcome! Feel free to report issues, suggest features, or submit pull requests.
## License
This project is licensed under the [MIT License](LICENSE).
| text/markdown | null | taiki-ishigaki <taiki000ishigaki@gmail.com> | null | null | MIT License | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"jax>=0.4.30",
"numpy>=2.0.2",
"pytest>=8.4.2",
"scipy>=1.13.1",
"sympy>=1.14.0"
] | [] | [] | [] | [
"Homepage, https://github.com/MathRobotics/MathRobo"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T08:23:37.534023 | mathrobo-0.0.3.tar.gz | 108,929 | 5b/e6/1432b3b3eec6ac99465794a1efeac97664c8e39d82d8695944d9f5eb932e/mathrobo-0.0.3.tar.gz | source | sdist | null | false | 636f9898a0ea11c9a04dd33e645c5cd4 | 57dc0af2188ebbfd941c484864e9fbb568b2d89ee71b032a0b456c58bbe52aa9 | 5be61432b3b3eec6ac99465794a1efeac97664c8e39d82d8695944d9f5eb932e | null | [
"LICENSE"
] | 250 |
2.1 | foundry-dev-tools-transforms | 2.1.23 | Seamlessly run your Palantir Foundry Repository transforms code on your local machine. | <div align="center">
<br/>
<a href="https://github.com/emdgroup/foundry-dev-tools/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/emdgroup/foundry-dev-tools/ci.yml?style=flat-square"/></a>
<a href="https://github.com/emdgroup/foundry-dev-tools/actions/workflows/docs.yml"><img src="https://img.shields.io/github/actions/workflow/status/emdgroup/foundry-dev-tools/docs.yml?style=flat-square"/></a>
<a href="https://pypi.org/project/foundry-dev-tools/"><img src="https://img.shields.io/pypi/pyversions/foundry-dev-tools?style=flat-square&label=Supported%20Python%20versions&color=%23ffb86c"/></a>
<a href="https://pypi.org/project/foundry-dev-tools/"><img src="https://img.shields.io/pypi/v/foundry-dev-tools.svg?style=flat-square&label=PyPI%20version&color=%23bd93f9"/></a>
<a href="https://anaconda.org/conda-forge/foundry-dev-tools"><img src="https://img.shields.io/conda/vn/conda-forge/foundry-dev-tools.svg?style=flat-square&label=Conda%20Forge%20Version&color=%23bd93f9" alt="Conda Version"/></a>
<a href="https://pypi.org/project/foundry-dev-tools/"><img src="https://img.shields.io/pypi/dm/foundry-dev-tools?label=PyPI%20Downloads&style=flat-square&color=%236272a4"/></a>
<a href="https://anaconda.org/conda-forge/foundry-dev-tools"><img src="https://img.shields.io/conda/dn/conda-forge/foundry-dev-tools.svg?style=flat-square&label=Conda%20Forge%20Downloads&color=%236272a4" alt="Conda Downloads"/></a>
<a href="https://github.com/emdgroup/foundry-dev-tools/issues"><img src="https://img.shields.io/github/issues/emdgroup/foundry-dev-tools?style=flat-square&color=%23ff79c6"/></a>
<a href="https://github.com/emdgroup/foundry-dev-tools/pulls"><img src="https://img.shields.io/github/issues-pr/emdgroup/foundry-dev-tools?style=flat-square&color=%23ff79c6"/></a>
<a href="http://www.apache.org/licenses/LICENSE-2.0"><img src="https://shields.io/badge/License-Apache%202.0-green.svg?style=flat-square&color=%234c1"/></a>
<p><a href="https://emdgroup.github.io/foundry-dev-tools">Documentation</a></p>
<a href="https://emdgroup.github.io/foundry-dev-tools/getting_started/index.html">Getting Started / Usage</a>
•
<a href="https://emdgroup.github.io/foundry-dev-tools/examples/api.html">Examples</a>
•
<a href="https://emdgroup.github.io/foundry-dev-tools/dev/contribute.html">Development/Contribute</a>
</div>
# Foundry DevTools
Seamlessly run your Palantir Foundry Repository transforms code and more on your local machine.
Foundry DevTools is a set of useful libraries to interact with the Foundry APIs.
It consists of two parts:
- The [transforms](https://www.palantir.com/docs/foundry/transforms-python/transforms-python-api/) implementation
- An implementation of the Foundry `transforms` package that internally uses the `CachedFoundryClient`.
This allows you to seamlessly run your Palantir Foundry Code Repository transforms code on your local machine.
Foundry DevTools does not cover all of Foundry's features, more on this [here](https://emdgroup.github.io/foundry-dev-tools/dev/architecture.html#known-limitations-contributions-welcome).
- API clients
We implemented multiple clients for many foundry APIs like compass, catalog or foundry-sql-server.
- For example:
```python
from foundry_dev_tools import FoundryContext
# the context, that contains your credentials and configuration
ctx = FoundryContext()
df = ctx.foundry_sql_server.query_foundry_sql("SELECT * FROM `/Global/Foundry Training and Resources/Example Data/Aviation Ontology/airlines`", branch='master')
df.shape
# Out[2]: (17, 10)
```
## Quickstart
With pip:
```shell
pip install foundry-dev-tools
```
With conda or mamba on the conda-forge channel:
```shell
conda install -c conda-forge foundry-dev-tools
```
[Further instructions](https://emdgroup.github.io/foundry-dev-tools/getting_started/installation.html) can be found in our documentation.
## Why did we build this?
- Local development experience in your favorite IDE (PyCharm, VSCode, ...)
- Access to modern developer tools and workflows such as ruff, mypy, pylint, black, pre-commit hooks etc.
- Quicker turnaround time when making changes
- Debug, change code and run in a matter of seconds instead of minutes
- No accidental or auto commits
- Keep your git history clean
# License
Copyright (c) 2024 Merck KGaA, Darmstadt, Germany
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The full text of the license can be found in the [LICENSE](https://github.com/emdgroup/foundry-dev-tools/blob/main/LICENSE) file in the repository root directory.
| text/markdown | null | Nicolas Renkamp <nicolas.renkamp@merckgroup.com>, Jonas Wunderlich <jonas.wunderlich@merckgroup.com> | null | null | Apache-2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"foundry-dev-tools",
"typing_extensions; python_version < \"3.11\"",
"pyspark>=3.0.0",
"numpy<2",
"fs"
] | [] | [] | [] | [
"Homepage, https://emdgroup.github.io/foundry-dev-tools",
"Documentation, https://emdgroup.github.io/foundry-dev-tools",
"Source, https://github.com/emdgroup/foundry-dev-tools",
"Tracker, https://github.com/emdgroup/foundry-dev-tools/issues",
"Changelog, https://emdgroup.github.io/foundry-dev-tools/changelog.html"
] | pdm/2.26.6 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T08:22:57.975313 | foundry_dev_tools_transforms-2.1.23.tar.gz | 15,450 | 0d/63/9307b7d0536831f74a5e0cd25adc2e17b18cbe26177eaf1568e3fd63e804/foundry_dev_tools_transforms-2.1.23.tar.gz | source | sdist | null | false | e71d9c188de1b96af052162c8344287b | 12a5a1038c313bdf2378fbd056d521fc7d9cd53c88012a6eb6742849abda8787 | 0d639307b7d0536831f74a5e0cd25adc2e17b18cbe26177eaf1568e3fd63e804 | null | [] | 252 |
2.1 | foundry-dev-tools | 2.1.23 | Seamlessly run your Palantir Foundry Repository transforms code on your local machine. | <div align="center">
<br/>
<a href="https://github.com/emdgroup/foundry-dev-tools/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/emdgroup/foundry-dev-tools/ci.yml?style=flat-square"/></a>
<a href="https://github.com/emdgroup/foundry-dev-tools/actions/workflows/docs.yml"><img src="https://img.shields.io/github/actions/workflow/status/emdgroup/foundry-dev-tools/docs.yml?style=flat-square"/></a>
<a href="https://pypi.org/project/foundry-dev-tools/"><img src="https://img.shields.io/pypi/pyversions/foundry-dev-tools?style=flat-square&label=Supported%20Python%20versions&color=%23ffb86c"/></a>
<a href="https://pypi.org/project/foundry-dev-tools/"><img src="https://img.shields.io/pypi/v/foundry-dev-tools.svg?style=flat-square&label=PyPI%20version&color=%23bd93f9"/></a>
<a href="https://anaconda.org/conda-forge/foundry-dev-tools"><img src="https://img.shields.io/conda/vn/conda-forge/foundry-dev-tools.svg?style=flat-square&label=Conda%20Forge%20Version&color=%23bd93f9" alt="Conda Version"/></a>
<a href="https://pypi.org/project/foundry-dev-tools/"><img src="https://img.shields.io/pypi/dm/foundry-dev-tools?label=PyPI%20Downloads&style=flat-square&color=%236272a4"/></a>
<a href="https://anaconda.org/conda-forge/foundry-dev-tools"><img src="https://img.shields.io/conda/dn/conda-forge/foundry-dev-tools.svg?style=flat-square&label=Conda%20Forge%20Downloads&color=%236272a4" alt="Conda Downloads"/></a>
<a href="https://github.com/emdgroup/foundry-dev-tools/issues"><img src="https://img.shields.io/github/issues/emdgroup/foundry-dev-tools?style=flat-square&color=%23ff79c6"/></a>
<a href="https://github.com/emdgroup/foundry-dev-tools/pulls"><img src="https://img.shields.io/github/issues-pr/emdgroup/foundry-dev-tools?style=flat-square&color=%23ff79c6"/></a>
<a href="http://www.apache.org/licenses/LICENSE-2.0"><img src="https://shields.io/badge/License-Apache%202.0-green.svg?style=flat-square&color=%234c1"/></a>
<p><a href="https://emdgroup.github.io/foundry-dev-tools">Documentation</a></p>
<a href="https://emdgroup.github.io/foundry-dev-tools/getting_started/index.html">Getting Started / Usage</a>
•
<a href="https://emdgroup.github.io/foundry-dev-tools/examples/api.html">Examples</a>
•
<a href="https://emdgroup.github.io/foundry-dev-tools/dev/contribute.html">Development/Contribute</a>
</div>
# Foundry DevTools
Seamlessly run your Palantir Foundry Repository transforms code and more on your local machine.
Foundry DevTools is a set of useful libraries to interact with the Foundry APIs.
It consists of two parts:
- The [transforms](https://www.palantir.com/docs/foundry/transforms-python/transforms-python-api/) implementation:
  an implementation of the Foundry `transforms` package that internally uses the `CachedFoundryClient`,
  letting you seamlessly run your Palantir Foundry Code Repository transforms code on your local machine.
  Foundry DevTools does not cover all of Foundry's features; more on this [here](https://emdgroup.github.io/foundry-dev-tools/dev/architecture.html#known-limitations-contributions-welcome).
- API clients:
  clients for many Foundry APIs such as Compass, Catalog, and Foundry SQL Server.
For example:
```python
from foundry_dev_tools import FoundryContext
# the context, that contains your credentials and configuration
ctx = FoundryContext()
df = ctx.foundry_sql_server.query_foundry_sql("SELECT * FROM `/Global/Foundry Training and Resources/Example Data/Aviation Ontology/airlines`", branch='master')
df.shape
# Out[2]: (17, 10)
```
## Quickstart
With pip:
```shell
pip install foundry-dev-tools
```
With conda or mamba on the conda-forge channel:
```shell
conda install -c conda-forge foundry-dev-tools
```
[Further instructions](https://emdgroup.github.io/foundry-dev-tools/getting_started/installation.html) can be found in our documentation.
## Why did we build this?
- Local development experience in your favorite IDE (PyCharm, VSCode, ...)
- Access to modern developer tools and workflows such as ruff, mypy, pylint, black, pre-commit hooks etc.
- Quicker turnaround time when making changes
- Debug, change code and run in a matter of seconds instead of minutes
- No accidental or auto commits
- Keep your git history clean
# License
Copyright (c) 2024 Merck KGaA, Darmstadt, Germany
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The full text of the license can be found in the [LICENSE](https://github.com/emdgroup/foundry-dev-tools/blob/main/LICENSE) file in the repository root directory.
| text/markdown | null | Nicolas Renkamp <nicolas.renkamp@merckgroup.com>, Jonas Wunderlich <jonas.wunderlich@merckgroup.com> | null | null | Apache-2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests",
"palantir-oauth-client",
"platformdirs",
"tomli; python_version < \"3.11\"",
"typing_extensions; python_version < \"3.11\"",
"click",
"inquirer",
"websockets",
"rich",
"packaging",
"tomli_w",
"aiobotocore[boto3]; extra == \"s3\"",
"foundry-platform-sdk; extra == \"public\"",
"foundry-dev-tools-transforms; extra == \"full\"",
"foundry-dev-tools[s3]; extra == \"full\"",
"foundry-dev-tools[public]; extra == \"full\""
] | [] | [] | [] | [
"Homepage, https://emdgroup.github.io/foundry-dev-tools",
"Documentation, https://emdgroup.github.io/foundry-dev-tools",
"Source, https://github.com/emdgroup/foundry-dev-tools",
"Tracker, https://github.com/emdgroup/foundry-dev-tools/issues",
"Changelog, https://emdgroup.github.io/foundry-dev-tools/changelog.html"
] | pdm/2.26.6 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T08:22:53.545355 | foundry_dev_tools-2.1.23.tar.gz | 116,425 | 71/36/f8692ab749a51235310bedb279dbac6f00eb6a9009e00ca2c4d868a47e5a/foundry_dev_tools-2.1.23.tar.gz | source | sdist | null | false | f4a9265ebf03ea93eafb170f1c0c83a7 | d746e5a991c6ff445e39ab141d8e376a64651f7cad57e970e355aff8fd7291f8 | 7136f8692ab749a51235310bedb279dbac6f00eb6a9009e00ca2c4d868a47e5a | null | [] | 505 |
2.4 | docling-pp-doc-layout | 0.2.0 | A Docling plugin for PaddlePaddle PP-DocLayout-V3 model document layout detection. | # docling-pp-doc-layout
A [Docling](https://github.com/docling-project/docling) plugin that provides document layout detection using the PaddlePaddle PP-DocLayout-V3 model.
This plugin integrates seamlessly with Docling's standard pipeline, replacing the default layout models with [PP-DocLayout-V3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3). It enables high-accuracy, instance-segmentation-based layout analysis with polygon bounding-box support, processing pages in optimized batches for enterprise scalability.
---
<p align="center">
<a href="https://github.com/DCC-BS/docling-pp-doc-layout">GitHub</a>
|
<a href="https://pypi.org/project/docling-pp-doc-layout/">PyPI</a>
</p>
---
[](https://pypi.org/project/docling-pp-doc-layout/)
[](https://pypi.org/project/docling-pp-doc-layout/)
[](https://github.com/DCC-BS/docling-pp-doc-layout/blob/main/LICENSE)
[](https://github.com/DCC-BS/docling-pp-doc-layout/actions/workflows/main.yml)
[](https://github.com/astral-sh/ruff)
[](https://codecov.io/gh/DCC-BS/docling-pp-doc-layout)
## Overview
`docling-pp-doc-layout` provides the `PPDocLayoutV3Model` layout engine for Docling. It automatically registers itself into Docling's plugin system upon installation. When configured in a Docling `DocumentConverter`, it intercepts page images, batches them, and infers document structural elements (text, tables, figures, headers, etc.) using HuggingFace's transformers library.
Key Features:
- **High Accuracy Layout Parsing**: Uses the RT-DETR instance segmentation framework.
- **Polygon Conversion**: Gracefully flattens complex polygon masks to Docling-compatible bounding boxes.
- **Enterprise Scalability**: Configurable batch sizing avoids out-of-memory (OOM) errors on large documents.
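The polygon-conversion feature above amounts to taking the axis-aligned extent of a mask's vertices. A minimal plain-Python sketch of that idea (illustrative only; the function name and signature are not this plugin's actual API):

```python
def polygon_to_bbox(polygon):
    """Flatten a polygon (list of (x, y) vertices) into an
    axis-aligned bounding box (left, top, right, bottom).

    Illustrative sketch only -- not the plugin's implementation.
    """
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return (min(xs), min(ys), max(xs), max(ys))

# A tilted quadrilateral collapses to its enclosing rectangle.
print(polygon_to_bbox([(10, 5), (40, 8), (38, 30), (12, 27)]))  # (10, 5, 40, 30)
```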
## Architecture & Integration
When you install this package, Docling discovers it automatically through standard Python package entry points.
```mermaid
flowchart TD
A[Docling DocumentConverter] --> B[PdfPipeline]
subgraph Plugin System
C[Docling PluginManager] -.->|Discovers via entry-points| D[docling-pp-doc-layout]
D -.->|Registers| E[PPDocLayoutV3Model]
end
B -->|Initialization| C
B -->|Predict Layout Pages| E
E -->|Batched Tensors| F[HuggingFace AutoModel]
F -->|Raw Polygons / Boxes| E
E -->|Post-processed Clusters & BoundingBoxes| B
```
## Requirements
- Python 3.12+
- `docling>=2.73`
- `transformers>=5.1.0`
- `torch`
## Installation
```bash
# with uv (recommended)
uv add docling-pp-doc-layout
# with pip
pip install docling-pp-doc-layout
```
## Usage
Using `docling-pp-doc-layout` is exactly like configuring standard Docling options.
```python
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling.datamodel.pipeline_options import PdfPipelineOptions
from docling_pp_doc_layout.options import PPDocLayoutV3Options
# 1. Define Pipeline Options
pipeline_options = PdfPipelineOptions()
# 2. Configure our custom PPDocLayoutV3Options
pipeline_options.layout_options = PPDocLayoutV3Options(
batch_size=8, # Tweak for GPU VRAM usage
confidence_threshold=0.5, # Filter low-confidence detections
model_name="PaddlePaddle/PP-DocLayoutV3_safetensors" # Target HuggingFace model repo
)
# 3. Create the converter
converter = DocumentConverter(
format_options={
"pdf": PdfFormatOption(pipeline_options=pipeline_options)
}
)
# 4. Convert Document
result = converter.convert("path/to/your/document.pdf")
print("Converted elements:", len(result.document.elements))
```
## Configuration Options
The `PPDocLayoutV3Options` dataclass gives you full control over the engine:
| Parameter | Type | Default | Description |
|-------------------------|---------|---------|-------------|
| `batch_size` | `int` | 8 | How many pages to process per single step. Decrease to lower memory usage; Increase to speed up processing of large documents. |
| `confidence_threshold` | `float` | 0.5 | The minimum confidence score (0.0 - 1.0) required to keep a layout detection cluster. |
| `model_name` | `str` | `"PaddlePaddle/PP-DocLayoutV3_safetensors"` | HuggingFace repository ID. Allows overriding if you host your local copy or a fine-tuned version. |
## Development
If you wish to contribute or modify the plugin locally:
```bash
git clone https://github.com/DCC-BS/docling-pp-doc-layout.git
cd docling-pp-doc-layout
# Install dependencies and pre-commit hooks
make install
# Run checks (ruff, ty) and tests (pytest)
make check
make test
```
## License
[MIT](LICENSE) © DCC Data Competence Center
| text/markdown | null | Yanick Schraner <yanick.schraner@bs.ch>, Tobias Bollinger <tobias.bollinger@bs.ch> | null | null | MIT | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"docling>=2.73",
"torch",
"transformers>=5.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/DCC-BS/docling-pp-doc-layout",
"Repository, https://github.com/DCC-BS/docling-pp-doc-layout",
"Issues, https://github.com/DCC-BS/docling-pp-doc-layout/issues",
"Changelog, https://github.com/DCC-BS/docling-pp-doc-layout/releases"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T08:22:44.494694 | docling_pp_doc_layout-0.2.0.tar.gz | 155,996 | 40/b3/e79beb7685255b2df31239eb78f3e03c331b4914f920fe3f11f70bd63ee4/docling_pp_doc_layout-0.2.0.tar.gz | source | sdist | null | false | baa2db000baf7dbb49bffd65a02c599f | e82f394cfd80dc8015067cc8fe713e4a77b05465698469bac7bd1c4f77d5b82c | 40b3e79beb7685255b2df31239eb78f3e03c331b4914f920fe3f11f70bd63ee4 | null | [
"LICENSE"
] | 238 |
2.4 | daily-cli-tool | 1.0.0 | Minimalist CLI for engineers to log daily work | # daily
Minimalist CLI for engineers to log daily work. Perfect for daily standups.
[](https://pypi.org/project/daily-cli-tool/)
[](https://pypi.org/project/daily-cli-tool/)
## Features
- **Fast capture**: Log work in under 10 seconds
- **Markdown-based**: Human-readable files, Git-friendly
- **Tag support**: Filter entries by project or topic
- **Cheat sheet**: Quick summary for daily standups
- **Interactive search**: Browse and edit past notes with fzf
- **No database**: Plain files in `~/.daily/dailies/`
## Installation
### Using pipx (recommended)
```bash
pipx install daily-cli-tool
```
That's it! The `daily` command is now available globally.
### From source (for development)
```bash
# Clone the repository
git clone https://github.com/creusvictor/daily-cli.git
cd daily-cli
# Install with pipx
pipx install .
# Or install with uv
uv sync
```
### Using uv (for development)
```bash
# Clone the repository
git clone https://github.com/creusvictor/daily-cli.git
cd daily-cli
# Install
uv sync
# Run
uv run daily --help
```
### Using pip
```bash
pip install daily-cli-tool
```
### From source
```bash
git clone https://github.com/creusvictor/daily-cli.git
cd daily-cli
pip install -e .
```
## Quick Start
```bash
# Log completed work
daily did "Fixed CI/CD pipeline" --tags cicd,infra
# Plan today's work
daily plan "Review pending PRs" --tags code-review
# Log a blocker
daily block "Waiting for AWS access" --tags aws
# Log a meeting
daily meeting "Sprint planning" --tags team
# Show cheat sheet for standup
daily cheat
```
## Commands
| Command | Description | Section |
|---------|-------------|---------|
| `daily did "text"` | Log completed work | Done |
| `daily plan "text"` | Plan work for today | To Do |
| `daily block "text"` | Log a blocker | Blockers |
| `daily meeting "text"` | Log a meeting | Meetings |
| `daily cheat` | Show standup cheat sheet | - |
| `daily search` | Search and open daily files | - |
All commands support `--tags` or `-t` for tagging:
```bash
daily did "Deploy to production" --tags deploy,aws
daily did "Code review" -t review
```
## Cheat Sheet
The `daily cheat` command generates a clean summary for standups:
```
DONE
- Fixed CI/CD pipeline
- Deployed new feature
MEETINGS
- Sprint planning
- 1:1 with manager
TO DO
- Review pending PRs
- Write documentation
BLOCKERS
- Waiting for AWS access
```
### Options
```bash
# Filter by tags
daily cheat --tags aws
# Show today's file instead of yesterday's
daily cheat --today
# Plain text output (no colors)
daily cheat --plain
```
### Weekend Logic
By default, `daily cheat` skips weekends when looking for "yesterday's" file:
- On **Monday**, it shows **Friday's** entries
- On **Saturday** or **Sunday**, it shows **Friday's** entries
This matches typical standup workflows where you report on the last workday.
```bash
# Override: always skip weekends
daily cheat --workdays
# Override: use literal yesterday (even if weekend)
daily cheat --no-workdays
```
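The skip-weekends lookup described above can be sketched in a few lines of plain Python (a hypothetical illustration of the documented behavior, not the tool's actual source):

```python
from datetime import date, timedelta

def last_workday(today, skip_weekends=True):
    """Return the date whose entries `daily cheat` would show.

    Hypothetical sketch, not daily's source: Monday, Saturday, and
    Sunday all resolve to the previous Friday when skip_weekends is
    on; otherwise literal yesterday is used.
    """
    previous = today - timedelta(days=1)
    if skip_weekends:
        while previous.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
            previous -= timedelta(days=1)
    return previous

print(last_workday(date(2026, 1, 26)))  # Monday -> Friday 2026-01-23
```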
Configure the default in `~/.daily/config.toml`:
```toml
# Set to false to always use literal yesterday
skip_weekends = true
```
## Search
The `daily search` command provides an interactive fuzzy finder (fzf) to browse and edit your daily notes:
```bash
# Search all daily files
daily search
# Filter by tags (only show files with these tags)
daily search --tags aws,deploy
daily search -t projectA
```
**Features**:
- **Interactive selection**: Use arrow keys or fuzzy search to find notes
- **Preview panel**: See the content of each note before opening
- **Opens in $EDITOR**: Selected file opens in your preferred editor (vim, nano, etc.)
- **Tag filtering**: Only show files containing specific tags
- **Sorted by date**: Newest files appear first
- **Tag display**: Each file shows all tags used (e.g., `2026-02-20 (Friday) - tags: aws,deploy`)
**Requirements**: This command requires `fzf` to be installed:
```bash
# Ubuntu/Debian
sudo apt-get install fzf
# macOS
brew install fzf
# Arch Linux
sudo pacman -S fzf
```
## File Structure
Daily notes are stored in `~/.daily/dailies/` with format `YYYY-MM-DD-daily.md`:
```markdown
---
type: daily
date: 2026-01-27
---
## ✅ Done
- Fixed CI/CD pipeline #tags: cicd,infra
## ▶️ To Do
- Review pending PRs
## 🚧 Blockers
- Waiting for AWS access #tags: aws
## 🗓 Meetings
- Sprint planning #tags: team
## 🧠 Quick Notes
```
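The `#tags:` suffix shown in the file format above is simple enough to parse with a one-line regular expression. A hypothetical sketch of that parsing (not daily's actual implementation):

```python
import re

def parse_tags(line):
    """Extract the tag list from a daily entry line such as
    '- Fixed CI/CD pipeline #tags: cicd,infra'.

    Hypothetical sketch of the documented file format,
    not daily's actual source.
    """
    m = re.search(r"#tags:\s*(\S+)", line)
    return m.group(1).split(",") if m else []

print(parse_tags("- Fixed CI/CD pipeline #tags: cicd,infra"))  # ['cicd', 'infra']
```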
## Configuration
Create `~/.daily/config.toml` to customize behavior:
```toml
# Directory where daily notes are stored
dailies_dir = "~/.daily/dailies"
# Skip weekends in daily cheat (Monday shows Friday)
skip_weekends = true
```
### Custom directory
Set `DAILY_DIR` environment variable (takes priority over config file):
```bash
export DAILY_DIR=/path/to/my/dailies
```
Priority: Environment variable > Config file > Default (`~/.daily/dailies`)
## Development
```bash
# Install dev dependencies
uv sync
# Run tests
uv run pytest
# Run tests with coverage
uv run pytest --cov=daily
# Format code
uv run black daily tests
# Lint
uv run ruff check daily tests
# Type check
uv run mypy daily
```
## FAQ
**Q: Where are my notes stored?**
A: In `~/.daily/dailies/` by default. Each day creates a new file like `2026-01-27-daily.md`.
**Q: Can I edit files manually?**
A: Yes! Files are plain Markdown. Manual edits are preserved.
**Q: Does it work with Obsidian?**
A: Yes! Point Obsidian to your dailies directory for a nice viewing experience.
**Q: Can I use it with Git?**
A: Absolutely. The files are designed to be Git-friendly.
**Q: What if I forget to log something?**
A: You can edit the Markdown file directly, or use the API with a specific date.
**Q: Why does `daily cheat` show Friday's entries on Monday?**
A: By default, weekends are skipped so Monday's standup shows Friday's work. Use `--no-workdays` to show literal yesterday, or set `skip_weekends = false` in config.
## License
MIT
| text/markdown | null | Victor <victorcg98@gmail.com> | null | Victor <victorcg98@gmail.com> | MIT | cli, daily, engineering, markdown, notes, productivity, standup | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"iterfzf>=1.4.0",
"rich>=13.0.0",
"typer>=0.12.0"
] | [] | [] | [] | [
"Homepage, https://github.com/creusvictor/daily-cli",
"Documentation, https://github.com/creusvictor/daily-cli#readme",
"Repository, https://github.com/creusvictor/daily-cli",
"Issues, https://github.com/creusvictor/daily-cli/issues",
"Changelog, https://github.com/creusvictor/daily-cli/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:22:32.009088 | daily_cli_tool-1.0.0.tar.gz | 57,755 | dc/27/8e0ac726eabd7479cef1fa6e86c1c1ab883edf8d7004906d016b986071fb/daily_cli_tool-1.0.0.tar.gz | source | sdist | null | false | 3cadf01839e6e168624095869fc6abd4 | 1392ab51dfdd52bbd8ced4bca390291285470c3cbfdcead311ca69e0907dad27 | dc278e0ac726eabd7479cef1fa6e86c1c1ab883edf8d7004906d016b986071fb | null | [
"LICENSE"
] | 254 |
2.4 | sqlmelt | 0.1.2 | This library is not that useful. Perhaps someday it will become great. | # Sqlmelt
*This library is not that useful. Perhaps someday it will become great.*
![Sqlmelt](https://i.yapx.cc/X7gXT.png)
## What is it?
**Sqlmelt** is an alternative to the `melt` function from **Pandas**, for DBMSs where no such built-in is available.
At the moment, the library contains only one function, for **Vertica**.
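For readers unfamiliar with melt: the operation unpivots wide-format columns into long-format `(variable, value)` rows. A plain-Python sketch of the idea (illustrative only; this is not Sqlmelt's API):

```python
def melt(rows, id_vars, value_vars):
    """Unpivot wide-format rows (dicts) into long format,
    mirroring what Pandas' melt does.

    Illustrative only -- not Sqlmelt's actual API.
    """
    long_rows = []
    for row in rows:
        for col in value_vars:
            rec = {k: row[k] for k in id_vars}
            rec["variable"] = col
            rec["value"] = row[col]
            long_rows.append(rec)
    return long_rows

wide = [{"id": 1, "q1": 10, "q2": 20}]
print(melt(wide, id_vars=["id"], value_vars=["q1", "q2"]))
# [{'id': 1, 'variable': 'q1', 'value': 10}, {'id': 1, 'variable': 'q2', 'value': 20}]
```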
## Where to get it
```bash
# from PyPI
pip install sqlmelt
``` | text/markdown | null | ardavydovskiy <davydovskij.art@yandex.ru> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.0.0 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.4 | 2026-02-20T08:22:12.683223 | sqlmelt-0.1.2.tar.gz | 3,487 | 04/2d/27e410d04af3102ee21b95d52dbce57ffc8e95aa9a84e1045bef212a79ce/sqlmelt-0.1.2.tar.gz | source | sdist | null | false | ae39ae1be621697fbca72c38a799f430 | e097e0cc9a633736deb2eb71009f0fdf44096783f01d0a7b42a6ca892a09e048 | 042d27e410d04af3102ee21b95d52dbce57ffc8e95aa9a84e1045bef212a79ce | null | [
"LICENSE.txt"
] | 245 |
2.4 | xccy | 0.3.2 | Python SDK for XCCY Protocol - Interest Rate Swap AMM on Polygon and Arbitrum | # xccy-py
Python SDK for the **XCCY Protocol** — an Interest Rate Swap AMM on Polygon and Arbitrum.
```
┌─────────────────────────────────────────────────────────────────────────┐
│ xccy-py SDK │
├─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────────┤
│ account │ margin │ trading │position │ oracle │ trades │ stream │
└────┬────┴────┬────┴────┬────┴────┬────┴────┬────┴────┬────┴──────┬──────┘
│ │ │ │ │ │ │
▼ ▼ ▼ ▼ ▼ ▼ ▼
┌─────────────────────────────────────────────────────────────────────────┐
│ On-Chain (Polygon / Arbitrum) │ Backend │ WebSocket RPC │
│ CollateralEngine │ VAMMManager │ Oracles│ api.xccy│ wss://... │
└─────────────────────────────────────────────────────────────────────────┘
```
**Supported chains**: Polygon (137), Arbitrum One (42161)
## Installation
```bash
pip install xccy
```
## Quick Start
```python
from xccy import XccyClient
from xccy.tokens import PolygonTokens
# Initialize client
client = XccyClient(
rpc_url="https://polygon-rpc.com",
private_key="0x...", # Optional, for signing transactions
)
# Create account identifier
account = client.account.create_account_id(
owner="0xYourWallet...",
account_id=0,
isolated_margin_token=None # Cross-margin mode
)
# Deposit margin
tx = client.margin.deposit(
account=account,
token=PolygonTokens.USDC,
amount=1000 * 10**6 # 1000 USDC
)
print(f"Deposit tx: {tx.transactionHash.hex()}")
# Check health
obligations = client.position.get_obligations(account)
margin_value = client.margin.get_total_value_usd(account)
health = margin_value / obligations if obligations > 0 else float('inf')
print(f"Health factor: {health:.2f}")
```
### Arbitrum
```python
from xccy import XccyClient
from xccy.tokens import ArbitrumTokens
client = XccyClient(
rpc_url="https://arb1.arbitrum.io/rpc",
network="arbitrum",
private_key="0x...",
)
account = client.account.create_account_id(
owner="0xYourWallet...",
account_id=0,
isolated_margin_token=None,
)
tx = client.margin.deposit(
account=account,
token=ArbitrumTokens.USDC,
amount=1000 * 10**6,
)
```
## Features
### Multi-Account System
One wallet can own multiple sub-accounts with independent positions and margin:
```python
# Cross-margin account
main = client.account.create_account_id(owner="0x...", account_id=0)
# Isolated margin account (USDC only)
isolated = client.account.create_account_id(
owner="0x...",
account_id=1,
isolated_margin_token=PolygonTokens.USDC
)
```
### Trading (Swap & LP)
```python
from xccy.math import fixed_rate_to_tick, notional_to_liquidity
# Execute a swap (pay fixed rate)
result = client.trading.swap(
pool_key=pool,
account=account,
notional=10_000 * 10**6,
is_fixed_taker=True,
tick_lower=-6930,
tick_upper=-6900,
)
# Provide liquidity with notional amount
tick_lower = fixed_rate_to_tick(0.06) # 6%
tick_upper = fixed_rate_to_tick(0.04) # 4%
liquidity = notional_to_liquidity(10_000 * 10**6, tick_lower, tick_upper)
client.trading.mint(pool, account, tick_lower, tick_upper, liquidity)
```
### Oracle Data
```python
# Get USD price
price = client.oracle.get_price_usd(PolygonTokens.USDC)
# Get current APR
apr = client.oracle.get_apr(PolygonTokens.A_USDC)
# Get rate between timestamps
rate = client.oracle.get_rate_from_to(
asset=PolygonTokens.A_USDC,
from_timestamp=1704067200,
to_timestamp=1735689600
)
```
### Trade Tracking
```python
# Historical trades from backend
trades, cursor = client.trades.get_pool_trades(pool_id, limit=50)
for trade in trades:
print(f"{trade.timestamp}: {trade.notional} @ {trade.fixed_rate:.2%}")
# User trades
my_trades = client.trades.get_user_trades(pool_id, my_address)
```
### Live Streaming (WebSocket)
```python
import asyncio
from xccy import XccyClient, TradeEvent
client = XccyClient(
rpc_url="https://polygon-rpc.com",
ws_rpc_url="wss://polygon-mainnet.g.alchemy.com/v2/KEY",
)
# Async iterator (recommended for trading strategies)
async def trading_strategy():
async for event in client.stream.events(event_types=["Swap"]):
print(f"New swap: {event.variable_token_delta}")
await react_to_trade(event)
# Wait for specific event
async def wait_for_my_fill():
event = await client.stream.next_event(
user_address=my_address,
event_types=["Swap"],
timeout=60.0,
)
print(f"Filled: {event.tx_hash}")
asyncio.run(trading_strategy())
```
### Math Utilities
```python
from xccy.math import (
tick_to_fixed_rate,
fixed_rate_to_tick,
liquidity_to_notional,
notional_to_liquidity,
wad_to_decimal,
)
# Tick ↔ Rate conversions
rate = tick_to_fixed_rate(-6930) # ~5% APR
tick = fixed_rate_to_tick(0.05)
# Liquidity ↔ Notional
liquidity = notional_to_liquidity(10_000 * 10**6, tick_lower, tick_upper)
notional = liquidity_to_notional(liquidity, tick_lower, tick_upper)
```
## Documentation
Full documentation available at [docs.xccy.finance](https://docs.xccy.finance)
- [Quick Start Guide](docs/quickstart.md)
- [Account System](docs/accounts.md)
- [Margin Operations](docs/margin.md)
- [Trading Guide](docs/trading.md)
- [Trade Tracking](docs/trades.md)
- [Health Monitoring](docs/health.md)
- [API Reference](docs/api-reference.md)
## Development
```bash
# Clone and install
git clone https://github.com/xccy-finance/xccy-sdk.git
cd xccy-sdk
pip install -e ".[dev,docs]"
# Run tests
pytest
# Run linter
ruff check xccy/
# Type check
mypy xccy/
# Build docs
mkdocs serve
```
## License
MIT License
| text/markdown | XCCY Team | null | null | null | null | amm, arbitrum, defi, ethereum, interest-rate-swap, polygon, web3 | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"eth-abi>=4.0.0",
"eth-typing>=3.0.0",
"httpx>=0.25.0",
"pydantic>=2.0.0",
"web3[websockets]>=6.0.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mkdocs-material>=9.0.0; extra == \"docs\"",
"mkdocs>=1.5.0; extra == \"docs\"",
"mkdocstrings[python]>=0.24.0; extra == \"docs\"",
"pymdown-extensions>=10.0.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://xccy.finance",
"Documentation, https://docs.xccy.finance",
"Repository, https://github.com/xccy-finance/xccy-sdk"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T08:21:29.257903 | xccy-0.3.2.tar.gz | 81,488 | 0d/2f/340fbd05ea5cfe5e749cfb8a39a795e33937d75b1d6508d50946911e2e61/xccy-0.3.2.tar.gz | source | sdist | null | false | 54423e0334c325f19a1f1abb094bbb9e | 586a3da3dfe71fdd1057047758fb320dfb7f7910751769108a4cefe98be09530 | 0d2f340fbd05ea5cfe5e749cfb8a39a795e33937d75b1d6508d50946911e2e61 | MIT | [] | 239 |
2.4 | richforms | 0.4.0 | Turn Pydantic models into interactive Rich terminal forms — with built-in Click & Typer integrations. | # richforms
`richforms` turns Pydantic models into rich interactive terminal forms.
## Installation
You can add `richforms` to your project with `uv`.
```bash
uv add richforms
```
## Usage
You can start from Python in-process or from the bundled CLI.
### Python API
```python
from richforms.example.model import Metadata
from richforms import fill
metadata = fill(Metadata)
print(metadata.model_dump_json(indent=2))
```
### CLI
```bash
richforms fill richforms.example.model:Metadata --output project-metadata.json
```
### Integrations
You can integrate `richforms` directly into Click and Typer option callbacks.
```python
import typer
from richforms.integrations.typer import form_callback
from richforms.example.model import Metadata
app = typer.Typer()
@app.command()
def release(
metadata: Metadata = typer.Option(
None,
callback=form_callback(Metadata),
),
) -> None:
print(metadata.model_dump())
```
## Documentation
For more information, see the [documentation](https://shinybrar.github.io/richforms/).
| text/markdown | null | Shiny Brar <shiny.brar@nrc-cnrc.gc.ca> | null | Shiny Brar <shiny.brar@nrc-cnrc.gc.ca> | null | cli, click, forms, interactive, prompt, pydantic, rich, terminal, tui, typer, validation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Framework :: Pydantic",
"Framework :: Pydantic :: 2",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: User Interfaces",
"Topic :: Terminals",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0",
"pyyaml>=6.0",
"rich>=14.0",
"typer>=0.20"
] | [] | [] | [] | [
"Homepage, https://github.com/shinybrar/richforms",
"Documentation, https://shinybrar.github.io/richforms",
"Repository, https://github.com/shinybrar/richforms",
"Issues, https://github.com/shinybrar/richforms/issues",
"Changelog, https://github.com/shinybrar/richforms/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T08:21:16.059670 | richforms-0.4.0.tar.gz | 77,053 | b2/aa/f23f1e44175e51bc977375c506e27a4b6c8ecadb9b5f9f73c64f595b9cce/richforms-0.4.0.tar.gz | source | sdist | null | false | a98a5877537e0d70331d35562d3c0fe2 | 6901522b4505887ccbc542d8d0ae48e50b8c517691a3a17e9663e6b101797c0e | b2aaf23f1e44175e51bc977375c506e27a4b6c8ecadb9b5f9f73c64f595b9cce | MIT | [
"LICENSE"
] | 252 |