metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | candyfloss | 0.0.7 | An ergonomic interface over GStreamer |
# Candyfloss
Candyfloss is an ergonomic interface to GStreamer. It lets you build and run pipelines to decode and encode video files, extract video frames for use in Python code, map Python functions over video frames, and more.
## Installation
Candyfloss can be installed from source by running `setup.py` in the usual way. It is also available on PyPI as [candyfloss](https://pypi.org/project/candyfloss/).
Candyfloss requires GStreamer to be installed. Most desktop Linux distributions ship it already. If you aren't on Linux, or don't have it installed, see the GStreamer installation docs [here](https://gstreamer.freedesktop.org/documentation/installing/index.html?gi-language=c). In addition to the methods listed there, on macOS you can install it with Homebrew: `brew install gstreamer`.
## Examples
```python
# scale a video file to 300x300
from candyfloss import Pipeline
with Pipeline() as p:
    inp_file = p >> ['uridecodebin', {'uri':'file:///tmp/inp.mp4'}]
    scaled_video = inp_file >> 'videoconvert' >> 'videoscale' >> ('video/x-raw', {'width':300,'height':300})
    mux = p >> 'mp4mux'
    scaled_video >> 'x264enc' >> mux
    inp_file >> 'avenc_aac' >> mux
    mux >> ['filesink', {'location':'output.mp4'}]
```
```python
# iterate over frames from a video file
from candyfloss import Pipeline
for frame in Pipeline(lambda p: p >> ['filesrc', {'location':'input.webm'}] >> 'decodebin'):
    frame.save('frame.jpeg')  # frame is a PIL image
```
```python
# display your webcam with the classic emboss effect applied
from candyfloss import Pipeline
from PIL import ImageFilter
with Pipeline() as p:
    p >> 'autovideosrc' >> p.map(lambda frame: frame.filter(ImageFilter.EMBOSS)) >> 'autovideosink'
```
```python
# display random noise frames in a window
from candyfloss import Pipeline
from PIL import Image
import numpy as np
def random_frames(shape):
    rgb_shape = shape + (3,)
    while True:
        mat = np.random.randint(0, 256, dtype=np.uint8, size=rgb_shape)
        yield Image.fromarray(mat)

with Pipeline() as p:
    p.from_iter(random_frames((300, 300))) \
        >> ('video/x-raw', {'width':300, 'height':300}) >> 'autovideosink'
```
## Syntax
### Pipelines
Candyfloss runs **pipelines**, created by constructing a `candyfloss.Pipeline` object. A `Pipeline` can be used in two ways: as a context manager or as an iterator. Used as a context manager, it lets you construct a **pipeline** inside the `with` block; that **pipeline** is then executed when the context exits.
***Example***
```python
from candyfloss import Pipeline
with Pipeline() as p:
    p >> 'videotestsrc' >> 'autovideosink'
```
When the **pipeline** is used as an iterator, it allows you to iterate over the frames produced by the **pipeline** (as PIL Image objects). To construct the **pipeline**, pass a function to the `Pipeline` constructor that builds and returns it.
***Example***
```python
from candyfloss import Pipeline
for frame in Pipeline(lambda p: p >> 'videotestsrc'):
    frame.save('test_frame.png')
```
### Elements
A **pipeline** is a graph of **elements** connected together. Some **elements** generate data, and that data flows out into the other **elements** they are connected to. To get a list of the different **elements** that are available on your system, run the `gst-inspect-1.0` command. To get documentation for a specific **element**, pass its name to the `gst-inspect-1.0` command (eg: `gst-inspect-1.0 tee`).
**Elements** are constructed with the `>>` operator on the builder object, which is either returned by the context manager (i.e. in a `with` statement) or passed as an argument to the supplied constructor function (i.e. in `for frame in Pipeline(lambda p: p >> 'videotestsrc'):`).
The syntax for constructing **elements** is:
1. A string literal (eg: `'videotestsrc'`) constructs an element with that name that takes no parameters.
2. A list literal (eg: `['videotestsrc', {'pattern':18}]`) constructs an element with that name and sets parameters from the supplied dict.
3. A tuple literal (eg: `('video/x-raw', {'width':100, 'height':100})`) constructs a **caps filter**. **Caps** are GStreamer's types, and caps filters are type constraints. The first item is the type name and the second is a dictionary of parameters. Some **elements** change their behavior at runtime based on the **caps** their upstream or downstream **elements** will accept. For example, the `videoscale` element sets the size it scales video to based on the width and height parameters of the downstream **caps**. The **caps** an **element** supports are listed in the documentation produced by `gst-inspect-1.0` for that **element**.
4. Calling `.map` on the builder object and passing a callable (eg: `p.map(lambda x: x.resize((100,100)))`) turns the given callable into an **element** that maps over frames. The argument passed to the function is a PIL Image object.
5. Calling `.from_iter` on the builder object and passing an iterator over PIL Image objects turns the iterator into an **element** that produces frames from the iterator.
6. Calling `.multimap` on the builder object works almost the same as calling `.map`, except that the element can have multiple upstream sources and the callback takes multiple arguments, one per buffer from each source. Instead of the callback being wrapped in a filter element, as with `.map`, it's wrapped in an [aggregator](https://gstreamer.freedesktop.org/documentation/base/gstaggregator.html?gi-language=c).
Trying to construct an **element** that does not exist raises a `KeyError`.
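Per item 4 above, a `.map` callable receives a PIL Image and returns a PIL Image. That contract can be exercised outside a pipeline with plain Pillow, no GStreamer required (a minimal sketch; `emboss_frame` is a hypothetical name, not part of candyfloss):

```python
from PIL import Image, ImageFilter

def emboss_frame(frame):
    # Per-frame callable: PIL Image in, PIL Image out
    return frame.filter(ImageFilter.EMBOSS)

# A solid 300x300 RGB image stands in for a decoded video frame
frame = Image.new("RGB", (300, 300), color=(128, 64, 32))
out = emboss_frame(frame)
print(out.size, out.mode)
```

The same callable could then be passed to `p.map(emboss_frame)` inside a pipeline.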
| text/markdown | null | Bob Poekert <bob@poekert.com> | null | null | MIT License
Copyright (c) 2025 Robert Poekert
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"Pillow",
"numpy",
"graphviz"
] | [] | [] | [] | [
"Homepage, https://git.hella.cheap/bob/candyfloss"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T15:31:20.782151 | candyfloss-0.0.7.tar.gz | 25,174 | 55/8e/f355a130e845ee403e57ea601fd3162b2a9e410a6c0ee9efdba8d762852e/candyfloss-0.0.7.tar.gz | source | sdist | null | false | 1e89eee3545d6a109792e4b9a64600e5 | 372eb66011a0cebfb5000a341425ec9f4af346023ebef370939136d5347fc077 | 558ef355a130e845ee403e57ea601fd3162b2a9e410a6c0ee9efdba8d762852e | null | [
"LICENSE"
] | 140 |
2.4 | aep-parser | 0.3.0 | A .aep (After Effects Project) parser | <a name="readme-top"></a>
<!-- PROJECT NAME -->
<br />
<div align="center">
<h3 align="center">aep_parser</h3>
<p align="center">
An After Effects file parser in Python!
<br />
<a href="https://forticheprod.github.io/aep_parser/"><strong>Explore the docs »</strong></a>
<br />
</p>
</div>
<!-- ABOUT THE PROJECT -->
## About The Project
This is a .aep (After Effects Project) parser in Python. After Effects files (.aep) are mostly binary files, encoded in RIFX format. This parser uses [Kaitai Struct](https://kaitai.io/) to parse .aep files and return a Project object containing items, layers, effects and properties. The API is as close as possible to the [ExtendScript API](https://ae-scripting.docsforadobe.dev/), with a few nice additions like iterators instead of collection items.
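The RIFX container mentioned above is a big-endian variant of RIFF: a 4-byte chunk id, a 4-byte big-endian size, then a form type. A minimal stdlib sketch over stand-in bytes (the `Egg!` form type for .aep files is taken from community documentation, not verified against this parser):

```python
import struct

# Stand-in header bytes, not read from a real .aep file
header = b"RIFX" + struct.pack(">I", 8) + b"Egg!"

chunk_id = header[:4]
size = struct.unpack(">I", header[4:8])[0]  # big-endian, unlike little-endian RIFF
form_type = header[8:12]
print(chunk_id, size, form_type)
```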
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- INSTALLATION -->
## Installation
```sh
pip install aep-parser
```
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- USAGE EXAMPLES -->
## Usage
```python
import aep_parser

# Parse an After Effects project
app = aep_parser.parse("path/to/project.aep")
project = app.project

# Access application-level info
print(f"AE Version: {app.version}")

# Access every item
for item in project:
    print(f"{item.name} ({type(item).__name__})")

# Get a composition by name and its layers
comp = next(c for c in project.compositions if c.name == "Comp 1")
for layer in comp.layers:
    print(f"  Layer: {layer.name}, in={layer.in_point}s, out={layer.out_point}s")

    # Access the layer's source (for AVLayer)
    if hasattr(layer, "source") and layer.source:
        print(f"  Source: {layer.source.name}")

        # Get the file path if the source is footage with a file
        if hasattr(layer.source, "file"):
            print(f"  File: {layer.source.file}")

# Access the render queue
for rq_item in project.render_queue.items:
    print(f"Render: {rq_item.comp_name}")
    for om in rq_item.output_modules:
        # Settings are a dict with ExtendScript keys
        video_on = om.settings.get("Video Output", False)
        print(f"  Output: {om.name}, video={video_on}")
```
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ROADMAP -->
## Roadmap
See the [open issues](https://github.com/forticheprod/aep_parser/issues) for a full list of proposed features and known issues.
If you encounter a bug, please submit an issue and attach a basic scene to reproduce your issue.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTRIBUTING -->
## Contributing
See the full [Contributing Guide](https://github.com/forticheprod/aep_parser/blob/main/CONTRIBUTING.md) on GitHub.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- LICENSE -->
## License
Distributed under the MIT License.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTACT -->
## Contact
Aurore Delaunay - del-github@blurme.net
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ACKNOWLEDGMENTS -->
## Acknowledgments
* [aftereffects-aep-parser in Go](https://github.com/boltframe/aftereffects-aep-parser)
* [Kaitai Struct](https://kaitai.io)
* [The invaluable Lottie Docs](https://github.com/hunger-zh/lottie-docs/blob/main/docs/aep.md)
* [After Effects Scripting Guide](https://ae-scripting.docsforadobe.dev/)
* [AE version parsing](https://github.com/tinogithub/aftereffects-version-check)
<p align="right">(<a href="#readme-top">back to top</a>)</p>
| text/markdown | null | Aurore Delaunay <del-github@blurme.net> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: File Formats",
"Topic :: Multimedia :: Graphics",
"Typing :: Typed"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"kaitaistruct>=0.9",
"importlib_metadata; python_version < \"3.8\"",
"typing_extensions>=3.7; python_version < \"3.8\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"mkdocs>=1.5.0; extra == \"docs\"",
"mkdocs-material>=9.0.0; extra == \"docs\"",
"mkdocstrings[python]>=0.24.0; extra == \"docs\"",
"pymdown-extensions>=10.0; extra == \"docs\"",
"ruff>=0.1; extra == \"docs\""
] | [] | [] | [] | [
"Repository, https://github.com/forticheprod/aep_parser",
"Documentation, https://forticheprod.github.io/aep_parser/",
"Changelog, https://github.com/forticheprod/aep_parser/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:31:18.683024 | aep_parser-0.3.0.tar.gz | 533,032 | 2b/de/8bf999fddd353043a0f15760cca75c8cffa78ddc695bf34a57fb79d04c07/aep_parser-0.3.0.tar.gz | source | sdist | null | false | e7684c92c4cb5bd4c2e1c4734ef73048 | 84036dd219cbd4c6da588c4fde456c488bfda95534ebf0bee5248eb8ed7362bd | 2bde8bf999fddd353043a0f15760cca75c8cffa78ddc695bf34a57fb79d04c07 | null | [
"LICENSE"
] | 217 |
2.4 | xtc-tvm-python-bindings | 0.21.0.4 | TVM: An End to End Tensor IR/DSL Stack for Deep Learning Systems | <!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->
<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->
<img src="https://raw.githubusercontent.com/apache/tvm-site/main/images/logo/tvm-logo-small.png" width="128"/> Open Deep Learning Compiler Stack
==============================================
[Documentation](https://tvm.apache.org/docs) |
[Contributors](CONTRIBUTORS.md) |
[Community](https://tvm.apache.org/community) |
[Release Notes](NEWS.md)
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between the
productivity-focused deep learning frameworks and the performance- and efficiency-focused hardware backends.
TVM works with deep learning frameworks to provide end-to-end compilation for different backends.
License
-------
TVM is licensed under the [Apache-2.0](LICENSE) license.
Getting Started
---------------
Check out the [TVM Documentation](https://tvm.apache.org/docs/) site for installation instructions, tutorials, examples, and more.
The [Getting Started with TVM](https://tvm.apache.org/docs/get_started/overview.html) tutorial is a great
place to start.
Contribute to TVM
-----------------
TVM adopts the Apache committer model. We aim to create an open-source project maintained and owned by the community.
Check out the [Contributor Guide](https://tvm.apache.org/docs/contribute/).
History and Acknowledgement
---------------------------
TVM started as a research project for deep learning compilation.
The first version of the project benefited a lot from the following projects:
- [Halide](https://github.com/halide/Halide): Part of TVM's TIR and arithmetic simplification module
originates from Halide. We also learned and adapted some parts of the lowering pipeline from Halide.
- [Loopy](https://github.com/inducer/loopy): use of integer set analysis and its loop transformation primitives.
- [Theano](https://github.com/Theano/Theano): the design inspiration of symbolic scan operator for recurrence.
Since then, the project has gone through several rounds of redesigns.
The current design is also drastically different from the initial design, following the
development trend of the ML compiler community.
The most recent version focuses on a cross-level design with TensorIR as the tensor-level representation
and Relax as the graph-level representation and Python-first transformations.
The project's current design goal is to make the ML compiler accessible by enabling most
transformations to be customizable in Python and bringing a cross-level representation that can jointly
optimize computational graphs, tensor programs, and libraries. The project is also a foundation
infra for building Python-first vertical compilers for domains, such as LLMs.
| text/markdown | Apache TVM | null | null | null | Apache | machine learning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research"
] | [] | https://tvm.apache.org/ | https://github.com/apache/tvm/tags | null | [] | [] | [] | [
"cloudpickle",
"ml_dtypes",
"numpy",
"packaging",
"psutil",
"scipy",
"tornado",
"typing_extensions",
"coremltools; extra == \"importer-coreml\"",
"tensorflow; extra == \"importer-keras\"",
"tensorflow-estimator; extra == \"importer-keras\"",
"future; extra == \"importer-onnx\"",
"onnx; extra == \"importer-onnx\"",
"onnxoptimizer; extra == \"importer-onnx\"",
"onnxruntime; extra == \"importer-onnx\"",
"torch; extra == \"importer-onnx\"",
"torchvision; extra == \"importer-onnx\"",
"future; extra == \"importer-pytorch\"",
"torch; extra == \"importer-pytorch\"",
"torchvision; extra == \"importer-pytorch\"",
"tensorflow; extra == \"importer-tensorflow\"",
"tensorflow-estimator; extra == \"importer-tensorflow\"",
"tensorflow; extra == \"importer-tflite\"",
"tensorflow-estimator; extra == \"importer-tflite\"",
"tflite; extra == \"importer-tflite\"",
"ethos-u-vela; extra == \"tvmc\"",
"future; extra == \"tvmc\"",
"onnx; extra == \"tvmc\"",
"onnxoptimizer; extra == \"tvmc\"",
"onnxruntime; extra == \"tvmc\"",
"tensorflow; extra == \"tvmc\"",
"tflite; extra == \"tvmc\"",
"torch; extra == \"tvmc\"",
"torchvision; extra == \"tvmc\"",
"xgboost>=1.1.0; extra == \"tvmc\"",
"future; extra == \"xgboost\"",
"torch; extra == \"xgboost\"",
"xgboost>=1.1.0; extra == \"xgboost\"",
"astroid; extra == \"dev\"",
"autodocsumm; extra == \"dev\"",
"black==20.8b1; extra == \"dev\"",
"commonmark>=0.7.3; extra == \"dev\"",
"cpplint; extra == \"dev\"",
"docutils<0.17; extra == \"dev\"",
"image; extra == \"dev\"",
"matplotlib; extra == \"dev\"",
"pillow; extra == \"dev\"",
"pylint; extra == \"dev\"",
"sphinx; extra == \"dev\"",
"sphinx_autodoc_annotation; extra == \"dev\"",
"sphinx_gallery; extra == \"dev\"",
"sphinx_rtd_theme; extra == \"dev\"",
"types-psutil; extra == \"dev\"",
"cloudpickle; extra == \"all-prod\"",
"coremltools; extra == \"all-prod\"",
"ethos-u-vela; extra == \"all-prod\"",
"future; extra == \"all-prod\"",
"ml_dtypes; extra == \"all-prod\"",
"numpy; extra == \"all-prod\"",
"onnx; extra == \"all-prod\"",
"onnxoptimizer; extra == \"all-prod\"",
"onnxruntime; extra == \"all-prod\"",
"packaging; extra == \"all-prod\"",
"psutil; extra == \"all-prod\"",
"scipy; extra == \"all-prod\"",
"tensorflow; extra == \"all-prod\"",
"tensorflow-estimator; extra == \"all-prod\"",
"tflite; extra == \"all-prod\"",
"torch; extra == \"all-prod\"",
"torchvision; extra == \"all-prod\"",
"tornado; extra == \"all-prod\"",
"typing_extensions; extra == \"all-prod\"",
"xgboost>=1.1.0; extra == \"all-prod\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:30:57.665546 | xtc_tvm_python_bindings-0.21.0.4-cp314-cp314-macosx_14_0_arm64.whl | 41,404,165 | 17/87/516c403040187aac592af8bf424f6a19335e36dd61d21126654a76274b27/xtc_tvm_python_bindings-0.21.0.4-cp314-cp314-macosx_14_0_arm64.whl | cp314 | bdist_wheel | null | false | 0c592e06fd733897db78987afa4aa95c | d9d8cbcfbaee31dbef1815ebc2a8ad4dceb56630aa19ccd96afe33ae985a5392 | 1787516c403040187aac592af8bf424f6a19335e36dd61d21126654a76274b27 | null | [] | 555 |
2.4 | emailify | 0.1.17 | Emailify is a library for creating email templates in Python. | # Emailify
[](https://codecov.io/gh/choinhet/emailify)
Create beautiful HTML emails with tables, text, images and more. Built on MJML for consistent rendering across all email clients. Images are embedded as proper email attachments with Content-ID references.
## Installation
```bash
pip install emailify
```
## Usage
### Example
```python
import pandas as pd
import emailify as ef

# Example data matching the styled columns below
df = pd.DataFrame({"hello": [1, 2], "hello3": ["a", "b"]})

# The render function returns both HTML and attachments
html, attachments = ef.render(
    ef.Text(
        text="Hello, this is a table with merged headers",
        style=ef.Style(background_color="#cbf4c9", padding_left="5px"),
    ),
    ef.Table(
        data=df,
        merge_equal_headers=True,
        header_style={
            "hello": ef.Style(background_color="#000000", font_color="#ffffff"),
        },
        column_style={
            "hello3": ef.Style(background_color="#d0d0d0", bold=True),
        },
        row_style={
            1: ef.Style(background_color="#cbf4c9", bold=True),
        },
    ),
    ef.Fill(style=ef.Style(background_color="#cbf4c9")),
    ef.Image(data=buf, format="png", width="600px"),  # buf: a BytesIO containing PNG bytes
)
# html: ready-to-send HTML string
# attachments: list of email.mime.application.MIMEApplication objects for images
```
#### Result

### Basic Table
#### Extra
Tables can also handle nested components.
```python
import pandas as pd
import emailify as ef
df = pd.DataFrame({"Name": ["Alice", "Bob"], "Score": [95, 87]})
table = ef.Table(data=df)
html, attachments = ef.render(table)
# attachments will be empty since tables don't produce attachments
```
### Text and Styling
```python
text = ef.Text(
    text="Hello, this is a styled header",
    style=ef.Style(background_color="#cbf4c9", padding_left="5px")
)
html, attachments = ef.render(text)
```
### Tables with Custom Styles
```python
table = ef.Table(
    data=df,
    header_style={"Name": ef.Style(background_color="#000", font_color="#fff")},
    column_style={"Score": ef.Style(background_color="#f0f0f0", bold=True)},
    row_style={0: ef.Style(background_color="#e6ffe6")}
)
```
### Images and Charts
```python
import io
import matplotlib.pyplot as plt
# Create a chart
buf = io.BytesIO()
plt.plot([1, 2, 3], [2, 4, 1])
plt.savefig(buf, format="png", dpi=150)
plt.close()
buf.seek(0)
# Render with image - note that images produce attachments
image = ef.Image(data=buf, format="png", width="600px")
html, attachments = ef.render(image)
# attachments contains MIMEApplication objects with proper Content-ID headers
# The HTML references images via cid: URLs that match the Content-ID
print(f"Generated {len(attachments)} attachment(s)")
```
## Key Features
- **Responsive Design**: Built on MJML for consistent rendering across Gmail, Outlook, Apple Mail, and other clients
- **Proper Image Attachments**: Images are embedded as email attachments with Content-ID references, not base64 data URIs
- **Rich Components**: Tables, text, images, and fills with extensive styling options
- **Pandas Integration**: Direct DataFrame rendering with customizable styles
- **Type Safety**: Full type hints and Pydantic models for robust development
## Email Integration
The `render()` function returns a tuple of `(html, attachments)`:
- `html`: Ready-to-send HTML string with `cid:` references for images
- `attachments`: List of `MIMEApplication` objects with proper headers for inline display
Use with your email library:
```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
html, attachments = ef.render(your_components)
msg = MIMEMultipart('related')
msg.attach(MIMEText(html, 'html'))
for attachment in attachments:
    msg.attach(attachment)
```
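The `cid:` pairing described above can be exercised with only the standard library; here the HTML string and image bytes are stand-ins for what `ef.render()` would return (the `chart0` Content-ID is a made-up example):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

html = '<img src="cid:chart0">'     # HTML references the image by Content-ID
image_bytes = b"\x89PNG\r\n\x1a\n"  # placeholder PNG magic bytes

msg = MIMEMultipart("related")
msg.attach(MIMEText(html, "html"))

att = MIMEApplication(image_bytes, _subtype="png")
att.add_header("Content-ID", "<chart0>")  # must match the cid: URL in the HTML
att.add_header("Content-Disposition", "inline")
msg.attach(att)

print(msg.get_payload()[1]["Content-ID"])  # → <chart0>
```

Email clients resolve each `cid:` URL by matching it against the `Content-ID` header of a part in the same `multipart/related` message.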
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.13,>=3.9.10 | [] | [] | [] | [
"jinja2>=3.1.6",
"pandas>=2.3.1",
"pillow>=11.3.0",
"pydantic>=2.11.7",
"quickjs>=1.19.4"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:30:40.139595 | emailify-0.1.17.tar.gz | 367,715 | a8/dd/7f5a71a898b952eff1a4efc8c814779a3922f4ba53a6abb6d58a26834ddb/emailify-0.1.17.tar.gz | source | sdist | null | false | 4d8f5c55c316b223ad2c6c5e7b433c57 | 5c2c15a4eb454a6175e21042dd2e713c9ab89fc3ba1632d310d202862b57db2e | a8dd7f5a71a898b952eff1a4efc8c814779a3922f4ba53a6abb6d58a26834ddb | null | [] | 222 |
2.4 | myofinder | 1.2.0 | Automatic calculation of the fusion index by AI segmentation | The Myoblast Fusion Index Determination Software
================================================
The MyoFInDer Python package aims to provide an open-source graphical interface
for automatic calculation of the fusion index in muscle cell cultures, based on
fluorescence microscopy images.
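As a point of reference, the fusion index is commonly computed as the fraction of detected nuclei located inside myotubes; MyoFInDer's exact computation may differ, so treat this as an illustrative sketch only:

```python
def fusion_index(nuclei_in_myotubes: int, total_nuclei: int) -> float:
    """Fraction of all detected nuclei that lie inside myotubes."""
    if total_nuclei == 0:
        raise ValueError("no nuclei detected")
    return nuclei_in_myotubes / total_nuclei

# Example: 42 of 120 segmented nuclei fall inside myotubes
print(fusion_index(42, 120))
```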
> [!IMPORTANT]
> MyoFInDer is currently in maintenance-only development phase. This means that
> reported bugs will be fixed, minor changes will be brought to support new
> Python versions if possible, but no major improvements or updates should be
> expected. User requests for new features could still be addressed, depending
> on how large they are.
> [!WARNING]
> MyoFInDer version 1.1.0 now uses [CellPose](https://www.cellpose.org/) for
> nuclei segmentation instead of [DeepCell](https://www.deepcell.org/). This is
> a major breaking change. Differences are to be expected in the results
> obtained with version 1.1.0 and earlier ones, even with similar processing
> parameters.
Presentation
------------
MyoFInDer is based on an Artificial Intelligence library for cell segmentation,
that it makes easily accessible to researchers with limited computer skills. In
the interface, users can manage multiple images at once, adjust processing
parameters, and manually correct the output of the computation. It is also
possible to save the result of the processing as a project, that can be shared
and re-opened later.
A more detailed description of the features and usage of MyoFInDer can be found
in the
[usage section](https://tissueengineeringlab.github.io/MyoFInDer/usage.html)
of the documentation.
MyoFInDer was developed at the
[Tissue Engineering Lab](https://tissueengineering.kuleuven-kulak.be/) in
Kortrijk, Belgium, which is part of the
[KU Leuven](https://www.kuleuven.be/kuleuven/) university. It is today the
preferred solution in our laboratory for assessing the fusion index of a cell
population.
Requirements
------------
To install and run MyoFInDer, you'll need Python 3 (3.10 to 3.14), approximately
6GB of disk space, and preferably 8GB of memory or more. MyoFInDer runs on
Windows, Linux, macOS, and potentially other OS able to run a compatible
version of Python.
The dependencies of the module are:
- [Pillow](https://python-pillow.org/)
- [opencv-python](https://pypi.org/project/opencv-python/)
- [CellPose](https://pypi.org/project/cellpose/)
- [XlsxWriter](https://pypi.org/project/XlsxWriter/)
- [screeninfo](https://pypi.org/project/screeninfo/)
- [numpy](https://numpy.org/)
Installation
------------
MyoFInDer is distributed on PyPI, and can thus be installed using the `pip`
module of Python :
```console
python -m pip install myofinder
```
Note that in the `bin` folder of this repository, a very basic `.msi` Windows
installer allows automatically installing the module and its dependencies for
Windows users who don't feel comfortable with command-line operations.
A more detailed description of the installation procedure can be found in the
[installation section](https://tissueengineeringlab.github.io/MyoFInDer/installation.html)
of the documentation.
Citing MyoFInDer
----------------
If MyoFInDer has been of help in your research, please reference it in your
academic publications by citing the following article:
- Weisrock A., Wüst R., Olenic M. et al., *MyoFInDer: An AI-Based Tool for
Myotube Fusion Index Determination*, Tissue Eng. Part A (30), 19-20, 2024,
DOI: 10.1089/ten.TEA.2024.0049.
([link to Weisrock et al.](https://www.liebertpub.com/doi/10.1089/ten.tea.2024.0049))
Documentation
-------------
The latest version of the documentation can be accessed on the
[project's website](https://tissueengineeringlab.github.io/MyoFInDer/). It
contains detailed information about the installation, usage, and
troubleshooting.
| text/markdown | null | Tissue Engineering Lab <antoine.weisrock@kuleuven.be> | null | Antoine Weisrock <antoine.weisrock@gmail.com> | null | segmentation, fusion index, automation, muscle culture | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"Pillow",
"opencv-python-headless",
"cellpose==3.1.1.2",
"XlsxWriter>=3.0.0",
"screeninfo>=0.7",
"numpy"
] | [] | [] | [] | [
"Homepage, https://github.com/TissueEngineeringLab/MyoFInDer",
"Documentation, https://tissueengineeringlab.github.io/MyoFInDer/",
"Repository, https://github.com/TissueEngineeringLab/MyoFInDer.git",
"Issues, https://github.com/TissueEngineeringLab/MyoFInDer/issues",
"Download, https://pypi.org/project/myofinder/#files"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T15:30:03.820275 | myofinder-1.2.0.tar.gz | 2,538,678 | ea/61/f70f6e935649e815a72432bfedc99ae53d9122a0197cdd8a999bf9df8d48/myofinder-1.2.0.tar.gz | source | sdist | null | false | f2bc4144808caa7f4c22ea250a4d2608 | 48f4bfafb64034eebc40115704e08017bcfc20d4860260a9d3ab992c12569da8 | ea61f70f6e935649e815a72432bfedc99ae53d9122a0197cdd8a999bf9df8d48 | GPL-3.0-or-later | [
"LICENSE"
] | 205 |
2.4 | auto-coder | 3.0.12 | AutoCoder: AutoCoder | <p align="center">
<picture>
<img alt="auto-coder" src="./logo/auto-coder.jpeg" width=55%>
</picture>
</p>
<h3 align="center">
Auto-Coder (powered by Byzer-LLM)
</h3>
<p align="center">
<a href="https://uelng8wukz.feishu.cn/wiki/QIpkwpQo2iSdkwk9nP6cNSPlnPc"><b>中文</b></a> |
</p>
---
*Latest News* 🔥
- [2025/01] Release Auto-Coder 0.1.208
- [2024/09] Release Auto-Coder 0.1.163
- [2024/08] Release Auto-Coder 0.1.143
- [2024/07] Release Auto-Coder 0.1.115
- [2024/06] Release Auto-Coder 0.1.82
- [2024/05] Release Auto-Coder 0.1.73
- [2024/04] Release Auto-Coder 0.1.46
- [2024/03] Release Auto-Coder 0.1.25
- [2024/03] Release Auto-Coder 0.1.24
---
## Installation
### Option 1: Install with pip (recommended)
```shell
# Create a virtual environment (recommended)
conda create --name autocoder python=3.10.11
conda activate autocoder
# Or use venv
python -m venv autocoder
source autocoder/bin/activate # Linux/macOS
# autocoder\Scripts\activate # Windows
# Install auto-coder
pip install -U auto-coder
```
### Option 2: Install from source
```shell
# Clone the repository
git clone https://github.com/allwefantasy/auto-coder.git
cd auto-coder
# Create a virtual environment
conda create --name autocoder python=3.10.11
conda activate autocoder
# Install dependencies
pip install -r requirements.txt
# Install the project
pip install -e .
```
### System Requirements
- Python 3.10, 3.11, or 3.12
- OS: Windows, macOS, or Linux
- Memory: 4GB or more recommended
- Disk space: 2GB or more recommended
### Verify the Installation
After installation, you can verify it succeeded with the following commands:
```shell
# Check the version
auto-coder --version
# Start chat mode
auto-coder.chat
# Run a one-off command
auto-coder.run -p "Hello, Auto-Coder!"
```
## Usage guide
### 1. Chat mode (recommended for beginners)
```shell
# Start the interactive chat interface
auto-coder.chat
# Or use the alias
chat-auto-coder
```
Chat mode provides a friendly interactive interface that supports:
- Real-time conversation
- Code generation and modification
- File operations
- Project management
### 2. Command-line mode
#### One-off run mode
```shell
# Basic usage
auto-coder.run -p "Write a function that computes the Fibonacci sequence"
# Read input from a pipe
echo "Explain what this code does" | auto-coder.run -p
# Specify the output format
auto-coder.run -p "Generate a Hello World function" --output-format json
# Use verbose output
auto-coder.run -p "Create a simple web page" --verbose
```
#### Session mode
```shell
# Continue the most recent conversation
auto-coder.run --continue
# Resume a specific session
auto-coder.run --resume 550e8400-e29b-41d4-a716-446655440000
```
#### Advanced options
```shell
# Limit the number of conversation turns
auto-coder.run -p "Optimize this algorithm" --max-turns 5
# Specify a system prompt
auto-coder.run -p "Write code" --system-prompt "You are a professional Python developer"
# Restrict the available tools
auto-coder.run -p "Read the file contents" --allowed-tools read_file write_to_file
# Set the permission mode
auto-coder.run -p "Modify the code" --permission-mode acceptEdits
```
### 3. Core mode
```shell
# Start core mode (the traditional command-line interface)
auto-coder
# Or use the alias
auto-coder.core
```
### 4. Server mode
```shell
# Start the web server
auto-coder.serve
# Or use the alias
auto-coder-serve
```
### 5. RAG mode
```shell
# Start RAG (retrieval-augmented generation) mode
auto-coder.rag
```
### Common command examples
```shell
# Code generation
auto-coder.run -p "Create a Flask web application"
# Code explanation
auto-coder.run -p "Explain what this function does" < code.py
# Code refactoring
auto-coder.run -p "Refactor this code to improve readability"
# Bug fixing
auto-coder.run -p "Fix this bug" --verbose
# Documentation generation
auto-coder.run -p "Generate a README for this project"
# Test generation
auto-coder.run -p "Write unit tests for this function"
```
### Shell completion
Auto-Coder supports command-line autocompletion:
```shell
# Install completion (bash)
echo 'eval "$(register-python-argcomplete auto-coder.run)"' >> ~/.bashrc
source ~/.bashrc
# Install completion (zsh)
echo 'eval "$(register-python-argcomplete auto-coder.run)"' >> ~/.zshrc
source ~/.zshrc
```
## Uninstalling
### Full uninstall
```shell
# Uninstall auto-coder
pip uninstall auto-coder
# Remove the virtual environment (if you used one)
conda remove --name autocoder --all
# Or, for a venv-created environment
rm -rf autocoder
# Clean up cache files (optional)
rm -rf ~/.autocoder  # user config and cache directory
```
### Reinstalling
```shell
# Uninstall the old version
pip uninstall auto-coder
# Clear the cache
pip cache purge
# Install the latest version
pip install -U auto-coder
```
## Configuration
### Environment variables
```shell
# Set API keys
export OPENAI_API_KEY="your-api-key"
export ANTHROPIC_API_KEY="your-api-key"
# Set model configuration
export AUTOCODER_MODEL="gpt-4"
export AUTOCODER_BASE_URL="https://api.openai.com/v1"
```
### Configuration files
Auto-Coder supports several configuration mechanisms:
- `.autocoderrc`: project-level configuration
- `~/.autocoder/config.yaml`: user-level configuration
- Environment variables: system-level configuration
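Neither file's schema is documented above; mirroring the environment variables, a user-level `~/.autocoder/config.yaml` might look like the following sketch — the `model` and `base_url` key names are assumptions, not confirmed API:

```yaml
# Hypothetical sketch — verify the key names against your installed version.
model: gpt-4
base_url: https://api.openai.com/v1
```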
## Troubleshooting
### Common issues
1. **Installation fails**
```shell
# Upgrade pip
pip install --upgrade pip
# Clear the cache and reinstall
pip cache purge
pip install auto-coder
```
2. **Permission errors**
```shell
# Install for the current user only
pip install --user auto-coder
```
3. **Dependency conflicts**
```shell
# Use a virtual environment
python -m venv autocoder_env
source autocoder_env/bin/activate
pip install auto-coder
```
4. **Command not found**
```shell
# Check your PATH
echo $PATH
# Reinstall
pip uninstall auto-coder
pip install auto-coder
```
### Getting help
```shell
# Show help
auto-coder.run --help
# Show version information
auto-coder --version
# Enable verbose output for debugging
auto-coder.run -p "test" --verbose
```
## Tutorials
0. [Auto-Coder.Chat: The Road to Intelligent Programming](https://uelng8wukz.feishu.cn/wiki/QIpkwpQo2iSdkwk9nP6cNSPlnPc)
| text/markdown | allwefantasy | allwefantasy <allwefantasy@gmail.com> | null | null |
PROPRIETARY SOFTWARE LICENSE AGREEMENT
Copyright (c) 2024 auto-coder Project Owner. All Rights Reserved.
IMPORTANT: This software is protected by copyright laws and international treaties.
Unauthorized reproduction or distribution of this program, or any portion of it,
may result in severe civil and criminal penalties, and will be prosecuted to the
maximum extent possible under the law.
LICENSE TERMS AND CONDITIONS
1. DEFINITIONS
"Software" means all source code, object code, documentation, data, and
related materials of the auto-coder project.
"Licensor" means the copyright owner of this Software.
"Licensee" means the individual or legal entity receiving this license.
2. LICENSE RESTRICTIONS
2.1 NO COMMERCIAL USE
This Software is strictly prohibited for any commercial purposes, including but not limited to:
- Direct or indirect commercial sale
- Provision as part of commercial services
- Use in commercial product development
- Use in providing paid services to third parties
- Any form of commercial operation
2.2 SOURCE CODE ACCESS RESTRICTIONS
- Viewing, accessing, copying, or distributing the source code is strictly
prohibited without written authorization from the Licensor
- Reverse engineering, decompilation, or disassembly of this Software is prohibited
- Attempting to derive source code from object code is prohibited
- Only authorized users may access source code with special written permission
2.3 USE RESTRICTIONS
- This Software is for personal learning and research only
- Must be used in a closed, non-public environment
- Publishing or sharing this Software or any part thereof is prohibited
- Creating derivative works is prohibited
3. INTELLECTUAL PROPERTY
- All rights, title, and interest in the Software remain with the Licensor
- This license grants no intellectual property rights to the Licensee
- Licensee may not remove or modify any copyright notices
4. CONFIDENTIALITY
The Licensee agrees to:
- Keep all information about the Software strictly confidential
- Not disclose any content of the Software to third parties
- Take reasonable security measures to protect the Software
5. TERMINATION
- Violation of any terms of this license automatically terminates the license
- Upon termination, Licensee must immediately destroy all copies of the Software
6. DISCLAIMER
This Software is provided "AS IS" without warranty of any kind, express or implied.
The Licensor is not liable for any damages arising from use or inability to use the Software.
7. LEGAL LIABILITY
Violation of this license agreement may result in:
- Civil litigation and damages
- Criminal prosecution
- Injunctive relief
- Compensation for attorney fees and litigation costs
8. GOVERNING LAW
This license is governed by and construed in accordance with the laws of the
People's Republic of China.
9. ENTIRE AGREEMENT
This license constitutes the entire agreement between the parties regarding
the Software and supersedes all prior agreements or understandings.
10. CONTACT
For special licensing or any questions, please contact the Licensor.
STRICT NOTICE:
Any unauthorized use, copying, distribution, or viewing of the source code of this
Software will be considered infringement and will be prosecuted under the law.
| autocoder, ai, coding, automation | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <=3.13,>=3.10 | [] | [] | [] | [
"contextlib2",
"ninja",
"jinja2",
"rich",
"paramiko",
"tqdm",
"loguru",
"pyjava>=0.6.21",
"fastapi",
"uvicorn",
"retrying",
"zhipuai",
"dashscope",
"tiktoken",
"tabulate",
"jupyter_client",
"prompt-toolkit",
"tokenizers",
"aiofiles",
"readerwriterlock",
"byzerllm[saas]>=0.1.202",
"patch",
"diff_match_patch",
"GitPython",
"openai>=1.14.3",
"anthropic",
"google-generativeai",
"protobuf",
"azure-cognitiveservices-speech",
"real_agent",
"duckdb",
"python-docx",
"docx2txt",
"pdf2image",
"docx2pdf",
"pypdf",
"pyperclip",
"colorama",
"pylint",
"reportlab",
"pathspec",
"openpyxl",
"python-pptx",
"watchfiles",
"cairosvg",
"matplotlib",
"mammoth",
"markdownify",
"pdfminer.six",
"puremagic",
"pydub",
"youtube-transcript-api",
"SpeechRecognition",
"pathvalidate",
"pexpect",
"mcp; python_version >= \"3.10\"",
"setuptools",
"filelock",
"argcomplete",
"psutil",
"patch-ng",
"firecrawl"
] | [] | [] | [] | [
"Homepage, https://github.com/allwefantasy/auto-coder"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-20T15:29:59.926898 | auto_coder-3.0.12.tar.gz | 3,811,131 | d1/6d/1ebc89127ece16d8eb6044be584f2936bc9b1cb136bda5214b49116bc8fe/auto_coder-3.0.12.tar.gz | source | sdist | null | false | 65483646f14bdd591b1245ed1807aa66 | cea77cbaa5d2f3ad7a9fc2034339288b3ac26c760fff11f03f93fff230e042bc | d16d1ebc89127ece16d8eb6044be584f2936bc9b1cb136bda5214b49116bc8fe | null | [
"LICENSE"
] | 237 |
2.4 | fastpluggy-ui-tools | 0.0.12 | UI Tools plugin for Fastpluggy | # UI Tools Module for FastPluggy
A FastPluggy module that provides a collection of Jinja2 template filters and HTML rendering utilities for building user interfaces.
It includes base64 encoding, Pydantic model dumping, localization, JSON rendering, and image embedding.
## Features
* **Base64 Encoding**: `b64encode` filter to convert binary data to Base64 strings.
* **Pydantic Model Dump**: `pydantic_model_dump` filter to serialize Pydantic `BaseModel` or `BaseSettings` instances to dictionaries.
* **Localization**: `localizedcurrency`, `localizeddate`, and `nl2br` filters for number, date/time formatting, and newline-to-HTML conversions using Babel.
* **JSON Rendering**: `from_json` filter and HTML list conversion utilities for safely displaying JSON data.
* **Image Rendering**: Embed Base64-encoded images directly into templates with `<img>` tags.
* **Seamless Integration**: Easy registration with FastPluggy via the `UiToolsModule` plugin.
# Extra Widget
TODO: add widget description
## Requirements
* Python 3.10 or higher
* [Babel](https://pypi.org/project/Babel/)
* [Pillow](https://pypi.org/project/Pillow/)
Install dependencies:
```bash
pip install -r requirements.txt
```
## Usage
### Template Filters
| Filter | Description |
| --------------------- |------------------------------------------------------------------|
| `b64encode` | Base64-encode binary data (`bytes → str`). |
| `pydantic_model_dump` | Dump Pydantic models/settings to dictionaries. |
| `localizedcurrency` | Format a number as localized currency (default: `EUR`, `fr_FR`). |
| `localizeddate` | Format dates/datetimes with various styles/locales/timezones. |
| `nl2br` | Convert newline characters to `<br>` tags. |
| `from_json` | Parse a JSON string into Python objects (`list`/`dict`). |
| `render_bytes_size` | Format a byte size as a human-readable string. |
**Example in a Jinja2 template:**
```jinja
<h2>{{ user.name }}</h2>
<p>Balance: {{ user.balance | localizedcurrency('USD', 'en_US') }}</p>
<p>Joined: {{ user.joined_at | localizeddate('long', 'short', 'en_US') }}</p>
<pre>{{ config | pydantic_model_dump | pprint }}</pre>
```
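Two of the filters above are simple enough to sketch from their descriptions alone. The implementations below are illustrative guesses at the behavior — the actual module may differ in escaping, rounding, and unit-boundary details:

```python
import html

def nl2br(value: str) -> str:
    """Escape the text for HTML, then convert newlines to <br> tags."""
    return html.escape(value).replace("\n", "<br>\n")

def render_bytes_size(num_bytes: float) -> str:
    """Format a byte count as a human-readable string (1024-based units)."""
    for unit in ("B", "KB", "MB", "GB"):
        if abs(num_bytes) < 1024:
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:.1f} TB"
```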
### HTML Rendering Utilities
Import and use functions from `html_render.py` to render JSON or image data in HTML:
```python
from ui_tools.html_render import render_data_field, render_safe_data_field
# Render JSON string as HTML list
html_list = render_data_field(json_string)
# Safely render arbitrary data
safe_html = render_safe_data_field(raw_input)
```
## Running Tests
Ensure you have `pytest` installed, then run:
```bash
pytest tests/
```
## Contributing
1. Fork the repository.
2. Create a new branch: `git checkout -b feature/your-feature`.
3. Commit your changes and push to your fork.
4. Open a pull request for review.
## License
This project is licensed under the MIT License. See [LICENSE](LICENSE) for details.
| text/markdown | FastPluggy Team | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"jinja2",
"babel",
"Pillow"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:29:28.365150 | fastpluggy_ui_tools-0.0.12.tar.gz | 15,880 | e3/20/1c8dd3c5c23a9f5395f140c7685bfd19335966ac83cbe46ea02e01f06d93/fastpluggy_ui_tools-0.0.12.tar.gz | source | sdist | null | false | 898e75204e34e2790e02bf0dce65cca8 | 0a283522bd7705c45f86acc5b08ee7366b7ced2934878326559c248828e3d608 | e3201c8dd3c5c23a9f5395f140c7685bfd19335966ac83cbe46ea02e01f06d93 | null | [] | 228 |
2.4 | castor-pollux | 0.0.21 | A library for 'gemini' languagemodels without unnecessary dependencies. | # castor-pollux
Castor-Pollux (the twin sons of Zeus, routinely called 'gemini') is a pure REST API library for interacting with Google Generative AI API.
## Without (!!!):
- any whiff of 'Vertex' or GCP;
- any sign of 'Pydantic' or unnecessary (and mostly useless) typing;
- any of the other Google package dependencies crammed into the kitchen-sink `google-genai` package.
## Installation:
```shell
pip install castor-pollux
```
Then:
```Python
# Python
import castor_pollux.rest as cp
```
## A text continuation request:
```Python
import castor_pollux.rest as cp
from yaml import safe_load as yl
kwargs = """ # this is a string in YAML format
model: gemini-3.1-pro-preview # thinking model
# system_instruction: '' # will prevail if put here
mime_type: text/plain #
modalities:
- TEXT # text for text
max_tokens: 10000
n: 2 # 1 is not mandatory
stop_sequences:
- STOP
- "\nTitle"
temperature: 0.5 # 0 to 1.0
top_k: 10 # number of tokens to consider.
top_p: 0.5 # 0 to 1.0
include_thoughts: True
thinking_level: high # for 3+ models
"""
instruction = 'You are Joseph Jacobs, you retell folk tales.'
text_to_continue = 'Once upon a time, when pigs drank wine '
machine_responses = cp.continuation(
text=text_to_continue,
instruction=instruction,
**yl(kwargs)
)
```
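Under the hood, a call like the one above has to assemble the body of a `generateContent` REST request. The sketch below shows a plausible mapping from these kwargs to that JSON shape — the field names follow the public Google Generative Language REST API, but how `castor_pollux` itself maps `max_tokens`, `n`, etc. is an assumption, not a description of its internals:

```python
def build_body(text, instruction=None, model=None, max_tokens=None,
               temperature=None, n=None, **_ignored):
    """Assemble a plausible generateContent request body (illustrative only)."""
    body = {
        "contents": [{"role": "user", "parts": [{"text": text}]}],
        "generationConfig": {},
    }
    if instruction:
        body["systemInstruction"] = {"parts": [{"text": instruction}]}
    if max_tokens is not None:
        body["generationConfig"]["maxOutputTokens"] = max_tokens
    if temperature is not None:
        body["generationConfig"]["temperature"] = temperature
    if n is not None:
        body["generationConfig"]["candidateCount"] = n
    return body
```

The body would then be POSTed to the model's `:generateContent` endpoint with the API key in a header.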
## A multi-turn conversation continuation request:
```Python
import castor_pollux.rest as cp
from yaml import safe_load as yl
kwargs = """ # this is a string in YAML format
model: gemini-2.5-pro
mime_type: text/plain
modalities:
- TEXT
max_tokens: 32000
n: 1 # no longer a mandatory 1
stop_sequences:
- STOP
- "\nTitle"
temperature: 0.5
top_k: 10
top_p: 0.5
include_thoughts: True
thinking_budget: 32768
"""
previous_turns = """
- role: user
parts:
- text: Can we change human nature?
- role: model
parts:
- text: Of course, nothing can be simpler. You just re-educate them.
"""
human_response_to_the_previous_turn = 'That is not true. Think again.'
instruction = 'I am an expert in critical thinking. I analyse.'
machine_responses = cp.continuation(
text=human_response_to_the_previous_turn,
contents=yl(previous_turns),
instruction=instruction,
**yl(kwargs)
)
```
| text/markdown | null | Alexander Fedotov <alex.fedotov@aol.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.32.3",
"PyYAML>=6.0.0",
"google-genai>=1.49.0; extra == \"gemini-google\""
] | [] | [] | [] | [
"Homepage, https://github.com/alxfed/castor-pollux",
"Bug Tracker, https://github.com/alxfed/castor-pollux/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T15:29:23.360282 | castor_pollux-0.0.21.tar.gz | 5,280 | 8a/23/c0e07480684638565f1a918b6f276477d57ad6c91096523c7acbfb0e5152/castor_pollux-0.0.21.tar.gz | source | sdist | null | false | 770b3e11a4ce8b1cf636e586de4027f2 | 52562b6c1da4469e52ffa3926e8c1be88a9c65f94e3667e24e6329313b8ae9c2 | 8a23c0e07480684638565f1a918b6f276477d57ad6c91096523c7acbfb0e5152 | null | [
"LICENSE"
] | 228 |
2.4 | zehnex-pro | 1.0.1 | Ultra-fast Telegram bot framework with YouTube downloader, currency, Wikipedia PDF and QR code generation | # ⚡ Zehnex Pro 1.0.1
**Ultra-fast async Telegram bot framework** — with a YouTube downloader, a currency converter, Wikipedia→PDF export, and a QR code generator.
```
pip install zehnex-pro
```
---
## 🚀 Quick start
```python
from zehnex_pro import ZehnexBot, VideoDownloader, CurrencyConverter, WikiToPDF, QRGenerator, Filter
bot = ZehnexBot("YOUR_BOT_TOKEN")
dl = VideoDownloader()
currency = CurrencyConverter()
wiki = WikiToPDF(language="uz")
qr = QRGenerator()
# ─── /start ───────────────────────────────────────────────
@bot.command("start", aliases=["help"])
async def start(ctx):
    await ctx.reply(
        "👋 Hello! <b>Zehnex Pro Bot</b>\n\n"
        "📹 /video [URL] — download a video\n"
        "🎵 /audio [URL] — download an mp3\n"
        "💱 /convert 100 USD UZS — currency conversion\n"
        "📖 /wiki [topic] — Wikipedia PDF\n"
        "🔲 /qr [text/URL] — QR code\n"
        "📶 /wifi [ssid] [password] — WiFi QR"
    )
# ─── Video downloader ─────────────────────────────────────
@bot.command("video")
async def video_cmd(ctx):
    url = ctx.args[0] if ctx.args else VideoDownloader.extract_url(ctx.text)
    if not url:
        await ctx.reply("❌ Send a URL!\nExample: /video https://youtube.com/watch?v=...")
        return
    await ctx.upload_video()
    try:
        info = await dl.get_info(url)
        await ctx.reply(f"⏳ Downloading...\n\n{info}")
        path = await dl.download(url, quality="720p")
        await ctx.send_video(path, caption=f"🎬 {info.title}")
        dl.cleanup(path)
    except Exception as e:
        await ctx.reply(f"❌ Error: {e}")
# ─── Audio (MP3) downloader ───────────────────────────────
@bot.command("audio")
async def audio_cmd(ctx):
    url = ctx.args[0] if ctx.args else None
    if not url:
        await ctx.reply("❌ Send a URL!")
        return
    await ctx.typing()
    path = await dl.download(url, audio_only=True)
    await ctx.send_document(path, caption="🎵 MP3 ready!")
    dl.cleanup(path)
# ─── Currency conversion ──────────────────────────────────
@bot.command("convert")
async def convert_cmd(ctx):
    # /convert 100 USD UZS
    parsed = CurrencyConverter.parse_convert_command(ctx.text)
    if not parsed:
        await ctx.reply("❌ Usage: /convert 100 USD UZS")
        return
    amount, from_c, to_c = parsed
    await ctx.typing()
    try:
        result = await currency.convert(amount, from_c, to_c)
        await ctx.reply(str(result))
    except Exception as e:
        await ctx.reply(f"❌ {e}")

@bot.command("kurs")
async def rates_cmd(ctx):
    base = ctx.args[0].upper() if ctx.args else "USD"
    await ctx.typing()
    rates = await currency.get_popular_rates(base)
    await ctx.reply(currency.format_rates(rates, base))
# ─── Wikipedia → PDF ──────────────────────────────────────
@bot.command("wiki")
async def wiki_cmd(ctx):
    if not ctx.args:
        await ctx.reply("📖 Usage: /wiki Python\n(or /wiki Amir Temur)")
        return
    query = " ".join(ctx.args)
    await ctx.typing()
    result = await wiki.search(query)
    if not result:
        await ctx.reply(f"❌ Nothing found for '{query}'.")
        return
    await ctx.reply(result.preview())
    await ctx.upload_document()
    pdf_path = await wiki.to_pdf(result)
    await ctx.send_document(pdf_path, caption=f"📄 {result.title}")
    wiki.cleanup(pdf_path)
# ─── QR code ──────────────────────────────────────────────
@bot.command("qr")
async def qr_cmd(ctx):
    if not ctx.args:
        await ctx.reply("🔲 Usage: /qr https://google.com\nOr: /qr Hello World!")
        return
    data = " ".join(ctx.args)
    await ctx.typing()
    path = await qr.generate(
        data,
        style="rounded",
        color="#1a237e",
        size=500,
    )
    await ctx.send_photo(path, caption=f"✅ QR code ready!\n\n<code>{data[:80]}</code>")
    qr.cleanup(path)
# ─── WiFi QR ──────────────────────────────────────────────
@bot.command("wifi")
async def wifi_cmd(ctx):
    args = ctx.args
    if len(args) < 2:
        await ctx.reply("📶 Usage: /wifi [SSID] [Password] [WPA|WEP|nopass]")
        return
    ssid = args[0]
    password = args[1]
    security = args[2] if len(args) > 2 else "WPA"
    await ctx.typing()
    path = await qr.wifi(ssid=ssid, password=password, security=security)
    await ctx.send_photo(path, caption=f"📶 WiFi: <b>{ssid}</b>")
    qr.cleanup(path)
# ─── Auto-download when a URL is sent ─────────────────────
@bot.message(Filter.AND(Filter.is_url, Filter.NOT(Filter.text_startswith("/"))))
async def auto_video(ctx):
    url = ctx.text.strip()
    if VideoDownloader.is_supported(url):
        await ctx.reply(
            "🔗 Link detected! Download it?",
            keyboard=bot.inline_keyboard([
                [
                    {"text": "📹 Video (720p)", "callback_data": f"dl_video_720p:{url[:100]}"},
                    {"text": "🎵 MP3", "callback_data": f"dl_audio:{url[:100]}"},
                ]
            ])
        )
# ─── Callback handlers ────────────────────────────────────
@bot.on("callback_query")
async def handle_callbacks(ctx):
    await ctx.answer()
    if ctx.data.startswith("dl_video_"):
        await ctx.reply("⏳ Downloading video... please wait.")
    elif ctx.data.startswith("dl_audio:"):
        await ctx.reply("⏳ Downloading audio... please wait.")

# ─── Run ──────────────────────────────────────────────────
if __name__ == "__main__":
    bot.run()
```
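The `/wifi` command above ultimately has to encode the de-facto standard `WIFI:` QR payload, `WIFI:T:<security>;S:<ssid>;P:<password>;;`, with the special characters `\`, `;`, `,`, `:`, and `"` backslash-escaped. A minimal sketch of that encoding — `QRGenerator.wifi`'s actual internals are an assumption here:

```python
def wifi_payload(ssid: str, password: str, security: str = "WPA") -> str:
    """Build the standard WIFI: QR payload string."""
    def esc(value: str) -> str:
        # Backslash-escape the characters that are special in the format.
        for ch in ('\\', ';', ',', ':', '"'):
            value = value.replace(ch, '\\' + ch)
        return value
    return f"WIFI:T:{security};S:{esc(ssid)};P:{esc(password)};;"
```

Any QR library can then render this string as an image that phones recognize as WiFi credentials.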
---
## 📦 Installation
```bash
pip install zehnex-pro
```
With all features enabled:
```bash
pip install zehnex-pro[all]
```
---
## 🧩 Modules
| Module | Description |
|--------|-------------|
| `ZehnexBot` | Core bot engine (async polling) |
| `VideoDownloader` | Video from YouTube, TikTok, Instagram, Twitter, and 1000+ sites |
| `CurrencyConverter` | Real-time currency rates |
| `WikiToPDF` | Wikipedia search and PDF generation |
| `QRGenerator` | Pretty QR code generator |
| `Router` | Handler grouping |
| `Filter` | Message filters |
| `Context` | Handler context object |
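`CurrencyConverter.parse_convert_command` from the quick-start example presumably tokenizes a message like `/convert 100 USD UZS` into an `(amount, from, to)` tuple. A hypothetical equivalent — the real implementation may differ:

```python
def parse_convert_command(text: str):
    """Parse '/convert 100 USD UZS' into (100.0, 'USD', 'UZS'); None if malformed."""
    parts = text.split()
    if len(parts) != 4 or parts[0] != "/convert":
        return None
    try:
        amount = float(parts[1])
    except ValueError:
        return None
    return amount, parts[2].upper(), parts[3].upper()
```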
---
## ⚡ Why is Zehnex Pro fast?
- **httpx** async HTTP — hundreds of concurrent requests
- **asyncio.Semaphore** — parallel message processing
- **Keep-alive connections** — no TCP reconnection overhead
- **Smart caching** — currency rates cached for 5 minutes
- **Executor** — heavy operations run in a thread pool
---
## 📋 Requirements
- Python 3.9+
- `httpx` — async HTTP
- `yt-dlp` — video downloading
- `reportlab` — PDF generation
- `qrcode[pil]` — QR codes
- `Pillow` — image processing
---
## 📄 License
MIT License — use it, modify it, and share it freely!
---
*Zehnex Pro 1.0.1 — Made with ❤️ for Uzbek developers*
| text/markdown | null | Zehnex Team <zehnex@example.com> | null | null | null | telegram, bot, framework, async, youtube-downloader, currency, wikipedia, qrcode, pdf | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Communications :: Chat",
"Topic :: Software Development :: Libraries :: Python Modules",
"Framework :: AsyncIO"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.27.0",
"yt-dlp>=2024.1.0",
"reportlab>=4.0.0",
"qrcode[pil]>=7.4.2",
"Pillow>=10.0.0",
"fpdf2>=2.7.0; extra == \"all\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"black; extra == \"dev\"",
"mypy; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/zehnex-py/zehnex-pro",
"Repository, https://github.com/zehnex-py/zehnex-pro",
"Issues, https://github.com/zehnex-py/zehnex-pro/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T15:29:21.416576 | zehnex_pro-1.0.1.tar.gz | 23,013 | 4e/da/07c8e342fac7ec45025e09c053d0f3ee45bd5540895174ddfc36b072baa6/zehnex_pro-1.0.1.tar.gz | source | sdist | null | false | 00fcbaf9ccae2b91d209886d6fadcccf | a07daf4feb414de771e0e9d1fe877a02c2535a69bb8a92c88a14ac952bac98a3 | 4eda07c8e342fac7ec45025e09c053d0f3ee45bd5540895174ddfc36b072baa6 | MIT | [
"LICENSE"
] | 224 |
2.4 | dask-histogram | 2026.2.0 | Histogramming with Dask. | # dask-histogram
> Scale up histogramming with [Dask](https://dask.org).
[](https://github.com/dask-contrib/dask-histogram/actions/workflows/ci.yml)
[](https://dask-histogram.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.org/project/dask-histogram/)
[](https://anaconda.org/conda-forge/dask-histogram)
[](https://pypi.org/project/dask-histogram/)
The [boost-histogram](https://github.com/scikit-hep/boost-histogram)
library provides a performant, object-oriented API for histogramming in
Python. Building on that foundation, dask-histogram
adds support for lazy histogram computations on Dask collections.
See [the documentation](https://dask-histogram.readthedocs.io/) for
examples and the API reference.
| text/markdown | null | Doug Davis <ddavis@ddavis.io> | null | null | BSD-3-Clause | null | [
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"boost-histogram>=1.3.2",
"dask-awkward>=2025",
"dask>=2021.03.0",
"dask-sphinx-theme>=4.0.0; python_version >= \"3.11\" and extra == \"dev\"",
"dask[array,dataframe]; extra == \"dev\"",
"hist; extra == \"dev\"",
"pytest; extra == \"dev\"",
"sphinx>=5.0.0; python_version >= \"3.11\" and extra == \"dev\"",
"dask-sphinx-theme>=4.0.0; python_version >= \"3.11\" and extra == \"docs\"",
"dask[array,dataframe]; extra == \"docs\"",
"sphinx>=5.0.0; python_version >= \"3.11\" and extra == \"docs\"",
"dask[array,dataframe]; extra == \"test\"",
"hist; extra == \"test\"",
"pytest; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/dask-contrib/dask-histogram",
"Documentation, https://dask-histogram.readthedocs.io/",
"Bug Tracker, https://github.com/dask-contrib/dask-histogram/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:29:18.713713 | dask_histogram-2026.2.0.tar.gz | 24,310 | 28/88/5e0a8fdb6e36320426ed0defac955d831310be04b1f5123a10a40bc6980b/dask_histogram-2026.2.0.tar.gz | source | sdist | null | false | e18191ed225a797bbc595d30c06434d8 | 76ea6ab994feaeaf13e2966be32f3cc493aa9d8e69f6fe14b6063c145bd3a58e | 28885e0a8fdb6e36320426ed0defac955d831310be04b1f5123a10a40bc6980b | null | [
"LICENSE"
] | 1,001 |
2.4 | hassette | 0.23.0 | Hassette is a simple, modern, async-first Python framework for building Home Assistant automations. | <p align="center">
<img src="https://raw.githubusercontent.com/NodeJSmith/hassette/main/docs/_static/hassette-logo.svg" />
</p>
# Hassette
[](https://badge.fury.io/py/hassette)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://hassette.readthedocs.io/en/latest/?badge=stable)
[](https://codecov.io/github/NodeJSmith/hassette)
A simple, modern, async-first Python framework for building Home Assistant automations.
**Documentation**: https://hassette.readthedocs.io
## ✨ Why Hassette?
- **Type Safe**: Full type annotations with Pydantic models and comprehensive IDE support
- **Async-First**: Built for modern Python with async/await throughout
- **Dependency Injection**: Clean handler signatures with FastAPI-style dependency injection
- **Persistent Storage**: Built-in disk cache for storing data across restarts, intelligent rate-limiting, and more
- **Simple & Focused**: Just Home Assistant automations - no complexity creep
- **Built-in Web UI**: Monitor and manage your automations from anywhere with a clean and user-friendly dashboard
- **Developer Experience**: Clear error messages, proper logging, hot-reloading, and intuitive APIs
See the [Getting Started guide](https://hassette.readthedocs.io/en/latest/pages/getting-started/) for detailed instructions.
## 🖥️ Web UI
Hassette includes a dashboard for monitoring apps, viewing logs, browsing entities, and inspecting the event bus and scheduler — all in real time.
<p align="center">
<img src="https://raw.githubusercontent.com/NodeJSmith/hassette/main/docs/_static/web_ui_dashboard.png" alt="Hassette Web UI Dashboard" />
</p>
The web UI is enabled by default at `http://<host>:8126/ui/`. See the [Web UI documentation](https://hassette.readthedocs.io/en/latest/pages/web-ui/) for details.
## 🤔 Is Hassette Right for You?
**New to automation frameworks?**
- [Hassette vs. Home Assistant YAML](https://hassette.readthedocs.io/en/latest/pages/getting-started/hassette-vs-ha-yaml/) - Decide if Hassette is right for your needs
**Coming from AppDaemon?**
- [AppDaemon Comparison](https://hassette.readthedocs.io/en/latest/pages/appdaemon-comparison/) - See what's different and how to migrate
## 📖 Examples
Check out the [`examples/`](https://github.com/NodeJSmith/hassette/tree/main/examples) directory for complete working examples:
**Based on AppDaemon's examples**:
- [Battery monitoring](https://github.com/NodeJSmith/hassette/tree/main/examples/apps/battery.py) - Monitor device battery levels
- [Presence detection](https://github.com/NodeJSmith/hassette/tree/main/examples/apps/presence.py) - Track who's home
- [Sensor notifications](https://github.com/NodeJSmith/hassette/tree/main/examples/apps/sensor_notification.py) - Alert on sensor changes
**Real-world apps**:
- [Office Button App](https://github.com/NodeJSmith/hassette/tree/main/examples/apps/office_button_app.py) - Multi-function button handler
- [Laundry Room Lights](https://github.com/NodeJSmith/hassette/tree/main/examples/apps/laundry_room_light.py) - Motion-based lighting
**Configuration examples**:
- [Docker Compose Guide](https://hassette.readthedocs.io/en/latest/pages/getting-started/docker/) - Docker deployment setup
- [HassetteConfig](https://hassette.readthedocs.io/en/latest/reference/hassette/config/config/) - Complete configuration reference
## 🛣️ Status & Roadmap
Hassette is under active development. We follow [semantic versioning](https://semver.org/) and recommend pinning a minor version (e.g., `hassette~=0.x.0`) while the API stabilizes.
Development is tracked in our [GitHub project](https://github.com/users/NodeJSmith/projects/1). Open an issue or PR if you'd like to contribute!
### What's Next?
- 🔐 **Enhanced type safety** - Fully typed service calls and additional state models
- 🏗️ **Entity classes** - Rich entity objects with built-in methods (e.g., `await light.turn_on()`)
- 🔄 **Enhanced error handling** - Better retry logic and error recovery
- 🧪 **Testing improvements** - More comprehensive test coverage and user app testing framework
## 🤝 Contributing
Contributions are welcome! Whether you're:
- 🐛 Reporting bugs or issues
- 💡 Suggesting features or improvements
- 📝 Improving documentation
- 🔧 Contributing code
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines on getting started.
## ⭐ Star History
[](https://www.star-history.com/#NodeJSmith/hassette&type=date&legend=top-left)
## 📄 License
[MIT](LICENSE)
| text/markdown | Jessica | Jessica <12jessicasmith34@gmail.com> | null | null | null | home-assistant, automation, async, typed, framework, smart-home, iot | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Home Automation",
"Topic :: Software Development :: Libraries :: Python Modules",
"Framework :: AsyncIO",
"Typing :: Typed"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"aiohttp>=3.11.18",
"anyio>=4.10.0",
"coloredlogs>=15.0.1",
"fastapi>=0.128.6",
"jinja2>=3.1.0",
"croniter>=6.0.0",
"deepdiff>=8.6.1",
"diskcache>=5.6.3",
"fair-async-rlock>=1.0.7",
"frozendict>=2.4.7",
"glom>=24.11.0",
"humanize>=4.13.0",
"orjson>=3.10.18",
"packaging>=25.0",
"platformdirs>=4.3.8",
"pydantic-settings>=2.10.0",
"python-dotenv>=1.1.0",
"tenacity>=9.1.2",
"typing-extensions==4.15.*",
"uvicorn[standard]>=0.34.0",
"watchfiles>=1.1.0",
"whenever==0.9.*"
] | [] | [] | [] | [
"Homepage, https://github.com/nodejsmith/hassette",
"Repository, https://github.com/nodejsmith/hassette",
"Documentation, https://github.com/nodejsmith/hassette#readme",
"Bug Reports, https://github.com/nodejsmith/hassette/issues",
"Changelog, https://github.com/nodejsmith/hassette/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:29:10.771147 | hassette-0.23.0.tar.gz | 232,336 | 4c/3c/58684abfe8927169a5befb99ae8565b52fb087f86f2dc63b665f0ad01fa9/hassette-0.23.0.tar.gz | source | sdist | null | false | 1c4f147a28697a51e7426b89f6789937 | fbb046d483bd192541383e211750ed78f854a783741cad13ce7bee4a5b40be7d | 4c3c58684abfe8927169a5befb99ae8565b52fb087f86f2dc63b665f0ad01fa9 | MIT | [] | 217 |
2.4 | mkdocs-material-joapuiib | 3.0.0 | Custom theme for MkDocs | # mkdocs-material-joapuiib
[](https://squidfunk.github.io/mkdocs-material/)
[](https://pypi.org/project/mkdocs-material-joapuiib/)
A [MkDocs](https://www.mkdocs.org/) theme based on [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/)
for creating teaching materials.
A demo of the theme is available at [joapuiib.github.io/mkdocs-material-joapuiib](https://joapuiib.github.io/mkdocs-material-joapuiib/).
## Features
This theme includes the following features:
- Cover pages for documents via the `document.html` template.
- A `slides.html` template for building presentations with [Reveal.js](https://revealjs.com/).
- [Admonitions](https://squidfunk.github.io/mkdocs-material/reference/admonitions/): adds the `docs`, `spoiler` and `solution` admonition types.
- KaTeX enabled by default for writing mathematical formulas.
- An example of translating admonition titles into Valencian (see the `mkdocs.yml` file).
- Syntax highlighting extended with [pygments-shell-console](https://github.com/joapuiib/pygments-shell-console/)
to highlight the console prompt and the output of Git commands.
## Installation
This theme can be installed via pip:
```bash
pip install mkdocs-material-joapuiib
```
Add the following to your `mkdocs.yml` configuration file:
```yaml
theme:
name: 'material-joapuiib'
```
| text/markdown | null | Joan Puigcerver <joapuiib@gmail.com> | null | Joan Puigcerver <joapuiib@gmail.com> | null | null | [] | [] | null | null | >=3.5 | [] | [] | [] | [
"markdown-captions~=2.1",
"markdown_grid_tables~=0.3",
"mkdocs~=1.6",
"mkdocs-alias-plugin~=0.8",
"mkdocs-awesome-nav~=3.0",
"mkdocs-drawio~=1.7",
"mkdocs-exclude-search~=0.6",
"mkdocs-git-revision-date-localized-plugin~=1.2",
"mkdocs-glightbox~=0.4",
"mkdocs-macros-plugin~=1.2",
"mkdocs-materialx~=10.0",
"mkdocs-static-i18n~=1.2",
"pygments-shell-console~=0.8",
"pymdown-extensions~=10.12",
"pymdown-rubric-extension~=0.1",
"pymdownx-inline-blocks~=0.1",
"unidecode~=1.3",
"markdown-span-attr"
] | [] | [] | [] | [
"Repository, https://github.com/joapuiib/mkdocs-material-joapuiib"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:28:55.968624 | mkdocs_material_joapuiib-3.0.0.tar.gz | 356,273 | 9d/a7/faf3aaa8fd57ea76c076d49ddc128b3152e970afc0837c6f6202726dcd0a/mkdocs_material_joapuiib-3.0.0.tar.gz | source | sdist | null | false | 196b4a651a68097c8a55eea2a5f71bbc | bdc9ee8124ec491ae77bf6711fca5e9b731f2a99baf762ef64f1860e4ce02ada | 9da7faf3aaa8fd57ea76c076d49ddc128b3152e970afc0837c6f6202726dcd0a | null | [] | 216 |
2.4 | pyaxencoapi | 1.0.6 | Library for using the Axenco REST/Websocket API | # PyAxencoAPI
Asynchronous Python client for interacting with MyNeomitis connected devices via the Axenco REST/Websocket API.
## Installation
```bash
pip install pyaxencoapi
```
| text/markdown | null | Léo Periou <leo.periou@axenco.com> | null | null | MIT | axenco, api, iot, myneomitis, async | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Home Automation",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.8.0",
"python-socketio[asyncio_client]>=5.0.0",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"aiohttp; extra == \"dev\"",
"aioresponses; extra == \"dev\"",
"python-socketio[asyncio_client]; extra == \"dev\"",
"coverage; extra == \"dev\"",
"pytest-cov; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/AXENCO-FR/ha-py-axenco-api",
"Repository, https://github.com/AXENCO-FR/ha-py-axenco-api",
"Issues, https://github.com/AXENCO-FR/ha-py-axenco-api/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:28:46.252382 | pyaxencoapi-1.0.6.tar.gz | 8,310 | e0/34/675b85559474fc4e707a4449826ed427db2b359153223f85a15eac9c5daa/pyaxencoapi-1.0.6.tar.gz | source | sdist | null | false | 7dc069debd7d4f7242d14f7463f07e47 | 8e74a042ef749abd0a6ea3de549f71104527123103f6ff12a95c26369ae68d67 | e034675b85559474fc4e707a4449826ed427db2b359153223f85a15eac9c5daa | null | [
"LICENSE"
] | 197 |
2.4 | dask-awkward | 2026.2.0 | Awkward Array meets Dask | dask-awkward
============
> Connecting [awkward-array](https://awkward-array.org) and
[Dask](https://dask.org/).
[](https://github.com/dask-contrib/dask-awkward/actions/workflows/pypi-tests.yml)
[](https://github.com/dask-contrib/dask-awkward/actions/workflows/conda-tests.yml)
[](https://dask-awkward.readthedocs.io/en/latest/?badge=latest)
[](https://codecov.io/gh/dask-contrib/dask-awkward/branch/main)
[](https://pypi.org/project/dask-awkward)
[](https://anaconda.org/conda-forge/dask-awkward)
Installing
----------
Releases are available from PyPI and conda-forge.
PyPI:
```
pip install dask-awkward
```
conda-forge:
```
conda install dask-awkward -c conda-forge
```
Documentation
-------------
Documentation is hosted at
[https://dask-awkward.readthedocs.io/](https://dask-awkward.readthedocs.io/).
Acknowledgements
----------------
Support for this work was provided by NSF grant [OAC-2103945](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2103945).
| text/markdown | null | Doug Davis <ddavis@ddavis.io>, Martin Durant <mdurant@anaconda.com> | null | Doug Davis <ddavis@ddavis.io>, Martin Durant <mdurant@anaconda.com> | BSD-3-Clause | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Software Development"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"awkward>=2.8.12",
"cachetools",
"dask<2025.4.0,>=2023.4.0",
"typing-extensions>=4.8.0",
"pyarrow; extra == \"complete\"",
"pyarrow; extra == \"docs\"",
"sphinx-book-theme; extra == \"docs\"",
"sphinx-codeautolink; extra == \"docs\"",
"sphinx-design; extra == \"docs\"",
"pyarrow; extra == \"io\"",
"aiohttp; python_version < \"3.12\" and extra == \"test\"",
"dask-histogram; extra == \"test\"",
"dask[dataframe]; extra == \"test\"",
"distributed<2025.4.0,>=2025.3.1; extra == \"test\"",
"hist; extra == \"test\"",
"pandas; extra == \"test\"",
"pyarrow; extra == \"test\"",
"pytest-cov>=3.0.0; extra == \"test\"",
"pytest>=6.0; extra == \"test\"",
"requests; extra == \"test\"",
"uproot>=5.1.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/dask-contrib/dask-awkward",
"Bug Tracker, https://github.com/dask-contrib/dask-awkward/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:28:45.082366 | dask_awkward-2026.2.0.tar.gz | 77,650 | 53/a3/2bfd24d79e65ccd165ab515eb8642a90d56f5f5dfd139c20b00edfe2da57/dask_awkward-2026.2.0.tar.gz | source | sdist | null | false | be961d90e7fc34ed674802ef6ffe8048 | 24142baa536d769e089b198f5dc5320372f49c6612fc10d2a6fd8c7b446e2cd9 | 53a32bfd24d79e65ccd165ab515eb8642a90d56f5f5dfd139c20b00edfe2da57 | null | [
"LICENSE"
] | 5,215 |
2.4 | fastpluggy-redis | 0.0.24 | Redis Tools plugin for Fastpluggy | # Redis Tools for FastPluggy


A powerful Redis browser and management plugin for FastPluggy applications.
This plugin provides a user-friendly interface to browse, search, and manage Redis databases and keys.
## Features
- **Redis Browser UI**: Intuitive web interface to browse and manage Redis data
- **Multi-Database Support**: Switch between Redis databases easily
- **Key Management**: View, search, and delete Redis keys
- **Data Type Support**: Full support for all Redis data types (string, list, hash, set, zset)
- **Pickled Data Handling**: Automatic detection and unpickling of Python pickled data
- **TTL Management**: View time-to-live for keys
- **Database Flushing**: Clear entire databases with confirmation
- **Scheduled Cleanup**: Optional scheduled task to clean expired keys (requires tasks_worker plugin)
## Installation
1. Install the official plugin package:
```bash
pip install fastpluggy-plugins
```
or
```bash
pip install fastpluggy-redis
```
2. Add the plugin to your FastPluggy application:
There are several ways to do this:
#### Method 1: Using the Admin Interface
1. Navigate to `/admin/plugins` in your FastPluggy application
2. Click on "Install New Plugin"
3. Enter "redis_tools" as the plugin name
4. Click "Install"
#### Method 2: Using the Configuration File
Add the plugin to your FastPluggy configuration file:
```python
# config.py
FASTPLUGGY_PLUGINS = [
# other plugins...
"redis_tools",
]
```
#### Method 3: Programmatically
Add the plugin programmatically in your application code:
```python
from fastapi import FastAPI
from fastpluggy import FastPluggy
app = FastAPI()
pluggy = FastPluggy(app)
# Add the Redis Tools plugin
pluggy.add_plugin("redis_tools")
```
For more detailed installation and configuration instructions, see the [Installation Guide](docs/installation.md).
## Configuration
Configure Redis Tools through environment variables or directly in your FastPluggy configuration:
| Setting | Description | Default |
|---------|-------------|---------|
| `REDIS_DSN` | Redis connection string (overrides other connection settings if provided) | `None` |
| `redis_host` | Redis server hostname | `localhost` |
| `redis_port` | Redis server port | `6379` |
| `redis_db` | Default Redis database index | `0` |
| `use_ssl` | Whether to use SSL for Redis connection | `False` |
| `redis_password` | Redis server password | `None` |
| `redis_decode_responses` | Whether to decode Redis responses | `False` |
| `keys_limit` | Maximum number of keys to display | `100` |
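As an illustration, a DSN-based setup might look like this. The hostname and password are placeholders, and the exact environment variable names may differ from the setting names shown in the table:

```shell
# Hypothetical values -- per the table above, REDIS_DSN overrides the
# individual connection settings when it is provided.
export REDIS_DSN="redis://:s3cret@redis.example.com:6379/0"

# Equivalent piecewise configuration when no DSN is set:
export redis_host="redis.example.com"
export redis_port="6379"
export redis_db="0"
export redis_password="s3cret"
```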
## Usage
### Web Interface
Once installed, access the Redis browser at `/redis_tools/` in your FastPluggy application. The interface allows you to:
1. Select and switch between Redis databases
2. Search for keys using patterns (e.g., `user:*`)
3. View key details including type, TTL, and size
4. Examine and format key values based on their data type
5. Delete individual keys
6. Flush entire databases
### Programmatic Usage
You can also use the Redis Tools connector in your code:
```python
from redis_tools.redis_connector import RedisConnection
# Create a connection
redis_conn = RedisConnection()
# Test connection
if redis_conn.test_connection():
# Get all keys matching a pattern
keys = redis_conn.get_keys("user:*")
# Get details for a specific key
key_details = redis_conn.get_key_details("user:1001")
# Switch to another database
redis_conn.select_db(2)
# Delete a key
redis_conn.delete_key("temporary_key")
```
## Development
### Requirements
- Python 3.10+
- FastPluggy
- Redis
### Testing
Run tests with pytest:
```bash
cd redis_tools
pip install -r tests/requirements.txt
pytest
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.
| text/markdown | FastPluggy Team | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"redis",
"fastpluggy-ui-tools>=0.0.12"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:28:24.725570 | fastpluggy_redis-0.0.24.tar.gz | 17,641 | ba/47/742e62d6e1867db14eb88fe249e2d602c718f26970e31e5bcd957c0e2f09/fastpluggy_redis-0.0.24.tar.gz | source | sdist | null | false | 6bb451e93b88482d2d777e6ba31e57a5 | cd20766a936b760bea045e774b0b6ee14df050218c585985cc92f4bb1dcb190c | ba47742e62d6e1867db14eb88fe249e2d602c718f26970e31e5bcd957c0e2f09 | null | [] | 208 |
2.4 | holiday-rlag | 0.1.9 | Fetches holiday information from https://github.com/vacanza/holidays and applies reverse lagging (that is, future indicators) for time series training. | # holiday-rlag
Fetches holiday information using the [holidays](https://github.com/vacanza/holidays) library and applies reverse lagging (future indicators) for time series training.
## Features
- Fetches holidays for a given date range and country
- Adds reverse lagged indicators for holidays (e.g., is a holiday coming up in the next N days?)
- Flags for Easter, Christmas, and bank holidays
## Installation
```bash
pip install holiday-rlag
# or, if using uv:
uv pip install .
```
## Requirements
- Python 3.12+
- [holidays](https://pypi.org/project/holidays/)
## Usage
```python
from holiday_rlag.laggedholidays import fetch_holidays
import datetime
date_start = datetime.date(2023, 1, 1)
date_end = datetime.date(2023, 12, 31)
holidays = fetch_holidays(date_start, date_end, country_code="DE", reverse_lag=7)
```
## API
### `fetch_holidays(...)`
Fetches holidays for a given date range and country, with optional reverse lagging.
**Parameters:**
- `date_start` (`datetime.date`): Start date
- `date_end` (`datetime.date`): End date
- `country_code` (`str`): Country code (default: "DE")
- `reverse_lag` (`int`): Number of days to look ahead for reverse lagging (default: 7)
- `col_is_easter`, `col_is_christmas`, `col_is_bank_holiday`: Column names for flags
- `col_lag_prefix`, `col_lag_suffix`: Prefix/suffix for lagged columns
**Returns:**
- `list[dict]`: List of dictionaries with holiday and lagged indicator flags
**Note:** The result is not a complete date list. Fill all dates you need and apply `fillna(0)` if converting to a DataFrame.
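If you are not converting to a DataFrame, the gap-filling can be done with the stdlib instead. This sketch assumes a sample row shape like the example output; it is not part of the library:

```python
import datetime

# Sparse output in the shape returned by fetch_holidays (hypothetical sample).
sparse = [
    {"date": "2023-04-09", "is_easter": 1, "is_christmas": 0, "is_bank_holiday": 1},
]
by_date = {row["date"]: row for row in sparse}

start, end = datetime.date(2023, 4, 8), datetime.date(2023, 4, 10)
default = {"is_easter": 0, "is_christmas": 0, "is_bank_holiday": 0}

# Build a dense day-by-day list, filling missing dates with zeros
# (the stdlib equivalent of fillna(0)).
dense = []
day = start
while day <= end:
    key = day.isoformat()
    dense.append({"date": key, **by_date.get(key, default)})
    day += datetime.timedelta(days=1)

print(len(dense))  # 3 rows: 2023-04-08 .. 2023-04-10
```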
## Example Output
```json
[
{
"date": "2023-04-09",
"is_easter": 1,
"is_christmas": 0,
"is_bank_holiday": 1,
"is_easter_in_next_1_days": 1,
...
},
...
]
```
## License
MIT License. See LICENSE file.
| text/markdown | null | Sven Flake <sflake@paiqo.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"holidays>=0.91"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T15:27:55.879082 | holiday_rlag-0.1.9.tar.gz | 3,963 | 41/ad/680764a9947c8473ed79a57d908cf60be4ab7189942a0a3e7229ef61e8cb/holiday_rlag-0.1.9.tar.gz | source | sdist | null | false | 3e5d86eba55b08275e078c5a7df6d6a0 | ec0cdc2be74d789f7cad9cdd4ff7c60c3b356cb98793422271dd04b265c6ccfb | 41ad680764a9947c8473ed79a57d908cf60be4ab7189942a0a3e7229ef61e8cb | null | [
"LICENSE"
] | 202 |
2.4 | cmp3 | 1.0.3 | This project updates the concept of __cmp__ for python3. | ====
cmp3
====
Visit the website `https://cmp3.johannes-programming.online/ <https://cmp3.johannes-programming.online/>`_ for more information.
| text/x-rst | null | Johannes <johannes.programming@gmail.com> | null | null | The MIT License (MIT)
Copyright (c) 2026 Johannes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"setdoc<2,>=1.2.10"
] | [] | [] | [] | [
"Download, https://pypi.org/project/cmp3/#files",
"Index, https://pypi.org/project/cmp3/",
"Source, https://github.com/johannes-programming/cmp3/",
"Website, https://cmp3.johannes-programming.online/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:27:44.488900 | cmp3-1.0.3.tar.gz | 8,294 | 4f/5c/3ffea09d37b77d4e257be9e0a55a3c34d1ed69c43fd5c39d1b17ca73e007/cmp3-1.0.3.tar.gz | source | sdist | null | false | 6f72c2306617cae68eb544ac52d6dcd6 | b5d23e53a9b5dc839ab63051c3da4cf22158cc8393c8674d84f0f93e77825df8 | 4f5c3ffea09d37b77d4e257be9e0a55a3c34d1ed69c43fd5c39d1b17ca73e007 | null | [
"LICENSE.txt"
] | 199 |
2.4 | mnesis | 0.1.1 | Lossless Context Management — persistent memory for LLM agent sessions | <p align="center">
<img src="https://raw.githubusercontent.com/Lucenor/mnesis/main/docs/images/logo_icon.png" alt="mnesis logo" width="120"><br><br>
<img src="https://raw.githubusercontent.com/Lucenor/mnesis/main/docs/images/logo_wordmark.png" alt="mnesis" width="320"><br><br>
<em>Lossless Context Management for long-horizon LLM agents</em>
<br><br>
<a href="https://pypi.org/project/mnesis/"><img src="https://img.shields.io/pypi/v/mnesis?color=5c6bc0&labelColor=1a1a2e" alt="PyPI"></a>
<a href="https://pypi.org/project/mnesis/"><img src="https://img.shields.io/pypi/pyversions/mnesis?color=5c6bc0&labelColor=1a1a2e" alt="Python"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-Apache%202.0-5c6bc0?labelColor=1a1a2e" alt="License"></a>
<a href="https://github.com/Lucenor/mnesis/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/Lucenor/mnesis/ci.yml?color=5c6bc0&labelColor=1a1a2e" alt="CI"></a>
<a href="https://codecov.io/github/Lucenor/mnesis"><img src="https://img.shields.io/codecov/c/github/Lucenor/mnesis?color=5c6bc0&labelColor=1a1a2e" alt="Coverage"></a>
<a href="https://mnesis.lucenor.tech"><img src="https://img.shields.io/badge/docs-mnesis.lucenor.tech-5c6bc0?labelColor=1a1a2e" alt="Docs"></a>
<a href="https://github.com/Lucenor/mnesis/attestations"><img src="https://img.shields.io/badge/provenance-attested-5c6bc0?labelColor=1a1a2e&logo=githubactions&logoColor=white" alt="Attestation"></a>
<a href="https://scorecard.dev/viewer/?uri=github.com/Lucenor/mnesis"><img src="https://api.scorecard.dev/projects/github.com/Lucenor/mnesis/badge" alt="OpenSSF Scorecard"></a>
</p>
---
LLMs suffer from **context rot**: accuracy degrades 30–40% before hitting nominal token limits — not because the model runs out of space, but because reasoning quality collapses as the window fills with stale content.
The standard fix — telling the model to "summarize itself" — is unreliable. The model may silently drop constraints, forget file paths, or produce a summary that is itself too large.
**mnesis** solves this by making the *engine* — not the model — responsible for memory. It is a Python implementation of the [LCM: Lossless Context Management](https://github.com/Lucenor/mnesis/blob/main/docs/LCM.pdf) architecture.
---
## Benchmarks
Evaluated on [OOLONG](https://github.com/abertsch72/oolong), a long-context reasoning and aggregation benchmark. Both LCM-managed and Claude Code agents are built on Claude Opus 4.6; the gap comes entirely from context architecture.
> The charts below compare LCM-managed context against Claude Code and unmanaged Opus 4.6 across context lengths from 8K to 1M tokens.
**Score improvement over raw Opus 4.6 at each context length:**

**Absolute scores vs raw Opus 4.6 baseline:**

Raw Opus 4.6 uses no context management — scores collapse past 32K tokens.
---
## How it works
Traditional agentic frameworks ("RLM" — Recursive Language Models) ask the model to manage its own context via tool calls. LCM moves that responsibility to a deterministic engine layer:

The engine handles memory deterministically so the model can focus entirely on the task.
---
## Key properties
| | RLM (e.g. raw Claude Code) | mnesis |
|---|---|---|
| Context trigger | Model judgment | Token threshold |
| Summarization failure | Silent data loss | Three-level fallback — never fails |
| Tool output growth | Unbounded | Backward-scan pruner |
| Large files | Inline (eats budget) | Content-addressed references |
| Parallel workloads | Sequential or ad-hoc | `LLMMap` / `AgenticMap` operators |
| History | Ephemeral | Append-only SQLite log |
---
## Quick Start
```bash
uv add mnesis
```
```python
import asyncio
from mnesis import MnesisSession
async def main():
async with await MnesisSession.create(
model="anthropic/claude-opus-4-6",
system_prompt="You are a helpful assistant.",
) as session:
result = await session.send("Explain the GIL in Python.")
print(result.text)
asyncio.run(main())
```
No API key needed to try it — set `MNESIS_MOCK_LLM=1` and run any of the [examples](#examples).
### Provider support
mnesis works with any LLM provider via [litellm](https://docs.litellm.ai/). Pass the model string and set the corresponding API key:
| Provider | Model string | API key env var |
|---|---|---|
| Anthropic | `"anthropic/claude-opus-4-6"` | `ANTHROPIC_API_KEY` |
| OpenAI | `"openai/gpt-4o"` | `OPENAI_API_KEY` |
| Google Gemini | `"gemini/gemini-1.5-pro"` | `GEMINI_API_KEY` |
| OpenRouter | `"openrouter/meta-llama/llama-3.1-70b-instruct"` | `OPENROUTER_API_KEY` |
See the [Provider Configuration guide](https://mnesis.lucenor.tech/providers/) for details on every supported provider.
### BYO-LLM — use your own SDK
If you already use the Anthropic, OpenAI, or another SDK directly, use `session.record()` to let mnesis handle memory and compaction without routing calls through litellm:
```python
import anthropic
from mnesis import MnesisSession, TokenUsage
client = anthropic.Anthropic()
session = await MnesisSession.create(model="anthropic/claude-opus-4-6")
user_text = "Explain quantum entanglement."
response = client.messages.create(
model="claude-opus-4-6",
max_tokens=1024,
messages=[{"role": "user", "content": user_text}],
)
await session.record(
user_message=user_text,
assistant_response=response.content[0].text,
tokens=TokenUsage(
input=response.usage.input_tokens,
output=response.usage.output_tokens,
),
)
```
---
## Core Concepts
### Immutable Store + Active Context
Every message and tool result is appended to an SQLite log and never modified. Each turn, the engine assembles a *curated view* of the log that fits the model's token budget — the Active Context.
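The append-only pattern can be sketched with in-memory SQLite. Table and column names here are illustrative, not mnesis internals:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, role TEXT, content TEXT)")

def append(role: str, content: str) -> None:
    # Inserts only -- rows are never updated or deleted.
    db.execute("INSERT INTO log (role, content) VALUES (?, ?)", (role, content))

append("user", "Explain the GIL.")
append("assistant", "The GIL serializes bytecode execution...")

# The "active context" is a curated read over the full log,
# e.g. the most recent rows that fit the token budget.
recent = db.execute("SELECT role, content FROM log ORDER BY id DESC LIMIT 1").fetchall()
print(recent)  # [('assistant', 'The GIL serializes bytecode execution...')]
```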
### Three-Level Compaction
When token usage crosses the threshold, the `CompactionEngine` escalates automatically:
1. **Level 1** — Structured LLM summary: Goal / Discoveries / Accomplished / Remaining
2. **Level 2** — Aggressive compression: drop reasoning, maximum conciseness
3. **Level 3** — Deterministic truncation: no LLM, always fits, never fails
Compaction runs asynchronously and never blocks a turn.
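The escalation idea, in miniature: try richer strategies first, then fall back to a deterministic truncation that always succeeds. The functions below are stand-ins, not the actual `CompactionEngine` API:

```python
def summarize_structured(text: str, budget: int) -> str:
    raise RuntimeError("LLM summary too large")  # simulate a Level 1 failure

def summarize_aggressive(text: str, budget: int) -> str:
    raise RuntimeError("LLM unavailable")        # simulate a Level 2 failure

def truncate(text: str, budget: int) -> str:
    return text[:budget]                         # Level 3: no LLM, never fails

def compact(text: str, budget: int) -> str:
    # Escalate through the LLM-backed levels; truncation is the guaranteed floor.
    for level in (summarize_structured, summarize_aggressive):
        try:
            return level(text, budget)
        except RuntimeError:
            continue
    return truncate(text, budget)

print(compact("x" * 100, 10))  # 'xxxxxxxxxx' -- deterministic fallback
```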
### Tool Output Pruning
The `ToolOutputPruner` scans backward through history and tombstones completed tool outputs that fall outside a configurable protect window (default 40K tokens). Tombstoned outputs are replaced by a compact marker in the context — the data is still in the immutable store.
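The backward scan can be sketched as follows: walk history newest-to-oldest, keep everything inside the protect window, and tombstone older tool outputs. The entry shape and token counts are illustrative, not mnesis internals:

```python
history = [
    {"id": 1, "kind": "tool_output", "tokens": 30_000},
    {"id": 2, "kind": "message",     "tokens": 5_000},
    {"id": 3, "kind": "tool_output", "tokens": 20_000},
    {"id": 4, "kind": "message",     "tokens": 10_000},
]

PROTECT = 40_000  # most recent tokens that are never pruned

def prune(history, protect=PROTECT):
    seen = 0
    tombstoned = []
    for entry in reversed(history):  # newest first
        seen += entry["tokens"]
        if seen > protect and entry["kind"] == "tool_output":
            tombstoned.append(entry["id"])  # replaced by a marker in context
    return tombstoned

print(prune(history))  # [1] -- only the oldest tool output falls outside the window
```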
### Large File References
Files exceeding the inline threshold (default 10K tokens) are stored externally as `FileRefPart` objects with structural exploration summaries — AST outlines for Python, schema keys for JSON/YAML, headings for Markdown. The file is never re-read unless the model explicitly requests it.
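Content addressing means the reference key is a hash of the file bytes, so identical content is stored (and summarized) only once. This sketch mirrors the idea, not the actual `FileRefPart` implementation:

```python
import hashlib

store = {}

def put(content: bytes) -> str:
    # The reference is derived from the content itself.
    ref = hashlib.sha256(content).hexdigest()
    store.setdefault(ref, content)  # cache hit if already present
    return ref

ref_a = put(b"def f():\n    return 1\n")
ref_b = put(b"def f():\n    return 1\n")
print(ref_a == ref_b, len(store))  # True 1 -- same content, one stored entry
```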
### Parallel Operators
- **`LLMMap`** — stateless parallel LLM calls over a list of inputs with Pydantic schema validation and per-item retry. O(1) context cost to the caller.
- **`AgenticMap`** — independent sub-agent sessions per input item, each with full multi-turn reasoning. The parent session sees only the final output.
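The bounded-parallel pattern behind these operators can be sketched with `asyncio` and a semaphore; `process()` here is a stand-in for an LLM call, not the `LLMMap` API:

```python
import asyncio

async def process(item: int) -> int:
    await asyncio.sleep(0)  # placeholder for a network round-trip
    return item * 2

async def parallel_map(items, concurrency=16):
    # The semaphore caps in-flight calls; gather preserves input order.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(item):
        async with sem:
            return await process(item)

    return await asyncio.gather(*(bounded(i) for i in items))

results = asyncio.run(parallel_map([1, 2, 3]))
print(results)  # [2, 4, 6] -- order matches the input list
```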
---
## Configuration
```python
from mnesis import MnesisConfig, CompactionConfig, FileConfig
config = MnesisConfig(
compaction=CompactionConfig(
auto=True,
buffer=20_000, # tokens reserved for compaction output
prune=True,
prune_protect_tokens=40_000, # never prune within last 40K tokens
prune_minimum_tokens=20_000, # skip pruning if volume is too small
level2_enabled=True,
),
file=FileConfig(
inline_threshold=10_000, # files > 10K tokens → FileRefPart
),
doom_loop_threshold=3, # consecutive identical tool calls before warning
)
```
| Parameter | Default | Description |
|---|---|---|
| `compaction.auto` | `True` | Auto-trigger on overflow |
| `compaction.buffer` | `20,000` | Tokens reserved for summary output |
| `compaction.prune_protect_tokens` | `40,000` | Recent tokens never pruned |
| `compaction.prune_minimum_tokens` | `20,000` | Minimum prunable volume |
| `file.inline_threshold` | `10,000` | Inline limit in tokens |
| `operators.llm_map_concurrency` | `16` | `LLMMap` parallel calls |
| `operators.agentic_map_concurrency` | `4` | `AgenticMap` parallel sessions |
| `doom_loop_threshold` | `3` | Identical tool call threshold |
---
## Examples
All examples run without an API key via `MNESIS_MOCK_LLM=1`:
```bash
MNESIS_MOCK_LLM=1 uv run python examples/01_basic_session.py
MNESIS_MOCK_LLM=1 uv run python examples/05_parallel_processing.py
```
| File | Demonstrates |
|---|---|
| `examples/01_basic_session.py` | `create()`, `send()`, context manager, compaction monitoring |
| `examples/02_long_running_agent.py` | EventBus subscriptions, streaming callbacks, manual `compact()` |
| `examples/03_tool_use.py` | Tool lifecycle, `ToolPart` streaming states, tombstone inspection |
| `examples/04_large_files.py` | `LargeFileHandler`, `FileRefPart`, cache hits, exploration summaries |
| `examples/05_parallel_processing.py` | `LLMMap` with Pydantic schema, `AgenticMap` sub-sessions |
| `examples/06_byo_llm.py` | `record()` — BYO-LLM, inject turns from your own SDK |
---
## Documentation
Full documentation is available at **[mnesis.lucenor.tech](https://mnesis.lucenor.tech)**, including:
- [Getting Started](https://mnesis.lucenor.tech/getting-started/)
- [Provider Configuration](https://mnesis.lucenor.tech/providers/)
- [BYO-LLM](https://mnesis.lucenor.tech/byo-llm/)
- [Concepts](https://mnesis.lucenor.tech/concepts/)
- [API Reference](https://mnesis.lucenor.tech/api/)
---
## Architecture
```
MnesisSession
├── ImmutableStore (SQLite append-only log)
├── SummaryDAGStore (logical DAG over is_summary messages)
├── ContextBuilder (assembles LLM message list each turn)
├── CompactionEngine (three-level escalation + atomic commit)
│ ├── ToolOutputPruner (backward-scanning tombstone pruner)
│ └── levels.py (level 1 / 2 / 3 functions)
├── LargeFileHandler (content-addressed file references)
├── TokenEstimator (tiktoken + heuristic fallback)
└── EventBus (in-process pub/sub)
Operators (independent of session)
├── LLMMap (parallel stateless LLM calls)
└── AgenticMap (parallel sub-agent sessions)
```
---
## Contributing
Contributions are welcome — bug reports, feature requests, and pull requests alike.
See [CONTRIBUTING.md](https://github.com/Lucenor/mnesis/blob/main/CONTRIBUTING.md) for the full guide. The short version:
```bash
# Clone and install with dev dependencies
git clone https://github.com/Lucenor/mnesis.git
cd mnesis
uv sync --group dev
# Lint + format
uv run ruff check src/ tests/
uv run ruff format src/ tests/
# Type check
uv run mypy src/mnesis
# Test (with coverage)
uv run pytest
# Run examples without API keys
MNESIS_MOCK_LLM=1 uv run python examples/01_basic_session.py
MNESIS_MOCK_LLM=1 uv run python examples/05_parallel_processing.py
# Build docs locally
uv sync --group docs
uv run mkdocs serve
```
Open an issue before starting large changes — it avoids duplicated effort.
---
## License
Apache 2.0 — see [LICENSE](https://github.com/Lucenor/mnesis/blob/main/LICENSE) and [NOTICE](https://github.com/Lucenor/mnesis/blob/main/NOTICE).
| text/markdown | Mnesis Authors | null | null | null | Apache-2.0 | agents, compaction, context, llm, memory | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.20.0",
"anyio>=4.4.0",
"jinja2>=3.1.4",
"jsonschema>=4.22.0",
"litellm>=1.40.0",
"pydantic>=2.7.0",
"python-magic>=0.4.27",
"python-ulid>=3.0.0",
"structlog>=24.2.0",
"tiktoken>=0.7.0"
] | [] | [] | [] | [
"Homepage, https://mnesis.lucenor.tech",
"Documentation, https://mnesis.lucenor.tech",
"Source, https://github.com/Lucenor/mnesis",
"Changelog, https://mnesis.lucenor.tech/changelog/",
"Issues, https://github.com/Lucenor/mnesis/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:27:11.208990 | mnesis-0.1.1.tar.gz | 2,006,682 | 7e/e1/c2efff4e2f86c768315499b2b44bca8332953635101cb8541f03ca75a82b/mnesis-0.1.1.tar.gz | source | sdist | null | false | 297a197757d8cae70a9b096c6225b3a3 | 4cc1932a1ef50ea175cff76466d453340db6b8f2562a55d932441d0c0086d627 | 7ee1c2efff4e2f86c768315499b2b44bca8332953635101cb8541f03ca75a82b | null | [
"LICENSE",
"NOTICE"
] | 200 |
2.4 | palfrey | 0.1.2 | A clean-room ASGI server with Uvicorn-compatible CLI and behavior mapping. | # Palfrey
<p align="center">
<a href="https://palfrey.dymmond.com"><img src="https://res.cloudinary.com/dymmond/image/upload/v1771522360/Palfrey/Logo/logo_ocxyty.png" alt='Palfrey'></a>
</p>
<p align="center">
<em>Palfrey is a clean-room, high-performance Python ASGI server with source-traceable parity mapping.</em>
</p>
<p align="center">
<a href="https://github.com/dymmond/palfrey/actions/workflows/ci.yml/badge.svg?event=push&branch=main" target="_blank">
<img src="https://github.com/dymmond/palfrey/actions/workflows/ci.yml/badge.svg?event=push&branch=main" alt="Test Suite">
</a>
<a href="https://pypi.org/project/palfrey" target="_blank">
<img src="https://img.shields.io/pypi/v/palfrey?color=%2334D058&label=pypi%20package" alt="Package version">
</a>
<a href="https://pypi.org/project/palfrey" target="_blank">
<img src="https://img.shields.io/pypi/pyversions/palfrey.svg?color=%2334D058" alt="Supported Python versions">
</a>
</p>
---
**Documentation**: [https://palfrey.dymmond.com](https://palfrey.dymmond.com) 📚
**Source Code**: [https://github.com/dymmond/palfrey](https://github.com/dymmond/palfrey)
**The officially supported version is always the latest release.**
---
Palfrey is a clean-room ASGI server focused on three things:
- behavior you can reason about
- deployment controls you can operate safely
- performance you can reproduce and verify
Protocol runtime modes include HTTP/1.1 backends plus opt-in HTTP/2 (`--http h2`) and HTTP/3 (`--http h3`) paths.
## Palfrey vs Uvicorn
Palfrey was built with deep respect for Uvicorn and the ASGI ecosystem it helped mature.
This is not a "winner vs loser" comparison. Uvicorn is an excellent, battle-tested server, and Palfrey intentionally keeps a compatible API/CLI experience so teams coming from Uvicorn feel at home.
Our goal is to offer another strong option when teams want different internal architecture and extended runtime capabilities.
Benchmark snapshot (sample run):
- Command: `python benchmarks/run.py --http-requests 50000`
| Scenario | Palfrey Ops/s | Uvicorn Ops/s | Relative Speed |
| --- | ---: | ---: | ---: |
| HTTP | 36793.21 | 22021.90 | `1.671x` |
| WebSocket | 36556.28 | 13822.97 | `2.645x` |
These numbers are environment-dependent. Always benchmark with your own app, traffic profile, and infrastructure before making production decisions.
This documentation is written for both technical and non-technical readers.
- Engineers can use the protocol details, option tables, and runbooks.
- Product, support, and operations teams can use the plain-language summaries and checklists.
## What Palfrey Does
At runtime, Palfrey sits between clients and your ASGI application.
1. accepts TCP or UNIX socket connections
2. parses protocol bytes into ASGI events
3. calls your app with `scope`, `receive`, `send`
4. writes responses back to clients
5. manages process behavior (reload, workers, graceful shutdown)
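The request/response half of that cycle (steps 2–4) looks like this from the application side. Here is a minimal, framework-free sketch of an ASGI app that drains the request body through `receive` and echoes it back through `send` (generic ASGI, not a Palfrey-specific API):

```python
async def echo_app(scope, receive, send):
    """Echo the request body back, showing the receive/send event cycle."""
    if scope["type"] != "http":
        return
    # Step 2-3: the server parses protocol bytes into events; the app
    # pulls them one at a time until more_body goes False.
    body = b""
    more_body = True
    while more_body:
        message = await receive()
        body += message.get("body", b"")
        more_body = message.get("more_body", False)
    # Step 4: the app emits response events; the server writes them out.
    await send(
        {
            "type": "http.response.start",
            "status": 200,
            "headers": [(b"content-type", b"application/octet-stream")],
        }
    )
    await send({"type": "http.response.body", "body": body})
```

Any ASGI server, Palfrey included, drives an app shaped like this.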
## Who Should Start Where
## If you are new to ASGI
1. [Installation](getting-started/installation.md)
2. [Quickstart](getting-started/quickstart.md)
3. [Terms and Mental Models](concepts/terms-and-mental-models.md)
4. [Server Behavior](concepts/server-behavior.md)
## If you operate production services
1. [Deployment](operations/deployment.md)
2. [Workers](operations/workers.md)
3. [Observability](operations/observability.md)
4. [Troubleshooting](guides/troubleshooting.md)
5. [Release Process](operations/release-process.md)
## First 60 Seconds
Create `main.py`:
```python
async def app(scope, receive, send):
"""Return a plain-text greeting for HTTP requests."""
if scope["type"] != "http":
return
body = b"Hello from Palfrey"
await send(
{
"type": "http.response.start",
"status": 200,
"headers": [
(b"content-type", b"text/plain; charset=utf-8"),
(b"content-length", str(len(body)).encode("ascii")),
],
}
)
await send({"type": "http.response.body", "body": body})
```
Run Palfrey:
```bash
palfrey main:app --host 127.0.0.1 --port 8000
```
Check it:
```bash
curl http://127.0.0.1:8000
```
Gunicorn + Palfrey worker:
```bash
gunicorn main:app -k palfrey.workers.PalfreyWorker -w 4 -b 0.0.0.0:8000
```
## Documentation Structure
## Getting Started
- install, verify, and run your first app
- move from a minimal app to real startup patterns
## Concepts
- what ASGI is, and how Palfrey applies it
- how HTTP, WebSocket, and lifespan flows behave
- how server internals affect user-visible outcomes
## Reference
- full CLI and config surface
- protocol and logging behavior
- env var model and common errors
## Guides
- migration, security hardening, production rollout
- practical troubleshooting and FAQ
## Operations
- deployment shapes, workers, reload model
- capacity planning, observability, benchmark method
- platform-specific notes and release process
## Plain-Language Summary
If your application is the business logic, Palfrey is the runtime control layer around it.
A good runtime control layer gives teams:
- predictable startup and shutdown
- fewer surprises under traffic spikes
- clearer incident response paths
- safer, repeatable deployments
| text/markdown | null | Tiago Silva <tarsil@tarsild.io> | null | null | MIT | asgi, click, server, websocket | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click<9.0.0,>=8.1.7",
"h11>=0.8",
"uvicorn[standard]<1.0.0,>=0.34.0; extra == \"benchmark\"",
"maturin<2.0.0,>=1.9.0; extra == \"dev\"",
"ruff<1.0.0,>=0.11.0; extra == \"dev\"",
"ty>=0.0.1a16; extra == \"dev\"",
"griffe-typingdoc<1.0,>=0.2.2; extra == \"docs\"",
"mdx-include>=1.4.2; extra == \"docs\"",
"mkdocs-macros-plugin>=0.4.0; extra == \"docs\"",
"mkdocs-material>=9.4.4; extra == \"docs\"",
"mkdocs-meta-descriptions-plugin>=2.3.0; extra == \"docs\"",
"mkdocs<2.0.0,>=1.1.2; extra == \"docs\"",
"mkdocstrings[python]>=0.23.0; extra == \"docs\"",
"pyyaml<7.0.0,>=6.0; extra == \"docs\"",
"sayer>=0.7.4; extra == \"docs\"",
"typing-extensions>=3.10.0; extra == \"docs\"",
"h2>=4.1.0; extra == \"http2\"",
"aioquic>=1.2.0; extra == \"http3\"",
"colorama>=0.4; sys_platform == \"win32\" and extra == \"standard\"",
"httptools>=0.6.3; extra == \"standard\"",
"python-dotenv>=0.13; extra == \"standard\"",
"pyyaml>=5.1; extra == \"standard\"",
"uvloop>=0.15.1; (sys_platform != \"win32\" and (sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\")) and extra == \"standard\"",
"watchfiles>=0.20; extra == \"standard\"",
"websockets>=10.4; extra == \"standard\"",
"pytest-cov<7.0.0,>=5.0.0; extra == \"testing\"",
"pytest<9.0.0,>=8.3.0; extra == \"testing\""
] | [] | [] | [] | [
"Homepage, https://github.com/dymmond/palfrey",
"Documentation, https://palfrey.dymmond.com",
"Source, https://github.com/dymmond/palfrey",
"Changelog, https://github.com/dymmond/palfrey/blob/main/CHANGELOG.md"
] | Hatch/1.16.3 cpython/3.10.19 HTTPX/0.28.1 | 2026-02-20T15:27:09.844203 | palfrey-0.1.2-py3-none-any.whl | 101,523 | 2a/bf/ac982f41fdc4e4255a9c18df0c9da455defb26a64cf365adf854417c3a07/palfrey-0.1.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 431f57a6c34fadbae7add3c7a7c31e85 | 87e624e7e58523e613640856e523b77ee9ac9d12b6f761fb2e3b7ca013bdaa5e | 2abfac982f41fdc4e4255a9c18df0c9da455defb26a64cf365adf854417c3a07 | null | [
"LICENSE"
] | 220 |
2.4 | lvl3dev-todoist-cli | 1.0.0 | Simple command-line interface for the Todoist API Python SDK. | # lvl3dev-todoist-cli
Simple command-line interface for Todoist using the official `todoist-api-python` SDK.
## Development setup (uv)
```bash
uv sync
```
Run commands through `uv`:
```bash
uv run todoist --help
```
## Build and install (uv + pipx)
Build a wheel with `uv`:
```bash
uv build
```
Install globally with `pipx` from the built wheel:
```bash
pipx install dist/lvl3dev_todoist_cli-*.whl
```
For local iteration, reinstall after changes with:
```bash
pipx install --force dist/lvl3dev_todoist_cli-*.whl
```
Install from PyPI:
```bash
pipx install lvl3dev-todoist-cli
```
## Authentication
Set your API token:
```bash
export TODOIST_API_TOKEN="YOUR_API_TOKEN"
```
You can also pass `--token` per command.
## Usage
```bash
todoist --help
```
### Tasks
```bash
todoist tasks list
todoist tasks add "Pay rent" --due-string "tomorrow 9am" --priority 1
todoist tasks get <task_id>
todoist tasks update <task_id> --content "Pay rent and utilities"
todoist tasks complete <task_id>
todoist tasks delete <task_id>
```
Priority values are user-facing Todoist priorities: `p1` highest, `p4` lowest.
### Projects
```bash
todoist projects list
todoist projects add "Operations"
todoist projects update <project_id> "Ops"
todoist projects view-style <project_id> board
```
### Sections
```bash
todoist sections list --project-id <project_id>
todoist sections add --project-id <project_id> "In Progress"
todoist sections update <section_id> "Doing"
todoist sections delete <section_id>
```
### Boards
```bash
todoist boards show --project-id <project_id>
todoist boards move <task_id> --section-id <section_id>
```
### Calendar
```bash
todoist calendar today
todoist calendar week
todoist calendar range --from 2026-02-20 --to 2026-02-27
todoist calendar reschedule <task_id> --due-string "tomorrow 9am"
```
### Comments
```bash
todoist comments list --task-id <task_id>
todoist comments add --task-id <task_id> "Started work"
```
### Labels
```bash
todoist labels list
```
### JSON Output
Use `--json` to get structured output:
```bash
todoist --json tasks list
```
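The `--json` flag makes the CLI easy to compose with scripts. As a sketch, assuming the output is a JSON array of task objects with `content` and `priority` fields (the exact shape comes from the Todoist API and may differ):

```python
import json

# Hypothetical sample of `todoist --json tasks list` output; the real
# field names and shape come from the Todoist API and may differ.
raw = """[
  {"id": "1", "content": "Pay rent", "priority": 4},
  {"id": "2", "content": "Buy milk", "priority": 1}
]"""

tasks = json.loads(raw)
# Sort most-urgent first, assuming higher numbers mean more urgent here.
urgent_first = sorted(tasks, key=lambda t: t["priority"], reverse=True)
print([t["content"] for t in urgent_first])
```

In a real script you would read this from the command's stdout instead of a literal.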
| text/markdown | Dolphin Electric | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Office/Business :: Scheduling",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"todoist-api-python>=3.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/dolphin-electric/todoist-cli",
"Repository, https://github.com/dolphin-electric/todoist-cli",
"Issues, https://github.com/dolphin-electric/todoist-cli/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T15:27:04.441701 | lvl3dev_todoist_cli-1.0.0.tar.gz | 8,459 | f4/90/f1b73932cc94651aa69ee8bf3523887de1f8e8fb5008381fed3c094f30b2/lvl3dev_todoist_cli-1.0.0.tar.gz | source | sdist | null | false | 4031eb7438e63ca7f3dbfe23a901749e | bb5d27ea4fc19eb1e8adce710ad2ea58e9ff18ef41e39160f6797750d8011437 | f490f1b73932cc94651aa69ee8bf3523887de1f8e8fb5008381fed3c094f30b2 | MIT | [
"LICENSE"
] | 198 |
2.4 | github2gerrit | 1.0.6 | Submit a GitHub pull request to a Gerrit repository. | <!--
SPDX-License-Identifier: Apache-2.0
SPDX-FileCopyrightText: 2025 The Linux Foundation
-->
# github2gerrit
Submit a GitHub pull request to a Gerrit repository, implemented in Python.
This action is a drop‑in replacement for the shell‑based
`lfit/github2gerrit` composite action. It mirrors the same inputs,
outputs, environment variables, and secrets so you can adopt it without
changing existing configuration in your organizations.
The tool expects a `.gitreview` file in the repository to derive Gerrit
connection details and the destination project. It uses `git` over SSH
and `git-review` semantics to push to `refs/for/<branch>` and relies on
Gerrit `Change-Id` trailers to create or update changes.
## How it works (high level)
- Discover pull request context and inputs.
- **Detect the PR operation mode** (CREATE, UPDATE, EDIT) based on the event type.
- Detect and prevent tool runs from creating duplicate changes.
- Read `.gitreview` for the Gerrit host, port, and project.
- When run locally, pull `.gitreview` from the remote repository.
- Set up `git` user config and SSH for Gerrit.
- **For UPDATE operations**: find and reuse existing Gerrit Change-IDs.
- Prepare commits:
  - one‑by‑one cherry‑pick with `Change-Id` trailers, or
  - squash into a single commit and keep or reuse the `Change-Id`.
- Optionally replace the commit message with the PR title and body.
- Push with a topic to `refs/for/<branch>` using `git-review` behavior.
- **For UPDATE/EDIT operations**: sync PR metadata (title/description) to Gerrit.
- Query Gerrit for the resulting URL, change number, and patchset SHA.
- **Verify patchset creation** to confirm updates vs. new changes.
- Add a back‑reference comment in Gerrit to the GitHub PR and run URL.
- Comment on the GitHub PR with the Gerrit change URL(s).
- By default, the tool preserves PRs after submission; set `PRESERVE_GITHUB_PRS=false` to close them.
## PR Update Handling (Dependabot Support)
GitHub2Gerrit now **intelligently handles PR updates** from automation tools like Dependabot:
### How PR Updates Work
When a PR updates (e.g., Dependabot rebases or updates dependencies):
1. **Automatic Detection**: The `synchronize` event triggers UPDATE mode
2. **Change-ID Recovery**: Finds existing Gerrit change using four strategies:
- Topic-based query (`GH-owner-repo-PR#`)
- GitHub-Hash trailer matching
- GitHub-PR trailer URL matching
- Mapping comment parsing
3. **Change-ID Reuse**: Forces reuse of existing Change-ID(s)
4. **New Patchset Creation**: Pushes create a new patchset, not a new change
5. **Metadata Sync**: Updates Gerrit change title/description if PR edits occur
6. **Verification**: Confirms patchset creation and increment
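Recovery strategies 2 and 3 both reduce to reading Git trailers out of candidate commit messages. A minimal sketch of that parsing step (the trailer names come from the list above; the parsing details are an assumption, not the tool's actual code):

```python
import re

def extract_trailers(commit_message: str) -> dict[str, list[str]]:
    """Collect Git trailers (e.g. Change-Id, GitHub-PR, GitHub-Hash)
    from the final paragraph of a commit message."""
    trailers: dict[str, list[str]] = {}
    # Trailers live in the last blank-line-separated block of the message.
    last_block = commit_message.strip().split("\n\n")[-1]
    for line in last_block.splitlines():
        m = re.match(r"^([A-Za-z][A-Za-z-]*):\s*(.+)$", line.strip())
        if m:
            trailers.setdefault(m.group(1), []).append(m.group(2))
    return trailers
```

Matching then becomes a comparison of the `GitHub-Hash` or `GitHub-PR` values against the incoming pull request.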
### PR Event Types
| Event | Action | Behavior |
| ------------- | ------ | ----------------------------------------- |
| `opened` | CREATE | Creates new Gerrit change(s) |
| `synchronize` | UPDATE | Updates existing change with new patchset |
| `edited` | EDIT | Syncs metadata changes to Gerrit |
| `reopened` | REOPEN | Treats as CREATE if no existing change |
| `closed` | CLOSE | Handles PR closure |
### Example: Dependabot Workflow
```yaml
on:
pull_request_target:
types: [opened, reopened, edited, synchronize, closed]
```
**Typical Dependabot flow:**
1. **Day 1**: Dependabot opens PR #29 → GitHub2Gerrit creates Gerrit change 73940
2. **Day 2**: Dependabot rebases PR #29 → GitHub2Gerrit updates change 73940 (new patchset 2)
3. **Day 3**: Dependabot updates dependencies in PR #29 → change 73940 gets patchset 3
4. **Day 4**: Someone edits PR title → metadata synced to Gerrit change 73940
5. **Day 5**: Change 73940 merged in Gerrit → PR #29 auto-closed in GitHub
### Key Features
- **No Duplicate Changes**: UPDATE mode enforces existing change presence
- **Robust Reconciliation**: Configurable similarity matching, with thresholds adjusted dynamically for PR updates
- **Metadata Synchronization**: PR title/description changes sync to Gerrit
- **Patchset Verification**: Confirms updates create new patchsets, not new changes
- **Clear Error Messages**: Helpful guidance when existing change not found
### Error Handling
If UPDATE fails to find existing change:
```text
❌ UPDATE FAILED: Cannot update non-existent Gerrit change
💡 GitHub2Gerrit did not process PR #42.
To create a new change, trigger the 'opened' workflow action.
```
## Close Merged PRs Feature
GitHub2Gerrit now includes **automatic PR closure** when Gerrit merges changes
and syncs them back to GitHub. This completes the lifecycle for automation PRs
(like Dependabot).
**How it works:**
1. A bot (e.g., Dependabot) creates a GitHub PR
2. GitHub2Gerrit converts it to a Gerrit change with tracking information
3. When the Gerrit change is **merged** and synced to GitHub, the original PR is automatically closed
4. When the Gerrit change is **abandoned**, the tool handles the PR based on `CLOSE_MERGED_PRS`:
- If `CLOSE_MERGED_PRS=true` (default): The tool closes the PR with an abandoned comment ⛔️
- If `CLOSE_MERGED_PRS=false`: PR remains open, but receives an abandoned notification comment ⛔️
**Key characteristics:**
- **Enabled by default** via `CLOSE_MERGED_PRS=true`
- **Non-fatal operation** - the tool logs missing or already-closed PRs as
info, not errors
- Works on `push` events when Gerrit syncs changes to GitHub mirrors
- **Abandoned change handling**: The tool closes PRs or adds comments based on the `CLOSE_MERGED_PRS` setting
**Gerrit change status handling:**
<!-- markdownlint-disable MD013 MD060 -->
| Scenario | `CLOSE_MERGED_PRS=true` (default) | `CLOSE_MERGED_PRS=false` |
| --------------------------- | -------------------------------------- | ------------------------------------------------------ |
| Change has MERGED status | ✅ Closes PR with merged comment | ⏭️ No action |
| Change has ABANDONED status | ✅ Closes PR with abandoned comment ⛔️ | 💬 Adds abandoned notification comment (PR stays open) |
| Change is NEW/OPEN | ⚠️ Closes PR with a warning | ⏭️ No action |
| Status UNKNOWN | ⚠️ Closes PR with a warning | ⏭️ No action |
<!-- markdownlint-enable MD013 MD060 -->
**Status reporting examples:**
```text
No GitHub PR URL found in commit abc123de - skipping
GitHub PR #42 is already closed - nothing to do
Gerrit change confirmed as MERGED
SUCCESS: Closed GitHub PR #42
```
**Abandoned change examples:**
With `CLOSE_MERGED_PRS=true`:
```text
Gerrit change ABANDONED; will close PR with abandoned comment
SUCCESS: Closed GitHub PR #42
```
With `CLOSE_MERGED_PRS=false`:
```text
Gerrit change ABANDONED; will add comment (CLOSE_MERGED_PRS=false)
SUCCESS: Added comment to PR #42 (PR remains open)
```
## Automatic Cleanup Features
GitHub2Gerrit includes **automatic cleanup operations** that run after successful
PR processing or when you close PRs. These features help maintain synchronization
between GitHub and Gerrit by cleaning up orphaned or stale changes.
### Cleanup Operations
There are two cleanup operations that run automatically:
#### 1. CLEANUP_ABANDONED (Abandoned Gerrit Changes → Close GitHub PRs)
**What it does:** Scans all open GitHub PRs in the repository and closes those
whose corresponding Gerrit changes have been abandoned in Gerrit.
- **Default:** ✅ Enabled (`CLEANUP_ABANDONED: true`)
- **Configurable:** Set `CLEANUP_ABANDONED: false` in workflow to disable
- **Runs during:** After successful PR processing, push events, and when you close PRs
- **Behavior:**
- Finds open GitHub PRs with `GitHub-PR` trailers in their associated Gerrit changes
- Checks if the Gerrit change has `ABANDONED` status
- Closes the GitHub PR with an appropriate comment explaining the abandonment
- Respects the `CLOSE_MERGED_PRS` setting for whether to close or just comment
**Example log output:**
```text
Running abandoned PR cleanup...
Found 150 open PRs to check
PR #42 Gerrit change has ABANDONED status - will close
Abandoned PR cleanup complete: closed 1 PR(s)
```
#### 2. CLEANUP_GERRIT (Closed GitHub PRs → Abandon Gerrit Changes)
**What it does:** Scans all open Gerrit changes in the project and abandons those
whose corresponding GitHub PRs you have closed.
- **Default:** ✅ Enabled (`CLEANUP_GERRIT: true`)
- **Configurable:** Set `CLEANUP_GERRIT: false` in workflow to disable
- **Runs during:** After successful PR processing, push events, and when you close PRs
- **Behavior:**
- Queries all open Gerrit changes in the project
- Extracts the `GitHub-PR` trailer from each change
- Checks if the GitHub PR has closed status
- Abandons the Gerrit change with a message including:
- PR number and URL
- Any comments made when closing the PR
- Automatic attribution to GitHub2Gerrit
**Example abandon message in Gerrit:**
```text
User closed GitHub pull request #34
PR URL: https://github.com/org/repo/pull/34
Comments when closing:
--- Comment 1 ---
Comment by username:
This PR is no longer needed because...
---
GitHub2Gerrit automatically abandoned this change
because user closed the source pull request.
```
**Example log output:**
```text
Running Gerrit cleanup for closed GitHub PRs...
Scanning open Gerrit changes in project-name for closed GitHub PRs
Found 25 open Gerrit change(s) to check
GitHub PR #34 has closed status, will abandon Gerrit change 12345
Abandoned Gerrit change 12345: https://gerrit.example.com/c/project/+/12345
Gerrit cleanup complete: abandoned 1 change(s)
```
### When Cleanup Runs
Cleanup operations run automatically in the following scenarios:
1. **After successful PR processing** - When you open, synchronize, or edit a PR
2. **On push events** - When Gerrit syncs changes back to GitHub (with `CLOSE_MERGED_PRS` enabled)
3. **When you close PRs** - Immediately when the system detects a PR close event
### PR Close Event Handling
When you close a GitHub PR, GitHub2Gerrit performs the following actions in order:
1. **Abandon the specific Gerrit change** for the closed PR
- Searches for the Gerrit change with matching `GitHub-PR` trailer
- Captures the last 3 comments from the PR (to preserve closure context)
- Abandons the Gerrit change with those comments included
2. **Run CLEANUP_ABANDONED** - Close any other GitHub PRs with abandoned Gerrit changes
3. **Run CLEANUP_GERRIT** - Abandon any other Gerrit changes with closed GitHub PRs
**Example workflow:**
```text
🚪 PR closed event - running cleanup operations
Checking for Gerrit change to abandon for PR #34
Found Gerrit change 12345 for PR #34
✅ Abandoned Gerrit change 12345
Running abandoned PR cleanup...
Running Gerrit cleanup for closed GitHub PRs...
✅ Cleanup operations completed for closed PR
```
### Configuration
These cleanup operations are **enabled by default**, but you can control them
via workflow inputs. They are non-fatal: if cleanup fails, the system logs a
warning but doesn't fail the entire workflow.
**Configuration options:**
- `CLEANUP_ABANDONED` - Default: `true` (shows ☑️ when enabled in configuration output)
- `CLEANUP_GERRIT` - Default: `true` (shows ☑️ when enabled in configuration output)
**Example - Disable cleanup operations:**
```yaml
uses: lfreleng-actions/github2gerrit-action@main
with:
GERRIT_SSH_PRIVKEY_G2G: ${{ secrets.GERRIT_SSH_PRIVKEY_G2G }}
CLEANUP_ABANDONED: false # Don't close GitHub PRs for abandoned changes
CLEANUP_GERRIT: false # Don't abandon Gerrit changes for closed PRs
```
**Example - Enable only one cleanup operation:**
```yaml
uses: lfreleng-actions/github2gerrit-action@main
with:
GERRIT_SSH_PRIVKEY_G2G: ${{ secrets.GERRIT_SSH_PRIVKEY_G2G }}
CLEANUP_ABANDONED: true # Close GitHub PRs when Gerrit abandons changes
CLEANUP_GERRIT: false # But don't abandon Gerrit changes when PRs close
```
**Dry-run support:** Both cleanup operations respect the `DRY_RUN` setting for testing.
### Notes
- Cleanup operations remain **parallel-safe** - workflow runs won't interfere with each other
- Operations remain **idempotent** - safe to run repeatedly
- The system skips PRs you already closed or changes Gerrit already abandoned (no duplicate actions)
- The system logs errors during cleanup as warnings and doesn't fail the workflow
## Restrict PRs to Automation Tools
GitHub2Gerrit can restrict pull request processing to known automation tools.
Use this for GitHub mirrors where you want contributors to submit changes via
Gerrit, while still accepting automated dependency updates from tools like
Dependabot.
**Configuration:**
Set `AUTOMATION_ONLY=true` (default) to enable, or `AUTOMATION_ONLY=false`
to accept all PRs.
**Recognized automation tools:**
| Tool | GitHub Username(s) |
| ------------- | ------------------------------------- |
| Dependabot | `dependabot[bot]`, `dependabot` |
| Pre-commit.ci | `pre-commit-ci[bot]`, `pre-commit-ci` |
**What happens when enabled:**
The tool rejects PRs from non-automation users by:
1. Logging a warning message
2. Closing the PR with this comment:
```text
This GitHub mirror does not accept pull requests.
Please submit changes to the project's Gerrit server.
```
3. Exiting with code 1
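The check itself is a straightforward allow-list comparison. Sketched in Python with the usernames from the table above (the function name and shape are illustrative, not the tool's actual API):

```python
# GitHub usernames recognized as automation tools (from the table above).
AUTOMATION_USERS = {
    "dependabot[bot]", "dependabot",
    "pre-commit-ci[bot]", "pre-commit-ci",
}

def should_process_pr(author: str, automation_only: bool = True) -> bool:
    """Return True when the PR author passes the AUTOMATION_ONLY gate."""
    if not automation_only:
        return True  # AUTOMATION_ONLY=false accepts every PR
    return author in AUTOMATION_USERS
```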
**Example:**
```yaml
- uses: lfit/github2gerrit-action@main
with:
AUTOMATION_ONLY: "true" # default, accepts automation PRs
GERRIT_SSH_PRIVKEY_G2G: ${{ secrets.GERRIT_SSH_PRIVKEY }}
```
## Requirements
- Repository contains a `.gitreview` file. If you cannot provide it,
you must pass `GERRIT_SERVER`, `GERRIT_SERVER_PORT`, and
`GERRIT_PROJECT` via the reusable workflow interface.
- SSH key used to push changes into Gerrit
- The system populates Gerrit known hosts automatically on first run.
- The default `GITHUB_TOKEN` is available for PR metadata and comments.
- The workflow grants permissions required for PR interactions:
- `pull-requests: write` (to comment on and close PRs)
- `issues: write` (to create PR comments via the Issues API)
- The workflow runs with `pull_request_target` or via
`workflow_dispatch` using a valid PR context.
## Error Codes
The `github2gerrit` tool uses standardized exit codes for different failure types. This helps with automation,
debugging, and providing clear feedback to users.
<!-- markdownlint-disable MD013 -->
| Exit Code | Description | Common Causes | Resolution |
| --------- | ----------------------- | -------------------------------------------- | ------------------------------------------------------- |
| **0** | Success | Operation completed | N/A |
| **1** | General Error | Unexpected operational failure | Check logs for details |
| **2** | Configuration Error | Missing or invalid configuration parameters | Verify required inputs and environment variables |
| **3** | Duplicate Error | Duplicate change detected (when not allowed) | Use `--allow-duplicates` flag or check existing changes |
| **4** | GitHub API Error | GitHub API access or permission issues | Verify `GITHUB_TOKEN` has required permissions |
| **5** | Gerrit Connection Error | Failed to connect to Gerrit server | Check SSH keys, server configuration, and network |
| **6** | Network Error | Network connectivity issues | Check internet connection and firewall settings |
| **7** | Repository Error | Git repository access or operation failed | Verify repository permissions and git configuration |
| **8** | PR State Error | Pull request in invalid state for processing | Ensure PR is open and mergeable |
| **9** | Validation Error | Input validation failed | Check parameter values and formats |
<!-- markdownlint-enable MD013 -->
### Common Error Messages
#### GitHub API Permission Issues (Exit Code 4)
```text
❌ GitHub API query failed; provide a GITHUB_TOKEN with the required permissions
```
**Common causes:**
- Missing `GITHUB_TOKEN` environment variable
- Token lacks permissions for target repository
- Token expired or invalid
- Cross-repository access without proper token
**Resolution:**
- Configure `GITHUB_TOKEN` with a valid personal access token
- For cross-repository workflows, use a token with access to the target repository
- Grant required permissions: `contents: read`, `pull-requests: write`, `issues: write`
#### Configuration Issues (Exit Code 2)
```text
❌ Configuration validation failed; check required parameters
```
**Common causes:**
- Missing or invalid configuration parameters
- Invalid parameter combinations
- Missing `.gitreview` file without override parameters
**Resolution:**
- Verify all required inputs exist
- Check parameter compatibility (e.g., don't use conflicting options)
- Provide `GERRIT_SERVER`, `GERRIT_PROJECT` if `.gitreview` is missing
#### Gerrit Connection Issues (Exit Code 5)
```text
❌ Gerrit connection failed; check SSH keys and server configuration
```
**Common causes:**
- Invalid SSH private key
- SSH key not added to Gerrit account
- Incorrect Gerrit server configuration
- Network connectivity to Gerrit server
**Resolution:**
- Verify SSH private key is correct and has access to Gerrit
- Check Gerrit server hostname and port
- Ensure network connectivity to Gerrit server
### Integration Test Scenarios
The improved error handling is important for integration tests that run across different repositories.
For example, when testing the `github2gerrit-action` repository but accessing PRs in the `lfit/sandbox`
repository, you need:
1. **Cross-Repository Token Access**: Use `READ_ONLY_GITHUB_TOKEN` instead of the default `GITHUB_TOKEN`
for workflows that access PRs in different repositories.
2. **Clear Error Messages**: If the token lacks permissions, you'll see:
```text
❌ GitHub API query failed; provide a GITHUB_TOKEN with the required permissions
Details: Cannot access repository 'lfit/sandbox' - check token permissions
```
3. **Actionable Resolution**: The error message tells you what's needed - configure a token with access
to the target repository.
### Debugging Workflow
When troubleshooting failures:
1. **Check the Exit Code**: Each failure has a unique exit code to help identify the root cause
2. **Read the Error Message**: Look for the ❌ prefixed message that explains what went wrong
3. **Review Details**: Context appears when available
4. **Check Logs**: Enable verbose logging with `G2G_VERBOSE=true` for detailed debugging information
### Note on sitecustomize.py
This repository includes a sitecustomize.py that is automatically
imported by Python’s site initialization. It exists to make pytest and
coverage runs in CI more robust by:
- assigning a unique `COVERAGE_FILE` per process to avoid mixing data across runs
- proactively removing stale `.coverage` artifacts in common base directories.
The logic runs during pytest sessions and is best effort.
It never interferes with normal execution. Maintainers can keep it to
stabilize coverage reporting for parallel/xdist runs.
## Duplicate detection
Duplicate detection uses a scoring-based approach. Instead of relying on a hash
added by this action, the detector compares the first line of the commit message
(subject/PR title), analyzes the body text and the set of files changed, and
computes a similarity score. When the score meets or exceeds a configurable
threshold (default 0.8), the tool treats the change as a duplicate and blocks
submission. This approach aims to stay robust even when similar changes
originate outside this pipeline.
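As an illustration of the scoring idea (not the tool's actual algorithm, and the weights are an assumption), a duplicate score can combine word overlap on the subject with overlap of the changed-file sets:

```python
def similarity_score(subject_a, subject_b, files_a, files_b):
    """Illustrative duplicate score: Jaccard overlap of subject words and
    of changed files; the real detector also analyzes the body text."""
    def jaccard(x, y):
        x, y = set(x), set(y)
        return len(x & y) / len(x | y) if x | y else 1.0
    subject = jaccard(subject_a.lower().split(), subject_b.lower().split())
    files = jaccard(files_a, files_b)
    return 0.6 * subject + 0.4 * files  # weights chosen for illustration

# A score at or above the default 0.8 threshold would block submission.
```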
### Examples of detected duplicates
- Dependency bumps for the same package across close versions
(e.g., "Bump foo from 1.0 to 1.1" vs "Bump foo from 1.1 to 1.2")
with overlapping files — high score
- Pre-commit autoupdates that change .pre-commit-config.yaml and hook versions —
high score
- GitHub Actions version bumps that update .github/workflows/* uses lines —
medium to high score
- Similar bug fixes with the same subject and significant file overlap —
strong match
### Allowing duplicates
Use `--allow-duplicates` or set `ALLOW_DUPLICATES=true` to override:
```bash
# CLI usage
github2gerrit --allow-duplicates https://github.com/org/repo
# GitHub Actions
uses: lfreleng-actions/github2gerrit-action@main
with:
ALLOW_DUPLICATES: 'true'
```
When allowed, duplicates generate warnings but processing continues.
The tool exits with code 3 when it detects duplicates and they are not allowed.
### Configuring duplicate detection scope
By default, the duplicate detector considers changes with status `open` when searching for potential duplicates.
You can customize which Gerrit change states to check using `--duplicate-types` or setting `DUPLICATE_TYPES`:
```bash
# CLI usage - check against open and merged changes
github2gerrit --duplicate-types=open,merged https://github.com/org/repo
# Environment variable
DUPLICATE_TYPES=open,merged,abandoned github2gerrit https://github.com/org/repo
# GitHub Actions
uses: lfreleng-actions/github2gerrit-action@main
with:
DUPLICATE_TYPES: 'open,merged'
```
Valid change states include `open`, `merged`, and `abandoned`. This setting determines which existing changes
to check when evaluating whether a new change would be a duplicate.
## Commit Message Normalization
The tool includes intelligent commit message normalization that automatically
converts automated PR titles (from tools like Dependabot, pre-commit.ci, etc.)
to follow conventional commit standards. This feature is enabled by default,
and you can control it via the `NORMALISE_COMMIT` setting.
### How it works
1. **Repository Analysis**: The tool analyzes your repository to determine
preferred conventional commit patterns by examining:
- `.pre-commit-config.yaml` for commit message formats
- `.github/release-drafter.yml` for commit type patterns
- Recent git history for existing conventional commit usage
2. **Smart Detection**: Applies normalization to automated PRs from
known bots (dependabot[bot], pre-commit-ci[bot], etc.) or PRs with
automation patterns in the title.
3. **Adaptive Formatting**: Respects your repository's existing conventions:
- **Capitalization**: Detects whether you use `feat:` or `FEAT:`
- **Commit Types**: Uses appropriate types (`chore`, `build`, `ci`, etc.)
- **Dependency Updates**: Converts "Bump package from X to Y" to
"chore: bump package from X to Y"
### Examples
**Before normalization:**
```text
Bump net.logstash.logback:logstash-logback-encoder from 7.4 to 8.1
pre-commit autoupdate
Update GitHub Action dependencies
```
**After normalization:**
```text
chore: bump net.logstash.logback:logstash-logback-encoder from 7.4 to 8.1
chore: pre-commit autoupdate
build: update GitHub Action dependencies
```
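A toy version of this transformation, covering just the cases shown above (the real tool also reads repository configuration to pick commit types and capitalization):

```python
import re

CONVENTIONAL = re.compile(
    r"^(feat|fix|chore|build|ci|docs|refactor|test)(\(.+\))?:", re.I
)

def normalise_title(title: str) -> str:
    """Illustrative normalization of automated PR titles; not the tool's
    actual implementation."""
    if not title or CONVENTIONAL.match(title):
        return title  # empty or already conventional
    lowered = title[0].lower() + title[1:]
    if re.match(r"(?i)^bump ", title) or "pre-commit autoupdate" in title.lower():
        return f"chore: {lowered}"
    if re.match(r"(?i)^update .*action", title):
        return f"build: {lowered}"
    return title
```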
### Configuration
Enable or disable commit normalization:
```bash
# CLI usage
github2gerrit --normalise-commit https://github.com/org/repo
github2gerrit --no-normalise-commit https://github.com/org/repo
# Environment variable
NORMALISE_COMMIT=true github2gerrit https://github.com/org/repo
NORMALISE_COMMIT=false github2gerrit https://github.com/org/repo
# GitHub Actions
uses: lfreleng-actions/github2gerrit-action@main
with:
NORMALISE_COMMIT: 'true' # default
# or
NORMALISE_COMMIT: 'false' # disable
```
### Repository-specific Configuration
To influence the normalization behavior, configure your repository:
**`.pre-commit-config.yaml`:**
```yaml
ci:
autofix_commit_msg: |
Chore: pre-commit autofixes
Signed-off-by: pre-commit-ci[bot] <pre-commit-ci@users.noreply.github.com>
autoupdate_commit_msg: |
Chore: pre-commit autoupdate
Signed-off-by: pre-commit-ci[bot] <pre-commit-ci@users.noreply.github.com>
```
**`.github/release-drafter.yml`:**
```yaml
autolabeler:
- label: "chore"
title:
- "/chore:/i"
- label: "feature"
title:
- "/feat:/i"
- label: "bug"
title:
- "/fix:/i"
```
The tool will detect the capitalization style from these files and apply
it consistently to normalized commit messages.
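One way to picture the capitalization detection is to scan config text for known commit-type prefixes and check their case. This is a hedged sketch, not the action's actual code; the type list and fallback default are assumptions:

```python
import re

# Hypothetical helper: infer whether a repo writes "chore:" or "Chore:"
# from its config file contents. Illustrative only.
COMMIT_TYPES = ("feat", "fix", "chore", "build", "ci", "docs")


def detect_capitalization(config_text: str) -> str:
    pattern = re.compile(
        r"\b(" + "|".join(COMMIT_TYPES) + r"):", re.IGNORECASE
    )
    for match in pattern.finditer(config_text):
        word = match.group(1)
        return "title" if word[0].isupper() else "lower"
    return "lower"  # assumed default when no prefix is found


sample = "autofix_commit_msg: |\n  Chore: pre-commit autofixes\n"
print(detect_capitalization(sample))  # title
```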
### Example Usage in CI/CD
```bash
# Run the tool and handle different exit codes
if github2gerrit "$PR_URL"; then
echo "✅ Submitted to Gerrit"
elif [ $? -eq 2 ]; then
echo "❌ Configuration error - check your settings"
exit 1
elif [ $? -eq 3 ]; then
echo "⚠️ Duplicate detected - use ALLOW_DUPLICATES=true to override"
exit 0 # Treat as non-fatal in some workflows
else
echo "❌ Runtime failure - check logs for details"
exit 1
fi
```
## Change-ID Reconciliation
The action includes an intelligent reconciliation system that reuses existing
Gerrit Change-IDs when updating pull requests. This prevents creating
duplicate changes in Gerrit when developers rebase, add commits, or amend a PR.
### How It Works
When developers update a PR (e.g., via `synchronize` event), the reconciliation system:
1. **Queries existing Gerrit changes** using the PR's topic (or falls back to GitHub comments)
2. **Matches local commits** to existing changes using these strategies:
- **Trailer matching**: Reuses Change-IDs already present in commit messages
- **Exact subject matching**: Matches commits with identical subjects
- **File signature matching**: Matches commits with identical file changes
- **Subject similarity matching**: Uses Jaccard similarity on commit subjects
3. **Generates new Change-IDs** for commits that don't match any existing change
### Configuration
The reconciliation behavior can be fine-tuned with these parameters:
**`REUSE_STRATEGY`** (default: `topic+comment`)
- `topic`: Query Gerrit changes by topic
- `comment`: Search GitHub PR comments for Change-IDs
- `topic+comment`: Try topic first, fall back to comments
- `none`: Disable reconciliation (always generate new Change-IDs)
**`SIMILARITY_SUBJECT`** (default: `0.7`)
- Jaccard similarity threshold (0.0-1.0) for subject matching
- Higher values require more similarity between commit subjects
- Example: `0.7` requires the words shared by two subjects to make up 70% of their combined word set (Jaccard = intersection / union)
**`SIMILARITY_UPDATE_FACTOR`** (default: `0.75`)
- Multiplier applied to similarity threshold for UPDATE operations
- Allows more lenient matching for rebased/amended commits
- Applied as: `update_threshold = max(0.5, base_threshold × factor)`
- Example: With base `0.7` and factor `0.75`, UPDATE threshold becomes `0.525`
- Floor threshold of `0.5` prevents too-loose matching
**`SIMILARITY_FILES`** (default: `false`)
- Whether to require exact file signature match during reconciliation (Pass C)
- When `true`: Commits must touch the exact same set of files to match (strict mode)
- When `false` (recommended): Skips file signature matching, relies on subject matching
- **Why default is `false`**: File signature matching is too strict for common workflows:
- Developers add/remove files during PR updates
- Rebasing shifts file changes between commits
- Conflict resolution changes which files a commit touches
- Developers amend commits with more file changes
- **When to use `true`**: Enable this for controlled workflows where file sets never change
**`ALLOW_ORPHAN_CHANGES`** (default: `false`)
- When enabled, unmatched Gerrit changes don't generate warnings
- Useful when you expect to remove changes from the topic
### Why Adjustable Similarity?
PR updates often involve rebasing, which can change commit messages slightly
(e.g., updating references, fixing typos, or resolving conflicts). The
`SIMILARITY_UPDATE_FACTOR` allows the system to recognize these as the same
logical change despite minor message differences:
- **Base threshold** (`SIMILARITY_SUBJECT`): Used for initial PR creation
- **Update threshold** (base × factor): Used for PR synchronize events
- **Percentage-based**: Scales consistently across different base thresholds
- **Floor at 0.5**: Prevents matching unrelated commits
### Example Configurations
```bash
# Strict matching - require 90% similarity, minor relaxation on updates
SIMILARITY_SUBJECT=0.9
SIMILARITY_UPDATE_FACTOR=0.85
# Lenient matching - allow more variation in commit messages
SIMILARITY_SUBJECT=0.6
SIMILARITY_UPDATE_FACTOR=0.7
# Recommended: Flexible matching for most workflows (default settings)
SIMILARITY_SUBJECT=0.7
SIMILARITY_UPDATE_FACTOR=0.75
SIMILARITY_FILES=false # default - allows file changes in PR updates
# Strict matching - use for controlled workflows
SIMILARITY_SUBJECT=0.9
SIMILARITY_UPDATE_FACTOR=0.85
SIMILARITY_FILES=true # requires exact file matches
# Disable reconciliation (always create new Change-IDs)
REUSE_STRATEGY=none
```
### Common Pitfalls
**File signature matching can break reconciliation during normal workflows.**
With `SIMILARITY_FILES=true`, routine PR updates that add, remove, or shift
file changes (rebases, amendments, conflict resolution) no longer match their
existing Gerrit changes, so the tool creates duplicates. Keep the default
(`false`) unless your workflow guarantees stable file sets.
### GitHub Actions Example
```yaml
- uses: lfreleng-actions/github2gerrit-action@main
with:
GERRIT_KNOWN_HOSTS: ${{ secrets.GERRIT_KNOWN_HOSTS }}
GERRIT_SSH_PRIVKEY_G2G: ${{ secrets.GERRIT_SSH_PRIVKEY_G2G }}
SIMILARITY_SUBJECT: '0.75'
SIMILARITY_UPDATE_FACTOR: '0.8'
# SIMILARITY_FILES defaults to 'false' - uncomment to enable strict mode
# SIMILARITY_FILES: 'true'
```
### CLI Example
```bash
# Custom similarity settings
github2gerrit \
--similarity-subject 0.75 \
--similarity-update-factor 0.8 \
https://github.com/owner/repo/pull/123
```
## Usage
This action runs as part of a workflow that triggers on
`pull_request_target` and also supports manual runs via
`workflow_dispatch`.
Minimal example:
```yaml
name: github2gerrit
on:
pull_request_target:
types: [opened, reopened, edited, synchronize]
workflow_dispatch:
permissions:
contents: read
pull-requests: write
issues: write
jobs:
submit-to-gerrit:
runs-on: ubuntu-latest
steps:
- name: Submit PR to Gerrit
id: g2g
uses: lfreleng-actions/github2gerrit-action@main
with:
SUBMIT_SINGLE_COMMITS: "false"
USE_PR_AS_COMMIT: "false"
FETCH_DEPTH: "10"
GERRIT_KNOWN_HOSTS: ${{ vars.GERRIT_KNOWN_HOSTS }}
GERRIT_SSH_PRIVKEY_G2G: ${{ secrets.GERRIT_SSH_PRIVKEY_G2G }}
GERRIT_SSH_USER_G2G: ${{ vars.GERRIT_SSH_USER_G2G }}
GERRIT_SSH_USER_G2G_EMAIL: ${{ vars.GERRIT_SSH_USER_G2G_EMAIL }}
ORGANIZATION: ${{ github.repository_owner }}
REVIEWERS_EMAIL: ""
ISSUE_ID: "" # Optional: adds 'Issue-ID: ...' trailer to the commit message
ISSUE_ID_LOOKUP_JSON: ${{ vars.ISSUE_ID_LOOKUP_JSON }} # Optional: JSON lookup table for automatic Issue-ID resolution
```
The action reads `.gitreview`. If `.gitreview` is absent, you must
supply Gerrit connection details through a reusable workflow or by
setting the corresponding environment variables before invoking the
action. The shell action enforces `.gitreview` for the composite
variant; this Python action mirrors that behavior for compatibility.
## Command Line Usage and Debugging
### Direct Command Line Usage
You can run the tool directly from the command line to process GitHub pull requests.
**For development (with local checkout):**
```bash
# Process a specific pull request
uv run github2gerrit https://github.com/owner/repo/pull/123
# Process all open pull requests in a repository
uv run github2gerrit https://github.com/owner/repo
# Run in CI mode (reads from environment variables)
uv run github2gerrit
```
**For CI/CD or one-time usage:**
```bash
# Install and run in one command
uvx github2gerrit https://github.com/owner/repo/pull/123
# Install from specific version/source
uvx --from git+https://github.com/lfreleng-actions/github2gerrit-action@main github2gerrit https://github.com/owner/repo/pull/123
```
### Available Options
```bash
# View help (local development)
uv run github2gerrit --help
# View help (CI/CD)
uvx github2gerrit --help
```
The comprehensive [Inputs](#inputs) table above documents all CLI options and shows
alignment between action inputs, environment variables, and CLI flags. All CLI flags
have corresponding environment variables for configuration.
Key options include:
- `--verbose` / `-v`: Enable verbose debug logging (`G2G_VERBOSE`)
- `--dry-run`: Check configuration without making changes (`DRY_RUN`)
- `--submit-single-commits`: Submit each commit individually (`SUBMIT_SINGLE_COMMITS`)
- `--use-pr-as-commit`: Use PR title/body as commit message (`USE_PR_AS_COMMIT`)
- `--issue-id`: Add an Issue-ID trailer (e.g., "Issue-ID: ABC-123") to the commit message (`ISSUE_ID`)
- `--preserve-github-prs`: Don't close GitHub PRs after submission (`PRESERVE_GITHUB_PRS`)
- `--duplicate-types`: Configure which Gerrit change states to check for duplicates (`DUPLICATE_TYPES`)
For a complete list of all available options, see the [Inputs](#inputs) section.
### Debugging and Troubleshooting
When encountering issues, enable verbose logging to see detailed execution:
```bash
# Using the CLI flag
github2gerrit --verbose https://github.com/owner/repo/pull/123
# Using environment variable
G2G_LOG_LEVEL=DEBUG github2gerrit https://github.com/owner/repo/pull/123
# Alternative environment variable
G2G_VERBOSE=true github2gerrit https://github.com/owner/repo/pull/123
```
Debug output includes:
- Git command execution and output
- SSH connection attempts
- Gerrit API interactions
- Branch resolution logic
- Change-Id processing
Common issues and solutions:
1. **Configuration Validation Errors**: The tool provides clear error messages when
required configuration is missing or invalid. Look for messages starting with
"Configuration validation failed:" that specify missing inputs like
`GERRIT_KNOWN_HOSTS`, `GERRIT_SSH_PRIVKEY_G2G`, etc.
2. **SSH Permission Denied**:
- Ensure `GERRIT_SSH_PRIVKEY_G2G` and `GERRIT_KNOWN_HOSTS` are properly set
- If you see "Permissions 0644 for 'gerrit_key' are too open", the action will automatically
try SSH agent authentication
- For persistent file permission issues, ensure `G2G_USE_SSH_AGENT=true` (default)
3. **Branch Not Found**: Check that the target branch exists in both GitHub and Gerrit
4. **Change-Id Issues**: Enable debug logging to see Change-Id generation and validation
5. **Account Not Found Errors**: If you see "Account '<Email@Domain.com>' not found",
ensure your Gerrit account email matches your git config email (case-sensitive).
6. **Gerrit API Errors**: Verify Gerrit server connectivity and project permissions
> **Note**: The tool displays configuration errors cleanly without Python tracebacks.
> If you see a traceback in the output, please report it as a bug.
### Environment Variables
The comprehensive [Inputs](#inputs) table above documents all environment variables.
Key variables for CLI usage include:
- `G2G_LOG_LEVEL`: Set to `DEBUG` for verbose output (default: `WARNING`)
- `G2G_VERBOSE`: Set to `true` to enable debug logging (same as `--verbose` flag)
- `GERRIT_SSH_PRIVKEY_G2G`: SSH private key content
- `GERRIT_KNOWN_HOSTS`: SSH known hosts entries
- `GERRIT_SSH_USER_G2G`: Gerrit SSH username
- `G2G_USE_SSH_AGENT`: Set to `false` to force file-based SSH (default: `true`)
- `DRY_RUN`: Set to `true` for check mode
- `CI_TESTING`: Set to `true` to ignore `.gitreview` file and use environment variables instead
For a complete list of all supported environment variables, their defaults, and
their corresponding action inputs and CLI flags, see the [Inputs](#inputs) section.
## Advanced usage
### Overriding .gitreview Settings
When `CI_TESTING=true`, the tool ignores any `.gitreview` file in the
repository and uses environment variables instead. This is useful for:
- **Integration testing** against different Gerrit servers
- **Overriding repository settings** when the `.gitreview` points to the wrong server
- **Development and debugging** with custom Gerrit configurations
**Example:**
```bash
export CI_TESTING=true
export GERRIT_SERVER=gerrit.example.org
export GERRIT_PROJECT=sandbox
github2gerrit https://github.com/org/repo/pull/123
```
### SSH Authentication Methods
This action supports two SSH authentication methods:
1. **SSH Agent Authentication (Default)**: More secure, avoids file permission issues in CI
2. **File-based Authentication**: Fallback method that writes keys to temporary files
#### SSH Agent Authentication
By default, the action uses SSH agent to load keys into memory rather than writing them to disk. This is more
secure and avoids the file permission issues commonly seen in CI environments.
To control this behavior:
```yaml
- name: Submit to Gerrit
uses: your-org/github2gerrit-action@v1
env:
G2G_USE_SSH_AGENT: "true" # Default: enables SSH agent (recommended)
# G2G_USE_SSH_AGENT: "false" # Forces file-based authentication
with:
GERRIT_SSH_PRIVKEY_G2G: ${{ secrets.GERRIT_SSH_PRIVKEY_G2G }}
# ... other inputs
```
**Benefits of SSH Agent Authentication:**
- No temporary files written to disk
- Avoids SSH key file permission issues (0644 vs 0600)
- More secure in containerized CI environments
- Automatic cleanup when process exits
#### File-based Authentication (Fallback)
If SSH agent setup fails, the action automatically falls back to writing the SSH key to a temporary file with
secure permissions. This method:
- Creates files in workspace-specific `.ssh-g2g/` directory
- Attempts to set proper file permissions (0600)
- Includes four fallback permission-setting strategies for CI environments
### Custom SSH Configuration
You can explicitly install the SSH key and provide a custom SSH configuration
before invoking this action. This is useful when:
- You want to override the port/host used by SSH
- You need to define host aliases or SSH options
- Your Gerrit instance uses a non-standard HTTP base path (e.g. /r)
Example:
```yaml
name: github2gerrit (advanced)
on:
pull_request_target:
types: [opened, reopened, edited, synchronize]
workflow_dispatch:
permissions:
contents: read
pull-requests: write
issues: write
jobs:
submit-to-gerrit:
runs-on: ubuntu-latest
steps:
- name: Submit PR to Gerrit (with explicit overrides)
id: g2g
uses: lfreleng-actions/github2gerrit-action@main
with:
# Behavior
SUBMIT_SINGLE_COMMITS: "false"
USE_PR_AS_COMMIT: "false"
FETCH_DEPTH: "10"
# Required SSH/identity
GERRIT_KNOWN_HOSTS: ${{ vars.GERRIT_KNOWN_HOSTS }}
GERRIT_SSH_PRIVKEY_G2G: ${{ secrets.GERRIT_SSH_PRIVKEY_G2G }}
GERRIT_SSH_USER_G2G: ${{ vars.GERRIT_SSH_USER_G2G }}
GERRIT_SSH_USER_G2G_EMAIL: ${{ vars.GERRIT_SSH_USER_G2G_EMAIL }}
# Optional overrides when .gitreview is missing or to force values
GERRIT_SERVER: ${{ vars.GERRIT_SERVER }}
GERRIT_SERVER_PORT: ${{ vars.GERRIT_SERVER_PORT }}
GERRIT_PROJECT: ${{ vars.GERRIT_PROJECT }}
# Optional Gerrit REST base path and credentials (if required)
# e.g. '/r' for some deployments
GERRIT_HTTP_BASE_PATH: ${{ vars.GERRIT_HTTP_BASE_PATH }}
GERRIT_HTTP_USER: ${{ vars.GERRIT_HTTP_USER }}
GERRIT_HTTP_PASSWORD: ${{ secrets.GERRIT_HTTP_PASSWORD }}
ORGANIZATION: ${{ github.repository_owner }}
REVIEWERS_EMAIL: ""
```
Notes:
- The action configures SSH internally using the provid | text/markdown | null | Matthew Watkins <mwatkins@linuxfoundation.org> | null | null | null | actions, ci, cli, gerrit, github, typer | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Build Tools",
"Topic :: Software Development :: Version Control",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"cryptography>=46.0.5",
"git-review>=2.5.0",
"pygerrit2>=2.0.15",
"pygithub>=2.8.1",
"pynacl>=1.6.2",
"pyyaml>=6.0.3",
"rich>=14.2.0",
"typer>=0.20.1",
"urllib3>=2.6.3",
"coverage[toml]>=7.10.6; extra == \"dev\"",
"mypy>=1.17.1; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-mock>=3.15.1; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"responses>=0.25.8; extra == \"dev\"",
"ruff>=0.6.3; extra == \"dev\"",
"types-click>=7.1.8; extra == \"dev\"",
"types-requests>=2.31.0; extra == \"dev\"",
"types-urllib3>=1.26.25.14; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/lfreleng-actions/github2gerrit-action",
"Repository, https://github.com/lfreleng-actions/github2gerrit-action",
"Issues, https://github.com/lfreleng-actions/github2gerrit-action/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T15:26:49.635670 | github2gerrit-1.0.6.tar.gz | 844,834 | 12/bf/5e86360338e1393f7deb6aad9716c42b801c5cdc1d3bf420f37bc3aed743/github2gerrit-1.0.6.tar.gz | source | sdist | null | false | 7c559eab8da6bef5c635b25eebc177bb | f8fc6b549a6db3f68bfa22f5d164882b7d9873e18ddcf8d7f1a92e6b02c97b8e | 12bf5e86360338e1393f7deb6aad9716c42b801c5cdc1d3bf420f37bc3aed743 | Apache-2.0 | [
"LICENSE"
] | 219 |
2.4 | vijil | 0.1.58 | Python Client for Vijil | # vijil-python
Python Client for Vijil
## Setup
```bash
pip install -U vijil
```
Then initialize the client using
```python
from vijil import Vijil
client = Vijil()
```
Requires a `VIJIL_API_KEY`, either loaded in the environment or supplied as the `api_key` argument above.
## Run Evaluations
```python
client.evaluations.create(
model_hub="openai",
model_name="gpt-3.5-turbo",
model_params={"temperature": 0},
harnesses=["ethics","hallucination"],
harness_params={"sample_size": 5}
)
```
See the [minimal example](tutorials/minimal_example.ipynb) for more functionalities.
| text/markdown | Subho Majumdar | subho@vijil.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"bs4>=0.0.2",
"fastapi>=0.121.0",
"fastparquet>=2024.5.0",
"markdownify<2.0.0,>=1.2.2",
"mypy<2.0.0,>=1.12.1",
"packaging>=24.2",
"pandas<3.0.0,>=2.1.4",
"pyngrok<8.0.0,>=7.2.4",
"python-dotenv>=1.0.1",
"requests>=2.32.3",
"starlette>=0.50.0",
"tornado<7.0,>=6.5",
"tqdm>=4.67.1",
"types-requests<3.0.0.0,>=2.32.0.20241016",
"uvicorn>=0.34.2",
"wakepy>=0.10.2.post1",
"weasyprint<69.0,>=68.0"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T15:26:38.518916 | vijil-0.1.58-py3-none-any.whl | 37,548 | e8/89/638136a85545e04fc54a0d6a200f45663e675a237caa1d04258cbac5dc9b/vijil-0.1.58-py3-none-any.whl | py3 | bdist_wheel | null | false | 8333940e24d486dc382632e57ccefeec | 058ca1b28d5ee328f31c45fb59a25e2bb7dd9792e13f092b4fd9411569255bbb | e889638136a85545e04fc54a0d6a200f45663e675a237caa1d04258cbac5dc9b | null | [
"LICENSE"
] | 211 |
2.4 | nkunyim-iam | 1.3.5 | Auth library for Nkunyim apps and services | # nkunyim_iam
IAM module for Nkunyim Services
| text/markdown | null | Enoch Enchill <et.enchill@gmail.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"django",
"djangorestframework",
"pydantic",
"requests"
] | [] | [] | [] | [
"Homepage, https://github.com/nkunyim/nkunyim_iam",
"Issues, https://github.com/nkunyim/nkunyim_iam/issues"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:26:09.163170 | nkunyim_iam-1.3.5.tar.gz | 16,771 | 62/e6/785a23d97e4c9de049af36dfa6a91573b7458bf56200572d88992f047d75/nkunyim_iam-1.3.5.tar.gz | source | sdist | null | false | 12e68cf34deb900ee9c0b89d16eb7fb2 | 8ff9e3d09ca972a5d247f8d4db4a11c48c9962fb756f6a63f6588a7ff9569053 | 62e6785a23d97e4c9de049af36dfa6a91573b7458bf56200572d88992f047d75 | Apache-2.0 | [
"LICENSE"
] | 210 |
2.4 | deepnote-cli | 0.4.0 | Deepnote CLI packaged for PyPI | # deepnote-cli
Python package for the Deepnote CLI.
This package bundles a native `deepnote` executable built from `@deepnote/cli`
using Bun and exposes it as a Python console script.
## Usage
```bash
deepnote --help
deepnote --version
```
| text/markdown | Deepnote | null | null | null | Apache-2.0 | deepnote, cli, notebook, data-science | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Build Tools",
"Topic :: Utilities"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/deepnote/deepnote",
"Repository, https://github.com/deepnote/deepnote",
"Documentation, https://deepnote.com/docs/getting-started"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:25:58.115723 | deepnote_cli-0.4.0-py3-none-win_amd64.whl | 42,746,180 | cf/2d/6f678caf0b4c4b5c8fccff2db88e8b552b0e6ae4d13b0f65663370a9077d/deepnote_cli-0.4.0-py3-none-win_amd64.whl | py3 | bdist_wheel | null | false | 94445b7d5c2423e20b87ac9f9ef41136 | 173251d4d5b4314ad18a4d0613922893238dc95f82aa9b45bcf0cd9cf6ec1497 | cf2d6f678caf0b4c4b5c8fccff2db88e8b552b0e6ae4d13b0f65663370a9077d | null | [] | 220 |
2.4 | psutierlist-api-wrapper | 1.0.0 | API Wrapper | # psutierlist-api-wrapper
A fully-typed, modern Python wrapper for the internal API used on [psutierlist.org](https://psutierlist.org/).
> [!WARNING]
> This library is not affiliated with the website or anyone behind SPL's PSU Tier List. It may break at any time if the API changes, due to this being an unofficial wrapper that relies on an undocumented API.
## Installation
This package is on PyPI, so you can use `pip`:
```bash
pip install psutierlist-api-wrapper
```
## Usage
Sync usage:
```python
import psutierlist_api_wrapper as paw
def main() -> None:
pages = paw.get_pages() # type is `paw.Response`
if __name__ == "__main__":
main()
```
Async usage:
```python
import asyncio
import psutierlist_api_wrapper as paw
async def main() -> None:
pages = await paw.get_pages_async()
if __name__ == "__main__":
asyncio.run(main())
```
## Docs
I have not written docs yet. The code is pretty simple and self-explanatory, so you can just browse through `psutierlist_api_wrapper/__init__.py` and `psutierlist_api_wrapper/enums.py` at the [GitHub Page](https://github.com/PowerPCFan/psutierlist-api-wrapper) to see how everything is structured. Sorry for the inconvenience!
| text/markdown | null | PowerPCFan <charlie@powerpcfan.xyz> | null | PowerPCFan <charlie@powerpcfan.xyz> | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.28.1"
] | [] | [] | [] | [
"Repository, https://github.com/PowerPCFan/psutierlist-api-wrapper",
"Issues, https://github.com/PowerPCFan/psutierlist-api-wrapper/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T15:25:46.270523 | psutierlist_api_wrapper-1.0.0-py3-none-any.whl | 17,535 | fc/bb/f57669458592ce4573a89c18d98b9a7cfd9ab8de0ac4509e50851a4bf895/psutierlist_api_wrapper-1.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | fe85788147d817f7c775a837de659dc3 | ce2615e55d256d6f13a4c0c57040b7ff3e8e115c7879bb74bc27d7f8e6fc3e7e | fcbbf57669458592ce4573a89c18d98b9a7cfd9ab8de0ac4509e50851a4bf895 | AGPL-3.0-only | [
"LICENSE"
] | 87 |
2.4 | lino-react | 26.2.1 | The React front end for Lino | ============================
The React front end for Lino
============================
The ``lino_react`` package contains the React front end for Lino.
Project homepage is https://gitlab.com/lino-framework/react
| text/x-rst | null | Rumma & Ko Ltd <info@saffre-rumma.net>, Luc Saffre <luc@saffre-rumma.net> | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>. | Django, React, customized, framework | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Natural Language :: English",
"Natural Language :: French",
"Natural Language :: German",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Topic :: Database :: Front-Ends",
"Topic :: Office/Business",
"Topic :: Software Development :: Libraries :: Application Frameworks"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"deepmerge",
"lino",
"atelier; extra == \"testing\"",
"hatch; extra == \"testing\"",
"pytest; extra == \"testing\"",
"pytest-cov; extra == \"testing\"",
"pytest-env; extra == \"testing\"",
"pytest-forked; extra == \"testing\"",
"pytest-html; extra == \"testing\""
] | [] | [] | [] | [
"Homepage, https://react.lino-framework.org",
"Repository, https://gitlab.com/lino-framework/react"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:25:36.703052 | lino_react-26.2.1.tar.gz | 21,031,421 | df/f1/7dde552a99b01bcc0d4711f03aa7a71b0cc0e3014da288c4433421135c7d/lino_react-26.2.1.tar.gz | source | sdist | null | false | 1510bc44d543c925c75bdfc802d45e05 | a9df034494e5a6cb2550047c9649c82cb4eec6fc2424980d08a70085f7e58d75 | dff17dde552a99b01bcc0d4711f03aa7a71b0cc0e3014da288c4433421135c7d | null | [
"COPYING"
] | 201 |
2.4 | agenttrace-py | 0.1.2 | Deterministic execution and replay layer for AI Agents | # AgentTrace Python SDK
The deterministic execution and replay layer for AI Agents.
Stop guessing why your agents crash in production. AgentTrace records every LLM call, tool execution, and intermediate state—letting you rewind time, fork the execution, and test fixes instantly.
## Installation
```bash
pip install agenttrace
```
## Quick Start (The "Thin" SDK)
AgentTrace works as a thin data-collection layer. Simply decorate your main entry point, and use `with agenttrace.step()` to log intermediate actions. The SDK automatically bundles the events and syncs them securely to your dashboard.
```python
import agenttrace

# 1. Initialize with your API Key
agenttrace.init(api_key="at_live_...")

# 2. Add the @agenttrace.run decorator to track the entire execution
@agenttrace.run(name="checkout_agent")
def main():
    # 3. Log an LLM call or thought process
    with agenttrace.step("Planning Phase", type="thought"):
        plan = "Validating the order."

    # 4. Log a tool execution
    with agenttrace.step("Validate Order", type="tool_call", input={"order_id": 123}):
        # Your custom tool code here
        result = {"status": "valid"}
        agenttrace.set_result(result)

    return {"success": True}

if __name__ == "__main__":
    main()
```
When `main()` finishes, the complete trace is uploaded securely and becomes available in your AgentTrace dashboard for debugging and forking.
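Under the hood, a thin SDK of this kind mostly amounts to buffering structured events in memory and uploading them as one bundle when the run ends. The sketch below illustrates that pattern only; `ThinTracer` and all of its methods are invented for illustration and are not agenttrace's actual implementation.

```python
import contextlib
import time

class ThinTracer:
    """Collects step events in memory and flushes them once at the end of a run."""
    def __init__(self):
        self.events = []

    @contextlib.contextmanager
    def step(self, name, type="thought", input=None):
        start = time.time()
        event = {"name": name, "type": type, "input": input}
        try:
            yield event
            event["status"] = "ok"
        except Exception as exc:
            event["status"] = "error"
            event["error"] = repr(exc)
            raise
        finally:
            event["duration_s"] = time.time() - start
            self.events.append(event)

    def flush(self):
        # In a real SDK this would POST the bundled events to a backend.
        bundle, self.events = self.events, []
        return bundle

tracer = ThinTracer()
with tracer.step("Planning Phase"):
    plan = "Validating the order."
with tracer.step("Validate Order", type="tool_call", input={"order_id": 123}) as ev:
    ev["result"] = {"status": "valid"}

bundle = tracer.flush()
print([e["name"] for e in bundle])  # ['Planning Phase', 'Validate Order']
```

Because the context manager records failures too, a crash inside a step still leaves a complete event trail behind, which is what makes post-hoc replay possible.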
| text/markdown | AgentTrace | hello@agenttrace.com | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://agenttrace.com | null | >=3.8 | [] | [] | [] | [
"requests>=2.20.0",
"typing-extensions>=4.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.1 | 2026-02-20T15:25:27.928856 | agenttrace_py-0.1.2.tar.gz | 5,066 | 2c/be/f3969ca128b6ca9bd2a448b3440d5cc329ddd76787f34376285654f9d3df/agenttrace_py-0.1.2.tar.gz | source | sdist | null | false | 4e07ccb61d1d8c93f9c9e4f203bd5c1e | c2c68681e5659d0b815bd9f47e6044d54050c237da57c449050937c4142efef2 | 2cbef3969ca128b6ca9bd2a448b3440d5cc329ddd76787f34376285654f9d3df | null | [] | 177 |
2.4 | yamlgraph | 0.4.52 | YAML-first framework for building LLM pipelines with LangGraph | # YamlGraph
[](https://pypi.org/project/yamlgraph/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A YAML-first framework for building LLM pipelines using:
- **YAML Graph Configuration** - Declarative pipeline definition with schema validation
- **YAML Prompts** - Declarative prompt templates with Jinja2 support
- **Pydantic Models** - Structured LLM outputs
- **Multi-Provider LLMs** - Anthropic, Google/Gemini, Mistral, OpenAI, Replicate, xAI, LM Studio
- **LangGraph** - Pipeline orchestration with resume support
- **Human-in-the-Loop** - Interrupt nodes for user input
- **Streaming** - Token-by-token LLM output (prompt-level and graph-level)
- **Async Support** - FastAPI-ready async execution
- **Checkpointers** - Memory, SQLite, and Redis state persistence
- **Graph-Relative Prompts** - Colocate prompts with graphs
- **JSON Extraction** - Auto-extract JSON from LLM responses
- **LangSmith** - Observability and tracing
- **JSON Export** - Result serialization
- **Contrib Utilities** - Shared helpers for map results and Pydantic serialization
## What is YAMLGraph?
**YAMLGraph** is a declarative LLM pipeline orchestration framework that lets you define complex AI workflows entirely in YAML—no Python required for 60-80% of use cases. Built on LangGraph, it provides multi-provider LLM support (Anthropic, Google/Gemini, OpenAI, Mistral, Replicate, xAI, LM Studio), parallel batch processing via map nodes (using LangGraph Send), LLM-driven conditional routing, graph-level streaming, and human-in-the-loop interrupts with checkpointing. Pipelines are version-controlled, linted, and observable via LangSmith. The key insight: by constraining the API surface to YAML + Jinja2 templates + Pydantic schemas, YAMLGraph trades some flexibility for dramatically faster prototyping, easier maintenance, and built-in best practices—making it ideal for teams who want production-ready AI pipelines without the complexity of full-code frameworks.
## Installation
### From PyPI
```bash
pip install yamlgraph
# With Redis support for distributed checkpointing
pip install yamlgraph[redis]
```
### From Source
```bash
git clone https://github.com/sheikkinen/yamlgraph.git
cd yamlgraph
pip install -e ".[dev]"
```
## Quick Start
### 1. Create a Prompt
Create `prompts/greet.yaml`:
```yaml
system: |
  You are a friendly assistant.
user: |
  Say hello to {name} in a {style} way.
```
### 2. Create a Graph
Create `graphs/hello.yaml`:
```yaml
version: "1.0"
name: hello-world
nodes:
  greet:
    type: llm
    prompt: greet
    variables:
      name: "{state.name}"
      style: "{state.style}"
    state_key: greeting
edges:
  - from: START
    to: greet
  - from: greet
    to: END
```
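To make the execution model concrete, here is a toy interpreter for a graph like the one above: it resolves `{state.*}` variable placeholders and follows the edges from `START` to `END`. This is an illustrative sketch with a stubbed `fake_llm`, not yamlgraph's actual engine, which compiles to LangGraph.

```python
# Illustrative only: a toy interpreter, not yamlgraph's engine.
graph = {
    "nodes": {
        "greet": {
            "type": "llm",
            "prompt": "greet",
            "variables": {"name": "{state.name}", "style": "{state.style}"},
            "state_key": "greeting",
        }
    },
    "edges": [("START", "greet"), ("greet", "END")],
}

def render(template, state):
    # Resolve "{state.key}" placeholders against the state dict.
    for key, value in state.items():
        template = template.replace("{state.%s}" % key, str(value))
    return template

def fake_llm(prompt_name, variables):
    # Stand-in for the real LLM call an `llm` node would make.
    return f"[{prompt_name}] Hello {variables['name']} ({variables['style']})"

def run(graph, state):
    next_node = dict(graph["edges"])
    name = next_node["START"]
    while name != "END":
        node = graph["nodes"][name]
        variables = {k: render(v, state) for k, v in node["variables"].items()}
        state[node["state_key"]] = fake_llm(node["prompt"], variables)
        name = next_node[name]
    return state

result = run(graph, {"name": "World", "style": "enthusiastic"})
print(result["greeting"])  # [greet] Hello World (enthusiastic)
```

The `state_key` field is what carries a node's output into shared state, which is why the result of the `greet` node is read back as `result["greeting"]`.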
### 3. Set API Key
```bash
export ANTHROPIC_API_KEY=your-key-here
# Or: export MISTRAL_API_KEY=... or OPENAI_API_KEY=...
```
### 4. Run It
```bash
yamlgraph graph run graphs/hello.yaml --var name="World" --var style="enthusiastic"
```
Or use the Python API:
```python
from yamlgraph import load_and_compile
graph = load_and_compile("graphs/hello.yaml")
app = graph.compile()
result = app.invoke({"name": "World", "style": "enthusiastic"})
print(result["greeting"])
```
With tracing (when LangSmith is configured via `.env` or env vars):
```python
from yamlgraph import load_and_compile, create_tracer, get_trace_url, inject_tracer_config
graph = load_and_compile("graphs/hello.yaml")
app = graph.compile()
tracer = create_tracer() # None if LangSmith not configured
result = app.invoke({"name": "World"}, config=inject_tracer_config({}, tracer))
print(get_trace_url(tracer)) # https://smith.langchain.com/o/.../r/...
```
---
## More Examples
```bash
# Content generation pipeline
yamlgraph graph run examples/demos/yamlgraph/graph.yaml --var topic="AI" --var style=casual
# Sentiment-based routing
yamlgraph graph run examples/demos/router/graph.yaml --var message="I love this!"
# Self-correction loop (Reflexion pattern)
yamlgraph graph run examples/demos/reflexion/graph.yaml --var topic="climate change"
# AI agent with shell tools
yamlgraph graph run examples/demos/git-report/graph.yaml --var input="What changed recently?"
# Web research agent (requires: pip install yamlgraph[websearch])
yamlgraph graph run examples/demos/web-research/graph.yaml --var topic="LangGraph tutorials"
# Show LangSmith trace URL (requires LANGCHAIN_TRACING_V2=true + LANGSMITH_API_KEY)
yamlgraph graph run examples/demos/yamlgraph/graph.yaml --var topic="AI" --share-trace
```
📂 **More examples:** See [examples/README.md](examples/README.md) for the full catalog including:
- Parallel fan-out with map nodes
- Human-in-the-loop interview flows
- Code quality analysis pipelines
- FastAPI integrations
## Documentation
📚 **Start here:** [reference/README.md](reference/README.md) - Complete index of all 18 reference docs
### Reading Order
| Level | Document | Description |
|-------|----------|-------------|
| 🟢 Beginner | [Quick Start](reference/quickstart.md) | Create your first pipeline in 5 minutes |
| 🟢 Beginner | [Graph YAML](reference/graph-yaml.md) | Node types, edges, tools, state |
| 🟢 Beginner | [Prompt YAML](reference/prompt-yaml.md) | Schema and template syntax |
| 🟡 Intermediate | [Common Patterns](reference/patterns.md) | Router, loops, agents |
| 🟡 Intermediate | [Map Nodes](reference/map-nodes.md) | Parallel fan-out processing |
| 🟡 Intermediate | [Interrupt Nodes](reference/interrupt-nodes.md) | Human-in-the-loop |
| 🔴 Advanced | [Subgraph Nodes](reference/subgraph-nodes.md) | Modular graph composition |
| 🔴 Advanced | [Async Usage](reference/async-usage.md) | FastAPI integration |
| 🔴 Advanced | [Checkpointers](reference/checkpointers.md) | State persistence |
**More resources:**
- **[Examples](examples/)** - Working demos and production patterns
- **[Feature Requests](feature-requests/)** - Roadmap and planned improvements
- **[ARCHITECTURE.md](ARCHITECTURE.md)** - Internal architecture for core developers
## Architecture
🏗️ **For core developers:** See [ARCHITECTURE.md](ARCHITECTURE.md) for:
- Module architecture and data flows
- Extension points (adding node types, providers, tools)
- Testing strategy and patterns
- Code quality rules
See [ARCHITECTURE.md](ARCHITECTURE.md#file-reference) for detailed module line counts and responsibilities.
## Key Patterns
📚 **Full guide:** See [reference/patterns.md](reference/patterns.md) for comprehensive patterns including:
- Linear pipelines with dependencies
- Branching and conditional routing
- Map-reduce parallel processing
- LLM-based routing
- Human-in-the-loop workflows
- Self-correction loops (Reflexion)
- Agent patterns with tools
## Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `ANTHROPIC_API_KEY` | Yes* | Anthropic API key (* if using Anthropic) |
| `MISTRAL_API_KEY` | No | Mistral API key (required if using Mistral) |
| `OPENAI_API_KEY` | No | OpenAI API key (required if using OpenAI) |
| `PROVIDER` | No | Default LLM provider (anthropic/mistral/openai) |
| `ANTHROPIC_MODEL` | No | Anthropic model (default: claude-haiku-4-5) |
| `MISTRAL_MODEL` | No | Mistral model (default: mistral-large-latest) |
| `OPENAI_MODEL` | No | OpenAI model (default: gpt-4o) |
| `REPLICATE_API_TOKEN` | No | Replicate API token |
| `REPLICATE_MODEL` | No | Replicate model (default: ibm-granite/granite-4.0-h-small) |
| `XAI_API_KEY` | No | xAI API key |
| `XAI_MODEL` | No | xAI model (default: grok-4-1-fast-reasoning) |
| `LMSTUDIO_BASE_URL` | No | LM Studio server URL (default: http://localhost:1234/v1) |
| `GOOGLE_API_KEY` | No | Google API key (required if using Google/Gemini) |
| `GOOGLE_MODEL` | No | Google model (default: gemini-2.0-flash) |
| `LMSTUDIO_MODEL` | No | LM Studio model (default: qwen2.5-coder-7b-instruct) |
| `LANGCHAIN_TRACING_V2` | No | Enable LangSmith tracing (`true` to enable) |
| `LANGSMITH_API_KEY` | No | LangSmith API key |
| `LANGCHAIN_ENDPOINT` | No | LangSmith endpoint URL |
| `LANGCHAIN_PROJECT` | No | LangSmith project name |
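The `PROVIDER` and per-provider `*_MODEL` variables compose in the obvious way. The sketch below shows how such a resolver might work; the defaults mirror the table above, but the helper itself is illustrative, not YAMLGraph's actual code:

```python
# Illustrative resolver for provider/model settings; defaults mirror
# the table above. Not YAMLGraph's actual implementation.
DEFAULT_MODELS = {
    "anthropic": "claude-haiku-4-5",
    "mistral": "mistral-large-latest",
    "openai": "gpt-4o",
}

def resolve_model(env: dict[str, str]) -> tuple[str, str]:
    """Pick (provider, model) from environment-style settings."""
    provider = env.get("PROVIDER", "anthropic")
    model = env.get(f"{provider.upper()}_MODEL", DEFAULT_MODELS.get(provider, ""))
    return provider, model

print(resolve_model({"PROVIDER": "openai"}))  # → ('openai', 'gpt-4o')
```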
## Testing
Run the test suite:
```bash
# Run all tests
pytest tests/ -v
# Run only unit tests
pytest tests/unit/ -v
# Run only integration tests
pytest tests/integration/ -v
# Run with coverage report
pytest tests/ --cov=yamlgraph --cov-report=term-missing
# Run with HTML coverage report
pytest tests/ --cov=yamlgraph --cov-report=html
# Then open htmlcov/index.html
```
See [ARCHITECTURE.md](ARCHITECTURE.md#testing-strategy) for testing patterns and fixtures.
## Security
### Shell Command Injection Protection
Shell tools (defined in `graphs/*.yaml` with `type: tool`) execute commands with variable substitution. All user-provided variable values are sanitized using `shlex.quote()` to prevent shell injection attacks.
```yaml
# In graph YAML - command template is trusted
tools:
  git_log:
    type: shell
    command: "git log --author={author} -n {count}"
```
**Security model:**
- ✅ **Command templates** (from YAML) are trusted configuration
- ✅ **Variable values** (from user input/LLM) are escaped with `shlex.quote()`
- ✅ **Complex types** (lists, dicts) are JSON-serialized then quoted
- ✅ **No `eval()`** - condition expressions parsed with regex, not evaluated
**Example protection:**
```python
# Malicious input is safely escaped
variables = {"author": "$(rm -rf /)"}
# Executed as: git log --author='$(rm -rf /)' (quoted, harmless)
```
See [yamlgraph/tools/shell.py](yamlgraph/tools/shell.py) for implementation details.
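The quoting described above can be reproduced with the standard library alone. This standalone sketch shows both the scalar case and the JSON-serialize-then-quote case for complex values:

```python
import json
import shlex

# A malicious scalar value is wrapped in single quotes, so the shell
# treats it as literal text rather than a command substitution.
author = shlex.quote("$(rm -rf /)")
print(f"git log --author={author}")  # → git log --author='$(rm -rf /)'

# Complex values (lists, dicts) are JSON-serialized first,
# then quoted the same way.
tags = shlex.quote(json.dumps(["a", "b"]))
print(f"tag-tool --tags={tags}")
```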
### ⚠️ Security Considerations
**Shell tools execute real commands** on your system. While variables are sanitized:
1. **Command templates are trusted** - Only use shell tools from trusted YAML configs
2. **No sandboxing** - Commands run with your user permissions
3. **Agent autonomy** - Agent nodes may call tools unpredictably
4. **Review tool definitions** - Audit `tools:` section in graph YAML before running
For production deployments, consider:
- Running in a container with limited permissions
- Restricting available tools to read-only operations
- Implementing approval workflows for sensitive operations
## License
[MIT w/ SWC](LICENSE)
## Remember
Read the Scripture in .github/copilot-instructions.md.
Base process with Opus 4.5:
- new feature specific chat
- "Let us pray"
- "Plan new project/xxxx"
- "Judge" & "Amend" loop
- "Enforce"
| text/markdown | Sami Heikkinen | null | null | null | MIT | yaml, llm, langgraph, pipeline, ai, agent | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"langchain-anthropic>=0.3.0",
"langchain-google-genai>=2.0.0",
"langchain-mistralai>=0.2.0",
"langchain-openai>=0.3.0",
"langgraph>=0.2.0",
"langgraph-checkpoint-sqlite>=2.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"langsmith>=0.1.0",
"jinja2>=3.1.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"radon>=6.0.0; extra == \"dev\"",
"vulture>=2.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"jedi>=0.19.0; extra == \"analysis\"",
"ddgs>=6.0.0; extra == \"websearch\"",
"tavily-python>=0.5.0; extra == \"tavily\"",
"replicate>=0.25.0; extra == \"storyboard\"",
"langchain-litellm>=0.3.0; extra == \"replicate\"",
"langgraph-checkpoint-redis>=0.3.0; extra == \"redis\"",
"redis>=5.0.0; extra == \"redis-simple\"",
"orjson>=3.9.0; extra == \"redis-simple\"",
"lancedb>=0.4.0; extra == \"rag\"",
"openai>=1.0.0; extra == \"rag\"",
"fastapi>=0.110.0; extra == \"booking\"",
"httpx>=0.27.0; extra == \"booking\"",
"uvicorn>=0.27.0; extra == \"booking\"",
"fastapi>=0.110.0; extra == \"npc\"",
"python-multipart>=0.0.20; extra == \"npc\"",
"uvicorn>=0.27.0; extra == \"npc\"",
"feedparser>=6.0.0; extra == \"digest\"",
"resend>=2.0.0; extra == \"digest\"",
"beautifulsoup4>=4.12.0; extra == \"digest\"",
"httpx>=0.27.0; extra == \"digest\"",
"fastapi>=0.110.0; extra == \"digest\"",
"slowapi>=0.1.9; extra == \"digest\"",
"uvicorn>=0.27.0; extra == \"digest\"",
"python-multipart>=0.0.9; extra == \"digest\"",
"statemachine-engine>=1.0.70; extra == \"fsm\"",
"mcp>=1.0.0; extra == \"mcp\""
] | [] | [] | [] | [
"Homepage, https://github.com/sheikkinen/yamlgraph",
"Repository, https://github.com/sheikkinen/yamlgraph",
"Documentation, https://github.com/sheikkinen/yamlgraph#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:25:24.758404 | yamlgraph-0.4.52.tar.gz | 106,035 | 54/b9/25ebfd072dbe681b4fac270b759b2d5c6a735a3378d0bff6d477b119dce3/yamlgraph-0.4.52.tar.gz | source | sdist | null | false | 1b2a489362f59d40a0dd3be1f5c46cf6 | f33c249e4246ea6ac3c1696057b795622d23c0238b85799f373de220a2139c23 | 54b925ebfd072dbe681b4fac270b759b2d5c6a735a3378d0bff6d477b119dce3 | null | [
"LICENSE"
] | 186 |
2.4 | aixtools | 0.10.14 | Tools for AI exploration and debugging | # AIXtools
AIXtools is a comprehensive Python library for AI agent development, debugging, and deployment. It provides a complete toolkit for building, testing, and monitoring AI agents with support for multiple model providers, advanced logging, and agent-to-agent communication.
## Capabilities
- **[Installation](#installation)**
- **[Environment Configuration](#environment-configuration)**
- **[Agents](#agents)** - Core agent functionality
- Basic Agent Usage
- Agent Development & Management
- Agent Batch Processing
- Node Debugging and Visualization
- **[Context Engineering](#context-engineering)** - Transform files into agent-readable content
- File Type Processors
- Configuration
- Processing Examples
- **[Agent-to-Agent Communication](#a2a-agent-to-agent-communication)** - Inter-agent communication framework
- Core Features
- Google SDK Integration
- Remote Agent Connections
- **[Testing & Tools](#testing--tools)** - Comprehensive testing utilities
- Running Tests
- Testing Utilities
- Mock Tool System
- Model Patch Caching
- Agent Mock
- FaultyMCP Testing Server
- MCP Tool Doctor
- Tool Doctor
- Evaluations
- **[Logging & Debugging](#logging--debugging)** - Advanced logging and debugging
- Basic Logging
- Log Viewing Application
- Object Logging
- MCP Logging
- **[Databases](#databases)** - Traditional and vector database support
- **[Chainlit & HTTP Server](#chainlit--http-server)** - Web interfaces and server framework
- Chainlit Integration
- HTTP Server Framework
- **[Programming Utilities](#programming-utilities)** - Essential utilities
- Persisted Dictionary
- Enum with Description
- Context Management
- Configuration Management
- File Utilities
- Chainlit Utilities
- Truncation Utilities
## Installation
```bash
uv add aixtools
```
**Updating**
```bash
uv add --upgrade aixtools
```
## Environment Configuration
AIXtools requires environment variables for model providers.
**IMPORTANT:** Create a `.env` file based on `.env_template`.
Here is an example configuration:
```bash
# Model family (azure, openai, or ollama)
MODEL_FAMILY=azure
MODEL_TIMEOUT=120
# Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://your_endpoint.openai.azure.com
AZURE_OPENAI_API_VERSION=2024-06-01
AZURE_OPENAI_API_KEY=your_secret_key
AZURE_MODEL_NAME=gpt-4o
# OpenAI
OPENAI_MODEL_NAME=gpt-4.5-preview
OPENAI_API_KEY=openai_api_key
# Ollama
OLLAMA_MODEL_NAME=llama3.2:3b-instruct-fp16
OLLAMA_LOCAL_URL=http://localhost:11434/v1
```
## Agents
### Basic Agent Usage
```python
import asyncio

from aixtools.agents.agent import get_agent, run_agent

async def main():
    agent = get_agent(system_prompt="You are a helpful assistant.")
    result, nodes = await run_agent(agent, "Explain quantum computing")
    print(result)

asyncio.run(main())
```
### Agent Development & Management
The agent system provides a unified interface for creating and managing AI agents across different model providers.
```python
from aixtools.agents.agent import get_agent, run_agent
# Create an agent with default model
agent = get_agent(system_prompt="You are a helpful assistant.")
# Run the agent
result, nodes = await run_agent(agent, "Tell me about AI")
```
### Node Debugging and Visualization
The `print_nodes` module provides clean, indented output of the nodes from an agent execution for easy reading.
```python
from aixtools.agents.print_nodes import print_nodes, print_node
from aixtools.agents.agent import get_agent, run_agent
agent = get_agent(system_prompt="You are a helpful assistant.")
result, nodes = await run_agent(agent, "Explain quantum computing")
# Print all execution nodes for debugging
print_nodes(nodes)
```
**Features:**
- Node Type Detection: Automatically handles different node types (`UserPromptNode`, `CallToolsNode`, `ModelRequestNode`, `End`)
- Formatted Output: Provides clean, indented output for easy reading
- Tool Call Visualization: Shows tool names and arguments for tool calls
- Text Content Display: Formats text parts with proper indentation
- Model Request Summary: Shows character count for model requests to avoid verbose output
**Node Types Supported:**
- `UserPromptNode` - Displays user prompts with indentation
- `CallToolsNode` - Shows tool calls with names and arguments
- `ModelRequestNode` - Summarizes model requests with character count
- `End` - Marks the end of execution (output suppressed by default)
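A simplified version of this dispatch logic looks like the following; the node classes here are hypothetical stand-ins for the real node types, kept only to illustrate the per-type formatting (including the character-count summary for model requests):

```python
# Simplified dispatch in the spirit of print_nodes; the node classes
# are hypothetical stand-ins for the real agent node types.
from dataclasses import dataclass

@dataclass
class UserPromptNode:
    prompt: str

@dataclass
class ModelRequestNode:
    request: str

def format_node(node) -> str:
    if isinstance(node, UserPromptNode):
        return f"  UserPrompt: {node.prompt}"
    if isinstance(node, ModelRequestNode):
        # Summarize rather than dump the full request body.
        return f"  ModelRequest: {len(node.request)} chars"
    return f"  {type(node).__name__}"

for node in [UserPromptNode("hi"), ModelRequestNode("x" * 1500)]:
    print(format_node(node))
```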
### Agent Batch Processing
Process multiple agent queries simultaneously with built-in concurrency control and result aggregation.
```python
from aixtools.agents.agent_batch import agent_batch, AgentQueryParams
# Create query parameters
query_parameters = [
    AgentQueryParams(prompt="What is the meaning of life"),
    AgentQueryParams(prompt="Who is the prime minister of Canada"),
]

# Run queries in batches
async for result in agent_batch(query_parameters):
    print(result)
```
## Context Engineering
Transform file formats into agent-readable content with enforced size limits to prevent context overflow. The main entry point is the `read_file()` function in `aixtools/agents/context/reader.py`, which provides automatic file type detection and delegates to specialized processors for each file type.
### Basic Usage
The `read_file()` function in `reader.py` is the main interface for processing files. It automatically detects file types and applies appropriate truncation strategies.
```python
from aixtools.agents.context.reader import read_file
from pathlib import Path
# Read a file with automatic type detection and truncation
result = read_file(Path("data.csv"))
if result.success:
    print(f"File type: {result.file_type}")
    print(f"Content length: {len(result.content)}")
    print(f"Truncation info: {result.truncation_info}")
    print(result.content)

# Optionally specify custom tokenizer and limits
result = read_file(
    Path("large_file.json"),
    max_tokens_per_file=10000,
    max_total_output=100000,
)
```
### Architecture
The context engineering system is organized with `reader.py` as the main interface:
- `reader.py` - Main `read_file()` function with file type detection and processing coordination
- `config.py` - Configurable size limits and thresholds
- `processors/` - Specialized processors for each file type (text, code, JSON, CSV, PDF, etc.)
- `data_models.py` - Data classes for results and metadata
### Supported File Types
- Text files (`.txt`, `.log`, `.md`)
- Code files (Python, JavaScript, etc.)
- Structured data (`JSON`, `YAML`, `XML`)
- Tabular data (`CSV`, `TSV`)
- Documents (`PDF`, `DOCX`)
- Spreadsheets (`.xlsx`, `.xls`, `.ods`)
- Images (`PNG`, `JPEG`, `GIF`, `WEBP`)
- Audio files
### Key Features
- Automatic file type detection based on MIME types and extensions
- Token-based truncation with configurable limits per file
- Intelligent content sampling (head + tail rows for tabular data)
- Structure-aware truncation for `JSON`, `YAML`, and `XML`
- Markdown conversion for documents using `markitdown`
- Binary content support for images with metadata extraction
- Comprehensive error handling with partial results when possible
### Configuration
All limits are configurable via environment variables:
```bash
# Output limits
MAX_TOKENS_PER_FILE=5000
MAX_TOTAL_OUTPUT=50000
# Text truncation
MAX_LINES=200
MAX_LINE_LENGTH=1000
# Tabular truncation
MAX_COLUMNS=50
DEFAULT_ROWS_HEAD=20
DEFAULT_ROWS_MIDDLE=10
DEFAULT_ROWS_TAIL=10
MAX_CELL_LENGTH=500
# Images
MAX_IMAGE_ATTACHMENT_SIZE=2097152 # 2MB
```
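The head + tail row sampling these limits control can be sketched in a few lines. This is a simplified illustration of the idea, not the library's implementation:

```python
# Simplified head + tail row sampling; illustration only,
# not aixtools' actual truncation code.
def sample_rows(rows: list, head: int = 20, tail: int = 10) -> list:
    if len(rows) <= head + tail:
        return rows  # small tables pass through untouched
    marker = [f"... {len(rows) - head - tail} rows omitted ..."]
    return rows[:head] + marker + rows[-tail:]

rows = [f"row{i}" for i in range(100)]
sampled = sample_rows(rows)
print(len(sampled))  # → 31 (20 head + 1 marker + 10 tail)
```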
### Processing Examples
The recommended approach is to use the `read_file()` function which automatically handles file type detection and processing. However, you can also use individual processors directly for specific file types.
#### Using read_file() (Recommended)
```python
from aixtools.agents.context.reader import read_file
from pathlib import Path
# Process any file type automatically
result = read_file(Path("data.csv"))
if result.success:
    print(result.content)
# Works with all supported types
pdf_result = read_file(Path("report.pdf"))
excel_result = read_file(Path("workbook.xlsx"))
json_result = read_file(Path("config.json"))
```
#### Processing Tabular Data Directly
```python
from aixtools.agents.context.processors.tabular import process_tabular
from pathlib import Path
# Process specific row range from large CSV
result = process_tabular(
    file_path=Path("large_data.csv"),
    start_row=100,
    end_row=200,
    max_columns=20,
    max_cell_length=500,
)
print(f"Rows shown: {result.truncation_info.rows_shown}")
print(f"Columns shown: {result.truncation_info.columns_shown}")
```
#### Processing Spreadsheets Directly
```python
from aixtools.agents.context.processors.spreadsheet import process_spreadsheet
from pathlib import Path
# Process Excel file with multiple sheets
result = process_spreadsheet(
    file_path=Path("workbook.xlsx"),
    max_sheets=3,
    max_rows_per_sheet_head=20,
    max_rows_per_sheet_tail=10,
)
# Content includes all processed sheets with truncation info
print(result.content)
```
#### Processing Documents Directly
```python
from aixtools.agents.context.processors.document import process_document
from pathlib import Path
# Convert PDF to markdown and truncate
result = process_document(
    file_path=Path("report.pdf"),
    max_lines=200,
    max_line_length=1000,
)
if result.was_extracted:
    print("Document successfully converted to markdown")
    print(result.content)
```
### Output Format
All processors return consistent output with metadata:
```
File: data.csv
Columns: 8 (of 20000 total)
Rows: 20 (of 1000000 total)
col1,col2,...,col8
value1,value2,...
...
Truncated: columns: 8 of 20000, rows: 20 of 1000000, 45 cells
```
The context engineering system ensures agents receive properly formatted, size-limited content that fits within token budgets while preserving the most relevant information from each file type.
## A2A (Agent-to-Agent Communication)
The `A2A` module provides a comprehensive framework for enabling sophisticated communication between AI agents across different environments and platforms. It includes Google SDK integration, `PydanticAI` adapters, and `FastA2A` application conversion capabilities.
### Core Features
**Agent Application Conversion**
- Convert `PydanticAI` agents into `FastA2A` applications (deprecated)
- Support for session metadata extraction and context management
- Custom worker classes with enhanced data part support
- Automatic handling of user and session identification
**Remote Agent Connections**
- Establish connections between agents across different environments
- Asynchronous message sending with task polling capabilities
- Terminal state detection and error handling
- Support for various message types including text, files, and data
**Google SDK Integration**
- Native integration with Google's A2A SDK
- Card-based agent representation and discovery
- `PydanticAI` adapter for seamless Google SDK compatibility
- Storage and execution management for agent interactions
### Basic Usage
Enable sophisticated agent interactions with Google SDK integration and `PydanticAI` adapters.
```python
from aixtools.a2a.google_sdk.remote_agent_connection import RemoteAgentConnection
from aixtools.a2a.app import agent_to_a2a
# Convert a PydanticAI agent to FastA2A application
a2a_app = agent_to_a2a(
    agent=my_agent,
    name="MyAgent",
    description="A helpful AI assistant",
    skills=[{"name": "chat", "description": "General conversation"}],
)
# Connect agents across different environments
connection = RemoteAgentConnection(card=agent_card, client=a2a_client)
response = await connection.send_message_with_polling(message)
```
### Postgres DB Store for A2A agent
See implementation: `aixtools/a2a/google_sdk/store`
#### Alembic
To take full control of database schema management, Alembic is used to handle database migrations.
Make sure that google-sdk Store objects are created with the parameter `create_table=False`:
```python
from a2a.server.tasks import DatabaseTaskStore
...
task_store=DatabaseTaskStore(engine=db_engine, create_table=False)
```
#### Setting up the database and applying migrations (manually, if needed)
Configure the `POSTGRES_URL` environment variable:
```.dotenv
POSTGRES_URL=postgresql+asyncpg://user:password@localhost:5432/a2a_magic_db
```
```shell
# From within your a2a service:
# Activate your virtual environment
kzwk877@degfqx35d621DD a2a_magic_service % source .venv/bin/activate
# set the PATH_TO_ALEMBIC_CONFIG environment variable to point to the alembic configuration directory
(a2a_magic_service) kzwk877@degfqx35d621DD a2a_magic_service % export PATH_TO_ALEMBIC_CONFIG="$(pwd)/.venv/lib/python3.12/site-packages/aixtools/a2a/google_sdk/store"
# Make sure that the database exists
(a2a_magic_service) kzwk877@degfqx35d621DD a2a_magic_service % uv run "${PATH_TO_ALEMBIC_CONFIG}/ensure_database.py"
2025-11-11 10:08:51.501 WARNING [root] Looking for '.env' file in default directory
2025-11-11 10:08:52.750 INFO [root] Using .env file at '/PATH_TO_A2A_SERVICE/a2a_magic_service/.env'
2025-11-11 10:08:52.751 INFO [root] Using MAIN_PROJECT_DIR='/PATH_TO_A2A_SERVICE/a2a_magic_service'
2025-11-11 10:08:52.752 WARNING [root] Using DATA_DIR='/app/data'
2025-11-11 10:08:52.757 INFO [__main__] Starting database creation script...
...
2025-11-11 10:08:52.821 INFO [__main__] Creating database 'a2a_magic_db'...
2025-11-11 10:08:52.904 INFO [__main__] Database 'a2a_magic_db' created successfully
...
2025-11-11 10:08:52.921 INFO [__main__] Database creation script completed successfully!
# Apply alembic migrations
(a2a_magic_service) kzwk877@degfqx35d621DD a2a_magic_service % alembic --config "${PATH_TO_ALEMBIC_CONFIG}/alembic.ini" upgrade head
2025-11-11 10:11:34.185 WARNING [root] Looking for '.env' file in default directory
2025-11-11 10:11:35.046 WARNING [root] Looking for '.env' file at '/PATH_TO_A2A_SERVICE/a2a_magic_service'
2025-11-11 10:11:35.047 INFO [root] Using .env file at '/PATH_TO_A2A_SERVICE/a2a_magic_service/.env'
2025-11-11 10:11:35.048 INFO [root] Using MAIN_PROJECT_DIR='/PATH_TO_A2A_SERVICE/a2a_magic_service'
2025-11-11 10:11:35.049 WARNING [root] Using DATA_DIR='/app/data'
2025-11-11 10:11:35.054 INFO [env_py] Using database URL for migrations: postgresql://user:password@localhost:5432/a2a_magic_db
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 68c6975ed20b, Added a2a-sdk Task table
```
#### Schema modifications
If new schema modifications have been introduced with a new version of the a2a SDK, the suggested way to create new Alembic migrations is:
- launch the a2a service with `create_table=True` passed to `DatabaseTaskStore`
- make sure that all new tables/columns have been created in the database (a new request to the a2a server may be needed)
- create a new Alembic migration script
```shell
(a2a_magic_service) kzwk877@degfqx35d621DD % alembic --config "${PATH_TO_ALEMBIC_CONFIG}/alembic.ini" revision --autogenerate -m "New table introduced"
```
- review the generated migration script
- apply and test
## Databases
### Database Integration
Support for both traditional and vector databases with seamless integration.
```python
from aixtools.db.database import Database
from aixtools.db.vector_db import VectorDB
# Traditional database
db = Database("sqlite:///app.db")
# Vector database for embeddings
vector_db = VectorDB()
vector_db.add_documents(documents)
```
## Logging & Debugging
AIXtools provides functionality for logging and debugging.
### Basic Logging and Debugging
```python
from aixtools.agents.agent import get_agent, run_agent
async def main():
    # Create an agent
    agent = get_agent(system_prompt="You are a helpful assistant.")
    # Run agent - logging is automatic via ObjectLogger
    result, nodes = await run_agent(
        agent,
        "Explain quantum computing",
        debug=True,  # Enable debug logging
        log_model_requests=True,  # Log model requests/responses
    )
    print(f"Result: {result}")
    print(f"Logged {len(nodes)} nodes")
```
### Log Viewing Application
Interactive `Streamlit` application for analyzing logged objects and debugging agent behavior.
**Features:**
- Log file selection and filtering
- Node visualization with expand/collapse
- Export capabilities to `JSON`
- Regex pattern matching
- Real-time log monitoring
```bash
# Run the log viewer
log_view
# Or specify custom log directory
log_view /path/to/logs
```
### Object Logging & Debugging
Advanced logging system with object serialization and visual debugging tools.
```python
from aixtools.logging.log_objects import ObjectLogger
# Log any pickleable object
with ObjectLogger() as logger:
    logger.log({"message": "Hello, world!"})
    logger.log(agent_response)
```
### MCP Logging
AIXtools provides MCP support for both client and server implementations with easier logging for debugging purposes.
**Example:**
Let's assume we have an MCP server that runs an agent tool.
Note that the `ctx: Context` parameter is passed to `run_agent()`; this enables logging to the MCP client.
```python
@mcp.tool
async def my_tool_with_agent(query: str, ctx: Context) -> str:
    """A tool that uses an agent to process the query."""
    agent = get_agent()
    ret, nodes = await run_agent(agent=agent, prompt=query, ctx=ctx)  # Enable MCP logging
    return str(ret)
```
On the client side, you can create an agent connected to the MCP server; the nodes from the MCP server will be printed to `STDOUT`, so you can see what's going on in the MCP server's agent loop.
```python
mcp = get_mcp_client("http://localhost:8000") # Get an MCP client with a default log handler that prints to STDOUT
agent = get_agent(toolsets=[mcp])
async with agent:
    # The messages from the MCP server will be printed to STDOUT
    ret, nodes = await run_agent(agent, prompt="...")
```
#### MCP Server Logging
Create MCP servers with built-in logging capabilities.
```python
from aixtools.mcp.fast_mcp_log import FastMcpLog
# Use FastMCP server with logging
mcp = FastMcpLog("Demo")
```
## Testing & Tools
AIXtools provides comprehensive testing utilities and diagnostic tools for AI agent development and debugging.
### Running Tests
Execute the test suite using the provided scripts:
```bash
# Run all tests
./scripts/test.sh
# Run unit tests only
./scripts/test_unit.sh
# Run integration tests only
./scripts/test_integration.sh
```
### Testing Utilities
The testing module provides mock tools, model patching, and test utilities for comprehensive agent testing.
```python
from aixtools.testing.mock_tool import MockTool
from aixtools.testing.model_patch_cache import ModelPatchCache
from aixtools.testing.aix_test_model import AixTestModel
# Create mock tools for testing
mock_tool = MockTool(name="test_tool", response="mock response")
# Use model patch caching for consistent test results
cache = ModelPatchCache()
cached_response = cache.get_cached_response("test_prompt")
# Test model for controlled testing scenarios
test_model = AixTestModel()
```
### Mock Tool System
Create and manage mock tools for testing agent behavior without external dependencies.
```python
from aixtools.testing.mock_tool import MockTool
# Create a mock tool with predefined responses
mock_calculator = MockTool(
    name="calculator",
    description="Performs mathematical calculations",
    response_map={
        "2+2": "4",
        "10*5": "50",
    },
)
# Use in agent testing
agent = get_agent(tools=[mock_calculator])
result = await run_agent(agent, "What is 2+2?")
```
### Model Patch Caching
Cache model responses for consistent testing and development workflows.
```python
from aixtools.testing.model_patch_cache import ModelPatchCache
# Initialize cache
cache = ModelPatchCache(cache_dir="./test_cache")
# Cache responses for specific prompts
cache.cache_response("test prompt", "cached response")
# Retrieve cached responses
response = cache.get_cached_response("test prompt")
```
### Model Patching System
Dynamic model behavior modification for testing and debugging.
```python
from aixtools.model_patch.model_patch import ModelPatch
# Apply patches to models for testing
with ModelPatch() as patch:
    patch.apply_response_override("test response")
    result = await agent.run("test prompt")
```
### Agent Mock
Replay previously recorded agent runs without executing the actual agent. Useful for testing, debugging, and creating reproducible test cases.
```python
from pathlib import Path

from aixtools.testing.agent_mock import AgentMock
from aixtools.agents.agent import get_agent, run_agent
# Run an agent and capture its execution
agent = get_agent(system_prompt="You are a helpful assistant.")
result, nodes = await run_agent(agent, "Explain quantum computing")
# Create a mock agent from the recorded nodes
agent_mock = AgentMock(nodes=nodes, result_output=result)
# Save the mock for later use
agent_mock.save(Path("test_data/quantum_mock.pkl"))
# Load and replay the mock agent
loaded_mock = AgentMock.load(Path("test_data/quantum_mock.pkl"))
result, nodes = await run_agent(loaded_mock, "any prompt") # Returns recorded nodes
```
### FaultyMCP Testing Server
A specialized MCP server designed for testing error handling and resilience in MCP client implementations. `FaultyMCP` simulates various failure scenarios including network errors, server crashes, and random exceptions.
**Features:**
- Configurable error probabilities for different request types
- HTTP 404 error injection for `POST`/`DELETE` requests
- Server crash simulation on `GET` requests
- Random exception throwing in tool operations
- MCP-specific error simulation (`ValidationError`, `ResourceError`, etc.)
- Safe mode for controlled testing
```python
from aixtools.mcp.faulty_mcp import run_server_on_port, config
# Configure error probabilities
config.prob_on_post_404 = 0.3 # 30% chance of 404 on POST
config.prob_on_get_crash = 0.1 # 10% chance of crash on GET
config.prob_in_list_tools_throw = 0.2 # 20% chance of exception in tools/list
# Run the faulty server
run_server_on_port()
```
**Command Line Usage:**
```bash
# Run with default error probabilities
python -m aixtools.mcp.faulty_mcp
# Run in safe mode (no errors by default)
python -m aixtools.mcp.faulty_mcp --safe-mode
# Custom configuration
python -m aixtools.mcp.faulty_mcp \
--port 8888 \
--prob-on-post-404 0.2 \
--prob-on-get-crash 0.1 \
--prob-in-list-tools-throw 0.3
```
**Available Test Tools:**
- `add(a, b)` - Reliable addition operation
- `multiply(a, b)` - Reliable multiplication operation
- `always_error()` - Always throws an exception
- `random_throw_exception(a, b, prob)` - Randomly throws exceptions
- `freeze_server(seconds)` - Simulates server freeze
- `throw_404_exception()` - Throws HTTP 404 error
### MCP Tool Doctor
Analyze tools from MCP (Model Context Protocol) servers and receive AI-powered recommendations for improvement.
```python
from aixtools.tools.doctor.mcp_tool_doctor import tool_doctor_mcp
from pydantic_ai.mcp import MCPServerStdio
# Analyze HTTP MCP server
recommendations = await tool_doctor_mcp(mcp_url='http://127.0.0.1:8000/mcp')
for rec in recommendations:
print(rec)
# Analyze STDIO MCP server
server = MCPServerStdio(command='fastmcp', args=['run', 'my_server.py'])
recommendations = await tool_doctor_mcp(mcp_server=server, verbose=True)
```
**Command Line Usage:**
```bash
# Analyze HTTP MCP server (default)
tool_doctor_mcp
# Analyze specific HTTP MCP server
tool_doctor_mcp --mcp-url http://localhost:9000/mcp --verbose
# Analyze STDIO MCP server
tool_doctor_mcp --stdio-command fastmcp --stdio-args run my_server.py --debug
```
**Available options:**
- `--mcp-url URL` - URL of HTTP MCP server (default: `http://127.0.0.1:8000/mcp`)
- `--stdio-command CMD` - Command to run STDIO MCP server
- `--stdio-args ARGS` - Arguments for STDIO MCP server command
- `--verbose` - Enable verbose output
- `--debug` - Enable debug output
### Tool Doctor
Analyze tool usage patterns from agent logs and get optimization recommendations.
```python
from aixtools.tools.doctor.tool_doctor import ToolDoctor
from aixtools.tools.doctor.tool_recommendation import ToolRecommendation
# Analyze tool usage patterns
doctor = ToolDoctor()
analysis = doctor.analyze_tools(agent_logs)
# Get tool recommendations
recommendation = ToolRecommendation()
suggestions = recommendation.recommend_tools(agent_context)
```
### Evaluations
Run comprehensive Agent/LLM evaluations using the built-in evaluation discovery system, which builds on the `Pydantic-AI` evaluation framework (`pydantic_evals`) with AIXtools enhancements.
```bash
# Run all evaluations
python -m aixtools.evals
# Run evaluations with filtering
python -m aixtools.evals --filter "specific_test"
# Run with verbose output and detailed reporting
python -m aixtools.evals --verbose --include-input --include-output --include-reasons
# Specify custom evaluations directory
python -m aixtools.evals --evals-dir /path/to/evals
# Set minimum assertions threshold
python -m aixtools.evals --min-assertions 0.8
```
**Command Line Options:**
- `--evals-dir` - Directory containing `eval_*.py` files (default: `evals`)
- `--filter` - Filter to run only matching evaluations
- `--include-input` - Include input in report output (default: `True`)
- `--include-output` - Include output in report output (default: `True`)
- `--include-evaluator-failures` - Include evaluator failures in report
- `--include-reasons` - Include reasons in report output
- `--min-assertions` - Minimum assertions average required for success (default: `1.0`)
- `--verbose` - Print detailed information about discovery and processing
The evaluation system discovers and runs all `Dataset` objects from `eval_*.py` files in the specified directory, similar to test runners but specifically designed for LLM evaluations using `pydantic_evals`.
**Discovery Mechanism**
The evaluation framework uses an automatic discovery system:
1. **File Discovery**: Scans the specified directory for files matching the pattern `eval_*.py`
2. **Dataset Discovery**: Within each file, looks for variables named `dataset_*` that are instances of `pydantic_evals.Dataset`
3. **Target Function Discovery**: Within the same file, looks for a function or async function named `target_*`. There must be exactly one target function per file.
4. **Function Discovery**: Looks for helper functions with specific prefixes:
- Functions prefixed with `scorer_*` or `evaluator_*` provide custom scorer and evaluator functions, which are applied to every dataset in that file
5. **Filtering**: Supports filtering by module name, file name, dataset name, or fully qualified name
**Example Evaluation File Structure:**
```python
# eval_math_operations.py
from pydantic_evals import Dataset, Case
from pydantic_evals.evaluators import EvaluatorContext
from aixtools.agents.agent import get_agent, run_agent
# This dataset will be discovered automatically
dataset_addition = Dataset(
name="Addition Tests",
cases=[
Case(input="What is 2 + 2?", expected="4"),
Case(input="What is 10 + 5?", expected="15"),
],
evaluators=[...]
)
# This function will be used as the evaluation target
async def target_math_agent(input_text: str) -> str:
# Your agent run logic here
agent = get_agent(system_prompt="You are a math assistant.")
result, _ = await run_agent(agent, input_text)
return result
# This function will be used as evaluator for all datasets (optional)
def evaluator_check_output(ctx: EvaluatorContext) -> bool:
# Your result evaluation logic here
return ctx.output == ctx.expected_output
```
The discovery system will:
- Find `eval_math_operations.py` in the `evals` directory
- Discover `dataset_addition` as an evaluation dataset
- Use `target_math_agent` as the target function for evaluation
- Run each case through the target function and evaluate results
#### Name-Based Discovery
The evaluation system uses name-based discovery for all components:
**Target Functions** (exactly one required per eval file):
- Purpose: The main function being evaluated - processes inputs and returns outputs
- Naming: Functions named `target_*` (e.g., `target_my_function`)
- Signature: `def target_name(inputs: InputType) -> OutputType` or `async def target_name(inputs: InputType) -> OutputType`
- Example: `async def target_math_agent(input_text: str) -> str`
**Scoring Functions** (optional):
- Purpose: Determine if evaluation results meet success criteria
- Naming: Functions named `scorer_*` (e.g., `scorer_custom`)
- Signature: `def scorer_name(report: EvaluationReport, dataset: AixDataset, min_score: float = 1.0, verbose: bool = False) -> bool`
- Example: `def scorer_accuracy_threshold(report, dataset, min_score=0.8, verbose=False) -> bool`
**Evaluator Functions** (optional):
- Purpose: Custom evaluation logic for comparing outputs with expected results
- Naming: Functions named `evaluator_*` (e.g., `evaluator_check_output`)
- Signature: `def evaluator_name(ctx: EvaluatorContext) -> EvaluatorOutput` or `async def evaluator_name(ctx: EvaluatorContext) -> EvaluatorOutput`
- Example: `def evaluator_exact_match(ctx) -> EvaluatorOutput`
This name-based approach works seamlessly with both synchronous and asynchronous functions.
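The prefix-based discovery described above can be sketched generically; `discover_by_prefix` is a hypothetical helper, not the framework's actual discovery code:
```python
import inspect
from types import ModuleType

def discover_by_prefix(module: ModuleType, prefix: str) -> dict[str, object]:
    """Collect module-level functions (sync or async) whose names start with `prefix`."""
    found = {}
    for name, obj in vars(module).items():
        if name.startswith(prefix) and (
            inspect.isfunction(obj) or inspect.iscoroutinefunction(obj)
        ):
            found[name] = obj
    return found
```
Applied to an eval module, `discover_by_prefix(mod, "target_")` would pick up the target function and `discover_by_prefix(mod, "scorer_")` the custom scorers.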
#### Scoring System
The framework includes a custom scoring system with `average_assertions` as the default scorer. This scorer checks if the average assertion score meets a minimum threshold and provides detailed pass/fail reporting.
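The threshold check at the core of `average_assertions` can be sketched as follows; this is a simplified stand-in operating on raw scores, whereas the real scorer receives a `pydantic_evals` `EvaluationReport`:
```python
def average_assertions(scores: list[float], min_score: float = 1.0) -> bool:
    """Pass iff the mean assertion score meets the minimum threshold."""
    if not scores:
        return False  # no assertions means nothing was verified
    return sum(scores) / len(scores) >= min_score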
## Chainlit & HTTP Server
### Chainlit Integration
Ready-to-use `Chainlit` application for interactive agent interfaces.
```bash
# Launch the bundled Chainlit app (UI configuration lives in aixtools/chainlit.md,
# the main app in aixtools/app.py; assumes you run from the package checkout)
chainlit run aixtools/app.py
```
### HTTP Server Framework
AIXtools provides an HTTP server framework for deploying agents and tools as web services.
```python
from aixtools.server.app_mounter import mount_app
from aixtools.server import create_server
# Create and configure server
server = create_server()
# Mount applications and endpoints
mount_app(server, "/agent", agent_app)
mount_app(server, "/tools", tools_app)
# Run server
server.run(host="0.0.0.0", port=8000)
```
**Features:**
- Application mounting system for modular service composition
- Integration with `Chainlit` for agent interfaces
- RESTful API support
- Middleware support for authentication and logging
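The application-mounting idea can be illustrated with a toy prefix-dispatch table; `MiniServer` is purely didactic and shares nothing with the aixtools server implementation:
```python
class MiniServer:
    """Toy sketch of prefix-based app mounting (not the aixtools implementation)."""

    def __init__(self):
        self.routes: dict[str, object] = {}

    def mount(self, prefix: str, app) -> None:
        # Register a sub-application under a path prefix.
        self.routes[prefix] = app

    def dispatch(self, path: str):
        # Route to the first mounted app whose prefix matches the path.
        for prefix, app in self.routes.items():
            if path.startswith(prefix):
                return app(path[len(prefix):])
        raise KeyError(path)
```
Real mounting frameworks add matching precedence, middleware, and ASGI plumbing on top of this basic pattern.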
## Programming Utilities
AIXtools provides essential programming utilities for configuration management, data persistence, file operations, and context handling.
### Persisted Dictionary
Persistent key-value storage with automatic serialization and file-based persistence.
```python
from aixtools.utils.persisted_dict import PersistedDict
# Create a persistent dictionary
cache = PersistedDict("cache.json")
# Store and retrieve data
cache["user_preferences"] = {"theme": "dark", "language": "en"}
cache["session_data"] = {"last_login": "2024-01-01"}
# Data is automatically saved to file
print(cache["user_preferences"]) # Persists across program restarts
```
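The write-through behavior can be sketched with a minimal JSON-backed dict. This is an illustration of the concept only; `PersistedDict`'s actual implementation, serialization format, and update hooks may differ:
```python
import json
from pathlib import Path

class JsonBackedDict(dict):
    """Sketch: a dict that rewrites its JSON file on every item assignment.
    (Bulk operations like update() would need the same treatment.)"""

    def __init__(self, path: str):
        self._path = Path(path)
        initial = json.loads(self._path.read_text()) if self._path.exists() else {}
        super().__init__(initial)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._path.write_text(json.dumps(self))  # persist after each write
```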
### Enum with Description
Enhanced `Enum` classes with built-in descriptions for better documentation and user interfaces.
```python
from aixtools.utils.enum_with_description import EnumWithDescription
class ModelType(EnumWithDescription):
GPT4 = ("gpt-4", "OpenAI GPT-4 model")
CLAUDE = ("claude-3", "Anthropic Claude-3 model")
LLAMA = ("llama-2", "Meta LLaMA-2 model")
# Access enum values and descriptions
print(ModelType.GPT4.value) # "gpt-4"
print(ModelType.GPT4.description) # "OpenAI GPT-4 model"
# Get all descriptions
for model in ModelType:
print(f"{model.value}: {model.description}")
```
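A value-plus-description enum like this can be built with the standard `Enum.__new__` pattern; the sketch below shows the technique, though `EnumWithDescription` itself may be implemented differently:
```python
from enum import Enum

class DescribedEnum(Enum):
    """Base enum whose members carry a value and a human-readable description."""

    def __new__(cls, value, description):
        # Tuple member definitions are unpacked into (value, description).
        obj = object.__new__(cls)
        obj._value_ = value
        obj.description = description
        return obj

class Color(DescribedEnum):
    RED = ("red", "The color red")
    BLUE = ("blue", "The color blue")
```
Lookup by value still works as usual: `Color("blue")` resolves to `Color.BLUE`.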
### Context Management
Centralized context management for sharing state across components.
```python
from aixtools.context import Context
# Create and use context
context = Context()
context.set("user_id", "12345")
context.set("session_data", {"preferences": {"theme": "dark"}})
# Retrieve context data
user_id = context.get("user_id")
session_data = context.get("session_data")
# Context can be passed between components
def process_request(ctx: Context):
user_id = ctx.get("user_id")
# Process with user context
```
### Configuration Management
Robust configuration handling with environment variable support and validation.
```python
from aixtools.utils.config import Config
from aixtools.utils.config_util import load_config
# Load configuration from environment and files
config = load_config()
# Access configuration values
model_name = config.get("MODEL_NAME", "gpt-4")
api_key = config.get("API_KEY")
timeout = config.get("TIMEOUT", 30, int)
# Configuration with validation
class AppConfig(Config):
model_name: str = "gpt-4"
max_tokens: int = 1000
temperature: float = 0.7
app_config = AppConfig()
```
### File Utilities
Enhanced file operations with `Path` support and utility functions.
```python
from aixtools.utils.files import read_file, write_file, ensure_directory
from pathlib import Path
# Read and write files with automatic encoding handling
content = read_file("data.txt")
write_file("output.txt", "Hello, world!")
# Ensure directories exist
data_dir = Path("data/logs")
ensure_directory(data_dir)
# Work with file paths
config_path = Path("config") / "settings.json"
if config_path.exists():
config_data = read_file(config_path)
```
### Chainlit Utilities
Specialized utilities for `Chainlit` integration and agent display.
```python
from aixtools.utils.chainlit.cl_agent_show import show_agent_response
from aixtools.utils.chainlit.cl_utils import format_message
# Display agent responses in Chainlit
await show_agent_response(
response="Hello, how can I help you?",
metadata={"model": "gpt-4", "tokens": 150}
)
# Format messages for Chainlit display
formatted_msg = format_message(
content="Processing your request...",
message_type="info"
)
```
### Truncation Utilities
Smart truncation utilities for handling large data structures and preventing context overflow in LLM applications.
```python
from aixtools.utils import (
truncate_recursive_obj,
truncate_df_to_csv,
truncate_text_head_tail,
truncate_text_middle,
format_truncation_message,
TruncationMetadata
)
# Truncate nested JSON/dict structures while preserving structure
data = {"items": [f"item_{i}" for i in range(1000)], "description": "A" * 10000}
truncated = truncate_recursive_obj(data, max_string_len=100, max_list_len=10)
# Get truncation metadata
result, metadata = truncate_recursive_obj(
data,
target_size=1000,
ensure_size=True,
return_metadata=True
)
print(f"Truncated: {metadata.was_truncated}")
print(f"Size: {metadata.original_size} → {metadata.truncated_size}")
# Truncate DataFrames to CSV with head+tail preview
import pandas as pd
df = pd.DataFrame({"col1": range(10000), "col2": ["x" * 200] * 10000})
csv_output = truncate_df_to_csv(
df,
max_rows=20, # Show first 10 and last 10 rows
max_columns=10, # Show first 5 and last 5 columns
max_cell_chars=80, # Truncate cell contents
max_row_chars=2000 # Truncate CSV lines
)
# Truncate text preserving head and tail
text = "A" * 10000
truncated, chars_removed = truncate_text_head_tail(text, head_chars=100, tail_chars=100)
# Truncate text in the middle
truncated, chars_removed = truncate_text_middle(text, max_chars=500)
# Format truncation messages
message = format_truncation_message(
original_size=10000,
truncated_size=500,
unit="chars",
recommendation="Consider processing in smaller chunks"
)
```
**Key Features:**
- **Structure-preserving truncation** - `truncate_recursive_obj()` maintains dict/list structure while truncating
- **DataFrame to CSV truncation** - `truncate_df_to_csv()` shows head+tail rows and left+right columns
- **Text truncation strategies** - Head+tail or middle truncation for different use cases
- **Type-safe metadata** - `TruncationMetadata` Pydantic model with full type hints
- **Size enforcement** - `ensure_size=True` guarantees output fits within target size
- **Informative messages** - Automatic generation of user-friendly truncation messages
**Truncation Metadata:**
All truncation functions support `return_metadata=True` to get detailed information:
```python
result, meta = truncate_recursive_obj(data, target_size=1000, return_metadata=True)
# TruncationMetadata attributes
meta.original_size # Original size in characters
meta.truncated_size # Final size after truncation
meta.was_truncated # Whether truncation occurred
meta.strategy # Strategy used: "none", "smart", "middle", "str"
```
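The head+tail strategy itself is simple to sketch; `head_tail` below is an illustrative re-implementation, not the library's `truncate_text_head_tail`, and its marker format is an assumption:
```python
def head_tail(text: str, head_chars: int = 100, tail_chars: int = 100) -> tuple[str, int]:
    """Keep the first `head_chars` and last `tail_chars` characters,
    returning the truncated text and the number of characters removed."""
    if len(text) <= head_chars + tail_chars:
        return text, 0  # nothing to remove
    removed = len(text) - head_chars - tail_chars
    marker = f"\n... [{removed} chars truncated] ...\n"
    return text[:head_chars] + marker + text[-tail_chars:], removed
```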
### General Utilities
Common utility functions for everyday programming tasks.
```python
from aixtools.utils.utils import safe_json_loads, timestamp_now, hash_string
# Safe JSON parsing
data = safe_json_loads('{"key": "value"}', default={})
# Get current timestamp
now = timestamp_now()
# Generate hash for strings
file_hash = hash_string("content to hash")
```
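The behavior of these helpers can be sketched with standard-library equivalents. The `_sketch` names below are hypothetical stand-ins; the actual functions may differ (e.g. `hash_string` may use a different algorithm):
```python
import hashlib
import json

def safe_json_loads_sketch(s: str, default=None):
    """Return parsed JSON, or `default` if parsing fails."""
    try:
        return json.loads(s)
    except (json.JSONDecodeError, TypeError):
        return default

def hash_string_sketch(s: str) -> str:
    """Stable hex digest of a string, suitable for cache keys."""
    return hashlib.sha256(s.encode("utf-8")).hexdigest()
```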
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11.2 | [] | [] | [] | [
"a2a-sdk[postgresql]>=0.3.3",
"alembic>=1.17.1",
"cachebox>=5.0.1",
"chainlit>=2.5.5",
"colorlog>=6.9.0",
"fasta2a>=0.6.0",
"fastmcp>=2.14.0",
"hvac>=2.3.0",
"ipykernel>=6.29.5",
"jupyterlab>=4.4.3",
"langchain-chroma>=0.2.3",
"langchain-ollama>=0.3.2",
"langchain-openai>=0.3.14",
"markitdown[docx,pdf,pptx,xls,xlsx]>=0.1.3",
"mcp>=1.23.3",
"msal>=1.29.0",
"msal-extensions>=1.1.0",
"mypy>=1.18.2",
"nest_asyncio>=1.6.0",
"odfpy>=1.4.1",
"openpyxl>=3.1.5",
"pandas>=2.2.3",
"podkit>=0.7.6",
"psycopg2-binary>=2.9.11",
"pydantic-ai>=1.44.0",
"pyjwt>=2.10.1",
"python-frontmatter>=1.1.0",
"rich>=14.0.0",
"ruff>=0.11.6",
"sqlalchemy>=2.0.44",
"streamlit>=1.44.1",
"tiktoken>=0.9.0",
"watchdog>=6.0.0",
"xlrd>=2.0.2",
"pyyaml; extra == \"test\"",
"logfire; extra == \"feature\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:24:35.347748 | aixtools-0.10.14.tar.gz | 204,240 | b9/fa/64a2878012dd1586fd36dea41a3077f599e8a4342437e12a7596750d663d/aixtools-0.10.14.tar.gz | source | sdist | null | false | c93cf703bd2879fd2af49044525ac8f9 | beb242b3a1ab43e2b3a01a631edaec72fd31a2a63e8fed34b8b09cac936c88fd | b9fa64a2878012dd1586fd36dea41a3077f599e8a4342437e12a7596750d663d | null | [] | 209 |
2.4 | lino-xl | 26.2.1 | The Lino Extensions Library | =======================
Lino Extensions Library
=======================
This is the source code of the ``lino-xl`` Python package, a collection of
plugins used by many Lino applications. It is an integral part of the `Lino
framework <https://www.lino-framework.org>`__, a sustainably free open-source
project maintained by the `Synodalsoft team <https://www.synodalsoft.net>`__.
Your contributions are welcome.
- Developer Guide: https://dev.lino-framework.org
- Code repository: https://gitlab.com/lino-framework/xl
- Test suite: https://gitlab.com/lino-framework/book/-/pipelines
- Maintainer: https://www.synodalsoft.net
- Changelog: https://www.lino-framework.org/changes/
- Contact: https://www.saffre-rumma.net/contact/
| text/x-rst | null | Rumma & Ko Ltd <info@saffre-rumma.net>, Luc Saffre <luc@saffre-rumma.net> | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>. | Django, React, customized, framework | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Natural Language :: English",
"Natural Language :: French",
"Natural Language :: German",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Topic :: Database :: Front-Ends",
"Topic :: Office/Business",
"Topic :: Software Development :: Libraries :: Application Frameworks"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"commondata",
"lino",
"setuptools<80",
"hatch; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://www.lino-framework.org",
"Repository, https://gitlab.com/lino-framework/xl"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:24:26.426228 | lino_xl-26.2.1.tar.gz | 5,460,339 | be/58/c525c2557bffb27a340b5681b6e93e019eb773ae3ab2ad93d387ffb786c4/lino_xl-26.2.1.tar.gz | source | sdist | null | false | 9f703ecfcdaddb273600a9262d66e960 | c9a53c01ff949343eb849df28bd030fdab0417f45974ed1e9ecff5468f0680b7 | be58c525c2557bffb27a340b5681b6e93e019eb773ae3ab2ad93d387ffb786c4 | null | [
"AUTHORS.rst",
"COPYING"
] | 190 |
2.4 | django-api-mixins | 0.1.1 | A collection of useful mixins for Django REST Framework ViewSets and APIViews | # django-api-mixins
A collection of useful mixins for Django REST Framework ViewSets and APIViews to simplify common API patterns.
## Features
- **APIMixin**: Dynamic serializer selection based on action (create, update, list, retrieve)
- **ModelMixin**: Automatic filter field generation for Django models
- **RelationshipFilterMixin**: Automatic filtering for reverse relationships and direct fields
- **RoleBasedFilterMixin**: Role-based queryset filtering for multi-tenant applications
## Installation
```bash
pip install django-api-mixins
```
## Requirements
- Python 3.8+
- Django 3.2+
- Django REST Framework 3.12+
## Quick Start
### APIMixin
Use different serializers for different actions (create, update, list, retrieve):
```python
from rest_framework import viewsets
from django_api_mixins import APIMixin
class UserViewSet(APIMixin, viewsets.ModelViewSet):
queryset = User.objects.all()
serializer_class = UserSerializer # Default serializer
create_serializer_class = UserCreateSerializer # For POST requests
update_serializer_class = UserUpdateSerializer # For PUT/PATCH requests
list_serializer_class = UserListSerializer # For GET list requests
retrieve_serializer_class = UserDetailSerializer # For GET detail requests
```
The mixin also automatically handles list data in requests:
```python
# POST /api/users/
# Body: [{"name": "User1"}, {"name": "User2"}]
# Automatically sets many=True for list data
```
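The action-based fallback described above can be sketched in plain Python. `BaseView` here stands in for DRF's `GenericAPIView`, and the attribute names follow this README, but the internals are an illustration of the pattern, not the package's actual source.

```python
class BaseView:
    # Stand-in for DRF's GenericAPIView default behaviour.
    serializer_class = "DefaultSerializer"

    def get_serializer_class(self):
        return self.serializer_class


class ActionSerializerMixin:
    """Resolve `<action>_serializer_class` first, then fall back to the default."""

    def get_serializer_class(self):
        specific = getattr(self, f"{self.action}_serializer_class", None)
        return specific or super().get_serializer_class()


class UserView(ActionSerializerMixin, BaseView):
    create_serializer_class = "CreateSerializer"


view = UserView()
view.action = "create"
print(view.get_serializer_class())  # CreateSerializer
view.action = "list"                # no list_serializer_class defined
print(view.get_serializer_class())  # DefaultSerializer
```

Because the mixin sits before the base class in the MRO, its `get_serializer_class` runs first and can delegate with `super()` when no action-specific serializer exists.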
### ModelMixin
Automatically generate filter fields for all model fields:
```python
from django.db import models
from django_api_mixins import ModelMixin
class Product(models.Model, ModelMixin):
name = models.CharField(max_length=100)
price = models.DecimalField(max_digits=10, decimal_places=2)
created_at = models.DateTimeField(auto_now_add=True)
is_active = models.BooleanField(default=True)
# Automatically generates filter fields:
# - name: exact, in, isnull
# - price: exact, in, isnull, gte, lte
# - created_at: exact, in, isnull, gte, lte
# - is_active: exact, in, isnull
```
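The per-field lookup sets shown in the comments follow a simple rule: every field gets the base lookups, and orderable types (numbers, dates) additionally get range lookups. A minimal sketch of that rule, with names that are assumptions rather than the package's internals:

```python
# Base lookups apply to every field; orderable types also get range lookups.
BASE_LOOKUPS = ["exact", "in", "isnull"]
RANGE_LOOKUPS = BASE_LOOKUPS + ["gte", "lte"]
ORDERABLE_TYPES = {"IntegerField", "DecimalField", "DateField", "DateTimeField"}


def lookups_for(field_type: str) -> list:
    """Return the filter lookups for a Django field type name."""
    return RANGE_LOOKUPS if field_type in ORDERABLE_TYPES else BASE_LOOKUPS


print(lookups_for("DecimalField"))  # ['exact', 'in', 'isnull', 'gte', 'lte']
print(lookups_for("CharField"))     # ['exact', 'in', 'isnull']
```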
Use in your ViewSet with Django Filter Backend:
```python
from rest_framework import viewsets
from django_filters.rest_framework import DjangoFilterBackend
from django_api_mixins import ModelMixin
class ProductViewSet(viewsets.ModelViewSet):
queryset = Product.objects.all()
filter_backends = [DjangoFilterBackend]
filterset_fields = Product.get_filter_fields() # Use auto-generated fields
```
Filter related models:
```python
# Get filter fields for a related model
filter_fields = Product.get_filter_fields_for_related_model('category')
# Returns: {'category__name': [...], 'category__id': [...], ...}
# Get filter fields for a foreign key field
filter_fields = Order.get_filter_fields_for_foreign_fields('product')
# Returns: {'product__name': [...], 'product__price': [...], ...}
```
### RelationshipFilterMixin
Automatically apply filters for reverse relationships and direct fields:
```python
from rest_framework import viewsets
from django_api_mixins import RelationshipFilterMixin
class OrderViewSet(RelationshipFilterMixin, viewsets.ModelViewSet):
queryset = Order.objects.all()
# Define which reverse relationship filters to allow
reverse_relation_filters = [
'customer__name',
'customer__email',
'product__category__name',
]
# Define which filters support list/array values
listable_filters = ['customer__id', 'product__id']
```
**Important**: Place the mixin BEFORE the ViewSet class:
```python
# ✅ Correct
class OrderViewSet(RelationshipFilterMixin, viewsets.ModelViewSet):
pass
# ❌ Wrong
class OrderViewSet(viewsets.ModelViewSet, RelationshipFilterMixin):
pass
```
Usage examples:
```http
# Filter by customer name
GET /api/orders/?customer__name=John
# Filter by multiple customer IDs (listable filter)
GET /api/orders/?customer__id=1,2,3
# or
GET /api/orders/?customer__id=[1,2,3]
# Filter by product category
GET /api/orders/?product__category__name=Electronics
```
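Both listable syntaxes above (`1,2,3` and `[1,2,3]`) can be normalized with one small helper. This is a hypothetical sketch of such parsing, not the package's actual parser:

```python
def parse_listable(raw: str) -> list:
    """Normalize a listable query value like '1,2,3' or '[1,2,3]' to a list of strings."""
    raw = raw.strip()
    # Strip optional surrounding brackets, then split on commas.
    if raw.startswith("[") and raw.endswith("]"):
        raw = raw[1:-1]
    return [part.strip() for part in raw.split(",") if part.strip()]


print(parse_listable("1,2,3"))      # ['1', '2', '3']
print(parse_listable("[1, 2, 3]"))  # ['1', '2', '3']
```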
### RoleBasedFilterMixin
Automatically filter querysets based on user roles:
```python
from rest_framework import viewsets
from django_api_mixins import RoleBasedFilterMixin
class OrderViewSet(RoleBasedFilterMixin, viewsets.ModelViewSet):
queryset = Order.objects.all()
# Required: field name to filter on
role_filter_field = 'operator_type'
# Optional: roles that see all data
admin_roles = ['admin', 'super_admin']
# Optional: roles that see no data
excluded_roles = ['guest']
# Optional: custom role to field value mapping
role_mapping = {
'custom_role': 'custom_value',
'manager': 'MGR',
}
```
**Important**: Place the mixin BEFORE the ViewSet class:
```python
# ✅ Correct
class OrderViewSet(RoleBasedFilterMixin, viewsets.ModelViewSet):
pass
# ❌ Wrong
class OrderViewSet(viewsets.ModelViewSet, RoleBasedFilterMixin):
pass
```
How it works:
1. Extracts the user's role from `user.role.name`
2. Admin roles see all data (no filtering)
3. Excluded roles see no data (empty queryset)
4. Other roles are filtered by `role_filter_field = role_name` (or mapped value)
Example:
```python
# User with role.name = 'operator'
# Model has operator_type field
# Automatically filters: Order.objects.filter(operator_type='operator')
# User with role.name = 'admin'
# Sees all orders (no filtering)
# User with role.name = 'guest'
# Sees no orders (queryset.none())
```
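The four rules above can be condensed into one function. This sketch operates on plain dicts as a stand-in for a queryset, purely for illustration; the real mixin works on Django querysets and reads the role from `user.role.name`.

```python
def filter_by_role(rows, role_name, field,
                   admin_roles=(), excluded_roles=(), role_mapping=None):
    """Apply role-based filtering rules to a list of dicts (illustrative only)."""
    if role_name in admin_roles:
        return list(rows)   # admin roles: see everything
    if role_name in excluded_roles:
        return []           # excluded roles: see nothing
    # Other roles: filter by the (possibly mapped) role value.
    value = (role_mapping or {}).get(role_name, role_name)
    return [r for r in rows if r.get(field) == value]


orders = [{"operator_type": "operator"}, {"operator_type": "MGR"}]
print(filter_by_role(orders, "operator", "operator_type"))
# [{'operator_type': 'operator'}]
print(filter_by_role(orders, "admin", "operator_type", admin_roles=["admin"]))
# both rows
print(filter_by_role(orders, "manager", "operator_type",
                     role_mapping={"manager": "MGR"}))
# [{'operator_type': 'MGR'}]
```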
## FieldLookup Enum
The package includes a `FieldLookup` enum for consistent lookup naming:
```python
from django_api_mixins import FieldLookup
FieldLookup.EXACT # "exact"
FieldLookup.ICONTAINS # "icontains"
FieldLookup.CONTAINS # "contains"
FieldLookup.ISNULL # "isnull"
FieldLookup.GTE # "gte"
FieldLookup.LTE # "lte"
FieldLookup.IN # "in"
```
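Since the members are string-valued, they compose cleanly into Django-style lookup keys. The enum definition below re-creates the documented name/value pairs for illustration; `lookup_key` is a hypothetical helper, not part of the package:

```python
from enum import Enum


class FieldLookup(str, Enum):
    # String-valued members, matching the name/value pairs listed above.
    EXACT = "exact"
    ICONTAINS = "icontains"
    GTE = "gte"
    LTE = "lte"


def lookup_key(field: str, lookup: FieldLookup) -> str:
    """Compose a Django-style filter key such as 'price__gte'."""
    return f"{field}__{lookup.value}"


print(lookup_key("price", FieldLookup.GTE))  # price__gte
```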
## Combining Mixins
You can combine multiple mixins:
```python
from rest_framework import viewsets
from django_api_mixins import (
APIMixin,
RelationshipFilterMixin,
RoleBasedFilterMixin,
)
class OrderViewSet(
RelationshipFilterMixin,
RoleBasedFilterMixin,
APIMixin,
viewsets.ModelViewSet
):
queryset = Order.objects.all()
serializer_class = OrderSerializer
create_serializer_class = OrderCreateSerializer
role_filter_field = 'operator_type'
reverse_relation_filters = ['customer__name', 'product__category__name']
listable_filters = ['customer__id']
```
**Note**: The order of mixins matters! Place filtering mixins before the ViewSet class.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
MIT License - see LICENSE file for details.
## Support
For issues, questions, or contributions, please visit the [GitHub repository](https://github.com/subhrans/django-api-mixins).
| text/markdown | null | Subhransu Das <subhransud525@gmail.com> | null | null | MIT | django, django-rest-framework, drf, mixins, api, viewsets | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Framework :: Django",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"Django>=3.2",
"djangorestframework>=3.12"
] | [] | [] | [] | [
"Homepage, https://github.com/subhrans/django-api-mixins",
"Documentation, https://github.com/subhrans/django-api-mixins#readme",
"Repository, https://github.com/Subhrans/django-api-mixins",
"Issues, https://github.com/subhrans/django-api-mixins/issues"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T15:23:53.358229 | django_api_mixins-0.1.1.tar.gz | 8,596 | c0/03/5987ccdb959e18a98c8e40e719041ba7e8d39a7be3e3b56b154ee6961b26/django_api_mixins-0.1.1.tar.gz | source | sdist | null | false | 2d264d87edca5878562ee110ca5c2d15 | 9dcf169d75bfadf4bb387d4c59a0116e6260060b6e939fce3b700d0ccb3521c4 | c0035987ccdb959e18a98c8e40e719041ba7e8d39a7be3e3b56b154ee6961b26 | null | [
"LICENSE"
] | 187 |
2.4 | lino | 26.2.4 | A framework for writing desktop-like web applications using Django and ExtJS or React | ====================
The ``lino`` package
====================
This repository is the core package of the `Lino framework
<https://www.lino-framework.org>`__, a sustainably free open-source project
maintained by the `Synodalsoft team <https://www.synodalsoft.net>`__. Your
contributions are welcome.
- Developer Guide: https://dev.lino-framework.org
- Code repository: https://gitlab.com/lino-framework/lino
- Test suite: https://gitlab.com/lino-framework/book/-/pipelines
- Maintainer: https://www.synodalsoft.net
- Changelog: https://www.lino-framework.org/changes/
- Contact: https://www.saffre-rumma.net/contact/
| text/x-rst | null | Rumma & Ko Ltd <info@saffre-rumma.net>, Luc Saffre <luc@saffre-rumma.net> | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>. | Django, React, customized, framework | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Natural Language :: English",
"Natural Language :: French",
"Natural Language :: German",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Topic :: Database :: Front-Ends",
"Topic :: Office/Business",
"Topic :: Software Development :: Libraries :: Application Frameworks"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"babel",
"beautifulsoup4",
"dateparser",
"django",
"django-click",
"django-localflavor",
"docutils",
"etgen",
"html2text",
"jinja2",
"lxml",
"odfpy",
"openpyxl",
"python-dateutil",
"pytidylib",
"pyyaml",
"sphinx",
"unipath",
"weasyprint",
"hatch; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"atelier; extra == \"testing\"",
"pytest; extra == \"testing\"",
"pytest-env; extra == \"testing\"",
"pytest-forked; extra == \"testing\"",
"pytest-html; extra == \"testing\""
] | [] | [] | [] | [
"Homepage, https://www.lino-framework.org",
"Repository, https://gitlab.com/lino-framework/lino"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:23:43.168842 | lino-26.2.4.tar.gz | 13,752,841 | cd/17/dbfbbbed1f6ac88d4f3e6f424a5b3653fa2cd93c3fd7d2a475bc143677c3/lino-26.2.4.tar.gz | source | sdist | null | false | ef0bf27e38b70e9ab7b9670849db5e9a | 97578b368eaccecb13e8d1d949513190a3107efb78afb47070ca473ac502b441 | cd17dbfbbbed1f6ac88d4f3e6f424a5b3653fa2cd93c3fd7d2a475bc143677c3 | null | [
"AUTHORS.rst",
"COPYING"
] | 203 |
2.4 | yagra | 1.0.0 | Declarative LangGraph Builder powered by YAML | # Yagra
<p align="center">
<img src="docs/assets/yagra-logo.jpg" alt="Yagra logo" width="720" />
</p>
<p align="center">
<a href="https://github.com/shogo-hs/Yagra/actions/workflows/ci.yml"><img src="https://github.com/shogo-hs/Yagra/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://pypi.org/project/yagra/"><img src="https://img.shields.io/pypi/v/yagra.svg" alt="PyPI"></a>
<a href="https://pypi.org/project/yagra/"><img src="https://img.shields.io/pypi/pyversions/yagra.svg" alt="Python"></a>
<a href="https://github.com/shogo-hs/Yagra/blob/main/LICENSE"><img src="https://img.shields.io/github/license/shogo-hs/Yagra.svg" alt="License"></a>
<a href="https://pypi.org/project/yagra/"><img src="https://img.shields.io/pypi/dm/yagra.svg" alt="Downloads"></a>
</p>
**Declarative LangGraph Builder powered by YAML**
Yagra enables you to build [LangGraph](https://langchain-ai.github.io/langgraph/)'s `StateGraph` from YAML definitions, separating workflow logic from Python implementation. Define nodes, edges, and branching conditions in YAML files—swap configurations without touching code.
Designed for **LLM agent developers**, **prompt engineers**, and **non-technical stakeholders** who want to iterate on workflows quickly without diving into Python code every time.
Built with **AI-Native principles**: JSON Schema export and validation CLI enable coding agents (Claude Code, Codex, etc.) to generate and validate workflows automatically.
## ✨ Key Features
- **Declarative Workflow Management**: Define nodes, edges, and conditional branching in YAML
- **Implementation-Configuration Separation**: Connect YAML `handler` strings to Python callables via Registry
- **Schema Validation**: Catch configuration errors early with Pydantic-based validation
- **Custom State Schema**: Pass any `TypedDict` (including `MessagesState`) via `state_schema` — full LangGraph reducer support
- **Advanced Patterns**: Fan-out/fan-in (parallel map-reduce via Send API) and subgraph nesting for composable workflows
- **Visual Workflow Editor**: Launch Studio WebUI for visual editing, drag-and-drop node/edge management, and diff preview
- **Template Library**: Quick-start templates for common patterns (branching, loops, RAG, parallel, subgraph, and more)
- **MCP Server**: Expose Yagra tools to AI agents via [Model Context Protocol](https://modelcontextprotocol.io/) (`yagra[mcp]`)
- **AI-Ready**: JSON Schema export (`yagra schema`) and structured validation for coding agents
## 📦 Installation
- Python 3.12+
```bash
# Recommended (uv)
uv add yagra
# With LLM handler utilities (optional)
uv add 'yagra[llm]'
# Or with pip
pip install yagra
pip install 'yagra[llm]'
```
### LLM Handler Utilities (Beta)
Yagra provides handler utilities to reduce boilerplate code for LLM nodes:
```python
from yagra import Yagra
from yagra.handlers import create_llm_handler

# Create a generic LLM handler
llm = create_llm_handler(retry=3, timeout=30)

# Register and use in workflow
registry = {"llm": llm}
app = Yagra.from_workflow("workflow.yaml", registry)
```
**YAML Definition:**
```yaml
nodes:
  - id: "chat"
    handler: "llm"
    params:
      prompt_ref: "prompts/chat.yaml#system"
      model:
        provider: "openai"
        name: "gpt-4"
        kwargs:
          temperature: 0.7
      output_key: "response"
```
The handler automatically:
- Extracts and interpolates prompts
- Calls LLM via [litellm](https://github.com/BerriAI/litellm) (100+ providers)
- Handles retries and timeouts
- Returns structured output
**See the full working example**: [`examples/llm-basic/`](examples/llm-basic/)
### Structured Output Handler (Beta)
Use `create_structured_llm_handler()` to get type-safe Pydantic model instances from LLM responses:
```python
from pydantic import BaseModel

from yagra import Yagra
from yagra.handlers import create_structured_llm_handler


class PersonInfo(BaseModel):
    name: str
    age: int


handler = create_structured_llm_handler(schema=PersonInfo)
registry = {"structured_llm": handler}
app = Yagra.from_workflow("workflow.yaml", registry)

result = app.invoke({"text": "My name is Alice and I am 30."})
person: PersonInfo = result["person"]  # Type-safe!
print(person.name, person.age)  # Alice 30
```
The handler automatically:
- Enables JSON output mode (`response_format=json_object`)
- Injects JSON Schema into the system prompt
- Validates and parses the response with Pydantic
**Dynamic schema (no Python code required)**: Define the schema directly in your workflow YAML using `schema_yaml`, and call `create_structured_llm_handler()` with no arguments:
```python
# No Pydantic model needed in Python code
handler = create_structured_llm_handler()
registry = {"structured_llm": handler}
```
```yaml
# workflow.yaml
nodes:
  - id: "extract"
    handler: "structured_llm"
    params:
      schema_yaml: |
        name: str
        age: int
        hobbies: list[str]
      prompt_ref: "prompts.yaml#extract"
      model:
        provider: "openai"
        name: "gpt-4o"
      output_key: "person"
```
Supported types in `schema_yaml`: `str`, `int`, `float`, `bool`, `list[str]`, `list[int]`, `dict[str, str]`, `str | None`, etc.
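As a rough illustration of how such type strings could map to Python annotations, here is a small sketch. The function name and behaviour below are assumptions for illustration only, not Yagra's actual parser:

```python
from typing import Optional

# Hypothetical sketch: map schema_yaml type strings to Python annotations.
# This is NOT Yagra's internal implementation -- just an illustration.
SCALARS = {"str": str, "int": int, "float": float, "bool": bool}

def parse_type(spec: str):
    spec = spec.strip()
    if spec.endswith("| None"):                        # e.g. "str | None"
        return Optional[parse_type(spec.rsplit("|", 1)[0])]
    if spec.startswith("list[") and spec.endswith("]"):
        return list[parse_type(spec[5:-1])]            # e.g. "list[str]"
    if spec.startswith("dict[") and spec.endswith("]"):
        key, value = spec[5:-1].split(",", 1)
        return dict[parse_type(key), parse_type(value)]
    return SCALARS[spec]

print(parse_type("list[str]"))  # list[str]
```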
**See the full working example**: [`examples/llm-structured/`](examples/llm-structured/)
### Streaming Handler (Beta)
Stream LLM responses chunk by chunk:
```python
from yagra import Yagra
from yagra.handlers import create_streaming_llm_handler

handler = create_streaming_llm_handler(retry=3, timeout=60)
registry = {"streaming_llm": handler}
yagra = Yagra.from_workflow("workflow.yaml", registry)

result = yagra.invoke({"query": "Tell me about Python async"})

# Incremental processing
for chunk in result["response"]:
    print(chunk, end="", flush=True)

# Or buffered
full_text = "".join(result["response"])
```
> **Note**: The `Generator` is single-use. Consume it once with either `for` or `"".join(...)`.
**See the full working example**: [`examples/llm-streaming/`](examples/llm-streaming/)
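The single-use behaviour called out in the note is plain Python generator semantics, which a minimal stand-alone snippet demonstrates:

```python
# Standard generator semantics: once exhausted, a generator yields nothing more.
def chunks():
    yield "Hello, "
    yield "world"

response = chunks()
text = "".join(response)      # first pass consumes the generator
leftover = "".join(response)  # a second pass gets nothing

print(repr(text))      # 'Hello, world'
print(repr(leftover))  # ''
```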
## 🚀 Quick Start
### Option 1: From Template (Recommended)
Yagra provides ready-to-use templates for common workflow patterns.
```bash
# List available templates
yagra init --list
# Initialize from a template
yagra init --template branch --output my-workflow
# Validate the generated workflow
yagra validate --workflow my-workflow/workflow.yaml
```
Available templates:
- **branch**: Conditional branching pattern
- **chat**: Single-node chat with `MessagesState` and `add_messages` reducer
- **loop**: Planner → Evaluator loop pattern
- **parallel**: Fan-out/fan-in map-reduce pattern via Send API
- **rag**: Retrieve → Rerank → Generate RAG pattern
- **subgraph**: Nested subgraph pattern for composable multi-workflow architectures
- **tool-use**: LLM decides whether to invoke external tools and executes them to answer
- **multi-agent**: Orchestrator, researcher, and writer agents collaborate in a multi-agent pattern
- **human-review**: Human-in-the-loop pattern that pauses for review and approval via `interrupt_before`
### Option 2: From Scratch
#### 1. Define State and Handler Functions
```python
from typing import TypedDict

from yagra import Yagra


class AgentState(TypedDict, total=False):
    query: str
    intent: str
    answer: str
    __next__: str  # For conditional branching


def classify_intent(state: AgentState, params: dict) -> dict:
    intent = "faq" if "料金" in state.get("query", "") else "general"  # "料金" = "fees"
    return {"intent": intent, "__next__": intent}


def answer_faq(state: AgentState, params: dict) -> dict:
    prompt = params.get("prompt", {})
    return {"answer": f"FAQ: {prompt.get('system', '')}"}


def answer_general(state: AgentState, params: dict) -> dict:
    model = params.get("model", {})
    return {"answer": f"GENERAL via {model.get('name', 'unknown')}"}


def finish(state: AgentState, params: dict) -> dict:
    return {"answer": state.get("answer", "")}
```
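The `__next__` marker set by a handler drives conditional routing. As a minimal sketch of that dispatch idea (an illustration only, not Yagra's or LangGraph's actual implementation):

```python
# Hypothetical sketch of conditional-edge routing on the "__next__" key.
def route(source: str, state: dict, edges: list[dict]) -> str:
    """Pick the target node for `source` given the state's __next__ marker."""
    marker = state.get("__next__")
    for edge in edges:
        if edge["source"] != source:
            continue
        cond = edge.get("condition")
        if cond is None or cond == marker:  # unconditional or matching edge
            return edge["target"]
    raise ValueError(f"no edge from {source!r} for condition {marker!r}")

edges = [
    {"source": "classifier", "target": "faq_bot", "condition": "faq"},
    {"source": "classifier", "target": "general_bot", "condition": "general"},
    {"source": "faq_bot", "target": "finish"},
    {"source": "general_bot", "target": "finish"},
]
print(route("classifier", {"__next__": "faq"}, edges))  # faq_bot
```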
#### 2. Define Workflow YAML
`workflows/support.yaml`
```yaml
version: "1.0"
start_at: "classifier"
end_at:
  - "finish"
nodes:
  - id: "classifier"
    handler: "classify_intent"
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
  - id: "general_bot"
    handler: "answer_general"
    params:
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
  - id: "finish"
    handler: "finish"
edges:
  - source: "classifier"
    target: "faq_bot"
    condition: "faq"
  - source: "classifier"
    target: "general_bot"
    condition: "general"
  - source: "faq_bot"
    target: "finish"
  - source: "general_bot"
    target: "finish"
```
#### 3. Register Handlers and Run
```python
registry = {
    "classify_intent": classify_intent,
    "answer_faq": answer_faq,
    "answer_general": answer_general,
    "finish": finish,
}

app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    state_schema=AgentState,
)

result = app.invoke({"query": "料金を教えて"})  # "Tell me about the fees"
print(result["answer"])
```
## 🛠️ CLI Tools
Yagra provides CLI commands for workflow management:
### `yagra init`
Initialize a workflow from a template.
```bash
yagra init --template branch --output my-workflow
```
### `yagra schema`
Export JSON Schema for workflow YAML (useful for coding agents).
```bash
yagra schema --output workflow-schema.json
```
### `yagra validate`
Validate a workflow YAML and report issues.
```bash
# Human-readable output
yagra validate --workflow workflows/support.yaml
# JSON output for agent consumption
yagra validate --workflow workflows/support.yaml --format json
```
### `yagra explain`
Statically analyze a workflow YAML to show execution paths, required handlers, and variable flow.
```bash
# JSON output (default)
yagra explain --workflow workflows/support.yaml
# Read from stdin (pipe-friendly)
cat workflows/support.yaml | yagra explain --workflow -
```
### `yagra handlers`
List built-in handler parameter schemas (useful for coding agents).
```bash
# Human-readable output
yagra handlers
# JSON output for agent consumption
yagra handlers --format json
```
### `yagra mcp`
Launch Yagra as an [MCP (Model Context Protocol)](https://modelcontextprotocol.io/) server. Requires `yagra[mcp]` extra.
```bash
# Install with MCP support
pip install "yagra[mcp]"
# or
uv add "yagra[mcp]"
# Start the MCP server (stdio mode)
yagra mcp
```
Available MCP tools: `validate_workflow`, `explain_workflow`, `list_templates`, `list_handlers`
### `yagra visualize`
Generate a read-only visualization HTML.
```bash
yagra visualize --workflow workflows/support.yaml --output /tmp/workflow.html
```
### `yagra studio`
Launch an interactive WebUI for visual editing, drag-and-drop node/edge management, and workflow persistence.
```bash
# Launch with workflow selector (recommended)
yagra studio --port 8787
# Launch with a specific workflow
yagra studio --workflow workflows/support.yaml --port 8787
```
Open `http://127.0.0.1:8787/` in your browser.
**Studio Features:**
- **Handler Type Selector**: Node Properties panel provides a type selector (`llm` / `structured_llm` / `streaming_llm` / `custom`)
- Predefined types auto-populate the handler name — no manual typing required
- `custom` type enables free-text input for user-defined handlers
- **Handler-Aware Forms**: Form sections adapt automatically to the selected handler type
- `structured_llm` → Schema Settings section (edit `schema_yaml` as YAML)
- `streaming_llm` → Streaming Settings section (`stream: false` toggle)
- `custom` → LLM-specific sections hidden automatically
- **State Schema Editor**: Define workflow-level `state_schema` fields visually via a table editor (name, type, reducer columns) — no YAML hand-editing required
- **Visual Editing**: Edit prompts, models, and conditions via forms
- **Drag & Drop**: Add nodes, connect edges, adjust layout visually
- **Diff Preview**: Review changes before saving
- **Backup & Rollback**: Automatic backups with rollback support
- **Validation**: Real-time validation with detailed error messages
## 📚 Documentation
Full documentation is available at **[shogo-hs.github.io/Yagra](https://shogo-hs.github.io/Yagra/)**
- **[User Guide](https://shogo-hs.github.io/Yagra/)**: Installation, YAML syntax, CLI tools
- **[API Reference](https://shogo-hs.github.io/Yagra/api.html)**: Python API documentation
- **[Examples](https://shogo-hs.github.io/Yagra/examples.html)**: Practical use cases
You can also build documentation locally:
```bash
uv run sphinx-build -b html docs/sphinx/source docs/sphinx/_build/html
```
## 🎯 Use Cases
- Prototype LLM agent flows and iterate rapidly by swapping YAML files
- Enable non-engineers to adjust workflows (prompts, models, branching) without code changes
- Integrate with coding agents for automated workflow generation and validation
- Reduce boilerplate code when building LangGraph applications with complex control flow
## 🤝 Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, coding standards, and guidelines.
## 📄 License
MIT License - see [LICENSE](LICENSE) for details.
## 📝 Changelog
See [CHANGELOG.md](CHANGELOG.md) for release history.
---
<p align="center">
<sub>Built with ❤️ for the LangGraph community</sub>
</p>
| text/markdown | Shogo Hasegawa | null | null | null | MIT License
Copyright (c) 2026 Shogo Hasegawa
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | agent, langgraph, llm, workflow, yaml | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"langgraph>=1.0.8",
"pydantic>=2.12.5",
"pyyaml>=6.0.3",
"litellm>=1.57.10; extra == \"llm\"",
"mcp>=1.26.0; extra == \"mcp\""
] | [] | [] | [] | [
"Homepage, https://pypi.org/project/yagra/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:23:31.064594 | yagra-1.0.0.tar.gz | 1,773,930 | de/5b/4db7a5749b3f6c7b4b9044f98507179cecf80183ea3a5c000906f70a1318/yagra-1.0.0.tar.gz | source | sdist | null | false | 1d3978a5a4421e789d07ee6a5d4f47d4 | 61b5256428c8ec5d7b1adf781f426fc845abf7602b0c6b18a4572c7742899f22 | de5b4db7a5749b3f6c7b4b9044f98507179cecf80183ea3a5c000906f70a1318 | null | [
"LICENSE"
] | 165 |
2.1 | fepydas | 0.0.75 | felix' python data analysis suite | A python3 tool for data analysis and fitting.
Install from https://pypi.org/project/fepydas/ with "pip install fepydas".
View repository at https://git.tu-berlin.de/nippert/fepydas
Find usage examples at https://git.tu-berlin.de/nippert/fepydas/-/tree/master/examples
Requirements:
Python3
numpy
scipy
matplotlib
lmfit
scikit-learn
pyamg
| text/markdown | Felix Nippert | felix@physik.tu-berlin.de | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://git.tu-berlin.de/nippert/fepydas | null | >=3.6 | [] | [] | [] | [
"numpy>=2.0.0",
"scipy",
"lmfit",
"matplotlib",
"scikit-learn",
"pyamg",
"sympy>=1.13.3",
"pathos",
"tqdm"
] | [] | [] | [] | [
"Bug Tracker, https://git.tu-berlin.de/nippert/fepydas/issues"
] | twine/4.0.2 CPython/3.7.17 | 2026-02-20T15:23:04.289144 | fepydas-0.0.75-py3-none-any.whl | 55,178 | be/58/8f3fe6fbb33754ba1b0affa300797ce5f31472528d2b598cccdb77ab81ef/fepydas-0.0.75-py3-none-any.whl | py3 | bdist_wheel | null | false | 06462f3ae59b3f653fed2c7bc0e9ed68 | 3962546272c033c05edc8e0ef7f3042edb611c3f09c444c4b82090fcac6d29a6 | be588f3fe6fbb33754ba1b0affa300797ce5f31472528d2b598cccdb77ab81ef | null | [] | 95 |
2.4 | opticalib | 1.2.1 | This package is a python interface for laboratory adaptive optics systems, with a focus on the calibration of deformable mirrors (DMs) and the acquisition of wavefront data using interferometers. | # OptiCalib : adaptive OPTics package for deformable mirrors CALibration

`Adaptive Optics Group, INAF - Osservatorio Astrofisico di Arcetri`
- [Pietro Ferraiuolo](mailto:pietro.ferraiuolo@inaf.it)
- [Marco Xompero](mailto:marco.xompero@inaf.it)
OptiCalib is a Python package whose primary goal is to make deformable-mirror calibration easy, in the laboratory and beyond.
It was born as a generalization of the software built for the control and calibration of the `ELT @ M4` adaptive mirror and its calibration tower, the `OTT` (`Optical Test Tower`).
## Description
The `OPTICALIB` package serves two main purposes:
- Making connection to the hardware (interferometers, DMs, ...) easy;
- Providing routines for Deformable Mirrors calibrations.
The latest stable version can be installed from PyPI:
```bash
pip install opticalib
```
The in-development version can be installed directly from this repository:
```bash
pip install git+"https://github.com/pietroferraiuolo/labott.git"
```
**But do expect some bugs!**
Upon installation, the software creates an entry-point script called `calpy`, which is useful for setting up a specific experiment's environment. Say we have an optical bench composed of a 4D PhaseCam6110 interferometer and an Alpao deformable mirror, the DM820. We can create the experiment's environment like so:
```bash
calpy -f ~/alpao_experiment --create
```
This will create, in the `~/alpao_experiment` folder, the package's data folder tree, together with a configuration file in the `SysConfig` folder. The [configuration file](./opticalib/core/_configurations/configuration.yaml), documented [here](./opticalib/core/_configurations/DOCS.md), is where all devices must be specified.
### Example usage
Once done with the configuration, we can then start using our instruments:
```bash
calpy -f ~/alpao_experiment
```
```python
# The `calpy` function will automatically import opticalib (with `opt`
# as alias), as well as the `opticalib.dmutils` as dmutils
interf = opt.PhaseCam(6110) # set in the configuration file
dm = opt.AlpaoDm(820) # set in the configuration file
# Having the bench set up and the configuration file set, we can acquire
# an Influence Function by just doing
tn = dmutils.iff_module.iffDataAcquisition(dm, interf)  # Optional parameters
# are `modesList, modesAmplitude, template`, which if not specified are
# read from the configuration file
```
## Documentation
For the API reference, check [here](docs/opticalib.pdf) (work in progress...); for the configuration file documentation, check [here](./opticalib/core/_configurations/DOCS.md).
| text/markdown | Adaptive Optics Group - INAF - Osservatorio Astrofisico di Arcetri | pietro.ferraiuolo@inaf.it | Pietro Ferraiuolo | null | MIT License | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: Unix"
] | [] | https://github.com/ArcetriAdaptiveOptics/opticalib | null | >=3.10 | [] | [] | [] | [
"arte",
"astropy",
"scikit-image",
"scikit-learn",
"vmbpy",
"pipython",
"scipy",
"plico_dm",
"ruamel.yaml",
"Pyro4",
"h5py",
"jdcal",
"tqdm",
"matplotlib",
"xupy",
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T15:22:52.277366 | opticalib-1.2.1.tar.gz | 344,354 | 06/63/766b44afff5dc6e20b363ad096e1e5a654cbd7f9f5404af94c1df4ca6fcf/opticalib-1.2.1.tar.gz | source | sdist | null | false | 8652cab676db75d6c0d7668737aca1bb | 695df576a76ebd5d8b4864d50cb3ea706a296b266bcc145daeb1b74bac18f421 | 0663766b44afff5dc6e20b363ad096e1e5a654cbd7f9f5404af94c1df4ca6fcf | null | [
"LICENSE"
] | 179 |
2.4 | esbmc_ai | 0.5.2.dev15 | LLM driven development and automatic repair kit. | # ESBMC AI
[](https://github.com/esbmc/esbmc-ai/actions/workflows/workflow.yml)
[](https://quay.io/repository/yiannis128/esbmc-ai)
[](https://github.com/esbmc/esbmc-ai/releases)
Automated LLM-integrated workflow platform, primarily oriented around Automated Program Repair (APR) research. ESBMC-AI ships with several built-in commands and can also be extended with Addons (see [below](#wiki)).
The basic repair implementation passes the output from ESBMC, which can be quite technical in nature, to an AI model with instructions to repair the code.


## Quick Start
```sh
hatch run esbmc-ai ...
```
## Demonstration
[](https://www.youtube.com/watch?v=anpRa6GpVdU)
More videos can be found on the [ESBMC-AI Youtube Channel](https://www.youtube.com/@esbmc-ai)
## Wiki
For full documentation, see the [ESBMC-AI Wiki](https://esbmc.github.io/esbmc-ai). Quick Links:
* [Initial Setup Guide](https://esbmc.github.io/esbmc-ai/docs/initial-setup/)
* [Built-in Commands](https://esbmc.github.io/esbmc-ai/docs/commands/)
* [Addons](https://esbmc.github.io/esbmc-ai/docs/addons/)
## Contributing
See [this page](https://esbmc.github.io/esbmc-ai/docs/contributing).
## License
> [!NOTE]
>This project is offered under a [dual-licence](https://github.com/esbmc/esbmc-ai/blob/master/LICENSE) model.
| text/markdown | null | Yiannis Charalambous <yiannis128@hotmail.com> | null | null | null | AI, LLM, automated code repair, esbmc | [
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"blessed",
"langchain",
"langchain-anthropic",
"langchain-community",
"langchain-ollama",
"langchain-openai",
"lizard",
"platformdirs",
"pydantic",
"python-dotenv",
"regex",
"structlog",
"torch",
"transformers"
] | [] | [] | [] | [
"Documentation, https://github.com/esbmc/esbmc-ai/wiki",
"Homepage, https://github.com/esbmc/esbmc-ai",
"Issues, https://github.com/esbmc/esbmc-ai/issues",
"Source Code, https://github.com/esbmc/esbmc-ai"
] | Hatch/1.16.3 cpython/3.12.0 HTTPX/0.28.1 | 2026-02-20T15:22:20.330338 | esbmc_ai-0.5.2.dev15.tar.gz | 58,610 | d8/16/ff2ab8b3b61d626c5edc3f5879bf52fe4e263d55b455991c6b769e520802/esbmc_ai-0.5.2.dev15.tar.gz | source | sdist | null | false | 1a851a1435b480d0ca42e4ff8a2042a1 | 5445a7fa8627db7b829e71a1ee937ec7f30a60ae83276bf895642333f832b6a2 | d816ff2ab8b3b61d626c5edc3f5879bf52fe4e263d55b455991c6b769e520802 | null | [
"CLA.md",
"LICENSE"
] | 0 |
2.4 | juniper-data-client | 0.3.0 | HTTP client for the JuniperData dataset generation service | # juniper-data-client
Python client library for the JuniperData REST API.
## Overview
`juniper-data-client` provides a simple, robust client for interacting with the JuniperData dataset generation service. It is the official client library used by both JuniperCascor (neural network backend) and JuniperCanopy (web dashboard).
## Installation
```bash
pip install juniper-data-client
```
Or install from source:
```bash
cd juniper-data-client
pip install -e .
```
## Quick Start
```python
from juniper_data_client import JuniperDataClient
# Create client (default: localhost:8100)
client = JuniperDataClient("http://localhost:8100")
# Check service health
health = client.health_check()
print(f"Service status: {health['status']}")
# Create a spiral dataset
result = client.create_spiral_dataset(
    n_spirals=2,
    n_points_per_spiral=100,
    noise=0.1,
    seed=42,
)
dataset_id = result["dataset_id"]
print(f"Created dataset: {dataset_id}")
# Download as numpy arrays
arrays = client.download_artifact_npz(dataset_id)
X_train = arrays["X_train"] # (160, 2) float32
y_train = arrays["y_train"] # (160, 2) float32 one-hot
X_test = arrays["X_test"] # (40, 2) float32
y_test = arrays["y_test"] # (40, 2) float32 one-hot
print(f"Training samples: {len(X_train)}")
print(f"Test samples: {len(X_test)}")
```
## Features
- **Simple API**: Easy-to-use methods for all JuniperData endpoints
- **Automatic Retries**: Built-in retry logic for transient failures (429, 5xx)
- **Connection Pooling**: Efficient HTTP connection reuse
- **Type Hints**: Full type annotations for IDE support
- **Context Manager**: Resource cleanup with `with` statement
- **Custom Exceptions**: Granular error handling
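The retry behaviour can be pictured as a simple backoff loop. The sketch below is illustrative only; the real client's internals (per its dependency list, likely built on `urllib3` retry machinery) may differ:

```python
import time

# Illustrative sketch of retry-with-backoff for transient HTTP failures
# (429/5xx), mirroring the retries/backoff_factor settings described here.
def with_retries(fn, retries=3, backoff_factor=0.5,
                 transient=(429, 500, 502, 503, 504)):
    for attempt in range(retries + 1):
        status, body = fn()
        if status not in transient or attempt == retries:
            return status, body
        time.sleep(backoff_factor * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulate a server that fails twice, then succeeds
responses = iter([(503, None), (503, None), (200, b"ok")])
status, body = with_retries(lambda: next(responses), backoff_factor=0)
print(status, body)  # 200 b'ok'
```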
## Usage Examples
### Context Manager
```python
with JuniperDataClient("http://localhost:8100") as client:
    result = client.create_spiral_dataset(seed=42)
    arrays = client.download_artifact_npz(result["dataset_id"])
# Session automatically closed
```
### Wait for Service
```python
client = JuniperDataClient("http://localhost:8100")

# Wait up to 30 seconds for service to be ready
if client.wait_for_ready(timeout=30):
    result = client.create_spiral_dataset(seed=42)
else:
    print("Service not available")
```
### Custom Parameters
```python
# Using the general create_dataset method
result = client.create_dataset(
    generator="spiral",
    params={
        "n_spirals": 3,
        "n_points_per_spiral": 200,
        "noise": 0.2,
        "seed": 12345,
        "algorithm": "legacy_cascor",
        "radius": 10.0,
    },
)
```
### Error Handling
```python
from juniper_data_client import (
    JuniperDataClient,
    JuniperDataConnectionError,
    JuniperDataNotFoundError,
    JuniperDataValidationError,
)
client = JuniperDataClient()
try:
    result = client.create_dataset("spiral", {"n_spirals": -1})
except JuniperDataValidationError as e:
    print(f"Invalid parameters: {e}")
except JuniperDataConnectionError as e:
    print(f"Service unreachable: {e}")
```
### PyTorch Integration
```python
import torch
from juniper_data_client import JuniperDataClient
client = JuniperDataClient()
result = client.create_spiral_dataset(seed=42)
arrays = client.download_artifact_npz(result["dataset_id"])
# Convert to PyTorch tensors
X_train = torch.from_numpy(arrays["X_train"]) # torch.float32
y_train = torch.from_numpy(arrays["y_train"]) # torch.float32
```
## API Reference
### JuniperDataClient
| Method | Description |
| ----------------------------------- | -------------------------------------- |
| `health_check()` | Get service health status |
| `is_ready()` | Check if service is ready (boolean) |
| `wait_for_ready(timeout)` | Wait for service to become ready |
| `list_generators()` | List available generators |
| `get_generator_schema(name)` | Get parameter schema for generator |
| `create_dataset(generator, params)` | Create dataset with generator |
| `create_spiral_dataset(**kwargs)` | Convenience method for spiral datasets |
| `list_datasets(limit, offset)` | List dataset IDs |
| `get_dataset_metadata(id)` | Get dataset metadata |
| `download_artifact_npz(id)` | Download NPZ as dict of arrays |
| `download_artifact_bytes(id)` | Download raw NPZ bytes |
| `get_preview(id, n)` | Get JSON preview of samples |
| `delete_dataset(id)` | Delete a dataset |
| `close()` | Close the client session |
### Exceptions
| Exception | Description |
| ---------------------------- | ----------------------------- |
| `JuniperDataClientError` | Base exception for all errors |
| `JuniperDataConnectionError` | Connection to service failed |
| `JuniperDataTimeoutError` | Request timed out |
| `JuniperDataNotFoundError` | Resource not found (404) |
| `JuniperDataValidationError` | Invalid parameters (400/422) |
## NPZ Artifact Schema
Downloaded artifacts contain the following numpy arrays (all `float32`):
| Key | Shape | Description |
| --------- | ---------------------- | ----------------------------- |
| `X_train` | `(n_train, 2)` | Training features |
| `y_train` | `(n_train, n_classes)` | Training labels (one-hot) |
| `X_test` | `(n_test, 2)` | Test features |
| `y_test` | `(n_test, n_classes)` | Test labels (one-hot) |
| `X_full` | `(n_total, 2)` | Full dataset features |
| `y_full` | `(n_total, n_classes)` | Full dataset labels (one-hot) |
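The layout above can be sanity-checked offline with a small in-memory NPZ round trip (the shapes follow the default spiral example earlier in this README):

```python
import io
import numpy as np

# Round-trip a tiny artifact matching the schema above (all float32).
buf = io.BytesIO()
np.savez(
    buf,
    X_train=np.zeros((160, 2), dtype=np.float32),
    y_train=np.zeros((160, 2), dtype=np.float32),  # one-hot, 2 classes
    X_test=np.zeros((40, 2), dtype=np.float32),
    y_test=np.zeros((40, 2), dtype=np.float32),
)
buf.seek(0)
arrays = np.load(buf)
print(arrays["X_train"].shape, arrays["X_train"].dtype)  # (160, 2) float32
```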
## Configuration
| Parameter | Default | Description |
| ---------------- | ----------------------- | ---------------------------------- |
| `base_url` | `http://localhost:8100` | JuniperData service URL |
| `timeout` | `30` | Request timeout in seconds |
| `retries` | `3` | Number of retry attempts |
| `backoff_factor` | `0.5` | Backoff multiplier between retries |
## Requirements
- Python >=3.11
- numpy >=1.24.0
- requests >=2.28.0
- urllib3 >=2.0.0
## License
MIT License - Copyright (c) 2024-2026 Paul Calnon
## See Also
- [JuniperData](https://github.com/pcalnon/Juniper/tree/main/JuniperData)
- [JuniperCascor](https://github.com/pcalnon/Juniper/tree/main/JuniperCascor)
- [JuniperCanopy](https://github.com/pcalnon/Juniper/tree/main/JuniperCanopy)
| text/markdown | Paul Calnon | null | null | null | null | juniper, dataset, machine-learning, api-client | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.24.0",
"requests>=2.28.0",
"urllib3>=2.0.0",
"pytest>=7.0.0; extra == \"test\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"pytest-timeout>=2.2.0; extra == \"test\"",
"responses>=0.23.0; extra == \"test\"",
"juniper-data-client[test]; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"flake8>=7.0.0; extra == \"dev\"",
"types-requests>=2.28.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/pcalnon/juniper-data-client",
"Repository, https://github.com/pcalnon/juniper-data-client",
"Documentation, https://github.com/pcalnon/juniper-data-client#readme",
"Issues, https://github.com/pcalnon/juniper-data-client/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:22:14.731858 | juniper_data_client-0.3.0.tar.gz | 14,316 | 98/d0/887a34fb041de7d8ee1bb8cab7061f0055b841adc28265b216bfeeef18df/juniper_data_client-0.3.0.tar.gz | source | sdist | null | false | 746019c8ae33ba8672b93cc48df2ace9 | 9c5fdd49c2d35c0df8b2c9b81e98660a04ebafd53631ffc8a4d5329cae701b40 | 98d0887a34fb041de7d8ee1bb8cab7061f0055b841adc28265b216bfeeef18df | MIT | [
"LICENSE"
] | 183 |
2.4 | nautilus-futu | 0.4.1 | Futu OpenD adapter for NautilusTrader | # nautilus-futu
Futu OpenD adapter for [NautilusTrader](https://github.com/nautechsystems/nautilus_trader): a quantitative trading adapter that connects to the Hong Kong, US, and China A-share markets through the Futu OpenD gateway.
## Features
- **Standalone package**: no dependency on the NautilusTrader main repository, with independently controlled releases
- **Rust protocol layer**: the Futu OpenD TCP binary protocol is implemented in Rust for high performance and low latency
- **No protobuf conflicts**: does not depend on the `futu-api` Python package; protobuf is handled on the Rust side with prost, eliminating version conflicts entirely
- **Full feature coverage**: quote subscriptions, historical K-lines, order placement/modification/cancellation, and account balance/position queries
## Supported Markets
| Market | Quotes | Trading |
|------|------|------|
| Hong Kong (HK) | ✅ | ✅ |
| US | ✅ | ✅ |
| China A-shares (CN) | ✅ | ✅ |
## Prerequisites
- Python >= 3.12
- A running [Futu OpenD](https://openapi.futunn.com/futu-api-doc/opend/opend-cmd.html) gateway (default `127.0.0.1:11111`)
- Rust toolchain (required when installing from source)
## Installation
```bash
pip install nautilus-futu
```
Install from source:
```bash
git clone https://github.com/loadstarCN/nautilus-futu.git
cd nautilus-futu
pip install .
```
## Quick Start
```python
import asyncio
from nautilus_futu.config import FutuDataClientConfig, FutuExecClientConfig
async def main():
    # Import the Rust client
    from nautilus_futu._rust import PyFutuClient

    client = PyFutuClient()
    client.connect("127.0.0.1", 11111, "nautilus", 100)

    # Fetch a quote snapshot
    quotes = client.get_basic_qot([(1, "00700")])  # Tencent Holdings
    for q in quotes:
        print(f"{q['code']}: {q['cur_price']}")

    # Fetch historical K-lines
    bars = client.get_history_kl(
        market=1,
        code="00700",
        rehab_type=1,  # forward-adjusted
        kl_type=1,     # daily K-lines
        begin_time="2025-01-01",
        end_time="2025-12-31",
    )
    print(f"Fetched {len(bars)} K-line bars")

    client.disconnect()

asyncio.run(main())
```
### NautilusTrader Integration
```python
from nautilus_trader.live.node import TradingNode
from nautilus_futu.config import FutuDataClientConfig, FutuExecClientConfig
from nautilus_futu.factories import FutuLiveDataClientFactory, FutuLiveExecClientFactory
node = TradingNode()
# Register the Futu adapter
node.add_data_client_factory("FUTU", FutuLiveDataClientFactory)
node.add_exec_client_factory("FUTU", FutuLiveExecClientFactory)
# Configuration
data_config = FutuDataClientConfig(
    host="127.0.0.1",
    port=11111,
)
exec_config = FutuExecClientConfig(
    host="127.0.0.1",
    port=11111,
    trd_env=0,  # 0 = simulated, 1 = live
    trd_market=1,  # 1 = HK, 2 = US
    unlock_pwd_md5="",  # required for live trading
)
node.build()
node.run()
```
## Project Structure
```
nautilus-futu/
├── crates/futu/          # Rust core
│   ├── proto/            # Futu OpenD .proto files
│   └── src/
│       ├── protocol/     # TCP protocol: packet header, codec, encryption
│       ├── client/       # Connection management, handshake, heartbeat, message dispatch
│       ├── quote/        # Market data: subscriptions, snapshots, historical K-lines
│       ├── trade/        # Trading: accounts, order placement, queries
│       ├── generated/    # Rust types generated from protobuf
│       └── python/       # PyO3 bindings
├── nautilus_futu/        # Python NautilusTrader adapter
│   ├── data.py           # FutuLiveDataClient
│   ├── execution.py      # FutuLiveExecutionClient
│   ├── providers.py      # FutuInstrumentProvider
│   └── parsing/          # Data type conversion
├── tests/
└── examples/
```
## Development
```bash
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Create and activate a virtual environment
python -m venv .venv
# Linux / macOS
source .venv/bin/activate
# Windows
.venv\Scripts\activate
# Install development dependencies
pip install -r requirements-dev.txt
# Development build (compiles the Rust crate and installs the Python package)
maturin develop
# Run Rust tests
cargo test
# Run Python tests
pytest tests/python -v
```
## Architecture
```
Python application / NautilusTrader
        │
        ▼
nautilus_futu (Python adapter layer)
        │
        ▼ PyO3
nautilus_futu._rust (compiled Rust extension module)
        │
        ▼ TCP + Protobuf + AES
Futu OpenD gateway
        │
        ▼
HK / US / CN A-share exchanges
```
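The TCP + Protobuf transport shown above relies on length-prefixed message framing. A minimal, generic sketch of that idea (illustration only — the real Futu OpenD packet header also carries a protocol ID, signature, and encryption fields):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with a 4-byte big-endian length header."""
    return struct.pack(">I", len(payload)) + payload

def unframe(buf: bytes) -> tuple[bytes, bytes]:
    """Split one framed message off the front of a receive buffer.

    Returns (payload, remaining_bytes); raises ValueError if the
    buffer does not yet contain a complete message.
    """
    if len(buf) < 4:
        raise ValueError("incomplete header")
    (length,) = struct.unpack(">I", buf[:4])
    if len(buf) < 4 + length:
        raise ValueError("incomplete payload")
    return buf[4 : 4 + length], buf[4 + length :]

# Two messages concatenated on the wire can be split back apart:
wire = frame(b"hello") + frame(b"world")
first, rest = unframe(wire)
second, rest = unframe(rest)
```

Framing like this is what lets a stream socket carry discrete protobuf messages without ambiguity.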
## License
MIT
## Links
- [NautilusTrader](https://github.com/nautechsystems/nautilus_trader)
- [Futu OpenD documentation](https://openapi.futunn.com/futu-api-doc/)
- [Futu API proto definitions](https://github.com/FutunnOpen/py-futu-api)
| text/markdown; charset=UTF-8; variant=GFM | loadstarCN | null | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"nautilus-trader>=1.221"
] | [] | [] | [] | [
"Repository, https://github.com/loadstarCN/nautilus-futu"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:22:09.622965 | nautilus_futu-0.4.1.tar.gz | 124,292 | 6f/46/bd25543ede5cb865f17425c834be249ea955000d476cc121e99e07f05e91/nautilus_futu-0.4.1.tar.gz | source | sdist | null | false | d7f1f598f2258a6abb1e9d63ce02a624 | e94c7a0b2fee50186cf1641f1d6abd8712ed3d858292eaeded677b749d42405e | 6f46bd25543ede5cb865f17425c834be249ea955000d476cc121e99e07f05e91 | null | [
"LICENSE"
] | 610 |
2.4 | setdoc | 1.2.10 | This project helps to set the doc string. | ======
setdoc
======
Visit the website `https://setdoc.johannes-programming.online/ <https://setdoc.johannes-programming.online/>`_ for more information.
| text/x-rst | null | Johannes <johannes.programming@gmail.com> | null | null | The MIT License (MIT)
Copyright (c) 2024 Johannes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Download, https://pypi.org/project/setdoc/#files",
"Index, https://pypi.org/project/setdoc/",
"Source, https://github.com/johannes-programming/setdoc/",
"Website, https://setdoc.johannes-programming.online/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:21:59.383995 | setdoc-1.2.10.tar.gz | 5,774 | 58/fd/d9dcaf09829b36dbcff04b08710a380f23f0e71799bc39adb3db548a3150/setdoc-1.2.10.tar.gz | source | sdist | null | false | 7b45944c06439050dbade35c5020ecf6 | 507591c28bfc519973e1201ba5dc10b04e73f022c85cbe6083e8d84a2ffa9ede | 58fdd9dcaf09829b36dbcff04b08710a380f23f0e71799bc39adb3db548a3150 | null | [
"LICENSE.txt"
] | 168 |
2.4 | vantix | 1.2.9 | One command ML project scaffolding CLI — works on Windows, Mac, and Linux. | <div align="center">
<img src="https://raw.githubusercontent.com/alexcj10/vantix/main/assets/logo.svg" width="400" alt="Vantix">
**Professional ML project scaffolding CLI — one command, zero config.**
[](https://pypi.org/project/vantix/)
</div>
## Quick Start
```bash
pip install vantix
```
```bash
vantix init my_project
```
That's it. Your entire ML workspace is ready.
> Run `pip install -U vantix` to always get the latest version.
## What It Does
A single `vantix init` command will:
| Step | Action |
|:--:|:--|
| 1 | Create a professional folder structure (`data`, `src`, `notebooks`, `configs`, `logs`, `tests`) |
| 2 | Generate essential files (`.gitignore`, `.env`, `config.yaml`, `train.py`, `eda.ipynb`) |
| 3 | Include a full `SETUP_GUIDE.md` manual inside every project |
| 4 | Set up a virtual environment (`.venv`) with `pip`, `setuptools`, `wheel` |
| 5 | Install 10 core ML/DS packages (NumPy, Pandas, Scikit-learn, etc.) |
| 6 | Pin exact versions in `requirements.txt` |
| 7 | Generate a project-specific `README.md` |
| 8 | Initialize Git with an initial commit |
## Default Packages
All packages install the **latest compatible version** automatically.
| Package | Description |
|:--|:--|
| `numpy` | Numerical computing |
| `pandas` | Data manipulation |
| `scikit-learn` | Machine learning algorithms |
| `matplotlib` | Visualization |
| `seaborn` | Statistical visualization |
| `jupyter` | Interactive notebooks |
| `mlflow` | Experiment tracking |
| `fastapi` | REST API framework |
| `uvicorn` | ASGI server |
| `python-dotenv` | Environment variable management |
```bash
vantix packages # View all defaults anytime
```
> Note: These 10 core packages include their own transitive dependencies (~150+ total), all of which are auto-detected and pinned in `requirements.txt`.
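Pinning every installed distribution, as step 6 describes, can be sketched with the standard library alone (a generic illustration, not vantix's actual implementation — `pinned_requirements` is a hypothetical helper name):

```python
from importlib.metadata import distributions

def pinned_requirements() -> list[str]:
    """Return 'name==version' lines for every installed distribution."""
    lines = {
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in distributions()
        if dist.metadata["Name"]  # skip entries with broken metadata
    }
    return sorted(lines, key=str.lower)

# The same lines a generated requirements.txt would contain:
for line in pinned_requirements()[:5]:
    print(line)
```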
## Customize Packages
Override any default or add new packages with `--pkg`:
```bash
# Pin specific versions
vantix init my_project --pkg numpy==1.24.0 --pkg pandas==2.0.0
# Install latest (no pin)
vantix init my_project --pkg numpy==latest
# Add packages not in defaults
vantix init my_project --pkg torch --pkg transformers
# Mix and match
vantix init my_project --pkg numpy==1.24.0 --pkg torch --pkg transformers==4.40.0
```
## Project Structure
```
my_project/
├── .venv/ Virtual environment (auto-created)
├── data/
│ ├── raw/ Original, untouched data
│ └── processed/ Cleaned and transformed data
├── notebooks/
│ └── eda.ipynb Exploration and visualization
├── src/
│ ├── features/ Feature engineering
│ ├── models/ Model definitions
│ ├── training/
│ │ └── train.py Training entry point
│ └── inference/ Prediction and serving
├── configs/
│ └── config.yaml Settings and hyperparameters
├── logs/ Training logs
├── tests/ Unit and integration tests
├── .env Secrets (never committed)
├── .gitignore
├── requirements.txt Pinned dependencies
├── SETUP_GUIDE.md Full manual reference
└── README.md
```
## After Setup
```bash
cd my_project
# Activate virtual environment
source .venv/bin/activate # Mac / Linux
.venv\Scripts\Activate.ps1 # Windows (PowerShell)
# Start training
python src/training/train.py
```
## All Commands
| Command | Description |
|:--|:--|
| `vantix init <name>` | Create project with default packages |
| `vantix init <name> --pkg <spec>` | Override specific packages |
| `vantix packages` | List default packages and versions |
| `vantix --version` | Show version |
| `vantix --help` | Show help |
## Requirements
- **Python** 3.8+
- **Git** (optional, for auto `git init`)
## Troubleshooting
<details>
<summary><b>Command not found: <code>vantix</code> (Windows)</b></summary>
If you see `vantix: The term 'vantix' is not recognized...`, your Python Scripts folder is not in PATH.
**Fix PATH (Recommended):**
1. Search Windows for *"Edit the system environment variables"*
2. Click *"Environment Variables"*
3. Under *"User variables"*, find `Path` and click *"Edit"*
4. Add your Python Scripts folder (e.g., `C:\Users\YourName\AppData\Roaming\Python\Python313\Scripts`)
5. Restart your terminal
**Or use Python module directly:**
```bash
python -m vantix init my_project
```
</details>
## License
[MIT](LICENSE)
| text/markdown | null | null | null | null | MIT | machine-learning, cli, scaffold, data-science, project-template | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Environment :: Console",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pytest; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T15:21:55.884262 | vantix-1.2.9.tar.gz | 13,708 | 61/d9/cd2e32462eaa13fdcfbe5e938002ea6749b64777d02f036e47f40b2f50e8/vantix-1.2.9.tar.gz | source | sdist | null | false | d762e2f0d135ec25ecc208b6a10e59ed | 8d2cb0984dc1fa8e7e985fcebb42fff2fc1f3111d18b4beee2115d322f68a567 | 61d9cd2e32462eaa13fdcfbe5e938002ea6749b64777d02f036e47f40b2f50e8 | null | [
"LICENSE"
] | 178 |
2.4 | connexity | 1.0.3 | A Python client for Connexity API. | # Connexity SDK for Pipecat
A Python SDK for tracking and analyzing voice AI call sessions with Pipecat. Provides frame observers for Twilio and Daily.co integrations to capture conversation data, latency metrics, and call analytics.
## Installation
```bash
pip install connexity
```
## Version
```python
from connexity import __version__
print(__version__)
```
## Usage
### Twilio Observer
Use `ConnexityTwilioObserver` for Twilio-based telephony calls:
```python
from pipecat.audio.vad.vad_analyzer import VADParams
from connexity.pipecat.twilio import ConnexityTwilioObserver
from connexity.utils.logging_config import LogLevel
from twilio.rest import Client
# Configure VAD parameters
vad_params = VADParams(
confidence=0.5,
min_volume=0.6,
start_secs=0.2,
stop_secs=0.8,
)
# Initialize Twilio client and start recording
twilio_client = Client(account_sid, auth_token)
twilio_client.calls(call_sid).recordings.create()
# Create and initialize the observer
observer = ConnexityTwilioObserver()
await observer.initialize(
sid=call_sid, # Twilio Call SID
agent_id="YOUR_AGENT_ID", # Your Connexity agent ID
api_key="YOUR_CONNEXITY_API_KEY", # Your Connexity API key
vad_params=vad_params, # VAD configuration
run_mode="production", # "development" or "production"
vad_analyzer="silero", # VAD engine name
twilio_client=twilio_client, # Twilio REST client instance
log_level=LogLevel.INFO, # Optional: DEBUG, INFO, WARNING, ERROR, CRITICAL
latency_threshold_ms=4000.0, # Optional: latency alert threshold
)
# Register with your Pipecat pipeline
pipeline.register_observer(observer)
```
### Daily.co Observer
Use `ConnexityDailyObserver` for Daily.co-based WebRTC calls:
```python
from pipecat.audio.vad.vad_analyzer import VADParams
from connexity.pipecat.daily import ConnexityDailyObserver
from connexity.utils.logging_config import LogLevel
# Configure VAD parameters
vad_params = VADParams(
confidence=0.5,
min_volume=0.6,
start_secs=0.2,
stop_secs=0.8,
)
# Create and initialize the observer
observer = ConnexityDailyObserver()
await observer.initialize(
sid=room_name, # Daily.co room name/ID
agent_id="YOUR_AGENT_ID", # Your Connexity agent ID
api_key="YOUR_CONNEXITY_API_KEY", # Your Connexity API key
vad_params=vad_params, # VAD configuration
run_mode="production", # "development" or "production"
vad_analyzer="silero", # VAD engine name
daily_api_key="YOUR_DAILY_API_KEY", # Daily.co API key (required)
log_level=LogLevel.INFO, # Optional: DEBUG, INFO, WARNING, ERROR, CRITICAL
latency_threshold_ms=4000.0, # Optional: latency alert threshold
)
# Note: Daily.co calls are always treated as "web" type with no phone numbers
# Register with your Pipecat pipeline
pipeline.register_observer(observer)
```
### Tool Observer
Use `@observe_tool` decorator to monitor and track tool function executions:
```python
from connexity.pipecat import observe_tool
from pipecat.services.llm_service import FunctionCallParams
from typing import Dict
@observe_tool(
tool_name="end_call", # Optional: explicit tool name
include_result=True, # Optional: capture return value (default: True)
include_traceback=True, # Optional: capture tracebacks on errors (default: True)
enable_timeout=True, # Optional: enforce execution timeout (default: True)
timeout_seconds=10.0, # Optional: timeout duration in seconds (default: 10.0)
)
async def end_call(params: FunctionCallParams) -> Dict[str, str]:
# Your tool implementation
return {"status": "ended", "sid": call_sid}
```
The decorator automatically tracks:
- Execution timing and duration
- Tool call IDs and arguments
- Success/failure status
- Error detection in results
- Timeout enforcement
- Callback invocation tracking
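The timing-and-timeout behavior listed above can be illustrated generically with `asyncio` (a minimal sketch of the pattern, not the SDK's `observe_tool` implementation — `observe` and `last_call` are hypothetical names):

```python
import asyncio
import functools
import time

def observe(timeout_seconds: float = 10.0):
    """Wrap an async tool, recording duration and enforcing a timeout."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = await asyncio.wait_for(fn(*args, **kwargs), timeout_seconds)
                status = "success"
            except asyncio.TimeoutError:
                result, status = None, "timeout"
            # Record what a real observer would report upstream
            wrapper.last_call = {
                "duration_ms": (time.perf_counter() - start) * 1000,
                "status": status,
            }
            return result
        return wrapper
    return decorator

@observe(timeout_seconds=0.05)
async def slow_tool():
    await asyncio.sleep(1)  # exceeds the timeout

@observe()
async def fast_tool():
    return {"status": "ended"}

asyncio.run(slow_tool())
asyncio.run(fast_tool())
```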
## Logging Configuration
The SDK uses a centralized logging system that can be configured globally:
```python
from connexity.utils.logging_config import set_sdk_log_level, LogLevel
# Set log level for all SDK components
set_sdk_log_level(LogLevel.DEBUG)
```
Available log levels: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
## Features
- **Conversation Capture**: Records user/assistant messages with timing
- **Latency Tracking**: Measures STT, LLM, TTS, and end-to-end latency
- **Interruption Detection**: Identifies unsuccessful user interruptions and interruption loops
- **Tool Call Monitoring**: Tracks function call lifecycle and issues
- **Issue Reporting**: Automatically reports latency peaks and errors
- **System Prompt Extraction**: Captures and analyzes system prompts
- **Recording Integration**: Retrieves call recordings from Twilio/Daily.co
| text/markdown | null | Dima Ulyanets <dima@spacestep.ca>, Maksym Ilin <maksym@spacestep.ca>, Mykhailo Humeniuk <mykhailo.humeniuk@spacestep.ca> | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp",
"python-dotenv",
"pipecat-ai>=0.0.101",
"autoflake>=2.3.1; extra == \"dev\"",
"ruff>=0.14.9; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:21:53.478325 | connexity-1.0.3.tar.gz | 62,090 | 75/6d/6fac44dc2f088c0bac5f917c13b483c46fe1942450a913f4727e127b0426/connexity-1.0.3.tar.gz | source | sdist | null | false | c13658af7a57d3057d85076420a559c9 | ca4e5615b63aaa45188db7d9c1a3988f44323a08bb0659c8d589684466bb772f | 756d6fac44dc2f088c0bac5f917c13b483c46fe1942450a913f4727e127b0426 | null | [] | 149 |
2.4 | idfkit | 0.2.0 | A fast, modern EnergyPlus IDF/epJSON parser with O(1) lookups and reference tracking | # idfkit
[](https://github.com/samuelduchesne/idfkit/releases)
[](https://github.com/samuelduchesne/idfkit/actions/workflows/main.yml?query=branch%3Amain)
[](https://codecov.io/gh/samuelduchesne/idfkit)
[](https://github.com/samuelduchesne/idfkit/blob/main/LICENSE)
**A fast, modern EnergyPlus IDF/epJSON toolkit for Python.**
> [!NOTE]
> idfkit is in **beta**. The API may change between minor versions. We're looking
> for early adopters and testers — especially users of eppy who want
> better performance and a modern API. If you try it out, please
> [open an issue](https://github.com/samuelduchesne/idfkit/issues) with feedback,
> bug reports, or feature requests.
idfkit lets you load, create, query, and modify EnergyPlus models with an
intuitive Python API. It is designed as a drop-in replacement for
[eppy](https://github.com/santoshphilip/eppy) with better performance,
built-in reference tracking, and native support for both IDF and epJSON
formats.
## Key Features
- **O(1) object lookups** — Collections are indexed by name, so
`doc["Zone"]["Office"]` is a dict lookup, not a linear scan.
- **Automatic reference tracking** — A live reference graph keeps track of
every cross-object reference. Renaming an object updates every field that
pointed to the old name.
- **IDF + epJSON** — Read and write both formats; convert between them in a
single call.
- **Schema-driven validation** — Validate documents against the official
EnergyPlus epJSON schema with detailed error messages.
- **Built-in 3D geometry** — `Vector3D` and `Polygon3D` classes for surface
area, zone volume, and coordinate transforms without external dependencies.
- **EnergyPlus simulation** — Run simulations as subprocesses with structured
result parsing, batch processing, and content-addressed caching.
- **Weather data** — Search 55,000+ weather stations, download EPW/DDY files,
and apply ASHRAE design day conditions.
- **Async & batch simulation** — Run simulations concurrently with
`async_simulate` or process parameter sweeps with `simulate_batch`.
- **3D visualization** — Render building geometry to interactive 3D views or
static SVG images with no external tools.
- **Schedule evaluation** — Parse and evaluate EnergyPlus compact, weekly, and
holiday schedules to time-series values.
- **Thermal properties** — Gas mixture and material thermal calculations for
glazing and construction analysis.
- **Broad version support** — Bundled schemas for every EnergyPlus release
from v8.9 through v25.2.
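The surface-area computation mentioned under "Built-in 3D geometry" can be illustrated with Newell's method (a sketch of the underlying math only, not idfkit's `Polygon3D` API):

```python
import math

def polygon_area_3d(vertices: list[tuple[float, float, float]]) -> float:
    """Area of a planar 3D polygon via the Newell normal vector."""
    nx = ny = nz = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1, z1 = vertices[i]
        x2, y2, z2 = vertices[(i + 1) % n]
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    # The Newell normal's magnitude is twice the polygon area
    return 0.5 * math.sqrt(nx * nx + ny * ny + nz * nz)

# A 2 m x 3 m wall standing in the x-z plane has an area of 6 m^2:
wall = [(0, 0, 0), (2, 0, 0), (2, 0, 3), (0, 0, 3)]
```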
## Performance
idfkit is designed from the ground up for speed. On a **1,700-object IDF**,
looking up a single object by name is **over 4000x faster** than eppy and opyplus
thanks to O(1) dict-based indexing:
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/assets/benchmark_dark.svg">
<source media="(prefers-color-scheme: light)" srcset="docs/assets/benchmark.svg">
<img alt="benchmark chart" src="docs/assets/benchmark.svg">
</picture>
See [full benchmark results](https://samuelduchesne.github.io/idfkit/benchmarks/)
for all six operations (load, get by type, get by name, add, modify, write) across four tools.
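The dict-based indexing behind these numbers can be illustrated generically (a sketch of the data structure, not idfkit's internals):

```python
# Build a name index once at load time; every later lookup is a dict hit.
objects = [{"type": "Zone", "name": f"Zone{i}"} for i in range(1700)]

index: dict[str, dict[str, dict]] = {}
for obj in objects:
    index.setdefault(obj["type"], {})[obj["name"]] = obj

# O(1): one hash lookup per level, regardless of model size, versus an
# O(n) scan like next(o for o in objects if o["name"] == "Zone42")
office = index["Zone"]["Zone42"]
```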
## Installation
Requires **Python 3.10+**.
```bash
pip install idfkit
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv add idfkit
```
### Optional extras
| Extra | Install command | What it adds |
|-------|----------------|--------------|
| `weather` | `pip install idfkit[weather]` | Refresh weather station indexes from source (openpyxl) |
| `dataframes` | `pip install idfkit[dataframes]` | DataFrame result conversion (pandas) |
| `s3` | `pip install idfkit[s3]` | S3 cloud storage backend (boto3) |
| `plot` | `pip install idfkit[plot]` | Matplotlib plotting |
| `plotly` | `pip install idfkit[plotly]` | Plotly interactive charts |
| `progress` | `pip install idfkit[progress]` | tqdm progress bars for simulations |
| `all` | `pip install idfkit[all]` | Everything above |
## Quick Example
```python
from idfkit import load_idf, write_idf
# Load an existing IDF file
doc = load_idf("in.idf")
# Query objects with O(1) lookups
zone = doc["Zone"]["Office"]
print(zone.x_origin, zone.y_origin)
# Modify a field
zone.x_origin = 10.0
# See what references the zone
for obj in doc.get_referencing("Office"):
print(obj.obj_type, obj.name)
# Write back to IDF (or epJSON)
write_idf(doc, "out.idf")
```
### Creating a model from scratch
```python
from idfkit import new_document, write_idf
doc = new_document()
doc.add("Zone", "Office", x_origin=0.0, y_origin=0.0)
write_idf(doc, "new_building.idf")
```
## Simulation
```python
from idfkit.simulation import simulate
result = simulate(doc, "weather.epw", design_day=True)
# Query results from the SQLite output
ts = result.sql.get_timeseries(
variable_name="Zone Mean Air Temperature",
key_value="Office",
)
print(f"Max temp: {max(ts.values):.1f}°C")
```
> **Note:** `result.sql` requires EnergyPlus to produce SQLite output (the
> default). See the [Simulation Guide](https://samuelduchesne.github.io/idfkit/simulation/)
> for details on output configuration.
## Weather
```python
from idfkit.weather import StationIndex, geocode
index = StationIndex.load()
results = index.nearest(*geocode("Chicago, IL"))
print(results[0].station.display_name)
```
## Documentation
Full documentation is available at
**[samuelduchesne.github.io/idfkit](https://samuelduchesne.github.io/idfkit/)**.
Key sections:
- [Getting Started](https://samuelduchesne.github.io/idfkit/getting-started/installation/) — Installation, quick start, interactive tutorial
- [Simulation Guide](https://samuelduchesne.github.io/idfkit/simulation/) — Run EnergyPlus, parse results, batch processing
- [Weather Guide](https://samuelduchesne.github.io/idfkit/weather/) — Station search, downloads, design days
- [API Reference](https://samuelduchesne.github.io/idfkit/api/document/) — Complete API documentation
- [Migrating from eppy](https://samuelduchesne.github.io/idfkit/migration/) — Side-by-side comparison
## Development
```bash
make install # Install dependencies and pre-commit hooks
make check # Run linting, formatting, and type checks
make test # Run tests with coverage
make docs # Serve documentation locally
```
## Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
This project is licensed under the MIT License — see [LICENSE](LICENSE) for details.
| text/markdown | null | Samuel Letellier-Duchesne <samuelduchesne@me.com> | null | null | null | building simulation, energy modeling, energyplus, epjson, idf, parser | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"aiobotocore>=2.5; extra == \"all\"",
"boto3>=1.26; extra == \"all\"",
"matplotlib>=3.5; extra == \"all\"",
"openpyxl>=3.1.0; extra == \"all\"",
"pandas>=1.5; extra == \"all\"",
"plotly>=5.0; extra == \"all\"",
"tqdm>=4.60; extra == \"all\"",
"aiobotocore>=2.5; extra == \"async-s3\"",
"boto3>=1.26; extra == \"cloud\"",
"pandas>=1.5; extra == \"dataframes\"",
"pandas>=1.5; extra == \"pandas\"",
"matplotlib>=3.5; extra == \"plot\"",
"plotly>=5.0; extra == \"plotly\"",
"tqdm>=4.60; extra == \"progress\"",
"boto3>=1.26; extra == \"s3\"",
"openpyxl>=3.1.0; extra == \"weather\""
] | [] | [] | [] | [
"Homepage, https://samuelduchesne.github.io/idfkit/",
"Repository, https://github.com/samuelduchesne/idfkit",
"Documentation, https://samuelduchesne.github.io/idfkit/"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:21:50.948212 | idfkit-0.2.0.tar.gz | 12,846,790 | ff/38/c3bf3082338eba27ae066a7c6858f3bedcd464b3492cd47d49e84d24c632/idfkit-0.2.0.tar.gz | source | sdist | null | false | 25d41ab1a8af565f86011bf96fa2bcc7 | 119b3633e81c4f5ba1c67a400f4bd75ca104c135a151bfcd915b7db2ed92c667 | ff38c3bf3082338eba27ae066a7c6858f3bedcd464b3492cd47d49e84d24c632 | null | [
"LICENSE"
] | 213 |
2.3 | telnyx | 4.43.1 | Telnyx API SDK for global Voice, SMS, MMS, WhatsApp, Fax, Wireless IoT, SIP Trunking, and Call Control. | # Telnyx Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/telnyx/)
The Telnyx Python library provides convenient access to the Telnyx REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## MCP Server
Use the Telnyx MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.
[](https://cursor.com/en-US/install-mcp?name=telnyx-mcp&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsInRlbG55eC1tY3AiXSwiZW52Ijp7IlRFTE5ZWF9BUElfS0VZIjoiTXkgQVBJIEtleSIsIlRFTE5ZWF9QVUJMSUNfS0VZIjoiTXkgUHVibGljIEtleSIsIlRFTE5ZWF9DTElFTlRfSUQiOiJNeSBDbGllbnQgSUQiLCJURUxOWVhfQ0xJRU5UX1NFQ1JFVCI6Ik15IENsaWVudCBTZWNyZXQifX0)
[](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22telnyx-mcp%22%2C%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22telnyx-mcp%22%5D%2C%22env%22%3A%7B%22TELNYX_API_KEY%22%3A%22My%20API%20Key%22%2C%22TELNYX_PUBLIC_KEY%22%3A%22My%20Public%20Key%22%2C%22TELNYX_CLIENT_ID%22%3A%22My%20Client%20ID%22%2C%22TELNYX_CLIENT_SECRET%22%3A%22My%20Client%20Secret%22%7D%7D)
> Note: You may need to set environment variables in your MCP client.
## Documentation
The full API of this library can be found in [api.md](https://github.com/team-telnyx/telnyx-python/tree/master/api.md).
## Installation
```sh
# install from PyPI
pip install telnyx
```
## Usage
The full API of this library can be found in [api.md](https://github.com/team-telnyx/telnyx-python/tree/master/api.md).
```python
import os
from telnyx import Telnyx
client = Telnyx(
api_key=os.environ.get("TELNYX_API_KEY"), # This is the default and can be omitted
)
response = client.calls.dial(
connection_id="conn12345",
from_="+15557654321",
to="+15551234567",
webhook_url="https://your-webhook.url/events",
)
print(response.data)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `TELNYX_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncTelnyx` instead of `Telnyx` and use `await` with each API call:
```python
import os
import asyncio
from telnyx import AsyncTelnyx
client = AsyncTelnyx(
api_key=os.environ.get("TELNYX_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
response = await client.calls.dial(
connection_id="conn12345",
from_="+15557654321",
to="+15551234567",
webhook_url="https://your-webhook.url/events",
)
print(response.data)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install telnyx[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from telnyx import DefaultAioHttpClient
from telnyx import AsyncTelnyx
async def main() -> None:
async with AsyncTelnyx(
api_key=os.environ.get("TELNYX_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
response = await client.calls.dial(
connection_id="conn12345",
from_="+15557654321",
to="+15551234567",
webhook_url="https://your-webhook.url/events",
)
print(response.data)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination
List methods in the Telnyx API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from telnyx import Telnyx
client = Telnyx()
all_access_ip_addresses = []
# Automatically fetches more pages as needed.
for access_ip_address in client.access_ip_address.list(
page_number=1,
page_size=50,
):
# Do something with access_ip_address here
all_access_ip_addresses.append(access_ip_address)
print(all_access_ip_addresses)
```
Or, asynchronously:
```python
import asyncio
from telnyx import AsyncTelnyx
client = AsyncTelnyx()
async def main() -> None:
all_access_ip_addresses = []
# Iterate through items across all pages, issuing requests as needed.
async for access_ip_address in client.access_ip_address.list(
page_number=1,
page_size=50,
):
all_access_ip_addresses.append(access_ip_address)
print(all_access_ip_addresses)
asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.access_ip_address.list(
page_number=1,
page_size=50,
)
if first_page.has_next_page():
print(f"will fetch next page using these details: {first_page.next_page_info()}")
next_page = await first_page.get_next_page()
print(f"number of items we just fetched: {len(next_page.data)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.access_ip_address.list(
page_number=1,
page_size=50,
)
print(f"page number: {first_page.meta.page_number}") # => "page number: 1"
for access_ip_address in first_page.data:
print(access_ip_address.id)
# Remove `await` for non-async usage.
```
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from telnyx import Telnyx
client = Telnyx()
response = client.calls.dial(
connection_id="7267xxxxxxxxxxxxxx",
from_="+18005550101",
to="+18005550100 or sip:username@sip.telnyx.com",
answering_machine_detection_config={
"after_greeting_silence_millis": 1000,
"between_words_silence_millis": 1000,
"greeting_duration_millis": 1000,
"greeting_silence_duration_millis": 2000,
"greeting_total_analysis_time_millis": 50000,
"initial_silence_millis": 1000,
"maximum_number_of_words": 1000,
"maximum_word_length_millis": 2000,
"silence_threshold": 512,
"total_analysis_time_millis": 5000,
},
)
print(response.answering_machine_detection_config)
```
## File uploads
Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.
```python
from pathlib import Path
from telnyx import Telnyx
client = Telnyx()
client.ai.audio.transcribe(
model="distil-whisper/distil-large-v2",
file=Path("/path/to/file"),
)
```
The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `telnyx.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `telnyx.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `telnyx.APIError`.
```python
import telnyx
from telnyx import Telnyx
client = Telnyx()
try:
client.number_orders.create(
phone_numbers=[{"phone_number": "+15558675309"}],
)
except telnyx.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except telnyx.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except telnyx.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from telnyx import Telnyx
# Configure the default for all requests:
client = Telnyx(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).number_orders.create(
phone_numbers=[{"phone_number": "+15558675309"}],
)
```
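The retry behavior described above is the standard exponential-backoff pattern. As an illustration of that pattern only (a hedged sketch, not the SDK's actual internals), a minimal retry wrapper looks like this:

```python
import random
import time

def call_with_retries(fn, max_retries=2, base_delay=0.5):
    """Retry fn() on connection errors with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted; surface the error to the caller
            # The delay doubles each attempt, with a little jitter added.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

The doubling delay is why a small `max_retries` value is usually enough: waits grow quickly, so persistent failures surface fast.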
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from telnyx import Telnyx
# Configure the default for all requests:
client = Telnyx(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = Telnyx(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).number_orders.create(
phone_numbers=[{"phone_number": "+15558675309"}],
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/team-telnyx/telnyx-python/tree/master/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `TELNYX_LOG` to `info`.
```shell
$ export TELNYX_LOG=info
```
Or set it to `debug` for more verbose logging.
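If you prefer configuring logging from code rather than the environment, the standard `logging` API achieves the same effect. This is a sketch; the `telnyx` logger name is an assumption based on the package name:

```python
import logging

# Equivalent in spirit to TELNYX_LOG=debug: attach a handler and lower the
# level on the (assumed) "telnyx" logger directly.
logging.basicConfig()
logging.getLogger("telnyx").setLevel(logging.DEBUG)
```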
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
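The distinction matters because plain dictionary access cannot tell the two cases apart; only the model's bookkeeping can. A quick stdlib illustration of why:

```python
import json

missing = json.loads('{}')                         # key absent entirely
explicit_null = json.loads('{"my_field": null}')   # key present, value null

# .get() returns None in both cases, losing the distinction:
assert missing.get("my_field") is None
assert explicit_null.get("my_field") is None

# Membership testing recovers it, which is what .model_fields_set
# provides for response models:
assert "my_field" not in missing
assert "my_field" in explicit_null
```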
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from telnyx import Telnyx
client = Telnyx()
response = client.number_orders.with_raw_response.create(
phone_numbers=[{
"phone_number": "+15558675309"
}],
)
print(response.headers.get('X-My-Header'))
number_order = response.parse() # get the object that `number_orders.create()` would have returned
print(number_order.data)
```
These methods return an [`APIResponse`](https://github.com/team-telnyx/telnyx-python/tree/master/src/telnyx/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/team-telnyx/telnyx-python/tree/master/src/telnyx/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.number_orders.with_streaming_response.create(
phone_numbers=[{"phone_number": "+15558675309"}],
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from telnyx import Telnyx, DefaultHttpxClient
client = Telnyx(
# Or use the `TELNYX_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from telnyx import Telnyx
with Telnyx() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/team-telnyx/telnyx-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import telnyx
print(telnyx.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/team-telnyx/telnyx-python/tree/master/./CONTRIBUTING.md).
| text/markdown | null | Telnyx <support@telnyx.com> | null | null | MIT | api, communications, connectivity, fax, iot, mms, sip, sms, telephony, telnyx, trunking, voice, voip, whatsapp | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\"",
"pynacl>=1.5.0; extra == \"webhooks\"",
"standardwebhooks; extra == \"webhooks\""
] | [] | [] | [] | [
"Homepage, https://github.com/team-telnyx/telnyx-python",
"Repository, https://github.com/team-telnyx/telnyx-python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-20T15:21:48.110180 | telnyx-4.43.1.tar.gz | 1,056,285 | 8a/18/2fa1cd812e9b2665aa20ace2fc6464d644b70a38329f25d90d0e65bc3449/telnyx-4.43.1.tar.gz | source | sdist | null | false | 8fe2f6970db2d6742ce49675c75b52d3 | e5af56b54386852df8860701e798be6c42f1adbf2df2f6deb5e37fa48c5fb4ba | 8a182fa1cd812e9b2665aa20ace2fc6464d644b70a38329f25d90d0e65bc3449 | null | [] | 1,653 |
2.4 | lazylinop | 1.23.1 | A package dedicated to lazy linear operators based on diverse backends/libraries. | ## Lazylinop

[](https://pepy.tech/project/lazylinop)
[](https://pepy.tech/project/lazylinop)
[](https://pypi.org/project/lazylinop/)
Lazylinop is a python toolbox to ease and accelerate computations with ("matrix-free") linear operators.
It provides glue to combine linear operators as easily as NumPy arrays, PyTorch/CuPy compatibility, standard signal/image processing linear operators, as well as advanced tools to approximate large matrices by efficient butterfly operators.
A `LazyLinOp` is a high-level linear operator based on an arbitrary underlying implementation, such as:
- custom Python functions,
- a [NumPy array](https://numpy.org/doc/stable/reference/generated/numpy.array.html),
- a [SciPy matrix](https://docs.scipy.org/doc/scipy/reference/sparse.html),
- a [CuPy array](https://docs.cupy.dev/en/stable/reference/generated/cupy.array.html),
- a [torch tensor](https://docs.pytorch.org/docs/stable/tensors.html),
- a [Faust](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1Faust.html) object,
- any Python linear operator.
Thanks to the Lazylinop API, this operator can be easily manipulated, transformed or aggregated with other linear operators to form more complex `LazyLinOp` objects.
Thus, many operations are available such as the addition, concatenation, adjoint etc.
These operations are all ruled by the **lazy paradigm**: their evaluation is delayed until the resulting `LazyLinOp` is actually applied to a vector (or to a collection of vectors, seen as a matrix).
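The lazy paradigm can be illustrated with a toy operator class (pure Python, not lazylinop's actual implementation): composing operators only stores functions, and nothing is computed until the composed operator is applied to data.

```python
class ToyLazyOp:
    """Toy stand-in for a lazy linear operator: stores a function, defers work."""
    def __init__(self, apply_fn):
        self.apply_fn = apply_fn

    def __matmul__(self, other):
        if isinstance(other, ToyLazyOp):
            # Composition builds a new operator; no computation happens here.
            return ToyLazyOp(lambda x: self.apply_fn(other.apply_fn(x)))
        # Applying to actual data triggers the evaluation.
        return self.apply_fn(other)

double = ToyLazyOp(lambda x: [2 * v for v in x])
composed = double @ double        # still lazy: nothing evaluated yet
assert composed @ [1, 2, 3] == [4, 8, 12]
```

`LazyLinOp` generalizes this idea to the full linear-operator algebra (addition, adjoint, concatenation, ...) with NumPy/SciPy-compatible semantics.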
## Get started with ``lazylinop``
Run the following command to install the ``lazylinop`` package from PyPI:
```bash
pip install lazylinop
```
Run the following commands to install from conda:
```bash
conda config --add channels conda-forge
conda config --add channels lazylinop
conda install lazylinop
```
## First steps
Build 2D FFT from 1D FFT and Kronecker product:
```python3
import numpy as np
from lazylinop.signal import fft
from lazylinop.basicops import kron

N = 32  # side length of the 2D signal (chosen for the example)
fft2d = kron(fft(N), fft(N))
x = np.random.randn(fft2d.shape[1])
y = fft2d @ x
```
Build circular convolution from 1D FFT:
```python3
import numpy as np
from lazylinop.signal import fft
from lazylinop.basicops import diag

N = 32                       # signal length (chosen for the example)
filter = np.random.randn(N)  # filter coefficients (1D array of length N)
DFT = fft(N) * np.sqrt(N)
D = diag(DFT @ filter, k=0)
L = (DFT / N).H @ D @ DFT
```
## Authors
- Pascal Carrivain
- Simon Delamare
- Hakim Hadj-Djilani
- Remi Gribonval
## Contribute to ``lazylinop``
You can contribute to ``lazylinop`` with bug reports, feature requests and merge requests.
## Useful links
- Link to the documentation: [Gitlab pages](https://faustgrp.gitlabpages.inria.fr/lazylinop/index.html)
## Running unit tests
```bash
cd tests
python3 -m unittest TestLazyLinOp.py
python3 -m unittest TestSignal.py
``` | text/markdown | null | Inria <remi.gribonval@inria.fr>, Pascal Carrivain <pascal.carrivain@inria.fr>, Simon Delamare <simon.delamare@ens-lyon.fr>, Hakim Hadj-Djilani <hakim.hadj-djilani@inria.fr>, Rémi Gribonval <remi.gribonval@inria.fr> | null | null | null | butterfly, lazy computation, signal processing | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"array-api-compat>=1.12.0",
"numpy>=2.0",
"scipy>=1.13",
"sympy>=1.14"
] | [] | [] | [] | [
"Homepage, https://faustgrp.gitlabpages.inria.fr/lazylinop/",
"Documentation, https://faustgrp.gitlabpages.inria.fr/lazylinop/api.html",
"Repository, https://gitlab.inria.fr/faustgrp/lazylinop",
"Bug Tracker, https://gitlab.inria.fr/faustgrp/lazylinop/-/issues"
] | twine/6.2.0 CPython/3.11.2 | 2026-02-20T15:21:10.802890 | lazylinop-1.23.1-py3-none-any.whl | 268,788 | 18/f3/f171202959151d83a260d6accec4d222f20bb537d2830ab5d00f183a147b/lazylinop-1.23.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 61527be86f8f16b5dad7a7a43e204c1f | 841967840b4ef95a5b41f6bb5474cf1c91193c17ebfe37a9c2a92dbeab945046 | 18f3f171202959151d83a260d6accec4d222f20bb537d2830ab5d00f183a147b | BSD-2-Clause | [
"AUTHORS.md",
"LICENSE.txt"
] | 79 |
2.4 | pact-auth | 0.1.1 | Sovereign Agent Authentication Protocol — Python SDK | # pact-auth
**Sovereign Agent Authentication Protocol — Python SDK**
Cryptographic identity, delegation chains, and capability-based authorization for AI agents. Zero trust, zero registry — the public key IS the identity.
## Install
```bash
pip install pact-auth
```
## Quick Start
```python
import pact
from datetime import timedelta
# Create identities (Ed25519 keypairs)
human = pact.new_identity("human", "alice")
agent = pact.new_identity("agent", "my-agent")
# Delegate capabilities with expiry
delegation = pact.new_delegation(
from_identity=human,
to_identity=agent,
capabilities=["storage:read", "storage:write"],
ttl=timedelta(hours=24),
)
# Sign HTTP requests
req = pact.HttpRequest("GET", "https://api.example.com/data")
pact.sign_request(req, agent, [delegation])
# Verify (provider side)
result = pact.verify_request(req)
assert result.valid
```
## Features
- **Self-sovereign identity** — Ed25519 keypairs, `sha256:hex` IDs, no registry
- **Delegation chains** — Signed capability transfers with narrowing-only constraint
- **Capability URIs** — `resource:action,constraint=value` format with wildcard support
- **HTTP signing** — Content digest + signature headers compatible with RFC 9421
- **Session management** — Ephemeral session keys delegated from persistent root
- **Revocation** — Instant revocation with cryptographic proof
- **Zero dependencies** beyond `cryptography>=41.0`
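As a sketch of what the `resource:action,constraint=value` format encodes (the exact grammar is defined in SPEC.md; this hypothetical parser only mirrors the one-line description above and is not the library's code):

```python
def parse_capability(uri):
    """Split a capability URI into (resource, action, constraints dict)."""
    head, _, tail = uri.partition(",")
    resource, _, action = head.partition(":")
    constraints = dict(
        pair.split("=", 1) for pair in tail.split(",") if pair
    )
    return resource, action, constraints

assert parse_capability("storage:read") == ("storage", "read", {})
assert parse_capability("storage:write,prefix=/tmp") == (
    "storage", "write", {"prefix": "/tmp"})
```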
## Protocol Spec
See [SPEC.md](https://github.com/anzal1/pact/blob/main/SPEC.md) for the full 968-line protocol specification.
## License
Apache 2.0
| text/markdown | anzal1 | null | null | null | null | agent, ai, authentication, delegation, ed25519, pact | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Security :: Cryptography",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=41.0"
] | [] | [] | [] | [
"Homepage, https://github.com/anzal1/pact",
"Repository, https://github.com/anzal1/pact",
"Documentation, https://github.com/anzal1/pact/blob/main/SPEC.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:21:09.584578 | pact_auth-0.1.1.tar.gz | 22,997 | d0/fc/71ea47f1977ebd943495a8d5580039330396149fc1f9bb0d2ca36527ceec/pact_auth-0.1.1.tar.gz | source | sdist | null | false | 0f2824e10ee0662490fbde1a2440937c | 2e21b472022ac011322e0158ab07133ea9875a783fdce8d3b3813be624ef4927 | d0fc71ea47f1977ebd943495a8d5580039330396149fc1f9bb0d2ca36527ceec | Apache-2.0 | [] | 156 |
2.4 | taskcluster | 96.5.2 | Python client for Taskcluster | # Taskcluster Client for Python
[](https://pypi.python.org/pypi/taskcluster)
[](http://mozilla.org/MPL/2.0)
**A Taskcluster client library for Python.**
This library is a complete interface to Taskcluster in Python. It provides
both synchronous and asynchronous interfaces for all Taskcluster API methods.
## Usage
For a general guide to using Taskcluster clients, see [Calling Taskcluster APIs](https://docs.taskcluster.net/docs/manual/using/api).
### Setup
Before calling an API end-point, you'll need to create a client instance.
There is a class for each service, e.g., `Queue` and `Auth`. Each takes the
same options, described below. Note that only `rootUrl` is
required, and it's unusual to configure any other options aside from
`credentials`.
For each service, there are sync and async variants. The classes under
`taskcluster` (e.g., `taskcluster.Queue`) operate synchronously. The classes
under `taskcluster.aio` (e.g., `taskcluster.aio.Queue`) are asynchronous.
#### Authentication Options
Here is a simple set-up of an Index client:
```python
import taskcluster
index = taskcluster.Index({
'rootUrl': 'https://tc.example.com',
'credentials': {'clientId': 'id', 'accessToken': 'accessToken'},
})
```
The `rootUrl` option is required as it gives the Taskcluster deployment to
which API requests should be sent. Credentials are only required if the
request is to be authenticated -- many Taskcluster API methods do not require
authentication.
In most cases, the root URL and Taskcluster credentials should be provided in [standard environment variables](https://docs.taskcluster.net/docs/manual/design/env-vars). Use `taskcluster.optionsFromEnvironment()` to read these variables automatically:
```python
auth = taskcluster.Auth(taskcluster.optionsFromEnvironment())
```
Note that this function does not respect `TASKCLUSTER_PROXY_URL`. To use the Taskcluster Proxy from within a task:
```python
auth = taskcluster.Auth({'rootUrl': os.environ['TASKCLUSTER_PROXY_URL']})
```
#### Authorized Scopes
If you wish to perform requests on behalf of a third party that has a smaller set
of scopes than you do, you can specify [which scopes your request should be
allowed to
use](https://docs.taskcluster.net/docs/manual/design/apis/hawk/authorized-scopes)
in the `authorizedScopes` option.
```python
opts = taskcluster.optionsFromEnvironment()
opts['authorizedScopes'] = ['queue:create-task:highest:my-provisioner/my-worker-type']
queue = taskcluster.Queue(opts)
```
#### Other Options
The following additional options are accepted when constructing a client object:
* `signedUrlExpiration` - default value for the `expiration` argument to `buildSignedUrl`
* `maxRetries` - maximum number of times to retry a failed request
### Calling API Methods
API methods are available as methods on the corresponding client object. For
sync clients, these are sync methods, and for async clients they are async
methods; the calling convention is the same in either case.
There are four calling conventions for methods:
```python
client.method(v1, v2, payload)
client.method(payload, k1=v1, k2=v2)
client.method(payload=payload, query=query, params={k1: v1, k2: v2})
client.method(v1, v2, payload=payload, query=query)
```
Here, `v1` and `v2` are URL parameters (named `k1` and `k2`), `payload` is the
request payload, and `query` is a dictionary of query arguments.
For example, in order to call an API method with query-string arguments:
```python
await queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g',
query={'continuationToken': previousResponse.get('continuationToken')})
```
### Generating URLs
It is often necessary to generate the URL for an API method without actually calling the method.
To do so, use `buildUrl` or, for an API method that requires authentication, `buildSignedUrl`.
```python
import taskcluster
index = taskcluster.Index(taskcluster.optionsFromEnvironment())
print(index.buildUrl('findTask', 'builds.v1.latest'))
secrets = taskcluster.Secrets(taskcluster.optionsFromEnvironment())
print(secrets.buildSignedUrl('get', 'my-secret'))
```
Note that signed URLs are time-limited; the expiration can be set with the `signedUrlExpiration` option to the client constructor, or with the `expiration` keyword argument to `buildSignedUrl`, both given in seconds.
### Generating Temporary Credentials
If you have non-temporary taskcluster credentials you can generate a set of
[temporary credentials](https://docs.taskcluster.net/docs/manual/design/apis/hawk/temporary-credentials) as follows. Notice that the credentials cannot last more
than 31 days, and you can only revoke them by revoking the credentials that were
used to issue them (this takes up to one hour).
It is not the responsibility of the caller to apply any clock drift adjustment
to the start or expiry time - this is handled by the auth service directly.
```python
import datetime
start = datetime.datetime.now()
expiry = start + datetime.timedelta(0,60)
scopes = ['ScopeA', 'ScopeB']
name = 'foo'
credentials = taskcluster.createTemporaryCredentials(
# issuing clientId
clientId,
# issuing accessToken
accessToken,
# Validity of temporary credentials starts here, in timestamp
start,
# Expiration of temporary credentials, in timestamp
expiry,
# Scopes to grant the temporary credentials
scopes,
# credential name (optional)
name
)
```
You cannot use temporary credentials to issue new temporary credentials. You
must have `auth:create-client:<name>` to create a named temporary credential,
but unnamed temporary credentials can be created regardless of your scopes.
### Handling Timestamps
Many taskcluster APIs require ISO 8601 timestamps offset into the future
as a way of providing expirations, deadlines, etc. These can be easily created
using `datetime.datetime.isoformat()`; however, it can be rather error-prone
and tedious to offset `datetime.datetime` objects into the future. Therefore
this library comes with two utility functions for this purpose.
```python
dateObject = taskcluster.fromNow("2 days 3 hours 1 minute")
# -> datetime.datetime(2017, 1, 21, 17, 8, 1, 607929)
dateString = taskcluster.fromNowJSON("2 days 3 hours 1 minute")
# -> '2017-01-21T17:09:23.240178Z'
```
By default it will offset the datetime into the future; if the offset string is
prefixed with a minus (`-`), the date object will be offset into the past. This is
useful in some corner cases.
```python
dateObject = taskcluster.fromNow("- 1 year 2 months 3 weeks 5 seconds");
# -> datetime.datetime(2015, 10, 30, 18, 16, 50, 931161)
```
The offset string is insensitive to whitespace and case. It may also
optionally be prefixed with a plus (`+`) if not prefixed with a minus; any `+`
prefix is ignored. However, entries in the offset string must be given in order from
high to low, e.g. `2 years 1 day`. Additionally, various shorthands may be
employed, as illustrated below.
```
years, year, yr, y
months, month, mo
weeks, week, w
days, day, d
hours, hour, h
minutes, minute, min
seconds, second, sec, s
```
The `fromNow` method may also be given a date to be relative to as a second
argument. This is useful when offsetting the task expiration relative to the task
deadline, or doing something similar. This argument can also be passed as the
kwarg `dateObj`:
```python
dateObject1 = taskcluster.fromNow("2 days 3 hours");
dateObject2 = taskcluster.fromNow("1 year", dateObject1);
taskcluster.fromNow("1 year", dateObj=dateObject1);
# -> datetime.datetime(2018, 1, 21, 17, 59, 0, 328934)
```
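Under the hood, this kind of offset is what the stdlib `datetime.timedelta` computes. A sketch of what `fromNow("2 days 3 hours", dateObj=...)` evaluates to, using only the standard library:

```python
from datetime import datetime, timedelta

deadline = datetime(2017, 1, 19, 14, 0, 0)

# Equivalent of taskcluster.fromNow("2 days 3 hours", dateObj=deadline):
expires = deadline + timedelta(days=2, hours=3)
assert expires == datetime(2017, 1, 21, 17, 0, 0)

# And of the negative form, fromNow("- 1 day", dateObj=deadline):
earlier = deadline - timedelta(days=1)
assert earlier == datetime(2017, 1, 18, 14, 0, 0)
```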
### Generating SlugIDs
To generate slugIds (Taskcluster's client-generated unique IDs), use
`taskcluster.slugId()`, which will return a unique slugId on each call.
In some cases it is useful to be able to create a mapping from names to
slugIds, with the ability to generate the same slugId multiple times.
The `taskcluster.stableSlugId()` function returns a callable that does
just this.
```python
gen = taskcluster.stableSlugId()
sometask = gen('sometask')
assert gen('sometask') == sometask # same input generates same output
assert gen('sometask') != gen('othertask')
gen2 = taskcluster.stableSlugId()
sometask2 = gen2('sometask')
assert sometask2 != sometask # but different slugId generators produce
# different output
```
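A minimal sketch of how such a stable generator can work: memoize one random slug per name. The 22-character URL-safe encoding shown matches the usual slugId form, though the library's exact implementation may differ.

```python
import base64
import uuid

def slug_id():
    """A 22-character URL-safe base64 encoding of a random UUID."""
    return base64.urlsafe_b64encode(uuid.uuid4().bytes).decode().rstrip("=")

def stable_slug_id():
    mapping = {}
    def gen(name):
        # The first call for a name generates a slug; later calls reuse it.
        return mapping.setdefault(name, slug_id())
    return gen

gen = stable_slug_id()
assert gen("sometask") == gen("sometask")
assert gen("sometask") != gen("othertask")
assert len(gen("sometask")) == 22
```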
### Scope Analysis
The `scopeMatch(assumedScopes, requiredScopeSets)` function determines
whether one or more of a set of required scopes are satisfied by the assumed
scopes, taking `*`-expansion into account. This is useful for making local
decisions on scope satisfaction, but note that `assumedScopes` must be the
*expanded* scopes, as this function cannot perform expansion.
It takes a list of assumed scopes and a list of required scope sets in
disjunctive normal form, and checks if any of the required scope sets are
satisfied.
Example:
```python
requiredScopeSets = [
    ["scopeA", "scopeB"],
    ["scopeC:xyz"]
]
assert scopeMatch(['scopeA', 'scopeB'], requiredScopeSets)
assert scopeMatch(['scopeC:*'], requiredScopeSets)
assert not scopeMatch(['scopeA'], requiredScopeSets)
assert not scopeMatch(['scopeC'], requiredScopeSets)
```
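The matching rule can be sketched in a few lines. This is a simplified reimplementation for illustration, not the library's code; the key point is that the `*` wildcard lives in the *assumed* scopes, and satisfies any required scope starting with the part before the `*`:

```python
def scope_match(assumed_scopes, required_scope_sets):
    """True if any required scope set is fully satisfied by the assumed scopes."""
    def satisfies(assumed, required):
        if assumed.endswith("*"):
            return required.startswith(assumed[:-1])
        return assumed == required

    return any(
        all(any(satisfies(a, r) for a in assumed_scopes) for r in req_set)
        for req_set in required_scope_sets
    )

required = [["scopeA", "scopeB"], ["scopeC"]]
assert scope_match(["scopeA", "scopeB"], required)
assert scope_match(["scope*"], required)       # wildcard covers both sets
assert not scope_match(["scopeA"], required)   # scopeB missing from first set
```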
### Pagination
Many Taskcluster API methods are paginated. There are two ways to handle
pagination easily with the python client. The first is to implement pagination
in your code:
```python
import taskcluster
queue = taskcluster.Queue({'rootUrl': 'https://tc.example.com'})
i = 0
tasks = 0
outcome = queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g')
while True:
    print('Response %d gave us %d more tasks' % (i, len(outcome.get('tasks', []))))
    tasks += len(outcome.get('tasks', []))
    if not outcome.get('continuationToken'):
        break
    outcome = queue.listTaskGroup(
        'JzTGxwxhQ76_Tt1dxkaG5g',
        query={'continuationToken': outcome['continuationToken']})
    i += 1
print('Task Group %s has %d tasks' % (outcome['taskGroupId'], tasks))
```
There's also an experimental feature to support built-in automatic pagination
in the sync client. This feature allows passing a callback as the
`paginationHandler` keyword argument. This function will be passed the
response body of the API method as its sole positional argument.
This example of the built in pagination shows how a list of tasks could be
built and then counted:
```python
import taskcluster
queue = taskcluster.Queue({'rootUrl': 'https://tc.example.com'})
responses = []
def handle_page(y):
print("%d tasks fetched" % len(y.get('tasks', [])))
responses.append(y)
queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g', paginationHandler=handle_page)
tasks = 0
for response in responses:
tasks += len(response.get('tasks', []))
print("%d requests fetched %d tasks" % (len(responses), tasks))
```
### Pulse Events
This library can generate exchange patterns for Pulse messages based on the
Exchanges definitions provided by each service. This is done by instantiating a
`<service>Events` class and calling a method with the name of the event.
Options for the topic exchange methods can be in the form of either a single
dictionary argument or keyword arguments. Only one form is allowed.
```python
from taskcluster import client
qEvt = client.QueueEvents({'rootUrl': 'https://tc.example.com'})
# The following calls are equivalent
print(qEvt.taskCompleted({'taskId': 'atask'}))
print(qEvt.taskCompleted(taskId='atask'))
```
Note that the client library does *not* provide support for interfacing with a Pulse server.
### Logging
Logging is set up in `taskcluster/__init__.py`. If the special
`DEBUG_TASKCLUSTER_CLIENT` environment variable is set, the `__init__.py`
module will set the `logging` module's level for its logger to `logging.DEBUG`
and if there are no existing handlers, add a `logging.StreamHandler()`
instance. This is meant to assist those who do not wish to bother figuring out
how to configure the Python logging module but do want debug messages.
## Uploading and Downloading Objects
The Object service provides an API for reliable uploads and downloads of large objects.
This library provides convenience methods to implement the client portion of those APIs, providing well-tested, resilient upload and download functionality.
These methods will negotiate the appropriate method with the object service and perform the required steps to transfer the data.
All methods are available in both sync and async versions, with identical APIs except for the `async`/`await` keywords.
In either case, you will need to provide a configured `Object` instance with appropriate credentials for the operation.
NOTE: There is a helper function to upload `s3` artifacts, `taskcluster.helper.upload_artifact`, but it is deprecated as it only supports the `s3` artifact type.
### Uploads
To upload, use any of the following:
* `await taskcluster.aio.upload.uploadFromBuf(projectId=.., name=.., contentType=.., contentLength=.., uploadId=.., expires=.., maxRetries=.., objectService=.., data=..)` - asynchronously upload data from a buffer full of bytes.
* `await taskcluster.aio.upload.uploadFromFile(projectId=.., name=.., contentType=.., contentLength=.., uploadId=.., expires=.., maxRetries=.., objectService=.., file=..)` - asynchronously upload data from a standard Python file.
Note that this is [probably what you want](https://github.com/python/asyncio/wiki/ThirdParty#filesystem), even in an async context.
* `await taskcluster.aio.upload(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., readerFactory=..)` - asynchronously upload data from an async reader factory.
* `taskcluster.upload.uploadFromBuf(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., data=..)` - upload data from a buffer full of bytes.
* `taskcluster.upload.uploadFromFile(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., file=..)` - upload data from a standard Python file.
* `taskcluster.upload(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., readerFactory=..)` - upload data from a sync reader factory.
A "reader" is an object with a `read(max_size=-1)` method which reads and returns a chunk of 1 .. `max_size` bytes, or returns an empty bytes object at EOF, async for the async functions and sync for the remainder.
A "reader factory" is a callable (async or sync, matching the function used) which returns a fresh reader, ready to read the first byte of the object.
When uploads are retried, the reader factory may be called more than once.
The `uploadId` parameter may be omitted, in which case a new slugId will be generated.
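For example, a reader factory over an in-memory payload can be as simple as the following sketch (any object with a compatible `read` method works):

```python
import io

payload = b"contents of the object"

def reader_factory():
    # Returns a fresh reader on each call, so a retried upload
    # restarts cleanly from byte 0.
    return io.BytesIO(payload)

reader = reader_factory()
assert reader.read(8) == b"contents"
assert reader_factory().read() == payload   # a new factory call starts over
assert reader.read() == b" of the object"   # the first reader kept its position
```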
### Downloads
To download, use any of the following:
* `await taskcluster.aio.download.downloadToBuf(name=.., maxRetries=.., objectService=..)` - asynchronously download an object to an in-memory buffer, returning a tuple (buffer, content-type).
If the file is larger than available memory, this will crash.
* `await taskcluster.aio.download.downloadToFile(name=.., maxRetries=.., objectService=.., file=..)` - asynchronously download an object to a standard Python file, returning the content type.
* `await taskcluster.aio.download.download(name=.., maxRetries=.., objectService=.., writerFactory=..)` - asynchronously download an object to an async writer factory, returning the content type.
* `taskcluster.download.downloadToBuf(name=.., maxRetries=.., objectService=..)` - download an object to an in-memory buffer, returning a tuple (buffer, content-type).
If the file is larger than available memory, this will crash.
* `taskcluster.download.downloadToFile(name=.., maxRetries=.., objectService=.., file=..)` - download an object to a standard Python file, returning the content type.
* `taskcluster.download.download(name=.., maxRetries=.., objectService=.., writerFactory=..)` - download an object to a sync writer factory, returning the content type.
A "writer" is an object with a `write(data)` method which writes the given data, async for the async functions and sync for the remainder.
A "writer factory" is a callable (again either async or sync) which returns a fresh writer, ready to write the first byte of the object.
When downloads are retried, the writer factory may be called more than once.
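The writer-factory contract can likewise be illustrated with a small in-memory sketch. This is not the library's code; `make_writer_factory` is a hypothetical helper showing why the factory must hand back a *fresh* writer on each call:

```python
import io

def make_writer_factory():
    """Return a writer factory plus a way to inspect what was written.

    Each call to the factory returns a fresh writer, so a retried download
    restarts from the first byte instead of appending to stale partial data.
    """
    state = {"buf": None}

    def writer_factory():
        state["buf"] = io.BytesIO()  # discard any partial previous attempt
        return state["buf"]          # BytesIO.write(data) matches the writer contract

    def result() -> bytes:
        return state["buf"].getvalue()

    return writer_factory, result

writer_factory, result = make_writer_factory()
w = writer_factory()
w.write(b"partial")           # first (failed) attempt
w = writer_factory()          # retry: factory returns a fresh writer
w.write(b"complete object")
assert result() == b"complete object"
```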
### Artifact Downloads
Artifacts can be downloaded from the queue service with similar functions to those above.
These functions support all of the queue's storage types, raising an error for `error` artifacts.
In each case, if `runId` is omitted then the most recent run will be used.
* `await taskcluster.aio.download.downloadArtifactToBuf(taskId=.., runId=.., name=.., maxRetries=.., queueService=..)` - asynchronously download an object to an in-memory buffer, returning a tuple (buffer, content-type).
If the file is larger than available memory, this will crash.
* `await taskcluster.aio.download.downloadArtifactToFile(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., file=..)` - asynchronously download an object to a standard Python file, returning the content type.
* `await taskcluster.aio.download.downloadArtifact(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., writerFactory=..)` - asynchronously download an object to an async writer factory, returning the content type.
* `taskcluster.download.downloadArtifactToBuf(taskId=.., runId=.., name=.., maxRetries=.., queueService=..)` - download an object to an in-memory buffer, returning a tuple (buffer, content-type).
If the file is larger than available memory, this will crash.
* `taskcluster.download.downloadArtifactToFile(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., file=..)` - download an object to a standard Python file, returning the content type.
* `taskcluster.download.downloadArtifact(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., writerFactory=..)` - download an object to a sync writer factory, returning the content type.
## Integration Helpers
The Python Taskcluster client has a module `taskcluster.helper` with utilities that let you easily share authentication options across multiple services in your project.
Generally a project using this library will face different use cases and authentication options:
* No authentication for a new contributor without Taskcluster access,
* Specific client credentials through environment variables on a developer's computer,
* Taskcluster Proxy when running inside a task.
### Shared authentication
The class `taskcluster.helper.TaskclusterConfig` is made to be instantiated once in your project, usually in a top-level module. That singleton is then accessed by different parts of your project whenever a Taskcluster service is needed.
Here is a sample usage:
1. in `project/__init__.py`, no call to Taskcluster is made at that point:
```python
from taskcluster.helper import TaskclusterConfig
tc = TaskclusterConfig('https://community-tc.services.mozilla.com')
```
2. in `project/boot.py`, we authenticate on Taskcluster with provided credentials, or environment variables, or the Taskcluster proxy (in that order):
```python
from project import tc
tc.auth(client_id='XXX', access_token='YYY')
```
3. at that point, you can load any service using the authenticated wrapper from anywhere in your code:
```python
from project import tc
def sync_usage():
    queue = tc.get_service('queue')
    queue.ping()

async def async_usage():
    hooks = tc.get_service('hooks', use_async=True)  # Asynchronous service class
    await hooks.ping()
```
Supported environment variables are:
- `TASKCLUSTER_ROOT_URL` to specify your Taskcluster instance base URL. You can either use that variable or instantiate `TaskclusterConfig` with the base URL.
- `TASKCLUSTER_CLIENT_ID` & `TASKCLUSTER_ACCESS_TOKEN` to specify your client credentials instead of providing them to `TaskclusterConfig.auth`
- `TASKCLUSTER_PROXY_URL` to specify the proxy address used to reach Taskcluster in a task. It defaults to `http://taskcluster` when not specified.
For more details on Taskcluster environment variables, [here is the documentation](https://docs.taskcluster.net/docs/manual/design/env-vars).
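The precedence described in step 2 above (explicit credentials, then environment variables, then the proxy) can be sketched in plain Python. `resolve_credentials` is a hypothetical illustration of that order, not the library's implementation:

```python
import os

def resolve_credentials(client_id=None, access_token=None, env=os.environ):
    """Illustrative resolution order: explicit credentials win, then the
    TASKCLUSTER_* variables, then the proxy URL (which defaults to
    http://taskcluster when running inside a task)."""
    if client_id and access_token:
        return {"clientId": client_id, "accessToken": access_token}
    if "TASKCLUSTER_CLIENT_ID" in env and "TASKCLUSTER_ACCESS_TOKEN" in env:
        return {"clientId": env["TASKCLUSTER_CLIENT_ID"],
                "accessToken": env["TASKCLUSTER_ACCESS_TOKEN"]}
    return {"proxyUrl": env.get("TASKCLUSTER_PROXY_URL", "http://taskcluster")}

# Explicit credentials take precedence over everything else
assert resolve_credentials("me", "s3cret")["clientId"] == "me"
# With nothing set, we fall back to the default proxy address
assert resolve_credentials(env={})["proxyUrl"] == "http://taskcluster"
```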
### Loading secrets across multiple authentications
Another available utility is `taskcluster.helper.load_secrets` which allows you to retrieve a secret using an authenticated `taskcluster.Secrets` instance (using `TaskclusterConfig.get_service` or the synchronous class directly).
Beyond simply loading a secret, this utility lets you:
1. share a secret across multiple projects, by using key prefixes inside the secret,
2. check that some required keys are present in the secret,
3. provide some default values,
4. provide a local secret source instead of using the Taskcluster service (useful for local development or sharing _secrets_ with contributors)
Let's say you have a secret on a Taskcluster instance named `project/foo/prod-config`, which is needed by a backend and some tasks. Here is its content:
```yaml
common:
  environment: production
  remote_log: https://log.xx.com/payload
backend:
  bugzilla_token: XXXX
task:
  backend_url: https://backend.foo.mozilla.com
```
In your backend, you would do:
```python
from taskcluster import Secrets
from taskcluster.helper import load_secrets
prod_config = load_secrets(
    Secrets({...}),
    'project/foo/prod-config',
    # We only need the common & backend parts
    prefixes=['common', 'backend'],
    # We absolutely need a bugzilla token to run
    required=['bugzilla_token'],
    # Let's provide some default value for the environment
    existing={
        'environment': 'dev',
    },
)
# -> prod_config == {
# "environment": "production"
# "remote_log": "https://log.xx.com/payload",
# "bugzilla_token": "XXXX",
# }
```
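The merge semantics of that call (prefix flattening, defaults, required-key checking) can be mimicked with plain dictionaries. `merge_secret` below is an illustration of the behavior described above, not the actual `taskcluster.helper.load_secrets` implementation:

```python
def merge_secret(secret, prefixes=(), required=(), existing=None):
    """Illustrative merge: start from defaults, flatten the listed
    prefixes over them, then verify every required key is present."""
    out = dict(existing or {})      # defaults, overridden by the secret
    for prefix in prefixes:         # later prefixes override earlier ones
        out.update(secret.get(prefix, {}))
    missing = [k for k in required if k not in out]
    if missing:
        raise KeyError(f"Missing required secret keys: {missing}")
    return out

secret = {
    "common": {"environment": "production", "remote_log": "https://log.xx.com/payload"},
    "backend": {"bugzilla_token": "XXXX"},
    "task": {"backend_url": "https://backend.foo.mozilla.com"},
}
config = merge_secret(secret, prefixes=["common", "backend"],
                      required=["bugzilla_token"],
                      existing={"environment": "dev"})
assert config == {
    "environment": "production",
    "remote_log": "https://log.xx.com/payload",
    "bugzilla_token": "XXXX",
}
```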
In your task, you could do the following using the `TaskclusterConfig` mentioned above (the class has a shortcut to use an authenticated `Secrets` service automatically):
```python
from project import tc
prod_config = tc.load_secrets(
    'project/foo/prod-config',
    # We only need the common & task parts
    prefixes=['common', 'task'],
    # Let's provide some default value for the environment and backend_url
    existing={
        'environment': 'dev',
        'backend_url': 'http://localhost:8000',
    },
)
# -> prod_config == {
# "environment": "production"
# "remote_log": "https://log.xx.com/payload",
# "backend_url": "https://backend.foo.mozilla.com",
# }
```
To provide local secret values, you first need to load these values as a dictionary (usually by reading a local file in your format of choice: YAML, JSON, ...) and provide the dictionary to `load_secrets` via the `local_secrets` parameter:
```python
import os
import yaml
from taskcluster import Secrets
from taskcluster.helper import load_secrets
local_path = 'path/to/file.yml'
prod_config = load_secrets(
    Secrets({...}),
    'project/foo/prod-config',
    # We support an optional local file to provide some configuration without reaching Taskcluster
    local_secrets=yaml.safe_load(open(local_path)) if os.path.exists(local_path) else None,
)
```
## Compatibility
This library is co-versioned with Taskcluster itself.
That is, a client with version x.y.z contains API methods corresponding to Taskcluster version x.y.z.
Taskcluster is careful to maintain API compatibility, and guarantees it within a major version.
That means that any client with version x.* will work against any Taskcluster services at version x.*, and is very likely to work for many other major versions of the Taskcluster services.
Any incompatibilities are noted in the [Changelog](https://github.com/taskcluster/taskcluster/blob/main/CHANGELOG.md).
| text/markdown | null | Mozilla Taskcluster and Release Engineering <release+python@mozilla.com> | null | null | MPL-2.0 | api, client, taskcluster | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohttp>=3.7.4",
"async-timeout>=2.0.0",
"mohawk>=0.3.4",
"python-dateutil>=2.8.2",
"requests>=2.4.3",
"slugid>=2",
"taskcluster-urls>=12.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/taskcluster/taskcluster",
"Repository, https://github.com/taskcluster/taskcluster",
"Documentation, https://docs.taskcluster.net",
"Changelog, https://github.com/taskcluster/taskcluster/blob/main/CHANGELOG.md",
"Issues, https://github.com/taskcluster/taskcluster/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T15:20:58.510125 | taskcluster-96.5.2.tar.gz | 254,608 | 84/e3/928865aceac2422922950a9b39efd040bb59ab4674cbd8e95068ca9af53c/taskcluster-96.5.2.tar.gz | source | sdist | null | false | b01cbd8aad5d3039ad7d43674c7bedb3 | 5d85208c719bd66dd520f1913f7530559444c0191d1aa5ae627f9aa7c14f517a | 84e3928865aceac2422922950a9b39efd040bb59ab4674cbd8e95068ca9af53c | null | [
"LICENSE"
] | 2,229 |
2.4 | feagi | 2.1.19 | Complete FEAGI SDK with Brain Visualizer - Framework for Evolutionary Artificial General Intelligence | # FEAGI - Framework for Evolutionary Artificial General Intelligence
Complete FEAGI SDK with Brain Visualizer included.
## Installation
```bash
pip install feagi
```
This installs:
- **feagi-core** - SDK for building agents and controlling FEAGI
- **Brain Visualizer** - Real-time 3D neural activity visualization
## Quick Start
```bash
# Start FEAGI with barebones genome
feagi start
# Launch Brain Visualizer
feagi bv start
```
## When to Use This Package
Use `feagi` (this package) when you want:
- Visual development and debugging
- Real-time neural activity monitoring
- Learning and tutorials
- Interactive demos
## Alternative: feagi-core (Slim)
For production deployments, CI/CD, or when you don't need visualization:
```bash
pip install feagi-core
```
The `feagi-core` package:
- **Smaller** - ~5MB vs ~196MB
- **Faster installs** - Great for containers
- **Same imports** - All code examples work identically
- **Perfect for** - Production, inference-only, edge devices, CI/CD
## Imports Work Identically
Both packages use the same import namespace:
```python
from feagi.agent import FeagiAgent
from feagi.pns import PNSClient
from feagi.engine import FeagiEngine
```
## Features
- **Agent SDK** - Build sensory/motor agents for robotics and simulations
- **Engine Control** - Start/stop FEAGI neural engine programmatically
- **PNS Client** - Connect to FEAGI's Peripheral Nervous System
- **Brain Visualizer** - 3D real-time visualization (included in this package)
- **CLI Tools** - `feagi` and `feagi bv` commands
## Documentation
- [Documentation](https://github.com/feagi/feagi/tree/main/docs)
## License
Apache License 2.0
| text/markdown | null | "Neuraville Inc." <feagi@neuraville.com> | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"feagi-core==2.1.19",
"feagi-bv<3.0.0,>=2.2.1; sys_platform != \"darwin\"",
"feagi-bv-linux<3.0.0,>=2.2.1; sys_platform == \"linux\"",
"feagi-bv-windows<3.0.0,>=2.2.1; sys_platform == \"win32\"",
"opencv-python>=4.9.0; extra == \"video\"",
"bleak>=0.21.0; extra == \"bluetooth\"",
"websockets>=12.0; extra == \"bluetooth\"",
"pytest>=7.0.0; extra == \"test\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"pytest-asyncio>=0.21.0; extra == \"test\"",
"opencv-python>=4.9.0; extra == \"test\"",
"requests>=2.31.0; extra == \"test\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"ruff>=0.0.270; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"opencv-python>=4.9.0; extra == \"dev\"",
"mkdocs>=1.4.0; extra == \"docs\"",
"mkdocs-material>=9.0.0; extra == \"docs\"",
"mkdocstrings>=0.20.0; extra == \"docs\"",
"mkdocstrings-python>=1.0.0; extra == \"docs\"",
"opencv-python>=4.9.0; extra == \"full\"",
"bleak>=0.21.0; extra == \"full\"",
"websockets>=12.0; extra == \"full\""
] | [] | [] | [] | [
"Homepage, https://www.neuraville.com/feagi",
"Source, https://github.com/feagi/feagi",
"Bug Tracker, https://github.com/feagi/feagi/issues",
"Documentation, https://github.com/feagi/feagi/tree/staging/docs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:20:33.232293 | feagi-2.1.19.tar.gz | 2,927 | e1/e7/e1b3715bd665dc80ed83a811851f19f9eb51e07dfea65c7fb2597c83dc42/feagi-2.1.19.tar.gz | source | sdist | null | false | debe8e4e82516e447338c6b7fec96be6 | fe6cd827cd769a87261d54edb732cf0be4eede4af99867b74dc51573910471b1 | e1e7e1b3715bd665dc80ed83a811851f19f9eb51e07dfea65c7fb2597c83dc42 | null | [] | 197 |
2.4 | feagi-core | 2.1.19 | Core SDK for building FEAGI agents, controlling the neural engine, and creating marketplace packages (without Brain Visualizer) | # FEAGI Python SDK
**Build AI agents that learn like biological brains**
[](https://pypi.org/project/feagi/) [](https://pypi.org/project/feagi/) [](https://discord.gg/PTVC8fyGN8) [](https://opensource.org/licenses/Apache-2.0)
---
## Installation Options
**Full Experience (Recommended for most users):**
```bash
pip install feagi
```
Includes:
- FEAGI itself, for running neuronal simulations
- Brain Visualizer, for real-time 3D neural activity visualization (~196MB)
- Python bindings for FEAGI libraries (intended for advanced users)
- FEAGI Agent Python SDK for rapidly making agents for FEAGI
**Slim/Core (Recommended for Production):**
```bash
pip install feagi-core
```
Includes:
- Python bindings for FEAGI libraries (intended for advanced users)
- FEAGI Agent Python SDK for rapidly making agents for FEAGI
> **Note:** Both packages use identical imports (`from feagi import ...`)
---
## What is FEAGI?
**FEAGI (Framework for Evolutionary Artificial General Intelligence)** is a biologically inspired, modular neural execution engine designed for **embodied AI and robotics**. FEAGI enables spiking-neural-circuit-driven perception, cognition, and control across simulated and physical embodiments, with a strong emphasis on **real-time interaction, modularity, and cross-platform deployment**.
FEAGI serves as the core neural runtime behind **Neurorobotics Studio**, powering a growing ecosystem of reusable neural components ("brains"), tools, and integrations for robotics and physical AI.
### The FEAGI Python SDK
The FEAGI Python SDK provides the tools you need to:
- **Connect robots and devices** to FEAGI's neural network
- **Build learning agents** for robots, simulators, and games
- **Visualize neural activity** in real-time with Brain Visualizer
- **Control and manage FEAGI** from Python code
- **Interface with diverse embodiments** through standardized communication protocols
---
## Key Concepts
* **Neuromorphic by Design** – FEAGI is built as a neuromorphic framework inspired by biological neural computation. While it currently runs on conventional CPUs and GPUs, **native support for neuromorphic hardware is a near-term roadmap item**, enabling direct execution on event-driven, spike-based accelerators as they mature.
* **Embodied Intelligence First** – FEAGI is designed to control bodies (robots, agents, simulations), not just process static data.
* **Spiking Neural Networks (SNNs)** – Uses event-driven neuron firing rather than frame-based inference.
* **Modular Neural Architecture** – Neural circuits can be composed like building blocks (Lego-like micro-circuits).
* **Real-Time Closed Loop** – Continuous perception → cognition → action loop.
* **Cross-Simulator & Hardware Support** – One brain, many bodies.
---
## Quick Start
Get started with FEAGI in just a few commands:
```bash
pip install "feagi"
feagi start
feagi bv start
```
That's it! This installs FEAGI with Brain Visualizer, creates default configuration automatically, and launches the visualizer.
## Documentation
- [Documentation](https://github.com/feagi/feagi/tree/main/docs)
- [Examples](./examples/)
---
### Configuration Management
Initialize FEAGI environment with default configuration:
```bash
feagi init
```
This creates:
- Configuration: `~/.feagi/config/feagi_configuration.toml`
- Genomes directory: `~/Documents/FEAGI/Genomes/` (macOS/Windows) or `~/FEAGI/genomes/` (Linux)
- Connectomes directory: `~/Documents/FEAGI/Connectomes/` or `~/FEAGI/connectomes/`
- Logs and cache directories
**For complete configuration options and customization, see [DEPLOY.md](./DEPLOY.md).**
### Start FEAGI Engine from Python
```python
from feagi.engine import FeagiEngine
engine = FeagiEngine()
engine.load_config() # Uses default config
engine.load_genome("my_brain.json") # Loads from genomes directory
engine.start()
```
Or from command line:
```bash
feagi start --config ~/.feagi/config/feagi_configuration.toml --genome my_brain.json
```
### SDK Architecture
```
feagi/
├── agent/ # Agent framework (BaseAgent)
├── pns/ # Peripheral Nervous System (communication)
├── engine/ # Engine control
├── config/ # Configuration management
├── paths/ # Cross-platform path utilities
├── cli/ # Command-line tools
├── genome/ # Runtime genome manipulation (coming soon)
├── connectome/ # Brain state management (coming soon)
└── packaging/ # Marketplace packages (coming soon)
```
## Examples
See [`examples/`](./examples/) for complete agent implementations:
- Basic sensory agent
- Robot agent (SDK-based)
- Simulator agent (Webots)
- Vision processing
---
## Community & Support
- **Discord**: [Join our community](https://discord.gg/PTVC8fyGN8)
- **Issues**: [Report bugs](https://github.com/feagi/feagi-python-sdk/issues)
- **Neurorobotics Studio**: [Cloud platform](https://brainsforrobots.com)
- **Homepage**: [feagi.org](https://feagi.org)
---
## Requirements
- Python 3.10 or higher
- Works on Linux, macOS, and Windows
---
## License
Apache 2.0 - See [LICENSE](LICENSE) for details.
**Copyright 2016-2025 Neuraville Inc. All Rights Reserved.**
---
## About Neuraville
FEAGI is developed by **Neuraville**, a company focused on democratizing robotics and enabling the next generation of embodied AI through modular, biologically inspired intelligence systems.
| text/markdown | null | "Neuraville Inc." <feagi@neuraville.com> | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.20.0",
"aiohttp>=3.9.0",
"toml>=0.10.2",
"requests>=2.31.0",
"feagi-rust-py-libs>=0.0.86",
"pyserial>=3.5",
"feagi-bv<3.0.0,>=2.2.1; extra == \"bv\"",
"feagi-bv-linux<3.0.0,>=2.2.1; sys_platform == \"linux\" and extra == \"bv\"",
"feagi-bv-macos-arm64<3.0.0,>=2.2.1; (sys_platform == \"darwin\" and platform_machine == \"arm64\") and extra == \"bv\"",
"feagi-bv-macos-x86_64<3.0.0,>=2.2.1; (sys_platform == \"darwin\" and platform_machine == \"x86_64\") and extra == \"bv\"",
"feagi-bv-windows<3.0.0,>=2.2.1; sys_platform == \"win32\" and extra == \"bv\"",
"opencv-python>=4.9.0; extra == \"video\"",
"bleak>=0.21.0; extra == \"bluetooth\"",
"websockets>=12.0; extra == \"bluetooth\"",
"pytest>=7.0.0; extra == \"test\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"pytest-asyncio>=0.21.0; extra == \"test\"",
"opencv-python>=4.9.0; extra == \"test\"",
"requests>=2.31.0; extra == \"test\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"ruff>=0.0.270; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"opencv-python>=4.9.0; extra == \"dev\"",
"mkdocs>=1.4.0; extra == \"docs\"",
"mkdocs-material>=9.0.0; extra == \"docs\"",
"mkdocstrings>=0.20.0; extra == \"docs\"",
"mkdocstrings-python>=1.0.0; extra == \"docs\"",
"feagi-bv<3.0.0,>=2.2.1; extra == \"full\"",
"feagi-bv-linux<3.0.0,>=2.2.1; sys_platform == \"linux\" and extra == \"full\"",
"feagi-bv-macos-arm64<3.0.0,>=2.2.1; (sys_platform == \"darwin\" and platform_machine == \"arm64\") and extra == \"full\"",
"feagi-bv-macos-x86_64<3.0.0,>=2.2.1; (sys_platform == \"darwin\" and platform_machine == \"x86_64\") and extra == \"full\"",
"feagi-bv-windows<3.0.0,>=2.2.1; sys_platform == \"win32\" and extra == \"full\"",
"opencv-python>=4.9.0; extra == \"full\"",
"bleak>=0.21.0; extra == \"full\"",
"websockets>=12.0; extra == \"full\""
] | [] | [] | [] | [
"Homepage, https://www.neuraville.com/feagi",
"Source, https://github.com/feagi/feagi",
"Bug Tracker, https://github.com/feagi/feagi/issues",
"Documentation, https://github.com/feagi/feagi/tree/staging/docs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:20:27.475030 | feagi_core-2.1.19.tar.gz | 23,525,758 | 39/86/d7717e006916bc08be074ce1c07bd31c36d39412f574626598bdf5866460/feagi_core-2.1.19.tar.gz | source | sdist | null | false | a0a935a423ef4945df4646b742f811dd | b6987fc9eb239d87ef29ee61d64a99658ddbba114127eeadeb28c9439ec0c336 | 3986d7717e006916bc08be074ce1c07bd31c36d39412f574626598bdf5866460 | null | [
"LICENSE",
"NOTICE"
] | 210 |
2.4 | ngio | 0.5.4 | Next Generation file format IO | # Ngio - Next Generation file format IO
[](https://github.com/BioVisionCenter/ngio/raw/main/LICENSE)
[](https://pypi.org/project/ngio)
[](https://python.org)
[](https://github.com/BioVisionCenter/ngio/actions/workflows/ci.yml)
[](https://codecov.io/gh/BioVisionCenter/ngio)
ngio is a Python library designed to simplify bioimage analysis workflows, offering an intuitive interface for working with OME-Zarr files.
## What is Ngio?
Ngio is built for the [OME-Zarr](https://ngff.openmicroscopy.org/) file format, a modern, cloud-optimized format for biological imaging data. OME-Zarr stores large, multi-dimensional microscopy images and metadata in an efficient and scalable way.
Ngio's mission is to streamline working with OME-Zarr files by providing a simple, object-based API for opening, exploring, and manipulating OME-Zarr images and high-content screening (HCS) plates. It also offers comprehensive support for labels, tables and regions of interest (ROIs), making it easy to extract and analyze specific regions in your data.
## Key Features
### 🔍 Simple Object-Based API
- Easily open, explore, and manipulate OME-Zarr images and HCS plates
- Create and derive new images and labels with minimal boilerplate code
### 📊 Rich Tables and Regions of Interest (ROI) Support
- Tight integration with [tabular data](https://biovisioncenter.github.io/ngio/stable/table_specs/overview/)
- Extract and analyze specific regions of interest
- Store measurements and other metadata in the OME-Zarr container
- Extensible & modular, allowing users to define custom table schemas and on-disk serialization
### 🔄 Scalable Data Processing
- Powerful iterators for building scalable and generalizable image processing pipelines
- Extensible mapping mechanism for custom parallelization strategies
## Installation
You can install ngio via pip:
```bash
pip install ngio
```
To get started check out the [Quickstart Guide](https://BioVisionCenter.github.io/ngio/stable/getting_started/0_quickstart/).
## Supported OME-Zarr versions
Currently, ngio only supports OME-Zarr v0.4. Support for version 0.5 and higher is planned for future releases.
## Development Status
Ngio is under active development and is not yet stable. The API is subject to change, and bugs and breaking changes are expected.
We follow [Semantic Versioning](https://semver.org/), which means that for 0.x releases, potentially breaking changes can be introduced in minor releases.
### Available Features
- ✅ OME-Zarr metadata handling and validation
- ✅ Image and label access across pyramid levels
- ✅ ROI and table support
- ✅ Image processing iterators
- ✅ Streaming from remote sources
- ✅ Documentation and examples
### Upcoming Features
- Support for OME-Zarr v0.5 and Zarr v3 (via `zarr-python` v3)
- Enhanced performance optimizations (parallel iterators, optimized I/O strategies)
## Contributors
Ngio is developed at the [BioVisionCenter](https://www.biovisioncenter.uzh.ch/en.html), University of Zurich, by [@lorenzocerrone](https://github.com/lorenzocerrone) and [@jluethi](https://github.com/jluethi).
## License
Ngio is released under the BSD-3-Clause License. See [LICENSE](https://github.com/BioVisionCenter/ngio/blob/main/LICENSE) for details.
| text/markdown | null | Lorenzo Cerrone <lorenzo.cerrone@uzh.ch> | null | null | BSD-3-Clause | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Typing :: Typed"
] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"aiohttp",
"anndata",
"dask[array]<2025.11.0",
"dask[distributed]<2025.11.0",
"filelock",
"fsspec",
"numpy",
"ome-zarr-models",
"pandas<3.0.0,>=1.2.0",
"pillow",
"polars",
"pooch",
"pyarrow",
"pydantic",
"requests",
"scipy",
"zarr>3",
"matplotlib; extra == \"dev\"",
"mypy; extra == \"dev\"",
"napari; extra == \"dev\"",
"notebook; extra == \"dev\"",
"pdbpp; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pympler; extra == \"dev\"",
"pyqt5; extra == \"dev\"",
"rich; extra == \"dev\"",
"ruff; extra == \"dev\"",
"scikit-image; extra == \"dev\"",
"zarrs; extra == \"dev\"",
"griffe-typingdoc; extra == \"docs\"",
"markdown-exec[ansi]; extra == \"docs\"",
"matplotlib; extra == \"docs\"",
"mike; extra == \"docs\"",
"mkdocs; extra == \"docs\"",
"mkdocs-autorefs; extra == \"docs\"",
"mkdocs-git-committers-plugin-2; extra == \"docs\"",
"mkdocs-git-revision-date-localized-plugin; extra == \"docs\"",
"mkdocs-include-markdown-plugin; extra == \"docs\"",
"mkdocs-jupyter; extra == \"docs\"",
"mkdocs-material; extra == \"docs\"",
"mkdocstrings[python]; extra == \"docs\"",
"rich; extra == \"docs\"",
"ruff; extra == \"docs\"",
"scikit-image; extra == \"docs\"",
"tabulate; extra == \"docs\"",
"boto; extra == \"test\"",
"devtools; extra == \"test\"",
"moto[server]; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-httpserver; extra == \"test\"",
"s3fs; extra == \"test\"",
"scikit-image; extra == \"test\""
] | [] | [] | [] | [
"homepage, https://github.com/BioVisionCenter/ngio",
"repository, https://github.com/BioVisionCenter/ngio"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:20:13.119299 | ngio-0.5.4.tar.gz | 257,516 | 6d/71/fdabf46d2bd23df40bed3e8d79974882a7dd613492183e629b3b0f946285/ngio-0.5.4.tar.gz | source | sdist | null | false | f87b8a41d29fcb81078900aa663b3932 | e3eaa40aa8336163aa6b325addc0103ae7f4c25270ee25ace1ddb97229d7360b | 6d71fdabf46d2bd23df40bed3e8d79974882a7dd613492183e629b3b0f946285 | null | [
"LICENSE"
] | 167 |
2.4 | acp-agent | 0.2.4 | A CLI tool to browse, search, and run ACP agents | # ACP Agent CLI & SDK 🚀
[](https://pypi.org/project/acp-agent/)
[](https://pypi.org/project/acp-agent/)
[](https://opensource.org/licenses/MIT)
This project provides a friendly and intuitive CLI and SDK for the [ACP Registry](https://github.com/agentclientprotocol/registry), enabling developers to quickly browse, search, run, and containerize [ACP (Agent Client Protocol)](https://agentclientprotocol.com) agents.
## Motivation 💡
The official ACP Registry provides an extensive list of agents. This project simplifies discovery and integration through three core pillars:
- **Interactive Discovery**: Browse and fuzzy-search the entire registry directly from your terminal.
- **Seamless Execution**: Run any agent locally with automatic environment setup, or integrate them into Python apps via the async SDK.
- **Production Deployment**: Automatically generate optimized Containerfiles to run agents in isolated environments or CI/CD pipelines.
## Usage 🚀
There are three ways to use `acp-agent`:
### 1. CLI Usage
We recommend using [uv](https://github.com/astral-sh/uv) to manage and run this project.
```bash
# List all agents
uvx acp-agent list
# Search for agents
uvx acp-agent search opencode
# Run an agent locally
uvx acp-agent run opencode
# Run with a specific working directory, environment variables, and extra arguments
# Any arguments after the options are passed directly to the agent
uvx acp-agent run opencode --cwd ./my-project -e DEBUG=true -- --help
```
#### Example Search Output
```
Search Results for 'opencode'
┏━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ID ┃ Name ┃ Description ┃
┡━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ opencode │ OpenCode │ The open source coding agent │
└──────────┴──────────┴──────────────────────────────┘
```
### 2. SDK Usage
You can integrate `acp-agent` into your Python projects through `ACPAgent`.
```python
import asyncio
from acp_agent import ACPAgent
async def main():
    agent = ACPAgent("opencode")
    # Run an agent and attach to its output (stdout/stderr).
    # This will automatically handle downloading and environment setup,
    # and returns the exit code of the agent.
    exit_code = await agent.run(attach=True)
    print(f"Agent exited with code: {exit_code}")

if __name__ == "__main__":
    asyncio.run(main())
```
The same instance can also generate a `Containerfile` and provide config paths:
```python
import asyncio
from pathlib import Path
from acp_agent import ACPAgent
async def main():
    agent = ACPAgent("opencode")
    # Generate Containerfile content for 'opencode'.
    # This will inject the agent installation and CMD into your base image.
    content = await agent.format_containerfile(containerfile="FROM python:3.12-slim")
    Path("Dockerfile").write_text(content)

    if config := agent.config:
        print(f"Config path: {config.config}")
        print(f"Credential path: {config.credential}")

if __name__ == "__main__":
    asyncio.run(main())
```
After starting your container, you can manually copy these files from your host to the container's expected locations (e.g., via `docker cp`) to fully replicate your host-side environment and authentication state within the isolated container.
# License
[MIT License](LICENSE)
| text/markdown | observerw | observerw <wozluohd@gmail.com> | null | null | null | acp, agent, agent-client-protocol | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"agent-client-protocol>=0.7.1",
"aioshutil>=1.6",
"anyio>=4.12.1",
"attrs>=25.4.0",
"cyclopts>=4.5.1",
"httpx>=0.28.1",
"httpx-ws>=0.8.2",
"jinja2>=3.1.6",
"litestar>=2.19.0",
"loguru>=0.7.3",
"platformdirs>=4.5.1",
"pydantic>=2.12.5",
"pydantic-settings>=2.12.0",
"rich>=14.3.2"
] | [] | [] | [] | [
"Repository, https://github.com/observerw/acp-agent"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:20:00.519961 | acp_agent-0.2.4.tar.gz | 11,387 | 62/74/ad8ea8d93d9028cc1a4027da4ea168e224ffc71a36039f3925f94cce1538/acp_agent-0.2.4.tar.gz | source | sdist | null | false | 5a527cc52e011d516279f680c7c89b98 | d97b4e676de398f27da24f0b20a9cf1cf8cd9f75b268df526a1b86974061d0f4 | 6274ad8ea8d93d9028cc1a4027da4ea168e224ffc71a36039f3925f94cce1538 | MIT | [] | 148 |
2.4 | tatva | 0.6.0 | Lego-like building blocks for differentiable finite element analysis | <div align="center">
<img src="assets/logo-small.png" alt="drawing" width="400"/>
<h3 align="center">Tatva (टत्तव) : Lego-like building blocks for differentiable FEM</h3>
`tatva` is a Sanskrit word meaning principle, or the elements of reality. True to its name, `tatva` provides fundamental Lego-like building blocks (elements) from which complex finite element method (FEM) simulations can be constructed. `tatva` is a pure-Python library for FEM simulations built on top of the JAX ecosystem, making it easy to use FEM in a differentiable way.
</div>
[](https://github.com/smec-ethz/tatva-docs/actions/workflows/pages/pages-build-deployment)
[](https://github.com/smec-ethz/tatva/actions/workflows/run_tests.yml)
## License
`tatva` is distributed under the GNU Lesser General Public License v3.0 or later. See `COPYING` and `COPYING.LESSER` for the complete terms. © 2025 ETH Zurich (SMEC).
## Features
- Energy-based formulation of FEM operators with automatic differentiation via JAX.
- Capability to handle coupled-PDE systems with multi-field variables, KKT conditions, and constraints.
- Element library covering line, surface, and volume primitives (Line2, Tri3, Quad4, Tet4, Hex8) with consistent JAX-compatible APIs.
- Mesh and Operator abstractions that map, integrate, differentiate, and interpolate fields on arbitrary meshes.
- Automatic handling of stacked multi-field variables through the `tatva.compound` utilities while preserving sparsity patterns.
## Installation
Install the current release from PyPI:
```bash
pip install tatva
```
For development work, clone the repository and install it in editable mode (use your preferred virtual environment tool such as `uv` or `venv`):
```bash
git clone https://github.com/smec-ethz/tatva.git
cd tatva
pip install -e .
```
## Documentation
Available at [**smec-ethz.github.io/tatva-docs**](https://smec-ethz.github.io/tatva-docs/). The documentation includes API references, tutorials, and examples to help you get started with `tatva`.
## Usage
Create a mesh, pick an element type, and let `Operator` perform the heavy lifting with JAX arrays:
```python
import jax.numpy as jnp
from tatva.element import Tri3
from tatva.mesh import Mesh
from tatva.operator import Operator
coords = jnp.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
elements = jnp.array([[0, 1, 2], [0, 2, 3]])
mesh = Mesh(coords, elements)
op = Operator(mesh, Tri3())
nodal_values = jnp.arange(coords.shape[0], dtype=jnp.float64)
# Integrate a nodal field over the mesh
total = op.integrate(nodal_values)
# Evaluate gradients at all quadrature points
gradients = op.grad(nodal_values)
```
Examples for various applications will be added very soon. They will showcase patterns such as
mapping custom kernels, working with compound fields, and sparse assembly helpers.
## Dense vs Sparse vs Matrix-free
A unique aspect of `tatva` is that it can construct dense matrices, sparse matrices, and matrix-free operators. To build a sparse matrix, `tatva` uses a matrix-coloring algorithm together with sparse differentiation: our own coloring library (`tatva-coloring`) colors the matrix based on its sparsity pattern, and other coloring libraries can be substituted for more advanced algorithms. This significantly reduces memory consumption. For large problems, we can also use matrix-free operators, which do not require storing the matrix in memory at all. Since we have an energy functional, we can use `jax.jvp` to compute the matrix-vector product without explicitly forming the matrix, which is particularly useful for large problems where storing the matrix is not feasible.
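The matrix-free path can be sketched in a few lines of JAX: given an energy functional, applying `jax.jvp` to its gradient yields a Hessian-vector product without ever forming the Hessian. The quadratic-plus-cosine energy below is a toy stand-in for a real FEM energy:

```python
import jax
import jax.numpy as jnp

def energy(u):
    # Toy energy functional standing in for a FEM energy
    return 0.5 * jnp.sum(u**2) + jnp.sum(jnp.cos(u))

u = jnp.ones(5)      # current solution vector
v = jnp.arange(5.0)  # direction to multiply by

# Matrix-free Hessian-vector product: differentiate grad(energy) along v
_, hvp = jax.jvp(jax.grad(energy), (u,), (v,))
```

Here `hvp` equals the Hessian of `energy` at `u` applied to `v`, computed at roughly the cost of two gradient evaluations.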
## Paper
To learn more about `tatva` and how it works, see the paper: [arXiv link](https://arxiv.org/abs/2602.12365v1)
## 👉 Where to contribute
If you have a suggestion that would make this better, please fork the repo and create a pull request on [**github.com/smec-ethz/tatva**](https://github.com/smec-ethz/tatva); use that repository to open issues and submit merge requests. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
| text/markdown | null | Mohit Pundir <mpundir@ethz.ch>, Flavio Lorez <florez@ethz.ch> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"equinox",
"jax-autovmap>=0.3.0",
"jax>=0.4.1",
"numpy",
"tatva-coloring",
"pytest>=8.4.2; extra == \"dev\"",
"matplotlib; extra == \"plotting\""
] | [] | [] | [] | [
"repository, https://github.com/smec-ethz/tatva"
] | uv/0.9.25 {"installer":{"name":"uv","version":"0.9.25","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:18:57.250785 | tatva-0.6.0-py3-none-any.whl | 55,514 | 56/96/df7cb9de54af859af98edc016a86c6c65a858da0711bfc780d920decdb4a/tatva-0.6.0-py3-none-any.whl | py3 | bdist_wheel | null | false | bee33d4101b90eea8ed53d50b2188f5f | 53a597ba428e495534b201a9c10334a8b7f9a33294fc0e9ffd6a0f200bbe42dc | 5696df7cb9de54af859af98edc016a86c6c65a858da0711bfc780d920decdb4a | LGPL-3.0 | [
"COPYING",
"COPYING.LESSER"
] | 148 |
2.4 | iris-vector-graph | 1.6.5 | Transactional Graph + Vector retrieval system for InterSystems IRIS with hybrid search, openCypher, and GraphQL APIs | # IRIS Vector Graph
**The ultimate Graph + Vector + Text Retrieval Engine for InterSystems IRIS.**
[](https://www.python.org/downloads/)
[](https://www.intersystems.com/products/intersystems-iris/)
[](https://github.com/intersystems-community/iris-vector-graph/blob/main/LICENSE)
IRIS Vector Graph is a general-purpose graph utility built on InterSystems IRIS that supports and demonstrates knowledge graph construction and query techniques. It combines **graph traversal**, **HNSW vector similarity**, and **lexical search** in a single, unified database.
---
## Why IRIS Vector Graph?
- **Multi-Query Power**: Query your graph via **SQL**, **openCypher (v1.3 with DML)**, or **GraphQL** — all on the same data.
- **Transactional Engine**: Beyond retrieval — support for `CREATE`, `DELETE`, and `MERGE` operations.
- **Blazing Fast Vectors**: Native HNSW indexing delivering **~1.7ms** search latency (vs 5.8s standard).
- **Zero-Dependency Integration**: Built with IRIS Embedded Python — no external vector DBs or graph engines required.
- **Production-Ready**: The engine behind [iris-vector-rag](https://github.com/intersystems-community/iris-vector-rag) for advanced RAG pipelines.
---
## Installation
```bash
pip install iris-vector-graph
```
Note: Requires **InterSystems IRIS 2025.1+** with the `irispython` runtime enabled.
## Quick Start
```bash
# 1. Clone & Sync
git clone https://github.com/intersystems-community/iris-vector-graph.git && cd iris-vector-graph
uv sync
# 2. Spin up IRIS
docker-compose up -d
# 3. Start API
uvicorn api.main:app --reload
```
Visit:
- **GraphQL Playground**: [http://localhost:8000/graphql](http://localhost:8000/graphql)
- **API Docs**: [http://localhost:8000/docs](http://localhost:8000/docs)
---
## Unified Query Engines
### openCypher (Advanced RD Parser)
IRIS Vector Graph features a custom recursive-descent Cypher parser supporting multi-stage queries and transactional updates:
```cypher
// Complex fraud analysis with WITH and Aggregations
MATCH (a:Account)-[r]->(t:Transaction)
WITH a, count(t) AS txn_count
WHERE txn_count > 5
MATCH (a)-[:OWNED_BY]->(p:Person)
RETURN p.name, txn_count
```
**Supported Clauses:** `MATCH`, `OPTIONAL MATCH`, `WITH`, `WHERE`, `RETURN`, `UNWIND`, `CREATE`, `DELETE`, `DETACH DELETE`, `MERGE`, `SET`, `REMOVE`.
### GraphQL
```graphql
query {
  protein(id: "PROTEIN:TP53") {
    name
    interactsWith(first: 5) { id name }
    similar(limit: 3) { protein { name } similarity }
  }
}
```
### SQL (Hybrid Search)
```sql
SELECT TOP 10 id,
kg_RRF_FUSE(id, vector, 'cancer suppressor') as score
FROM nodes
ORDER BY score DESC
```
---
## Scaling & Performance
The integration of a native **HNSW (Hierarchical Navigable Small World)** functional index directly into InterSystems IRIS provides massive scaling benefits for hybrid graph-vector workloads.
By keeping the vector index in-process with the graph data, we achieve **subsecond multi-modal queries** that would otherwise require complex application-side joins across multiple databases.
### Performance Benchmarks (2026 Refactor)
- **High-Speed Traversal**: **~1.84M TEPS** (Traversed Edges Per Second).
- **Low-Latency Traversal**: 2-hop BFS on 10k nodes in **<40ms**.
- **RDF 1.2 Support**: Native support for **Quoted Triples** (Metadata on edges) via subject-referenced properties.
- **Query Signatures**: O(1) hop-rejection using ASQ-inspired Master Label Sets.
### Why fast vector search matters for graphs
Consider a "Find-and-Follow" query common in fraud detection:
1. **Find** the top 10 accounts most semantically similar to a known fraudulent pattern (Vector Search).
2. **Follow** all outbound transactions from those 10 accounts to identify the next layer of the money laundering ring (Graph Hop).
In a standard database without HNSW, the first step (vector search) can take several seconds as the dataset grows, blocking the subsequent graph traversals. With `iris-vector-graph`, the vector lookup is reduced to **~1.7ms**, enabling the entire hybrid traversal to complete in a fraction of a second.
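As an illustrative sketch, the two steps collapse into a single SQL statement. Table and column names follow the schema hinted at elsewhere in this README (`nodes(id, vector)` and `rdf_edges(s, p, o_id)`), and the `'SENT'` edge predicate is hypothetical:

```sql
-- Find: top 10 nodes nearest a known fraud embedding (served by the HNSW index),
-- then Follow: one hop along their outbound edges.
SELECT e.s AS suspect_account, e.o_id AS downstream_txn
FROM rdf_edges e
WHERE e.p = 'SENT'   -- hypothetical edge predicate
  AND e.s IN (
    SELECT TOP 10 id
    FROM nodes
    ORDER BY VECTOR_DOT_PRODUCT(vector, TO_VECTOR(?)) DESC
  )
```

Because both subquery and outer hop execute inside the same IRIS process, no application-side join across databases is needed.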
---
## Interactive Demos
Experience the power of IRIS Vector Graph through our interactive demo applications.
### Biomedical Research Demo
Explore protein-protein interaction networks with vector similarity and D3.js visualization.
### Fraud Detection Demo
Real-time fraud scoring with transaction networks, Cypher-based pattern matching, and bitemporal audit trails.
To run the CLI demos:
```bash
export PYTHONPATH=$PYTHONPATH:.
# Cypher-powered fraud detection
python3 examples/demo_fraud_detection.py
# SQL-powered "drop down" example
python3 examples/demo_fraud_detection_sql.py
```
To run the Web Visualization demos:
```bash
# Start the demo server
uv run uvicorn src.iris_demo_server.app:app --port 8200 --host 0.0.0.0
```
Visit [http://localhost:8200](http://localhost:8200) to begin.
---
## iris-vector-rag Integration
IRIS Vector Graph is the core engine powering [iris-vector-rag](https://github.com/intersystems-community/iris-vector-rag). You can use it in your RAG pipelines like this:
```python
from iris_vector_rag import create_pipeline
# Create a GraphRAG pipeline powered by this engine
pipeline = create_pipeline('graphrag')
# Combined vector + text + graph retrieval
result = pipeline.query(
"What are the latest cancer treatment approaches?",
top_k=5
)
```
---
## Documentation
- [Detailed Architecture](https://github.com/intersystems-community/iris-vector-graph/blob/main/docs/architecture/ARCHITECTURE.md)
- [Biomedical Domain Examples](https://github.com/intersystems-community/iris-vector-graph/tree/main/examples/domains/biomedical/)
- [Full Test Suite](https://github.com/intersystems-community/iris-vector-graph/tree/main/tests/)
- [iris-vector-rag Integration](https://github.com/intersystems-community/iris-vector-rag)
- [Verbose README](https://github.com/intersystems-community/iris-vector-graph/blob/main/docs/README_VERBOSE.md) (Legacy)
---
## Changelog
### v1.6.0 (2025-01-31)
- **High-Performance Batch API**: New `get_nodes(node_ids)` reduces database round-trips by 100x+ for large result sets
- **Advanced Substring Search**: Integrated IRIS `iFind` indexing for sub-20ms `CONTAINS` queries on 10,000+ records
- **GraphQL Acceleration**: Implemented `GenericNodeLoader` to eliminate N+1 query patterns in GQL traversals
- **Transactional Batching**: Optimized `bulk_create_nodes/edges` with `executemany` and unified transactions
- **Functional Indexing**: Native JSON-based edge confidence indexing for fast complex filtering
### v1.5.4 (2025-01-31)
- **Schema Cleanup**: Removed invalid `VECTOR_DIMENSION` call from schema utilities
- **Refinement**: Engine now relies solely on inference and explicit config for dimensions
### v1.5.3 (2025-01-31)
- **Robust Embeddings**: Fixed embedding dimension detection for IRIS Community 2025.1
- **API Improvements**: Added `embedding_dimension` param to `IRISGraphEngine` for manual override
- **Auto-Inference**: Automatically infers dimension from input if detection fails
- **Code Quality**: Major cleanup of `engine.py` to remove legacy duplicates
### v1.5.2 (2025-01-31)
- **Engine Acceleration**: Ported high-performance SQL paths for `get_node()` and `count_nodes()`
- **Bulk Loading**: New `bulk_create_nodes()` and `bulk_create_edges()` methods with `%NOINDEX` support
- **Performance**: Verified 80x speedup for single-node reads and 450x for counts vs standard Cypher
### v1.5.1 (2025-01-31)
- **Extreme Performance**: Verified 38ms latency for 5,000-node property queries (at 10k entity scale)
- **Subquery Stability**: Optimized `REPLACE` string aggregation to avoid IRIS `%QPAR` optimizer bugs
- **Scale Verified**: Robust E2E stress tests confirm industrial-grade performance for 10,000+ nodes
### v1.4.9 (2025-01-31)
- **Exact Collation**: Added `%EXACT` to VARCHAR columns for case-sensitive matching
- **Performance**: Prevents default `UPPER` collation behavior in IRIS 2024.2+
- **Case Sensitivity**: Ensures node IDs, labels, and property keys are case-sensitive
### v1.4.8 (2025-01-31)
- **Fix SUBSCRIPT error**: Removed `idx_props_key_val` which caused errors with large values
- **Improved Performance**: Maintained composite indexes that don't include large VARCHAR columns
### v1.4.7 (2025-01-31)
- **Revert to VARCHAR(64000)**: LONGVARCHAR broke REPLACE; VARCHAR(64000) keeps compatibility
- **Large Values**: 64KB property values, REPLACE works, no CAST needed
### ~~v1.4.5/1.4.6~~ (deprecated - use 1.4.7)
- v1.4.5 used LONGVARCHAR which broke REPLACE function
- v1.4.6 used CAST which broke on old schemas
### v1.4.4 (2025-01-31)
- **Bulk Loading Support**: `%NOINDEX` INSERTs, `disable_indexes()`, `rebuild_indexes()`
- **Fast Ingest**: Skip index maintenance during bulk loads, rebuild after
### v1.4.3 (2025-01-31)
- **Composite Indexes**: Added (s,key), (s,p), (p,o_id), (s,label) based on TrustGraph patterns
- **12 indexes total**: Optimized for label filtering, property lookups, edge traversal
### v1.4.2 (2025-01-31)
- **Performance Indexes**: Added indexes on rdf_labels, rdf_props, rdf_edges for fast graph traversal
- **ensure_indexes()**: New method to add indexes to existing databases
- **Composite Index**: Added (key, val) index on rdf_props for property value lookups
### v1.4.1 (2025-01-31)
- **Embedding API**: Added `get_embedding()`, `get_embeddings()`, `delete_embedding()` methods
- **Schema Prefix in Engine**: All engine SQL now uses configurable schema prefix
### v1.4.0 (2025-01-31)
- **Schema Prefix Support**: `set_schema_prefix('Graph_KG')` for qualified table names
- **Pattern Operators Fixed**: `CONTAINS`, `STARTS WITH`, `ENDS WITH` now work correctly
- **IRIS Compatibility**: Removed recursive CTEs and `NULLS LAST` (unsupported by IRIS)
- **ORDER BY Fix**: Properties in ORDER BY now properly join rdf_props table
- **type(r) Verified**: Relationship type function works in RETURN/WHERE clauses
---
**Author: Thomas Dyar** (thomas.dyar@intersystems.com)
| text/markdown | null | InterSystems Community Team <grants@intersystems.com> | null | null | null | bioinformatics, biomedical, graph, iris, knowledge-graph, protein-interactions, vector-search | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.118.0",
"httpx>=0.28.1",
"intersystems-irispython>=3.2.0",
"networkx>=3.0",
"numpy>=1.24.0",
"pandas>=2.0.0",
"py2neo>=2021.2.4",
"pydantic>=2.11.9",
"pytest-asyncio>=1.2.0",
"python-dotenv>=1.0.0",
"requests>=2.28.0",
"strawberry-graphql[fastapi]>=0.280.0",
"uvicorn>=0.37.0",
"biopython>=1.81; extra == \"biodata\"",
"bioservices>=1.11.0; extra == \"biodata\"",
"mygene>=3.2.0; extra == \"biodata\"",
"python-fasthtml>=0.12.0; extra == \"demo\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"iris-devtester>=1.8.1; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-playwright>=0.7.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"scikit-learn>=1.3.0; extra == \"ml\"",
"scipy>=1.11.0; extra == \"ml\"",
"torch>=2.0.0; extra == \"ml\"",
"memory-profiler>=0.61.0; extra == \"performance\"",
"psutil>=5.9.0; extra == \"performance\"",
"graphviz>=0.20.0; extra == \"visualization\"",
"matplotlib>=3.7.0; extra == \"visualization\"",
"plotly>=5.15.0; extra == \"visualization\""
] | [] | [] | [] | [
"Homepage, https://github.com/isc-tdyar/iris-vector-graph",
"Documentation, https://github.com/isc-tdyar/iris-vector-graph/tree/main/docs",
"Repository, https://github.com/isc-tdyar/iris-vector-graph",
"Issues, https://github.com/isc-tdyar/iris-vector-graph/issues"
] | twine/6.2.0 CPython/3.12.9 | 2026-02-20T15:18:20.059959 | iris_vector_graph-1.6.5.tar.gz | 459,164 | a0/9f/51c495b1f994e1a2ef6b71f49b29428c9ed95954174730fbeb9e3b36dfb7/iris_vector_graph-1.6.5.tar.gz | source | sdist | null | false | c081f9afd45f9870c6358c88d60d076f | 7561341182ebc42a6de18d674c2e57195dcfb8495f54402f369a16c8b8e0af69 | a09f51c495b1f994e1a2ef6b71f49b29428c9ed95954174730fbeb9e3b36dfb7 | MIT | [
"LICENSE"
] | 165 |
2.1 | airbyte-source-facebook-marketing | 5.0.0 | Source implementation for Facebook Marketing. | # Facebook-Marketing source connector
This is the repository for the Facebook-Marketing source connector, written in Python.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.com/integrations/sources/facebook-marketing).
## Local development
### Prerequisites
- Python (~=3.9)
- Poetry (~=1.7) - installation instructions [here](https://python-poetry.org/docs/#installation)
### Installing the connector
From this connector directory, run:
```bash
poetry install --with dev
```
### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.com/integrations/sources/facebook-marketing)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_facebook_marketing/spec.yaml` file.
Note that any directory named `secrets` is gitignored across the entire Airbyte repo, so there is no danger of accidentally checking in sensitive information.
See `sample_files/sample_config.json` for a sample config file.
### Locally running the connector
```
poetry run source-facebook-marketing spec
poetry run source-facebook-marketing check --config secrets/config.json
poetry run source-facebook-marketing discover --config secrets/config.json
poetry run source-facebook-marketing read --config secrets/config.json --catalog integration_tests/configured_catalog.json
```
### Running unit tests
To run unit tests locally, from the connector directory run:
```
poetry run pytest unit_tests
```
### Building the docker image
1. Install [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md)
2. Run the following command to build the docker image:
```bash
airbyte-ci connectors --name=source-facebook-marketing build
```
An image will be available on your host with the tag `airbyte/source-facebook-marketing:dev`.
### Running as a docker container
Once the image is built, run any of the connector commands as follows:
```
docker run --rm airbyte/source-facebook-marketing:dev spec
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-facebook-marketing:dev check --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-facebook-marketing:dev discover --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/integration_tests:/integration_tests airbyte/source-facebook-marketing:dev read --config /secrets/config.json --catalog /integration_tests/configured_catalog.json
```
### Running our CI test suite
You can run our full test suite locally using [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md):
```bash
airbyte-ci connectors --name=source-facebook-marketing test
```
### Customizing acceptance Tests
Customize `acceptance-test-config.yml` file to configure acceptance tests. See [Connector Acceptance Tests](https://docs.airbyte.com/connector-development/testing-connectors/connector-acceptance-tests-reference) for more information.
If your connector requires creating or destroying resources for use during acceptance tests, create fixtures for them and place them inside `integration_tests/acceptance.py`.
### Dependency Management
All of your dependencies should be managed via Poetry.
To add a new dependency, run:
```bash
poetry add <package-name>
```
Please commit the changes to `pyproject.toml` and `poetry.lock` files.
## Publishing a new version of the connector
You've checked out the repo, implemented a million dollar feature, and you're ready to share your changes with the world. Now what?
1. Make sure your changes are passing our test suite: `airbyte-ci connectors --name=source-facebook-marketing test`
2. Bump the connector version (please follow [semantic versioning for connectors](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#semantic-versioning-for-connectors)):
- bump the `dockerImageTag` value in `metadata.yaml`
- bump the `version` value in `pyproject.toml`
3. Make sure the `metadata.yaml` content is up to date.
4. Make sure the connector documentation and its changelog are up to date (`docs/integrations/sources/facebook-marketing.md`).
5. Create a Pull Request: use [our PR naming conventions](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#pull-request-title-convention).
6. Pat yourself on the back for being an awesome contributor.
7. Someone from Airbyte will take a look at your PR and iterate with you to merge it into master.
8. Once your PR is merged, the new version of the connector will be automatically published to Docker Hub and our connector registry.
| text/markdown | Airbyte | contact@airbyte.io | null | null | ELv2 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://airbyte.com | null | <3.12,>=3.10 | [] | [] | [] | [
"airbyte-cdk<8.0.0,>=7.4.1",
"facebook-business<24.0.0,>=23.0.0",
"cached-property<3,>=2"
] | [] | [] | [] | [
"Repository, https://github.com/airbytehq/airbyte",
"Documentation, https://docs.airbyte.com/integrations/sources/facebook-marketing"
] | poetry/1.8.5 CPython/3.11.14 Linux/6.11.0-1018-azure | 2026-02-20T15:18:09.171127 | airbyte_source_facebook_marketing-5.0.0.tar.gz | 56,462 | e4/e0/ac391a53a77804c35195eba12e5ff1d53ca5571dfafccfbbdbd65237e626/airbyte_source_facebook_marketing-5.0.0.tar.gz | source | sdist | null | false | 9d7055292ea23c927b495b2440feba6b | 040315080efff2a6fd46358603317739d82952d3bb3d7c12b36c8f350e62a854 | e4e0ac391a53a77804c35195eba12e5ff1d53ca5571dfafccfbbdbd65237e626 | null | [] | 1,245 |
2.4 | odoo-mcp-server-conn | 1.0.1 | Advanced MCP (Model Context Protocol) server for Odoo ERP — full CRUD, BI analytics, workflow navigation, security audit and more. | # Odoo MCP Server — Agentic Edition
[](https://pypi.org/project/odoo-mcp-server-conn/)
[](https://pypi.org/project/odoo-mcp-server-conn/)
[](https://opensource.org/licenses/MIT)
[](README.es.md)
An enterprise-grade **Model Context Protocol (MCP)** server for interacting with Odoo ERP through AI assistants. Unlike a traditional database connector, this server is designed as an **agentic bridge**: it teaches the AI how to navigate Odoo, understand its business logic, and perform complex data analysis.
---
## Why Agentic Edition?
- **Guided Workflows (Prompts)** — Native instructions that teach your AI how to audit inventory, analyse sales, or review security permissions step by step, without writing prompts from scratch.
- **Native BI** — Optimised support for `read_group` and advanced analytical operations, allowing the AI to generate financial reports or KPIs instantly.
- **Deep Introspection** — Tools that grant the AI X-ray vision: discover available action buttons, inspect view XML architectures, evaluate security rules — reducing hallucinations to a minimum.
- **Zero extra dependencies** — Uses only Python stdlib (`urllib`, `json`, `csv`, `xml`) plus the `mcp` package. No `requests`, no `odoorpc`.
- **Built-in Security** — Uses Odoo's own XML-RPC layer; the MCP server cannot bypass permissions the user doesn't already have.
---
## Installation
```bash
# Recommended — isolated and fast
uv tool install odoo-mcp-server-conn
# Standard
pip install odoo-mcp-server-conn
```
---
## Configuration
### Environment variables
| Variable | Required | Description |
|---|---|---|
| `ODOO_URL` | ✅ | Your Odoo URL (e.g. `http://localhost:8069`) |
| `ODOO_DB` | ✅ | Database name |
| `ODOO_USERNAME` | ✅ | User login or email |
| `ODOO_PASSWORD` | ✅ | Password or **API Key** (strongly recommended) |
| `MCP_AUTH_CACHE_TTL` | ❌ | Seconds before re-authenticating (default: `300`) |
> **Tip:** Always prefer an **API Key** over a plain password. Generate one in Odoo under *Settings → Users → Your User → API Keys*.
---
### Claude Code (CLI)
Run this command once — it stores the configuration permanently for the current project:
```bash
claude mcp add odoo-server \
--env ODOO_URL=http://localhost:8069 \
--env ODOO_DB=my_database \
--env ODOO_USERNAME=admin \
--env ODOO_PASSWORD=my_api_key \
-- odoo-mcp-server-conn
```
Verify the connection inside Claude Code:
```
/mcp
```
<details>
<summary>Project-scoped config — share with your team via <code>.mcp.json</code></summary>
Add `--scope project` to create a `.mcp.json` at the repository root that your whole team can use. Each developer supplies their own credentials via environment variables or CI secrets.
```bash
claude mcp add odoo-server \
--scope project \
--env ODOO_URL=http://localhost:8069 \
--env ODOO_DB=my_database \
--env ODOO_USERNAME=admin \
--env ODOO_PASSWORD=my_api_key \
-- odoo-mcp-server-conn
```
</details>
---
### Claude Desktop
Locate your config file:
- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
Add the following block:
```json
{
  "mcpServers": {
    "odoo-server": {
      "command": "odoo-mcp-server-conn",
      "env": {
        "ODOO_URL": "http://localhost:8069",
        "ODOO_DB": "my_database",
        "ODOO_USERNAME": "admin",
        "ODOO_PASSWORD": "my_api_key"
      }
    }
  }
}
```
---
### VS Code (Copilot / Cline / Roo Code)
Add to your `.vscode/mcp.json` (workspace-level) or user MCP settings:
```json
{
  "servers": {
    "odoo-server": {
      "type": "stdio",
      "command": "odoo-mcp-server-conn",
      "env": {
        "ODOO_URL": "http://localhost:8069",
        "ODOO_DB": "my_database",
        "ODOO_USERNAME": "admin",
        "ODOO_PASSWORD": "my_api_key"
      }
    }
  }
}
```
<details>
<summary>Cline / Roo Code — <code>cline_mcp_settings.json</code></summary>
```json
{
  "mcpServers": {
    "odoo-server": {
      "command": "odoo-mcp-server-conn",
      "env": {
        "ODOO_URL": "http://localhost:8069",
        "ODOO_DB": "my_database",
        "ODOO_USERNAME": "admin",
        "ODOO_PASSWORD": "my_api_key"
      }
    }
  }
}
```
</details>
---
### Cursor IDE
1. Go to **Settings → Features → MCP**.
2. Click **Add new server** → type `command`.
3. Command: `odoo-mcp-server-conn`.
4. Add each environment variable in the *Env vars* section.
---
### Windsurf
Add to `~/.codeium/windsurf/mcp_config.json`:
```json
{
  "mcpServers": {
    "odoo-server": {
      "command": "odoo-mcp-server-conn",
      "env": {
        "ODOO_URL": "http://localhost:8069",
        "ODOO_DB": "my_database",
        "ODOO_USERNAME": "admin",
        "ODOO_PASSWORD": "my_api_key"
      }
    }
  }
}
```
---
## Tools
### CRUD — Core operations
| Tool | Odoo method | Description |
|---|---|---|
| `search_records` | `search` | Return IDs matching a domain filter |
| `read_records` | `read` | Read fields from records by ID |
| `search_read_records` | `search_read` | Search + read in one optimised call |
| `create_record` | `create` | Create a new record |
| `update_record` | `write` | Update existing records |
| `delete_record` | `unlink` | Delete records ⚠️ |
| `count_records` | `search_count` | Count without loading records (fast) |
### Introspection — understand the data model in real time
| Tool | Description |
|---|---|
| `get_model_fields` | Live schema of any model (types, relations, required flags) |
| `get_view_architecture` | Final merged XML of a view (form / tree / kanban / search) |
| `get_workflow_states` | States, stages, and transition buttons of a model |
| `get_action_buttons` | All buttons in a view with visibility conditions and groups |
| `get_related_records` | Auto-resolved Many2one / One2many / Many2many relations |
| `get_security_rules` | ACL and `ir.rule` records for a model |
| `explain_domain` | Translates an Odoo domain filter into plain English |
### BI & Analytics
| Tool | Description |
|---|---|
| `read_group` | `GROUP BY` with sums, counts, averages. Supports `:month` / `:year` date grouping |
| `export_to_csv` | Export records to a CSV string (up to 1 000 rows) |
| `get_available_reports` | List PDF / HTML reports available for a model |
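Under the hood, every tool goes through Odoo's standard XML-RPC `execute_kw` entry point. As a rough sketch of the raw call that a tool like `read_group` wraps (the model, domain, and grouping shown here are illustrative, and connection details are placeholders):

```python
import xmlrpc.client

def monthly_sales(url, db, user, api_key):
    """Total confirmed sale amounts grouped by month, via raw Odoo XML-RPC."""
    # Authenticate once to obtain the numeric user id
    common = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/common")
    uid = common.authenticate(db, user, api_key, {})
    # All model access goes through the generic execute_kw endpoint
    models = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/object")
    return models.execute_kw(
        db, uid, api_key, "sale.order", "read_group",
        [[["state", "=", "sale"]], ["amount_total"], ["date_order:month"]],
    )
```

Because authentication happens per connection, the `MCP_AUTH_CACHE_TTL` setting above controls how often the server repeats this handshake.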
### Wildcard
| Tool | Description |
|---|---|
| `execute_method` | Call **any** public Odoo method (e.g. `action_confirm`, `button_validate`) |
---
## Guided Prompts (pre-trained workflows)
Invoke these prompts directly from your AI client to start a structured analysis:
| Prompt | What it does |
|---|---|
| `analyze_sales` | Sales KPIs — revenue trend, top customers, conversion rate |
| `diagnose_inventory` | Low-stock products, stuck transfers, warehouse utilisation |
| `audit_permissions` | Cross-reference security groups with ACL, flag over-privileged users |
| `financial_overview` | Cash flow overview — AR, AP, invoicing by period |
---
## Resources (navigable URIs)
The AI can query these URIs directly as if browsing the ERP:
| URI | Description |
|---|---|
| `odoo://models` | Master list of all models installed in the instance |
| `odoo://fields/{model}` | Data dictionary for a specific model |
| `odoo://record/{model}/{id}` | Complete record sheet |
| `odoo://search/{model}/{domain}` | Instant search, e.g. `odoo://search/res.partner/[["is_company","=",true]]` |
---
## Security
- The server operates entirely through Odoo's XML-RPC layer — it cannot bypass any permission the authenticated user doesn't already have in the ERP.
- No raw SQL, no forced commits, no ORM bypass.
- Credentials are read exclusively from environment variables — never hardcoded.
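Because everything goes through the standard XML-RPC layer, the connection pattern is the plain Odoo one. A minimal sketch (the environment variable names are illustrative assumptions; the `/xmlrpc/2/...` endpoints and `authenticate` call are standard Odoo):

```python
import os
import xmlrpc.client

def load_config() -> dict:
    """Read Odoo connection settings from the environment.

    The variable names here are assumptions for illustration; check the
    server's own documentation for the exact names it expects.
    """
    return {
        "url": os.environ["ODOO_URL"],
        "db": os.environ["ODOO_DB"],
        "username": os.environ["ODOO_USERNAME"],
        "password": os.environ["ODOO_PASSWORD"],
    }

def connect(cfg: dict):
    """Authenticate against Odoo's standard XML-RPC endpoints."""
    common = xmlrpc.client.ServerProxy(f"{cfg['url']}/xmlrpc/2/common")
    uid = common.authenticate(cfg["db"], cfg["username"], cfg["password"], {})
    models = xmlrpc.client.ServerProxy(f"{cfg['url']}/xmlrpc/2/object")
    return uid, models
```

Once authenticated, every call maps to `models.execute_kw(db, uid, password, model, method, args)`, so the user's ACLs and record rules apply exactly as they do inside the ERP.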
**License:** MIT
| text/markdown | null | Eduardo Bolivar <eduardobolivar2407@gmail.com> | null | null | MIT | odoo, mcp, model-context-protocol, erp, ai, xmlrpc, claude | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Office/Business :: Financial :: Accounting",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"mcp>=1.0.0",
"mcp[http]>=1.0.0; extra == \"http\""
] | [] | [] | [] | [
"Homepage, https://github.com/eduardobolivar/odoo-mcp-server",
"Repository, https://github.com/eduardobolivar/odoo-mcp-server"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:17:47.895841 | odoo_mcp_server_conn-1.0.1.tar.gz | 15,380 | b0/dc/7a69553e6318c05e40d872e8ff1e9c85372142a59ba165c07fe824a4d98c/odoo_mcp_server_conn-1.0.1.tar.gz | source | sdist | null | false | d87e6ac083600f8b2e7c0acd95a10a7e | a40a20d44f9f6ffecff42d209950b4686e58c29fcca4a89b98d902d3f494256d | b0dc7a69553e6318c05e40d872e8ff1e9c85372142a59ba165c07fe824a4d98c | null | [] | 154 |
2.4 | kiln-server | 0.25.0 | Kiln AI Server | # Kiln AI REST Server
[](https://pypi.org/project/kiln-server)
[](https://pypi.org/project/kiln-server)
---
## About Kiln AI
Learn more about Kiln AI at [kiln.tech](https://kiln.tech)
This package is the Kiln AI server package. There is also a separate desktop application and Python library package.
Github: [github.com/Kiln-AI/kiln](https://github.com/Kiln-AI/kiln)
## Installation
We suggest installing with uv:
```console
uv tool install kiln_server
```
## REST API Docs
Our OpenAPI docs: [https://kiln-ai.github.io/Kiln/kiln_server_openapi_docs/index.html](https://kiln-ai.github.io/Kiln/kiln_server_openapi_docs/index.html)
## Running the server
After installing, run:
```console
kiln_server
```
## kiln_server Command Options
```
usage: kiln_server [-h] [--host HOST] [--port PORT] [--log-level LOG_LEVEL] [--auto-reload AUTO_RELOAD]
Run the Kiln AI REST Server.
options:
-h, --help show this help message and exit
--host HOST Host for network transports.
--port PORT Port for network transports.
--log-level LOG_LEVEL
Log level for the server when using network transports.
--auto-reload AUTO_RELOAD
Enable auto-reload for the server.
```
## Using the server in another FastAPI app
See server.py for examples, but you can connect individual API endpoints to your app like this:
```python
from fastapi import FastAPI

from kiln_server.project_api import connect_project_api

app = FastAPI()
connect_project_api(app)
```
## Kiln MCP Server
Also included in this package is an MCP server for serving Kiln tools.
See [its README](./kiln_server/mcp/README.md) for details.
| text/markdown | null | "Steve Cosman, Chesterfield Laboratories Inc" <scosman@users.noreply.github.com> | null | null | This license applies only to the software in the libs/server directory. ======================================================= Copyright 2024 - Chesterfield Laboratories Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.115.4",
"httpx>=0.27.2",
"kiln-ai>=0.24.0",
"mcp>=1.10.1",
"pydantic>=2.9.2",
"python-dotenv>=1.0.1",
"python-multipart>=0.0.20",
"uvicorn>=0.32.0"
] | [] | [] | [] | [
"Homepage, https://kiln.tech",
"Repository, https://github.com/Kiln-AI/kiln",
"Documentation, https://github.com/Kiln-AI/kiln#readme",
"Issues, https://github.com/Kiln-AI/kiln/issues"
] | uv/0.8.3 | 2026-02-20T15:17:31.383387 | kiln_server-0.25.0.tar.gz | 195,817 | 05/4d/33e240e587ee38e89b3aca5b6ce3d26698c17cba9cdd44aefc4e891c46c5/kiln_server-0.25.0.tar.gz | source | sdist | null | false | 0d9213217f882d540a9ead47bb3f655a | 6f1322cdeeaae214ae5968c8a5a38c14146493df336ab45219908d4a64cb143f | 054d33e240e587ee38e89b3aca5b6ce3d26698c17cba9cdd44aefc4e891c46c5 | null | [
"LICENSE.txt"
] | 158 |
2.4 | kiln-ai | 0.25.0 | Kiln AI | # Kiln AI Core Library
<p align="center">
<picture>
<img width="205" alt="Kiln AI Logo" src="https://github.com/user-attachments/assets/5fbcbdf7-1feb-45c9-bd73-99a46dd0a47f">
</picture>
</p>
[](https://pypi.org/project/kiln-ai)
[](https://pypi.org/project/kiln-ai)
[](https://kiln-ai.github.io/Kiln/kiln_core_docs/index.html)
---
## Installation
```console
pip install kiln_ai
```
## About
This package is the Kiln AI core library. There is also a separate desktop application and server package. Learn more about Kiln AI at [kiln.tech](https://kiln.tech) and on Github: [github.com/Kiln-AI/kiln](https://github.com/Kiln-AI/kiln).
# Guide: Using the Kiln Python Library
In this guide we'll walk through common examples of how to use the library.
## Documentation
The library has a [comprehensive set of docs](https://kiln-ai.github.io/Kiln/kiln_core_docs/index.html).
## Table of Contents
- [Connecting AI Providers](#connecting-ai-providers-openai-openrouter-ollama-etc)
- [Using the Kiln Data Model](#using-the-kiln-data-model)
- [Understanding the Kiln Data Model](#understanding-the-kiln-data-model)
- [Datamodel Overview](#datamodel-overview)
- [Load a Project](#load-a-project)
- [Run a Kiln Task from Python](#run-a-kiln-task-from-python)
- [Load an Existing Dataset into a Kiln Task Dataset](#load-an-existing-dataset-into-a-kiln-task-dataset)
- [Using your Kiln Dataset in a Notebook or Project](#using-your-kiln-dataset-in-a-notebook-or-project)
- [Using Kiln Dataset in Pandas](#using-kiln-dataset-in-pandas)
- [Building and Running a Kiln Task from Code](#building-and-running-a-kiln-task-from-code)
- [Tagging Task Runs Programmatically](#tagging-task-runs-programmatically)
- [Adding Custom Model or AI Provider from Code](#adding-custom-model-or-ai-provider-from-code)
- [Taking Kiln RAG to production](#taking-kiln-rag-to-production)
- [Load a LlamaIndex Vector Store](#load-a-llamaindex-vector-store)
- [Example: LanceDB Cloud](#example-lancedb-cloud)
- [Deploy RAG without LlamaIndex](#deploy-rag-without-llamaindex)
- [Full API Reference](#full-api-reference)
## Installation
```bash
pip install kiln-ai
```
## Connecting AI Providers (OpenAI, OpenRouter, Ollama, etc)
The easiest way to connect AI providers is to use the Kiln app UI. Once connected in the UI, credentials are stored in `~/.kiln_ai/settings.yaml`, where the library can read them.
For configuring credentials from code or connecting custom servers/models, see [Adding Custom Model or AI Provider from Code](#adding-custom-model-or-ai-provider-from-code).
## Using the Kiln Data Model
### Understanding the Kiln Data Model
A Kiln project is simply a directory of files (mostly JSON files with the extension `.kiln`) that describe your project, including tasks, runs, and other data.
This dataset design was chosen for several reasons:
- Git compatibility: Kiln project folders are easy to collaborate on in git. The filenames use unique IDs to avoid conflicts and allow many people to work in parallel. The files are small and easy to compare using standard diff tools.
- JSON allows you to easily load and manipulate the data using standard tools (pandas, polars, etc)
The Kiln Python library provides a set of Python classes that help you interact with your Kiln dataset. Using the library to load and manipulate your dataset is the fastest way to get started, and guarantees you don't insert invalid data into your dataset. There's extensive validation when using the library, so we recommend it over direct JSON manipulation.
### Datamodel Overview
Here's a high level overview of the Kiln datamodel. A project folder will reflect this nested structure:
- Project: a Kiln Project that organizes related tasks
- Task: a specific task including prompt instructions, input/output schemas, and requirements
- TaskRun: a sample (run) of a task including input, output and human rating information
- Finetune: configuration and status tracking for fine-tuning models on task data
- Prompt: a prompt for this task
- DatasetSplit: a frozen collection of task runs divided into train/test/validation splits
### Load a Project
Assuming you've created a project in the Kiln UI, you'll have a `project.kiln` file in your `~/Kiln Projects/Project Name` directory.
```python
from kiln_ai.datamodel import Project
project = Project.load_from_file("path/to/your/project.kiln")
print("Project: ", project.name, " - ", project.description)
# List all tasks in the project, and their dataset sizes
tasks = project.tasks()
for task in tasks:
print("Task: ", task.name, " - ", task.description)
print("Total dataset size:", len(task.runs()))
```
### Run a Kiln Task from Python
If you've already created a Kiln task and want to run it as part of a Python app you can follow this example.
**Step 1: Export your Kiln task/project**
You can run any Kiln task from code using its project file/folder on disk. However, these folders can contain thousands of files relating to past runs and evals, which is more than you probably want to deploy to a service. Only a few of these files are needed to run the task: you can export a minimal project folder containing only the necessary files by running our CLI:
```bash
uvx kiln_ai package_project "/path/to/your/project.kiln" -t TASK_ID_TO_EXPORT
```
**Step 2: Run Kiln Task from Code**
Prerequisites:
- Already have a Kiln Task created and saved to disk at `TASK_PATH`. It doesn't matter if you created it using the Kiln app, the Kiln library, or exported it using the command above.
- Set a default run configuration in the Kiln UI specifying how to run the task: model, AI provider, etc. Alternatively you can create a RunConfigProperties instance in code.
- Set up any API keys required for the task. If running on the same machine as the Kiln app, these will already be saved in `~/.kiln_ai/settings.yaml` and will be loaded automatically. If running on a server, you can set the required environment variables (see `libs/core/kiln_ai/utils/config.py` for a list).
- If your task uses RAG, ensure you have run search indexing on this machine with the Kiln UI or via the library.
```python
from typing import Any

from kiln_ai.adapters.adapter_registry import adapter_for_task
from kiln_ai.datamodel.task import Task
async def run_kiln_task(input: str) -> dict[str, Any] | str:
    # Load your task from its path on the filesystem
task = Task.load_from_file(TASK_PATH)
# Here we get the default run config from the task, which you can set in the UI. Alternatively you could pass a RunConfigProperties object with parameters like the model, temperature, etc.
run_config = next(c for c in task.run_configs() if c.id == task.default_run_config_id)
# An adapter can run a task
adapter = adapter_for_task(
task,
run_config_properties=run_config.run_config_properties,
)
task_run, run_output = await adapter.invoke_returning_run_output(input)
# Optional: can inspect the task run to see usage data (cost, tokens, etc.)
#print(f"Task run cost: {task_run.usage.cost}")
return run_output.output
```
### Load an Existing Dataset into a Kiln Task Dataset
If you already have a dataset in a file, you can load it into a Kiln project.
**Important**: Kiln will validate the input and output schemas, and ensure that each datapoint in the dataset is valid for this task.
- Plaintext input/output: ensure "output_json_schema" and "input_json_schema" are not set in your Task definition.
- JSON input/output: ensure "output_json_schema" and "input_json_schema" are valid JSON schemas in your Task definition. Every datapoint in the dataset must be valid JSON fitting the schema.
Here's a simple example of how to load a dataset into a Kiln task:
```python
import kiln_ai
import kiln_ai.datamodel
# Create a project and task via the UI, then put the task path here
task_path = "/Users/youruser/Kiln Projects/test project/tasks/632780983478 - Joke Generator/task.kiln"
task = kiln_ai.datamodel.Task.load_from_file(task_path)
# Add data to the task - loop over your dataset and run this for each item
item = kiln_ai.datamodel.TaskRun(
parent=task,
input='{"topic": "AI"}',
output=kiln_ai.datamodel.TaskOutput(
output='{"setup": "What is AI?", "punchline": "content_here"}',
),
)
item.save_to_file()
print("Saved item to file: ", item.path)
```
And here's a more complex example of how to load a dataset into a Kiln task. This example sets the source of the data (human in this case, though you can also mark it as synthetic), the created_by property, and a 5-star rating.
```python
import kiln_ai
import kiln_ai.datamodel
# Create a project and task via the UI, then put the task path here
task_path = "/Users/youruser/Kiln Projects/test project/tasks/632780983478 - Joke Generator/task.kiln"
task = kiln_ai.datamodel.Task.load_from_file(task_path)
# Add data to the task - loop over your dataset and run this for each item
item = kiln_ai.datamodel.TaskRun(
parent=task,
input='{"topic": "AI"}',
input_source=kiln_ai.datamodel.DataSource(
type=kiln_ai.datamodel.DataSourceType.human,
properties={"created_by": "John Doe"},
),
output=kiln_ai.datamodel.TaskOutput(
output='{"setup": "What is AI?", "punchline": "content_here"}',
source=kiln_ai.datamodel.DataSource(
type=kiln_ai.datamodel.DataSourceType.human,
properties={"created_by": "Jane Doe"},
),
rating=kiln_ai.datamodel.TaskOutputRating(
value=5,
type=kiln_ai.datamodel.datamodel_enums.five_star,
),
),
)
item.save_to_file()
print("Saved item to file: ", item.path)
```
### Using your Kiln Dataset in a Notebook or Project
You can use your Kiln dataset in a notebook or project by loading the task and iterating over its runs.
```python
import kiln_ai
import kiln_ai.datamodel
# Create a project and task via the UI, then put the task path here
task_path = "/Users/youruser/Kiln Projects/test project/tasks/632780983478 - Joke Generator/task.kiln"
task = kiln_ai.datamodel.Task.load_from_file(task_path)
runs = task.runs()
for run in runs:
print(f"Input: {run.input}")
print(f"Output: {run.output.output}")
print(f"Total runs: {len(runs)}")
```
### Using Kiln Dataset in Pandas
You can also load your Kiln dataset into a pandas dataframe directly from the files on disk; a similar script works for other tools like polars.
```python
import glob
import json
import pandas as pd
from pathlib import Path
task_dir = "/Users/youruser/Kiln Projects/test project/tasks/632780983478 - Joke Generator"
dataitem_glob = task_dir + "/runs/*/task_run.kiln"
dfs = []
for file in glob.glob(dataitem_glob):
js = json.loads(Path(file).read_text())
df = pd.DataFrame([{
"input": js["input"],
"output": js["output"]["output"],
}])
# Alternatively: you can use pd.json_normalize(js) to get the full json structure
# df = pd.json_normalize(js)
dfs.append(df)
final_df = pd.concat(dfs, ignore_index=True)
print(final_df)
```
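If you'd rather avoid building a per-file DataFrame, the same traversal can collect plain dicts first and build one frame at the end. A small stdlib-only sketch of the collection step (the file layout matches the glob above; `collect_runs` is a hypothetical helper name):

```python
import glob
import json
from pathlib import Path

def collect_runs(task_dir: str) -> list[dict]:
    """Gather (input, output) pairs from every task_run.kiln under a task folder."""
    rows = []
    for file in glob.glob(task_dir + "/runs/*/task_run.kiln"):
        js = json.loads(Path(file).read_text())
        rows.append({
            "input": js["input"],
            "output": js["output"]["output"],
        })
    return rows

# rows can then be passed to pd.DataFrame(rows) or polars.DataFrame(rows) in one call
```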
### Building and Running a Kiln Task from Code
```python
from kiln_ai import datamodel
from kiln_ai.adapters.adapter_registry import adapter_for_task

# Step 1: Create or Load a Task -- choose one of the following 1.A or 1.B
# Step 1.A: Optionally load an existing task from disk
# task = datamodel.Task.load_from_file("path/to/task.kiln")
# Step 1.B: Create a new task in code, without saving to disk.
task = datamodel.Task(
name="test task",
instruction="Tell a joke, given a subject.",
)
# replace with a valid JSON schema https://json-schema.org for your task (json string, not a python dict).
# Or delete this line to use plaintext output
task.output_json_schema = json_joke_schema
# Step 2: Create an Adapter to run the task, with a specific model and provider
adapter = adapter_for_task(task, model_name="llama_3_1_8b", provider="groq")
# Step 3: Invoke the Adapter to run the task
task_input = "cows"
response = await adapter.invoke(task_input)
print(f"Output: {response.output.output}")
# Step 4 (optional): Load the task from disk and print the results.
# This will only work if the task was loaded from disk, or you called task.save_to_file() before invoking the adapter (ephemeral tasks don't save their result to disk)
task = datamodel.Task.load_from_file("path/to/task.kiln")
for run in task.runs():
print(f"Run: {run.id}")
print(f"Input: {run.input}")
print(f"Output: {run.output}")
```
## Tagging Task Runs Programmatically
You can also tag your Kiln Task runs programmatically:
```py
# Load your Kiln Task from disk
task_path = "/Users/youruser/Kiln Projects/test project/tasks/632780983478 - Joke Generator/task.kiln"
task = kiln_ai.datamodel.Task.load_from_file(task_path)
for run in task.runs():
# Parse the task output from JSON
output = json.loads(run.output.output)
# Add a tag if the punchline is unusually short
if len(output["punchline"]) < 100:
run.tags.append("very_short")
run.save_to_file() # Persist the updated tags
```
### Adding Custom Model or AI Provider from Code
You can add additional AI models and providers to Kiln.
See our docs for more information, including how to add these from the UI:
- [Custom Models From Existing Providers](https://docs.kiln.tech/docs/models-and-ai-providers#custom-models-from-existing-providers)
- [Custom OpenAI Compatible Servers](https://docs.kiln.tech/docs/models-and-ai-providers#custom-openai-compatible-servers)
You can also add these from code. The `kiln_ai.utils.Config` class helps you manage the Kiln config file (stored at `~/.kiln_ai/settings.yaml`):
```python
# Adding an OpenAI-compatible provider
from kiln_ai.utils.config import Config

name = "CustomOllama"
base_url = "http://localhost:1234/api/v1"
api_key = "12345"

providers = Config.shared().openai_compatible_providers or []
if not any(p["name"] == name for p in providers):
    # only add the provider if it doesn't already exist
    providers.append(
        {
            "name": name,
            "base_url": base_url,
            "api_key": api_key,
        }
    )
    Config.shared().openai_compatible_providers = providers
```
```python
# Add a custom model ID to an existing provider.
from kiln_ai.utils.config import Config

new_model = "openai::gpt-3.5-turbo"
custom_model_ids = Config.shared().custom_models or []
if new_model not in custom_model_ids:
    # only add the model ID if it isn't already registered
    custom_model_ids.append(new_model)
    Config.shared().custom_models = custom_model_ids
```
## Taking Kiln RAG to production
When you're ready to deploy your RAG system, you can export your processed documents to any vector store supported by LlamaIndex. This allows you to use your Kiln-configured chunking and embedding settings in production.
### Load a LlamaIndex Vector Store
Kiln provides a `VectorStoreLoader` that yields your processed document chunks as LlamaIndex `TextNode` objects. These nodes carry the same metadata, chunking, and embedding data specified by your Kiln Search Tool configuration.
```py
from kiln_ai.datamodel import Project
from kiln_ai.datamodel.rag import RagConfig
from kiln_ai.adapters.vector_store_loaders import VectorStoreLoader
# Load your project and RAG configuration
project = Project.load_from_file("path/to/your/project.kiln")
rag_config = RagConfig.from_id_and_parent_path("rag-config-id", project.path)
# Create the loader
loader = VectorStoreLoader(project=project, rag_config=rag_config)
# Export chunks to any LlamaIndex vector store
async for batch in loader.iter_llama_index_nodes(batch_size=10):
# Insert into your chosen vector store
# Examples: LanceDB, Pinecone, Chroma, Qdrant, etc.
pass
```
**Supported Vector Stores:** LlamaIndex supports 20+ vector stores including LanceDB, Pinecone, Weaviate, Chroma, Qdrant, and more. See the [full list](https://developers.llamaindex.ai/python/framework/module_guides/storing/vector_stores/).
### Example: LanceDB Cloud
Internally, Kiln uses LanceDB. By using LanceDB Cloud you'll get the same indexing behaviour as in the app.
Here's a complete example using LanceDB Cloud:
```py
from kiln_ai.datamodel import Project
from kiln_ai.datamodel.rag import RagConfig
from kiln_ai.datamodel.vector_store import VectorStoreConfig
from kiln_ai.adapters.vector_store_loaders import VectorStoreLoader
from kiln_ai.adapters.vector_store.lancedb_adapter import lancedb_construct_from_config
# Load configurations
project = Project.load_from_file("path/to/your/project.kiln")
rag_config = RagConfig.from_id_and_parent_path("rag-config-id", project.path)
vector_store_config = VectorStoreConfig.from_id_and_parent_path(
rag_config.vector_store_config_id, project.path,
)
# Create LanceDB vector store
lancedb_store = lancedb_construct_from_config(
vector_store_config=vector_store_config,
uri="db://my-project",
api_key="sk_...",
region="us-east-1",
table_name="my-documents", # Created automatically
)
# Export and insert your documents
loader = VectorStoreLoader(project=project, rag_config=rag_config)
async for batch in loader.iter_llama_index_nodes(batch_size=100):
await lancedb_store.async_add(batch)
print("Documents successfully exported to LanceDB!")
```
After export, query your data using [LlamaIndex](https://developers.llamaindex.ai/python/framework-api-reference/storage/vector_store/lancedb/) or the [LanceDB client](https://lancedb.github.io/lancedb/).
### Deploy RAG without LlamaIndex
While Kiln is designed to work with LlamaIndex, you don't need to deploy with it. `iter_llama_index_nodes` yields LlamaIndex `TextNode` objects that include all the data you need to build a RAG index in any stack: embedding, text, document name, chunk ID, etc.
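For instance, each node can be flattened into a plain dict before inserting it into your own store. A sketch (`id_`, `text`, `embedding`, and `metadata` are the standard LlamaIndex `TextNode` attributes; the record shape is an illustrative assumption):

```python
def node_to_record(node) -> dict:
    """Flatten a LlamaIndex TextNode into a plain dict for any vector store.

    The metadata keys Kiln sets (document name, chunk ID, ...) live
    inside node.metadata.
    """
    return {
        "id": node.id_,
        "text": node.text,
        "embedding": node.embedding,
        "metadata": dict(node.metadata or {}),
    }
```

From here, inserting into a raw pgvector table, Redis, or a hand-rolled FAISS index is a matter of mapping these fields to your store's schema.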
## Full API Reference
The library can do a lot more than the examples we've shown here.
See the full API reference in the [docs](https://kiln-ai.github.io/Kiln/kiln_core_docs/index.html) under the `Submodules` section of the sidebar.
| text/markdown | null | "Steve Cosman, Chesterfield Laboratories Inc" <scosman@users.noreply.github.com> | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anyio>=4.10.0",
"boto3>=1.37.10",
"coverage>=7.6.4",
"exceptiongroup>=1.0.0; python_version < \"3.11\"",
"google-cloud-aiplatform>=1.84.0",
"google-genai>=1.21.1",
"jsonschema>=4.23.0",
"lancedb>=0.24.2",
"litellm>=1.80.9",
"llama-index-vector-stores-lancedb>=0.4.2",
"llama-index>=0.13.3",
"mcp[cli]>=1.10.1",
"openai>=1.53.0",
"pdoc>=15.0.0",
"pillow>=11.1.0",
"pydantic>=2.9.2",
"pypdf>=6.0.0",
"pypdfium2>=4.30.0",
"pytest-benchmark>=5.1.0",
"pytest-cov>=6.0.0",
"pyyaml>=6.0.2",
"together",
"typer>=0.9.0",
"typing-extensions>=4.12.2",
"vertexai>=1.43.0"
] | [] | [] | [] | [
"Homepage, https://kiln.tech",
"Repository, https://github.com/Kiln-AI/kiln",
"Documentation, https://kiln-ai.github.io/Kiln/kiln_core_docs/kiln_ai.html",
"Issues, https://github.com/Kiln-AI/kiln/issues"
] | uv/0.8.3 | 2026-02-20T15:16:49.476011 | kiln_ai-0.25.0.tar.gz | 643,228 | 6c/dc/a8eeaabf3f95b86393ec4c881d8a07d5105af5af82a5efa7104aacd2b497/kiln_ai-0.25.0.tar.gz | source | sdist | null | false | aa70eb3855670a3b1cf0a48eaf567401 | c379b255bf52208655fd4d5dbe55f50b66d9c0b363f1fa943be17ab14754be87 | 6cdca8eeaabf3f95b86393ec4c881d8a07d5105af5af82a5efa7104aacd2b497 | null | [
"LICENSE.txt"
] | 188 |
2.4 | keystoneauth1 | 5.13.1 | Authentication Library for OpenStack Identity | ============
keystoneauth
============
.. image:: https://governance.openstack.org/tc/badges/keystoneauth.svg
.. image:: https://img.shields.io/pypi/v/keystoneauth1.svg
:target: https://pypi.org/project/keystoneauth1/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/dm/keystoneauth1.svg
:target: https://pypi.org/project/keystoneauth1/
:alt: Downloads
This package contains tools for authenticating to an OpenStack-based cloud.
These tools include:
* Authentication plugins (password, token, and federation based)
* Discovery mechanisms to determine API version support
* A session that is used to maintain client settings across requests (based on
the requests Python library)
Further information:
* Free software: Apache license
* Documentation: https://docs.openstack.org/keystoneauth/latest/
* Source: https://opendev.org/openstack/keystoneauth
* Bugs: https://bugs.launchpad.net/keystoneauth
* Release notes: https://docs.openstack.org/releasenotes/keystoneauth/
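A minimal sketch of authenticating with the v3 password plugin (the endpoint
and credentials are placeholders; ``v3.Password`` and ``session.Session`` are
the standard keystoneauth1 APIs)::

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='https://keystone.example.com/v3',
        username='demo',
        password='secret',
        project_name='demo',
        user_domain_id='default',
        project_domain_id='default',
    )
    # The session fetches and caches a token lazily on first request.
    sess = session.Session(auth=auth)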
| text/x-rst | null | OpenStack <openstack-discuss@lists.openstack.org> | null | null | Apache-2.0 | null | [
"Environment :: OpenStack",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pbr>=2.0.0",
"iso8601>=2.0.0",
"requests>=2.14.2",
"stevedore>=1.20.0",
"os-service-types>=1.2.0",
"typing-extensions>=4.12",
"requests-kerberos>=0.8.0; extra == \"kerberos\"",
"lxml>=4.2.0; extra == \"saml2\"",
"oauthlib>=0.6.2; extra == \"oauth1\"",
"betamax>=0.7.0; extra == \"betamax\"",
"fixtures>=3.0.0; extra == \"betamax\"",
"PyYAML>=3.13; extra == \"betamax\""
] | [] | [] | [] | [
"Documentation, https://docs.openstack.org/keystoneauth/latest/",
"Source, https://opendev.org/openstack/keystoneauth/",
"Bugs, https://bugs.launchpad.net/keystoneauth/",
"Release Notes, https://docs.openstack.org/releasenotes/keystoneauth/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:16:47.478785 | keystoneauth1-5.13.1.tar.gz | 288,548 | f3/bc/d99872ca0bc8bf5f248b50e3d7386dedec5278f8dd989a2a981d329a8069/keystoneauth1-5.13.1.tar.gz | source | sdist | null | false | f1a21175c1d1e06cb8d51c8b173499a2 | e011e47ac3f3c671ffae33505c095548650cc19dab7f6af3b2ea5bd18c98f0c9 | f3bcd99872ca0bc8bf5f248b50e3d7386dedec5278f8dd989a2a981d329a8069 | null | [
"LICENSE"
] | 19,610 |
2.4 | PyMorseLive | 0.3.0 | Simple multi-channel timing-based morse decoder written in Python. | # PyMorseLive
Simple multi-channel timing-based morse decoder written in Python.
https://github.com/user-attachments/assets/e211ac2e-db48-4be7-a0fe-034320a241fd
## Installation
Install from PyPI using
```
pip install PyMorseLive
```
## Using
Open a command window and type
```
pymorse
```
Alternatively, download the source file (there's currently only one, in the `pymorse` folder) and edit it as needed.
Type
```
pymorse --help
```
to show the available command-line options.
| text/markdown | null | G1OJS <g1ojs@yahoo.com> | null | null | null | ham radio, radio, CW, MORSE, DECODER | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.24.0",
"matplotlib",
"pyaudio"
] | [] | [] | [] | [
"Homepage, https://github.com/G1OJS/PyMorseLive",
"Issues, https://github.com/G1OJS/PyMorseLive/issues"
] | twine/6.1.0 CPython/3.12.2 | 2026-02-20T15:16:33.363737 | pymorselive-0.3.0.tar.gz | 7,253 | d5/40/5d85244cda69d8903cedf743277eb1e6ee883d65b3b7c8062259c222626e/pymorselive-0.3.0.tar.gz | source | sdist | null | false | f1914b573ae9f2efef30528ab582889d | b5198428b871aefb300d2165f6249b78b972f0e1e778919abe23d42b84987f0f | d5405d85244cda69d8903cedf743277eb1e6ee883d65b3b7c8062259c222626e | GPL-3.0-or-later | [
"LICENSE"
] | 0 |
2.4 | filigree | 1.2.0 | Agent-native issue tracker with convention-based project discovery | # Filigree
Agent-native issue tracker with convention-based project discovery.
[](https://github.com/tachyon-beep/filigree/actions/workflows/ci.yml)
[](https://pypi.org/project/filigree/)
[](https://pypi.org/project/filigree/)
[](https://github.com/tachyon-beep/filigree/blob/main/LICENSE)
## What Is Filigree?
Filigree is a lightweight, SQLite-backed issue tracker designed for AI coding agents (Claude Code, Codex, etc.) to use as first-class citizens. It exposes 43 MCP tools so agents interact natively, plus a full CLI for humans and background subagents.
Traditional issue trackers are human-first — agents have to scrape CLI output or parse API responses. Filigree flips this: agents read a pre-computed `context.md` at session start, claim work with optimistic locking, follow enforced workflow state machines, and resume sessions via event streams. For Claude Code, `filigree install` wires up session hooks and a workflow skill pack so agents get project context automatically.
Filigree is local-first. No cloud, no accounts. Each project gets a `.filigree/` directory (like `.git/`) containing a SQLite database, configuration, and auto-generated context summary. The optional web dashboard can serve multiple projects from a single instance via an ephemeral project registry.
### Key Features
- **MCP server** with 43 tools — agents interact natively without parsing text
- **Full CLI** with `--json` output for background subagents and `--actor` for audit trails
- **Claude Code integration** — session hooks inject project snapshots at startup; bundled skill pack teaches agents workflow patterns
- **Workflow templates** — 24 issue types across 9 packs with enforced state machines
- **Dependency graph** — blockers, ready-queue, critical path analysis
- **Hierarchical planning** — milestone/phase/step hierarchies with automatic unblocking
- **Atomic claiming** — optimistic locking prevents double-work in multi-agent scenarios
- **Pre-computed context** — `context.md` regenerated on every mutation for instant agent orientation
- **Web dashboard** — real-time project overview with Kanban drag-and-drop, dependency graphs, multi-project switching, and Deep Teal dark/light theme (optional extra)
- **Minimal dependencies** — just Python + SQLite + click (no framework overhead)
- **Session resumption** — `get_changes --since <timestamp>` to catch up after downtime
## Quick Start
```bash
pip install filigree # or: uv add filigree
cd my-project
filigree init # Create .filigree/ directory
filigree install # Set up MCP, hooks, skills, CLAUDE.md, .gitignore
filigree create "Set up CI pipeline" --type=task --priority=1
filigree ready # See what's ready to work on
filigree update <id> --status=in_progress
filigree close <id>
```
## Installation
```bash
pip install filigree # Core CLI
pip install "filigree[mcp]" # + MCP server
pip install "filigree[dashboard]" # + Web dashboard
pip install "filigree[all]" # Everything
```
Or from source:
```bash
git clone https://github.com/tachyon-beep/filigree.git
cd filigree && uv sync
```
### Entry Points
| Command | Purpose |
|---------|---------|
| `filigree` | CLI interface |
| `filigree-mcp` | MCP server (stdio transport) |
| `filigree-dashboard` | Web UI (port 8377) |
### Claude Code Setup
`filigree install` configures everything in one step. To install individual components:
```bash
filigree install --claude-code # MCP server + CLAUDE.md instructions
filigree install --hooks # SessionStart hooks (project snapshot + dashboard auto-start)
filigree install --skills # Workflow skill pack for agents
filigree doctor # Verify installation health
```
The session hook runs `filigree session-context` at startup, giving the agent a snapshot of in-progress work, ready tasks, and the critical path. The skill pack (`filigree-workflow`) teaches agents triage patterns, team coordination, and sprint planning via progressive disclosure.
## Why Filigree?
| | Filigree | GitHub Issues | Jira | TODO files |
|-|----------|---------------|------|------------|
| Agent-native (MCP tools) | Yes | No | No | No |
| Works offline / local-first | Yes | No | No | Yes |
| Structured queries & filtering | Yes | Yes | Yes | No |
| Workflow state machines | Yes | Limited | Yes | No |
| Zero configuration | Yes | No | No | Yes |
| Dependency tracking | Yes | Limited | Yes | No |
## Documentation
| Document | Description |
|----------|-------------|
| [Getting Started](docs/getting-started.md) | 5-minute tutorial: install, init, first issue |
| [CLI Reference](docs/cli.md) | All CLI commands with full parameter docs |
| [MCP Server Reference](docs/mcp.md) | 43 MCP tools for agent-native interaction |
| [Workflow Templates](docs/workflows.md) | State machines, packs, field schemas, enforcement |
| [Agent Integration](docs/agent-integration.md) | Multi-agent patterns, claiming, session resumption |
| [Python API Reference](docs/api-reference.md) | FiligreeDB, Issue, TemplateRegistry for programmatic use |
| [Architecture](docs/architecture.md) | Source layout, DB schema, design decisions |
| [Examples](docs/examples/) | Runnable scripts: multi-agent, workflows, CLI scripting, planning |
## Priority Scale
See [Workflow Templates — Priority Scale](docs/workflows.md#priority-scale) for the full priority definitions (P0–P4).
## Development
Requires Python 3.11+. Developed on 3.13.
```bash
git clone https://github.com/tachyon-beep/filigree.git
cd filigree
uv sync --group dev
make ci # ruff check + mypy strict + pytest with 85% coverage gate
make lint # Ruff check + format check
make format # Auto-format with ruff
make typecheck # Mypy strict mode
make test # Pytest
make test-cov # Pytest with coverage (fail-under=85%)
```
### Key Conventions
- **Ruff** for linting and formatting (line-length=120)
- **Mypy** in strict mode
- **Pytest** with pytest-asyncio for MCP server tests
- **Coverage** threshold at 85%
- Tests in `tests/`, source in `src/filigree/`
## Acknowledgements
Filigree was inspired by Steve Yegge's [beads](https://github.com/steveyegge/beads) project. Filigree builds on the core idea of git-friendly issue tracking, focusing on MCP-native workflows and local-first operation.
## License
[MIT](LICENSE) — Copyright (c) 2026 John Morrissey
| text/markdown | John Morrissey | null | null | null | null | agent, cli, issue-tracker, mcp, sqlite | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Bug Tracking",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0",
"fastapi>=0.115; extra == \"all\"",
"mcp<2,>=1.0; extra == \"all\"",
"uvicorn>=0.34; extra == \"all\"",
"fastapi>=0.115; extra == \"dashboard\"",
"uvicorn>=0.34; extra == \"dashboard\"",
"mcp<2,>=1.0; extra == \"mcp\""
] | [] | [] | [] | [
"Homepage, https://github.com/tachyon-beep/filigree",
"Repository, https://github.com/tachyon-beep/filigree",
"Issues, https://github.com/tachyon-beep/filigree/issues",
"Changelog, https://github.com/tachyon-beep/filigree/blob/main/CHANGELOG.md",
"Documentation, https://github.com/tachyon-beep/filigree#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:16:30.722202 | filigree-1.2.0.tar.gz | 277,740 | 18/8c/130b464076c20086d0c29f69cb3a570f56f9a26ba2653585639a69800803/filigree-1.2.0.tar.gz | source | sdist | null | false | b70391f14145f7941bdab51d05a97937 | 57eb60d3ef9fec8ec7e3076935ce06892c0b28a0fb3d77238154279bb7e9d38d | 188c130b464076c20086d0c29f69cb3a570f56f9a26ba2653585639a69800803 | MIT | [
"LICENSE"
] | 174 |
2.4 | django-aws-api-gateway-websockets | 2.1.1 | Created to allow Django projects to be used as a HTTP backend for AWS API Gateway websockets | [](https://github.com/StevenMapes/django-aws-api-gateway-websockets/actions)
[](https://github.com/StevenMapes/django-aws-api-gateway-websockets/actions?workclow=CI)
[](https://pypi.org/project/django-aws-api-gateway-websockets/)


[](https://django-aws-api-gateway-websockets.readthedocs.io/)
[](https://pypistats.org/packages/django-aws-api-gateway-websockets)
# Django AWS API Gateway Websockets
It is the aim of this project to create a uniform way to record websocket connections, associate the Django user who established the connection and then retrieve that user within each request.
This project is designed to work exclusively with AWS API Gateway.
It is not intended to be a replacement for [Django Channels](https://github.com/django/channels); instead, this project lets you add [WebSockets](https://en.wikipedia.org/wiki/WebSocket) support to your project by writing normal HTTP request-response views while letting [AWS API Gateway](https://aws.amazon.com/api-gateway/) worry about the WebSocket connection.
This project introduces a new [Class-Based View](https://docs.djangoproject.com/en/dev/topics/class-based-views/) to handle connections, disconnections, routing, basic security checks, and ensuring that the User object is available within every request.
The project will keep track of which users created which WebSockets, which ones are active and will allow you to send messages back down the socket to the client via Boto3.
Please refer to the installation notes and the Getting Started guides.
# Security Concerns
**IMPORTANT:** In order to work, the dispatch method requires the ```csrf_exempt``` decorator. It has
already been added as a class decorator on the base view; if you overload the dispatch method you will need to add
it back to avoid receiving CSRF token failures.
# Restricting access to users
As of version 2.1.0 you can use Django permissions to restrict access to the methods that handle the WebSocket
request within your views.py.
The base view class has two properties that can be used to set the permissions that are required:
```has_any_permission``` -> the user must have ANY of the permissions listed
```has_all_permission``` -> the user must have ALL of the permissions listed
These checks are applied during the ```dispatch``` method, which returns a 403 response if the user does not have the
required permissions. Permissions at an individual method level are not currently supported automatically and will be
added in a future release.
# Python and Django Support
This project only actively supports current Python and Django versions: Python 3.10-3.14 and Django 4.2, 5.1, 5.2 & 6.0.
It may work with other versions (Django 4.2+ and Python 3.9+) but those combinations are no longer tested.
| **Python/Django** | **4.2** | **5.0** | **5.1** | **5.2** | **6.0** |
|-------------------|----------|---------|---------|---------|---------|
| 3.9 | Y | N/A | N/A | N/A | N/A |
| 3.10 | Y | Y** | Y | Y | N/A |
| 3.11* | Y | Y** | Y | Y | N/A |
| 3.12 | Y | Y** | Y | Y | Y |
| 3.13 | Y | Y** | Y | Y | Y |
| 3.14 | N | N | Y | Y | Y |
* *Python 3.11 only works with Django 4.1.3+
* **Django 5.0 is no longer being tested for support since May 2025 due to the final version having security issues
# Installation
You can install this package from pip using
```
pip install django-aws-api-gateway-websockets
```
## settings.py
Add ```django_aws_api_gateway_websockets``` into ```INSTALLED_APPS```
### Decide on AWS Credential Setup
This package supports several ways to connect to AWS; the option you choose determines which settings
you need to define.
#### Using the IAM role of the instance
If you are running this package on an EC2 instance that has an IAM profile, you only need to specify the region to
connect to by setting ```AWS_GATEWAY_REGION_NAME``` in ```settings.py``` (falls back to ```AWS_REGION_NAME```).
E.G. ```AWS_GATEWAY_REGION_NAME="eu-west-1"``` means this package will connect to the Ireland region using the IAM profile
of the machine.
#### Named Profiles (AWS_IAM_PROFILE)
You can use a named profile by setting ```AWS_IAM_PROFILE``` to the name of the profile on your computer.
E.G ```AWS_IAM_PROFILE="example"```. As the profile contains the region you do not need to set anything else.
**NOTE:** If you are using ```Django-Storages``` you may already have set the value against ```AWS_S3_SESSION_PROFILE```
#### Using AWS IAM Access Key and Secret
Alternatively you can specify the exact ACCESS_KEY, SECRET_ACCESS_KEY and REGION_NAME you wish to use by setting the
three following ```settings.py``` values.
```
AWS_ACCESS_KEY_ID="My-Key-Here"
AWS_SECRET_ACCESS_KEY="My-Secret-Key-Here"
AWS_GATEWAY_REGION_NAME="region-to-use"
```
The order above is the recommended order of preference. As with all custom Django settings, it's advisable to store the real
values in either a secrets manager or environment variables. I tend to use https://pypi.org/project/python-decouple/
### IMPORTANT
If your site is **not** already running cross-origin you will need to update some settings and flush the sessions to ensure the primary domain and subdomain will work.
Because the API Gateway will run from a subdomain, you need to ensure the cookies are set up to allow subdomains to read them.
Assuming your site runs from **www.example.com** and you want to use **ws.www.example.com** for websockets, you would need to
set the CSRF and COOKIE settings below (Django 4 examples shown)
```
# CSRF
CSRF_COOKIE_SAMESITE=Lax
# CSRF_TRUSTED_ORIGINS=www.example.com,ws.www.example.com ## For Django 3*
## For Django 4+
CSRF_TRUSTED_ORIGINS=https://www.example.com,https://ws.www.example.com
CSRF_COOKIE_DOMAIN='.www.example.com'
# Sessions
SESSION_COOKIE_SAMESITE='Lax'
SESSION_COOKIE_NAME='mysessionid'
SESSION_COOKIE_DOMAIN='.www.example.com'
```
If you wanted to use **www.example.com** for the main site and **ws.example.com** for websockets you'd need to use
these settings
```
# CSRF
CSRF_COOKIE_SAMESITE=Lax
CSRF_TRUSTED_ORIGINS=https://www.example.com,https://ws.example.com
CSRF_COOKIE_DOMAIN='.example.com'
# Sessions
SESSION_COOKIE_SAMESITE='Lax'
SESSION_COOKIE_NAME='mysessionid'
SESSION_COOKIE_DOMAIN='.example.com'
```
**NOTE:** You need to rename the SESSION cookie. In the example I have renamed it from ```sessionid``` to ```mysessionid```. This will ensure that any old cookies are ignored.
### Flushing Sessions
Because you are changing the session cookie you will also need to flush any cached sessions using ```python manage.py clearsessions```.
## Clearing Stale Websocket connections
The websocket connections will become stale over time and some housekeeping is required. To help, there is a management
command, ```clearWebSocketSessions```, that deletes the closed connections from the database. Simply run
```
python manage.py clearWebSocketSessions
```
I recommend setting this as a scheduled task.
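For example, a nightly crontab entry might look like this (paths are illustrative; adjust them for your environment):
```
0 3 * * * /path/to/venv/bin/python /path/to/project/manage.py clearWebSocketSessions
```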
# AWS Setup
In order for this package to create the API Gateway, its routes, integration and custom domain, and to publish messages,
you will need to assign the correct permissions to the IAM user/role, following the best practice of least privilege.
If you are using EC2/ECS then you should use an IAM role; otherwise use an IAM user.
This package **does not** include creating an AWS Certificate as you may already have one. You should create that
yourself. If you do not know how then see the [Appendix](#Appendix) section at the end of this file.
## IAM Policy
You'll need to grant the IAM permission to allow this project to create the API Gateway, create the domain mappings and
to execute the API to send messages from the server to the client(s).
I'm still reviewing the "minimum required permissions" but this project has been tested with the
following IAM policy which you can copy and paste into the JSON editor within the AWS console:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DjangoApiGatewayPolicy01",
"Effect": "Allow",
"Action": [
"apigateway:GET",
"apigateway:PATCH",
"apigateway:POST",
"apigateway:PUT",
"execute-api:*",
"iam:CreateServiceLinkedRole"
],
"Resource": [
"arn:aws:apigateway:*::/apis",
"arn:aws:apigateway:*::/apis/*",
"arn:aws:apigateway:*::/apis/*/authorizers",
"arn:aws:apigateway:*::/apis/*/authorizers/*",
"arn:aws:apigateway:*::/apis/*/cors",
"arn:aws:apigateway:*::/apis/*/deployments",
"arn:aws:apigateway:*::/apis/*/deployments/*",
"arn:aws:apigateway:*::/apis/*/exports/*",
"arn:aws:apigateway:*::/apis/*/integrations",
"arn:aws:apigateway:*::/apis/*/integrations/*",
"arn:aws:apigateway:*::/apis/*/integrations/*/integrationresponses",
"arn:aws:apigateway:*::/apis/*/integrations/*/integrationresponses/*",
"arn:aws:apigateway:*::/apis/*/models",
"arn:aws:apigateway:*::/apis/*/models/*",
"arn:aws:apigateway:*::/apis/*/models/*/template",
"arn:aws:apigateway:*::/apis/*/routes",
"arn:aws:apigateway:*::/apis/*/routes/*",
"arn:aws:apigateway:*::/apis/*/routes/*/requestparameters/*",
"arn:aws:apigateway:*::/apis/*/routes/*/routeresponses",
"arn:aws:apigateway:*::/apis/*/routes/*/routeresponses/*",
"arn:aws:apigateway:*::/apis/*/stages",
"arn:aws:apigateway:*::/apis/*/stages/*",
"arn:aws:apigateway:*::/apis/*/stages/*/accesslogsettings",
"arn:aws:apigateway:*::/apis/*/stages/*/cache/authorizers",
"arn:aws:apigateway:*::/apis/*/stages/*/routesettings/*",
"arn:aws:apigateway:{AWS-REGION-NAME}::/domainnames",
"arn:aws:apigateway:{AWS-REGION-NAME}::/domainnames/*/apimappings",
"arn:aws:apigateway:{AWS-REGION-NAME}::/domainnames/*/apimappings/*",
"arn:aws:execute-api:{AWS-REGION-NAME}:{AWS-ACCOUNT-NUMBER}:*/*/*/*",
"arn:aws:iam::{AWS-ACCOUNT-NUMBER}:role/aws-service-role/ops.apigateway.amazonaws.com/AWSServiceRoleForAPIGateway"
]
}
]
}
```
You will need to edit the permissions and replace the following:
1. ```{AWS-REGION-NAME}``` with the correct AWS region you are using, E.G ```eu-west-1```. If you wish to grant access to all regions then replace this placeholder with an ```*```
2. ```{AWS-ACCOUNT-NUMBER}``` with your account number E.G: 123456789101
This policy grants permissions to ensure the API Gateway(s) will be created, the custom domain name mapped to the
gateway and that you can send messages from the server to clients. The AWS Service role is required as it's used when
you create a custom domain name for API Gateway. If you do this via the console it will create the role for you so we
need to ensure the IAM user has the permission in order to replicate this.
Once you have created your API Gateway(s) you may wish to follow AWS best practice and restrict or revoke the
permissions to the API Gateway(s) you have created. Because I do not know what you will name your gateway, the
permissions above allow you to add/edit any API Gateway on your account.
# Getting Started
The core files within this project are:
1. ```django_aws_api_gateway_websockets.views.WebSocketView``` - The base class-based view from which you should extend
2. ```django_aws_api_gateway_websockets.models.ApiGateway``` - A model for managing the API Gateway. A Django Admin
page is included along with custom actions to create the API Gateway and configure a Custom Domain. For those with
projects not using Django Admin there are two management commands that perform the same actions.
3. ```django_aws_api_gateway_websockets.models.WebSocketSession``` - The websocket session store. Every connection
writes to this model, which includes a method to send a message to the connection. The model manager's QuerySet
has been extended with a method to send messages to all records included within a queryset.
## Django
### urls.py
Edit your urls.py file and add an entry for the URL you wish API Gateway to call. **IMPORTANT** The slug parameter
must be called "route". This will be populated by API Gateway with the route it uses E.G. $connect, $default or
$disconnect
E.G.
```
path("ws/<slug:route>", ExampleWebSocketView.as_view(), name="example_websocket")
```
### Creating the Views
Subclass ```WebSocketView``` and implement methods whose names match the routes the API Gateway has been set up to
use. Methods for $connect and $disconnect already exist; you just need to implement a method for ```default``` along
with any other custom routes you have created. The methods are selected dynamically by the ```dispatch``` method, with
any leading dollar sign removed.
The methods take the ```request``` parameter and only need to return a response if you wish to return a negative HTTP
response such as an HttpResponseBadRequest; otherwise there is no need to return anything.
```
import logging

from django.http import JsonResponse

from django_aws_api_gateway_websockets.views import WebSocketView

logger = logging.getLogger(__name__)


class ExampleWebSocketView(WebSocketView):
    """Custom WebSocket view."""

    def default(self, request, *args, **kwargs) -> JsonResponse:
        """Add the logic you wish to run here when you receive a message.

        Create the JSON response that you will handle within the JavaScript.
        """
        logger.debug(f"body {self.body}")
```
If you want to send a response to the websocket that made the request then you need to call the ```send_message()``` on
the WebSocketSession that is being used. See the example below
```
from django.http import JsonResponse

from django_aws_api_gateway_websockets.views import WebSocketView


class ExampleWebSocketView(WebSocketView):
    """Custom WebSocket view."""

    def default(self, request, *args, **kwargs) -> JsonResponse:
        # Do stuff
        ...

        # Send a message back to the client - i.e. unicast
        self.websocket_session.send_message({"key1": "value1", "key2": "value2"})
```
If you are using "channels" to group WebSocket connections together for multicasting, that is one-to-many
communication, then you can use the following example
```
from django.http import JsonResponse

from django_aws_api_gateway_websockets.models import WebSocketSession
from django_aws_api_gateway_websockets.views import WebSocketView


class ExampleWebSocketView(WebSocketView):
    """Custom WebSocket view."""

    def default(self, request, *args, **kwargs) -> JsonResponse:
        # Do stuff
        ...

        # Multicast a message to ALL CONNECTED clients on the same "channel"
        WebSocketSession.objects.filter(
            channel_name=self.websocket_session.channel_name
        ).send_message({"key": "value"})

        # Multicast a message to ALL CONNECTED clients on the
        # channel named "my-example-channel"
        WebSocketSession.objects.filter(
            channel_name="my-example-channel"
        ).send_message({"key": "value"})
```
#### Using the Route Selection Key value to call specific methods
API Gateway works by routing messages based on the "Route Selection Key". This project sets you up with a default route
so that you have a catch-all, but the Route Selection Key is preserved and is used by the dispatch method when
selecting the method to handle the request. This means that you can write individual methods to handle each
route for cleaner, more testable code.
In the example that follows the default Route Selection Key of **action** is being used.
Assume that two messages are sent to the WebSocket, the first with the payload
```{"action": "test", "value": "hello world"}``` and the second with ```{"action": "help", "value": "Help Me"}```. You can
either handle these within the catch-all ```default``` method or write individual methods for each action
```
from django_aws_api_gateway_websockets.views import WebSocketView


class ExampleWebSocketView(WebSocketView):
    """Custom WebSocket view."""

    def test(self, request, *args, **kwargs):
        print(self.body.get("value"))
        # Prints "hello world"

    def help(self, request, *args, **kwargs):
        print(self.body.get("value"))
        # Prints "Help Me"
```
**Remember**: The "action" key is the default ```route_selection_key```; if you choose to use a different one when
setting up the websocket, make sure to update the ```route_selection_key``` class property to the same value.
### Debugging the View
Sometimes the view may return an HTTP 400 that you wish to debug further. To help with this you can pass
```debug=True``` into the ```as_view()``` method. The class will then call the private method ```_debug(msg)```, passing
in a string. By default this method appends the message string to a list property called ```debug_log```, but
you may wish to simply overload the method and call your logger.
E.G.
```
def _debug(self, msg: str):
if self.debug:
logger.debug(msg)
```
This can help track down the issue, which may be as simple as sending a message from the client that is missing the
```route_selection_key```.
## Example of sending a message from the server to the client
To send a message to a specific connection, simply load its ```WebSocketSession``` record and then call the
```send_message``` method, passing in a JSON-compatible dictionary of the payload you wish to send to the client.
### Sending a message to one connection
```python
from django_aws_api_gateway_websockets.models import WebSocketSession
obj = WebSocketSession.objects.get(pk=1)
obj.send_message({"type": "example", "msg": "This is a message"})
```
### Sending a message to ALL active connections associated with the same channel
```python
from django_aws_api_gateway_websockets.models import WebSocketSession
WebSocketSession.objects.filter(channel_name="Chatroom 1").send_message(
    {"msg": "This is a sample message"}
)
```
The ```WebSocketSessionQuerySet.send_message``` method automatically adds a filter of ```connected=True```
## Django Admin
Three Django Admin pages will be added to your project under the app _Django AWS APIGateway WebSockets_. Those pages
allow you to view and manage the three base models.
### Creating an API Gateway Endpoint
**Important** This section assumes that you are using an IAM account with the permissions listed earlier.
Using the Django Admin page create a new API Gateway record using the following for reference:
1. **API Name** - The human friendly API Name
2. **API Description** - Optional
3. **Default channel name** - Fill this in if you want all connections to this Websocket to also be associated with
the same "channel" otherwise leave it blank. "Channels" are groups of web socket connections and nothing more.
4. **Target base Endpoint** - This is the full URL path to the view you wish to use to handle the requests **excluding**
the ```route``` slug portion that will be automatically appended.
5. **Certificate ARN** - You'll need to manually create a certificate within AWS. Once you have, copy its ARN into this field
6. **Hosted Zone ID** - If you use Route53 then you'll need to enter the Hosted Zone ID here if you wish to use a custom
domain name with the API Gateway Endpoint
7. **API Key Selection Expression** - In most cases leave this as the default value. See the AWS docs for more
8. **Route selection expression** - As per the above. This is the field that maps the "action" key within the payload
as being the key to determine the route to take. If you change this then you must overload the
```route_selection_key``` of the view
9. **Route key** - This is the default root key. In most cases you will not need to change this.
10. **Stage Name** - The name you wish to give to the staging. Currently this package does not support multiple stages.
If you leave it blank it will default to "production"
11. **Stage description** - Optional
12. **Tags** - Currently not implemented, but these will be used to create the tags within AWS
13. **API ID** - This will be populated when the API is created.
14. **API Endpoint** - This will be populated when the API is created.
15. **API Gateway Domain Name** - This will be populated when you run the Custom Domain setup. The value that appears
here is the value to which your DNS CNAME entry should point.
16. **API Mapping ID** - This will be populated when the API is created.
Once you have created the record within the database simply select it from the Django Admin list view, choose
**Create API Gateway** action from the actions list and click Go. The API Gateway record will be created within your
account. When it's ready the "API Created" column will show as True.
Once the API has been created you can now add a custom domain name mapping by choosing the row again and this time
selecting the **Create Custom Domain record for the API**. This will create the Custom Domain record and will associate
it with the stage name you entered earlier. Once it's completed the **Custom Domain Created** flag will be set as True.
At this point you can open the record where you'll find that the ```API Gateway Domain Name``` has been populated.
## Django Management Commands
If you are not using Django Admin then you can populate the apigateway database table manually using the same list
as shown above.
Once you've populated those fields you can then run the two actions as management commands rather than via Django Admin.
```
python manage.py createApiGateway --pk=1
```
```
python manage.py createCustomDomain --pk=1
```
The same actions will run as above.
## Adding Additional Routes
This project will route all requests to one URL by default; however, that is not always what you will need —
sometimes you will want to route requests to different URLs to send them to different views, potentially within
different apps. API Gateway supports this through _"integrations"_ and _"routes"_ that use unique "route keys" to
identify requests and then route those requests to a URL.
To support this approach this project uses the ```ApiGatewayAdditionalRoute``` model entries to map a route key to a chosen URL.
They are available to manage as inline forms on the main ```ApiGateway``` admin or as their own admin page.
The models are set-up to auto-deploy the new route if the ```ApiGateway``` has already been deployed. If it has not been
deployed already then these routes will be setup during the main deployment.
From the client side you choose the route you wish to take by setting the value of the **action** to the route key you
wish to match.
## Gotchas and debugging
### Failure to connect to websockets
#### Differing required headers
The most common reason for the websocket failing to connect is missing required headers. The base view is
set up with two lists of expected headers: ```required_headers``` and ```additional_required_headers```. If you are
deploying to an EC2 server then you shouldn't have to change these, but if you are deploying elsewhere or are testing
locally you may find that you need to change some of them. During development of this library I was using an
[NGROK](https://ngrok.com/) network edge tunnel and found that the "X-Real-Ip" and "Connection" headers were being lost,
which is why they were moved to ```additional_required_headers```. If you find this is the case for you then simply
overload the class property and set it to an empty list.
# Client Side Integration (Javascript)
This section will guide you through two common ways of connecting to and using this project from a webpage.
### Basic Integration
Below is a very basic integration using the WebSockets API built into browsers. It does not handle reconnecting dropped
websockets, see the next section for that.
**WARNING**: This method will create a WebSocket that will timeout after around 10 minutes.
The below example assumes you created the API Gateway to work on the custom domain name ws.example.com
```javascript
let wss_url = 'wss://ws.example.com';
let regDeskWSocket = new WebSocket(wss_url);
regDeskWSocket.onmessage = function(event) {
// Take your action here to handle messages being received
console.log(event);
let msg = JSON.parse(event.data);
console.log(msg);
};
```
You can set the channel by using the **channel** querystring parameter during the connection
```javascript
let wss_url = 'wss://ws.example.com?channel=my+example+channel';
let exampleWS = new WebSocket(wss_url);
exampleWS.onmessage = function(event) {
// Take your action here to handle messages being received
console.log(event);
let msg = JSON.parse(event.data);
console.log(msg);
};
```
### Reconnecting WebSockets
Websockets can disconnect due to a variety of reasons; to work around this here are some links to libraries of proposed
solutions:
1. [Stack Overflow- WebSocket: How to automatically reconnect after it dies](https://stackoverflow.com/questions/22431751/websocket-how-to-automatically-reconnect-after-it-dies)
2. [JS library - reconnecting-websocket](https://github.com/joewalnes/reconnecting-websocket)
The below example is using the JS library. Note you just include the lib and then use the
```ReconnectingWebSocket``` class rather than ```WebSocket```:
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/reconnecting-websocket/1.0.0/reconnecting-websocket.min.js" integrity="sha512-B4skI5FiLurS86aioJx9VfozI1wjqrn6aTdJH+YQUmCZum/ZibPBTX55k5d9XM6EsKePDInkLVrN7vPmJxc1qA==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<script>
let wss_url = 'wss://ws.example.com';
let exampleWS = new ReconnectingWebSocket(wss_url);
exampleWS.onmessage = function(event) {
// Take your action here to handle messages being received
console.log(event);
let msg = JSON.parse(event.data);
console.log(msg);
};
</script>
```
### Sending a message from the client to the server
Both the examples above use the same method.
```javascript
let wss_url = 'wss://ws.example.com?channel=my+example+channel';
let exampleWS = new WebSocket(wss_url); // Or use ReconnectingWebSocket it does not matter
// Send a message
exampleWS.send(JSON.stringify({"action": "custom", "message": "What is this"}))
```
**IMPORTANT** The value of ```action``` determines the route that is used by **API Gateway**. By default, the only
routes that are set-up are ```$connect```, ```$disconnect``` and ```default```. Any messages sent to unknown routes on
the API Gateway are delivered to the ```default``` route. So if you created a custom route called ```bob``` and then
sent the following message from the client:
```
exampleWS.send(JSON.stringify({"action": "bob", "message": "What is this"}))
```
API Gateway will route this to the endpoint set for the "bob" route, calling your view with the route slug's value
set to **bob**. The ```dispatch``` method of the view will then look for a method on the class called
```bob```; if one is found it will be invoked, otherwise the ```default``` method will be called.
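The route-to-method lookup described above can be sketched in plain Python (an illustrative toy, not the actual ```WebSocketView``` implementation, which also performs security checks and user loading):

```python
class MiniDispatcher:
    # Toy sketch of the dispatch idea: strip the leading "$" from the route,
    # look for a matching method, and fall back to default() otherwise.
    def dispatch(self, route):
        name = route.lstrip("$")  # "$connect" -> "connect"
        handler = getattr(self, name, None) if not name.startswith("_") else None
        if not callable(handler):
            handler = self.default  # unknown routes fall through to default
        return handler()

    def connect(self):
        return "connected"

    def bob(self):
        return "handled by bob"

    def default(self):
        return "handled by default"


d = MiniDispatcher()
print(d.dispatch("$connect"))  # connected
print(d.dispatch("bob"))       # handled by bob
print(d.dispatch("unknown"))   # handled by default
```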
# Appendix
## Creating an SSL Certificate within AWS Certificate Manager
Please refer to the [official AWS documentation](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html)
# Found a Bug?
Issues are tracked via GitHub issues at the [project issue page](https://github.com/StevenMapes/django-aws-api-gateway-websockets/issues)
# Have A Feature Request?
Feature requests can be raised by creating an issue within the [project issue page](https://github.com/StevenMapes/django-aws-api-gateway-websockets/issues), but please create the issue with "Feature Request -" at the start of the issue
# Testing
To run the tests use
```
coverage erase && \
python -W error::DeprecationWarning -W error::PendingDeprecationWarning -m coverage run --parallel -m pytest --ds tests.settings && \
coverage combine && \
coverage report
```
# Compiling Requirements
Run ```pip install pip-tools```, then run ```python requirements/compile.py``` to generate the various requirements files.
I use two local virtualenvs to build the requirements, one running Python 3.8 and the other running Python 3.11.
```pytest-django``` is also required for testing.
# Building
This project uses hatchling. Build the sdist with:
```python -m build --sdist```
# tox
# Contributing
- [Check for open issues](https://github.com/StevenMapes/django-aws-api-gateway-websockets/issues) at the project issue page or open a new issue to start a discussion about a feature or bug.
- Fork the [repository on GitHub](https://github.com/StevenMapes/django-aws-api-gateway-websockets) to start making changes.
- Clone the repository
- Initialise pre-commit by running ```pre-commit install```
- Install requirements from one of the requirement files depending on the versions of Python and Django you wish to use.
- Add a test case to show that the bug is fixed or the feature is implemented correctly.
- Test using ```python -W error::DeprecationWarning -W error::PendingDeprecationWarning -m coverage run --parallel -m pytest --ds tests.settings```
- Create a pull request, tagging the issue, and bug me until I merge it. Also, don't forget to add yourself to AUTHORS.
| text/markdown | null | Steven Mapes <steve@stevenmapes.com> | null | null | MIT License
Copyright (c) 2022 Steven Mapes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | API Gateway, AWS, Amazon WebServices, Django, WebSockets | [
"Development Status :: 5 - Production/Stable",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.10.0",
"django>=4.2.27",
"coverage; extra == \"dev\"",
"flake8; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://django-aws-api-gateway-websockets.readthedocs.io/",
"Changelog, https://github.com/StevenMapes/django-aws-api-gateway-websockets/blob/main/CHANGELOG.md",
"Twitter, https://twitter.com/stevenamapes",
"Homepage, https://github.com/StevenMapes/django-aws-api-gateway-websockets"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:16:29.960020 | django_aws_api_gateway_websockets-2.1.1.tar.gz | 116,338 | ca/e4/d44d81abb5cff1b0dc84f5944417009bc59a37cf0c4da67cfdfec2830f5e/django_aws_api_gateway_websockets-2.1.1.tar.gz | source | sdist | null | false | f2ce4e414a15e37148a10373401d1d70 | 06a969fe0a54411d92202645a8788debc0bcde37af6d68186af2e4cb4a477155 | cae4d44d81abb5cff1b0dc84f5944417009bc59a37cf0c4da67cfdfec2830f5e | null | [
"AUTHORS",
"LICENSE"
] | 109 |
2.4 | multiverse-tester | 0.1.0 | Simulation of universe habitability for life under different fundamental constants | # MultiverseTester
Simulates how habitable universes are for life under different values of the fundamental physical constants. Explores the multiverse parameter space (from 2D to 10D) and computes a habitability index.
**The 8D law:** the allowed variation range of a constant is inversely proportional to the strength of the corresponding interaction: ΔP ∝ 1/F.
## Installation
```bash
pip install multiverse-tester
```
With optional isosurface support (scikit-image):
```bash
pip install ".[full]"
```
### Building for publication on PyPI
```bash
pip install build
python -m build
# Built files end up in dist/
pip install twine
twine upload dist/*
```
### Running the tests
```bash
pip install ".[test]"
pytest tests/ -v
```
## Usage
### Python API
```python
from multiverse_tester import (
UniverseParameters,
UniverseAnalyzer,
UniversalConstants,
HabitabilityIndex,
)
# Create a universe with the given parameters
u = UniverseParameters(
name="Test universe",
alpha=1/137.036, # fine-structure constant
m_p=1.6726219e-27, # proton mass (kg)
)
# Analyze habitability for life
analyzer = UniverseAnalyzer(u)
index, score, metrics = analyzer.calculate_habitability_index()
print(f"Habitability index: {score:.3f}")
print(f"Category: {index.name}")
```
### CLI
After `pip install .` the following command is available:
```bash
multiverse-analyze # main analysis with a plot
```
The 2D–10D optimizers are run via scripts from the project root (see "Running scripts directly" below).
### Interactive web demo (Streamlit)
**[🌌 Launch the demo online](https://multiverse-tester.streamlit.app)**
Or run it locally:
```bash
pip install ".[demo]" # streamlit is included in the demo extra
multiverse-demo # or: streamlit run multiverse_tester/streamlit_demo.py
```
Open your browser and explore the "bubble of life": move the sliders (α, m_p, m_e, G, c, ħ, ε₀) and watch how the universe's habitability changes. The landscape shows the habitable region in the (α, m_p) plane.
### Batch run of all optimizers
```bash
multiverse-run-optimizers # or: python -m multiverse_tester.run_all_optimizers
# 2D→10D, report written to reports/
```
### Optimizers in code
```python
from multiverse_tester.optimizers import (
UniverseOptimizer, # 2D (α, m_p)
HyperVolume5D, # 5D
HyperVolume10D, # 10D
plot_nd_2d_slice,
)
```
## Reports
| File | Description |
|------|----------|
| [reports/OPTIMIZATION_REPORT.md](reports/OPTIMIZATION_REPORT.md) | Summary table, 2D–10D |
| [reports/FULL_ANALYSIS_2D_TO_8D.md](reports/FULL_ANALYSIS_2D_TO_8D.md) | Full analysis, the 8D law, hierarchy of constants |
| [reports/10D_CRITICAL_ANALYSIS.md](reports/10D_CRITICAL_ANALYSIS.md) | Critical analysis of 10D (including Λ) |
| [reports/MATHEMATICAL_FORMALIZATION.md](reports/MATHEMATICAL_FORMALIZATION.md) | Mathematical formalization: ΔP(F) = min(20, 2.0·F^(-0.15)) |
## Model
- **Atomic physics:** Bohr radius, Rydberg energy, Compton wavelength
- **Nuclear physics:** binding energy (Weizsäcker formula), Coulomb barrier
- **Stellar nucleosynthesis:** pp chain, CNO cycle, triple-alpha process, s/r processes, supernovae
- **Habitability index:** DEAD → HOSTILE → MARGINAL → HABITABLE → OPTIMAL
## Results (2D→10D)
- **α** is the only parameter with a sharp optimum (~0.007–0.011)
- **m_p, m_e, ε₀, k_B** stabilize at ~0.1× (can be 10× smaller)
- **c, ħ, H₀** stabilize at ~0.2× (can be 5× smaller)
- **G** stabilizes at ~0.05× (can be 20× weaker)
- **Λ** (10D): the cosmological constant; adding it keeps the habitable fraction of parameter space at ~4.4%
- **Universal formula:** ΔP(F) = min(20, 2.0·F^(-0.15))
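The universal formula can be evaluated directly. A small self-contained sketch (the interaction-strength values below are illustrative order-of-magnitude numbers, not constants taken from the package):

```python
def delta_p(force_strength):
    """Allowed variation range of a constant, capped at 20x:
    DP(F) = min(20, 2.0 * F**-0.15)."""
    return min(20.0, 2.0 * force_strength ** -0.15)


# Illustrative dimensionless interaction strengths (assumed values)
strong = 1.0      # strong interaction (reference)
em = 7.3e-3       # electromagnetic (~alpha)
gravity = 6e-39   # gravitational (proton pair)

print(delta_p(strong))   # tightest range: 2.0
print(delta_p(em))       # ~4.2
print(delta_p(gravity))  # weakest force, capped at 20.0
```

The trend matches the results above: the weaker the interaction, the wider the range over which its constant can vary before habitability is lost.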
## Dependencies
- Python >= 3.8
- numpy
- matplotlib
- scipy
- scikit-image (optional, for 3D isosurfaces)
## License
MIT
**Author:** Timur Isanov
**Email:** tisanov@yahoo.com
| text/markdown | null | Timur Isanov <tisanov@yahoo.com> | null | null | MIT | physics, multiverse, simulation, habitability, fine-structure | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20.0",
"matplotlib>=3.4.0",
"scipy>=1.7.0",
"scikit-image>=0.19.0; extra == \"full\"",
"streamlit>=1.28.0; extra == \"demo\"",
"pytest>=7.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/timurisanov/multiverse-tester",
"Repository, https://github.com/timurisanov/multiverse-tester",
"Documentation, https://github.com/timurisanov/multiverse-tester#readme",
"Live Demo, https://multiverse-tester.streamlit.app"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:16:13.761247 | multiverse_tester-0.1.0.tar.gz | 54,030 | ab/28/0ae236bd2da418659d37d545b7c1bf95626e9eee6f90f00fb127111ed78f/multiverse_tester-0.1.0.tar.gz | source | sdist | null | false | 714c3b332fcb2a5203974172cd14b288 | 0503357d1cfcfe9c4fc75cd9757503ce651da6d224245ddae68a042fb4eb574f | ab280ae236bd2da418659d37d545b7c1bf95626e9eee6f90f00fb127111ed78f | null | [
"LICENSE"
] | 174 |
2.4 | pygama | 2.3.5 | Python package for data processing and analysis | <img src=".github/logo.png" alt="pygama logo" align="left" height="150">
# pygama
[](https://pypi.org/project/pygama/)
[](https://github.com/legend-exp/pygama/tags)
[](https://github.com/legend-exp/pygama/actions)
[](https://github.com/pre-commit/pre-commit)
[](https://github.com/psf/black)
[](https://app.codecov.io/gh/legend-exp/pygama)
[](https://github.com/legend-exp/pygama/issues)
[](https://github.com/legend-exp/pygama/pulls)
[](https://github.com/legend-exp/pygama/blob/main/LICENSE)
[](https://pygama.readthedocs.io)
[](https://zenodo.org/doi/10.5281/zenodo.10614246)
*pygama* is a Python package for:
- converting physics data acquisition system output to
[LH5-format](https://legend-exp.github.io/legend-data-format-specs) HDF5
files (functionality provided by the
[legend-pydataobj](https://legend-pydataobj.readthedocs.io) and
[legend-daq2lh5](https://legend-daq2lh5.readthedocs.io) packages)
- performing bulk digital signal processing (DSP) on time-series data
(functionality provided by the [dspeed](https://dspeed.readthedocs.io)
package)
- optimizing DSP routines and tuning associated analysis parameters
- generating and selecting high-level event data for further analysis
Check out the [online documentation](https://pygama.readthedocs.io).
If you are using this software, consider
[citing](https://zenodo.org/doi/10.5281/zenodo.10614246)!
## Related repositories
- [legend-exp/legend-pydataobj](https://github.com/legend-exp/legend-pydataobj)
→ LEGEND Python Data Objects
- [legend-exp/legend-daq2lh5](https://github.com/legend-exp/legend-daq2lh5)
→ Convert digitizer data to LEGEND HDF5
- [legend-exp/dspeed](https://github.com/legend-exp/dspeed)
→ Fast Digital Signal Processing for particle detector signals in Python
| text/markdown | The LEGEND collaboration | null | The LEGEND collaboration | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: MacOS",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"hist",
"colorlog",
"dbetto",
"dspeed>=2.0",
"h5py>=3.2",
"iminuit",
"legend-pydataobj>=1.16",
"pylegendmeta>=0.9",
"matplotlib",
"numba!=0.53.*,!=0.54.*,!=0.57",
"numpy>=1.21",
"pandas>=1.4.4",
"tables",
"pint",
"pyyaml",
"scikit-learn",
"scipy>=1.0.1",
"tqdm>=4.66",
"pygama[docs,test]; extra == \"all\"",
"furo; extra == \"docs\"",
"jupyter; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"sphinx-inline-tabs; extra == \"docs\"",
"pre-commit; extra == \"test\"",
"pylegendtestdata; extra == \"test\"",
"pytest>=6.0; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/legend-exp/pygama",
"Bug Tracker, https://github.com/legend-exp/pygama/issues",
"Discussions, https://github.com/legend-exp/pygama/discussions",
"Changelog, https://github.com/legend-exp/pygama/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:16:13.064722 | pygama-2.3.5.tar.gz | 212,856 | e8/2d/26c61aac7c57f6d42744b165fd1c1c31b9694b3a506154d936921bfa3996/pygama-2.3.5.tar.gz | source | sdist | null | false | 77abc9c6178935a1da138e5c577616ac | 1b6faa4bb07ca2d314419dbc6c11d2e2263d5c3c8e3c9f2434fca763d27d5038 | e82d26c61aac7c57f6d42744b165fd1c1c31b9694b3a506154d936921bfa3996 | null | [
"LICENSE"
] | 389 |
2.4 | rrq | 0.10.4 | RRQ is a library for creating reliable job queues using Rust and Redis, with language-agnostic producers and workers. | # RRQ: Reliable Redis Queue
[](https://pypi.org/project/rrq/)
[](LICENSE)
**RRQ is a distributed job queue that actually works.** It combines a Rust-powered orchestrator with language-native workers to give you the reliability of battle-tested infrastructure with the flexibility of writing job handlers in Python, TypeScript, or Rust.
## Why RRQ?
Most job queues make you choose: either fight with complex distributed systems concepts, or accept unreliable "good enough" solutions. RRQ takes a different approach:
- **Rust orchestrator, any-language workers** - The hard parts (scheduling, retries, locking, timeouts) are handled by a single-binary Rust process. Your job handlers are just normal async functions in your preferred language.
- **Redis as the source of truth** - No separate databases to manage. Jobs, queues, locks, and results all live in Redis with atomic operations and predictable semantics.
- **Production-grade features built in** - Retry policies, dead letter queues, job timeouts, cron scheduling, distributed tracing, and health checks work out of the box.
- **Fast and lightweight** - The Rust orchestrator handles thousands of jobs per second with minimal memory. Workers are isolated processes that can be scaled independently.
## How It Works
```
┌──────────────────────────────┐
│ Your Application │
│ (Python, TypeScript, Rust) │
│ │
│ client.enqueue("job", {...}) │
└───────────────┬──────────────┘
│ enqueue jobs
▼
┌───────────────────────┐
│ Redis │
│ queues, jobs, locks │
└──────────┬────────────┘
│ poll/dispatch
▼
┌──────────────────────────────┐
│ RRQ Orchestrator │
│ (single Rust binary) │
│ • scheduling & retries │
│ • timeouts & deadlines │
│ • dead letter queue │
│ • cron jobs │
└──────────┬───────────────────┘
│ socket protocol
▼
┌─────────────────────────────────────────┐
│ Job Runners │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Python │ │ TypeScript │ │
│ │ Worker │ │ Worker │ │
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────┘
```
## This Package
This Python package (`rrq`) gives you everything you need to work with RRQ from Python:
- **Producer client** - Enqueue jobs from your Python application
- **Runner runtime** - Execute job handlers written in Python
- **OpenTelemetry integration** - Distributed tracing from producer to runner
The Rust orchestrator binary is bundled in the wheel, so there's nothing else to install.
## Quick Start
### 1. Install
```bash
pip install rrq
# or
uv pip install rrq
```
### 2. Create configuration (`rrq.toml`)
```toml
[rrq]
redis_dsn = "redis://localhost:6379/0"
default_runner_name = "python"
[rrq.runners.python]
type = "socket"
cmd = ["rrq-runner", "--settings", "myapp.runner:settings"]
tcp_socket = "127.0.0.1:9000"
```
### 3. Write a job handler
```python
# myapp/handlers.py
from rrq.runner import ExecutionRequest
async def send_welcome_email(request: ExecutionRequest):
user_email = request.params.get("email")
template = request.params.get("template", "welcome")
# Your email sending logic here
await send_email(to=user_email, template=template)
return {"sent": True, "email": user_email}
```
### 4. Register handlers
```python
# myapp/runner.py
from rrq.runner_settings import PythonRunnerSettings
from rrq.registry import Registry
from myapp import handlers
registry = Registry()
registry.register("send_welcome_email", handlers.send_welcome_email)
settings = PythonRunnerSettings(registry=registry)
```
### 5. Start the system
```bash
# Terminal 1: Start the orchestrator (runs runners automatically)
rrq worker run --config rrq.toml
# That's it! The orchestrator spawns and manages your Python runners.
```
### 6. Enqueue jobs
```python
import asyncio
from rrq.client import RRQClient
async def main():
client = RRQClient(config_path="rrq.toml")
job_id = await client.enqueue(
"send_welcome_email",
{
"params": {
"email": "user@example.com",
"template": "welcome",
}
},
)
print(f"Enqueued job: {job_id}")
await client.close()
asyncio.run(main())
```
## Features
### Scheduled Jobs
Delay job execution:
```python
# Run in 5 minutes
await client.enqueue("cleanup", {"defer_by_seconds": 300})
# Run at a specific time
from datetime import datetime, timezone
await client.enqueue(
"report",
{"defer_until": datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)},
)
```
### Unique Jobs (Idempotency)
Prevent duplicate jobs:
```python
# Only one job with this key will be enqueued
await client.enqueue_with_unique_key(
"process_order",
"order-123",
{"params": {"order_id": "123"}},
)
```
### Rate Limiting
Limit how often a job can run:
```python
job_id = await client.enqueue_with_rate_limit(
"sync_user",
{
"params": {"user_id": "456"},
"rate_limit_key": "user-456",
"rate_limit_seconds": 60,
},
)
if job_id is None:
print("Rate limited, try again later")
```
### Debouncing
Delay and deduplicate rapid job submissions:
```python
# Only the last enqueue within the window will execute
await client.enqueue_with_debounce(
"save_document",
{
"params": {"doc_id": "789"},
"debounce_key": "doc-789",
"debounce_seconds": 5,
},
)
```
### Cron Jobs
Schedule recurring jobs in `rrq.toml`:
```toml
[[rrq.cron_jobs]]
function_name = "daily_report"
schedule = "0 0 9 * * *" # 9 AM daily (6-field cron with seconds)
queue_name = "scheduled"
```
### Job Status
Check job progress:
```python
status = await client.get_job_status(job_id)
print(f"Status: {status}")
```
### OpenTelemetry
Enable distributed tracing:
```python
from rrq.integrations import otel
otel.enable(service_name="my-service")
```
Traces propagate from producer → orchestrator → runner automatically.
## Configuration Reference
See [docs/CONFIG_REFERENCE.md](docs/CONFIG_REFERENCE.md) for the full TOML schema.
Key settings:
```toml
[rrq]
redis_dsn = "redis://localhost:6379/0"
default_runner_name = "python"
default_job_timeout_seconds = 300 # 5 minutes
default_max_retries = 5
[rrq.runners.python]
type = "socket"
cmd = ["rrq-runner", "--settings", "myapp.runner:settings"]
tcp_socket = "127.0.0.1:9000"
pool_size = 4 # Number of runner processes
max_in_flight = 10 # Concurrent jobs per runner
```
## Related Packages
| Package | Language | Purpose |
| ----------------------------------------------------- | ---------- | --------------------------------------- |
| [rrq](https://pypi.org/project/rrq/) | Python | Producer client + runner (this package) |
| [rrq-ts](https://www.npmjs.com/package/rrq-ts) | TypeScript | Producer client + runner |
| [rrq](https://crates.io/crates/rrq) | Rust | Orchestrator binary |
| [rrq-producer](https://crates.io/crates/rrq-producer) | Rust | Native producer client |
| [rrq-runner](https://crates.io/crates/rrq-runner) | Rust | Native runner runtime |
## Requirements
- Python 3.11+
- Redis 5.0+
## License
Apache-2.0
| text/markdown | null | Mazdak Rezvani <mazdak@me.com> | null | null | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Distributed Computing",
"Topic :: System :: Monitoring"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.11.4",
"redis[hiredis]>=4.2.0",
"pytest-asyncio>=1.0.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"pytest>=8.3.5; extra == \"dev\"",
"python-dotenv>=1.2.1; extra == \"dev\"",
"ruff==0.14.9; extra == \"dev\"",
"ty==0.0.1-alpha.26; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/getresq/rrq",
"Bug Tracker, https://github.com/getresq/rrq/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:16:11.792064 | rrq-0.10.4-py3-none-manylinux_2_35_x86_64.whl | 7,972,498 | ad/78/7f68ba5e944287e1b514b3abf12f2f3ecb433efd93279f1ea1ac4c3f2033/rrq-0.10.4-py3-none-manylinux_2_35_x86_64.whl | py3 | bdist_wheel | null | false | 30892d75f61285bca23c9230ee78b141 | 85d17d6ca3b7d05697d924dbeb06d2a2a1bd1abbdb1264c0782bca43f66ba846 | ad787f68ba5e944287e1b514b3abf12f2f3ecb433efd93279f1ea1ac4c3f2033 | null | [
"LICENSE"
] | 201 |
2.4 | tree-copy | 0.1.5 | Keyboard-driven file tree sidebar for tmux | # tree-copy

[](https://pypi.org/project/tree-copy/)
[](https://pypi.org/project/tree-copy/)
[](LICENSE)
[](https://pypi.org/project/tree-copy/)
A keyboard-driven file tree sidebar for tmux, built with [Textual](https://github.com/Textualize/textual).
Browse your project, jump between directories, copy paths, and preview files — without leaving the terminal.
**[PyPI](https://pypi.org/project/tree-copy/) · [GitHub](https://github.com/tzafrir/tree-copy)**
## Features
- Navigate the file tree with arrow keys
- Jump between sibling directories with `Shift+↑/↓`
- Copy relative or absolute paths to clipboard
- Preview files with [glow](https://github.com/charmbracelet/glow) (falls back to `less`)
- Edit files with `$TREE_COPY_EDITOR` (falls back to nano → vi)
- Root folder stays open (non-collapsible)
## Requirements
- Python 3.10+
- [glow](https://github.com/charmbracelet/glow) *(optional, recommended for file preview — falls back to `less`)*
## Install
```bash
pip install tree-copy
```
Or from source:
```bash
pip install .
```
## Usage
```bash
tree-copy [directory] # defaults to current directory
```
As a togglable tmux sidebar, add to `~/.tmux.conf`:
```bash
bind-key e run-shell " \
PANE=$(tmux show-option -wqv @tree-copy-pane); \
if [ -n \"$PANE\" ] && tmux list-panes -F '#{pane_id}' | grep -q \"^$PANE\$\"; then \
tmux kill-pane -t \"$PANE\"; \
tmux set-option -w @tree-copy-pane ''; \
else \
NEW_PANE=$(tmux split-window -hbP -l 35 -F '#{pane_id}' \"tree-copy '#{pane_current_path}'\"); \
tmux set-option -w @tree-copy-pane \"$NEW_PANE\"; \
fi"
```
`prefix + e` opens the sidebar; pressing it again closes it. State (expanded folders, cursor position) is saved automatically and restored on next open.
## Keybindings
| Key | Action |
|-----|--------|
| `↑` / `↓` | Navigate |
| `Shift+↑` / `Shift+↓` | Jump between sibling directories; moves to parent at bounds |
| `Enter` / `Space` | Toggle directory open/close |
| `o` | Preview file (glow if available, else less) |
| `e` | Edit file (`$TREE_COPY_EDITOR`, else nano, else vi) |
| `c` | Copy relative path to clipboard |
| `C` | Copy absolute path to clipboard |
| `q` / `Esc` | Quit |
## License
[MIT](LICENSE)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"textual==8.0.0",
"watchdog>=4.0",
"textual-serve>=1.1; extra == \"serve\""
] | [] | [] | [] | [
"Homepage, https://github.com/tzafrir/tree-copy",
"Repository, https://github.com/tzafrir/tree-copy",
"Bug Tracker, https://github.com/tzafrir/tree-copy/issues",
"PyPI, https://pypi.org/project/tree-copy/"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T15:16:04.665664 | tree_copy-0.1.5.tar.gz | 7,961 | 08/31/be7cbbb1e9ce014a6eaa6000584013b7c460b301937e1fb1dcdb420ecccf/tree_copy-0.1.5.tar.gz | source | sdist | null | false | 5254b4ac1d4325e15680e3fa292cc40e | 5c83cc875bba158acde92c5f619108addde945b6d4e4eaaa3b67eac3642c43e0 | 0831be7cbbb1e9ce014a6eaa6000584013b7c460b301937e1fb1dcdb420ecccf | MIT | [
"LICENSE"
] | 156 |
2.4 | cypher_validator | 0.9.0 | Rust-accelerated Cypher query validator, generator, and NL-to-Cypher pipeline via GLiNER2 | # cypher_validator
A fast, schema-aware **Cypher query validator and generator** with optional **GLiNER2 relation-extraction** and **Graph RAG** support for LLM-driven graph database applications.
The core parser and validator are written in **Rust** (via [pyo3](https://pyo3.rs/) and [maturin](https://github.com/PyO3/maturin)) for performance. The GLiNER2 integration layer is pure Python and is an optional add-on.
---
## Table of contents
- [Features](#features)
- [Installation](#installation)
- [Quick start](#quick-start)
- [Core API](#core-api)
- [Schema](#schema)
- [CypherValidator](#cyphervalidator)
- [ValidationResult](#validationresult)
- [CypherGenerator](#cyphergenerator)
- [parse\_query / QueryInfo](#parse_query--queryinfo)
- [GLiNER2 integration](#gliner2-integration)
- [**NLToCypher** ← start here](#nltocypher)
- [DB-aware query generation](#db-aware-query-generation)
- [EntityNERExtractor](#entitynerextractor)
- [GLiNER2RelationExtractor](#gliner2relationextractor)
- [RelationToCypherConverter](#relationtocypherconverter)
- [Neo4jDatabase](#neo4jdatabase)
- [LLM integration](#llm-integration)
- [Schema prompt helpers](#schema-prompt-helpers)
- [extract\_cypher\_from\_text](#extract_cypher_from_text)
- [repair\_cypher](#repair_cypher)
- [format\_records](#format_records)
- [few\_shot\_examples](#few_shot_examples)
- [cypher\_tool\_spec](#cypher_tool_spec)
- [GraphRAGPipeline](#graphragpipeline)
- [What the validator checks](#what-the-validator-checks)
- [Generated query types](#generated-query-types)
- [Performance](#performance)
- [Type stubs and IDE support](#type-stubs-and-ide-support)
- [Project structure](#project-structure)
- [Examples](#examples)
- [Development](#development)
---
## Features
| Capability | Description |
|---|---|
| **Syntax validation** | Parses Cypher with a hand-written PEG grammar (Pest) and surfaces clear syntax errors |
| **Semantic validation** | Checks node labels, relationship types, properties, endpoint labels, relationship direction, and unbound variables against a user-supplied schema |
| **"Did you mean?" suggestions** | Typos in labels and relationship types produce helpful suggestions (e.g. `:Preson → did you mean :Person?`) via capped Levenshtein edit-distance |
| **Batch validation** | `validate_batch()` validates many queries in parallel using Rayon, releasing the Python GIL for the duration |
| **Query generation** | Generates syntactically correct and schema-valid Cypher queries for 13 common patterns |
| **Schema-free parsing** | Extracts labels, relationship types, and property keys from any query without requiring a schema |
| **Schema serialization** | `Schema.to_dict()`, `from_dict()`, `to_json()`, `from_json()`, and `merge()` for complete schema lifecycle management |
| **NL → Cypher** | Converts GLiNER2 relation-extraction output to MATCH / MERGE / CREATE queries with automatic deduplication |
| **DB-aware generation** | `db_aware=True` looks up every extracted entity in Neo4j before query generation — existing nodes are MATCHed, new ones are CREATEd inline, preventing duplicate nodes |
| **NER entity extraction** | `EntityNERExtractor` wraps spaCy or any HuggingFace Transformers NER pipeline to enrich entity-label resolution during DB-aware generation |
| **Zero-shot RE** | Wraps the `gliner2` model for natural-language relation extraction (optional) |
| **LLM schema context** | `to_prompt()`, `to_markdown()`, `to_cypher_context()` format the schema for LLM system prompts |
| **Cypher extraction** | `extract_cypher_from_text()` pulls Cypher out of any LLM response (fenced blocks, inline, plain text) |
| **Self-repair loop** | `repair_cypher()` feeds validation errors back to an LLM for iterative self-correction |
| **Result formatting** | `format_records()` renders Neo4j results as Markdown, CSV, JSON, or plain text for LLM context |
| **Few-shot examples** | `few_shot_examples()` auto-generates (description, Cypher) pairs for LLM prompting |
| **Tool spec builder** | `cypher_tool_spec()` produces Anthropic / OpenAI function-calling schemas for Cypher execution |
| **Graph RAG pipeline** | `GraphRAGPipeline` chains schema injection → Cypher generation → validation → execution → answer |
| **Schema introspection** | `Neo4jDatabase.introspect_schema()` discovers the live DB schema automatically |
| **Type stubs** | Full `.pyi` stub files for IDE autocompletion and mypy / pyright type checking |
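The "did you mean?" suggestions can be reproduced with a capped Levenshtein edit distance. A pure-Python sketch of the idea (the real implementation lives in the Rust core; the function names here are illustrative):

```python
def capped_levenshtein(a, b, cap=2):
    """Levenshtein edit distance that bails out early (returning cap + 1)
    once the current row's minimum already exceeds the cap."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(
                prev[j] + 1,               # deletion
                cur[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        if min(cur) > cap:                 # row minimum is a lower bound
            return cap + 1
        prev = cur
    return prev[-1]


def suggest(unknown, known, cap=2):
    """Return the closest known name within `cap` edits, else None."""
    scored = sorted((capped_levenshtein(unknown, k, cap), k) for k in known)
    dist, best = scored[0]
    return best if dist <= cap else None


print(suggest("Preson", ["Person", "Movie"]))        # Person
print(suggest("ACTEDIN", ["ACTED_IN", "DIRECTED"]))  # ACTED_IN
```

Capping the distance keeps suggestion lookup cheap: most candidate labels are rejected after scanning only a few rows of the dynamic-programming table.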
---
## Installation
### Prerequisites
- Python ≥ 3.8
- Rust toolchain (for building from source — `rustup.rs`)
- `maturin` (install via `pip install maturin`)
### From source
```bash
# Clone the repository
git clone <repo-url>
cd cypher_validator
# Build and install in editable/development mode
maturin develop
# Or build an optimised release wheel
maturin build --release
pip install dist/cypher_validator-*.whl
```
### Optional dependencies
```bash
# Neo4j driver (required for execute=True and db_aware=True)
pip install "cypher_validator[neo4j]"
# NER with spaCy (EntityNERExtractor.from_spacy)
pip install "cypher_validator[ner-spacy]"
python -m spacy download en_core_web_sm # or en_core_web_trf for transformer accuracy
# NER with HuggingFace Transformers (EntityNERExtractor.from_transformers)
pip install "cypher_validator[ner-transformers]"
# Everything at once
pip install "cypher_validator[neo4j,ner]"
```
---
## Quick start
```python
from cypher_validator import Schema, CypherValidator, CypherGenerator, parse_query
# 1. Define your graph schema
schema = Schema(
nodes={
"Person": ["name", "age"],
"Movie": ["title", "year"],
},
relationships={
# rel_type: (source_label, target_label, [properties])
"ACTED_IN": ("Person", "Movie", ["role"]),
"DIRECTED": ("Person", "Movie", []),
},
)
# 2. Validate a single query
validator = CypherValidator(schema)
result = validator.validate("MATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN p.name, m.title")
print(result.is_valid) # True
# 3. Validate with errors — get "did you mean?" suggestions
result = validator.validate("MATCH (p:Preson)-[:ACTEDIN]->(m:Movie) RETURN p")
print(result.is_valid) # False
print(result.errors)
# ["Unknown node label: :Preson — did you mean :Person?",
# "Unknown relationship type: :ACTEDIN — did you mean :ACTED_IN?"]
# 4. Validate multiple queries in parallel
results = validator.validate_batch([
"MATCH (p:Person) RETURN p",
"MATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN p.name",
"MATCH (x:BadLabel) RETURN x",
])
for r in results:
print(r.is_valid, r.errors)
# 5. Round-trip schema serialization
d = schema.to_dict()
schema2 = Schema.from_dict(d)
# … or as a JSON string
json_str = schema.to_json()
schema3 = Schema.from_json(json_str)
# 6. Merge two schemas
s_extra = Schema({"Director": ["name"]}, {"DIRECTED": ("Director", "Movie", [])})
merged = schema.merge(s_extra) # union of labels, types, and properties
# 7. Generate random valid queries
gen = CypherGenerator(schema, seed=42)
print(gen.generate("match_relationship"))
# MATCH (a:Person)-[r:ACTED_IN]->(b:Movie) RETURN a, r, b
# Generate many queries at once (avoids per-call Python overhead)
batch = gen.generate_batch("match_return", 100)
# 8. NL → Cypher with GLiNER2 (no boilerplate — this is the recommended entry point)
from cypher_validator import NLToCypher
pipeline = NLToCypher.from_pretrained("fastino/gliner2-large-v1", schema=schema)
cypher = pipeline(
"Tom Hanks acted in Cast Away.",
["acted_in"],
mode="merge",
)
# MERGE (a0:Person {name: $a0_val})-[:ACTED_IN]->(b0:Movie {name: $b0_val})
# RETURN a0, b0
# 9. Parse without a schema — also extracts property keys
info = parse_query("MATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN p.name, m.year")
print(info.is_valid) # True
print(info.labels_used) # ['Movie', 'Person']
print(info.rel_types_used) # ['ACTED_IN']
print(info.properties_used) # ['name', 'year']
```
---
## Core API
### Schema
Describes the graph model: node labels with their allowed properties, and relationship types with their source label, target label, and allowed properties.
```python
from cypher_validator import Schema
schema = Schema(
nodes={
"Person": ["name", "age", "email"],
"Company": ["name", "founded"],
"City": ["name", "population"],
},
relationships={
# "REL_TYPE": ("SourceLabel", "TargetLabel", ["prop1", "prop2"])
"WORKS_FOR": ("Person", "Company", ["since", "role"]),
"LIVES_IN": ("Person", "City", []),
"HEADQUARTERED_IN": ("Company", "City", []),
},
)
```
**Schema inspection methods:**
```python
schema.node_labels() # ["City", "Company", "Person"]
schema.rel_types() # ["HEADQUARTERED_IN", "LIVES_IN", "WORKS_FOR"]
schema.has_node_label("Person") # True
schema.has_rel_type("WORKS_FOR") # True
schema.node_properties("Person") # ["name", "age", "email"]
schema.rel_properties("WORKS_FOR") # ["since", "role"]
schema.rel_endpoints("WORKS_FOR") # ("Person", "Company")
```
**Round-trip serialization:**
```python
# Export to a plain Python dict (JSON-serializable)
d = schema.to_dict()
# {
# "nodes": {"Person": ["name", "age", "email"], ...},
# "relationships": {"WORKS_FOR": ["Person", "Company", ["since", "role"]], ...}
# }
# Reconstruct from a dict (e.g. loaded from JSON)
import json
with open("schema.json") as f:
schema2 = Schema.from_dict(json.load(f))
# or directly
schema2 = Schema.from_dict(d)
```
**JSON serialization (`to_json` / `from_json`):**
```python
# Serialise to a compact JSON string
json_str = schema.to_json()
# '{"nodes":{"Person":["age","email","name"],...},...}'
# Restore from the JSON string
schema2 = Schema.from_json(json_str)
# Store/load via a file
with open("schema.json", "w") as f:
f.write(schema.to_json())
with open("schema.json") as f:
schema3 = Schema.from_json(f.read())
```
**Merging schemas (`merge`):**
```python
s1 = Schema(
nodes={"Person": ["name", "age"]},
relationships={"KNOWS": ("Person", "Person", [])},
)
s2 = Schema(
nodes={"Movie": ["title"], "Person": ["email"]}, # Person gets extra property
relationships={"ACTED_IN": ("Person", "Movie", ["role"])},
)
merged = s1.merge(s2)
merged.node_labels() # ["Movie", "Person"]
merged.node_properties("Person") # ["age", "email", "name"] ← union
merged.rel_types() # ["ACTED_IN", "KNOWS"]
```
`Schema.from_dict()` is the preferred way to restore a schema from a plain dict produced by `to_dict()`.
---
### CypherValidator
Validates Cypher queries against a `Schema` in two phases:
1. **Syntax** — Parses the query with the Cypher PEG grammar.
2. **Semantic** — Checks labels, types, properties, directions, and variable scopes against the schema. Typos in labels and relationship types trigger a "did you mean?" suggestion using capped Levenshtein edit-distance.
```python
from cypher_validator import CypherValidator
validator = CypherValidator(schema)
# Single query
result = validator.validate("MATCH (p:Person)-[:WORKS_FOR]->(c:Company) RETURN p.name")
print(result.is_valid) # True
# Batch validation — parallel Rayon execution, GIL released for the duration
results = validator.validate_batch([
"MATCH (p:Person) RETURN p.name",
"MATCH (p:Person)-[:WORKS_FOR]->(c:Company) RETURN p, c",
"MATCH (x:Employe) RETURN x", # typo
])
for r in results:
print(r.is_valid, r.errors)
# True []
# True []
# False ["Unknown node label: :Employe — did you mean :Employee?"]
```
**`validate_batch()` notes:**
- Accepts a `list[str]` and returns a `list[ValidationResult]` in the same order.
- Validation is done in parallel on the Rust side using [Rayon](https://github.com/rayon-rs/rayon).
- The Python GIL is released for the entire batch, so other Python threads (e.g. an asyncio event loop) are not blocked.
- There is no minimum batch size; the parallel dispatch overhead is negligible, so it is safe to use even for a single query or a short list.
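Because the batch call releases the GIL, the usual asyncio pattern applies: push it onto a worker thread with `asyncio.to_thread` and the event loop keeps servicing other coroutines. A minimal sketch of that pattern, with a pure-Python stub standing in for a real `validator.validate_batch` (the stub is illustrative, not part of the library):

```python
import asyncio

def validate_batch_stub(queries):
    # Stand-in for validator.validate_batch(queries); the real call
    # releases the GIL and runs the batch in parallel on the Rust side.
    return [q.startswith("MATCH") for q in queries]

async def main():
    queries = ["MATCH (p:Person) RETURN p", "NOT CYPHER"]
    # Run the blocking batch call in a worker thread; the event loop
    # stays free while the batch executes.
    return await asyncio.to_thread(validate_batch_stub, queries)

print(asyncio.run(main()))  # [True, False]
```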
---
### ValidationResult
Returned by both `CypherValidator.validate()` and each element of `CypherValidator.validate_batch()`.
| Attribute | Type | Description |
|---|---|---|
| `is_valid` | `bool` | `True` when no errors were found |
| `errors` | `list[str]` | All errors combined (syntax + semantic) |
| `syntax_errors` | `list[str]` | Parse / grammar errors only |
| `semantic_errors` | `list[str]` | Schema-level errors only |
Also supports `bool(result)` and `len(result)`:
```python
result = validator.validate(query)
if result:
print("Valid!")
else:
print(f"{len(result)} error(s):")
for err in result.errors:
print(" -", err)
# Categorised errors
print(result.syntax_errors) # e.g. ["Parse error: …"]
print(result.semantic_errors) # e.g. ["Unknown node label: :Foo — did you mean :Bar?"]
```
**Example error messages:**
```
# Unknown label — with suggestion
"Unknown node label: :Preson — did you mean :Person?"
# Unknown label — no close match
"Unknown node label: :Actor"
# Unknown relationship type — with suggestion
"Unknown relationship type: :ACTEDIN — did you mean :ACTED_IN?"
# Property not in schema
"Unknown property 'salary' for node label :Person"
# Wrong relationship direction / endpoints
"Relationship :ACTED_IN expects source label :Person, but node has label(s): :Movie"
"Relationship :ACTED_IN expects target label :Movie, but node has label(s): :Person"
# Unbound variable
"Variable 'x' is not bound in this scope"
# WITH scope reset
"Variable 'n' is not bound in this scope" # used after WITH that didn't project it
# Label used in SET/REMOVE
"Unknown node label: :Managr — did you mean :Manager?"
```
"Did you mean?" suggestions appear when a misspelled label or relationship type has a Levenshtein edit-distance of ≤ 2 from a known schema entry (case-insensitive comparison).
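The suggestion mechanism can be sketched in pure Python. This is an illustrative re-implementation of the behaviour described above, not the library's actual Rust code:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(unknown, known, cap=2):
    # Case-insensitive comparison; only suggest within the cap.
    best = min(known, key=lambda k: edit_distance(unknown.lower(), k.lower()))
    return best if edit_distance(unknown.lower(), best.lower()) <= cap else None

print(suggest("Preson", ["Person", "Movie"]))        # Person
print(suggest("ACTEDIN", ["ACTED_IN", "DIRECTED"]))  # ACTED_IN
print(suggest("Actor", ["Person", "Movie"]))         # None (no close match)
```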
---
### CypherGenerator
Generates syntactically correct, schema-valid Cypher queries for rapid prototyping, testing, and dataset creation.
```python
from cypher_validator import CypherGenerator
gen = CypherGenerator(schema) # random seed each run
gen = CypherGenerator(schema, seed=42) # deterministic / reproducible output
query = gen.generate("match_return")
# "MATCH (n:Person) RETURN n LIMIT 17"
query = gen.generate("order_by")
# "MATCH (n:Movie) RETURN n ORDER BY n.year DESC LIMIT 5"
query = gen.generate("distinct_return")
# "MATCH (n:Person) RETURN DISTINCT n.name"
query = gen.generate("unwind")
# "MATCH (n:Person) UNWIND n.name AS item RETURN item"
# List all supported patterns
CypherGenerator.supported_types()
# ['match_return', 'match_where_return', 'create', 'merge', 'aggregation',
# 'match_relationship', 'create_relationship', 'match_set', 'match_delete',
# 'with_chain', 'distinct_return', 'order_by', 'unwind']
# Generate many queries in one call (avoids per-call Python overhead)
queries = gen.generate_batch("match_return", 500) # list[str], len == 500
```
**Supported query types (13 total):**
| Type | Example output |
|---|---|
| `match_return` | `MATCH (n:Movie) RETURN n` |
| `match_where_return` | `MATCH (n:Person) WHERE n.name = "Alice" RETURN n.name` |
| `create` | `CREATE (n:Person {name: "Bob", age: 42}) RETURN n` |
| `merge` | `MERGE (n:Movie {title: $value}) RETURN n` |
| `aggregation` | `MATCH (n:Person) RETURN count(n.age) AS result` |
| `match_relationship` | `OPTIONAL MATCH (a:Person)-[r:ACTED_IN]->(b:Movie) RETURN a, r, b` |
| `create_relationship` | `MATCH (a:Person),(b:Movie) CREATE (a)-[r:ACTED_IN]->(b) RETURN r` |
| `match_set` | `MATCH (n:Person) SET n.name = "Carol" RETURN n` |
| `match_delete` | `MATCH (n:Movie) DETACH DELETE n` |
| `with_chain` | `MATCH (n:Person) WITH n.name AS val RETURN count(*)` |
| `distinct_return` | `MATCH (n:Person) RETURN DISTINCT n.name LIMIT 10` |
| `order_by` | `MATCH (n:Movie) RETURN n ORDER BY n.year DESC LIMIT 25` |
| `unwind` | `MATCH (n:Person) UNWIND n.age AS item RETURN item` |
Generated scalar values cycle through string literals (`"Alice"`), integers (`42`), booleans (`true`/`false`), and parameters (`$name`). `OPTIONAL MATCH` and `LIMIT` clauses are randomly included. All generated queries are guaranteed to pass `CypherValidator` with the same schema.
---
### parse\_query / QueryInfo
Parse a Cypher string and extract structural information **without needing a schema**.
```python
from cypher_validator import parse_query
info = parse_query("MATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN p.name, m.year")
info.is_valid # True
info.errors # []
info.labels_used # ["Movie", "Person"] (sorted, deduplicated)
info.rel_types_used # ["ACTED_IN"] (sorted, deduplicated)
info.properties_used # ["name", "year"] (sorted, deduplicated)
bool(info) # True (same as is_valid)
# Invalid query
info = parse_query("THIS IS NOT CYPHER")
info.is_valid # False
info.errors # ["Parse error: …"]
```
`QueryInfo` attributes:
| Attribute | Type | Description |
|---|---|---|
| `is_valid` | `bool` | Syntax check result |
| `errors` | `list[str]` | Syntax error messages (empty when valid) |
| `labels_used` | `list[str]` | Sorted, deduplicated node labels referenced in the query |
| `rel_types_used` | `list[str]` | Sorted, deduplicated relationship types referenced in the query |
| `properties_used` | `list[str]` | Sorted, deduplicated property keys accessed anywhere in the query |
`properties_used` collects every property key that appears in the query — in `RETURN`, `WHERE`, `SET`, inline node/relationship maps, `WITH`, `ORDER BY`, list comprehensions, etc. This is useful for dependency analysis, schema introspection, and query auditing without a schema.
```python
# Complex query — all property accesses are captured
info = parse_query("""
MATCH (p:Person {age: 30})-[r:WORKS_FOR {since: 2020}]->(c:Company)
WHERE p.name = "Alice" AND c.founded > 2000
WITH p, p.email AS contact
RETURN contact, c.name
ORDER BY c.name
""")
info.properties_used
# ['age', 'email', 'founded', 'name', 'since']
```
---
## GLiNER2 integration
The GLiNER2 integration converts relation-extraction results into Cypher queries.
> **Most users should start with [`NLToCypher`](#nltocypher)** — it wraps all three classes into a single callable that goes from raw text to Cypher (and optionally executes it against Neo4j) in one line.
> `RelationToCypherConverter` and `GLiNER2RelationExtractor` are lower-level building blocks exposed for advanced use cases.
### RelationToCypherConverter
A **pure-Python** class that converts any dict matching the GLiNER2 output format into a Cypher query. No ML model required.
```python
from cypher_validator import RelationToCypherConverter, Schema
# GLiNER2 extraction result
results = {
"relation_extraction": {
"works_for": [("John", "Apple Inc.")],
"lives_in": [("John", "San Francisco")],
"founded": [], # requested but not found in text
}
}
# Without schema (no node labels)
converter = RelationToCypherConverter()
# All three methods return (cypher_str, params_dict) — entity values are
# passed as $param placeholders to prevent Cypher injection.
# ── MATCH mode (find existing data) ────────────────────────────────────────
cypher, params = converter.to_match_query(results)
print(cypher)
# MATCH (a0 {name: $a0_val})-[:WORKS_FOR]->(b0 {name: $b0_val})
# MATCH (a1 {name: $a1_val})-[:LIVES_IN]->(b1 {name: $b1_val})
# RETURN a0, b0, a1, b1
print(params)
# {"a0_val": "John", "b0_val": "Apple Inc.", "a1_val": "John", "b1_val": "San Francisco"}
# ── MERGE mode (upsert) ────────────────────────────────────────────────────
cypher, params = converter.to_merge_query(results)
# ── CREATE mode (insert new) ───────────────────────────────────────────────
cypher, params = converter.to_create_query(results)
# ── Unified dispatcher ─────────────────────────────────────────────────────
cypher, params = converter.convert(results, mode="merge")
# Pass both to Neo4jDatabase.execute() — the driver handles escaping:
# results = db.execute(cypher, params)
```
**With a schema** — node labels are added automatically:
```python
schema = Schema(
nodes={"Person": ["name"], "Company": ["name"], "City": ["name"]},
relationships={
"WORKS_FOR": ("Person", "Company", []),
"LIVES_IN": ("Person", "City", []),
},
)
converter = RelationToCypherConverter(schema=schema)
cypher, params = converter.to_merge_query(results)
print(cypher)
# MERGE (a0:Person {name: $a0_val})-[:WORKS_FOR]->(b0:Company {name: $b0_val})
# MERGE (a1:Person {name: $a1_val})-[:LIVES_IN]->(b1:City {name: $b1_val})
# RETURN a0, b0, a1, b1
```
**Multiple pairs of the same relation type:**
```python
results = {
"relation_extraction": {
"works_for": [
("John", "Microsoft"),
("Mary", "Google"),
("Bob", "Apple"),
],
}
}
cypher, params = converter.to_merge_query(results)
print(cypher)
# MERGE (a0:Person {name: $a0_val})-[:WORKS_FOR]->(b0:Company {name: $b0_val})
# MERGE (a1:Person {name: $a1_val})-[:WORKS_FOR]->(b1:Company {name: $b1_val})
# MERGE (a2:Person {name: $a2_val})-[:WORKS_FOR]->(b2:Company {name: $b2_val})
# RETURN a0, b0, a1, b1, a2, b2
print(params)
# {"a0_val": "John", "b0_val": "Microsoft", "a1_val": "Mary", ...}
```
**Automatic deduplication:**
If the same `(subject, object)` pair appears more than once for a given relation type (as can happen with overlapping model detections), the converter silently deduplicates it: each unique `(subject, relation, object)` triple appears exactly once in the generated Cypher:
```python
results = {
"relation_extraction": {
"works_for": [
("John", "Apple"), # first occurrence
("John", "Apple"), # duplicate — skipped
("Mary", "Apple"), # different subject — kept
]
}
}
# Only two MERGE clauses are emitted, not three
```
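The deduplication itself amounts to an order-preserving unique pass over the pair list, which in Python is one line with `dict.fromkeys` (an illustrative equivalent, not the library's code):

```python
pairs = [("John", "Apple"), ("John", "Apple"), ("Mary", "Apple")]
# dicts preserve insertion order, so duplicates drop out without reordering
unique_pairs = list(dict.fromkeys(pairs))
print(unique_pairs)  # [('John', 'Apple'), ('Mary', 'Apple')]
```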
**Constructor parameters:**
| Parameter | Default | Description |
|---|---|---|
| `schema` | `None` | `cypher_validator.Schema` for label-aware generation |
| `name_property` | `"name"` | Node property key used for entity text spans |
**`convert()` parameters:**
| Parameter | Default | Description |
|---|---|---|
| `relations` | required | GLiNER2 output dict |
| `mode` | `"match"` | `"match"`, `"merge"`, or `"create"` |
| `return_clause` | auto | Custom `RETURN …` tail (e.g. `"RETURN *"`) |
**`to_db_aware_query()` — advanced low-level use:**
If you want to generate a MATCH/CREATE query without going through `NLToCypher`, you can call `to_db_aware_query()` directly after building the entity status dict yourself:
```python
converter = RelationToCypherConverter(schema=schema)
# entity_status maps each entity name to its DB lookup result
entity_status = {
"John": {"var": "e0", "label": "Person", "param_key": "e0_val",
"found": True, "introduced": False}, # exists in DB
"Apple Inc.": {"var": "e1", "label": "Company", "param_key": "e1_val",
"found": False, "introduced": False}, # new
}
relations = {"relation_extraction": {"works_for": [("John", "Apple Inc.")]}}
cypher, params = converter.to_db_aware_query(relations, entity_status)
# MATCH (e0:Person {name: $e0_val})
# CREATE (e0)-[:WORKS_FOR]->(e1:Company {name: $e1_val})
# RETURN e0, e1
```
`NLToCypher._collect_entity_status()` builds this dict automatically from a live DB when `db_aware=True` is used — the low-level API is exposed for cases where you control the lookup yourself.
---
### DB-aware query generation
> **`db_aware=True`** is the flag that makes `NLToCypher` graph-state-aware.
> Without it, every call blindly CREATEs all entities, producing **duplicate nodes** for entities that already exist in the database.
> With it, each entity is looked up first and either MATCHed (existing) or CREATEd (new).
#### How it works
When `db_aware=True` is passed to `__call__()` or `extract_and_convert()`:
1. Relations are extracted from the text (same as normal).
2. Each unique entity is identified and its label is resolved from the schema (and optionally enriched by an `EntityNERExtractor`).
3. A `MATCH (n:Label {name: $val}) RETURN elementId(n) LIMIT 1` query is sent to Neo4j for each entity.
4. A mixed query is generated:
- **Existing entities** → `MATCH (eN:Label {name: $eN_val})` at the top.
- **New entities** → `CREATE (eN:Label {name: $eN_val})` inline on first use; subsequent relations reuse the bare variable `eN`.
- **Relationship edges** → always `CREATE (eA)-[:REL]->(eB)`.
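The mixed-query assembly in step 4 can be sketched as follows. This is an illustrative model of the documented output shape, not the library's implementation; `entity_status` uses the same dict shape as the `to_db_aware_query()` example above:

```python
def build_mixed_query(entity_status, triples, name_prop="name"):
    # Existing entities are MATCHed up front; new ones are CREATEd inline
    # on first use, then referenced by bare variable afterwards.
    lines, introduced = [], set()
    for name, info in entity_status.items():
        if info["found"]:
            lines.append(f"MATCH ({info['var']}:{info['label']} "
                         f"{{{name_prop}: ${info['param_key']}}})")
            introduced.add(name)

    def ref(name):
        info = entity_status[name]
        if name in introduced:
            return info["var"]                      # already bound: bare variable
        introduced.add(name)                        # first use: inline CREATE pattern
        return f"{info['var']}:{info['label']} {{{name_prop}: ${info['param_key']}}}"

    for subj, rel, obj in triples:
        lines.append(f"CREATE ({ref(subj)})-[:{rel}]->({ref(obj)})")
    lines.append("RETURN " + ", ".join(i["var"] for i in entity_status.values()))
    return "\n".join(lines)

status = {
    "John": {"var": "e0", "label": "Person", "param_key": "e0_val", "found": True},
    "Apple Inc.": {"var": "e1", "label": "Company", "param_key": "e1_val", "found": False},
}
print(build_mixed_query(status, [("John", "WORKS_FOR", "Apple Inc.")]))
# MATCH (e0:Person {name: $e0_val})
# CREATE (e0)-[:WORKS_FOR]->(e1:Company {name: $e1_val})
# RETURN e0, e1
```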
#### All four entity-existence combinations
```python
from cypher_validator import NLToCypher, Neo4jDatabase, Schema
schema = Schema(
nodes={"Person": ["name"], "Company": ["name"]},
relationships={"WORKS_FOR": ("Person", "Company", [])},
)
db = Neo4jDatabase("bolt://localhost:7687", "neo4j", "password")
pipeline = NLToCypher.from_pretrained("fastino/gliner2-large-v1", schema=schema, db=db)
```
**Case 1 — neither entity exists:**
```python
cypher = pipeline("John works for Apple Inc.", ["works_for"], db_aware=True)
# CREATE (e0:Person {name: $e0_val})-[:WORKS_FOR]->(e1:Company {name: $e1_val})
# RETURN e0, e1
```
**Case 2 — subject (John) exists, object is new:**
```python
# (John was previously inserted into the DB)
cypher = pipeline("John works for Apple Inc.", ["works_for"], db_aware=True)
# MATCH (e0:Person {name: $e0_val})
# CREATE (e0)-[:WORKS_FOR]->(e1:Company {name: $e1_val})
# RETURN e0, e1
```
**Case 3 — object (Apple Inc.) exists, subject is new:**
```python
cypher = pipeline("John works for Apple Inc.", ["works_for"], db_aware=True)
# MATCH (e1:Company {name: $e1_val})
# CREATE (e0:Person {name: $e0_val})-[:WORKS_FOR]->(e1)
# RETURN e1, e0
```
**Case 4 — both exist:**
```python
cypher = pipeline("John works for Apple Inc.", ["works_for"], db_aware=True)
# MATCH (e0:Person {name: $e0_val})
# MATCH (e1:Company {name: $e1_val})
# CREATE (e0)-[:WORKS_FOR]->(e1)
# RETURN e0, e1
```
#### Multiple relations — shared entity reuse
The same entity variable is introduced once and reused across all relations, regardless of how many it participates in:
```python
schema = Schema(
nodes={"Person": ["name"], "Company": ["name"], "City": ["name"]},
relationships={
"WORKS_FOR": ("Person", "Company", []),
"LIVES_IN": ("Person", "City", []),
},
)
pipeline = NLToCypher.from_pretrained(..., schema=schema, db=db)
# John exists in DB; Apple Inc. and San Francisco are new
cypher = pipeline(
"John works for Apple Inc. and lives in San Francisco.",
["works_for", "lives_in"],
db_aware=True,
)
# MATCH (e0:Person {name: $e0_val}) ← John MATCHed once
# CREATE (e0)-[:WORKS_FOR]->(e1:Company {name: $e1_val}) ← Apple created inline
# CREATE (e0)-[:LIVES_IN]->(e2:City {name: $e2_val}) ← SF created inline, e0 reused
# RETURN e0, e1, e2
```
When all three exist:
```python
# MATCH (e0:Person {name: $e0_val})
# MATCH (e1:Company {name: $e1_val})
# MATCH (e2:City {name: $e2_val})
# CREATE (e0)-[:WORKS_FOR]->(e1) ← only edges are created
# CREATE (e0)-[:LIVES_IN]->(e2)
# RETURN e0, e1, e2
```
#### Combine `db_aware` with `execute`
```python
# Look up entities, generate query, AND execute it — all in one call
cypher, records = pipeline(
"John works for Apple Inc.",
["works_for"],
db_aware=True,
execute=True,
)
# cypher → mixed MATCH/CREATE string
# records → [{"e0": <Node John>, "e1": <Node Apple Inc.>}]
```
#### Why this matters vs. plain `execute=True`
```python
# ── Without db_aware (legacy) ─────────────────────────────────────────────
# John is already in the DB — this creates a second John node:
cypher, _ = pipeline("John works for Apple Inc.", ["works_for"],
mode="create", execute=True)
# CREATE (a0:Person {name: $a0_val})-[:WORKS_FOR]->(b0:Company {name: $b0_val})
# → John now appears TWICE in the database ✗
# ── With db_aware ─────────────────────────────────────────────────────────
cypher, _ = pipeline("John works for Apple Inc.", ["works_for"],
db_aware=True, execute=True)
# MATCH (e0:Person {name: $e0_val})
# CREATE (e0)-[:WORKS_FOR]->(e1:Company {name: $e1_val})
# → John reused, no duplicate ✓
```
---
### EntityNERExtractor
An optional NER wrapper that enriches entity-label resolution during DB-aware query generation. Useful when the schema is absent or incomplete, or when you want finer-grained entity typing (e.g. distinguishing `Person` from `Organization` for entities that appear as arguments of an unknown relation type).
Supports two backends — **spaCy** (fast, CPU-friendly) and **HuggingFace Transformers** (higher accuracy, GPU-optional):
```python
from cypher_validator import EntityNERExtractor
```
#### spaCy backend
```python
# pip install "cypher_validator[ner-spacy]"
# python -m spacy download en_core_web_sm
ner = EntityNERExtractor.from_spacy("en_core_web_sm")
ner.extract("John works for Apple Inc. and lives in San Francisco.")
# [
# {"text": "John", "label": "Person"},
# {"text": "Apple Inc.", "label": "Organization"},
# {"text": "San Francisco", "label": "Location"},
# ]
```
Built-in spaCy label → graph node-label mappings:
| spaCy type | Graph label |
|---|---|
| `PERSON` | `Person` |
| `ORG` | `Organization` |
| `GPE` | `Location` |
| `LOC` | `Location` |
| `FAC` | `Facility` |
| `PRODUCT` | `Product` |
| `EVENT` | `Event` |
| `NORP` | `Group` |
| *(unknown)* | *type capitalised as-is* |
Override or extend with `label_map`:
```python
ner = EntityNERExtractor.from_spacy(
"en_core_web_trf",
label_map={"ORG": "Company", "GPE": "City"}, # override defaults
)
```
#### HuggingFace Transformers backend
```python
# pip install "cypher_validator[ner-transformers]"
# General English NER (default model)
ner = EntityNERExtractor.from_transformers(
"dbmdz/bert-large-cased-finetuned-conll03-english"
)
# High-accuracy general NER
ner = EntityNERExtractor.from_transformers("Jean-Baptiste/roberta-large-ner-english")
# Biomedical NER — fine-tuned model (recommended for medical/scientific graphs)
ner = EntityNERExtractor.from_transformers(
"d4data/biomedical-ner-all",
label_map={
"Medication": "Drug",
"Disease_disorder": "Disease",
"Sign_symptom": "Symptom",
"Biological_structure": "Anatomy",
"Diagnostic_procedure": "Procedure",
},
aggregation_strategy="first", # avoids subword fragments with this model
)
ner.extract("John works for Apple Inc.")
# [{"text": "John", "label": "Person"}, {"text": "Apple Inc.", "label": "Organization"}]
```
> **Note on `dmis-lab/biobert-v1.1`:** This is a *pre-trained language model*, not a fine-tuned NER classifier. When loaded as a token-classification pipeline it outputs generic `LABEL_0` / `LABEL_1` tags with no semantic meaning. Use it with a fully custom `label_map` (e.g. `{"LABEL_0": "BioEntity", "LABEL_1": "BioEntity"}`) if you only need candidate spans, or use a fine-tuned biomedical NER model such as `d4data/biomedical-ner-all` for semantic labels. See [examples/11_biobert_ner.py](examples/11_biobert_ner.py) for a detailed comparison.
Built-in HuggingFace label → graph node-label mappings:
| HF tag | Graph label |
|---|---|
| `PER` / `PERSON` | `Person` |
| `ORG` | `Organization` |
| `LOC` / `GPE` | `Location` |
| `MISC` | `Entity` |
| *(unknown)* | *tag capitalised as-is* |
#### Plugging the NER extractor into `NLToCypher`
When `ner_extractor` is supplied, its labels enrich or **override** schema-based label resolution for DB lookups. This is especially helpful when the schema doesn't cover all relation types:
```python
from cypher_validator import NLToCypher, EntityNERExtractor, Neo4jDatabase
ner = EntityNERExtractor.from_spacy("en_core_web_sm")
db = Neo4jDatabase("bolt://localhost:7687", "neo4j", "password")
pipeline = NLToCypher.from_pretrained(
"fastino/gliner2-large-v1",
schema=schema,
db=db,
ner_extractor=ner, # ← plugged in here
)
cypher = pipeline(
"John works for Apple Inc.",
["works_for"],
db_aware=True,
)
# NER identifies "John" as Person and "Apple Inc." as Organization
# DB lookup uses those labels for the MATCH query
# Generated Cypher uses the schema's "Company" label (schema wins when both provide a label)
```
**Label resolution priority (highest first):**
1. NER extractor label (when `ner_extractor` is set and entity text matches)
2. Schema endpoint label (derived from the relation type)
3. Empty string (no label in pattern)
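The three-step priority can be sketched as a small resolver. Names here are illustrative: `ner_labels` maps entity text to an NER label, and `schema_label` stands for the endpoint label derived from the relation type:

```python
def resolve_label(entity_text, ner_labels, schema_label):
    # 1. NER label wins when the extractor recognised this entity text.
    if entity_text in ner_labels:
        return ner_labels[entity_text]
    # 2. Otherwise fall back to the schema's endpoint label.
    if schema_label:
        return schema_label
    # 3. No label at all: the pattern is emitted without one.
    return ""

print(resolve_label("John", {"John": "Person"}, "Employee"))  # Person
print(resolve_label("Acme", {}, "Company"))                   # Company
print(resolve_label("Thing", {}, ""))                         # (empty string)
```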
**`EntityNERExtractor` API:**
| Method | Description |
|---|---|
| `EntityNERExtractor.from_spacy(model_name, label_map=None)` | Load a spaCy `nlp` model |
| `EntityNERExtractor.from_transformers(model_name, label_map=None, **kwargs)` | Load a HuggingFace NER pipeline |
| `extractor.extract(text)` | Return `list[{"text": str, "label": str}]` |
---
### GLiNER2RelationExtractor
Wraps a loaded `gliner2.GLiNER2` model and normalises its output to the standard format.
```python
from cypher_validator import GLiNER2RelationExtractor
# Load from HuggingFace Hub
extractor = GLiNER2RelationExtractor.from_pretrained("fastino/gliner2-large-v1")
# Or set a custom threshold
extractor = GLiNER2RelationExtractor.from_pretrained(
"fastino/gliner2-large-v1",
threshold=0.7,
)
text = "John works for Apple Inc. and lives in San Francisco."
results = extractor.extract_relations(
text,
relation_types=["works_for", "lives_in", "founded"],
)
# {
# "relation_extraction": {
# "works_for": [("John", "Apple Inc.")],
# "lives_in": [("John", "San Francisco")],
# "founded": [], # requested but not found
# }
# }
# Override threshold for a single call
results = extractor.extract_relations(
text,
["works_for"],
threshold=0.85, # high precision
)
```
**Key behaviours:**
- Every requested relation type is **always present** in the output — missing types get an empty list.
- Works with both the wrapped (`{"relation_extraction": {...}}`) and flat (`{rel_type: [...]}`) model output formats.
- `threshold` set at construction becomes the default; it can be overridden per-call.
| Method / attribute | Description |
|---|---|
| `GLiNER2RelationExtractor.from_pretrained(model_name, threshold=0.5)` | Load from HuggingFace Hub or local path |
| `extractor.extract_relations(text, relation_types, threshold=None)` | Extract relations from text |
| `extractor.threshold` | Instance-level default confidence threshold |
| `GLiNER2RelationExtractor.DEFAULT_MODEL` | `"fastino/gliner2-large-v1"` |
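The first two behaviours (wrapped vs. flat input, every requested type always present) can be sketched as a small normalisation function; this is an illustrative equivalent of the described behaviour, not the wrapper's code:

```python
def normalise(raw, requested_types):
    # Accept either {"relation_extraction": {...}} or a flat {rel_type: [...]} dict.
    rels = raw.get("relation_extraction", raw)
    # Every requested type is present; missing ones get an empty list.
    return {"relation_extraction": {t: list(rels.get(t, [])) for t in requested_types}}

flat = {"works_for": [("John", "Apple Inc.")]}
print(normalise(flat, ["works_for", "founded"]))
# {'relation_extraction': {'works_for': [('John', 'Apple Inc.')], 'founded': []}}
```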
---
### NLToCypher
> **This is the recommended entry point for most users.** It wraps the extractor and converter into one callable — you only need to supply text, relation types, and a mode.
End-to-end pipeline combining `GLiNER2RelationExtractor` and `RelationToCypherConverter`.
```python
from cypher_validator import NLToCypher, Schema
schema = Schema(
nodes={"Person": ["name"], "Company": ["name"], "City": ["name"]},
relationships={
"WORKS_FOR": ("Person", "Company", []),
"LIVES_IN": ("Person", "City", []),
},
)
pipeline = NLToCypher.from_pretrained(
"fastino/gliner2-large-v1",
schema=schema, # optional: enables label-aware generation
threshold=0.5,
)
# Single sentence → Cypher
cypher = pipeline(
"John works for Apple Inc. and lives in San Francisco.",
relation_types=["works_for", "lives_in"],
mode="merge",
)
# MERGE (a0:Person {name: $a0_val})-[:WORKS_FOR]->(b0:Company {name: $b0_val})
# MERGE (a1:Person {name: $a1_val})-[:LIVES_IN]->(b1:City {name: $b1_val})
# RETURN a0, b0, a1, b1
# Get both the raw extraction dict and the Cypher string
relations, cypher = pipeline.extract_and_convert(
"Alice manages the Engineering team.",
["manages", "reports_to"],
mode="match",
)
print(relations)
# {"relation_extraction": {"manages": [("Alice", "Engineering team")], "reports_to": []}}
print(cypher)
# MATCH (a0 {name: $a0_val})-[:MANAGES]->(b0 {name: $b0_val})
# RETURN a0, b0
# High-precision extraction
cypher = pipeline(
"Bob acquired TechCorp in 2019.",
["acquired", "merged_with"],
mode="merge",
threshold=0.85,
)
```
**Database execution (`execute=True`):**
Pass a `Neo4jDatabase` to execute the generated query directly and receive both the Cypher string and the Neo4j records:
```python
from cypher_validator import NLToCypher, Neo4jDatabase
db = Neo4jDatabase("bolt://localhost:7687", "neo4j", "password")
pipeline = NLToCypher.from_pretrained("fastino/gliner2-large-v1", schema=schema, db=db)
# execute=True → returns (cypher, records) instead of just cypher
cypher, records = pipeline(
"John works for Apple Inc.",
["works_for"],
mode="create",
execute=True,
)
# cypher → 'CREATE (a0:Person {name: $a0_val})-[:WORKS_FOR]->(b0:Company {name: $b0_val})\nRETURN a0, b0'
# records → [{"a0": {...}, "b0": {...}}]
```
**Credentials from environment variables (`from_env`):**
```bash
export NEO4J_URI=bolt://localhost:7687
export NEO4J_USERNAME=neo4j # optional, defaults to "neo4j"
export NEO4J_PASSWORD=secret
```
```python
pipeline = NLToCypher.from_env("fastino/gliner2-large-v1", schema=schema)
cypher, records = pipeline("John works for Apple Inc.", ["works_for"],
mode="create", execute=True)
```
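The environment lookup behind `from_env` amounts to three reads with one default. A sketch of the equivalent logic (illustrative, and written to accept any mapping so it can be exercised without touching real environment variables):

```python
import os

def read_neo4j_env(env=None):
    env = os.environ if env is None else env
    # NEO4J_URI and NEO4J_PASSWORD are required; username defaults to "neo4j".
    return (env["NEO4J_URI"],
            env.get("NEO4J_USERNAME", "neo4j"),
            env["NEO4J_PASSWORD"])

uri, user, pwd = read_neo4j_env(
    {"NEO4J_URI": "bolt://localhost:7687", "NEO4J_PASSWORD": "secret"})
print(uri, user)  # bolt://localhost:7687 neo4j
```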
**`from_pretrained()` / `from_env()` parameters:**
| Parameter | Default | Description |
|---|---|---|
| `model_name` | `"fastino/gliner2-large-v1"` | HuggingFace model ID or local path |
| `schema` | `None` | Optional schema for label-aware Cypher |
| `threshold` | `0.5` | Confidence threshold for relation extraction |
| `name_property` | `"name"` | Node property key for entity text |
| `db` | `None` | `Neo4jDatabase` connection (`from_pretrained` only) |
| `database` | `"neo4j"` | Neo4j database name (`from_env` only) |
| `ner_extractor` | `None` | Optional `EntityNERExtractor` for enriched entity labels in DB-aware mode |
**`__call__()` / `extract_and_convert()` parameters:**
| Parameter | Default | Description |
|---|---|---|
| `text` | required | Input sentence or passage |
| `relation_types` | required | Relation labels to extract |
| `mode` | `"match"` | Cypher generation mode (`"match"`, `"merge"`, `"create"`). Ignored when `db_aware=True`. |
| `threshold` | `None` | Override instance threshold |
| `execute` | `False` | When `True`, run the query against the DB and return `(cypher, records)` |
| `db_aware` | `False` | When `True`, look up each entity in the DB and generate MATCH/CREATE accordingly (see below) |
| `return_clause` | auto | Custom `RETURN …` tail |
---
### Neo4jDatabase
Thin wrapper around the official [Neo4j Python driver](https://neo4j.com/docs/python-manual/current/) for executing Cypher queries. Requires `pip install "cypher_validator[neo4j]"`.
```python
from cypher_validator import Neo4jDatabase
# Direct instantiation
db = Neo4jDatabase("bolt://localhost:7687", "neo4j", "pa | text/markdown; charset=UTF-8; variant=GFM | null | Amiya Mandal <amiyamandal-dev@users.noreply.github.com> | null | null | MIT | cypher, neo4j, graph, nlp, validation | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Database",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"gliner2",
"neo4j>=6.1.0",
"urllib3>=2.2.3",
"spacy>=3.0",
"transformers>=4.0",
"torch"
] | [] | [] | [] | [
"Homepage, https://github.com/amiyamandal-dev/cypher_validator",
"Issues, https://github.com/amiyamandal-dev/cypher_validator/issues",
"Repository, https://github.com/amiyamandal-dev/cypher_validator"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:15:00.730455 | cypher_validator-0.9.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl | 962,648 | 4d/74/f9984315cace0cfdb90d6e55e5e224ccd117d2c525cc1648a2261024d18f/cypher_validator-0.9.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl | cp313 | bdist_wheel | null | false | 8fcc5ab2d82cd975894cd8eb7da923eb | 41e9e147e115d4b855cf5d0ddfd2b1b7cb3b101c5dafe1b0fa61bfb446b80a3d | 4d74f9984315cace0cfdb90d6e55e5e224ccd117d2c525cc1648a2261024d18f | null | [] | 0 |
2.4 | iploop | 1.5.6 | Residential proxy SDK with full scraping capabilities — render JS, bypass protection, extract data | # IPLoop Python SDK
Residential proxy SDK — one-liner web fetching through millions of real IPs.
```bash
pip install iploop
```
## Quick Start
```python
from iploop import IPLoop
ip = IPLoop("your-api-key")
# Fetch any URL through a residential proxy
response = ip.get("https://httpbin.org/ip")
print(response.text)
# Target a specific country
response = ip.get("https://example.com", country="DE")
# POST request
response = ip.post("https://api.example.com/data", json={"key": "value"})
```
## Smart Headers
Headers are automatically matched to the target country — correct language, timezone, and User-Agent:
```python
ip = IPLoop("key", country="JP") # Japanese Chrome headers automatically
```
## Sticky Sessions
Keep the same IP across multiple requests:
```python
s = ip.session(country="US", city="newyork")
page1 = s.fetch("https://site.com/page1") # same IP
page2 = s.fetch("https://site.com/page2") # same IP
```
## Auto-Retry
Failed requests (403, 502, 503, timeouts) automatically retry with a fresh IP:
```python
# Retries up to 5 times with different IPs
response = ip.get("https://tough-site.com", retries=5)
```
## Async Support
```python
import asyncio
from iploop import AsyncIPLoop
async def main():
async with AsyncIPLoop("key") as ip:
results = await asyncio.gather(
ip.get("https://site1.com"),
ip.get("https://site2.com"),
ip.get("https://site3.com"),
)
for r in results:
print(r.status_code)
asyncio.run(main())
```
## Support API
```python
ip.usage() # Check bandwidth quota
ip.status() # Service status
ip.ask("how do I handle captchas?") # Ask support
ip.countries() # List available countries
```
## Data Extraction (v1.2.0)
Auto-extract structured data from popular sites:
```python
# eBay — extract product listings
products = ip.ebay.search("laptop", extract=True)["products"]
# [{"title": "MacBook Pro 16", "price": "$1,299.00"}, ...]
# Nasdaq — extract stock quotes
quote = ip.nasdaq.quote("AAPL", extract=True)
# {"price": "$185.50", "change": "+2.30", "pct_change": "+1.25%"}
# Google — extract search results
results = ip.google.search("best proxy service", extract=True)["results"]
# [{"title": "...", "url": "..."}, ...]
# Twitter — extract profile info
profile = ip.twitter.profile("elonmusk", extract=True)
# {"name": "Elon Musk", "handle": "elonmusk", ...}
# YouTube — extract video metadata
video = ip.youtube.video("dQw4w9WgXcQ", extract=True)
# {"title": "...", "channel": "...", "views": 1234567}
```
## Smart Rate Limiting
Built-in per-site rate limiting prevents blocks automatically:
```python
# These calls auto-delay to respect site limits
for q in ["laptop", "phone", "tablet"]:
ip.ebay.search(q) # 15s delay between requests
```
## LinkedIn (New)
```python
ip.linkedin.profile("satyanadella")
ip.linkedin.company("microsoft")
```
## Concurrent Fetching (v1.3.0)
Batch fetch up to 25 URLs in parallel:
```python
# Concurrent fetching (safe up to 25)
batch = ip.batch(max_workers=10)
results = batch.fetch_all([
"https://ebay.com/sch/i.html?_nkw=laptop",
"https://ebay.com/sch/i.html?_nkw=phone",
"https://ebay.com/sch/i.html?_nkw=tablet"
], country="US")
# Multi-country comparison
prices = batch.fetch_multi_country("https://ebay.com/sch/i.html?_nkw=iphone", ["US", "GB", "DE"])
```
## Chrome Fingerprinting (v1.3.0)
Every request automatically applies a 14-header Chrome desktop fingerprint:
```python
# Auto fingerprinting — no setup needed
html = ip.fetch("https://ebay.com", country="US") # fingerprinted automatically
# Get fingerprint headers directly
headers = ip.fingerprint("DE") # 14 headers for German Chrome
```
## Stats Tracking (v1.3.0)
```python
# After making requests...
print(ip.stats)
# {"requests": 10, "success": 9, "errors": 1, "total_time": 23.5, "avg_time": 2.35, "success_rate": 90.0}
```
## Debug Mode
```python
ip = IPLoop("key", debug=True)
# Logs: GET https://example.com → 200 (0.45s) country=US session=abc123
```
## Exceptions
```python
from iploop import AuthError, QuotaExceeded, ProxyError, TimeoutError
try:
response = ip.get("https://example.com")
except QuotaExceeded:
print("Upgrade at https://iploop.io/pricing")
except ProxyError:
print("Proxy connection failed")
except TimeoutError:
print("Request timed out")
```
## Links
- **Website**: https://iploop.io
- **Docs**: https://docs.iploop.io
- **Dashboard**: https://iploop.io/dashboard
| text/markdown | IPLoop | IPLoop <partners@iploop.io> | null | null | MIT | proxy, residential, scraping, web, rotating | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://iploop.io | null | >=3.7 | [] | [] | [] | [
"requests>=2.28",
"aiohttp>=3.8; extra == \"async\"",
"playwright>=1.40.0; extra == \"render\""
] | [] | [] | [] | [
"Homepage, https://iploop.io",
"Documentation, https://docs.iploop.io",
"Repository, https://github.com/iploop/iploop-python"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T15:14:01.903186 | iploop-1.5.6.tar.gz | 437,513 | 53/2a/7ccb6c46f5a7738427be3cf9dbfc30f372cdb2d759fa1e1e5e7657c15f15/iploop-1.5.6.tar.gz | source | sdist | null | false | a1a8a052b80331700f2206feca4d3e9e | 0dbae595f152efa3e2809fd56f1f44111d7927fe105442abbda5f33067760ecf | 532a7ccb6c46f5a7738427be3cf9dbfc30f372cdb2d759fa1e1e5e7657c15f15 | null | [
"LICENSE"
] | 139 |
2.4 | sgept-gta-mcp | 0.4.9 | MCP server for Global Trade Alert — 75,000+ trade policy interventions | # GTA MCP Server
Query 78,000+ trade policy interventions through Claude — tariffs, subsidies, export bans, and more from 200+ countries.
## What is Global Trade Alert?
Global Trade Alert (GTA) is a transparency initiative by the St Gallen Endowment for Prosperity through Trade (SGEPT) that tracks government trade policy changes worldwide since November 2008. Unlike trade databases that rely on government self-reporting, GTA independently documents and verifies policy interventions using primary sources — official gazettes, ministry announcements, legislative records, and press releases.
GTA covers all types of trade measures: not just tariffs, but subsidies, export restrictions, FDI barriers, public procurement rules, localisation requirements and more. Each intervention is classified by color: Red (harmful/discriminatory), Amber (likely harmful but uncertain), or Green (liberalising). The database contains over 78,000 documented interventions across 60+ jurisdictions.
This breadth distinguishes GTA from the WTO's trade monitoring system, which captures only measures that governments voluntarily report. GTA reveals the full landscape of state intervention in markets — including measures governments prefer not to highlight.
## What can you ask?
**Tariffs and trade barriers:**
- "What tariffs has the United States imposed on China since January 2025?"
- "Which countries have imposed tariffs affecting US exports in 2025?"
**Critical minerals and supply chains:**
- "What export controls has China imposed on rare earth elements?"
- "Has the use of export restrictions increased since 2020?"
**Subsidies and state aid:**
- "Which countries subsidise their domestic semiconductor industry?"
- "Which G20 countries have increased state aid to EV manufacturers since 2022?"
**Trade negotiations:**
- "What harmful measures has the EU imposed on US exports since 2024?"
- "What measures has Brazil implemented affecting US agricultural exports?"
**Trade defence:**
- "Find all anti-dumping investigations targeting Chinese steel since 2020"
**Monitoring:**
- "How many harmful interventions were implemented globally in 2025 versus 2024?"
## Quick Start
### Prerequisites
You need **uv** (a Python package manager) installed on your system. If you don't have it:
**macOS/Linux:**
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
**Windows:**
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
After installing, restart your terminal. Verify it works by running `uvx --version` — you should see a version number.
### Getting an API key
You need a GTA API key from SGEPT. Request access at https://globaltradealert.org/api-access — you'll receive a demo key directly; contact support for full access credentials.
### For Claude Desktop (recommended)
Add to your Claude Desktop config file:
- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
If this file doesn't exist yet, create it with the content below. If it already exists and contains other MCP servers, add the `"gta"` entry inside the existing `"mcpServers"` object.
```json
{
"mcpServers": {
"gta": {
"command": "uvx",
"args": ["sgept-gta-mcp@latest"],
"env": {
"GTA_API_KEY": "your-api-key-here"
}
}
}
}
```
Then **completely quit and restart Claude Desktop** (not just close the window — fully quit from the menu bar/system tray).
### For Claude Code
```bash
claude mcp add --transport stdio gta -e GTA_API_KEY=your-key -- uvx sgept-gta-mcp@latest
```
### For any MCP client
```bash
pip install sgept-gta-mcp
GTA_API_KEY=your-key gta-mcp
```
## Is it working?
After restarting Claude Desktop, try this prompt:
> Show me 3 recent trade interventions implemented by the United States.
**If it works:** You'll see a formatted list of US trade measures with titles, dates, and links to the GTA website.
**If you see "tool not found":** The server isn't connected. Check that:
1. Your `claude_desktop_config.json` is valid JSON (no trailing commas, all quotes matched)
2. You completely quit and restarted Claude Desktop
3. Your API key is correctly set in the `env` section
**If you see "Authentication Error":** Your API key is invalid or expired. Verify it at https://globaltradealert.org/api-access.
## Use Cases
See [USE_CASES.md](USE_CASES.md) for 40+ example prompts organized by professional use case — from competitive subsidy intelligence to trade negotiation prep.
## Available Tools
### 1. `gta_search_interventions`
Search and filter trade interventions by country, type, date, sector, and evaluation.
**Key parameters:**
- `implementing_jurisdictions`: Countries implementing measures (e.g., ["USA", "CHN", "DEU"])
- `intervention_types`: Filter by measure type (e.g., ["Import tariff", "Export subsidy", "Export ban"])
- `date_announced_gte` / `date_announced_lte`: Filter by announcement date range
- `evaluation`: Red (harmful), Amber (likely harmful), or Green (liberalising)
- `limit`: Maximum results to return (default 100)
**Example:** "What tariffs has India imposed on steel imports since 2024?"
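That kind of prompt might translate into a tool call along these lines (the parameter values are illustrative, and steel HS codes would normally come from `gta_lookup_hs_codes` first):

```json
{
  "implementing_jurisdictions": ["IND"],
  "intervention_types": ["Import tariff"],
  "date_announced_gte": "2024-01-01",
  "limit": 25
}
```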
### 2. `gta_get_intervention`
Get full details for a specific intervention by ID (identification number from search results).
**Key parameters:**
- `intervention_id`: The unique GTA intervention ID
**Example:** "Show me the full details for intervention 123456"
### 3. `gta_list_ticker_updates`
Monitor recent changes to existing interventions — removals, extensions, modifications.
**Key parameters:**
- `published_gte` / `published_lte`: Filter by when the update was published
- `implementing_jurisdictions`: Filter by country
- `limit`: Maximum results to return
**Example:** "What changes to existing trade measures were published in the last 30 days?"
### 4. `gta_get_impact_chains`
Analyze implementing-product-affected jurisdiction relationships — which countries impose measures on which products affecting which other countries.
**Key parameters:**
- `implementing_jurisdictions`: Countries implementing measures
- `affected_jurisdictions`: Countries affected by measures
- `mast_chapters`: Product categories (MAST classification system)
**Example:** "Show me how US measures on semiconductors affect China and Taiwan"
### 5. `gta_count_interventions`
Get aggregated counts across 24 dimensions including year, country, type, sector, and evaluation.
**Key parameters:**
- `group_by`: Dimension to count by (e.g., "year", "implementing_jurisdiction", "intervention_type")
- Filters: Same as `gta_search_interventions`
**Example:** "How many harmful trade interventions did G20 countries implement each year from 2020 to 2025?"
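As a sketch, the arguments for that question might look like the following (values are illustrative; G20 member codes would come from the `gta://reference/jurisdiction-groups` resource):

```json
{
  "group_by": "year",
  "implementing_jurisdictions": ["USA", "CHN", "DEU"],
  "evaluation": "Red",
  "date_announced_gte": "2020-01-01",
  "date_announced_lte": "2025-12-31"
}
```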
### 6. `gta_lookup_hs_codes`
Search HS (Harmonized System) product codes by keyword, chapter number, or code prefix. Use this before `gta_search_interventions` when asking about specific commodities or products.
**Key parameters:**
- `search_term`: Product keyword (e.g., "lithium"), chapter number (e.g., "28"), or code prefix (e.g., "8541")
- `max_results`: Maximum codes to return (default 50)
**Example:** "Look up HS codes for steel" returns codes like 7206-7229 which you can then pass to `gta_search_interventions` as `affected_products`.
### 7. `gta_lookup_sectors`
Search CPC (Central Product Classification) sector codes by keyword or code prefix. Use this before `gta_search_interventions` when asking about services or broad economic sectors.
**Key parameters:**
- `search_term`: Sector keyword (e.g., "financial", "transport") or code prefix (e.g., "71")
- `max_results`: Maximum sectors to return (default 50)
**Example:** "Look up sectors related to financial services" returns CPC codes like 711, 715, 717 which you can pass as `affected_sectors`.
## Available Resources
Resources provide reference data and documentation through the MCP resource system (accessible via prompts like "Show me the GTA glossary").
| Resource | URI | Purpose |
|----------|-----|---------|
| Jurisdictions | `gta://reference/jurisdictions` | Country codes, names, ISO/UN mapping |
| Jurisdiction Groups | `gta://reference/jurisdiction-groups` | G7, G20, EU-27, BRICS, ASEAN, CPTPP, RCEP member codes |
| Intervention Types | `gta://reference/intervention-types` | Definitions, examples, MAST mapping |
| MAST Chapters | `gta://reference/mast-chapters` | Product classification system |
| Sectors | `gta://reference/sectors` | Economic sector taxonomy |
| Glossary | `gta://reference/glossary` | Key GTA terms explained for non-experts |
| Data Model | `gta://guide/data-model` | How interventions, products, and jurisdictions relate |
| Date Fields | `gta://guide/date-fields` | Announced vs implemented dates |
| CPC vs HS | `gta://guide/cpc-vs-hs` | When to use sector codes vs product codes |
| Analytical Caveats | `gta://guide/analytical-caveats` | Data limitations and interpretation guidance |
| Query Intent Mapping | `gta://guide/query-intent-mapping` | Natural language terms to structured GTA filters |
| Query Patterns | `gta://guide/query-patterns` | Common analysis workflows |
| Privacy Policy | `gta://legal/privacy` | Data handling, collection, and your rights |
## Understanding GTA Data
**Evaluation colors:** Red = harmful/discriminatory, Amber = likely harmful but uncertain, Green = liberalising. See `gta://reference/glossary` for detailed definitions.
**Date fields:** `date_announced` (when disclosed) vs `date_implemented` (when takes effect). Implementation dates may be months or years after announcement. See `gta://guide/date-fields`.
**Publication lag:** Recent data is always incomplete due to a 2-4 week verification process. Counts from the last month should be considered preliminary. See `gta://guide/analytical-caveats`.
**Counting:** One intervention can affect many products and countries. Counting by product or affected jurisdiction inflates numbers — count by intervention for accurate totals. See `gta://guide/data-model`.
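The counting caveat can be illustrated with a small sketch (the field names here are hypothetical, not the GTA API's):

```python
# One intervention can appear once per affected product/country pair,
# so counting rows inflates the total.
rows = [
    {"intervention_id": 101, "product": "7208", "affected": "USA"},
    {"intervention_id": 101, "product": "7209", "affected": "DEU"},
    {"intervention_id": 102, "product": "8541", "affected": "CHN"},
]

naive_count = len(rows)                                        # 3 — inflated
intervention_count = len({r["intervention_id"] for r in rows})  # 2 — accurate
```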
## Troubleshooting
| Symptom | Cause | Solution |
|---------|-------|----------|
| "uvx: command not found" | uv is not installed | Install uv first — see [Prerequisites](#prerequisites) above |
| "GTA_API_KEY not set" | Environment variable missing | Set it in Claude Desktop config `env` section, or `export GTA_API_KEY=...` for CLI |
| "Authentication Error" | Invalid or expired key | Verify at globaltradealert.org/api-access, check for typos |
| "Response truncated" | Too many results for context window | Use `limit` parameter (e.g., 10) or add more specific filters |
| Server not appearing in Claude | Config issue | Check JSON syntax, verify path, quit+restart Claude fully |
| No results returned | Filters too narrow or wrong date field | Try `date_announced_gte` instead of `date_implemented_gte`, broaden jurisdiction filter |
| Timeout errors | Query too broad | Add country or date filters to narrow results |
| Invalid jurisdiction code | Wrong format | Use ISO 3-letter codes (USA, CHN, DEU), not 2-letter or numeric |
| Rate limit (429) | Too many queries | Wait 30 seconds and retry |
## Privacy
The GTA MCP server runs locally on your machine and does not store, cache,
or log your queries. Search parameters are forwarded to the GTA API backend
operated by SGEPT, which records only standard access logs (no query content).
Access logs are retained for 90 days.
Full policy: [PRIVACY.md](PRIVACY.md) | [globaltradealert.org/privacy](https://globaltradealert.org/privacy)
## For Developers
### Architecture
```
Claude Desktop
↓
MCP Protocol (stdio)
↓
GTA MCP Server (FastMCP)
↓
GTA API v2.0 (REST)
↓
PostgreSQL Database
```
### Development Install
```bash
git clone https://github.com/sgept/sgept-mcp-servers.git
cd sgept-mcp-servers/gta-mcp
pip install -e ".[dev]"
export GTA_API_KEY=your-key
mcp dev gta_mcp/server.py
```
### Running Tests
```bash
# All tests
pytest
# With coverage
pytest --cov=gta_mcp --cov-report=term-missing
# Specific test file
pytest tests/test_tools.py
```
### Code Quality
- Python 3.12+
- Type hints on all functions
- FastMCP for MCP protocol handling
- httpx for async API requests
- pytest for testing
- Black for formatting
- Ruff for linting
### Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## Version and License
**Current version:** 0.4.9 (February 2026)
**License:** MIT
**Support:**
- API access: Contact SGEPT at support@sgept.org
- Server bugs: File issues on GitHub at https://github.com/sgept/sgept-mcp-servers
- Full changelog: [CHANGELOG.md](CHANGELOG.md)
**About SGEPT:** Learn more at https://sgept.org
| text/markdown | null | St Gallen Endowment for Prosperity through Trade <info@sgept.org> | null | null | MIT | claude, global-trade-alert, gta, mcp, mcp-server, policy-analysis, sgept, subsidies, tariffs, trade-barriers, trade-data, trade-policy | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"rapidfuzz>=3.0.0"
] | [] | [] | [] | [
"Homepage, https://www.globaltradealert.org",
"Repository, https://github.com/global-trade-alert/sgept-mcp-servers",
"Documentation, https://github.com/global-trade-alert/sgept-mcp-servers/blob/main/gta-mcp/README.md",
"Changelog, https://github.com/global-trade-alert/sgept-mcp-servers/blob/main/gta-mcp/CHANGELOG.md",
"Bug Tracker, https://github.com/global-trade-alert/sgept-mcp-servers/issues",
"Privacy Policy, https://globaltradealert.org/privacy"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:13:56.041488 | sgept_gta_mcp-0.4.9.tar.gz | 661,947 | 36/b7/6f43d9cecf013b5d7cb3a09fec843d6dc179888aefe20b2fdc62d783c94a/sgept_gta_mcp-0.4.9.tar.gz | source | sdist | null | false | b0e851c871204b50ed24796f8671e260 | 4a488409ff732c51383e66a624da53e263ea9d223b44836ff8942d2a8c911533 | 36b76f43d9cecf013b5d7cb3a09fec843d6dc179888aefe20b2fdc62d783c94a | null | [
"LICENSE"
] | 170 |
2.4 | holmes-hydro | 3.4.2 | HOLMES (HydrOLogical Modeling Educational Software) is software developed to teach operational hydrology. It is developed at Université Laval, Québec, Canada. | # HOLMES
[](https://github.com/antoinelb/holmes/actions)
[](https://github.com/antoinelb/holmes/actions)


[](https://pypi.org/project/holmes-hydro)
[](https://antoinelb.github.io/holmes/)
HOLMES (HydrOLogical Modeling Educational Software) is software developed to teach operational hydrology. It is developed at Université Laval, Québec, Canada.
📖 **[Documentation](https://antoinelb.github.io/holmes/)** · 📦 **[PyPI](https://pypi.org/project/holmes-hydro/)**
## Usage
### Installation
```bash
pip install holmes-hydro
```
### Running HOLMES
After installation, start the server with:
```bash
holmes
```
The web interface will be available at http://127.0.0.1:8000.
### Configuration
Customize the server by creating a `.env` file:
```env
DEBUG=True # Enable debug mode (default: False)
RELOAD=True # Enable auto-reload on code changes (default: False)
HOST=127.0.0.1 # Server host (default: 127.0.0.1)
PORT=8000 # Server port (default: 8000)
```
## Development
### Setup
1. Install [uv](https://docs.astral.sh/uv/):
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. Clone and install in development mode:
```bash
git clone https://github.com/antoinelb/holmes.git
cd holmes
uv sync
```
### Running
```bash
uv run holmes
```
Or activate the virtual environment and run directly:
```bash
source .venv/bin/activate
holmes
```
### Code Quality
```bash
ruff format src/ tests/
ruff check src/ tests/
ty check src/ tests/
```
## References
- [Bucket Model](https://github.com/ulaval-rs/HOOPLApy/tree/main/hoopla/models/hydro)
| text/markdown | null | Antoine Lefebvre-Brossard <antoinelb@proton.me> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Hydrology"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"holmes-rs>=0.3.0",
"numpy>=2.4.0",
"polars>=1.36.0",
"starlette>=0.50.0",
"uvicorn>=0.36.0",
"websockets>=15.0.1"
] | [] | [] | [] | [
"Homepage, https://github.com/antoinelb/holmes",
"Repository, https://github.com/antoinelb/holmes"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:13:55.133061 | holmes_hydro-3.4.2.tar.gz | 39,879,805 | 22/5f/b283e95715c0370fed63d0edcb6883cf338a5dae4f8f7ed06819254f4cb6/holmes_hydro-3.4.2.tar.gz | source | sdist | null | false | 82c95f7f3bea6b38050147ae79399833 | 4fa8c5874fe06a2c3022c1e9872196e178362a6044ae40ff0618d8551384f67b | 225fb283e95715c0370fed63d0edcb6883cf338a5dae4f8f7ed06819254f4cb6 | null | [
"LICENSE"
] | 172 |
2.1 | bids-validator-deno | 2.4.1 | Typescript implementation of the BIDS validator | [](https://github.com/bids-standard/bids-validator/actions/workflows/deno_tests.yml)
[](https://github.com/bids-standard/bids-validator/actions/workflows/web_build.yml)
[](https://bids-validator.readthedocs.io/en/latest/?badge=latest)
[](https://doi.org/10.5281/zenodo.3688707)
# The BIDS Validator
The BIDS Validator is a web application, command-line utility,
and Javascript/Typescript library for assessing compliance with the
[Brain Imaging Data Structure (BIDS)][BIDS] standard.
## Getting Started
In most cases,
the simplest way to use the validator is to browse to the [BIDS Validator][] web page:


The web validator runs in-browser, and does not transfer data to any remote server.
In some contexts, such as when working on a remote server,
it may be easier to use the command-line.
The BIDS Validator can be run with the [Deno][] runtime
(see [Deno - Installation][] for detailed installation instructions):
```shell
deno run -ERWN jsr:@bids/validator
```
Deno by default sandboxes applications like a web browser.
`-E`, `-R`, `-W`, and `-N` allow the validator to read environment variables,
read/write local files, and read network locations.
A pre-compiled binary is published to [PyPI][] and may be installed with:
```shell
pip install bids-validator-deno
bids-validator-deno --help
```
### Configuration file
The schema validator accepts a JSON configuration file that reclassifies issues as
warnings, errors or ignored.
```json
{
"ignore": [
{ "code": "JSON_KEY_RECOMMENDED", "location": "/T1w.json" }
],
"warning": [],
"error": [
{ "code": "NO_AUTHORS" }
]
}
```
The issues are partial matches of the `issues` that the validator accumulates.
Pass the `--json` flag to see the issues in detail.
### Development tools
The `./local-run` script runs the validator with all permissions enabled by default:
```shell
# Run from within the /bids-validator directory
cd bids-validator
# Run validator:
./local-run path/to/dataset
```
## Schema validator test suite
```shell
# Run tests:
deno test --allow-env --allow-read --allow-write src/
```
This test suite runs the validator against the bids-examples datasets and compares the results to expected output. It may report some expected failures for bids-examples datasets where the schema or validator is misaligned with the example dataset while under development.
## Modifying and building a new schema
To modify the schema, clone bids-standard/bids-specification; the README and the schema itself live at https://github.com/bids-standard/bids-specification/tree/master/src/schema.
After making changes to a local copy, build the dereferenced single JSON file used by the validator. The `bidsschematools` Python package does this; it can be installed from PyPI via pip, or as a local installation from the specification repository at https://github.com/bids-standard/bids-specification/tree/master/tools/schemacode.
Compile the dereferenced schema with `bst -v export --schema src/schema --output src/schema.json` (run from the root of the bids-specification repo). Once compiled, pass it to the validator via the `-s` flag: `./bids-validator-deno -s <path to schema> <path to dataset>`
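Putting the steps above together, a typical schema-rebuild session might look like this (paths assume a fresh clone; adjust to your layout):

```shell
# Clone the specification repo, which contains the schema sources
git clone https://github.com/bids-standard/bids-specification.git
cd bids-specification

# Install the schema tooling from PyPI
pip install bidsschematools

# Compile the dereferenced schema.json from the schema sources
bst -v export --schema src/schema --output src/schema.json

# Then point the validator at the compiled schema:
# bids-validator-deno -s src/schema.json path/to/dataset
```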
## Documentation
The BIDS validator documentation is available on [Read the Docs](https://bids-validator.readthedocs.io/en/latest/).
[BIDS]: https://bids.neuroimaging.io
[BIDS Validator]: https://bids-standard.github.io/bids-validator/
[Deno]: https://deno.com/
[Deno - Installation]: https://docs.deno.com/runtime/getting_started/installation/
[PyPI]: https://pypi.org/project/bids-validator-deno/
| text/markdown | bids-standard developers | null | null | null | MIT | BIDS, BIDS validator | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"License :: OSI Approved :: MIT License",
"Programming Language :: JavaScript"
] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://bids-validator.readthedocs.io/",
"Source code, https://github.com/bids-standard/bids-validator",
"Issues, https://github.com/bids-standard/bids-validator/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:13:26.518918 | bids_validator_deno-2.4.1.tar.gz | 83,474 | 3e/f9/d536d65bf268275710e567ea335c1dfd6dd6b62d19899f4a76b9ee109e56/bids_validator_deno-2.4.1.tar.gz | source | sdist | null | false | 689f0b52d8ea32b2798c9df930a1dd13 | a79b4af2818a200ff5a597715a3c1050ae93be7cb523bcfef54b93af057d8466 | 3ef9d536d65bf268275710e567ea335c1dfd6dd6b62d19899f4a76b9ee109e56 | null | [] | 2,595 |
2.4 | usdm4 | 0.18.0 | A python package for using the CDISC TransCelerate USDM, version 4 | # USDM4
A Python library for the CDISC TransCelerate Unified Study Data Model (USDM) Version 4.
## Overview
USDM4 provides tools for building, assembling, validating, converting, and expanding clinical study definitions using the USDM Version 4 specification. It enables programmatic creation and manipulation of machine-readable study definitions that conform to CDISC standards.
## Features
- **Build** - Create USDM4 study structures programmatically with a fluent builder interface
- **Assemble** - Orchestrate complete study assembly from structured input data
- **Validate** - Validate USDM4 JSON files against defined rules
- **Load** - Load USDM4 data from JSON files or dictionaries
- **Convert** - Transform USDM data structures between formats
- **Expand** - Expand schedule timelines for study designs
## Installation
```bash
pip install usdm4
```
### Requirements
- Python 3.10 or higher
## Quick Start
```python
from usdm4 import USDM4
from simple_error_log.errors import Errors
# Initialize
usdm = USDM4()
errors = Errors()
# Create a minimal study
wrapper = usdm.minimum("My Study", "SPONSOR-001", "1.0", errors)
# Access the study
print(wrapper.study.id)
```
## Usage
### Loading Studies
Load a study from a JSON file:
```python
errors = Errors()
wrapper = USDM4().load("study.json", errors)
```
Load from a dictionary:
```python
data = {...}
wrapper = USDM4().loadd(data, errors)
```
### Validating Studies
```python
result = USDM4().validate("study.json")
if result.passed_or_not_implemented():
print("Validation passed")
else:
print("Validation failed")
```
### Building Studies
Use the builder for programmatic study creation with access to controlled terminology:
```python
errors = Errors()
builder = USDM4().builder(errors)
# Get CDISC codes
code = builder.cdisc_code("C207616", "Official Study Title")
# Get ISO codes
country = builder.iso3166_code("USA")
language = builder.iso639_code("en")
# Create organizations
sponsor = builder.sponsor("My Pharma Corp")
# Create any USDM4 class
study_version = builder.create("StudyVersion", {"versionNumber": "1.0"})
```
### Assembling Studies
For structured assembly of complete studies from domain-organized input:
```python
errors = Errors()
assembler = USDM4().assembler(errors)
assembler.execute({
"identification": {...},
"document": {...},
"population": {...},
"study_design": {...},
"amendments": {...},
"study": {...}
})
wrapper = assembler.wrapper("MySystem", "1.0")
```
### Assembler JSON Input Structure
The assembler accepts a single dictionary with the following top-level keys, each processed by a dedicated sub-assembler:
```json
{
"identification": { ... },
"document": { ... },
"population": { ... },
"amendments": { ... },
"study_design": { ... },
"soa": { ... },
"study": { ... }
}
```
All top-level keys are required except `soa`, which is optional.
---
#### `identification`
Study identification, titles, identifiers, organizations, and roles.
```json
{
"titles": {
"brief": "string",
"official": "string",
"public": "string",
"scientific": "string",
"acronym": "string"
},
"identifiers": [
{
"identifier": "string",
"scope": {
"standard": "string",
"non_standard": {
"type": "string",
"role": "string | null",
"name": "string",
"description": "string",
"label": "string",
"identifier": "string",
"identifierScheme": "string",
"legalAddress": {
"lines": ["string"],
"city": "string",
"district": "string",
"state": "string",
"postalCode": "string",
"country": "string"
}
}
}
}
],
"roles": {
"co_sponsor": {
"name": "string",
"address": {
"lines": ["string"],
"city": "string",
"district": "string",
"state": "string",
"postalCode": "string",
"country": "string"
}
},
"local_sponsor": { },
"device_manufacturer": { }
},
"other": {
"sponsor_signatory": "string | null",
"medical_expert": "string | null",
"compound_names": "string | null",
"compound_codes": "string | null"
}
}
```
**Notes:**
- `titles` is optional (defaults to empty). Valid title types: `brief`, `official`, `public`, `scientific`, `acronym`.
- `identifiers` is optional (defaults to empty list). Each identifier `scope` must contain either `standard` or `non_standard`, not both.
- Valid `standard` keys: `ct.gov`, `ema`, `fda`. These resolve to predefined organizations with complete address information.
- Valid `non_standard` type values: `registry`, `regulator`, `healthcare`, `pharma`, `lab`, `cro`, `gov`, `academic`, `medical_device`.
- Valid `role` values: `co-sponsor`, `manufacturer`, `investigator`, `pharmacovigilance`, `project manager`, `local sponsor`, `laboratory`, `study subject`, `medical expert`, `statistician`, `idmc`, `care provider`, `principal investigator`, `outcomes assessor`, `dec`, `clinical trial physician`, `sponsor`, `adjudication committee`, `study site`, `dsmb`, `regulatory agency`, `contract research`.
- `roles` is optional (defaults to empty). Each role key (`co_sponsor`, `local_sponsor`, `device_manufacturer`) can be `null` to skip. The `address` field within each role is optional.
- `other` is optional. When present, all four sub-fields are read directly.
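The either/or rule for an identifier's `scope` can be checked up front. A minimal sketch of such a check, assuming only the documented rule (the function name and error message are illustrative, not part of the assembler API):

```python
def validate_scope(scope: dict) -> None:
    """Ensure a scope contains exactly one of 'standard' or 'non_standard'."""
    has_standard = scope.get("standard") is not None
    has_non_standard = scope.get("non_standard") is not None
    if has_standard == has_non_standard:
        # Either both keys are present or neither is -- both are invalid.
        raise ValueError(
            "identifier scope must contain either 'standard' or 'non_standard', not both"
        )

# A scope with only a standard key passes silently.
validate_scope({"standard": "ct.gov"})
```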
---
#### `document`
Protocol document metadata and hierarchical content sections.
```json
{
"document": {
"label": "string",
"version": "string",
"status": "string",
"template": "string",
"version_date": "string"
},
"sections": [
{
"section_number": "string",
"section_title": "string",
"text": "string"
}
]
}
```
**Notes:**
- All fields in `document` are required.
- Valid `status` values: `APPROVED`, `DRAFT`, `DFT`, `FINAL`, `OBSOLETE`, `PENDING`, `PENDING REVIEW` (case-insensitive).
- `version_date` should be in ISO format (e.g. `2024-01-15`).
- Section hierarchy is determined by `section_number` depth: `"1"` = level 1, `"1.1"` = level 2, `"1.1.1"` = level 3.
- `text` content may contain HTML.
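The depth rule for `section_number` can be sketched in a few lines (the function name is illustrative; the assembler's internal helper may differ):

```python
def section_level(section_number: str) -> int:
    """Depth of a dotted section number: '1' -> 1, '1.1' -> 2, '1.1.1' -> 3."""
    # Ignore empty parts so a trailing dot ("2.") does not inflate the depth.
    return len([part for part in section_number.split(".") if part])
```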
---
#### `population`
Population definitions and eligibility criteria.
```json
{
"label": "string",
"inclusion_exclusion": {
"inclusion": ["string"],
"exclusion": ["string"]
}
}
```
**Notes:**
- All fields are required.
- Each inclusion and exclusion item is a text string describing the criterion.
- The `label` is used to generate the internal name (uppercased, spaces replaced with hyphens).
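The documented label-to-name rule can be sketched as (an illustrative helper, not the assembler's actual function):

```python
def population_name(label: str) -> str:
    """Uppercase the label and replace spaces with hyphens, per the documented rule."""
    return label.upper().replace(" ", "-")
```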
---
#### `amendments`
Study amendment information. Can be `null` or empty to skip amendment processing entirely.
```json
{
"identifier": "string",
"summary": "string",
"reasons": {
"primary": "string",
"secondary": "string"
},
"impact": {
"safety_and_rights": {
"safety": { "substantial": boolean, "reason": "string" },
"rights": { "substantial": boolean, "reason": "string" }
},
"reliability_and_robustness": {
"reliability": { "substantial": boolean, "reason": "string" },
"robustness": { "substantial": boolean, "reason": "string" }
}
},
"enrollment": {
"value": "integer | string",
"unit": "string"
},
"scope": {
"global": boolean,
"countries": ["string"],
"regions": ["string"],
"sites": ["string"],
"unknown": ["string"]
},
"changes": [
{
"section": "string",
"description": "string",
"rationale": "string"
}
]
}
```
**Notes:**
- `reasons` values use `CODE:DECODE` format (e.g. `"C207609:New Safety Information Available"`).
- Valid reason codes: `C207612` (Regulatory Agency Request), `C207608` (New Regulatory Guidance), `C207605` (IRB/IEC Feedback), `C207609` (New Safety Information), `C207606` (Manufacturing Change), `C207602` (IMP Addition), `C207601` (Change In Strategy), `C207600` (Change In Standard Of Care), `C207607` (New Data Available), `C207604` (Investigator/Site Feedback), `C207611` (Recruitment Difficulty), `C207603` (Inconsistency/Error In Protocol), `C207610` (Protocol Design Error), `C17649` (Other), `C48660` (Not Applicable).
- `enrollment` is optional. The `value` is converted to an integer internally.
- `scope` is optional. Items in `unknown` are resolved to country or region codes via ISO 3166 lookup. Empty strings in `unknown` are skipped.
- `changes` section references use `"NUMBER, TITLE"` format (e.g. `"1.5, Safety Considerations"`), which are matched against document sections.
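The two delimited string formats above split cleanly with `str.partition`. A minimal sketch, assuming only the documented formats (helper names are illustrative):

```python
def split_reason(value: str) -> tuple[str, str]:
    """Split a 'CODE:DECODE' reason string into its code and decode parts."""
    code, _, decode = value.partition(":")
    return code, decode


def split_change_section(reference: str) -> tuple[str, str]:
    """Split a 'NUMBER, TITLE' change reference into number and title."""
    number, _, title = reference.partition(", ")
    return number, title
```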
---
#### `study_design`
Study design structure and trial phase.
```json
{
"label": "string",
"rationale": "string",
"trial_phase": "string"
}
```
**Notes:**
- All fields are required.
- Valid `trial_phase` values: `0`, `PRE-CLINICAL`, `1`, `I`, `1-2`, `1/2`, `1/2/3`, `1/3`, `1A`, `IA`, `1B`, `IB`, `2`, `II`, `2-3`, `II-III`, `2A`, `IIA`, `2B`, `IIB`, `3`, `III`, `3A`, `IIIA`, `3B`, `IIIB`, `4`, `IV`, `5`, `V`, `2/3/4`. Prefixes `PHASE` or `TRIAL` are automatically stripped.
- Default intervention model is Parallel Study (CDISC code C82639).
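The prefix-stripping step can be sketched as follows; this is an illustrative normalizer based only on the rule stated above, not the assembler's actual implementation:

```python
def normalize_phase(raw: str) -> str:
    """Strip an optional 'PHASE' or 'TRIAL' prefix and normalize case."""
    value = raw.strip().upper()
    for prefix in ("PHASE", "TRIAL"):
        if value.startswith(prefix):
            value = value[len(prefix):].strip()
    return value
```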
---
#### `soa` (Schedule of Activities)
Timeline data including epochs, visits, timepoints, activities, and conditions. This entire section is optional.
```json
{
"epochs": {
"items": [
{ "text": "string" }
]
},
"visits": {
"items": [
{
"text": "string",
"references": ["string"]
}
]
},
"timepoints": {
"items": [
{
"index": "string | integer",
"text": "string",
"value": "string | integer",
"unit": "string"
}
]
},
"windows": {
"items": [
{
"before": integer,
"after": integer,
"unit": "string"
}
]
},
"activities": {
"items": [
{
"name": "string",
"visits": [
{
"index": integer,
"references": ["string"]
}
],
"children": [
{
"name": "string",
"visits": [
{
"index": integer,
"references": ["string"]
}
],
"actions": {
"bcs": ["string"]
}
}
],
"actions": {
"bcs": ["string"]
}
}
]
},
"conditions": {
"items": [
{
"reference": "string",
"text": "string"
}
]
}
}
```
**Notes:**
- Epochs, visits, and timepoints arrays must be parallel (same length, aligned by index).
- `windows` must also be parallel with timepoints.
- A negative timepoint `value` places the timepoint before the reference anchor; the first non-negative value determines the anchor point.
- `references` on visits and activities are condition keys that link to entries in the `conditions` array.
- `children` are sub-activities nested under a parent activity.
- `actions.bcs` lists Biomedical Concept names. Known concepts are resolved from the CDISC BC library; unknown names create surrogate BiomedicalConcept objects.
- Supported time units: `years`/`yrs`/`yr`, `months`/`mths`/`mth`, `weeks`/`wks`/`wk`, `days`/`dys`/`dy`, `hours`/`hrs`/`hr`, `minutes`/`mins`/`min`, `seconds`/`secs`/`sec` (case-insensitive).
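The unit aliases above reduce to a simple case-insensitive lookup table. A sketch of that mapping (names are illustrative; the canonical spellings are assumed to be the full plural forms):

```python
_UNIT_ALIASES = {
    "years": ("years", "yrs", "yr"),
    "months": ("months", "mths", "mth"),
    "weeks": ("weeks", "wks", "wk"),
    "days": ("days", "dys", "dy"),
    "hours": ("hours", "hrs", "hr"),
    "minutes": ("minutes", "mins", "min"),
    "seconds": ("seconds", "secs", "sec"),
}

# Invert the table so every alias maps to its canonical unit.
_ALIAS_TO_CANONICAL = {
    alias: canonical
    for canonical, aliases in _UNIT_ALIASES.items()
    for alias in aliases
}


def canonical_unit(unit: str) -> str:
    """Map a unit spelling (case-insensitive) to its canonical form."""
    return _ALIAS_TO_CANONICAL[unit.strip().lower()]
```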
---
#### `study`
Core study information and metadata.
```json
{
"name": {
"identifier": "string",
"acronym": "string",
"compound": "string"
},
"label": "string",
"version": "string",
"rationale": "string",
"description": "string",
"sponsor_approval_date": "string",
"confidentiality": "string",
"original_protocol": "string | boolean"
}
```
**Notes:**
- `name` is required. At least one of `identifier`, `acronym`, or `compound` must be non-empty. Priority order: `identifier` > `acronym` > `compound`. The name is auto-generated (uppercased, non-alphanumeric characters removed).
- `version` and `rationale` are required.
- `label` is optional; used as fallback if name generation produces an empty string.
- `description`, `sponsor_approval_date`, `confidentiality`, and `original_protocol` are all optional.
- `original_protocol` is converted to boolean: `"true"`, `"1"`, `"yes"`, `"y"` map to `true` (case-insensitive).
- `sponsor_approval_date` should be in ISO format (e.g. `2024-01-15`).
- When present, `confidentiality`, `original_protocol`, `compound_codes`, `compound_names`, `sponsor_signatory`, and `medical_expert` are stored as extension attributes on the study version.
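The name-generation and boolean-coercion rules can be sketched together. This is an illustrative reading of the notes above; in particular, the assumption that the priority order falls through to the next candidate when generation yields an empty string is an interpretation, not confirmed behavior:

```python
import re


def generate_study_name(name: dict, label: str = "") -> str:
    """Pick identifier > acronym > compound, uppercase it, and strip
    non-alphanumeric characters; fall back to the label if empty."""
    for key in ("identifier", "acronym", "compound"):
        candidate = name.get(key) or ""
        generated = re.sub(r"[^A-Z0-9]", "", candidate.upper())
        if generated:
            return generated
    return label


def to_bool(value) -> bool:
    """'true', '1', 'yes', 'y' (case-insensitive) map to True."""
    return str(value).strip().lower() in {"true", "1", "yes", "y"}
```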
---
### Converting Studies
```python
converter = USDM4().convert()
# Transform data structures as needed
```
### Expanding Timelines
```python
expander = USDM4().expander(wrapper)
# Process schedule timeline expansion
```
## API Classes
USDM4 includes 73 domain model classes covering:
| Domain | Classes |
|--------|---------|
| Study Structure | `Study`, `StudyVersion`, `StudyDesign`, `StudyArm`, `StudyEpoch`, `StudyElement` |
| Interventions | `StudyIntervention`, `Activity`, `Administration`, `Procedure`, `Encounter` |
| Population | `StudyDesignPopulation`, `AnalysisPopulation`, `EligibilityCriterion`, `SubjectEnrollment` |
| Documents | `StudyDefinitionDocument`, `StudyDefinitionDocumentVersion`, `Amendment` |
| Coding | `Code`, `AliasCode`, `BiomedicalConcept`, `Objective`, `Endpoint` |
| Timelines | `ScheduleTimeline`, `ScheduledActivityInstance`, `ScheduledDecisionInstance` |
| Organization | `StudyIdentifier`, `Organization`, `StudySite` |
## Development
### Running Tests
```bash
pytest
```
Tests require 100% code coverage.
### Code Formatting
```bash
ruff format
ruff check
```
### Building the Package
```bash
python3 -m build --sdist --wheel
```
### Publishing
```bash
twine upload dist/*
```
## Related Projects
- [usdm3](https://pypi.org/project/usdm3/) - USDM Version 3 support
## License
This project is licensed under the GNU General Public License v3.0 - see the [LICENSE](LICENSE) file for details.
| text/markdown | D Iberson-Hurst | null | null | null | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent"
] | [] | null | null | null | [] | [] | [] | [
"usdm3==0.12.1",
"simple_error_log>=0.7.0",
"python-dateutil==2.9.0.post0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T15:13:10.837481 | usdm4-0.18.0.tar.gz | 2,227,255 | ab/53/fd9cbe10b15a9e8fe30eda7b9fedd6860ab5893d3ac981a192f067109b30/usdm4-0.18.0.tar.gz | source | sdist | null | false | 13d529331cd4d06100d5e271dd881535 | 1abd791d887f333bebab697c1dc8486fde0395d8e29144ee6ebc1012f1d6b1e2 | ab53fd9cbe10b15a9e8fe30eda7b9fedd6860ab5893d3ac981a192f067109b30 | null | [
"LICENSE"
] | 179 |
2.4 | openmeter | 1.0.0a225 | Client for OpenMeter: Real-Time and Scalable Usage Metering | # OpenMeter Python SDK
[On PyPI](https://pypi.org/project/openmeter)
This package is generated by `@typespec/http-client-python` with TypeSpec.
## Prerequisites
- Python 3.9 or later is required to use this package.
## Install
> The Python SDK is in preview mode.
```sh
pip install --pre openmeter
# or using an exact version
pip install openmeter==1.0.0bXXX
```
## Examples
### Setup
#### Synchronous Client
```python
from openmeter import Client
client = Client(
endpoint="https://openmeter.cloud",
token="your-api-token",
)
```
#### Async Client
```python
from openmeter.aio import Client
client = Client(
endpoint="https://openmeter.cloud",
token="your-api-token",
)
```
### Ingest an Event
#### Synchronous
```python
import datetime
import uuid
from openmeter.models import Event
# Create an Event instance (following CloudEvents specification)
event = Event(
id=str(uuid.uuid4()),
source="my-app",
specversion="1.0",
type="prompt",
subject="customer-1",
time=datetime.datetime.now(datetime.timezone.utc),
data={
"tokens": 100,
"model": "gpt-4o",
"type": "input",
},
)
# Ingest the event
client.events.ingest_event(event)
```
#### Async
```python
import datetime
import uuid
import asyncio
from openmeter.aio import Client
from openmeter.models import Event
async def main():
async with Client(
endpoint="https://openmeter.cloud",
token="your-api-token",
) as client:
# Create an Event instance (following CloudEvents specification)
event = Event(
id=str(uuid.uuid4()),
source="my-app",
specversion="1.0",
type="prompt",
subject="customer-1",
time=datetime.datetime.now(datetime.timezone.utc),
data={
"tokens": 100,
"model": "gpt-4o",
"type": "input",
},
)
# Ingest the event
await client.events.ingest_event(event)
asyncio.run(main())
```
### Query Meter
#### Synchronous
```python
from openmeter.models import MeterQueryResult
# Query total values
r: MeterQueryResult = client.meters.query_json(meter_id_or_slug="tokens_total")
print("Query total values:", r.data[0].value)
```
#### Async
```python
import asyncio
from openmeter.aio import Client
from openmeter.models import MeterQueryResult
async def main():
async with Client(
endpoint="https://openmeter.cloud",
token="your-api-token",
) as client:
# Query total values
r: MeterQueryResult = await client.meters.query_json(
meter_id_or_slug="tokens_total"
)
print("Query total values:", r.data[0].value)
asyncio.run(main())
```
## Client API Reference
The OpenMeter Python SDK provides a comprehensive client interface organized into logical operation groups. Below is a complete reference of all available methods.
### Overview
| Namespace | Operation | Method | Description |
| ---------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------- |
| **Events** | | | Track usage by ingesting events |
| | Create | `client.events.ingest_event(event)` | Ingest a single event |
| | Create | `client.events.ingest_events(events)` | Ingest batch of events |
| | Create | `client.events.ingest_events_json(events_json)` | Ingest events from JSON |
| | Read | `client.events.list(**kwargs)` | List ingested events with filtering |
| | Read | `client.events_v2.list(**kwargs)` | List ingested events with advanced filtering (V2) |
| **Meters** | | | Track and aggregate usage data from events |
| | Create | `client.meters.create(meter)` | Create a new meter |
| | Read | `client.meters.get(meter_id_or_slug)` | Get a meter by ID or slug |
| | Read | `client.meters.list(**kwargs)` | List all meters |
| | Read | `client.meters.query_json(meter_id_or_slug, **kwargs)` | Query usage data in JSON format |
| | Read | `client.meters.query_csv(meter_id_or_slug, **kwargs)` | Query usage data in CSV format |
| | Read | `client.meters.query(meter_id_or_slug, **kwargs)` | Query usage data |
| | Read | `client.meters.list_subjects(meter_id_or_slug, **kwargs)` | List subjects for a meter |
| | Read | `client.meters.list_group_by_values(meter_id_or_slug, **kwargs)` | List group-by values for a meter |
| | Update | `client.meters.update(meter_id_or_slug, meter)` | Update a meter by ID or slug |
| | Delete | `client.meters.delete(meter_id_or_slug)` | Delete a meter by ID or slug |
| **Subjects** | | | Manage entities that consume resources |
| | Create | `client.subjects.upsert(subjects)` | Create or update one or multiple subjects |
| | Read | `client.subjects.get(subject_id_or_key)` | Get a subject by ID or key |
| | Read | `client.subjects.list()` | List all subjects |
| | Delete | `client.subjects.delete(subject_id_or_key)` | Delete a subject by ID or key |
| **Customers** | | | Manage customer information and lifecycles |
| | Create | `client.customers.create(customer)` | Create a new customer |
| | Read | `client.customers.get(customer_id_or_key, **kwargs)` | Get a customer by ID or key |
| | Read | `client.customers.list(**kwargs)` | List all customers |
| | Read | `client.customers.list_customer_subscriptions(customer_id_or_key, **kwargs)` | List customer subscriptions |
| | Update | `client.customers.update(customer_id_or_key, customer)` | Update a customer |
| | Delete | `client.customers.delete(customer_id_or_key)` | Delete a customer |
| **Customer (Single)** | | | Customer-specific operations |
| | Read | `client.customer.get_customer_access(customer_id_or_key)` | Get customer access information |
| **Customer Apps** | | | Manage customer app integrations |
| | Read | `client.customer_apps.list_app_data(customer_id_or_key, **kwargs)` | List app data for a customer |
| | Update | `client.customer_apps.upsert_app_data(customer_id_or_key, app_data)` | Upsert app data for a customer |
| | Delete | `client.customer_apps.delete_app_data(customer_id_or_key, app_id)` | Delete app data for a customer |
| **Customer Stripe** | | | Manage Stripe integration for customers |
| | Read | `client.customer_stripe.get(customer_id_or_key)` | Get Stripe customer data |
| | Update | `client.customer_stripe.upsert(customer_id_or_key, data)` | Upsert Stripe customer data |
| | Create | `client.customer_stripe.create_portal_session(customer_id_or_key, **kwargs)` | Create a Stripe customer portal session |
| **Customer Entitlement** | | | Single customer entitlement operations |
| | Read | `client.customer_entitlement.get_customer_entitlement_value(customer_id_or_key, **kwargs)` | Get customer entitlement value |
| **Customer Overrides** | | | Manage customer-specific pricing overrides |
| | Read | `client.customer_overrides.list(customer_id_or_key)` | List customer overrides |
| | Read | `client.customer_overrides.get(customer_id_or_key, override_id)` | Get a customer override |
| | Update | `client.customer_overrides.upsert(customer_id_or_key, override)` | Upsert a customer override |
| | Delete | `client.customer_overrides.delete(customer_id_or_key, override_id)` | Delete a customer override |
| **Features** | | | Define application capabilities and services |
| | Create | `client.features.create(feature)` | Create a new feature |
| | Read | `client.features.get(feature_id)` | Get a feature by ID |
| | Read | `client.features.list(**kwargs)` | List all features |
| | Delete | `client.features.delete(feature_id)` | Delete a feature by ID |
| **Plans** | | | Manage subscription plans and pricing |
| | Create | `client.plans.create(request)` | Create a new plan |
| | Read | `client.plans.get(plan_id, **kwargs)` | Get a plan by ID |
| | Read | `client.plans.list(**kwargs)` | List all plans |
| | Update | `client.plans.update(plan_id, body)` | Update a plan |
| | Delete | `client.plans.delete(plan_id)` | Delete a plan by ID |
| | Other | `client.plans.publish(plan_id)` | Publish a plan |
| | Other | `client.plans.archive(plan_id)` | Archive a plan version |
| | Other | `client.plans.next(plan_id_or_key)` | Create new draft plan version |
| **Plan Addons** | | | Manage addons assigned to plans |
| | Create | `client.plan_addons.create(plan_id, body)` | Create addon assignment for plan |
| | Read | `client.plan_addons.get(plan_id, plan_addon_id)` | Get addon assignment for plan |
| | Read | `client.plan_addons.list(plan_id, **kwargs)` | List addon assignments for plan |
| | Update | `client.plan_addons.update(plan_id, plan_addon_id, body)` | Update addon assignment for plan |
| | Delete | `client.plan_addons.delete(plan_id, plan_addon_id)` | Delete addon assignment for plan |
| **Addons** | | | Manage standalone addons available across plans |
| | Create | `client.addons.create(request)` | Create a new addon |
| | Read | `client.addons.get(addon_id, **kwargs)` | Get an addon by ID |
| | Read | `client.addons.list(**kwargs)` | List all addons |
| | Update | `client.addons.update(addon_id, request)` | Update an addon |
| | Delete | `client.addons.delete(addon_id)` | Delete an addon by ID |
| | Other | `client.addons.publish(addon_id)` | Publish an addon |
| | Other | `client.addons.archive(addon_id)` | Archive an addon |
| **Subscriptions** | | | Manage customer subscriptions |
| | Create | `client.subscriptions.create(body)` | Create a new subscription |
| | Read | `client.subscriptions.get_expanded(subscription_id, **kwargs)` | Get a subscription with expanded details |
| | Update | `client.subscriptions.edit(subscription_id, body)` | Edit a subscription |
| | Update | `client.subscriptions.change(subscription_id, body)` | Change a subscription |
| | Update | `client.subscriptions.migrate(subscription_id, body)` | Migrate subscription to a new plan version |
| | Update | `client.subscriptions.restore(subscription_id)` | Restore a canceled subscription |
| | Delete | `client.subscriptions.cancel(subscription_id, body)` | Cancel a subscription |
| | Delete | `client.subscriptions.delete(subscription_id)` | Delete a subscription |
| | Other | `client.subscriptions.unschedule_cancelation(subscription_id)` | Unschedule a subscription cancelation |
| **Subscription Addons** | | | Manage addons on subscriptions |
| | Create | `client.subscription_addons.create(subscription_id, body)` | Add an addon to a subscription |
| | Read | `client.subscription_addons.get(subscription_id, subscription_addon_id)` | Get a subscription addon |
| | Read | `client.subscription_addons.list(subscription_id, **kwargs)` | List addons on a subscription |
| | Update | `client.subscription_addons.update(subscription_id, subscription_addon_id, body)` | Update a subscription addon |
| **Entitlements** | | | Admin entitlements management |
| | Read | `client.entitlements.list(**kwargs)` | List all entitlements (admin) |
| | Read | `client.entitlements.get(entitlement_id)` | Get an entitlement by ID |
| **Entitlements V2** | | | V2 Admin entitlements management |
| | Read | `client.entitlements_v2.list(**kwargs)` | List all entitlements V2 (admin) |
| | Read | `client.entitlements_v2.get(entitlement_id_or_feature_key, **kwargs)` | Get an entitlement V2 by ID or feature key |
| **Customer Entitlements V2** | | | Manage customer entitlements (V2) |
| | Create | `client.customer_entitlements_v2.post(customer_id_or_key, body)` | Create a customer entitlement |
| | Read | `client.customer_entitlements_v2.list(customer_id_or_key, **kwargs)` | List customer entitlements |
| | Read | `client.customer_entitlements_v2.get(customer_id_or_key, entitlement_id_or_feature_key)` | Get a customer entitlement |
| | Delete | `client.customer_entitlements_v2.delete(customer_id_or_key, entitlement_id)` | Delete a customer entitlement |
| | Update | `client.customer_entitlements_v2.override(customer_id_or_key, entitlement_id_or_feature_key, override)` | Override a customer entitlement |
| **Customer Entitlement V2** | | | Single customer entitlement operations (V2) |
| | Read | `client.customer_entitlement_v2.get_grants(customer_id_or_key, entitlement_id_or_feature_key, **kwargs)` | List grants for a customer entitlement |
| | Read | `client.customer_entitlement_v2.get_customer_entitlement_value(customer_id_or_key, entitlement_id_or_feature_key, **kwargs)` | Get customer entitlement value |
| | Read | `client.customer_entitlement_v2.get_customer_entitlement_history(customer_id_or_key, entitlement_id_or_feature_key, **kwargs)` | Get customer entitlement history |
| | Create | `client.customer_entitlement_v2.create_customer_entitlement_grant(customer_id_or_key, entitlement_id_or_feature_key, grant)` | Create a grant for customer entitlement |
| | Update | `client.customer_entitlement_v2.reset_customer_entitlement(customer_id_or_key, entitlement_id, **kwargs)` | Reset customer entitlement usage |
| **Grants** | | | Admin grants management |
| | Read | `client.grants.list(**kwargs)` | List all grants (admin) |
| | Delete | `client.grants.delete(grant_id)` | Delete (void) a grant |
| **Grants V2** | | | V2 Admin grants management |
| | Read | `client.grants_v2.list(**kwargs)` | List all grants V2 (admin) |
| **Billing Profiles** | | | Manage billing profiles |
| | Create | `client.billing_profiles.create(profile)` | Create a billing profile |
| | Read | `client.billing_profiles.get(id)` | Get a billing profile by ID |
| | Read | `client.billing_profiles.list(**kwargs)` | List billing profiles |
| | Update | `client.billing_profiles.update(id, profile)` | Update a billing profile |
| | Delete | `client.billing_profiles.delete(id)` | Delete a billing profile |
| **Invoices** | | | Manage invoices |
| | Read | `client.invoices.list(**kwargs)` | List invoices |
| | Other | `client.invoices.invoice_pending_lines_action(customer_id, **kwargs)` | Invoice pending lines for customer |
| **Invoice** | | | Single invoice operations |
| | Read | `client.invoice.get_invoice(id, **kwargs)` | Get an invoice by ID |
| | Update | `client.invoice.update_invoice(id, invoice)` | Update an invoice |
| | Delete | `client.invoice.delete_invoice(id)` | Delete an invoice |
| | Other | `client.invoice.advance_action(id)` | Advance invoice to next status |
| | Other | `client.invoice.approve_action(id)` | Approve an invoice |
| | Other | `client.invoice.retry_action(id, body)` | Retry advancing invoice after failure |
| | Other | `client.invoice.void_invoice_action(id)` | Void an invoice |
| | Other | `client.invoice.recalculate_tax_action(id)` | Recalculate invoice tax amounts |
| | Other | `client.invoice.snapshot_quantities_action(id)` | Snapshot invoice quantities |
| **Customer Invoice** | | | Customer-specific invoice operations |
| | Create | `client.customer_invoice.create_pending_invoice_line(customer_id, body)` | Create pending invoice line for customer |
| | Other | `client.customer_invoice.simulate_invoice(customer_id, **kwargs)` | Simulate an invoice for a customer |
| **Apps** | | | Manage integrations and app installations |
| | Read | `client.apps.list(**kwargs)` | List installed apps |
| | Read | `client.apps.get(id)` | Get an app by ID |
| | Update | `client.apps.update(id, app)` | Update an app |
| | Delete | `client.apps.uninstall(id)` | Uninstall an app |
| **App Stripe** | | | Stripe app integration |
| | Create | `client.app_stripe.webhook(id, body)` | Handle Stripe webhook event |
| | Update | `client.app_stripe.update_stripe_api_key(id, request)` | Update Stripe API key |
| | Create | `client.app_stripe.create_checkout_session(body)` | Create Stripe checkout session |
| **App Custom Invoicing** | | | Custom invoicing app integration |
| | Other | `client.app_custom_invoicing.draft_syncronized(id, invoice_number, **kwargs)` | Notify when draft invoice synchronized |
| | Other | `client.app_custom_invoicing.finalized(id, invoice_number, **kwargs)` | Notify when invoice finalized |
| | Other | `client.app_custom_invoicing.payment_status(id, invoice_number, body)` | Update invoice payment status |
| **Marketplace** | | | App marketplace operations |
| | Read | `client.marketplace.list(**kwargs)` | List marketplace apps |
| | Read | `client.marketplace.get(app_type)` | Get marketplace app |
| | Read | `client.marketplace.get_o_auth2_install_url(app_type, **kwargs)` | Get OAuth2 install URL |
| | Create | `client.marketplace.authorize_o_auth2_install(app_type, **kwargs)` | Authorize OAuth2 installation |
| | Create | `client.marketplace.install_with_api_key(app_type, body)` | Install app with API key |
| | Create | `client.marketplace.install(app_type, body)` | Install marketplace app |
| **Notification Channels** | | | Manage notification channels |
| | Create | `client.notification_channels.create(channel)` | Create a notification channel |
| | Read | `client.notification_channels.get(channel_id)` | Get a notification channel by ID |
| | Read | `client.notification_channels.list(**kwargs)` | List notification channels |
| | Update | `client.notification_channels.update(channel_id, channel)` | Update a notification channel |
| | Delete | `client.notification_channels.delete(channel_id)` | Delete a notification channel |
| **Notification Rules** | | | Manage notification rules |
| | text/markdown | Andras Toth | 4157749+tothandras@users.noreply.github.com | null | null | Apache-2.0 | openmeter, api, client, usage, usage-based, metering, ai, aggregation, real-time, billing, cloud | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://openmeter.io | null | <4.0,>=3.9 | [] | [] | [] | [
"isodate<0.8.0,>=0.6.1",
"corehttp[aiohttp,requests]>=1.0.0b6",
"typing-extensions>=4.6.0",
"cloudevents<2.0.0,>=1.10.0",
"urllib3<3.0.0,>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://openmeter.io",
"Repository, https://github.com/openmeter/openmeter"
] | poetry/2.2.1 CPython/3.13.11 Linux/6.14.0-1018-aws | 2026-02-20T15:13:00.598195 | openmeter-1.0.0a225.tar.gz | 213,557 | 13/09/44bf0861a9d811efbfadd86049d6a049c2229e7abdb27930d27b1c8615e4/openmeter-1.0.0a225.tar.gz | source | sdist | null | false | 48add88db6efa9bf264c70e4daaf16c0 | 8d1e72435bea0bf53ae430825da24cd6b625f00f0dbccd1ca2f6350945279cc5 | 130944bf0861a9d811efbfadd86049d6a049c2229e7abdb27930d27b1c8615e4 | null | [] | 157 |
2.4 | msgspex | 1.0.0 | An extra msgspec collection of custom types, casters, encode hooks and decode hooks. | # msgspex
A collection of `msgspec` extensions: custom types, cast helpers, decode hooks, and encode hooks.
## Quick Start
```python
import msgspex
from msgspex.custom_types import Email, datetime
value = msgspex.decoder.decode('"user@example.com"', type=Email)
dt = msgspex.decoder.decode('"2024-01-02T03:04:05Z"', type=datetime)
payload = msgspex.encoder.encode(dt)
```
After `import msgspex`, all hooks and types are registered automatically.
## Custom Types
### 1. Types from kungfu
- `Option[T]` — optional value type based on `kungfu` (`Some | Nothing | msgspec.UnsetType`).
There is also decode-hook integration for `kungfu.Sum` (not a custom type, but supported by the decoder).
### 2. Types Derived from stdlib
- `date` — re-export of `datetime.date`.
- `datetime` — meta-type that covers `StringTimestampDatetime`, `IntTimestampDatetime`, `FloatTimestampDatetime`, `ISODatetime` (alias: `isodatetime`), and `datetime.datetime`.
- `timedelta` — subclass of `datetime.timedelta` with cast support.
- `StrEnum`, `IntEnum`, `FloatEnum`, `BaseEnumMeta` — `enum` extensions for stable handling of unknown values.
- `Literal` — runtime type conceptually compatible with `typing.Literal`.
### 3. OpenAPI-Oriented Types
- `Email` — `format: email`
- `IDNEmail` — `format: idn-email`
- `URI` — `format: uri`
- `URIReference` — `format: uri-reference`
- `IRI` — `format: iri`
- `IRIReference` — `format: iri-reference`
- `Hostname` — `format: hostname`
- `IDNHostname` — `format: idn-hostname`
- `IPv4` — `format: ipv4`
- `IPv6` — `format: ipv6`
- `JsonPointer` — `format: json-pointer`
- `RelativeJsonPointer` — `format: relative-json-pointer`
- `Regex` — `format: regex`
- `Int32`, `Int64` — range-limited integer types
- `Float32`, `Float64` — finite, range-limited floating-point types
`UUID` is not redefined here because it is already supported by `msgspec`.
| text/markdown | null | luwqz1 <howluwqz1@gmail.com> | null | luwqz1 <howluwqz1@gmail.com> | MIT License Copyright (c) 2026 luwqz1 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | msgspec, custom types, decode hooks, encode hooks, msgspex, fast model, dataclasses, kungfu | [
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Quality Assurance",
"Typing :: Typed"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"kungfu-fp>=1.0.0",
"msgspec>=0.20.0"
] | [] | [] | [] | [
"Source, https://github.com/luwqz1/msgspextra",
"Bug Tracker, https://github.com/luwqz1/msgspextra/issues"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T15:12:30.273207 | msgspex-1.0.0.tar.gz | 24,166 | 7f/02/8b7d6c82e58af3067d1512bc1a412e7b8474704c012d201b71a0bdc5dd3e/msgspex-1.0.0.tar.gz | source | sdist | null | false | 6829070c77a4b80b927a71875eb3e226 | 9b84eac4dabcf603785689337fdcb6a13cb94d2723e1a55c062c19ab2d02fcd3 | 7f028b7d6c82e58af3067d1512bc1a412e7b8474704c012d201b71a0bdc5dd3e | null | [
"LICENSE"
] | 191 |
2.4 | qiskit-mcp-servers | 0.7.0 | Model Context Protocol servers for IBM Quantum services and Qiskit | # Qiskit MCP Servers
[](https://github.com/Qiskit/mcp-servers/actions/workflows/test.yml)
[](https://opensource.org/licenses/Apache-2.0)
[](https://www.python.org/downloads/)
[](https://github.com/astral-sh/ruff)
[](http://mypy-lang.org/)
[](https://registry.modelcontextprotocol.io/?q=io.github.Qiskit%2Fqiskit-mcp-server)
[](https://registry.modelcontextprotocol.io/?q=io.github.Qiskit%2Fqiskit-ibm-runtime-mcp-server)
[](https://registry.modelcontextprotocol.io/?q=io.github.Qiskit%2Fqiskit-ibm-transpiler-mcp-server)
[](https://registry.modelcontextprotocol.io/?q=io.github.Qiskit%2Fqiskit-code-assistant-mcp-server)
[](https://registry.modelcontextprotocol.io/?q=io.github.Qiskit%2Fqiskit-gym-mcp-server)
A collection of **Model Context Protocol (MCP)** servers that provide AI assistants, LLMs, and agents with seamless access to IBM Quantum services and Qiskit libraries for quantum computing development and research.
## 🌟 What is This?
This repository contains production-ready MCP servers that enable AI systems to interact with quantum computing resources through Qiskit. Instead of manually configuring quantum backends, writing boilerplate code, or managing IBM Quantum accounts, AI assistants can now:
- 🤖 **Generate intelligent quantum code** with context-aware suggestions
- 🔌 **Connect to real quantum hardware** automatically
- 📊 **Analyze quantum backends** and find optimal resources
- 🚀 **Execute quantum circuits** and monitor job status
- 💡 **Provide quantum computing assistance** with expert knowledge
## 🛠️ Available Servers
### 🔬 Qiskit MCP Server
**Core Qiskit quantum computing capabilities**
Provides quantum circuit creation, manipulation, transpilation, and serialization utilities (QASM3, QPY) for local quantum development using [Qiskit](https://github.com/Qiskit/qiskit)
**📁 Directory**: [`./qiskit-mcp-server/`](./qiskit-mcp-server/)
---
### 🧠 Qiskit Code Assistant MCP Server
**Intelligent quantum code completion and assistance**
Provides access to [IBM's Qiskit Code Assistant](https://quantum.cloud.ibm.com/docs/en/guides/qiskit-code-assistant) for AI-assisted quantum programming
**📁 Directory**: [`./qiskit-code-assistant-mcp-server/`](./qiskit-code-assistant-mcp-server/)
---
### ⚙️ Qiskit IBM Runtime MCP Server
**Complete access to IBM Quantum cloud services**
Comprehensive interface to IBM Quantum hardware via [Qiskit IBM Runtime](https://github.com/Qiskit/qiskit-ibm-runtime/)
**📁 Directory**: [`./qiskit-ibm-runtime-mcp-server/`](./qiskit-ibm-runtime-mcp-server/)
---
### 🚀 Qiskit IBM Transpiler MCP Server
**AI-powered circuit transpilation**
Access to the [qiskit-ibm-transpiler](https://github.com/Qiskit/qiskit-ibm-transpiler) library for AI-optimized circuit routing and optimization.
**📁 Directory**: [`./qiskit-ibm-transpiler-mcp-server/`](./qiskit-ibm-transpiler-mcp-server/)
---
## 🏋️ Community Servers
### 🏋️ Qiskit Gym MCP Server
**Reinforcement learning for quantum circuit synthesis**
Uses [qiskit-gym](https://github.com/rl-institut/qiskit-gym) to train RL models for optimal quantum circuit synthesis, including permutation routing, linear function synthesis, and Clifford circuits.
**📁 Directory**: [`./qiskit-gym-mcp-server/`](./qiskit-gym-mcp-server/)
## 📚 Examples
Each MCP server includes example code demonstrating how to build AI agents using LangChain:
| Server | Examples |
|--------|----------|
| Qiskit MCP Server | [`qiskit-mcp-server/examples/`](./qiskit-mcp-server/examples/) |
| Qiskit Code Assistant MCP Server | [`qiskit-code-assistant-mcp-server/examples/`](./qiskit-code-assistant-mcp-server/examples/) |
| Qiskit IBM Runtime MCP Server | [`qiskit-ibm-runtime-mcp-server/examples/`](./qiskit-ibm-runtime-mcp-server/examples/) |
| Qiskit IBM Transpiler MCP Server | [`qiskit-ibm-transpiler-mcp-server/examples/`](./qiskit-ibm-transpiler-mcp-server/examples/) |
| Qiskit Gym MCP Server (Community) | [`qiskit-gym-mcp-server/examples/`](./qiskit-gym-mcp-server/examples/) |
Each examples directory contains:
- **Jupyter Notebook** (`langchain_agent.ipynb`) - Interactive tutorial with step-by-step examples
- **Python Script** (`langchain_agent.py`) - Command-line agent with multiple LLM provider support
### Advanced: Quantum Volume Finder
The [`examples/`](./examples/) directory contains a multi-agent system that combines multiple MCP servers to **find the highest achievable Quantum Volume** for IBM Quantum backends through actual hardware execution. It demonstrates multi-server orchestration, local tool wrappers to keep large data out of the LLM context, and both single-circuit and full statistical protocol modes. See the [examples README](./examples/README.md) for details.
## 🚀 Quick Start
### Prerequisites
- **Python 3.10+** (3.11+ recommended)
- **[uv](https://astral.sh/uv)** package manager (a fast Python package manager)
- **IBM Quantum account** and API token
- **Qiskit Code Assistant access** (for code assistant server)
### Installation
#### Install from PyPI
```bash
# Install all MCP servers (core + community)
pip install qiskit-mcp-servers[all]
# Install just the core servers (default)
pip install qiskit-mcp-servers
# Install individual servers
pip install qiskit-mcp-servers[qiskit] # Qiskit server only
pip install qiskit-mcp-servers[code-assistant] # Code Assistant server only
pip install qiskit-mcp-servers[runtime] # IBM Runtime server only
pip install qiskit-mcp-servers[transpiler] # IBM Transpiler server only
pip install qiskit-mcp-servers[gym] # Qiskit Gym server only (community)
# Install community servers only
pip install qiskit-mcp-servers[community]
```
#### Install from Source
Each server is designed to run independently. Choose the server you need:
#### 🔬 Qiskit Server
```bash
cd qiskit-mcp-server
uv run qiskit-mcp-server
```
#### 🧠 Qiskit Code Assistant Server
```bash
cd qiskit-code-assistant-mcp-server
uv run qiskit-code-assistant-mcp-server
```
#### ⚙️ IBM Runtime Server
```bash
cd qiskit-ibm-runtime-mcp-server
uv run qiskit-ibm-runtime-mcp-server
```
#### 🚀 IBM Transpiler Server
```bash
cd qiskit-ibm-transpiler-mcp-server
uv run qiskit-ibm-transpiler-mcp-server
```
#### 🏋️ Qiskit Gym Server (Community)
```bash
cd qiskit-gym-mcp-server
uv run qiskit-gym-mcp-server
```
### 🔧 Configuration
#### Environment Variables
```bash
# For IBM Runtime Server
export QISKIT_IBM_TOKEN="your_ibm_quantum_token_here"
# For Code Assistant Server
export QISKIT_IBM_TOKEN="your_ibm_quantum_token_here"
export QCA_TOOL_API_BASE="https://qiskit-code-assistant.quantum.ibm.com"
```
#### Using with MCP Clients
All servers are compatible with any MCP client. Test interactively with MCP Inspector:
```bash
# Test Code Assistant Server
npx @modelcontextprotocol/inspector uv run qiskit-code-assistant-mcp-server
# Test IBM Runtime Server
npx @modelcontextprotocol/inspector uv run qiskit-ibm-runtime-mcp-server
```
## 🏗️ Architecture & Design
### 🎯 Unified Design Principles
All servers follow a **consistent, production-ready architecture**:
- **🔄 Async-first**: Built with FastMCP for high-performance async operations
- **🧪 Test-driven**: Comprehensive test suites with 65%+ coverage
- **🛡️ Type-safe**: Full mypy type checking and validation
- **📦 Modern packaging**: Standard `pyproject.toml` with hatchling build system
- **🔧 Developer-friendly**: Automated formatting (ruff), linting, and CI/CD
### 🔌 MCP Protocol Support
All servers implement the full **Model Context Protocol specification**:
- **🛠️ Tools**: Execute quantum operations (code completion, job submission, backend queries)
- **📚 Resources**: Access quantum data (service status, backend information, model details)
- **⚡ Real-time**: Async operations for responsive AI interactions
- **🔒 Secure**: Proper authentication and error handling
## 🧪 Development
### 🏃‍♂️ Running Tests
```bash
# Run tests for Code Assistant server
cd qiskit-code-assistant-mcp-server
./run_tests.sh
# Run tests for IBM Runtime server
cd qiskit-ibm-runtime-mcp-server
./run_tests.sh
```
### 🔍 Code Quality
All servers maintain high code quality standards:
- **✅ Linting**: `ruff check` and `ruff format`
- **🛡️ Type checking**: `mypy src/`
- **🧪 Testing**: `pytest` with async support and coverage reporting
- **🚀 CI/CD**: GitHub Actions for automated testing
## 📖 Resources & Documentation
### 🔗 Essential Links
- **[Model Context Protocol](https://modelcontextprotocol.io/introduction)** - Understanding MCP
- **[Qiskit IBM Runtime](https://quantum.cloud.ibm.com/docs/en/api/qiskit-ibm-runtime)** - Quantum cloud services
- **[Qiskit Code Assistant](https://quantum.cloud.ibm.com/docs/en/guides/qiskit-code-assistant)** - AI code assistance
- **[MCP Inspector](https://github.com/modelcontextprotocol/inspector)** - Interactive testing tool
- **[FastMCP](https://github.com/jlowin/fastmcp)** - High-performance MCP framework
### AI Development Assistant Support
This repository includes AI-generated code and offers comprehensive guidance for AI coding assistants (like [IBM Bob](https://www.ibm.com/products/bob), Claude Code, GitHub Copilot, Cursor AI, and others) in [AGENTS.md](AGENTS.md). This helps AI assistants provide more accurate, context-aware suggestions when working with this codebase.
## 📄 License
This project is licensed under the **Apache License 2.0**.
| text/markdown | null | "Quantum+AI Team. IBM Quantum" <Quantum.Plus.AI@ibm.com> | null | null | Apache-2.0 | ai, ibm, mcp, qiskit, quantum, quantum-computing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"qiskit-code-assistant-mcp-server>=0.3.0",
"qiskit-ibm-runtime-mcp-server>=0.5.0",
"qiskit-ibm-transpiler-mcp-server>=0.3.1",
"qiskit-mcp-server>=0.1.2",
"qiskit-code-assistant-mcp-server>=0.3.0; extra == \"all\"",
"qiskit-gym-mcp-server>=0.2.0; extra == \"all\"",
"qiskit-ibm-runtime-mcp-server>=0.5.0; extra == \"all\"",
"qiskit-ibm-transpiler-mcp-server>=0.3.1; extra == \"all\"",
"qiskit-mcp-server>=0.1.2; extra == \"all\"",
"qiskit-code-assistant-mcp-server>=0.3.0; extra == \"code-assistant\"",
"qiskit-gym-mcp-server>=0.2.0; extra == \"community\"",
"qiskit-gym-mcp-server>=0.2.0; extra == \"gym\"",
"qiskit-mcp-server>=0.1.2; extra == \"qiskit\"",
"qiskit-ibm-runtime-mcp-server>=0.5.0; extra == \"runtime\"",
"qiskit-ibm-transpiler-mcp-server>=0.3.1; extra == \"transpiler\""
] | [] | [] | [] | [
"Homepage, https://github.com/Qiskit/mcp-servers",
"Repository, https://github.com/Qiskit/mcp-servers",
"Documentation, https://github.com/Qiskit/mcp-servers#readme",
"Bug Tracker, https://github.com/Qiskit/mcp-servers/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:11:39.750346 | qiskit_mcp_servers-0.7.0.tar.gz | 989,140 | a4/b5/86139835b732640129eb86dd9f15e0f08623413f402f6468fee35a8a5357/qiskit_mcp_servers-0.7.0.tar.gz | source | sdist | null | false | d8e6523b561b82db06b1506784548dd9 | 28b16e9a908651d0d6baa3bda1f48ea6865f86bd0a633ac02f2ecfc5dd8153cf | a4b586139835b732640129eb86dd9f15e0f08623413f402f6468fee35a8a5357 | null | [
"LICENSE"
] | 169 |
2.4 | specwizard | 0.9.7.0 | . | Overview
========
.. INTRO_FLAG
SPECWIZARD is a python package to compute and analyze mock quasar absorption spectra from cosmological simulations.
.. INTRO_FLAG_END
Citing SPECWIZARD
-----------------
Installation Notes
==================
.. INSTALL_FLAG
SPECWIZARD is written in ``python3`` and stable versions are available on PyPI_, so the easiest installation method is through ``pip``:
.. code-block::
pip install specwizard
The code is under active development. Feedback and bug or error reports are greatly appreciated, either through email_ or GitHub.
.. _PyPI: https://pypi.org/project/specwizard/
.. _email: mailto:aramburo@lorentz.leidenuniv.nl
.. INSTALL_FLAG_END
| text/x-rst | null | Andres Aramburo-Garcia <aramburo@lorentz.leidenuniv.nl>, Tom Theuns <tom.theuns@durham.ac.uk> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy",
"scipy",
"swiftsimio",
"mendeleev",
"roman",
"swiftsimio",
"pyread_eagle",
"matplotlib",
"hydrangea",
"bs4",
"pyyaml",
"h5py",
"requests",
"html5lib",
"astropy"
] | [] | [] | [] | [
"Homepage, https://github.com/specwizard/specwizard",
"Bug Tracker, https://github.com/specwizard/specwizard/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:11:23.824489 | specwizard-0.9.7.0.tar.gz | 93,509 | 10/30/fcd32a755268e860c830c026a21d3be6d7db5078eb6f9086ef08ec3a78b6/specwizard-0.9.7.0.tar.gz | source | sdist | null | false | 9337c36cf955770d86ec346db5ec57b4 | b200b8d4af0da78deabea87dcd065c6d66b0a74f118e5a86b3ee1d51aac31bb5 | 1030fcd32a755268e860c830c026a21d3be6d7db5078eb6f9086ef08ec3a78b6 | null | [
"LICENSE"
] | 169 |
2.4 | qiskit-ibm-runtime-mcp-server | 0.5.0 | MCP server for IBM Quantum computing services through Qiskit IBM Runtime | # Qiskit IBM Runtime MCP Server
[](https://registry.modelcontextprotocol.io/?q=io.github.Qiskit%2Fqiskit-ibm-runtime-mcp-server)
<!-- mcp-name: io.github.Qiskit/qiskit-ibm-runtime-mcp-server -->
A comprehensive Model Context Protocol (MCP) server that provides AI assistants with access to IBM Quantum computing services through Qiskit IBM Runtime. This server enables quantum circuit creation, execution, and management directly from AI conversations.
## Features
- **Circuit Execution with Primitives**: Run circuits using EstimatorV2 (expectation values) and SamplerV2 (measurement sampling) with built-in error mitigation
- **Quantum Backend Management**: List, inspect, and get calibration data for quantum backends
- **Qubit Optimization**: Find optimal qubit chains and subgraphs based on real-time calibration data
- **Job Management**: Monitor, cancel, and retrieve job results
- **Account Management**: Easy setup and configuration of IBM Quantum accounts
## Prerequisites
- Python 3.10 or higher
- IBM Quantum account (free at [quantum.cloud.ibm.com](https://quantum.cloud.ibm.com))
- IBM Quantum API token
## Installation
### Install from PyPI
The easiest way to install is via pip:
```bash
pip install qiskit-ibm-runtime-mcp-server
```
### Install from Source
This project recommends using [uv](https://astral.sh/uv) for virtual environment and dependency management. If you don't have `uv` installed, check out the instructions in <https://docs.astral.sh/uv/getting-started/installation/>
### Setting up the Project with uv
1. **Initialize or sync the project**:
```bash
# This will create a virtual environment and install dependencies
uv sync
```
2. **Get your IBM Quantum token** (if you don't have saved credentials):
- Visit [IBM Quantum](https://quantum.cloud.ibm.com/)
- From the [dashboard](https://quantum.cloud.ibm.com/), create your API key, then copy it to a secure location so you can use it for authentication. [More information](https://quantum.cloud.ibm.com/docs/en/guides/save-credentials)
3. **Configure your credentials** (choose one method):
**Option A: Environment Variable (Recommended)**
```bash
# Copy the example environment file
cp .env.example .env
# Edit .env and add your IBM Quantum API token
export QISKIT_IBM_TOKEN="your_token_here"
# Optional: Set instance for faster startup (skips instance lookup)
export QISKIT_IBM_RUNTIME_MCP_INSTANCE="your-instance-crn"
```
**Option B: Save Credentials Locally**
```python
from qiskit_ibm_runtime import QiskitRuntimeService
# Save your credentials (one-time setup)
QiskitRuntimeService.save_account(
channel="ibm_quantum_platform",
token="your_token_here",
overwrite=True
)
```
This stores your credentials in `~/.qiskit/qiskit-ibm.json`
**Option C: Pass Token Directly**
```python
# Provide token when setting up the account
await setup_ibm_quantum_account(token="your_token_here")
```
**Credential Resolution Priority:**
The server looks for credentials in this order:
1. Explicit token passed to `setup_ibm_quantum_account()`
2. `QISKIT_IBM_TOKEN` environment variable
3. Saved credentials in `~/.qiskit/qiskit-ibm.json`
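As an illustration, the lookup order above might be sketched like this (a hypothetical `resolve_token` helper, not the server's actual implementation; the saved-credentials parsing is a simplified assumption):

```python
import json
import os
from pathlib import Path

def resolve_token(explicit_token: str = ""):
    """Illustrative sketch of the documented credential lookup order."""
    # 1. Explicit token passed by the caller
    if explicit_token:
        return explicit_token
    # 2. QISKIT_IBM_TOKEN environment variable
    env_token = os.environ.get("QISKIT_IBM_TOKEN")
    if env_token:
        return env_token
    # 3. Saved credentials in ~/.qiskit/qiskit-ibm.json (simplified parsing)
    creds_path = Path.home() / ".qiskit" / "qiskit-ibm.json"
    if creds_path.exists():
        accounts = json.loads(creds_path.read_text())
        for account in accounts.values():
            if "token" in account:
                return account["token"]
    return None
```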
**Instance Configuration (Optional):**
To speed up service initialization, you can specify your IBM Quantum instance:
- Set `QISKIT_IBM_RUNTIME_MCP_INSTANCE` environment variable with your instance CRN
- This skips the automatic instance lookup which can be slow
- Find your instance CRN in [IBM Quantum Platform](https://quantum.cloud.ibm.com/instances)
**Instance Priority:**
- If you saved credentials with an instance (via `save_account(instance="...")`), the SDK uses it automatically
- `QISKIT_IBM_RUNTIME_MCP_INSTANCE` **overrides** any instance saved in credentials
- If neither is set, the SDK performs a slow lookup across all instances
> **Note:** `QISKIT_IBM_RUNTIME_MCP_INSTANCE` is an MCP server-specific variable, not a standard Qiskit SDK environment variable.
## Quick Start
### Running the Server
```bash
uv run qiskit-ibm-runtime-mcp-server
```
The server will start and listen for MCP connections.
### Basic Usage Examples
#### Async Usage (MCP Server)
```python
# 1. Setup IBM Quantum Account (optional if credentials already configured)
# Will use saved credentials or environment variable if token not provided
await setup_ibm_quantum_account() # Uses saved credentials/env var
# OR
await setup_ibm_quantum_account(token="your_token_here") # Explicit token
# 2. List Available Backends (no setup needed if credentials are saved)
backends = await list_backends()
print(f"Available backends: {len(backends['backends'])}")
# 3. Get the least busy backend
backend = await least_busy_backend()
print(f"Least busy backend: {backend}")
# 4. Get backend's properties
backend_props = await get_backend_properties("backend_name")
print(f"Backend_name properties: {backend_props}")
# 5. List recent jobs
jobs = await list_my_jobs(10)
print(f"Last 10 jobs: {jobs}")
# 6. Get job status
job_status = await get_job_status("job_id")
print(f"Job status: {job_status}")
# 7. Get job results (when job is DONE)
results = await get_job_results("job_id")
print(f"Counts: {results['counts']}")
# 8. Cancel job
cancelled_job = await cancel_job("job_id")
print(f"Cancelled job: {cancelled_job}")
```
#### Sync Usage (Scripts, Jupyter)
For frameworks that don't support async operations, all async functions have a `.sync` attribute:
```python
from qiskit_ibm_runtime_mcp_server.ibm_runtime import (
setup_ibm_quantum_account,
list_backends,
least_busy_backend,
get_backend_properties,
get_backend_calibration,
get_coupling_map,
find_optimal_qubit_chains,
find_optimal_qv_qubits,
run_estimator,
run_sampler,
list_my_jobs,
get_job_status,
get_job_results,
cancel_job
)
# Optional: Setup account if not already configured
# Will automatically use QISKIT_IBM_TOKEN env var or saved credentials
setup_ibm_quantum_account.sync() # No token needed if already configured
# Use .sync for synchronous execution - no setup needed if credentials saved
backends = list_backends.sync()
print(f"Available backends: {backends['total_backends']}")
# Get least busy backend
backend = least_busy_backend.sync()
print(f"Least busy: {backend['backend_name']}")
# Find optimal qubit chains for linear experiments
chains = find_optimal_qubit_chains.sync(backend['backend_name'], chain_length=5)
print(f"Best chain: {chains['chains'][0]['qubits']}")
# Find optimal qubits for Quantum Volume experiments
qv_qubits = find_optimal_qv_qubits.sync(backend['backend_name'], num_qubits=5)
print(f"Best QV subgraph: {qv_qubits['subgraphs'][0]['qubits']}")
# Works in Jupyter notebooks (handles nested event loops automatically)
jobs = list_my_jobs.sync(limit=5)
print(f"Recent jobs: {len(jobs['jobs'])}")
```
**LangChain Integration Example:**
> **Note:** To run LangChain examples you will need to install the dependencies:
> ```bash
> pip install langchain langchain-mcp-adapters langchain-openai python-dotenv
> ```
```python
import asyncio
import os
from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
# Load environment variables (QISKIT_IBM_TOKEN, OPENAI_API_KEY, etc.)
load_dotenv()
async def main():
# Configure MCP client
mcp_client = MultiServerMCPClient({
"qiskit-ibm-runtime": {
"transport": "stdio",
"command": "qiskit-ibm-runtime-mcp-server",
"args": [],
"env": {
"QISKIT_IBM_TOKEN": os.getenv("QISKIT_IBM_TOKEN", ""),
"QISKIT_IBM_RUNTIME_MCP_INSTANCE": os.getenv("QISKIT_IBM_RUNTIME_MCP_INSTANCE", ""),
},
}
})
# Use persistent session for efficient tool calls
async with mcp_client.session("qiskit-ibm-runtime") as session:
tools = await load_mcp_tools(session)
# Create agent with LLM
llm = ChatOpenAI(model="gpt-5.2", temperature=0)
agent = create_agent(llm, tools)
# Run a query
response = await agent.ainvoke({"messages": [{"role": "user", "content": "What QPUs are available and which one is least busy?"}]})
print(response)
asyncio.run(main())
```
For more LLM providers (Anthropic, Google, Ollama, Watsonx) and detailed examples including Jupyter notebooks, see the [examples/](examples/) directory.
## API Reference
### Tools
#### `setup_ibm_quantum_account(token: str = "", channel: str = "ibm_quantum_platform")`
Configure IBM Quantum account with API token.
**Parameters:**
- `token` (optional): IBM Quantum API token. If not provided, the function will:
1. Check for `QISKIT_IBM_TOKEN` environment variable
2. Use saved credentials from `~/.qiskit/qiskit-ibm.json`
- `channel`: Service channel (default: `"ibm_quantum_platform"`)
**Returns:** Setup status and account information
**Note:** If you already have saved credentials or have set the `QISKIT_IBM_TOKEN` environment variable, you can call this function without parameters or skip it entirely and use other functions directly.
#### `list_backends()`
Get list of available quantum backends.
**Returns:** Array of backend information including:
- Name, status, queue length
- Number of qubits, coupling map
- Simulator vs. hardware designation
#### `least_busy_backend()`
Get the current least busy IBM Quantum backend available.
**Returns:** The backend with the fewest number of pending jobs
#### `get_backend_properties(backend_name: str)`
Get detailed properties of specific backend.
**Returns:** Complete backend configuration including:
- Hardware specifications
- Gate set and coupling map
- Current operational status
- Queue information
#### `get_coupling_map(backend_name: str)`
Get the coupling map (qubit connectivity) for a backend with detailed analysis.
Supports both real backends (requires credentials) and fake backends (no credentials needed).
Use the `fake_` prefix for offline testing (e.g., `fake_sherbrooke`, `fake_brisbane`).
**Parameters:**
- `backend_name`: Name of the backend (e.g., `ibm_brisbane` or `fake_sherbrooke`)
**Returns:** Connectivity information including:
- `edges`: List of [control, target] qubit connection pairs
- `adjacency_list`: Neighbor mapping for each qubit
- `bidirectional`: Whether all connections work in both directions
- `num_qubits`: Total qubit count
**Use cases:**
- Circuit optimization and qubit mapping
- SWAP gate minimization planning
- Offline testing with fake backends
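As a sketch of how the returned fields relate, the `adjacency_list` can be derived from the `edges` pairs like this (an illustrative helper, not the server's code):

```python
from collections import defaultdict

def edges_to_adjacency(edges, num_qubits):
    """Build a neighbor mapping from [control, target] connection pairs."""
    adjacency = defaultdict(set)
    for control, target in edges:
        adjacency[control].add(target)
        adjacency[target].add(control)  # treat links as undirected neighbors
    return {q: sorted(adjacency[q]) for q in range(num_qubits)}

# A 3-qubit line 0-1-2
print(edges_to_adjacency([[0, 1], [1, 2]], 3))  # {0: [1], 1: [0, 2], 2: [1]}
```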
#### `get_backend_calibration(backend_name: str, qubit_indices: list[int] | None = None)`
Get calibration data for a backend including T1, T2 coherence times and error rates.
**Parameters:**
- `backend_name`: Name of the backend (e.g., `ibm_brisbane`)
- `qubit_indices` (optional): List of specific qubit indices. If not provided, returns data for the first 10 qubits.
**Returns:** Calibration data including:
- T1 and T2 coherence times (in microseconds)
- Qubit frequency (in GHz)
- Readout errors for each qubit
- Gate errors for common gates (x, sx, cx, etc.)
- `faulty_qubits`: List of non-operational qubit indices
- `faulty_gates`: List of non-operational gates with affected qubits
- Last calibration timestamp
**Note:** For static backend info (processor_type, backend_version, quantum_volume), use `get_backend_properties` instead.
#### `find_optimal_qubit_chains(backend_name, chain_length, num_results, metric)`
Find optimal linear qubit chains for quantum experiments based on connectivity and calibration data.
Algorithmically identifies the best qubit chains by combining coupling map connectivity
with real-time calibration data. Essential for experiments requiring linear qubit arrangements.
**Parameters:**
- `backend_name`: Name of the backend (e.g., `ibm_brisbane`)
- `chain_length`: Number of qubits in the chain (default: 5, range: 2-20)
- `num_results`: Number of top chains to return (default: 5, max: 20)
- `metric`: Scoring metric to optimize:
- `two_qubit_error`: Minimize sum of CX/ECR gate errors (default)
- `readout_error`: Minimize sum of measurement errors
- `combined`: Weighted combination of gate errors, readout, and coherence
**Returns:** Ranked chains with detailed metrics:
- `qubits`: Ordered list of qubit indices in the chain
- `score`: Total score (lower is better)
- `qubit_details`: T1, T2, readout_error for each qubit
- `edge_errors`: Two-qubit gate error for each connection
**Use cases:**
- Select qubits for variational quantum algorithms (VQE, QAOA)
- Plan linear qubit layouts for error correction experiments
- Identify high-fidelity qubit paths for state transfer
- Optimize qubit selection for 1D physics simulations
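A minimal sketch of the default `two_qubit_error` scoring (a hypothetical `chain_score` helper; the server's actual scoring may differ):

```python
def chain_score(qubits, edge_errors):
    """Sum two-qubit gate errors along consecutive chain links (lower is better)."""
    return sum(
        edge_errors[(min(a, b), max(a, b))]
        for a, b in zip(qubits, qubits[1:])
    )

# Hypothetical per-edge CX/ECR error rates
errors = {(0, 1): 0.008, (1, 2): 0.012, (2, 3): 0.006}
print(chain_score([0, 1, 2, 3], errors))  # ≈ 0.026
```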
#### `find_optimal_qv_qubits(backend_name, num_qubits, num_results, metric)`
Find optimal qubit subgraphs for Quantum Volume experiments.
Unlike linear chains, Quantum Volume benefits from densely connected qubit sets where
qubits can interact with minimal SWAP operations. This tool finds connected subgraphs
and ranks them by connectivity and calibration quality.
**Parameters:**
- `backend_name`: Name of the backend (e.g., `ibm_brisbane`)
- `num_qubits`: Number of qubits in the subgraph (default: 5, range: 2-10)
- `num_results`: Number of top subgraphs to return (default: 5, max: 20)
- `metric`: Scoring metric to optimize:
- `qv_optimized`: Balanced scoring for QV (connectivity + errors + coherence) (default)
- `connectivity`: Maximize internal edges and minimize path lengths
- `gate_error`: Minimize total two-qubit gate errors on internal edges
**Returns:** Ranked subgraphs with detailed metrics:
- `qubits`: List of qubit indices in the subgraph (sorted)
- `score`: Total score (lower is better)
- `internal_edges`: Number of edges within the subgraph
- `connectivity_ratio`: internal_edges / max_possible_edges
- `average_path_length`: Mean shortest path between qubit pairs
- `qubit_details`: T1, T2, readout_error for each qubit
- `edge_errors`: Two-qubit gate error for each internal edge
**Use cases:**
- Select optimal qubits for Quantum Volume experiments
- Find densely connected regions for random circuit sampling
- Identify high-quality qubit clusters for variational algorithms
- Plan qubit allocation for algorithms requiring all-to-all connectivity
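The `connectivity_ratio` field can be illustrated with a small sketch (a hypothetical helper, not the server's implementation):

```python
def connectivity_ratio(qubits, edges):
    """Fraction of possible internal edges present in a subgraph:
    internal_edges / max_possible_edges."""
    qset = set(qubits)
    internal = sum(1 for a, b in edges if a in qset and b in qset)
    max_possible = len(qubits) * (len(qubits) - 1) // 2
    return internal / max_possible

# A square of 4 qubits: 4 internal edges out of 6 possible; edge [3, 4] is external
print(connectivity_ratio([0, 1, 2, 3], [[0, 1], [1, 2], [2, 3], [3, 0], [3, 4]]))
```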
#### `list_my_jobs(limit: int = 10)`
Get list of recent jobs from your account.
**Parameters:**
- `limit`: The number of jobs to retrieve (default: 10)
#### `get_job_status(job_id: str)`
Check status of submitted job.
**Parameters:**
- `job_id`: The ID of the job whose status to retrieve
**Returns:** Current job status, creation date, backend info
**Job Status Values:**
- `INITIALIZING`: Job is being prepared
- `QUEUED`: Job is waiting in the queue
- `RUNNING`: Job is currently executing
- `DONE`: Job completed successfully
- `CANCELLED`: Job was cancelled
- `ERROR`: Job failed with an error
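A minimal polling sketch built on these status values (the `get_status` callable is a stand-in for the `get_job_status` tool, shown here with a stub):

```python
import time

TERMINAL_STATES = {"DONE", "CANCELLED", "ERROR"}

def wait_for_job(get_status, job_id, poll_seconds=0.0):
    """Poll a status callable until the job reaches a terminal state."""
    while True:
        status = get_status(job_id)["job_status"]
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)

# Stubbed status sequence for illustration
seq = iter(["QUEUED", "RUNNING", "DONE"])
print(wait_for_job(lambda _id: {"job_status": next(seq)}, "job-123"))  # DONE
```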
#### `get_job_results(job_id: str)`
Retrieve measurement results from a completed quantum job.
**Parameters:**
- `job_id`: The ID of the completed job
**Returns:** Dictionary containing:
- `status`: "success", "pending", or "error"
- `job_id`: The job ID
- `job_status`: Current status of the job
- `counts`: Dictionary of measurement outcomes and their counts (e.g., `{"00": 2048, "11": 2048}`)
- `shots`: Total number of shots executed
- `backend`: Name of the backend used
- `execution_time`: Quantum execution time in seconds (if available)
- `message`: Status message
**Example workflow:**
```python
# 1. Submit job
result = await run_sampler(circuit, backend_name)
job_id = result["job_id"]
# 2. Check status (poll until DONE)
status = await get_job_status(job_id)
print(f"Status: {status['job_status']}")
# 3. When DONE, retrieve results
if status['job_status'] == 'DONE':
results = await get_job_results(job_id)
print(f"Counts: {results['counts']}")
```
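Once results arrive, the `counts` dictionary can be normalized into outcome probabilities with a small helper (illustrative, not part of the server's API):

```python
def counts_to_probabilities(counts):
    """Normalize a counts dictionary (as returned in results['counts'])."""
    shots = sum(counts.values())
    return {bitstring: n / shots for bitstring, n in counts.items()}

# Bell-state counts from the documentation example
print(counts_to_probabilities({"00": 2048, "11": 2048}))  # {'00': 0.5, '11': 0.5}
```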
#### `cancel_job(job_id: str)`
Cancel a running or queued job.
**Parameters:**
- `job_id`: The ID of the job to cancel
#### `run_estimator(circuit, observables, ...)`
Run a quantum circuit using the Qiskit Runtime EstimatorV2 primitive. Computes expectation values of observables with built-in error mitigation.
**Parameters:**
- `circuit`: Quantum circuit (OpenQASM 3.0/2.0 string or base64-encoded QPY)
- `observables`: Observable(s) to measure. Accepts:
- Single Pauli string: `"ZZ"`
- List of Pauli strings: `["IIXY", "ZZII"]`
- Weighted Hamiltonian: `[("XX", 0.5), ("ZZ", -0.3)]`
- `parameter_values` (optional): Values for parameterized circuits
- `backend_name` (optional): Backend name. If not provided, uses the least busy backend.
- `circuit_format`: `"auto"` (default), `"qasm3"`, or `"qpy"`
- `optimization_level`: Transpilation level 0-3 (default: 1)
- `resilience_level`: Error mitigation level 0-2 (default: 1)
- `zne_mitigation`: Enable Zero Noise Extrapolation (default: True)
- `zne_noise_factors` (optional): Noise factors for ZNE (default: (1, 1.5, 2))
**Returns:** Job submission status including `job_id`, `backend`, and `error_mitigation` summary.
**Note:** Jobs run asynchronously. Use `get_job_status` to monitor and `get_job_results` to retrieve expectation values.
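The three accepted observable formats can be illustrated with a small normalizer (a hypothetical helper showing the accepted shapes, not the server's parsing code):

```python
def normalize_observables(observables):
    """Turn any accepted observable format into (pauli_string, coefficient) pairs."""
    if isinstance(observables, str):       # single Pauli string, e.g. "ZZ"
        return [(observables, 1.0)]
    pairs = []
    for obs in observables:
        if isinstance(obs, str):           # list of Pauli strings
            pairs.append((obs, 1.0))
        else:                              # weighted (pauli, coeff) Hamiltonian term
            pauli, coeff = obs
            pairs.append((pauli, float(coeff)))
    return pairs

print(normalize_observables("ZZ"))                         # [('ZZ', 1.0)]
print(normalize_observables([("XX", 0.5), ("ZZ", -0.3)]))  # [('XX', 0.5), ('ZZ', -0.3)]
```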
#### `run_sampler(circuit, ...)`
Run a quantum circuit using the Qiskit Runtime SamplerV2 primitive. Returns measurement outcome samples with built-in error mitigation.
**Parameters:**
- `circuit`: Quantum circuit (OpenQASM 3.0/2.0 string or base64-encoded QPY). Must include measurement operations.
- `backend_name` (optional): Backend name. If not provided, uses the least busy backend.
- `shots`: Number of measurement repetitions (default: 4096)
- `circuit_format`: `"auto"` (default), `"qasm3"`, or `"qpy"`
- `dynamical_decoupling`: Suppress decoherence during idle periods (default: True)
- `dd_sequence`: DD pulse sequence: `"XX"`, `"XpXm"`, or `"XY4"` (default)
- `twirling`: Pauli twirling on 2-qubit gates (default: True)
- `measure_twirling`: Measurement twirling for readout error mitigation (default: True)
**Returns:** Job submission status including `job_id`, `backend`, `shots`, and `error_mitigation` summary.
**Note:** Jobs run asynchronously. Use `get_job_status` to monitor and `get_job_results` to retrieve measurement counts.
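As an illustration of a circuit `run_sampler` accepts, here is an OpenQASM 3.0 Bell-state circuit with the required measurements (the commented call is a sketch, not executed here):

```python
# OpenQASM 3.0 Bell-state circuit; SamplerV2 circuits must include measurements
bell_qasm3 = """
OPENQASM 3.0;
include "stdgates.inc";
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c[0] = measure q[0];
c[1] = measure q[1];
"""

# Hypothetical call sketch (requires a configured IBM Quantum account):
# result = await run_sampler(circuit=bell_qasm3, shots=4096)
print("measure" in bell_qasm3)  # True
```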
#### `list_saved_accounts()`
List all IBM Quantum accounts saved on disk.
**Returns:** Dictionary containing:
- `status`: "success" or "error"
- `accounts`: Dictionary of saved accounts (keyed by account name)
- Each account contains: channel, url, token (masked for security)
- `message`: Status message
**Note:** Tokens are masked in the response, showing only the last 4 characters.
#### `delete_saved_account(account_name: str)`
Delete a saved IBM Quantum account from disk.
**WARNING:** This permanently removes credentials from `~/.qiskit/qiskit-ibm.json`. The operation cannot be undone.
**Parameters:**
- `account_name`: Name of the saved account to delete. Use `list_saved_accounts()` to find available names.
**Returns:** Dictionary containing:
- `status`: "success" or "error"
- `deleted`: Boolean indicating if deletion was successful
- `message`: Status message
#### `active_account_info()`
Get information about the currently active IBM Quantum account.
**Returns:** Dictionary containing:
- `status`: "success" or "error"
- `account_info`: Account details including channel, url, token (masked for security)
**Note:** Tokens are masked in the response, showing only the last 4 characters.
#### `active_instance_info()`
Get the Cloud Resource Name (CRN) of the currently active instance.
**Returns:** Dictionary containing:
- `status`: "success" or "error"
- `instance_crn`: The CRN string identifying the active instance
#### `available_instances()`
List all IBM Quantum instances available to the active account.
**Returns:** Dictionary containing:
- `status`: "success" or "error"
- `instances`: List of available instances with CRN, plan, name, and pricing info
- `total_instances`: Count of available instances
#### `usage_info()`
Get usage statistics and quota information for the active instance.
**Returns:** Dictionary containing:
- `status`: "success" or "error"
- `usage`: Usage metrics including:
- `usage_consumed_seconds`: Time consumed this period
- `usage_limit_seconds`: Total quota for the period
- `usage_remaining_seconds`: Remaining quota
- `usage_limit_reached`: Boolean indicating if limit is reached
- `usage_period`: Current billing period
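The derived fields presumably relate by simple arithmetic; a sketch under that assumption (not the service's actual code):

```python
def summarize_usage(consumed_s: float, limit_s: float) -> dict:
    """Relate the usage fields listed above (assumed arithmetic)."""
    return {
        "usage_consumed_seconds": consumed_s,
        "usage_limit_seconds": limit_s,
        # Remaining quota cannot go below zero
        "usage_remaining_seconds": max(limit_s - consumed_s, 0.0),
        "usage_limit_reached": consumed_s >= limit_s,
    }

print(summarize_usage(1200.0, 3600.0))
```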
### Resources
#### `ibm://status`
Get current IBM Quantum service status and connection info.
#### `circuits://bell-state`
Pre-built 2-qubit Bell state circuit creating |Phi+> = (|00> + |11>)/sqrt(2). Pass the returned `circuit` field directly to `run_sampler`. Expected results: ~50% '00' and ~50% '11'.
#### `circuits://ghz-state`
Pre-built 3-qubit GHZ state circuit creating (|000> + |111>)/sqrt(2). Expected results: ~50% '000' and ~50% '111'.
#### `circuits://random`
Pre-built 4-qubit quantum random number generator. Each qubit is put in superposition and measured. Expected results: all 16 outcomes with ~6.25% probability each.
#### `circuits://superposition`
Simplest quantum circuit: single qubit Hadamard gate creating (|0> + |1>)/sqrt(2). Expected results: ~50% '0' and ~50% '1'.
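As a sanity check on the statistics quoted for `circuits://random`, a classical stand-in (not a quantum simulation; it only uses Python's `random` module) shows what a uniform 4-bit distribution over 4096 shots looks like:

```python
import random
from collections import Counter

random.seed(0)
shots = 4096
# Draw 4 random bits per shot, formatted as a bitstring like '0101'
counts = Counter(format(random.getrandbits(4), "04b") for _ in range(shots))

# 16 outcomes, each expected near 4096 / 16 = 256 (~6.25%)
for bitstring, n in sorted(counts.items()):
    print(bitstring, n, f"{n / shots:.2%}")
```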
## Security Considerations
- **Store IBM Quantum tokens securely**: Never commit tokens to version control
- **Use environment variables for production deployments**: Set `QISKIT_IBM_TOKEN` environment variable
- **Credential Priority**: The server automatically resolves credentials in this order:
1. Explicit token parameter (highest priority)
2. `QISKIT_IBM_TOKEN` environment variable
3. Saved credentials in `~/.qiskit/qiskit-ibm.json` (lowest priority)
- **Token Validation**: The server rejects placeholder values like `<PASSWORD>`, `<TOKEN>`, etc., to prevent accidental credential corruption
- **Implement rate limiting for production use**: Monitor API request frequency
- **Monitor quantum resource consumption**: Track job submissions and backend usage
## Contributing
Contributions are welcome! Areas for improvement:
- Additional error mitigation/correction techniques
- Other qiskit-ibm-runtime features
### Other ways of testing and debugging the server
> _**Note**: to launch the MCP inspector you will need to have [`node` and `npm`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)_
1. From a terminal, go into the cloned repo directory
1. Switch to the virtual environment
```sh
source .venv/bin/activate
```
1. Run the MCP Inspector:
```sh
npx @modelcontextprotocol/inspector uv run qiskit-ibm-runtime-mcp-server
```
1. Open your browser to the URL shown in the console message, e.g.:
```
MCP Inspector is up and running at http://localhost:5173
```
## Testing
This project includes comprehensive unit and integration tests.
### Running Tests
**Quick test run:**
```bash
./run_tests.sh
```
**Manual test commands:**
```bash
# Install test dependencies
uv sync --group dev --group test
# Run all tests
uv run pytest
# Run only unit tests
uv run pytest -m "not integration"
# Run only integration tests
uv run pytest -m "integration"
# Run tests with coverage
uv run pytest --cov=src --cov-report=html
# Run specific test file
uv run pytest tests/test_server.py -v
```
### Test Structure
- `tests/test_server.py` - Unit tests for server functions
- `tests/test_sync.py` - Unit tests for synchronous execution
- `tests/test_integration.py` - Integration tests
- `tests/conftest.py` - Test fixtures and configuration
### Test Coverage
The test suite covers:
- ✅ Service initialization and account setup
- ✅ Backend listing, calibration, and analysis
- ✅ Circuit execution with EstimatorV2 and SamplerV2 primitives
- ✅ Job management and monitoring
- ✅ Synchronous execution (`.sync` methods)
- ✅ Error handling and input validation
- ✅ Integration scenarios
- ✅ Resource and tool handlers
| text/markdown | null | "Quantum+AI Team. IBM Quantum" <Quantum.Plus.AI@ibm.com> | null | null | Apache-2.0 | null | [] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"fastmcp<3,>=2.8.1",
"nest-asyncio>=1.5.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"qiskit-ibm-runtime>=0.40.0",
"qiskit-mcp-server",
"bandit[toml]>=1.7.0; extra == \"all\"",
"mypy>=1.15.0; extra == \"all\"",
"pre-commit>=3.5.0; extra == \"all\"",
"pytest-asyncio>=0.21.0; extra == \"all\"",
"pytest-cov>=4.1.0; extra == \"all\"",
"pytest-mock>=3.11.0; extra == \"all\"",
"pytest>=7.4.0; extra == \"all\"",
"ruff>=0.11.13; extra == \"all\"",
"bandit[toml]>=1.7.0; extra == \"dev\"",
"mypy>=1.15.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"ruff>=0.11.13; extra == \"dev\"",
"langchain-mcp-adapters>=0.1.0; extra == \"examples\"",
"langchain>=1.2.0; extra == \"examples\"",
"python-dotenv>=1.0.0; extra == \"examples\"",
"pytest-asyncio>=0.21.0; extra == \"test\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest-mock>=3.11.0; extra == \"test\"",
"pytest>=7.4.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/Qiskit/mcp-servers",
"Repository, https://github.com/Qiskit/mcp-servers",
"Documentation, https://github.com/Qiskit/mcp-servers/tree/main/qiskit-ibm-runtime-mcp-server#readme",
"Bug Tracker, https://github.com/Qiskit/mcp-servers/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:10:55.663627 | qiskit_ibm_runtime_mcp_server-0.5.0.tar.gz | 186,550 | 91/7c/e23ea257ef2394500b620a622bb2c0c765cfe072b79fb1f70b9bacf47217/qiskit_ibm_runtime_mcp_server-0.5.0.tar.gz | source | sdist | null | false | 8d559cbb3f3c797db41bb27e44012206 | fa44e1482f91cd2c817ea3a253990dc6d6c7ff91a245811576a66d7a28fa34e7 | 917ce23ea257ef2394500b620a622bb2c0c765cfe072b79fb1f70b9bacf47217 | null | [
"LICENSE"
] | 190 |
2.4 | django-management-ui | 0.4.1 | Reusable Django app for executing management commands from Django Admin | # django-management-ui
Run any Django management command from the admin panel. No models, no migrations — just install and go.
**[Documentation](https://mlopotkov.gitlab.io/django-management-ui/)**
The app introspects argparse definitions of your commands and dynamically builds forms with appropriate widgets: text inputs, checkboxes, dropdowns, number fields, file uploads for `Path` arguments.
## Features
- Automatic discovery of all registered management commands
- Dynamic form generation from argparse argument types
- File upload with temp file lifecycle for `Path` arguments (upload or enter path as text)
- Grouped display: command-specific args + collapsible BaseCommand options
- Output capture (stdout/stderr) with execution time
- Superuser-only access
- Appears in the app list on the Django Admin index page as a "Commands" section
- Command blacklist to hide dangerous commands
- Configurable execution timeout (default 30s)
- Output truncation for large command outputs
## Quick start
### 1. Install
```bash
pip install django-management-ui
```
### 2. Add to INSTALLED_APPS
```python
INSTALLED_APPS = [
...
"management_ui",
]
```
### 3. Add URL configuration
```python
# urls.py
from django.contrib import admin
from django.urls import include, path
urlpatterns = [
path("admin/", include("management_ui.urls")),
path("admin/", admin.site.urls),
]
```
That's it. Navigate to `/admin/commands/` to see all available commands.
## How it works
```
argparse introspection → ArgumentInfo → Django form field → execute command → CommandResult
```
1. **Discovery** — finds all registered management commands via `django.core.management.get_commands()`
2. **Introspection** — parses argparse `_actions` into `ArgumentInfo` dataclasses
3. **Form generation** — maps argument types to Django form fields:
| Argument type | Form field |
|---|---|
| `store_true` / `store_false` | `BooleanField` (checkbox) |
| `choices` | `ChoiceField` (dropdown) |
| `type=int` | `IntegerField` |
| `type=float` | `FloatField` |
| `type=Path` | `PathField` (file upload + text input toggle) |
| default / `type=str` | `CharField` |
4. **Execution** — runs command via `call_command()`, captures stdout/stderr, measures duration
5. **Result** — displays output with success/error styling
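The introspection-to-field mapping in the table can be sketched with the standard library alone (the `field_for` helper here is illustrative, not the app's internal API):

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
parser.add_argument("--mode", choices=["fast", "slow"])
parser.add_argument("--count", type=int)
parser.add_argument("--ratio", type=float)
parser.add_argument("--input", type=Path)
parser.add_argument("--name")

def field_for(action: argparse.Action) -> str:
    """Map an argparse action to a form-field name, following the table above."""
    if isinstance(action, (argparse._StoreTrueAction, argparse._StoreFalseAction)):
        return "BooleanField"
    if action.choices is not None:
        return "ChoiceField"
    if action.type is int:
        return "IntegerField"
    if action.type is float:
        return "FloatField"
    if action.type is Path:
        return "PathField"
    return "CharField"

for action in parser._actions[1:]:  # skip the built-in --help action
    print(action.dest, "->", field_for(action))
```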
## Path argument support
When a command argument has `type=Path`, the form renders a dual-mode widget:
- **File upload mode** (default) — upload a file, it's saved to `/tmp/` with a unique name, path passed to command, temp file cleaned up after execution
- **Text path mode** — enter a server path as text (toggle via "or enter path as text" link)
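The upload-mode lifecycle described above can be sketched as follows (a hypothetical helper, assuming the temp file is created, passed to the command, then unlinked):

```python
import tempfile
from pathlib import Path

def run_with_uploaded_file(data: bytes, run_command):
    """Save uploaded bytes to a uniquely named temp file, run the command
    with that path, and clean up afterwards (sketch of the lifecycle)."""
    with tempfile.NamedTemporaryFile(delete=False, suffix=".upload") as f:
        f.write(data)
        tmp_path = Path(f.name)
    try:
        return run_command(tmp_path)
    finally:
        tmp_path.unlink(missing_ok=True)
```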
## Requirements
- Python >= 3.11
- Django >= 4.2
## License
MIT
| text/markdown | null | Mikhail Lopotkov <dev-mlopotkov@yandex.ru> | null | null | MIT | admin, commands, django, management | [
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"django>=4.2"
] | [] | [] | [] | [
"Homepage, https://mlopotkov.gitlab.io/django-management-ui/",
"Documentation, https://mlopotkov.gitlab.io/django-management-ui/",
"Repository, https://gitlab.com/mlopotkov/django-management-ui"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:10:09.738281 | django_management_ui-0.4.1.tar.gz | 544,879 | 71/d8/f94a522728db7013939bf15b76b5fc7286e5bbdee50efbd2b6a30a068426/django_management_ui-0.4.1.tar.gz | source | sdist | null | false | bf59817e87ab27e875fe536b7a12e12f | 32a666f5a65c8dfa3d799d7f38ac3f8567b40e920112ed0089ee8cbc7e122ac3 | 71d8f94a522728db7013939bf15b76b5fc7286e5bbdee50efbd2b6a30a068426 | null | [
"LICENSE"
] | 163 |
2.4 | calfkit | 0.1.7 | Build loosely coupled, event-driven AI workflows and agents. | # 🐮 Calfkit SDK
[](https://pypi.org/project/calfkit/)
[](https://pepy.tech/projects/calfkit)
[](https://pypi.org/project/calfkit/)
[](LICENSE)
The SDK to build event-driven, distributed AI agents.
Calfkit lets you compose agents with independent services—chat, tools, routing—that communicate asynchronously. Add agent capabilities without coordination. Scale each component independently. Stream agent outputs to any downstream system.
Agents should work like real teams, with independent, distinct roles, async communication, and the ability to onboard new teammates or tools without restructuring the whole org. Build AI employees that integrate.
```bash
pip install calfkit
```
## Why Event-Driven?
Building agents like traditional web applications, with tight coupling and synchronous API calls, creates the same scalability problems that plagued early microservices:
- Tight coupling: Changing one tool or agent breaks dependent agents and tools
- Scaling bottlenecks: Since all agents and tools live on one runtime, everything must scale together
- Siloed outputs: Agent and tool outputs stay trapped in your AI layer; streaming outputs to external dependencies is not as natural as in distributed, event-driven designs
Event-driven architectures provide the solution. Instead of direct API calls between components, agents and tools asynchronously communicate. Each component runs independently, scales horizontally, and outputs can flow anywhere: CRMs, data warehouses, analytics platforms, other agents, or even more tools.
## Why Use Calfkit?
Calfkit is a Python SDK that builds event-driven agents out of the box. You get all the benefits of an asynchronous, distributed system (loose coupling, horizontal scalability, durability) without the complexity of managing event-driven infrastructure and orchestration yourself.
- Distributed agents out of the box: Build event-driven, multi-service agents without writing orchestration code or managing infrastructure
- Add agent capabilities without touching existing code: Deploy new tool capabilities as independent services that agents can dynamically discover, no need to touch your agent code
- Scale what you need, when you need it: Chat handling, tool execution, and routing each scale independently based on demand
- Nothing gets lost: Event persistence ensures reliable message delivery and traceability, even during service failures or restarts
- High throughput under pressure: Asynchronous communication decouples requests from processing, so Calfkit agents work through bursty traffic reliably, maximizing throughput
- Real-time responses: Low-latency event processing enables agents to react instantly to incoming data
- Development team independence: Because of the decoupled design, dev teams can develop and deploy chat, tools, routing, and upstream or downstream dependencies in parallel without cross-team collaboration overhead
- Universal data flow: Decoupling enables data to flow freely in both directions.
- Downstream, agent outputs can be streamed to any system (CRMs, customer data platforms, warehouses, or even another AI workflow).
- Upstream, tools can wrap any data sources and deploy independently, no coordination needed.
## Quick Start
### Prerequisites
- Python 3.10 or later
- Docker installed and running (for local testing with a Calfkit broker)
- OpenAI API key (or another OpenAI API compliant LLM provider)
### Install
```bash
pip install calfkit
```
### ☁️ Calfkit Cloud (Coming Soon)
Skip the infrastructure. Calfkit Cloud is a fully-managed Kafka service built for Calfkit AI agents and multi-agent teams. No server infrastructure to self-host or maintain, with built-in observability and agent-event tracing.
Coming soon. [Fill out the interest form →](https://forms.gle/Rk61GmHyJzequEPm8)
### Start Local Calfkit Server (Requires Docker)
Calfkit uses Kafka as the event broker. Run the following command to clone the [calfkit-broker](https://github.com/calf-ai/calfkit-broker) repo and start a local Kafka broker container:
```shell
git clone https://github.com/calf-ai/calfkit-broker && cd calfkit-broker && make dev-up
```
Once the broker is ready, open a new terminal tab to continue with the quickstart.
### Define and Deploy the Tool Node
Define and deploy a tool as an independent service. Tools are not owned by or coupled to any specific agent—once deployed, any agent in your system can discover and invoke the tool. Deploy once, use everywhere.
```python
# weather_tool.py
import asyncio
from calfkit.nodes import agent_tool
from calfkit.broker import BrokerClient
from calfkit.runners import NodesService
# Example tool definition
@agent_tool
def get_weather(location: str) -> str:
"""Get the current weather at a location"""
return f"It's sunny in {location}"
async def main():
broker_client = BrokerClient(bootstrap_servers="localhost:9092") # Connect to Kafka broker
service = NodesService(broker_client) # Initialize a service instance
service.register_node(get_weather) # Register the tool node in the service
await service.run() # (Blocking call) Deploy the service to start serving traffic
if __name__ == "__main__":
asyncio.run(main())
```
Run the file to deploy the tool service:
```shell
python weather_tool.py
```
### Deploy the Chat Node
Deploy the LLM chat node as its own service.
```python
# chat_service.py
import asyncio
from calfkit.nodes import ChatNode
from calfkit.providers import OpenAIModelClient
from calfkit.broker import BrokerClient
from calfkit.runners import NodesService
async def main():
broker_client = BrokerClient(bootstrap_servers="localhost:9092") # Connect to Kafka broker
model_client = OpenAIModelClient(model_name="gpt-5-nano")
chat_node = ChatNode(model_client) # Inject a model client into the chat node definition so the chat deployment can perform LLM calls
service = NodesService(broker_client) # Initialize a service instance
service.register_node(chat_node) # Register the chat node in the service
await service.run() # (Blocking call) Deploy the service to start serving traffic
if __name__ == "__main__":
asyncio.run(main())
```
Set your OpenAI API key:
```shell
export OPENAI_API_KEY=sk-...
```
Run the file to deploy the chat service:
```shell
python chat_service.py
```
### Deploy the Agent Router Node
Deploy the agent router that orchestrates chat, tools, and conversation-level memory.
```python
# agent_router_service.py
import asyncio
from calfkit.nodes import agent_tool, AgentRouterNode, ChatNode
from calfkit.stores import InMemoryMessageHistoryStore
from calfkit.broker import BrokerClient
from calfkit.runners import NodesService
from weather_tool import get_weather # Import the tool, the tool definition is reusable
async def main():
broker_client = BrokerClient(bootstrap_servers="localhost:9092") # Connect to Kafka broker
router_node = AgentRouterNode(
chat_node=ChatNode(), # Provide the chat node definition for the router to orchestrate the nodes
tool_nodes=[get_weather],
system_prompt="You are a helpful assistant",
message_history_store=InMemoryMessageHistoryStore(), # Stores messages in-memory in the deployment runtime
)
service = NodesService(broker_client) # Initialize a service instance
service.register_node(router_node) # Register the router node in the service
await service.run() # (Blocking call) Deploy the service to start serving traffic
if __name__ == "__main__":
asyncio.run(main())
```
Run the file to deploy the agent router service:
```shell
python agent_router_service.py
```
### Invoke the Agent
Send a request and receive the response.
When invoking an already-deployed agent, use the `RouterServiceClient`. The node is just a configuration object, so you don't need to redefine the deployment parameters.
```python
# client.py
import asyncio
from calfkit.nodes import AgentRouterNode
from calfkit.broker import BrokerClient
from calfkit.runners import RouterServiceClient
async def main():
broker_client = BrokerClient(bootstrap_servers="localhost:9092") # Connect to Kafka broker
# Thin client - no deployment parameters needed
router_node = AgentRouterNode()
client = RouterServiceClient(broker_client, router_node)
# Invoke and wait for response
response = await client.invoke(user_prompt="What's the weather in Tokyo?")
final_msg = await response.get_final_response()
print(f"Assistant: {final_msg.text}")
if __name__ == "__main__":
asyncio.run(main())
```
Run the file to invoke the agent:
```shell
python client.py
```
The `RouterServiceClient` handles ephemeral Kafka communication and cleanup automatically. You can also stream intermediate messages:
```python
response = await client.invoke(user_prompt="What's the weather in Tokyo?")
# Stream all messages (tool calls, intermediate responses, etc.)
async for message in response.messages_stream():
print(message)
```
### Runtime Configuration (Optional)
Clients can override the system prompt and restrict available tools at invocation time without redeploying:
```python
from weather_tool import get_weather
# Client with runtime patches
router_node = AgentRouterNode(
system_prompt="You are an assistant with no tools :(", # Override the deployed system prompt
tool_nodes=[], # Patch in any subset of the deployed agent's set of tools
)
client = RouterServiceClient(broker_client, router_node)
response = await client.invoke(user_prompt="Weather in Tokyo?")
```
This lets different clients customize agent behavior per-request. Tool patching is currently limited to subsets of tools configured in the deployed router.
## Motivation
Scalable agent teams must progress beyond brittle, tightly coupled, synchronous coordination. This means embracing event-driven, asynchronous communication patterns between agents and their dependencies.
## Contact
[](https://x.com/ryanyuhater)
[](https://www.linkedin.com/company/calfkit)
## Support
If you found this project interesting or useful, please consider:
- ⭐ Starring the repository — it helps others discover it!
- 🐛 [Reporting issues](https://github.com/calf-ai/calfkit-sdk/issues)
- 🔀 Submitting PRs
## License
This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
| text/markdown | Ryan Yu | null | null | null | null | agents, ai, decoupled, distributed, event-driven, kafka, llm, workflows | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anthropic>=0.20.0",
"exceptiongroup>=1.2.2; python_version < \"3.11\"",
"faststream[kafka]>=0.6.6",
"genai-prices>=0.0.48",
"griffe>=1.14.0",
"httpx>=0.27",
"openai>=1.0.0",
"opentelemetry-api>=1.28.0",
"pydantic-graph>=1.47.0",
"pydantic>=2.12.5",
"python-dotenv>=1.2.1",
"typing-inspection>=0.4.0",
"uuid-utils>=0.14.0"
] | [] | [] | [] | [
"Homepage, https://github.com/calf-ai/calf-sdk",
"Repository, https://github.com/calf-ai/calf-sdk",
"Issues, https://github.com/calf-ai/calf-sdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:09:49.632131 | calfkit-0.1.7.tar.gz | 378,241 | 8f/d6/5893680a41c8c875be965986a4f6f791b99914107b10ec2d6fb296ab1f38/calfkit-0.1.7.tar.gz | source | sdist | null | false | 592ec8f32f205fcf6815b6532eb91d5f | d571612985c8c9e8bde4028c471a59f8ac8ef028a7ff9df894171edd6297a767 | 8fd65893680a41c8c875be965986a4f6f791b99914107b10ec2d6fb296ab1f38 | Apache-2.0 | [
"LICENSE"
] | 176 |
2.1 | mitosheet | 0.2.62 | The Mito Spreadsheet | # The Mito Spreadsheet
The Mito spreadsheet is designed to help folks automate their repetitive reporting with Python. Every edit you make to the Mito spreadsheet is automatically converted to production-ready Python code. Use spreadsheet formulas like VLOOKUP, pivot tables, and all of your favorite Excel functionality.
## Installing the Mito Spreadsheet
It is important to install the correct version of mitosheet for your version of JupyterLab.
**JupyterLab 4.x**: To install mitosheet for JupyterLab 4.x, run the following command:
```bash
pip install mitosheet
```
**JupyterLab 3.x**: To install mitosheet for JupyterLab 3.x, use the latest release of the mitosheet 0.1.x series. Run the following command:
```bash
pip install mitosheet~=0.1
```
## Codebase structure
This folder contains a variety of packages and utilities for the `mitosheet` Python package. The primary folders of interest:
- `mitosheet` contains the Python code for the `mitosheet` Python package.
- `src` contains the TypeScript, React code for the `mitosheet` JupyterLab extension front-end.
- `css` contains styling for the frontend.
- `deployment` contains scripts helpful for deploying the `mitosheet` package
## The `mitosheet` Package
The mitosheet package currently works for JupyterLab 4.0, Streamlit, and Dash.
### For Mac
We have a setup script for Mac. Just run
```
bash dev/macsetup.sh
```
#### Open JupyterLab
In a separate terminal, run
```
source venv/bin/activate
jupyter lab
```
(note that the second command can be `jupyter notebook` if you want to develop in notebook).
#### Open Streamlit
In a separate terminal, run
```
source venv/bin/activate
streamlit run /path/to/app.py
```
### For Windows
First, delete any existing virtual environment that you have in this folder, and create a new virtual environment.
On Windows (in command prompt, not powershell):
```
rmdir /s venv
python3 -m venv venv
venv\Scripts\activate.bat
```
Then, run the following commands to install a development version of `mitosheet` into the virtual environment and launch JupyterLab.
```bash
pip install -e ".[test, deploy]"
jupyter labextension develop . --overwrite
jupyter lab
```
If `pip install -e ".[test, deploy]"` fails and the folder `pip-wheel-metadata` exists in your Mito folder, delete it.
In a separate terminal, to recompile the front-end, run the following commands (`jlpm install` only needs to be run the first time).
```
jlpm install
jlpm run watch
```
NOTE: On Windows, this separate terminal _must_ be an Administrator terminal. To launch an admin terminal, search for Command Prompt, then right-click the app and click Run as administrator. Then navigate to the virtual environment, activate it, and run `jlpm run watch`.
Furthermore, if the final `jlpm run watch` or `jlpm install` command fails, you may need to run `export NODE_OPTIONS=--openssl-legacy-provider`.
### One Liner Command for Mac
```bash
deactivate; rm -rf venv; python3 -m venv venv && source venv/bin/activate && pip install -e ".[test, deploy]" && jupyter labextension develop . --overwrite && jupyter lab
```
# Testing
## Backend Tests
Run automated backend tests with
```
pytest
```
Automated tests can be found in `mitosheet/tests`. These are tests written using standard `pytest` tools, and include tests like testing the evaluate function, the MitoWidget, and all other pure Python code.
### Linting
This project has linting set up for both [Python](https://flake8.pycqa.org/en/latest/index.html) and [TypeScript](https://github.com/typescript-eslint/typescript-eslint).
Run typescript linting with the command
```
npx eslint . --ext .tsx --fix
```
### Using the fuzzer
Setting up the fuzzer is an annoying and long process, and so we do not include it in the main install commands for setting up Mito (for now, we will if we figure out how to optimize this).
To use the fuzzer, you need to install `pip install atheris`. This might work for you (it didn't for me). If it doesn't work, and you get a red error, check the error to see if it is telling you to download the latest version of clang. If it is, then try:
```
cd ~
git clone https://github.com/llvm/llvm-project.git
cd llvm-project
mkdir build
cd build
cmake -DLLVM_ENABLE_PROJECTS='clang;compiler-rt' -G "Unix Makefiles" ../llvm # NOTE: if this doesn't work, you might need to install cmake. Google how to do this
make -j 100 # This literally takes hours
```
Then, go back to the venv you want to install the fuzzer in, and run: `CLANG_BIN="/Users/nate/llvm-project/build/bin/clang" pip install atheris`, and it should work.
### Running the fuzzer
Run the fuzzer with
`python mitosheet/tests/fuzz.py`, and it will run till it hits an error.
## How the Build Works
This represents my best understanding of how the packaging process works. There might be slight misunderstandings here, so don't take this as gospel, but rather as the general shape of things.
### For JupyterLab 4 and Notebook 7
1. First, the TypeScript is compiled to JS, and placed in the `./lib` folder.
2. Then, the `./lib` and `./css` folder (specified in files) are build by the command `jupyter labextension watch .` into the `mitosheet/labextension` folder.
3. Note that `jupyter labextension watch .` figures out the source and destination locations through the `jupyterlab` information in the `package.json`.
| text/markdown | Mito Sheet | aaron@sagacollab.com | null | null | GNU Affero General Public License v3 | null | [
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: 4",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt"
] | [
"Linux"
] | https://github.com/mito-ds/mito | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/mito-ds/mito",
"Repository, https://github.com/mito-ds/mito",
"Issues, https://github.com/mito-ds/mito/issues"
] | twine/5.1.1 CPython/3.10.19 | 2026-02-20T15:09:39.329150 | mitosheet-0.2.62.tar.gz | 3,154,466 | 6a/30/54c60ecf3bd47bccb33bf586257afc64f4c2d181ffd8d6841ae173684ab2/mitosheet-0.2.62.tar.gz | source | sdist | null | false | 33361e8d62543209891e202778c14f5c | 2540a6f010414ccd5e707703426888a97c7fa2dbf30ded3a154902f1156c3cee | 6a3054c60ecf3bd47bccb33bf586257afc64f4c2d181ffd8d6841ae173684ab2 | null | [] | 259 |
2.4 | geopolrisk-py | 2.0.0 | Python library for geopolitical supply risk assessment (GeoPolRisk) | # The geopolrisk-py library documentation
The **geopolrisk-py** library implements the Geopolitical Supply Risk (GeoPolRisk) method for assessing raw material criticality in Life Cycle Assessment. It complements traditional resource and environmental indicators and can also be applied in comparative risk assessment.
The GeoPolRisk method relies on multiple data sources and non-trivial calculations, including characterization factors and supply risk scores. The **geopolrisk-py** library operationalizes the method by providing a structured and reproducible implementation that processes inputs such as raw materials, countries, and years. In addition to generic assessments, the library supports **company specific supply risk analysis**, allowing users to evaluate risks based on their own trade data.
## Features of the `geopolrisk-py` library
The library is organised into four core modules:
1. **`database.py`**
Handles loading and management of background data required by the method, including mining production data (World Mining Data), trade data (BACI), and governance indicators (World Bank). These datasets are stored in a SQLite database that is distributed with the repository and updated annually. Upon first use, the library creates a dedicated folder in the user’s home directory containing:
* `databases`: input templates and background data
* `output`: generated SQLite databases and Excel result files
* `logs`: log files for debugging and traceability
2. **`core.py`**
Implements the numerical core of the GeoPolRisk method, including calculation of HHI, import risk, GeoPolRisk scores, and characterization factors. This module executes the equations defining the method using structured inputs prepared by supporting modules.
3. **`utils.py`**
Provides data preparation and harmonisation utilities. It maps raw material and country names to standardized identifiers (HS codes, ISO codes), aligns production and trade datasets, aggregates overlapping trade codes, and ensures data consistency before calculations.
4. **`main.py`**
Provides a user-facing interface that orchestrates the full workflow. Users define raw materials, years, and economic units, and the module coordinates calls to the core and utility functions. Results are written to Excel and SQLite outputs in a structured directory layout.
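For example, the HHI computed in `core.py` is the standard Herfindahl-Hirschman concentration index; a minimal sketch (not the library's actual function signature or input format):

```python
def hhi(shares: list[float]) -> float:
    """Herfindahl-Hirschman Index: sum of squared production shares,
    with shares given as fractions of world production (illustrative)."""
    return sum(s ** 2 for s in shares)

# Three hypothetical producers holding 60%, 30%, and 10% of world production
print(hhi([0.6, 0.3, 0.1]))
```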
## License
The source code of `geopolrisk-py` is licensed under the **GNU General Public License v3.0 (GPL-3.0)**. A copy of the license is provided in the `LICENSE` file at the root of this repository.
The GPL-3.0 license applies to the original software implementation developed for the GeoPolRisk method.
Structured database files distributed with this package may contain derived data originating from third-party sources (e.g. World Mining Data, BACI, and Worldwide Governance Indicators). These underlying data remain subject to the licenses and terms of use defined by their respective original providers and are not relicensed under GPL-3.0.
## Third party data sources and licenses
The GeoPolRisk method relies on several external data sources for mining production, international trade, and governance indicators. These data sources are provided by third parties and are subject to their respective licenses and terms of use, as defined by the original data providers.
All databases distributed with this library are structured specifically for the operational implementation of the GeoPolRisk method. They are provided solely for use within this methodological context and are not intended to serve as standalone general-purpose data repositories.
### World Mining Data (WMD)
Mining production statistics are based on *World Mining Data*, published by the Austrian Federal Ministry of Finance. The production data included in this package are derived from World Mining Data and have been processed and structured specifically for use within the GeoPolRisk methodology. They do not constitute or claim to represent the official World Mining Data publication.
The licensing and terms of use are specified in the official documentation and on the publisher’s website: [World Mining Data – Austrian Federal Ministry of Finance](https://www.bmf.gv.at/en/topics/mining/mineral-resources-policy/wmd.html)
### BACI international trade database
International trade data are sourced from the BACI database developed by CEPII and derived from UN Comtrade data. BACI is distributed under the **Etalab Open License 2.0**, as specified by CEPII: [BACI database – CEPII](https://www.cepii.fr/CEPII/en/bdd_modele/bdd_modele_item.asp?id=37)
### Worldwide Governance Indicators (WGI)
Governance indicators are obtained from the Worldwide Governance Indicators project of the World Bank. These data are licensed under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license: [Worldwide Governance Indicators – World Bank](https://data360.worldbank.org/en/dataset/WB_WGI)
## Installation
### Python requirements (important)
**geopolrisk-py requires Python ≥ 3.10 and < 3.12.**
This restriction is intentional and enforced in the package metadata.
Using a newer Python version (e.g. 3.12 or 3.13) will result in installation errors.
To avoid issues, it is strongly recommended to install the library **inside a virtual environment created with Python 3.11**. One possible setup is described in the [documentation](https://geopolrisk-py.readthedocs.io/en/latest/installation.html).
### Install from PyPI
Once the correct Python environment is active:
```bash
pip install geopolrisk-py
```
### Install from source (development version)
```bash
git clone https://github.com/akoyamp/geopolrisk-py.git
cd geopolrisk-py
pip install -e .
```
## Testing
The full automated test suite is provided **only in the source repository** and is documented separately.
Please refer to the dedicated test documentation located in:
```
tests/README.md
```
That document contains:
* The scope and structure of the test suite
* Python version requirements
* Environment creation instructions
* Exact commands required to run all tests via the provided test runner
## After installation
Detailed usage instructions are available in the official documentation: [https://geopolrisk-py.readthedocs.io/en/latest/](https://geopolrisk-py.readthedocs.io/en/latest/)
The documentation includes module-level explanations, a step-by-step user guide, and example workflows. A Jupyter notebook demonstrating typical usage is provided both online and in the `examples` folder of the repository.
## Support and contact
For bug reports and feature requests, please use the GitHub issue tracker: [https://github.com/akoyamp/geopolrisk-py/issues](https://github.com/akoyamp/geopolrisk-py/issues)
For questions related to the GeoPolRisk method, interpretation of results, or academic use, contact:
**Anish Koyamparambath**
Email: [anish.koyam@hotmail.com](mailto:anish.koyam@hotmail.com)
| text/markdown | null | Anish Koyamparambath <anish.koyam@hotmail.com>, Thomas Schraml <Thomas.Schraml@uni-bayreuth.de> | null | null | null | LCA, criticality, geopolitical, supply-risk, raw-materials | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering"
] | [] | null | null | <3.12,>=3.10 | [] | [] | [] | [
"setuptools>=61.0",
"pandas<=2.3.3",
"tqdm>=4.66.4",
"geopy>=2.4.1",
"openpyxl>=3.1.2",
"setuptools; extra == \"testing\"",
"pytest; extra == \"testing\""
] | [] | [] | [] | [
"Homepage, https://github.com/akoyamp/geopolrisk-py",
"Documentation, https://geopolrisk-py.readthedocs.io/en/latest/",
"Source, https://github.com/akoyamp/geopolrisk-py"
] | twine/6.2.0 CPython/3.11.0 | 2026-02-20T15:09:34.537079 | geopolrisk_py-2.0.0.tar.gz | 30,000,602 | d5/e0/f3658091a21b33718e3ae74f02d4b3e1c2773eb549333d635c38f0910c07/geopolrisk_py-2.0.0.tar.gz | source | sdist | null | false | 4f10088a6ed96820a4e4a5c960f88816 | bacd698fa243c53dff9feb66446cab837c542ea870daeea36b86117e30302089 | d5e0f3658091a21b33718e3ae74f02d4b3e1c2773eb549333d635c38f0910c07 | null | [
"LICENSE.txt"
] | 178 |
2.4 | dayamlchecker | 1.0.0 | An LSP for Docassemble YAML interviews | # DAYamlChecker
An LSP for Docassemble YAML Interviews
## How to run
```bash
pip install .
python3 -m dayamlchecker `find . -name "*.yml" -path "*/questions/*" -not -path "*/.venv/*" -not -path "*/build/*"` # i.e. a space separated list of files
```
## MCP / LLM integration
DAYamlChecker includes an optional Model Context Protocol (MCP) server. This allows AI assistants like GitHub Copilot to validate Docassemble YAML directly within your editor.
### Quick Start
1. **Install with MCP support:**
```bash
pip install "dayamlchecker[mcp]"
```
2. **VS Code Automatic Setup:**
Open this project in VS Code. The included `.vscode/mcp.json` file will automatically configure the MCP server for you (assuming you have a `.venv` created).
For detailed instructions on installation, manual configuration, and usage with other clients, please see [docs/MCP_SERVER.md](docs/MCP_SERVER.md).
### Generate a VS Code MCP configuration
To make it easy for VS Code users to install locally, install DAYamlChecker with the `mcp` extra, then run the packaged generator to create `.vscode/mcp.json`:
```bash
# Install in the active environment
pip install "dayamlchecker[mcp]"
# Generate workspace MCP config
dayamlchecker-gen-mcp
```
Optional flags: `--venv <path>`, `--python <path>`, and `--non-interactive`.
For example, if you have a global venv in `~/venv` and a GitHub repository named
`docassemble-AssemblyLine` in which you want to make the MCP server available:
```bash
cd ~/docassemble-AssemblyLine
source ~/venv/bin/activate
pip install dayamlchecker[mcp]
dayamlchecker-gen-mcp --venv ~/venv
```
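For reference, the generated `.vscode/mcp.json` looks roughly like the following (this mirrors the server definition encoded in the click-to-install links; the exact file the generator writes may differ slightly):

```json
{
  "servers": {
    "dayamlchecker": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/python",
      "args": ["-m", "dayamlchecker.mcp.server"]
    }
  }
}
```

With `--venv ~/venv`, the `command` would instead point at that venv's Python binary.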
### Codex CLI (optional)
If you use Codex CLI/IDE and want Codex to call this MCP server:
```bash
cd /path/to/your/repo
codex mcp add dayamlchecker -- "$(pwd)/.venv/bin/python" -m dayamlchecker.mcp.server
# Or add using a global venv
codex mcp add dayamlchecker -- "~/venv/bin/python" -m dayamlchecker.mcp.server
# If the package is installed globally
codex mcp add dayamlchecker -- dayamlchecker-mcp
```
Important: The `codex mcp add` command only registers the MCP server configuration in Codex's settings; it does not create virtual environments or install the `dayamlchecker` package into the target interpreter. Make sure the selected interpreter has `dayamlchecker` installed before you add the server.
### Click-to-install for VS Code
If you want VS Code users to add the MCP server with a single click, include one of the links below. These open VS Code and pre-fill the Add MCP Server dialog. They rely on an interpreter being present at the configured path — the local link expects a repository `.venv` and the global link expects a global venv such as `~/venv`.
[Add dayamlchecker (workspace .venv)](vscode:mcp/install?%7B%22name%22%3A%22dayamlchecker%22%2C%22type%22%3A%22stdio%22%2C%22command%22%3A%22%24%7BworkspaceFolder%7D%2F.venv%2Fbin%2Fpython%22%2C%22args%22%3A%5B%22-m%22%2C%22dayamlchecker.mcp.server%22%5D%7D)
Click to add a server that uses a global `~/venv`:
[Add dayamlchecker (global ~/venv)](vscode:mcp/install?%7B%22name%22%3A%22dayamlchecker%22%2C%22type%22%3A%22stdio%22%2C%22command%22%3A%22~%2Fvenv%2Fbin%2Fpython%22%2C%22args%22%3A%5B%22-m%22%2C%22dayamlchecker.mcp.server%22%5D%7D)
Note: Some clients may not expand `~`, so replace it with the absolute path if the link doesn't work for you (e.g. `/home/yourname/venv/bin/python`). Also ensure the package is installed in the selected venv (`pip install "dayamlchecker[mcp]"`), and the `.venv` path exists with a Python binary.
Important: The `Add` links above only register the MCP server configuration in VS Code — they do **not** install the `dayamlchecker` Python package or create a virtual environment. Before clicking the link, make sure the runtime is installed in the selected venv. For example:
```bash
# create a repo venv and install the package (recommended)
python -m venv .venv
source .venv/bin/activate
pip install "dayamlchecker[mcp]"
# or for a global venv
python -m venv ~/venv
source ~/venv/bin/activate
pip install "dayamlchecker[mcp]"
```
| text/markdown | null | Bryce Willey <bryce.steven.willey@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"esprima>=4.0.1",
"mako>=1.3.10",
"pyyaml>=6.0.2",
"black>=24.0.0",
"ruamel.yaml>=0.18.0",
"mcp[cli]; extra == \"mcp\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:09:26.254725 | dayamlchecker-1.0.0.tar.gz | 26,410 | 7e/35/cc0eb17e71193b15f0d3d1503cc723ea15187dbed5dd337fde60ff4c7d7b/dayamlchecker-1.0.0.tar.gz | source | sdist | null | false | 016229f4cd805234d8e3edc971906244 | a7db1f44770d4d2b54df8b5907010005e203d260fb35a63a3151a38fa1912427 | 7e35cc0eb17e71193b15f0d3d1503cc723ea15187dbed5dd337fde60ff4c7d7b | MIT | [
"LICENSE"
] | 220 |
2.4 | mito-ai | 0.1.62 | AI chat for JupyterLab | # mito_ai
[](/actions/workflows/build.yml)
AI chat for JupyterLab. This codebase contains two main components:
1. A Jupyter server extension that handles the backend logic for the chat.
2. Several JupyterLab extensions that handle the frontend logic for interacting with the AI, including the chat sidebar and the error message rendermime.
## Requirements
- JupyterLab >= 4.0.0
## Install
To install the extension, execute:
```bash
pip install mito-ai
```
## Configuration
This extension has two AI providers: OpenAI and Mito (which calls OpenAI).
Mito is the fallback, but the number of requests is limited on the free tier.
To use OpenAI directly, you will need to create an API key on https://platform.openai.com/docs/overview.
Then set the environment variable `OPENAI_API_KEY` with that key.
The OpenAI model can be configured with one parameter:
- `OpenAIProvider.model`: Name of the AI model; default _gpt-4o-mini_.
You can set those parameters through command line when starting JupyterLab; e.g.
```sh
jupyter lab --OpenAIProvider.max_completion_tokens 20 --OpenAIProvider.temperature 1.5
```
> If a value is incorrect, an error message will be displayed in the terminal logs.
## Uninstall
To remove the extension, execute:
```bash
pip uninstall mito-ai
```
## Contributing
### Development install
To ensure consistent package management, please use `jlpm` instead of `npm` for this project.
Note: You will need NodeJS to build the extension package.
The `jlpm` command is JupyterLab's pinned version of
[yarn](https://yarnpkg.com/) that is installed with JupyterLab.
```bash
# Clone the repo to your local environment
# Change directory to the mito-ai directory
# Required to deal with Yarn 3 workspace rules
touch yarn.lock
# Install package in development mode
pip install -e ".[test]"
# Install the node modules
jlpm install
# Build the extension
jlpm build
# Link your development version of the extension with JupyterLab
jupyter labextension develop . --overwrite
# Start the jupyter server extension for development
jupyter server extension enable --py mito_ai
# (Optional) Setup fixed Jupyter token for Cursor agent testing
# This creates a development-only config that allows Cursor to automatically test changes
python dev/setup_jupyter_dev_token.py
# Watch the source directory in one terminal, automatically rebuilding when needed
# In case of Error: If this command fails because the lib directory was not created (the error will say something like
# unable to find main entry point) then run `jlpm run clean:lib` first to get rid of the old buildcache
# that might be preventing a new lib directory from getting created.
jlpm watch
```
Then, in a new terminal, run:
```bash
# Run JupyterLab in another terminal
jupyter lab --autoreload
```
With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. With the `--autoreload` flag, you don't need to refresh JupyterLab to load the change in your browser. It will launch a new window each time you save a change to the backend.
By default, the `jlpm build` command generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command:
```bash
jupyter lab build --minimize=False
```
### Development uninstall
```bash
pip uninstall mito-ai
```
In development mode, you will also need to remove the symlink created by `jupyter labextension develop`
command. To find its location, you can run `jupyter labextension list` to figure out where the `labextensions`
folder is located. Then you can remove the symlink named `mito-ai` within that folder.
### Testing the extension
#### Integration tests
Integration tests for mito-ai are written using Playwright and Galata in the mito/tests directory.
To run these tests, follow the directions in the tests/README.md file.
#### Backend Unit tests
Backend tests for mito-ai are written using pytest in the mito/mito-ai/mito_ai/tests directory.
To run the pytests, just run `pytest` in the mito-ai directory.
#### Backend Mypy tests
To run the mypy tests, just run `mypy mito_ai/ --ignore-missing-imports` in the mito-ai directory.
#### Frontend Unit tests
Frontend unit tests for mito-ai are written using Jest in the mito/mito-ai/src/tests directory.
To run the Jest tests, just run `npm test` in the mito-ai directory.
#### Frontend Tests
Frontend tests for mito-ai are written using Playwright and Galata in the mito/tests directory. See the [tests/README.md](tests/README.md) file for more information.
#### Frontend Linting
Frontend linting for mito-ai is done using ESLint in the mito-ai directory.
To run the ESLint tests, just run `jlpm eslint` in the mito-ai directory.
#### Performance Tests
Performance tests for mito-ai are written using pytest in the mito-ai/tests directory.
To run the performance tests, just run `python -m pytest mito_ai/tests/performance_test.py -v -s` in the mito-ai directory.
Note that you'll have to edit `open_ai_utils.py`, specifically the `is_running_test` condition.
#### Running Databases
To ensure reproducibility, databases, like Postgres, are created using Docker. To run:
```bash
docker-compose -f mito_ai/tests/docker/postgres.yml up
```
When you're done, stop and remove the container and its volumes with:
```bash
docker-compose -f mito_ai/tests/docker/postgres.yml down -v
```
| text/markdown | null | Aaron Diamond-Reivich <aaron@sagacollab.com> | null | null | Copyright (c) 2020-2024 Saga Inc.
See the LICENSE.txt file at the root of this monorepo for licensing information. | jupyter, jupyterlab, jupyterlab-extension | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: 4",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"analytics-python",
"anthropic>=0.6.81",
"google-genai",
"jinja2>=3.0.3",
"jupyterlab<5,>=4.1.0",
"litellm",
"openai>=1.0.0",
"pipreqs",
"pydantic",
"requests>=2.25.0",
"sqlalchemy",
"streamlit",
"tornado>=6.2.0",
"traitlets",
"unidiff",
"hatch-jupyter-builder>=0.5; extra == \"deploy\"",
"hatch-nodejs-version>=0.3.2; extra == \"deploy\"",
"hatchling>=1.27.0; extra == \"deploy\"",
"twine>=4.0.0; extra == \"deploy\"",
"mypy>=1.8.0; extra == \"test\"",
"pytest-asyncio==0.25.3; extra == \"test\"",
"pytest==8.3.4; extra == \"test\"",
"types-requests>=2.25.0; extra == \"test\"",
"types-setuptools; extra == \"test\"",
"types-tornado>=5.1.1; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://trymito.io",
"Bug Tracker, https://github.com/mito-ds/monorepo/issues",
"Repository, https://github.com/mito-ds/monorepo"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:09:23.988024 | mito_ai-0.1.62.tar.gz | 16,664,115 | 9a/b8/8c38872b81eb277a4e1f38c9d47be6545e94bc7a10c70c3670932e60e539/mito_ai-0.1.62.tar.gz | source | sdist | null | false | bb536249c6892227c3aac1e1b83fb020 | 910867920a74a0a1103cfcd941593a5321d8e715b0ba63364c68c379df1060d8 | 9ab88c38872b81eb277a4e1f38c9d47be6545e94bc7a10c70c3670932e60e539 | null | [
"LICENSE"
] | 224 |
2.4 | molprisma | 1.0.1 | Tool for fast inspection of PDB molecular files inside the terminal | # MolPrisma
This is a tool for fast inspection of PDB molecular files inside the terminal. It is very lightweight, its only dependency being the [Prisma TUI](https://github.com/DiegoBarMor/prismatui) framework (which itself has no dependencies for Linux).
## Quickstart
```bash
pip install molprisma
molprisma your_file.pdb
```
## Features

- Use the `UP`,`DOWN`,`PREVPAGE`,`NEXTPAGE`, `-` (top) and `+` (end) keys to quickly navigate through the PDB rows.
- Use the `LEFT` and `RIGHT` keys to highlight a concrete PDB section (i.e. column) and see their indices/name (according to [the standard](https://www.cgl.ucsf.edu/chimera/docs/UsersGuide/tutorials/pdbintro.html)).
- Visual separation of the PDB sections via colors also helps to easily spot offset issues.
- Show/hide whole groups of rows via a simple key press:
- `1`: Toggle between showing all or showing nothing.
- `2`: Toggle the *atoms* (lines starting with `ATOM`).
- `3`: Toggle the *heteroatoms* (lines starting with `HETATM`).
- `4`: Toggle the *metadata* (everything else not considered by `2` or `3`). It is hidden by default.
- Filter out rows that don't match a specific combination of values.
- `a`: Alternate *atom_name* value to filter.
- `r`: Alternate *residue_name* value to filter.
- `e`: Alternate *element_id* value to filter.
- `c`: Alternate *segment_id* (a.k.a chain) value to filter.
- `i`: Alternate *residue_insertion_code* value to filter.
- `l`: Alternate *altloc* (i.e. alternate location indicator) value to filter.
- Reset the shown/hidden groups and the filters at any moment by pressing `k`.
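The PDB sections MolPrisma highlights are fixed character columns, so each field of an `ATOM`/`HETATM` row can be sliced out with plain string indexing. The sketch below follows the standard column layout linked above; it is an illustration only, not MolPrisma's internals:

```python
# Fixed-column slices for ATOM/HETATM records (1-based PDB columns in comments)
def parse_atom_line(line: str) -> dict:
    return {
        "record":    line[0:6].strip(),    # cols 1-6   record name
        "serial":    int(line[6:11]),      # cols 7-11  atom serial number
        "atom_name": line[12:16].strip(),  # cols 13-16 atom name
        "altloc":    line[16].strip(),     # col  17    alternate location
        "res_name":  line[17:20].strip(),  # cols 18-20 residue name
        "chain":     line[21].strip(),     # col  22    chain identifier
        "res_seq":   int(line[22:26]),     # cols 23-26 residue sequence number
        "icode":     line[26].strip(),     # col  27    insertion code
        "x":         float(line[30:38]),   # cols 31-38 x coordinate
        "y":         float(line[38:46]),   # cols 39-46 y coordinate
        "z":         float(line[46:54]),   # cols 47-54 z coordinate
        "element":   line[76:78].strip(),  # cols 77-78 element symbol
    }

row = "ATOM      1  N   MET A   1      38.000  13.000   9.000  1.00  0.00           N"
print(parse_atom_line(row)["res_name"])  # MET
```

Off-by-one column shifts in a file immediately show up as garbage in one of these slices, which is exactly the kind of offset issue the colored sections make easy to spot.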
| text/markdown | DiegoBarMor | diegobarmor42@gmail.com | null | null | MIT | pdb rcsb molecular terminal tui protein nucleic rna dna ligand | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/diegobarmor/molprisma | null | >=3.10 | [] | [] | [] | [
"prismatui==0.3.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T15:08:56.433187 | molprisma-1.0.1.tar.gz | 11,197 | 15/a1/4123021a3f56988977896ecd9e5359bd734960cc1273fbeb8921e8e39193/molprisma-1.0.1.tar.gz | source | sdist | null | false | da318d4f54b951ae7d1fc16073ffa970 | ac68048c99ba454b8d4501383ecf561a295b254e0f432a73fa47c86caf7490da | 15a14123021a3f56988977896ecd9e5359bd734960cc1273fbeb8921e8e39193 | null | [
"LICENSE.md"
] | 188 |
2.4 | hexagon | 0.64.4 | Build your Team's CLI | # hexagon
Make your team's knowledge truly accessible, truly shared, and truly empowering by creating your own CLI.
[](https://github.com/lt-mayonesa/hexagon/actions/workflows/01-python-package.yml)
[](https://github.com/psf/black)
[](https://pypi.org/project/hexagon/)
[](https://pypi.org/project/hexagon/)
[](https://pypi.org/project/hexagon/)
[](https://pypi.org/project/hexagon/)
[](https://asciinema.org/a/Mk8of7EC0grfsSgWYrEdGCjdF)
---
## Getting Started
### Install hexagon
```bash
pipx install hexagon
```
### Create your team's CLI
Either use our [template repo](https://github.com/lt-mayonesa/hexagon-tools) or create a YAML like the following
```yaml
cli:
custom_tools_dir: . # relative to this file
name: Test CLI
command: tc
envs:
- name: dev
alias: d
- name: qa
alias: q
tools:
- name: google
alias: g
long_name: Google
description: Open google
type: web
envs:
dev: google.dev
qa: google.qa
action: open_link
- name: hello-world
alias: hw
long_name: Greet the world
type: shell
action: echo "Hello World!"
```
### Install the CLI
Run `hexagon` and select the CLI installation tool
## Options
### Theming
Hexagon supports 3 themes for now:
- default (some nice colors and decorations)
- disabled (no colors and no decorations)
- result_only (with colors but only shows the result logs)
This can be specified by the envvar `HEXAGON_THEME`, i.e.,
```bash
# assuming you installed a CLI with command tc
HEXAGON_THEME=result_only tc
```
## Development
### Pre-requisites
```bash
pip install pipenv
```
### Run:
```bash
# start a shell
pipenv shell
# install hexagon dependencies
pipenv install --dev
# run it
python -m hexagon
```
### Unit Tests:
```bash
pytest -svv tests/
```
### E2E Tests:
```bash
# first generate the translation files
.github/scripts/i18n/build.sh
# run tests
pytest -svv tests_e2e/
```
| text/markdown | Joaco Campero | juacocampero@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/lt-mayonesa/hexagon | null | >=3.10 | [] | [] | [] | [
"annotated-types==0.7.0; python_version >= \"3.8\"",
"clipboard==0.0.4",
"inquirerpy==0.3.4; python_version >= \"3.7\" and python_version < \"4.0\"",
"markdown==3.10.2; python_version >= \"3.10\"",
"markdown-it-py==4.0.0; python_version >= \"3.10\"",
"mdurl==0.1.2; python_version >= \"3.7\"",
"packaging==26.0; python_version >= \"3.8\"",
"pfzy==0.3.4; python_version >= \"3.7\" and python_version < \"4.0\"",
"prompt-toolkit==3.0.52; python_version >= \"3.8\"",
"pydantic==2.12.5; python_version >= \"3.9\"",
"pydantic-core==2.41.5; python_version >= \"3.9\"",
"pydantic-settings==2.13.1; python_version >= \"3.10\"",
"pygments==2.19.2; python_version >= \"3.8\"",
"pyperclip==1.11.0",
"python-dotenv==1.2.1; python_version >= \"3.9\"",
"rich==14.3.3; python_full_version >= \"3.8.0\"",
"ruamel.yaml==0.19.0; python_version >= \"3.9\"",
"ruamel.yaml.clibz==0.3.4; python_version >= \"3.9\"",
"typing-extensions==4.15.0; python_version >= \"3.9\"",
"typing-inspection==0.4.2; python_version >= \"3.9\"",
"wcwidth==0.2.14; python_version >= \"3.6\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/lt-mayonesa/hexagon/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:08:36.608861 | hexagon-0.64.4.tar.gz | 98,889 | a6/7c/57c8b2b8301bf661d85f2e8f855812ce5ff220998641d75d2adaa15cbf56/hexagon-0.64.4.tar.gz | source | sdist | null | false | 0926ac7b265fb08999fc0c4da5e14a3f | a4c54d09d3f8587d2c7cd19921b47ba76ed2c55a119c78035b482f995fde5acc | a67c57c8b2b8301bf661d85f2e8f855812ce5ff220998641d75d2adaa15cbf56 | null | [
"LICENSE"
] | 181 |
2.4 | vllm-sr | 0.1.0b2.dev20260220150754 | vLLM Semantic Router - Intelligent routing for Mixture-of-Models | # vLLM Semantic Router
Intelligent Router for Mixture-of-Models (MoM).
GitHub: https://github.com/vllm-project/semantic-router
## Quick Start
### Installation
```bash
# Install from PyPI
pip install vllm-sr
# Or install from source (development)
cd src/vllm-sr
pip install -e .
```
### Usage
```bash
# Initialize vLLM Semantic Router Configuration
vllm-sr init
# Start the router (includes dashboard)
# Provide your HF_TOKEN to run the evaluation tests; this is required for downloading the necessary datasets
HF_TOKEN=hf_xxx vllm-sr serve
# Open dashboard in browser
vllm-sr dashboard
# View logs
vllm-sr logs router
vllm-sr logs envoy
vllm-sr logs dashboard
# Check status
vllm-sr status
# Stop
vllm-sr stop
```
## Features
- **Router**: Intelligent request routing based on intent classification
- **Envoy Proxy**: High-performance proxy with ext_proc integration
- **Dashboard**: Web UI for monitoring and testing (http://localhost:8700)
- **Metrics**: Prometheus metrics endpoint (http://localhost:9190/metrics)
## Endpoints
After running `vllm-sr serve`, the following endpoints are available:
| Endpoint | Port | Description |
|----------|------|-------------|
| Dashboard | 8700 | Web UI for monitoring and Playground |
| API | 8888* | Chat completions API (configurable in config.yaml) |
| Metrics | 9190 | Prometheus metrics |
| gRPC | 50051 | Router gRPC (internal) |
| Jaeger UI | 16686 | Distributed tracing UI |
| Grafana (embedded) | 8700 | Dashboards at /embedded/grafana |
| Prometheus UI | 9090 | Metrics storage and querying |
*Default port, configurable via `listeners` in config.yaml
### Observability
`vllm-sr serve` automatically starts the observability stack:
- **Jaeger**: Distributed tracing embedded at http://localhost:8700/embedded/jaeger (also available directly at http://localhost:16686)
- **Grafana**: Pre-configured dashboards embedded at http://localhost:8700/embedded/grafana
- **Prometheus**: Metrics collection at http://localhost:9090
**Note**: Grafana is optimized for embedded access through the dashboard. For the best experience, use http://localhost:8700/embedded/grafana where anonymous authentication is pre-configured.
Tracing is enabled by default. Traces are visible in Jaeger under the `vllm-sr` service name.
## Configuration
### Plugin Configuration
The CLI supports configuring plugins in your routing decisions. Plugins are per-decision behaviors that customize request handling (security, caching, customization, debugging).
**Supported Plugin Types:**
- `semantic-cache` - Cache similar requests for performance
- `jailbreak` - Detect and block adversarial prompts
- `pii` - Detect and enforce PII policies
- `system_prompt` - Inject custom system prompts
- `header_mutation` - Add/modify HTTP headers
- `hallucination` - Detect hallucinations in responses
- `router_replay` - Record routing decisions for debugging
**Plugin Examples:**
1. **semantic-cache** - Cache similar requests:
```yaml
plugins:
- type: "semantic-cache"
configuration:
enabled: true
similarity_threshold: 0.92 # 0.0-1.0, higher = more strict
ttl_seconds: 3600 # Optional: cache TTL in seconds
```
2. **jailbreak** - Block adversarial prompts:
```yaml
plugins:
- type: "jailbreak"
configuration:
enabled: true
threshold: 0.8 # Optional: detection sensitivity 0.0-1.0
```
3. **pii** - Enforce PII policies:
```yaml
plugins:
- type: "pii"
configuration:
enabled: true
threshold: 0.7 # Optional: detection sensitivity 0.0-1.0
pii_types_allowed: ["EMAIL_ADDRESS"] # Optional: list of allowed PII types
```
4. **system_prompt** - Inject custom instructions:
```yaml
plugins:
- type: "system_prompt"
configuration:
enabled: true
system_prompt: "You are a helpful assistant."
mode: "replace" # "replace" (default) or "insert" (prepend)
```
5. **header_mutation** - Modify HTTP headers:
```yaml
plugins:
- type: "header_mutation"
configuration:
add:
- name: "X-Custom-Header"
value: "custom-value"
update:
- name: "User-Agent"
value: "SemanticRouter/1.0"
delete:
- "X-Old-Header"
```
6. **hallucination** - Detect hallucinations:
```yaml
plugins:
- type: "hallucination"
configuration:
enabled: true
use_nli: false # Optional: use NLI for detailed analysis
hallucination_action: "header" # "header", "body", or "none"
```
7. **router_replay** - Record decisions for debugging:
```yaml
plugins:
- type: "router_replay"
configuration:
enabled: true
max_records: 200 # Optional: max records in memory (default: 200)
capture_request_body: false # Optional: capture request payloads (default: false)
capture_response_body: false # Optional: capture response payloads (default: false)
max_body_bytes: 4096 # Optional: max bytes to capture (default: 4096)
```
**Validation Rules:**
- **Plugin Type**: Must be one of: `semantic-cache`, `jailbreak`, `pii`, `system_prompt`, `header_mutation`, `hallucination`, `router_replay`
- **enabled**: Must be a boolean (required for most plugins)
- **threshold/similarity_threshold**: Must be a float between 0.0 and 1.0
- **max_records/max_body_bytes**: Must be a positive integer
- **ttl_seconds**: Must be a non-negative integer
- **pii_types_allowed**: Must be a list of strings (if provided)
- **system_prompt**: Must be a string (if provided)
- **mode**: Must be "replace" or "insert" (if provided)
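The validation rules above can be sketched as a small checker. This is a hypothetical helper for illustration, not the CLI's actual implementation (`vllm-sr validate` is the supported path):

```python
ALLOWED_PLUGIN_TYPES = {
    "semantic-cache", "jailbreak", "pii", "system_prompt",
    "header_mutation", "hallucination", "router_replay",
}

def validate_plugin(plugin: dict) -> list[str]:
    """Return a list of validation errors (empty if the plugin entry is valid)."""
    errors = []
    if plugin.get("type") not in ALLOWED_PLUGIN_TYPES:
        errors.append(f"unknown plugin type: {plugin.get('type')!r}")
    cfg = plugin.get("configuration", {})
    if "enabled" in cfg and not isinstance(cfg["enabled"], bool):
        errors.append("enabled must be a boolean")
    for key in ("threshold", "similarity_threshold"):
        value = cfg.get(key)
        if value is not None and not (isinstance(value, (int, float)) and 0.0 <= value <= 1.0):
            errors.append(f"{key} must be a float between 0.0 and 1.0")
    for key in ("max_records", "max_body_bytes"):
        value = cfg.get(key)
        if value is not None and not (isinstance(value, int) and value > 0):
            errors.append(f"{key} must be a positive integer")
    if "ttl_seconds" in cfg and not (isinstance(cfg["ttl_seconds"], int) and cfg["ttl_seconds"] >= 0):
        errors.append("ttl_seconds must be a non-negative integer")
    if "pii_types_allowed" in cfg and not (
        isinstance(cfg["pii_types_allowed"], list)
        and all(isinstance(t, str) for t in cfg["pii_types_allowed"])
    ):
        errors.append("pii_types_allowed must be a list of strings")
    if "mode" in cfg and cfg["mode"] not in ("replace", "insert"):
        errors.append('mode must be "replace" or "insert"')
    return errors
```

For example, a `semantic-cache` plugin with `similarity_threshold: 1.5` would fail the range check, while the `pii` example above passes cleanly.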
**CLI Commands:**
```bash
# Initialize config with plugin examples
vllm-sr init
# Validate configuration (including plugins)
vllm-sr validate config.yaml
# Generate router config with plugins
vllm-sr config router --config config.yaml
```
### File Descriptor Limits
The CLI automatically sets file descriptor limits to 65,536 for Envoy proxy. To customize:
```bash
export VLLM_SR_NOFILE_LIMIT=100000 # Optional (min: 8192)
vllm-sr serve
```
## License
Apache 2.0
| text/markdown | vLLM-SR Team | null | null | null | Apache-2.0 | vllm, semantic-router, llm, routing, caching | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.7",
"pyyaml>=6.0.2",
"jinja2>=3.1.4",
"requests>=2.31.0",
"pydantic>=2.0.0",
"huggingface_hub[cli]>=0.20.0",
"pytest>=8.4.1; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/vllm-project/vllm-semantic-router",
"Documentation, https://github.com/vllm-project/vllm-semantic-router/blob/main/README.md",
"Repository, https://github.com/vllm-project/vllm-semantic-router",
"Issues, https://github.com/vllm-project/vllm-semantic-router/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:08:11.141123 | vllm_sr-0.1.0b2.dev20260220150754.tar.gz | 71,742 | 1c/47/7b20d7e201110716b970c5e5e7f89b029429646e103f2633a04224f504ae/vllm_sr-0.1.0b2.dev20260220150754.tar.gz | source | sdist | null | false | d8f3420b7221e88708c4685676e22873 | e1d10cff26b76310d20cbcd98ce51c2d0509a2fd76e4bde8da36ded8eaa61724 | 1c477b20d7e201110716b970c5e5e7f89b029429646e103f2633a04224f504ae | null | [] | 178 |
2.3 | ttyg | 3.2.0 | Natural language querying for GraphDB using LangGraph agents | <p align="center">
<img alt="Graphwise Logo" src="https://github.com/Ontotext-AD/ttyg-langgraph/blob/main/.github/Graphwise_Logo.jpg">
</p>
# Talk to Your Graph (TTYG)
TTYG is a Python module that enables Natural Language Querying (NLQ) using [GraphDB](https://graphdb.ontotext.com/) and [LangChain Agents](https://docs.langchain.com/oss/python/langchain/agents).
It includes a lightweight GraphDB client and a collection of tools designed for integration with large language model (LLM) agents.
## License
Apache-2.0 License. See [LICENSE](https://github.com/Ontotext-AD/ttyg-langgraph/blob/main/LICENSE) file for details.
## Installation
```bash
pip install ttyg
```
## Maintainers
Developed and maintained by [Graphwise](https://graphwise.ai/).
For issues or feature requests, please open [a GitHub issue](https://github.com/Ontotext-AD/ttyg-langgraph/issues).
## Usage
A sample usage is provided in [the Jupyter Notebook](https://github.com/Ontotext-AD/ttyg-langgraph/tree/main/jupyter_notebooks/NLQ_with_LangChain_Agents.ipynb), which demonstrates natural language querying using the Star Wars dataset.
### Run Jupyter Notebook
#### Prerequisites
- Install [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html). `miniconda` will suffice.
- Install [Docker](https://docs.docker.com/get-docker/). This documentation was written against Docker version `28.3.3`, which bundles Docker Compose; for earlier Docker versions you may need to install Docker Compose separately.
#### Create and activate the Conda environment
```bash
conda create --name ttyg --file conda-linux-64.lock
conda activate ttyg
```
#### Install dependencies with Poetry
Depending on the LLM provider you want to use, run one of the following:
```bash
poetry install --with llm-openai --with jupyter
# or
poetry install --with llm-anthropic --with jupyter
```
#### Run the Notebook
```bash
jupyter notebook
```
| text/markdown | null | null | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/Ontotext-AD/ttyg-langgraph | null | <3.14,>=3.12 | [] | [] | [] | [
"sparqlwrapper==2.0.0",
"langchain==1.2.10",
"langgraph==1.0.9"
] | [] | [] | [] | [
"Repository, https://github.com/Ontotext-AD/ttyg-langgraph"
] | poetry/2.1.4 CPython/3.13.12 Linux/6.11.0-1018-azure | 2026-02-20T15:08:03.003131 | ttyg-3.2.0.tar.gz | 16,854 | d5/bd/5e7fa73846d1e68551fe23cce80236f14115fd5df0c17ca2143e6787e9a4/ttyg-3.2.0.tar.gz | source | sdist | null | false | a86f78b850dd21bd947d7889faba3374 | aa19f1c3eb456283125ae2c404120605b1a55d1c0ec999970d52919107c0a9cb | d5bd5e7fa73846d1e68551fe23cce80236f14115fd5df0c17ca2143e6787e9a4 | null | [] | 170 |
2.4 | zoommap-converter | 0.4.1 | A Python CLI tool for converting Leaflet to Zoommap. | ### Leaflet-to-Zoommap Converter
This repository contains a Python CLI tool to parse Obsidian Vaults and convert Leaflet codeblocks to Zoommap ([TTRPG Tools: Maps](https://github.com/Jareika/zoom-map)) format, supporting map scales, icons and shapes.
## Overview
ZoomMap Converter is designed to facilitate the migration from the Obsidian Leaflet plugin to the ZoomMap plugin. It handles the conversion of map notes, markers, and configurations while maintaining compatibility with Obsidian vault structures.
## Features
- **Note Conversion**: Converts Leaflet-formatted codeblocks to ZoomMap format.
- **Icon Processing**: Converts custom SVG icons to Zoommap, with colour and size normalisation.
- **Error Handling**: Validation and logging for troubleshooting.
- **Path Management**: Handles Obsidian vault file paths and structures.
## Installation
### Prerequisites
- Python 3.12+
- Obsidian vault with Leaflet plugin notes
- Leaflet plugin installed and enabled
- Zoommap plugin installed and enabled
### Setup
1. Install the CLI tool using `pip`:
```bash
pip install zoommap-converter
```
2. Once installed, you can test the install was successful using:
```bash
zoommap-converter --version
```
## Usage
### Pre-flight Checklist
Before running the converter, please ensure you have satisfied the [Prerequisites](#prerequisites).
In addition, you should ensure the following conditions are met:
- If you want your markers converted to Zoommap, make sure the SVG icons are downloaded and extracted/unzipped in the vault. They must be stored in the same location as defined in the TTRPG Tools: Maps settings (the default folder is `ZoomMap/SVGs`); otherwise your markers will not be migrated and the default pin will be used. Note: custom images/SVGs are not supported at this time.

### Basic Conversion
1. Create and configure the `settings.yaml` file and ensure to include the `vault_path` and `target_path`:
```yaml
vault_path: leaflet-vault
target_path: converted-leaflet-vault
```
**Note:** See the sample [`settings.yaml`](settings.yaml) for more configuration options.
2. Set the path to your `settings.yaml` via the `SETTINGS` environment variable:
```bash
export SETTINGS=path/to/settings.yaml
zoommap-converter
```
Or pass it as an argument to the tool:
```bash
zoommap-converter --settings path/to/settings.yaml
```
### Settings
Conversion settings are stored in YAML format; this repository includes a sample [`settings.yaml`](settings.yaml). Pass the file to the converter with the `--settings` flag, or export the following environment variable:
```bash
export SETTINGS=/path/to/settings.yaml
```
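The resolution order described above (the `--settings` flag wins, falling back to the `SETTINGS` environment variable) can be sketched in plain Python. The function name is illustrative only, not part of the tool's actual internals:

```python
import os
from pathlib import Path

def resolve_settings(cli_value=None):
    """Illustrative sketch: --settings beats $SETTINGS; error if neither is set."""
    value = cli_value or os.environ.get("SETTINGS")
    if value is None:
        raise SystemExit("No settings file: pass --settings or set $SETTINGS")
    return Path(value)
```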
### CLI Arguments
The following table contains the list of CLI arguments that are used for the tool. Similar functionality can be used in the [`settings.yaml`](settings.yaml) file.
| Argument | Description | Default Value | Type/Action |
|-------------------|-------------------------------------------------------------------------------------------------|------------------------|---------------------|
| `--settings` | Path to the settings file. | `$SETTINGS` env var | `Path` |
| `--version` | Show the tool version and exit. | - | `version` |
| `--no-backup` | Do not create a backup of the original vault before conversion. | `False` | `store_true` |
| `-v`, `--verbose` | Increase output verbosity. | `False` | `store_true` |
| `--in-place` | Convert the vault in-place (no duplicate created). | `False` | `store_true` |
| `--merge-by-image` | Merge Zoommap `markers.json` files by image, rather than image + ids or image + marker_tags. | `False` | `store_true` |
| `--ignore-marker-tags` | Ignore merging by image + marker_tags, instead only merging by id. Note: this only affects the outputted Zoommap JSON file, notes with marker tags in the frontmatter will still be parsed for markers. | `False` | `store_true` |
| `--log-level` | Set the logging level. Choices: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`. | `$LOG_LEVEL` or `INFO` | `str` (uppercase) |
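As a rough illustration of how the merge flags above interact, a grouping key for markers might be computed as follows. The field names and logic are hypothetical, inferred only from the flag descriptions; the tool's real implementation may differ:

```python
def marker_merge_key(marker, merge_by_image=False, ignore_marker_tags=False):
    """Hypothetical sketch of the grouping key implied by the merge flags."""
    if merge_by_image:
        # --merge-by-image: group markers purely by the map image
        return (marker["image"],)
    if ignore_marker_tags:
        # --ignore-marker-tags: skip image + marker_tags, merge by id only
        return (marker["id"],)
    # default: image + marker_tags when tags are present, else image + id
    tags = marker.get("marker_tags")
    return (marker["image"], tuple(tags)) if tags else (marker["image"], marker["id"])
```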
## Development
### Project Structure
```
├── src
│ └── zoommap_converter
│ ├── __init__.py
│ ├── __main__.py
│ ├── app.py
│ ├── cli.py # Command-line interface
│ ├── logs.py
│ ├── bootstrap/ # Initialisation and setup
│ ├── conf/ # Settings Config
│ ├── converter/ # Core conversion logic
│ └── models/ # Data models and schemas
tests/
```
### Developer Setup
1. Clone the repository:
```bash
git clone https://codeberg.org/paddyd/zoommap-converter.git
cd zoommap-converter
```
2. Install dependencies using `uv`:
```bash
uv sync
```
3. Configure the vault path in `settings.yaml` or via environment variables
### Running Tests
```bash
uv run pytest tests
```
### Building
```bash
uv build
```
## Contributing
Contributions are welcome. Please use the provided issue and pull request templates when contributing and follow these guidelines:
1. Fork the repository
2. Create a feature branch
3. Submit a pull request with clear documentation
4. Include tests for new functionality
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Roadmap
Current items on the roadmap:
- Migrating Custom SVGs/Icons
- Overlays
If you have any other features you'd like me to include, please open a [Feature Request](https://codeberg.org/paddyd/zoommap-converter/issues/new?template=.forgejo%2fISSUE_TEMPLATE%2ffeature.md).
## Support
For issues, questions or feature requests, please file an [issue](https://codeberg.org/paddyd/zoommap-converter/issues/new).
## Acknowledgements
- [Jareika](https://github.com/Jareika) the creator of [TTRPG Tools: Maps](https://github.com/Jareika/zoom-map).
- Obsidian community for plugin development
- Font Awesome for icon assets
- Pydantic for data validation
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"obsidianmd-parser>=0.4.0",
"pillow>=12.1.0",
"pydantic>=2.12.5"
] | [] | [] | [] | [] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T15:07:52.010151 | zoommap_converter-0.4.1.tar.gz | 30,005 | 73/3e/25d496fb18bad1b3eb35996d168cebadcc71ad5ff36125407aea6e3c4e92/zoommap_converter-0.4.1.tar.gz | source | sdist | null | false | 2d309c451ee3e61ce99ac4a5e3739880 | 7d7a521b20fb2fe9f50fe6e8260210751a40223c5fa7356d6ddc0f35236331ae | 733e25d496fb18bad1b3eb35996d168cebadcc71ad5ff36125407aea6e3c4e92 | null | [
"LICENSE"
] | 187 |
2.4 | glean-api-client | 0.12.8 | Python Client SDK Generated by Speakeasy. | # Glean Python API Client
The Glean Python SDK provides convenient access to the Glean REST API from any Python 3.8+ application. It includes type hints for all request parameters and response fields, and supports both synchronous and asynchronous usage via [httpx](https://www.python-httpx.org/).
<!-- No Summary [summary] -->
## Unified SDK Architecture
This SDK combines both the Client and Indexing API namespaces into a single unified package:
- **Client API**: Used for search, retrieval, and end-user interactions with Glean content
- **Indexing API**: Used for indexing content, permissions, and other administrative operations
Each namespace has its own authentication requirements and access patterns. While they serve different purposes, having them in a single SDK provides a consistent developer experience across all Glean API interactions.
```python
# Example of accessing Client namespace
from glean.api_client import Glean
import os
with Glean(api_token="client-token", instance="instance-name") as glean:
search_response = glean.client.search.query(query="search term")
print(search_response)
# Example of accessing Indexing namespace
from glean.api_client import Glean, models
import os
with Glean(api_token="indexing-token", instance="instance-name") as glean:
document_response = glean.indexing.documents.index(
document=models.Document(
id="doc-123",
title="Sample Document",
container_id="container-456",
datasource="confluence"
)
)
```
Remember that each namespace requires its own authentication token type as described in the [Authentication Methods](https://github.com/gleanwork/api-client-python/blob/master/#authentication-methods) section.
<!-- Start Table of Contents [toc] -->
## Table of Contents
<!-- $toc-max-depth=2 -->
* [Glean Python API Client](https://github.com/gleanwork/api-client-python/blob/master/#glean-python-api-client)
* [Unified SDK Architecture](https://github.com/gleanwork/api-client-python/blob/master/#unified-sdk-architecture)
* [SDK Installation](https://github.com/gleanwork/api-client-python/blob/master/#sdk-installation)
* [IDE Support](https://github.com/gleanwork/api-client-python/blob/master/#ide-support)
* [SDK Example Usage](https://github.com/gleanwork/api-client-python/blob/master/#sdk-example-usage)
* [Authentication](https://github.com/gleanwork/api-client-python/blob/master/#authentication)
* [Available Resources and Operations](https://github.com/gleanwork/api-client-python/blob/master/#available-resources-and-operations)
* [File uploads](https://github.com/gleanwork/api-client-python/blob/master/#file-uploads)
* [Retries](https://github.com/gleanwork/api-client-python/blob/master/#retries)
* [Error Handling](https://github.com/gleanwork/api-client-python/blob/master/#error-handling)
* [Server Selection](https://github.com/gleanwork/api-client-python/blob/master/#server-selection)
* [Custom HTTP Client](https://github.com/gleanwork/api-client-python/blob/master/#custom-http-client)
* [Resource Management](https://github.com/gleanwork/api-client-python/blob/master/#resource-management)
* [Debugging](https://github.com/gleanwork/api-client-python/blob/master/#debugging)
* [Experimental Features and Deprecation Testing](https://github.com/gleanwork/api-client-python/blob/master/#experimental-features-and-deprecation-testing)
* [Development](https://github.com/gleanwork/api-client-python/blob/master/#development)
* [Maturity](https://github.com/gleanwork/api-client-python/blob/master/#maturity)
* [Contributions](https://github.com/gleanwork/api-client-python/blob/master/#contributions)
<!-- End Table of Contents [toc] -->
## SDK Installation
> [!NOTE]
> **Python version upgrade policy**
>
> Once a Python version reaches its [official end of life date](https://devguide.python.org/versions/), a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum Python version supported in the SDK will be updated.
The SDK can be installed with either *pip* or *poetry* package managers.
### PIP
*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install glean-api-client
```
### Poetry
*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add glean-api-client
```
### Shell and script usage with `uv`
You can use this SDK in a Python shell with [uv](https://docs.astral.sh/uv/) and the `uvx` command that comes with it like so:
```shell
uvx --from glean-api-client python
```
It's also possible to write a standalone Python script without needing to set up a whole project like so:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = [
# "glean-api-client",
# ]
# ///
from glean.api_client import Glean
sdk = Glean(
# SDK arguments
)
# Rest of script here...
```
Once that is saved to a file, you can run it with `uv run script.py` where
`script.py` can be replaced with the actual file name.
<!-- No SDK Installation [installation] -->
<!-- Start IDE Support [idesupport] -->
## IDE Support
### PyCharm
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.
- [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/)
<!-- End IDE Support [idesupport] -->
<!-- Start SDK Example Usage [usage] -->
## SDK Example Usage
### Example 1
```python
# Synchronous Example
from glean.api_client import Glean, models
import os
with Glean(
api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
res = glean.client.chat.create(messages=[
{
"fragments": [
models.ChatMessageFragment(
text="What are the company holidays this year?",
),
],
},
], timeout_millis=30000)
# Handle response
print(res)
```
<br/>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from glean.api_client import Glean, models
import os
async def main():
async with Glean(
api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
res = await glean.client.chat.create_async(messages=[
{
"fragments": [
models.ChatMessageFragment(
text="What are the company holidays this year?",
),
],
},
], timeout_millis=30000)
# Handle response
print(res)
asyncio.run(main())
```
### Example 2
```python
# Synchronous Example
from glean.api_client import Glean, models
import os
with Glean(
api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
res = glean.client.chat.create_stream(messages=[
{
"fragments": [
models.ChatMessageFragment(
text="What are the company holidays this year?",
),
],
},
], timeout_millis=30000)
# Handle response
print(res)
```
<br/>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from glean.api_client import Glean, models
import os
async def main():
async with Glean(
api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
res = await glean.client.chat.create_stream_async(messages=[
{
"fragments": [
models.ChatMessageFragment(
text="What are the company holidays this year?",
),
],
},
], timeout_millis=30000)
# Handle response
print(res)
asyncio.run(main())
```
<!-- End SDK Example Usage [usage] -->
<!-- Start Authentication [security] -->
## Authentication
### Per-Client Security Schemes
This SDK supports the following security scheme globally:
| Name | Type | Scheme | Environment Variable |
| ----------- | ---- | ----------- | -------------------- |
| `api_token` | http | HTTP Bearer | `GLEAN_API_TOKEN` |
To authenticate with the API, the `api_token` parameter must be set when initializing the SDK client instance. For example:
```python
from glean.api_client import Glean, models
from glean.api_client.utils import parse_datetime
import os
with Glean(
api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
glean.client.activity.report(events=[
{
"action": models.ActivityEventAction.HISTORICAL_VIEW,
"timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
"url": "https://example.com/",
},
{
"action": models.ActivityEventAction.SEARCH,
"params": {
"query": "query",
},
"timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
"url": "https://example.com/search?q=query",
},
{
"action": models.ActivityEventAction.VIEW,
"params": {
"duration": 20,
"referrer": "https://example.com/document",
},
"timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
"url": "https://example.com/",
},
])
# Use the SDK ...
```
<!-- End Authentication [security] -->
### Authentication Methods
Glean supports different authentication methods depending on which API namespace you're using:
#### Client Namespace
The Client namespace supports two authentication methods:
1. **Manually Provisioned API Tokens**
- Can be created by an Admin or a user with the API Token Creator role
- Used for server-to-server integrations
2. **OAuth**
- Requires OAuth setup to be completed by an Admin
- Used for user-based authentication flows
#### Indexing Namespace
The Indexing namespace supports only one authentication method:
1. **Manually Provisioned API Tokens**
- Can be created by an Admin or a user with the API Token Creator role
- Used for secure document indexing operations
> [!IMPORTANT]
> Client tokens **will not work** for Indexing operations, and Indexing tokens **will not work** for Client operations. You must use the appropriate token type for the namespace you're accessing.
For more information on obtaining the appropriate token type, please contact your Glean administrator.
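Because the two token types are not interchangeable, one simple pattern is to keep them in separate environment variables and select the right one per namespace. The variable names below are illustrative conventions, not something the SDK mandates:

```python
import os

# Illustrative: one env var per namespace, since tokens are not interchangeable
TOKEN_ENV = {
    "client": "GLEAN_CLIENT_API_TOKEN",
    "indexing": "GLEAN_INDEXING_API_TOKEN",
}

def token_for(namespace):
    """Return the token for the given namespace, failing loudly if it is unset."""
    var = TOKEN_ENV[namespace]
    token = os.environ.get(var)
    if not token:
        raise RuntimeError(f"Set {var} to use the {namespace} namespace")
    return token
```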
<!-- Start Available Resources and Operations [operations] -->
## Available Resources and Operations
<details open>
<summary>Available methods</summary>
### [Authentication](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/authentication/README.md)
* [checkdatasourceauth](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/authentication/README.md#checkdatasourceauth) - Check datasource authorization
### [Client.Activity](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientactivity/README.md)
* [report](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientactivity/README.md#report) - Report document activity
* [feedback](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientactivity/README.md#feedback) - Report client activity
### [Client.Agents](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/agents/README.md)
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/agents/README.md#retrieve) - Retrieve an agent
* [retrieve_schemas](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/agents/README.md#retrieve_schemas) - List an agent's schemas
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/agents/README.md#list) - Search agents
* [run_stream](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/agents/README.md#run_stream) - Create an agent run and stream the response
* [run](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/agents/README.md#run) - Create an agent run and wait for the response
### [Client.Announcements](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/announcements/README.md)
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/announcements/README.md#create) - Create Announcement
* [delete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/announcements/README.md#delete) - Delete Announcement
* [update](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/announcements/README.md#update) - Update Announcement
### [Client.Answers](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/answers/README.md)
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/answers/README.md#create) - Create Answer
* [delete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/answers/README.md#delete) - Delete Answer
* [update](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/answers/README.md#update) - Update Answer
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/answers/README.md#retrieve) - Read Answer
* [~~list~~](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/answers/README.md#list) - List Answers :warning: **Deprecated**
### [Client.Authentication](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientauthentication/README.md)
* [create_token](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientauthentication/README.md#create_token) - Create authentication token
### [Client.Chat](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md)
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#create) - Chat
* [delete_all](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#delete_all) - Deletes all saved Chats owned by a user
* [delete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#delete) - Deletes saved Chats
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#retrieve) - Retrieves a Chat
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#list) - Retrieves all saved Chats
* [retrieve_application](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#retrieve_application) - Gets the metadata for a custom Chat application
* [upload_files](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#upload_files) - Upload files for Chat.
* [retrieve_files](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#retrieve_files) - Get files uploaded by a user for Chat.
* [delete_files](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#delete_files) - Delete files uploaded by a user for chat.
* [create_stream](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientchat/README.md#create_stream) - Chat
### [Client.Collections](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md)
* [add_items](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#add_items) - Add Collection item
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#create) - Create Collection
* [delete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#delete) - Delete Collection
* [delete_item](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#delete_item) - Delete Collection item
* [update](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#update) - Update Collection
* [update_item](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#update_item) - Update Collection item
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#retrieve) - Read Collection
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/collections/README.md#list) - List Collections
### [Client.Documents](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientdocuments/README.md)
* [retrieve_permissions](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientdocuments/README.md#retrieve_permissions) - Read document permissions
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientdocuments/README.md#retrieve) - Read documents
* [retrieve_by_facets](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientdocuments/README.md#retrieve_by_facets) - Read documents by facets
* [summarize](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientdocuments/README.md#summarize) - Summarize documents
### [Client.Entities](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/entities/README.md)
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/entities/README.md#list) - List entities
* [read_people](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/entities/README.md#read_people) - Read people
### [Client.Governance.Data.Policies](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/policies/README.md)
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/policies/README.md#retrieve) - Gets specified policy
* [update](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/policies/README.md#update) - Updates an existing policy
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/policies/README.md#list) - Lists policies
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/policies/README.md#create) - Creates new policy
* [download](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/policies/README.md#download) - Downloads violations CSV for policy
### [Client.Governance.Data.Reports](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/reports/README.md)
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/reports/README.md#create) - Creates new one-time report
* [download](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/reports/README.md#download) - Downloads violations CSV for report
* [status](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/reports/README.md#status) - Fetches report run status
### [Client.Governance.Documents.Visibilityoverrides](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/visibilityoverrides/README.md)
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/visibilityoverrides/README.md#list) - Fetches documents visibility
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/visibilityoverrides/README.md#create) - Hide or unhide docs
### [Client.Insights](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/insights/README.md)
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/insights/README.md#retrieve) - Get insights
### [Client.Messages](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/messages/README.md)
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/messages/README.md#retrieve) - Read messages
### [Client.Pins](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/pins/README.md)
* [update](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/pins/README.md#update) - Update pin
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/pins/README.md#retrieve) - Read pin
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/pins/README.md#list) - List pins
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/pins/README.md#create) - Create pin
* [remove](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/pins/README.md#remove) - Delete pin
### [Client.Search](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/search/README.md)
* [query_as_admin](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/search/README.md#query_as_admin) - Search the index (admin)
* [autocomplete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/search/README.md#autocomplete) - Autocomplete
* [retrieve_feed](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/search/README.md#retrieve_feed) - Feed of documents and events
* [recommendations](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/search/README.md#recommendations) - Recommend documents
* [query](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/search/README.md#query) - Search
### [Client.Shortcuts](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientshortcuts/README.md)
* [create](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientshortcuts/README.md#create) - Create shortcut
* [delete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientshortcuts/README.md#delete) - Delete shortcut
* [retrieve](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientshortcuts/README.md#retrieve) - Read shortcut
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientshortcuts/README.md#list) - List shortcuts
* [update](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientshortcuts/README.md#update) - Update shortcut
### [Client.Tools](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/tools/README.md)
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/tools/README.md#list) - List available tools
* [run](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/tools/README.md#run) - Execute the specified tool
### [Client.Verification](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientverification/README.md)
* [add_reminder](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientverification/README.md#add_reminder) - Create verification
* [list](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientverification/README.md#list) - List verifications
* [verify](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/clientverification/README.md#verify) - Update verification
### [Governance](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/governance/README.md)
* [createfindingsexport](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/governance/README.md#createfindingsexport) - Creates findings export
* [listfindingsexports](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/governance/README.md#listfindingsexports) - Lists findings exports
* [downloadfindingsexport](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/governance/README.md#downloadfindingsexport) - Downloads findings export
* [deletefindingsexport](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/governance/README.md#deletefindingsexport) - Deletes findings export
### [Indexing.Authentication](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingauthentication/README.md)
* [rotate_token](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingauthentication/README.md#rotate_token) - Rotate token
### [Indexing.Datasource](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdatasource/README.md)
* [status](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdatasource/README.md#status) - Beta: Get datasource status
### [Indexing.Datasources](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/datasources/README.md)
* [add](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/datasources/README.md#add) - Add or update datasource
* [retrieve_config](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/datasources/README.md#retrieve_config) - Get datasource config
### [Indexing.Documents](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md)
* [add_or_update](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#add_or_update) - Index document
* [index](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#index) - Index documents
* [bulk_index](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#bulk_index) - Bulk index documents
* [process_all](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#process_all) - Schedules the processing of uploaded documents
* [delete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#delete) - Delete document
* [debug](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#debug) - Beta: Get document information
* [debug_many](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#debug_many) - Beta: Get information of a batch of documents
* [check_access](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#check_access) - Check document access
* [~~status~~](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#status) - Get document upload and indexing status :warning: **Deprecated**
* [~~count~~](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingdocuments/README.md#count) - Get document count :warning: **Deprecated**
### [Indexing.People](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md)
* [debug](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#debug) - Beta: Get user information
* [~~count~~](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#count) - Get user count :warning: **Deprecated**
* [index](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#index) - Index employee
* [bulk_index](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#bulk_index) - Bulk index employees
* [process_all_employees_and_teams](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#process_all_employees_and_teams) - Schedules the processing of uploaded employees and teams
* [delete](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#delete) - Delete employee
* [index_team](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#index_team) - Index team
* [delete_team](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#delete_team) - Delete team
* [bulk_index_teams](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/people/README.md#bulk_index_teams) - Bulk index teams
### [Indexing.Permissions](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md)
* [update_permissions](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#update_permissions) - Update document permissions
* [index_user](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#index_user) - Index user
* [bulk_index_users](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#bulk_index_users) - Bulk index users
* [index_group](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#index_group) - Index group
* [bulk_index_groups](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#bulk_index_groups) - Bulk index groups
* [index_membership](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#index_membership) - Index membership
* [bulk_index_memberships](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#bulk_index_memberships) - Bulk index memberships for a group
* [process_memberships](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#process_memberships) - Schedules the processing of group memberships
* [delete_user](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#delete_user) - Delete user
* [delete_group](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#delete_group) - Delete group
* [delete_membership](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#delete_membership) - Delete membership
* [authorize_beta_users](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingpermissions/README.md#authorize_beta_users) - Beta users
### [Indexing.Shortcuts](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingshortcuts/README.md)
* [bulk_index](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingshortcuts/README.md#bulk_index) - Bulk index external shortcuts
* [upload](https://github.com/gleanwork/api-client-python/blob/master/docs/sdks/indexingshortcuts/README.md#upload) - Upload shortcuts
</details>
<!-- End Available Resources and Operations [operations] -->
<!-- Start File uploads [file-upload] -->
## File uploads
Certain SDK methods accept file objects as part of a request body or multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
> [!TIP]
>
> For endpoints that handle file uploads, byte arrays can also be used. However, streams are recommended for large files.
>
```python
from glean.api_client import Glean
import os
with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.upload_files(files=[])

    # Handle response
    print(res)
```
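Independent of this SDK, the memory argument for streams over byte arrays comes down to how the file is read. A minimal stdlib sketch (the chunk size and helper name are illustrative, not part of the Glean API):

```python
import io

def read_in_chunks(stream, chunk_size=64 * 1024):
    """Yield successive chunks so the whole payload never sits in memory at once."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# A bytes-based upload would materialize all 200,000 bytes up front;
# the chunked reader holds at most chunk_size bytes at a time.
payload = b"x" * 200_000
total = sum(len(c) for c in read_in_chunks(io.BytesIO(payload)))
assert total == len(payload)
```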
<!-- End File uploads [file-upload] -->
<!-- Start Retries [retries] -->
## Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:
```python
from glean.api_client import Glean, models
from glean.api_client.utils import BackoffStrategy, RetryConfig, parse_datetime
import os
with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    glean.client.activity.report(events=[
        {
            "action": models.ActivityEventAction.HISTORICAL_VIEW,
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/",
        },
        {
            "action": models.ActivityEventAction.SEARCH,
            "params": {
                "query": "query",
            },
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/search?q=query",
        },
        {
            "action": models.ActivityEventAction.VIEW,
            "params": {
                "duration": 20,
                "referrer": "https://example.com/document",
            },
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/",
        },
    ],
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))

    # Use the SDK ...
```
If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:
```python
from glean.api_client import Glean, models
from glean.api_client.utils import BackoffStrategy, RetryConfig, parse_datetime
import os
with Glean(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    glean.client.activity.report(events=[
        {
            "action": models.ActivityEventAction.HISTORICAL_VIEW,
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/",
        },
        {
            "action": models.ActivityEventAction.SEARCH,
            "params": {
                "query": "query",
            },
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/search?q=query",
        },
        {
            "action": models.ActivityEventAction.VIEW,
            "params": {
                "duration": 20,
                "referrer": "https://example.com/document",
            },
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/",
        },
    ])

    # Use the SDK ...
```
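The positional `BackoffStrategy` arguments are, in order, the initial interval, the maximum interval, the growth exponent, and the maximum elapsed time (Speakeasy-generated SDKs conventionally take the intervals in milliseconds; check the generated `utils` module to confirm). A stdlib-only sketch of how such a schedule grows, illustrative rather than the SDK's internal code:

```python
def backoff_schedule(initial, maximum, exponent, attempts):
    """Exponential backoff: each delay grows by `exponent`, capped at `maximum`."""
    delays, delay = [], initial
    for _ in range(attempts):
        delays.append(min(delay, maximum))
        delay *= exponent
    return delays

# With the (1, 50, 1.1, ...) values used above, delays grow slowly toward the 50 cap.
schedule = backoff_schedule(1, 50, 1.1, 5)
assert schedule[0] == 1 and all(d <= 50 for d in schedule)
```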
<!-- End Retries [retries] -->
## Error Handling
All operations return a response object or raise an exception:
| Status Code | Description | Error Type | Content Type |
| ----------- | ----------------------- | ---------------------- | ---------------- |
| 400 | Invalid Request | errors.GleanError | \*/\* |
| 401 | Not Authorized | errors.GleanError | \*/\* |
| 403 | Permission Denied | errors.GleanDataError | application/json |
| 408 | Request Timeout | errors.GleanError | \*/\* |
| 422 | Invalid Query | errors.GleanDataError | application/json |
| 429 | Too Many Requests | errors.GleanError | \*/\* |
| 4XX | Other Client Errors | errors.GleanError | \*/\* |
| 5XX | Internal Server Errors | errors.GleanError | \*/\* |
### Example
```python
from glean.api_client import Glean, errors, models
import os
with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as g_client:
    try:
        res = g_client.client.search.execute(search_request=models.SearchRequest(
            tracking_token="trackingToken",
            page_size=10,
            query="vacation policy",
            request_options=models.SearchRequestOptions(
                facet_filters=[
                    models.FacetFilter(
                        field_name="type",
                        values=[
                            models.FacetFilterValue(
                                value="article",
                                relation_type=models.RelationType.EQUALS,
                            ),
                            models.FacetFilterValue(
                                value="document",
                                relation_type=models.RelationType.EQUALS,
                            ),
                        ],
                    ),
                    models.FacetFilter(
                        field_name="department",
                        values=[
                            models.FacetFilterValue(
                                value="engineering",
                                relation_type=models.RelationType.EQUALS,
                            ),
                        ],
                    ),
                ],
                facet_bucket_size=246815,
            ),
        ))

        # Handle response
        print(res)
    except errors.GleanDataError as e:
        # The server returned structured data
        print(e.data)
        print(e.data.errorMessage)
    except errors.GleanError as e:
        print(e.message)
        print(e.status_code)
        print(e.raw_response)
        print(e.body)
```
By default, an API error will raise an `errors.GleanError` exception, which has the following properties:
| Property | Type | Description |
|----------------------|------------------|-----------------------|
| `error.status_code` | *int* | The HTTP status code |
| `error.message` | *str* | The error message |
| `error.raw_response` | *httpx.Response* | The raw HTTP response |
| `error.body` | *str* | The response content |
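The tables above imply a simple dispatch rule: the `application/json` rows (403 and 422) carry structured bodies and surface as `errors.GleanDataError`, while everything else raises the base `errors.GleanError`. An illustrative stdlib-only sketch of that mapping (not the SDK's actual internals):

```python
# Status codes the error table maps to the structured GleanDataError type.
DATA_ERROR_CODES = {403, 422}

def error_type_for(status_code: int) -> str:
    """Return the exception name the error table associates with a status code."""
    return "GleanDataError" if status_code in DATA_ERROR_CODES else "GleanError"

assert error_type_for(422) == "GleanDataError"
assert error_type_for(429) == "GleanError"
```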
<!-- No Error Handling [errors] -->
<!-- Start Server Selection [server] -->
## Server Selection
### Server Variables
The default server `https://{instance}-be.glean.com` contains variables and is set to `https://instance-name-be.glean.com` by default. To override default values, the following parameters are available when initializing the SDK client instance:
| Variable | Parameter | Default | Description |
| ---------- | --------------- | ----------------- | ------------------------------------------------------------------------------------------------------ |
| `instance` | `instance: str` | `"instance-name"` | The instance name (typically the email domain without the TLD) that determines the deployment backend. |
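The substitution is plain string templating: the `instance` value is spliced into `https://{instance}-be.glean.com`. A quick sketch (the helper name is illustrative):

```python
SERVER_TEMPLATE = "https://{instance}-be.glean.com"

def server_url(instance: str = "instance-name") -> str:
    """Expand the server template with an instance name, as the variables table describes."""
    return SERVER_TEMPLATE.format(instance=instance)

assert server_url() == "https://instance-name-be.glean.com"
assert server_url("acme") == "https://acme-be.glean.com"
```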
#### Example
```python
from glean.api_client import Glean, models
from glean.api_client.utils import parse_datetime
import os
with Glean(
    server_idx=0,
    instance="instance-name",
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    glean.client.activity.report(events=[
        {
            "action": models.ActivityEventAction.HISTORICAL_VIEW,
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/",
        },
        {
            "action": models.ActivityEventAction.SEARCH,
            "params": {
                "query": "query",
            },
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/search?q=query",
        },
        {
            "action": models.ActivityEventAction.VIEW,
            "params": {
                "duration": 20,
                "referrer": "https://example.com/document",
            },
            "timestamp": parse_datetime("2000-01-23T04:56:07.000Z"),
            "url": "https://example.com/",
        },
    ])

    # Use the SDK ...
```
### Override Server URL Per-Client
The default server can be overridden globally by passing a URL to the `server_url: str` optional parameter when initializing the SDK client instance. For example:
```python
from glean.api_client import Glean, models
from glean.api_client.uti | text/markdown | Glean Technologies, Inc. | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://github.com/gleanwork/api-client-python.git | null | >=3.10 | [] | [] | [] | [
"httpcore>=1.0.9",
"httpx>=0.28.1",
"pydantic>=2.11.2"
] | [] | [] | [] | [
"Repository, https://github.com/gleanwork/api-client-python.git"
] | poetry/2.2.1 CPython/3.10.19 Linux/6.8.0-1044-azure | 2026-02-20T15:07:33.997922 | glean_api_client-0.12.8.tar.gz | 265,657 | 33/57/2e4d3ed8ed07f797a408f28b7337730638610e71d436c664ab95fdc9aa0f/glean_api_client-0.12.8.tar.gz | source | sdist | null | false | 3e92ba8b307484c64f8501dac2376972 | b67f3ebb57a3e3f35d00680283aae625e53acb1778b0205666bc352c5e6ea339 | 33572e4d3ed8ed07f797a408f28b7337730638610e71d436c664ab95fdc9aa0f | null | [] | 373 |
2.4 | pyegeria | 5.5.3.17 | A python client for Egeria | <!-- SPDX-License-Identifier: CC-BY-4.0 -->
<!-- Copyright Contributors to the ODPi Egeria project. -->

[](LICENSE)
# pyegeria: a python client for Egeria
A lightweight Python 3.12+ client and CLI for the Egeria open metadata and governance platform. It helps you configure and operate Egeria services and work with metadata (assets, glossaries, lineage, etc.) from Python, with examples, tests, and documented report formats.
This is a package for easily using the Egeria
open metadata environment from python. Details about the
open source Egeria project can be found at [Egeria Project](https://egeria-project.org).
This package is in active development. It provides initial support for many of Egeria's services, including configuration and operation.
This release supports Egeria 6.0, although most of the functions may work on earlier versions of Egeria as well.
The code is organized to mimic the existing Egeria Java Client structure.
The commands folder holds the Egeria Command Line Interface and corresponding commands
to visualize and use Egeria. The commands also serve as useful examples.
An examples folder holds some useful examples showing different facets of using pyegeria.
For detailed guidance on output formats and report specs (including nested/master–detail), see:
- docs/output-formats-and-report-specs.md
### Report specs: families and filtering
Report specs (aka format sets) can be tagged with an optional `family` string to help organize and discover related specs.
- Show names with family and sort by family, then name:
```python
from pyegeria.view.base_report_formats import report_spec_list
names = report_spec_list(show_family=True, sort_by_family=True)
for n in names:
print(n)
```
- Filter specs by family programmatically:
```python
from pyegeria.view.base_report_formats import report_specs
# Exact family match (case-insensitive)
security_specs = report_specs.filter_by_family("Security")
# Specs with no family assigned
no_family_specs = report_specs.filter_by_family("")
```
WARNING: files that start with "X" are in-progress placeholders that are not meant to be used; they will mature and evolve.
All feedback is welcome. Please engage via our [community](http://egeria-project.org/guides/community/),
team calls, or via github issues in this repo. If interested in contributing,
you can engage via the community or directly reach out to
[dan.wolfson\@pdr-associates.com](mailto:dan.wolfson@pdr-associates.com?subject=pyegeria).
This is a learning experience.
## Configuration
pyegeria uses a simple, predictable precedence for configuration:
1. Built-in defaults (Pydantic models in pyegeria.config)
2. Config file (JSON) if found
3. Environment variables (OS env and optional .env)
4. Explicit env file passed to get_app_config/load_app_config
Environment always overrides config file, which overrides defaults.
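That precedence is equivalent to layering dictionaries with later layers winning. A minimal sketch (the keys are illustrative, not pyegeria's actual model fields):

```python
def effective_config(defaults, config_file, env):
    """Merge layers so env overrides the config file, which overrides defaults."""
    merged = dict(defaults)
    merged.update(config_file)
    merged.update(env)
    return merged

cfg = effective_config(
    defaults={"platform_url": "https://localhost:9443", "console_width": 200},
    config_file={"console_width": 280},
    env={"platform_url": "https://egeria.example.com"},
)
assert cfg == {"platform_url": "https://egeria.example.com", "console_width": 280}
```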
### Where to put your configuration
- Config file: A JSON file named config.json. The loader looks in this order:
- If PYEGERIA_CONFIG_DIRECTORY is set: $PYEGERIA_CONFIG_DIRECTORY/$PYEGERIA_CONFIG_FILE
- Else if PYEGERIA_ROOT_PATH is set: $PYEGERIA_ROOT_PATH/$PYEGERIA_CONFIG_FILE
- Else: ./config.json (the current working directory)
- .env file: Optional. If present in the current working directory (.env), variables from it will be loaded. You can also pass a specific env file path to get_app_config(env_file=...) or load_app_config(env_file=...). For sample variables, see config/env in this repo.
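The lookup order above can be sketched as a small resolver (illustrative; pyegeria's loader holds the real logic):

```python
def resolve_config_path(env):
    """Pick the config.json location using the documented precedence."""
    filename = env.get("PYEGERIA_CONFIG_FILE", "config.json")
    if env.get("PYEGERIA_CONFIG_DIRECTORY"):
        return f'{env["PYEGERIA_CONFIG_DIRECTORY"]}/{filename}'
    if env.get("PYEGERIA_ROOT_PATH"):
        return f'{env["PYEGERIA_ROOT_PATH"]}/{filename}'
    return f"./{filename}"

assert resolve_config_path({"PYEGERIA_CONFIG_DIRECTORY": "/etc/pyegeria"}) == "/etc/pyegeria/config.json"
assert resolve_config_path({}) == "./config.json"
```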
### Common environment variables
- PYEGERIA_CONFIG_DIRECTORY: directory containing your config.json
- PYEGERIA_ROOT_PATH: root folder used to resolve config.json when CONFIG_DIRECTORY is not set
- PYEGERIA_CONFIG_FILE: filename of the configuration JSON (default: config.json)
- PYEGERIA_CONSOLE_WIDTH: integer console width (e.g., 200 or 280)
- EGERIA_PLATFORM_URL, EGERIA_VIEW_SERVER_URL, EGERIA_ENGINE_HOST_URL: URLs for your Egeria servers
- EGERIA_USER, EGERIA_USER_PASSWORD: credentials used by some clients
- Logging related: PYEGERIA_ENABLE_LOGGING, PYEGERIA_LOG_DIRECTORY, PYEGERIA_CONSOLE_LOG_LVL, PYEGERIA_FILE_LOG_LVL, etc.
See config/env for more variables and defaults.
### Example .env
```
# PYEGERIA_CONFIG_DIRECTORY=/path/to/configs
# PYEGERIA_ROOT_PATH=/path/to/project
# PYEGERIA_CONFIG_FILE=config.json
# EGERIA_PLATFORM_URL=https://localhost:9443
# EGERIA_VIEW_SERVER=qs-view-server
# EGERIA_VIEW_SERVER_URL=https://localhost:9443
# EGERIA_USER=myuser
# EGERIA_USER_PASSWORD=mypassword
# PYEGERIA_CONSOLE_WIDTH=280
```
Lines starting with # are comments. Quotes are optional; python-dotenv/pydantic-settings handle both.
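Those rules (a leading `#` starts a comment; quotes are optional) amount to a line parser along these lines. This is a sketch; python-dotenv's real parser handles more edge cases:

```python
def parse_env_line(line):
    """Parse one KEY=VALUE line; return None for comments and blank lines."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip().strip("'\"")

assert parse_env_line("# just a comment") is None
assert parse_env_line('EGERIA_USER="myuser"') == ("EGERIA_USER", "myuser")
assert parse_env_line("PYEGERIA_CONSOLE_WIDTH=280") == ("PYEGERIA_CONSOLE_WIDTH", "280")
```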
### Example config.json (minimal)
```json
{
  "Environment": {
    "Pyegeria Root": ".",
    "Egeria Platform URL": "https://localhost:9443"
  },
  "User Profile": {
    "Egeria Home Collection": "MyHome"
  }
}
```
### Programmatic usage
```python
from pyegeria import get_app_config

cfg = get_app_config()  # uses OS env and ./.env

# or with explicit env file
cfg = get_app_config(env_file="/path/to/dev.env")

# Access values via Pydantic models
print(cfg.Environment.egeria_platform_url)
print(cfg.Logging.enable_logging)
```
### CLI quick checks
- Validate your env file:

```shell
python scripts/validate_env.py --env config/env
python scripts/validate_env.py  # auto-detects ./config/env or ./.env
```
### Testing
By default, running pytest executes unit tests that use monkeypatching/fakes and do not contact a live Egeria.
- Run unit tests (recommended default):

```shell
poetry install
poetry run pytest -v
```
You can also run tests live against a local Egeria instance. Enable live mode with either a CLI flag or an environment variable. In live mode, tests marked as `unit` are skipped and live tests run using a real Client2 connection.
- Enable live mode via CLI:

```shell
poetry run pytest -q --live-egeria
```
- Or enable via environment variable:

```shell
PYEG_LIVE_EGERIA=1 poetry run pytest -q
```
Default live connection parameters (can be overridden via env):
- server_name = "qs-view-server" (override with PYEG_SERVER_NAME)
- platform_url = "https://localhost:9443" (override with PYEG_PLATFORM_URL)
- user_id = "peterprofile" (override with PYEG_USER_ID)
- user_pwd = "secret" (override with PYEG_USER_PWD)
Notes:
- SSL verification is controlled by pyegeria._globals.enable_ssl_check, which defaults to False in this repo to support localhost/self-signed certs.
- See tests/conftest.py for the live test fixtures and switches.
### Troubleshooting
- If your env doesn’t seem to apply, confirm which config.json is used (the loader checks PYEGERIA_CONFIG_DIRECTORY first, then PYEGERIA_ROOT_PATH, then ./config.json).
- .env files are optional. Missing .env is not an error.
- You can always override values with OS environment variables (they take precedence over config.json).
----
License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/),
Copyright Contributors to the ODPi Egeria project.
| text/markdown | null | Dan Wolfson <dan.wolfson@pdr-associates.com> | null | null | null | egeria, metadata, governance | [
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx",
"rich>=14.2",
"validators",
"urllib3",
"requests",
"jupyter",
"click==8.3.1",
"trogon",
"psycopg2-binary>=2.9.11",
"jupyter-notebook-parser",
"loguru",
"inflect",
"pydantic>=2.12.3",
"pydantic-settings>=2.10.1",
"pydevd-pycharm>=253.27642.35",
"wcwidth",
"altair==6.0.0",
"mcp>=0.1",
"markers>=0.3.0",
"pytest-asyncio>=1.2.0",
"python-dotenv>=1.1.1",
"pytest>=8.4.2",
"optional>=0.0.1",
"pytest; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T15:07:14.385225 | pyegeria-5.5.3.17.tar.gz | 1,127,649 | e3/2e/525025a28669be50b68f8322b8f08abfeff961772f56d69aeb0ab7ee9a97/pyegeria-5.5.3.17.tar.gz | source | sdist | null | false | 530f97c0b1387e71f6a33be450d13958 | 41deb4a62c486168a0ae75b2562572149250d35ece011b6d69318cff267d8179 | e32e525025a28669be50b68f8322b8f08abfeff961772f56d69aeb0ab7ee9a97 | Apache-2.0 | [
"LICENSE"
] | 208 |
2.3 | taktile-auth | 1.1.84 | Auth Package for Taktile | # Taktile Auth
[](https://pypi.python.org/pypi/taktile-auth)
[](https://www.apache.org/licenses/LICENSE-2.0)
This package is part of the Taktile ecosystem.
Taktile enables data science teams to industrialize, scale, and maintain machine learning models. Our ML development platform makes it easy to create your own end-to-end ML applications:
- Turn models into auto-scaling APIs in a few lines of code
- Easily add model tests
- Create and share model explanations through the Taktile UI
Find more information in our [docs](https://docs.taktile.com).
| text/markdown | Taktile GmbH | devops@taktile.com | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"pydantic<3.0",
"PyYAML<7.0,>=6.0",
"PyJWT[crypto]==2.10.1",
"requests<3.0,>=2.32",
"cryptography>=44.0.1"
] | [] | [] | [] | [] | poetry/2.1.3 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T15:06:31.608371 | taktile_auth-1.1.84.tar.gz | 18,483 | 53/42/e75b8bfa28142134e19f05b855264a7bb35f04ebbf8efb2d84c4792d84db/taktile_auth-1.1.84.tar.gz | source | sdist | null | false | 9d5df96a837e10b4448e82a264566e2f | 749c14caea69dc7d30bfe3970ffa97ee80a81cfba978205e15118565f9f179c8 | 5342e75b8bfa28142134e19f05b855264a7bb35f04ebbf8efb2d84c4792d84db | null | [] | 211 |
2.1 | renovosolutions.aws-cdk-crowdstrike-ingestion | 0.1.1 | A CDK library to ease repetitive construct creation for CrowdStrike data ingestion | # cdk-library-crowdstrike-ingestion
A CDK library to ease repetitive construct creation for CrowdStrike data ingestion.
This library provides a construct that creates an S3 bucket with the necessary configuration for CrowdStrike data ingestion, along with an SQS queue for notifications, an IAM role for access, and optionally a KMS key for encryption.
It also provides another construct that handles creating log group subscriptions to a central bucket, along with the role needed for CloudWatch Logs to create the subscription.
## Features
### CrowdStrike Bucket Construct
* Creates an S3 bucket with appropriate security settings for CrowdStrike data ingestion
* Creates an SQS queue for bucket notifications with a dead-letter queue
* Creates an IAM role that CrowdStrike can assume to access the data
* Optionally creates a KMS key for encrypting data (to use if the service generating the data wants it)
* Reads external ID from SSM parameter
* Supports organization-wide access for multi-account setups
* Configures bucket policies for logging if needed
* Provides customization options for all resources
### Log Group Subscription Construct
* Creates a CloudWatch Log Group Subscription to forward logs to a central S3 bucket
* Automatically creates the necessary IAM role for CloudWatch Logs to create the subscription
* Supports passing in an existing role if desired
* Allows customization of the filter pattern for the subscription
## API Doc
See [API](API.md)
## License
This project is licensed under the Apache License, Version 2.0 - see the [LICENSE](LICENSE) file for details.
## Examples
### TypeScript
```typescript
import { Stack, StackProps, Duration, aws_iam as iam, aws_logs as logs } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { CrowdStrikeBucket, CrowdStrikeLogSubscription } from '@renovosolutions/cdk-library-crowdstrike-ingestion';

export class CrowdStrikeIngestionStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Basic usage with default settings
    new CrowdStrikeBucket(this, 'BasicBucket', {
      bucketName: 'my-crowdstrike-bucket',
      crowdStrikeRoleArn: 'arn:aws:ssm:us-east-1:123456789012:parameter/custom/crowdstrike/roleArn',
      crowdStrikeExternalIdParameterArn: 'arn:aws:ssm:us-east-1:123456789012:parameter/custom/crowdstrike/externalId',
    });

    // Advanced usage with KMS key and organization access
    new CrowdStrikeBucket(this, 'AdvancedBucket', {
      bucketName: 'my-advanced-crowdstrike-bucket',
      createKmsKey: true,
      keyProps: {
        alias: 'crowdstrike-key',
        enableKeyRotation: true,
        description: 'KMS Key for CrowdStrike data encryption',
      },
      queueProps: {
        queueName: 'crowdstrike-notifications',
        visibilityTimeout: Duration.seconds(300),
      },
      roleProps: {
        roleName: 'crowdstrike-access-role',
        assumedBy: new iam.PrincipalWithConditions(new iam.ArnPrincipal('arn:aws:iam::123456789012:role/CrowdStrikeRole'), {
          StringEquals: {
            'sts:ExternalId': 'externalId123',
          },
        }),
      },
      loggingBucketSourceName: 'my-logging-bucket', // Allow this bucket to send access logs
      orgId: 'o-1234567', // Allow all accounts in the organization to write to the bucket
    });

    // Example of creating a log group subscription
    const logGroup = new logs.LogGroup(this, 'MyLogGroup', {
      logGroupName: 'my-log-group',
    });

    const subscription = new CrowdStrikeLogSubscription(this, 'BasicTestSubscription', {
      logGroup,
      logDestinationArn: 'arn:aws:logs:us-east-1:123456789012:destination:test-destination',
    });

    new CrowdStrikeLogSubscription(this, 'AdvancedTestSubscription', {
      logGroup,
      logDestinationArn: 'arn:aws:logs:us-east-1:123456789012:destination:another-test-destination',
      role: subscription.role,
      filterPattern: 'error',
    });
  }
}
```
### Python
```python
from aws_cdk import (
    Stack,
    Duration,
    aws_iam as iam,
    aws_kms as kms,
    aws_logs as logs,
)
from constructs import Construct
from crowdstrike_ingestion import CrowdStrikeBucket, CrowdStrikeLogSubscription


class CrowdStrikeIngestionStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Basic usage with default settings
        CrowdStrikeBucket(self, 'BasicBucket',
            bucket_name='my-crowdstrike-bucket',
            crowd_strike_role_arn='arn:aws:ssm:us-east-1:123456789012:parameter/custom/crowdstrike/roleArn',
            crowd_strike_external_id_parameter_arn='arn:aws:ssm:us-east-1:123456789012:parameter/custom/crowdstrike/externalId')

        # Advanced usage with KMS key and organization access
        CrowdStrikeBucket(self, 'AdvancedBucket',
            bucket_name='my-advanced-crowdstrike-bucket',
            create_kms_key=True,
            key_props=kms.KeyProps(
                alias='crowdstrike-key',
                enable_key_rotation=True,
                description='KMS Key for CrowdStrike data encryption',
            ),
            queue_props={
                'queue_name': 'crowdstrike-notifications',
                'visibility_timeout': Duration.seconds(300),
            },
            role_props={
                'role_name': 'crowdstrike-access-role',
                'assumed_by': iam.PrincipalWithConditions(
                    iam.ArnPrincipal('arn:aws:iam::123456789012:role/CrowdStrikeRole'),
                    {'StringEquals': {'sts:ExternalId': 'externalId123'}}),
            },
            logging_bucket_source_name='my-logging-bucket',  # Allow this bucket to send access logs
            org_id='o-1234567')  # Allow all accounts in the organization to write to the bucket

        # Example of creating a log group subscription
        log_group = logs.LogGroup(self, 'MyLogGroup', log_group_name='my-log-group')

        subscription = CrowdStrikeLogSubscription(self, 'BasicTestSubscription',
            log_group=log_group,
            log_destination_arn='arn:aws:logs:us-east-1:123456789012:destination:test-destination')

        CrowdStrikeLogSubscription(self, 'AdvancedTestSubscription',
            log_group=log_group,
            log_destination_arn='arn:aws:logs:us-east-1:123456789012:destination:another-test-destination',
            role=subscription.role,
            filter_pattern='error')
```
### C Sharp
```csharp
using Amazon.CDK;
using IAM = Amazon.CDK.AWS.IAM;
using KMS = Amazon.CDK.AWS.KMS;
using Logs = Amazon.CDK.AWS.Logs;
using SQS = Amazon.CDK.AWS.SQS;
using Constructs;
using System.Collections.Generic;
using renovosolutions;

namespace CrowdStrikeIngestionExample
{
    public class CrowdStrikeIngestionStack : Stack
    {
        internal CrowdStrikeIngestionStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            // Basic usage with default settings
            new CrowdStrikeBucket(this, "BasicBucket", new CrowdStrikeBucketProps
            {
                BucketName = "my-crowdstrike-bucket",
                CrowdStrikeRoleArn = "arn:aws:ssm:us-east-1:123456789012:parameter/custom/crowdstrike/roleArn",
                CrowdStrikeExternalIdParameterArn = "arn:aws:ssm:us-east-1:123456789012:parameter/custom/crowdstrike/externalId"
            });

            // Advanced usage with KMS key and organization access
            new CrowdStrikeBucket(this, "AdvancedBucket", new CrowdStrikeBucketProps
            {
                BucketName = "my-advanced-crowdstrike-bucket",
                CreateKmsKey = true,
                KeyProps = new KMS.KeyProps
                {
                    Alias = "crowdstrike-key",
                    EnableKeyRotation = true,
                    Description = "KMS Key for CrowdStrike data encryption"
                },
                QueueProps = new SQS.QueueProps
                {
                    QueueName = "crowdstrike-notifications",
                    VisibilityTimeout = Duration.Seconds(300)
                },
                RoleProps = new IAM.RoleProps
                {
                    RoleName = "crowdstrike-access-role",
                    AssumedBy = new IAM.PrincipalWithConditions(new IAM.ArnPrincipal("arn:aws:iam::123456789012:role/CrowdStrikeRole"), new Dictionary<string, object>
                    {
                        { "StringEquals", new Dictionary<string, string> { { "sts:ExternalId", "externalId123" } } }
                    })
                },
                LoggingBucketSourceName = "my-logging-bucket", // Allow this bucket to send access logs
                OrgId = "o-1234567" // Allow all accounts in the organization to write to the bucket
            });

            // Example of creating a log group subscription
            var logGroup = new Logs.LogGroup(this, "MyLogGroup", new Logs.LogGroupProps
            {
                LogGroupName = "my-log-group"
            });

            var subscription = new CrowdStrikeLogSubscription(this, "BasicTestSubscription", new CrowdStrikeLogSubscriptionProps
            {
                LogGroup = logGroup,
                LogDestinationArn = "arn:aws:logs:us-east-1:123456789012:destination:test-destination"
            });

            new CrowdStrikeLogSubscription(this, "AdvancedTestSubscription", new CrowdStrikeLogSubscriptionProps
            {
                LogGroup = logGroup,
                LogDestinationArn = "arn:aws:logs:us-east-1:123456789012:destination:another-test-destination",
                Role = subscription.Role,
                FilterPattern = "error"
            });
        }
    }
}
```
| text/markdown | Renovo Solutions<webmaster+cdk@renovo1.com> | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved"
] | [] | https://github.com/RenovoSolutions/cdk-library-crowdstrike-ingestion.git | null | ~=3.9 | [] | [] | [] | [
"aws-cdk-lib<3.0.0,>=2.239.0",
"cdk-nag<3.0.0,>=2.37.55",
"constructs<11.0.0,>=10.5.1",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/RenovoSolutions/cdk-library-crowdstrike-ingestion.git"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T15:06:30.862908 | renovosolutions_aws_cdk_crowdstrike_ingestion-0.1.1.tar.gz | 81,109 | 70/5b/787ebccc717c879b69c9eea041ea7bf144939aa014e93431c6266b538335/renovosolutions_aws_cdk_crowdstrike_ingestion-0.1.1.tar.gz | source | sdist | null | false | 2ddbbe3281a2c7204f145e85d3d6c19c | 59bfb0cdc4334ba0fccdfa8373b1ab5bef27c66092cec0d8aceca1934feafbb3 | 705b787ebccc717c879b69c9eea041ea7bf144939aa014e93431c6266b538335 | null | [] | 0 |
2.4 | fit-aic | 0.1.1 | AIC and AICc wrappers for scipy and lmfit | # fit_aic
[](https://badge.fury.io/py/fit-aic)
AIC and AICc wrappers for scipy and lmfit, making it easy to compute Akaike information criteria for model comparison.
## Overview
`fit_aic` provides convenient wrappers around popular curve fitting libraries that automatically compute AIC (Akaike Information Criterion) and AICc (corrected AIC) values. This makes model comparison straightforward without manual calculations.
## Features
- **scipy.optimize.curve_fit wrapper**: Drop-in replacement with automatic AIC/AICc computation
- **lmfit.Model wrapper**: Drop-in replacement that adds AICc to the existing lmfit result
- **AIC and AICc calculations**: Properly formatted and corrected for sample size
- **Seamless integration**: Works with your existing scipy and lmfit code
- **Tested for accuracy**: Validated against lmfit results
## Background
The Akaike Information Criterion (AIC) and its corrected version for small sample sizes (AICc) are metrics for comparing statistical models that balance goodness of fit against model complexity [[1, 2, 3]](#references). A model with more parameters will always fit data better, but risks overfitting — AIC penalizes complexity to find the best tradeoff.
Given two models fit to the same data, the one with the **lower AIC is preferred**. The absolute value of AIC is not meaningful — only differences between models matter.
As an example we simulate data from a bi-exponential decay:
$$y = 3e^{-x/1} + 5e^{-x/10}$$
If we did not know the underlying analytical function of this process, we might try fitting the data with different models to get an idea of which model fits best. This is where the Akaike Information Criterion comes in: while increasing model complexity will always reduce the residuals, the residuals alone carry no penalty for overfitting, and AIC supplies exactly that penalty. In our example we fit three models:
$$f_1(x) = a_1 e^{-x/t_1}$$
$$f_2(x) = a_1 e^{-x/t_1} + a_2 e^{-x/t_2}$$
$$f_3(x) = a_1 e^{-x/t_1} + a_2 e^{-x/t_2} + a_3 e^{-x/t_3}$$
The lowest AIC value can be used to select the best model that fits the data. This is illustrated in the figure below. Model 1 is plotted in blue, model 2 in orange, and model 3 in green. It appears model 2 and model 3 are almost identical, but model 2 is better as it has the lower AIC value — which makes sense given that the fits look the same but model 3 has 2 more free parameters.

AICc is a corrected version of AIC for small sample sizes. As $n \to \infty$, AICc converges to AIC. It is recommended to always use AICc unless $n/k > 40$, where $n$ is the number of data points and $k$ the number of parameters.
A common rule of thumb for interpreting AIC differences ($\Delta AIC$):
- $\Delta AIC < 2$: models are essentially equivalent
- $2 < \Delta AIC < 10$: moderate evidence for the better model
- $\Delta AIC > 10$: strong evidence for the better model
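This rule of thumb is easy to encode. A small helper (illustrative only, not part of `fit_aic`) might look like:

```python
def aic_evidence(aic_a: float, aic_b: float) -> str:
    """Interpret the difference between two AIC values (rule of thumb)."""
    delta = abs(aic_a - aic_b)
    if delta < 2:
        return "essentially equivalent"
    elif delta <= 10:
        return "moderate evidence for the lower-AIC model"
    return "strong evidence for the lower-AIC model"

print(aic_evidence(101.3, 100.0))  # essentially equivalent
```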
## Installation
```bash
pip install fit_aic
```
Or with lmfit integration (optional):
```bash
pip install fit_aic[lmfit]
```
## Quick Start
### scipy wrapper
```python
import numpy as np
from fit_aic.scipy import curve_fit
# Define your model
def exponential_decay(x, A, tau):
return A * np.exp(-x / tau)
# Fit with AIC/AICc computation
x = np.linspace(0, 10, 50)
y = exponential_decay(x, 3, 2) + np.random.normal(0, 0.1, size=x.shape)
popt, pcov, infodict, mesg, ier = curve_fit(
exponential_decay, x, y,
p0=[3, 2],
full_output=True
)
print(f"AIC: {infodict['aic']:.2f}")
print(f"AICc: {infodict['aicc']:.2f}")
```
### lmfit wrapper
```python
import numpy as np
from fit_aic.lmfit import Model
def exponential_decay(x, A, tau):
return A * np.exp(-x / tau)
x = np.linspace(0, 10, 50)
y = exponential_decay(x, 3, 2) + np.random.normal(0, 0.1, size=x.shape)
model = Model(exponential_decay)
result = model.fit(y, x=x, A=3, tau=2)
print(f"AIC: {result.aic:.2f}")
print(f"AICc: {result.aicc:.2f}")
```
### Model Comparison
```python
import numpy as np
from fit_aic.scipy import curve_fit
def model1(x, A1, tau1, A2, tau2):
return A1 * np.exp(-x / tau1) + A2 * np.exp(-x / tau2)
def model2(x, A1, tau1):
return A1 * np.exp(-x / tau1)
x = np.linspace(0, 20, 50)
y = model1(x, 3, 1, 5, 10) + np.random.normal(0, 0.25, size=x.shape)
result1 = curve_fit(model1, x, y, p0=[3, 1, 5, 10], full_output=True)
result2 = curve_fit(model2, x, y, p0=[3, 2], full_output=True)
aic1 = result1[2]['aic']
aic2 = result2[2]['aic']
print(f"Model 1 AIC: {aic1:.2f}")
print(f"Model 2 AIC: {aic2:.2f}")
print(f"Best model: {'Model 1' if aic1 < aic2 else 'Model 2'}")
```
## API Reference
### `fit_aic.scipy.curve_fit`
Wrapper around `scipy.optimize.curve_fit` with AIC/AICc support.
**Parameters:**
- All parameters are identical to `scipy.optimize.curve_fit`
- `full_output`: If `True`, returns 5-tuple with infodict containing `aic` and `aicc` (default: `False`)
**Returns:**
- If `full_output=False`: Returns `(popt, pcov)` — identical to scipy behavior
- If `full_output=True`: Returns `(popt, pcov, infodict, mesg, ier)` where `infodict` includes `aic` and `aicc` keys
### `fit_aic.lmfit.Model`
Subclass of `lmfit.Model` with AICc support. All existing lmfit behavior is preserved.
**Additional attributes on `ModelResult`:**
- `result.aicc`: Corrected AIC value
**Usage:**
```python
from fit_aic.lmfit import Model
model = Model(my_func)
result = model.fit(y, x=x, A=3, tau=2)
print(f"AIC: {result.aic:.2f}") # lmfit built-in
print(f"AICc: {result.aicc:.2f}") # added by fit_aic
```
## Information Criteria
**AIC:**
$$AIC = n \ln(RSS/n) + 2k$$
**AICc:**
$$AICc = AIC + \frac{2k(k+1)}{n-k-1}$$
Where:
- $n$ = number of observations
- $k$ = number of parameters
- $RSS$ = residual sum of squares
AICc includes a correction for small sample sizes and converges to AIC as $n \to \infty$. It is recommended when $n/k < 40$.
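The two formulas translate directly into code. A minimal pure-Python sketch (the package computes these internally; the function names here are illustrative):

```python
import math

def aic(rss: float, n: int, k: int) -> float:
    """AIC = n * ln(RSS / n) + 2k"""
    return n * math.log(rss / n) + 2 * k

def aicc(rss: float, n: int, k: int) -> float:
    """AICc = AIC + 2k(k + 1) / (n - k - 1)"""
    return aic(rss, n, k) + (2 * k * (k + 1)) / (n - k - 1)

# With n = 10 observations, k = 2 parameters, RSS = 10:
# AIC = 10 * ln(1) + 2 * 2 = 4.0
# AICc = 4.0 + 12 / 7 ≈ 5.71
```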
## Development
Install with dev dependencies:
```bash
pip install -e ".[dev]"
```
Run tests:
```bash
pytest tests/
```
## License
MIT
## References
<a id="references"></a>
[1] H. Akaike, "A New Look at the Statistical Model Identification," *IEEE Trans. Autom. Control* 19(6), 716–723 (1974). https://doi.org/10.1109/TAC.1974.1100705
[2] J. E. Cavanaugh, "Unifying the derivations for the Akaike and corrected Akaike information criteria," *Stat. & Probab. Lett.* 33(2), 201–208 (1997). https://doi.org/10.1016/S0167-7152(96)00128-9
[3] K. P. Burnham & D. R. Anderson, *Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach* (2nd ed.). Springer, New York (1998).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"scipy",
"numpy",
"lmfit; extra == \"lmfit\"",
"pytest; extra == \"dev\"",
"lmfit; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/dagrass/fit_aic",
"Repository, https://github.com/dagrass/fit_aic"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T15:06:10.882571 | fit_aic-0.1.1.tar.gz | 14,991,824 | 0d/78/30ef7aacd0074c69beba5fd76b9b4d2f9190d4c3a1316369b3404dd5b1c8/fit_aic-0.1.1.tar.gz | source | sdist | null | false | 8b21de33a2828518eb5112cfed78224f | 0d41dc64e78c12853ae35ccf36764fcde7c9caa8ffff85d7494b92a56751d798 | 0d7830ef7aacd0074c69beba5fd76b9b4d2f9190d4c3a1316369b3404dd5b1c8 | null | [
"LICENSE"
] | 193 |
2.4 | reasoning-core | 0.2.1 | A RL env with procedurally generated symbolic reasoning data | # Reasoning core ◉
reasoning-core is a text-based RLVR (reinforcement learning with verifiable rewards) environment for LLM reasoning training.
It is centered on expressive symbolic tasks, including full-fledged first-order logic (FOL), formal mathematics with TPTP, formal planning with novel domains, and syntax tasks.
🤗 https://hf.co/datasets/reasoning-core/rc1
# Prime Environment Hub
```python
#!pip install uv #install uv if needed
!uv tool install prime --with openai -q
!uv tool run prime -- env install sileod/reasoning-core-env
from verifiers import load_environment
import os; from openai import OpenAI
env = load_environment("reasoning-core-env")
client = OpenAI( base_url="https://openrouter.ai/api/v1", api_key=os.getenv("OPENROUTER_API_KEY")) #🔑
results = env.evaluate(client=client, model="gpt-4.1-mini", num_examples=20, rollouts_per_example=1)
df=env.make_dataset(results).to_pandas()
```
# Standalone
```bash
pip install reasoning_core
```
```python
from reasoning_core import list_tasks, get_task, score_answer

T = get_task('arithmetics')()
x = T.generate_example()
assert score_answer(x.answer, x) == 1
```
# Generation
Run `bash run_generate.sh` for multi-threaded generation to json files (readable by Huggingface Datasets).
# Reasoning gym
We use a custom interface, leaner than reasoning-gym (RG). But our tasks, which are all orthogonal to RG, can be imported in it.
```python
import reasoning_gym
from reasoning_gym.composite import DatasetSpec
from reasoning_core import register_to_reasoning_gym
register_to_reasoning_gym()
specs = [
# equal weights: each source contributes half of the composite
DatasetSpec(name='leg_counting', weight=2, config={}), #from reasoning_gym 🏋
DatasetSpec(name='arithmetics', weight=2, config={}), #from reasoning_core ◉
]
D=reasoning_gym.create_dataset('composite', size=10, seed=42, datasets=specs)
```
## Citation
```
@misc{reasoningcore2025,
title={Reasoning Core: A Scalable RL Environment for LLM Symbolic Reasoning},
author={Valentin Lacombe and Valentin Quesnel and Damien Sileo},
year={2025},
eprint={2509.18083},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2509.18083},
}
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"appdirs",
"beautifulsoup4",
"duckdb",
"easydict",
"exrex",
"faker",
"funcy",
"gramforge",
"inflection",
"lazy-object-proxy",
"multiprocess",
"networkx",
"nltk",
"num2words",
"numpy",
"pandas",
"pgmpy",
"pooch",
"prime",
"pyparsing",
"pyyaml",
"rapidfuzz",
"reasoning-gym",
"regex",
"requests",
"sympy",
"tabulate",
"tarski",
"tiktoken",
"timeout-decorator",
"timeoutcontext",
"tqdm",
"udocker",
"unified-planning[pyperplan]",
"verifiers",
"xpflow"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.10 | 2026-02-20T15:06:09.575084 | reasoning_core-0.2.1.tar.gz | 206,559 | ff/9f/62cde856e66d5748470266d9179d46c1bb7662ff9a386a55a2b6f8bfcad4/reasoning_core-0.2.1.tar.gz | source | sdist | null | false | c1ad4c6fd5b081d6f3c9f46d6cfd1cad | 06a522f04a9c3f928c84e103aa57921cf945b1e5cfacaa0be64f4d9851251910 | ff9f62cde856e66d5748470266d9179d46c1bb7662ff9a386a55a2b6f8bfcad4 | null | [
"LICENSE"
] | 192 |
2.4 | hexlogger | 2.0.2 | Premium console logging library with themes, gradients, panels, tables, progress bars, and spinners. | A premium, zero-dependency Python console logging library with beautiful colored output, multiple themes, gradient text, panels, tables, progress bars, and spinners.
---
```bash
pip install hexlogger
```
---
```python
import hexlogger
hexlogger.success(" BOT READY ")
hexlogger.info("Logged in as: MyBot#1234")
hexlogger.info("Bot Name: MyBot")
hexlogger.info("Bot ID: 987654321098765432")
hexlogger.info("Shards: 4")
hexlogger.info("Connected to: 128 guilds")
hexlogger.info("Connected to: 94,210 users")
hexlogger.warn("Rate limit hit on /api/messages")
hexlogger.error("Failed to fetch guild: 403 Forbidden")
hexlogger.critical("Shard 2 disconnected unexpectedly")
hexlogger.fatal("Token invalidated — shutting down")
```
**Output:**
```
[-(14:30:00)-] [ ● ] SUCCESS » BOT READY
[-(14:30:00)-] [ ● ] INFO » Logged in as: MyBot#1234
[-(14:30:00)-] [ ● ] INFO » Bot Name: MyBot
[-(14:30:00)-] [ ● ] INFO » Bot ID: 987654321098765432
[-(14:30:00)-] [ ● ] INFO » Shards: 4
[-(14:30:00)-] [ ● ] INFO » Connected to: 128 guilds
[-(14:30:00)-] [ ● ] INFO » Connected to: 94,210 users
[-(14:30:00)-] [ ● ] WARNING » Rate limit hit on /api/messages
[-(14:30:00)-] [ ● ] ERROR » Failed to fetch guild: 403 Forbidden
[-(14:30:00)-] [ ● ] CRITICAL » Shard 2 disconnected unexpectedly
[-(14:30:00)-] [ ● ] FATAL » Token invalidated — shutting down
```
---
| Function | Level |
|---|---|
| `trace(msg)` | TRACE |
| `debug(msg)` | DEBUG |
| `info(msg)` | INFO |
| `success(msg)` | SUCCESS |
| `warn(msg)` / `warning(msg)` | WARNING |
| `error(msg)` | ERROR |
| `critical(msg)` | CRITICAL |
| `fatal(msg)` | FATAL |
---
Nine display styles are available. All styles respond to the active theme for color. Switch styles with `hexlogger.setstyle(name)`.
```python
hexlogger.setstyle("default")
```
```
[-(14:30:00)-] [ ● ] SUCCESS » BOT READY
[-(14:30:00)-] [ ● ] INFO » Logged in as: MyBot#1234
[-(14:30:00)-] [ ● ] INFO » Bot Name: MyBot
[-(14:30:00)-] [ ● ] INFO » Bot ID: 987654321098765432
[-(14:30:00)-] [ ● ] INFO » Shards: 4
[-(14:30:00)-] [ ● ] INFO » Connected to: 128 guilds
[-(14:30:00)-] [ ● ] INFO » Connected to: 94,210 users
[-(14:30:00)-] [ ● ] WARNING » Rate limit hit on /api/messages
[-(14:30:00)-] [ ● ] ERROR » Failed to fetch guild: 403 Forbidden
[-(14:30:00)-] [ ● ] CRITICAL » Shard 2 disconnected unexpectedly
[-(14:30:00)-] [ ● ] FATAL » Token invalidated — shutting down
```
```python
hexlogger.setstyle("box")
```
```
┃ 14:30:00 │ ✓ SUCCESS │ BOT READY
┃ 14:30:00 │ ℹ INFO │ Logged in as: MyBot#1234
┃ 14:30:00 │ ℹ INFO │ Bot Name: MyBot
┃ 14:30:00 │ ℹ INFO │ Bot ID: 987654321098765432
┃ 14:30:00 │ ℹ INFO │ Shards: 4
┃ 14:30:00 │ ℹ INFO │ Connected to: 128 guilds
┃ 14:30:00 │ ℹ INFO │ Connected to: 94,210 users
┃ 14:30:00 │ ⚠ WARNING │ Rate limit hit on /api/messages
┃ 14:30:00 │ ✗ ERROR │ Failed to fetch guild: 403 Forbidden
┃ 14:30:00 │ ☠ CRITICAL │ Shard 2 disconnected unexpectedly
┃ 14:30:00 │ 💀 FATAL │ Token invalidated — shutting down
```
```python
hexlogger.setstyle("modern")
```
```
14:30:00 ✓ SUCCESS BOT READY
14:30:00 ℹ INFO Logged in as: MyBot#1234
14:30:00 ℹ INFO Bot Name: MyBot
14:30:00 ℹ INFO Bot ID: 987654321098765432
14:30:00 ℹ INFO Shards: 4
14:30:00 ℹ INFO Connected to: 128 guilds
14:30:00 ℹ INFO Connected to: 94,210 users
14:30:00 ⚠ WARNING Rate limit hit on /api/messages
14:30:00 ✗ ERROR Failed to fetch guild: 403 Forbidden
14:30:00 ☠ CRITICAL Shard 2 disconnected unexpectedly
14:30:00 💀 FATAL Token invalidated — shutting down
```
```python
hexlogger.setstyle("bracket")
```
```
[14:30:00] [SUCCESS] BOT READY
[14:30:00] [INFO] Logged in as: MyBot#1234
[14:30:00] [INFO] Bot Name: MyBot
[14:30:00] [INFO] Bot ID: 987654321098765432
[14:30:00] [INFO] Shards: 4
[14:30:00] [INFO] Connected to: 128 guilds
[14:30:00] [INFO] Connected to: 94,210 users
[14:30:00] [WARNING] Rate limit hit on /api/messages
[14:30:00] [ERROR] Failed to fetch guild: 403 Forbidden
[14:30:00] [CRITICAL] Shard 2 disconnected unexpectedly
[14:30:00] [FATAL] Token invalidated — shutting down
```
```python
hexlogger.setstyle("arrow")
```
```
14:30:00 ▸ SUCCESS ▸ BOT READY
14:30:00 ▸ INFO ▸ Logged in as: MyBot#1234
14:30:00 ▸ INFO ▸ Bot Name: MyBot
14:30:00 ▸ INFO ▸ Bot ID: 987654321098765432
14:30:00 ▸ INFO ▸ Shards: 4
14:30:00 ▸ INFO ▸ Connected to: 128 guilds
14:30:00 ▸ INFO ▸ Connected to: 94,210 users
14:30:00 ▸ WARNING ▸ Rate limit hit on /api/messages
14:30:00 ▸ ERROR ▸ Failed to fetch guild: 403 Forbidden
14:30:00 ▸ CRITICAL ▸ Shard 2 disconnected unexpectedly
14:30:00 ▸ FATAL ▸ Token invalidated — shutting down
```
```python
hexlogger.setstyle("pipe")
```
```
▌ 14:30:00 | SUCCESS | BOT READY
▌ 14:30:00 | INFO | Logged in as: MyBot#1234
▌ 14:30:00 | INFO | Bot Name: MyBot
▌ 14:30:00 | INFO | Bot ID: 987654321098765432
▌ 14:30:00 | INFO | Shards: 4
▌ 14:30:00 | INFO | Connected to: 128 guilds
▌ 14:30:00 | INFO | Connected to: 94,210 users
▌ 14:30:00 | WARNING | Rate limit hit on /api/messages
▌ 14:30:00 | ERROR | Failed to fetch guild: 403 Forbidden
▌ 14:30:00 | CRITICAL | Shard 2 disconnected unexpectedly
▌ 14:30:00 | FATAL | Token invalidated — shutting down
```
```python
hexlogger.setstyle("tag")
```
```
14:30:00 SUCCESS BOT READY
14:30:00 INFO Logged in as: MyBot#1234
14:30:00 INFO Bot Name: MyBot
14:30:00 INFO Bot ID: 987654321098765432
14:30:00 INFO Shards: 4
14:30:00 INFO Connected to: 128 guilds
14:30:00 INFO Connected to: 94,210 users
14:30:00 WARNING Rate limit hit on /api/messages
14:30:00 ERROR Failed to fetch guild: 403 Forbidden
14:30:00 CRITICAL Shard 2 disconnected unexpectedly
14:30:00 FATAL Token invalidated — shutting down
```
> Each level label renders with a distinct background color in the terminal.
```python
hexlogger.setstyle("dots")
```
```
✓ 14:30:00 · SUCCESS · BOT READY
ℹ 14:30:00 · INFO · Logged in as: MyBot#1234
ℹ 14:30:00 · INFO · Bot Name: MyBot
ℹ 14:30:00 · INFO · Bot ID: 987654321098765432
ℹ 14:30:00 · INFO · Shards: 4
ℹ 14:30:00 · INFO · Connected to: 128 guilds
ℹ 14:30:00 · INFO · Connected to: 94,210 users
⚠ 14:30:00 · WARNING · Rate limit hit on /api/messages
✗ 14:30:00 · ERROR · Failed to fetch guild: 403 Forbidden
☠ 14:30:00 · CRITICAL · Shard 2 disconnected unexpectedly
💀 14:30:00 · FATAL · Token invalidated — shutting down
```
```python
hexlogger.setstyle("clean")
```
```
14:30:00 SUCCESS BOT READY
14:30:00 INFO Logged in as: MyBot#1234
14:30:00 INFO Bot Name: MyBot
14:30:00 INFO Bot ID: 987654321098765432
14:30:00 INFO Shards: 4
14:30:00 INFO Connected to: 128 guilds
14:30:00 INFO Connected to: 94,210 users
14:30:00 WARNING Rate limit hit on /api/messages
14:30:00 ERROR Failed to fetch guild: 403 Forbidden
14:30:00 CRITICAL Shard 2 disconnected unexpectedly
14:30:00 FATAL Token invalidated — shutting down
```
---
Six built-in themes control the colors applied to every style. Themes do not affect the structure of the output — only the palette.
```python
hexlogger.settheme("default") # Soft blue/green palette
hexlogger.settheme("neon") # Vivid cyan/magenta palette
hexlogger.settheme("minimal") # Muted grays, abbreviated labels
hexlogger.settheme("hacker") # Green-on-black matrix palette
hexlogger.settheme("sunset") # Warm orange/red palette
hexlogger.settheme("ocean") # Deep blue/teal palette
```
The `minimal` theme also uses abbreviated level labels:
```
[-(14:30:00)-] [ ● ] OK » BOT READY
[-(14:30:00)-] [ ● ] INF » Logged in as: MyBot#1234
[-(14:30:00)-] [ ● ] INF » Bot Name: MyBot
[-(14:30:00)-] [ ● ] INF » Bot ID: 987654321098765432
[-(14:30:00)-] [ ● ] INF » Shards: 4
[-(14:30:00)-] [ ● ] INF » Connected to: 128 guilds
[-(14:30:00)-] [ ● ] INF » Connected to: 94,210 users
[-(14:30:00)-] [ ● ] WRN » Rate limit hit on /api/messages
[-(14:30:00)-] [ ● ] ERR » Failed to fetch guild: 403 Forbidden
[-(14:30:00)-] [ ● ] CRT » Shard 2 disconnected unexpectedly
[-(14:30:00)-] [ ● ] FTL » Token invalidated — shutting down
```
```python
from hexlogger import rgb
hexlogger._config.add_theme("mytheme", {
"timestamp": rgb(0, 100, 200),
"bracket": rgb(0, 80, 160),
"dot": rgb(0, 200, 255),
"separator": rgb(0, 150, 200),
"debug": rgb(100, 180, 255),
"info": rgb(0, 200, 255),
"success": rgb(0, 255, 200),
"warning": rgb(255, 200, 0),
"error": rgb(255, 80, 80),
"critical": rgb(255, 0, 50),
"fatal": rgb(200, 0, 0),
"trace": rgb(100, 150, 200),
"divider": rgb(0, 60, 100),
"label_debug": "DEBUG",
"label_info": "INFO",
"label_success": "SUCCESS",
"label_warning": "WARNING",
"label_error": "ERROR",
"label_critical": "CRITICAL",
"label_fatal": "FATAL",
"label_trace": "TRACE",
})
hexlogger.settheme("mytheme")
```
---
```python
hexlogger.divider()
hexlogger.divider(char="═", length=80)
```
```
────────────────────────────────────────────────────────────
```
```python
hexlogger.section("RESULTS")
hexlogger.rule("Settings", char="═")
hexlogger.blank(2)
```
```
══════════════════════════ RESULTS ═══════════════════════════
```
```python
hexlogger.banner("HexLogger\nv3.0.0")
```
```
╔═══════════════╗
║ HexLogger ║
║ v3.0.0 ║
╚═══════════════╝
```
```python
hexlogger.panel(
"Server: online\nPing: 42ms\nUptime: 99.9%",
title="Status",
style="rounded"
)
```
```
╭─ Status ──────────────────────────────────────╮
│ Server: online │
│ Ping: 42ms │
│ Uptime: 99.9% │
╰───────────────────────────────────────────────╯
```
Available panel styles: `rounded`, `sharp`, `double`, `heavy`, `ascii`
```python
hexlogger.box("Deployment complete.", style="double", align="center")
```
```
╔═════════════════════════╗
║ Deployment complete. ║
╚═════════════════════════╝
```
Available box styles: `rounded`, `sharp`, `double`, `heavy`, `ascii`, `stars`, `dashes`
```python
hexlogger.table(
headers=["Name", "Status", "Ping"],
rows=[
["Server 1", "Online", "12ms"],
["Server 2", "Offline", "---"],
["Server 3", "Online", "45ms"],
]
)
```
```
╭──────────┬─────────┬──────╮
│ Name │ Status │ Ping │
├──────────┼─────────┼──────┤
│ Server 1 │ Online │ 12ms │
│ Server 2 │ Offline │ --- │
│ Server 3 │ Online │ 45ms │
╰──────────┴─────────┴──────╯
```
```python
hexlogger.pair("Logged in as", "MyBot#1234")
hexlogger.pair("Bot ID", "987654321098765432")
hexlogger.pair("Guilds", "128")
hexlogger.pair("Users", "94,210")
```
```
Logged in as: MyBot#1234
Bot ID: 987654321098765432
Guilds: 128
Users: 94,210
```
---
```python
hexlogger.gprint("Hello World!", (255, 0, 100), (100, 0, 255))
hexlogger.rbprint("Rainbow text!")
text = hexlogger.gtext("Custom gradient", (0, 255, 200), (255, 0, 100))
multi = hexlogger.mgtext("Multi-stop", [(255,0,0), (0,255,0), (0,0,255)])
```
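Gradient text works by linearly interpolating each RGB channel between the start and end colors across the characters of the string. A minimal sketch of that interpolation (illustrative only, not hexlogger's actual internals):

```python
def lerp_rgb(start, end, t):
    """Linearly interpolate two (r, g, b) tuples at position t in [0, 1]."""
    return tuple(round(s + (e - s) * t) for s, e in zip(start, end))

# Midpoint between the two gradient endpoints used above:
lerp_rgb((255, 0, 100), (100, 0, 255), 0.5)  # (178, 0, 178)
```

A gradient printer simply applies this at `t = i / (len(text) - 1)` for each character index `i` and emits the matching ANSI color code.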
```python
from hexlogger import bold, italic, uline, clr, Colors
print(bold("Bold text"))
print(italic("Italic text"))
print(uline("Underlined text"))
print(clr("Cyan text", Colors.CYAN))
```
```python
from hexlogger import rgb, bgrgb, hexclr, bghex
color = rgb(255, 128, 0)
bg = bgrgb(30, 30, 30)
from_hex = hexclr("#FF8800")
bg_hex_ = bghex("#1E1E1E")
```
---
```python
import time
bar = hexlogger.Progress(total=100, label="Downloading", width=40)
for i in range(100):
time.sleep(0.05)
bar.update()
```
```
Downloading │████████████████████░░░░░░░░░░░░░░░░░░░│ 52.0% (52/100) [2.6s / ETA 2.4s]
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `total` | int | — | Total number of steps |
| `width` | int | `30` | Bar width in characters |
| `label` | str | `"Progress"` | Label displayed before the bar |
| `fill` | str | `"█"` | Fill character |
| `empty` | str | `"░"` | Empty character |
| `color` | str | theme | Bar color |
---
```python
import time
with hexlogger.Spinner("Loading data...", style="dots"):
time.sleep(3)
spinner = hexlogger.Spinner("Processing...", style="circle")
spinner.start()
time.sleep(2)
spinner.stop("Done!")
```
```
⠹ Loading data...
```
Available styles: `dots`, `line`, `arrow`, `bounce`, `box`, `circle`
---
```python
import time
with hexlogger.Timer("Database query"):
time.sleep(1.5)
timer = hexlogger.Timer("API call")
timer.start()
elapsed = timer.stop()
```
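A timer of this shape is straightforward to sketch as a context manager in plain Python (illustrative only; hexlogger's `Timer` also prints a themed message):

```python
import time

class Timer:
    """Minimal context-manager timer: records elapsed seconds on exit."""
    def __init__(self, label="Timer"):
        self.label = label
        self.elapsed = None

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.elapsed = time.perf_counter() - self._start
        return False  # never swallow exceptions

with Timer("sleep") as t:
    time.sleep(0.01)
print(f"{t.label}: {t.elapsed:.3f}s")
```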
---
```python
hexlogger.settheme("neon")
hexlogger.setstyle("modern")
hexlogger.setlog("app.log")
hexlogger.setlevel("warning")
hexlogger._config.time_format = "%Y-%m-%d %H:%M:%S"
hexlogger._config.show_time = False
```
Only messages at or above the configured level are printed.
```python
hexlogger.setlevel("trace")
hexlogger.setlevel("debug")
hexlogger.setlevel("info")
hexlogger.setlevel("warning")
hexlogger.setlevel("error")
hexlogger.setlevel("critical")
hexlogger.setlevel("fatal")
```
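The filtering rule is a simple ordinal comparison: a message passes if its level's rank is at least the configured rank. Sketched in plain Python (illustrative, not hexlogger's internals; the level ordering is assumed from the function table above):

```python
LEVELS = ["trace", "debug", "info", "success",
          "warning", "error", "critical", "fatal"]

def passes(message_level: str, configured_level: str) -> bool:
    """True if a message at message_level should be printed."""
    return LEVELS.index(message_level) >= LEVELS.index(configured_level)

passes("error", "warning")  # True
passes("info", "warning")   # False
```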
---
- Zero dependencies
- 6 built-in themes with full custom theme support
- 9 display styles
- RGB, background, and hex color helpers
- Gradient and rainbow text
- Panels, tables, banners, and boxes
- Progress bars with ETA
- Animated spinners in 6 styles
- Timer utility
- File logging with automatic ANSI stripping
- Thread-safe output
- Windows console compatible
- Python 3.7+
---
MIT License — Created by Itz Montage
| text/markdown | Itz Montage | null | null | null | MIT | logging, console, terminal, colors, logger, hex, ansi | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: System :: Logging",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Montagexd/hexlogger"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T15:05:58.439557 | hexlogger-2.0.2.tar.gz | 17,318 | 4c/96/11f172b16ed864a1c983383b98b3217d9683c7af80989b52f602a9ed8261/hexlogger-2.0.2.tar.gz | source | sdist | null | false | 56442eeefb068d5ddcac1b101ca5b516 | f299b76124850651653ae499a3b05d969dadad40f9581b274772ee028b346dbd | 4c9611f172b16ed864a1c983383b98b3217d9683c7af80989b52f602a9ed8261 | null | [
"LICENSE"
] | 196 |
2.4 | topologicpy | 0.9.4 | An AI-Powered Spatial Modelling and Analysis Software Library for Architecture, Engineering, and Construction. | # topologicpy
<img src="https://topologic.app/wp-content/uploads/2023/02/topologicpy-logo-no-loop.gif" alt="topologicpy logo" width="250" loop="1">
# An AI-Powered Spatial Modelling and Analysis Software Library for Architecture, Engineering, and Construction
## Introduction
Welcome to topologicpy (rhymes with apple pie). topologicpy is an open-source python 3 implementation of [Topologic](https://topologic.app) which is a powerful spatial modelling and analysis software library that revolutionizes the way you design architectural spaces, buildings, and artefacts. Topologic's advanced features enable you to create hierarchical and topological information-rich 3D representations that offer unprecedented flexibility and control in your design process. With the integration of geometry, topology, information, and artificial intelligence, Topologic enriches Building Information Models with Building *Intelligence* Models.
Two of Topologic's main strengths are its support for *defeaturing* and *encoded meshing*. By simplifying the geometry of a model and removing small or unnecessary details not needed for analysis, defeaturing allows for faster and more accurate analysis while maintaining topological consistency. This feature enables you to transform low-quality, heavy BIM models into high-quality, lightweight representations ready for rigorous analysis effortlessly. Encoded meshing allows you to use the same base elements available in your commercial BIM platform to cleanly build 3D information-encoded models that match your exacting specifications.
Topologic's versatility extends to entities with mixed dimensionalities, enabling structural models, for example, to be represented coherently. Lines can represent columns and beams, surfaces can represent walls and slabs, and volumes can represent solids. Even non-building entities like structural loads can be efficiently attached to the structure. This approach creates mixed-dimensional models that are highly compatible with structural analysis simulation software.
Topologic's graph-based representation makes it a natural fit for integrating with Graph Machine Learning (GML), an exciting new branch of artificial intelligence. With GML, you can process vast amounts of connected data and extract valuable insights quickly and accurately. Topologic's intelligent algorithms for graph and node classification take GML to the next level by using the extracted data to classify building typologies, predict associations, and complete missing information in building information models. This integration empowers you to leverage the historical knowledge embedded in your databases and make informed decisions about your current design projects. With Topologic and GML, you can streamline your workflow, enhance your productivity, and achieve your project goals with greater efficiency and precision.
Experience Topologic's comprehensive and well-documented Application Programming Interface (API) and enjoy the freedom and flexibility that Topologic offers in your architectural design process. Topologic uses cutting-edge C++-based non-manifold topology (NMT) core technology ([Open CASCADE](https://www.opencascade.com/)), and Python bindings. Interacting with Topologic is easily accomplished through a command-line interface and scripts, visual data flow programming (VDFP) plugins for popular BIM software, and cloud-based interfaces through [Streamlit](https://streamlit.io/). You can easily interact with Topologic in various ways to perform design and analysis tasks or even seamlessly customize and embed it in your own in-house software and workflows. Plus, Topologic includes several industry-standard methods for data transport including IFC, OBJ, BREP, HBJSON, CSV, as well as serializing through cloud-based services such as [Speckle](https://speckle.systems/).
Topologic’s open-source philosophy and licensing ([AGPLv3](https://www.gnu.org/licenses/agpl-3.0.en.html)) enables you to achieve your design vision with minimal incremental costs, ensuring a high return on investment. You control and own your information outright, and nothing is ever trapped in an expensive subscription model. Topologic empowers you to build and share data apps with ease, giving you the flexibility to choose between local or cloud-based options and the peace of mind to focus on what matters most.
Join the revolution in architectural design with Topologic. Try it today and see the difference for yourself.
## Installation
topologicpy can be installed using the **pip** command as such:
`pip install topologicpy --upgrade`
## Prerequisites
topologicpy depends on the following python libraries which will be installed automatically from pip:
<details>
<summary>
<b>Expand to view dependencies</b>
</summary>
* [numpy](http://numpy.org) >= 1.24.0
* [scipy](http://scipy.org) >= 1.10.0
* [plotly](http://plotly.com/) >= 5.11.0
* [ifcopenshell](http://ifcopenshell.org/) >=0.7.9
* [ipfshttpclient](https://pypi.org/project/ipfshttpclient/) >= 0.7.0
* [web3](https://web3py.readthedocs.io/en/stable/) >=5.30.0
* [openstudio](https://openstudio.net/) >= 3.4.0
* [topologic_core](https://pypi.org/project/topologic_core/) >= 6.0.6
* [lbt-ladybug](https://pypi.org/project/lbt-ladybug/) >= 0.25.161
* [lbt-honeybee](https://pypi.org/project/lbt-honeybee/) >= 0.6.12
* [honeybee-energy](https://pypi.org/project/honeybee-energy/) >= 1.91.49
* [json](https://docs.python.org/3/library/json.html) >= 2.0.9
* [py2neo](https://py2neo.org/) >= 2021.2.3
* [pyvisgraph](https://github.com/TaipanRex/pyvisgraph) >= 0.2.1
* [specklepy](https://github.com/specklesystems/specklepy) >= 2.7.6
* [pandas](https://pandas.pydata.org/) >= 1.4.2
* [dgl](https://github.com/dmlc/dgl) >= 0.8.2
</details>
## How to start using Topologic
1. Open your favourite python editor ([jupyter notebook](https://jupyter.org/) is highly recommended)
1. Type `import topologicpy`
1. Start using the API
## API Documentation
API documentation can be found at [https://topologicpy.readthedocs.io](https://topologicpy.readthedocs.io)
## How to cite topologicpy
If you wish to cite the actual software, you can use:
**Jabi, W. (2024). topologicpy. pypi.org. http://doi.org/10.5281/zenodo.11555172**
To cite one of the main papers that defines topologicpy, you can use:
**Jabi, W., & Chatzivasileiadi, A. (2021). Topologic: Exploring Spatial Reasoning Through Geometry, Topology, and Semantics. In S. Eloy, D. Leite Viana, F. Morais, & J. Vieira Vaz (Eds.), Formal Methods in Architecture (pp. 277–285). Springer International Publishing. https://doi.org/10.1007/978-3-030-57509-0_25**
Or you can import the following .bib formatted references into your favourite reference manager
```
@misc{Jabi2025,
author = {Wassim Jabi},
doi = {https://doi.org/10.5281/zenodo.11555173},
title = {topologicpy},
url = {http://pypi.org/projects/topologicpy},
year = {2025},
}
```
```
@inbook{Jabi2021,
abstract = {Topologic is a software modelling library that supports a comprehensive conceptual framework for the hierarchical spatial representation of buildings based on the data structures and concepts of non-manifold topology (NMT). Topologic supports conceptual design and spatial reasoning through the integration of geometry, topology, and semantics. This enables architects and designers to reflect on their design decisions before the complexities of building information modelling (BIM) set in. We summarize below related work on NMT starting in the late 1980s, describe Topologic’s software architecture, methods, and classes, and discuss how Topologic’s features support conceptual design and spatial reasoning. We also report on a software usability workshop that was conducted to validate a software evaluation methodology and reports on the collected qualitative data. A reflection on Topologic’s features and software architecture illustrates how it enables a fundamental shift from pursuing fidelity of design form to pursuing fidelity of design intent.},
author = {Wassim Jabi and Aikaterini Chatzivasileiadi},
city = {Cham},
doi = {10.1007/978-3-030-57509-0_25},
editor = {Sara Eloy and David Leite Viana and Franklim Morais and Jorge Vieira Vaz},
isbn = {978-3-030-57509-0},
journal = {Formal Methods in Architecture},
pages = {277-285},
publisher = {Springer International Publishing},
title = {Topologic: Exploring Spatial Reasoning Through Geometry, Topology, and Semantics},
url = {https://link.springer.com/10.1007/978-3-030-57509-0_25},
year = {2021},
}
```
topologicpy: © 2025 Wassim Jabi
Topologic: © 2025 Cardiff University and UCL
| text/markdown | null | Wassim Jabi <wassim.jabi@gmail.com> | null | null | AGPL v3 License
Copyright (c) 2024 Wassim Jabi
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU Affero General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
details.
You should have received a copy of the GNU Affero General Public License along with
this program. If not, see <https://www.gnu.org/licenses/>.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | <3.15,>=3.8 | [] | [] | [] | [
"numpy>=1.18.0",
"scipy>=1.4.1",
"pandas",
"tqdm",
"plotly",
"lark",
"specklepy",
"webcolors",
"topologic_core>=7.0.1",
"pytest-xdist>=2.4.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/wassimj/TopologicPy",
"Bug Tracker, https://github.com/wassimj/TopologicPy/issues",
"Documentation, https://topologicpy.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:05:18.338330 | topologicpy-0.9.4-py3-none-any.whl | 576,965 | 2b/26/e1891d5f7b56aaacec986a540fc3bf6c90fcab3417b523195d4c4a376076/topologicpy-0.9.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 759d060041a892ac29c790f5c3d81de4 | d045145ca0d84c82e0e0edb53abab50aabd97c786019f0d24ac2a69db23f5084 | 2b26e1891d5f7b56aaacec986a540fc3bf6c90fcab3417b523195d4c4a376076 | null | [
"LICENSE"
] | 110 |
2.4 | iterative_stats | 0.1.2 | This package implements iterative algorithms to compute some basics statistics | # BasicIterativeStatistics
In this repository, basic iterative statistics are implemented.
## Installation
The python package **iterative_stats** is available on [pypi](https://pypi.org/project/iterative-stats/).
```
pip install iterative-stats
```
Otherwise, one can clone the following repository:
```
git clone https://github.com/IterativeStatistics/BasicIterativeStatistics.git
```
To install the environment, please use poetry:
```
poetry install
```
NB: One can also use a conda or python environment. The list of dependencies is available in the [pyproject.toml](pyproject.toml) file.
To run the tests:
```
poetry run pytest tests
```
or for a specific test (ex: tests/unit/test_IterativeMean.py)
```
poetry run pytest tests/unit/test_IterativeMean.py
```
## License
The python package **iterative_stats** is free software distributed under the BSD 3-Clause License. The terms of the BSD 3-Clause License can be found in the file LICENSE.
## Iterative statistics
In this repository, we implement the following statistics:
- Mean (see examples [here](./tests/unit/test_IterativeMean.py))
- Variance (see examples [here](./tests/unit/test_IterativeVariance.py))
- Higher-order moments, skewness and kurtosis (see examples [here](./tests/unit/test_IterativeMoments.py))
- Extrema (see examples [here](./tests/unit/test_IterativeExtrema.py))
- Covariance (see examples [here](./tests/unit/test_IterativeCovariance.py))
- Threshold (see examples [here](tests/unit/test_IterativeThreshold.py)) (count the number of threshold exceedances).
- Quantile (see examples [here](tests/unit/test_IterativeQuantile.py)): this statistic is still a work in progress and must be used with care!
- Sobol indices
*About the iterative higher-order moments*: The iterative higher-order moments are available as `IterativeMoments` and permit computing higher-order moments up to the 4th order (including skewness and kurtosis). The implementation follows [[5]](#5).
!!! danger "Beware of 4th order"
The 4th order (kurtosis) does not pass our tests comparing against SciPy's non-iterative kurtosis calculation. While this is something to beware of, we also test the popular library OpenTURNS, which fails the same test and uses the same equation as we do. Follow the discussion [here](https://github.com/openturns/openturns/issues/2345).
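To illustrate the one-pass principle behind these statistics (this is a sketch of the general technique, not this package's actual classes), a Welford-style iterative mean/variance update looks like:

```python
class IterativeMeanVar:
    """Welford's one-pass update for mean and variance.

    Illustrative of the iterative principle only; the package's own
    classes have a different interface.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def increment(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # update the running mean
        self._m2 += delta * (x - self.mean)  # uses both old and new mean

    def variance(self, ddof: int = 0) -> float:
        return self._m2 / (self.n - ddof)
```

Each `increment` call updates the statistics in O(1) time and memory, which is what makes these estimators usable in-transit, without storing the samples.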
*About the quantiles*: Following [[4]](#4), we implement the Robbins-Monro (RM) algorithm for quantile estimation. The tuning parameters of this algorithm have been studied through intensive numerical tests. In the implemented algorithm, the final number of iterations N (i.e., the number of runs of the computer model) is fixed a priori, which is a classical way of dealing with uncertainty quantification problems.
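The core RM recursion is compact; a minimal sketch (with an untuned constant gain `c`, unlike the studied tuning used by the package) is:

```python
import random


def rm_quantile(samples, p, c=1.0, q0=0.0):
    """Robbins-Monro recursion for the p-quantile with gain sequence c/n.

    Illustrative sketch only; the constant c is left untuned here,
    unlike the package's studied tuning.
    """
    q = q0
    for n, x in enumerate(samples, start=1):
        # Move q up when the sample falls above it, down otherwise,
        # so that P(X <= q) is driven toward p.
        q += (c / n) * (p - (1.0 if x <= q else 0.0))
    return q
```

The 1/n gain sequence makes early samples move the estimate a lot and later samples refine it, so the estimate converges without ever storing the sample stream.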
*About the Sobol indices*:
It also contains more advanced statistics: the Sobol indices. For each method (Martinez, Saltelli and Jansen), the iteratively computed first-order indices as well as the total-order indices are available. We also include the second order for the Martinez and Jansen methods (the second order for the Saltelli method is still a work in progress).
- Pearson coefficient (Martinez): examples are available [here](tests/unit/sensitivity/test_IterativeSensitivityMartinez.py).
- Jansen method: examples are available [here](tests/unit/sensitivity/test_IterativeSensitivityJansen.py).
- Saltelli method: examples are available [here](tests/unit/sensitivity/test_IterativeSensitivityJansen.py).
NB: This package also contains useful methods for performing iterative statistics computations, such as shifted averaging and shifted dot product computation:
- Shifted dot product (see example [here](./tests/unit/test_IterativeDotProduct.py))
- Shifted mean (see example [here](./tests/unit/test_IterativeMean.py))
### Fault-Tolerance
For each statistics class, we implement **save_state()** and **load_from_state()** methods to respectively save the current state and create a new object of type IterativeStatistics from a state object (a python dictionary).
These methods can be used, for example, as follows:
```python
iterativeMean = IterativeMean(dim=1)
# ... Do some computations
# Save the current state
state_obj = iterativeMean.save_state()
# Reload an IterativeMean object of state state_obj
iterativeMean_reload = IterativeMean(dim=1, state=state_obj)
```
NB: the methods **save_state()** and **load_from_state()** are not available yet for the quantile and Saltelli Sobol indices. This is still a work in progress.
### Examples
Here are some examples of how to use **iterative-stats** to compute Sobol index iteratively.
```python
from iterative_stats.sensitivity.sensitivity_martinez import IterativeSensitivityMartinez as IterativeSensitivityMethod
import numpy as np

dim = 10        # field size
nb_parms = 3    # number of parameters
nb_sim = 1000   # number of experimental-design samples (was undefined in the original snippet)
second_order = True  # whether to also compute the second-order indices

# Create an instance of the object IterativeSensitivityMethod
sensitivity_instance = IterativeSensitivityMethod(dim=dim, nb_parms=nb_parms, second_order=second_order)
# Generate an experimental design
from tests.mock.uniform_3d import Uniform3D
input_sample_generator = Uniform3D(nb_parms = nb_parms, nb_sim = nb_sim, second_order=second_order).generator()
# Load a function (here ishigami function)
from tests.mock.ishigami import ishigami
while True:
    try:
        # Generate the next sample
        input_sample = next(input_sample_generator)
        # Apply ishigami function
        output_sample = np.apply_along_axis(ishigami, 1, input_sample)
        # Update the sensitivity instance
        sensitivity_instance.increment(output_sample)
    except StopIteration:
        break
first_order = sensitivity_instance.getFirstOrderIndices()
print(f"First Order Sobol indices (Martinez method): {first_order}")
# NB: the total- and second-order getter names below are assumed to mirror the
# first-order one above; check the package API for the exact names.
total_order = sensitivity_instance.getTotalOrderIndices()
print(f"Total Order Sobol indices (Martinez method): {total_order}")
second_order_indices = sensitivity_instance.getSecondOrderIndices()
print(f"Second Order Sobol indices (Martinez method): {second_order_indices}")
```
NB: The computation of Sobol Indices requires the preparation of a specific experimental design based on the pick-freeze method (see [[1]](#1) for details). This method has been implemented into the class [**AbstractExperiment**](iterative_stats/experimental_design/experiment.py) and some examples can be found [here](tests/unit/experimental_design/test_experiments.py).
## References
The implementation of the iterative formulas is based on the following papers:
<a id="1">[1]</a> Théophile Terraz, Alejandro Ribes, Yvan Fournier, Bertrand Iooss, and Bruno Raffin. 2017. Melissa: large scale in transit sensitivity analysis avoiding intermediate files. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '17). Association for Computing Machinery, New York, NY, USA, Article 61, 1–14. https://doi.org/10.1145/3126908.3126922
<a id="2">[2]</a> M. Baudin, K. Boumhaout, T. Delage, B. Iooss, and J-M. Martinez. 2016. Numerical stability of Sobol' indices estimation formula. In Proceedings of the 8th International Conference on Sensitivity Analysis of Model Output (SAMO 2016). Le Tampon, Réunion Island, France.
<a id="3">[3]</a> Philippe Pébay. 2008. Formulas for robust, one-pass parallel computation of covariances and arbitrary-order statistical moments. Sandia Report SAND2008-6212, Sandia National Laboratories 94 (2008).
<a id="4">[4]</a> Iooss, Bertrand, and Jérôme Lonchampt. "Robust tuning of Robbins-Monro algorithm for quantile estimation-Application to wind-farm asset management." ESREL 2021. 2021.
<a id="5">[5]</a> Meng, Xiangrui. "Simpler online updates for arbitrary-order central moments." arXiv preprint arXiv:1510.04923 (2015).
| text/markdown | Frederique Robin | null | null | null | LICENSE | null | [
"License :: OSI Approved :: BSD License",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"numpy<3.0.0,>=1.26.0",
"pyyaml>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/your-username/iterative-stats",
"Repository, https://github.com/your-username/iterative-stats"
] | poetry/2.3.2 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T15:04:54.108972 | iterative_stats-0.1.2-py3-none-any.whl | 19,211 | 93/59/f048ab03e17a228ce533a71037387e51aa9353cded6fbf75bafc177f9007/iterative_stats-0.1.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 9fe7b4ff3948e6d66a9ddea3ed93d324 | 435ecdd5d6047150f3384b374a324ac3617ad5b66fa2ba583e92438b5917e4d0 | 9359f048ab03e17a228ce533a71037387e51aa9353cded6fbf75bafc177f9007 | null | [
"LICENSE"
] | 0 |
2.4 | qodev-apollo-api | 0.1.1 | Async Python client for Apollo.io CRM API | # qodev-apollo-api
[](https://github.com/qodevai/apollo-api/actions/workflows/ci.yml)
[](https://pypi.org/project/qodev-apollo-api/)
[](https://pypi.org/project/qodev-apollo-api/)
Async Python client for Apollo.io CRM API with full type safety.
## Features
- **Async-first design** with httpx
- **Full Pydantic v2 models** for type safety
- **Context manager support** for clean resource management
- **Intelligent contact matching** with 3-tier fallback strategy
- **Built-in rate limit tracking** (400/hour, 200/min, 2000/day)
- **40+ API methods** across 8 endpoint groups
- **Comprehensive error handling** with custom exceptions
- **ProseMirror to Markdown conversion** for notes
## Installation
```bash
pip install qodev-apollo-api
```
Or with uv:
```bash
uv add qodev-apollo-api
```
## Quick Start
```python
from qodev_apollo_api import ApolloClient

async with ApolloClient() as client:
    # Search contacts
    contacts = await client.search_contacts(limit=10)
    for contact in contacts.items:
        print(f"{contact.name} - {contact.email}")

    # Get contact details
    contact = await client.get_contact("contact_id")
    print(f"Title: {contact.title} at {contact.company}")

    # Enrich organization data
    company = await client.enrich_organization("apollo.io")
    print(f"Employees: {company.get('estimated_num_employees')}")

    # Create a note
    await client.create_note(
        content="Great conversation about Q1 goals",
        contact_ids=["contact_id"],
    )
```
## Configuration
### API Key
Set environment variable:
```bash
export APOLLO_API_KEY="your_api_key"
```
Or pass directly:
```python
async with ApolloClient(api_key="your_api_key") as client:
    ...
```
### Timeout
Customize request timeout (default 30 seconds):
```python
async with ApolloClient(timeout=60.0) as client:
    ...
```
## Rate Limiting
Apollo.io enforces these limits:
- **400 requests/hour** (primary bottleneck for sustained operations)
- 200 requests/minute
- 2,000 requests/day
Monitor rate limits via `client.rate_limit_status`:
```python
async with ApolloClient() as client:
    await client.search_contacts(limit=10)

    status = client.rate_limit_status
    print(f"Hourly: {status['hourly_left']}/{status['hourly_limit']}")
    print(f"Minute: {status['minute_left']}/{status['minute_limit']}")
    print(f"Daily: {status['daily_left']}/{status['daily_limit']}")
```
**Best practices:**
- Add delays between requests (10+ seconds for sustained operations)
- Monitor `hourly_left` - stop if < 50 requests remaining
- Handle `RateLimitError` with exponential backoff
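A minimal exponential-backoff wrapper, sketched here with a stand-in `RateLimitError` rather than importing the package's own exception, might look like:

```python
import asyncio
import random


class RateLimitError(Exception):
    """Stand-in for the client's RateLimitError, which carries `retry_after`."""

    def __init__(self, retry_after=None):
        super().__init__("rate limited")
        self.retry_after = retry_after


async def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry an async zero-argument callable, backing off exponentially
    (with full jitter) whenever it raises RateLimitError."""
    for attempt in range(max_retries):
        try:
            return await call()
        except RateLimitError as exc:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # Prefer the server's hint; otherwise double the delay each attempt.
            delay = exc.retry_after if exc.retry_after else base_delay * 2 ** attempt
            await asyncio.sleep(delay + random.uniform(0, delay))
```

In practice you would catch the client's own `RateLimitError` (imported from `qodev_apollo_api`) and pass something like `lambda: client.search_contacts(limit=10)` as the callable.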
## Contact Matching
Three-tier fallback strategy for robust contact finding:
```python
contact_id = await client.find_contact_by_linkedin_url(
    linkedin_url="https://linkedin.com/in/johndoe",
    person_name="John Doe",       # Fallback if URL changed
    create_if_missing=True,       # Auto-create from People DB (210M+ contacts)
    contact_stage_id="stage_id",  # Stage to assign if created
)
```
**How it works:**
1. **LinkedIn URL search** - Exact match (most reliable)
2. **Name search** - Handles URL changes (requires unique match)
3. **People database** - Auto-create if enabled (verifies URL match)
**Common scenarios:**
- LinkedIn URLs change when users update custom URLs
- Name search returns multiple matches → skipped for safety
- People database doesn't support special characters (umlauts, etc.)
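The fallback order above can be sketched as a plain resolution chain; the search callables here are hypothetical stand-ins for the client's own lookups:

```python
from typing import Callable, Optional, Sequence


def resolve_contact_id(
    linkedin_url: str,
    person_name: str,
    search_by_url: Callable[[str], Sequence[str]],
    search_by_name: Callable[[str], Sequence[str]],
    create_from_people_db: Callable[[str], Optional[str]],
) -> Optional[str]:
    """Three-tier fallback sketch mirroring the strategy described above."""
    # Tier 1: exact LinkedIn URL match (most reliable)
    matches = search_by_url(linkedin_url)
    if matches:
        return matches[0]
    # Tier 2: name search; only accept a unique match, for safety
    matches = search_by_name(person_name)
    if len(matches) == 1:
        return matches[0]
    # Tier 3: fall back to the people database (may create a new contact)
    return create_from_people_db(linkedin_url)
```

The key design choice is tier 2: an ambiguous name match is skipped rather than guessed, so a wrong contact is never silently attached.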
## API Methods
### Contacts
```python
# Search
contacts = await client.search_contacts(
    limit=100,
    q_keywords="CEO",
    contact_stage_ids=["stage_id"],
)
# Get by ID
contact = await client.get_contact("contact_id")
# Create
result = await client.create_contact(
    first_name="John",
    last_name="Doe",
    email="john@example.com",
    title="CEO",
)
# Get contact stages
stages = await client.get_contact_stages()
# Find by LinkedIn URL (3-tier fallback)
contact_id = await client.find_contact_by_linkedin_url(
    linkedin_url="https://linkedin.com/in/johndoe",
    person_name="John Doe",
)
```
### Accounts
```python
# Search
accounts = await client.search_accounts(
    limit=100,
    q_organization_name="Apollo",
)
# Get by ID
account = await client.get_account("account_id")
```
### Deals / Opportunities
```python
# Search
deals = await client.search_deals(
    limit=100,
    opportunity_stage_ids=["stage_id"],
)
# Get by ID
deal = await client.get_deal("deal_id")
```
### Pipelines & Stages
```python
# List all pipelines
pipelines = await client.list_pipelines()
# Get pipeline by ID
pipeline = await client.get_pipeline("pipeline_id")
# List stages for pipeline
stages = await client.list_pipeline_stages("pipeline_id")
```
### Enrichment
```python
# Enrich organization (35M+ companies, free)
company = await client.enrich_organization("apollo.io")
# Enrich person (210M+ people, costs 1 credit)
person = await client.enrich_person("john@example.com")
# Search people database (free)
results = await client.search_people(q_keywords="CEO Apollo")
```
### Notes
```python
# Search notes
notes = await client.search_notes(
    contact_ids=["contact_id"],
    limit=50,
)
# Create note
result = await client.create_note(
    content="Meeting notes from Q1 planning",
    contact_ids=["contact_id"],
    account_ids=["account_id"],
)
```
### Activities
```python
# Search calls, tasks, emails
calls = await client.search_calls(limit=100)
tasks = await client.search_tasks(limit=100)
emails = await client.search_emails(limit=100)
# Create task
result = await client.create_task(
    contact_ids=["contact_id"],
    note="Follow up on proposal",
    priority="high",
)
# List contact activities
calls = await client.list_contact_calls("contact_id")
tasks = await client.list_contact_tasks("contact_id")
```
### News & Jobs
```python
# Get news for account
news = await client.list_account_news("account_id")
# Get job postings for account
jobs = await client.list_account_jobs("account_id")
```
## Models
All responses are typed Pydantic models:
- **Contact** - All contact fields (id, name, email, title, linkedin_url, phone_numbers, etc.)
- **Account** - All account fields (id, name, domain, employees, revenue, industries, tech stack, etc.)
- **Deal** - All deal fields (id, name, amount, stage, close_date, is_won, etc.)
- **Pipeline** - Pipeline info (id, title, is_default, sync_enabled, etc.)
- **Stage** - Stage info (id, name, probability, is_won, is_closed, etc.)
- **Note** - Notes with Markdown content (converted from ProseMirror JSON)
- **Call**, **Task**, **Email** - Activity records
- **EmploymentHistory** - Work history entries
- **PaginatedResponse[T]** - Generic pagination wrapper
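The generic wrapper idea can be illustrated with a toy stand-in; the real client uses Pydantic models and its field names may differ from those assumed here:

```python
from dataclasses import dataclass
from typing import Generic, List, TypeVar

T = TypeVar("T")


@dataclass
class Page(Generic[T]):
    """Toy stand-in for a generic pagination wrapper (field names assumed)."""

    items: List[T]
    page: int
    per_page: int
    total_entries: int

    @property
    def total_pages(self) -> int:
        return -(-self.total_entries // self.per_page)  # ceiling division

    @property
    def has_next(self) -> bool:
        return self.page < self.total_pages
```

Parameterizing the wrapper over `T` is what lets one pagination type serve contacts, accounts, and deals while keeping each `items` list fully typed.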
## Error Handling
```python
from qodev_apollo_api import ApolloClient, AuthenticationError, RateLimitError, APIError
try:
    async with ApolloClient() as client:
        contacts = await client.search_contacts(limit=10)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limit exceeded. Retry after {e.retry_after} seconds")
except APIError as e:
    print(f"API error: {e} (status: {e.status_code})")
```
## Development
```bash
# Install dependencies
make install
# Install pre-commit hooks
make install-hooks
# Run all checks (lint, format, typecheck, typos)
make check
# Run tests with coverage
make test
# Run tests without coverage
make test-fast
# Lint and format code
make lint
make format
# Type checking
make typecheck
# Spell check
make typos
# Clean generated files
make clean
```
## License
MIT
| text/markdown | null | Jan Scheffler <jan.scheffler@qodev.ai> | null | null | MIT | api, apollo, apollo.io, async, client, crm, prospecting, sales | [
"Development Status :: 4 - Beta",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"pyright>=1.1.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/qodevai/apollo-api",
"Repository, https://github.com/qodevai/apollo-api",
"Issues, https://github.com/qodevai/apollo-api/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T15:04:31.799945 | qodev_apollo_api-0.1.1.tar.gz | 72,609 | 0c/76/10cb344b52d9abedfac7c9b2361de0e25839362ee121b49b442b030e3ae1/qodev_apollo_api-0.1.1.tar.gz | source | sdist | null | false | 4f353540d9601fd936ae249a962b6d37 | 9683c39f9031afcbd7261c967131b119cf15ef7717f43db9cd07d6cda3911560 | 0c7610cb344b52d9abedfac7c9b2361de0e25839362ee121b49b442b030e3ae1 | null | [
"LICENSE"
] | 182 |
2.4 | SPRpy | 1.1.5 | SPRpy is an open-source project developing GUI data analysis tools for multi-parameter surface plasmon resonance measurements. | # SPRpy: GUI analysis methods for MP-SPR measurements
This program can be used to perform data analysis on multi-parameter surface plasmon resonance (MP-SPR) measurements
acquired using [Bionavis SPR instruments](https://www.bionavis.com/en/technology/why-choose-mpspr/). Specifically, fresnel modelling and exclusion height analysis of full angular scans is currently available.
Apart from launching SPRpy and selecting files in the file dialog windows, SPRpy is designed to be a fully interactive graphical user interface (GUI) running inside a web browser, thus requiring no programming knowledge to operate (of course, the core functions of the code may be adapted for your own programming scripts).
Fresnel calculations are based on MATLAB implementations of the [transfer-matrix-method](https://en.wikipedia.org/wiki/Transfer-matrix_method_(optics))
by [Andreas Dahlin](https://www.adahlin.com/matlab-programs). The GUI elements are built using [Plotly Dash](https://dash.plotly.com/).
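The transfer-matrix calculation at the heart of such fresnel modelling is compact. The sketch below (illustrative only, not SPRpy's implementation) computes the s-polarized reflectance of a layer stack with the characteristic-matrix formulation:

```python
import cmath


def tmm_reflectance_s(n_list, d_list, wavelength, theta0):
    """s-polarized reflectance |r|^2 of a layer stack via the characteristic
    (transfer) matrix method.

    n_list: refractive indices [incident medium, layer1, ..., substrate]
            (complex values allowed for absorbing layers, e.g. metals)
    d_list: thicknesses of the inner layers only, same units as wavelength
    theta0: angle of incidence in the first medium, in radians
    """
    sin_t = n_list[0] * cmath.sin(theta0)  # conserved transverse component
    cos_t = [cmath.sqrt(1 - (sin_t / n) ** 2) for n in n_list]
    # Characteristic matrix of the stack (identity if there are no layers).
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for n, c, d in zip(n_list[1:-1], cos_t[1:-1], d_list):
        q = n * c                                  # s-pol optical admittance
        delta = 2 * cmath.pi * q * d / wavelength  # phase thickness of layer
        a11, a12 = cmath.cos(delta), 1j * cmath.sin(delta) / q
        a21, a22 = 1j * q * cmath.sin(delta), cmath.cos(delta)
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    q0, qs = n_list[0] * cos_t[0], n_list[-1] * cos_t[-1]
    r = ((q0 * m11 + q0 * qs * m12 - m21 - qs * m22)
         / (q0 * m11 + q0 * qs * m12 + m21 + qs * m22))
    return abs(r) ** 2
```

With no inner layers this reduces to the plain Fresnel coefficient, and beyond the critical angle the reflectance goes to 1, which is the total-internal-reflection regime that SPR angular scans are measured around.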
To cite this work, navigate to the "Cite this repository" menu on the [SPRpy github repository](https://github.com/John-trailinghyphon/SPRpy), or click this Zenodo badge:
[](https://doi.org/10.5281/zenodo.13479400)
## Installation
SPRpy is written in Python 3.11 and also works with 3.12. It is not yet compatible with Python 3.13 and later (an issue with tkinter in Python 3.13.0, expected to be fixed in 3.13.1), and has not been tested on earlier versions of Python. Python 3 releases can be found [here](https://www.python.org/downloads/). NOTE: It is recommended to check the box during installation that adds python.exe to the PATH environment variable (or see manual instructions [here](https://datatofish.com/add-python-to-windows-path/)) to allow running SPRpy scripts by double-clicking them in your file explorer. Alternatively, SPRpy can be set up as any python project in your favourite IDE.
SPRpy is available on [PyPI](https://pypi.org/project/SPRpy/) and can be installed using pip (after installing python):
Windows (in cmd or Powershell):
```python -m pip install SPRpy```
(the source code is also available to clone from the [SPRpy github repository](https://github.com/John-trailinghyphon/SPRpy))
Linux/Mac (always remove "python -m"):
```pip install SPRpy```
To add a shortcut to the SPRpy folder to the desktop after installation, run the following command in the command prompt (windows only):
```SPRpy-desktop-shortcut```
(If the shortcut still does not appear, on Windows you can usually find the installation folder by pasting the following into the explorer window, making sure the python version, e.g. \Python311\ or \Python312\, matches the one you installed: ```%USERPROFILE%\AppData\Local\Programs\Python\Python311\Lib\site-packages\SPRpy```)
To update to a newer version of SPRpy (overwriting the previously installed version), run the following command:
```python -m pip --no-cache-dir install --upgrade SPRpy```
To install an additional copy of a specific version of SPRpy, run the following command:
```python -m pip install --target ADD_FOLDER_PATH --ignore-installed SPRpy==X.Y.Z```
(change ADD_FOLDER_PATH to desired folder and change X.Y.Z to desired version). This may sometimes be necessary to properly open older SPRpy sessions no longer compatible with the latest release.
Note that SPRpy is designed to leverage parallel computing where applicable, thus performance for exclusion height calculations will be heavily influenced by the number of logical processors of your CPU and their individual clock speeds. While running exclusion height modelling calculations, one can typically expect to see 100 % CPU usage on a 12th generation Intel i7 with 10 logical processors with a runtime of a few minutes. Low-end laptops with weaker CPUs can experience significantly longer computation times in comparison.
## Running SPRpy
A text configuration file, "config.toml", contains some default initial settings that can be tuned. In particular, make sure that the single 'instrument_TIR_sensitivity' value and the several values below [instrument_SPR_sensitivity] match your specific instrument (as downloaded, they match measured values of the SPR instrument in A. Dahlin's lab; other users should switch to the highlighted defaults from BioNavis or other determined values).
Before running SPRpy, you need to convert your MP-SPR measurement files (.spr2) to a specific .csv format. This can be achieved by running two separate scripts (simply double-click):
1) "SPRpy_X_cal.py", a script which generally only needs to be run once to convert the stepper motor values to angles for a particular Bionavis instrument (depending on its setup). Requires a full range scan at highest angular resolution (slow scan), along with its .spr2 file and corresponding exported .dto files from the Bionavis Viewer for each instrument wavelength. The script produces a .csv file that is used by the second script in 2). You can update the default .csv that is loaded in config.toml at the entry 'default_poly_file'.
2) "SPRpy_spr2_to_csv.py", a script that is used to convert measurements (.spr2) to a specific .csv format used by SPRpy. You will be prompted to select a .spr2 measurement file to convert (and an X_cal.csv file, unless the script finds the default). One .csv file will be created for each wavelength in the same folder as the original file, with the filename of the original plus channel and wavelength information (NOTE! The appended part of the file name must not be changed; it is used by SPRpy). Also note that the runtime is heavily increased for lower scan speeds (increased angular resolution).
To run SPRpy, double-click "SPRpy.py" from the SPRpy folder or run it inside a python interpreter.
SPRpy will first prompt you to either load a previous session or start a new one. All sessions are initially created and stored in a subfolder such as ...\SPRpy\SPRpy sessions\SESSION EXAMPLE FOLDER. By default, each new session folder is given a name containing the date and time of its creation (thus making it unique), but it can be renamed to whatever you want inside the GUI while SPRpy is running. One can also rename or move the session folder using the file explorer when SPRpy is **not** running; however, its content structure and .pickle file names must not be changed!
If you choose to load a previous session, you will be prompted to select a previous session.pickle file from a session folder. If you choose to start a new session, you will instead be prompted to select an initial SPRpy-converted .csv measurement data file to load. Additional measurement files can be added later in the GUI workflow.
NOTE! If you open the converted .csv files in a 3rd-party program (like Excel), it is recommended **not** to save them with the default .csv option, as this may break the formatting (if this happens, rerun the SPRpy_spr2_to_csv.py conversion script for that measurement).
Next, the GUI will be initiated at a local IP address (by default http://127.0.0.1:8050/). Simply open this address in your browser to access the GUI. It is recommended to add this address as a bookmark for easy access. If you wish to run multiple instances of SPRpy simultaneously, you can increment the host number in the config.toml file (e.g. http://127.0.0.2:8050/) and enter the new IP address in your browser window before running SPRpy again. NOTE: It is a bad idea to open the same session file in two simultaneously running instances of SPRpy...
## Usage
### Workflow with sessions, sensors and analysis instances
SPRpy is designed to perform the data analysis of multiple measurements within a working *session*. A session starts upon running SPRpy, with the optional choice to reload a previous session. Each session instance contains all the needed parameters to perform the analysis, along with any obtained results, and the session is automatically saved every time a change occurs. Thus, apart from when calculations are still running, SPRpy can be aborted at will and later be relaunched where a previous session may be reloaded to pick up where you left off.
Within each session, any number of *sensor* instances can be added. A sensor instance contains information corresponding to the type of sensor chip that was used in a SPR measurement via the parameters describing its optical layer structure (layer thicknesses and wavelength dependent refractive index values). By default, an initial gold sensor will be instanced when starting a new SPRpy session. Additional sensor instances may be quick-added as gold, SiO2, platinum or palladium, with refractive index values matching the wavelength of the currently loaded measurement file. However, any custom layer structure may be built by interacting with the layers of the sensor table in the GUI. The grey button can be used to bring up a temporary table of refractive index values (this can even be customized in the config.toml file for your own materials). Refractive index values for various materials can be found at [refractiveindex.info](https://refractiveindex.info/).
Separate from the sensor instances are *analysis* instances for each type of analysis method (currently fresnel modelling (FM) and exclusion height determination (EH)). These keep track of additional model-specific parameters and the results. When a new analysis instance is added, it draws its data and optical parameters from the currently loaded measurement file and the currently selected sensor. However, when selecting previous analysis instances with mismatching data paths to the currently loaded measurement file (indicated by a green data trace instead of blue), rerunning calculations will pull data from the initial path (this can fail if the folders or files along the path have been moved or renamed since).
Sharing sensor or analysis instances between different sessions via their .pickle files is currently not supported.
### Session name and session log
A log field is available for writing comments that are saved to the session. The session can also be renamed here.
### File and sensor controls
Next you can find the file and sensor controls. The file controls are used to load new data files and define optical parameters for the sensor that is used. NOTE! In general, all calculations and changes that are executed will pull the data from the currently loaded measurement file and parameter values from the currently loaded sensor in the sensor table (except during batch processing).
When adding new sensors, the refractive index values are updated according to the wavelength parsed from the filename of the current measurement file, and the refractive index of the bulk medium is always calculated from the TIR angle of the currently loaded measurement data. Copying a sensor will update the wavelength and channel name based on the currently loaded measurement file, but will **NOT** update the refractive index values if a measurement at a different wavelength was loaded (i.e. only copy sensors with the same wavelength).
New layers can be added to the sensor table with the "Add layer" button, and layers can also be removed using the crosses next to the label column. The sensor table values can be edited directly; however, note that "Save edited values" must be clicked to commit any user-edited values in the sensor table before they take effect (including changing the number of layers). Clicking outside the table cells while editing a value will abort the editing. Instead, select a neighboring cell when finished typing, by pressing enter/tab/arrow keys or clicking, to make the table accept what was typed. For fresnel fitting, the main variable to be fitted is selected using the sensor table by highlighting a value (red tint) and clicking the green "Select variable to fit" button (no need to click the red "Save edited values" button afterwards). The sensor may also be renamed by clicking "Rename sensor"; however, it will always keep a unique identifier SX, where X is the number of sensors created in this session. It is recommended to include a short unique identifier based on the type of layer that can easily be associated with a measurement file (like Au1, Glass2, PEG3 etc.).
### Response quantification
The first tab among the analysis options shows two figures for the loaded measurement file. The left figure shows the angular trace for the last scan of the measurement file by default. The currently presented trace of this figure corresponds to the data used for fresnel modelling. Below it is a button that allows loading additional angular scan traces into the figure from other .csv files. Additionally, a second button can be used to plot theoretical fresnel model traces based on the values present in the currently selected sensor data table.
The right plot shows the full measurement sensorgram, including the SPR, TIR and bulk corrected angle traces. When hovering the cursor over particular data points in the sensorgram, the left figure updates with the corresponding angular trace (unless the "Stop mouse hover updates" toggle is activated). This may be used to select exactly which angular trace to use for fresnel modelling from the loaded measurement. Additionally, clicking a datapoint in the sensorgram will create a new offset in the Y-axis at this timepoint. Clicking the legend of any of the data traces will hide it (this goes for all figures in general). The bulk corrected trace is calculated according to the formula presented at the bottom of the page, where each parameter may be adjusted (defaults can be changed in config.toml). The toggle "Show TIR/SPR fitting parameters" can be clicked to show two additional plot windows that are updated when hovering over points in the sensorgram figure, along with additional parameters for tuning the TIR and SPR angle fitting. NOTE! Tuning these fitting parameters will be applied and saved to the current session as a whole (i.e. also affecting further modelling runs). The default loaded TIR and SPR fitting parameters for newly created sessions can be changed in the file "config.toml".
Further reading for the bulk correction method:
Accurate Correction of the “Bulk Response” in Surface Plasmon Resonance Sensing Provides New Insights on Interactions Involving Lysozyme and Poly(ethylene glycol)
Justas Svirelis, John Andersson, Anna Stradner, and Andreas Dahlin
ACS Sensors 2022 7 (4), 1175-1182
https://doi.org/10.1021/acssensors.2c00273
### Fresnel modelling
The second analysis tab may be used for performing fresnel model fits of angular scan measurements against any of the variables in the sensor data table. The SPRpy implementation of fresnel modelling operates under the following assumptions:
- Each layer is assumed to be homogeneous in X, Y and Z (good assumption for non-particulates and non-swollen films)
- The refractive index of each material layer is assumed to be equivalent to its dry bulk counterpart (good assumption in air)
- The incoming light is monochromatic
- The incoming light has no beam divergence
- The incoming light is fully p-polarized (can be adjusted with the p-factor setting)
Additionally, larger deviations between observed and theoretically predicted reflectivity are assumed to be due to non-ideal optical conditions of the instrument and/or sensor glass. Perfectly compensating for these discrepancies in a physically accurate manner by tuning all available parameters within reasonable tolerances can be challenging and time-consuming. Fortunately, the reflectivity on its own generally carries little to no information about the thickness and refractive index of the sensor surface adlayer(s) of interest. Thus, in SPRpy, fitting is focused around the SPR minimum, and a constant reflectivity offset of the fresnel coefficients is by default fitted together with the chosen layer variable. Additionally, small corrections to the prism extinction coefficient, *k*, can be simultaneously fitted (also by default) to compensate for peak broadening around the SPR angle minimum with negligible influence on the adlayer properties (works for adlayer materials with k ~ 0).
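For intuition, the angular reflectivity trace being fitted can be illustrated with a minimal three-layer fresnel model (prism / metal film / bulk medium). This is not SPRpy's implementation; it is a sketch of the textbook p-polarized thin-film reflectance, with assumed optical constants (a BK7-like prism n = 1.52, a gold-like film n ≈ 0.2 + 3.5i near 670 nm, and water n = 1.33):

```python
import cmath
import math

def cos_theta(n_inc, theta_inc, n):
    # Snell's law, allowing complex refractive indices (absorbing layers)
    s = n_inc * math.sin(theta_inc) / n
    return cmath.sqrt(1 - s * s)

def r_p(n_i, cos_i, n_j, cos_j):
    # p-polarized Fresnel reflection coefficient at a single interface
    return (n_j * cos_i - n_i * cos_j) / (n_j * cos_i + n_i * cos_j)

def reflectance(theta_deg, wl_nm, n1, n2, d2_nm, n3):
    # prism (n1) / thin film (n2, thickness d2_nm) / bulk medium (n3)
    th = math.radians(theta_deg)
    c1, c2, c3 = math.cos(th), cos_theta(n1, th, n2), cos_theta(n1, th, n3)
    r12, r23 = r_p(n1, c1, n2, c2), r_p(n2, c2, n3, c3)
    beta = 2 * math.pi * n2 * c2 * d2_nm / wl_nm  # film phase thickness
    phase = cmath.exp(2j * beta)
    return abs((r12 + r23 * phase) / (1 + r12 * r23 * phase)) ** 2

# Scan the angular trace and locate the SPR minimum
angles = [60 + i * 0.05 for i in range(400)]  # 60-80 degrees
trace = [reflectance(a, 670, 1.52, 0.2 + 3.5j, 50, 1.33) for a in angles]
spr_angle = angles[min(range(len(trace)), key=trace.__getitem__)]
```

A fitted constant reflectivity offset, as described above, would simply add a constant to `trace` before comparing with measured data.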
#### Experimental procedure
1) Measure the angular reflectivity trace (including the TIR region) of the cleaned sensor (or use a previous representative one, depending on required accuracy).
2) Add the layer of interest or perform desired surface treatment and measure the sample.
#### Modelling procedure
1) Load the desired measurement file and create a new sensor instance using the file and sensor controls. Alternatively, if you remeasure the same sensor after multiple treatments or layers, a previous sensor can be selected and used as is or after adding a new layer to it.
2) Choose which parameter should be fitted. For clean metal sensors, choose the extinction coefficient, _k_, of the metal layer (the plasmonic metal layer thickness should usually not be altered). The thickness of the Cr layer may also need to be manually tuned. For all other types of layers, select the surface layer thickness, *n* or *k*, as variable to be fitted.
3) In the fresnel modelling tab, click "Add new fresnel analysis".
4) Adjust the initial guess and lower and upper bounds if needed. By default, the current value of the fitted variable is presented as an initial guess, with lower and upper bounds calculated as initial guess / 4 and initial guess * 2, respectively.
5) Then choose a suitable angle range to fit using the sliders, unless the initial automatic guess is already satisfactory. A suitable range typically covers roughly 20-60 % of the lower part of the SPR minimum. In the config.toml file one can tune how many points the automatic guess should include above and below the minimum value of the SPR dip.
6) It is recommended (at least initially) to simultaneously fit an intensity offset and the prism *k* value, but this may also be disabled using the two checkboxes.
7) Finally, press "Run calculations" and wait for the result to appear at the bottom of the settings field and for the variables to update in the sensor table.
#### Batch analysis
For modelling of several replicates with the same sensor layer structure and materials (and same wavelength, for now), the batch analysis button is both convenient and time saving. It requires an example sensor and example analysis that has already been run and which parameters will be copied over. There are then two main options to choose from: 1) the example sensor is used directly as a template for new sensor instances for each replicate, or 2) individual sensor backgrounds are selected as templates for each selected measurement file and a new layer is added according to the surface layer of the analysis example. For option 2), one may also choose between adding the new layer directly to the background sensor instance, or instead making a new copy for each.
### Exclusion height determination
In the exclusion height determination tab the non-interacting height probe method can be used to determine the exclusion height of a probing particle for a swollen layer in solution. The method is based on the following peer-reviewed papers:
Schoch, R. L. and Lim, R. Y. H. (2013). Non-Interacting Molecules as Innate Structural Probes in Surface Plasmon Resonance.
_Langmuir_, _29(12)_, _4068–4076_.
https://doi.org/10.1021/la3049289
Emilsson, G., Schoch, R. L., Oertle, P., Xiong, K., Lim, R. Y. H., and Dahlin, A. B. (2017).
Surface plasmon resonance methodology for monitoring polymerization kinetics and morphology changes of brushes—evaluated with poly(N-isopropylacrylamide).
_Applied Surface Science_, _396_, _384–392_.
https://doi.org/10.1016/j.apsusc.2016.10.165
The non-interacting probe method has the following requirements:
* The probing particle does not interact with the layer of interest (it should be excluded from entering into it and not stick to it)
* The layer is sufficiently thin compared to the sensor decay length such that it is able to give a large enough response to the injected probe (use longer wavelengths for thicker films).
* Swollen material layers with non-zero *k* values are seemingly unsuitable for the non-interacting probe method
#### Experimental procedure
Measure the SPR/TIR response from a surface containing your layer of interest while making 2-3 repeated injections of the probing particles for 5-10 minutes each. High contrast in the response with the probe compared to running buffer is required for accurate determination of the exclusion height. For protein or polymer probes, 10-20 g/L is typically used, depending on the expected layer thickness, to get sufficient contrast. Verify that the baseline response returns to the same level after rinsing out each probe injection (normally within roughly 10-20 millidegrees).
#### Modelling procedure
1) Response quantification & Fresnel model tabs: It is necessary to first fresnel model a liquid scan from the same height probing measurement file. Use a scan immediately before the first probe injection starts for the modelling. The correct scan can be selected by highlighting this part of the sensorgram in the response quantification tab with the hover lock unselected (be careful to move your cursor up and down without accidentally selecting the wrong part of the trace). Add a sensor layer corresponding to your swollen layer of interest, with the *n* value of its dry bulk state (the thickness and *n* value don't actually matter, but make sure *k* is 0; a non-zero *k* for swollen layers will likely not work with this method). Make sure to include the offset and prism *k* correction fits here, as this will improve the result from the exclusion height algorithm. Note that the obtained fitted result will not be physically accurate, as it would correspond to a 0 % hydrated state (if that is what you set *n* to be), so its value should be ignored for now.
2) Exclusion height determination tab: Click add new exclusion height analysis. A prompt will pop up asking for the required background fresnel object before proceeding.
3) Check that the height bounds are reasonable for your expected swollen layer. By default, they are calculated according to the 0 % hydrated state from the fresnel background and 6 times this value.
4) Change to "Choose injection points" under the SPR sensorgram and click the SPR data trace before and after each probe injection (so 2 points per injection). These points will be used to plot the SPR angle vs TIR angle traces during probe injections, which may help to verify if the probe is truly non-interacting (linear relationship with low degree of hysteresis means non-interaction).
5) Switch to "Choose buffer points". At this point it may help to click the legend of the TIR angle trace and injection point markers to hide them. A stable range of scans without probe present just before and after each injection should be selected, i.e. a total of 4 selected points per injection.
6) Next, switch to "Choose probe points". Choose a suitable stable range on top of the probe injection, i.e. a total of 2 points per injection.
7) Once all points are selected, click "Initialize model". For each range of previously selected buffer and probe points all scans within it will be averaged into a single average scan. Scroll down to verify that the SPR vs TIR angle and averaged buffer and probe plots look OK for each injection step. In some cases errors may appear due to how the points were selected, then try clearing the selected points and make a new attempt (try only clicking one of the traces and avoid any markers).
8) Finally, click "Start calculations". The exclusion height will then be calculated for each "step" in the response: buffer -> injected probe (step up) and probe -> buffer rinse (step down). Thus, 2 exclusion heights are calculated for each probe injection, and all of them are also averaged into a single value with a standard deviation. Once the calculations are finished, new plots of possible fitted pairs of thickness and refractive index for both the buffer and probe averaged scans are presented for each step. The exclusion height is found where these curves graphically intersect (this is detected automatically, but it is good to verify that it worked if the values seem odd). If the exclusion height values differ significantly between different steps, there could be problems with the selected points (try again with a new set of points). Problems may also occur if the probe interacts with something on the sample over time, partly adsorbs to the surface, or needs a longer time to rinse properly from the flow cell (shift the buffer range to further after probe rinsing). Sometimes no intersection occurs for a data set no matter which points are selected; then one has to retry the experiment, and if this still doesn't work, deeper investigation is needed; alternatively, the swollen layer or probe may not be suitable for the non-interacting probe method. Note that the calculations may take several minutes. If they take far too long, the "Resolution" setting can be lowered to gain some speed (at a loss of accuracy). While generally not needed, the fitting may be further improved across all points of the thickness/refractive index pairs by checking the two "Refitting" options (again at the expense of slightly longer computation times). Remember to press "Initialize model" again before rerunning calculations if any settings have been changed since.
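The graphical-intersection step can be pictured as follows. This is not SPRpy's code; it is a sketch assuming two fitted refractive-index-versus-height curves (one from the averaged buffer scan, one from the averaged probe scan) sampled on a common height grid, where the exclusion height is the point at which their difference changes sign:

```python
def find_intersection(heights, n_buffer, n_probe):
    """Locate where the two fitted curves cross (the exclusion height)."""
    diff = [b - p for b, p in zip(n_buffer, n_probe)]
    for i in range(len(diff) - 1):
        if diff[i] == 0:
            return heights[i]
        if diff[i] * diff[i + 1] < 0:
            # linear interpolation between the two bracketing grid points
            t = diff[i] / (diff[i] - diff[i + 1])
            return heights[i] + t * (heights[i + 1] - heights[i])
    return None  # no intersection: reselect points or retry the experiment

# Toy curves crossing at a height of 50 (arbitrary units)
h = [0, 25, 50, 75, 100]
buffer_curve = [1.40, 1.38, 1.36, 1.34, 1.32]
probe_curve = [1.32, 1.34, 1.36, 1.38, 1.40]
exclusion_height = find_intersection(h, buffer_curve, probe_curve)  # 50
```

A `None` return corresponds to the no-intersection case discussed above, where the points should be reselected or the experiment repeated.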
### Result summary
The results from fresnel modelling and exclusion height determination are presented in two tables in the left-most column and can be exported into a .csv format using the button at the top.
The right barplot groups and plots fresnel model results based on which sensor object was used as background and which of its layers were fitted.
NOTE: For session files < v1.0.0 only the label for the latest fitted layer is available, but the value will correctly represent earlier layers (you can verify with the analysis column in the table to the left).
### The dual-wavelength method (Planned feature, WIP)
The dual-wavelength method can be used to determine the extension of swollen layers with unknown refractive index,
based on the following requirements:
* The refractive index increment for the layer material for each wavelength is known.
* The thickness of the measured layer is _much smaller_ than the decay length for each wavelength.
* The sensitivity factors for each wavelength of the instrument are known (easily determined).
It is based on the peer-reviewed paper by:
Rupert, D. L. M., et al. (2016). Dual-Wavelength Surface Plasmon Resonance for Determining the Size and Concentration of Sub-Populations of Extracellular Vesicles.
_Analytical Chemistry_, _88(20)_, 9980–9988.
https://pubs.acs.org/doi/full/10.1021/acs.analchem.6b01860
| text/markdown | null | John Andersson <anjohn@chalmers.se> | null | null | null | Dash, Data Analysis, GUI, Python, SPR, Surface Plasmon Resonance | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Chemistry",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"bottleneck>=1.3.7",
"dash-bootstrap-components==1.7.1",
"dash==2.18.2",
"kaleido>=0.1.0.post1",
"numpy>=1.26.2",
"pandas==2.3.1",
"plotly>=5.16.1",
"pywin32; platform_system == \"Windows\"",
"scipy>=1.11.2"
] | [] | [] | [] | [
"Documentation, https://github.com/John-trailinghyphon/SPRpy#readme",
"ReleaseNotes, https://github.com/John-trailinghyphon/SPRpy/blob/master/CHANGELOG.md",
"Issues, https://github.com/John-trailinghyphon/SPRpy/issues",
"Source, https://github.com/John-trailinghyphon/SPRpy"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T15:04:26.970448 | sprpy-1.1.5.tar.gz | 35,323,772 | 4a/ff/33caf7d21294545b30a706274b1519ba5c531a47a9db629c78aae5c46f88/sprpy-1.1.5.tar.gz | source | sdist | null | false | 6be43db21a1229ec70c11825d984c82a | ff791f82257ecbf2faf45fa927fffd5e0c6d78ff1a4f058a9138cd3282e7730d | 4aff33caf7d21294545b30a706274b1519ba5c531a47a9db629c78aae5c46f88 | MIT | [
"LICENSE"
] | 0 |
2.4 | aize | 0.1.0 | aize — lightweight NLP analysis toolkit (Zipf, Heap's law, TF-IDF, sentiment, readability & more) | # aize · NLP Analysis Toolkit
[](https://pypi.org/project/aize/)
[](https://pypi.org/project/aize/)
[](https://opensource.org/licenses/MIT)
> A lightweight, pip-installable Python library for deep text analysis — covering everything from Zipf's law to sentiment, readability, TF-IDF, and more. Comes with a Streamlit dashboard and a FastAPI backend out of the box.
---
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Module Reference](#module-reference)
- [compute_stats](#compute_stats)
- [analyze_groupwords](#analyze_groupwords)
- [analyze_zipf](#analyze_zipf)
- [analyze_heaps](#analyze_heaps)
- [calculate_density](#calculate_density)
- [compare_vocab](#compare_vocab)
- [compute_tfidf](#compute_tfidf)
- [compute_ngrams](#compute_ngrams)
- [analyze_sentiment](#analyze_sentiment)
- [compute_readability](#compute_readability)
- [analyze_pos](#analyze_pos)
- [generate_wordcloud](#generate_wordcloud)
- [Streamlit Dashboard](#streamlit-dashboard)
- [FastAPI Backend](#fastapi-backend)
- [Dependencies](#dependencies)
- [Project Structure](#project-structure)
- [License](#license)
---
## Features
| Category | Capability |
|---|---|
| 📊 **Statistics** | Word count, unique words, avg word length, sentence count |
| 📏 **Word Grouping** | Frequency distribution grouped by word length |
| 📉 **Zipf's Law** | Rank-frequency distribution, hapax & dis legomena percentages |
| 📈 **Heap's Law** | Vocabulary growth curve as corpus size increases |
| 🚫 **Stopwords** | Stopword density analysis |
| 🔤 **Vocabulary** | Side-by-side vocabulary comparison across multiple texts |
| 🔍 **TF-IDF** | Top keyword extraction per document in a corpus |
| 🔗 **N-grams** | Most common bigrams and trigrams |
| 💬 **Sentiment** | VADER-based positive / negative / neutral / compound scoring |
| 📖 **Readability** | Flesch Reading Ease & Flesch-Kincaid Grade Level |
| 🏷️ **POS Tagging** | Part-of-speech frequency breakdown |
| ☁️ **Word Cloud** | Generates word cloud images from any text |
| 🖥️ **Dashboard** | Interactive Streamlit UI for all analyses |
| ⚡ **API** | FastAPI REST backend for programmatic access |
---
## Installation
### Core library
```bash
pip install aize
```
### With the Streamlit dashboard
```bash
pip install aize[dashboard]
```
### With the FastAPI backend
```bash
pip install aize[api]
```
### Everything (dashboard + API)
```bash
pip install aize[all]
```
### From source (development)
```bash
git clone https://github.com/eokoaze/aize.git
cd aize
pip install -e .[all]
```
> **Python 3.9+** is required.
---
## Quick Start
```python
import aize
text = """
Natural language processing is a subfield of linguistics and artificial intelligence.
It is primarily concerned with giving computers the ability to understand text and speech.
"""
# Basic stats
print(aize.compute_stats(text))
# Sentiment
print(aize.analyze_sentiment(text))
# Readability
print(aize.compute_readability(text))
# Zipf's Law
print(aize.analyze_zipf(text))
```
---
## Module Reference
### `compute_stats`
```python
from aize import compute_stats
result = compute_stats(text)
```
Returns basic corpus statistics.
| Key | Type | Description |
|---|---|---|
| `word_count` | `int` | Total number of words |
| `unique_words` | `int` | Number of distinct words |
| `avg_word_length` | `float` | Average characters per word |
| `sentence_count` | `int` | Number of sentences |
---
### `analyze_groupwords`
```python
from aize import analyze_groupwords
result = analyze_groupwords(text)
```
Groups words by their character length and returns frequency counts per length bucket.
---
### `analyze_zipf`
```python
from aize import analyze_zipf
result = analyze_zipf(text)
```
Computes Zipf's Law statistics over the text.
| Key | Type | Description |
|---|---|---|
| `frequency` | `dict` | `{word: count}` sorted most → least frequent |
| `rank_freq` | `list[tuple]` | `[(rank, count)]` for rank-frequency plotting |
| `hapax_pct` | `float` | % of words appearing exactly once |
| `dis_pct` | `float` | % of words appearing exactly twice |
| `freq_gt2_pct` | `float` | % of words appearing more than twice |
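For intuition, these quantities can be computed from scratch with the standard library. This is a sketch of the idea, not aize's source; in particular, it assumes the percentages are taken over word types rather than tokens:

```python
from collections import Counter

def zipf_summary(text):
    # Count word types, then rank them from most to least frequent
    counts = Counter(text.lower().split())
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    n_types = len(counts)
    return {
        "rank_freq": [(rank, c) for rank, (_, c) in enumerate(ranked, start=1)],
        "hapax_pct": 100 * sum(c == 1 for c in counts.values()) / n_types,
        "dis_pct": 100 * sum(c == 2 for c in counts.values()) / n_types,
    }

zipf_summary("a a a b b c")
# rank_freq: [(1, 3), (2, 2), (3, 1)]; hapax_pct and dis_pct both ≈ 33.3
```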
---
### `analyze_heaps`
```python
from aize import analyze_heaps
result = analyze_heaps(text)
```
Returns a vocabulary growth curve (Heap's Law). Useful for visualising how the vocabulary expands as more text is read.
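The underlying idea is simple to sketch (the exact return shape of `analyze_heaps` is not documented here, so treat this as illustrative): walk through the tokens and record the running number of distinct words.

```python
def heaps_curve(text):
    seen, curve = set(), []
    for word in text.lower().split():
        seen.add(word)
        curve.append(len(seen))  # vocabulary size after each token
    return curve

heaps_curve("the cat sat on the mat the cat")
# → [1, 2, 3, 4, 4, 5, 5, 5]
```

Plateaus in the curve mark stretches where no new vocabulary appears.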
---
### `calculate_density`
```python
from aize import calculate_density
result = calculate_density(text)
```
Calculates the proportion of stopwords in the text, returning a stopword density percentage and associated word lists.
---
### `compare_vocab`
```python
from aize import compare_vocab
result = compare_vocab({"doc1": text1, "doc2": text2})
```
Compares vocabulary across multiple documents — unique words per document, shared vocabulary, and overlap statistics.
---
### `compute_tfidf`
```python
from aize import compute_tfidf
result = compute_tfidf(
texts=["text of doc1...", "text of doc2..."],
labels=["doc1", "doc2"],
top_n=15
)
# Returns: {"doc1": [("word", score), ...], "doc2": [...]}
```
Extracts the top `n` TF-IDF keywords for each document in a corpus. Uses scikit-learn under the hood with English stopword filtering.
---
### `compute_ngrams`
```python
from aize import compute_ngrams
bigrams = compute_ngrams(text, n=2, top_n=20)
trigrams = compute_ngrams(text, n=3, top_n=20)
# Returns: [("phrase here", count), ...]
```
Returns the most frequent n-grams (bigrams, trigrams, etc.) from the text.
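A from-scratch equivalent (illustrative, not aize's implementation) is a sliding window over the token list:

```python
from collections import Counter

def simple_ngrams(text, n=2, top_n=5):
    words = text.lower().split()
    # Join each window of n consecutive words into one phrase
    grams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return Counter(grams).most_common(top_n)

simple_ngrams("to be or not to be", n=2)
# → [('to be', 2), ('be or', 1), ('or not', 1), ('not to', 1)]
```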
---
### `analyze_sentiment`
```python
from aize import analyze_sentiment
result = analyze_sentiment(text)
```
Runs VADER sentiment analysis. NLTK's `vader_lexicon` is auto-downloaded on first use.
| Key | Type | Description |
|---|---|---|
| `positive` | `float` | Proportion of positive sentiment |
| `negative` | `float` | Proportion of negative sentiment |
| `neutral` | `float` | Proportion of neutral sentiment |
| `compound` | `float` | Overall score from `-1.0` (most negative) to `+1.0` (most positive) |
| `label` | `str` | `"Positive"`, `"Negative"`, or `"Neutral"` |
---
### `compute_readability`
```python
from aize import compute_readability
result = compute_readability(text)
```
Computes Flesch-Kincaid readability metrics.
| Key | Type | Description |
|---|---|---|
| `flesch_reading_ease` | `float` | 0–100 score; higher = easier to read |
| `fk_grade_level` | `float` | Approximate US school grade level |
| `sentences` | `int` | Sentence count |
| `words` | `int` | Word count |
| `syllables` | `int` | Total syllables |
| `interpretation` | `str` | `"Very Easy"` → `"Very Confusing"` |
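The two scores follow the standard published Flesch formulas, computed from the sentence, word, and syllable counts (syllable counting itself is heuristic and varies between implementations):

```python
def flesch_scores(sentences, words, syllables):
    words_per_sentence = words / sentences
    syllables_per_word = syllables / words
    # Flesch Reading Ease: higher = easier to read
    reading_ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    # Flesch-Kincaid Grade Level: approximate US school grade
    grade_level = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return reading_ease, grade_level

flesch_scores(sentences=2, words=20, syllables=26)
# ≈ (86.705, 3.65)
```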
---
### `analyze_pos`
```python
from aize import analyze_pos
result = analyze_pos(text)
```
Returns a part-of-speech frequency breakdown (nouns, verbs, adjectives, adverbs, etc.) using NLTK's POS tagger.
---
### `generate_wordcloud`
```python
from aize import generate_wordcloud
image = generate_wordcloud(text)
```
Generates a word cloud image from the input text. Returns a PIL `Image` object that can be displayed or saved.
```python
image.save("wordcloud.png")
```
---
## Streamlit Dashboard
An interactive, browser-based UI for all analyses is included.
```bash
streamlit run nlp_dashboard.py
```
The dashboard lets you upload one or more `.txt` files and interactively explore all analysis modules with charts and tables powered by Plotly.
---
## FastAPI Backend
A REST API is included for programmatic or remote access to the toolkit.
```bash
uvicorn api:app --reload
```
The API will be available at `http://127.0.0.1:8000`. Interactive docs are auto-generated at:
- **Swagger UI**: `http://127.0.0.1:8000/docs`
- **ReDoc**: `http://127.0.0.1:8000/redoc`
---
## Dependencies
| Package | Purpose |
|---|---|
| `nltk >= 3.8` | Tokenisation, POS tagging, VADER sentiment |
| `scikit-learn >= 1.2` | TF-IDF vectorisation |
| `wordcloud >= 1.9` | Word cloud image generation |
| `pandas >= 1.5` | Data manipulation |
| `plotly >= 5.0` | Interactive charts in the dashboard |
| `streamlit >= 1.28` | Web dashboard UI |
| `fastapi >= 0.100` | REST API framework |
| `uvicorn >= 0.23` | ASGI server for FastAPI |
| `python-multipart >= 0.0.6` | File upload support for FastAPI |
---
## Project Structure
```
aize/
├── aize/ # Core library package
│ ├── __init__.py # Public API surface
│ └── analysis/
│ ├── stats.py # Basic text statistics
│ ├── groupwords.py # Word length grouping
│ ├── zipf.py # Zipf's law analysis
│ ├── heaps.py # Heap's law analysis
│ ├── stopwords.py # Stopword density
│ ├── vocab.py # Vocabulary comparison
│ ├── tfidf.py # TF-IDF & n-grams
│ ├── sentiment.py # VADER sentiment
│ ├── readability.py # Flesch-Kincaid scores
│ ├── pos.py # POS tagging
│ └── wordcloud_gen.py # Word cloud generation
├── .github/workflows/
│ └── publish.yml # Auto-publish to PyPI on version tags
├── nlp_dashboard.py # Streamlit dashboard
├── api.py # FastAPI REST backend
├── pyproject.toml # Package config & dependency extras
├── MANIFEST.in # Source distribution file rules
├── requirements.txt # All-inclusive dev requirements
└── README.md
```
---
## License
This project is licensed under the **MIT License**. See [LICENSE](LICENSE) for details.
---
<p align="center">Built with ❤️ using Python, NLTK, scikit-learn, Streamlit & FastAPI</p>
| text/markdown | eokoaze | null | null | null | null | nlp, natural-language-processing, text-analysis, zipf, tfidf, sentiment, readability, wordcloud | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Text Processing :: Linguistic"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"nltk>=3.8",
"scikit-learn>=1.2",
"wordcloud>=1.9",
"pandas>=1.5",
"streamlit>=1.28; extra == \"dashboard\"",
"plotly>=5.0; extra == \"dashboard\"",
"Pillow>=9.0; extra == \"dashboard\"",
"fastapi>=0.100; extra == \"api\"",
"uvicorn>=0.23; extra == \"api\"",
"python-multipart>=0.0.6; extra == \"api\"",
"aize[dashboard]; extra == \"all\"",
"aize[api]; extra == \"all\"",
"aize[all]; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"twine>=5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/eokoaze/aize",
"Repository, https://github.com/eokoaze/aize",
"Bug Tracker, https://github.com/eokoaze/aize/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T15:03:24.908381 | aize-0.1.0.tar.gz | 15,140 | 3c/b2/bf1adabc811313b47ba4fd25e94443f34baaecb2c50d45ab9dbc70b4edc3/aize-0.1.0.tar.gz | source | sdist | null | false | 112c6729a16736d6ee5627c92b80909b | 724eaca1de5f6681bd3ce2ab1cedfb6ba2c71881385f1193adaaa884e04c04ac | 3cb2bf1adabc811313b47ba4fd25e94443f34baaecb2c50d45ab9dbc70b4edc3 | MIT | [
"LICENSE"
] | 209 |
2.4 | deteqt | 0.1.5 | Python client for DeteQT metrology and quantum-chip diagnostics on InfluxDB v2. | # deteqt
Python client for DeteQT metrology and quantum-chip diagnostics on
InfluxDB v2.
## Documentation
- [Main website](https://deteqt-d9f77a.gitlab.io/)
- [Usage](https://deteqt-d9f77a.gitlab.io/usage/)
- [API reference](https://deteqt-d9f77a.gitlab.io/api/)
## Install
```zsh
# Create and activate a virtual environment
uv venv deteqt-env --python 3.13 # any supported version >=3.12
source deteqt-env/bin/activate # Linux/macOS
# .\deteqt-env\Scripts\activate # Windows
# Install from PyPI
uv pip install deteqt
# Verify
uv run --active python -c "import deteqt; print(deteqt.__version__)"
```
If you cloned this repository for local development:
```zsh
uv pip install -e . --group all
```
## Configure settings
```python
from deteqt import write_settings_file
settings_path = write_settings_file(
url="https://deteqt.duckdns.org",
org="YOUR_ORG",
bucket="YOUR_BUCKET",
token="YOUR_WRITE_TOKEN",
)
print(settings_path) # settings.toml
```
This creates `settings.toml` in your current working directory.
You can also create it manually:
```toml
url = "https://deteqt.duckdns.org"
org = "YOUR_ORG"
bucket = "YOUR_BUCKET"
token = "YOUR_WRITE_TOKEN"
```
## Write one point
```python
from deteqt import TUID, write_point
run_tuid: TUID = TUID("20260213-143319-249-01d39d")
summary = write_point(
qoi="frequency",
nominal_value=1.01,
uncertainty=0.01,
tuid=run_tuid,
element="q1",
label="f01",
unit="GHz",
element_label="qubit-01",
device_ID="chip-a",
run_ID="run-001",
cycle_ID="cycle-01",
condition="4K",
extra_tags={
"area_of_interest": "sweetspot",
"long_label": "Q1 frequency",
},
extra_fields={
"temperature_mK": 12.3,
"passed_qc": True,
},
)
print(summary) # {'records_total': 1, 'records_written': 1, 'records_failed': 0}
```
## Write batch (with optional extras)
```python
from deteqt import write_batch
summary = write_batch(
[
{
"qoi": "frequency",
"nominal_value": 1.01,
"uncertainty": 0.01,
"run_ID": "run-001",
"cycle_ID": "cycle-01",
"extra_tags": {
"area_of_interest": "sweetspot",
"long_label": "Q1 frequency",
},
"extra_fields": {"temperature_mK": 12.3},
},
{
"qoi": "phase",
"nominal_value": 0.12,
"run_ID": "run-001",
"extra_tags": {
"area_of_interest": "sweetspot",
"long_label": "Q1 frequency",
},
"extra_fields": {"passed_qc": True},
},
]
)
```
## Common schema policy
Use these common keys across all integrations:
- measurement: `qoi`
- tags: `element`, `label`, `unit`, `element_label`, `device_ID`, `run_ID`, `cycle_ID`, `condition`
- fields: `nominal_value`, `uncertainty`, `tuid`
Partner-specific keys `area_of_interest` and `long_label` are treated as
optional extra tags.
If your settings file is elsewhere, pass `settings_path=".../settings.toml"`.
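The split between common tags, common fields, and partner-specific extras can be pictured with a small self-contained sketch (not deteqt's actual code; `split_record` is a hypothetical helper):

```python
# Common schema keys from the policy above.
COMMON_TAGS = {"element", "label", "unit", "element_label",
               "device_ID", "run_ID", "cycle_ID", "condition"}
COMMON_FIELDS = {"nominal_value", "uncertainty", "tuid"}


def split_record(record: dict) -> tuple[dict, dict, dict]:
    """Split a flat record into (tags, fields, extras) per the common schema.

    The `qoi` key names the measurement and is excluded from extras.
    """
    tags = {k: v for k, v in record.items() if k in COMMON_TAGS}
    fields = {k: v for k, v in record.items() if k in COMMON_FIELDS}
    extras = {k: v for k, v in record.items()
              if k not in COMMON_TAGS and k not in COMMON_FIELDS and k != "qoi"}
    return tags, fields, extras
```

Anything unrecognized (like `area_of_interest` or `long_label`) falls through to the extras bucket.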
## Recursive ingest/write (single mode)
```python
from deteqt import ingest_and_write
summary = ingest_and_write(folder="quantify-data")
print(summary)
```
The recursive path is JSON-only and always hybrid:
- values come from `quantities_of_interest*.json`
- metadata/tags come from `snapshot*.json`
- only standard SCQT transmon QoIs are kept
Supported analysis aliases are normalized to standard names (for example:
`T1 -> t1`, `Qi -> resonator_qi`, `Qc -> resonator_qc`, `fr -> resonator_freq_low` and
`resonator_freq_high`).
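The alias normalization can be sketched as a simple lookup table (hypothetical, not deteqt's internal implementation; only the aliases named above are included, and `fr` expands to two standard names):

```python
# Map analysis aliases to standard QoI names; `fr` yields two names.
ALIASES = {
    "T1": ["t1"],
    "Qi": ["resonator_qi"],
    "Qc": ["resonator_qc"],
    "fr": ["resonator_freq_low", "resonator_freq_high"],
}


def normalize(name: str) -> list[str]:
    """Return the standard name(s) for an alias, or the name itself."""
    return ALIASES.get(name, [name])
```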
| text/markdown | null | null | null | null | null | influxdb, qoi, telemetry, timeseries | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"influxdb-client<2.0.0,>=1.50.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T15:03:22.919456 | deteqt-0.1.5.tar.gz | 17,239 | 5c/63/c2cdb102d609545aa950bd5b002c51233069412f36a32a0bc86413b53d8f/deteqt-0.1.5.tar.gz | source | sdist | null | false | 435992074e36d702ccca917b22087980 | 58aba7d935b8e98630b300616031c756fa6d4ac65191b3c2b21d279a46d0bf35 | 5c63c2cdb102d609545aa950bd5b002c51233069412f36a32a0bc86413b53d8f | null | [] | 186 |
2.4 | ocean-runner | 0.3.4 | A fluent API for OceanProtocol algorithms | # Ocean Runner
[](https://pypi.org/project/ocean-runner/)
[](https://github.com/agrospai/ocean-runner)
Ocean Runner is a package that simplifies algorithm creation for OceanProtocol.
## Installation
```bash
pip install ocean-runner
# or
uv add ocean-runner
```
## Usage
### Minimal Example
```python
import random
from ocean_runner import Algorithm
algorithm = Algorithm()
@algorithm.run
def run(_: Algorithm):
    return random.randint(0, 100)  # randint requires bounds

if __name__ == "__main__":
    algorithm()
```
This code snippet will:
- Read the OceanProtocol JobDetails from the environment variables and use default configuration file paths.
- Execute the run function.
- Execute the default saving function, storing the result in a "result.txt" file within the default outputs path.
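The registration behind `@algorithm.run` follows the standard decorator pattern; a minimal sketch (simplified, not ocean-runner's actual class):

```python
# MiniAlgorithm is a toy stand-in for ocean_runner.Algorithm, showing only
# how a decorator registers the run callback and __call__ invokes it.
class MiniAlgorithm:
    def __init__(self):
        self._run = None

    def run(self, fn):
        """Register `fn` as the run callback; return it unchanged."""
        self._run = fn
        return fn

    def __call__(self):
        if self._run is None:
            raise RuntimeError("no run callback registered")
        return self._run(self)


algo = MiniAlgorithm()

@algo.run
def run(_: MiniAlgorithm):
    return 42

result = algo()  # invokes the registered callback
```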
### Tuning
#### Application Config
The application configuration can be tweaked by passing a Config instance to its constructor.
```python
from ocean_runner import Algorithm, Config
algorithm = Algorithm(
    Config(
        custom_input=...,   # dataclass holding custom algorithm parameters
        logger=...,         # type: logging.Logger. Custom logger to use.
        source_paths=...,   # type: Iterable[Path]. Source paths to include in the PATH.
        environment=...,    # type: ocean_runner.Environment. Mock of environment variables.
    )
)
```
```python
import logging
from pathlib import Path

from pydantic import BaseModel

from ocean_runner import Algorithm, Config, Environment


class CustomInput(BaseModel):
    foobar: str


logger = logging.getLogger(__name__)

algorithm = Algorithm(
    Config(
        # Load the Algorithm's Custom Input into a CustomInput instance.
        custom_input=CustomInput,
        # Source paths to include in the PATH. '/algorithm/src' is the default
        # since our templates place the algorithm source files there.
        source_paths=[Path("/algorithm/src")],
        # Custom logger to use in the Algorithm.
        logger=logger,
        # Mocks environment variables; should not be needed in production
        # algorithms. Defaults to reading the real environment.
        environment=Environment(
            # Custom data path to use test data.
            base_dir="./_data",
            # Dataset DID.
            dids='["17feb697190d9f5912e064307006c06019c766d35e4e3f239ebb69fb71096e42"]',
            # Random transformation DID to use while testing.
            transformation_did="1234",
            # Random secret to use while testing.
            secret="1234",
        ),
    )
)
```
#### Behaviour Config
To fully configure the behaviour of the algorithm seen in the [Minimal Example](#minimal-example), decorate your own functions as in the following example, which shows every available customization hook.
```python
from pathlib import Path
import pandas as pd
from ocean_runner import Algorithm
algorithm = Algorithm()
@algorithm.on_error
def error_callback(algorithm: Algorithm, ex: Exception):
    algorithm.logger.exception(ex)
    raise algorithm.Error() from ex

@algorithm.validate
def val(algorithm: Algorithm):
    assert algorithm.job_details.files, "Empty input dir"

@algorithm.run
def run(algorithm: Algorithm) -> pd.DataFrame:
    _, filename = next(algorithm.job_details.inputs())
    return pd.read_csv(filename).describe(include="all")

@algorithm.save_results
def save(algorithm: Algorithm, result: pd.DataFrame, base: Path):
    algorithm.logger.info(f"Descriptive statistics: {result}")
    result.to_csv(base / "result.csv")

if __name__ == "__main__":
    algorithm()
```
### Default implementations
As seen in the minimal example, the hooks on `Algorithm` have default implementations (except `run`), described below.
```python
.validate()
"""
Will validate the algorithm's job detail instance, checking for the existence of:
- `job_details.ddos`
- `job_details.files`
"""
.run()
"""
Has NO default implementation, must pass a callback that returns a result of any type.
"""
.save_results()
"""
Stores the result of running the algorithm in "outputs/results.txt"
"""
```
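The default saving behaviour can be sketched as follows (assumed from the description above; `default_save` is a hypothetical stand-in, not ocean-runner's actual code):

```python
# Write the stringified result to <base>/results.txt, creating the
# output directory if needed -- mirroring the documented default.
import tempfile
from pathlib import Path


def default_save(result, base: Path) -> Path:
    base.mkdir(parents=True, exist_ok=True)
    out = base / "results.txt"
    out.write_text(str(result))
    return out


base = Path(tempfile.mkdtemp())
saved = default_save({"mean": 1.5}, base)
```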
### Job Details
To load the OceanProtocol JobDetails instance, the program reads several environment variables; they can be mocked by passing an `Environment` instance through the algorithm's configuration.
Environment variables:
- `DIDS` (optional): Input dataset DIDs; must have the format `["abc..90"]`. Defaults to reading them automatically from the `DDO` data directory.
- `TRANSFORMATION_DID` (optional, default="DEFAULT"): Algorithm DID, must have format: `abc..90`.
- `SECRET` (optional, default="DEFAULT"): Algorithm secret.
- `BASE_DIR` (optional, default="/data"): Base path to the OceanProtocol data directories.
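Reading those variables with their documented defaults might look like this sketch (`read_job_env` is a hypothetical helper, not part of ocean-runner):

```python
# Collect the documented environment variables, applying the stated defaults.
import os


def read_job_env(env=os.environ) -> dict:
    return {
        "dids": env.get("DIDS"),  # None means: read from the DDO data directory
        "transformation_did": env.get("TRANSFORMATION_DID", "DEFAULT"),
        "secret": env.get("SECRET", "DEFAULT"),
        "base_dir": env.get("BASE_DIR", "/data"),
    }


cfg = read_job_env({})  # empty mock environment -> all defaults
```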
| text/markdown | null | AgrospAI <agrospai@udl.cat>, Christian López <christian.lopez@udl.cat> | null | null | Copyright 2025 spin3l Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=25.1.0",
"oceanprotocol-job-details>=0.4.2",
"pydantic-settings>=2.12.0",
"pydantic>=2.12.5",
"pytest>=8.4.2",
"returns[compatible-mypy]>=0.26.0"
] | [] | [] | [] | [
"Homepage, https://github.com/AgrospAI/ocean-runner",
"Issues, https://github.com/AgrospAI/ocean-runner/issues"
] | twine/6.0.1 CPython/3.12.8 | 2026-02-20T15:02:57.561979 | ocean_runner-0.3.4.tar.gz | 6,374 | c8/d3/571f8d1683b73d90cef58dd86996e098bb5593fa332ff4315b612d452f11/ocean_runner-0.3.4.tar.gz | source | sdist | null | false | 197df6f26626c1bf55d7b9a34d27fc0b | bb2d1e555c4898c675e9d20e71009cefb21a384f0b7483a05f68b51ff2a4dd55 | c8d3571f8d1683b73d90cef58dd86996e098bb5593fa332ff4315b612d452f11 | null | [] | 186 |